EPA
United States Environmental Protection Agency
EPA/100/B-19/001
October 2019
www.epa.gov/risk
Guidelines for Human Exposure Assessment
Risk Assessment Forum
DISCLAIMER
This document has been reviewed in accordance with U.S. Environmental Protection Agency
(EPA) policy. Mention of trade names or commercial products does not constitute endorsement
or recommendation for use.
Preferred citation: U.S. EPA (U.S. Environmental Protection Agency). (2019). Guidelines for
Human Exposure Assessment. (EPA/100/B-19/001). Washington, D.C.: Risk Assessment Forum,
U.S. EPA.
TABLE OF CONTENTS
DISCLAIMER ii
LIST OF TABLES vii
LIST OF FIGURES viii
LIST OF BOXES ix
ABBREVIATIONS AND ACRONYMS x
PREFACE xi
AUTHORS, CONTRIBUTORS AND REVIEWERS xii
EXECUTIVE SUMMARY xiv
CHAPTER 1. INTRODUCTION 1
1.1. Overview 1
1.2. Purpose and Scope of the Guidelines 1
1.3. Organization of Guidelines for Human Exposure Assessment 2
1.4. Summary 3
CHAPTER 2. PRINCIPLES OF EXPOSURE SCIENCE AND EXPOSURE ASSESSMENT 4
2.1. Exposure Science 4
2.2. Definitions 8
2.2.1. Exposure Definitions 8
2.2.2. Dose Definitions 8
2.3. Concepts in Exposure Assessment 12
2.3.1. The Risk Assessment Process 12
2.3.2. Overview of Exposure Assessment 12
2.3.3. Approaches for Exposure Assessment 13
2.3.4. Uncertainty and Variability in Exposure Assessments 16
2.4. Calculating Exposure Estimates 17
2.4.1. Inhalation Exposure 17
2.4.2. Ingestion (Dietary and Nondietary) Exposure 18
2.4.3. Dermal Exposure 18
2.5. Development of Exposure Science and Exposure Assessments Related to EPA Risk
Assessments 19
2.6. Emerging Topics 21
2.7. Summary 23
CHAPTER 3. PLANNING AND SCOPING AND PROBLEM FORMULATION FOR
EXPOSURE ASSESSMENTS 25
3.1. Planning and Scoping 26
3.1.1. Exposure Assessment Goals and Scope 28
3.1.2. Overarching Considerations 30
3.1.3. Stakeholder Involvement 30
3.1.4. EPA's Tribal Program and Networks 33
3.1.5. Peer Review 33
3.2. Problem Formulation 34
3.2.1. Individuals, Lifestages, Groups, Populations 35
3.2.2. Conceptual Model 36
3.3. Exposure Assessment Analysis Plan 38
3.3.1. Data Sources, Gaps, Limitations and Quality Objectives 38
3.3.2. Exposure Scenarios 39
3.4. Summary 40
CHAPTER 4. CONSIDERATION OF LIFESTAGES, VULNERABLE GROUPS AND
POPULATIONS OF CONCERN IN EXPOSURE ASSESSMENTS 41
4.1. History of EPA Exposure Assessments for Lifestages, Vulnerable Groups and
Populations of Concern 42
4.2. Vulnerability and Susceptibility in Exposure Assessment 42
4.3. Examples of Lifestages, Vulnerable Groups and Populations of Concern in
Exposure Assessment 45
4.3.1. Lifestages 45
4.3.2. Tribal and Indigenous Populations 49
4.3.3. Other Racial and Ethnic Populations 52
4.3.4. Traditional Methods 53
4.3.5. Case Studies 53
4.3.6. Neighborhood Methods 53
4.3.7. Population-Based Methods 54
4.3.8. Social Process Methods 54
4.3.9. National-Level versus Local/Community-Specific Assessments 54
4.4. Summary 55
CHAPTER 5. DATA FOR EXPOSURE ASSESSMENTS 57
5.1. Types of Data Used in an Exposure Assessment 57
5.1.1. Environmental Data 58
5.1.2. Biomonitoring Data 58
5.1.3. Exposure Factors 61
5.1.4. Observational Human Exposure Measurement Study Data 64
5.1.5. Using Different Types of Data to Inform Decisions 64
5.2. Identifying Data Gaps and Data Needs 64
5.2.1. Identification of Data Gaps—Existing Data 66
5.2.2. Developing a New Data Sampling Program 66
5.3. Data Quality for New Data Collection 67
5.3.1. Data Quality System 70
5.3.2. Data Usability—Determining Whether Data Meet Assessment Factors 71
5.3.3. Assessment—Using Data to Evaluate Exposures 75
5.4. Acquiring and Evaluating Data for an Exposure Assessment 77
5.4.1. Environmental Data 79
5.4.2. Biomonitoring Data 83
5.4.3. Exposure Factor Information 86
5.4.4. Questionnaires, Surveys and Observations 87
5.4.5. Modeling 89
5.5. Data and Decision Uncertainty and Variability 90
5.6. Data Management 92
5.7. Data Communication 93
5.8. Summary 94
CHAPTER 6. COMPUTATIONAL MODELING FOR EXPOSURE ASSESSMENTS 107
6.1. Principles and Definitions of Modeling 107
6.2. Selecting the Type of Model for Exposure Assessments 108
6.2.1. Setting the Objectives for the Modeling Effort 110
6.2.2. Level of Model Complexity 110
6.2.3. Categories of Models Used in Exposure Assessments 116
6.2.4. Estimates of Exposure Using Scenario Evaluation 118
6.2.5. Exposure and Dose Estimation Using Biomonitoring Data 120
6.2.6. High-Throughput Exposure Models 123
6.3. Evaluation of Models 124
6.3.1. Soundness of Assumptions, Methods and Conclusions, Appropriateness 125
6.3.2. Attainment of Quality Assurance Objectives 126
6.3.3. Qualitative and Quantitative Model Calibration 126
6.3.4. Model Uncertainty and Sensitivity Analyses 126
6.4. Summary 129
CHAPTER 7. PLANNING AND IMPLEMENTING AN OBSERVATIONAL HUMAN
EXPOSURE MEASUREMENT STUDY 130
7.1. Overview 130
7.2. Study Design 132
7.2.1. Budget and Logistical Planning 132
7.2.2. Identifying Critical Data Elements 133
7.2.3. Determining Sample Size for Each Data Element 133
7.2.4. Developing Criteria and Identifying Potential Study Locations 134
7.2.5. Developing Eligibility Criteria for Study Participants 134
7.2.6. Developing Data Quality Objectives and Identifying Sampling and Analysis
Methods 134
7.2.7. Developing Chain-of-Custody, Storage and Data Management Procedures 135
7.2.8. Engaging the Community 135
7.2.9. Engaging Stakeholders 136
7.2.10. Human Subjects Considerations 137
7.2.11. Samples To Be Collected—Environmental, Biological, Personal, Exposure
Factors and Questionnaires 139
7.2.12. Sampling Scheme 140
7.2.13. Data Analysis Plan and Database Design 140
7.3. Planning and Executing a Pilot Study 141
7.3.1. Community and Stakeholder Involvement in the Pilot Study 141
7.3.2. Implementation Plan for the Full Study 141
7.3.3. Communication Considerations 141
7.4. Planning and Executing a Full Field Study 142
7.5. Peer Review and Completion of the Final Report 142
7.6. Summary 143
CHAPTER 8. UNCERTAINTY AND VARIABILITY FOR EXPOSURE ASSESSMENTS 144
8.1. Terminology 145
8.1.1. Data Uncertainty 145
8.1.2. Decision Uncertainty 145
8.1.3. Variability Impacts on Uncertainty 149
8.2. Considerations for Conducting an Uncertainty and Variability Evaluation 149
8.2.1. Planning and Scoping for Characterizing Uncertainty and Variability 150
8.2.2. Assessing the Impact of Uncertainty 151
8.2.3. Conveying Uncertainty When Presenting Results 151
8.3. A Tiered Approach to Data and Decision Uncertainty and Variability Evaluations 152
8.3.1. Selecting Input Parameters 153
8.3.2. Screening-Level Analyses 156
8.3.3. Conducting a Sensitivity Analysis to Better Characterize Uncertainty 157
8.3.4. Using Uncertainty and Variability Analyses to Refine an Exposure
Assessment 158
8.4. Communicating the Results of the Uncertainty and Variability Evaluation 160
8.5. Summary 162
CHAPTER 9. DEVELOPING A COMMUNICATION PLAN AND PRESENTING
RESULTS FOR EXPOSURE ASSESSMENTS 164
9.1. Overview of Communication in Exposure Assessment 164
9.2. Development of a Communication Plan 165
9.3. Results of an Exposure Assessment: Exposure Characterization and Risk
Characterization 166
9.3.1. Elements of an Exposure Characterization 166
9.3.2. Development and Use of an Exposure Characterization in Characterizing
Risk 167
9.3.3. Formats for Exposure Characterization 168
9.3.4. Communicating Uncertainty 168
9.3.5. Stakeholders 169
9.4. Communication Products 169
9.5. Summary 170
CHAPTER 10. REFERENCES 172
LIST OF TABLES
Table 2-1. General Exposure-Related Terms 9
Table 2-2. Key Dose-Related Terms 10
Table 2-3. Approaches for Exposure Assessments 13
Table 3-1. Examples of Datasets Useful for a Location-Specific Exposure Assessment 39
Table 4-1. Recommended Childhood Age Groups for Monitoring and Assessing Childhood
Exposures 47
Table 4-2. Integrating Childhood Age Groups Used for Assessing Exposure and Potency for
Selected Toxicants That Cause Cancer via a Mutagenic Mode of Action 48
Table 5-1. Hypothetical Exposure Scenario for Leaking Chemical Drums 65
Table 5-2. Questions to Ask When Evaluating/Considering Data 78
Table 5-3. Common Environmental Data Measurements 80
Table 5-4. Common Biomonitoring Measurements 83
Table 5-5. Common Exposure Factor Information Measurements 87
Table 5-6. Examples of Sources of Non-Occupational Data for an Exposure Assessment from
EPA and other Federal Agencies 96
Table 6-1. EPA Exposure-Related Inventories and Clearinghouses 109
Table 6-2. Example Publications on Modeling Exposure and Dose from Biomonitoring Data 121
Table 8-1. Types of Uncertainty and Contributing Errors 147
Table 8-2. Examples of Questions Asked to Examine Decision Uncertainty 148
LIST OF FIGURES
Figure 2-1. Source-to-Outcome Framework 5
Figure 2-2. Conceptual Framework for Exposure Science Developed by NRC 6
Figure 2-3. Schematic of Exposure/Dose Terms 11
Figure 2-4. New Technologies for Advancing Exposure Science 22
Figure 3-1. Planning and Scoping and Problem Formulation for Exposure Assessment 26
Figure 3-2. The Overall Risk Assessment Process 27
Figure 3-3. Example of a Conceptual Site Model 36
Figure 4-1. Vulnerability and Susceptibility Factors 43
Figure 4-2. Children's Activities That Impact Exposure as a Function of Developmental Age 47
Figure 5-1. Representative Profiles of Hypothetical Biomarkers Following a Single Exposure to a
Persistent Chemical 59
Figure 5-2. Schematic of the Distribution of Exposures for Individual Receptors within a
Population 62
Figure 5-3. The Seven Iterative Steps in the Data Quality Objectives Process 68
Figure 5-4. EPA Quality System Components and Tools 70
Figure 6-1. A Tiered Approach for Modeling Analysis 111
Figure 6-2. Deterministic versus Probabilistic Analysis 112
Figure 6-3. Iterative Use of Measurements and Models 125
Figure 8-1. Schematic Diagram of Tiered Approach to Data Uncertainty 154
Figure 8-2. Hypothetical Example of an Input Distribution for Drinking Water Intake Rates 155
LIST OF BOXES
Box 2-1. Agency-Specific Actions to Implement the Guidelines 20
Box 3-1. Definitions of "Public," "Stakeholder" and "Community" 31
Box 3-2. Community Involvement Planning Resources 33
Box 3-3. Resources Relevant to Exposure Assessment for Tribal Populations 33
Box 3-4. Resources for Technical Study Design of Observational Human Exposure Measurement
Studies 38
Box 4-1. Provisions of Presidential Executive Orders 42
Box 4-2. Resources on Disparities in Exposure 43
Box 4-3. Key Sources of Childhood Exposure Concentration and Exposure Factor Information 47
Box 4-4. Federal Executive Order and Policies Establishing Inclusion of Tribal Exposure
Lifeways in Human Exposure Assessments 49
Box 4-5. Definitions of "Federally Recognized Tribe" and "Indigenous Peoples" 49
Box 4-6. Tools and Reports for Evaluating Tribal Exposures 51
Box 5-1. Terms Describing Exposure Distributions 63
Box 5-2. EPA Quality Assurance/Quality Control (QA/QC) Websites and Resources 69
Box 5-3. Guidance Documents and Resources for Planning and Implementing a Biomonitoring
Program 85
Box 5-4. Examples of Guidance Documents and Resources for Conducting Questionnaires,
Surveys or Observational Studies 88
Box 5-5. Guidance Documents and Resources to Support Modeling Efforts 90
Box 6-1. Pertinent Resources for Modeling 108
Box 6-2. Examples of Resources for Screening-Level Models 113
Box 6-3. Examples of Resources for Probabilistic Assessments and Models 114
Box 7-1. Examples of Observational Human Exposure Measurement Studies 131
Box 8-1. Terminology 146
Box 8-2. Guidance Documents and Resources Supporting Probabilistic Risk Assessment 159
Box 9-1. EPA Guidance and Resources on Public Involvement 165
ABBREVIATIONS AND ACRONYMS
ADME      absorption, distribution, metabolism and elimination
CDC       Centers for Disease Control and Prevention
DQO       data quality objective
EPA       U.S. Environmental Protection Agency
EPC       exposure point concentration
FOIA      Freedom of Information Act
GIS       geographic information system
HSRRO     Human Subjects Research Review Official
IRB       Institutional Review Board
NHANES    National Health and Nutrition Examination Survey
NRC       National Research Council
OMB       Office of Management and Budget
OPP       Office of Pesticide Programs
OPPT      Office of Pollution Prevention and Toxics
PBPK      physiologically based pharmacokinetic
PK        pharmacokinetic
PM        particulate matter
QA        quality assurance
QC        quality control
SHEDS     Stochastic Human Exposure and Dose Simulation model
SHEDS-HT  Stochastic Human Exposure and Dose Simulation-High Throughput model
SOP       standard operating procedure
PREFACE
This document builds on and supersedes the U.S. Environmental Protection Agency's (hereafter
"EPA" or "the Agency") 1992 Guidelines for Exposure Assessment (U.S. EPA 1992c) to
incorporate advances in exposure assessment reflecting the best science currently conducted
across the Agency in all offices, programs and regions (hereafter "programs"). EPA's Risk
Assessment Forum obtained broad participation in its efforts to revise the 1992 document. The
Risk Assessment Forum convened a colloquium of EPA exposure assessment scientists in 2005
to assess the state-of-the-science, discuss Agency practice and identify emerging issues. This
colloquium was followed by meetings with scientists from EPA, state agencies and the broader
scientific community (Bangs 2005a; Bangs 2005b; Dellarco and Bangs 2006), at which the
intention to revise the Guidelines for Exposure Assessment was announced and developments in
the field since 1992 were reviewed. In 2006, the Agency consulted with the Science Advisory
Board, describing its approach to the revision and summarizing comments received from Agency
scientists, the scientific community and the public. This revised document, Guidelines for
Human Exposure Assessment, benefits from many additional years of experience with exposure
assessments across the Agency, conversations with the broader scientific community and
products from the Science Advisory Board and the National Research Council of the National
Academy of Sciences.
This Guidelines for Human Exposure Assessment is designed to aid exposure scientists in
preparing exposure assessments, analyzing status and trends, developing mitigation strategies,
making regulatory decisions and conducting epidemiological studies. This revision focuses on
human exposure to chemical agents and presents the general principles of exposure science
(including assessment and monitoring). It is not a detailed instructional manual. In addition, the
focus of the work is on exposure assessment as currently practiced by programs at EPA. This
document does not include an exhaustive description of emerging topics such as high-throughput
exposure assessment or the implications of in vitro-based risk assessments on the field of
exposure assessment. Aspects of these programs published in the peer-reviewed literature,
however, are included. As emerging topics mature, EPA might update or supplement this
document. This Guidelines for Human Exposure Assessment is intended principally for exposure
and risk assessors in the Agency and consultants, contractors or others who perform this type of
work under Agency contract or sponsorship. It also serves as a resource for others as to how EPA
conducts exposure assessments. EPA risk managers/decision makers also need to be familiar with
this document because it describes approaches, defines terminology and summarizes methods
exposure and risk assessors use to support regulatory decisions.
Assessors need to consult with their programs for specific standard operating procedures or
guidelines. The technical materials cited and hyperlinked throughout this document provide specific
information for individual exposure assessment situations. At the time of publication, all cited
materials and hyperlinks were correct and functional.
AUTHORS, CONTRIBUTORS AND REVIEWERS
This document is the product of a technical panel of EPA scientists under the auspices of EPA's
Risk Assessment Forum.
TECHNICAL PANEL
Nicolle Tulve, Chair
Office of Research and Development
Research Triangle Park, NC
Marian Olsen, Co-Chair
Superfund and Emergency Management Division
EPA Region 2
New York, NY
Michael Firestone
Office of Children's Health Protection
Washington, DC
Paul Price
Office of Research and Development
Research Triangle Park, NC
Cynthia Stahl
Air and Radiation Division
EPA Region 3
Philadelphia, PA
Valerie Zartarian
Office of Research and Development
Research Triangle Park, NC
Michael Broder, Science Coordinator
Office of Research and Development
Washington, DC
Haluk Ozkaynak (retired)
National Exposure Research Laboratory
Office of Research and Development
Research Triangle Park, NC
Eloise Mulford (retired)
Office of the Regional Administrator
EPA Region 5
Chicago, IL
Linda Sheldon (retired)
National Exposure Research Laboratory
Office of Research and Development
Research Triangle Park, NC
CONTRIBUTORS
The authors wish to acknowledge the many EPA staff (both current and former) that contributed
to the content of this document:
Jerry Blancato, Office of Research and Development
Denis Borum, Office of Congressional and Intergovernmental Relations
Elaine Cohen Hubal, Office of Research and Development
Jeff Dawson, Office of Pesticide Programs
Michael Dellarco, Formerly Office of Research and Development
Stephen Graham, Office of Air Quality Planning and Standards
Amanda Hauff, Office of Chemical Safety and Pollution Prevention
Warren Lux, Formerly Human Subjects Research Review Official
Marsha Morgan, Office of Research and Development
Deirdre Murphy, Office of Air Quality Planning and Standards
Daniel Nelson, Office of Research and Development
Miles Okino, Formerly Office of Research and Development
Devon Payne-Sturges, Formerly Office of Research and Development
Toby Schonfeld, Formerly Human Subjects Research Review Official
Jon Sobus, Office of Research and Development
Kent Thomas, Office of Research and Development
Gary Bangs, Office of the Science Advisor (retired)
Michael Callahan, Region 6 (retired)
Alan Cimorelli, Region 3 (retired)
Bill Jordan, Office of Pesticide Programs (retired)
Stephen Kroner, Office of Land and Emergency Management (retired)
Matt Lorber, Office of Research and Development (retired)
Jacqueline Moya, Office of Research and Development (retired)
Harvey Richmond, Office of Air Quality Planning and Standards (retired)
Brad Schultz, Office of Research and Development (retired)
ICF technically edited this Guidelines for Human Exposure Assessment. EPA extends its
appreciation to Penelope Kellar and Whitney Mitchell for their diligent work. SCG assisted EPA
with earlier drafts of this document. EPA acknowledges the contributions of Agency engineers,
scientists and policy experts (listed above) who contributed content, editing, writing and review.
EPA also acknowledges the public, tribes and peer reviewers for their constructive comments.
The Science Coordinator and Chairs apologize in advance for any omissions.
EXTERNAL PEER REVIEWERS
Paloma Beamer, Ph.D.
University of Arizona
Nicole Cardello Deziel, Ph.D., MHS
Yale School of Public Health
Christopher W. Greene, M.S.
Minnesota Department of Health
Penelope A. Fenner-Crisp, Ph.D., DABT
Independent Consultant
Alan H. Stern, Dr.P.H., DABT
Independent Consultant
Rebecca T. Parkin, Ph.D., MPH
George Washington University
P. Barry Ryan, Ph.D.
Rollins School of Public Health of Emory University
Clifford P. Weisel, Ph.D.
Environmental and Occupational Health Sciences Institute
Michael A. Jayjock, Ph.D., CIH
Independent Consultant
EXECUTIVE SUMMARY
The mission of the U.S. Environmental Protection Agency (hereafter "EPA" or "the Agency")
is to protect human health and the environment. This mission is, in part, accomplished by
understanding, characterizing and managing health risks associated with exposure to
environmental contaminants and other agents. Exposure science characterizes, estimates and
predicts exposures; it also provides information for preparing exposure assessments and for
developing effective strategies to reduce exposure and manage risk.
This Guidelines for Human Exposure Assessment provides an updated resource on assessing
human exposure for exposure and risk assessors in the Agency, and for consultants, contractors
or others who perform this type of work under Agency contract or sponsorship. EPA risk managers/
decision makers need to be familiar with this document because it describes approaches, defines
terminology and summarizes methods assessors use to support regulatory decisions. It also serves
as a resource for others as to how EPA conducts exposure assessments. This document builds on
and supersedes the 1992 Guidelines for Exposure Assessment (U.S. EPA 1992c), incorporates
advances in the field since then, reflects current scientific practice across Agency programs and
includes pertinent topics identified during public meetings and from a literature survey,
including publications issued by the National Research Council of the National Academy of
Sciences. It briefly describes the principles of exposure science and assessment, provides guidance
on the various approaches for conducting an exposure assessment and presents references for
more detailed information. It does not serve as a detailed instructional manual or supplant
specific exposure guidance in use by Agency programs, nor does it endorse specific models or
approaches that could have limited applicability or have become outdated. In addition, this
Guidelines focuses on exposure assessment as currently practiced in EPA programs. This document
does not include an exhaustive description of emerging topics, such as high-throughput
exposure assessment or the implications of in vitro-based risk assessments on the field of
exposure assessment. Aspects of these advances that have been published in the peer-reviewed
literature, however, are included. Finally, this Guidelines provides links to exposure assessment
tools and technical documents that address particular exposure assessment needs.
The focus of this Guidelines for Human Exposure Assessment is on human exposure to chemical
agents in the non-occupational environment. The exposed populations (i.e., receptors) to which this
document refers are adults and children or other vulnerable groups within the human population.
This document is organized in chapters, each of which explores a component of the exposure
assessment process.
Chapter 1 introduces this Guidelines for Human Exposure Assessment and discusses the purpose
and scope of the document.
Chapter 2 provides a general review of exposure science concepts and principles, including
approaches and tools for consideration when planning and conducting exposure assessments. Topics
include an overview of exposure science, the role of exposure assessment in the risk assessment
process, concepts and types of exposure assessments, equations and input variables for estimating
exposure, presentation of exposure assessment findings and a brief history of exposure science.
Exposure characterization is an important step in all exposure assessments, and the chapter
presents guidance regarding the synthesis of exposure information.
Chapter 3 describes a process for the planning and scoping and problem formulation steps for an
exposure assessment. The process builds on the Agency's Guidance on Cumulative Risk
Assessment: Part 1. Planning and Scoping (U.S. EPA 1997a), Lessons Learned on Planning
and Scoping for Environmental Risk Assessment (U.S. EPA 2002g) and Framework for Human
Health Risk Assessment to Inform Decision Making (U.S. EPA 2014f). It emphasizes the
importance of establishing goals and objectives; building an interdisciplinary team; developing a
conceptual model; identifying assessment options, available resources and data needs;
producing an overall assessment plan; engaging and involving appropriate stakeholders; engaging
and involving the community; establishing data quality objectives; and conducting peer review.
Chapter 4 discusses possible increased risk of adverse health effects from environmental
contaminants for different lifestages, vulnerable groups and populations of concern because of
disproportionate exposure or varied responses to exposure. Consistent with the Agency's
guidance in Framework for Cumulative Risk Assessment (U.S. EPA 2003d), exposure assessors
need to be aware of environmental justice issues, including unique population characteristics
and sociodemographic factors that might increase exposure or predispose a lifestage, vulnerable
group or population to greater risk. These factors can include age, sex, genetic susceptibility,
cultural characteristics, behaviors, occupation, socioeconomic status, access to a healthy diet,
race/ethnicity and geographic location. This chapter assembles other existing Agency guidance,
along with examples of case studies, to discuss where techniques and considerations associated
with lifestages, vulnerable groups and populations of concern can be applied in exposure
assessments.
Chapter 5 discusses various aspects of data used for exposure assessments, including
determining the data needed; whether data are currently available and their quality; and when
data are not available, whether they need to be developed. Understanding data availability,
applicability, characteristics, quality issues and limitations is critical to conducting a
scientifically sound exposure assessment. This chapter presents guidance on the assessment of
data uncertainty and variability. It also emphasizes the importance of transparency and
communication of findings to the risk manager/decision maker and stakeholders.
Chapter 6 highlights basic concepts in modeling, including the principles of the modeling
process. It provides an overview of modeling for exposure assessment, outlines the criteria for
choosing appropriate models based on the goals and data quality objectives and describes how
to evaluate a model that might be useful for an exposure assessment. Chapter 6 includes
information on modeling inventories and clearinghouses and describes resources that support the
use of models of various levels of complexity, including probabilistic models.
Chapter 7 provides details on planning an observational human exposure measurement study.
Various parts of the Agency use such studies to quantify people's exposures to chemicals in their
everyday environments during their normal daily activities. The studies involve measurements of
chemical, physical or biological agents in environmental media; collection of information about
study participants and their homes, work environments and activities; and collection of personal
exposure and biological samples. This chapter discusses the aspects of planning an
observational human exposure measurement study, including budget and logistical planning,
establishing a study design, planning and executing a pilot study and a full field study and the
importance of conducting peer review. It also addresses ethical considerations that exposure
assessors need to consider when interacting with study participants and the community. Scientific
and Ethical Approaches for Observational Exposure Studies (U.S. EPA 2008c) examines both
the scientific and ethical issues associated with observational human exposure measurement
studies in more detail and is an important resource in the design and implementation of this type
of study.
Chapter 8 discusses the compounded effects of uncertainty and variability in exposure
assessments, accounting for uncertainty and variability in planning and scoping and problem
formulation (Chapter 3), and uncertainty and variability within the data used for exposure
assessments (Chapter 5). Chapter 6 highlights how an assessor uses these concepts in the
application of models in an exposure assessment.
Chapter 9 synthesizes the concepts presented in the previous eight chapters into a communication
plan and supplements them with more specific information. Chapter 9 emphasizes the importance
of identifying the intended audience, the types of communication products, communication plans
that might be appropriate for different exposure assessments and related ethical considerations.
Chapter 10 provides references for all cited documents.
Assessors need to consult with their programs for specific standard operating procedures or
guidelines. The technical materials cited and hyperlinked throughout this document provide
specific information for individual exposure assessment situations. At the time of publication, all
cited materials and hyperlinks were correct and functional. As appropriate, the Risk Assessment
Forum will evaluate the need to update this document and make appropriate adjustments as the
field of exposure science evolves.
CHAPTER 1. INTRODUCTION
1.1. Overview
The mission of the U.S. Environmental Protection Agency (hereafter "EPA" or "the Agency") is
to protect human health and the environment by understanding, characterizing and reducing
health risks associated with exposure to environmental contaminants and other agents. Exposure
science characterizes and predicts the intersection of an agent and receptor in space and time. It
provides information to develop exposure assessments and the most effective strategies to reduce
human health risk through mitigating exposure. The Agency needs to understand whether the
agent could cause an adverse health effect, the level at which the effect might be observed, the
likelihood of the effect's occurring and, if necessary, how exposure to the agent could be
reduced. The increasing number and complexity of risk assessments the Agency conducts, and
the attendant risk management decisions, present new challenges. Advances in exposure science
require EPA to consider the best available science for conducting exposure assessments.
1.2. Purpose and Scope of the Guidelines
This document builds on and supersedes the Guidelines for Exposure Assessment (U.S. EPA
1992c). It incorporates EPA science policy, analytical methods, risk assessment guidance,
methods and data developed since publication of the 1992 document, including:
• New Policy on Evaluating Risk to Children (1995b) and the 2013 reaffirmation of that
policy (2013b)
• Policy for Use of Probabilistic Analysis in Risk Assessment (Hansen 1997a) and Guiding
Principles for Monte Carlo Analysis (1997b)
• Guidance on Cumulative Risk Assessment, Part 1. Planning and Scoping (1997a)
• General Principles for Performing Aggregate Exposure and Risk Assessments (2001f)
• Exploration of Perinatal Pharmacokinetic Issues (2001e)
• Example Exposure Scenarios (2003c)
• Guidance on Selecting Age Groups for Monitoring and Assessing Childhood Exposures
to Environmental Contaminants (2005c)
• Supplemental Guidance for Assessing Susceptibility from Early-Life Exposure to
Carcinogens (2005h)
• A Framework for Assessing Health Risk of Environmental Exposures to Children (2006d)
• Concepts, Methods and Data Sources for Cumulative Health Risk Assessment of Multiple
Chemicals, Exposures, and Effects: A Resource Document (2007c)
• Scientific and Ethical Approaches for Observational Exposure Studies (2008c)
• Exposure Factors Handbook: 2011 Edition (2011d) and Highlights of the Exposure
Factors Handbook (2011e)
• Recommended Use of Body Weight 3/4 as the Default Method in Derivation of the Oral
Reference Dose (2011h)
• Benchmark Dose Technical Guidance (2012b)
• Microbial Risk Assessment Guideline: Pathogenic Microorganisms with Focus on Food
and Water (2012e)
• Framework for Human Health Risk Assessment to Inform Decision Making (2014f)
• Peer Review Handbook, 4th Edition (2015c)
• Superfund Community Involvement Handbook (2016e).
This Guidelines describes the principles of exposure science and exposure assessment, offers
guidance to the reader on various approaches for use in conducting an exposure assessment and
provides references for more detailed information, including exposure assessment tools and
technical documents that address particular exposure assessment needs.
This Guidelines for Human Exposure Assessment does not serve as a detailed instructional guide
or supplant specific exposure guidance in use by Agency programs, nor does it emphasize
specific models or approaches that might have limited applicability or have become outdated. Its
focus is on current practices in EPA programs. This document does not include an exhaustive
description of emerging topics such as high-throughput exposure assessment or the implications
of in vitro-based risk assessments on the field of exposure assessment. Aspects of these programs
published in the peer-reviewed literature, however, are included. As emerging topics mature,
EPA might consider updating this document. Agency exposure and risk assessors are encouraged
to consult with their programs to obtain specific procedures and guidelines.
This Guidelines for Human Exposure Assessment focuses on human exposure to chemical agents
under non-occupational scenarios.1 Exposure assessments for physical and biological agents
(e.g., noise, radiation, microbial hazards, nanomaterials) are beyond the scope of this document
because of their unique characteristics. This document also does not address impacts of social
stressors on the biological response to chemical agents.
This document focuses on the data and information used in exposure assessments conducted
across the Agency. The type and purpose of an exposure assessment determine the data and
information requirements. Screening-level exposure assessments require few resources and often
use available data, whereas complex exposure assessments address the most demanding exposure
questions and can include observational human exposure measurement studies.
Many other resources are available from the Agency and external sources for use with this
Guidelines for Human Exposure Assessment. This document references sources with proven
principles and approaches EPA uses.
1.3. Organization of Guidelines for Human Exposure Assessment
The order in which the contents of this document are presented is the same as the order of the
steps that assessors commonly take in preparing exposure assessments:
• Chapter 2 - an overview of the basic concepts and principles of exposure science
• Chapter 3 - planning and scoping and problem formulation for exposure assessments
• Chapter 4 - lifestages, vulnerable groups and populations of concern in exposure assessments
• Chapter 5 - collection and use of data for exposure assessment
• Chapter 6 - modeling for exposure assessment
• Chapter 7 - planning for an observational human exposure measurement study
• Chapter 8 - information on evaluating uncertainty and variability in exposure assessment
• Chapter 9 - the presentation and communication of results of exposure assessments
• Chapter 10 - full references for all cited documents.
Chapters 1 through 9 each conclude with a summary.
1 We use the term "agent" throughout this document to indicate any entity that an exposure assessor might measure
or analyze. An agent might or might not pose a risk at "environmental" levels. Chapter 2 uses the term "stressor" in
place of "agent" for consistency with National Research Council documents.
1.4. Summary
This Guidelines for Human Exposure Assessment incorporates EPA science policy, analytical
methods, risk assessment guidance and methods and data developed since publication of the
1992 Guidelines for Exposure Assessment. Describing the fundamental principles of exposure
science and exposure assessment, it presents information for taking various approaches to
exposure assessment, supplemented by detailed information published in the literature.
CHAPTER 2. PRINCIPLES OF EXPOSURE SCIENCE
AND EXPOSURE ASSESSMENT
This chapter provides an overview of exposure science and exposure assessment principles and
practices. It covers:
• Concepts and definitions for exposure science (Sections 2.1 and 2.2)
• Concepts for exposure assessment (Section 2.3)
• Equations and input variables for estimating exposure (Section 2.4)
• Development of exposure science and exposure assessments (Section 2.5)
• Emerging topics (Section 2.6).
Chapter 2 introduces key concepts discussed in detail in subsequent chapters. It is not intended as
guidance for conducting exposure assessments but rather provides a review of the principles,
approaches and tools that might be considered when planning and engaging in exposure studies
and assessments. Supporting documents and resources are cited throughout the chapter.
Section 2.7 summarizes the chapter.
2.1. Exposure Science
Human exposure science is the study of human contact with chemical, physical or biological
agents occurring in their environments. It is intended to advance the knowledge of the
mechanisms and dynamics of events resulting in adverse health outcomes, either to understand
their cause(s) or to prevent them (Barr et al. 2006). Exposure science describes the environment,
the behavior of agents in the environment, the characteristics and activities of human receptors
and the processes that lead to human contact and uptake of agents. Exposure science uses this
information to describe conditions in the real world that could lead to human health risks. It
provides the scientific knowledge, methods, data and tools for developing current, prospective
and retrospective exposure assessments that link exposure to health outcomes and evaluate
various options to manage exposures effectively (NRC 2012a; Sheldon and Cohen Hubal 2009;
U.S. EPA 2009a).
In 2012, the National Research Council (NRC) published Exposure Science in the 21st Century:
A Vision and a Strategy. That report defines exposure science as "the collection and analysis of
quantitative and qualitative information needed to understand the nature of contact between
receptors and physical, chemical, or biologic stressors" (NRC 2012a). Consistent with this
definition, the NRC committee considered that exposure science extends beyond the exposure
event itself (i.e., the point of contact) to study and describe the processes that affect the transport
and transformation of agents from their source to a dose at a target internal organ, tissue or
toxicity pathway associated with a disease process. The NRC committee chose to use the term
"stressor" rather than "agent."
A source-to-outcome framework, as illustrated in Figure 2-1, helps visualize the processes and
information important for exposure science. The text under each box in Figure 2-1 shows the
information used to characterize the various processes and conditions represented in the boxes.
The arrows between the boxes represent the models used to link the processes. The processes
important for exposure science begin with a contaminant's entering the environment and end
with dose characterization. Starting in the upper left-hand corner, a source releases agents into
the environment. Chemical reactions and physical and biological degradation transform many
contaminants. Contaminants or their transformation products move through the environment and
can be found in many types of environmental media, including air, water, soil, dust, food and
surfaces. The magnitude of exposure depends on the contaminant's concentration in the medium,
activities that transfer a contaminant from an environmental medium to a receptor and duration
of contact of the contaminant with the receptor. An exposure becomes a dose when the
contaminant moves across the receptor's external exposure surface and is absorbed into the
body; it then can disperse throughout the body in its native form, metabolized form or both. The
endpoint for exposure science is the dose that the target internal tissue, organ or developing
embryo/fetus receives: the location where the dose initiates the toxicity pathways that trigger the
adverse effect. This endpoint serves as the starting point for toxicology (Pleil and Sheldon 2011).
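Section 2.4 presents the exposure equations themselves; as a preview of how the elements in this paragraph fit together quantitatively, the sketch below (Python) computes a potential average daily dose for a single ingestion pathway using the generic concentration-times-intake form common in EPA exposure guidance. The function name, input values and the single-medium simplification are illustrative assumptions, not Agency defaults.

```python
def average_daily_dose(concentration_mg_per_kg, intake_rate_kg_per_day,
                       exposure_frequency_days_per_yr, exposure_duration_yr,
                       body_weight_kg, averaging_time_days):
    """Generic potential-dose estimate: ADD = (C * IR * EF * ED) / (BW * AT).

    Result is in mg of agent per kg body weight per day. All inputs here are
    hypothetical point estimates; a real assessment documents each term.
    """
    return (concentration_mg_per_kg * intake_rate_kg_per_day *
            exposure_frequency_days_per_yr * exposure_duration_yr) / (
            body_weight_kg * averaging_time_days)

# Hypothetical example: incidental soil ingestion by an adult over a
# 10-year exposure, averaged over the exposure duration.
add = average_daily_dose(concentration_mg_per_kg=5.0,         # mg agent per kg soil
                         intake_rate_kg_per_day=0.0001,       # 100 mg soil per day
                         exposure_frequency_days_per_yr=350,
                         exposure_duration_yr=10,
                         body_weight_kg=80,
                         averaging_time_days=10 * 365)
print(f"Average daily potential dose: {add:.2e} mg/kg-day")
```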
Figure 2-1. Source-to-Outcome Framework
[Figure not reproduced. It depicts the source-to-outcome framework: source/stressor characterization (chemical, biological, physical, non-chemical) is linked by fate, transport and transformation models to environmental characterization and environmental concentrations (air, water, soil/dust, food); exposure models link concentrations to exposure of an individual, lifestage, group or population (pathway, route, magnitude, duration, frequency, activity patterns); PBPK models link exposure to the absorbed/target dose; the framework ends with acute and chronic effects. Text under each box lists the information used to characterize that process.]
Note: PBPK = physiologically based pharmacokinetic
Adapted from NRC (1983); NRC (1997)
In 2012, the NRC committee built on the source-to-outcome framework to develop the
conceptual systems framework for exposure science shown in Figure 2-2. In this figure, the basic
components from stressor release to adverse outcome are the same. The committee, however,
added several new concepts. On the left-hand side of Figure 2-2, actions or events might be
sources for stressors that cause changes in human and natural factors or alter human behaviors or
both. The outcomes on the right-hand side of the figure have feedback loops that, inherently, can
lead to stressors or to different actions or events. The arrow across the top suggests the dynamic
nature of the system. The figure shows the instrumental role human activities play both in
describing the exposure event and in developing or mitigating exposures or risk. The figure
departs from a simple linear depiction of exposure by incorporating feedback loops resulting
from exposures or actions to those exposures. Concepts depicted in this systems framework will
become increasingly important as the Agency addresses issues of sustainability.
Figure 2-2. Conceptual Framework for Exposure Science Developed by NRC
[Figure not reproduced. It depicts a dynamic system in which actions or events (e.g., disasters, climate change, market demands, population growth, policy decisions) lead to stressors, exposures and outcomes (health, function, service, societal demands), with feedback loops from outcomes back to actions and events.]
Adapted from NRC (2012a)
The NRC committee also recognized exposure as a multiscale problem that needs to incorporate
variations of exposure to multiple stressors across scales of time, space and biological
organization; thus, exposure is considerably more complex than Figure 2-1 and Figure 2-2
depict. Multiple stressors can enter the environment at the same time from multiple sources.
Stressors can remain unchanged or chemical, physical or biological processes can transform
them. Stressors in their native forms or their transformation products can take many different
pathways to reach human receptors. Exposure often is characterized for a single stressor, in a
single medium or as a single pathway. Real-world scenarios involve multistressor, multimedia
and multipathway exposures. Exposure scientists develop methods to characterize aggregate
exposure, the sum of exposures to a single stressor from all sources, and cumulative exposure,
which addresses exposures to multiple chemicals by multiple routes over multiple periods.
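As a minimal illustration of the aggregate-exposure concept, the sketch below sums hypothetical route-specific daily doses of a single agent for one receptor; a cumulative assessment would extend the same bookkeeping to multiple agents that cause a common effect. All values and route labels are placeholders, not data.

```python
# Hypothetical route-specific daily doses (mg/kg-day) of one agent
# for a single receptor, e.g., derived from medium-specific estimates.
route_doses = {
    "dietary ingestion": 2.0e-4,
    "drinking water": 5.0e-5,
    "inhalation": 1.0e-5,
    "dermal contact": 4.0e-6,
}

# Aggregate exposure: one agent, all sources, routes and pathways combined.
aggregate_dose = sum(route_doses.values())
print(f"Aggregate daily dose (single agent, all routes): {aggregate_dose:.2e} mg/kg-day")

# A cumulative assessment would repeat this for each agent in a group that
# causes a common effect and then combine the results (e.g., with
# toxicity-weighted summation); that step is omitted here.
```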
The focus for human exposure science often is the receptor and not the sources of the stressor,
considering potential contact based on the human receptor's location and behavior. A receptor-
based approach has two important advantages over a source-based approach. It simplifies the
problem by narrowing the universe to stressors that are actually important for human exposure
and health risk, and it enables us to develop a real-world description of risk by considering the
multiple stressors by which exposure actually occurs. Exposure science, however, often is
applied in the broader context of the source-to-outcome continuum where informing risk
assessment and risk management decisions using exposure scenario or other approaches is
essential and where understanding relationships between sources and exposures is critical.
Receptors can be individuals, groups at specific lifestages within a population or the entire
population. Understanding the characteristics of human receptors, their behaviors and the
relationship between these factors and exposure or dose is crucial for a receptor-based approach.
Variability in exposure occurs because of location, occupation, activities within a location,
socioeconomic status, consumer preferences, dietary habits and other lifestyle choices. Behaviors
relative to lifestage can be particularly influential determinants for exposure, especially for
infants and toddlers and for the embryo/fetus during pregnancy. Lifestage, health status, sex and
genetic differences also can be important factors that determine dose. The drivers for human
activities are complex and, unlike stressors, cannot be predicted using first-principle models
based on physical/chemical properties. Instead, human activities are treated as stochastic
properties (random variables) described by population distributions based on available (e.g.,
observational or modeled) data.
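Because activity data are treated as random variables, assessors commonly propagate them through an exposure equation by Monte Carlo sampling. The sketch below is a minimal illustration of that idea; the lognormal parameters, the fixed air concentration and the simple inhalation-style intake form are hypothetical placeholders rather than recommended distributions or models.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000  # number of simulated receptors

# Hypothetical population distributions for two activity-driven inputs.
time_outdoors_hr = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)  # hr/day
inhalation_rate = rng.lognormal(mean=np.log(0.8), sigma=0.3, size=n)   # m3/hr

air_concentration = 10.0  # ug/m3, treated here as a fixed point estimate

# Daily inhaled amount (ug/day) for each simulated receptor.
daily_intake = air_concentration * inhalation_rate * time_outdoors_hr

for p in (50, 90, 95):
    print(f"{p}th percentile intake: {np.percentile(daily_intake, p):.1f} ug/day")
```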
Vulnerability refers to characteristics of individuals or populations that place them at increased
risk of an adverse health effect (U.S. EPA 2005c). It includes economic, demographic, social,
cultural, psychological and physical states of the receptor or population that influence patterns of
exposure to environmental contaminants and those states that alter the relationship between the
exposure of the environmental agent and its health effect on the receptor (Gee and Payne-Sturges
2004). Vulnerability also can include external stressors of socioeconomic/sociopolitical origins
(e.g., economic structural inequalities, psychosocial stressors) (NEJAC 2004; U.S. EPA 2003d).
In addition, during certain lifestages (such as fetal development), specific and characteristic
exposure routes can predominate, and exposures during these periods might enhance adverse outcomes.
Section 4.2 addresses vulnerability in detail.
Exposure science describes an open system—the environment, with sources, stressors and
human receptors. As with all open systems, it is not possible to design research and assessment
strategies in which conditions are carefully controlled and systematically varied to develop a
complete understanding of the important processes. Instead, we measure, observe and
analyze conditions and variables to elucidate the relationships between multiple variables at one
time. An important constraint associated with working in an open system is that we can confirm,
but not prove, hypotheses about exposure. Observational methods provide important information
for developing the science. Not all important parameters for describing human exposure can be
identified and known in detail because of the nature of working in an open system. This
limitation of an open system leads to increased uncertainty in exposure predictions. Exposure
research iterates between methods, measurements and models to develop scientific
understanding and principles. Methods research provides the tools that enable observational
measurements and their interpretation. Methods for human exposure science pose many
challenges, especially for personal exposure monitoring. Devices for personal monitoring need to
be extremely sensitive, accurate, selective, lightweight, easy to wear and self-powered.
Observational human exposure measurement studies (see Chapter 7) provide fundamental data
to understand exposure processes and human activities. Measurement studies provide inputs for
models and data for model evaluation. Models are the underpinnings for exposure science (see
Chapter 6). Both statistical models and models based on physicochemical processes provide the
ability to summarize and link our knowledge of exposure processes and to quantify and predict
levels of stressors, exposure and dose. Research relies on models to develop exposure
hypotheses, synthesize data on the state of the system, provide explanations of factors
influencing exposure and identify gaps in our knowledge requiring additional data. Decision
making uses models to assess exposure/dose to stressors, weigh the contributions of different
sources, project future conditions or trends, extrapolate to situations lacking observations and
evaluate the impacts of different policies or future scenarios. Exposure scientists also use models
to develop estimates of uncertainty and variability in predicted exposures.
2.2. Definitions
2.2.1. Exposure Definitions
Developing, applying and communicating exposure science requires a standardized vocabulary
and consistent set of definitions for all concepts and technical terms. Exposure science overlaps
with other disciplines, many of which use different terms for the same concepts. The definitions
used in this document reflect the field of exposure science. Definitions of exposure, dose and
related concepts are presented in (Zartarian et al. 2007). In addition, the International Programme
on Chemical Safety developed and published a glossary intended to harmonize the terms used in
chemical hazard and risk assessment, which the International Society of Exposure Science
officially adopted (Zartarian et al. 2005). Table 2-1 summarizes general exposure-related terms
directly cited from that glossary. We explain the concepts associated with these terms below.
Exposure is the contact of an agent with an external boundary of a receptor (exposure surface)
for a specific duration (WHO 2004; Zartarian et al. 2005). For exposure to occur, the agent and
receptor need to come together in both space and time. The time of continuous contact between
the agent and receptor is the exposure period. Exposure can be described in terms of the
magnitude (how much), frequency (how often) and duration (how long) of contact at an external
boundary. External boundaries are characterized by external exposure surfaces, such as the
surface of the skin or a conceptual surface over the nose and open mouth. For most
contaminants, both magnitude and route of exposure are critical characteristics in determining
adverse effects. In addition, the frequency, duration and timing (e.g., lifestage considerations,
acute versus chronic exposure) of exposure/dose are influential in determining adverse effects.
These factors depend on the source of the contaminant, its transport and fate, its persistence in
the environment and the activities of individuals that lead to contact with the contaminant.
2.2.2. Dose Definitions
Dose refers to the amount of an agent that enters a receptor after crossing an external exposure
surface. Dose profiles over time depend on the factors described for exposure and the kinetics of
absorption into the body, distribution throughout the body, metabolism by various tissues within
the body and elimination from the body (ADME); thus, the duration of the dose always is equal
to or longer than the exposure duration.
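The point that the dose profile outlasts the exposure can be illustrated with a deliberately oversimplified one-compartment model with constant uptake during contact and first-order elimination afterward; the uptake rate, rate constant and exposure window below are hypothetical, and the pharmacokinetic and PBPK models used in actual assessments are far more detailed.

```python
import numpy as np

# Deliberately simplified one-compartment model: constant uptake while
# exposure lasts, first-order elimination throughout. All values hypothetical.
uptake_rate = 1.0        # ug absorbed per hour while contact occurs
k_elim = 0.1             # first-order elimination rate constant (1/hr)
exposure_hours = 8.0     # contact with the agent ends after 8 hours

dt = 0.01
times = np.arange(0.0, 48.0, dt)
internal = np.zeros_like(times)
for i in range(1, len(times)):
    uptake = uptake_rate if times[i] <= exposure_hours else 0.0
    internal[i] = internal[i - 1] + (uptake - k_elim * internal[i - 1]) * dt

for hour in (8, 12, 24, 48):
    idx = int(hour / dt) - 1
    print(f"internal amount at {hour:2d} h: {internal[idx]:.2f} ug")
# The exposure duration is 8 hours, but the internal (dose) profile declines
# only gradually after contact ends, so dose duration exceeds exposure duration.
```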
Table 2-2 provides definitions for dose-related terms used in this document. When considering
dose terms, understanding that different disciplines use different terms to define the same
concepts is essential. As an example, within exposure science, the term "exposure" refers to the
amount of agent in contact with an external exposure surface, whereas in toxicology, the terms
"administered," "external" or "potential" dose refer to this metric. The definitions of the terms in
Table 2-2 derive from their use in exposure science. This document uses the exposure science
definition of "dose"—the amount of an agent that enters a receptor after crossing an exposure
surface.
Table 2-1. General Exposure-Related Terms
Agent: A chemical, physical or biological entity that contacts a receptor.

Exposure: The contact between an agent and the external boundary (exposure surface) of a receptor for a specific duration. Types of exposure include:
  Aggregate exposure: combined exposure of a receptor to a specific agent from all sources across all routes and pathways.
  Cumulative exposure: total exposure to multiple agents that causes a common toxic effect(s) on human health by the same, or similar, sequence of major biochemical events.

Exposure assessment: The process of estimating or measuring the magnitude, frequency and duration of exposure to an agent and the size and characteristics of the population exposed.

Exposure duration: The length of time of contact with an agent. For example, if a receptor is in contact with an agent for x minutes per day, for y days per year, the exposure duration is a year.

Exposure factors: Factors related to human behavior and characteristics that help determine a receptor's exposure to an agent.

Exposure frequency: The number of exposure events in an exposure duration.

Exposure pathway: The course an agent takes from the source to the receptor.

Exposure period: The time of continuous contact between the agent and receptor. For example, if a receptor is in contact with an agent for x minutes per day, for y days per year, the exposure period is x minutes per year.

Exposure point: The location at which the receptor contacts the agent.

Exposure point concentration: An estimate of exposure parameters in specific media (e.g., air, water, sediment).

Exposure route: The way an agent enters a receptor after contact (e.g., by ingestion, inhalation, dermal application).

Exposure scenario: A combination of facts, assumptions and inferences that define a discrete situation in which potential exposures might occur.

Exposure science: A discipline that characterizes and predicts the intersection of an agent and receptor in space and time.

Exposure surface (Contact boundary): A surface on a receptor where an agent is present. For example:
  Outer exposure surfaces (e.g., the exterior of an eyeball, the skin surface, a conceptual surface over the nose and open mouth).
  Inner exposure surfaces (e.g., gastrointestinal tract, respiratory tract, urinary tract lining).

Medium: The material (e.g., air, water, soil, food, consumer products) surrounding or containing an agent.

Receptor: Any biological entity (e.g., a human, human population, lifestage within a human population) that receives an exposure or dose.

Source: The origin of an agent for the purposes of an exposure assessment.

Stressor: Any chemical, physical or biological entity that induces an adverse response.

Source: Sobus et al. (2010); U.S. EPA (2009a; 2019c); WHO (2004; 2012); Zartarian et al. (2005; 2007)
Table 2-2. Key Dose-Related Terms
Absorption barrier: Any exposure surface that can retard the rate of penetration of an agent into a receptor. Examples of absorption barriers are the skin, respiratory tract lining and gastrointestinal tract wall (outer and inner exposure surfaces).

Bioavailability: The extent to which an agent can be absorbed by an organism and be available for metabolism or interaction with biologically significant receptors. Bioavailability involves both release from a medium (if present) and absorption by an organism.

Biomarker (Biological marker): An indicator of changes or events in biological systems. Biomarkers of exposure refer to cellular, biochemical, analytical or molecular measures obtained from biological media such as tissues, cells or fluids that are indicative of exposure to an agent. Biomarkers of effect indicate cellular, biochemical or molecular changes occurring as a result of human exposure to the agent.

Dose: Types of doses include:
  Applied: amount of agent at an absorption barrier.
  Biologically effective: amount of agent that reaches the target internal organ, tissue or toxicity pathway where the adverse effect occurs.
  Delivered: amount of agent transported to the location where the adverse effect occurs.
  Absorbed/internal: amount of agent that enters a receptor by crossing an exposure surface acting as an absorption barrier.
  Potential: amount of agent that enters a receptor after crossing an exposure surface that is not an absorption barrier.

Dose rate: The dose per unit time.

Uptake (Absorption): The process by which an agent crosses an absorption barrier.
Source: WHO (2004; 2012); Zartarian et al. (2005; 2007)
Figure 2-3 expands the exposure-to-dose portion of the source-to-outcome framework. A
chemical can cross the boundary of the body by two processes: (1) Intake is the process by which
an agent crosses an outer exposure surface without passing an absorption barrier. Ingestion into
the gut is an example of an intake process. (2) Uptake involves crossing an external exposure
surface serving as a barrier and results in an internal dose. Absorption and transport through the
stomach lining to the blood are examples of uptake processes.
The capacity for a chemical to be absorbed via uptake processes is the chemical's
"bioavailability." Chemical properties, the physical state of the material to which an individual is
exposed and the ability of the individual to physiologically absorb the chemical because of
nutritional status or gut flora activity can all affect bioavailability. Bioavailability can vary by
exposure pathway, chemical and medium. For example, the bioavailability of metals in soils
depends on the physical and chemical characteristics of the soil and the interactions of the metals
and the soil. Lifestage and other biological factors also can affect bioavailability. For example,
the bioavailability of lead from the gut is higher for young children than for adults (U.S. EPA
2007j). The delivered dose, which internal processes such as transport, metabolism and excretion
affect, is the amount of agent transported to the location where the adverse effect occurs. The
biologically effective dose is the amount of agent reaching the target internal organ, tissue or
toxicity pathway where the adverse effect occurs (Sobus et al. 2010).
Figure 2-3. Schematic of Exposure/Dose Terms
[Figure not reproduced: three panels trace the dermal, respiratory and oral routes from exposure to effect. In each panel, contact with the chemical (exposure) yields a potential dose; intake at the mouth/nose or mouth and uptake across the skin, lung or G.I. tract convert the applied dose to an internal dose; and metabolism yields the biologically effective dose at the organ where the effect occurs.]
Note: Terms unique to toxicology are shown in red; G.I. = gastrointestinal
An exposure assessment can be used to develop any of the exposure or dose measures listed in
Table 2-1 and Table 2-2. The specific measures selected depend on the objectives of the
exposure assessment and the availability of toxicity data. The selected exposure measures need
to match the dose measures used in the toxicity test to enable direct comparison between the
exposure of human populations and health outcome data. As an example, if dose-response
toxicity data are developed based on an inhalation dose, the exposure assessment needs to
provide inhalation exposure data. Likewise, if the risk assessment relies on toxicity tests that use
blood concentration as the dose measure, the exposure assessment needs to provide blood
concentrations.
2.3. Concepts in Exposure Assessment
2.3.1. The Risk Assessment Process
Within EPA, the primary purpose of exposure assessment is to inform risk assessment.
Effectively developing exposure assessments, therefore, requires understanding the risk
assessment process. Briefly, risk assessment at EPA characterizes the potential health effects of
human exposure to chemical, physical and biological agents. In 1983, the NRC's Risk
Assessment in the Federal Government: Managing the Process (NRC 1983) introduced the
concept of two distinct but interrelated steps in the risk assessment process: a determination of
whether an agent constitutes a risk and what action is necessary to reduce that risk. Although the
underlying science has evolved since that time, the NRC risk paradigm remains the cornerstone
of EPA risk assessment practice (NRC 2009), and exposure assessment remains a fundamental
component of both steps in the process.
As part of the first step, risk assessment synthesizes scientific information to evaluate the health
effects associated with human exposure, generally viewed as a four-step process (NRC 1983).
• Hazard identification: identifies adverse effects (e.g., systemic effects, cancer) that
might occur from exposure to a chemical or harmful agent.
• Dose-response assessment: estimates the toxicity or potency of an agent by evaluating
the quantitative relationship between exposure/dose and response, generally derived from
animal toxicity tests.2
• Exposure assessment: estimates exposure to the agent(s) of concern to the human
receptor and describes the human receptor of concern. Because the exposure assessment
is compared to the dose-response assessment, the two steps need to use similar measures
for exposure and dose, where possible, or describe the uncertainty associated with using
different measures.
• Risk characterization: estimates the potential for adverse effects resulting from a human
exposure along with uncertainty in the findings.
The results of the risk assessment provide the basis for risk management decisions. Science and
Decisions: Advancing Risk Assessment (NRC 2009) emphasized that risk management questions
need to be an integral part of the planning process. Risk management entails determining
whether and how risks are to be managed, reduced or eliminated, which is achieved most often
by managing and reducing exposures. Managing risk relies directly on information about the
sources, pathways and routes that lead to exposure developed by the exposure assessment.
2.3.2. Overview of Exposure Assessment
Exposure assessment is the process of estimating or measuring the magnitude, frequency and
duration of exposure to an agent and the size and characteristics of the population exposed.
Ideally, it describes the sources, routes, pathways and uncertainty in the assessment (WHO 2012;
Zartarian et al. 2005; Zartarian et al. 2007); describes contact with agents as they occur in the
real world at various lifestages; and provides data to understand and quantify health outcomes as
they occur in various populations. Exposure assessments answer three key questions:
2 A study of the potential for harmful effects of chemicals on particular plants or animals.
1. What are the characteristics of exposure (e.g., magnitude, frequency, duration,
route of entry)? The primary purpose of the exposure assessment is to estimate exposure
or dose, which then is combined with chemical-specific exposure-response or dose-
response data (often from animal studies) to estimate risk.
2. How can exposure be reduced? Exposure assessment provides information on the
individuals exposed and identifies the sources, routes and pathways for exposure. This
information helps determine the most effective ways to reduce exposure and, thus, risk.
Prospective exposure assessments can provide information on the overall impact of
mitigation strategies, including both regulatory and nonregulatory actions.
3. Has exposure changed over time? Exposure assessments monitor status and trends in
exposure over time. These assessments emphasize what the exposure is at a particular
time and how it changes over time. This type of assessment evaluates the potential for
emerging health risks and impact of risk mitigation actions.
2.3.3. Approaches for Exposure Assessment
The approach and methods used in an exposure assessment depend on the exposure assessment
questions (see Section 2.3.2) of the risk assessment, the risk management objectives and, in some
cases, the regulatory or statutory requirements, availability and cost of exposure mitigation
technologies and political and societal considerations. For example, an exposure assessment can
inform risk screening, priority setting, standard setting, permitting, enforcement, remediation
decisions or program and policy evaluation.
Many choices are available when selecting the approach for conducting an exposure assessment.
Table 2-3 summarizes the approaches and options for methods. Within an exposure assessment,
the choices are not mutually exclusive, and an assessor may choose several approaches. Potential
outputs of an exposure assessment include a population profile, a list of relevant chemicals,
chemical groups for use in risk analysis and characterization and a conceptual model for risk.
Table 2-3. Approaches for Exposure Assessments
Approach consideration: Design
Description: Determines the fundamental design of the exposure assessment
Options/Methods: Direct; Indirect; Biomonitoring

Approach consideration: Tiered approach
Description: Considers the resources and the acceptable level of uncertainty
Options/Methods: Ranges from screening-level assessments that are rapid and use few resources but are highly uncertain to very complex assessments that minimize uncertainty but are resource intensive

Approach consideration: Population selection
Description: Determines how the population is described
Options/Methods: Scenario based; Population based

Approach consideration: Estimation approach
Description: Determines how the assessment is conducted
Options/Methods: Deterministic; Probabilistic

Approach consideration: Stressor evaluation
Description: Determines how stressors are considered
Options/Methods: Exposure to single stressor, single source, single pathway; Aggregate exposure; Cumulative exposure
Design—Direct versus Indirect versus Biomonitoring. Quantitative approaches for estimating
exposure use one of three approaches: direct measurements, indirect estimation or
biomonitoring. Direct (i.e., point-of-contact) methods measure the contact of the person with the
chemical concentration in the exposure medium over an identified period. Personal monitoring
techniques such as the collection of personal air or duplicate diet samples measure an
individual's exposure directly at a point in time. Indirect estimation uses available information
on concentrations of chemicals in the exposure medium and information about when, where and
how individuals might contact the exposure medium—activities that can lead to transfer of the
agent from the exposure medium to the individual. Dose estimates rely on factors that lead to
chemical uptake. The indirect approach develops specific exposure scenarios and then uses data
(e.g., pollutant concentrations), a series of exposure factors (e.g., contact duration, contact
frequency, breathing rate) and models to estimate exposure within the scenario. Biomonitoring
measures the amount of a stressor in biological matrices. Models (Sobus et al. 2010; U.S. EPA
2012c) can be used with biomarker data (CDC 2012a) to estimate the amount of agent to which a
person has been exposed, the corresponding dose or both. These modeling approaches use
information collected following exposure and "downstream" of the point of exposure. Modeling
tools can enhance estimates of exposure from biomonitoring data (see Section 6.2.3). An
example is a pharmacokinetic model and the data necessary to run it such as physiological
parameters, the biomarker measurement and information on the time between exposures and the
time of measurement. Alternatively, biomarker data can be used directly in risk assessments if
the toxicity data used in the assessment include biomarker data as part of the dose-response
assessment (NRC 2006b). Biomonitoring data aggregate exposures from all routes and pathways,
but not always equally or proportionally. Identifying sources for exposure when multiple sources
or routes exist can be difficult.
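For illustration only, the following minimal sketch shows one way a simple steady-state, one-compartment pharmacokinetic assumption can be used to back-calculate an average intake from a measured biomarker concentration. It is not an Agency model, and the blood concentration, clearance and absorbed fraction values are hypothetical.

```python
# Minimal sketch (not an EPA model): reverse dosimetry with a one-compartment,
# steady-state pharmacokinetic assumption. Given a measured biomarker
# concentration in blood, a clearance rate and an absorbed fraction (all
# hypothetical values), back-calculate the average daily intake that would
# produce that concentration at steady state: C_ss = (f_abs * intake) / CL.

def steady_state_intake(c_blood_mg_per_l, clearance_l_per_day, absorbed_fraction):
    """Average daily intake (mg/day) consistent with a steady-state blood level.

    Assumes continuous exposure, first-order elimination and that the
    biomarker is the parent chemical.
    """
    return c_blood_mg_per_l * clearance_l_per_day / absorbed_fraction


if __name__ == "__main__":
    # Hypothetical inputs for illustration only.
    c_blood = 0.002     # mg/L measured in blood
    clearance = 12.0    # L/day total clearance
    f_abs = 0.8         # fraction of intake absorbed

    intake = steady_state_intake(c_blood, clearance, f_abs)
    print(f"Estimated average intake: {intake:.4f} mg/day")
```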
Complexity—Tiered Assessments. Given the numbers of chemicals, other potential stressors or
scenarios evaluated for environmental health risks, the need for efficiency, cost-effectiveness and
focus in the risk assessment process is critical. Selection of the assessment tier depends on the
purpose of the assessment and the quality and quantity of the available data, resources, level of
acceptable uncertainty and statistical methodologies. Thus, assessments use a tiered approach,
often starting with a screening-level assessment and increasing the level of complexity as
required. Lower tier assessments can require few resources and can evaluate large numbers of
agents. Complex risk assessments, in contrast, can address the most demanding problems in risk
assessment. The goal is to design the exposure assessment to fit the needs of the risk
managers/decision makers, balancing the complexity of the assessment against time and resource
constraints. Exposure, hazard and risk management information need to inform each level.
• Screening-level exposure assessments determine whether further work is needed to aid
the risk management decision. Readily available data, conservative assumptions and
simple models are the primary bases for these assessments. For example, a screening-
level assessment of a contaminated site might determine if additional data are needed or
whether input parameters need refinement. Screening-level assessments often use point
estimates (i.e., single exposure values). Depending on the needs of the assessment, an
assessor can generate screening-level exposure estimates for multiple exposure scenarios.
• As the accuracy and precision needed to limit uncertainty increase, exposure assessments
increase in complexity. Complex exposure assessments use sophisticated models or
observational human exposure measurement studies, or both, to collect the data and
exposure factors, and they usually require more data. Complex exposure assessments
often use probabilistic distributions for one, some or all of the parameters. Depending on
the needs of the assessment, an assessor can generate complex exposure estimates for
actual environmental conditions or prospective or retrospective scenarios.
Population Selection—Scenario versus Population Based. For this type of exposure
assessment, either a scenario-based or a population-based approach describes populations. For
the scenario-based approach, a distinguishable set of behaviors or locations that lead to exposure
defines a specific receptor group of interest. Exposure scenarios then use sets of facts,
assumptions and inferences about how exposure takes place under a specific set of conditions.
The resulting exposure metric is usually a single point estimate (e.g., 95th percentile) for a
specific population. Carefully selected exposure factors avoid making unrealistically
conservative estimates. Population-based approaches provide information on the broader context
of exposure for a selected population, including variability within that population or
intrapersonal variability. One approach weights exposure input data in assessing the population
of interest. Input data represent the population of interest, its variability and correlation among
variables, and account for nonlinearity in exposure conditions. Interpersonal variability in
influential exposure factors or use of a time-series approach could introduce additional
complexity. Outputs from population-based assessments are population exposure and dose
distributions.
Estimation Approach—Deterministic versus Probabilistic. Deterministic exposure
assessments use point estimates (e.g., empirical data) as inputs to exposure equations or models.
This approach most often is screening level. Conservative input variables result in a quick
estimate of potential exposures and possible concerns. Depending on the purpose, exposure
estimates can use exposure factors representing the high end (90th percentile or above), median
(50th percentile) or low end (25th percentile or below) of the distribution.
Probabilistic exposure assessments use statistical (e.g., analytical) distributions for input
variables, parameterizing these distributions and characterizing the conditions or probabilities
associated with the use of particular distributions. Probabilistic approaches better account for the
uncertainty and variability in influential input variables. The degree of complexity captured
using a probabilistic approach can be far ranging, depending on the number of variables that use
statistical distributions, whether correlation is maintained among multiple variables (e.g., body
mass, fat-free body mass, overall fitness) and the degree to which the number and groups of
receptors in the exposure assessment are expanded. The outcome of an exposure assessment
using a probabilistic approach is a statistical distribution of the estimated exposures or doses for
the receptors.
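For illustration only, the following minimal sketch contrasts the two approaches using the simple inhalation relationship described in Section 2.4.1. The point estimates and lognormal distribution parameters are hypothetical, and the probabilistic estimate is propagated with a basic Monte Carlo simulation.

```python
# Minimal sketch contrasting a deterministic and a probabilistic estimate for
# the simple inhalation relationship E = Ca x IR. All input values and
# distribution parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)

# Deterministic: single high-end point estimates give a single exposure value.
ca_point = 0.05   # mg/m3, high-end air concentration
ir_point = 22.0   # m3/day, high-end inhalation rate
e_deterministic = ca_point * ir_point  # mg/day

# Probabilistic: sample the inputs from assumed lognormal distributions and
# propagate them through the same equation (a basic Monte Carlo analysis).
n = 100_000
ca = rng.lognormal(mean=np.log(0.02), sigma=0.6, size=n)  # mg/m3
ir = rng.lognormal(mean=np.log(16.0), sigma=0.2, size=n)  # m3/day
e_probabilistic = ca * ir                                  # mg/day

print(f"Deterministic high-end estimate: {e_deterministic:.3f} mg/day")
for p in (50, 90, 95, 99):
    print(f"Probabilistic {p}th percentile: {np.percentile(e_probabilistic, p):.3f} mg/day")
```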
Stressor Evaluation—Single Chemical versus Aggregate versus Cumulative. Historically,
exposure assessments largely have been oriented toward single-pathway and single-chemical
evaluations that yield point estimates of exposure. Aggregate or cumulative assessments help
describe real-world situations that consider multiple pathways and agents within a single
assessment.
Aggregate exposure is the sum of exposures of an individual or a defined population to a specific
agent from all sources and pathways. Aggregate exposure assessments provide qualitative or
quantitative estimates of the combined exposures of an individual (or a defined group or
population) to a specific agent from all sources through all relevant exposure routes (i.e.,
inhalation, ingestion, dermal absorption); pathways (e.g., ingesting contaminated groundwater,
inhaling volatilized chemicals while showering); and environmental sources (e.g., air, surface
water, groundwater, soil, sediment, fish) (ILSI 1999; U.S. EPA 1991b). Often, physiologically
based pharmacokinetic or other dose models combine estimated exposures from multiple
sources, pathways and routes to provide a projected single dose or biologically effective dose
metric. Alternatively, biomonitoring can aggregate chemical stressors.
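For illustration only, the following minimal sketch shows simple additive aggregation of hypothetical pathway-specific dose estimates for a single agent expressed in consistent units; as noted above, pharmacokinetic or other dose models often perform this combination in practice.

```python
# Minimal sketch of additive aggregation for a single agent: sum pathway- and
# route-specific dose estimates expressed in consistent units (mg/kg-day).
# Values are hypothetical.

pathway_doses_mg_per_kg_day = {
    "drinking water ingestion": 1.2e-4,
    "inhalation while showering": 3.0e-5,
    "dermal contact while bathing": 8.0e-6,
    "fish ingestion": 6.5e-5,
}

aggregate_dose = sum(pathway_doses_mg_per_kg_day.values())
print(f"Aggregate dose: {aggregate_dose:.2e} mg/kg-day")
```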
The Food Quality Protection Act3 mandates that EPA consider cumulative risk from exposure to
all pesticides with common toxicity mechanisms (U.S. EPA 1996c; U.S. EPA 2002i). EPA's
Framework for Cumulative Risk Assessment provides a more general definition that recognizes
combined risks from aggregate exposures to multiple agents or stressors, which could be
chemical, physical or biological agents or the absence of a necessity (e.g., food, shelter, clothing)
(U.S. EPA 2003d). Cumulative risk assessment can be very complex and can involve several
iterations to examine factors related to population vulnerabilities, public health information,
toxicological and epidemiological data, completed exposure pathways, differential exposures and
contact with environmental media and pollutant sources.
2.3.4. Uncertainty and Variability in Exposure Assessments
Exposure predictions that models generate provide a computational means of representing
complex real-world exposures using available data and various assumptions. The model
performance or predictions, however, vary in their reliability and accuracy, depending on many
factors. The most critical factor that influences the exposure estimate is the ability to capture
adequately the inherent variability in model inputs and parameters (e.g., those associated with
time activity patterns, product use, emission rates, distribution of chemicals within the media of
concern, exposure and ADME factors, physiological characteristics, dietary patterns, among
others), both within and between individuals. For computational exposure modeling,
incorporating the variability in the numerous information data streams as part of the integrative
exposure model calculations can be challenging; probabilistic methods such as Monte Carlo or
Bayesian modeling tools, however, can facilitate this procedure. More information on model
uncertainty is available in Sections 6.3.4 and 8.3.
Uncertainty regarding exposure or dose predictions typically arises due to limitations of available
information or input and parameter data, and limitations of the computational modeling
techniques used to simulate complex and challenging physical, chemical and behavioral or
stochastic processes. Unlike variability, which is due to inherent properties of the entire system,
uncertainty is due to lack of knowledge in the vital parts of the computational exposure modeling
process. Typically, three broad categories describe uncertainty: (1) scenario uncertainty
(consisting of several parts), (2) parameter uncertainty and (3) model uncertainty. Accounting for
and describing such uncertainties is critical for an exposure assessment. High-performance
computing methods and software now provide the ability to evaluate propagation of uncertainties
across each step of the source-to-exposure-to-dose continuum.
3 Food Quality Protection Act of 1996, Pub. L. No. 104-170, 110 Stat. 1489 (U.S. EPA 1996c).
2.4. Calculating Exposure Estimates
By combining information and data describing exposure scenarios, concentrations, activity
patterns and other exposure factors, an exposure assessor can develop a quantitative estimate of
exposure for an individual or a population. As described previously, characterizing exposure
requires definitions of mass and time.
This section presents route-specific equations and associated input variables used to estimate
exposure via the inhalation, ingestion and dermal routes—the three most common exposure
routes. General equations are presented here, and more detail is found in Draft Protocol for
Measuring Children's Non-Occupational Exposure to Pesticides by All Relevant Pathways (U.S.
EPA 2001b), including equations for estimating exposure via inhalation and dermal routes.
Additional details on different forms of these exposure equations, including model default values
used in various human exposure models, are found in Williams et al. (2010).
2.4.1. Inhalation Exposure
Exposure occurs via the inhalation route when an individual breathes a chemical. The chemical
can directly affect the respiratory tract (point-of-entry effect) or enter the bloodstream through
respiratory tract tissues, potentially affecting other systems of the body (target organ effect). A
simplifying assumption is that inhalation exposure equals dose for gases, aerosols and fine
("respirable") particles less than 2.5 micrometers ((J,m). More refined estimates of dose require
separate equations and models that account for AD ME parameters of the stressor. Larger inhaled
particles are less likely to reach the lowest parts of the lung (alveoli). Upper movement of cilia in
the lungs can sometimes remove such particles, which an individual then swallows. Nanometer-
sized particles might deposit in the upper airway and find their way to target organs (Oberdorster
et al. 2007). Estimating the dose associated with intake from inhalation exposure is complicated
because of the complex nature of the respiratory system as a portal of entry (U.S. EPA 1994b;
U.S. EPA 2009f).
In its simplest form, inhalation exposure for a given exposure event is equal to the average
chemical concentration in the air in the person's breathing zone multiplied by the inhalation rate,
as shown in the equation below (U.S. EPA 2001b):
Einh = (Ca)(IR)
where:
Einh = inhalation exposure (mass per time)
Ca = airborne concentration of the chemical contacted by the exposed individual (mass of chemical
per volume of air in breathing zone)
IR = inhalation rate (volume of air breathed per unit time)
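For illustration only, the following minimal sketch (not an Agency tool; all input values are hypothetical) applies this equation directly; the units of the concentration and the inhalation rate must be consistent.

```python
# Minimal sketch of the inhalation exposure equation Einh = (Ca)(IR).
# Inputs are hypothetical and units must be kept consistent.

def inhalation_exposure(ca_mg_per_m3, ir_m3_per_hour):
    """Inhalation exposure (mg/hour) for a single exposure event."""
    return ca_mg_per_m3 * ir_m3_per_hour


if __name__ == "__main__":
    e_inh = inhalation_exposure(ca_mg_per_m3=0.03, ir_m3_per_hour=0.8)
    print(f"Einh = {e_inh:.4f} mg/hour")
```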
Additional factors are considered for more refined estimates of exposure to specific chemicals.
For example, for complex situations involving exposure to particulate matter, deposition in the
lung and exhalation are also considered. Alternatively, exposure is estimated in terms of
exposure concentration for a pertinent exposure period when deriving hazard quotients using
EPA reference concentrations (U.S. EPA 2004a; U.S. EPA 2009f).
2.4.2. Ingestion (Dietary and Nondietary) Exposure
Ingestion exposures occur when an individual eats, drinks or inadvertently introduces a chemical
into the gastrointestinal tract. Soil, dust or foreign objects can be ingested, and ingestion of both
food and nonfood items can contribute to an individual's exposure. Depending on the properties
of the chemical, absorption can occur throughout the entire gut. A chemical can directly target
the tissue in the gut or be absorbed from various locations in the gut into the bloodstream.
Dietary (food, liquids) and nondietary (soil, dust, other materials) exposure can be estimated as
shown in the equation below (U.S. EPA 2001b):
Eing = (Cing)(IR)
where:
Eing = ingestion exposure (mass per time)
Cing = concentration of the chemical in food or other exposure media (mass of chemical per mass of
medium or mass of chemical per volume of medium)
IR = ingestion rate (mass of medium ingested during the exposure per time)
When considering multiple media for ingestion exposure, exposure from each medium is
calculated separately and then summed. Exposure events usually are expressed in terms of a
frequency of ingestion event times the intake per event.
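For illustration only, the following minimal sketch (all concentrations and ingestion rates are hypothetical) applies the ingestion equation to each medium separately and then sums the results, as described above.

```python
# Minimal sketch of the ingestion exposure equation Eing = (Cing)(IR), applied
# to each medium separately and then summed across media. All concentrations
# and ingestion rates are hypothetical.

def ingestion_exposure(c_mg_per_kg, ir_kg_per_day):
    """Ingestion exposure (mg/day) from a single medium."""
    return c_mg_per_kg * ir_kg_per_day


media = {
    # medium: (mg chemical per kg medium, kg of medium ingested per day)
    "drinking water": (0.002, 2.0),      # treating 1 L of water as ~1 kg
    "home-grown produce": (0.05, 0.3),
    "incidental soil": (1.5, 0.00005),
}

total = sum(ingestion_exposure(c, ir) for c, ir in media.values())
for name, (c, ir) in media.items():
    print(f"{name}: {ingestion_exposure(c, ir):.6f} mg/day")
print(f"Total ingestion exposure: {total:.6f} mg/day")
```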
2.4.3. Dermal Exposure
Dermal exposure occurs when a chemical acts on or is absorbed through the skin to enter the
bloodstream. Examples of how agents can contact the skin include through swimming, bathing,
gardening, other hobby-related activities and use of personal care or cleaning products. Exposure
to an aerosol, liquid, solid or contaminated surface is the most common cause of dermal
exposure. Liquid or solid aerosols can result in measurable exposure, but gases generally
produce very low dermal exposures (U.S. EPA 2007d). As with the other exposure routes, the
chemical can affect the tissue directly or affect internal organs after it enters the bloodstream.
Absorption through damaged skin or tissue (e.g., cuts, blisters) can be greater than absorption
through healthy tissue. The chemical itself can act as the mechanism that damages the tissue and
affects absorption. The medium carrying the contaminant to the skin is important for estimating
absorption—for example, whether it is hydrophilic or lipophilic or causes skin damage.
Dermal exposure for a given exposure event can be estimated as the concentration or mass of
chemical in the medium contacting the skin. A general equation for estimating dermal exposure
is shown below (U.S. EPA 2001b):
Ederm = (MRmedium)(C)(SA)
where:
Ederm = dermal exposure (mass per time)
MRmedium = mass of medium contacting the skin per time (mass of medium per skin surface area per time)
C = average concentration in medium (mass of chemical per mass of medium)
SA = skin surface area available for contact (area)
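For illustration only, the following minimal sketch (all input values and units are hypothetical) applies the general dermal equation; program-specific models add or substitute variables, as discussed below.

```python
# Minimal sketch of the dermal exposure equation Ederm = (MRmedium)(C)(SA).
# Inputs are hypothetical; models used by specific EPA programs add or
# substitute variables (e.g., transfer coefficients), as noted in the text.

def dermal_exposure(mr_mg_medium_per_cm2_hr, c_mg_chem_per_mg_medium, sa_cm2):
    """Dermal exposure (mg chemical/hour) for a single exposure event."""
    return mr_mg_medium_per_cm2_hr * c_mg_chem_per_mg_medium * sa_cm2


if __name__ == "__main__":
    e_derm = dermal_exposure(
        mr_mg_medium_per_cm2_hr=0.2,      # mg of soil contacting skin per cm2 per hour
        c_mg_chem_per_mg_medium=1.0e-5,   # mass fraction of chemical in the soil
        sa_cm2=2000.0,                    # exposed skin surface area
    )
    print(f"Ederm = {e_derm:.5f} mg/hour")
```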
This dermal equation is represented in different ways and with additional variables by different
exposure models, depending on the medium considered (soil, dust, water, chemical residue on a
surface), available measurement methods and the data collected for the chemical or medium
transferred from a surface to the skin. For example, the equation differs slightly if a dermal
transfer coefficient (in units of area per time) versus dermal transfer efficiency (unitless) is used
to estimate the mass of medium contacting the skin per time. Some models also include terms for
fraction of skin clothed to estimate the skin surface area. The Agency uses different variations of
this equation (U.S. EPA 1997c; U.S. EPA 2004d; U.S. EPA 2007h).
Various programs at EPA evaluate dermal exposures using approaches specific to the program
needs. EPA compiled and summarized these approaches in a single document, Dermal Exposure
Assessment: A Summary of EPA Approaches (U.S. EPA 2007d). Quantifying dermal dose
depends on several variables influencing how a chemical can pass through skin. In general terms,
dose is calculated by multiplying exposure (mass per time) by the fraction of the chemical that
actually penetrates the surface barrier. Dose equations are outside the scope of this document, but
numerous Agency resources are available, including Risk Assessment Guidance for Superfund
Volume I: Human Health Evaluation Manual (Part E, Supplemental Guidance for Dermal Risk
Assessment) (U.S. EPA 2004d) and Dermal Exposure Assessment: A Summary of EPA
Approaches (U.S. EPA 2007d).
2.5. Development of Exposure Science and Exposure Assessments
Related to EPA Risk Assessments
Exposure science, in various forms, dates at least to the early 20th century and has provided
inputs to three fields with even earlier origins: epidemiology (Nieuwenhuijsen 2015; WHO
1983), industrial hygiene (Cook 1969; Paustenbach 1985) and health physics (Upton 1988).
Understanding and measuring exposures grew increasingly important in the 1970s because of
greater public, academic, industrial and government awareness of chemical pollution problems
and their potential health implications. At the same time, newly developed analytical methods
enabled scientists to measure low-level, general population exposures for many chemicals. Thus,
new data sources became available for exposure assessment.
In 1983, NRC published Risk Assessment in the Federal Government: Managing the Process
(NRC 1983), commonly referred to as the "Red Book." NRC described exposure assessment as
one of the four steps of risk assessment, noting then that "[d]iscussion of specific components in
risk assessment is complicated by the fact that current methods and approaches to exposure
assessment appear to be medium- or route-specific" and that "exposure assessment has very few
components that could be applicable to all media."
Shortly after publication of the Red Book, EPA began issuing a series of guidelines for
conducting risk assessments (e.g., cancer, mutagenicity, chemical mixtures, developmental
toxicology and exposure). In the 1990s, the Agency adopted its basic model for human health
risk assessment and ecological risk assessment.
In 1992, EPA's Risk Assessment Forum issued Guidelines for Exposure Assessment (U.S. EPA
1992c), which described steps to construct exposure scenarios or to collect data in field studies to
estimate exposure. These guidelines used scientific advances to characterize exposure more
accurately, rather than assuming worst-case or hypothetical maximum exposures. These
advances included more sensitive techniques to measure concentrations of contaminants in the
environment, the use of probabilistic models to characterize the full range of possible exposures
by a population and greater awareness of uncertainty in exposure assessments (Keenan et al.
1994). Various Agency programs have implemented the Guidelines for Exposure Assessment via
standard assessment procedures that are consistent with their statutory authority, as illustrated in
Box 2-1.
Box 2-1. Agency-Specific Actions to Implement the Guidelines
• The Office of Pollution Prevention and Toxics has consumer product use scenarios and generic scenarios for worker
exposure and environmental release, which are the basis for and are consistent with the Exposure and Fate Assessment
Screening Tool (E-FAST) and the Chemical Screening Tool for Exposures and Environmental Releases (ChemSTEER)
models (U.S. EPA 2004b; U.S. EPA 2014b).
• Human health evaluation documents comprising the Risk Assessment Guidance for Superfund (U.S. EPA 1989b; U.S.
EPA 1991b; U.S. EPA 1991c; U.S. EPA 2001g; U.S. EPA 2001h; U.S. EPA 2004d; U.S. EPA 2009f) are principal
examples of the interpretation and expansion of the Guidelines for Exposure Assessment to meet the needs of risk
assessors evaluating current and future risks under the Superfund program.
• The Office of Research and Development's National Center for Environmental Assessment has developed a summary
report for dermal exposure assessment (U.S. EPA 2000h), as has the Superfund division of the Office of Land and
Emergency Management (formerly the Office of Solid Waste and Emergency Response) (U.S. EPA 2004b).
• The Office of Pesticide Programs (OPP) has updated the residential standard operating procedures that describe
standardized scenarios for evaluating consumer product usage and other standardized behaviors for exposure analysis
(U.S. EPA 2012f). OPP also has guidance on risk assessment methods for children of workers in agricultural fields and
pesticides with no food uses (U.S. EPA 2009e).
When the Guidelines for Exposure Assessment was issued in 1992, exposure assessments were
devoted principally to chemical exposures of adults from the ambient environment in a single
medium (air, water, diet, dust, surface contact). Since 1992, the field of exposure science has
expanded and changed in several significant ways:
• Many more sources of exposure concentration data and information are available, ranging
from national surveys and registries to small studies of individual chemicals. Some data
sources are proprietary, and others are publicly available. Environmental and personal
monitoring study data are available from the peer-reviewed literature and are summarized
in government compendia.
• The Exposure Factors Handbook: 2011 Edition (U.S. EPA 2011d) provides information
on principles of exposure assessment, exposure factors and activities and behaviors that
might influence exposures. EPA updates this handbook as study data are published and
become available in the peer-reviewed literature.
• The exposure science field has evolved to recognize the contribution of individual
characteristics and activities to exposure, acknowledging that not all individuals are alike,
behave the same way or are exposed to the same concentration of a chemical. This
recognition was realized, in part, through studies such as one on air pollution and
mortality in six U.S. cities (Dockery et al. 1993); the Total Exposure Assessment
Methodology Study (TEAM; U.S. EPA 1987c); the National Human Exposure
Assessment Survey (U.S. EPA 2003f); the Children's Total Exposure to Persistent
Pesticides and Other Persistent Organic Pollutants Study (CTEPP; U.S. EPA 2005f); and
the Detroit Exposure and Aerosol Research Study (DEARS; U.S. EPA 2011a).
• Models that consider multipathway, multiroute exposures and apply probabilistic
methods to simulate behavior patterns have advanced in recent years. Improvements in
monitoring methodology and modeling now enable some exposure analyses and
assessments to consider the influences of age, sex, culture, ethnicity, activity patterns and
socioeconomic and demographic factors. Consequently, measurements and modeling of
individuals' exposures to a variety of chemicals and other stressors as they perform their
daily routines are now possible. This, in turn, provides a means to identify the sources
and routes for stressors of interest and the amount of exposure incurred because of
personal characteristics, location and behavior. As a result, for some stressors, exposure
assessors can construct a more complete and often more complex picture of exposure to
chemicals and other stressors in the environment.
• Advances in the field of analytical chemistry allow for biomonitoring programs that
directly measure the concentrations of certain chemicals or their metabolites present in
biological matrices, rather than in the environment (Paustenbach and Galbraith 2006). As
part of its National Health and Nutrition Examination Survey, the Centers for Disease
Control and Prevention continues to build a national database of biological levels of
select chemicals (CDC 2012a). A framework and methods have been developed for the
use and interpretation of biomonitoring data for assessing exposure and risk (Sobus et al.
2010).
• Improved exposure assessment models have been developed for use under the European
REACH regulation (Registration, Evaluation, Authorisation and Restriction of Chemicals)
and by other national and international authorities such as Health Canada and the
Organisation for Economic Co-operation and Development.
2.6. Emerging Topics
In 2010, EPA commissioned NRC to develop a report with the goal of advancing the science of
exposure and its use. The NRC report, Exposure Science in the 21st Century: A Vision and a
Strategy (2012a), has created an opportunity to develop a "new" exposure science. Figure 2-4
illustrates the types of new technologies that are becoming available and will provide the
opportunity to:
• Extend data infrastructure for the collection, storage and retrieval of large quantities of
traditional and nontraditional data for use in describing the multidimensional aspects of
exposure more comprehensively
• Expand the data landscape by developing, evaluating and applying methods for efficient
monitoring
• Advance modeling by strategically collecting data needed to build, evaluate and apply
models
• Develop advanced analytical systems to convert data rapidly into information that
captures the dynamic aspects of the environment, stressors and receptors
• Build complex systems models that can account for and predict positive and negative
exposure influences on risks.
Figure 2-4. New Technologies for Advancing Exposure Science
[Figure not reproduced: new technologies shown include models, databases and decision support tools; geolocation and Global Positioning System data; and other data streams integrated by source-to-dose models.]
Adapted from NRC (2012a)
Additional topics that will advance exposure science include:
• ExpoBox (EPA's EXPOsure toolBOX): created to help individuals assess exposure.
ExpoBox is a compendium of exposure assessment tools that links guidance documents,
databases, models, reference materials and other related resources.
• ExpoFIRST (Exposure Factors Interactive Resource for Scenarios Tool): brings EPA's
Exposure Factors Handbook: 2011 Edition data to an interactive tool that maximizes
flexibility and transparency for exposure assessors. ExpoFIRST allows exposure
assessors to perform and document calculations for community and site-specific exposure
assessment.
• ExpoCast and high-throughput screening: EPA develops and uses innovative methods to
develop exposure estimates for thousands of chemicals to better protect human health and
the environment.
• Systematic review principles: a structured process of identifying, evaluating and
integrating evidence for exposure assessments developed as part of a risk assessment.
Examples of approaches for systematic reviews include the Office of Pollution
Prevention and Toxics' Application of Systematic Review in TSCA Risk Evaluations; a
National Academy of Sciences workshop on Strategies and Tools for Conducting
Systematic Reviews of Mechanistic Data to Support Chemical Assessments; PRISMA
(Preferred Reporting Items for Systematic Reviews and Meta-Analyses); and a planned
approach for the IRIS (Integrated Risk Information System) program to optimize the
application of best practices for systematic reviews.
• Exposome research: EPA incorporates exposome research within its research framework
to better understand the causal links between exposure and adverse effects on human
health and the environment, in support of the Agency's mission.
• Microbiome research: EPA supports microbiome research within its research framework
to better understand the role of the biome as a component of the body that influences the
absorption and metabolism of ingested chemicals.
• Social determinants of health research: Within its research framework, EPA supports
research on the interrelationships between chemical and non-chemical stressors and how
non-chemical stressors might change the biological response to a chemical exposure, in
support of the Agency's mission.
Looking to the future, an explosion of data is expected for characterizing the spatial and temporal
dimensions of the fate and transport of multiple stressors in the environment and movement,
activities and exposures to humans and ecosystems. New techniques for environmental
measurements and biomonitoring will enable the detection and verification of exposures and
their linkages to human and ecosystem outcomes. New models and informatics tools will allow
better description of the current condition, prediction of future conditions and understanding the
impacts of decision alternatives to reduce exposures. These new tools will enable EPA to address
exposures to the most vulnerable and the most highly exposed individuals and communities.
Communities and individuals will be able to understand their exposures and act to reduce them.
EPA will incorporate these new data, techniques and models into the Agency's exposure
assessments as they are evaluated and reviewed, as appropriate, for use in Agency decisions.
2.7. Summary
• Human exposure science is the study of the contact of humans with chemical, physical
or biological agents in their environment.
o Exposure science includes describing the processes influencing the transport and
transformation of agents from their source to a dose at a target internal organ, tissue
or toxicity pathway associated with a disease process,
o Exposure is multiscale across time, space and biological organization; and exposure
scenarios involve multiple stressors, multiple media and multiple pathways,
o The focus for human exposure science is the receptor, which can be individuals,
groups at specific lifestages within a population or the entire population,
o Exposure predictions are uncertain, requiring iteration between methods,
measurements and models to develop scientific understanding and principles.
• Exposure is the contact of an agent with an external boundary of a receptor for a specific
duration.
• Dose is the amount of an agent that enters a receptor after crossing an external exposure
surface.
• Risk assessment is a four-step process that synthesizes scientific information to evaluate
the health effects associated with human exposure: (1) hazard identification, (2) dose-
response assessment, (3) exposure assessment and (4) risk characterization.
• Exposure assessments answer three questions: (1) What are the characteristics of
exposure? (2) How can exposure be reduced? (3) Has exposure changed over time?
• The exposure assessment questions, risk management objectives and any regulatory or
statutory requirements, availability and cost of exposure mitigation technologies and
political and societal considerations determine the approach for an exposure assessment,
o Quantitative approaches for estimating exposure use direct measurements, indirect
estimation or biomonitoring.
o Approaches range from screening-level assessments to complex, resource-intensive
assessments.
o Populations are described using either a scenario-based or a population-based
approach.
o Deterministic exposure assessments use point estimates as inputs to exposure
equations or models; probabilistic exposure assessments use statistical distributions
for input variables, parameterize the distributions and characterize the conditions or
probabilities associated with the use of particular distributions.
• Uncertainty is classified in three broad categories: (1) scenario uncertainty,
(2) parameter uncertainty and (3) model uncertainty.
• Variability is due to inherent properties of the entire system.
• Inhalation, ingestion and dermal are the three most common exposure routes.
o Inhalation exposure is the product of the chemical concentration in the air in the
person's breathing zone and the inhalation rate,
o Ingestion exposure is the product of the concentration of the chemical in food or
other exposure media and the ingestion rate,
o Dermal exposure is the product of the mass of medium contacting the skin per time,
the chemical concentration in the medium and the skin surface area available for
contact.
• Exposure science has developed to reflect the increased availability of sources of
exposure concentration data and information.
• Emerging topics for future consideration include (1) an explosion of available data for
characterizing fate and transport of multiple stressors in the environment, and movement,
activities and exposures to humans and ecosystems; (2) new techniques for
environmental measurements (e.g., sensors, citizen science) and biomonitoring; and
(3) new models and informatics tools to describe current conditions, predict future
conditions and understand the impacts of decision alternatives.
CHAPTER 3. PLANNING AND SCOPING AND
PROBLEM FORMULATION FOR EXPOSURE
ASSESSMENTS
EPA risk assessments comprise three components: hazard identification, dose-response assessment and
exposure assessment. Risk characterization integrates these three components. Planning and
scoping and problem formulation are the first steps in the risk assessment process and in the
exposure assessment component. Activities at this stage establish the purpose, scope, approach,
participants, level of effort and resources (U.S. EPA 2002g; U.S. EPA 2014f). The decisions the
assessment is intended to inform drive the planning. As with the risk assessment, multiple
challenges and requirements can arise when conducting an exposure assessment. For example, an
assessment completed as part of a regulatory action could involve various legal considerations:
the statute under which it is being conducted (e.g., Clean Air Act,4 Clean Water Act5) and the
regulatory program of which it is part (e.g., Six-Year Review of Drinking Water Contaminants
under the Safe Drinking Water Act,6 Pesticide Registration Review, Risk and Technology
Review program). Such legal considerations could influence specific aspects of the assessment.
As with EPA's Framework for Human Health Risk Assessment to Inform Decision Making, EPA
has designed this Guidelines for Human Exposure Assessment to align Agency exposure
assessments with the needs of Agency decision makers (U.S. EPA 2014f). The Human Health
Risk Assessment Framework elaborates on the concepts of planning and scoping, including
consideration of stakeholder involvement, peer review and problem formulation.
The level of complexity of the planning and scoping process is commensurate with the
complexity of the assessment—from screening level to complex. EPA programs might
implement specific requirements for planning and scoping and for problem formulation to meet
the particular programmatic needs. We encourage risk assessors to consult with their programs
and follow the programs' standard operating procedures (SOPs) during planning and scoping and
problem formulation.
As in risk assessment, planning and scoping and problem formulation for exposure assessment
involve a series of interrelated and iterative steps, including ensuring effective engagement of
stakeholders. Figure 3-1 presents these steps, and this chapter presents guidance for each:
• Planning and scoping (Section 3.1)
• Problem formulation (Section 3.2)
• Exposure assessment plan development (Section 3.3).
Section 3.4 summarizes this chapter.
4 Clean Air Act of 1963, 42 U.S.C. 7401 et seq.
5 Clean Water Act of 1972, 33 U.S.C. 1251 et seq.
6 Safe Drinking Water Act, 42 U.S.C. 300f et seq.
Figure 3-1. Planning and Scoping and Problem Formulation for Exposure Assessment
[Figure not reproduced: the figure maps the elements of this chapter (planning and scoping, Sections 3.1.1 through 3.1.5; problem formulation, Sections 3.2.1 and 3.2.2; and the exposure assessment analysis plan, Sections 3.3.1 and 3.3.2) to the key steps: begin dialogue on the nature of the concern and establish goals; build an assessment team and identify stakeholders; develop a conceptual model; identify the assessment approach/options, data gaps and needs; develop an exposure assessment plan; and develop a communication plan.]
Adapted from U.S. EPA (2002g)
Factors specific to a given exposure assessment, such as resources, regulatory drivers and
stakeholder considerations, will drive the nature and sequence of steps in the assessment.
Regardless of the drivers, informed planning and scoping processes are key to the success of any
exposure assessment. As exposure data are collected and analyzed during the development of an
exposure assessment, the planning and scoping and problem formulation process might need to
be revised or updated (Figure 3-2).
3.1. Planning and Scoping
Planning and scoping is an essential and integral part of exposure assessment. The planning
component involves identifying the underlying question on which the assessment is focused and
the constraints under which the assessment is conducted (timeframe, resources). Scoping entails
a search and review of available information and approaches for conducting the assessment. A
systematic and transparent planning and scoping process promotes:
• Assurance that exposures being assessed are relevant and important
• Efficient time and resource management
• Agreement among the exposure assessor, risk assessor and risk manager/decision maker
on the exposure assessment's purpose
Figure 3-2. The Overall Risk Assessment Process
[Figure not reproduced: planning and scoping and problem formulation lead into the risk assessment (hazard identification, dose-response assessment, exposure assessment and risk characterization), which informs risk management and decision making (policy, risk reduction actions) together with economic, regulatory, political, social and other factors; internal and external stakeholder communication occurs throughout.]
Adapted from NRC (2009)
• Communication within the risk assessment team and with stakeholders
• Trust, buy-in and realistic expectations of stakeholders and other interested parties
• Better-informed decisions that use high-quality data, derived using scientifically
established methods, and are based on established objectives
• Participation from multiple disciplines to ensure the scope and degree of scientific
complexity adequately inform the exposure assessment question(s)
• Documentation of all decisions and the rationale behind those decisions.
A team approach to planning exposure assessments can be beneficial (NRC 2009; U.S. EPA
2002g). For routine screening-level exposure assessments conducted in accordance with
established SOPs, a project team might not be required; for more complex assessments, however,
a project team often is essential. At a minimum, the project team includes individuals with the
necessary scientific expertise, representative members of the exposure assessment team, human
health assessors, the risk assessor and the risk manager/decision maker. The team composition
reflects the specific expertise required during various parts of the exposure assessment, including
the planning discussion. For example, the risk manager/decision maker might identify the
regulatory needs of the risk assessment, timeframes and amount of resources available for the
analysis. The project team might focus on evaluating the current and future concentrations of
contaminants in various media. An exposure assessor could help the team consider the nature of
the contaminants and understand potential sources, contaminant routes and pathways,
environmental fate, extent of contamination and data availability at the national or local level.
3.1.1. Exposure Assessment Goals and Scope
The goals of the exposure assessment determine its scope. The planning and scoping process for
an exposure assessment begins with a dialogue among the project team to define the question at
hand or the hypothesis the assessment seeks to address. The intent of problem formulation is to
develop the dimensions and elements of the exposure assessment and to define the objectives
clearly. Some assessments might benefit from stakeholder input, such as a site-specific
assessment for which the community has information unique to that site that is not readily
evident to the assessment team. Information gathered during this goal-setting stage helps an
exposure assessor answer questions such as the following: What questions should the exposure
assessment address? Why is an exposure assessment necessary? How will the assessment results
be used? What resources and expertise are needed? What is the timeframe? The more
information gathered at this step, the more clearly defined will be the scope.
Exposure assessments are carried out for many reasons, including for use in risk assessment,
status and trends measurements, mitigation, regulatory decisions, priority setting and
epidemiological study support. Specific goals for an exposure assessment might include
identifying exposed individuals, lifestages, groups or populations and disparities in that
exposure; screening chemicals for potential exposure and identifying the source(s) of
contamination; defining exposure pathways, fate and transport properties and routes of exposure;
and assessing temporal considerations.
To ensure the final assessment product meets the needs of the decision makers, the assessment
team needs a clear understanding of the exposure assessment's purpose. The particular purpose
for which an exposure assessment will be used (e.g., location-specific versus regional or national
decision) and the availability of data often have significant implications for the scope, level of
detail and approach. In addition, the team needs to understand the regulatory basis for the risk
assessment and the kind of information needed to satisfy such requirements.
The planning and scoping process defines the elements that will be included in an exposure
assessment. It helps the project team determine such issues as the bounds of the exposure
assessment and the approaches to consider. Understanding the boundaries of the problem helps
define the scope. For example, does an exposure or risk occur in a local community or nationally
(U.S. EPA 2014f)?
Reconciling the limitations of the scope with the assessment questions requires careful
consideration. For example, if data limitations preclude addressing questions the risk
manager/decision maker poses, the assessment team will have to consider alternatives. These
alternatives might include limiting the scope of the assessment, accepting a higher
level of uncertainty in the analysis by relying on default assumptions rather than empirical data,
delaying the assessment pending the completion of studies to provide the data or relying on
modeling with the associated assumptions. Thus, defining the scope of an exposure assessment is
a process that can include both analytical and deliberative aspects.
Reasons for limiting the technical scope of an exposure assessment need to be stated explicitly
and can include details on resource limitations, data and assumptions, impact of risk elements on
the risk estimate and available methods. If an exposure assessment study is considered necessary
but the resource commitment is uncertain, assessors might find conducting "back-of-the-
envelope" sensitivity analyses helpful in determining how important the study parameters are to
regulatory decision making or problem solving. When an element of risk is likely important but
no valid data are available, an exposure assessor needs to highlight this deficiency or use
judgment or default values to approximate the missing data. Both the exposure assessment and
risk characterization present such judgments and approximations, along with their implications
(see Chapter 8).
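For illustration only, the following minimal sketch shows a simple one-at-a-time sensitivity check for a hypothetical two-pathway exposure estimate; each input is perturbed by 10 percent to indicate which parameters most influence the result. The parameter names and values are illustrative assumptions, not Agency defaults.

```python
# Minimal, hypothetical sketch of a "back-of-the-envelope" one-at-a-time
# sensitivity check: perturb each input by 10% and compare the change in the
# combined exposure estimate to see which parameters matter most.

baseline = {
    "air_conc_mg_per_m3": 0.02,      # air concentration
    "inhalation_m3_per_day": 16.0,   # daily inhalation rate
    "water_conc_mg_per_l": 0.004,    # drinking water concentration
    "water_intake_l_per_day": 2.0,   # daily drinking water intake
}

def total_exposure(p):
    """Combined inhalation + drinking water ingestion exposure (mg/day)."""
    inhalation = p["air_conc_mg_per_m3"] * p["inhalation_m3_per_day"]
    ingestion = p["water_conc_mg_per_l"] * p["water_intake_l_per_day"]
    return inhalation + ingestion

base = total_exposure(baseline)
for name in baseline:
    perturbed = dict(baseline)
    perturbed[name] *= 1.10  # one-at-a-time +10% perturbation
    change = 100 * (total_exposure(perturbed) - base) / base
    print(f"+10% in {name}: {change:+.1f}% change in total exposure")
```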
The purpose of the exposure assessment and the quality and quantity of the available data,
resources, level of acceptable uncertainty and statistical methodologies determine the approach.
Thus, assessments use a tiered approach, often starting at the screening level and increasing the
complexity, as required. Lower tier assessments might require few resources and can be
sufficient for evaluating large numbers of agents. In contrast, addressing the most challenging
problems might require a complex assessment. A goal during planning and scoping is to balance
the informational needs of the risk assessor and risk manager/decision maker with a judicious use
of time, expertise and other resources.
The initial phase of an exposure assessment often uses screening-level exposure analyses. At this
point, little location- or scenario-specific information typically is available, and an exposure
assessor relies on default values selected to ensure the analyses examine exposures that would
fall at or beyond the high end of the expected exposure distribution. The assumption is that, if—
in a bounding estimate scenario—risks are not anticipated, assessors, risk managers/decision
makers and stakeholders can be confident the exposure evaluated is not a concern (U.S. EPA
2004c). An exposure assessor also might use probabilistic exposure assessment approaches
during the screening-level analysis to identify key exposure parameters for further evaluation
(e.g., sensitivity analysis; see Section 8.3). These approaches, however, more often apply when
refining an exposure assessment. EPA programs also can implement specific procedures that
vary from this basic process. Exposure assessors need to consult with their programs and follow
their SOPs to the extent applicable to the particular situation.
Regulatory considerations can determine the exposure assessment approach. Aggregate exposure
assessments (see Section 2.3.3) are required under the Food Quality Protection Act,7 Safe
Drinking Water Act,8 Clean Air Act9 and other regulatory programs. Various statutes require
assessment of cumulative exposure (described in Section 2.3.3) to assess cumulative risk. For
example, the 1996 Food Quality Protection Act requires assessments based on multiple
pesticides with a common mechanism of action (later interpreted as mode of action by the Office
of Pesticide Programs), whereas the 1970 National Environmental Policy Act10 requires more
broadly based cumulative assessments that consider multiple chemical and nonchemical
stressors.
More complex exposure assessments might focus on exposures that attempt to represent actual
environmental conditions or "what-if" (hypothetical) scenarios. Such assessments might require
more data, use sophisticated models or rely on observational human exposure measurement
7Food Quality Protection Act of 1996, Pub. L. No. 104-170, 110 Stat. 1489 (U.S. EPA 1996c).
8Safe Drinking Water Act, 42 U.S.C. 300f et seq.
9Clean Air Act of 1963, 42 U.S.C. 7401 et seq.
10National Environmental Policy Act, 42 U.S.C. 4321 et seq.
studies to collect data and determine exposure factors. Complex exposure assessments often use
probabilistic distributions for one, some or all of the parameters.
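As a simple illustration of treating parameters probabilistically, the Python sketch below assigns assumed lognormal distributions to two exposure factors and reports percentiles of the resulting dose distribution. The distributions, their parameters and the fixed concentration are placeholders for demonstration only, not recommended inputs.

# Minimal Monte Carlo sketch: a dose distribution when two exposure
# parameters are treated probabilistically; all inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

conc = 0.5                                                   # mg/L, fixed value
intake = rng.lognormal(mean=np.log(1.5), sigma=0.4, size=n)  # L/day
bw = rng.lognormal(mean=np.log(75.0), sigma=0.2, size=n)     # kg

dose = conc * intake / bw                                    # mg/kg-day

for p in (50, 90, 95, 99):
    print(f"{p}th percentile dose: {np.percentile(dose, p):.4f} mg/kg-day")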
A systematic and transparent planning and scoping process promotes efficient time and resource
management. Questions to consider include the resources and time available for an exposure
assessor and risk manager/decision maker to address the problem, regulatory deadlines and
requirements. Available resources and the schedule for a decision determine the effort for
obtaining and analyzing the data. When conducting an exposure assessment, particularly when
constrained by time or resources, an exposure assessor needs to identify the essential questions;
translate those questions into specific scenarios; evaluate existing literature (e.g., systematic
review); and, using that information, design an exposure assessment that addresses the needs of
the risk manager/decision maker. The need to meet external deadlines or to coordinate with the
schedules of other organizations can become limiting factors in deciding what can be prepared.
In summary, a well-documented and rigorous planning and scoping process involving the
assessors and risk managers/decision makers needs to be systematic and transparent (U.S. EPA
2014f).
3.1.2. Overarching Considerations
Among the overarching themes that EPA's risk assessments might address are children's
environmental health protection, cumulative risk assessment and environmental justice
considerations. Although these considerations might not affect all analyses, early consideration
and discussion of these issues can enhance the utility of the risk assessment. Additionally, they
could receive particular attention in the risk management arena, depending on the decision
context. Such attention can be independent of a risk assessment or might require additional data
to address one or more of the overarching considerations. Exposure assessors need to be
cognizant of these issues so they can consider them during the planning and scoping process.
Chapter 4 considers lifestage, susceptibility and environmental justice more fully.
3.1.3. Stakeholder Involvement
For highly visible assessments (e.g., Superfund cleanup), communication is the foundation for
stakeholder involvement. For this document, EPA defines "communication" as the exchange of
information and viewpoints between the Agency and stakeholders to achieve a goal or objective
such as fostering greater understanding of science and assessment methods or gaining greater
insight into diverse public views and concerns about the scenarios affecting the potential for
exposure of individuals or a community to a defined agent [adapted from NAS (2017)]. This
definition is consistent with that provided in Superfund's Community Engagement Toolkit
documents (U.S. EPA 2013c), which note that "risk communication is a dialogue—an interactive
process of information exchange—among the Site Team and the community that discusses the
nature of risk and other concerns. This dialogue should be a genuine and sincere conversation
that aims to identify mutual solutions and respond to public concerns." "Internal
communication" refers to the exchange of information among the exposure science, assessment
and decision making team.
Technical experts and risk managers/decision makers need to work together, informed by
stakeholder input where applicable, to develop the rationale and scope for the exposure
assessment (Box 3-1). EPA's public involvement policy (U.S. EPA 2003g) and the framework
for its implementation (U.S. EPA 2003e) provide guidance for ensuring the public is engaged
and informed. Involving risk
managers/decision makers, stakeholders and
exposure assessors up front is critical to
evaluating the exposure assessment
question(s) fully and ensuring the design of
the exposure assessment supports the agreed-
upon objectives. Communication and dialogue
with community and tribal members need to
be established during the initial phases of an
exposure assessment. EPA's Superfund
program developed the Community
Involvement Tools and Resources website,
which focuses on Superfund activities but has
application to other offices whose programs
involve community outreach. Chapter 9
provides additional information on
communication throughout the exposure
assessment process.
At the outset of the process, the assessment team determines the purpose of the assessment, its
objectives, how to proceed and whether a communication plan is essential. An
effective communication process is a two-way interaction between EPA staff and stakeholders:
EPA explains the purpose and scope of the assessment to stakeholders and addresses issues of
risk perception and, as applicable, stakeholders provide information to EPA on unique
considerations (demographics, cultural aspects, traditions, etc.) of the site or scenario. For this
reason, consulting with the community on the conceptual model (see Figure 3-1) might be
advisable. In addition, the EPA staff should consider which outreach vehicles are most effective
for interacting with the community: news media, local listservs, social media, community
meetings and others. As the project proceeds, an essential function of the assessment team is to
maintain ongoing communication with the stakeholders.
This dialogue might include asking the community to define questions they want answered and
the way in which they wish to receive the results of the exposure assessment. Payne-Sturges et
al. (2004) noted that effective communication and translation of the exposure assessment
approach enables the community to "credibly represent the study's implications to policy makers
and other stakeholders, thereby closing the loop between science and the community."
Stakeholder involvement helps ensure the exposure assessment process is transparent and risk-
based decision making proceeds effectively, efficiently and credibly (IOM 2013; NRC 2009).
The development of exposure assessments for regulatory decisions might require adhering to
administrative procedures that clearly define and describe the process for engaging stakeholders.
Stakeholders might include federal agencies; state, local and tribal governments; the regulated
community; community members affected by an environmental release; and members of the
public. The Presidential/Congressional Commission on Risk Assessment and Risk Management
(1997) suggests the following questions to identify potential stakeholders:
Box 3-1. Definitions of "Public,"
"Stakeholder" and "Community"
Public Involvement refers to the full range of activities that
EPA uses to engage tribal members and the American people in the
Agency's decision making process (U.S. EPA 2014f).
Stakeholders are individuals or representatives from
organizations or interest groups who have a strong interest in
the Agency's work and policies (U.S. EPA 2014f).
• Internal Stakeholders include EPA programs (U.S. EPA
2007b)
• External Stakeholders include the public, affected
industries, public health or environmental organizations
and other government agencies (U.S. EPA 2007b)
Community Involvement is the process of engaging in
dialogue and collaboration with community and tribal
members (U.S. EPA 2014f).
• Who might be affected by the exposure/risk assessment?
• Who has information and expertise that might be helpful?
• Who has been involved previously in similar exposure/risk situations?
• Who has previously expressed interest in being involved in similar decisions?
Deciding how and when to involve stakeholders depends on the goals of an exposure assessment
(i.e., regulatory or non-regulatory). For routine or well-defined screening exposure assessments,
input during planning and scoping might not be necessary, whereas for an exposure assessment
considered divisive, stakeholder involvement is appropriate. For community-based or location-
specific exposure assessments, seeking and encouraging community involvement is important. In
some cases, continuing dialogue with the community throughout the process is encouraged. Each
project plan should include a list of critical points for stakeholder input, such as discussions on
purpose, scope and approach. The team might decide to assign stakeholders with relevant
expertise to subgroups that have specific tasks within appropriate regulatory considerations (U.S.
EPA 2003d).
EPA recognizes the community could be aware of unique activities or practices that might result
in higher or lower exposure assumptions than the default assumptions used in an exposure
assessment. Some members of the community might possess information that could influence
exposure scenarios and health concerns. Types of information community members and local
agencies might provide include:
• Local exposure conditions and exposure factors (e.g., population-specific survey(s) on
food consumption)
• Community health concerns and observations (e.g., specific areas where children play)
• Critical information on potential or actual exposure scenarios (e.g., past actions at a
landfill)
• Highly exposed or susceptible population groups (e.g., subsistence activities, proximity to
a smelter).
Through community involvement practices and communication, a project team establishes a plan
to work with the community to identify sources of exposure assessment-specific information and
concerns and to communicate with them throughout the exposure assessment (U.S. EPA 2016e).
Community involvement activities are essential to meeting data quality guidelines (U.S. EPA
2002f) and help improve transparency of the exposure assumptions, ultimately building trust and
credibility with the community. This is particularly important when dealing with scenarios that
differ substantially from the general population such as tribal and indigenous populations (see
Section 4.3.2). Information and suggestions regarding community involvement in the Superfund
process (U.S. EPA 2016e) can be helpful for other Agency exposure assessments. Links to some
relevant community involvement resources are provided in Box 3-2.
Box 3-2. Community Involvement Planning Resources
• U.S. EPA (1996a) Community Advisory Groups: Partners in Decisions at Hazardous Waste Sites Case Studies.
EPA/540/R-96/043.
• U.S. EPA (2000f) Presenter's Manual for "Superfund Risk Assessment and How You Can Help." A 40-Minute Videotape.
EPA/540/R-99/013.
• U.S. EPA (2001i) Stakeholder Involvement & Public Participation at the U.S. EPA: Lessons Learned, Barriers, &
Innovative Approaches. EPA/100/R-00/040.
• U.S. EPA (2016e) Superfund Community Involvement Handbook. EPA/540/K-05/003.
• Additional Resources for Citizen Involvement in Source Water Protection website. U.S. EPA. Includes community
resources on protecting drinking water and source water at the community level.
• Community Involvement Tools and Resources website. U.S. EPA. Includes community resources, community
involvement policies and guidance and Superfund community involvement publications.
• Plain English Guide to the Clean Air Act website. U.S. EPA.
• Public Participation Process for Registration Actions website. U.S. EPA.
3.1.4. EPA's Tribal Program and Networks
EPA's Tribal Program has established working relationships with tribes. Assessors should
engage with their Tribal Program Managers and with program-specific project managers and
coordinate with other Agency risk assessors, where appropriate, to include tribal perspectives
when conducting exposure assessments and to facilitate clear communication with tribal partners
(Box 3-3).
Box 3-3. Resources Relevant to Exposure Assessment for Tribal Populations
• U.S. EPA (2003d) Framework for Cumulative Risk Assessment. EPA/630/P-02/001F.
• U.S. EPA (2006b) Consulting with Indian Tribal Governments at Superfund Sites: A Beginner's Booklet. Introduces EPA
staff and managers to the basics of government-to-government consultation with Indian tribal governments within the
context of the Superfund program.
• U.S. EPA (2007a) Amendments to Superfund Hazard Ranking System Guidance Incorporating Native American
Traditional Lifeways. (OSWER-9200.0-66). Presents ways that EPA can consider traditional lifeways in the Hazard
Ranking System to determine eligibility for a site on the National Priorities List under the Superfund program.
• U.S. EPA (2007c) Concepts, Methods and Data Sources for Cumulative Risk Assessment of Multiple Chemicals,
Exposures and Effects: A Resource Document. EPA/600/R-06/013F.
The Agency's network of Tribal Partnership Groups facilitates the exchange of technical
information and communication between tribes and EPA. The National EPA-Tribal Science
Council works to integrate and increase tribal involvement in EPA's scientific activities, while
the National Tribal Toxics Council provides tribal input on issues related to toxic chemicals and
pollution prevention. Assessors should engage with these partnership groups, through EPA's
Tribal Program, to better understand tribal lifeways and to discuss and collaborate on research
needs.
3.1.5. Peer Review
EPA defines peer review as a documented process for enhancing an Agency product so that the
decision the Agency takes based on that product has a sound, credible basis (U.S. EPA 2015c).
Peer review is a critical appraisal of a specific Agency product conducted to evaluate the
technical and scientific quality of an Agency product. Peer review usually involves a one-time
interaction or a limited number of interactions between the authors of the work product and the
peer reviewers. EPA encourages peer review to take place during the early stages of the project
or as part of the culmination of the work product, as appropriate (U.S. EPA 2015c). During the
planning and scoping process, the risk manager/decision maker might need to determine whether
any analyses or products of an exposure assessment merit a separate peer review. Evaluating
potential peer-review requirements early will help ensure allocation of adequate resources. In
addition, peer-review considerations are an integral part of setting exposure assessment
milestones and schedules.
EPA's Peer Review Handbook, 4th Edition (U.S. EPA 2015c) provides detailed guidance for
determining when peer review is required and how to plan and implement a peer review. The
principle underlying the Agency's peer-review policy is that expert panels will peer review all
influential scientific and technical work products used in decision making. The Office of
Management and Budget considers specific types of exposure assessments to be examples of
"highly influential scientific assessments" (U.S. EPA2006e). A scientific or technical work
product that has a major impact; involves precedential, novel or complex issues; or has a legal or
statutory requirement to be peer reviewed needs to undergo peer review. For example, major
assessments such as those involving arsenic, mercury or other agents with complex
methodological or scoping issues require peer review. In general, conceptual models and
exposure assessment plans can be candidates for peer review (U.S. EPA 2015c).
Exposure assessment products also could be the subject of public comment, as some specific
regulatory programs require. Public commenters generally include a wide range of interested
individuals not expected to provide the kind of independent, expert information and in-depth
analyses obtained from the peer-review process (U.S. EPA 2015c). An exposure assessment also
might benefit from other types of review such as peer input. EPA's Peer Review Handbook, 4th
Edition defines peer input as "a form of peer involvement that generally connotes an interaction
during the development of an evolving Agency work product, providing an open exchange of
data, insights and ideas." The risk manager/decision maker needs to consider whether to include
such reviews and factor them into the schedule and resources for the assessment.
3.2. Problem Formulation
Problem formulation builds on the information developed during the planning and scoping
process. Problem formulation is the process by which the project team develops preliminary
hypotheses about how exposure occurs and why adverse effects might occur or have occurred.
Standard Agency practice integrates the problem formulation concept, as described in Guidance
on Cumulative Risk Assessment. Part 1. Planning and Scoping (U.S. EPA 1997a), Microbial
Risk Assessment Guideline: Pathogenic Microorganisms with Focus on Food and Water (U.S.
EPA 2012e), Framework for Cumulative Risk Assessment (U.S. EPA 2003d) and Framework for
Human Health Risk Assessment to Inform Decision Making (U.S. EPA 2014f). The National
Research Council (NRC) emphasizes problem formulation as an integral component of any
exposure assessment planning activity (NRC 2009). Problem formulation is a systematic
planning step that identifies major factors for consideration in the exposure assessment,
providing its foundation. It involves all relevant parties, including the exposure assessor, risk
assessor, risk manager/decision maker, communication specialist and, when appropriate, relevant
stakeholders and other interested parties.
Three components comprise problem formulation: (1) identification of the individual,
lifestage(s), group(s) or population(s) of concern that are the subject(s) of the assessment
(e.g., general population, infants or nursing mothers, older adults); (2) a conceptual model that
presents the anticipated pathway of the agent from source to the subject of concern; and (3) an
analysis plan that lays out the approach for conducting the assessment.
3.2.1. Individuals, Lifestages, Groups, Populations
One important aspect of an exposure or risk analysis is the approach to representing the receptor
(see Chapter 4). When data are limited, assessors sometimes use a scenario-based approach—
based on a defined set of facts, assumptions and inferences about who is exposed and how—to
estimate exposures based on intensity, duration and frequency of exposure. For such an
approach, an exposure assessor defines a specific receptor of interest, usually because of a
distinguishable characteristic or behavior that might predispose the individual, lifestage, group or
population to a potentially greater exposure concentration.
Population-based approaches are common when assessors need exposure information within a
broader context. The Framework for Cumulative Risk Assessment defines population-based
approaches as those that look at one population for many stressors (U.S. EPA 2003d). The
scenario-based approach is a hypothetical situation based on a combination of measured and,
where data are unavailable, modeled estimates of a chemical in the environment or human tissue
(NRC 2009) and can include screening for pathways and chemicals of concern (U.S. EPA
1989b). In contrast to scenario-based approaches, a population-based approach frequently
incorporates probabilistic methods with an objective to better estimate interindividual variability
in exposure.
Exposed individuals or populations can be grouped by various characteristics (e.g., age, sex,
culture, behavior, socioeconomic status, location relative to the release of a contaminant,
occupation), lifestages or discrete populations for an exposure assessment. Exposure assessors
need to identify and characterize the conditions that lead to the highest concentrations and
resulting exposures and the situations that lead to exposure for the most susceptible individual,
lifestage, group or population (U.S. EPA 2009a). An exposure assessor often needs to establish a
dialogue with toxicologists/health scientists to consider whether a specific "window of
susceptibility" during a given lifestage is important to a particular risk assessment.
An assessor frequently calculates individual risk for some or all of the individuals who represent
the population. In reality, individuals within a population fall within a distribution of exposures
based on personal characteristics and individual activities and behaviors. As a result of multiple
broad-based exposure assessments (Dockery et al. 1993; U.S. EPA 1987c; U.S. EPA 2003f; U.S.
EPA 2005f; U.S. EPA 2009f; U.S. EPA 2011a), the exposure science field has evolved to
recognize the contribution of individual characteristics and activities to exposure, recognizing
that not all individuals are alike, behave the same way or are exposed to the same concentration
of a chemical.
3.2.2. Conceptual Model
The conceptual model is a planning tool used for various types of exposure assessments,
including site-specific, local-scale and national-scale problems/assessments. The conceptual
model identifies known or potential sources of contamination (soil, groundwater, surface water,
air); release mechanisms and receptor routes; all potential exposure pathways (including
secondary pathways); and the media and receptors associated with each (U.S. EPA 2001g). The
conceptual model maps out a framework that demonstrates the theoretical links between the
pollutant source or agent and exposure points. It provides a convenient format to present an
overall understanding of the problem and organizes available information in a structure that
facilitates identifying missing data or uncertainty. The conceptual model has features of both a
scientific hypothesis and a work plan. Figure 3-3 shows an example conceptual model.
Figure 3-3. Example of a Conceptual Site Model
[Figure: schematic of a contaminated site showing contaminated media (including a lagoon, soil and groundwater above the groundwater table, a surface water body and residential wells in fine to medium sand), release mechanisms (spills/leaks, infiltration, runoff, dust/volatilization) and exposure routes (inhalation, ingestion and dermal contact).]
Adapted from U.S. EPA (1989a)
Developed at the start of a project, the conceptual model is refined and updated throughout the
duration of the exposure assessment activities. The conceptual model serves as an important
communication tool for the project team, stakeholders and other interested parties. Community
members can provide input along the way to help refine exposure scenarios and health concerns.
When developing a conceptual model, a project team needs to consider the technical elements of
the exposure assessment that are consistent with the six dimensions described in EPA's
Guidance on Cumulative Risk Assessment: Part 1. Planning and Scoping (U.S. EPA 1997a), and
a seventh added to emphasize this important aspect of human health exposure assessment:
1. Individual/lifestage/group/population at risk: Who/what is at risk?
2. Sources: What are the relevant sources of agents?
3. Stressors: What are the agents of concern?
4. Pathways, fate and transport and routes of exposure: What are the relevant
exposures?
5. Health effect endpoints: What are the health effect endpoints? Are there specific
windows of susceptibility to address? How are the exposure outputs linked to the health
endpoints?
6. Timeframes of exposures: What are the relevant frequency, duration, magnitude and
overlap of exposure intervals for a chemical agent or mixture of agents?
7. Exposure-to-dose considerations: What is known about the toxicokinetics? How do
lifestage, race, sex, genetics and other factors influence exposure-dose?
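For project teams that track these elements electronically, the seven dimensions can be captured in a simple structured record that is updated as the conceptual model is refined. The Python sketch below is one hypothetical arrangement; the field names and example entries are illustrative only and do not represent a prescribed schema.

# Hypothetical structured record mirroring the seven dimensions above;
# field names and example entries are illustrative only.
from dataclasses import dataclass

@dataclass
class ConceptualModel:
    population_at_risk: list     # 1. who/what is at risk
    sources: list                # 2. relevant sources of agents
    stressors: list              # 3. agents of concern
    pathways_and_routes: list    # 4. exposure pathways, fate/transport, routes
    health_endpoints: list       # 5. health effect endpoints
    exposure_timeframes: dict    # 6. frequency, duration, magnitude, overlap
    exposure_to_dose_notes: str  # 7. toxicokinetic considerations

example = ConceptualModel(
    population_at_risk=["young children living near the site"],
    sources=["former waste lagoon", "spills/leaks to surface soil"],
    stressors=["chemical X (hypothetical)"],
    pathways_and_routes=["soil -> incidental ingestion", "groundwater -> drinking-water ingestion"],
    health_endpoints=["developmental effects (placeholder)"],
    exposure_timeframes={"frequency": "daily", "duration": "6 years"},
    exposure_to_dose_notes="lifestage-specific toxicokinetic data to be reviewed",
)
print(example.pathways_and_routes)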
An early step in developing the conceptual model is identifying health effect endpoints and
exposure-to-dose considerations, which are part of the hazard identification step of a risk
assessment. EPA programs also might implement specific procedures that vary from this basic
process. Exposure assessors need to consult with their programs and follow their SOPs regarding
development of a conceptual model.
Another early step in developing a conceptual model is to identify possible sources of the
agent(s). In some cases, the source is unknown. Whether the source is a point source
(e.g., discharge from a pipeline) or nonpoint source (e.g., runoff from a field) is important.
Identifying the principal source(s) can improve the ability to estimate the releases quantitatively
and to predict the exposure better. The conceptual model describes the relevant exposure
pathways and routes of exposure and the fate and transport of agents in the environment.
Understanding the possible movement and transformation of chemicals from their source
through the environment helps assessors evaluate the nature and form of the chemical that could
reach the exposed population. Characteristics of the source and medium dictate the fate of the
chemicals of interest. Physical (e.g., gas to aerosol or liquid) transformation of chemicals can
occur over the exposure pathway. Chemical degradation also can change the form and amount of
a chemical available for exposure (e.g., DDT [dichlorodiphenyltrichloroethane] degrades to DDE
[dichlorodiphenyldichloroethylene] and DDD [dichlorodiphenyldichloroethane]). Processes that
can alter the agent also can occur, including photolysis; reactions with other chemicals in air,
water or soil; degradation by microbes; or adsorption onto the medium. Environmental media are
sampled to characterize the concentration of chemicals in each medium and the fate and transport
of chemicals from a source to receptors.
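To make the effect of degradation on the amount available for exposure concrete, the short Python calculation below applies a first-order decay assumption; the starting concentration and half-life are hypothetical placeholders, and real agents may degrade by other kinetics.

# First-order degradation sketch: C(t) = C0 * exp(-k * t), where
# k = ln(2) / half-life. All values are hypothetical.
import math

c0 = 10.0         # mg/kg, initial concentration in soil (placeholder)
half_life = 15.0  # years, assumed environmental half-life
k = math.log(2) / half_life

for t in (0, 5, 15, 30):
    print(f"After {t:>2} years: {c0 * math.exp(-k * t):.2f} mg/kg remaining")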
The conceptual model can take a variety of forms, such as a flow diagram or a pictorial depiction
incorporating data, models and hypotheses (see Box 3-2). A detailed narrative explaining the
rationale for the elements and their linkages, including the risk management options can
accompany the graphical display. Examples of conceptual models are available from many
sources including U.S. EPA (1991a); U.S. EPA (1996d); U.S. EPA (2001g); and Brady (2011).
In summary, a conceptual model involves identifying what chemical, physical and biological
processes act on the agent and the product resulting from the process.
3.3. Exposure Assessment Analysis Plan
Assessors conduct exposure assessments at various levels of technical detail. Sometimes the
estimation of exposure uses more than one approach. For example, the Total Exposure
Assessment Methodology Study combined point-of-contact measurements, the
microenvironment (scenario evaluation) approach and breath measurements for the
reconstruction of dose approach (U.S. EPA 1987c). The intended use of an exposure assessment
generally will favor one approach for quantifying exposure over others or suggest combining two
or more approaches. The analysis plan for the exposure assessment specifies the technical details
of how to conduct the exposure assessment. In developing the exposure assessment analysis
plan, an exposure assessor considers the data sources, gaps, limitations, quality and needs;
methods for developing exposure estimates; and exposure scenarios that reflect the conceptual
model. Box 3-4 presents resources for technical study design for various types of data acquisition
approaches.
Box 3-4. Resources for Technical Study Design of Observational
Human Exposure Measurement Studies
Data Acquisition
• Database Design (U.S. EPA 2018c).
• Sample Size (Baguley 2004; Dell et al. 2002; Devane et al. 2004; Dupont and Plummer Jr. 1990; Dupont and Plummer Jr. 1998; Kieser et al. 2004; Marshall 1996; Rippin 2001; Salganik 2006; Vaeth and Skovlund 2004).
• Temporal Considerations (Buck et al. 1995).
General Study Design
• Observational Human Exposure Measurement Studies (Adgate et al. 2000; Buckley et al. 2000; Callahan et al. 1995; Daston et al. 2004; Fenske et al. 2005; Lebowitz et al. 1995; Morgenstern and Thomas 1993; Ozkaynak et al. 2005; Pellizzari et al. 1995; Quackenboss et al. 2000; Rice et al. 2003; U.S. EPA 1998; U.S. EPA 2005d; Vojta et al. 2002).
For exposure assessments conducted as part of a risk assessment, the exposure assessment
analysis plan describes how the data will be collected, analyzed and used in a risk assessment.
Depending on the data needs, approach selected and complexity and interest in an exposure
assessment, the analysis plan might need additional documentation. Such documentation
includes descriptions of the sampling strategy (e.g., purpose, design, quality objectives/control
measures), modeling approach (e.g., needs, goals, availability of input parameters, use of model
outputs in the exposure assessment) and communication plan (e.g., personnel involved, types of
communication planned and scheduled) (see Chapter 9). The exposure assessment analysis plan
needs to be reevaluated throughout the life of the exposure assessment to ensure appropriate risk
management decisions. Exposure assessors need to consult with their programs and follow any
available SOPs.
3.3.1. Data Sources, Gaps, Limitations and Quality Objectives
The approach selected for an exposure assessment will determine data and information needs. As
part of the exposure assessment analysis plan, a project team characterizes the type of data
needed to answer an exposure assessment question or hypothesis. The information and rationale
described during the development of the conceptual model is instrumental in determining
assessment-specific data needs. An exposure assessor might consider the nature of the
contaminants, the location of the exposure, the extent of contamination and the availability and
representativeness of data at national, regional or local scales.
The data necessary for meeting exposure assessment objectives could be available or additional
data might need to be collected. Key steps in determining the availability of data include
conducting a review of the literature, identifying existing datasets and evaluating possible critical
data gaps and specific data needed to fulfill the assessment's data requirements.
The exposure assessment analysis plan also specifies data quality objectives and quality
assurance measures for all data used in an exposure assessment. As specified in the Guidance on
Systematic Planning Using the Data Quality Objectives Process (U.S. EPA 2006e), data quality
objectives are a set of performance and acceptance criteria that ensure the newly collected and
existing data are of sufficient quality and quantity to address the project's goals (see Section 5.3).
A wide range of existing data can support an exposure assessment. When developing the
exposure assessment analysis plan, the project team identifies datasets relevant to the conceptual
model and associated assessment questions. Table 3-1 provides examples of the types of datasets
linked to key exposure questions and conceptual model elements.
Chapter 5 addresses the topic of data availability and quality.
Table 3-1. Examples of Datasets Useful for a Location-Specific Exposure Assessment
• Populations at Risk: demographic data; local survey data; site assessments
• Sources: emission inventories; product information; land use (current, planned)
• Environmental Data: historical environmental sampling data (e.g., air, water, soil, biota); personal monitoring data; climatic or meteorological data; hydrogeological data; stationary air sampling data
• Exposure Pathways: surveys of activity patterns used to establish exposure factors; human exposure factors data; land use (current, planned)
• Exposure-to-Dose Considerations: toxicological data; bioconcentration/bioaccumulation data; physiologically based pharmacokinetic models
• Exposure Factors: activity patterns; physiological parameters
3.3.2. Exposure Scenarios
Exposure scenarios describe the combination of facts, assumptions and inferences that define a
discrete situation or activity in which potential exposures occur (Sheldon 2010; U.S. EPA
2003c). Exposure assessors create exposure scenarios to help estimate exposure of humans to
chemicals in their environment. Exposure scenarios might include the source, exposed
population (e.g., young children), timeframe of exposure, routes and pathways of exposure,
microenvironment(s) and human activities. The term microenvironment refers to surroundings
(e.g., home, office, automobile) treated as homogeneous or well characterized with regard to the
concentrations of an agent. People are exposed to a variety of potentially harmful chemicals
through the air they breathe, food they eat and products they use and by skin contact with treated
or contaminated surfaces. Examples of sources are places, objects, activities or entities that
release chemicals (e.g., hazardous waste disposal facility, pesticide application, vehicular traffic,
industrial or mining operations). Assessors might want to consider both current and future
exposure scenarios because land use and associated activities can change over time.
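One common way such a scenario is quantified is a time-weighted average across microenvironments, combining assumed concentrations with the hours spent in each. The Python sketch below illustrates the arithmetic; the microenvironments, concentrations and durations are hypothetical placeholders.

# Time-weighted average (TWA) concentration across microenvironments;
# all concentrations and durations are hypothetical placeholders.

microenvironments = {
    # name: (concentration in ug/m3, hours per day)
    "home":    (12.0, 16.0),
    "office":  (8.0, 7.0),
    "vehicle": (25.0, 1.0),
}

total_hours = sum(hours for _, hours in microenvironments.values())
twa = sum(conc * hours for conc, hours in microenvironments.values()) / total_hours
print(f"TWA concentration: {twa:.1f} ug/m3 over {total_hours:.0f} hours")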
3.4. Summary
• Systematic and transparent planning and scoping promotes efficient time and resource
management; agreement on the assessment's purpose; communication within the team
and with stakeholders; stakeholder buy-in and realistic expectations; better-informed
decisions with high-quality data based on established objectives and using sound
methods; participation from multiple disciplines; and documentation of decisions made
and the rationales.
o A thorough understanding of the purpose of the assessment ensures utility of the
information evaluated in meeting the established goals. Understanding the boundaries
of the problem helps define the scope.
o Depending on the nature of the assessment, involving stakeholders is essential for
meeting EPA's data quality guidelines, improving transparency of the exposure
assumptions and building trust and credibility for the Agency.
o The principle underlying EPA's peer-review policy is that all influential scientific
and technical work products used in decision making need to be peer reviewed.
• Problem formulation has three key components: (1) identification of the individual,
lifestage(s), group(s) or population(s) of concern; (2) a conceptual model presenting the
anticipated pathway of the agent from the source to receptor of concern; and (3) an
analysis plan that charts the approach for conducting the assessment.
o Representing the receptor is critical to formulating the problem in exposure
assessment. In many cases, especially when data are limited, assessors can rely on a
scenario-based approach. Assessors might also use population-based approaches to
better estimate interindividual variability in exposures.
o Developed at the start of the exposure assessment and updated throughout the
duration, the conceptual model identifies known or potential sources of
contamination, postulates release mechanisms and receptor routes and suggests all
potential exposure pathways and the media and receptors associated with each.
o The exposure assessment analysis plan specifies the technical aspects of conducting
the assessment, including the data sources, gaps, limitations, quality and needs;
methods for developing exposure estimates; and exposure scenarios that reflect the
conceptual model. For exposure assessments conducted as part of a risk assessment,
an exposure assessment analysis plan describes how the data will be collected,
analyzed and used in the risk assessment.
CHAPTER 4. CONSIDERATION OF LIFESTAGES,
VULNERABLE GROUPS AND POPULATIONS OF
CONCERN IN EXPOSURE ASSESSMENTS
Differences in exposure and varied responses to exposure can occur across individuals,
lifestages, specific groups and populations. Addressing one or more contributors of human
vulnerability and susceptibility in exposure assessment presents a challenge. Where appropriate,
exposure assessors consider unique characteristics and sociodemographic factors that might
increase exposure or predispose an individual, lifestage, specific group or population to greater
health risk. These factors include age, sex, genetic variation, cultural characteristics, behaviors,
occupation, socioeconomic status, race/ethnicity and geographic location. Incorporating
measures of population vulnerability (differential exposures), including racial, social and cultural
aspects, in developing and implementing environmental regulations and policies is an important
goal of EPA's environmental justice, children's environmental health protection and tribal
programs.
The public expects EPA to make advancements in developing exposure assessments that better
reflect reality, which is consistent with recommendations from the National Academy of
Sciences and the EPA Science Advisory Board (NRC 2009; NRC 2012a; SAB 2000). Tools and
methods are available and continue to be developed to incorporate vulnerability factors in
exposure assessments. Programs might need to tailor their approaches to incorporate population-
specific issues in exposure assessments to meet their regulatory, program or policy needs.
Assessments involving potentially vulnerable populations can consider economic, public health
and other factors along with environmental conditions (see Section 3.2). As appropriate,
exposure assessors identify and characterize those conditions that lead to the highest agent
intensities and resulting exposures and those situations that lead to exposure for the most
susceptible receptors (U.S. EPA 2009a). Considering vulnerability and susceptibility when
making risk management decisions is essential for protecting not only the general population but
also those populations at greatest risk (U.S. EPA 1995b; U.S. EPA 2010b).
This chapter describes available tools to identify and evaluate differential exposures of
individuals, lifestages, vulnerable groups and populations of concern:
• The history of EPA's activities in addressing populations of concern in exposure
assessment (Section 4.1)
• Vulnerability and susceptibility in exposure assessment (Section 4.2)
• Examples of exposure factors for populations of concern (Section 4.3).
Section 4.4 summarizes this chapter.
4.1. History of EPA Exposure Assessments for Lifestages,
Vulnerable Groups and Populations of Concern
Numerous executive orders, policies and legislative mandates emphasize EPA's commitment to
considering lifestages, vulnerable groups and populations of concern in exposure assessments
(Box 4-1). Additional information on consultation and policies related to working with federally
recognized tribes is found at EPA's website, Environmental Protection in Indian Country.
Provisions in the Food Quality Protection Act of 1996, the Safe Drinking Water Act
Amendments of 1996, the Frank R. Lautenberg Chemical Safety for the 21st Century Act (which
amended TSCA) and other laws underscore these policy priorities by requiring a focus on the
evaluation of unique population exposures, susceptibilities and vulnerabilities in the context of
risk assessments and regulatory and policy decision making.
Box 4-1. Provisions of Presidential Executive Orders
• Executive Order No. 12898 (1994). Federal Actions to Address Environmental Justice in Minority Populations and Low-
income Populations. Federal agencies, wherever practicable and appropriate, are required to:
> Collect and analyze data assessing and comparing environmental and human health risks borne by ethnic minorities
and low-income populations
> Identify and address disproportionately high and adverse environmental or human health effects of its programs,
policies and activities on minority and low-income populations.
• Executive Order No. 13045 (1997). Protection of Children from Environmental Health Risks and Safety Risks. For each
regulatory action that meets the criteria of Executive Order 13045, federal agencies need to provide the following to the
Office of Management and Budget's Office of Information and Regulatory Affairs for review:
> An evaluation of the environmental health or safety effects of the planned regulation on children
> An explanation of why the planned regulation is preferable to other potentially effective and reasonably feasible
alternatives the Agency is considering.
• Executive Order No. 13175 (2000). Consultation and Coordination with Indian Tribal Governments. Federal agencies
have an accountability process to ensure meaningful and timely input by tribal officials in the development of regulatory
policies that have tribal implications.
A growing body of literature (e.g., Agency guidance documents, government reports, scientific
articles, reports by environmental health and justice advocates) also highlights the importance of
evaluating differences in exposures among sociodemographic groups and accounting for the
social, cultural, economic and political context in which such exposures occur (Box 4-2).
4.2. Vulnerability and Susceptibility in Exposure Assessment
Environmental exposures and health risks are distributed unequally across the landscape and, in
some cases, are concentrated among certain population groups and in potentially vulnerable
communities (Northridge 2011). The population characteristics related to vulnerability (e.g.,
lifestyle, culture, diet, daily activities) and susceptibility (e.g., genetics, lifestage, gender) are
important because these factors, in conjunction with the toxicity of environmental contaminants,
could translate into differential health risks. The planning and scoping phase of the exposure
assessment is the optimal point for the exposure assessor to outline the approach for identifying
and considering population vulnerability and for conceptualizing the linkages to health, exposure
and risk. Figure 4-1 depicts vulnerability and susceptibility factors in exposure assessments.
Box 4-2. Resources on Disparities in Exposure
• NRC (1993) Pesticides in the Diets of Infants and Children. Recommends changes in policy and risk assessment
practices to better reflect children's health and exposure factors in evaluating exposure to pesticides in food and water.
• U.S. EPA (1999c) Sociodemographic Data Used for Identifying Potentially Highly Exposed Populations.
EPA/600/R-99/060. A companion document to the U.S. EPA (2011d) Exposure Factors Handbook: 2011 Edition.
EPA/600/R-09/052F.
• NEJAC (2004) Ensuring Risk Reduction in Communities with Multiple Stressors: Environmental Justice and Cumulative
Risks/Impacts. Recommends incorporating measures of population vulnerability (differential exposures), especially social
and cultural aspects, in risk assessments.
• U.S. GAO (2005) Environmental Justice: EPA Should Devote More Attention to Environmental Justice When Developing
Clean Air Rules. Recommends more explicit analysis of disparities in exposures and risk because of air pollution.
• U.S. EPA (2006d) A Framework for Assessing Health Risks of Environmental Exposures to Children.
EPA/600/R-05/093F. Assists in conducting exposure assessments for children.
• U.S. EPA (2006f) Guide to Considering Children's Health When Developing EPA Actions: Implementing Executive Order
13045 and EPA's Policy on Evaluating Health Risks to Children.
• U.S. EPA (2011d) Exposure Factors Handbook: 2011 Edition. EPA/600/R-09/052F.
• U.S. EPA (2011f) Plan EJ 2014. A roadmap for integrating environmental justice into the Agency's programs, policies and
activities.
• Frank R. Lautenberg Chemical Safety for the 21st Century Act (2016) website. U.S. EPA. Defines potentially exposed
susceptible subpopulations as a group of individuals within the general population that EPA identifies who, due to either
greater susceptibility or greater exposure, may be at greater risk than the general population of adverse health effects
from exposure to a chemical substance or mixture, such as infants, children, pregnant women, workers, or the elderly.
• Tribal Science Priorities website. U.S. EPA. Presents environmental and health priorities the National EPA-Tribal Science
Council identifies.
• Environmental Justice website. U.S. EPA. Presents information for environmental justice considerations for healthy
environments and communities.
• EPA-ExpoBox, an online toolkit to help individuals in government, industry and academia and the public assess exposure.
Figure 4-1. Vulnerability and Susceptibility Factors
[Figure: exposure (highest concentrations) and receptor (susceptible individual/lifestage/group/population) combine to produce the greatest potential risk. Examples of vulnerability factors: culture, lifestyle and diet; activities; geographic locations; microenvironments; socioeconomic status; previous exposures. Examples of susceptibility factors: age or lifestage; gender; genetic differences; reduced reserve capacity; preexisting health status.]
Adapted from U.S. EPA (2009a)
Vulnerability refers to characteristics of individuals or populations that place them at increased
risk of an adverse health effect. Vulnerability includes economic, demographic, social, cultural,
psychological and physical states of the receptor that influence patterns of exposure to
environmental contaminants or alter the relationship between the exposure to environmental
contaminants and the health effect of the exposed individual or population (ATSDR 1997; deFur
et al. 2007; U.S. EPA 2003 d). Susceptibility refers to the increased likelihood of an individual or
population to be more affected by exposure to an agent as compared to the general population
because of intrinsic biological factors such as lifestage, genetic polymorphisms, prior immune
reactions, disease state or prior damage to cells or systems (U.S. EPA 2003d).
EPA's Framework for Cumulative Risk Assessment (U.S. EPA 2003d) describes four properties
of vulnerability, the first two of which are most relevant for exposure assessment:
• Differential susceptibility: An increased likelihood of sustaining an adverse effect from
exposure to an agent. For example, an individual, group or population might be more
likely to show a response to an agent at a lower dose than the general population because
of a preexisting health condition (e.g., asthma, cardiovascular disease, genetic variation,
prior damage from exposure, concurrent exposures to other stressors or lifestage
[e.g., children, older adults, pregnant women]).
• Differential exposure: Differences in exposure (e.g., magnitude, duration, frequency,
pathway, route) from a variety of factors, including lifestage, socioeconomic status and
cultural characteristics. For example:
o Children might have a higher exposure and proportionally higher body burden of
pesticides than adults because of their behavior patterns or food consumption (Moya
et al. 2004; NRC 1993).
o When neighborhoods are racially segregated, nonwhites might live in lower
socioeconomic conditions where they experience higher exposures to air pollution
(Lopez 2003).
o Studies on fish consumption and subsistence fishing patterns have documented
racial/ethnic differences (Burger 2000; Burger 2002a; Burger 2002b; Burger et al.
2001; Burger et al. 1999a; Burger et al. 1998; Burger et al. 1993; Burger et al. 1999b;
Corburn 2002).
o Native Americans can be exposed differentially to toxicants when dietary patterns
involve consumption of locally caught fish or game for traditional or religious reasons
(Fitzgerald et al. 1999; Fitzgerald et al. 1995; Fitzgerald et al. 1998; Fitzgerald et al.
2001; Harper et al. 2002; Schell et al. 2003).
• Differential preparedness: The coping systems and resources that an individual,
community or population uses or can access to withstand the insult of agents.
• Differential ability to recover: Refers to resources and coping systems, such as income
level, ability to move from an affected area or access to health care, which can affect
recovery from the effects of an agent.
4.3. Examples of Lifestages, Vulnerable Groups and Populations of
Concern in Exposure Assessment
Sections 4.3.1 through 4.3.3 discuss exposure concerns for lifestages (particularly for childhood),
tribal populations (e.g., American Indian, Alaska Native, other indigenous populations), other
racial and ethnic groups (e.g., African Americans, Hispanic or Latino Americans, Asian
Americans, Pacific Islanders) and socioeconomically disadvantaged populations. Note that the
concerns described under one section might overlap with another.
4.3.1. Lifestages
The term "lifestage" refers to a temporal stage of life with distinct anatomical, physiological,
behavioral or functional characteristics that contribute to potential differences in vulnerability to
environmental exposures (U.S. EPA 2006d). Unlike particular groups that form a relatively fixed
portion of the population (e.g., groups based on ethnicity), lifestages or age groups encompass
the entire population over time. Rather than considering children as a group, the Agency has
moved toward viewing childhood as a sequence of lifestages from conception through fetal
development, infancy and adolescence (U.S. EPA 2005c).
EPA's Guidance on Selecting Age Groups for Monitoring and Assessing Childhood Exposures to
Environmental Contaminants (U.S. EPA 2005c) follows the Agency's established policy of
viewing childhood as a sequence of lifestages. Other lifestages to consider when assessing
exposure and risk are pregnancy, nursing and older adults. For each lifestage, exposures typically
differ. For example, during pregnancy, eating habits and nutritional needs change; in the third
trimester, mobility can be affected, which in turn can alter exposure. During the nursing
lifestage, fluid intake increases. As one ages, mobility, the level or intensity of exercise and
caloric intake can decline, and aging can affect the body's ability to defend against toxic agents.
During the planning and scoping process (see Section 3.1), exposure assessors establish dialogue
with toxicologists/health scientists to consider specific "windows of susceptibility" for an
exposure assessment. For example, a window of susceptibility in development when an agent
causes the greatest effect, a description of that period and an estimate of the exposure during that
window are key in informing the exposure assessment. If data exist that support early life
exposures leading to effects later in life, exposure assessors can discuss them during planning.
Childhood
In 1993, the National Research Council (NRC) issued Pesticides in the Diets of Infants and
Children, which highlighted many important differences between children and adults regarding
exposure to and risks posed by pesticides (NRC 1993). The NRC report provided the impetus for
Executive Order 13045 (1997), which states that "each federal agency... shall ensure that its
policies, programs, activities and standards address disproportionate risks to children that result
from environmental health risks or safety risks." In response to these policies and statutes, EPA
is improving methods for conducting exposure assessments for children.
In 1995, EPA released its Policy on Evaluating Risk to Children, which directs the Agency to
take into account, explicitly and consistently, environmental health risks to infants and children
in all risk characterizations and public health standards set for the United States (U.S. EPA
1995b). In 2013 and again in 2018, EPA reaffirmed its support of this important policy. Since
fall 1996, the Agency has followed the National Agenda to Protect Children's Health from
Environmental Threats (U.S. EPA 1996b). The Agency has developed guidance on selecting a
consistent set of age groups to consider when assessing childhood exposure to environmental
contaminants (U.S. EPA 2005c).
Childhood exposures to environmental contaminants often differ from those in later stages of life
for several reasons, including differences in behavior and physiology (Cohen Hubal et al. 2000;
Moya et al. 2004). Children consume more of certain foods (e.g., milk and fruit) and water and
have higher inhalation rates per unit of body weight than adults. For example, consumption of
apples by children between birth and 5 months of age is about 19 g/kg/day, whereas consumption
by adults 20 years and older is approximately 2 g/kg/day, almost a 10-fold difference (U.S. EPA
2003b). Children also have higher excretion and metabolic rates per unit body weight than
adults. Young children play close to the ground, contact soil outdoors, contact dust on surfaces
and carpets indoors and display more hand-to-mouth and object-to-mouth activity than adults
(Cohen Hubal et al. 2000; Moya et al. 2004; U.S. EPA 2011d).
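The body-weight-normalized intake rates cited above translate directly into differences in potential dose. The short Python calculation below multiplies those apple consumption rates by a hypothetical residue concentration to show the roughly 10-fold difference; the residue value is a placeholder, not a measured level.

# Dose comparison using the body-weight-normalized apple intake rates
# cited above; the residue concentration is a hypothetical placeholder.

residue_mg_per_g = 0.001  # mg residue per g of apple (placeholder)
intake_g_per_kg_day = {
    "children, birth to 5 months": 19.0,
    "adults, 20 years and older": 2.0,
}

for group, intake in intake_g_per_kg_day.items():
    print(f"{group}: {intake * residue_mg_per_g:.4f} mg/kg-day")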
Maternal exposures also can affect childhood exposures. Fetal exposures are uniquely tied to the
pregnant mother through the placenta. Much research is reported in the peer-reviewed literature
attempting to understand the relationships between maternal and fetal exposures (Ashley-Martin
et al. 2014; Ashley-Martin et al. 2016; Ashley-Martin et al. 2015; Braun and Hauser 2011;
Callan et al. 2016; Guan et al. 2010; Lin et al. 2011; Mattison 2010; Perera and Herbstman 2011;
Perera et al. 2006; Ponsonby et al. 2016; Rothenberg et al. 2011; Whyatt et al. 2009). Similarly,
nursing infants and young children are exposed to concentrations of chemicals in breast milk
(Cooke 2014; Hines et al. 2015; LaKind et al. 2009; Lehmann et al. 2014). Information relating
maternal exposure to chemical concentrations in breast milk, however, is sparse. EPA is
developing models and other tools that can help exposure assessors evaluate this situation. For
example, Appendix C of the Human Health Risk Assessment Protocol for Hazardous Waste
Combustion Facilities (U.S. EPA 2005e) provides guidance for estimating concentrations of
dioxins and dioxin-like PCBs (polychlorinated biphenyls) in breast milk. Development of
additional validated exposure models will strengthen the understanding of the relationship
between fetal and maternal exposures. Chapter 6 describes exposure and dose modeling.
Because childhood is a time of rapid behavioral and physiological changes, considering the
differences between childhood age groups is important when preparing exposure assessments
and calculating lifetime exposures that are integrated across all lifestages (Firestone et al. 2007).
EPA developed Guidance on Selecting Age Groups for Monitoring and Assessing Childhood
Exposures to Environmental Contaminants (U.S. EPA 2005c) and A Framework for Assessing
Health Risks of Environmental Exposures to Children (U.S. EPA 2006d) to assist in exposure
assessments for children. Table 4-1 presents EPA's recommended set of childhood age groups
(U.S. EPA 2005c), and Box 4-3 lists key sources of childhood exposure information. Figure 4-2
illustrates children's activities that influence exposure as a function of developmental age (color
in the graph represents the gradual increase from "Initiating the Activity" to "Activity Most
Likely Occurring"). Information on how lifestages affect susceptibility is found in Supplemental
Guidance for Assessing Susceptibility from Early-Life Exposure to Carcinogens (U.S. EPA
2005h).
Table 4-1. Recommended Childhood Age Groups for Monitoring and Assessing Childhood Exposures
• Age groups <1 year: birth to <1 month; 1 to <3 months; 3 to <6 months; 6 to <12 months
• Age groups 1 year and older: 1 to <2 years; 2 to <3 years; 3 to <6 years; 6 to <11 years; 11 to <16 years; 16 to <21 years
Source: U.S. EPA (2005c)
Box 4-3. Key Sources of Childhood Exposure Concentration and Exposure Factor Information
• U.S. EPA (2005c) Guidance on Selecting Age Groups for Monitoring and Assessing Childhood Exposures to Environmental Contaminants (Final). EPA/630/P-03/003F.
• U.S. EPA (2006d) A Framework for Assessing Health Risk of Environmental Exposures to Children. EPA/600/R-05/093F. Extensively reviews sources of exposure data relevant to early lifestages. The lifestages considered need to match the periods of greatest susceptibility.
• U.S. EPA (2011d) Exposure Factors Handbook: 2011 Edition. EPA/600/R-09/052F. Provides exposure factor data for EPA-recommended childhood age groups in the following areas:
> Breast milk ingestion rates
> Food ingestion rates, including homegrown foods and other dietary-related data
> Drinking-water ingestion rates
> Soil ingestion rates
> Hand-to-mouth and object-to-mouth activity associated with elevated ingestion rates
> Body weight data
> Dermal exposure factors such as surface areas and soil adherence
> Inhalation rates
> Activity duration and frequency in different locations and various microenvironments
> Duration and frequency of consumer product use
> Duration of lifetime.
Figure 4-2. Children's Activities That Impact Exposure as a
Function of Developmental Age
[Figure 4-2 charts the prevalence of exposure-related activities and behaviors (sleeping, nursing, bottle feeding, solid food consumption, crawling, walking, mouthing, physical activities, hobbies, outdoor play, wearing adult-style clothing, school environments, sports, smoking/workplace exposures, driving and moving from home) across age bins from 1-2 months through >17 years, shaded from "Initiating Activity" to "Activity Most Likely Occurring."]
Adapted from WHO (2006)
Older Adults
Examples of available resources and ongoing research associated with older adults include:
• In 2007, EPA convened an expert panel to consider the utility of an Exposure Factors
Handbook for the Aging. The resulting report (U.S. EPA 2007i) summarizes the
discussions held during the workshop, highlights several sources of existing data and
provides recommendations for additional research. This panel agreed that older adults
could have very different exposures than younger adults and recommended steps for
addressing these unique exposures. The panel noted that exposures in the aging
population were not wholly dependent on age but also were dependent on abilities
(e.g., fully functioning, compromised functioning, low functioning).
• In 2010, EPA completed a report compiling information sources and data available for
modeling environmental exposures in older adults in the United States. The report, Data
Sources for Modeling Environmental Exposures in the Older Adult Population (U.S. EPA
2010a), contains exposure factors, physical activity data and general health information
for people aged 60 years or older, with an emphasis on ages greater than 65 years.
Integrating Age-Specific Values in Exposure Assessment
When assessing long-term exposures to environmental chemicals, integrating age-specific values
for both exposure and toxicity/potency is advisable when such data are available (U.S. EPA
2005h). Historically, when assessing cancer risks, the assumption has been that risk is
proportional to the lifetime average daily dose for a "typical" adult. A lifestage-integrative
approach is a departure from this historical approach because it assesses total lifetime cancer risk
resulting from lifetime exposure or less-than-lifetime exposure during a specific portion of a
lifetime. For example, when assessing risks to carcinogens with a mutagenic mode of action,
different toxic potency adjustments are made for exposure of children less than 2 years of age
and between 2 and less than 16 years of age. Ideally, except in the case of higher end screening
assessments, average estimates of lifestage-integrative exposure are calculated by summing the
time-weighted exposures across all relevant age groups, including children, adults and older
adults, and averaging across the total exposure period (U.S. EPA 2005d; U.S. EPA 2005h).
Table 4-2 presents the exposure duration and potency adjustments for the recommended set of
childhood age groups (from Table 4-1) (U.S. EPA 2005c). This information can be used to
integrate age-specific values for exposure and toxic potency to assess cancer risks for those
toxicants that cause cancer via a mutagenic mode of action.
Table 4-2. Integrating Childhood Age Groups Used for Assessing Exposure
and Potency for Selected Toxicants That Cause Cancer via a Mutagenic Mode of Action
Potency-Based Age Groups (U.S. EPA 2005h) | Exposure Age Groups (U.S. EPA 2005c) | Exposure Duration (Years) | Age-Dependent Adjustment Factor (U.S. EPA 2005h)
Birth to <2 years | Birth to <1 month | 0.083 | 10x
Birth to <2 years | 1 to <3 months | 0.167 | 10x
Birth to <2 years | 3 to <6 months | 0.25 | 10x
Birth to <2 years | 6 to <12 months | 0.5 | 10x
Birth to <2 years | 1 to <2 years | 1 | 10x
2 to <16 years | 2 to <3 years | 1 | 3x
2 to <16 years | 3 to <6 years | 3 | 3x
2 to <16 years | 6 to <11 years | 5 | 3x
2 to <16 years | 11 to <16 years | 5 | 3x
16 years and above | 16 to <21 years | 5 | 1x
16 years and above | 21 to <70 years | 49 | 1x
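The arithmetic of this integration can be illustrated with a short script. The sketch below is hypothetical and not an Agency tool: it uses the exposure durations and age-dependent adjustment factors (ADAFs) from Table 4-2, assumes a 70-year averaging time and takes an illustrative constant dose and slope factor as placeholders. In practice, the dose term differs by age group because intake rates and body weights differ, and program-specific guidance governs the actual inputs.

# Minimal sketch: time-weighted, ADAF-adjusted lifetime cancer risk for a
# carcinogen with a mutagenic mode of action. Exposure durations and ADAFs
# come from Table 4-2; the dose and slope factor are hypothetical placeholders.

# (exposure age group, exposure duration in years, age-dependent adjustment factor)
AGE_GROUPS = [
    ("birth to <1 month", 0.083, 10),
    ("1 to <3 months", 0.167, 10),
    ("3 to <6 months", 0.25, 10),
    ("6 to <12 months", 0.5, 10),
    ("1 to <2 years", 1, 10),
    ("2 to <3 years", 1, 3),
    ("3 to <6 years", 3, 3),
    ("6 to <11 years", 5, 3),
    ("11 to <16 years", 5, 3),
    ("16 to <21 years", 5, 1),
    ("21 to <70 years", 49, 1),
]
LIFETIME_YEARS = 70  # averaging time implied by the durations above

# Hypothetical inputs: a constant average daily dose (mg/kg-day) for every age
# group and an illustrative cancer slope factor (per mg/kg-day).
dose_by_group = {label: 1.0e-4 for label, _, _ in AGE_GROUPS}
slope_factor = 0.5

risk = sum(
    dose_by_group[label] * slope_factor * adaf * (duration / LIFETIME_YEARS)
    for label, duration, adaf in AGE_GROUPS
)
print(f"ADAF-adjusted lifetime cancer risk: {risk:.2e}")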
4.3.2. Tribal and Indigenous Populations
EPA's relationship with federally recognized tribes differs from its relationship with other
agency stakeholders. EPA engages with federally recognized tribes on a government-to-
government basis (U.S. EPA 1984). Furthermore, EPA's interactions with tribes and other
indigenous peoples are significantly influenced by their traditions and customs, referred to as
lifeways. This section provides assessors with guidance on working with federally recognized
tribes and indigenous populations focusing on planning, analysis and communicating results.
Planning, Scoping and Problem Formulation for Assessing Tribal Lifeways
Executive Order and Policies Establishing Tribal Lifeways in Exposure Assessments
Inclusion of unique tribal lifeways into EPA human exposure assessments is established in one
federal executive order and multiple policies (Box 4-4). These documents identify the need to
consider tribal interests when the Agency takes action, to consult on a government-to-government
basis with federally recognized tribal governments and to consider the impacts of actions on
indigenous peoples.
Box 4-4. Federal Executive Order and Policies Establishing Inclusion of Tribal
Exposure Lifeways in Human Exposure Assessments
• Executive Order No. 12898 (1994). Federal Actions to Address Environmental Justice in Minority Populations and Low-
income Populations. Directs each federal agency to identify and address instances where disproportionately high levels of
adverse human health or environmental impacts affect tribal populations by improving research and data collection to
identify differential patterns of consumption of natural resources among minority populations.
• U.S. EPA (1984) EPA Policy for the Administration of Environmental Programs on Indian Reservations. Requires EPA to
consider tribal interests when the Agency takes an action.
• U.S. EPA (2011b) EPA Policy on Consultation and Coordination with Indian Tribes. Directs EPA to consult on a
government-to-government basis with federally recognized tribal governments when EPA actions and decisions might
affect tribal interest.
• U.S. EPA (2014d) EPA Policy on Environmental Justice for Working with Federally Recognized Tribes and Indigenous
Peoples. Requires EPA to consider impacts on indigenous peoples in agency actions.
Definitions
Box 4-5 lists the definitions for "Federally Recognized Tribe" and "Indigenous Peoples" as used
in this document.
Box 4-5. Definitions of "Federally Recognized Tribe" and "Indigenous Peoples"
Federally Recognized Tribe - an Indian or Alaska Native tribe, band, nation, pueblo, village, or community that the
Secretary of the Interior acknowledges to exist as an Indian tribe pursuant to the Federally Recognized Indian Tribe List Act
of 1994, 25 U.S.C. 479a. The elected officials for the federally recognized tribe and the government structure they administer
are referred to as the federally recognized tribal government. When used in this document, "tribes" refers to federally
recognized tribes unless otherwise specified.
Indigenous Peoples - state-recognized tribes; indigenous and tribal community-based organizations; individual members of
federally recognized tribes, including those living on a different reservation or living outside Indian country; individual
members of state-recognized tribes; Native Hawaiians; Native Pacific Islanders; and individual Native Americans.
Exposure Assessment Methodologies for Assessing Exposures
This section provides background information, important deliberations, examples and references
for an assessor to consider when planning and conducting an exposure assessment involving
tribes or indigenous populations.
Subsistence Lifeways
Tribes and indigenous populations are inextricably linked to their environments: They rely on
natural resources to maintain traditional diets, customs and languages. Natural resources provide
essential elements of tribal lifeways including economic, cultural, ceremonial, sacred,
recreational and subsistence practices. Examples of tribal lifeways include subsistence hunting,
gathering and fishing. Tribal and indigenous populations are interconnected with the ecosystem
that provides a variety of food, medicine and products for various uses and trades. Each tribal
and indigenous population has unique practices and cultural bonds to their environment. A
subsistence lifeway is the basis of cultural existence and survival. It is a communal activity rather
than an individual pursuit. Among many tribes, maintaining a subsistence lifestyle is a symbol of
their survival in the face of mounting political and economic pressures. It defines who they are as
a people (NPS 1999).
Tribal and indigenous people live across North America. EPA's ecoregion approach provides a
framework to identify vegetation, land uses, wildlife and other relevant ecological characteristics
that can assist assessors in understanding the type, quality and quantity of environmental and
cultural resources. An ecoregion approach enables researchers to combine quantitative and
qualitative descriptions of the environment when collaborating with tribal and indigenous
populations.
Unique Exposure Scenarios
Exposure scenarios for tribal and indigenous populations differ from general population
exposure scenarios, in that subsistence lifeways and diets are relevant, outdoor activities are
prevalent and traditional and cultural activities are frequent. Unique tribal practices might expose
tribal and indigenous populations to higher concentrations of contaminants through natural
resources that could differ substantially from exposures the general population experiences. In
addition, some tribal and indigenous populations live in areas with limited infrastructure that
might result in higher exposures (U.S. EPA 2015a). Examples of EPA resources relevant to tribal
exposure scenarios are presented in Box 4-6.
Limited data and information specific to tribes and indigenous populations are provided within
EPA's Exposure Factors Handbook: 2011 Edition. For example, several exposure assessments
have examined the exposure of Native Americans to contaminants in fish. Fish ingestion rates
for recreational marine anglers (adults) from the northern Pacific region were reported as
6.8 g/day (95th percentile) and 2.0 g/day (average). By comparison, fish ingestion rates for
Native Americans (adults) from the four Columbia River Nations of Oregon were 170 g/day
(95th percentile) and 59 g/day (average) (U.S. EPA 2011d). The Exposure Factors Handbook:
2011 Edition also discusses exposure to chemicals via fish consumption and recommendations
for adult tribal soil ingestion. Exposure assessors evaluating tribal exposures need to consult the
Exposure Factors Handbook: 2011 Edition for information.
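A simple, hypothetical calculation shows how much these ingestion-rate differences can matter. The sketch below applies the two average adult ingestion rates cited above to an assumed fish tissue concentration and body weight; both assumed values are placeholders used only to illustrate the arithmetic, not recommended inputs.

# Illustrative average daily dose (ADD) comparison using the adult fish
# ingestion rates cited above. The tissue concentration and body weight
# are hypothetical placeholders chosen only to show the arithmetic.
TISSUE_CONC_MG_PER_KG = 0.1   # assumed contaminant concentration in fish tissue
BODY_WEIGHT_KG = 80           # assumed adult body weight

ingestion_rates_g_per_day = {
    "recreational marine anglers, northern Pacific (average)": 2.0,
    "Columbia River Nations adults (average)": 59.0,
}

for group, ir in ingestion_rates_g_per_day.items():
    # ADD (mg/kg-day) = concentration (mg/kg) x intake (kg/day) / body weight (kg)
    add = TISSUE_CONC_MG_PER_KG * (ir / 1000.0) / BODY_WEIGHT_KG
    print(f"{group}: {add:.2e} mg/kg-day")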
Box 4-6. Tools and Reports for Evaluating Tribal Exposures
• EPA Guidance for Conducting Fish Consumption Surveys
> Characterizes tribal subsistence fish consumption practices and rates, including estimates of heritage or historical fish
consumption for the development of ambient water quality criteria
• EPA's ExpoBox: Tools by Lifestages and Populations - Highly Exposed or Other Susceptible Population Groups
> Includes tools for tribal and other populations that might experience greater exposure to environmental contaminants
• A Decade of Tribal Environmental Health Research: Results and Impacts from EPA's Extramural Grants and Fellowship
Programs
> Summarizes information collected through EPA's STAR (Science to Achieve Results) grant program that informs or
improves health outcomes
• Wabanaki Traditional Cultural Lifeways Exposure Scenario
> Informs Agency assessors on cultural lifeways and their application to risk assessments
• Traditional Tribal Subsistence Exposure Scenario and Risk Assessment Guidance Manual (Harper et al. 2007)
> Provides a grantee-developed framework for evaluating risk in Indian country
An important consideration is that exposure scenarios developed for tribal and indigenous
populations are not drawn solely from historical (pre-European settlement) time frames; they
need to account for the many irreversible changes in the United States that have fundamentally
altered the practice of traditional tribal lifeways. More importantly, tribal exposures should be
representative of current or plausible future conditions. EPA's Guidance on Systematic Planning
Using the Data Quality Objectives Process (U.S. EPA 2006e) can help determine how
information on tribal exposures is collected and used. This guidance ensures that Agency
decisions are supported by data of known and documented quality.
Traditional Ecological Knowledge
Traditional Ecological Knowledge (TEK) is a body of knowledge, practice and beliefs, evolving
by adaptive processes and handed down through generations by cultural transmission, about the
relationship of living beings (human and nonhuman) with one another and with the environment.
This knowledge, which can be specific to a location, includes the relationships among plants,
animals, natural phenomena, landscapes and timing of events that are used for lifeways,
including hunting, fishing, trapping, agriculture, forestry and sacred ceremonies. TEK
encompasses the world view of indigenous peoples, which includes ecology, spirituality and
human and animal relationships. Risk assessors should be aware that tribes have their own
guidelines for sharing TEK with federal agencies.
EPA policy (U.S. EPA 2014d) directs management and staff, as appropriate and to the extent
practicable and permitted by law, to integrate TEK into Agency environmental science policy
and decision making processes to understand and address environmental justice concerns and
facilitate program implementation.
Community-Based Participatory Research
Tribal and indigenous populations might have experienced historical trauma as a result of past
unethical research imposed on them. Using community-based participatory approaches in tribal
research involves partnering with the community when planning, implementing and conducting
needed research and exposure assessments to establish mutual trust and better understand tribal
lifeways.
Accordingly, assessors need to work with their Tribal Program Managers and program-specific
project managers to conduct outreach and coordination with tribes to learn about exposure
scenarios. The tribal consultation process can afford EPA an opportunity to obtain meaningful
input from tribes on exposures that directly impact them (U.S. EPA 2011b). A benefit of these
processes is that assessors can better understand how tribes interact with their environment and
can learn more about direct and indirect exposure pathways, estimated doses, risks and more
(should a tribe be willing to share this information).
EPA's Tribal Program and Networks
EPA's Tribal Program has established working relationships with tribes. Assessors should
engage with their Tribal Program Managers and program-specific project managers and
coordinate with other Agency risk assessors to include tribal perspectives when conducting
exposure assessments and to facilitate clear communication with tribal partners.
The Agency's network of Tribal Partnership Groups facilitates the exchange of technical
information and communication between tribes and EPA. For example, the National EPA-Tribal
Science Council works to integrate and increase tribal involvement in EPA's scientific activities,
and the National Tribal Toxics Council provides tribal input on issues related to toxic chemicals
and pollution prevention. Assessors should engage with these partnership groups, through EPA's
Tribal Program, to better understand tribal lifeways and to discuss and collaborate on research
needs.
4.3.3. Other Racial and Ethnic Populations
"Race" refers to the socially constructed groups the Office of Management and Budget
specifies.11 "Ethnicity" refers to cultural groups, such as Hispanic or Latino. As Directive 1512
and numerous scholarly organizations note, racial and ethnic groups are social categories, not
biological taxa (i.e., no biological basis exists for assigning people to a given racial class).
Racial, ethnic and class differences and inequities in environmental exposures are related to
underlying social structural dynamics in our society—for example, economic and political
(Brulle and Pellow 2006).
Researchers have documented patterns of racial/ethnic differences in exposure to environmental
contaminants by proximity to hazardous land uses (Bullard 1990; Chakraborty et al. 2011; U.S.
GAO 1983; UCC 1987), ambient measures (Bullard 1990; CDC 2005; EJHU 2003; IOM 1999;
Lopez 2002; Morello-Frosch and Lopez 2006; Morello-Frosch et al. 2002; Wernette and Nieves
1992; Woodruff et al. 2003), biomonitoring (Hightower et al. 2006; IOM 1999; McKelvey et al.
2007) and exposure modeling (Adamkiewicz et al. 2011; Houston et al. 2014; Morello-Frosch
and Jesdale 2006; Morello-Frosch and Lopez 2006; Morello-Frosch et al. 2002). Review articles
of such racial and ethnic disparities include Brulle and Pellow (2006) and Brown (1995). Other
examples include racial, ethnic and income disparities in exposures to lead, mercury and other
metals, PCBs (polychlorinated biphenyls) and pesticides, and proximity to hazardous land uses.
[11] Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity; Notice of Decision, 62
Fed. Reg. 58782 (October 30, 1997).
[12] Race and Ethnic Standards for Federal Statistics and Administrative Reporting; Directive No. 15 (May 12, 1977),
http://wonder.cdc.gov/wonder/help/populations/bridged-race/Directive15.html.
Cultural traditions and practices can influence exposures for many diverse populations found
throughout the United States. Exposure assessors need to be aware of these traditions and
practices when conducting exposure assessments. For example, a 2003 study examined seafood
consumption in Asian-American and Pacific-Islander populations in King County, Washington
(Sechena et al. 2003). The study reported average and median seafood consumption rates of
117.2 g/day and 89 g/day based on an average body weight of 62 kg. Of significance to exposure
assessors, however, is the considerable variation in consumption rates among ethnic groups, such
as Vietnamese, Japanese and Hmong (Sechena et al. 2003; U.S. EPA 2001a). The study also
reported people ate fish fillets with the skin 55 percent of the time and the head, bones, eggs or
other organs 20 percent of the time. Crabmeat, including the hepatopancreas (which accumulates
organochlorine compounds), was consumed 43 percent of the time. These differences in fish
ingestion rates and preparation practices could require special considerations in the exposure
assessment (e.g., data collection and selection of ingestion rates). In another example, Weintraub
and Birnbaum (2008) suggested that catfish consumption might be a significant PCB source for
the one million non-Hispanic black anglers who fish for catfish because they consume the entire
fish.
4.3.4. Traditional Methods
Traditional methods include documenting the locations of locally unwanted land uses, such as
hazardous waste sites, pollution emitters or highways. Possible data sources include the Toxics
Release Inventory and similar state databases of contaminants, pollution discharge permits and
air monitoring data. Assessors then characterize the population surrounding these locations by
race, income and other factors, usually from U.S. census data. The target area for these sites
varies; it could be a census block group, census tract, ZIP code, county or selected buffer zone
(often described using geographic information system methods). Assessors compare the
populations in these target areas to the overall state or U.S. population or to other areas having
no locally unwanted land uses. Researchers have noted methodological issues with these studies:
Those using a ZIP code or larger area of analysis tend to find that income is a greater risk factor
than race/ethnicity for exposure to environmental burdens, whereas studies using block groups or
census tracts tend to find that race/ethnicity is a greater risk factor than income for exposure.
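As a minimal illustration of the comparison step in these traditional methods, the sketch below uses a small, entirely hypothetical table of census tracts and an assumed 1-kilometer buffer to contrast population-weighted demographics inside and outside the buffer. In practice, assessors would derive distances with geographic information system tools and use actual census and facility data.

import pandas as pd

# Hypothetical census-tract data for illustration only; real analyses would
# use GIS-derived distances and actual census and facility datasets.
tracts = pd.DataFrame({
    "tract": ["A", "B", "C", "D", "E"],
    "distance_km": [0.4, 0.8, 1.5, 3.0, 6.2],   # distance to a pollution source
    "population": [3200, 2800, 4100, 3900, 5000],
    "pct_minority": [62, 55, 30, 22, 18],
    "median_income": [34000, 38000, 52000, 61000, 70000],
})

BUFFER_KM = 1.0  # assumed buffer distance around the facility
inside = tracts["distance_km"] <= BUFFER_KM

def pop_weighted_mean(df, col):
    """Population-weighted mean of a tract-level attribute."""
    return (df[col] * df["population"]).sum() / df["population"].sum()

for label, subset in [("inside buffer", tracts[inside]),
                      ("outside buffer", tracts[~inside])]:
    print(label,
          "- percent minority:", round(pop_weighted_mean(subset, "pct_minority"), 1),
          "- mean of tract median income:", round(pop_weighted_mean(subset, "median_income")))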
4.3.5. Case Studies
A case study describes a particular neighborhood or group's experience with an exposure or
environmentally related condition over time. For the most part, case studies use descriptive
statistics, document searches, ethnographic research or individual or group interviews as the
basis for building the study. Although case studies might lack statistical power, they can be
valuable for describing past exposures or understanding how and why certain exposures
happened.
4.3.6. Neighborhood Methods
Neighborhood methods begin with the identification of particular areas that have high
proportions of disadvantaged people or other populations of concern. The areas include census
blocks, census tracts, ZIP codes, counties or specially or traditionally defined neighborhoods.
Researchers usually obtain data on neighborhood demographic composition from U.S. census
data or other government surveys. Next, researchers measure or estimate the overall or specific
level of contaminants or pollution sources using resources such as those described in Section 5.4.
The neighborhood pollutant or pollution source levels then are compared to national or regional
means. Neighborhood studies are useful for understanding cumulative risks or identifying areas
already bearing high levels of environmental burdens.
4.3.7. Population-Based Methods
In a population-based method, analysts compare the overall or person-specific mean, percentile
or distribution of exposure for a given population(s) of concern to that of a control population or
to the mean, percentile or distribution level of exposure of the entire population. This comparison
requires data on each person in a population and an assessment of whether that individual is a
member of a particular population. In many cases, data on individual exposures are scarce.
Analysts instead assign exposure values to individuals based on area data or surrogate exposure
measures. Generally, the geographic area of interest is larger than a neighborhood—municipal,
countywide, statewide, regional or national in scope. Population-based methods are useful in
understanding population group-specific differences in health or in identifying priorities for
health and environmental interventions.
4.3.8. Social Process Methods
Analysts use social process methods to assess the association between a social-level variable(s)
and chemical exposures. The variables include measures of racial residential segregation, income
inequality and poverty rates. The general method is to use regression analysis, treating these
measures as independent variables and the exposure metrics as dependent variables. The
regression models also often use other demographic, social or environmental measures as the
independent variables. A subset of these methods uses hierarchical linear modeling (also called
mixed or multilevel modeling). In this subset, at least two levels of effect are assessed, typically
including the individual level (including race/ethnicity, sex, age, income) and the neighborhood
or other higher level variables (including owner-occupied housing percentages, racial/ethnic
percentages in a population or other social-level variables). These methods can be valuable in
screening for potential associations between multiple risk factors and differences in health
between racial and other groups. Assessing causation, however, requires additional evidence.
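A minimal sketch of the regression step is shown below. The dataset is synthetic and the variable names (a segregation index, poverty rate, percent minority and an exposure metric) are hypothetical; the example only illustrates treating social-level measures as independent variables and an exposure metric as the dependent variable. A hierarchical (multilevel) analysis would add a grouping structure (e.g., tracts nested within counties) rather than the single-level model shown here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic tract-level data for illustration only; column names are
# hypothetical and do not correspond to any specific dataset.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "segregation_index": rng.uniform(0, 1, n),
    "poverty_rate": rng.uniform(0, 0.4, n),
    "pct_minority": rng.uniform(0, 1, n),
})
# Hypothetical exposure metric (e.g., a modeled pollutant concentration)
df["exposure_metric"] = (
    2.0
    + 1.5 * df["segregation_index"]
    + 3.0 * df["poverty_rate"]
    + rng.normal(0, 0.5, n)
)

# Social-level measures as independent variables; exposure metric as the
# dependent variable, as described in the text above.
model = smf.ols(
    "exposure_metric ~ segregation_index + poverty_rate + pct_minority",
    data=df,
).fit()
print(model.summary())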
4.3.9. National-Level versus Local/Community-Specific Assessments
Differences in exposure to environmental contaminants by race, ethnicity, class, geography and
other factors can be assessed within localities, between localities and across populations at the
national level. The exposure assessor, in consultation with the risk manager/decision maker, will
determine the most relevant geographic scope.
National-Level Assessment
At the national level, screening for differential exposure can use the large, comprehensive
databases on pollutant concentrations in environmental media (e.g., air, water) and the locations
of pollution sources that national organizations such as EPA, the U.S. Bureau of the Census and
the Centers for Disease Control and Prevention develop. For example, the screening study might
combine data on segregation and income inequality, metropolitan air quality indices, modeled air
toxics concentrations and data from the Toxics Release Inventory.
One example of a national-level assessment is EPA's National-Scale Air Toxics Assessment
(NATA), an ongoing comprehensive evaluation of air toxics in the United States. EPA
developed the NATA program in 1996. These assessments estimate the risk of cancer and non-
cancer health effects from inhaling air toxics, including estimates of exposures at the census-tract
level. Assessments include estimates of health effects based on chronic exposure from outdoor
sources. Assessments provide a snapshot of the outdoor air quality and the risks to human health
that would result if emission levels of these pollutants remained unchanged. NATA is updated
with new inventory and exposure data on a 3-year cycle.
Local-Level (Community) Assessment
Local-level exposure assessments are useful for responding to specific community concerns and
planning for hazardous waste or brownfield site cleanups. In addition, local-level exposure
assessments can help unmask unique or high exposure levels of specific community or
population groups that would be "averaged out" in a national-level assessment. This situation is
particularly germane for groups having traditional practices, including Native Americans and
other ethnic and religious groups. Community-based risk assessment is an active area of research
for EPA, in particular for EPA's National Center for Environmental Research. Several reports,
including workshops, case studies and modeling tools (e.g., C-FERST [Community-Focused
Exposure and Risk Screening Tool], T-FERST [Tribal-Focused Environmental Risk and
Sustainability Tool], ReVA [Regional Vulnerability Assessment], community-based air toxics
models, the Regional Air Impact Modeling Initiative, the Toxics Release Inventory Explorer, the
Internet Geographic Exposure Modeling System) are available as resources (Barzyk et al. 2010).
4.4. Summary
• Unique characteristics and sociodemographic factors can increase exposure or
predispose an individual, lifestage, specific group or population to greater health risk.
• Numerous executive orders, policies and legislative mandates emphasize EPA's history
of commitment to considering lifestages, vulnerable groups and populations of concern
in exposure assessments.
• Environmental exposures and health risks can be concentrated among certain population
groups and in potentially vulnerable communities. Vulnerability refers to characteristics
of individuals or populations that place them at increased risk of adverse health effects.
Susceptibility refers to the increased likelihood that a stressor will affect an individual or
population more than it will affect the general population because of intrinsic biological
factors. The planning and scoping phase of the exposure assessment is the optimal point
for outlining the approach to identify and consider population vulnerability.
• Certain lifestages, tribal populations, racial and ethnic groups and socioeconomically
disadvantaged population groups can be particularly vulnerable to exposure and are of
heightened concern to the exposure assessor.
o "Lifestage" refers to a temporal stage of life with distinct anatomical, physiological,
behavioral or functional characteristics that contribute to potential differences in
vulnerability to environmental exposures.
o EPA views childhood as a sequence of lifestages from conception through fetal
development, infancy and adolescence for the purposes of exposure assessment.
o EPA provides guidance for use in assessing exposure in older adults, including Data
Sources for Modeling Environmental Exposures in the Older Adult Population.
o When assessing long-term exposures to environmental chemicals, integrating age-specific
values for both exposure and toxicity/potency is advisable when such data are available.
o Assessors need to be aware of exposure issues and scenarios unique to tribal
populations and be cognizant of special cultural and technical challenges when
conducting exposure assessments.
• Many methods are available to exposure assessors for identifying lifestages, vulnerable
groups and populations of concern.
o Traditional methods include documenting the locations of locally unwanted land
uses, such as hazardous waste sites, pollution emitters or highways.
o Case studies describe a particular neighborhood or group's experience with exposure
or environmentally related conditions over time; this method can be valuable for
describing and understanding past exposures.
o Neighborhood methods identify areas having high proportions of populations of
concern and measure or estimate overall or specific levels of contaminants or
pollution sources. The neighborhood pollutant or pollution source levels then are
compared to national or regional means.
o Population-based methods compare the overall or person-specific mean, percentile
or distribution of exposure for a given population(s) of concern to that of a control
population or to the mean, percentile or distribution level of exposure of the entire
population.
o Social process methods are used to assess the association between a social-level
variable(s) and chemical exposures.
o National-level and local/community-specific assessments. At the national level,
screenings for differential exposure use large, comprehensive databases on pollutant
concentrations in environmental media and the locations of pollution sources.
CHAPTER 5. DATA FOR EXPOSURE ASSESSMENTS
Data, defined in this Guidelines for Human Exposure Assessment as the sets of quantitative and
descriptive information needed to answer exposure assessment questions, are the primary input
to an exposure assessment. Possible data types include physical measurements of environmental
and biological media, health survey and study outputs (e.g., the National Health and Nutrition
Examination Survey [NHANES]) including data on various health outcomes, location-specific or
population-based activity information and scientific research findings such as modeling data (see
Chapter 6). The information an exposure assessor needs to consider has grown more complex
due to advances in science and technology. Many new datasets and data collection methods have
become available to support exposure assessments. As analytical techniques have improved and
more sophisticated modeling and predictive tools have evolved, the ways to process data have
become increasingly complex. For these reasons, understanding data availability, applicability,
characteristics, quality issues and limitations is critical to conducting a scientifically sound
exposure assessment. The process of identifying and addressing data needs is iterative, involving
repeated reviews of data availability, quality and gaps. This chapter provides a framework for
addressing data needs, including an overview of key data considerations and links to relevant
data sources, resources and tools. Specifically, this chapter:
• Identifies the types of data used in an exposure assessment (Section 5.1)
• Discusses considerations in identifying data gaps and data needs (Section 5.2)
• Describes quality assurance/quality control (QA/QC) needs for an exposure assessment
(Section 5.3)
• Discusses sources of existing data and methods for collecting additional data for exposure
assessments (Section 5.4)
• Reviews data and decision uncertainty and variability (Section 5.5)
• Provides an overview of data management considerations (Section 5.6)
• Describes communication considerations specific to data (Section 5.7).
Section 5.8 summarizes this chapter.
5.1. Types of Data Used in an Exposure Assessment
Data used in an exposure assessment represent a wide variety of information, from chemical
concentrations in various media to information about activities at the individual or population
level. For each assessment, the exposure assessor needs to consider the relevance of various
types of data: environmental data, biomonitoring data, exposure factors and activity patterns.
Sections 5.1.1 through 5.1.5 discuss these data considerations and the role they serve in an
exposure assessment.
Data support EPA's role of protecting public health and the environment. Laws and executive
orders EPA uses to meet these mandates are discussed in published documents (U.S. EPA 2017d;
U.S. EPA 2019b). Exposure assessment information used in protecting public health includes
data on the source-to-outcome continuum (see Figure 2-1). Assessors need to consider
coordinating with the appropriate program to ensure that data collected meet program-specific
requirements for collection of new data and use of existing data.
5.1.1. Environmental Data
EPA defines environmental data as "any measurement or information that describe[s]
environmental processes, location or conditions, ecological or health effects and consequences,
or the performance of environmental technology" (U.S. EPA 2002h). Environmental data include
information collected directly from measurements, produced from models and compiled from
other sources, such as databases or the literature (U.S. EPA 2002e). Exposure assessments
typically use environmental data to characterize:
• Sources of contaminants
• Fate and transport of contaminants in media
• Exposure pathways
• Likelihood exposure will occur
• Who is exposed
• Impact of variability and uncertainty in exposure estimates.
The use of environmental data occurs throughout the exposure assessment process. During
planning and scoping, data can direct the development of a conceptual model by providing
information about the chemical source, types of releases and potential transport mechanisms
through the environment (see Section 3.2.2). Other data considerations include temporal
variability in environmental and biological measures and representativeness of samples. For
example, data on chemical concentrations in soil at a playground might highlight the importance
of understanding incidental soil ingestion by children. Assessors also use environmental data
when quantitatively estimating exposure. These data serve as fundamental inputs, either directly
as exposure concentrations or indirectly in exposure models, which might include
physicochemical models or databases. Data serve as input parameters for larger or higher level
exposure models that estimate likely exposure concentrations. For example, the concentration of
a solvent detected in an aquifer used as a drinking water source could directly represent an
exposure concentration for the population that uses the source. The concentration of a solvent
detected in groundwater upgradient of a drinking water well, on the other hand, could serve
indirectly as an input value to a model used to predict potential contamination based on
parameters such as groundwater flow, well pumping rates and groundwater velocity. Exposure
assessors need to consider carefully the way in which the environmental data fit into the
conceptual model (see Section 3.2.2) and the purpose of the assessment.
5.1.2. Biomonitoring Data
Biomonitoring is a useful method for assessing human exposure to chemicals by collecting
human tissues or specimens, such as blood and urine (NRC 2006b), combined with information
about environmental exposures from interviews and questionnaires. Measuring the chemicals or
their metabolites in human tissues or specimens along with information about the use and timing
of the chemical exposure (see Section 2.2.1) in relation to the collection of the specimen
provides valuable data for understanding internal dose and total exposure of an individual.
Biomonitoring data are useful in characterizing exposures when complete exposure data are not
available for current, recent or historical exposures or when exposure to multiple chemicals
might have occurred (Checkoway and Eisen 1998).
Biomarkers are cellular, biochemical, analytical or molecular measures, obtained from biological
media such as tissues, cells or fluids, which indicate exposure to a chemical (WHO 2004).
Biomarkers of exposure record the concentration of the chemical or its metabolites in biological
media, whereas biomarkers of effect indicate cellular, biochemical or molecular changes
occurring as a result of human exposure to the chemical (WHO 2004; WHO 2012; WHO 2015).
Centers for Disease Control and Prevention's (CDC) NHANES regularly provides biomonitoring
data from a nationally representative population sample for over 200 chemicals measured in
blood or urine. Summary statistics are available from the National Report on Human Exposure to
Environmental Chemicals, and individual observations are available in downloadable data files
from the NHANES website (CDC 2009; CDC 2012a).
What Are Biomonitoring Data?
The ideal biomarker is sensitive, specific, biologically relevant, easy to collect, inexpensive to
analyze, readily identified and persistent in the body for long periods (Metcalf and Orloff 2004;
Needham and Sexton 2000). Figure 5-1 illustrates persistence of a hypothetical chemical or
metabolite in human tissue after a single exposure, although residence times in the body can vary
depending on the type of chemical (Needham and Sexton 2000; Sohn et al. 2004). Various
adducts can form between blood components and toxicants that are either persistent or
nonpersistent (Needham and Sexton 2000). Adducts formed by toxicants or their metabolites
include DNA adducts, in which an electrophilic center of the toxicant or metabolite reacts with a
nucleophilic center in DNA, and protein adducts (e.g., hemoglobin and albumin) formed after exposure to xenobiotics (Needham
and Sexton 2000). The America's Children and the Environment website discusses
biomonitoring data for several chemicals including lead, phthalates and mercury. In addition, as
described in Section 6.2.3, human dose models (forward and reverse dosimetry) can estimate a
dose based on biomonitoring data. The CDC Biomonitoring Summaries website provides a brief
general overview of each chemical or chemical group, including usage, environmental
pathways, sources of exposure, toxicology, health effects and human biomonitoring information.
Figure 5-1. Representative Profiles of Hypothetical Biomarkers Following a Single
Exposure to a Persistent Chemical
[Figure 5-1 plots relative biomarker concentration versus time (1 to 1,000 days) following a single exposure, with separate curves for blood toxicant/metabolite, albumin adduct, hemoglobin adduct, DNA adduct, urinary metabolite and urinary adduct.]
Adapted from Needham and Sexton (2000)
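The persistence concept behind Figure 5-1 is often approximated with simple first-order elimination. The sketch below is a generic illustration, not an Agency method: it computes the decline of a hypothetical biomarker after a single exposure for a short and a long half-life.

import math

def biomarker_concentration(c0, half_life_days, t_days):
    """First-order decline of a biomarker concentration after a single exposure."""
    k = math.log(2) / half_life_days  # elimination rate constant (1/day)
    return c0 * math.exp(-k * t_days)

# Hypothetical initial concentration and half-lives, for illustration only.
C0 = 10.0  # arbitrary concentration units
for half_life in (2, 100):  # e.g., a short-lived metabolite vs. a persistent chemical
    print(f"half-life {half_life} d:",
          [round(biomarker_concentration(C0, half_life, t), 3)
           for t in (1, 10, 100, 1000)])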
What Are Biomonitoring Equivalents?
The "biomonitoring equivalent" (BE) approach estimates a single biomarker concentration
(called the BE) that corresponds to a guidance value (e.g., Maximum Contaminant Level,
Reference Dose), which then can be compared with measured biomarker data. The resulting
"hazard quotient" estimates (HQ = biomarker concentration/BE) then can be used to prioritize
chemicals for follow-up examinations (Phillips et al. 2014). Comparison of biomarker
concentrations to corresponding BE values is useful in guiding the evaluation of multiple
exposures in a population and in setting priorities for research or exposure reduction (Aylward et
al. 2013; St-Amand et al. 2014).
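The arithmetic of this screening comparison is straightforward, as the hypothetical sketch below shows: each measured biomarker concentration is divided by the corresponding BE, and the resulting hazard quotients are ranked to help prioritize follow-up. The chemical names, concentrations and BE values are placeholders, not published values.

# Hypothetical biomarker concentrations and biomonitoring equivalents (BEs),
# in consistent units (e.g., ug/L in urine). Values are placeholders only.
measured = {"chemical A": 12.0, "chemical B": 0.8, "chemical C": 45.0}
biomonitoring_equivalents = {"chemical A": 200.0, "chemical B": 1.0, "chemical C": 40.0}

# Hazard quotient: HQ = biomarker concentration / BE
hazard_quotients = {
    chem: measured[chem] / biomonitoring_equivalents[chem] for chem in measured
}

# Rank chemicals by HQ to help prioritize follow-up evaluation.
for chem, hq in sorted(hazard_quotients.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{chem}: HQ = {hq:.2f}")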
EPA's Office of Pesticide Programs (OPP) used an approach comparable to the BE approach to
evaluate triclosan and pentachlorophenol (U.S. EPA 2008a; U.S. EPA 2008b). OPP essentially
used biomonitoring data (urine) from NHANES and reasonable assumptions to estimate the
distribution of exposure (in µg chemical/kg body weight) to the U.S. population and subgroups.
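One common form of such a back-calculation, shown below as a simplified sketch rather than the approach OPP actually applied, assumes steady-state, first-order kinetics: daily intake is estimated from a urinary concentration, daily urine output, the fraction of the dose excreted in urine and body weight. All parameter values are hypothetical.

def estimated_daily_intake(urine_conc_ug_per_l, urine_output_l_per_day,
                           fraction_excreted_in_urine, body_weight_kg):
    """Simple steady-state reverse-dosimetry estimate of intake (ug/kg-day).

    Assumes the mass excreted in urine each day equals the fraction of the
    daily intake eliminated via urine (a one-compartment, steady-state
    simplification).
    """
    excreted_ug_per_day = urine_conc_ug_per_l * urine_output_l_per_day
    intake_ug_per_day = excreted_ug_per_day / fraction_excreted_in_urine
    return intake_ug_per_day / body_weight_kg

# Hypothetical inputs chosen only to illustrate the calculation.
print(round(estimated_daily_intake(
    urine_conc_ug_per_l=5.0,
    urine_output_l_per_day=1.5,
    fraction_excreted_in_urine=0.7,
    body_weight_kg=80.0), 3), "ug/kg-day")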
How Are Biomonitoring Data Used?
Biomonitoring data provide a useful tool for assessors to identify chemicals in the environment
and human tissues and to monitor changes in exposure over time (NRC 2006b). Assessors use
biomonitoring studies to address data gaps associated with possible exposures, baseline
conditions, trends in concentrations of specific chemicals within populations over time (e.g.,
NHANES data) and internal chemical or metabolite concentrations. Biomarkers of exposure
provide information on chemical exposures in individuals, changes in levels over time and
variability among different populations (U.S. EPA 2018b). Assessors can use biomonitoring data
as a baseline or point of reference for comparing changes in concentrations over time. Baseline
information provides the ability to analyze changes in chemical concentrations over time,
including prevalence or magnitude of exposure, and to evaluate the impacts of removing
chemicals from the environment by examining changes in blood or urine concentrations over
time. Reference ranges describe general population exposures to chemicals for segments of the
population (CDC 2009; CDC 2015a). Biomonitoring data complement environmental and
modeling data in estimating exposure (e.g., temporal, scale, media, biodegradation).
What Are the Quality Considerations in Using Biomonitoring Data?
The CDC has developed specific guidance regarding assessing the quality of biomonitoring data.
Information is available on the CDC National Biomonitoring Program website (Berman et al.
2001; CDC 2016).
What Are the Limitations of Biomonitoring Data?
The CDC's Fourth National Report on Human Exposure to Environmental Chemicals, 2009
(CDC 2009) states the following caution in evaluating biomonitoring data:
"The measurement of an environmental chemical in a person's blood or urine
does not by itself mean that the chemical causes disease. Advances in analytical
methods allow us to measure low levels of environmental chemicals in people,
but separate studies of varying exposure levels and health effects are needed to
determine whether such blood or urine levels result in disease. These studies must
also consider other factors such as duration of exposure."
Biomonitoring data have several limitations. For example, analytical methods are unavailable for
some chemicals, and interpreting results can be difficult due to background levels, confounding
co-exposures, metabolic processes with uncertain transformation times and limited information
correlating exposures to chemical measurements or metabolites. Assessors need to measure
source data in the individual media or model them, as Section 6.2.5 discusses. Having data on
sources of exposure and on internal biomonitoring information helps assessors understand how
to reduce exposures. Also, having both biomonitoring data and modeling information helps
identify the relative contribution from different sources of exposure. Another limitation is
the potential for intra-individual variability in the measurement of nonpersistent chemicals in
urine. Permission or consent of individuals might apply to the collection and analyses of
biological specimens and reporting of data. Biomonitoring data often are not available for young
children because of difficulties in obtaining blood and urine samples. Human Studies Review
Boards (see Section 7.2.10) should be consulted for additional guidance for obtaining consent on
collecting and sharing biomonitoring data. Finally, the data on biologically equivalent doses that
result in toxic effects are limited, making the comparisons necessary to assess health risks
difficult. Before relying on biomonitoring data for an exposure assessment, the project team
needs to be cognizant of limitations inherent in the biomarkers used.
What Advances Are Expected in Biomonitoring Data?
The science behind biomonitoring and biomarkers is advancing rapidly, and biomonitoring in
human populations is becoming more common. Emerging research is expanding the Agency's
knowledgebase of biomonitoring, biomarkers and links to environmental exposures. The CDC,
for example, has been measuring chemicals in blood and urine from a subset of the U.S.
population using advanced laboratory methods and innovative technologies for more than three decades as
part of the NHANES program (CDC 2012a). NHANES is an ongoing program of surveys that
collect data on the health and nutritional status of the noninstitutionalized population of the
United States (CDC 2012a). The CDC reports detailed information and documentation on
NHANES methods, datasets and data analyses. CDC periodically releases comprehensive
reports, which are available on the National Report on Human Exposure to Environmental
Chemicals website.
What Special Considerations Are Needed to Use Biomonitoring Data?
When using biomonitoring data in an exposure assessment, an assessor needs to be mindful of
confidentiality and privacy considerations. Sections 5.4.2 and 5.4.4 discuss confidentiality
concerns associated with using data from individuals. Section 5.4.4 further discusses the
confidentiality and privacy issues associated with administering questionnaires and surveys and
conducting observational human exposure measurement studies. Section 7.2.10 specifically
discusses the confidentiality and privacy issues associated with conducting observational human
exposure measurement studies.
5.1.3. Exposure Factors
As described in Section 3.2.1, individuals within a population fall within a distribution of
exposures based on factors that include personal characteristics, individuals' activities and
behaviors and frequency and duration of exposure to various media. Because uncertainty and
variability are present in exposure assessments, EPA might incorporate a "high-end" exposure
level to ensure protection of potentially exposed individuals or groups within lifestages or
populations of concern (e.g., EPA's high-end levels are the 90th percentile and above, as
Figure 5-2 shows). When combining exposure factors, the assessor needs to consider both high-
end and central-tendency exposure factors and ensure the combinations represent those that occur
within comparable times and places to an individual and within the range of plausible exposures.
Even with a high-end value, individuals could experience higher or lower exposures. EPA's
programs estimate high-end and central-tendency values and often provide a range of exposures
that encompass the actual exposure distribution for various individuals, lifestages, groups or
populations (U.S. EPA 2004c).
Figure 5-2. Schematic of the Distribution of Exposures for Individual Receptors
within a Population
[Figure 5-2 is a schematic of an exposure distribution marking the 50th percentile (typical exposure), the high end of exposure (90th, 95th, 98th, 99th and 99.9th percentiles) and a bounding estimate beyond the upper end of the distribution.]
Exposure descriptors characterize estimates for a specific point on the exposure distribution (e.g.,
mean, median, 95th percentile, maximum) for individual or population exposures. Exposures
vary due to differences among individuals, populations, spatial and temporal scales and other
factors. According to EPA's Example Exposure Scenarios, this "variability can be addressed by
estimating exposure for the various descriptors of exposure (i.e., central tendency, high-end or
bounding) to estimate points on the distribution of exposure" (U.S. EPA 2003c). Exposure
descriptors are useful when characterizing exposure and aid communication between exposure
assessors and risk managers/decision makers. Box 5-1 summarizes common exposure descriptor
terms used to describe exposure distributions for various individuals. The terms include
definitions based on the distributions, types of exposures and exposed individuals.
Assessors use exposure factors to estimate contact rates for different media (e.g., the amount of
air inhaled in a breath, breathing rates). Other exposure factors include data on people's physical
characteristics (e.g., body weight, skin surface area).
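One common way to generate central-tendency and high-end descriptors such as those in Figure 5-2 and Box 5-1 is to propagate distributions of exposure factors through a dose equation and report percentiles of the result. The sketch below is a generic Monte Carlo illustration for a drinking-water ingestion scenario with hypothetical lognormal and normal inputs; it is not an Agency model, and actual distribution choices come from program guidance and the Exposure Factors Handbook: 2011 Edition.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of simulated individuals

# Hypothetical input distributions for a drinking-water ingestion scenario.
concentration = rng.lognormal(mean=np.log(0.005), sigma=0.5, size=n)  # mg/L
intake_rate = rng.lognormal(mean=np.log(1.2), sigma=0.4, size=n)      # L/day
body_weight = rng.normal(loc=80, scale=12, size=n).clip(min=40)       # kg

# Average daily dose (mg/kg-day) for each simulated individual.
add = concentration * intake_rate / body_weight

for pct in (50, 90, 95, 99):
    print(f"{pct}th percentile ADD: {np.percentile(add, pct):.2e} mg/kg-day")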
Box 5-1. Terms Describing Exposure Distributions
Parts of the Exposure Distribution
• High end of the distribution: occurs above the 90th percentile of the population distribution, but not higher than the
percentile for the individual in the population who has the highest exposure.
• Maximum exposure range: above the 99th percentile in exposure.
Types of Exposure
• Bounding estimate: an estimate of exposure that is higher than the highest anticipated exposure to an individual, lifestage,
group or population. Bounding estimates show that true exposures are not greater than estimated exposures. Assessors
often use bounding estimates during screening-level assessments to eliminate exposure pathways of minor importance
from further consideration or to determine whether they need more data and information to evaluate other pathways.
• Central tendency exposure: an estimate of exposure of individuals in the middle of the distribution (i.e., those near the
median or 50th percentile).
• High-end exposure estimate: used in this guidance document as a plausible estimate of individual exposure for those
individuals at the upper end of an exposure distribution. The intent of this designation is to convey an estimate of exposure
in the upper range of the distribution while avoiding estimates that are beyond the true distribution.
• Reasonable maximum exposure: defined as the highest exposure reasonably expected to occur at a Superfund site, and
intended to estimate a conservative exposure case (i.e., well above the average case) that is still within the range of
possible exposures.
• Worst-case exposure: historically, used for the maximum possible exposure occurring when all events that can plausibly
occur to maximize exposure occur. This worst-case exposure might fall on the uppermost point of the population
distribution, but in most cases, will be somewhat higher than for the individual in the population having the highest
exposure.
Types of Exposed Individuals
• Maximally exposed individual (MEI): describes the uppermost portion of the high-end exposure range, although actual
usage has varied (e.g., section 112 of the Clean Air Act Amendments of 1990).
• Theoretical MEI: describes exposure under the worst case. It represents a hypothetical individual and an extreme set of
conditions.
• Reasonably MEI: describes exposure under the reasonable worst case.
Assessors use exposure factors, with activity pattern information and other data inputs, in
developing an exposure scenario and a conceptual model to estimate exposures. EPA's Exposure
Factors Interactive Resource for Scenarios Tool Glossary, based on the Exposure Factors
Handbook: 2011 Edition (U.S. EPA 2011d), defines activity pattern (or time-use) data as
"information on activities in which various individuals engage, length of time spent performing
various activities, locations in which individuals spend time and length of time spent by
individuals within those various environments." Activity information describes the types of
activities in which individuals engage, the length of time people engage in that activity and
where and when that activity occurs. Activity pattern information is collected by using time-
activity diaries (e.g., paper diaries in which individuals record activities such as food consumed),
electronic devices (e.g., geographic information system or other hand-held devices),
questionnaires or surveys of activities.
EPA developed ExpoBox as a toolbox for exposure assessors and the Exposure Factors
Interactive Resource for Scenarios Tool (ExpoFIRST) to help individuals access exposure data.
ExpoBox is a compendium of exposure assessment tools that link to guidance documents,
databases, models, reference materials and other related resources. Exposure assessment
resources are organized into six tool sets, each containing a series of modules:
1. Approaches (e.g., direct measurement [point of contact], indirect estimation [scenario
evaluation], exposure reconstruction [biomonitoring and reverse dosimetry])
2. Media (e.g., air, water and sediment, soil and dust, food, aquatic biota, consumer
products)
3. Routes of Exposure (e.g., inhalation, ingestion, dermal)
4. Tiers and Types (e.g., screening level and refined, deterministic and probabilistic,
aggregate and cumulative)
5. Lifestages and Populations (e.g., general population, residential consumers, lifestages,
highly exposed)
6. Chemical Classes (e.g., pesticides, other organics, inorganics and fibers, nanomaterials).
ExpoFIRST enables users to draw on data found in EPA's Exposure Factors Handbook: 2011
Edition (U.S. EPA 2011d) to develop user-defined scenarios based on route of exposure,
medium, receptor(s), timeframe and dose metric for a contaminant.
Exposure factors might or might not be the same as those found in the Exposure Factors
Handbook: 2011 Edition. Exposure assessors need to be aware of their program's specific
exposure parameters for the assessment to help select appropriate exposure factors.
5.1.4. Observational Human Exposure Measurement Study Data
Observational human exposure measurement studies seek to quantify individuals' exposures to
chemicals in their everyday environments during their normal daily activities. Described further
in Chapter 7, these studies involve measurements of chemical, physical or biological agents in
environmental media; collection of information about the study participants and their homes,
work environments and activities; and collection of personal exposure and biomarker samples
(Lioy et al. 2005; Sheldon 2010; U.S. EPA 2008c; U.S. EPA 2009a; Zartarian et al. 2005).
Section 7.2.13 provides information on what the exposure assessor considers regarding the
evaluation of data from observational human exposure measurement studies in line with the data
quality objectives (DQOs).
5.1.5. Using Different Types of Data to Inform Decisions
Depending on the environmental question, individual data types can be used alone or in
combination. For example, combining existing measurement studies with exposure modeling
data is useful in evaluating contributions of individual routes of exposure or for model
evaluation. Combining data with model estimates can help determine the need for data collection
and inform study designs for observational human exposure measurement.
5.2. Identifying Data Gaps and Data Needs
The exposure assessment begins by evaluating existing data to determine whether the data
address the needs identified during planning and scoping. When data are not available or are
inadequate to represent potential exposures, the assessor needs to consider collecting data to
meet the goals of the assessment. In addition, where appropriate, the assessor needs to consider
obtaining measurements that characterize the nature and extent of chemical contamination and
information needed to predict future contaminant concentrations.
Identification of data gaps and data needs begins with understanding the conceptual model, as
presented in Section 3.2.2. Figure 3-3 in that section represents an example conceptual model
that assessors can use in evaluating potential exposures due to the release of chemicals from
drums that impact multiple media and receptors (U.S. EPA 1988a; U.S. EPA 1989a). Table 5-1
describes exposure routes and potential receptors for a hypothetical exposure scenario resulting
from the release of chemicals from the spilled drums (U.S. EPA 2016b). Assessors need to
consider both current and future exposure scenarios because land use and activities near the
contamination source can change over time. The conceptual model helps identify the temporal
and spatial extent of contamination, source proximity, wind or water flow direction and
completed routes of exposures in the various media that might need sampling.
Table 5-1. Hypothetical Exposure Scenario for Leaking Chemical Drums
Source of contamination (all pathways): drum spill contaminating soil.
Release or Transport Medium | Exposure Point | Exposure Route | Exposed Population
Ambient air volatile emissions; dust | Ambient air; dust | Inhalation | Residents (adult and child); workers; site visitors
Soil gas/groundwater; indoor air/vapor intrusion | Indoor air (microenvironment of a room or smaller area of a room); vapor intrusion | Inhalation | Residents (adult and child); workers; site visitors
Volatilization of contaminants in soil and deposition | Soil: residential yards | Ingestion; inhalation; dermal | Residents (adult and child)
Volatilization of contaminants in soil and deposition | Soil: on site | Ingestion; inhalation; dermal | Workers (adults); site visitors (adult and child)
Volatilization of contaminants in drums with deposition on soil; windblown dust | Residential yards; residential homes; facility | Ingestion; inhalation; dermal | Residents (adult and child); workers (adults)
Leaching of contaminants from drums | Groundwater: private wells | Ingestion; inhalation; dermal | Residents (adult and child); workers (adults)
Leaching of contaminants from drums | Groundwater: public water supply | Ingestion; inhalation; dermal | Residents (adult and child); workers (adults)
Uptake of contaminants from soil and groundwater impacted by leaking drums | Biota: locally grown food; naturally occurring food (e.g., berries, mushrooms); contaminated fish and game | Ingestion | Residents (adult and child); subsistence populations
EPA provides resources for planning projects that use existing data (U.S. EPA 2018e). The
recommendations emphasize the need to assess data against their intended use. Review of
existing data needs to consider the data collection timeframe, relevance to the exposure
assessment being developed and changes in analytical methods (e.g., detection limits) over time.
Other resources include:
• Chapter 3 of the Guidance for Quality Assurance Project Plans (G-5) - identifies
elements to consider when planning projects that use existing data
• Checklist for Quality Concerns About Using Data from Other Sources - provides step-
by-step guidance to ensure that secondary data meet project quality objectives
• Quality Assurance Project Plan Requirements for Secondary Data Research Project -
provides example guidance by the QA managers in EPA's National Risk Management
Research Laboratory
• EPA's Science Policy Council Assessment Factors - presents general assessment factors
for evaluating the quality of scientific and technical information
• Software - For links to free software for performing data quality assessment, see Quality-
Related Resources - Software.
Program-specific guidance for evaluating existing data recommends that assessors consider the
objective of the study or program that gathered the data, collection and analytical methods,
QA/QC procedures used and key study and data uncertainties (EPA's Resources for Planning
Projects that Use Existing Data website). EPA's Guidance for Data Usability in Risk Assessment
(Part A) provides information on the minimum quality and quantity of environmental data
required to support a Superfund risk assessment (U.S. EPA 1992b). The concepts outlined in
these program-specific guidance documents could apply to exposure assessments serving
functions beyond the specific program. The exposure assessment/characterization captures
decisions to use existing data, including appropriate discussion of any data limitations and
situations that preclude use of existing data. Coordination with appropriate programs is important
in determining if an assessment can use existing data.
5.2.1. Identification of Data Gaps—Existing Data
The review of available data examines the nature and quantity of the data available. The assessor
uses this information to identify information gaps. The exposure assessor
considers what gaps in knowledge the assumptions, estimates, models, default values or targeted
data collection will fill. Planning discussions within the project team about filling data gaps
strive to answer the following types of questions:
• Is the quantity of data sufficient to perform the exposure assessment, despite having
particular data gaps?
• If data are pending, when will the results be available? The exposure assessor and project
manager need to weigh the impact of delaying an assessment or decision while acquiring
new data.
• Will the missing data make a substantive difference in the exposure assessment? In other
words, what is the direct relevance of the data to the problem or exposure assessment
objectives?
• How do the missing data relate to anticipated or known specified stakeholder concerns
about exposures?
5.2.2. Developing a New Data Sampling Program
Before planning a sampling program, the project team needs to evaluate the needed resources
(Whitmore et al. 2005). Sampling often is a resource-intensive endeavor requiring substantial
time and money. The following questions, modified from the Office of Pollution Prevention and
Toxics (OPPT) Considerations When Evaluating Exposure Assessments, are appropriate to
consider when assessing the implications of implementing a sampling program to provide data
for an exposure assessment:
• Do the objectives, methods, scope and size of the proposed sampling program support the
objectives of an exposure assessment?
• Are appropriate data collection and analytical methods available? Has the scientific
community adopted or otherwise accepted these methods? Does EPA have standard
operating procedures (SOPs) for these methods? Has EPA validated the methods?
• How many samples are needed to meet the objectives of the study?
• What QA/QC procedures are required?
• What will the sampling program cost?
• What is the timeframe for the decision and is this timeframe sufficient for collecting the
data to support the decision?
• Will the uncertainty in the data (e.g., timeframe of data collected, number of samples,
detection limits) substantially limit usability of the data for the exposure assessment?
Depending on the answers to these questions, an exposure assessor might decide that a sampling
program cannot fill the data gaps (e.g., the appropriate sampling or analytical methods are
unavailable; the uncertainty is too great to reduce the data gap). The assessor might decide that a
sampling program sufficient to fill data gaps is more extensive than is possible within the
resource, time and institutional constraints of an exposure assessment. Alternatively, the assessor
might determine that a sampling program would provide valuable information to support an
exposure assessment and would be feasible within the available framework. Although it can be
an iterative process, an exposure assessment needs to address its purpose, the timeframe of the
decision and the point at which enough data are available to support the decision.
5.3. Data Quality for New Data Collection
The quality of the exposure characterization, and ultimately the risk characterization, depends on
the quality of data used to conduct the exposure assessment. EPA has published guidance and
compiled resources on data quality for existing data and new data to meet this objective.
Figure 5-3 outlines the process, and Box 5-2 lists selected guidance documents and resources
most relevant to data used in exposure assessments.
EPA's Assessment Factors: A Summary of General Assessment Factors for Evaluating the
Quality of Scientific and Technical Information (U.S. EPA 2003a) also describes quality
considerations the Agency takes into account when evaluating scientific and technical
information. These include five general assessment factors:
• Soundness. The extent to which the scientific and technical procedures, measures,
methods or models employed to generate the information are reasonable for and
consistent with the intended application.
• Applicability and utility. The extent to which the information is relevant for the
intended use.
• Clarity and completeness. The degree of clarity and completeness in documenting the
data, assumptions, methods, QA, sponsoring organizations and analyses employed to
generate the information.
• Uncertainty and variability. The extent of evaluating and characterizing the uncertainty
and variability (quantitative and qualitative) in the information or in the procedures,
measures, methods or models.
• Evaluation and review. The extent of independent verification, validation and peer
review of the information or of the procedures, measures, methods or models.
Figure 5-3. The Seven Iterative Steps in the Data Quality Objectives Process

Step 1. State the Problem. Define the problem that necessitates the study; identify the planning team; examine budget and schedule.
Step 2. Identify the Goal of the Study. State how environmental data will be used in meeting objectives and solving the problem; identify study questions; define alternative outcomes.
Step 3. Identify Information Inputs. Identify the data and information needed to answer the study questions.
Step 4. Define the Boundaries of the Study. Specify the target population and characteristics of interest; define spatial and temporal limits and the scale of inference.
Step 5. Develop the Analytic Approach. Define the parameter of interest; specify the type of inference (decision making/hypothesis testing, or estimation and other analytic approaches); develop the logic for drawing conclusions from findings.
Step 6. Specify Performance or Acceptance Criteria. Develop performance criteria for new data being collected or acceptance criteria for existing data being considered for use; specify probability limits for false rejection and false acceptance decision errors.
Step 7. Develop the Plan for Obtaining Data. Select the resource-effective sampling and analysis plan that meets the performance criteria.
Box 5-2. EPA Quality Assurance/Quality Control (QA/QC)
Websites and Resources
EPA's Quality System manages the quality of the Agency's environmental data collection, generation and use. The Quality
System ensures that environmental data are of sufficient quantity and quality to support the data's intended use. Under the EPA
Quality System, EPA organizations develop and implement supporting quality systems. Similar specifications may also apply to
contractors, grantees and other recipients of financial support from EPA. The Quality System also covers the implementation of
the EPA Information Quality Guidelines.
The website www.epa.gov/quality includes information in the following categories:
• Quality, Regulations, Policies and Guidance. Includes information on EPA's system-related regulations and Assistance
Agreements; Policies and Procedures about Quality Assurance for EPA Organizations including training; list of Agency-wide
quality system documents; and quality specifications for non-EPA organizations to do business with EPA.
• Quality Assurance Tools. Includes management tools for projects; descriptions of tools for data quality assessment such
as references to A Reviewer's Guide (QA/G-9R), Statistical Methods for Practitioners (QA/G-9S), Checklist for Quality Concerns,
and EPA Science Policy Council Assessment Factors; training courses on Quality Assurance and Control Activities; and
examples and other on-line resources.
• Information Quality Guidelines. Includes guidelines for ensuring and maximizing the quality, objectivity, utility and integrity
of information disseminated by the EPA and requests for correction and requests for reconsideration submitted to the EPA.
• Important Quality System Contacts. Provides contacts in EPA Regional Offices, National Program Offices and ORD
National Research Laboratory and Centers responsible for developing and implementing individual quality systems in
support of the EPA Quality System.
• Quality Training. The Environmental Quality Management Division develops training materials on quality assurance (QA)
activities and the EPA quality system.
• Frequent Questions. Provides answers on all aspects of the Quality System including: goals, benefits, activities,
relationship between the quality system and the Information Quality Guidelines, responsibilities and roles of managers and
Offices in the QA/QC process.
The EPA Quality Program Policy and Procedure for Agency products and services, issued in October 2008, are available here:
EPA Quality Program Policy - CIO 2106.0 and Procedure - CIO 2106-P-01.0.
Common Starting Points:
• If you are developing a quality management plan: Quality Management Tools - Quality Management Plans.
• If you are writing a QA project plan: Quality Management Tools - Quality Assurance Project Plans.
Also see Quality Management Tools - Systematic Planning for information about planning your project before you
document this planning in a QA project plan.
• If you are looking for a typical example of documentation for your specific project, ask the QA Manager of the organization
sponsoring the work. They might or might not provide examples, depending on their organization's policy.
• If you are looking for information about EPA's guidelines on information quality, see the EPA Information Quality Guidelines
website.
General information on the Quality System is available at Frequent Questions about EPA's Quality System. Information on
quality specifications for non-EPA organizations is available at Quality Specifications for Non-EPA Organizations To Do
Business with EPA.
The Office of Management and Budget (OMB) issued government-wide guidelines that provide
policy and procedural guidance to federal agencies for ensuring and maximizing the quality,
objectivity, utility and integrity of information, including statistical information that federal
agencies disseminate (OMB 2006). International agencies, such as the World Health
Organization (WHO 2008) and the Canadian government have developed specific documents on
data quality in exposure assessment. State agencies also have data quality programs, such as the
Department of Ecology of the State of Washington and the Texas State Soil and Water
Conservation Board (see Environmental Data Quality Management website).
5.3.1. Data Quality System
EPA Order 5360.1, Policy and Program Requirements for the Mandatory Agency-Wide Quality
System (U.S. EPA 2000e) defines EPA's quality system requirements. EPA's policy requires all
EPA organizations and those that EPA funds to follow a quality system so data collected to
characterize environmental processes and conditions are of the appropriate type and quality to
support the decision. Agency policy, guidance, tools and guidelines are available at the EPA
website, How EPA Manages the Quality of its Environmental Data; specific guidance is
available at the Agency-wide Quality System Documents website. Figure 5-4 identifies the
components and tools EPA's Quality System uses, including the project-specific components
applied to individual projects (within a program) to ensure achievement of project objectives
(U.S. EPA 2002e).
Figure 5-4. EPA Quality System Components and Tools

[Figure: project-level components and tools of the EPA Quality System, shown as a cycle across three phases — Planning (systematic planning, e.g., the DQO process; QA project plan; standard operating procedures), Implementation (conduct of the study/experiment; technical assessments), and Assessment (data verification and validation; data quality assessment).]

Source: U.S. EPA (2002e)
• Planning: Systematic planning, such as the data quality objectives (DQO) process,
facilitates development of performance criteria for the data (i.e., the type, quantity and
quality of data needed for a specific purpose), production of a sampling plan that satisfies
those criteria and determination of the level of oversight and quality control activities
needed to ensure the criteria are satisfied. The QA project plan and other planning
materials document the systematic planning results. The EPA Quality Manual for
Environmental Programs (U.S. EPA 2000c) details the process elements emphasizing the
"specification of performance criteria for measuring quality" in the context of planning
activities. Other guidance includes EPA Requirements for Quality Assurance Project
Plans (QA/R-5) (U.S. EPA 2001c) and EPA Requirements for Quality Management
Plans (EPA QA/R-2) (U.S. EPA 2001d).
• Implementation and Oversight: The approved methods and procedures documented in
the QA project plan and Guide for Preparing Standard Operating Procedures (SOPs)
(U.S. EPA 2007c) govern data acquisition. Technical audits and assessments (such as
product/service or process quality audits) comprise oversight, performed to determine
whether data acquisition complies with requirements specified in the QA project plan and
other planning documents. Audits and assessments initiate actions to correct any
identified problems.
• Assessment: Project personnel use technical knowledge and statistical methods to
determine whether the data meet the user's needs. Data verification and validation ensure
the measured values are free of gross errors due to procedural or technical problems; data
analysis determines whether the data meet the performance criteria documented in the
QA project plan (data quality assessment).
5.3.2. Data Usability—Determining Whether Data Meet Assessment Factors
The quality of the data used in an exposure assessment drives the credibility of and confidence in
the results. Any data used in an exposure assessment, existing or newly collected, need to be of
sufficient quality to answer the exposure assessment questions credibly. From a quality
perspective, "acceptance criteria" are specifications used to judge whether one or more existing
sources of information or data are adequate to support the intended use.
"Performance criteria" represent the full set of specifications needed to design a data or
information collection effort that, when implemented, generate newly collected data of sufficient
quality and quantity to address the project's goals. Minimum performance and acceptance
criteria are established during the planning and scoping stage of any assessment (see
Section 3.1). EPA programs also might implement specific procedures, and exposure assessors
need to consult with their programs and follow their SOPs.
The data needed for an exposure assessment depend on the assessment approach selected during
planning and scoping and the assessment objectives (see Section 3.1.1). Therefore, in this stage
of an exposure assessment process, an exposure assessor needs to:
• Determine what data are needed to conduct an exposure assessment and DQOs for those
data
• For each data value needed, determine whether the data currently are available and if so,
obtain them and evaluate their quality and appropriateness
• When appropriate data are not available, determine how crucial they are to the
assessment and whether they can be estimated (e.g., collecting data, using models)
• When new data are collected, establish a QA project plan that documents the planning,
implementation and assessment procedures and how specific QA/QC activities will be
applied during a particular project (U.S. EPA 2001c).
Exposure assessment data present unique challenges when considering data quality. Whether
implementing a sampling program or using existing data, an exposure assessor needs to
determine that the resulting data meet QA/QC requirements. Upon receipt of data, an exposure
assessor reviews their quality. The new data need to meet the five general assessment factors
outlined in EPA's Assessment Factors: A Summary of General Assessment Factors for
Evaluating the Quality of Scientific and Technical Information (U.S. EPA 2003a). Reviewing
new and existing data for usability and determining whether the data meet the five general
assessment factors involve considering the areas outlined in the following paragraphs.
Aligning Data with Data Quality Objectives
All exposure measurements are subject to some level of uncertainty and variability because of
the inherent limitations of the sampling methods used and temporal and spatial differences in
chemical concentrations. The measurement process, concentrations of specific chemicals
measured in various media (e.g., soil, groundwater, sediment) and analytical measurement
approaches can introduce uncertainty. DQOs describe the degree of uncertainty the project team
is willing to accept based on the needs of the risk manager/decision maker. Setting realistic
DQOs is essential because data of insufficient quality will have little value for problem solving,
and data of quality that vastly exceeds what is needed to answer the exposure assessment
questions provide few, if any, additional advantages. DQOs consider data needs, cost-
effectiveness and the capability of the measurement process. In establishing realistic DQOs for
the exposure assessment, the team considers the benefits of the additional information against
cost, in both time and resources. These considerations need to include selection of analytical
methods that meet the goals of the evaluation and minimize the number of non-detect samples in
the range of interest in the exposure assessment. DQOs, established for an exposure assessment
during planning and scoping (see Section 3.1), outline minimum performance and acceptance
criteria. Determining whether existing data meet an exposure assessment's DQOs is critical to
assessing whether the data are useful for the assessment. Often, existing data do not completely
align with the DQOs but are sufficient to inform the assessment team on how to proceed. For
example, air pollution sampling conducted as part of a network to track pollution trends is also
useful in representing exposure concentrations at a regional or local level, depending on the
locations of the samples. The assessor needs to consider potential exposure misclassification.
Using lower-quality data in an exposure assessment might be acceptable if the limitations in the
data do not affect the results significantly in the absence of more accurate information. For
example, sensitivity analyses or simulations can reveal whether different scenarios or
assumptions lead to similar conclusions. In these cases, an assessor needs to explain in the
exposure characterization why the limitations in the data do not invalidate conclusions. In some
cases, considering inadequate or partially relevant data when they are the only data available
might still yield some information. If these data are used, the exposure characterization needs to
state the uncertainty and resulting limitations clearly.
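The following sketch illustrates the kind of simple sensitivity check described above. It is not an EPA method; the dose equation, screening value and alternative input values are hypothetical examples used only to show how an assessor might test whether a conclusion is robust to an uncertain assumption.

# Minimal sketch (hypothetical values): recompute an exposure estimate under alternative
# assumptions for an uncertain input and check whether the conclusion changes.

def soil_ingestion_dose(conc_mg_per_kg, intake_mg_per_day, body_weight_kg=80):
    """Average daily dose (mg/kg-day) from incidental soil ingestion."""
    return conc_mg_per_kg * intake_mg_per_day * 1e-6 / body_weight_kg

screening_level = 1e-4  # hypothetical comparison value (mg/kg-day)
for intake in (50, 100, 200):  # alternative soil ingestion assumptions (mg/day)
    dose = soil_ingestion_dose(conc_mg_per_kg=30, intake_mg_per_day=intake)
    print(f"intake={intake} mg/day -> dose={dose:.1e} mg/kg-day, below screen: {dose < screening_level}")

# If the conclusion holds across plausible assumptions, the data limitation is less likely
# to affect the assessment's results, and that reasoning can be documented in the exposure
# characterization.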
Establishing a Quality Assurance Project Plan
Developing a sound QA project plan is critical to the success of any data collection effort. EPA
defines a QA project plan as a written document that describes the QA procedures, QC
specifications and other technical activities, the implementation of which ensures the results of
the project or task will meet project specifications. QA project plans describe and document
primary data collection, secondary data usage and data processing (such as modeling) activities
that EPA funds (U.S. EPA 2018a).
EPA has compiled guidance and information about preparing a QA project plan on the Quality
Management Tools—QA Project Plans website. This website includes links to guidance on
preparing QA project plans for environmental data collection, modeling, secondary research data
and other topics (U.S. EPA 1999a). The EPA Requirements for Quality Assurance Project Plans
guidance document (U.S. EPA 2001c) outlines the specifications for QA project plans prepared
for activities that EPA conducts or supports. In addition to discussing project management,
assessment and oversight needs, this document details requirements for data generation,
acquisition, validation and usability. The website also provides several examples of QA project
plans. Box 5-2 identified data quality resources, such as guidance for preparing QA project plans
for environmental data collection, modeling, secondary research data and other topics, including
example documents. Some individual Agency programs also might have information about
preparing a QA project plan that considers quality concerns specific to their projects and
missions. Agency exposure assessors are encouraged to consult with their programs to obtain
specific procedures and guidelines including data submission and review procedures. As with the
planning and scoping process, the QA project plan documents the implementation activities
needed to ensure that the results of the project or task meet project specifications.
Obtaining Peer Input to Data Review
With few exceptions, data documentation (e.g., sampling and implementation plans, QA project
plans, SOPs, data analysis plans) and data undergo some level of review, as EPA's quality
process website describes. Various scientific and technical experts inside and outside the Agency
provide input to the development of many Agency work products. EPA's Peer Review
Handbook, 4th Edition (U.S. EPA 2015c) identifies the following categories of review in the
evaluation of data:
• Peer involvement. A process whereby Agency staff involve subject matter experts from
outside their program in one or more aspects of work product development. Peer
involvement includes outreach to and participation by the broad scientific communities
beyond the Agency (external) and within the Agency (internal).
• Peer input. Ongoing discussions during the development of the work product. Peer input
is a form of peer involvement that generally connotes an interaction during the
development of an evolving Agency work product, providing an open exchange of data,
insights and ideas.
• Peer review. An evaluation of a work plan, preliminary draft or the final objective expert
evaluation of the work product. Peer review is a documented critical review of a specific
Agency scientific or technical work product.
Peer input, sometimes referred to as peer consultation, usually involves a one-time interaction or
a limited number of interactions. Peer input is encouraged during the early stages of the project
or as part of the culmination of the work product, as appropriate, or both. Evaluating potential
peer input requirements early in the process helps ensure allocation of adequate resources. In
addition, peer input considerations are integral to setting assessment milestones and schedules.
Major exposure assessments such as those involving controversial methodological or scoping
issues are often subject to peer review. Other exposure assessment products that might warrant
peer review include observational human exposure measurement studies (see Section 7.2.10),
probabilistic exposure analyses (see Section 8.3.4), community-based exposure assessments,
aggregate and cumulative exposure assessments and variability and susceptibility evaluations
within populations.
Evaluating Data Quality
Upon receipt of data generated during sampling, an assessor and team members (e.g., the quality
assurance staff) review the quality of the data following the same process used to assess existing
data. These new data need to meet the same five general assessment factors outlined in EPA's
Assessment Factors: A Summary of General Assessment Factors for Evaluating the Quality of
Scientific and Technical Information (U.S. EPA 2003a) and described above in the introduction
to this section (see Section 5.3). The evaluation includes a data verification and validation
process used to evaluate whether data have been generated according to specifications, satisfy
acceptance criteria and are appropriate and consistent with their intended use. Data verification is
a systematic process for evaluating performance and compliance of a set of data when compared
to a set of standards to ascertain their completeness, correctness and consistency using the
methods and criteria defined in the project documentation. Data validation occurs after the data
verification process and uses information from the project documentation to determine the
usability of the data in light of its measurement quality objectives and to ensure that results
obtained are scientifically defensible.
Validating Data and Reviewing Quality of Sample Collection and Analysis Methods
EPA has developed several validated protocols for sample collection and analysis. Before using
data, an exposure assessor needs to review the data collection and analysis protocols to
determine if these methods have been validated. EPA's Guidance on Environmental Data
Verification and Data Validation EPA QA/G-8 explains how to implement data verification and
data validation in the context of EPA's Quality System and also provides practical advice and
references (U.S. EPA 2002e). If the use of validated methods is not possible, an exposure
assessor needs to consider what effect having data of unknown quality has on the confidence in
the conclusions.
Data validation is the process of reviewing laboratory data to identify potential QA/QC issues.
During data validation, analysts might assign data qualifiers to values for individual chemicals.
Examples of qualifiers used under EPA's Contract Laboratory Program for the Superfund
Program (U.S. EPA 2010c) to indicate QA/QC issues include:
• B (blank): The analyte was found in blank samples.
• J (judgment): The analyte is present but the concentration value is estimated.
• U (undetected): The sample was analyzed but the analyte was not detected at the
detection limit.
• R (reject): The quality control indicates that the data are unusable.
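As a minimal illustration of how such qualifiers can be applied during data review, the sketch below screens a small hypothetical dataset. The qualifier letters follow the Contract Laboratory Program convention described above, but the record structure and handling rules are illustrative assumptions, not EPA software or program policy.

# Minimal sketch (hypothetical records): screening results by validation qualifiers.

records = [
    {"analyte": "benzene", "result_ug_L": 5.0, "qualifier": None},  # detected, unqualified
    {"analyte": "benzene", "result_ug_L": 2.0, "qualifier": "J"},   # estimated concentration
    {"analyte": "benzene", "result_ug_L": 1.0, "qualifier": "U"},   # non-detect at the detection limit
    {"analyte": "benzene", "result_ug_L": 4.0, "qualifier": "R"},   # rejected by QC review
]

usable = [r for r in records if r["qualifier"] != "R"]       # rejected results are excluded
nondetects = [r for r in usable if r["qualifier"] == "U"]    # handled as discussed in Section 5.3.3
estimated = [r for r in usable if r["qualifier"] == "J"]     # retained but flagged in the uncertainty discussion

print(f"{len(usable)} usable, {len(nondetects)} non-detects, {len(estimated)} estimated values")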
During a field study and subsequent analysis, several blank samples and duplicates are collected.
Blank samples (e.g., trip blanks, field blanks, laboratory blanks) are samples known to be free of
contamination that are carried through the sampling program. Trip blanks accompany the empty
sample bottle(s) to the field and the laboratory for analysis. Field blanks, opened in the field,
determine if field sampling procedures have contaminated samples. Laboratory blanks indicate
potential sample processing contamination. Detection of the chemical in any of these blanks
during analysis indicates that sampling or analytical processes have resulted in sample
contamination.
Duplicate samples are two samples collected in one location using identical sampling techniques.
Theoretically, analysis of these two samples would produce identical results. In reality, analysis
results rarely are identical. Data validation, however, assesses the magnitude of the difference to
identify possible quality issues. Duplicate laboratory samples demonstrate whether the laboratory
achieves acceptable method precision at the time of analysis.
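The sketch below illustrates two routine checks implied by this discussion: comparing a result against an associated blank detection and computing the relative percent difference (RPD) between a sample and its duplicate. The thresholds shown are hypothetical examples; individual programs specify their own acceptance criteria.

# Minimal sketch (hypothetical values and thresholds) of two field/laboratory QC checks.

def flagged_by_blank(sample_conc, blank_conc, factor=5.0):
    """Flag a result that is not clearly above a detection in an associated blank."""
    return blank_conc > 0 and sample_conc < factor * blank_conc

def relative_percent_difference(primary, duplicate):
    """RPD between a sample and its duplicate; large values suggest poor precision."""
    return abs(primary - duplicate) / ((primary + duplicate) / 2.0) * 100.0

print(flagged_by_blank(sample_conc=2.0, blank_conc=0.8))        # True: possible contamination
print(round(relative_percent_difference(10.0, 12.0), 1))        # 18.2 (% RPD)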
An assessor examines data quality concerns raised during validation because the data might be
insufficient for use in an exposure assessment (e.g., a large number of rejected data can skew
evaluations). The exposure characterization identifies any limitations with data use.
5.3.3. Assessment—Using Data to Evaluate Exposures
Addressing Non-detect Values
All analytical methods have sensitivity limitations, referred to as the detection limit,
quantification limit, method detection limit, reporting limit or other similar term. Analytical
chemistry datasets often will include values that are lower than limits deemed reliable enough to
report as numerical values (i.e., non-detects, sample quantitation limits). For these samples, the
actual presence or concentration of the chemical in the medium is unknown. An assessor,
therefore, determines how to represent these data in an exposure assessment, taking into account
methods identified by program guidance and practice. Although the literature describes a variety
of techniques, no single procedure is appropriate for all exposure assessment circumstances; an
assessor needs to select the appropriate method for a given situation. Techniques for analyzing
non-detect datasets can be grouped into three classes (Helsel 1990), described below and
illustrated in the sketch that follows the list: simple substitution methods, distributional methods
and robust methods.
• Simple substitution methods involve using a single value as a surrogate for each non-
detect value. Frequently used substitutions include the detection limit, half the detection
limit or zero. In statutes requiring health protective standards, a worst-case approach
might use the detection limit as a surrogate, which results in an upward bias in the data.
On the other hand, assigning all non-detect values as zero biases the mean downward.
Using half the detection limit as the surrogate seeks to balance the upward and downward
biases. Depending on the number of non-detects, the overall distribution and standard
deviation of the dataset might be severely biased, prompting evaluation by the exposure
assessor.
• Distributional methods use the detected values in the dataset to extrapolate values for
the non-detects. Several statistical analyses are available to extrapolate data, such as log-
probit analysis. These methods are most useful for situations in which the dataset
contains enough data points above the detection limit to define the distribution function
(e.g., lognormal) for exposure values with an acceptable degree of confidence.
• Robust methods generally assume a distribution only for the non-detect values rather
than the entire dataset. The non-detect values are extrapolated using regression
techniques. These methods do not assume that data above the detection limit follow a
defined distribution that then can be applied to the non-detect values. These methods
involve somewhat more data manipulation than distributional methods.
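To make the three classes above concrete, the sketch below applies them to a small hypothetical dataset. It is illustrative only: the data and detection limit are assumptions, the distributional step is shown only conceptually, and real assessments follow program guidance and use dedicated tools (e.g., ProUCL) that treat censoring formally.

# Minimal sketch (hypothetical data): contrasting approaches to non-detect values.
import math
import statistics

DL = 1.0                                   # detection limit (ug/L)
data = [None, None, 1.2, 1.8, 2.5, 4.0]    # None marks a non-detect sample

# 1. Simple substitution: replace each non-detect with a surrogate (here DL/2).
substituted = [DL / 2 if x is None else x for x in data]
print("substitution mean:", round(statistics.mean(substituted), 2))

# 2. Distributional methods (conceptual step only): characterize a distribution
#    (e.g., lognormal) from the detected values to represent the censored portion.
detected = [x for x in data if x is not None]
log_mean = statistics.mean([math.log(x) for x in detected])
log_sd = statistics.stdev([math.log(x) for x in detected])
print("lognormal fit to detected values:", round(log_mean, 2), round(log_sd, 2))

# 3. Robust methods (e.g., regression on order statistics) extrapolate the non-detects
#    from the pattern of detected data without assuming a distribution for the whole
#    dataset; they involve more data manipulation and are not reproduced here.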
Data Quality Assessment: Statistical Methods for Practitioners (EPA QA/G-9S) provides
general guidance on assessing data quality criteria and performance (U.S. EPA 2006c). EPA
developed ProUCL, a comprehensive statistical software package with statistical methods and
graphic tools to address many environmental sampling and statistical issues. The ProUCL user's
guide provides information on the evaluation of non-detects and a discussion of statistical
methods (U.S. EPA 2013a). Additionally, the Office of Pesticide Programs developed specific
guidance for dealing with non-detects in pesticide residues in food products (U.S. EPA 2000a).
An assessor needs to present a transparent analysis and avoid presenting only summary statistics
(e.g., mean concentrations). Information characterizing the dataset (e.g., percentage of non-detect
values, maximum detected value, standard deviation) provides additional context for the
summary statistics. For complex statistical analyses, contacting a statistician for assistance might
be appropriate.
Evaluating Outlier Data
The data analysis should not eliminate outlier data (i.e., data points that are numerically distant
from the other data points in a dataset) unless these data points can be shown to differ from the
other data points in the dataset. Very often, outliers provide useful information to an exposure
assessor. Statistical tests such as the Dixon test are appropriate for determining the presence of
outliers (Dean and Dixon 1951; Dixon 1950; Dixon 1953; Dixon 1960). The ProUCL software
provides graphical techniques and programs to help identify outliers. EPA's Guidance for Data
Quality Assessment provides detailed explanations of the procedures for conducting the two-
sample tests for the evaluation of outliers (U.S. EPA 2000d).
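The sketch below shows the arithmetic of a Dixon-type screen for a single high value in a small sample. The critical values are approximate 95 percent values commonly tabulated for n = 3 to 10 and are included only for illustration; assessors should rely on the cited references or ProUCL for formal tests, and a statistically flagged value is examined further rather than automatically removed.

# Minimal sketch (hypothetical data; approximate critical values) of a Dixon-type screen.

Q_CRIT_95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625, 7: 0.568, 8: 0.526, 9: 0.493, 10: 0.466}

def dixon_q(values):
    """Q statistic for the largest value: its gap to the next value divided by the range."""
    v = sorted(values)
    return (v[-1] - v[-2]) / (v[-1] - v[0])

sample = [1.1, 1.3, 1.4, 1.6, 1.8, 9.5]    # the 9.5 result is numerically distant
q = dixon_q(sample)
print(round(q, 3), q > Q_CRIT_95[len(sample)])   # 0.917, True -> examine the 9.5 result further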
Combining Datasets and Modeling Data
Combining datasets is not always possible and when done needs to be performed carefully. The
circumstances under which each set of data was collected (e.g., receptor, sampling design,
location, time, sampling methods, detection limits) and the quality (e.g., precision, accuracy,
representativeness, completeness) need to be evaluated. Similarly, combining measured data
with modeled data requires an understanding of the accuracy, representativeness and uncertainty
of both datasets. An exposure assessor also needs to understand the implications of using
combined datasets on resulting conclusions or exposure estimates. For example, differences in
detection limits over time might result in older samples listed as non-detects and more recently
acquired samples listed with detected levels. These changes could result in the need to determine
whether to include the more recent detected levels or a combination of detected and non-detect
concentrations that will affect the calculated exposures. Regardless of whether datasets can be
combined, an assessor needs to provide sufficient background information to explain what was
done and why, including clear documentation of the source of the data and any references.
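One way detection-limit differences are sometimes reconciled before pooling is to re-censor all results at the highest detection limit shared by the datasets. This is a conservative convention that discards information, so its appropriateness depends on the assessment; the sketch below is a hypothetical illustration of the mechanics, not a recommended default.

# Minimal sketch (hypothetical values): harmonizing results reported with different
# detection limits by re-censoring at the higher (older) limit before pooling.

older_dl, newer_dl = 5.0, 0.5                  # detection limits (ug/L) for the two datasets
newer_results = [0.8, 2.0, 6.5, 0.6]           # measured with the more sensitive method

common_dl = max(older_dl, newer_dl)
harmonized = [x if x >= common_dl else None for x in newer_results]   # None = treated as non-detect
print(harmonized)                               # [None, None, 6.5, None]

# The exposure characterization would document this choice, its effect on calculated
# exposures and the alternative of analyzing the datasets separately.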
Bounding Estimates
A bounding estimate is an estimate of exposure that is higher than the highest anticipated
exposure to an individual, lifestage, group or population. Bounding estimates often are used
during screening-level assessments to eliminate exposure pathways or chemicals of limited
importance from further consideration or to determine whether more data and information are
needed to evaluate other exposure pathways or agents.
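As a simple illustration of a bounding estimate, the sketch below applies deliberately high, hypothetical inputs to a generic intake equation (concentration times intake rate, frequency and duration, divided by body weight and averaging time). All values and the comparison step are assumptions for illustration only.

# Minimal sketch (hypothetical upper-end inputs): bounding estimate for drinking water ingestion.

c_max = 0.05        # highest measured concentration (mg/L)
ir = 2.5            # upper-end water intake (L/day)
ef = 350            # exposure frequency (days/year)
ed = 30             # exposure duration (years)
bw = 80             # body weight (kg)
at = ed * 365       # averaging time (days)

bounding_add = (c_max * ir * ef * ed) / (bw * at)
print(f"bounding average daily dose = {bounding_add:.1e} mg/kg-day")
# If even this deliberately high estimate is below the level of concern, the pathway can
# be set aside; if not, more refined data or analysis is warranted.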
Calculating Exposure Point Concentrations
Exposure point concentrations (EPCs) provide an estimate of exposure parameters in specific
media (e.g., air, water, soil, sediment). For example, EPCs in the Superfund program assess
chronic exposure scenarios (U.S. EPA 2002a); the calculation of the EPC depends on the number
of samples and statistical analysis of the data (U.S. EPA 1992d; U.S. EPA 2013a). Sampling data
and in some cases, modeling data, are used to calculate the media-specific EPCs. The EPC is
determined for each exposure unit in which a receptor moves and is exposed to an environmental
medium at a specific frequency (e.g., days/year) and for a specific duration (e.g., years).
Exposure assessors need to consult with their specific programs for guidance on calculating an
EPC consistent with any legislative mandate. EPA recommends coordinating with an appropriate
program because legislative mandates might require the use of the maximum concentration,
mean concentration, upper confidence limit on the mean or other statistical value needed to
represent exposures. Coordination with the program early in the process will inform data
sampling and evaluation to ensure adherence to the statistical requirements.
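The sketch below computes one common EPC statistic, a one-sided 95 percent upper confidence limit (UCL) on the mean using the Student's t distribution. It is illustrative only: the data are hypothetical, the t-based UCL assumes approximately normal, uncensored data, and ProUCL and program guidance identify when other UCL methods or other statistics (e.g., the maximum concentration) are required.

# Minimal sketch (hypothetical data): Student's-t 95% UCL on the mean for an exposure unit.
import numpy as np
from scipy import stats

conc = np.array([2.1, 3.4, 1.8, 5.6, 4.2, 2.9, 3.7, 6.1])   # soil concentrations (mg/kg)
n = conc.size
mean = conc.mean()
std_err = conc.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)      # one-sided 95% critical value

ucl95 = mean + t_crit * std_err
print(f"n={n}, mean={mean:.2f} mg/kg, 95% UCL={ucl95:.2f} mg/kg")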
5.4. Acquiring and Evaluating Data for an Exposure Assessment
When developing the analysis plan for an exposure assessment, an exposure assessor determines
the type of data needed (see Section 3.3). This section provides information about existing data
sources and methods for collecting new data. The information sources and the questions
provided in the following paragraphs represent some of the many resources and concerns that an
exposure assessor might encounter.
Each data type discussed in Section 5.1 has unique characteristics that need evaluation to
determine their appropriateness for an exposure assessment. Sometimes data needed to support
an exposure assessment are available from existing sources. An exposure assessor normally
would consider the following issues before using existing data:
• Identify whether data are available to meet a data need
• Identify possible surrogate data
• Obtain and critically review the data to assess usability in an exposure assessment.
Numerous sources are available and accessible to assessors seeking data. These sources can
provide data that characterize local, state, regional and national conditions (e.g., location-specific
chemical concentrations, state cancer registries, U.S. census demographics). Data also might be
available from peer-reviewed scientific literature (e.g., epidemiological studies considering
exposures versus health effects). One resource useful in locating this information is EPA's
Health and Environmental Research Online (HERO) database, which provides an easy way to
access the scientific literature behind EPA science assessments.
An assessor can use one of many approaches to access existing data. For example, all federal
agencies and departments that spend more than $100 million per year on research and
development are required to increase public access to peer-reviewed scientific research
publications and research data (OSTP 2013). That memorandum, "Increasing Access to the
Results of Federally Funded Scientific Research," indicates that all federally funded scientific
research should be available to the public, the scientific community and industry to the greatest
extent feasible, consistent with the following: applicable law and policy; Agency mission;
resource constraints; U.S. national, homeland and economic security; and the specific objectives
of the memorandum. EPA's efforts to increase public access are detailed in A Plan To Increase
Access to Results of EPA-Funded Scientific Research (U.S. EPA 2016c). Other federal agencies
also have developed plans to make data available, including the U.S. Department of Health and
Human Services (U.S. HHS 2016); the Food and Drug Administration (U.S. FDA 2015); and
CDC (2015a). Table 5-6 (at the end of this chapter) highlights some of the more common data
sources for exposure assessments EPA and other federal agencies develop.
EPA's Guidance for Data Usability in Risk Assessment (Part A) provides a foundation for
rigorously reviewing and making nationally consistent decisions about the minimum quality and
quantity of environmental data required to support a Superfund risk assessment (U.S. EPA
1992b) with specific guidance for federal facilities (U.S. EPA 2005i). These documents provide
general concepts that can be useful beyond Superfund. EPA's OPPT recommends assessors
consider the questions listed in Table 5-2 when evaluating any type of existing data for use in an
exposure assessment (EPA's Resources for Planning Projects that Use Existing Data website).
Table 5-2. Questions to Ask When Evaluating/Considering Data

Questions to ask when evaluating existing data: What was the objective of the study or program that gathered the data (e.g., characterizing contamination, establishing baseline cancer rates)? Were the study objectives and designs suitable for the purpose of the exposure assessment?
Questions to ask when considering new data: Do the objectives, methods, scope and size of the proposed sampling program support the objectives of an exposure assessment (e.g., characterizing contamination)?

Questions to ask when evaluating existing data: What were the data collection and analytical methods? Has an authoritative body adopted these methods (e.g., the National Institute for Occupational Safety and Health)? Do they meet project data quality objectives, or does the scientific community consider them acceptable?
Questions to ask when considering new data: Are appropriate data collection and analytical methods available? Has the scientific community adopted or otherwise accepted those methods? Does EPA have standard operating procedures for the methods? How many samples will be needed to meet the study objectives?

Questions to ask when evaluating existing data: What quality assurance/quality control procedures, if any, were used?
Questions to ask when considering new data: What quality assurance/quality control procedures are required?

Questions to ask when evaluating existing data: What are the key uncertainties of the study or program data? Will the uncertainty in the data substantially limit their usability in an exposure assessment?
Questions to ask when considering new data: What will the sampling program cost?

Questions to ask when evaluating existing data: What is the proximity of the contaminant source to the exposed individual? For example, if soil is a medium of concern, consideration is given to whether the contamination is located on the property, on a road or at a location distant from the receptor. What is the groundwater flow direction and how does it influence the drinking water source?
Questions to ask when considering new data: Where do samples need to be collected in various media? What soil depths are appropriate to represent exposure? Where do groundwater samples need to be collected to represent impacts on drinking water sources? Are the data available for specific chemical species, or are congener data needed to support the exposure assessment?

Questions to ask when evaluating existing data: What chemicals in the existing dataset lack data, analytical sampling methods and standardized protocols for sampling?
Questions to ask when considering new data: Are new analytical sampling methods available? Can additional sampling be conducted within the needed timeframe?
The data needed for an exposure assessment, as outlined during the planning and scoping process
(see Chapter 3), are not always available from existing sources. When data are critical to an
exposure assessment and no appropriate data are available, an assessor might consider
implementing a sampling program to gather the required data.
Sampling often is a resource-intensive endeavor (i.e., requiring substantial time and money),
necessitating evaluation of the costs and benefits of the sampling program. The questions listed
in Table 5-2, modified from OPPT's Considerations When Evaluating Exposure Assessments
(EPA's Resources for Planning Projects that Use Existing Data website), serve as a guide for
assessing the cost-benefit implications of implementing a sampling program to provide data for
an exposure assessment. Depending on the answers to those questions, an assessor might decide
that:
• A sampling program cannot fill the data gaps (e.g., the appropriate sampling or analytical
methods are unavailable, the uncertainty is too great to reduce the data gap satisfactorily)
• A sampling program sufficient to fill the data gaps clearly is more extensive than is
possible within the resource, time and institutional constraints of an exposure assessment
because, for example, the data collection or analytical methods necessary to meet the
sampling program and assessment objectives require time, expertise or financial
resources beyond the capacity of the organization
• A sampling program would provide valuable information to support an exposure
assessment and would be feasible within the available time, resources and institutional
framework.
Sections 5.4.1 through 5.4.5 discuss the unique aspects of conducting activities to gather data for
an exposure assessment: characteristics of environmental data and environmental sampling;
biomonitoring; compilation of exposure factor information; conduct of questionnaires, surveys
and observations; and modeling. These discussions focus on sampling programs and methods
applicable to exposure assessment. EPA programs have developed many guidance documents
and compiled resources that detail the specifics of planning and implementing a sampling
program specific to legislative mandates.
5.4.1. Environmental Data
Sources of environmental data include:
• Location-specific environmental sampling and summary documents
• Local, regional or national monitoring databases
• Regulatory submittals for new and existing products
• Local, state and federal agency studies
• Peer-reviewed scientific literature.
Researchers collect environmental data for many reasons using a variety of sampling methods.
Table 5-3 describes some of the aspects of common environmental data measurements, including
typical measurement objectives, typical target media and examples of sources of existing data.
For an exposure assessment, evaluation of environmental data focuses primarily on the spatial
and temporal conditions that affect how well the existing data represent the conditions addressed
in the assessment. Key evaluation questions include:
• Were the data collected close to an exposure point of concern in space and time?
Media measurements collected close to the point of contact for the population or
individual in space and time are preferable to measurements far removed geographically
and temporally. In addition, considering the frequency and duration of exposure is
important, for example, acute, subchronic and chronic; the age ranges of exposed
individuals such as children, adolescents and adults; and the types of activities that could
result in exposure such as children playing, adolescents trespassing, outdoor workers
maintaining surface areas and construction workers digging to greater depths. The
certainty with which the data represent the point of contact tends to decrease as the
distance in space and time from the point of contact increases. For example, an outdoor
air measurement alone cannot adequately characterize indoor exposure. Likewise, shelf
studies of consumer products or market basket studies of foods that use regional or
national sample groups can provide only a limited understanding of point-of-contact
concentrations for localized areas or population groups.
Table 5-3. Common Environmental Data Measurements

Fixed-location media monitoring
  Typical measurement objectives: Establish long-term trends at specific sampling locations; identify changes in existing conditions.
  Typical target media: Air (indoor and outdoor); vapor intrusion (indoor); groundwater; soil/indoor dust; surface water; sediment; biota (e.g., fish, crabs).
  Examples of sources of existing data: National Stream Quality Accounting Network; water quality networks; Air Quality System (EPA); National Lakes Fish Tissue Study, National Rivers and Streams Assessment Fish Tissue Survey, etc.(a); Remedial Investigation site-specific sampling under the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA).

Short-term media monitoring
  Typical measurement objectives: Characterize conditions at a location for a relatively short period of time.
  Typical target media: Air (indoor and outdoor); vapor intrusion (indoor); groundwater; soil/indoor dust; surface water; sediment; biota (e.g., fish, crabs).
  Examples of sources of existing data: CERCLA removal actions; Resource Conservation and Recovery Act; special studies of environmental media; indoor air monitoring.

Source monitoring
  Typical measurement objectives: Track chemical release rates to the environment from sources; characterize the relationships between release amounts and various source operating parameters; ensure regulatory compliance; identify disposal options for waste streams.
  Typical target media: Air; groundwater; surface water; waste streams; drinking water; mobile sources.
  Examples of sources of existing data: National Emissions Inventory (EPA); Toxics Release Inventory (EPA); stack sampling; effluent sampling; leachate sampling from landfills; incinerator ash sampling; fugitive emissions sampling; pollution control device sampling.

Consumer product sampling
  Typical measurement objectives: Characterize chemical concentrations for exposure assessment; assess the quality of the food supply; ensure regulatory compliance.
  Typical target media: Drinking water; food; consumer products(b) (e.g., personal care products, household chemicals, pesticides).
  Examples of sources of existing data: Tap water sampling; water supply sampling; prepared food diet sampling; shelf surveys; fish sampling from contaminated water bodies; sampling of crops and livestock associated with contaminated soils and surface water; GIS (geographic information systems).

Microenvironmental sampling
  Typical measurement objectives: Evaluate ambient conditions in a defined area; identify exposure concentrations.
  Typical target media: Indoor/ambient air; dust; contaminated surfaces; residences, offices, commercial establishments; recreational settings (e.g., playgrounds, swimming pools).
  Examples of sources of existing data: Special studies of residences; radon measurements; office building monitoring.

Personal monitoring (e.g., breathing zone samples, skin patch samples)
  Typical measurement objectives: Assess exposure to airborne chemicals; characterize dermal exposure.
  Typical target media: Ambient air; personal air breathing zone (active/passive); indoor air; skin; duplicate plate for food.
  Examples of sources of existing data: Observational human exposure measurement study results published in the peer-reviewed literature; industrial hygiene studies; pesticide applicator surveys.

(a) https://www.epa.gov/fish-tech/fish-tissue-data-collected-epa
(b) https://www.epa.gov/stationary-sources-air-pollution/clean-air-act-guidelines-and-standards-solvent-use-and-surface
• Under what environmental conditions were the data collected?
Data characterizing environmental conditions (e.g., groundwater flow, soil composition,
prevailing wind direction) are more representative when measured closer to the point of
contact. Again, as the distance from the point of contact or location increases, so does the
uncertainty about how well the data represent local conditions. For instance, an aquifer
might have an overall flow toward one direction, but local topography (e.g., streams,
hills) might alter the direction of flow.
• How might the chemical concentrations vary over space and time?
Chemical concentrations can vary considerably from place to place and over time
because of changing use patterns, bioaccumulation, degradation and migration. Changes
are of particular concern when using the measured data to extrapolate trends over long
periods, such as a lifetime. Exposure assessors frequently use transport and dispersion
models to understand how chemical concentrations vary over space and time.
• How might the chemical concentration compare to background concentrations?
Background chemical concentrations might derive from naturally occurring or
anthropogenic sources (U.S. EPA 2002e). Naturally occurring chemicals are those not
influenced by human activity, while anthropogenic chemicals are natural and human-
made substances present in the environment due to human activities (U.S. EPA 2002b;
U.S. EPA 2007f). Program-specific guidance can influence the degree to which
background is considered in the sampling design. The exposure assessor needs to consult
the program for further guidance.
• If data were collected from a microenvironmental study, do these data represent an
exposure assessment population?
Microenvironmental measurement approaches are based on the concept that specific
spaces are relatively homogeneous and a single measurement characterizes conditions in
that zone. These zones are smaller than the room and typically closer in size to a personal
breathing zone. For example, typical microenvironments include parts of a house or the
entire house, an office or other indoor setting and an automobile. Microenvironments can
be divided into time segments (e.g., kitchen-day, kitchen-night). This approach can
produce measurements closely linked with the point of contact in both location and time.
Because microenvironmental studies represent a very limited environment, an assessor
needs to establish that the measurements are representative of the population of interest in
an exposure assessment before generalizing them to a population.
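When microenvironmental measurements are judged representative, they are commonly combined with time-activity information into a time-weighted average exposure concentration. The sketch below shows that arithmetic; the microenvironments, concentrations and hours are hypothetical values for illustration.

# Minimal sketch (hypothetical values): time-weighted average concentration across microenvironments.

microenvironments = {
    "kitchen (day)":   {"conc_ug_m3": 12.0, "hours": 3},
    "bedroom (night)": {"conc_ug_m3": 4.0,  "hours": 9},
    "office":          {"conc_ug_m3": 6.0,  "hours": 8},
    "outdoors":        {"conc_ug_m3": 2.0,  "hours": 4},
}

total_hours = sum(m["hours"] for m in microenvironments.values())
twa = sum(m["conc_ug_m3"] * m["hours"] for m in microenvironments.values()) / total_hours
print(f"time-weighted average over {total_hours} h = {twa:.1f} ug/m3")   # 5.3 ug/m3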
Environmental sampling can fill data gaps associated with chemical concentrations or EPCs and
physical conditions (e.g., geology, hydrology). These data can either help define an EPC
(e.g., personal monitoring in the breathing zone) or support modeling efforts to characterize
possible exposure routes (e.g., ingestion of groundwater at a distribution point based on
groundwater direction and flow rates).
Questions an assessor might ask include:
• What sample collection and analytical methods are appropriate?
Numerous methods for collecting and analyzing environmental samples exist. The most
appropriate methods depend on the objectives of an exposure assessment regarding:
o Media being sampled (e.g., physical location)
o Biota type
o Required detection limits
o Target analytes
o Sampling program objectives.
• What sample design is appropriate?
The sample design specifies the number of samples to be collected, sampling locations
and sampling time or period (e.g., single samples at multiple locations, seasonal samples
or multiple samples at various times in the year to establish trends) and the rationale or
justification for each of these elements. The sampling design also includes appropriate
QC samples, such as the number of duplicate samples, field blanks and laboratory blanks.
A well-planned sample design ensures the data are representative and scientifically
defensible for their intended use. Sample designs range from simple random sampling
patterns to statistically stratified sampling patterns. As with the selection of the collection
and analytical methods, the most appropriate sample design depends on the objectives of
the exposure assessment and an understanding of the tolerable level of uncertainty (U.S.
EPA 2002d). Section 7.2 presents more information on sampling and study designs.
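A first-cut sample-size calculation is often part of weighing a candidate design against the tolerable level of uncertainty. The sketch below uses the familiar normal-approximation formula for estimating a mean to within a specified margin of error; the inputs are hypothetical, and DQO-based designs involve additional considerations (e.g., decision error rates, stratification, QC samples).

# Minimal sketch (hypothetical inputs): approximate number of samples needed to estimate a
# mean concentration to within a specified margin of error, assuming roughly normal data.
import math

z = 1.96        # two-sided 95% confidence
sigma = 4.0     # preliminary estimate of the standard deviation (mg/kg)
margin = 2.0    # desired half-width of the confidence interval (mg/kg)

n = math.ceil((z * sigma / margin) ** 2)
print(f"approximately {n} samples")    # 16 in this example, before adding QC samples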
5.4.2. Biomonitoring Data
Sources of biomonitoring data include location-specific studies; local, state or national surveys
or registries; and peer-reviewed scientific literature. Table 5-4 describes some common
biomonitoring measurements, including typical measurement objectives, media sampled and key
sources for this type of data. Other considerations include: (1) differences in metabolic rate
across individuals and effects on biomarker levels, (2) time between exposure and sample
collection, (3) differences in metabolic rate depending on the route of exposure for rapidly
metabolized compounds and (4) influences of body mass on biomarker levels in different fluids.
Table 5-4. Common Biomonitoring Measurements

Typical measurement objectives: Confirm presence of a chemical in the body without establishing an exposure source; contribute to exposure assessments by measuring the internal concentration of a chemical; assess the relationship between biomarker concentrations and body burden; evaluate trends in concentrations of specific chemicals across populations; evaluate trends in chemical-specific concentrations over time (e.g., decreases in chemical concentrations in blood).
Typical target media: Adipose tissue; blood; breath; hair; nails; urine; breast milk.
Examples of sources of existing data(a): NHANES; blood lead sampling in children; ATSDR National Exposure Registry; National Human Adipose Tissue Survey.

(a) NHANES = National Health and Nutrition Examination Survey, available at http://www.cdc.gov/nchs/nhanes.htm; ATSDR = Agency for Toxic Substances and Disease Registry; National Human Adipose Tissue Survey available at http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=55204
Evaluation questions specific to biomonitoring data include:
• Are suitable biomarker and analytical methods available to evaluate exposures?
Biomarkers are available to assess exposures to several hundred chemicals. The exposure
assessor needs to consider whether a biomarker is available for the specific chemical of
interest and whether the analytical methods used to detect the chemical concentrations of
concern in the media sampled (e.g., blood, serum, urine) are adequate. This information
is an important consideration in determining whether biomonitoring data are useful for
the exposure assessment.
• Are suitable reference levels available to interpret biomarker results?
Reference levels help interpret biomarker study results. Biomonitoring studies of
chemical concentrations provide physicians and public health officials with reference
values so they can determine if people have been exposed to levels of the chemical higher
than found in the general population and whether actions are needed to reduce potential
exposures (CDC 2012b; CDC 2017). Developing reference levels can require determining
whether normalizing results across participants will require additional measurements, for
example, creatinine in urine or lipids in breast milk.
Whether biomonitoring data need normalizing also is an important consideration during
study design to optimize data collection.
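For example, creatinine adjustment of urinary biomarker results is one common normalization. The following minimal sketch (illustrative values, not a prescribed method) divides each analyte concentration by the participant's urinary creatinine concentration to express results per gram of creatinine.

# Minimal sketch: creatinine-normalizing urinary biomarker concentrations so
# results are comparable across participants with different urine dilution.
# Participant identifiers and values are illustrative assumptions.
samples = [
    {"participant": "P01", "analyte_ug_per_L": 4.2, "creatinine_g_per_L": 1.1},
    {"participant": "P02", "analyte_ug_per_L": 2.8, "creatinine_g_per_L": 0.6},
]

for s in samples:
    # Creatinine-adjusted result in micrograms of analyte per gram of creatinine.
    s["analyte_ug_per_g_creatinine"] = s["analyte_ug_per_L"] / s["creatinine_g_per_L"]
    print(s["participant"], round(s["analyte_ug_per_g_creatinine"], 2))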
• Do confidentiality concerns restrict the access to or use of biomonitoring data?
Sometimes assessors cannot access or release biomonitoring data publicly due to
concerns or requirements about maintaining the confidentiality of personal data. For
example, assessors can access NHANES "public-use" datasets but not other datasets
because of confidentiality restrictions. Section 7.2.10 discusses further considerations
regarding data confidentiality. The National Center for Health Statistics (NCHS) has
established Research Data Centers to allow researchers access to restricted data. The
Research Data Centers host restricted data from a variety of groups within the U.S.
Department of Health and Human Services. Release of the data follows strict protocols to
protect participant confidentiality. For further information about Research Data Centers,
see the NCHS Research Data Center website.
• Are biomonitoring data the only data available to assess exposure?
Body burden or biomarker data represent the amount of a chemical inside the body of an
exposed individual. These data can establish the presence of a chemical and quantify the
concentration of the chemical or its metabolite in the sampled matrix. Biomonitoring
data, however, do not identify a specific source of exposure or the period of exposure
(e.g., years or days ago). Rather, exposure assessors have used body burden and
biomarker data to supplement environmental monitoring data and modeling activities in
estimating exposure. Increasingly, however, advances in science and research are making
possible more robust reverse and forward dosimetry models that support evaluation of
associations between biomonitoring data and exposures (see Section 6.2.3) (e.g., Morgan
et al. 2008; Tulve et al. 2011).
• What data quality information is available for the dataset?
EPA's general QA/QC procedures for collecting data can be applied to the collection of
biomonitoring data. For example, for existing data such as NHANES, reviewing the
study documentation to understand the QA/QC procedures is essential to ensure the data
meet the study objectives (Berman et al. 2001; CDC 2016).
Developing Data Quality Objectives and Identifying Sampling and Analysis Methods
The objectives of an exposure assessment and the tolerable level of uncertainty will drive the
selection of sample collection methods, analytical methods and study design. Consulting with
experts in biomonitoring often is helpful in developing a scientifically sound biomonitoring
study. Considerations for sample collection and analytical methods in biomonitoring studies are
similar to those for environmental sampling. Special considerations apply, however, when
gathering data from people. EPA policy requires sampling programs that include gathering data
from individuals to address confidentiality, ethical issues and protocol reviews and approvals, as
detailed in Section 7.2.10.
Biomonitoring studies can address data gaps associated with:
• Possible exposures
• Baseline conditions
• Internal chemical or metabolite concentrations.
Evaluating Biomonitoring Data
Although biomonitoring data might not provide a direct link between an exposure source and a
health effect, they can influence the outcome of an exposure assessment. For example,
biomonitoring data that report chemical or metabolite concentrations can confirm that exposures
are occurring, which can direct an exposure assessment. For some chemicals, these data also can
provide information about internal doses, which can support modeling efforts. Box 5-3 lists
useful guidelines and resources associated with conducting biomonitoring studies.
Box 5-3. Guidance Documents and Resources for Planning and Implementing a
Biomonitoring Program
• CDC (2015b) CDC Specimen-Collection Protocol for a Chemical-Exposure Event.
• NRC (2006b) Human Biomonitoring for Environmental Chemicals.
• Laboratory Systems and Standards website. Association of Public Health Laboratories.
• Publications and Products website. Division of Laboratory Sciences website. CDC.
• Publications and Products website. National Biomonitoring Program website. CDC.
• Publications and Products website. National Center for Environmental Health website. CDC.
Biomonitoring is a rapidly advancing science. The available methods and data applications are
evolving constantly; better and more sophisticated tools can quickly replace methods currently
considered state-of-the-science. Consulting with experts in biomonitoring is critical to
developing a scientifically defensible sampling program or study. Questions an assessor might
ask these experts include:
• When conducting biomonitoring, what sample collection methods, analytical
methods and study design are appropriate?
Similar to environmental sampling, a biomonitoring project that involves collecting
fluids, tissues, breath, hair or nails considers sample collection and analytical methods.
Sample design considerations are similar for environmental and biomonitoring studies.
• What special considerations apply when gathering data from people?
Sampling programs that include gathering data from individuals are subject to several
considerations beyond sample collection, analysis and design. These programs also need
to address confidentiality, ethical issues and protocol reviews. Section 7.2.10 provides a
detailed discussion of the implications associated with conducting observational human
exposure measurement studies.
5.4.3. Exposure Factor Information
Key sources of exposure factor information include:
• EPA exposure factors
o EPA's Exposure Factors Handbook: 2011 Edition (U.S. EPA 2011d)
o EPA's Child-Specific Exposure Scenarios Examples (U.S. EPA 2014a)
o EPA program, office or region default values that are transparent in their presentation
of information, source of data and application in the absence of site-specific
information [e.g., Human Health Evaluation Manual, Supplemental Guidance:
Update of Standard Default Exposure Factors (U.S. EPA 2014g)].
• Datasets EPA has compiled
o Consolidated Human Activity Database
• Datasets other federal agencies have compiled
o American Time Use Survey
o Food Commodity Intake Database
o National Health and Nutrition Examination Survey (NHANES)
o Continuing Survey of Food Intakes by Individuals
• Peer-reviewed scientific literature.
EPA's Exposure Factors Handbook: 2011 Edition (U.S. EPA 2011d) is the most widely known
source of exposure factor data. The summary data and mean values cited in these documents
are based on published data and information covering the general population (e.g., food
survey findings) or populations in a specific group or region (e.g., fish
consumption by Native Americans, outdoor activity in the Northeast). EPA also presents
confidence ratings that exposure assessors can use when evaluating data quality. A higher
confidence rating indicates higher data quality.
Some EPA programs have derived default values for exposure factor data, used in the absence of
location- or scenario-specific information. For example, the derivation of Maximum
Contaminant Levels can use default drinking water intakes. The use of defaults in risk
assessment raises concerns, however, and EPA's policies about using defaults have been the
subject of scrutiny. Public commenters raised this issue, and the Office of the Science Advisor
addressed it in its 2004 "Staff Paper," An Examination of EPA Risk Assessment Principles and
Practices (U.S. EPA 2004c). In its 2009 publication, Science and Decisions: Advancing Risk
Assessment (NRC 2009), the National Research Council (NRC) outlined several advantages and
disadvantages of using default values. This document provides assessors with additional
perspectives about the application of defaults in an exposure assessment.
Table 5-5 presents common exposure factor data, including typical measurement objectives and
data collection methods and examples of each type of data. Selection of specific exposure factors
needs to consider the ages of the exposed individuals, activity patterns, sensitive individuals such
as pregnant women and consumption patterns (e.g., ingestion of fish, game).
Table 5-5. Common Exposure Factor Information Measurements

Physical characteristics
Typical Measurement Objectives:
• Evaluate traits of individuals that could impact how chemical exposure affects their bodies
Typical Data Collection Methods:
• Direct observation
• Surveys
• Questionnaires
Examples of Types of Exposure Factor Data (age-specific):
• Body weight
• Height
• Skin surface area

Activity frequency and duration
Typical Measurement Objectives:
• Identify how long and how frequently individuals engage in a particular activity
• Determine how often individuals engage in activities that could reduce/increase potential exposures
Typical Data Collection Methods:
• Questionnaires
• Surveys
Examples of Types of Exposure Factor Data:
• Time spent indoors/outdoors
• Frequency of hand washing
• Duration of showering

Intake rates
Typical Measurement Objectives:
• Determine the amount of a substance that individuals could take into their bodies from exposure
Typical Data Collection Methods:
• Population surveys
• Questionnaires
Examples of Types of Exposure Factor Data (age-specific rates):
• Drinking water ingestion
• Fish consumption
• Incidental soil ingestion
• Inhalation rates
When evaluating exposure factor data for use in an exposure assessment, key questions include:
• When choosing to use default values because of a lack of location- or scenario-
specific information, what is the basis for these values?
The use of default values needs careful and thoughtful consideration to ensure that
default values are appropriate for the assessment. Because the use of default values in an
exposure assessment often is unavoidable when specific exposure factor data are lacking,
EPA has worked to provide transparency about the basis for choosing default values and
evidence and policy supporting their use in exposure assessment (NRC 2009). The
selection of exposure factors also needs to consider lifestages, sensitive populations such
as pregnant women and other characteristics unique to the population.
• Are the exposure factor data representative of the exposures being assessed?
Exposure factor data derive from studies of populations. The more the study population
resembles the assessment population in size, age, race, sex, lifeways and socioeconomic
status, the more representative exposure factor data are likely to be of the population
being assessed. Conversely, the more the study population and assessment population
differ, the less representative the exposure factor data likely will be.
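To make these considerations concrete, the following minimal sketch (in Python) shows how selected exposure factors feed a deterministic intake estimate for ingestion of drinking water. All parameter values and the function name are illustrative placeholders, not EPA defaults; assessors would substitute factors appropriate to their assessment.

# Minimal sketch of a deterministic average daily dose for drinking water
# ingestion. Every numeric value below is an illustrative assumption.
def average_daily_dose(conc_mg_per_L, intake_L_per_day, ef_days_per_year,
                       ed_years, body_weight_kg, at_days):
    """Average daily dose (mg/kg-day) for ingestion of drinking water."""
    return (conc_mg_per_L * intake_L_per_day * ef_days_per_year * ed_years) / (
        body_weight_kg * at_days)

add = average_daily_dose(
    conc_mg_per_L=0.005,      # chemical concentration in water (illustrative)
    intake_L_per_day=2.5,     # drinking water intake (illustrative)
    ef_days_per_year=350,     # exposure frequency (illustrative)
    ed_years=26,              # exposure duration (illustrative)
    body_weight_kg=80,        # adult body weight (illustrative)
    at_days=26 * 365,         # averaging time (illustrative)
)
print(f"Average daily dose: {add:.2e} mg/kg-day")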
5.4.4. Questionnaires, Surveys and Observations
Administering questionnaires and surveys or conducting observational human exposure
measurement studies can help address data gaps in exposure factor information. Because
exposure factor data contribute to the development of a conceptual model and exposure scenarios
and the quantitative assessment of exposure, findings are useful to evaluate assumptions about
exposure route, duration and frequency.
Data collected in an observational human exposure measurement study need to be of sufficient
quality and quantity to support the study objectives, hypotheses or scientific questions. The most
efficient way to ensure enough high-quality data is to establish the data quality criteria before the
study begins and then develop a study design that incorporates these criteria. To facilitate this
approach, the Agency has developed the DQO process, a systematic planning tool based on the
scientific method, for establishing criteria for data quality and developing data collection designs
(U.S. EPA 2002c). Detailed guidance on the DQO process and other related information are
available in EPA's report, Guidance for Quality Assurance Project Plans: EPA QA/G-5 (U.S.
EPA 2002c), and at the EPA Quality System website. Another resource is the EPA Survey
Management Handbook, which provides guidance on conducting, designing and analyzing
environmental surveys. The handbook provides practical advice on many aspects of survey
research (U.S. EPA 2003h).
Box 5-4 lists useful guidelines and resources associated with designing and implementing
questionnaires and surveys and conducting observational studies. Questionnaire and survey
design is a complex process requiring experts in appropriate survey methods and techniques.
Box 5-4. Examples of Guidance Documents and Resources for Conducting
Questionnaires, Surveys or Observational Studies
• U.S. EPA (1992a) Consumption Surveys for Fish and Shellfish: A Review and Analysis of Survey Methods.
EPA/822/R-92/001.
• Dillman (1999) Mail and Internet Surveys: The Tailored Design Method.
• U.S. EPA (2003h) Survey Management Handbook. EPA/260/B-03/003.
• OMB (2006) Questions and Answers When Designing Surveys for Information Collections.
• U.S. EPA (2007g) Guide for Measuring Compliance Assistance Outcomes. EPA/300/B-07/002.
More information on observational human exposure measurement studies is found in Chapter 7.
Questions an assessor might ask these experts include:
• What methods are available for implementing questionnaires and surveys?
Several approaches exist for conducting questionnaires and surveys. For exposure
assessment, common methodologies include respondent estimates, third-party estimates
and diaries.
o Respondent estimates are the least expensive and most commonly used
questionnaire alternative. Respondents are asked to estimate the time they spend at a
particular activity. Questionnaires and surveys ask how many hours were spent doing
a given activity, being at a given location or using a certain product. In exposure
studies, respondents might be asked how often they use a chemical or product of
interest or perform a specific activity.
o Third-party estimates use essentially the same approach as respondent estimates,
except that one person completes a questionnaire or survey for another. For third-
party estimates, the questionnaire or survey asks how many hours per week the
specific person spends completing a given activity, being at a given location or using
certain products. The person completing the questionnaire or survey can obtain
information by interviewing or observing the respondent (e.g., reviewing video
monitoring data) (U.S. EPA 2012f).
o Diary approaches provide a sequential record of a person's activities during a
specified period. Typical time-diary studies follow activities across a day or a week.
The design of diary forms facilitates respondents' reporting of all their activities and
locations for the specified period. Carefully designed forms are especially important
for diary studies to ensure that the data each individual reports are comparable. The
resulting time budget helps characterize an individual's behavior, activities or other
features during the observation period. Sequential-activity monitoring forms the basis
of an activity profile.
• What methods are available for conducting observational studies?
Observational studies record activities, including location-time data, for an individual,
lifestage, specific group or population. The person(s) under evaluation or an observer can
record the activity. These studies sometimes use behavioral monitoring devices
(e.g., accelerometers, GPS [global positioning system] applications). These methods
probably are the most expensive approach to gathering activity data because they require
the use or development of equipment, respondent agreement to use such equipment and
technical help to install or adjust the equipment.
• What are the clearance requirements for releasing questionnaires or surveys?
The Paperwork Reduction Act of 1995 requires each federal agency to obtain approval
from OMB before collecting information from 10 or more people. The approval process
ensures the quality and practical utility of the information collected. This process is time
consuming and requires the publication of at least two Federal Register notices that an
Information Collection Request—commonly known as an OMB clearance package—has
been submitted. OMB's review of the request sometimes takes many months (i.e., at least
120 days) (OMB 2006).
• What special considerations apply when gathering data from people?
Similar to biomonitoring studies involving individuals, an assessor considers
confidentiality needs, ethical issues and protocol reviews associated with studying
individuals. Questionnaires and surveys can be components of observational human
exposure measurement studies; see Section 7.2.10, which provides a detailed discussion
of considerations associated with gathering data from people.
5.4.5. Modeling
EPA defines a model as "a simplification of reality that is constructed to gain insights into select
attributes of a particular physical, biological, economic or social system" (NRC 2007; U.S. EPA
2009d). Chapter 6 discusses model selection and use in exposure assessments.
In exposure assessments, assessors use models to reduce gaps in empirical data, extrapolate
monitoring data to non-surveyed populations or predict future exposures. Models also serve as a
framework to bring together various types of data to develop estimates of exposure that are
consistent with all empirical data:
• Chemical release rates from sources
• Chemical fate and transport
• EPCs
• Exposure factors
• Internal chemical or metabolite concentrations
• Estimated doses.
The estimates generated by models can be used as variables for conducting quantitative exposure
assessments. For example, a particular fate and transport model might estimate environmental
concentrations by predicting chemical migration through groundwater and concurrent loss by
degradation. Model estimates also might serve as conclusions of an exposure assessment
(e.g., estimates of the cumulative impacts of exposures to multiple chemicals).
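As an illustration of the kind of calculation such a model performs, the following minimal sketch (an assumption for illustration, not an EPA model) estimates a downgradient groundwater concentration using first-order degradation over the travel time from source to receptor. All parameter values are hypothetical.

# Minimal sketch: first-order degradation during groundwater transport,
# C(t) = C0 * exp(-k * t), with travel time = distance / seepage velocity.
import math

c0 = 120.0            # source concentration (ug/L), illustrative
half_life_days = 180  # chemical degradation half-life, illustrative
k = math.log(2) / half_life_days   # first-order decay constant (1/day)

distance_m = 400.0        # distance to the receptor well, illustrative
velocity_m_per_day = 0.5  # groundwater seepage velocity, illustrative
travel_time_days = distance_m / velocity_m_per_day

conc_at_well = c0 * math.exp(-k * travel_time_days)
print(f"Estimated concentration at the well: {conc_at_well:.1f} ug/L")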
Box 5-5 presents guidelines and resources associated with environmental models. Section 6.2
describes the process for selecting an appropriate model, which depends on the specific
circumstances of an exposure assessment.
Box 5-5. Guidance Documents and Resources to Support Modeling Efforts
• WHO (2005) Principles of Characterizing and Applying Human Exposure Models.
• NRC (2007) Models in Environmental Regulatory Decision Making.
• U.S. EPA (2009d) Guidance on the Development, Evaluation, and Application of Environmental Models.
EPA/100/K-09/003.
• Center for Exposure Assessment Modeling (CEAM) website. U.S. EPA.
• Radiation Protection Document Library website. Radiation Protection website. U.S. EPA.
• Predictive Models and Tools for Assessing Chemicals under the Toxic Substances Control Act (TSCA) website.
U.S. EPA.
In addition, the following questions, set forth in the Guidance on the Development, Evaluation,
and Application of Environmental Models (U.S. EPA 2009d) for consideration when using a
model for regulatory or research purposes, provide a useful list for assessors to consider when
evaluating empirical models for use in an exposure assessment:
• What are the project objectives?
• What are the type and scope of the needed model?
• What are the data criteria?
• In what situations does the model apply?
• What are the programmatic constraints?
• How does the empirical model fit with the conceptual model for the project?
• Has the model been peer reviewed?
• Has the model been evaluated with measured data?
The Agency has a community of practice that addresses environmental modeling. Modeling
content, including databases and training, is available at EPA's Environmental Modeling
website.
5.5. Data and Decision Uncertainty and Variability
An exposure assessment can have many sources of data uncertainty, data variability and decision
uncertainty. This section highlights some general points about data and decision uncertainty and
variability. Sections 8.1.1 through 8.1.3 present a detailed discussion of data uncertainty,
decision uncertainty and variability, including processes and methodologies to evaluate and
potentially reduce them. ExpoBox provides information on the concepts of uncertainty and
variability in the form of questions and answers related to exposure assessment. Chapter 8 also
provides a detailed discussion of the distinction between data uncertainty and decision
uncertainty and the importance of distinguishing between the two. This section addresses
uncertainty and variability specifically associated with the data an assessment uses. An assessor
needs to consider how data uncertainty and variability can influence the outcome of the
assessment. Questions for an assessor to consider when reviewing the data include:
• What are the sources of uncertainty in the data?
The term "data uncertainty" encompasses many concepts. Sampling uncertainty, also
known as parameter uncertainty, stems from measurement errors, sampling errors,
misclassification of data and surrogate data weaknesses. The degree to which the data are
representative of actual conditions also introduces uncertainty. An assessor needs to
review these factors when evaluating data sources for use in an exposure assessment.
Many methods to address and reduce sampling uncertainty are available, ranging from
classical statistical analyses to probabilistic uncertainty analyses (see Section 8.3).
Qualitative or quantitative information about the level of uncertainty and variability
(i.e., confidence) associated with a dataset sometimes is available or an assessor can
estimate them. The Exposure Factors Handbook: 2011 Edition (U.S. EPA 2011d; pages
1-5 through 1-7) presents several factors a risk assessor needs to consider in evaluating
data uncertainties and variability when selecting data to include. EPA provides
confidence ratings for the data presented in the Handbook (U.S. EPA 2011d). A higher
confidence rating typically is associated with data having fewer uncertainties. An
assessor can conduct statistical analyses, such as standard deviation and upper confidence
limit calculations, to represent uncertainty and variability quantitatively. For example, a
larger standard deviation typically is associated with a greater level of uncertainty or
variability in the data.
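For example, the following minimal sketch shows one way to quantify sampling uncertainty for a small dataset: the sample standard deviation and a one-sided 95 percent upper confidence limit (UCL) on the mean based on the Student's t statistic. The concentrations are illustrative, the use of the SciPy library is an assumption, and program-specific guidance on UCL methods would take precedence.

# Minimal sketch: sample standard deviation and a one-sided 95% UCL on the
# mean using the Student's t statistic. Concentrations are illustrative.
import statistics
from scipy import stats

concentrations = [1.8, 2.4, 3.1, 2.0, 5.6, 2.9, 1.5, 4.2]  # mg/kg, illustrative

n = len(concentrations)
mean = statistics.mean(concentrations)
sd = statistics.stdev(concentrations)            # sample standard deviation
t_crit = stats.t.ppf(0.95, df=n - 1)             # one-sided 95% t value
ucl95 = mean + t_crit * sd / n ** 0.5            # 95% UCL on the mean

print(f"mean = {mean:.2f}, sd = {sd:.2f}, 95% UCL = {ucl95:.2f} mg/kg")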
• What are the sources of variability in the data?
All datasets introduce variability—the natural differences that occur in a sampling
medium or population—into an exposure assessment. For example, environmental data
represent the range of chemical concentrations in the environment, and activity
information represents the range of possible activities that can occur in a population. EPA
might address variability by selecting the data that represent a high-end exposure level
(see Section 5.1.3).
• How does decision uncertainty affect exposure assessment decisions?
Decision uncertainty includes data uncertainty and pertains to whether the analyses are
adequate to better inform the exposure assessment decisions and help the risk
manager/decision maker understand the relationships among the data and evaluate the
potential decision options. This process includes evaluating data, including appropriate
documentation of the approach used, to assess the adequacy of the data to support a
decision. The assessment might consider the number of samples, spatial distribution of
contaminants, detection limits, quantity and precision of measurements, number of
duplicates and concentrations less than the detection level but above zero that could
impact measured outcomes. The risk manager/decision maker might consider whether
closing data gaps necessitates statistical evaluation. Can the data be deemed adequate for
the decision at hand? Will uncertainty in the data or the manner in which they are used
cause a risk manager/decision maker to change the decision? Do significant data gaps
exist, making further data collection necessary? The confidence level assigned to the data
during an uncertainty and variability evaluation can influence the decision making
process. An assessor can use the confidence level to help determine data usability. Data
with a low confidence level (e.g., a mean exposure concentration or an activity pattern
based on a small sample size) might be adequate to support screening-level exposure
assessment DQOs. If these data are critical to meeting the DQOs, an assessor might
decide that additional data collection efforts are necessary (WHO 2008).
5.6. Data Management
EPA's Office of Environmental Information maintains the Data Standards website, which
features information relevant to the data management process. This website houses data
standards, developed to ensure consistent data reporting across the Agency. Although these
standards are not data requirements, they can serve as a starting point when assessing data
management systems. An assessor needs to consider the unique data concerns and objectives for
a data management system in the context of the particular exposure assessment and data
accessibility for the project team. Some individual programs within EPA have developed guidance and SOPs for
managing data. Therefore, assessors need to consult their programs when developing a data
management system. Useful questions for an assessor to ask when selecting a data management
system include:
• What technologies are available for managing data?
Many software programs are available for storing and managing data. Spreadsheet and
database programs are two standard technologies and are available as standard desktop
applications. GIS (geographic information system) applications are another tool for
managing data, especially when data mapping is necessary. In selecting the best
technology, an assessor reviews the data quantity and analysis needed and the
technological limitations or requirements (e.g., software accessibility to multiple users,
mapping functions). Information technology staff can provide guidance and assistance in
selecting and building an appropriate data management tool. A basic understanding of the
uses and limitations of each technology also is useful.
o Spreadsheets organize data in simple tables and include functions to conduct
statistical analyses, generate graphs and create charts. Most spreadsheet programs
have a limited ability to query and extract data and have limited data storage capacity
compared with more sophisticated database programs.
o Databases provide more robust functions for conducting queries, extracting data and
generating data reports than spreadsheet applications. Databases also have greater
data storage capacity and allow linkage of multiple tables within a database for
establishing relationships between datasets. For example, a data analyst can link a
table containing activity data for individuals to another data table housing
biomonitoring results (a minimal linkage sketch follows this list). Statistical
analyses also are possible using databases. Users can share databases and access the
data simultaneously.
o GIS applications are sophisticated tools for depicting spatial relationships in data.
Although historically considered to be mapping tools, current programs include the
functionality to store and manage data, conduct statistical analyses, generate graphs
and charts and map information spatially. Because GIS programs are complex, an
assessor likely will need support from a GIS expert. Regardless of the data
management system used, the data need appropriate annotation with supporting
information so that future users can assess their quality and utility.
o Data managers need to consider the Agency's Records Management Policy. The
assessor needs to coordinate with the program regarding specific legal and other
constraints on releasing and sharing certain categories of information.
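The database linkage described in this list can be illustrated with a minimal sketch using the sqlite3 module from the Python standard library. The table names, fields and values are hypothetical assumptions, not a prescribed schema.

# Minimal sketch: joining a hypothetical activity table to a hypothetical
# biomonitoring table on a shared participant identifier with sqlite3.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE activity (participant_id TEXT, hours_outdoors REAL);
    CREATE TABLE biomonitoring (participant_id TEXT, analyte TEXT, result_ug_L REAL);
    INSERT INTO activity VALUES ('P01', 3.5), ('P02', 0.5);
    INSERT INTO biomonitoring VALUES ('P01', 'chemical X', 4.2), ('P02', 'chemical X', 1.1);
""")

# Link the two tables on the shared participant identifier.
rows = con.execute("""
    SELECT a.participant_id, a.hours_outdoors, b.analyte, b.result_ug_L
    FROM activity a JOIN biomonitoring b USING (participant_id)
""").fetchall()

for row in rows:
    print(row)
con.close()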
• What are the implications of the Freedom of Information Act (FOIA) on data
management?
Under FOIA, any person can submit a written request that EPA release Agency records.
EPA releases these records unless they fall under one of the nine FOIA exemptions, such
as trade secrets (e.g., pesticide formulations) or medical files (e.g., health information
about an individual). Detailed information about FOIA, FOIA request forms and EPA's
policies and procedures regarding FOIA are available at EPA's Freedom of Information
Act (FOIA) website.
• What are the restrictions on releasing data publicly?
Some types of data, such as data exempted under FOIA, are confidential, and assessors
need to take precautions to avoid releasing them inadvertently. For example, the
formulation of a pesticide under review for registration is an example of a trade secret or
confidential business information. The U.S. Department of Justice's DOJ Guide to the
Freedom of Information Act website provides more information about FOIA exemptions.
The 1996 FOIA update identifies specific provisions for protecting privacy of personal
information. Consultation with FOIA attorneys before the release of information is
important to avoid potential violations of privacy.
• What are the QA/QC requirements for data management?
QA/QC for data management needs to address data entry, data verification and data
maintenance as part of records management planning. Validation of data entry is a
vital component of data management. Storing and retaining data is part of records
management and is a requirement for all government employees.
Other data remain confidential under EPA's Privacy Policy (U.S. EPA 2005a), which establishes
Agency requirements for safeguarding the collection, access, use, dissemination and storage of
personally identifiable information.
5.7. Data Communication
Ongoing communication with project staff and stakeholders is an important part of an exposure
assessment. Chapter 9 discusses communication needs. Several data-related topics are essential
for effective communication with risk managers/decision makers and stakeholders about existing
data reviews or data collection efforts (see Table 5-6 for example data sources):
• Data presentation. Outreach to stakeholders is important when evaluating existing data,
identifying data for collection or using data to support decisions (U.S. EPA 2016e).
Communication plans need to consider privacy and confidentiality of data (e.g.,
residential sampling), protection of personally identifiable information, how the data
meet the DQOs and how to present the information in a format that participants, the
community and stakeholders understand. Presentation considerations include length,
content and format (e.g., website, social media site, fact sheets, flyers, informal meetings,
briefings, formal meetings to meet regulatory requirements, informal availability
sessions, Community Advisory Group or virtual). Testing presentations and materials
with colleagues to ensure the messages are understandable is helpful.
• Data representation. Quantitative data often are reported as a single point (e.g., average
concentration of a chemical in soil) or represent one moment in time (e.g., concentrations
in emissions from an incinerator). Rarely do these single data points represent the full
range of actual conditions. For example, an average soil concentration does not indicate
the highest or lowest detected value. An emissions sample from an incinerator does not
represent changing conditions, such as increasing capacity or varying waste stream
composition. Therefore, discussions about data need to indicate what the data do and do not
represent. Presenting data in graphical formats is helpful in showing locations of
concentrations, ranges of concentrations (e.g., minimum, maximum, average), outliers
and other parameters.
• Data limitations. Data can help answer some but not all questions about exposure.
Therefore, an assessor outlines the conclusions the data can and cannot support.
• Data collection rationale. In some cases, stakeholders might believe that data collection
efforts will provide needed answers to their concerns. In other cases, stakeholders might
believe that additional data collection is unnecessary. In both cases, an assessor states
why existing data suffice for an exposure assessment or why additional data collection
efforts are necessary. If collecting data, an assessor also explains the sample design
(e.g., what and where samples will be collected, what analyses will be conducted) and the
rationale for the sample design (e.g., why specific sample locations were selected).
• Data uncertainty and variability. Certain amounts of uncertainty and variability are
associated with all data. An assessor needs to outline the uncertainty and variability
associated with the data and show how these parameters affect the conclusions.
5.8. Summary
• Exposure assessments use a variety of data: environmental data, biomonitoring data and
exposure factors. Some exposure assessments might use a combination of data types or
supplement measurement data with modeling data.
• Identification of data gaps and data needs begins with understanding the conceptual
model. When existing data are insufficient to satisfy exposure assessment needs, a
sampling program is considered. Planning a sampling program requires careful
evaluation of resource needs.
• The Agency has longstanding, established procedures for ensuring data quality, has
published numerous guidance documents and provides a large body of resources that
exposure assessors can consult when beginning an assessment.
o EPA's policy requires all EPA organizations, and those funded by EPA, to follow a
quality system to ensure that data collected to characterize environmental processes
and conditions are appropriate to support the decision. EPA specifies a data quality
system comprising seven iterative steps, spanning planning, implementation and
assessment.
o Reviewing new and existing data for usability and determining whether they meet the
five general assessment factors (soundness, applicability and utility, clarity and
completeness, uncertainty and variability and evaluation and review) involve
(1) aligning data with data quality objectives, (2) establishing a QA project plan,
(3) obtaining peer input to data review, (4) evaluating data quality and (5) validating
data and reviewing methods for sample collection and analysis.
o When using data for an exposure assessment, the assessor needs to decide what
methods to use in representing non-detects, how to handle outliers, whether and how
to combine datasets or combine datasets with modeling outputs, how to make use of
bounding estimates and how to calculate exposure point concentrations.
• Acquiring and evaluating data for exposure assessment informs assessors whether
they can use existing data or need new data. During planning and scoping, assessors
determine the data they need and how to obtain the data from existing sources, if
available, or whether and how to gather new data. EPA provides many guidance
documents for reviewing the quality of existing data and questions to guide an assessor's
decision to use existing data. When critical data are not available, an assessor might
consider implementing a sampling program to gather the required data. Each data type
has unique characteristics that require evaluation.
o Evaluation of environmental data focuses primarily on the spatial and temporal
conditions that affect how well the data represent the conditions addressed in the
assessment.
o Environmental sampling can fill data gaps associated with chemical concentrations
or EPCs and physical conditions such as geology or hydrology.
o Evaluation of biomonitoring data determines whether biomarkers are appropriate for
assessing exposure, reference levels are available and concerns exist regarding access
or disclosure and what data quality measures are associated with the dataset.
o Evaluation of exposure factor information determines the basis for any default
values and whether exposure factors represent the exposure being assessed.
o Acquiring data through questionnaires, surveys or observational studies involves
considering the methods for implementing these approaches, the clearances required
and what methods are available for observational studies.
o Modeling fills data gaps for some exposure assessments. EPA and other
organizations offer a large body of guidance for the application of models in exposure
assessment.
• The key considerations for an exposure assessor regarding data uncertainty and
variability are: What are the sources of uncertainty in the data? What are the sources of
variability in the data? How does decision uncertainty affect exposure assessment
decisions?
• An assessor needs to consider the unique data concerns and objectives for a data
management system in the context of the particular exposure assessment and with regard
to data accessibility for the project team.
• A fundamental requirement of effective exposure assessment is transparent and frequent
communication of data, for which having a solid communication plan is key.
Table 5-6. Examples of Sources of Non-Occupational Data for an Exposure Assessment
from EPA and other Federal Agenciesa
Source
Data Type
Scale
Description
Reference
U.S. EPA
EPA - Exposure Factors
Handbook: 2011 Edition
• Exposure factors
• Local
• State
• Regional
• National
• International
The Exposure Factors Handbook: 2011 Edition provides information
and recommendations on various factors used in assessing exposure
to adults and children. The handbook summarizes data on human
behaviors and characteristics that affect exposure to environmental
contaminants and recommends values. This document summarizes
the available statistical data on factors including consumption of
drinking water, fruits, vegetables, beef, dairy products and fish; soil
ingestion; inhalation rates; skin surface area; soil adherence; lifetime
activity patterns; body weight; consumer product use; and building
characteristics. Also see ExpoBox and ExpoFIRST.
U.S. EPA (2011d)
https://cfpub.epa.gov/ncea/risk/recordisplay.cfm?deid=236252
EPA - Expo Box
• Exposure factors
• Local
• State
• Regional
• National
• International
ExpoBox is a compendium of exposure assessment tools that links to
guidance documents, databases, models, key reference materials
and other related resources. Resources are organized into six Tool
Sets, including modules designed to improve the accessibility and
usability of data from EPA's Exposure Factors Handbook: 2011
Edition. EPA plans to add exposure assessment resources to EPA
ExpoBox as they become available. Under the Media section,
information on exposure assessment tools for consumer products is
provided.
https://www.epa.gov/expobox
EPA - ExpoFIRST
• Exposure factors
• Local
• State
• Regional
• National
• International
ExpoFIRST - the Exposure Factors Interactive Resource for
Scenarios Tool - uses data from EPA's Exposure Factors Handbook:
2011 Edition in an interactive tool that maximizes flexibility and
transparency for exposure assessors. Users develop scenarios
based on route of exposure, medium, receptor(s), timeframe and
dose metric for a contaminant of concern. Assessors can modify
initial parameters, as appropriate, to account for assessment-specific
knowledge and calculate deterministic exposure estimates as point
estimates. The tool enables users to define unlimited potential
scenarios for various receptor populations and lifestages. Relies on
the Exposure Factors Handbook: 2011 Edition and ExpoBox for
updated information.
https://cfpub.epa.gov/ncea/risk/recordisplay.cfm?deid=322489
EPA-Consolidated
Human Activity Database
(CHAD)
• Exposure factors
• Local
• State
• Regional
• National
EPA's CHAD contains detailed data on human behavior from 22
separate exposure and time-use studies. The database includes
more than 54,000 individual study days of data including age, sex,
employment and education level, which allows researchers to
examine specific groups within the general population and how their
unique behavior patterns influence their exposures to chemicals. The
database is continuously maintained and updated as new human
activity data become available.
https://www.epa.gov/healthresearch/consolidated-human-activity-database-chad-use-human-exposure-and-health-studies-and
EPA - National Human
Exposure Assessment
Survey (NHEXAS)
• Observational
human exposure
• Local
• State
• Regional
The purpose of NHEXAS was to evaluate comprehensive human
exposure to multiple chemicals on a community and regional scale.
These studies: (1) measured pollutant concentrations in air, water,
soil, dust, food, blood, urine and hair and on surfaces and human
skin using various sampling and analytical techniques;
(2) determined direct exposure using personal exposure monitors;
and (3) estimated human activity patterns using a series of
questionnaires and diaries.
https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&TIMSType=&count=10000&dirEntryId=18200&searchAll=&showCriteria=2&simpleSearch=0&startIndex=70001
EPA-Total Exposure
Assessment
Methodology Study
(TEAM)
• Observational
human exposure
• Local
TEAM, conducted from 1979 to 1985, developed methods for
collecting individual exposure information and applying these
methods, along with statistical analyses, to estimate exposures and
body burdens for individuals living in several urban areas. Volume I is
a summary and overview of the entire study. Volume II deals with
studies in New Jersey, North Carolina and North Dakota. Volume III
deals with studies in California. Volume IV presents the Standard
Operating Procedures employed in the study.
U.S. EPA (1987b);
Handy et al. (1987)
https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&TIMSType=&count=10000&dirEntryId=50208&searchAll=&showCriteria=2&simpleSearch=0&startIndex=70001
https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&TIMSType=&count=10000&dirEntryId=44146&searchAll=&showCriteria=2&simpleSearch=0&startIndex=30001
EPA-Children's Total
Exposure to Persistent
Pesticides and Other
Persistent Organic
Pollutants Study
(CTEPP)
• Observational
human exposure
• State
• Regional
CTEPP, completed in 2004, was designed to determine what
commonly used chemicals are found in home or daycare
environments and if children in these environments encountered
those chemicals in the course of their regular, day-to-day activities.
Chemicals included pesticides, cleaners and household products.
Participants maintained normal daily routines during the study. The
CTEPP website provides chapter-by-chapter files of data, which also
are available through HEDS (Human Exposure Database System).
https://archive.epa.gov/heasd/archive-dears/web/html/index-1.html
EPA - Detroit Exposure
and Aerosol Research
Study (DEARS)
• Observational
human exposure
• State
• Regional
DEARS was designed to collect data, from 2004 to 2007, to improve
EPA's understanding of human exposure to various air pollutants in
the environment. During a 3-year period, personal indoor and
outdoor air monitoring data were collected to evaluate exposure to
particulate matter and a set of air toxics. These data were correlated
with information, such as blood pressure and heart rate, relevant to
potential health effects.
https://archive.epa.gov/heasd/archive-dears/web/html/index.html
EPA-Air Toxics Risk
Assessment Reference
Library
• Reference on
assessing
exposures to air
toxics
• Local
• National
EPA developed an air toxics risk assessment (ATRA) reference
library for conducting air toxics analyses at the facility and community
scales. This library provides information on the fundamental
principles of risk-based assessment for air toxics and how to apply
those principles in different settings, as well as strategies for
reducing risk at the local level.
https://www.epa.gov/fera/risk-assessment-and-modeling-air-toxics-risk-assessment-reference-library
EPA - Acute Exposure
Guideline Levels
• Information on the
acute exposure
guideline levels
• Local
Acute exposure guideline levels (AEGLs) describe the human health
effects from once-in-a-lifetime, or rare, exposure to airborne
chemicals. Used by emergency responders when dealing with
chemical spills or other catastrophic exposures, AEGLs are set
through a collaborative effort of the public and private sectors
worldwide.
https://www.epa.gov/aegl
EPA - Relationship
Between Indoor, Outdoor
and Personal Air Study
(RIOPA)
• Observational
human exposure
• Regional
RIOPA, completed in 2005, quantified indoor and outdoor inhalation
exposures to agents in three areas of the United States. Integrated
indoor, outdoor and personal air samples were collected both for
gas-phase and fine particulate matter (2.5 µm or smaller) analyses,
organic functional groups, elements, organic carbon, elemental
carbon, gas- and particle-phase polycyclic aromatic hydrocarbons
and chlordanes. Questionnaire and time activity information also
were collected from residents. The study was funded by the National
Urban Air Toxics Research Center with Dr. Clifford Weisel at the
Environmental Occupational Health Sciences Institute (EOHSI) as
Principal Investigator. One study was funded by the Health Effects
Institute (HEI) with Dr. Jim Zhang of EOHSI as Principal Investigator.
Another study was funded by HEI with Dr. Barbara Turpin of Rutgers
University as Principal Investigator.
Weisel et al. (2005)
https://cfpub.epa.gov/ncer_abstracts/index.cfm/fuseaction/display.highlight/abstract/8343
EPA - ExpoCast
Database
• Observational
human exposure
• Environmental
• Biological
• Local
• State
• Regional
• National
ExpoCast quickly and efficiently examines multiple routes of
exposure to provide exposure estimates and has been applied to
almost 8,000 chemicals. ExpoCast uses two types of models to
estimate exposure, farfield and nearfield. The database is updated
on a continuing basis.
https://www.epa.gov/chemical-research/rapid-chemical-exposure-and-dose-research
EPA-Air Quality
System (AQS)
• Environmental
• Local
• State
• Regional
• National
AQS contains ambient air pollution data collected by EPA, state,
local and tribal air pollution control agencies from thousands of
monitoring stations. AQS also contains meteorological data,
descriptive information about each monitoring station (including its
geographic location and operator) and data quality assurance/quality
control (QA/QC) information. Data can aid in an exposure
assessment of chemicals in air at varying geographic areas. Data are
updated on an ongoing basis to inform EPA, state, local and tribal air
pollution control agencies.
https://www.epa.gov/aqs
EPA - Monitoring the
Occurrence of
Unregulated Drinking
Water Contaminants
• Environmental
• National
Nationally representative data on the occurrence of contaminants in
drinking water, the number of people potentially exposed and an
estimate of that exposure. These data provide the basis for future
regulatory actions to protect public health. The database is updated
on an ongoing basis.
https://www.epa.gov/dwucmr
EPA - National
Emissions Inventory
Air pollutants
• Local
• National
The National Emissions Inventory (NEI) is a comprehensive
estimate of air emissions of criteria pollutants, criteria
precursors and hazardous air pollutants from air emissions
sources. The NEI is released every three years based primarily
on data provided by state, local and tribal air agencies for
sources in their jurisdictions and supplemented by data
developed by EPA.
https://www.epa.gov/air-emissions-inventories/national-emissions-inventory-nei
EPA-Toxic Release
Inventory Data (TRI)
Environmental
• Local
• State
• Regional
The TRI Program tracks the management of toxic chemicals
that might pose a threat to human health and the environment.
Facilities in certain industry sectors report annually on the
volume of toxic chemicals managed as waste—recycled,
treated or burned for energy recovery—and disposed of or
otherwise released into the environment.
https://www.epa.gov/trinationalanalysis
Centers for Disease Control and Prevention (CDC)
CDC - National Health
and Nutrition
Examination Survey
(NHANES)
• Observational
human exposure
• Biological
• Biomonitoring
• Regional
• National
NHANES is a program of studies designed to assess the health and
nutritional status of adults and children in the United States. The
survey is unique in that it combines interviews and physical
examinations. The sample for the survey is selected to represent the
U.S. population of all ages. To produce reliable statistics, NHANES
over-samples persons more than 60 years of age, African Americans
and Hispanics. The survey has become a continuous program that
has a changing focus on a variety of health and nutrition
measurements to meet emerging needs. Data are summarized and
published in National Reports on an ongoing basis.
http://www.cdc.gov/nchs/nhanes.htm
http://www.cdc.gov/exposurereport/
CDC - National Center
for Health Statistics
(NCHS) — Surveys and
Data Collection Systems
• Health
characteristics
• National
NCHS provides data to evaluate national trends in health statistics on
such topics as birth and death rates, infant mortality, life expectancy,
morbidity and health status, risk factors, use of ambulatory and
inpatient care, health personnel and facilities, financing of health
care, health insurance and managed care and other health topics.
Updates are developed on an ongoing basis.
http://www.cdc.gov/nchs/
CDC-NCHS-
Research Data Centers
(RDCs)
• Health
characteristics
• National
RDCs allow researchers access to restricted data. Researchers are
required to submit a research proposal outlining the need for these
sensitive data. The proposal provides a framework for NCHS to
identify potential disclosure risk. The RDCs also host restricted data
from a variety of groups within the U.S. Department of Health and
Human Services. Ongoing availability of data.
http://www.cdc.gov/rdc/index.htm
CDC - Behavioral Risk
Factor Surveillance
System (BRFSS)
• Biological
• Health-related
behaviors
• Human activity
• State
• National
BRFSS is a telephone health survey system. It has tracked health
conditions and risk behaviors in the United States yearly since 1984.
Currently, data are collected monthly in all 50 states; Washington,
DC; Puerto Rico; the U.S. Virgin Islands; and Guam. The BRFSS
website makes its resources available to the public, including
interactive databases, maps and raw annual survey data. The site
also features data usage statistics by state. Updated annually.
http://www.cdc.gov/BRFSS/
CDC - Agency for Toxic
Substances and Disease
Registry (ATSDR)
• Public health
assessments
• Health
consultations
• Local
• State
• Regional
ATSDR determines public health implications associated with
hazardous waste sites and other environmental releases. ATSDR
has developed a methodology for evaluating the public health
implications of exposures to environmental contamination. Ongoing
data development and availability.
http://www.atsdr.cdc.gov/hac/PHAManual/toc.html
http://www.atsdr.cdc.gov/hac/pha/index.asp
CDC - Agency for Toxic
Substances and Disease
Registry (ATSDR)
• Toxicological
profiles
• National
ATSDR toxicological profiles characterize the toxicological and
adverse health effects information for hazardous substances. Profiles
include information on potential human exposures.
https://www.atsdr.cdc.gov/toxprofiledocs/index.html
U.S. Census Bureau
U.S. Census Bureau -
American FactFinder
• Demographics
• Census tract
• State
• Regional
• National
The Census Bureau's American FactFinder is an interactive
application that supports the Economic Census, the American
Community Survey, the 1990 Census, Census 2010 and the latest
population estimates. It provides fact sheets and data on population
demographics, housing and businesses that can be useful in
understanding sources of exposure. The website features
downloadable Microsoft Excel sheets, maps and a search engine.
Ongoing updates.
http://factfinder2.census.gov/faces/nav/jsf/pages/index.xhtml
U.S. Census Bureau -
Equal Employment
Opportunity (EEO) Data
Tool
Demographics
• Census tract
• State
• Regional
• National
The Census Bureau's Census 2000 EEO Data Tool is a web-
based tool that enables users to select tabulations of residence
or workplace information at varying levels of geographic
specificity. The data present available information for a variety
of occupations categorized by race/ethnicity and sex that can
be used in assessing residence times in specific geographic
areas. Ongoing updates.
https://www.census.gov/eeo2000/
Bureau of Labor Statistics
Bureau of Labor
Statistics - American
Time Use Survey (ATUS)
Activity
• National
ATUS measures the amount of time individuals spend
completing various activities, such as paid work, childcare,
volunteering and socializing. These data can be used in an
exposure assessment to estimate frequency and duration.
Updated on an ongoing basis.
http://www.bls.gov/tus/
U.S. Geological Survey (USGS)
USGS-Toxics
Substances Hydrology
Program
• Environmental
• Local
• Regional
• National
The USGS Toxics Substances Hydrology Program conducts
(1) intensive field investigations of representative cases of
subsurface contamination at local releases; and (2) watershed- and
regional-scale investigations of contamination affecting aquatic
ecosystems from nonpoint and distributed point sources. This type of
information might be helpful in understanding geographical locations
of water bodies and other factors that could contribute to exposures.
The newsletter is updated on an ongoing basis.
http://toxics.usgs.gov/index.html
USGS - National Water
Information System
• Environmental
• Local
• Regional
• National
National Water Information System Mapper provides information on
the geological locations of water bodies that might be sources of
exposure. The website is updated on an ongoing basis.
http://waterdata.usgs.gov/nwis
USGS - National Water
Quality Assessment
Program (NAWQA)
• Environmental
• Local
• State
• National
NAWQA provides data on water quality conditions, changes over
time and how natural features and human activities affect those
conditions. A consistent study design and uniform methods and
analyses are used. Monitoring data are integrated with geographic
information on hydrological characteristics, land use and other
landscape features in models to extend water quality understanding
to unmonitored areas. Data are used to design and implement
strategies for managing, protecting and monitoring water resources
in many different hydrologic and land-use settings across the nation.
The website is updated on an ongoing basis.
http://water.usgs.gov/nawqa/
USGS - Health Related
Activities
• Environmental
• Local
• State
• National
The Human Consumption of Chemical and Pathogenic
Contaminants portion of this website provides information on the
occurrence of bioaccumulative contaminants in water, sediment and
fish tissue that might be helpful in evaluating contaminant trends
over time and sources of contamination in biota. The website is not
being updated on an ongoing basis.
The second USGS webpage referenced for this entry, on contaminants
and pathogens, provides information on chemical pollutants, human
and animal pathogens, and the biological effects of chemical and
biological contaminants on natural ecosystems. This webpage is
updated on an ongoing basis.
https://archive.usgs.gov/archive/sites/health.usgs.gov/bioacc_cont/index.html
https://www.usgs.gov/centers/umid-water/science/contaminants-and-pathogens
USGS - National Stream
Quality Accounting
Network (NASQAN)
• Environmental
• Local
• Regional
• National
NASQAN's major objective is to report on the concentrations and
loads of selected constituents delivered by major rivers to the
coastal waters of the United States and selected inland subbasins in
priority river basins to determine the sources and relative yields of
constituents within these basins. These priority basins are significant
in reducing delivery of constituents that contribute to adverse
conditions in receiving waters. Other objectives include monitoring
for climate change and describing long-term trends in the loads and
concentrations of select constituents at key locations. The database
is updated on an ongoing basis.
https://www.usgs.gov/centers/dakota-water/science/national-stream-quality-accounting-network-nasqan?qt-science_center_objects=0#qt-science_center_objects
USGS - Sediment Data
Portal
• Environmental
• Land use
• Local
• Regional
• National
This portal provides users with land use data, the ability to view the
location of sediment sites in the context of various geospatial data
layers and tools to enable users to select sites of interest. This website
is updated on an ongoing basis.
http://cida.usgs.gov/sediment/
USGS - Environmental
Health Sciences
• Environmental data
• National
USGS provides information useful in characterizing the processes that
affect the interaction among the physical environment, living
environment and people, and the resulting factors that affect ecological
and human exposures to disease agents. The Environmental Health
Science Strategy summarizes national environmental health priorities
that USGS is best suited to address and serves as a strategic
framework to meet the USGS environmental health science goals,
actions and outcomes for the next decade. Implementation of this
strategy is intended to aid coordination of USGS environmental health
activities with other federal agencies and to provide a focal point for
disseminating information to stakeholders. This website is updated on
an ongoing basis.
http://www.usgs.gov/envirohealth/
U.S. EPA/USGS-Water
Quality Portal (WQP)
• Environmental
groundwater
• Local
• National
The WQP provides access to data stored in various large water quality
databases. It provides information on location, site, sampling and date
parameters to customize the returned results. WQP provides
information (locations of sample collection) and sample results
(analytical data of collected samples). The portal relies in part on
STORET (STORage and RETrieval) and USGS's National Water
Information System database. The website is updated on an ongoing
basis.
http://www.waterqualitydata.us/
USGS - Background
levels of elements in soils
and other surficial
materials
• Environmental soil
• Local
• National
USGS developed a report on background concentrations of various
chemicals, including metals, throughout the contiguous United States.
Samples were analyzed for their content of elements in soils and other
surficial materials. This website is updated on an ongoing basis.
https://www.usgs.gov/faqs/does-usgs-have-reports-background-levels-elements-soils-and-other-surficial-materials?qt-news_science_products=2#qt-news_science_products
U.S. Department of Agriculture
USDA's Pesticide Data
Program (PDP)
• Pesticide residue
monitoring data
• National
The PDP is a national pesticide residue monitoring program that
produces the most comprehensive pesticide residue database in the
United States. The Monitoring Programs Division administers PDP
activities, including the sampling, testing and reporting of pesticide
residues on agricultural commodities in the U.S. food supply, with an
emphasis on those commodities highly consumed by infants and
children. The program is implemented through cooperation with state
agriculture departments and other federal agencies. PDP data:
• Enable EPA to assess dietary exposure.
• Facilitate the global marketing of U.S. agricultural products.
• Provide guidance for the U.S. Food and Drug Administration
(FDA) and other governmental agencies to make informed
decisions.
Updated on an annual basis.
https://www.ams.usda.gov/datasets/pdp
U.S. Food and Drug Administration
U.S. FDA-Total Diet
Study (TDS)
• Contaminant and
nutrient data for
food and beverages
• National
The TDS is an ongoing FDA program that monitors levels of about
800 contaminants and nutrients in the average U.S. diet; the number
varies slightly from year to year. To conduct the study, the program
purchases, prepares and analyzes about 280 types of foods and
beverages from representative areas of the country, four times a
year. Results of the TDS, from 1991 to the present, are available to
the public in electronic form on this website. (Results prior to 1991
can be found in the publications listed on the publications page of the
website.) Each section of the website explains a different aspect of
the study, from a brief description of study design to a brief
explanation of the organization of the results, including a link to
zipped text files of the data.
https://www.fda.gov/Food/FoodScienceResearch/TotalDietStudy/
U.S. FDA - Center for
Food Safety and Applied
Nutrition (CFSAN)
• Contaminant data
for food and
cosmetics
• National
The Center is one of six product-oriented centers at FDA that carry
out FDA's mission. CFSAN provides services to consumers,
domestic and foreign industry and other outside groups regarding
field programs; agency administrative tasks; scientific analysis and
support; and policy.
https://www.fda.gov/aboutfda/centersoffices/officeoffoods/cfsan/
National Oceanic and Atmospheric Administration
NOAA- National Center
for Environmental
Information
• Atmospheric,
coastal, oceanic
and geophysical
data
• Local
• Regional
• National
NOAA's National Center for Environmental Information (NCEI)
provides access to one of the most significant archives on Earth with
comprehensive oceanic, atmospheric and geophysical data
(approximately 25 petabytes). NCEI is the nation's leading authority
for environmental information.
https://www.ncei.noaa.gov/
NOAA-Weather
Forecast
• Weather, water and
climate data,
forecasts and
warnings; includes
data for various
timeframes for
assessing
exposures and
modeling air
concentrations
• Local
• National
National Weather Service (NWS) forecasts, warnings, data and products form a national
information database and infrastructure used by other governmental
agencies, the private sector, the public and the global community.
This enables core partners to make decisions when weather, water
or climate has a direct impact on the protection of lives and
livelihoods. NWS forecasts and warnings are provided directly to
decision makers in local communities and at state and federal levels
to protect lives and property in neighborhoods and communities. This
information is used in modeling concentrations from specific facilities,
determining migration of air toxics, etc.
https://www.weather.gov/about/
a Not exhaustive.
CHAPTER 6. COMPUTATIONAL MODELING FOR
EXPOSURE ASSESSMENTS
This chapter presents an overview of modeling in exposure assessments, highlighting basic
concepts to consider when using models and providing a brief taxonomy of the types of models
frequently encountered. Specifically, it:
• Provides the principles of the modeling process and key definitions (Section 6.1)
• Provides an overview of the process of identifying appropriate models based on exposure
assessment goals (Section 6.2)
• Explains how an exposure assessor evaluates models that potentially are useful in the
exposure assessment (Section 6.3).
Section 6.4 summarizes this chapter.
6.1. Principles and Definitions of Modeling
In EPA's Guidance on the Development, Evaluation, and Application of Environmental Models
(U.S. EPA 2009d), the Agency adopted the National Academy of Sciences' definition of a model
as "a simplification of reality that is constructed to gain insights into select attributes of a
particular physical, biological, economic, or social system" (NRC 2007). Computational models
are based on first developing conceptual models and then deriving mathematical models of the
process defined by the conceptual model, with simplifying assumptions, as appropriate. In an
exposure assessment, computational models are tools the assessor uses to analyze and
characterize processes that are too complex to capture completely with empirical data or for
which empirical data are not available. Models help extrapolate monitoring data to non-surveyed
populations, reconstruct past exposures or predict future exposures. Models also serve as a
framework for assembling various types of data to develop estimates of exposure that are
consistent with all available empirical data.
In general, the modeling process within an exposure assessment might include interactions
among public policy processes, represented by the planning and scoping and problem
formulation steps of an exposure assessment, model development and model application (U.S.
EPA 2009d). Depending on the analysis plan developed during the problem formulation phase of
an exposure assessment (see Section 3.3), exposure modeling might help estimate environmental
concentrations, extend existing monitoring information to populations and locations without
data, predict exposures under current and future scenarios and evaluate potential exposure
reduction and associated environmental and health benefits resulting from risk management
actions (Isakov et al. 2009; Jayjock et al. 2007; Lobdell et al. 2011; U.S. EPA 1989b; U.S. EPA
1992b; Williams et al. 2010). After the project team has identified the problem that modeling
will address, they determine the specifications of the problem, including the model type that
addresses the purpose of the assessment, meets the data criteria and considers the spatial,
temporal and physical boundaries of the problem. This determination occurs while developing
the analysis plan (see Section 3.3).
The process of developing and validating a new model or modifying and evaluating an existing
model is beyond the scope of this document, but Guidance on the Development, Evaluation, and
Application of Environmental Models (U.S. EPA 2009d) describes the steps in detail. This model
guidance document recommends that model developers and users:
• Subject their model to a credible and objective peer review
• Assess the quality of the data used in the creation and evaluation of the model
• Corroborate their model by evaluating the degree to which it corresponds to the system
being modeled
• Perform sensitivity and uncertainty analyses
• Document all aspects of a modeling project
• Communicate effectively with analysts and risk managers/decision makers.
Model applications also involve evaluating and refining the model and comparing the model
results to assessment goals and data quality objectives to ensure that they are achieved (see
Section 5.3.2). An assessor might need to refine the model further or use a new one if the model
does not meet the criteria. The resources listed in Box 6-1 support EPA's continued efforts to
ensure the quality, transparency and reproducibility of the information in models the Agency
uses and disseminates.
Box 6-1. Pertinent Resources for Modeling
• U.S. EPA (2002f) Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility and Integrity of Information
Disseminated by the Environmental Protection Agency EPA/260/R-02/008.
• U.S. EPA (2006g) System Life Cycle Management Policy.
• NRC (2007) Models in Environmental Regulatory Decision Making. Reviews the evolving scientific and technical issues
related to the development, selection and use of computational and statistical models at EPA.
• U.S. EPA (2009d) Guidance on the Development, Evaluation, and Application of Environmental Models.
EPA/100/K-09/003. Provides a simplified, comprehensive resource on the principles of good modeling practice.
• WHO (2005) Principles of Characterizing and Applying Human Exposure Models.
• Quality System for Environmental Data and Technology website. U.S. EPA.
• Predictive Models and Tools for Assessing Chemicals under the Toxic Substances Control Act (TSCA) website.
U.S. EPA.
• Radiation Protection Document Library website. Radiation Protection website. U.S. EPA.
6.2. Selecting the Type of Model for Exposure Assessments
The process of model selection first involves identifying the type of model needed to meet the
risk management objectives of the assessment and then determining the complexity of the model
necessary to reach a decision. After defining the type of model needed, the assessor and
individuals with modeling expertise determine whether such a model exists to meet those needs
or a new model needs to be developed.
Selection of an appropriate model is critical for accurately estimating exposure concentrations.
Assessors generally choose from multiple models that are relevant for the populations and
exposure sources of interest. The assessment should include documentation of the reasons for
model selection.
When selecting and using models, assessors, in collaboration with individuals experienced in
modeling, need to review the modeling literature. Several factors help an exposure assessor
select an appropriate model for an exposure assessment: the study objectives, technical
capabilities of the model, model availability and ease of use (U.S. EPA 1987a; U.S. EPA 1988b).
An equally critical step in the process is to choose appropriate default values for exposure factors
when data are missing or incomplete. The best model is less reliable without good data and
appropriate default factors. Presentation of the rationale for selecting the model is essential to
promoting transparency in the assessment (see Section 6.2.1). Table 6-1 lists environmental
modeling-related inventories and clearinghouses.
Table 6-1. EPA Exposure-Related Inventories and Clearinghouses
Source
Description
Types of Models and Information
URL
Registry of EPA
Applications and
Databases
Authoritative source of information about
EPA information resources.
Models that EPA uses, supports or funds.
http://ofmpub.epa.gov/sor_internet/registry/systmreg/home/overview/home.do
Center for Exposure
Assessment Modeling
Database designed to meet the scientific
and technical exposure assessment needs
of EPA, state environmental agencies and
resource management agencies.
Models that provide predictive exposure
assessment techniques for aquatic, terrestrial
and multimedia pathways for organic chemicals
and metals. Includes models of groundwater,
surface water, food chains and multimedia.
http://www.epa.gov/ceampubl
Center for Subsurface
Modeling Support
(CSMoS)
Center that provides software and technical
support to EPA and state risk
managers/decision makers in subsurface
model applications, including groundwater
models and databases from EPA's National
Risk Management Research Laboratory.
Models used for site characterization,
conducting groundwater flow and transport
simulations, determining wellhead protection
areas and selecting groundwater remediation
at Resource Conservation and Recovery Act
and Superfund sites.
U.S. EPA (2012d)
https://cfpub.epa.gov/si/si_public_record_Report.cfm?dirEntryId=19569
Emissions Modeling
Clearinghouse
Database that supports and promotes
emission-modeling activities, both internal
and external to EPA.
Emissions data, modeling platforms, emission
modeling software resources and ancillary
data.
https://www3.epa.gov/ttnchie1/software/
An Overview of
Exposure Assessment
Models Used by the
U.S. Environmental
Protection Agency
Overview of exposure assessment models
EPA supports and uses.
Includes 12 fate/transport models, 15 exposure
models and 8 integrated fate/transport-
exposure models.
Williams et al. (2010)
Model Clearinghouse
Information Storage
and Retrieval System
Single EPA focal point for reviewing the use
of modeling techniques for specific
regulatory applications.
Information about referrals from EPA regional
offices involving the interpretation of modeling
guidance for specific regulatory applications.
http://cfpub.epa.gov/oarweb/MCHISRS/
Support Center for
Regulatory
Atmospheric Modeling
Website providing documentation of EPA's
Air Quality Modeling Group modeling
analyses that support policy and regulatory
decisions in the Office of Air and Radiation.
Air quality models and other mathematical
simulation techniques used in assessing
control strategies and source impacts.
https://www.epa.gov/scram
Watershed and Water
Quality Modeling
Technical Support
Center
Center that assists EPA programs and state
and local governments in the
implementation of the Clean Water Act.
Tools and approaches for use in developing
Total Maximum Daily Loads, wasteload
allocations and watershed protection plans.
https://www.epa.gov/tmdl/watershed-and-water-quality-modeling-technical-support-center-factsheet
Office of Pollution
Prevention and Toxics
Website from which users can download or
access tools and models the Office of
Pollution Prevention and Toxics uses in its
programs.
Models used for prioritization, screening and
detailed assessment of chemicals. Includes
hazard and exposure and fate models, tools
and documents.
https://www.epa.gov/aboutepa/about-office-chemical-safety-and-pollution-prevention-ocspp
Office of Pesticide
Programs
Website from which users can access
exposure models, model-related resources
(e.g., databases on exposures and guidance
documents), and links to other models.
Models described at this site are used to
predict exposures for a range of ecological
receptors and to predict exposures for humans
from dietary and non-dietary sources.
https://www.epa.gov/pesticide-science-and-assessing-pesticide-risks
Models range from simple to complex. Users typically begin with simpler models that, when
combined with conservative inputs, can screen out exposures of low concern (see Section 6.2.2).
Certain questions also are more amenable to simple models. For example, if the goal is to
determine whether the level of a chemical in a product or the level of a chemical observed to
occur in an environmental medium is of low concern, a bounding analysis using a deterministic
model might be sufficient. Certain goals, by necessity, will require models that are more
advanced. Questions concerning the fraction of the population a source affects or regarding the
quantitative characterization of uncertainty immediately dictate the need for probabilistic models
(see Section 6.2.3).
6.2.1. Setting the Objectives for the Modeling Effort
Before selecting a model to estimate an exposure, an exposure assessor defines the exposure
assessment objective(s) and describes how the model addresses assessment questions or
hypotheses required to meet these objectives. The objectives include identifying the processes of
greatest importance in the modeling effort (U.S. EPA 2009a). Identifying these processes is a
key part of developing the conceptual model (see Section 3.2.2) and model selection.
An exposure assessor, in collaboration with other project team members, develops a clear
statement of what information the model will estimate and how the assessment will use this
estimate. Depending on program, office or regional guidelines, the exposure assessment plan,
modeling approach document or a standard operating procedure (SOP) could include this
statement. The modeling approach needs to be consistent with known project constraints
(e.g., schedule or budget). The exposure assessment's analysis plan, discussed in Section 3.3,
describes the information needs the model will address. Examples of such needs are identifying
population groups of concern; determining whether to present outputs on an hourly, daily,
quarterly, yearly or multiyear basis; deciding on the number of prediction years (e.g., lifetime or
shorter timeframes) and location for modeling (e.g., on site, at the smoke stack, fence line, off
site, indoor, in-vehicle, outdoor, residential); capturing variability over time, space or the
population; and presenting results in a form appropriate for the intended purpose and audience.
Exposure assessors need to be aware that many available modeling applications could make
exposure-modeling simulations appear deceptively simple. EPA highly recommends that a discipline
expert conduct or participate in any statistical modeling to predict or estimate exposure.
6.2.2. Level of Model Complexity
The risk manager/decision maker, exposure assessor and modeler should work together to
develop the problem statement and system conceptualization during the initial stage of the
exposure assessment that forms the basis for developing a modeling methodology. This
methodology stipulates the degree of model complexity. The computational models EPA uses in
exposure assessment range from simple deterministic screening-level models to complex
probabilistic models, based on both the complexity of the exposure process and regulatory
considerations. Figure 6-1 illustrates that the complexity of the exposure and uncertainty
characterization in an exposure assessment need to increase to meet greater decision making
needs and regulatory significance, typically requiring selection of more complex models. The
more sophisticated models provide more refined estimates of exposure that are useful for certain
types of regulatory and decision making needs. Because of the higher levels of effort required to
use and explain complex models, projects should make use of the simplest model that meets the
project's needs.
Figure 6-1. A Tiered Approach for Modeling Analysis
[Figure not reproduced: a tiered diagram illustrating that the complexity of the exposure and uncertainty characterization, and of the models selected, increases with greater decision making needs and regulatory significance.]
Figure 6-2. Deterministic versus Probabilistic Analysis
[Figure not reproduced: in a deterministic analysis, single-value parameter inputs (A, B, C, D) are run through the model E = f(A, B, C, D) to produce a single-value output; in a probabilistic analysis, probability density functions for the inputs are propagated through the same model to produce a distributed-value output reflecting uncertainty and variability. Adapted from Maslia and Aral (2004).]
Assessors need to consult their programs to determine which screening-level models the
programs use and to develop an understanding of the models' capabilities and limitations. Higher
tier models provide estimates that are more refined for particular regulatory and decision making
needs.
Deterministic models help establish the range of possible outcomes. Basing the input values for a
deterministic model on the average value across a population yields a reasonable estimate of the
average individual's exposures. When input values are conservative, but still realistic, the
deterministic model will generate exposure estimates that fall on the high end of the expected
exposure distribution. Several EPA programs routinely use screening-level models to screen out
exposures of low concern rapidly and efficiently. Assessors compare exposure estimates from
screening-level models to screening values. Screening values include health-based values
expressed as a dose (e.g., reference doses) and chemical concentrations in a specific medium
(e.g., soil screening values). (When using chemical concentrations as screening values, an
exposure assessor usually can compare an exposure point concentration directly to the soil
screening value. In this case, estimating the exposure quantitatively would be unnecessary.)
Typically, assessors carry forward those exposure estimates exceeding screening values for
additional analyses. The additional analyses include in-depth evaluation of the problem by using
a more sophisticated exposure model or by collecting additional monitoring data. Deterministic
models can be the subject of sensitivity analyses, which can provide insight on which parameters
should be the focus of additional data collection and refinement. Finally, if the cost of taking a
risk management action is low or the cost of additional analyses is high, a screening assessment
could trigger remedial action. Box 6-2 lists resources that support the use of screening-level
models.
Box 6-2. Examples of Resources for Screening-Level Models
• U.S. EPA (2001f) General Principles for Performing Aggregate Exposure and Risk Assessments.
• Arnot (2009) Mass Balance Models for Chemical Fate, Bioaccumulation, Exposure and Risk Assessment.
• U.S. EPA (2009d) Guidance on the Development, Evaluation, and Application of Environmental Models. EPA/100/K-
09/003.
• Williams et al. (2010) An Overview of Exposure Assessment Models Used by the U.S. Environmental Protection Agency.
• Exposure Assessment Tools and Models website. U.S. EPA.
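As a concrete illustration of the screening workflow described above, the following minimal Python sketch applies a generic average daily dose equation to a hypothetical incidental soil ingestion scenario and compares the result to an assumed health-based screening value. All numeric inputs, including the screening value, are illustrative assumptions rather than recommended defaults.

```python
# Minimal deterministic screening-level sketch (all inputs hypothetical).
# Average daily dose (ADD) for incidental soil ingestion is compared to an
# assumed health-based screening value (e.g., a noncancer reference dose).

def average_daily_dose(conc, intake_rate, exp_freq, exp_dur, body_weight, avg_time):
    """ADD (mg/kg-day) = (C * IR * EF * ED) / (BW * AT)."""
    return (conc * intake_rate * exp_freq * exp_dur) / (body_weight * avg_time)

C = 400.0           # soil concentration, mg/kg (assumed)
IR = 200e-6         # soil ingestion rate, kg/day (assumed; 200 mg/day)
EF = 350.0          # exposure frequency, days/year (assumed)
ED = 6.0            # exposure duration, years (assumed)
BW = 15.0           # body weight, kg (assumed child value)
AT = ED * 365.0     # averaging time, days (noncancer averaging period)

add = average_daily_dose(C, IR, EF, ED, BW, AT)
screening_value = 0.02   # hypothetical reference dose, mg/kg-day

print(f"ADD = {add:.2e} mg/kg-day")
print("Screened out (below screening value)" if add < screening_value
      else "Carried forward for refined analysis")
```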
Probabilistic Models
Probabilistic exposure modeling (e.g., Monte Carlo analysis, Latin hypercube) represents a
higher tier assessment method that provides estimates of the range of probable exposures to
agents of interest. Probabilistic risk assessment is employed when detailed statistical analysis
(e.g., developing quantitative estimates of exposure or dose at upper percentiles) is necessary to
support sensitive decisions and help risk managers/decision makers distinguish among possible
alternatives. A probabilistic analysis considers the same exposure parameters (e.g., agent
concentration, exposure duration, intake rate) as other types of exposure models but considers
the range or distribution of exposures within a population rather than a single number. The
output of a probabilistic assessment is a probability distribution of exposures that reflects the
combination of the probability distributions of one or more of the model inputs or parameters.
The distributions can help characterize variability, uncertainty or both, depending on the design
of the model and its inputs. Probabilistic approaches also help identify data gaps, that is, cases
for which additional data collection could significantly improve a decision by reducing
uncertainty or better characterizing variability. If data gaps are identified, an exposure assessor
can address them by collecting additional data or conducting additional statistical analyses, such
as meta-analyses of existing data (Volstad et al. 2003; Weigel 2003). As an example, EPA
conducted a probabilistic exposure assessment that evaluated potential exposure and risk to
children from wood treated with chromated copper arsenate and a probabilistic multimedia
exposure modeling analysis for children's lead exposure to inform public health decision making
(Zartarian et al. 2006; Zartarian et al. 2017). These models typically require more resources than
screening models, however, and they generally are used only when required. Box 6-3 lists
examples of available resources for probabilistic assessments and models.
Box 6-3. Examples of Resources for Probabilistic Assessments and Models
• Finley and Paustenbach (1994) The Benefits of Probabilistic Exposure Assessment: Three Case Studies Involving
Contaminated Air, Water, and Soil.
• U.S. EPA (1996e) Summary Report for the Workshop on Monte Carlo Analysis. EPA/630/R-96/010.
• U.S. EPA (1997b) Guiding Principles for Monte Carlo Analysis. EPA/630/R-97/001.
• Hansen (1997a) Policy for Use of Probabilistic Analysis in Risk Assessment at the U.S. Environmental Protection Agency.
• Hansen (1997b) Use of Probabilistic Techniques (Including Monte Carlo Analysis) in Risk Assessment, and Guiding
Principles for Monte Carlo Analysis.
• U.S. EPA (1999b) Report of the Workshop on Selecting Input Distributions for Probabilistic Assessments. EPA/630/R-
98/004.
• U.S. EPA (2001h) Risk Assessment Guidance for Superfund. Volume III: Part A, Process for Conducting Probabilistic
Risk Assessment. EPA/540/R-02/002.
• U.S. EPA (2005g) Review of the National Ambient Air Quality Standards for Particulate Matter: Policy Assessment of
Scientific and Technical Information. EPA/452/D-03/001.
• U.S. EPA (2014i) Risk Assessment Forum White Paper: Probabilistic Risk Assessment Methods and Case Studies.
EPA/100/R-14/004.
Monte Carlo analysis is a widely used probabilistic method that uses a computer program to
combine multiple probability distributions. The methodology is applicable to simple models
with few input variables or to complex models involving dozens or hundreds of inputs. The
simulations use values of model inputs selected randomly from relevant distributions that
describe uncertainty or variability in model inputs to produce a quantitative estimate of exposure.
This process repeats, generating a series of exposure estimates that can be statistically analyzed.
Monte Carlo models can consider correlations between input values using correlation
coefficients or can assume independence. In more complex simulations, the selection of parameter
values can be linked through conditional distributions, for example, as in Zartarian et al. (2017). A
Monte Carlo analysis can characterize uncertainty, variability, or both, in a population. The data
reflected in the distributions of inputs (U.S. EPA 2001h) determine the inclusion of uncertainty
and variability.
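To make the mechanics concrete, the following minimal sketch propagates hypothetical input distributions through the same form of dose equation used in screening and reports selected percentiles of the resulting exposure distribution. The distributions, their parameters and the number of iterations are illustrative assumptions only.

```python
# Minimal one-dimensional Monte Carlo sketch (all distributions hypothetical).
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000                                                      # iterations

conc = rng.lognormal(mean=np.log(100.0), sigma=0.5, size=n)    # mg/kg (assumed)
intake = rng.triangular(20e-6, 100e-6, 200e-6, size=n)         # kg/day (assumed)
ef = rng.uniform(180.0, 350.0, size=n)                         # days/year (assumed)
bw = rng.normal(15.0, 2.0, size=n).clip(min=8.0)               # kg (assumed)
ed, at = 6.0, 6.0 * 365.0                                      # years, days

add = conc * intake * ef * ed / (bw * at)                      # mg/kg-day

for p in (50, 90, 95, 99):
    print(f"{p}th percentile ADD: {np.percentile(add, p):.2e} mg/kg-day")
```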
Probabilistic exposure assessment is not necessary for every situation, and the complexity of a
probabilistic assessment can vary depending on the nature of the assessment performed. Monte
Carlo analysis is one of the simpler probabilistic approaches.
Advanced Methods for Probabilistic Modeling
Advanced modeling methods employ complex statistical analyses to characterize uncertainty and
variability either jointly or separately. In Appendix D of Risk Assessment Guidance for
Superfund. Volume III: Part A, Process for Conducting Probabilistic Risk Assessment (U.S. EPA
2001h), EPA highlights several advanced modeling methods, introduces terminology and basic
concepts of advanced modeling and provides resources for more information. Advanced models
or methods introduced in that document include:
• Microexposure Event (Microenvironmental Exposure) Analysis. The basic exposure
equation (see Section 2.4.1) assumes exposures are constant over time. Exposures at any
given moment, however, tend to fluctuate as the impacts from sources in an environment
vary over time or as individuals move from one microenvironment (e.g., outdoors,
kitchen, bedroom) to another. Microexposure event analysis separately models the doses
an individual receives while in each microenvironment for a sufficiently short period
during which exposures are reasonably constant. The sum of these individual doses gives
daily or longer-term doses (Price et al. 1996).
• Two-dimensional Monte Carlo analysis. This analysis, which also is a probabilistic risk
assessment method, separately characterizes the data uncertainty and variability of one or
more parameters in an exposure estimate. In the basic case, nested computational loops
(Macintosh et al. 1995) sample distributions that represent variability and data
uncertainty. Varying parameter values over many iterations (e.g., 5,000 times) results in
probability distributions for further statistical analyses (e.g., evaluation of confidence
limits surrounding the base-case variability distributions). In more advanced, separate
probabilistic modeling of variability and uncertainty, EPA has used the bootstrap-based
uncertainty analysis techniques described in Xue et al. (2006) for the Stochastic Human
Exposure and Dose Simulation (SHEDS) model application to chromated copper arsenate
exposures. (A minimal sketch of the nested-loop structure appears after this list.)
• Geospatial Statistics. This specialized branch of statistical analyses, relying on a
multivariate statistical tool, explicitly considers the geospatial location of data points in
the analysis of exposure. Geospatial statistics incorporate information about the spatial
distribution of chemical concentration inputs and modeled exposure predictions
(e.g., cities, states, regions).
• Elicitation of Expert Judgment. Expert elicitation is the process by which the
judgments of experts in multiple fields are quantified and documented. Experts
characterize the relationships, quantities, events or parameters of interest based on their
professional judgment and expertise, typically expressing the characterizations as
probabilities. Expert elicitation can be sought individually (i.e., each expert acts alone) or
as a group (i.e., experts meet and provide a collective response). An individual approach
typically is used when uncertainty characterization is needed. A group approach is
appropriate when a consensus or best estimate of uncertainty is needed (U.S. EPA 2009a;
U.S. EPA 2009b). Information from such elicitations can be used to characterize data
uncertainty and to fill data gaps in an exposure assessment when traditional scientific
research is not feasible or data are unavailable. Expert elicitation also can support
probabilistic approaches when data are scarce or lacking. In Bayesian analyses, experts in
a field construct distribution probabilities based on professional judgment and
experience, updating the distributions as new data become available. Statistical analyses
help quantify the value of the information stemming from the Bayesian analysis and the
impact on uncertainty and variability analyses (Bates et al. 2003; Gronewold et al. 2008).
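The following minimal sketch illustrates the nested-loop structure of the two-dimensional Monte Carlo analysis noted in the list above: the outer loop samples uncertain parameters of a variability distribution, and the inner loop samples inter-individual variability given those parameters. All distributions and values are hypothetical assumptions used only to show the structure.

```python
# Minimal two-dimensional Monte Carlo sketch (all inputs hypothetical).
import numpy as np

rng = np.random.default_rng(seed=2)
n_uncertainty, n_variability = 200, 2_000

p95_estimates = []
for _ in range(n_uncertainty):
    # Outer loop: uncertain parameters of an assumed lognormal concentration distribution
    gm = rng.normal(100.0, 20.0)        # geometric mean, mg/kg (uncertain)
    gsd = rng.uniform(1.5, 2.5)         # geometric standard deviation (uncertain)

    # Inner loop: inter-individual variability, given the sampled parameters
    conc = rng.lognormal(np.log(gm), np.log(gsd), size=n_variability)
    intake = rng.triangular(20e-6, 100e-6, 200e-6, size=n_variability)  # kg/day
    bw = rng.normal(15.0, 2.0, size=n_variability).clip(min=8.0)        # kg
    add = conc * intake / bw                                            # mg/kg-day

    p95_estimates.append(np.percentile(add, 95))   # one variability percentile per draw

lo, hi = np.percentile(p95_estimates, [5, 95])
print(f"95th-percentile dose: 90% uncertainty interval {lo:.2e} to {hi:.2e} mg/kg-day")
```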
To select among advanced modeling methods, an exposure assessor examines the rationale for
selecting a particular model, including the questions being addressed, data requirements of the
model, availability of the existing data, resources required to obtain additional data, scientific
integrity of the model, uncertainty and variability the model addresses and uncertainty the model
introduces. Note that more complex models are not necessarily more accurate than simpler
models. Accuracy depends on the availability of data and model-specific uncertainties. Also,
requirements for additional parameters might necessitate the use of more default values, leading
to greater uncertainty.
6.2.3. Categories of Models Used in Exposure Assessments
Different types of models are used during different stages of the continuum between source and
dose (see Figure 2-1).
• Fate and transport models. Assess the movement and transformation of pollutants in
the environment and yield predicted ambient pollutant concentrations in different
environmental media. The outputs of these models are concentrations of chemicals in
media that are relevant to specific receptors. These estimates often serve as proxies, or
surrogates, for actual exposures or serve as inputs to human exposure models.
• Human exposure models. Incorporate information on environmental concentrations and
exposure factors and yield predictions of exposures based on actual or assumed contact
between a receptor and the concentration of contaminants in the environment.
• Integrated fate/transport-exposure models. Yield both predicted ambient pollutant
concentrations and predicted exposures.
• Dose estimation models. Used to predict internal doses at target tissues, organs or
toxicity pathways that result from exposure to an agent. In the case of reverse dosimetry,
dose estimation models reconstruct exposure levels that are consistent with measurements
inside an organism or in biological material.
The above list focuses on deterministic or probabilistic quantitative mass-transfer models.
Statistical models such as regression models based on available or empirical data also help
estimate the distribution of exposures within a population, including central tendencies and
percentiles or help quantify the relative significance of factors that can influence exposure levels.
Section 6.2.6 (on high-throughput exposure models) discusses examples of these models.
In addition, a class of "sub" models can provide the input parameters for the mass transport
models described above. Such models include programs for estimating indoor air concentrations,
such as i-SVOC, PARAMS and MCCEM (Multi-Chamber Concentration and Exposure Model). Other
examples of such submodels include models of exposure-related parameters such as dermal
absorption (AIHA (American Industrial Hygiene Association) Exposure Assessment Committee
2000).
Fate and Transport Models
Exposure assessors use fate and transport models to estimate the movement and alteration of
contaminants as they are transported through environmental media (e.g., air, soil, water,
groundwater) (U.S. EPA 2019a). These models aid in the understanding of natural systems and
the way in which systems react to varying conditions, including the spread of toxic substances in
various media and the short- and long-term effects of exposure to hazardous substances. Model
outputs can include current and future media concentrations and concentrations at specific
locations (e.g., fence lines, locations for permit compliance, on site, off site). The EPISuite™
model can be used to estimate environmental fate and transport for certain chemicals.
Fate and transport models describe the impact of the physical, chemical and biological processes
on the predictions of exposures from specific sources of environmental release. Physical
processes include bulk transport in the movement of air, surface or groundwater flows or soil
transport; movement between media such as volatilization and sorption to solids; and dispersion
in a medium such as air or water. Chemical processes considered in the models include chemical
oxidation and reduction, reactions with solid material and photodegradation in air and water.
Microbial degradation also can affect concentrations of chemicals in the environment and change
the chemical nature of contaminants.
Atmospheric fate and transport models the Agency uses range from simple to complex. Simple
models are appropriate when substantial amounts of monitoring data are available. These
modeling approaches include models for inverse distance weighting that interpolate between
locations with monitoring data. Land use regression is another simple approach used to
approximate ambient air concentrations by combining land use information and monitoring data
in a multiple regression model and applying the model to areas having limited monitoring data.
More complex models include the spatially resolved point- and line-source-oriented AERMOD
model with limited consideration of chemistry or removal processes and the more complex
multisource and larger spatial scale CMAQ (Community Multiscale Air Quality) model that
incorporates physical-chemical processes influencing the concentrations of various pollutants
and their species. Examples of commonly used models of fate and transport in water are PRZM3,
BASINS and AQUATOX. The resources listed in Table 6-1 provide additional information on
the use of fate and transport models.
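As a simple illustration of the interpolation approaches mentioned above, the sketch below applies inverse distance weighting to estimate an ambient concentration at an unmonitored receptor from nearby monitors. The monitor locations, concentrations and weighting power are hypothetical.

```python
# Minimal inverse distance weighting (IDW) sketch (hypothetical monitoring data).
import numpy as np

monitors = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0], [6.0, 6.0]])  # (x, y), km (assumed)
concs = np.array([12.0, 8.5, 15.2, 6.9])                               # ug/m3 (assumed)

def idw(receptor_xy, monitor_xy, values, power=2.0):
    """Inverse-distance-weighted concentration estimate at a receptor location."""
    d = np.linalg.norm(monitor_xy - receptor_xy, axis=1)
    if np.any(d < 1e-9):                       # receptor co-located with a monitor
        return float(values[np.argmin(d)])
    w = 1.0 / d**power                         # nearer monitors receive greater weight
    return float(np.sum(w * values) / np.sum(w))

receptor = np.array([2.0, 2.0])
print(f"IDW estimate at receptor: {idw(receptor, monitors, concs):.1f} ug/m3")
```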
Human Exposure Models
Human exposure models simulate and predict population exposure and dose distributions and
assess variability in model inputs. The website, An Overview of Human Exposure Modeling at
the U.S. EPA's National Exposure Research Laboratory, and journal publications (Furtaw Jr.
2001; Williams et al. 2010) provide overviews of these models. Table 6-1 also provides
resources on where to obtain detailed information on human exposure models. EPA and other
organizations have developed human exposure models that range from deterministic screening
models for assessing specific sources to probabilistic models of aggregate and cumulative
exposures across populations and over time. Screening models that address specific
environmental sources and a range of consumer products include E-FAST (U.S. EPA 2014c) and
the more recent Consumer Exposure Model (U.S. EPA 2016a). EPA and other organizations
have developed several human exposure models suited for inhalation exposure modeling.
Examples of more complicated models include SHEDS-Air, Air Pollutants Exposure Model
(APEX) for criteria air pollutants and Hazardous Air Pollutant Exposure Model (HAPEM) for
other hazardous air pollutants. In addition, EPA and other organizations have developed similar
but more complex probabilistic multimedia human exposure and dose models for aggregate and
cumulative pesticide exposures (e.g., SHEDS-Multimedia, Cumulative and Aggregate Risk
Evaluation System, LifeLine™, Calendex™) to accommodate additional chemicals (Price et al.
2001; Young et al. 2012). These models are examples of microexposure event models.
Integrated Fate/Transport-Exposure Models
Integrated fate/transport-exposure models combine measured or modeled concentrations in
different media (e.g., air, water, soil, indoor surfaces, food) with pertinent exposure factors to
estimate human exposures at modeled locations. For example, both atmospheric transport and
diffusion models and human exposure models combine ambient pollutant concentrations,
location-specific representative human or demographic data and relevant exposure factors
(e.g., breathing rates, times spent indoors and outdoors) to estimate or predict human exposures.
Examples of such models include SHEDS-Air, APEX and HAPEM. These models focus on
integrating data on fate and transport parameters with information on human physiology and
exposure-related behaviors for complex assessments of exposures to air toxics. In addition, many
integrated fate/transport-exposure assessment models provide estimates of potential or absorbed
dose (U.S. EPA 2015b; U.S. EPA 2016d; U.S. EPA 2017a; U.S. EPA 2017b).
Although EPA designed many of its exposure models as standalone models, ongoing efforts in
the Agency have focused on developing integrated modeling approaches. One effort is the
development of integrated air quality and exposure models to identify those sources and
microenvironments that contribute to the greatest portion of personal or population exposures
and determine optimum risk management strategies (Isakov et al. 2009). Advanced approaches
that combine regional and local models have been proposed as a future direction for air quality
modeling of hazardous air pollutants to address the spatial variability of air concentrations and
allow for better treatment of chemically reactive air toxics (Touma et al. 2006). EPA's White
Paper: Integrated Modeling for Integrated Environmental Decision Making recommends the
Agency adopt a systems thinking approach and consistently and systematically implement
integrated modeling approaches and practices to inform Agency decision making (U.S. EPA
2008d). NRC (2012b) states, "systems thinking considers the cumulative effects of multiple
stressors, evaluates a range of alternatives, analyzes upstream and downstream life-cycle
implications, involves a broad range of stakeholders and uses interdisciplinary scientific
approaches."
As indicated by these examples, integrated fate/transport-exposure models generally are for a
specific purpose (e.g., aggregate human inhalation exposures from hazardous air pollutants or
fumigants, cumulative exposures to multiple chemicals). Table 6-1 provides resources on where
to obtain detailed information on integrated fate/transport-exposure models.
Dose Estimation Models
Models are available that estimate dose (the mass of a chemical that crosses the exposure barrier
over a defined period) from exposure data or estimate both exposure and dose from
environmental data (U.S. EPA 2011d). The models include parameters that address the uptake of
chemicals across the gut (for ingestion), the dermis (for dermal exposures) and the lung surface
(for inhalation). Many of these models address uptake and determine both the applied and the
absorbed doses.
6.2.4. Estimates of Exposure Using Scenario Evaluation
In the planning and scoping phase of an exposure assessment, an assessor develops exposure
scenarios of interest, as Section 3.1 described. A conceptual model identifies potential exposure
media and receptors for developing exposure scenarios. An exposure scenario is a combination
of facts, assumptions and inferences that define a discrete situation in which exposures occur. An
assessor determines chemical concentrations in a medium or location and combines this
information with information on the relevant physiology and exposure-related behaviors of the
exposed individuals. Characterization of an exposure scenario consists of the collection of
environmental measurement data (e.g., soil, dust, air, diet) and exposure factor information
(e.g., contact rates, activities), which then are combined according to the applicable conceptual
model structure. For an exposure scenario, an assessor usually characterizes the chemical
concentration and the time of contact separately.
For the chemical concentration characterization, an assessor typically estimates an exposure
concentration indirectly by measuring, modeling or using existing data on concentrations in the
bulk medium rather than at the point of contact. An assessor's assumption that the
concentration in the bulk medium is the same as an exposure concentration can introduce
uncertainty in an exposure estimate. Section 6.3.4 on model uncertainty analysis discusses this
assumption. Generally, the closer to the point of contact (in both space and time) the
concentration in the medium is measured, the less uncertainty exists in an exposure concentration
characterization. Estimating the change in concentration over time helps calculate an exposure
estimate more accurately.
For the time-of-contact characterization, the assessor estimates the frequency and duration of
exposure for the activities related to that exposure (as Section 6.2.2 on microexposure event
analysis mentioned). Some electronic means of recording locations and activities are available,
including personal data recorders, automated global positioning system-based recorders,
videography-based microactivity diaries and smart phones. As Chapter 5 discussed, several
methods are being explored to characterize the locations and activities of individuals. Paper time-
activity diaries, however, are still the most common means to collect location and activity
information from participants in observational human exposure measurement studies. When
participant-specific activity information is not collected, time and activity are estimated using
available databases, for example:
• Exposure Factors Handbook: 2011 Edition (U.S. EPA 2011d)
• Consolidated Human Activity Database
• American Time Use Survey.
In the absence of more substantive information, an assessor acquires participant-specific activity
information by making assumptions about behavior. Estimating dermal exposure, for example,
might involve combining assumptions about activity patterns based on observations, surface
sampling data for the chemical of interest and a dermal transfer rate. Likewise, estimating
inhalation exposure could involve linking databases of indoor and outdoor environmental
pollution concentrations with time-activity diary and inhalation rate information.
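As a simple illustration of the dermal example above, the following sketch combines an assumed surface residue, dermal transfer coefficient and contact duration into a potential dermal dose. The values are placeholders, not recommended defaults or measured data.

```python
# Minimal dermal residue-transfer sketch (all inputs hypothetical).

surface_residue = 0.05    # dislodgeable surface residue, ug/cm2 (assumed)
transfer_coeff = 1_800.0  # dermal transfer coefficient, cm2/hour (assumed)
contact_time = 1.5        # hours/day of contact with the treated surface (assumed)
body_weight = 15.0        # kg (assumed child value)

# Potential dermal dose, converted from ug to mg
dermal_dose = (surface_residue * transfer_coeff * contact_time) / (body_weight * 1_000.0)

print(f"Potential dermal dose: {dermal_dose:.3e} mg/kg-day")
```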
In 2001, EPA published the Draft Protocol for Measuring Children's Non-Occupational
Exposure to Pesticides by All Relevant Pathways, which details a systematic, measurement-
based approach to evaluating exposure by each route (i.e., inhalation, dermal, ingestion) using a
series of algorithms (U.S. EPA 2001b). Each algorithm mathematically expresses exposure for a
specific route as a function of chemical concentration in an environmental medium and selected
exposure factors. The algorithm's inputs explicitly identify the data requirements. Estimating
aggregate exposures using these algorithms requires complete datasets for each pathway. Similar
to the 2001 Draft Protocol, EPA's Office of Pesticide Programs (OPP) uses a set of SOPs to
estimate post-application exposures to pesticides for toddlers (through dermal contact and
hand-to-mouth activity) from treated residential surfaces. These SOPs are used for product
registration or reregistration in the United States and provide a screening-level assessment to
estimate exposures when data are limited and exposure estimates beyond the day of application
are desired. OPP has finalized an updated set of SOPs (U.S. EPA 2012f). Another example of a
scenario-based approach is EPA's Example Exposure Assessment Scenarios Tool and Associated
Report (U.S. EPA 2005b), which is designed to assist in developing estimates of exposure, dose
and risk. The purpose of the Example Exposure Scenarios document (U.S. EPA 2003c) is to
outline scenarios for various exposure pathways and demonstrate how data from the Exposure
Factors Handbook: 2011 Edition (U.S. EPA 2011d) might be applied for estimating exposures.
The outcome of the scenario evaluation often is an exposure estimate that results from combining
concentrations with exposure factors. The assumptions or boundary conditions limit the
estimates, however (see Section 5.3.3, which discussed boundary conditions). To address this
limitation, an assessor (1) evaluates an exposure equation under conditions for which the limiting
assumptions hold true or (2) addresses the uncertainty caused by the divergence from the
boundary conditions. An example of the first approach is the microenvironment method. The
term microenvironment refers to surroundings (e.g., home, office, automobile) treated as
homogeneous or well characterized in the concentrations of an agent. In a given
microenvironment, the pollutant concentration is assumed uniformly distributed spatially during
the contact time, although the pollutant concentration might vary over time. Therefore, this
method evaluates an individual's exposures as a series of time segments and locations in which
the assumption of uniform concentration is approximately true and then sums results for the
segments to estimate total exposure over longer periods (Price et al. 1996). This effectively
removes some of the boundary conditions. For example, in determining inhalation exposure to
acute toxicants, the estimated ventilation rate changes depending on the activities performed
during an exposure event. This process avoids much of the error that using average values causes
when concentration varies widely along with time of contact. Several researchers have reported
the uncertainty that relying on deterministic bounding estimates introduces (Finley and
Paustenbach 1994; Simon 1999). Section 6.3.4 presents additional discussion of variability and
uncertainty analysis in probabilistic modeling.
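The following sketch illustrates the microenvironment method by summing exposure over a hypothetical day divided into time segments, each treated as having a uniform concentration and an activity-specific ventilation rate; all segment values are assumptions for illustration only.

```python
# Minimal microenvironment sketch (all segment values hypothetical).
# Each tuple: (microenvironment, concentration ug/m3, hours, ventilation rate m3/hour)
segments = [
    ("outdoors",     35.0, 2.0, 1.2),   # light activity (assumed)
    ("kitchen",      60.0, 1.5, 0.8),
    ("bedroom",      15.0, 8.0, 0.4),   # resting
    ("office",       20.0, 8.5, 0.6),
    ("in-vehicle",   45.0, 1.0, 0.6),
    ("other indoor", 18.0, 3.0, 0.6),
]

total_hours = sum(hours for _, _, hours, _ in segments)
assert abs(total_hours - 24.0) < 1e-6, "segments should cover the full day"

# Time-weighted average exposure concentration (ug/m3)
twa_conc = sum(conc * hours for _, conc, hours, _ in segments) / total_hours

# Potential inhaled amount (ug/day), letting ventilation vary with activity
inhaled = sum(conc * hours * vent for _, conc, hours, vent in segments)

print(f"Time-weighted average concentration: {twa_conc:.1f} ug/m3")
print(f"Potential inhaled amount: {inhaled:.0f} ug/day")
```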
6.2.5. Exposure and Dose Estimation Using Biomonitoring Data
Biomonitoring is the measurement or tracking of an agent or its biomarker in an organism or in
biological material to characterize the organism's exposure to an agent. Chemical agents
investigated using biomonitoring include organic compounds, metals and anions (CDC 2009).
Biomarkers of exposure confirm that an individual has been exposed to a chemical and function
as an important tool for understanding the linkages between external chemical exposures,
internal doses and potential health outcomes in humans (Clewell et al. 2008; NRC 2006b). In
addition, such measurements provide an integrated measurement of exposure to a chemical from
all sources and routes. The measurements are also one of the few available ways of
characterizing the total internal doses of agents that occur as a result of exposures from multiple
sources (aggregate exposures). Biomonitoring data reflect both current and past exposures of the
individual. Depending on the persistence of the agent, or its biomarker, the observed
concentrations might reflect exposures from the prior day or exposures that occurred decades
earlier (Chen et al. 2013; Clewell et al. 2008).
A limitation of most biomarkers of exposure is that they cannot be used alone to identify specific
sources and quantify the contribution of individual exposure pathways. As a result,
biomonitoring data collected in isolation are best used as a surveillance tool [e.g., baseline
exposure levels, trends over time, identifying populations with higher chemical exposures (CDC
2005; Hays et al. 2007; Sobus et al. 2010; Tan et al. 2005)]. The value of biomonitoring data
increases if collected in combination with demographic data and exposure data. In these cases,
biomonitoring data confirm the estimates of absorbed doses that exposure modeling produces
(Hinderliter et al. 2011).
Chemicals or their metabolites frequently serve as biomarkers of human exposure. The
chemicals are measured in samples of body fluids and tissues such as urine, blood, breath, hair
and saliva. Maternal biomonitoring provides information on in utero exposures. Measurements in
cord blood, amniotic fluid and meconium characterize perinatal exposures (Barr et al. 2007).
Urine is a frequently used matrix for biomonitoring for exposure to nonpersistent chemicals
because these contaminants generally have short half-lives (e.g., <24 hours) in the body. For
persistent chemicals, blood is a commonly used matrix for biomonitoring because, for these
compounds, levels are higher in blood than urine.
Forward Dosimetry
Forward and reverse dosimetry are two approaches for using biomonitoring data to provide
quantitative estimates of human exposure to chemicals. Forward dosimetry uses measurements
of environmental concentrations and supplemental data (such as exposure factors) in conjunction
with simple pharmacokinetic (PK) or more complex physiologically based pharmacokinetic
(PBPK) models to estimate internal doses of a chemical that are consistent with measured
biomonitoring data. The forward dosimetry approach provides valuable information on the
important sources, pathways and routes of human exposure to chemicals. It also provides a
quantitative measure of an integrated internal dose from multiple sources and routes over a
specified period. For instance, this approach showed the importance of dust pathways in
exposures to polybrominated diphenyl ethers (Stapleton et al. 2014). Ideally, the forward
dosimetry estimates are similar to the measured biomarker estimates, as was the case for
polybrominated diphenyl ethers (Lyons et al. 2008). Tulve et al. (2011) published cumulative
exposure estimates using forward dosimetry approaches. Tan et al. (2007) and Clewell et al.
(2008) present illustrative examples and further discussion of the interplay between exposure
estimation, biomonitoring data and these modeling methodologies. Table 6-2 presents examples
of dose estimation using forward dosimetry.
Table 6-2. Example Publications on Modeling Exposure and Dose from Biomonitoring Data

Agent | Dosimetry Approach | Studies
Chlorpyrifos | Forward | Morgan et al. (2005); Edlmann et al. (2016)
Dioxins and dioxin-like compounds | Forward | Lorber et al. (2009); NRC (2006a)
Phthalates | Forward | Clark et al. (2011); Lorber et al. (2010)
Polybrominated diphenyl ethers | Forward | Lorber (2007)
Chloroform | Reverse | Lyons et al. (2008)
Glyphosate | Reverse | Acquavella et al. (2004)
Malathion | Reverse | Dong et al. (1994)
Perchlorate | Reverse | Blount et al. (2007); Huber et al. (2011)
Pesticides | Reverse | Mage et al. (2004); Mage et al. (2008)
Phthalates | Reverse | Koch et al. (2003)
Trihalomethanes | Reverse | Tan et al. (2007)
Reverse Dosimetry
Reverse dosimetry (i.e., exposure reconstruction) models estimate an external exposure to a
chemical that is consistent with, and based on, biomonitoring data. Reverse dosimetry is distinct
from forward dosimetry in that forward dosimetry considers all components and pathways of
exposure that comprise an individual's total exposure, whereas reverse dosimetry seeks only to
arrive at the total dose responsible for the measured biomarker. Reverse dosimetry modeling,
however, can incorporate basic PK information known about the chemical in the application
process. PK and PBPK models can be applied in reverse dosimetry analyses by rearranging their
equations to estimate intakes from measured biomarker concentrations and modeling parameters. Other simple
reverse dosimetry approaches assume steady-state exposures and estimate exposure from
urinary excretion rates. Urine accumulation in the bladder and differences in hydration across
individuals complicate measurements of urinary excretion, necessitating data on the
timing of the accumulation period and urine volumes or normalized creatinine production rates
(LaKind et al. 2014). Creatinine correction is the most common method for adjusting for variable
dilutions in spot urine samples (Barr et al. 2005) and has been used in assessments of perchlorate
(Blount et al. 2007), phthalates (Koch et al. 2003) and pesticides (Mage et al. 2004; Mage et al.
2008).
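The following minimal sketch illustrates the simple steady-state, creatinine-corrected form of reverse dosimetry described above. All values (the urinary biomarker concentration, creatinine excretion rate, molecular weights and urinary excretion fraction) are hypothetical placeholders, not chemical-specific recommendations.

# Reverse dosimetry sketch: steady-state intake reconstructed from a
# creatinine-corrected spot urine measurement. All values are hypothetical.

biomarker_ug_per_g_creatinine = 3.2   # metabolite in spot urine, creatinine-corrected
creatinine_excretion_g_per_day = 1.2  # typical daily creatinine excretion (illustrative)
mw_parent = 350.6                     # molecular weight of parent chemical (hypothetical)
mw_metabolite = 198.2                 # molecular weight of measured metabolite (hypothetical)
urinary_excretion_fraction = 0.7      # fraction of absorbed dose excreted as this metabolite
body_weight_kg = 70.0

# Daily amount of metabolite excreted (ug/day)
metabolite_excreted = biomarker_ug_per_g_creatinine * creatinine_excretion_g_per_day

# Back-calculate the steady-state parent-chemical intake (ug/kg-day)
estimated_intake = (metabolite_excreted * (mw_parent / mw_metabolite)
                    / urinary_excretion_fraction / body_weight_kg)

print(f"Estimated steady-state intake: {estimated_intake:.3f} ug/kg-day")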
Table 6-2 presents examples of exposure estimation using reverse dosimetry. Forward and
reverse dosimetry approaches are complementary because the hypotheses one analysis raises can
be tested by the other [e.g., Georgopoulos et al. (2009)].
Simple PK Models
In some cases, human PK data are available for environmental chemicals, and given the
information on external dose, these data are useful for predicting concentrations of the chemical
in an easily sampled body matrix such as urine or blood. PK data, such as first-order elimination
rates (often expressed as human half-lives), can be used for this purpose. Depending on the nature of the PK data,
simple one-, two- and even three-compartment models are appropriate
(Gabrielsson and Weiner 2000). In dosimetry, a "compartment" can be physiologically defined
(e.g., the volume of body lipid) or not physiologically defined (e.g., "volume of distribution").
The simplest PK model is a mass-balance model, which implicitly assumes the organism is at
steady state with its environment. Often, the goal of simple PK models is to compare predictions
of concentrations in body matrices with analogous measurements reported in the literature. The
standard one-compartment, first-order absorption model, however, makes inherent assumptions
about the absorption, distribution, metabolism and elimination (ADME) processes that could
limit its use (Neubig 1990).
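A minimal sketch of a one-compartment model with first-order absorption and elimination, of the kind described above, is shown below. The dose, half-life, absorption rate constant and volume of distribution are hypothetical and would be replaced with chemical-specific PK data in an actual assessment.

import math

# One-compartment PK sketch with first-order absorption and elimination.
# All parameter values are hypothetical placeholders.

dose_ug = 100.0                        # absorbed oral dose
vd_l = 15.0                            # volume of distribution
half_life_h = 6.0                      # elimination half-life
ka_per_h = 1.0                         # first-order absorption rate constant
ke_per_h = math.log(2) / half_life_h   # elimination rate constant from half-life

def blood_conc(t_h):
    """Blood concentration (ug/L) at time t (hours) after a single dose."""
    return (dose_ug / vd_l) * (ka_per_h / (ka_per_h - ke_per_h)) * (
        math.exp(-ke_per_h * t_h) - math.exp(-ka_per_h * t_h))

for t in (1, 4, 8, 24):
    print(f"t = {t:2d} h: predicted concentration = {blood_conc(t):.2f} ug/L")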
PBPK Models
PBPK models represent an important class of dosimetry models that assessors can use to predict
the internal dose at target organs for risk assessment applications. PBPK models consist of a
series of mathematical algorithms that represent biological tissues and physiological processes in
the body and simulate chemical ADME. The internal dose replaces the administered dose for the
derivation of quantitative dose-response relationships. When the PBPK modeling is reliable, the
move to the internal dose improves inter- and intraspecies extrapolations because differences in
ADME across species and across individuals are removed from the assessment. This reduces the
uncertainty in predictions of human dose-response and is one reason for the growing use of
PBPK models in scientific and regulatory assessments. Characterizing uncertainty in risk
assessments based on PBPK model results compared with uncertainty in results based on
administered dose is an important and active research area (U.S. EPA 2006a). In some exposure
evaluations, the Agency might incorporate exposure and dose modeling uncertainties within a hierarchical
Bayesian framework (Tornero-Velez et al. 2010). EPA
suggests considering the following questions when using PBPK models in risk assessments.
• Are data available on tissue-specific kinetics (e.g., distribution) for all relevant tissues,
physicochemical properties of the chemical and chemical-specific ADME?
• Does the PBPK model contain a compartment associated with the target tissue, contain
the target tissue or identify a surrogate for the target tissue?
• Are the physiological parameter values defensible (i.e., within the known plausible range
of the organism)?
• Have the models undergone a thorough evaluation of their structure, implementation,
appropriate application domain and predictive capability (U.S. EPA 2006a)?
PBPK models have been developed to address age-related changes in physiology (Loccisano et
al. 2013; McLanahan et al. 2014; Wu et al. 2016). These models, often referred to as lifestage
models, address physiological changes associated with pregnancy (McLanahan et al. 2014) and
the growth and development of infants and children (El-Masri et al. 2016; Luecke et al. 2007).
These modeling efforts also seek to address the issue of age-related changes in the activity of
enzymes associated with Phase I and Phase II metabolism of chemicals (Hines 2013). An
example of a model with age-related changes in ADME is the IEUBK (Integrated Exposure
Uptake Biokinetic) model (U.S. EPA 1994a).
6.2.6. High-Throughput Exposure Models
The recently developed ToxCast program has resulted in the generation of in vitro toxicity and
bioactivity data on thousands of chemicals in commerce. The ToxCast data have been used to set
screening levels of systemic dose associated with minimal levels of bioactivity (Thomas et al.
2013). These emerging findings form the basis for potentially prioritizing the need for animal
testing or the need for refined exposure estimates. Using these dose-bioactivity datasets to
prioritize requires screening estimates of the aggregate exposures to the chemicals. To meet this
need, a new type of human exposure model is being developed that generates high-throughput
screening estimates of aggregate exposures (Isaacs et al. 2014; Shin et al. 2015; Wambaugh et al.
2013; Wambaugh et al. 2014). The hallmark of the high-throughput screening models is that they
accept reduced model complexity, and a corresponding increase in model uncertainty, in exchange for models that can
be applied rapidly to thousands of chemicals (many of which have minimal exposure-related
information).
The SHEDS-High Throughput model (SHEDS-HT) is an example of this type of model. Based
on probabilistic methods and algorithms developed for earlier models in the SHEDS family of
software, the algorithms in SHEDS-HT reduce the input data demands and run times of the
earlier SHEDS models, while maintaining critical features and inputs that influence variation in
exposure (Isaacs et al. 2014). An initial effort applied SHEDS-HT to 2,507 organic chemicals
associated with consumer products and agricultural pesticides. The model addressed exposure
associated with the use of commercial products (nearfield sources) and dietary exposures from
agricultural pesticide use. The SHEDS-HT approach has the advantage of generating estimates
of the distributions of aggregate exposures across populations of different ages.
In addition to SHEDS-HT, Wambaugh et al. (2014) have proposed high-throughput screening
heuristic models of exposure. This approach has produced predictive models of the median
aggregate exposures to chemicals in the general population based on chemical use information
and the physical and chemical properties of the chemicals. EPA (U.S. EPA 2014e) also proposed
a framework for using estimates of exposures inferred from the National Health and Nutrition
Examination Survey biomonitoring program to evaluate and calibrate estimates from multiple
high-throughput models to form consensus high-throughput exposure predictions.
6.3. Evaluation of Models
The Agency defines model evaluation as the process that generates information during model
application to determine whether the model and its analytical results are of sufficient quality to
serve as the basis for a decision (U.S. EPA 2009d). Similarly, the National Academy of Sciences
defines model evaluation as the process of deciding whether and when a model is suitable for its
intended purpose. The Academy stipulates this process is not a strict validation or verification
procedure. Instead, it provides an objective assessment of the model's performance for the stated
purpose and increases the understanding of model strengths and limitations (NRC 2007). EPA's
Guidance on the Development, Evaluation, and Application of Environmental Models (U.S. EPA
2009d) discusses model evaluation versus validation versus verification.
Model evaluation is a multifaceted activity including peer input, corroboration of results with
data (e.g., other model predictions, actual measurements or other proxies such as biomonitoring
data), quality assurance/quality control (QA/QC) checks and uncertainty and sensitivity analyses
(NRC 2007). This process compares model results with data as an independent
test of how well the model represents actual conditions. One consideration is how close the
predicted values (based on either deterministic model estimates or various statistics and
percentiles of more advanced probabilistic models) are to observational data. Evaluation also
considers the degree to which the basis of the model is generally accepted science and
computational methods. Whether the model fulfills its designed task and how well it
approximates observed conditions are also components of model evaluation. For example,
evaluating a fate and transport model that estimates concentrations at an exposure point might
include verifying that model equations appropriately represent the transport and transformation
concepts, the computer code is free from error (by comparing the model output with data from
laboratory microcosms) and modeling results are comparable to field data under various
conditions and for various chemicals.
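As one simple, hypothetical illustration of corroborating model results with data, the sketch below compares paired model predictions and field measurements using a few common performance statistics (mean bias, root-mean-square error and the fraction of predictions within a factor of 2 of the observations). The paired values are placeholders, not results from any actual model.

import math

# Hypothetical paired values: modeled vs. measured exposure concentrations (ug/m3)
predicted = [1.8, 0.9, 3.4, 2.2, 0.5, 4.1]
observed  = [2.0, 1.1, 2.9, 2.5, 0.4, 5.0]

n = len(predicted)
mean_bias = sum(p - o for p, o in zip(predicted, observed)) / n
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
within_factor_2 = sum(1 for p, o in zip(predicted, observed) if 0.5 <= p / o <= 2.0) / n

print(f"Mean bias: {mean_bias:.2f} ug/m3")
print(f"RMSE: {rmse:.2f} ug/m3")
print(f"Fraction of predictions within a factor of 2: {within_factor_2:.0%}")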
Figure 6-3 illustrates the iterative use of models along with monitoring data from observational
human exposure measurement studies to evaluate and refine study designs and to inform
planning of future observational human exposure measurement studies. The single-headed
arrows indicate steps in the two processes (modeling and monitoring) while the double-headed
arrows denote the exchange of information between the two activities.
Figure 6-3. Iterative Use of Measurements and Models
[Figure not reproduced: two parallel flow diagrams. Measurements: hypothesis-based problem definition and conceptual design, design study, conduct study, evaluate results, transfer results. Models: select model, test and calibrate, evaluate and refine, transfer results. Double-headed arrows indicate the exchange of information between the measurement and modeling activities.]
Source: Ozkaynak (2009)
Although complex computational models typically cannot be validated, module-specific
predictions can be evaluated against available measurements or alternative model predictions
(NRC 2007). Consequently, many key components of EPA's exposure assessment models have
been evaluated using various approaches and comparing results to available measurements.
Approaches include comparing (1) the structure, model inputs and results of one model to
another, (2) modeled estimates with measured or field data and (3) modeled estimates with
biomonitoring data. Comparing predictions against field data can reveal uncertainties and
identify missing pathways, leading to the subsequent refinement of model selection and helping
design field studies to fill critical data gaps (Ozkaynak 2009).
Model evaluation identifies a model's strengths and limitations and
its most critical parameters and assumptions. Such an evaluation not only indicates the
conditions under which a simulation will be acceptable and accurate for its intended purpose, but
also the conditions under which using the model is unacceptable.
6.3.1. Soundness of Assumptions, Methods and Conclusions, Appropriateness
Peer input provides an independent evaluation and review of models used in an exposure
assessment. The purpose of model peer input is to evaluate a model's assumptions, to determine whether sound
scientific principles underlie the methods and conclusions derived from the model and to check the
scientific appropriateness of the model for informing a specific regulatory or risk management
decision (U.S. EPA 2009d). The latter objective is particularly important for applications of
existing models for purposes other than those intended in their initial design. Researchers and
practitioners in academia, consulting, private industry and state and local governments—both
nationally and internationally—frequently use models and sometimes collaborate in their
development. A critical consideration in using models is transparency, including providing
rigorous model documentation and promoting unfettered communication among an exposure
assessor, modeler and risk manager/decision maker in the application of the model to a specific
problem.
6.3.2. Attainment of Quality Assurance Objectives
For a chosen model, an exposure assessor determines whether the required inputs are
available and whether all required parameters are obtainable or have reasonable, accessible default values.
After running the model, an exposure assessor evaluates whether the model
outputs meet the exposure assessment goal(s) and data quality objectives. If they do not, the
model parameters might need adjustment, or a different model might need to be selected and tested. All input data
need to meet data quality acceptance criteria (see Section 5.3.2).
The exposure assessor also conducts a data quality assessment to assess the type, quantity and
quality of data to verify that the planning objectives, QA project plan components and sample
collection procedures are satisfied and to confirm the data are suitable for their intended purpose.
EPA's Quality Management Tools - Data Quality Assessment website describes EPA's five-step
process for data quality assessment.
6.3.3. Qualitative and Quantitative Model Calibration
As a standard practice, an exposure assessor verifies model operation and results. Some models
might need precalibration for use in subsequent exposure assessments. Calibration is the process
of adjusting selected model parameters within an expected range until the differences between
model predictions and field observations are within selected criteria (U.S. EPA 2009d).
Calibration accounts for spatial variation and temporal variation that the model formulation does
not represent; functional dependencies of parameters that are unquantifiable, unknown or not
included in the model algorithms; and extrapolation of laboratory measurements to field
conditions.
6.3.4. Model Uncertainty and Sensitivity Analyses
An exposure assessor acknowledges and characterizes important sources of uncertainty in
modeled estimates, either qualitatively, quantitatively or both. Uncertainty is the lack of precise
knowledge, either qualitative or quantitative, and can refer to the limited knowledge about the
factors affecting exposure and adequacy of model outputs for decision making. An exposure
assessor characterizes the quality of the input data and the resulting limitations on the uses of the
model results. Chapter 8 presents a broader discussion of issues on uncertainty in exposure
assessment.
Models are mathematical representations of processes that quantify how a system behaves in
response to changes in its inputs. Exposure model development entails several choices regarding
what to include and at what level of detail. Model inputs and parameters typically include
various sources of uncertainty. Important sources of uncertainty are measurement error,
statistical sampling error, nonrepresentativeness of data and structural uncertainties in scenarios
and formulations of models.
Scenarios are assumptions regarding the factors that define the scope of the assessment, such as
the averaging time, geographic and temporal scales and the exposed population of interest
(Ozkaynak et al. 2008). Omitting any elements of the scenario of interest from the modeling
approach can bias the estimates. Uncertainty also might stem from extrapolating beyond
conditions for which the model was constructed or calibrated. The extent of verification and
validation, whether the model is extrapolated beyond the range of its evaluation and whether
alternative theories exist upon which alternative modeling approaches could be developed all
influence model uncertainty (Cullen and Frey 1999).
Assessments of input and parameter uncertainty can use advanced statistical methods (e.g.,
Bayesian techniques), conventional Monte Carlo methods or two-dimensional Monte Carlo
methods. Such assessments iterate model simulations using alternative sets of variability
distributions for key inputs and parameters. Typically, the simulations generate a few hundred
alternative exposure prediction distributions that depict the uncertainty around the initial
exposure distribution; for example, the cumulative distribution function for exposures or dose
(WHO 2008). Most Monte Carlo applications performed for predicting exposures capture the
combined variability and uncertainty associated with each input and variable in the model runs.
These results typically represent the variability in the predictions and the extremes in the
alternative uncertainty cumulative distribution functions by capturing the variability and
uncertainty bounds within a one-dimensional simulation (WHO 2008). In models that are more
refined, using two-dimensional Monte Carlo methods distinguishes variability from uncertainty.
Simplified two-dimensional Monte Carlo methods, as Macintosh et al. (1995) describe, are
appropriate, provided one ignores the potential correlations between the statistical parameters of
the variability distributions (e.g., if one were to assume independent uncertainty distributions for
the means and standard deviations of normal distributions). These correlations can be captured,
however, by bootstrap-based uncertainty analysis techniques such as those used in Xue et al. (2006).
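A minimal sketch of the simplified two-dimensional approach is shown below: the outer loop samples uncertain parameters of a variability distribution (here, the log-mean and log-standard deviation of a lognormal intake distribution, treated as independent as discussed above), and the inner loop samples inter-individual variability. All distributions, parameter values and sample sizes are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
n_uncertainty = 200    # outer loop: alternative parameter sets (uncertainty)
n_variability = 5000   # inner loop: simulated individuals (variability)

percentile_95 = []
for _ in range(n_uncertainty):
    # Sample uncertain parameters of the variability distribution (independent, hypothetical)
    mu = rng.normal(loc=np.log(2.0), scale=0.10)   # uncertain log-mean of intake (mg/kg-day)
    sigma = rng.normal(loc=0.6, scale=0.05)        # uncertain log-standard deviation
    # Inner loop: variability across individuals for this parameter set
    intakes = rng.lognormal(mean=mu, sigma=max(sigma, 0.01), size=n_variability)
    percentile_95.append(np.percentile(intakes, 95))

lo, hi = np.percentile(percentile_95, [5, 95])
print(f"95th-percentile intake: 5th-95th uncertainty interval = {lo:.2f} to {hi:.2f} mg/kg-day")

Each outer iteration produces one variability distribution; the spread across iterations characterizes the uncertainty around any chosen percentile of that distribution.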
Some of the fate and transport, human exposure and integrated fate/transport-exposure models
can simulate stochastic processes, which enables assessment of the variability and uncertainty in
modeled estimates and input parameters. Variability refers to the heterogeneity or diversity of
potential exposures in a population. Models that can simulate stochastic processes tend to
be the higher-tier exposure models and some of the integrated fate/transport-exposure models. For
these models, such assessments usually involve performing univariate or multivariate Monte
Carlo analyses, sensitivity analyses or exposure pathway contribution analyses (i.e., analyses to
understand the relative importance of different pathways), or a combination. The most common
model input parameters varied to address variability or uncertainty are exposure factors and
chemical residue values in different environmental media. A key challenge for integrated
fate/transport-exposure models is the quantification of coupled model uncertainties resulting
from propagation of errors from the different model components, linked during an integrated
analysis. Selected case studies have evaluated the impact of this problem (Ozkaynak et al. 2009).
In the context of an exposure assessment, EPA defines sensitivity analysis as "any systematic,
common sense technique used to understand how risk estimates and, in particular, risk-based
decisions, are dependent on variability and uncertainty in the factors contributing to risk" (U.S.
EPA 2001h). In other words, for the purpose of understanding and addressing data uncertainty, sensitivity
analysis is the process of determining which parameter(s) in an exposure assessment drive the
results. An exposure assessor uses these analyses to decide when to stop collecting data or
performing more time-consuming probabilistic analyses. Identifying the parameter(s) that most
influence uncertainty and variability in an exposure assessment's results enables an exposure
assessor to:
• Prioritize sources of data uncertainty, model uncertainty and variability
• Inform risk managers/decision makers and stakeholders about the potential impacts of
risk management decisions
• Support a cost-benefit analysis that weighs the cost of additional analyses or data
collection efforts against the benefit of having a more refined exposure assessment
• Target additional analyses or data collection efforts
• Assist in model development and refinement by highlighting key input parameters.
Sensitivity analyses can range from simple to more complex analyses, including modeling and
regression analysis. Simpler analysis typically involves a one-at-a-time fixed or percentile
scaling approach. Fixed approaches, for instance, might test the variation of results by varying
each input up and down by a factor of 2. In the percentile scaling approach, first a reference or
base (e.g., mean) value of the chosen variables is selected. Then, the modeler conducts two more
runs for each input at lower (e.g., 5th) and upper (e.g., 95th) percentiles of their distributional
range. For each run and simulated individual, modelers determine the mean outputs for each
lower and upper percentile simulation and compare them to the reference or base case results. In
addition, high/low ratios (e.g., the ratio of the 95th percentile result to the 5th percentile
prediction) are calculated. These ratios or ranges provide assessors with the impact and
significance of each influential variable on the exposure modeling predictions. A more
complicated sensitivity analysis approach relies on multivariate methods, whereby, in
probabilistic simulations, the modeler retains each simulated individual's means of input
variables and outputs. The modeler then uses this information in stepwise regression models to
examine the relationship between inputs and outputs of the model to determine the impact of the
key variables in the presence of others that influence the results.
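The one-at-a-time percentile scaling approach described above can be sketched as follows. The simple dose equation and the input percentile values are hypothetical and stand in for whatever model and input distributions an assessment actually uses.

# One-at-a-time percentile scaling sensitivity sketch (hypothetical inputs).
# Each input is set to its 5th and 95th percentile in turn while the others
# stay at their base (mean) values, and high/low output ratios are computed.

def exposure(conc, intake_rate, body_weight):
    """Simple dose equation (ug/kg-day) used only for illustration."""
    return conc * intake_rate / body_weight

inputs = {
    # name: (5th percentile, base/mean, 95th percentile) -- hypothetical values
    "conc (ug/L)":         (0.5, 2.0, 6.0),
    "intake_rate (L/day)": (1.0, 1.8, 3.0),
    "body_weight (kg)":    (55.0, 75.0, 100.0),
}

base = {name: vals[1] for name, vals in inputs.items()}
base_result = exposure(*base.values())

for name, (p5, mean, p95) in inputs.items():
    low = dict(base)
    low[name] = p5
    high = dict(base)
    high[name] = p95
    low_result = exposure(*low.values())
    high_result = exposure(*high.values())
    ratio = max(high_result, low_result) / min(high_result, low_result)
    print(f"{name}: high/low output ratio = {ratio:.2f} (base = {base_result:.3f})")

The inputs with the largest high/low ratios are those that most influence the modeled exposure and are candidates for refined data collection or probabilistic treatment.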
The type of sensitivity analysis needed for each situation depends on the complexity of the
exposure assessment question (U.S. EPA 2001h). The essence of the analysis, however, remains
the same: evaluating how changes in the input parameters change the output. Several
methodological tools and approaches are available for conducting sensitivity and uncertainty
analysis (Cullen and Frey 1999; Mokhtari et al. 2006; Saltelli et al. 2004; WHO 2008). For
example, global sensitivity analysis methods (such as regression, analysis of variance,
categorical and regression trees), the Fourier amplitude sensitivity test and Sobol's method
identify key sources of variability, uncertainty, or both, when many inputs are varied
simultaneously. Appendix A of EPA's Risk Assessment Guidance for Superfund, Volume III:
Part A, Process for Conducting Probabilistic Risk Assessment (U.S. EPA 2001h) and the World
Health Organization's Uncertainty and Data Quality in Exposure Assessment (WHO 2008)
provide detailed guidance on conducting sensitivity analyses. EPA programs might implement
specific procedures for conducting sensitivity analyses; therefore, assessors need to consult with
their programs and follow their SOPs.
This section has focused on sensitivity and uncertainty analyses within a selected model.
Performing sensitivity analyses across different models is also appropriate to determine whether
some models are less sensitive to certain critical parameters. Evaluating uncertainty across models is also
appropriate, for example, to quantify ranges of outputs that reflect the uncertainties of model
assumptions for a given set of inputs (Cullen and Frey 1999; Young et al. 2012).
6.4. Summary
• EPA has adopted the National Research Council definition of a model: "a simplification
of reality that is constructed to gain insights into select attributes of a particular physical,
biological, economic, or social system."
• Model selection involves identifying the type of model needed to meet the risk
management objectives of the assessment and determining the complexity of the model
necessary to reach a decision. The assessor determines whether a model to meet those
needs is available or if a new model needs to be developed.
o Before selecting a model, an exposure assessor defines the assessment objective(s)
and describes how the model addresses assessment questions or hypotheses required
to meet them.
o Based on the problem statement and system conceptualization, an assessor develops a
modeling methodology that stipulates the degree of complexity of the selected
model. Models range in complexity from deterministic to probabilistic to advanced
probabilistic modeling.
o Various categories of models are appropriate for application in exposure assessment:
fate and transport models, human exposure models, integrated fate/transport-
exposure models and dose estimation models.
o In the planning and scoping phase, an assessor develops exposure scenarios from
which to estimate exposure. The scenarios incorporate environmental measurement
data and exposure factor information. The outcome of the scenario evaluation is an
exposure estimate that results from combining concentrations with exposure factors.
o Biomonitoring data can be used to estimate exposure and dose. Forward and
reverse dosimetry are two approaches for using biomonitoring data to provide
quantitative estimates of human exposure to chemicals. Both simple PK models and
PBPK models can estimate exposure.
o High-throughput screening models rapidly generate estimates of aggregate
exposures for typical individuals across large numbers of chemicals. The hallmark of the
high-throughput screening models is that they trade reduced model complexity, and a
possible increase in model prediction uncertainty, for applicability to thousands of
chemicals.
• Model evaluation is the process that generates information during model application to
determine whether the model and its analytical results are of sufficient quality to serve as
the basis for a decision.
o An exposure assessor typically verifies model operation and results both qualitatively
and quantitatively through calibration.
o Important sources of uncertainty include measurement error, statistical sampling
error, nonrepresentativeness of data and structural uncertainties in scenarios and
formulations of models. Sensitivity analysis is "any systematic, common sense
technique used to understand how risk estimates and, in particular, risk-based
decisions, are dependent on variability and uncertainty in the factors contributing to
risk."
CHAPTER 7. PLANNING AND IMPLEMENTING AN
OBSERVATIONAL HUMAN EXPOSURE
MEASUREMENT STUDY
Observational human exposure measurement studies quantify people's exposures to chemical,
physical or biological agents or other stressors found in their everyday environments during their
normal daily activities. Such studies involve measurements of these agents in environmental
media (e.g., air, dust, soil, water); collection of information about the study participants and their
homes, work environments and activities (e.g., use of personal care products, cleaning activities);
and collection of personal exposure (e.g., duplicate diet, dermal) and biomarker samples
(e.g., blood, urine) (Lioy et al. 2005; Sheldon 2010; U.S. EPA 2008c; U.S. EPA 2009a; Zartarian
et al. 2005). These types of studies do not intentionally introduce agents or other stressors into
people's environments.
After a brief overview (Section 7.1), this chapter discusses the major aspects of planning and
implementing an observational human exposure measurement study:
• Designing a study (Section 7.2)
• Planning and executing a pilot study (Section 7.3)
• Planning and executing a full field study (Section 7.4)
• Conducting peer review and completing a final report (Section 7.5).
Section 7.6 summarizes this chapter.
7.1. Overview
Observational human exposure measurement studies enable exposure scientists and risk
assessors to identify agents to which people are exposed; exposure concentrations; important
sources, routes and pathways of exposure; and factors that have the greatest influence on
exposure. Observational human exposure measurement studies also can be conducted within the
context of an epidemiological investigation. Results from observational human exposure
measurement studies support the regulatory work of Agency programs and contribute
significantly to our understanding of human exposures and risks from environmental agents. In
addition, results from these studies have identified major stressors and determined whether
mitigation measures have been successful and whether exposures exceeded regulatory standards.
These studies evaluate exposures; they do not examine absorption, distribution, metabolism and
elimination (ADME) parameters or dose-response. Box 7-1 lists examples of observational
human exposure measurement studies.
Data from an observational human exposure measurement study also can be used to evaluate and
refine exposure and dose models. The data collected in the study, however, need to be
compatible with the data needs of the model of interest. An iterative relationship exists between
the information derived from observational human exposure measurement studies and exposure
and dose models (see Section 6.2). Model evaluation and optimization use the measured data as
model inputs. Exposure and dose models then help identify key data needs that observational
human exposure measurement studies can provide.
Box 7-1. Examples of Observational Human Exposure Measurement Studies
• U.S. EPA (1987c) The Total Exposure Assessment Methodology (TEAM) Study.
> Measured the personal exposures of 600 residents of 7 U.S. cities to toxic and carcinogenic chemicals in air and
drinking water.
• Eskenazi et al. (1999) Exposures of Children to Organophosphate Pesticides and Their Potential Adverse Health Effects
> Center for the Health Assessment of Mothers and Children of Salinas (CHAMACOS) Study. Investigated in utero
and postnatal organophosphate pesticide exposure and its relationship to neurodevelopment, growth and
symptoms of respiratory illness in children.
• Weisel et al. (2005) Relationship between Indoor, Outdoor, and Personal Air Study (RIOPA).
> Large urban air toxics project comprising three studies.
• U.S. EPA (2005f) A Pilot Study of Children's Total Exposure to Persistent Pesticides and Other Persistent Organic
Pollutants (CTEPP).
> Investigated the aggregate exposures of 257 preschool children and their primary adult caregivers to pollutants
commonly detected in their everyday environments.
• U.S. EPA (2003f) National Human Exposure Assessment Survey (NHEXAS).
> Examined the range of environmental pollutants and chemicals (volatile organic chemicals, metals, pesticides) to
which humans are exposed in daily life.
• U.S. EPA (2011a) Detroit Exposure and Aerosol Research Study (DEARS).
> Evaluated how air quality information collected at community monitors represents what people living in
neighborhoods are exposed to every day.
Table 5-6 presents more information about these and other data sources.
In the exposure assessment process, biological measurements often are combined with
environmental, personal and activity pattern data (Bouvier et al. 2005). Use of biological
measurements in exposure assessments is limited because the potential clinical significance of
biomonitoring results (e.g., association with a health effect) has been established for relatively
few chemicals (ECETOC 2005; NRC 2006b). Barr et al. (2006) presented an overview of the
concepts that need to be considered when using biomonitoring data. Sections 5.1.2, 5.4.2 and
6.2.5 presented additional information about the uses and limitations of biomonitoring data.
Although this chapter focuses on observational human exposure measurement studies, exposure
assessments can use other types of research. All research on human subjects that EPA scientists
conduct or the Agency supports must go through several levels of approval. Although the
specific path for review differs slightly depending on the origin of the research, the Human
Subjects Research Review Official (HSRRO) must approve all research projects on human
subjects before any such work can begin. The HSRRO's responsibility is to ensure that all
research studies EPA supports comply with EPA regulations concerning research with human
subjects (40 Code of Federal Regulations [CFR] 26), EPA Policy Order 1000.17 Change A1 and
best practices in ethics. A review and approval by an Institutional Review Board must precede a
review request to the HSRRO.
Requests for all third-party research involving intentional exposure of a human subject to any
substance, submitted to EPA or considered in connection with any EPA decision under the
Federal Insecticide, Fungicide, and Rodenticide Act13 or section 408 of the Federal Food, Drug
and Cosmetic Act,14 must meet the additional standards specified in 40 CFR 26. Among other
provisions, these regulations might require review by the Human Studies Review Board, an
advisory board established under the Federal Advisory Committee Act, depending on the
purpose and initiation date for the research. More information can be found at the Office of the
Science Advisor Human Studies Review Board website. Questions related to third-party pesticide
research may be directed to the Office of Pesticide Programs' Human Research Ethics Review
Officer.
7.2. Study Design
An adequately developed technical study design will address all parts of a study—from
identifying data needs to reporting the results to the study participants. As such, a study design
might include planning that considers
• Budget and logistics
• Data elements
• Sample size
• Criteria for selecting study location
• Eligibility criteria for study participants
• Data quality objectives and sampling and analytical protocols
• Chain of custody, storage and data management
• Community involvement
• Engaging stakeholders
• Human subjects guidelines, informed consent and recruitment
• Sample collection
• Sampling schemes
• Data analysis and database design.
This section addresses these aspects of the study design. Many other published articles and
reports can assist when developing the technical study design (Buckley et al. 2000; Daston et al.
2004; Fenske et al. 2005; Morgenstern and Thomas 1993; Ozkaynak et al. 2005; Rice et al. 2003;
U.S. EPA 1998; U.S. EPA 2001b; U.S. EPA 2005d).
7.2.1. Budget and Logistical Planning
Availability of resources is an important consideration in planning an observational human
exposure measurement study. Available resources, participant burden, types of sampling
methods and specificity of the measurements that need to be collected strongly influence the
number of participants and the types of samples collected and analyzed. Sufficient resources
need to be available to obtain sample sizes sufficient to meet the study objectives. At some point,
the researchers might need to consider how to balance data needs against limited resources.
Achieving such a balance could entail reducing the number of study participants, eliminating
selected analytical procedures or modifying other study elements. When altering the study plan
13Federal Insecticide, Fungicide, and Rodenticide Act, 7 U.S.C. ch. 6 § 136 et seq.
"Federal Food, Drug and Cosmetic Act, 21 U.S.C. ch. 9 § 301 et seq.
to meet resource constraints, the researchers weigh whether the modified study plan is likely to
provide the quality of information necessary to meet the study objectives against other options
for filling data or informational needs.
Planning also needs to consider burdens to both the participants and field technicians. Field
studies typically are complex and might require many field staff and extensive travel. Logistical
planning is essential for conducting the study within a specified period and using available
resources most efficiently.
7.2.2. Identifying Critical Data Elements
For each specific study objective, hypothesis or scientific question, the critical data elements are
those pieces of information that need to be collected to achieve the objective, test the hypothesis
or to answer the question. For example, a study objective might be to determine the associations
between concentrations measured at central site monitors and outdoor residential, indoor
residential and personal exposures for selected air toxics, particulate matter (PM) constituents
and PM from specific sources. To achieve this objective, the following measurements are
required: personal, indoor, outdoor and central site measurements for fine PM (2.5 μm in
diameter and smaller); coarse PM (diameter larger than 2.5 μm and smaller than 10 μm); air
toxics; and other pollutant variables. These measurements need to be stratified by site, season,
housing stock, geographic location and primary source (U.S. EPA 2011a). In addition, models
also might be used for identifying critical data elements and testing hypotheses.
7.2.3. Determining Sample Size for Each Data Element
The sample size needed to address the study objectives, hypotheses or scientific questions is
determined statistically based on the desired outcome. Estimations of sample size (i.e., power
calculations) are necessary to ensure the probability of missing an important difference is small
and to reduce unnecessary cost and waste (Devane et al. 2004). The number of participants
enrolled in a study and the frequency of sample collection from participants often are a
compromise between the available budget and the statistical power the study can achieve
(Dupont and Plummer Jr. 1990; Kulldorff et al. 2004; Woodward 1999).
Estimating effect size helps determine the appropriate sample size for a study. Effect size is a
measure of the differences between or within populations used to assess whether the differences
are statistically significant. If the effect size is large (e.g., the difference in average fish
consumption between recreational anglers and the general population), the differences are easier
to establish, and identifying statistically significant results would require a smaller study
population. Conversely, if the effect size is small (e.g., the difference in fish consumption
between men and women in the general population), establishing the differences is more
difficult, and identifying statistically significant results would require a larger study population.
If the true effect size were known already, variable parameters (e.g., average fish consumption)
would be known and a study would be unnecessary. Data from a pilot study could help
determine the predicted effect size (Devane et al. 2004).
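As a simple, hypothetical illustration of how effect size drives sample size, the sketch below applies the standard normal-approximation formula for comparing two group means. The assumed difference, standard deviation, significance level and power are placeholders that a real study design would set from pilot data and program requirements.

from statistics import NormalDist
import math

# Normal-approximation sample size for comparing two group means (per group):
#   n = 2 * ((z_{1-alpha/2} + z_{power}) / (delta / sd))^2
# All numeric assumptions below are hypothetical.

alpha = 0.05   # two-sided significance level
power = 0.80   # desired statistical power
delta = 5.0    # expected difference in mean fish consumption (g/day)
sd = 12.0      # common standard deviation (g/day)

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
z_power = NormalDist().inv_cdf(power)
effect_size = delta / sd   # standardized effect size

n_per_group = 2 * ((z_alpha + z_power) / effect_size) ** 2
print(f"Standardized effect size: {effect_size:.2f}")
print(f"Approximate sample size per group: {math.ceil(n_per_group)}")

A smaller assumed difference (or a larger standard deviation) shrinks the standardized effect size and drives the required sample size up, which is the tradeoff the paragraph above describes.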
The variability within the defined population also needs consideration when determining sample
size. Much information is available in the peer-reviewed literature on sample size estimations for
studies (Baguley 2004; Dell et al. 2002; Devane et al. 2004; Dupont and Plummer Jr. 1990;
Dupont and Plummer Jr. 1998; Kampman et al. 2003; Kieser et al. 2004; Kraemer et al. 2006;
Rippin 2001; Salganik 2006; Vaeth and Skovlund 2004).
7.2.4. Developing Criteria and Identifying Potential Study Locations
Study location is an integral part of the technical study design and needs to be based on the study
objectives, hypotheses or scientific questions. Criteria can include the location of the population
group of interest, geographic or built environment considerations, the size of the cohort and the
time of year for the study. For example, a study objective to determine exposures to vehicle
exhaust in an urban area requires selection of a study location in an urban environment.
7.2.5. Developing Eligibility Criteria for Study Participants
Eligibility criteria also are essential to the technical study design. Eligibility criteria determine
the type of person selected for an observational human exposure measurement study based on the
study objectives, hypotheses or scientific questions. For example, an observational human
exposure measurement study seeking to understand the variety of fruits and vegetables consumed
by the older adult population in the United States would have very different eligibility criteria
from an observational human exposure measurement study evaluating exposure to vehicle
exhaust associated with bicycle use as a means of transportation in urban areas. Some studies
might be designed to sample a representative portion of some larger population (i.e., random or
probability sampling). Other studies might select participants based on particular activities or
other lifestyle characteristics (i.e., convenience sampling). More information on scientific and
ethical approaches for observational human exposure measurement studies can be found at the
EPA Scientific and Ethical Approaches for Observational Exposure Studies website. Information
also is available in the peer-reviewed literature on design issues for these types of studies
(Adgate et al. 2000; Buck et al. 1995; Callahan et al. 1995; Lebowitz et al. 1995; Marshall 1996;
Pellizzari et al. 1995; Quackenboss et al. 2000; Vojta et al. 2002; Whitmore et al. 2005).
7.2.6. Developing Data Quality Objectives and Identifying Sampling and
Analysis Methods
After the exposure assessor establishes the data quality criteria (see Section 5.3), the assessor
identifies methods available to meet these criteria. Observational human exposure measurement
studies need to follow sampling and analytical protocols that are sufficiently accurate, precise
and sensitive to meet the study objectives, test the hypotheses or answer the scientific questions.
Selected analytical and sample collection methods will have demonstrated and acceptable
performance parameters. Many standard operating procedures (SOPs) are publicly available in
various Agency databases. If existing or adequate methods are not available, study
implementation could depend on additional method development and testing. If method
development is required, an assessor determines acceptable performance parameters and
evaluation criteria before deeming an analytical or sample collection method ready for
implementation.
Sample collection methods are tested and evaluated in small-scale pilot studies either in the
laboratory or in the field. Evaluating the sample collection methods in a pilot study prior to full
study implementation provides an opportunity for an assessor to ensure that the sampling
methods will be accurate, precise and sensitive and to reevaluate modifications, if necessary.
Evaluation of analytical protocols uses reference standards or previously analyzed samples, if
available. A comparison of the known results from a reference standard or previously analyzed
sample with the results of the analytical protocol can help determine the likelihood of success
when following the protocol.
7.2.7. Developing Chain-of-Custody, Storage and Data Management
Procedures
Chain-of-custody forms, storage of materials associated with a study and data management
procedures are all associated with the quality assurance (QA) project plan (U.S. EPA 2001c; U.S.
EPA 2012a). In combination, these procedures accurately track the movement of samples before,
during and after analyzing, storing and transferring the results.
Chain-of-custody forms (e.g., paper or electronic format) document all collection, shipment,
receipt, analysis, processing and handling steps that a sample undergoes. A chain-of-custody
record, initiated in the field, captures the field collection information for the sample and all
subsequent actions performed. EPA provides the Air Pollution Training Institute website for
training on chain-of-custody procedures for samples and data.
Proper sample storage procedures ensure adequate and appropriate storage space for samples
(e.g., freezer, ultra-cold freezer, laboratory space). Storage procedures need to ensure minimal
analyte loss, contamination or degradation during shipment and minimize holding times prior to
analysis. Adequate storage space also might be required for paper forms collected during a study.
Ideally, sample and record storage is secure, with access limited to authorized personnel.
Researchers establish data management procedures to process data effectively so that relevant
data descriptions (e.g., sample numbers, locations, procedures, methods) are readily accessible
and accurately maintained (U.S. EPA 2003a). EPA's Forum on Environmental Measurements
maintains a website with information relevant to the data management process. Section 7.2.13
contains more information on the data analysis plan and database design. Section 5.6 provided
additional guidance on data management.
7.2.8. Engaging the Community
Community involvement is the process of engaging in dialogue and collaborating with members
of the community where the study will take place. Researchers need to define the community for
a particular study clearly and consider the extent of the community involvement for the study.
Involving the community is one way to increase respect for the study participants and to shape
research that addresses the needs and priorities of the community (NRC and IOM 2005; U.S.
EPA 2008c). Community involvement is founded on the belief that people need to know what
the Agency is doing in their community and be able to have input into the decision making
process (U.S. EPA 2001i). Information and suggestions regarding community involvement in the
Superfund process (U.S. EPA 2016e) might be generally applicable to community involvement
in observational human exposure measurement studies. EPA's Superfund Community
Involvement Handbook (U.S. EPA 2016e) provides specific information.
Involving a community offers many advantages. Community representatives bring perspective,
value and competence to a research project. Community representatives also provide a study
with knowledge of community concerns, needs, values and priorities; a history of activism,
leadership and coalition building; and a network of community contacts (NRC and IOM 2005).
Community leaders can help researchers increase acceptance of the study in their community,
ensure that data collection instruments are culturally appropriate, promote enrollment and
increase retention in the study (NRC and IOM 2005). Ethical Considerations for Research on
Housing-Related Health Hazards Involving Children (NRC and IOM 2005) presents more
details on the importance of community involvement.
Community-based participatory research is an approach in which community members are active
partners in all aspects of the study, from the formulation of research questions to the application
of findings. Community members use their knowledge and experience in the community to help
specify the issues for study, develop research questions that are culturally sensitive and apply
study results to help support relevant program and policy development (NRC and IOM 2005).
Community residents can be involved in the research process as participants, parents of
participants, research staff, community consultants, reviewers or members of community
advisory boards (Hough et al. 2006; NRC and IOM 2005; U.S. EPA 2008c; Williamson et al.
2005). Making community involvement a priority helps ensure the research addresses the
concerns, needs and priorities of the community and leads to actions and changes that benefit the
community.
Communication between the research staff and the community is key. The research staff needs to
have a clear understanding of the type of community involvement needed for the particular study
they have proposed and clearly communicate with the community about the benefits of the study.
Although communication with the community is important, it also can be challenging. Many
investigators have published articles on developing and implementing an effective
communication plan (Brauer et al. 2004; Deck and Kosatsky 1999; White et al. 2004;
Williamson et al. 2005). Section 3.1.3 discussed community involvement in the planning and
scoping phase of an exposure assessment.
7.2.9. Engaging Stakeholders
Engaging other stakeholders in addition to community members is necessary when planning an
observational human exposure measurement study. A stakeholder is defined as anyone who has a
stake in the study but is not directly involved in it. Examples of stakeholders include community
residents not involved in the research, community groups, advocacy groups, interested members
of the public, medical organizations, university partnerships, industry groups, nonprofit
organizations, nongovernment and government organizations, states, tribes and the media. The
researchers and the community jointly decide which stakeholders to invite as participants in the
planning process. Beierle (2002) reported that including stakeholder points of view can improve
decisions. Determining which stakeholders to engage in planning an observational human
exposure measurement study depends on the objectives, hypotheses or scientific questions of the
research study and the persons recruited to participate. Stakeholders engaged in planning one
observational human exposure measurement study might not be the same as those engaged in
planning another. The Agency's Stakeholder Involvement & Public Participation at the U.S.
EPA: Lessons Learned, Barriers, & Innovative Approaches reviews how EPA handles
stakeholder involvement and public participation (U.S. EPA 2001i).
During the planning of an observational human exposure measurement study, meetings take
place between the researchers and community members and other stakeholders. These meetings
gather input from the stakeholders and explain the purpose and approach of the study.
Researchers send study announcements sufficiently ahead of planning an observational human
exposure measurement study so that the stakeholders can prepare and actively participate. In
addition, the schedule for developing the technical study design allows ample opportunity for
stakeholder participation. Factoring time into the schedule to allow for public/stakeholder
participation is essential for the success of an observational human exposure measurement study.
EPA's Environmental Justice website presents information on incorporating stakeholders in the
research planning process.
7.2.10. Human Subjects Considerations
EPA has a history of conducting observational human exposure measurement studies to assess
the contact that people have with agents while completing routine activities in their homes and
work environments. Observational human exposure measurement studies often provide the
strongest data available to support regulatory action. Such studies can be complex in their design
and implementation, addressing many scientific and ethical considerations. In conducting these
studies, all scientists (regardless of affiliation) should endeavor to apply the most current
scientifically valid approaches, while recognizing the special responsibilities regarding the
ethical issues that sometimes arise when conducting these studies.
EPA's document, Scientific and Ethical Approaches for Observational Exposure Studies (U.S.
EPA 2008c), provides a template for EPA scientists to conduct scientifically valid observational
human exposure measurement studies, while addressing personal concerns and ethical issues.
The document, developed with guidance and input from experts outside the Agency, addresses
such issues as ensuring the protection of vulnerable groups, protecting the privacy of
participants, maintaining confidentiality, ensuring fair and equitable participant selection,
obtaining informed consent, involving the community and designing strategies for effective
communication. EPA's advisory committee, the Human Studies Review Board, reviewed the
document. Implementing the approaches the document outlines ensures that scientists conduct
observational human exposure measurement studies with attention to the concerns of the
participants. The document contains more information on the history of human subjects research
[see Section 1.2 in (U.S. EPA 2008c)], as do numerous peer-reviewed publications (Emanuel et
al. 2008; Emanuel and Menikoff 2011; Moreno and Sisti 2015; Ndebele 2013; Presidential
Commission for the Study of Bioethical Issues 2011a; Presidential Commission for the Study of
Bioethical Issues 2011b; Resnik 2012; Reverby 2009; Wertheimer 2011).
In addition to complying with 40 CFR Part 26 for studies involving human subjects, the
appropriate Institutional Review Board (IRB) needs to approve the study protocol and all
associated documentation. As EPA Policy Order 1000.17 Change A1 directs, approval normally
involves an IRB from each participating organization and final approval by HSRRO, located in
the EPA Office of the Science Advisor. The HSRRO is responsible for reviewing and approving
human subjects research at EPA before the recruitment of participants into a study. The Office of
the Science Advisor website provides more information on this process. Additional information
on human subjects research and IRBs is available on the U.S. Department of Health and Human
Services' Office for Human Research Protections website and on the National Institutes of
Health's Bioethics Information Resources website. When the number of participants in an
observational human exposure measurement study is 10 or greater, Office of Management and
Budget (OMB) review also is required. More information on the OMB process is available on
the HHS Office of the Chief Information Officer website.
Effective recruitment is essential to the success of a study. Recruitment methods vary, depending
on the study design and the targeted participants, especially when English is not the primary
language of the participants. In representative, population-based sample designs, conducting in-
person or telephone recruitment for all selected households or individuals might be necessary.
For study designs that are not population based, recruitment methods include but are not limited
to advertisements in newspapers or magazines, advertisements on radio or television, word-of-
mouth, social media, endorsements from community leaders and message boards at grocery
stores, religious establishments or community centers (U.S. EPA 2008c). Recruitment plans and
materials are subject to IRB and HSRRO review and approval. Recruitment is most effective
when community leaders are engaged in the recruitment process (NRC and IOM 2005). Many
published papers discuss the recruitment process (Cabral et al. 2003; Sexton et al. 2003).
In addition, researchers need to obtain informed consent before a person can participate in a
research study. The appropriate IRBs need to approve the informed consent documents and
process prior to their use in the field. In studies where children old enough to have some
understanding of the study are the participants, researchers need to obtain their assent in addition
to the consent of their parents or guardians. The age at which a child can provide assent to
participate in a research study varies on a case-by-case basis and the study principal investigator
is advised to consult the IRB and HSRRO. Many peer-reviewed publications address the
challenges associated with informed consent (Crowhurst and Dobson 1993; IOM 2004; Mammel
and Kaplan 1995; Miller et al. 2004; Wendler 2006; Wendler and Shah 2003; Whittle et al.
2004).
Confidentiality and privacy also are key considerations in studies involving human subjects. The
Privacy Act of 1974,15 the E-Government Act of 2002,16 the Federal Information Security
Management Act,17 the Health Insurance Portability and Accountability Act (2003 Privacy Act)18
and OMB policy and guidance outline restrictions and requirements associated with using
personally identifiable information. EPA developed the Agency's Privacy Policy (U.S. EPA
2005a) to ensure compliance and outline Agency requirements for safeguarding the collection,
access, use, dissemination and storage of personally identifiable information. The Agency's
Privacy Policy defines personally identifiable information as "any information about an
individual maintained by an agency, which can be used to distinguish, trace, or identify an
individual's identity, including personal information which is linked or linkable to an individual"
(U.S. EPA 2005a). As such, researchers must safeguard data collected during an observational
human exposure measurement study and linked or linkable to an individual. More information
and resources are available at the EPA Policy 2151.0: Privacy Policy website.
Determining appropriate compensation or incentives for participants in a research study can be
complex. Compensation or incentives can include offering to pay people for their time and effort
for participating in a study, but little guidance exists regarding an appropriate level of
compensation (U.S. EPA 2008c). Compensation or incentives can take various forms, including
monetary payments (e.g., cash, gift certificates), nonmonetary payments (e.g., gifts, valuable
information, study-related services), reimbursement for expenses associated with participating in
15 Privacy Act of 1974, 5 U.S.C. ch. 5 § 552a.
16 E-Government Act of 2002, 44 U.S.C. ch. 36 § 3601 et seq.; 44 U.S.C. ch. 35, subch. III § 3541 et seq.
17 Federal Information Security Management Act, 44 U.S.C. ch. 35, subch. III § 3541 et seq.
18 Health Insurance Portability and Accountability Act, Pub. L. 104-191.
the study (e.g., mileage, parking) or nothing (i.e., altruistic approach). Compensation and
incentives for participants are subject to IRB and HSRRO review and approval. Numerous
research articles address the issues associated with compensating research participants
(Ackerman 1989; Dickert et al. 2002; Erlen et al. 1999; Fry et al. 2005; Grady et al. 2005; Iltis et
al. 2006; NRC and IOM 2005; Russell et al. 2000; VanderWalde 2005; Weise et al. 2002).
7.2.11. Samples To Be Collected—Environmental, Biological, Personal,
Exposure Factors and Questionnaires
Researchers collect environmental, biological, personal and exposure factor data and
questionnaire information to understand potential exposures. They select environmental samples
that account for exposure through the relevant routes and pathways based on the study
objectives, hypotheses or scientific questions. In some cases (e.g., pesticides), measuring all
relevant media is important, including air, water, food, dust and soil. In other cases, measuring
different types of analytes in one medium might be important (e.g., PM and other criteria
pollutants in air). The method needs to capture the appropriate timeframes of interest, be
sufficiently sensitive and be specific for the analytes of interest at anticipated or potential
exposure levels. Field data collection sheets are used to document supporting information about
each sample collected (e.g., temperature, humidity, time of day, day of week, sample collection
location).
In addition to environmental samples, collecting biological, personal, exposure factor data and
questionnaire information also might be necessary. Personal samples directly relate to the
individual participant, resulting in an individualized sample. For example, a personal air monitor
collects an air sample from a participant's breathing zone. A duplicate diet sample is a personal
sample that collects an exact copy of all foods and beverages the participant consumes. Exposure
factor information includes information on contact rates and time-activity information. Time-
activity information captures all locations where the participant has spent time and all activities
in which the participant has engaged during the period of interest that could account for
exposures. Researchers should take care not to influence participant behavior during sample
collection (e.g., additional house cleaning, eating foods that differ from the usual diet).
Table 5-6 presented a discussion of EPA's Consolidated Human Activity Database and
other data sources. Measurements on biological samples determine the absorbed dose of the
chemical of interest. Sections 5.1.2, 5.4.2 and 6.2.5 provide more discussion of the uses and
limitations of biomonitoring.
Questionnaires collect information on parameters that researchers cannot measure any other way,
such as household demographic information or occupation (see Section 5.4.4). The Draft
Protocol for Measuring Children's Non-Occupational Exposure to Pesticides by All Relevant
Pathways (U.S. EPA 2001b) describes methods and approaches for estimating exposure. This
protocol document is helpful for identifying the environmental, biological and personal samples;
activity pattern data; and questionnaires needed for an observational human exposure
measurement study. Two other frameworks that might be useful tools for developing the
technical study design for an observational human exposure measurement study include EPA's
National Center for Environmental Assessment's A Framework for Assessing Health Risk of
Environmental Exposures to Children (U.S. EPA 2006d) and the International Life Sciences
Institute's framework for children's risk assessment (Olin and Sonawane 2003).
7.2.12. Sampling Scheme
Researchers develop the sampling scheme after identifying the study objectives, hypotheses or
scientific questions, writing the technical study design and specifying the types of samples to
collect. The sampling scheme systematically details the samples to collect in the field and
usually includes the time, location and any other sample collection logistics (such as sample
collection order and preservation methods). The sampling scheme might include information for
both field samples and QC samples (see Section 5.3.2). The QC samples normally collected in a
field study include field blanks, field controls and duplicates. Field blanks are prepared in the
field to assess sample contamination from materials and handling methods. Field controls are
prepared in the laboratory, taken to the field, returned to the laboratory and analyzed to assess
potential losses of target compounds resulting from materials and handling methods. Duplicate
samples serve to assess collection and analytical precision. The QA project plan contains the
details on the QA associated with the field study, including the QC samples and procedures for
assessing the accuracy and precision of the sample collection and laboratory analysis. General
information on the QA project plan can be found on EPA's website (Section 5.3.2 presented
more details about these requirements).
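To illustrate how duplicate samples can be used to assess collection and analytical precision, the following sketch computes the relative percent difference (RPD) for hypothetical duplicate pairs. The example is written in Python; the data values and the 25 percent acceptance limit are illustrative assumptions, and actual acceptance criteria would be specified in the QA project plan.

    def relative_percent_difference(primary, duplicate):
        # RPD between a field sample and its co-located duplicate, in percent.
        mean = (primary + duplicate) / 2.0
        if mean == 0:
            return 0.0
        return abs(primary - duplicate) / mean * 100.0

    # Hypothetical duplicate pairs (concentrations in ug/m3) and an illustrative
    # acceptance limit; neither reflects an Agency requirement.
    duplicate_pairs = [(12.1, 11.4), (3.2, 3.9), (0.85, 0.80)]
    RPD_LIMIT = 25.0

    for primary, duplicate in duplicate_pairs:
        rpd = relative_percent_difference(primary, duplicate)
        flag = "acceptable" if rpd <= RPD_LIMIT else "flag for review"
        print(f"primary={primary}, duplicate={duplicate}, RPD={rpd:.1f}% ({flag})")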
7.2.13. Data Analysis Plan and Database Design
The data analysis plan describes how researchers analyze the collected data to address the
objectives, hypotheses or scientific questions of the research study. The data analysis plan
includes the objectives, hypotheses or scientific questions; the data relevant for evaluating each
objective, hypothesis or scientific question; and the statistical analyses that will be performed on
the data. Researchers write the data analysis plan in conjunction with the technical study design
and sampling scheme. They also use the plan to design the database to house the sampling
information, raw data and analysis results. Overall, details about the data requirements needed to
address the objectives, hypotheses or scientific questions are essential to data analysis and
necessitate comprehensive documentation.
The database houses the measurement data and all supporting documentation associated with
sample collection and analysis. Its development is a critical component of the study, completed
as part of the planning and scoping process (see Section 3.1). The Agency provides general
guidance on designing, implementing and using databases (U.S. EPA 2018c; U.S. EPA 2018d),
but in general, a database is specifically designed for each study with the help and guidance of
the study's database manager. EPA's Developer Central Data website and EPA's Forum on
Environmental Measurements, Collection of Methods website include Agency guidance.
To be an effective source of information, the database needs to be relational and searchable, with
sufficient documentation to identify samples, corresponding measurements and any annotations
associated with sample collection and analysis. Other database requirements depend on the type
of data the database will include and the purpose of the database. Little published information
exists in the peer-reviewed literature on relevant database design elements; however, a handful of
papers suggests the need for well-organized databases (Detenbeck et al. 2005; Mills et al. 2001;
Sexton et al. 1994; Van Dyke et al. 2001).
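As one minimal sketch of such a relational structure, the Python example below uses the standard sqlite3 module to link a table of samples to a table of measurements through a shared sample identifier. The table and column names are hypothetical; an actual study database would be designed with the study's database manager and documented in the data analysis plan.

    import sqlite3

    # Hypothetical two-table layout: field samples and their laboratory measurements,
    # linked by sample_id so the data remain relational and searchable.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE samples (
        sample_id      TEXT PRIMARY KEY,
        participant_id TEXT,
        medium         TEXT,   -- e.g., air, dust, drinking water
        collected_on   TEXT,   -- ISO date
        field_notes    TEXT    -- annotations from the field data collection sheet
    );
    CREATE TABLE measurements (
        measurement_id INTEGER PRIMARY KEY AUTOINCREMENT,
        sample_id      TEXT REFERENCES samples(sample_id),
        analyte        TEXT,
        value          REAL,
        units          TEXT,
        qc_flag        TEXT    -- e.g., field blank, duplicate, below detection limit
    );
    """)

    # Illustrative records, followed by a simple linked query across both tables.
    conn.execute("INSERT INTO samples VALUES ('S-001', 'P-17', 'air', '2019-06-03', 'window open')")
    conn.execute("INSERT INTO measurements (sample_id, analyte, value, units, qc_flag) "
                 "VALUES ('S-001', 'PM2.5', 8.4, 'ug/m3', NULL)")
    for row in conn.execute("""SELECT s.participant_id, s.medium, m.analyte, m.value, m.units
                               FROM measurements m JOIN samples s USING (sample_id)"""):
        print(row)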
Usually, researchers test the database design with the data generated during the pilot study
(described in Section 7.3). Testing the database before the full field study is imperative because
making design changes before attempting to populate the database with numerous data points is
easier than making changes to a populated database. Testing the database with the pilot study
data also ensures that the database is designed to meet the study specifications (e.g., relational,
searchable).
7.3. Planning and Executing a Pilot Study
In preparing for an observational human exposure measurement study, planning and executing a
pilot study are crucial. A pilot study is subject to the same requirements for obtaining informed
consent, and IRB and HSRRO review and approval, as the full field study. The purpose of the
pilot study is to evaluate all methods selected for use in the field study, including the recruitment
strategy, field collection and analytical methods. Typically, the pilot study includes only a few
participants and serves to evaluate the field readiness of the research personnel. The results and
lessons learned from the pilot study also identify needed changes in the implementation plan
before the full field study starts. Researchers address any special concerns or issues raised during
the pilot study so that they do not affect the full field study. Conducting a pilot study improves
the chances for success of the field study. For example, the research team conducted multiple
pilot studies before implementing the Total Exposure Assessment Methodology Study (U.S.
EPA 1987b), and pilot studies preceded the National Children's Study, as described at the
National Children's Study website.
7.3.1. Community and Stakeholder Involvement in the Pilot Study
Once the researchers engage the community and stakeholders in planning, they define their roles
and responsibilities in the pilot study. Examples of roles and responsibilities include serving as a
consultant in planning the study, contributing to the communication plan, reviewing the pilot
study results and providing resources.
Implementing the communication plan (see Section 7.3.3) is a critical component of the pilot
study. Research staff, the community and other stakeholders planning the design of an
observational human exposure measurement study collaboratively develop the communication
plan. The communication plan includes the opportunity for a debriefing after the pilot study is
completed. A debriefing provides all members of the field study team (e.g., research staff,
community members, other stakeholders) an opportunity to give direct feedback about their
experiences with the pilot study, including an in-depth view of what went well and any needed
changes. A debriefing with the community, other stakeholders and research staff identifies the
lessons learned and offers details needed for the implementation plan for the full study.
7.3.2. Implementation Plan for the Full Study
The study design delineates the study components and the implementation plan. The
implementation plan describes the study execution and contains all the details for conducting a
full observational human exposure measurement study successfully. The pilot study provides the
necessary information for refining both documents prior to the full study.
7.3.3. Communication Considerations
A communication plan is essential for successfully disseminating information about the pilot
study and the full study. As with communication in other exposure-related activities, the study
organizers need to involve the communication staff early in the planning process (see
Section 3.1.3 and Chapter 9). The communication plan details what information to exchange,
with whom and in what format. For example, the communication plan addresses the timely
reporting of results to the study participants. Numerous references discuss the importance of
reporting results to the study participants and the usefulness of communicating with research
participants in the format they specify (Ackerman and Proffit 1995; Brauer et al. 2004; Collins et
al. 2004; Covello 1989; Herrier and Boyce 1995; Hoffrage et al. 2000; Kasperson 1986; Keeney
and von Winterfeldt 1986; Parkin 2004; Payne-Sturges et al. 2004; Quandt et al. 2004; Schulte
and Singal 1989; Sharlin 1986; Slovic 1986).
The communication plan also specifies how research staff will discuss information with the
community, the media, public health officials, members of the scientific community and other
stakeholders, as they deem appropriate. Researchers need to consult the community and
stakeholders about what information they would like to receive and in what format.
Chapter 9 provides guidance on developing a comprehensive communication plan for exposure
assessments, including how to present data and results effectively.
7.4. Planning and Executing a Full Field Study
All components of the pilot study discussed in this chapter help in planning and executing the
full field study. Each component, as refined based on pilot study findings, is necessary for the
full field study:
• The study goals and objectives, data quality objectives, sampling needs, data
management guidelines, location and participant criteria and human subjects
considerations and needs are detailed in the study design
• Data quality and data deliverables for the full field study are specified in the technical
study design and implementation plan
• Sampling and analysis methods are outlined in the method protocols and SOPs
• QA/QC issues are specified in detail in the QA project plan (see Section 5.3.2)
• A relational database is used to organize the collected data (multimedia samples and
questionnaire information) for analysis
• The data analysis plan specifies the analyses of the collected data in relation to the study
objectives, hypotheses or scientific questions
• The communication plan—developed by the project team, community members and
other stakeholders—specifies what data to convey, to whom and in what format.
7.5. Peer Review and Completion of the Final Report
Peer review is an integral component of the design, implementation and completion of an
observational human exposure measurement study. The peer-review process ensures the data
generated and information disseminated about the study meet the highest quality and ethical
standards (U.S. EPA 1998; U.S. EPA 2000b; U.S. EPA 2015c). All documents associated with
the study can be subject to peer review, but typically only the study design and final documents,
such as reports and journal articles, are peer reviewed. EPA's Peer Review Handbook, 4th
Edition (U.S. EPA 2015c) provides a comprehensive guide for organizing and conducting peer
reviews in accordance with EPA's updated peer review and peer involvement policy statement.
7.6. Summary
• Observational human exposure measurement studies enable exposure scientists and
risk assessors to identify agents to which people are exposed; exposure concentrations;
important sources, routes and pathways of exposure; and factors having the greatest
influence on exposure. The studies can help evaluate and refine exposure and dose
models.
• An adequately developed technical study design addresses all parts of a study. It might
include planning that considers budget and logistics; data elements; sample size; criteria
for selecting study location; eligibility criteria for study participants; data quality
objectives; chain-of-custody, storage and data management; community and stakeholder
involvement; human subjects guidelines; informed consent; recruitment; sample
collection; sampling schemes; and data analysis and database design.
• Planning and executing a pilot study can be crucial when preparing for an
observational human exposure measurement study. A well-executed pilot study that
involves communities and stakeholders, communicates the full study implementation
plan and articulates a carefully considered communication plan for successfully
disseminating information about the pilot study and the full study greatly increases the
odds of successful completion of the full study.
• Peer review is integral throughout the design, implementation and completion of an
observational human exposure measurement study.
CHAPTER 8. UNCERTAINTY AND VARIABILITY FOR
EXPOSURE ASSESSMENTS
Distinguishing between the uncertainty and variability in the data and the uncertainty in the
decisions is critical when conducting an exposure assessment and communicating its results.
Data uncertainty refers to incomplete or incorrect information. Variability refers to true
differences in attributes stemming from heterogeneity or diversity in an individual or population.
Decision uncertainty includes all uncertainties due to the data and to other choices that affect the
exposure assessment or influence the decisions made. Data uncertainty and variability are
components of decision uncertainty (NRC 2009).
Evaluating uncertainty and variability is essential for developing a robust exposure assessment
that provides information risk managers/decision makers need. Such evaluations can range from
using simple screening methods to conducting complex statistical analyses. The information
from these evaluations can inform decisions to reduce uncertainties in an exposure assessment
further.
Effectively communicating the uncertainties in an exposure assessment is a challenging but
critical part of any exposure assessment. Ensuring that stakeholders understand the various
uncertainties and the compounded effects of the uncertainties on the results and the decisions
made facilitates communication between groups.
To help an assessor evaluate uncertainties and their effects on an exposure assessment, this
chapter:
• Defines data uncertainty and variability, describes their differences and distinguishes
them from decision uncertainty (Section 8.1)
• Outlines the basic considerations that influence an uncertainty and variability evaluation
(Section 8.2)
• Describes a process for conducting such an evaluation using a tiered approach
(Section 8.3)
• Introduces considerations for effectively communicating information about uncertainty
and variability to risk managers/decision makers and stakeholders (Section 8.4).
Section 8.5 summarizes this chapter.
EPA consistently has acknowledged the need to characterize uncertainty in exposure and risk
estimates. This history is described in more detail in National Research Council (NRC) and EPA
documents (Executive Order No. 13045 1997; Hansen 1997a; NRC 1983; NRC 1989a; NRC
1994; NRC 1996; U.S. EPA 1986a; U.S. EPA 1986b; U.S. EPA 1986c; U.S. EPA 1986d; U.S.
EPA 1986e; U.S. EPA 1992c; U.S. EPA 1995a; U.S. EPA 1996e; U.S. EPA 1997a; U.S. EPA
2001h; U.S. EPA 2004c; U.S. EPA 2011g; U.S. EPA 2014i; U.S. EPA 2019d). In addition, the
World Health Organization's International Programme on Chemical Safety developed guidance
for characterizing and communicating uncertainty in exposure assessment and has emphasized
the importance of addressing both data and decision uncertainty in its 10 guiding principles for
an uncertainty evaluation (WHO 2008). These documents focus on reducing and characterizing
data uncertainties, while the 2013 Institute of Medicine report, Environmental
Decisions in the Face of Uncertainty (IOM 2013), emphasizes the need to understand and
characterize uncertainties that derive from the many other components of an environmental
assessment. Such components include those pertaining to subjective judgments and choices
about how risks are calculated and expressed.
This chapter discusses uncertainty and variability concerns associated with the entire exposure
assessment. Chapter 5 briefly described uncertainty and variability concerns associated with the
datasets used in an exposure assessment. Chapter 6 briefly described uncertainty and variability
concerns associated with models. Other chapters mention uncertainty and variability specific to
the topic of discussion.
8.1. Terminology
Many types of uncertainty exist. Sections 8.1.1, 8.1.2 and 8.1.3 discuss data uncertainty, decision
uncertainty and variability, respectively, in detail. Box 8-1 lists uncertainty and variability
terminology relevant to this document. Table 8-1 elaborates on the errors that can result from
these types of uncertainty.
8.1.1. Data Uncertainty
EPA defines data uncertainty as "a lack of precise knowledge as to what the truth is, whether
qualitative or quantitative" (U.S. EPA 2004c). Using more complete or "better" data in an
exposure assessment often reduces data uncertainty. Uncertainty analysis is the process of
identifying the sources of data uncertainty in an assessment and the magnitude and direction of
the resulting error (WHO 2004). Uncertainty analyses range from qualitative discussions of the
uncertainty to analyses that use quantitative techniques, such as probabilistic analysis, to describe
uncertainty by presenting a range of possible exposures and risks.
8.1.2. Decision Uncertainty
Regardless of whether an exposure assessor can reduce data uncertainty, decision uncertainty
remains for risk managers/decision makers. Data uncertainty is a scientific problem; decision
uncertainty involves both the uncertainties of the scientific problem and the uncertainties
involved when risk managers/decision makers choose how to formulate and execute the exposure
assessment. When combining multiple pieces of data to produce a single exposure assessment or
choosing appropriate alternatives among decision options, the decision maker/risk manager
wants to understand the compounded effect of all pieces of data. The compounded effect can
result from (1) how the assessor formulated the problem, (2) the expert and stakeholder
contributions to problem formulation and (3) the extent to which expert and stakeholder input
could change the assessment result. In decision uncertainty, the risk manager/decision maker
seeks to know, "How sensitive is a particular change (in data values or modeled result, problem
scoping/formulation, stakeholder values, expert judgments) to the outcome?"
Box 8-1. Terminology
• Data uncertainty
> A component of decision uncertainty describing how well the data used in the assessment are understood. Data uncertainty can lead to inaccurate or biased estimates of exposure.
Additional information can reduce uncertainty and increase the accuracy of exposure estimates. A perfect model producing perfect results is an example of zero uncertainty.
• Decision uncertainty
> Compounded effect of total uncertainty and variability on the exposure assessment. Total uncertainty includes those uncertainties pertaining to the data and models used, uncertainties
regarding how to formulate the problem, selection of data/model inputs and outputs, etc. It includes the extent to which experts agree about how to use those data to describe the
exposure assessment problem (e.g., uncertainty about the values/judgments used to reach the exposure assessment conclusions). Risk managers/decision makers cannot always
reduce decision uncertainty, but they can work to understand the decision context better and determine the robustness of choosing one alternative over another.
• Expert elicitation
> A process that gathers input from experts to characterize uncertainty and fill data gaps when traditional scientific research is not feasible or data are not available.
• Exposure scenario uncertainty
> Uncertainty in an exposure assessment occurs when the information regarding the exposure scenario is limited or inadequate. For example, using an exposure assessment that relies
on information from a study conducted in the southwestern United States to evaluate activity patterns in New England can introduce uncertainty. WHO (2008) defines scenario
uncertainty as the "uncertainty in specifying [an] exposure scenario that is consistent with the scope and purpose of the assessment."
• Monte Carlo analysis
> A probabilistic technique used in exposure assessment that provides a probability function of an estimated exposure using repeated random sampling from probability distributions for
input parameters.
• Observational or model uncertainty
> Gaps in the scientific theory required to make predictions based on causal inferences result in observational or model uncertainty. Model uncertainty is unavoidable and difficult to
quantify because modeling relies on mathematical or statistical formulas to capture complex processes (e.g., chemical releases, environmental fate and transport, biological activity).
• Probabilistic exposure assessment
> A range of techniques (e.g., Monte Carlo analysis, Latin hypercube) that rely on statistical distributions of input data in place of point values for key parameters resulting in a distribution
of possible exposure estimates and greater ability to characterize variability and uncertainty.
• Sampling or measurement uncertainty
> Uncertainty in sampling or measurement data is associated with data collection or analysis methods. Systematic sampling error, sample location, sample number and analysis methods
are sources of sampling uncertainty. Sampling methods and analyses are unlikely to produce the same reading every time, even when measuring the same sample, which adds to the
overall uncertainty of an exposure assessment. Using surrogate data to represent an exposure or using data not representative of the exposures also introduces sampling uncertainty.
Other organizations use the term "parameter uncertainty" for this type of uncertainty (WHO 2008).
• Sensitivity analysis
> An analysis conducted on a multivariate model to understand the degree to which a result changes due to uncertainty or variability. In a sensitivity analysis, the analyst changes one
variable while holding the others constant to determine that variable's effect on the result. This procedure compares exposure results when varying a parameter's input values between,
for example, its credible lower and upper bounds (holding all others at their nominal values, such as medians). The results help identify the variables having the greatest effect on
exposure estimates and help focus further information-gathering efforts.
• Variability
> Real differences in data, even when knowledge is complete, for example, the heterogeneity in daily water consumption by an individual or population, which varies based on age,
residence and activity patterns. Variability can be understood more completely—but not reduced—with additional information.
Source: U.S. EPA (2009b); U.S. EPA (2017c)
Table 8-1. Types of Uncertainty and Contributing Errors
(For each type of uncertainty, entries list the type of error causing uncertainty, followed by a description or example.)
• Exposure scenario uncertainty
> Misclassification: Failure to identify exposure routes, exposure media and exposed populations adequately
• Sampling or measurement uncertainty (parameter uncertainty)
> Measurement, random: Random errors in analytical devices (e.g., imprecision of continuous monitors that measure stack emissions)
> Measurement, systemic: Systemic bias (e.g., estimating inhalation from indoor ambient air without considering the effect of volatilization of contaminants from hot water during showers)
> Surrogate data: Alternative data used for a parameter instead of direct analysis of exposure (e.g., using number of people as a surrogate for population exposure)
> Misclassification: Incorrect assignment of exposures of subjects in historical epidemiological studies resulting from faulty or ambiguous information
> Random sampling error: Result of using a small sample of individuals to estimate risk to a larger population
> Nonrepresentativeness: Result of developing exposure estimates for a population in a rural area based on exposure estimates for a population in a city
• Observational or model uncertainty
> Relationship errors: Result of incorrectly inferring the basis of correlations between environmental concentrations and urinary output
> Oversimplification: Misrepresentations of reality (e.g., representing a three-dimensional aquifer with a two-dimensional mathematical model)
> Incompleteness: Exclusion of one or more relevant variables (e.g., relating a biomarker of exposure measured in a biological matrix without considering the presence of the metabolite in the environment)
> Surrogate variables: Alternative variables used for variables that cannot be measured (e.g., wind speed at the nearest airport used as a proxy for wind speed at the facility site)
> Failure to account for correlations: Not accounting for correlations that cause seemingly unrelated events to occur more frequently than expected by chance (e.g., two separate components of a nuclear plant are missing a particular washer because the same newly hired person assembled them)
> Model disaggregation: Extent of (dis)aggregation used in the model (e.g., separately considering subcutaneous and abdominal fat in the fat compartment of a physiologically based pharmacokinetic model)
Source: U.S. EPA (2004c)
Decision uncertainty pertains to comprehending the decision context. Understanding the decision
context includes deciding whether the analysis has been adequately characterized to answer the
question(s) (e.g., Were the appropriate data used?) and understanding the relationships among
the factors relevant to the decision options. To understand these relationships, two distinctions
are essential. The first, which requires analytical expertise, is determining how significant those
decision factors are to a specific exposure assessment. For example, an exposure assessor
determines the relative significance of the route of exposure and the exposure concentration. The
second distinction is determining the relative importance of those decision factors, which is the
role of a risk manager/decision maker. Determining relative importance means explicitly and
transparently expressing how to balance those factors. The relative importance of decision
factors reflects the values of the risk manager/decision maker, who determines it through a
process that includes stakeholder input.
Table 8-2 provides information about how to consider and evaluate decision uncertainty. Note
that although some of the risk management/decision making questions pertain to data (and hence,
data uncertainty), the issue in addressing decision uncertainty is that of understanding the data
(or data gaps) and other factors relative to the choice of decision options within the resource
constraints of the decision.
Table 8-2. Examples of Questions Asked to Examine Decision Uncertainty
• Risk management question/issue: Do the analytical design and current data answer the decision question?
> Response/approach: Using a sensitivity analysis, discuss the decision/analytical question with risk managers/decision makers and stakeholders.
• Risk management question/issue: Will the decision be different if uncertainty is better characterized?
> Response/approach: Conduct a sensitivity analysis to determine whether a change in data values could alter the risk manager's/decision maker's decision.
• Risk management question/issue: Are data gaps a problem for the decision?
> Response/approach: How sensitive are the decision options to the data? That is, is the risk manager/decision maker able to make a decision with the currently available data?
• Risk management question/issue: Will using a different dataset be a problem?
> Response/approach: If the data were different, would the risk management decision change significantly?
• Risk management question/issue: Does the risk manager/decision maker need to understand the current data more completely?
> Response/approach: What is the relationship between the currently available data and the management decision options under consideration? For example, are the conditions expected to change significantly in the future? How does variability affect the decision options?
• Risk management question/issue: How is the relative acceptability of decision options influenced by the choice of data compared with how those data are combined and used?
> Response/approach: With sensitivity analyses, use "what if" scenarios to experiment with different data and values (e.g., look at the ends of uncertainty bands) to examine the relative merits of the decision options.
• Risk management question/issue: Uncertainty matters: the risk manager/decision maker needs to reduce uncertainty to make a decision.
> Response/approach: What are the key exposure parameters that need to be addressed in this analysis, and how will the additional data influence the decision?
8.1.3. Variability Impacts on Uncertainty
EPA defines variability as the "inherent heterogeneity across space, in time, or among
individuals. Variability cannot be reduced with additional investigation, only better understood
or characterized" (U.S. EPA 2004c). In exposure assessment, variability embodies the range of
possible outcomes representing an individual's or a population's exposures based on specific
characteristics (e.g., age group, socioeconomic status) or activities (e.g., the amount of water or
fish consumed on a daily basis, residence in particular geographic areas). Variability affects the
precision of exposure estimates and the degree to which results are generalizable. The need to
select a generalized result in the presence of multiple sources of variability contributes
additional uncertainty to the assessment. Types of variability encountered in
exposure assessments include human, spatial and temporal variability. Variability adds another
level of unavoidable complexity when addressing uncertainty in exposure assessments.
Human variability describes person-to-person differences in biological susceptibility or
exposure (U.S. EPA 2004c). Human variability consists of intra- and interindividual variability.
Intra-individual variability refers to the changes that occur in one person over time, which can be
physiological (e.g., body weight, age) or behavioral (e.g., ingestion rates, activity patterns).
Interindividual variability refers to the differences among individuals within a population (e.g.,
physiological or behavioral characteristics).
Spatial variability and temporal variability describe differences that occur in space and time,
respectively. Spatial variability can occur at regional (i.e., macroscale) or local (i.e., microscale)
levels; for example, the percentage of drinking water from groundwater compared with surface
water sources varies from state to state and city to city. Temporal variability can occur over long
or short periods. For example, a change in outdoor exercise can occur seasonally or even daily,
depending on weather conditions (e.g., rain, snow, sun).
8.2. Considerations for Conducting an Uncertainty and Variability
Evaluation
Conducting an evaluation of uncertainty and variability provides the assessor with an opportunity
to evaluate the accuracy and effectiveness of the whole exposure assessment, as well as its
individual components (e.g., conceptual models, modeling approaches). An uncertainty
evaluation will not eliminate all uncertainty, but it will help an assessor address questions that
arise about the results of the exposure assessment and its impact on risk management decisions.
For example, an uncertainty and variability evaluation enables an assessor to determine how a
risk management decision (e.g., requiring the removal of contaminated soil) could change
potential exposures (e.g., eliminating exposure via direct contact with soil).
An uncertainty and variability evaluation can answer many questions that arise during an
exposure assessment (e.g., What are the sources of uncertainty?) and can influence the methods
selected for conducting the evaluation (e.g., Does one specific exposure scenario contribute
substantially to the total exposure?). Section 8.2.1 discusses this consideration and provides
examples of questions for an assessor to consider for each step of the exposure assessment:
planning and scoping, implementation and presentation of results.
8.2.1. Planning and Scoping for Characterizing Uncertainty and Variability
As described in Section 3.1, the planning and scoping step of an exposure assessment involves
determining the purpose, scope, approach, participants, level of effort and resources for the
assessment. During this step, an assessor considers how to characterize uncertainty and
variability for the assessment. Essential for this step, and throughout the exposure assessment
process, are transparent and open discussions with stakeholders and risk managers/decision
makers about the possible influence of assumptions, spatial and temporal scale (e.g., individual
versus population, immediate versus future timelines) and other factors on the resulting exposure
assessment. Stakeholders and risk managers/decision makers can use their understanding of
decision uncertainty to determine when they believe the exposure assessment is adequate. U.S.
EPA (2004c), which primarily addresses data uncertainty, provides a sample of questions to ask
during planning and scoping to characterize uncertainty and variability. The sample questions
apply to both data uncertainty and decision uncertainty and are presented below in groups:
those likely directed to assessors, to managers and to both assessors and managers.
Questions directed to assessors might include:
• Who is being exposed (e.g., an individual, group or lifestage), and what are the routes of
exposure?
• What are the major sources of uncertainty?
• What are the major sources of variability within the individual, lifestage, group,
population?
• Have the weaknesses and strengths of the methods involved been identified?
Questions directed to managers might include:
• What time and resources are available for conducting an evaluation?
• What level of effort is warranted for this project?
• Are the essential skills (e.g., statistical expertise) and experience available to perform the
analysis?
• What is the timeframe within which a decision is needed?
Questions appropriate for joint discussions between assessors and managers might include:
• Will a quantitative estimate of uncertainty improve the assessment or the decision? That
is, will a quantitative estimate of uncertainty reduce decision uncertainty for the risk
manager/decision maker and inform stakeholders?
• Will a quantitative estimate of the variability of a specific exposure parameter improve
the assessment or the decision?
• How will the uncertainty and variability analyses affect the results of the exposure
assessment or the regulatory decision?
• How will the uncertainty analysis be communicated to the risk managers/decision makers
and stakeholders?
Communicating with risk managers/decision makers and stakeholders during the planning and
scoping phase also can identify questions that might influence the uncertainty and variability
evaluation and the outcome of the exposure assessment. Communication between an assessor
and risk manager/decision maker (see Section 9.3.5) is critical for identifying potential areas
where additional research or resources might be useful in an exposure assessment. Anticipating
these concerns during the planning and scoping and problem formulation phases can help an
assessor be responsive to the needs of the risk manager/decision maker and stakeholder.
8.2.2. Assessing the Impact of Uncertainty
Understanding whether data uncertainty or variability contributes more to the overall uncertainty
in an exposure assessment is informative for assessors. The use of statistical or other means could help
resolve data uncertainties, while data variability needs to be understood, and, if possible,
quantified. Considering decision uncertainty when designing and conducting an exposure
assessment can help assessors better understand and explain the relative influence of data
uncertainty and variability on the assessment outcome. Understanding specific data concerns can
highlight some limitations of the estimated exposures and help assessors determine whether to
spend additional resources on reducing uncertainties related to those data. Reducing or otherwise
addressing these concerns also can strengthen an exposure assessment.
In some cases, location- or project-specific data are available to support an exposure assessment.
In the absence of such data, an assessor might rely on existing datasets, such as those in the
Exposure Factors Handbook: 2011 Edition (U.S. EPA 2011d). Existing data can serve as an
important reference for evaluating potential exposure factors for various segments of the
population. Regardless of the data source, an assessor considers how data uncertainty and
variability in the datasets used affect estimated exposures and decisions based on an exposure
assessment.
A data uncertainty and variability analysis is an iterative process. The extent of the evaluation
depends on many factors, including the type of assessment, data quality objectives and data
availability. In the planning stage, an assessor balances the cost of conducting uncertainty and
variability evaluations that are more advanced (e.g., probabilistic assessment, advanced
modeling) with the benefits reaped from the information. Likewise, evaluating decision
uncertainty is iterative, which can help stakeholders provide input regarding whether more
resources could reduce uncertainty and provide consequential benefits. In most instances,
spending more resources to obtain data that are more certain is important only if the new
information would change the choices a risk manager/decision maker makes. Whether a
particular level of uncertainty is acceptable is a matter of context (e.g., regulations for which the
decision is made), the timeframe within which the decision is needed and regulatory policy
(Jamieson 1996a; Stahl and Cimorelli 2005).
8.2.3. Conveying Uncertainty When Presenting Results
Transparency in the communication of information about an exposure assessment increases the
common understanding of exposure assessment results and limitations (NRC 2009). Clearly
communicating information about an uncertainty (data and decision) and variability evaluation,
however, can be difficult. Risk managers/decision makers and stakeholders might ask questions
about how uncertainty shapes decisions, affects confidence in an exposure assessment or
influences the application of the results to specific groups or populations. When risk
managers/decision makers determine which uncertainties have the greatest influence on the
exposure assessment results (e.g., which data or which judgments), they can focus on the most
consequential individual uncertainties in their communication plan. Determining and evaluating
decision uncertainty can facilitate communication because stakeholders will be able to explain
the influence of the compounded uncertainties on the resulting exposure assessment (Stahl and
Cimorelli 2005). Risk Assessment Guidance for Superfund, Volume III: Part A, Chapter 6 (U.S.
EPA 2001h) provides specific information on communicating uncertainty and variability to
many audiences, such as risk managers/decision makers and stakeholders. Section 6.4 of the
Superfund guidance discusses key factors for successfully communicating probabilistic risk
assessment, including early and continuous involvement of stakeholders, a well-developed
communication plan, effective graphics, a working knowledge of the factors that might influence
perceptions of risk and uncertainty and a foundation of trust and credibility (U.S. EPA 2001h).
EPA's Risk Characterization Handbook (2000g) is another resource for information about
communicating results to risk managers/decision makers and stakeholders (e.g., community
groups). Section 8.4 and Chapter 9 in this document provide additional information about
communication considerations for exposure assessments.
8.3. A Tiered Approach to Data and Decision Uncertainty and
Variability Evaluations
Data uncertainty and variability evaluations are increasing in complexity as evaluation tools
(i.e., modeling capabilities) become more sophisticated. Not all exposure evaluations, however,
require the most complex evaluation possible. The level of complexity of the evaluation relates
to the complexity of the assessment and the potential use of the exposure information in the risk
management decision. EPA has emphasized the use of a tiered approach for conducting data
uncertainty and variability evaluations. In its simplest form, the tiered approach to understanding
the effects of data uncertainty and variability on the exposure assessment outcomes involves
starting with basic screening methods and then sequentially employing more sophisticated
methods as needed to support the decision (U.S. EPA 2004c). This section describes the tiered
approach and discusses the methods most commonly used for each tier. Some evaluation
methods are appropriate for use in more than one tier. Assessors need to identify and use the
methods that best meet their needs and coordinate with their programs for specific guidance.
In addition to data uncertainty and variability, uncertainties related to agreements about planning,
scoping and problem formulation, data and model selection, what role expert judgment plays and
how stakeholders intend to use the data need evaluation to understand the compounded effect of
all these aspects on the resulting exposure assessment. Although traditional statistical tools and
methods can be useful in reducing data uncertainty and better understanding variability, other,
more stakeholder-focused approaches are needed to determine and evaluate decision
uncertainty (Belzer et al. 2001; IOM 2013; Stahl 2014; Verweij and Thompson 2006).
In moving through each tier, from simple to complex, an assessor determines whether additional,
case-by-case evaluations are needed or whether the uncertainty (data and decision) and
variability have been addressed or reduced to acceptable levels. This process involves:
• Selecting input parameters for an exposure assessment (data uncertainty and variability)
• Developing a deterministic analysis to identify potential exposures to provide a baseline
for a sensitivity analysis and more sophisticated analyses (data uncertainty and variability)
• Conducting a sensitivity analysis to characterize decision uncertainty that includes the
compounded impacts of uncertainty or variability on exposure assessment outcomes (data
uncertainty, variability and decision uncertainty)
• When high decision uncertainty is an issue, implementing further analyses to refine the
input parameters to an exposure assessment and reduce uncertainty and better understand
variability in the assessment (data uncertainty, variability and decision uncertainty).
This process is iterative. The information generated when refining input parameters at one tier of
the evaluation (e.g., screening) overlaps with selecting input parameters at the next tier
(e.g., one-dimensional Monte Carlo analysis). Figure 8-1 illustrates EPA's tiered approach to
data uncertainty.
Figure 8-1 provides examples of commonly used methods for each tier. Assessors are
encouraged to consult with their programs to identify preferred evaluation tools and default input
parameters (e.g., drinking water intake, body weight). Assessors proceed iteratively to
understand more fully how each component of the exposure assessment influences the
assessment's decision uncertainty. Data uncertainty and variability are just two of the many
possible contributors to an exposure assessment's decision uncertainty. Discussions of several
methods that can help an assessor evaluate the importance of uncertainty within the risk
management/decision making process are available in the literature (Ducey 2001; Fischhoff
1976; Fischhoff 1988; Frey and Patil 2002; Greenland 2001; IOM 2013; Jamieson 1996b; Renn
1986; Stahl and Cimorelli 2005).
8.3.1. Selecting Input Parameters
For each input parameter of an exposure assessment (e.g., chemical concentration, exposure
duration), a range of potential values exists. As the data uncertainty and variability evaluation
becomes more sophisticated, an assessor will revisit and, as necessary, refine the input
parameters. The decision uncertainty and the results of the exposure assessment are influenced
not just by the people (e.g., stakeholders, assessor) selecting the input parameters but also by
the input parameters themselves.
At the beginning of an exposure assessment, an assessor often uses a screening-level approach
(see Section 8.3.2) to gain an overall understanding of potential exposures. At the screening
level, an assessor typically selects a single data point estimate to represent a central tendency,
maximum or other exposure level. The goal of this approach is to achieve a conservative (i.e.,
health-protective) estimate of exposure. This step also is the first step of a data uncertainty and
variability evaluation.
If, after completing the screening-level evaluation, an assessor determines that additional
refinement of the input parameters is necessary, the assessor might advance to the next step in
the process and conduct a sensitivity analysis (see Section 8.3.3). The sensitivity analysis
determines the relative importance of various parameters (i.e., which parameters will benefit the
assessment by refinement or additional data).
Figure 8-1. Schematic Diagram of Tiered Approach to Data Uncertainty
[Figure not reproduced: the diagram shows tiers of increasing complexity, beginning with a Tier 1 point estimate risk assessment and point estimate sensitivity analysis, supported throughout by problem formulation/scoping/work planning/data collection and by a decision-making cycle of evaluation, deliberation, data collection, work planning and communication. At each tier, a decision may be to exit the tiered process.]
Note: MCA = Monte Carlo analysis; PRA = probabilistic risk assessment; 1-D and 2-D refer to 1-dimensional and 2-dimensional
Adapted from U.S. EPA (2001h)
Based on the sensitivity analysis, an assessor might select the maximum and minimum values as
input parameters for the variables that most influence the assessment. The assessor uses these
values to estimate the upper and lower bounds of exposure (referred to as an interval or range).
Using professional judgment and experience, an assessor might assume a uniform or skewed
distribution across this interval. These assumptions, however, introduce additional data
uncertainty in an exposure assessment (WHO 2008). In addition to selecting a minimum and
maximum value, an assessor can develop an interval estimate by graphing the available data or
conducting statistical analyses. As part of this process, an assessor distinguishes between the data
that do and do not represent the receptor. For example, an analysis of fish consumption that
focuses on individuals who consume fish can exclude data for those who fish but do not consume
the fish they catch.
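The sketch below illustrates this kind of interval estimate: hypothetical fish consumption records are filtered to the receptor of interest, and lower and upper percentile bounds are computed from the remaining data. All values, field names and percentile choices are illustrative assumptions.

    # Hypothetical survey records: grams of fish consumed per day and whether the
    # respondent eats the fish they catch; values are illustrative only.
    records = [
        {"consumes_catch": True,  "fish_g_per_day": 12.0},
        {"consumes_catch": True,  "fish_g_per_day": 55.0},
        {"consumes_catch": False, "fish_g_per_day": 0.0},   # fishes but does not eat the catch
        {"consumes_catch": True,  "fish_g_per_day": 30.0},
        {"consumes_catch": True,  "fish_g_per_day": 140.0},
    ]

    # Keep only the receptor of interest: people who consume the fish they catch.
    intakes = sorted(r["fish_g_per_day"] for r in records if r["consumes_catch"])

    def percentile(sorted_values, p):
        # Simple linear-interpolation percentile for p between 0 and 100.
        k = (len(sorted_values) - 1) * p / 100.0
        lo, hi = int(k), min(int(k) + 1, len(sorted_values) - 1)
        return sorted_values[lo] + (sorted_values[hi] - sorted_values[lo]) * (k - lo)

    lower, upper = percentile(intakes, 5), percentile(intakes, 95)
    print(f"Interval estimate for fish intake: {lower:.1f} to {upper:.1f} g/day")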
As an assessor moves through the tiers of the uncertainty and variability evaluation, the assessor
can use a probabilistic risk assessment approach to refine the input parameters. In this case, the
input parameters represent a probability distribution, defined as a mathematical representation of
the probability associated with specified intervals of a value (U.S. EPA 2001h). For example, a
probability distribution for drinking water intake would represent the range of possible intake
rates and the spread of values within the range (Figure 8-2).
Figure 8-2. Hypothetical Example of an Input Distribution for Drinking Water Intake Rates
[Figure not reproduced: a probability distribution of drinking water ingestion rate (L/day), spanning roughly 0 to 4 L/day, with GM = 1.31, GSD = 1.30, AM = 1.36 and SD = 0.36.]
Note: GM = geometric mean; GSD = geometric standard deviation; AM = arithmetic mean; SD = standard deviation
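To show how a distribution like the one in Figure 8-2 might be specified and used, the sketch below parameterizes a lognormal distribution from the geometric mean and geometric standard deviation shown in the figure and draws values from it. The lognormal form, and its use here, are assumptions for illustration only.

    import math
    import random

    # Parameters taken from the Figure 8-2 annotation; the lognormal form is an
    # assumption for illustration, not the only choice an assessor might make.
    GM, GSD = 1.31, 1.30                      # geometric mean and geometric SD (L/day)
    mu, sigma = math.log(GM), math.log(GSD)

    random.seed(1)
    draws = sorted(random.lognormvariate(mu, sigma) for _ in range(10000))

    print(f"median intake ~ {draws[len(draws) // 2]:.2f} L/day")
    print(f"95th percentile intake ~ {draws[int(0.95 * len(draws))]:.2f} L/day")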
An assessor can use various approaches to develop a probability distribution for one or more
parameters in an exposure assessment. Specifying probability distributions for all parameters,
however, generally is unnecessary. An assessor can use the results from earlier sensitivity
analyses to select the critical parameters for the focus of the probability distribution.
Alternatively, an assessor can consider analyses conducted to refine the exposure assessment.
The accuracy of the probability distribution also depends on the quality of the input data
(see Section 5.3.2). For some parameters, location- or situation-specific data will be available.
For others, such as exposure duration, water intake and body weight, an assessor more likely will
need to develop distributions from published datasets and data summaries [e.g., Exposure
Factors Handbook: 2011 Edition (U.S. EPA 2011d)]. Once an appropriate dataset is identified,
the assessor conducts statistical analyses to develop the probability distributions (U.S. EPA
2001h; U.S. EPA 2004c). Appendix B of EPA's Risk Assessment Guidance for Superfund,
Volume III: Part A, Process for Conducting Probabilistic Risk Assessment (U.S. EPA 2001h)
provides detailed guidance on selecting probability distributions. Assessors are encouraged to
consult with their programs to identify preferred tools and guidance for selecting probability
distributions.
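For orientation, a one-dimensional Monte Carlo analysis of the kind referenced in Figure 8-1 can be sketched in a few lines: input distributions are sampled repeatedly and propagated through an exposure equation to produce a distribution of estimated doses. The distributions, parameter values and the deliberately simplified dose equation below are illustrative assumptions, not Agency defaults.

    import random

    random.seed(42)
    N = 10000

    def draw_inputs():
        # Illustrative input distributions (not defaults): water concentration (mg/L),
        # intake rate (L/day) and body weight (kg).
        concentration = random.uniform(0.002, 0.010)
        intake = random.lognormvariate(0.27, 0.26)      # roughly GM 1.31 L/day, GSD 1.30
        body_weight = max(random.normalvariate(80.0, 12.0), 30.0)
        return concentration, intake, body_weight

    # Average daily dose in a simplified form: ADD = C x IR / BW (mg/kg-day);
    # exposure frequency and duration terms are omitted for brevity.
    doses = sorted(c * ir / bw for c, ir, bw in (draw_inputs() for _ in range(N)))

    print(f"median ADD ~ {doses[N // 2]:.2e} mg/kg-day")
    print(f"95th percentile ADD ~ {doses[int(0.95 * N)]:.2e} mg/kg-day")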
Beyond a probabilistic risk assessment, advanced modeling tools are available to characterize
uncertainty and variability further. Section 6.3.4 provided more information on these tools.
8.3.2. Screening-Level Analyses
Screening-level analyses serve as the first tier of a data uncertainty and variability evaluation.
Relying on conservative values for important exposure parameters, an assessor can use these
analyses to screen out exposure scenarios or pathways expected to pose little risk. If a scenario
poses only a slight increase in the potential for an adverse effect to occur, even assuming the
greatest potential exposure, an assessor might choose to eliminate this scenario from additional
and more complex evaluations (U.S. EPA 2004c). The assessor needs to communicate the
decision to exclude an exposure scenario from an assessment to the risk manager/decision maker
and stakeholder(s) clearly. The decision maker/risk manager considers how stakeholders are
involved in choosing the screening analysis and the type of screening analysis because this
decision could influence the exposure assessment and contribute to decision uncertainty.
The use of screening-level analyses typically occurs during the initial phase of an evaluation.
Usually at this stage in an assessment, little location- or scenario-specific information is
available. Therefore, an assessor commonly relies on default values, which are point estimates
for input parameters that are inherently broad in scope. An assessor often chooses conservative
default values to examine exposures that would fall on or beyond the high end of the expected
exposure distribution. The assumption is that if risks are not anticipated in a worst-case scenario,
assessors, risk managers/decision makers and stakeholders can be confident that the exposure
needs no further evaluation (U.S. EPA 2004c).
Screening-level analyses most commonly use a deterministic approach. This approach entails
developing a point estimate of exposure and using point estimates of toxicity to calculate a
hazard quotient (non-carcinogenic effects) or risk level (carcinogenic effects). This process,
illustrated in a sketch after the list below, includes:
• Selecting point estimates for input parameters. An assessor likely will base these
estimates on default values, but can use location- or scenario-specific data, if available.
• Estimating potential exposures based on the scenarios identified.
• Comparing the estimated exposure to toxicology-based screening values. Screening
values include health-based values expressed as a dose (e.g., reference doses) and
chemical concentrations in a specific medium (e.g., soil concentrations). When using
chemical concentrations as screening values, an assessor usually can compare an
exposure point concentration to the value directly. In this case, estimating the exposure
quantitatively would be unnecessary.
• Determining which exposure pathways, if any, require additional evaluation. Typically,
an assessor will carry forward exposures that exceed screening values. In some cases, an
assessor might carry forward a scenario for further evaluation or eliminate a scenario
based on community concerns, stakeholder input or other factors.
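The minimal sketch below illustrates this deterministic screening process for a hypothetical incidental soil ingestion scenario: point estimates are combined into an average daily dose, which is then compared with a reference dose as a hazard quotient. All input values, the reference dose and the pathway itself are placeholders chosen for illustration, not program defaults.

    # Hypothetical point estimates for a child soil-ingestion screening scenario.
    # None of these values are EPA defaults; they are placeholders for illustration.
    conc_soil = 120.0        # chemical concentration in soil (mg/kg)
    ingestion_rate = 100.0   # soil ingestion rate (mg/day)
    exposure_freq = 350.0    # exposure frequency (days/year)
    exposure_dur = 6.0       # exposure duration (years)
    body_weight = 15.0       # body weight (kg)
    avg_time = exposure_dur * 365.0   # averaging time for non-cancer effects (days)
    kg_per_mg = 1.0e-6       # unit conversion, mg soil -> kg soil

    # Average daily dose (mg chemical per kg body weight per day).
    add = (conc_soil * ingestion_rate * kg_per_mg * exposure_freq * exposure_dur) / (
        body_weight * avg_time)

    # Compare the dose with a (hypothetical) reference dose to form a hazard quotient.
    rfd = 3.0e-4             # reference dose (mg/kg-day), placeholder value
    hazard_quotient = add / rfd
    print(f"ADD = {add:.2e} mg/kg-day, HQ = {hazard_quotient:.2f}")
    print("carry forward for further evaluation" if hazard_quotient > 1 else
          "screen out (HQ <= 1)")

In practice, an assessor would substitute program-approved defaults or location-specific values and the appropriate toxicity value for the chemical and route being screened.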
An assessor also can use probabilistic risk assessment approaches during the screening-level
analysis. Probabilistic approaches are used more often when refining an exposure assessment
(see Section 8.3.4). EPA programs also might implement specific procedures that vary from this
basic process. Assessors need to consult with their programs and follow their standard operating
procedures and guidance.
8.3.3. Conducting a Sensitivity Analysis to Better Characterize Uncertainty
For exposure assessment, EPA defines sensitivity analysis as "any systematic, common sense
technique used to understand how risk estimates and, in particular, risk-based decisions, are
dependent on variability and uncertainty in the factors contributing to risk" (U.S. EPA 2001h).
Sensitivity analysis conducted as part of a data uncertainty evaluation is the process of
determining which parameter(s) in an exposure assessment drives the results. This analysis
places all relevant data into the context of the decision so that iterations that change data
estimates and values can inform risk managers/decision makers about how data uncertainty
might affect the evaluation of decision options.
For decision uncertainty, sensitivity analysis includes understanding the influence of
compounded uncertainties on the exposure assessment result, including whether an assessor
should have considered different data or models or could have made different judgments about
how to use the data. In this context, sensitivity analysis is a process of placing all relevant data in
the decision context so iterations that change data estimates and values (reflecting data
uncertainty) can inform the risk managers/decision makers about how data uncertainty might
affect the evaluation of decision options. The sensitivity analyses for data uncertainty and
decision uncertainty can occur simultaneously so that an assessor can use the results to avoid
additional data analysis or more time-consuming probabilistic analyses.
Identifying the parameter(s) driving data uncertainty and variability and understanding decision
uncertainty on the results of an exposure assessment enable an assessor to:
• Use the evaluation of decision uncertainty to prioritize sources of data uncertainty,
variability and other uncertainties pertaining to problem formulation, data choices, etc.
• Inform risk managers/decision makers and stakeholders about the potential impacts of
exposure assessment on the risk management decisions
• Determine whether to support a cost-benefit analysis that weighs the cost of additional
analyses or data collection efforts against the benefits of a more refined exposure assessment
• Examine the merits of additional analyses or data collection efforts
• Evaluate the merits of additional model development and refinement that highlight key
input parameters identified in the exposure assessment.
Sensitivity analyses can range from simple "back-of-the-envelope" calculations to more complex
analyses, including modeling and regression analysis. The type of analysis needed depends on
the complexity of the exposure assessment question (U.S. EPA 2001h). The essence of the
analysis, however, remains the same: evaluating how changes in the input parameters change the
output. Appendix A of EPA's Risk Assessment Guidance for Superfund, Volume III: Part A,
Process for Conducting Probabilistic Risk Assessment (U.S. EPA 2001h) and the World Health
Organization's Uncertainty and Data Quality in Exposure Assessment (WHO 2008) provide
detailed guidance on conducting a sensitivity analysis. Because specific EPA programs might
have their own procedures for conducting sensitivity analyses, assessors need to consult with
their programs and follow their standard operating procedures.
In some cases, sensitivity analysis is a low-cost procedure that uses basic calculations to evaluate
the relative contribution of the various exposure parameters. Other cases will require a more
intensive and complicated sensitivity analysis. This complexity usually arises when multiple
sources of uncertainty and variability, including correlation among the exposure parameters,
influence an exposure assessment outcome. These sources could be linked such that changes to
one source might affect another source (U.S. EPA 2001h).
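As a simple illustration of the lower end of this range, the sketch below perturbs each input of a generic dose equation one at a time and reports the resulting percentage change in the output. The equation, input values and 10 percent perturbation are assumptions made only for this example; a real analysis would use the exposure model and parameters of the assessment at hand.

    def average_daily_dose(conc, intake, exposure_freq, body_weight, avg_time):
        """Generic dose equation used only to demonstrate the sensitivity calculation."""
        return conc * intake * exposure_freq / (body_weight * avg_time)

    # Hypothetical baseline inputs (units are arbitrary for this illustration).
    baseline = {"conc": 2.0, "intake": 1.4, "exposure_freq": 350.0,
                "body_weight": 70.0, "avg_time": 365.0}
    base_dose = average_daily_dose(**baseline)

    # One-at-a-time sensitivity: increase each input by 10% and record the
    # percentage change in the estimated dose.
    for name, value in baseline.items():
        perturbed = dict(baseline, **{name: value * 1.10})
        new_dose = average_daily_dose(**perturbed)
        pct_change = 100.0 * (new_dose - base_dose) / base_dose
        print(f"{name:>13}: +10% input -> {pct_change:+.1f}% change in dose")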
Sensitivity analysis to determine whether further evaluation of uncertainty and variability is
necessary has two potential outcomes, regardless of the method:
• Uncertainty and variability have been defined such that an exposure assessment is
sufficient to support decisions (and hence, decision uncertainty is low), or
• Uncertainty and variability influence the outcome of an exposure assessment to the
degree that the assessment is insufficient to support decisions (and hence, decision
uncertainty is too high).
If the former is true, an assessor has completed the uncertainty and variability evaluation. The
evaluation has reached the highest tier necessary to support decisions. If the latter is true, an
assessor needs to move forward within the tiered approach to refine the exposure assessment
(U.S. EPA 2001h).
8.3.4. Using Uncertainty and Variability Analyses to Refine an Exposure
Assessment
If a sensitivity analysis indicates the uncertainty and variability in an exposure assessment could
change decisions, an assessor needs to consider refining the assessment by reducing the
uncertainty or better defining variability. At this stage, an assessor might decide to conduct a
more sophisticated uncertainty and variability evaluation. At the screening level, an assessor
might decide to analyze the data statistically (e.g., calculate standard deviations, confidence
levels) to characterize the datasets more fully and inform the selection of input parameters
(i.e., address data uncertainty). As the data uncertainty and variability evaluations become more
sophisticated, an assessor might move to a one-dimensional Monte Carlo analysis, a
multidimensional probabilistic risk assessment approach or an advanced modeling approach.
These approaches, however, cannot address inherent uncertainty. Moreover, even though they
can help improve an assessor's understanding of exposure variability, they cannot reduce it.
Refining an exposure assessment can require considerable time and effort, but refinement is a
necessary step to inform decision making, particularly when the consequences of the decision
impact public health and public health resources. Therefore, using decision uncertainty to
determine the extent to which changes to multiple data uncertainties or other factors influence
the assessment results can be instructive for stakeholders when deciding whether to invest
additional time and effort (e.g., collecting additional data might be necessary; see Section 5.4.2).
As discussed in Section 8.2.2, an assessor should balance the effort involved in conducting
increasingly complex analyses with the benefits of reducing data uncertainty or better defining
variability. The selected approach will depend on the type of data uncertainty (scenario,
sampling or modeling) and the availability of techniques for reducing that uncertainty. An
assessor likely will use deterministic approaches, as discussed in the description of the screening-
level analyses (see Section 8.3.2), during the lower tiers of an evaluation. During upper-tier or
more complex evaluations, an assessor will rely more commonly on a probabilistic risk
assessment or advanced modeling approaches.
Role of Probabilistic Risk Assessment in Data Uncertainty Analyses
Probabilistic risk assessment is a statistical method that yields a probability distribution for risk,
generally by using a probability distribution to represent data uncertainty or variability in one or
more parameters of an exposure assessment. This approach is applicable when detailed statistical
analysis is necessary to support sensitive decisions and to help risk managers/decision makers
distinguish among possible alternatives. Probabilistic approaches also can help identify data gaps
where additional data collection might be necessary to reduce uncertainty and address variability.
An assessor can address identified data gaps by collecting more data (see Chapter 5) or
conducting additional statistical analyses, such as meta-analyses of existing data or probabilistic
approaches using multivariate analysis (Volstad et al. 2003; Weigel 2003). Box 8-2 lists
resources for conducting a probabilistic risk assessment.
Box 8-2. Guidance Documents and Resources Supporting
Probabilistic Risk Assessment
• Finkel (1990) Confronting Uncertainty in Risk Management: A Guide for Decision Makers.
• Morgan et al. (1990) Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis.
• Finley and Paustenbach (1994) The Benefits of Probabilistic Exposure Assessment: Three Case Studies Involving
Contaminated Air, Water, and Soil.
• U.S. EPA (1996e) Summary Report for the Workshop on Monte Carlo Analysis. EPA/630/R-96/010.
• U.S. EPA (1997b) Guiding Principles for Monte Carlo Analysis. EPA/630/R-97/001.
• Hansen (1997a) Policy for Use of Probabilistic Analysis in Risk Assessment at the U.S. Environmental Protection Agency.
• Hansen (1997b) Use of Probabilistic Techniques (Including Monte Carlo Analysis) in Risk Assessment, and Guiding
Principles for Monte Carlo Analysis.
• U.S. EPA (1999b) Report of the Workshop on Selecting Input Distributions for Probabilistic Assessments.
EPA/630/R-98/004.
• U.S. EPA (2001h) Risk Assessment Guidance for Superfund, Volume III: Part A, Process for Conducting Probabilistic Risk
Assessment. EPA/540/R-02/002.
• U.S. EPA (2001h) Risk Assessment Forum White Paper: Probabilistic Risk Assessment Methods and Case Studies.
EPA/100/R-14/004.
Probabilistic risk assessment methods of varying sophistication are available, depending on the
exposure assessment objectives. Monte Carlo analysis is a widely used probabilistic method that
relies on computer simulations to combine multiple probability distributions in a quantitative
exposure assessment. During the simulation, exposure is estimated quantitatively using randomly
selected variables, a process repeated many (e.g., 10,000) times. The output is a series of
exposure estimates amenable to summarization with statistical analysis (e.g., mean, quartiles).
Most commonly, the input parameters are assumed independent (i.e., the value of one parameter
is not linked to the value of another). In simulations that are more complex, an analyst can link
parameters using conditional distributions or correlation coefficients. A one-dimensional Monte
Carlo analysis characterizes either uncertainty or variability, whereas a two-dimensional Monte
Carlo analysis simulates both and is considered an advanced modeling method (U.S. EPA
2001h).
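The sketch below illustrates, under assumed input distributions, a one-dimensional Monte Carlo analysis of a generic dose equation: input values are drawn at random for each of many iterations, each iteration yields an exposure estimate and the resulting set of estimates is summarized statistically. The distributions, parameter values and use of 10,000 iterations are placeholders for illustration, not recommendations.

    import numpy as np

    rng = np.random.default_rng(seed=1)   # fixed seed so the illustration is reproducible
    n_iter = 10_000

    # Hypothetical, independent input distributions (placeholder parameters).
    conc = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n_iter)               # mg/L in water
    intake = rng.lognormal(mean=np.log(1.4), sigma=0.4, size=n_iter)             # L/day
    body_weight = rng.normal(loc=70.0, scale=12.0, size=n_iter).clip(min=30.0)   # kg

    # One exposure estimate per iteration (mg/kg-day), assuming independent inputs.
    dose = conc * intake / body_weight

    # Summarize the resulting distribution of exposure estimates.
    p50, p95 = np.percentile(dose, [50, 95])
    print(f"mean = {dose.mean():.3f}, median = {p50:.3f}, "
          f"95th percentile = {p95:.3f} mg/kg-day")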
Role of Expert Elicitation
Expert elicitation can support probabilistic approaches when data are scarce or lacking. Expert
elicitation is the process by which experts in multiple fields characterize data uncertainty and fill
data gaps in an exposure assessment when traditional scientific research is infeasible or data are
not available (U.S. EPA 2007e). The resulting information can inform decisions associated with
the assessment. Each expert characterizes relationships, quantities, events or parameters of
interest based on professional judgment and expertise, with the characterizations typically
expressed as probabilities. Expert elicitation can be sought individually (each expert acts alone)
or as a group (experts meet and provide a collective response). An individual approach typically
applies when an assessor needs uncertainty characterization. A group approach is appropriate
when an assessor needs a consensus or best estimate of uncertainty (Edlmann et al. 2016;
Gregory et al. 2012; McKellar et al. 2017; U.S. EPA 2009a; U.S. EPA 2009c; Wallsten et al.
1997; Werner et al. 2017).
8.4. Communicating the Results of the Uncertainty and Variability
Evaluation
Even when addressing only data uncertainty and variability, effectively communicating these
concepts on an exposure assessment can be challenging. Risk managers/decision makers and
stakeholders should evaluate and understand any preconceptions and biases that could influence
their interpretations of the evaluation. Ultimately, an assessor seeks to communicate the
information to ensure informed decisions about risks to health, safety and the environment,
drawing on guidance from WHO (2008) and, more recently, the Intergovernmental Panel on
Climate Change (Mastrandrea et al. 2010). Section 3.2.2 and Chapter 9 of this document provide
details about how to
communicate the overall exposure assessment process effectively. Communication about
decision uncertainty is a critically important emerging area and the need for facilitation
approaches remains great. Decision makers can communicate with stakeholders more effectively
when they have introduced them to the issues early in the process (see Section 3.1.3).
WHO (2008) presents a list of questions for an assessor to consider and address when discussing
the results of an uncertainty and variability evaluation with the public:
• How sure are you about the results?
• What evidence is available to support the methods used?
• What does your method mean for me (and my family)? What do your results mean for me
(and my family)?
• What would the result be if you used your sophisticated model for me?
• Why do you use national reference data for us? Aren't we different?
• Your data are old. Hasn't the situation (or product) changed?
An assessor also might need to address questions about the tools used to evaluate different
exposure scenarios. Because EPA encourages a tiered approach, an uncertainty and variability
evaluation might discuss results from screening-level analyses for some scenarios and from
analyses that are more complex for others. These analyses will differ markedly in the level of
sophistication, quality of data and amenability to quantitative expressions of uncertainty (both
data and decision). An assessor should outline the rationale for applying a specific method in a
specific situation to facilitate communication with the risk manager/decision maker and
stakeholders. An assessor will benefit from citing the sources, references and materials used to
support the overall evaluation. These include resources describing when and how to use a
specific tool and those defining parameter defaults.
Anticipating these types of questions and concerns will help an assessor, manager, community
involvement coordinator and others prepare effective communication materials. In responding to
these questions, an assessor also needs to recognize that, although resources (i.e., time and cost
constraints) can influence decisions about data collection and additional analyses, stakeholders
usually are less concerned about such constraints. In fact, their confidence might diminish in
decisions that appear driven by resource considerations rather than by exposure assessment
analyses. An assessor needs to focus on clearly communicating models, methods, assumptions,
distributions and parameters applied during an exposure assessment and uncertainty analysis.
Openness and transparency about the analyses build trust and confidence between the risk
manager/decision maker and stakeholder (WHO 2008). Using sensitivity analyses and evaluating
decision uncertainty in exposure assessments can help decision makers better determine which
uncertainties have greater influence on the assessment result, allowing them to focus on
communicating those uncertainties.
Chapter 6 of EPA's Risk Assessment Guidance for Superfund, Volume III: Part A, Process for
Conducting Probabilistic Risk Assessment (U.S. EPA 2001h) and Chapter 6 of the World Health
Organization's Uncertainty and Data Quality in Exposure Assessment (WHO 2008) provide
detailed information about communicating exposure assessment results, including the impacts of
uncertainty and variability. In addition to providing general guidance about effective
communication, both documents provide examples and suggestions for clearly communicating
information about data and data uncertainty and variability pertaining to point estimates,
probability distributions, sensitivity analysis, probabilistic risk assessment and additional
concepts in uncertainty and variability. [For more information on decision uncertainty, see
Dalkmann et al. 2004; Davies et al. 1987; Illing 1999; Jamieson 1996a; Jamieson 1996b;
Sarewitz 2004; Stahl and Cimorelli 2005.] Chapter 9 in this document provides additional
information about communication considerations for exposure assessments.
8.5. Summary
• Decision uncertainty is the compounded effect of the total uncertainty (and variability) on
the decisions arrived at in an exposure assessment. It involves both the uncertainties of
the scientific problem and the uncertainties involved when risk managers/decision makers
choose how to formulate and execute the exposure assessment. It also includes
uncertainties associated with data and models, whether the problem is scoped accurately
and the most important factors are included and the degree to which experts and other
stakeholders concur about how to consider those factors in the assessment.
o EPA defines data uncertainty as "a lack of precise knowledge as to what the truth is,
whether qualitative or quantitative."
o Decision uncertainty includes uncertainty associated with the scientific problem and
with the choices of risk managers/decision makers and stakeholders in formulating
and executing the exposure assessment.
o EPA defines variability as the "inherent heterogeneity across space, in time, or
among individuals. Variability cannot be reduced with additional investigation, only
better understood or characterized." Data uncertainty includes data variability.
• Evaluating uncertainty and variability helps in understanding the accuracy and
effectiveness of the whole exposure assessment, as well as its individual components.
Although an uncertainty evaluation will not eliminate all uncertainty, it can help address
questions regarding exposure assessment results, including its influence on risk
management decisions, and can help with communication of the assessment.
o An assessor considers how to characterize uncertainty and variability for the
assessment during the planning and scoping phase of an exposure assessment.
o Understanding whether data uncertainty or data variability contributes more to the
exposure assessment can be instructive for assessors. Statistical or other means, such
as gathering more data, can help reduce data uncertainties and improve the
understanding of variability and could help quantify it.
o Transparency in the communication of information about an exposure assessment
increases the common understanding of exposure assessment results and limitations.
• EPA emphasizes the use of a tiered approach to conducting data uncertainty and
variability evaluations.
o Screening-level analyses are the first tier of a data uncertainty and variability
evaluation. These analyses often help screen out exposure scenarios or pathways
expected to pose little or no risk. Screening includes selecting point estimates for
input parameters, estimating potential exposures based on the scenarios identified,
comparing the estimated exposure to screening values and determining which
exposure pathways require additional evaluation.
o Sensitivity analysis is "any systematic, common sense technique used to understand
how risk estimates and, in particular, risk-based decisions, are dependent on
variability and uncertainty in the factors contributing to risk." For data uncertainty,
sensitivity analysis is the process of determining which parameter(s) in an exposure
assessment drives the results. The essence of the analysis is to evaluate how changes
in the input parameters change the output.
• If uncertainty and variability in an exposure assessment could change decisions, an
assessor needs to consider refining the assessment by reducing the uncertainty or better
defining variability. Refining an exposure assessment is a necessary step to inform
decision making, particularly when the consequences of the decision impact public health
and public health resources. Using decision uncertainty to determine the extent to which
changes to multiple data uncertainties or other factors influence the exposure assessment
result can be instructive for risk managers/decision makers and stakeholders when
deciding on the merits of investing additional time and effort.
• When stakeholders understand the effect of the compounded uncertainties on the
exposure assessment result, communicating the importance of the results is easier.
Effectively communicating the concepts, evaluation tools and their impacts on an
exposure assessment nevertheless can be challenging.
CHAPTER 9. DEVELOPING A COMMUNICATION
PLAN AND PRESENTING RESULTS FOR EXPOSURE
ASSESSMENTS
The purpose of an exposure assessment shapes the manner in which assessors communicate the
exposure assessment results. This chapter highlights considerations when communicating the
results of an exposure assessment. It presents:
• An overview of communication in an exposure assessment (Section 9.1)
• Development of a communication plan (Section 9.2)
• Characterization of the results of an exposure assessment (Section 9.3)
• Communication products (Section 9.4)
Section 9.5 summarizes this chapter.
9.1. Overview of Communication in Exposure Assessment
For this document, EPA defines "communication" as the exchange of information and
viewpoints between the Agency and stakeholders to achieve a goal or objective, such as fostering
greater understanding of science and assessment methods, or gaining greater insight into diverse
public views and concerns about the scenarios affecting the potential for exposure of individuals
or a community to a defined agent [adapted from NAS (2017)].
This chapter presents recommended approaches for planning, initiating, maintaining and
delivering content associated with an exposure assessment. To be effective, communication
begins at the outset of the assessment process and remains ongoing throughout. Section 3.1.3
emphasized the need for early engagement with stakeholders. This chapter expands on that
content and addresses key points of ongoing communication between the assessor, community
and other stakeholders. The approach to communication varies with the degree of community
engagement, and the level of outreach and community engagement varies by program. Agency
activities that directly involve engagement with communities and a wider array of stakeholders
require more extensive planning (U.S. EPA 2016e). Box 9-1 lists EPA guidance and resources
on public involvement.
Effective communication begins during the early phases of the assessment process (i.e., during
planning and scoping and problem formulation) (NRC 2009). Section 3.1.3 presented guidance
on engaging with stakeholders during the early phases of an exposure assessment. EPA's Office
of Public Affairs and the communication staff within each program are available to facilitate the
initial contact and maintain a relationship with stakeholders throughout the process. Ongoing
coordination with the Office of Public Affairs is important to ensure effective communication.
EPA adopted the Seven Cardinal Rules of Risk Communication as a policy guidance document in
1988 (Covello and Allen 1988). These principles uphold the importance of dialogue with the
community and other interested stakeholders (Covello and Sandman 2001).
Box 9-1. EPA Guidance and Resources on Public Involvement
Guidance
• Public Involvement Policy of the U.S. Environmental Protection Agency, EPA 233-B-03-002 (U.S. EPA 2003g)
The purposes of this policy are to:
o Improve the acceptability, efficiency, feasibility and durability of the Agency's decisions
o Reaffirm EPA's commitment to early and meaningful public involvement
o Ensure EPA considers the interests and concerns of affected people and entities when making decisions
o Promote use of a wide variety of techniques to create early and, when appropriate, continuing opportunities for
public involvement in Agency decisions
o Establish clear and effective guidance for conducting public involvement activities
• Framework for Implementing EPA's Public Involvement Policy, EPA 233-F-03-001 (U.S. EPA 2003e)
• Introducing EPA's Public Involvement Policy
EPA Resources
• About the Office of Public Affairs website
• Public Participation Guide: Introduction to Public Participation website
• Development and Review of EPA Communication Products website
• Stakeholder Involvement and Public Participation at the U.S. EPA: Lessons Learned, Barriers & Innovative Approaches,
EPA 100-R-00-040
9.2. Development of a Communication Plan
A communication plan identifies the stakeholders, establishes a working relationship among
stakeholders and provides the approach for interactions. A communication plan needs to outline
the objectives/goals of the activity and present the most appropriate method for communication
throughout the exposure assessment. The plan also should consider how the information will be
presented to various stakeholders.
EPA's Communication Strategies document states that a communication plan needs to consider:
• Why - identify why communication is necessary and define the objective(s).
• Who - define the audience(s) and how to reach them.
• What - meet with the assessment team to discuss the communication plan; coordinate
communication plan content such as goals and objectives; focus on two or three key
messages and rank them by importance, timeliness or other factors.
• How - identify key messages and determine approach(es) for delivering them. Delivery
methods can include briefings, exhibits, fact sheets, the internet, mailings, presentations,
public notices, responsiveness summaries, telephone, translation of documents into
languages other than English, videos and social media.
• When - determine timing of meetings, outreach to stakeholders, budget considerations
and feedback to the assessment team to evaluate the strengths and weaknesses of
outreach and how it can be improved and revised to ensure continued effectiveness.
The document also includes worksheets.
Other resources include the EPA Guidelines for Research Project Communication Plans website.
9.3. Results of an Exposure Assessment: Exposure Characterization
and Risk Characterization
Exposure characterization is the narrative that provides the discussion, analysis and conclusions
to synthesize the exposure assessment results. It presents a balanced representation of the
available data and their relevance to the health effects of concern and identifies key assumptions
and major areas of uncertainty. Section 9.3.1 details the key elements of an exposure
characterization. Section 9.3.2 presents information on developing the exposure characterization
as part of the overall risk assessment. Section 9.3.3 discusses various formats for presenting
exposure characterization results, and Section 9.3.4 describes ways for conveying uncertainties
in the results. Section 9.3.5 presents considerations for communicating with stakeholders.
9.3.1. Elements of an Exposure Characterization
An exposure characterization:
• Provides the purpose, objective(s), scope, level of detail and approach used, including
key assumptions
• Presents the estimates of exposure and dose by pathway and route for individuals,
lifestages, groups or populations of concern
• Provides an evaluation of the overall quality of the assessment and the degree of
confidence the assessors have in the estimates of exposure and dose and in the
conclusions drawn
• Presents an interpretation of the data and results
• Presents information on uncertainty and variability
• Communicates the results within the context of the risk characterization.
The presentation of the exposure and dose estimates identifies and quantifies important
source(s), significant pathway(s) and route(s) of exposure from the source to the individual,
lifestage, group or population of concern as laid out in the conceptual model (see Section 3.2.2).
The presentation also discusses reasons for excluding any individual, lifestage, group or
population of concern from the assessment. If the exposure distribution is known, a variety of
exposure descriptors and, where possible, the full population distribution is presented. An
assessor provides risk managers/decision makers an estimate of how exposure is distributed
across the population and how variability in population activities influences this distribution by
including summary statistics, the average or central tendency exposure, high-end exposures,
other program-specific outputs (e.g., the maximally exposed individual) or other descriptors as
appropriate to regulatory needs (see Section 5.3). If the distribution is unknown, an assessor
presents context for and characterizes, to the extent possible, the exposure estimates. Ideally, an
exposure characterization links the purpose of the assessment with specific risk descriptors,
which in turn facilitate construction of a risk characterization.
Where appropriate, a description of additional research and data needed to improve an exposure
assessment can be helpful to risk managers/decision makers in making decisions. For this reason,
an exposure characterization identifies key data gaps that can help focus further efforts to reduce
uncertainty if additional information would inform key issues or provide greater certainty to the
decision. Finally, most risk management decisions take into account a variety of factors in
addition to science: economic factors, technological factors, laws, socioeconomic considerations,
political factors and public values (U.S. EPA 2000g).
9.3.2. Development and Use of an Exposure Characterization in
Characterizing Risk
EPA's Information Quality Guidelines (U.S. EPA 2002f) lay out criteria for assessments made
available to the public:
"(i) each population addressed by any estimate of applicable human health risk or each risk
assessment endpoint, including populations if applicable, addressed by any estimate of
applicable ecological risk;
(ii) the expected risk or central estimate of human health risk for the specific populations
affected or the ecological assessment endpoints, including populations if applicable;
(iii) each appropriate upper-bound or lower-bound estimate of risk;
(iv) each significant uncertainty identified in the process of the assessment of risk and studies
that would assist in resolving the uncertainty; and
(v) peer-reviewed studies known to the Administrator that support, are directly relevant to, or
fail to support any estimate of risk and the methodology used to reconcile inconsistencies in
the scientific data."
In practice, an assessor writes characterizations for each component of the risk assessment
(hazard assessment, dose-response assessment, exposure assessment) to carry forward the
findings, assumptions, limitations and uncertainties in the three components. This set of
characterizations provides the informational basis for writing the integrated risk characterization.
The risk characterization conveys the risk assessor's judgment about the nature and presence or
absence of risks, information about how the risk was assessed, the assumptions used and
consideration of data uncertainty, and insights about where policy choices will need to be made.
Often these assessments lead to a regulatory decision (i.e., policy decision). The quality of the
exposure characterization determines the ability to integrate the exposure assessment with the
hazard identification and dose-response assessment into the risk assessment and incorporate it
into a regulatory decision. The overall risk characterization informs the risk manager/decision
maker and others about the rationale for EPA's approach to conducting the risk assessment
(i.e., why EPA took that approach to assess the risk). The risk characterization restates the scope
of the assessment, expresses results clearly, articulates major assumptions and uncertainty,
identifies reasonable alternative interpretations and distinguishes scientific conclusions from
policy decisions (U.S. EPA 2000g).
EPA's risk characterization policy calls for conducting risk characterizations in a manner
consistent with the principles listed below (U.S. EPA 2000g). These principles apply to each
component of the risk assessment:
• Transparency. The characterization needs to disclose—fully and explicitly—the
methods, default assumptions, logic, rationale, extrapolations and uncertainty
(distinguishing, when possible, between data and decision uncertainty) and the overall
strength of each step in the assessment.
• Clarity. Readers within and external to the assessment process need to understand the
products from the assessment. Documents need to be concise and free of jargon and
include understandable tables, graphs and equations.
• Consistency. The conduct and presentation of the assessment need to be consistent with
EPA policy and guidance.
• Reasonableness. Sound judgment needs to be the foundation of the assessment, with
methods and assumptions consistent with the current state-of-the-science and conveyed in
a manner that is complete, balanced and informative.
These four principles—transparency, clarity, consistency and reasonableness—are referred to
collectively as TCCR. To achieve TCCR in an exposure characterization, an assessor needs to
apply these principles in all steps of the process (U.S. EPA 2000g).
9.3.3. Formats for Exposure Characterization
EPA does not require a set format for exposure characterization reports, but some individual
programs within the Agency do have specific format requirements. EPA's Office of Land and
Emergency Management (formerly Office of Solid Waste and Emergency Response), for
example, has developed standardized methods for presenting exposure information, described in
the Risk Assessment Guidance for Superfund Part D (U.S. EPA 2001g). The tables in that
guidance present an approach for summarizing and presenting information. They help organize
information on the exposure point concentration, including statistics used, exposure variables for
specific lifestages (e.g., children, adolescents, adults), toxicity values, calculated cancer risks and
estimates of non-cancer health hazards.
EPA's Risk Characterization Handbook (2000g), Appendices B through E, presents several
examples of exposure characterizations that are part of risk characterization case studies. EPA's
Office of Research and Development provides templates for presentations at conferences, public
meetings and other venues. Other EPA programs might have specific formats for communicating
results (e.g., oral, written). Assessors need to consult with their programs and follow their
standard operating procedures.
9.3.4. Communicating Uncertainty
One of the most challenging aspects of communication is the presentation of uncertainty (NRC
2009). Addressing uncertainties in assessments is an essential but often challenging task—
particularly when communicating with stakeholders with a wide range of technical expertise
(Spiegelhalter et al. 2011; Stirling 2010; Visschers et al. 2009). The most appropriate method for
addressing uncertainty depends on the nature of the assessment and the audience (IOM 2013;
U.S. EPA 2001h; U.S. EPA 2014h; U.S. EPA 2014i; WHO 2008).
In general, numerical, graphical or narrative formats can be used to present uncertainty,
depending on the audience (IOM 2013). Regardless of the presentation type, it needs to be self-
explanatory: capable of communicating the critical information without relying on the narrative
to explain the main message. Numerous researchers provide additional information on
presentation types [e.g., Helsel and Hirsch (1993), Lipkus (2007), Slovic (1986), Slovic et al.
(1979) and Tufte (2001)].
When communicating results and their attendant uncertainties, the assessor needs to keep in
mind that the use of numerical, narrative and graphical information is not mutually exclusive.
Rather, these three presentation types used in concert can improve communication. Certainly, a
table or graph can support a narrative in the presentation.
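As one hypothetical illustration of pairing a numerical summary with a graphic, the sketch below tabulates selected percentiles of a simulated exposure distribution and plots its empirical cumulative distribution. The simulated values, output file name and formatting choices are placeholders for illustration rather than a prescribed presentation style.

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder exposure estimates (e.g., from a Monte Carlo run); not real results.
    rng = np.random.default_rng(seed=7)
    dose = rng.lognormal(mean=np.log(0.02), sigma=0.6, size=5000)

    # Numerical summary: selected percentiles of the exposure distribution.
    percentiles = [5, 25, 50, 75, 95]
    for p, value in zip(percentiles, np.percentile(dose, percentiles)):
        print(f"{p:>3}th percentile: {value:.4f} mg/kg-day")

    # Graphical summary: empirical cumulative distribution function.
    sorted_dose = np.sort(dose)
    cumulative = np.arange(1, len(sorted_dose) + 1) / len(sorted_dose)
    plt.plot(sorted_dose, cumulative)
    plt.xlabel("Estimated dose (mg/kg-day)")
    plt.ylabel("Cumulative probability")
    plt.title("Illustrative exposure distribution (simulated data)")
    plt.savefig("exposure_cdf.png", dpi=150)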
9.3.5. Stakeholders
Potential stakeholders (see Sections 3.1.3, 7.2.8 and 7.2.9) include residents of the community in
which the assessment took place, community groups, advocacy groups, interested members of
the public, medical organizations, university partnerships, industry groups, nonprofit
organizations, nongovernment and government organizations, states, tribes and the media.
Chapter 7 provided guidance on communicating results to study participants. Section 7.2.8
described ways to establish communication and dialogue with community members in the initial
phases of an exposure assessment. This dialogue includes asking the community to define their
questions of interest and the manner in which they wish to receive the assessment results. Payne-
Sturges et al. (2004) noted that effective communication and translation of the exposure
assessment approach enables the community to "credibly represent the study's implications to
policy makers and other stakeholders, thereby closing the loop between science and the
community."
The program with the most experience and activity in stakeholder involvement is EPA's
Superfund program. EPA's Office of Land and Emergency Management has developed the most
extensive guidance for dealing with stakeholders; see the Superfund Community Involvement
Tools and Resources website. This knowledgebase includes content on establishing, engaging
and maintaining a working relationship with communities.
Covello and Sandman (2001) describe important obstacles to overcome in achieving effective
risk communication to stakeholders: inconsistent, overly complex, confusing or incomplete risk
messages; the lack of trust in information sources; selective reporting by the media; and
psychological and social factors that affect how information is processed. Exposure assessors
face significant challenges regarding how to interpret, report and act on results when the links
between environmental chemicals and health are only partially understood, poorly known or
complex (NRC 2006b). Examples include conveying both the risks and benefits of fish
consumption and discussing the significance of elevated body burdens of chemicals that lack
toxicological or epidemiological evidence regarding health effects. In this situation, the assessor
needs to be candid with stakeholders about the availability of useful information.
Section 9.4 lists additional resources assessors can use for effective communication about
exposures and risks.
9.4. Communication Products
The exposure characterization is consistent with the level of detail and complexity of the
assessment conducted. The needs of the stakeholders, however, determine the depth and detail of
subsequent products. When discussing assessment results with stakeholders, an approach would
be to provide a short executive summary, clearly highlighting key issues and conclusions, with
the technical information included in an appendix or as a reference to the exposure assessment
itself. Assessors need to consult with their programs and follow their standard operating
procedures.
Communication products might include fact sheets, slide presentations, press releases, Federal
Register notices, newsletters, site- or community-specific websites, social media, public
meetings and hotlines. Release of all communication products related to the exposure assessment
needs to follow appropriate Agency clearance procedures. Documents and other communication
products need to be dated and replaced (e.g., on websites) as updates become available.
For assessments that involve the public, a communication plan often is essential. A
communication plan includes the key messages for dissemination and the audiences, format,
timing and frequency for distributing them. NRC (1989b) notes that developing risk messages is
a collaborative effort between scientists and communications experts:
"It is a mistake to simply consider risk communication to be an add-on activity for either
scientific or public affairs staffs; both elements should be involved. There are clear
dangers if risk messages are formulated ad hoc by public relations personnel in isolation
from available technical expertise; neither can they be prepared by risk analysts as a
casual extension of their analytic duties."
Increasingly, EPA is taking advantage of electronic media, such as Wikis, blogs, microblogs
(e.g., Twitter) and social networking sites (e.g., Facebook), to communicate environmental and
health information to the public (U.S. EPA 2011c). These tools can be an effective component of
a communication plan for an exposure assessment.
The array of published literature on risk communication and public involvement is extensive and
continues to evolve as the complexity of risk issues increases (Covello 1987; Deisler Jr. 1988;
Fischhoff 1995; Fischhoff 1998; Fischhoff and Downs 1997; Holliman et al. 2008a; Holliman et
al. 2008b; Hora 1992; Ibrekk and Morgan 1987; Johnson and Slovic 1995; Mastrandrea et al.
2010; Morgan and Martinez 1992; NAS 2017; North 1997; Thompson and Bloom 2000).
9.5. Summary
• Effective communication is a dialogue process between Agency staff and stakeholders
that begins during the early phases of the assessment process: planning and scoping and
problem formulation. EPA actively engages the public in many of its decision making
processes.
• A communication plan developed and implemented early in an assessment should
identify and establish a working relationship with the relevant and interested parties. The
strategy introduces stakeholders to assessment vocabulary and processes. It should
outline the goals and objectives, select the most appropriate method of communication
and inform stakeholders about what to expect in the final report.
• Exposure characterization, an element of risk characterization, is the narrative that
provides the discussion, analysis and conclusions to synthesize these results.
o A characterization is written for each component of the risk assessment (hazard
assessment, dose-response assessment, exposure assessment) and used to carry
forward the findings, assumptions, limitations and uncertainties in the three
components. These characterizations collectively provide the information for writing
the results of an integrated risk characterization analysis.
o The elements of an exposure characterization are (1) the purpose, objectives,
scope, level of detail and approach used; (2) exposure and dose estimates by pathway
and route for individuals, lifestages, groups and populations of concern; (3) an
evaluation of the overall quality of the assessment and the degree of confidence
assessors have in the estimates and conclusions; (4) an interpretation of the data and
results; and (5) communication of the results for integration with the other assessment
elements to develop a risk characterization.
o EPA has no set format for exposure characterization reports, but some individual
programs within the Agency have specific format requirements. EPA's Risk
Characterization Handbook presents examples of exposure characterizations.
o One of the most challenging aspects of communication is the presentation of
uncertainty. Using numerical, narrative and graphical information in concert can
improve communication.
• The stakeholders for an exposure assessment can range widely: community groups,
advocacy groups, the public, medical organizations, university partnerships, industry
groups, nonprofit organizations, nongovernment and government organizations, states,
tribes and the media. Upon completion of the exposure and risk characterizations, the
focus turns to communicating the results to the risk manager/decision maker.
• Appropriate communication products are tailored to their intended audiences.
CHAPTER 10. REFERENCES
Ackerman, JL; Proffit, WR. (1995). Communication in Orthodontic Treatment Planning:
Bioethical and Informed Consent Issues. The Angle Orthodontist 65: 253-261.
Ackerman, TF. (1989). An Ethical Framework for the Practice of Paying Research Subjects.
Institutional Review Board 11: 1-4.
Acquavella, JF; Alexander, BH; Mandel, JS; Gustin, C; Baker, B; Chapman, P; Bleeke, M.
(2004). Glyphosate Biomonitoring for Farmers and their Families: Results from the Farm
Family Exposure Study. Environmental Health Perspectives 112: 321-326.
Adamkiewicz, G; Zota, AR; Fabian, MP; Chahine, T; Julien, R; Spengler, JD; Levy, JI. (2011).
Moving Environmental Justice Indoors: Understanding Structural Influences on
Residential Exposure Patterns in Low-Income Communities. American Journal of Public
Health 101: S238-S245.
Adgate, JL; Clayton, CA; Quackenboss, JJ; Thomas, KW; Whitmore, RW; Pellizzari, ED; Lioy,
PJ; Shubat, P; Stroebel, C; Freeman, NC; Sexton, K. (2000). Measurement of Multi-
Pollutant and Multi-Pathway Exposures in a Probability-Based Sample of Children:
Practical Strategies for Effective Field Studies. Journal of Exposure Analysis and
Environmental Epidemiology 10: 650-661.
Arnot, JA. (2009). Mass Balance Models for Chemical Fate, Bioaccumulation, Exposure and
Risk Assessment. In LI Simeonov; MA Hassanian (Eds.), Exposure and Risk Assessment
of Chemical Pollution - Contemporary Methodology (pp. 69-91). Dordrecht, Germany:
Springer Netherlands.
Ashley-Martin, J; Dodds, L; Arbuckle, TE; Ettinger, AS; Shapiro, GD; Fisher, M; Morisset, AS;
Taback, S; Bouchard, MF; Monnier, P; Dallaire, R; Fraser, WD. (2014). A Birth Cohort
Study to Investigate the Association between Prenatal Phthalate and Bisphenol A
Exposures and Fetal Markers of Metabolic Dysfunction. Environmental Health 13: 84.
Ashley-Martin, J; Dodds, L; Arbuckle, TE; Morisset, AS; Fisher, M; Bouchard, MF; Shapiro,
GD; Ettinger, AS; Monnier, P; Dallaire, R; Taback, S; Fraser, W. (2016). Maternal and
Neonatal Levels of Perfluoroalkyl Substances in Relation to Gestational Weight Gain.
International Journal of Environmental Research and Public Health 13.
Ashley-Martin, J; Levy, AR; Arbuckle, TE; Platt, RW; Marshall, JS; Dodds, L. (2015). Maternal
Exposure to Metals and Persistent Pollutants and Cord Blood Immune System
Biomarkers. Environmental Health 14: 52.
ATSDR (Agency for Toxic Substances and Disease Registry). (1997). Child Health Initiative.
Healthy Children; Toxic Environments. Acting on the Unique Vulnerability of Children
Who Dwell Near Hazardous Waste Sites. Atlanta, GA: ATSDR.
https://books.google.com/books?isbn=0788175343.
Aylward, LL; Kirman, CR; Schoeny, R; Portier, CJ; Hays, SM. (2013). Evaluation of
Biomonitoring Data from the CDC National Exposure Report in a Risk Assessment
Context: Perspectives across Chemicals. Environmental Health Perspectives 121: 287-
294.
Baguley, T. (2004). Understanding Statistical Power in the Context of Applied Research.
Applied Ergonomics 35: 73-80.
Bangs, GW. (2005a). Comparison of Dermal Exposure Assessment Methods and Exposure
Assessment Guidelines Update. Risk Assessment Forum's Regional Risk Assessors
Meeting, May 2-6, Kansas City, MO.
Bangs, GW. (2005b). Revisions to the Exposure Assessment Guidelines of 1992: Proposed
Changes and Panel Discussion. The International Society of Exposure Analysis (ISEA)
15th Annual Conference, October 30-November 3, Tucson, AZ.
Barr, DB; Bishop, A; Needham, LL. (2007). Concentrations of Xenobiotic Chemicals in the
Maternal-Fetal Unit. Reproductive Toxicology 23: 260-266.
Barr, DB; Thomas, K; Curwin, B; Landsittel, D; Raymer, J; Lu, C; Donnelly, KC; Acquavella, J.
(2006). Biomonitoring of Exposure in Farmworker Studies. Environmental Health
Perspectives 114: 936-942.
Barr, DB; Wilder, LC; Caudill, SP; Gonzalez, AJ; Needham, LL; Pirkle, JL. (2005). Urinary
Creatinine Concentrations in the U.S. Population: Implications for Urinary Biologic
Monitoring Measurements. Environmental Health Perspectives 113: 192-200.
Barzyk, TM; Conlon, KC; Chahine, T; Hammond, DM; Zartarian, VG; Schultz, BD. (2010).
Tools Available to Communities for Conducting Cumulative Exposure and Risk
Assessments. Journal of Exposure Science and Environmental Epidemiology 20: 371-
384.
Bates, SC; Cullen, A; Raftery, AE. (2003). Bayesian Uncertainty Assessment in
Multicompartment Deterministic Simulation Models for Environmental Risk Assessment.
Environmetrics 14: 355-371.
Beierle, TC. (2002). The Quality of Stakeholder-Based Decisions. Risk Analysis 22: 739-749.
Belzer, RB; deFur, P; Clarke, D. (2001). Chapter 5. Selecting, Implementing, and Tracking
Ecological Risk Management Decisions: Necessary Elements of an Effective Decision-
Making Framework. In RG Stahl (Ed.), Risk Management: Ecological Risk-Based
Decision-Making (pp. 57-74). Pensacola, FL: Society of Environmental Toxicology and
Chemistry.
Berman, LE; Fisher, AL; Ostchega, Y; Reed-Gillette, DS; Stammeijohn, EL. (2001). Quality
Assurance (QA)/Quality Control (QC) Processes for the National Health and Nutrition
Examination Survey (NHANES). Proceedings AMIA Symposium 862-862.
Blount, BC; Valentin-Blasini, L; Osterloh, JD; Mauldin, JP; Pirkle, JL. (2007). Perchlorate
Exposure of the US Population, 2001-2002. Journal of Exposure Science and
Environmental Epidemiology 17: 400-407.
Bouvier, G; Seta, N; Vigouroux-Villard, A; Blanchard, O; Momas, I. (2005). Insecticide Urinary
Metabolites in Nonoccupationally Exposed Populations. Journal of Toxicology and
Environmental Health Part B: Critical Reviews 8: 485-512.
Brady, D. (2011). Guidance for the Development of Conceptual Models for a Problem
Formulation Developed for Registration Review. Memorandum, March 10. Washington,
D.C.: U.S. EPA. https://www.epa.gov/pesticide-science-and-assessing-pesticide-
risks/guidance-development-conceptual-models-problem#memo.
Brauer, M; Hakkinen, BPJ; Gehan, BM; Shirname-More, L. (2004). Communicating Exposure
and Health Effects Results to Study Subjects, the Community and the Public: Strategies
and Challenges. Journal of Exposure Analysis and Environmental Epidemiology 14: 479-
483.
Braun, JM; Hauser, R. (2011). Bisphenol A and Children's Health. Current Opinion in Pediatrics
23: 233-239.
Brown, P. (1995). Race, Class, and Environmental Health: A Review and Systematization of the
Literature. Environmental Research 69: 15-30.
Brulle, RJ; Pellow, DN. (2006). Environmental Justice: Human Health and Environmental
Inequalities. Annual Review of Public Health 27: 103-124.
Buck, RJ; Hammerstrom, KA; Ryan, PB. (1995). Estimating Long-Term Exposures from Short-
Term Measurements. Journal of Exposure Analysis and Environmental Epidemiology 5:
359-373.
Buckley, B; Ettinger, A; Hore, P; Lioy, P; Freeman, N. (2000). Using Observational Information
in Planning and Implementation of Field Studies With Children as Subjects. Journal of
Exposure Analysis and Environmental Epidemiology 10: 695-702.
Bullard, RD. (1990). Ecological Inequities and the New South: Black Communities under Siege.
Journal of Ethnic Studies 17: 101-115.
Burger, J. (2000). Gender Differences in Meal Patterns: Role of Self-Caught Fish and Wild
Game in Meat and Fish Diets. Environmental Research 83: 140-149.
Burger, J. (2002a). Consumption Patterns and Why People Fish. Environmental Research 90:
125-135.
Burger, J. (2002b). Daily Consumption of Wild Fish and Game: Exposures of High End
Recreationists. International Journal of Environmental Health Research 12: 343-354.
Burger, J; Gaines, KF; Gochfeld, M. (2001). Ethnic Differences in Risk from Mercury among
Savannah River Fishermen. Risk Analysis 21: 533-544.
Burger, J; Pflugh, KK; Lurig, L; Von Hagen, LA; Von Hagen, S. (1999a). Fishing in Urban New
Jersey: Ethnicity Affects Information Sources, Perception, and Compliance. Risk
Analysis 19: 217-229.
Burger, J; Sanchez, J; Gochfeld, M. (1998). Fishing, Consumption, and Risk Perception in
Fisherfolk along an East Coast Estuary. Environmental Research 77: 25-35.
Burger, J; Staine, K; Gochfeld, M. (1993). Fishing in Contaminated Waters: Knowledge and
Risk Perception of Hazards by Fishermen in New York City. Journal of Toxicology and
Environmental Health 39: 95-105.
Burger, J; Stephens Jr., WL; Boring, CS; Kuklinski, M; Gibbons, JW; Gochfeld, M. (1999b).
Factors in Exposure Assessment: Ethnic and Socioeconomic Differences in Fishing and
Consumption of Fish Caught along the Savannah River. Risk Analysis 19: 427-438.
Cabral, DN; Napoles-Springer, AM; Miike, R; McMillan, A; Sison, JD; Wrensch, MR; Perez-
Stable, EJ; Wiencke, JK. (2003). Population- and Community-Based Recruitment of
African Americans and Latinos: The San Francisco Bay Area Lung Cancer Study.
American Journal of Epidemiology 158: 272-279.
Callahan, MA; Clickner, RP; Whitmore, RW; Kalton, G; Sexton, K. (1995). Overview of
Important Design Issues for a National Human Exposure Assessment Survey. Journal of
Exposure Analysis and Environmental Epidemiology 5: 257-282.
Callan, AC; Hinwood, AL; Heyworth, J; Phi, DT; Odland, JO. (2016). Sex Specific Influence on
the Relationship between Maternal Exposures to Persistent Chemicals and Birth
Outcomes. International Journal of Hygiene and Environmental Health 219: 734-741.
CDC (Centers for Disease Control and Prevention). (2005). Environmental Public Health
Tracking and Biomonitoring. Atlanta, GA: CDC.
http://www.cdc.gov/nceh/tracking/pdfs/trackbiomon.pdf.
CDC. (2009). Fourth National Report on Human Exposure to Environmental Chemicals. Atlanta,
GA: CDC. https://www.cdc.gov/exposurereport/pdf/fourthreport.pdf.
CDC. (2012a). National Health and Nutrition Examination Survey. Atlanta, GA: CDC.
http://www.cdc.gov/nchs/nhanes.htm.
CDC. (2012b). Second National Report on Biochemical Indicators of Diet and Nutrition in the
U.S. Population. Executive Summary. Atlanta, GA: CDC.
https://www.cdc.gov/nutritionreport/pdf/exesummary_web_032612.pdf.
CDC. (2015a). CDC Plan for Increasing Access to Scientific Publications and Digital Scientific
Data Generated with CDC Funding. CDC. https://www.cdc.gov/od/science/docs/Final-
CDC-Public-Access-Plan-Jan-2015_508-Compliant.pdf.
CDC. (2015b). CDC Specimen-Collection Protocol for a Chemical-Exposure Incident. Atlanta,
GA: CDC. https://emergency.cdc.gov/labissues/pdf/chemspecimencollection.pdf.
CDC. (2016). National Health and Nutrition Examination Survey. 2013-2014 Data
Documentation, Codebook, and Frequencies. Atlanta, GA: CDC.
https://wwwn.cdc.gov/Nchs/Nhanes/2013-2014/UHG_H.htm.
CDC. (2017). National Biomonitoring Program. Biomonitoring Summary. Barium. CDC. Last
modified April 7.
https://www.cdc.gov/biomonitoring/Barium_BiomonitoringSummary.html.
Chakraborty, J; Maantay, JA; Brender, JD. (2011). Disproportionate Proximity to Environmental
Health Hazards: Methods, Models, and Measurement. American Journal of Public Health
101: S27-S36.
Checkoway, H; Eisen, EA. (1998). Developments in Occupational Cohort Studies.
Epidemiologic Reviews 20: 100-111.
Chen, Q; Jiang, X; Hedgeman, E; Knutson, K; Gillespie, B; Hong, B; Lepkowski, JM;
Franzblau, A; Jolliet, O; Adriaens, P; Demond, AH; Garabrant, DH. (2013). Estimation
of Age- and Sex-Specific Background Human Serum Concentrations of PCDDs, PCDFs,
and PCBs in the UMDES and NHANES Populations. Chemosphere 91: 817-823.
Clark, KE; David, RM; Guinn, R; Kramarz, KW; Lampi, MA; Staples, CA. (2011). Modeling
Human Exposure to Phthalate Esters: A Comparison of Indirect and Biomonitoring
Estimation Methods. Human and Ecological Risk Assessment: An International Journal
17: 923-965.
Clewell, HJ; Tan, YM; Campbell, JL; Andersen, ME. (2008). Quantitative Interpretation of
Human Biomonitoring Data. Toxicology and Applied Pharmacology 231: 122-133.
Cohen Hubal, EA; Sheldon, LS; Burke, JM; McCurdy, TR; Berry, MR; Rigas, ML; Zartarian,
VG; Freeman, NC. (2000). Children's Exposure Assessment: A Review of Factors
Influencing Children's Exposure, and the Data Available to Characterize and Assess That
Exposure. Environmental Health Perspectives 108: 475-486.
Collins, JJ; Bodner, KM; Baase, CM; Burns, C; Jammer, B; Bloemen, LJ. (2004).
Communication of Epidemiology Study Results by Industry: The Dow Chemical
Company Approach. Journal of Exposure Analysis and Environmental Epidemiology 14:
492-497.
Cook, WA. (1969). Problems of Setting Occupational Exposure Standards—Background.
Archives of Environmental Health 19: 272-276.
Cooke, GM. (2014). Biomonitoring of Human Fetal Exposure to Environmental Chemicals in
Early Pregnancy. Journal of Toxicology and Environmental Health Part B: Critical
Reviews 17: 205-224.
Corburn, J. (2002). Combining Community-Based Research and Local Knowledge to Confront
Asthma and Subsistence-Fishing Hazards in Greenpoint/Williamsburg, Brooklyn, New
York. Environmental Health Perspectives 110: S241-S248.
Covello, VT. (1987). Decision Analysis and Risk Management Decision Making: Issues and
Methods. Risk Analysis 7: 131-139.
Covello, VT. (1989). Communicating Information about the Health Risks of Radioactive Waste:
A Review of Obstacles to Public Understanding. Bulletin of the New York Academy of
Medicine 65: 467-482.
Covello, VT; Allen, FH. (1988). Seven Cardinal Rules of Risk Communication. (OPA/87/020).
Washington, D.C.: Office of Policy Analysis, U.S. EPA.
https://archive.epa.gov/care/web/pdf/7_cardinal_rules.pdf.
Covello, VT; Sandman, P. (2001). Risk Communication: Evolution and Revolution. In A
Wolbarst (Ed.), Solutions to an Environment in Peril (pp. 164-178). Baltimore, MD: John
Hopkins University Press.
Crowhurst, B; Dobson, KS. (1993). Informed Consent: Legal Issues and Applications to Clinical
Practice. Canadian Psychology 34: 329-346.
Cullen, AC; Frey, HC. (1999). Probabilistic Techniques in Exposure Assessment: A Handbook
for Dealing with Variability and Uncertainty in Models and Inputs. New York, NY:
Plenum Press.
Dalkmann, H; Herrera, RJ; Bongardt, D. (2004). Analytical Strategic Environmental Assessment
(ANSEA) Developing a New Approach to SEA. Environmental Impact Assessment
Review 24: 385-402.
Daston, G; Faustman, E; Ginsberg, G; Fenner-Crisp, P; Olin, S; Sonawane, B; Bruckner, J;
Breslin, W; McLaughlin, TJ. (2004). A Framework for Assessing Risks to Children From
Exposure to Environmental Agents. Environmental Health Perspectives 112: 238-256.
Davies, JC; Covello, VT; Allen, FW. (1987). Risk Communication: Proceedings of the National
Conference on Risk Communication, held in Washington, D.C., January 29-31, 1986.
Washington, D.C.: The Conservation Foundation.
Dean, RB; Dixon, WJ. (1951). Simplified Statistics for Small Numbers of Observations.
Analytical Chemistry 23: 636-638.
Deck, W; Kosatsky, T. (1999). Communicating their Individual Results to Participants in an
Environmental Exposure Study: Insights from Clinical Ethics. Environmental Research
80: S223-S229.
deFur, PL; Evans, GW; Cohen Hubal, EA; Kyle, AD; Morello-Frosch, RA; Williams, DR.
(2007). Vulnerability as a Function of Individual and Group Resources in Cumulative
Risk Assessment. Environmental Health Perspectives 115: 817-824.
Deisler Jr., PF. (1988). ES Series: Cancer Risk Assessment. 5. The Risk Management-Risk
Assessment Interface. Environmental Science & Technology 22: 15-19.
Dell, RB; Holleran, S; Ramakrishnan, R. (2002). Sample Size Determination. Institute of
Laboratory Animal Resources Journal 43: 207-213.
Dellarco, M; Bangs, GB. (2006). The Evolution of Exposure Assessment Science. Society for
Risk Analysis (SRA) Annual Meeting, December 3-6, Baltimore, MD.
Detenbeck, NE; Cincotta, D; Denver, JM; Greenlee, SK; Olsen, AR; Pitchford, AM. (2005).
Watershed-Based Survey Designs. Environmental Monitoring and Assessment 103: 59-
81.
Devane, D; Begley, CM; Clarke, M. (2004). How Many Do I Need? Basic Principles of Sample
Size Estimation. Journal of Advanced Nursing 47: 297-302.
Dickert, N; Emanuel, E; Grady, C. (2002). Paying Research Subjects: An Analysis of Current
Policies. Annals of Internal Medicine 136: 368-373.
Dillman, DA. (1999). Mail and Internet Surveys: The Tailored Design Method. 2nd Edition.
New York, NY: Wiley.
Dixon, WJ. (1950). Analysis of Extreme Values. Annals of Mathematical Statistics 21: 488-506.
Dixon, WJ. (1953). Processing Data for Outliers. Biometrics 9: 75-89.
Dixon, WJ. (1960). Simplified Estimation from Censored Normal Samples. Annals of
Mathematical Statistics 31: 385-391.
Dockery, DW; Pope, CA; Xu, X; Spengler, JD; Ware, JH; Fay, ME; Ferris Jr., BG; Speizer, FE.
(1993). An Association between Air Pollution and Mortality in Six U.S. Cities. New
England Journal of Medicine 329: 1753-1759.
Dong, MH; Draper, WM; Papanek Jr., PJ; Ross, JH; Woloshin, KA; Stephens, RD. (1994).
Estimating Malathion Doses in California's Medfly Eradication Campaign Using a
Physiologically Based Pharmacokinetic Model. In WM Draper (Ed.), Environmental
Epidemiology: Effects of Environmental Chemicals on Human Health (pp. 189-208).
Washington, D.C.: American Chemical Society.
Ducey, MJ. (2001). Representing Uncertainty in Silvicultural Decisions: An Application of the
Dempster-Shafer Theory of Evidence. Forest Ecology and Management 150: 199-211.
Dupont, WD; Plummer Jr., WD. (1990). Power and Sample Size Calculations. A Review and
Computer Program. Controlled Clinical Trials 11: 116-128.
Dupont, WD; Plummer Jr., WD. (1998). Power and Sample Size Calculations for Studies
Involving Linear Regression. Controlled Clinical Trials 19: 589-601.
ECETOC (European Centre for Ecotoxicology and Toxicology of Chemicals). (2005). Guidance
for the Interpretation of Biomonitoring Data. (DOC 044). Brussels, Belgium: ECETOC.
http://www.ecetoc.org/wp-content/uploads/2014/08/DOC-0441.pdf.
Edlmann, K; Bensabat, J; Niemi, A; Haszeldine, RS; McDermott, CI. (2016). Lessons Learned
from Using Expert Elicitation to Identify, Assess and Rank the Potential Leakage
Scenarios at the Heletz Pilot CO2 Injection Site. International Journal of Greenhouse Gas
Control 49: 473-487.
EJHU (Environmental Justice & Health Union). (2003). Environmental Exposures and Racial
Disparities. Oakland, CA: EJHU.
https://web.archive.org/web/20100213170328/http://ejhu.org/disparities.html.
El-Masri, H; Kleinstreuer, N; Hines, RN; Adams, L; Tal, T; Isaacs, K; Wetmore, BA; Tan, YM.
(2016). Integration of Life-Stage Physiologically Based Pharmacokinetic Models with
Adverse Outcome Pathways and Environmental Exposure Models to Screen for
Environmental Hazards. Toxicological Sciences 152: 230-243.
Emanuel, EJ; Grady, C; Crouch, RA; Lie, RK; Miller, FG; Wendler, D. (2008). The Oxford
Textbook of Clinical Research Ethics. New York, NY: Oxford University Press.
Emanuel, EJ; Menikoff, J. (2011). Reforming the Regulations Governing Research with Human
Subjects. New England Journal of Medicine 365: 1145-1150.
Erlen, JA; Sauder, RJ; Mellors, MP. (1999). Incentives in Research: Ethical Issues. Orthopedic
Nursing 18: 84-87.
Eskenazi, B; Bradman, A; Castorina, R. (1999). Exposures of Children to Organophosphate
Pesticides and Their Potential Adverse Health Effects. Environmental Health
Perspectives 107 Suppl 3: 409-419.
Executive Order No. 12898. (1994). Federal Actions to Address Environmental Justice in
Minority Populations and Low-Income Populations. Washington, D.C.: Office of the
Press Secretary, The White House. https://www.archives.gov/files/federal-register/executive-orders/pdf/12898.pdf.
Executive Order No. 13045. (1997). Protection of Children from Environmental Health Risks
and Safety Risks. Washington, D.C.: Office of the Press Secretary, The White House.
http://www.gpo.gov/fdsys/pkg/FR-1997-04-23/pdf/97-10695.pdf.
Executive Order No. 13175. (2000). Consultation and Coordination with Indian Tribal
Governments. Washington, D.C.: Office of the Press Secretary, The White House.
https://www.federalregister.gov/documents/2000/11/09/00-29003/consultation-and-coordination-with-indian-tribal-governments.
Fenske, RA; Bradman, A; Whyatt, RM; Wolff, MS; Barr, DB. (2005). Lessons Learned for the
Assessment of Children's Pesticide Exposure: Critical Sampling and Analytical Issues for
Future Studies. Environmental Health Perspectives 113: 1455-1462.
Finkel, AM. (1990). Confronting Uncertainty in Risk Management: A Guide for Decision-
Makers: Report. Washington, D.C.: Center for Risk Management, Resources for the
Future.
http://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=438442.
Finley, B; Paustenbach, D. (1994). The Benefits of Probabilistic Exposure Assessment: Three
Case Studies Involving Contaminated Air, Water, and Soil. Risk Analysis 14: 53-73.
Firestone, M; Moya, J; Cohen-Hubal, E; Zartarian, V; Xue, J. (2007). Identifying Childhood Age
Groups for Exposure Assessments and Monitoring. Risk Analysis 27: 701-714.
Fischhoff, B. (1995). Risk Perception and Communication Unplugged: Twenty Years of Process.
Risk Analysis 15: 137-145.
Fischhoff, B. (1998). Communicate Unto Others.... Reliability Engineering and Systems Safety
59: 63-72.
Fischhoff, B. (1976). Attribution Theory and Judgment under Uncertainty. In JH Harvey; WJ
Ickes; RF Kidd (Eds.), New Directions in Attribution Research Vol 1 (pp. 421-452). New
York, NY: John Wiley and Sons.
Fischhoff, B. (1988). Judgment and Decision Making. In RJ Sternberg; EE Smith (Eds.), The
Psychology of Human Thought (pp. 153-187). New York, NY: Cambridge University
Press.
Fischhoff, B; Downs, JS. (1997). Communicating Foodborne Disease Risk. Emerging Infectious
Diseases 3: 489-495.
Fitzgerald, EF; Deres, DA; Hwang, SA; Bush, B; Yang, BZ; Tarbell, A; Jacobs, A. (1999). Local
Fish Consumption and Serum PCB Concentrations among Mohawk Men at Akwesasne.
Environmental Research 80: S97-S103.
Fitzgerald, EF; Hwang, SA; Brix, KA; Bush, B; Cook, K; Worswick, P. (1995). Fish PCB
Concentrations and Consumption Patterns among Mohawk Women at Akwesasne.
Journal of Exposure Analysis and Environmental Epidemiology 5: 1-19.
Fitzgerald, EF; Hwang, SA; Bush, B; Cook, K; Worswick, P. (1998). Fish Consumption and
Breast Milk PCB Concentrations among Mohawk Women at Akwesasne. American
Journal of Epidemiology 148: 164-172.
Fitzgerald, EF; Hwang, SA; Deres, DA; Bush, B; Cook, K; Worswick, P. (2001). The
Association between Local Fish Consumption and DDE, Mirex, and HCB Concentrations
in the Breast Milk of Mohawk Women at Akwesasne. Journal of Exposure Analysis and
Environmental Epidemiology 11: 381-388.
Frey, HC; Patil, SR. (2002). Identification and Review of Sensitivity Analysis Methods. Risk
Analysis 22: 553-578.
Fry, CL; Ritter, A; Baldwin, S; Bowen, KJ; Gardiner, P; Holt, T; Jenkinson, R; Johnston, J.
(2005). Paying Research Participants: A Study of Current Practices in Australia. Journal
of Medical Ethics 31: 542-547.
Furtaw Jr., EJ. (2001). An Overview of Human Exposure Modeling Activities at the USEPA's
National Exposure Research Laboratory. Toxicology and Industrial Health 17: 302-314.
Gabrielsson, J; Weiner, D. (2000). Chapter 3. Pharmacokinetic Concepts. In Pharmacokinetic
and Pharmacodynamic Data Analysis: Concepts and Applications. 3rd Edition, (pp. 45-
174). Stockholm, Sweden: Swedish Pharmaceutical Press.
Gee, GC; Payne-Sturges, DC. (2004). Environmental Health Disparities: A Framework
Integrating Psychosocial and Environmental Concepts. Environmental Health
Perspectives 112: 1645-1653.
Georgopoulos, PG; Sasso, AF; Isukapalli, SS; Lioy, PJ; Vallero, DA; Okino, M; Reiter, L.
(2009). Reconstructing Population Exposures to Environmental Chemicals from
Biomarkers: Challenges and Opportunities. Journal of Exposure Science and
Environmental Epidemiology 19: 149-171.
Grady, C; Dickert, N; Jawetz, T; Gensler, G; Emanuel, E. (2005). An Analysis of U.S. Practices
of Paying Research Participants. Contemporary Clinical Trials 26: 365-375.
Greenland, S. (2001). Sensitivity Analysis, Monte Carlo Risk Analysis, and Bayesian
Uncertainty Assessment. Risk Analysis 21: 579-583.
Gregory, R; Long, G; Colligan, M; Geiger, JG; Laser, M. (2012). When Experts Disagree (and
Better Science Won't Help Much): Using Structured Deliberations to Support
Endangered Species Recovery Planning. Journal of Environmental Management 105: 30-
43.
Gronewold, A; Reckhow, K; Vallero, D. (2008). Improving Human and Ecological Exposure
Assessments: A Bayesian Network Modeling Approach. Epidemiology 19: S228-S229.
Guan, H; Piao, FY; Li, XW; Li, QJ; Xu, L; Yokoyama, K. (2010). Maternal and Fetal Exposure
to Four Carcinogenic Environmental Metals. Biomedical and Environmental Sciences 23:
458-465.
Handy, R; Smith, D; Castillo, N; Sparacino, C; Thomas, K. (1987). Total Exposure Assessment
Methodology (TEAM) Study: Standard Operating Procedures Employed in Support of an
Exposure Assessment Study. Volume 4. (EPA/600/6-87/002D). Washington, D.C.: U.S.
EPA.
https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=44146&keyword=castillo&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50&CFID=65675636&CFTOKEN=42372121.
Hansen, F. (1997a). Policy for Use of Probabilistic Analysis in Risk Assessment at the U.S.
Environmental Protection Agency. Memorandum, May 15. Washington, D.C.: U.S. EPA.
http://www2.epa.gov/sites/production/files/2014-11/documents/probpol.pdf.
Hansen, F. (1997b). Use of Probabilistic Techniques (Including Monte Carlo Analysis) in Risk
Assessment, and Guiding Principles for Monte Carlo Analysis. Memorandum, May 15.
Washington, D.C.: U.S. EPA.
Harper, BL; Flett, B; Harris, S; Abeyta, C; Kirschner, F. (2002). The Spokane Tribe's
Multipathway Subsistence Exposure Scenario and Screening Level RME. Risk Analysis
22: 513-526.
Harper, BL; Harding, AK; Waterhous, T; Harris, SG. (2007). Traditional Tribal Subsistence
Exposure Scenario and Risk Assessment Guidance Manual. (EPA-STAR-J1-R831046).
Oregon State University.
http://www7dev.nau.edu/itep/main/orca/Downloads/3803_ORCA.pdf.
Hays, SM; Becker, RA; Leung, HW; Aylward, LL; Pyatt, DW. (2007). Biomonitoring
Equivalents: A Screening Approach for Interpreting Biomonitoring Results from a Public
Health Risk Perspective. Regulatory Toxicology and Pharmacology 47: 96-109.
Helsel, DR. (1990). Less Than Obvious - Statistical Treatment of Data below the Detection
Limit. Environmental Science & Technology 24: 1766-1774.
Helsel, DR; Hirsch, RM. (1993). Statistical Methods in Water Resources. New York, NY:
Elsevier.
Herrier, RN; Boyce, RW. (1995). Communicating Risk to Patients. American Pharmacy NS35:
12-14.
Hightower, JM; O'Hare, A; Hernandez, GT. (2006). Blood Mercury Reporting in NHANES:
Identifying Asian, Pacific Islander, Native American, and Multiracial Groups.
Environmental Health Perspectives 114: 173-175.
Hinderliter, PM; Price, PS; Bartels, MJ; Timchalk, C; Poet, TS. (2011). Development of a
Source-to-Outcome Model for Dietary Exposures to Insecticide Residues: An Example
Using Chlorpyrifos. Regulatory Toxicology and Pharmacology 61: 82-92.
Hines, EP; Mendola, P; von Ehrenstein, OS; Ye, X; Calafat, AM; Fenton, SE. (2015).
Concentrations of Environmental Phenols and Parabens in Milk, Urine and Serum of
Lactating North Carolina Women. Reproductive Toxicology 54: 120-128.
Hines, RN. (2013). Developmental Expression of Drug Metabolizing Enzymes: Impact on
Disposition in Neonates and Young Children. International Journal of Pharmaceutics
452: 3-7.
Hoffrage, U; Lindsey, S; Hertwig, R; Gigerenzer, G. (2000). Medicine. Communicating
Statistical Information. Science 290: 2261-2262.
Holliman, R; Thomas, J; Smidt, S; Scanlon, E; Whitelegg, L. (2008a). Practising Science
Communication in the Information Age: Theorising Professional Practices. New York,
NY: Oxford University Press.
Holliman, R; Whitelegg, L; Scanlon, E; Smidt, S; Thomas, J. (2008b). Investigating Science
Communication in the Information Age. Implications for Public Engagement and Popular
Media. New York, NY: Oxford University Press.
Hora, SC. (1992). Acquisition of Expert Judgment: Examples from Risk Assessment. Journal of
Energy Engineering 118: 136-148.
Hough, RL; Stephens, C; Busby, A; Cracknell, J; Males, B. (2006). Assessing and
Communicating Risk with Communities Living on Contaminated Land. International
Journal of Occupational and Environmental Health 12: 1-8.
Houston, D; Li, W; Wu, J. (2014). Disparities in Exposure to Automobile and Truck Traffic and
Vehicle Emissions near the Los Angeles-Long Beach Port Complex. American Journal of
Public Health 104: 156-164.
Huber, DR; Blount, BC; Mage, DT; Letkiewicz, FJ; Kumar, A; Allen, RH. (2011). Estimating
Perchlorate Exposure From Food and Tap Water Based on US Biomonitoring and
Occurrence Data. Journal of Exposure Science and Environmental Epidemiology 21:
395-407.
Ibrekk, H; Morgan, MG. (1987). Graphical Communication of Uncertain Quantities to
Nontechnical People. Risk Analysis 7: 519-529.
IHA (American Industrial Hygiene Association) Exposure Assessment Committee. (2000).
Mathematical Models for Estimating Occupational Exposure to Chemicals (2 ed.):
American Industrial Hygiene Association.
Illing, HP. (1999). Are Societal Judgments Being Incorporated into the Uncertainty Factors Used
in Toxicological Risk Assessment? Regulatory Toxicology and Pharmacology 29: 300-
308.
ILSI (International Life Sciences Institute). (1999). A Framework for Cumulative Risk
Assessment: An ILSI Risk Science Institute Workshop Report. Washington, D.C.:
International Life Sciences Institute Press.
Iltis, AS; DeVader, S; Matsuo, H. (2006). Payments to Children and Adolescents Enrolled in
Research: A Pilot Study. Pediatrics 118: 1546-1552.
IOM (Institute of Medicine). (1999). Toward Environmental Justice: Research, Education, and
Health Policy Needs. Washington, D.C.: The National Academies Press.
IOM. (2004). Ethical Conduct of Clinical Research Involving Children. MJ Field; RE Behrman
(Eds.). Washington, D.C.: The National Academies Press.
http://www.nap.edu/catalog/10958/the-ethical-conduct-of-clinical-research-involving-
children.
IOM. (2013). Environmental Decisions in the Face of Uncertainty. Washington, D.C.: The
National Academies Press.
Isaacs, KK; Glen, WG; Egeghy, P; Goldsmith, MR; Smith, L; Vallero, D; Brooks, R; Grulke,
CM; Ozkaynak, H. (2014). SHEDS-HT: An Integrated Probabilistic Exposure Model for
Prioritizing Exposures to Chemicals with Near-Field and Dietary Sources. Environmental
Science & Technology 48: 12750-12759.
Isakov, V; Touma, JS; Burke, J; Lobdell, DT; Palma, T; Rosenbaum, A; Ozkaynak, H. (2009).
Combining Regional- and Local-Scale Air Quality Models with Exposure Models for
Use in Environmental Health Studies. Journal of the Air and Waste Management
Association 59: 461-472.
Jamieson, D. (1996a). Scientific Uncertainty and the Political Process. Annals of the American
Academy of Political and Social Science 545: 35-43.
Jamieson, D. (1996b). Scientific Uncertainty: How Do We Know When to Communicate
Research Findings to the Public? Science of the Total Environment 184: 103-107.
Jayjock, MA; Chaisson, CF; Arnold, S; Dederick, EJ. (2007). Modeling Framework for Human
Exposure Assessment. Journal of Exposure Science and Environmental Epidemiology 17:
S81-S89.
Johnson, BB; Slovic, P. (1995). Presenting Uncertainty in Health Risk Assessment: Initial
Studies of Its Effects on Risk Perception and Trust. Risk Analysis 15: 485-494.
Kampman, E; Arts, IC; Hollman, PC. (2003). Plant Foods versus Compounds in Carcinogenesis;
Observational versus Experimental Human Studies. International Journal for Vitamin and
Nutrition Research 73: 70-78.
Kasperson, RE. (1986). Six Propositions on Public Participation and their Relevance for Risk
Communication. Risk Analysis 6: 275-281.
Keenan, RE; Finley, BL; Price, PS. (1994). Exposure Assessment: Then, Now, and Quantum
Leaps in the Future. Risk Analysis 14: 225-230.
Keeney, RL; von Winterfeldt, D. (1986). Improving Risk Communication. Risk Analysis 6: 417-
424.
Kieser, M; Rohmel, J; Friede, T. (2004). Power and Sample Size Determination When Assessing
the Clinical Relevance of Trial Results by 'Responder Analyses'. Statistics in Medicine
23: 3287-3305.
Koch, HM; Drexler, H; Angerer, J. (2003). An Estimation of the Daily Intake of Di(2-
ethylhexyl)phthalate (DEHP) and Other Phthalates in the General Population.
International Journal of Hygiene and Environmental Health 206: 77-83.
Kraemer, HC; Mintz, J; Noda, A; Tinklenberg, J; Yesavage, JA. (2006). Caution Regarding the
Use of Pilot Studies to Guide Power Calculations for Study Proposals. Archives of
General Psychiatry 63: 484-489.
Kulldorff, M; Zhang, Z; Hartman, J; Heffernan, R; Huang, L; Mostashari, F. (2004). Benchmark
Data and Power Calculations for Evaluating Disease Outbreak Detection Methods.
Morbidity and Mortality Weekly Report 53: S144-S151.
LaKind, JS; Fenton, SE; Dorea, JG. (2009). Human Milk Biomonitoring of Phthalates:
Expanding our Understanding of Infant Exposure is Compatible with Supporting
Breastfeeding. Environment International 35: 994-995.
LaKind, JS; Sobus, JR; Goodman, M; Barr, DB; Furst, P; Albertini, RJ; Arbuckle, TE;
Schoeters, G; Tan, YM; Teeguarden, J; Tornero-Velez, R; Weisel, CP. (2014). A
Proposal for Assessing Study Quality: Biomonitoring, Environmental Epidemiology, and
Short-Lived Chemicals (BEES-C) instrument. Environment International 73: 195-207.
Lebowitz, MD; O'Rourke, MK; Gordon, S; Moschandreas, DJ; Buckley, T; Nishioka, M. (1995).
Population-Based Exposure Measurements in Arizona: A Phase I Field Study in Support
of the National Human Exposure Assessment Survey. Journal of Exposure Analysis and
Environmental Epidemiology 5: 297-325.
Lehmann, GM; Verner, MA; Luukinen, B; Henning, C; Assimon, SA; LaKind, JS; McLanahan,
ED; Phillips, LJ; Davis, MH; Powers, CM; Hines, EP; Haddad, S; Longnecker, MP;
Poulsen, MT; Farrer, DG; Marchitti, SA; Tan, YM; Swartout, JC; Sagiv, SK; Welsh, C;
Campbell, JL, Jr.; Foster, WG; Yang, RS; Fenton, SE; Tornero-Velez, R; Francis, BM;
Barnett, JB; El-Masri, HA; Simmons, JE. (2014). Improving the Risk Assessment of
Lipophilic Persistent Environmental Chemicals in Breast Milk. Critical Reviews in
Toxicology 44: 600-617.
Lin, LC; Wang, SL; Chang, YC; Huang, PC; Cheng, JT; Su, PH; Liao, PC. (2011). Associations
between Maternal Phthalate Exposure and Cord Sex Hormones in Human Infants.
Chemosphere 83: 1192-1199.
Lioy, P; Lebret, E; Spengler, J; Brauer, M; Buckley, T; Freeman, N; Jantunen, M; Kissel, J;
Lebowitz, M; Maroni, M; Moschandreas, D; Nieuwenhuijsen, M; Seifert, B; Zmirou-
Navier, D. (2005). Defining Exposure Science. Journal of Exposure Analysis and
Environmental Epidemiology 15: 463.
Lipkus, IM. (2007). Numeric, Verbal, and Visual Formats of Conveying Health Risks: Suggested
Best Practices and Future Recommendations. Medical Decision Making 27: 696-713.
Lobdell, DT; Isakov, V; Baxter, L; Touma, JS; Smuts, MB; Ozkaynak, H. (2011). Feasibility of
Assessing Public Health Impacts of Air Pollution Reduction Programs on a Local Scale:
New Haven Case Study. Environmental Health Perspectives 119: 487-493.
Loccisano, AE; Longnecker, MP; Campbell, JL, Jr.; Andersen, ME; Clewell, HJ, 3rd. (2013).
Development of PBPK Models for PFOA and PFOS for Human Pregnancy and Lactation
Life Stages. Journal of Toxicology and Environmental Health Part A 76: 25-57.
Lopez, R. (2002). Segregation and Black/White Differences in Exposure to Air Toxics in 1990.
Environmental Health Perspectives 110: S289-S295.
Lopez, SR. (2003). Reflections on the Surgeon General's Report on Mental Health, Culture,
Race, and Ethnicity. Culture, Medicine and Psychiatry 27: 419-434.
Lorber, M. (2007). Exposure of Americans to Polybrominated Diphenyl Ethers. Journal of
Exposure Science and Environmental Epidemiology 18: 2-19.
Lorber, M; Angerer, J; Koch, HM. (2010). A Simple Pharmacokinetic Model to Characterize
Exposure of Americans to Di-2-ethylhexyl Phthalate. Journal of Exposure Science and
Environmental Epidemiology 20: 38-53.
Lorber, M; Patterson, D; Huwe, J; Kahn, H. (2009). Evaluation of Background Exposures of
Americans to Dioxin-Like Compounds in the 1990s and the 2000s. Chemosphere 77:
640-651.
Luecke, RH; Pearce, BA; Wosilait, WD; Slikker, W, Jr.; Young, JF. (2007). Postnatal Growth
Considerations for PBPK Modeling. Journal of Toxicology and Environmental Health
Part A 70: 1027-1037.
Lyons, MA; Yang, RSH; Mayeno, AN; Reisfeld, B. (2008). Computational Toxicology of
Chloroform: Reverse Dosimetry Using Bayesian Inference, Markov Chain Monte Carlo
Simulation, and Human Biomonitoring Data. Environmental Health Perspectives 116:
1040-1046.
Macintosh, DL; Xue, J; Ozkaynak, H; Spengler, JD; Ryan, PB. (1995). A Population-Based
Exposure Model for Benzene. Journal of Exposure Analysis and Environmental
Epidemiology 5: 375-403.
Mage, DT; Allen, RH; Gondy, G; Smith, W; Barr, DB; Needham, LL. (2004). Estimating
Pesticide Dose from Urinary Pesticide Concentration Data by Creatinine Correction in
the Third National Health and Nutrition Examination Survey (NHANES-III). Journal of
Exposure Analysis and Environmental Epidemiology 14: 457-465.
Mage, DT; Allen, RH; Kodali, A. (2008). Creatinine Corrections for Estimating Children's and
Adult's Pesticide Intake Doses in Equilibrium with Urinary Pesticide and Creatinine
Concentrations. Journal of Exposure Science and Environmental Epidemiology 18: 360-
368.
Mammel, KA; Kaplan, DW. (1995). Research Consent by Adolescent Minors and Institutional
Review Boards. Journal of Adolescent Health 17: 323-330.
Marshall, MN. (1996). Sampling for Qualitative Research. Family Practice 13: 522-525.
Maslia, ML; Aral, MM. (2004). Analytical Contaminant Transport Analysis System (ACTS)—
Multimedia Environmental Fate and Transport. Practice Periodical of Hazardous, Toxic,
and Radioactive Waste Management 8: 181-198.
Mastrandrea, MD; Field, CB; Stocker, TF; Edenhofer, O; Ebi, KL; Frame, DJ; Held, H; Kriegler,
E; Mach, KJ; Matschoss, PR; Plattner, G-K; Yohe, GW; Zwiers, FW. (2010). Guidance
Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of
Uncertainties. Intergovernmental Panel on Climate Change (IPCC).
https://archive.ipcc.ch/pdf/supporting-material/uncertainty-guidance-note.pdf.
Mattison, DR. (2010). Environmental Exposures and Development. Current Opinion in
Pediatrics 22: 208-218.
McKellar, JM; Sleep, S; Bergerson, JA; MacLean, HL. (2017). Expectations and Drivers of
Future Greenhouse Gas Emissions from Canada's Oil Sands: An Expert Elicitation.
Energy Policy 100: 162-169.
McKelvey, W; Gwynn, RC; Jeffery, N; Kass, D; Thorpe, LE; Garg, RK; Palmer, CD; Parsons,
PJ. (2007). A Biomonitoring Study of Lead, Cadmium, and Mercury in the Blood of New
York City Adults. Environmental Health Perspectives 115: 1435-1441.
McLanahan, ED; White, P; Flowers, L; Schlosser, PM. (2014). The Use of PBPK Models to
Inform Human Health Risk Assessment: Case Study on Perchlorate and Radioiodide
Human Lifestage Models. Risk Analysis 34: 356-366.
Metcalf, SW; Orloff, KG. (2004). Biomarkers of Exposure in Community Settings. Journal of
Toxicology and Environmental Health Part A 67: 715-726.
Miller, VA; Drotar, D; Kodish, E. (2004). Children's Competence for Assent and Consent: A
Review of Empirical Findings. Ethics and Behavior 14: 255-295.
Mills, P; Braun, L; Marohl, D. (2001). Comparison of EPA's QMS to SEI's CMMI. Quality
Assurance 9: 165-171.
Mokhtari, A; Frey, HC; Zheng, J. (2006). Evaluation and Recommendation of Sensitivity
Analysis Methods for Application to Stochastic Human Exposure and Dose Simulation
Models. Journal of Exposure Science and Environmental Epidemiology 16: 491-506.
Morello-Frosch, R; Jesdale, BM. (2006). Separate and Unequal: Residential Segregation and
Estimated Cancer Risks Associated with Ambient Air Toxics in U.S. Metropolitan Areas.
Environmental Health Perspectives 114: 386-393.
Morello-Frosch, R; Lopez, R. (2006). The Riskscape and the Color Line: Examining the Role of
Segregation in Environmental Health Disparities. Environmental Research 102: 181-196.
Morello-Frosch, R; Pastor Jr., M; Porras, C; Sadd, J. (2002). Environmental Justice and Regional
Inequality in Southern California: Implications for Future Research. Environmental
Health Perspectives 110: S149-S154.
Moreno, JD; Sisti, D. (2015). Biomedical Research Ethics: Landmark Cases, Scandals, and
Conceptual Shifts. In JD Arras; E Fenton; R Kukla (Eds.), The Routledge Companion to
Bioethics (pp. 185-199). New York, NY: Routledge Press.
Morgan, MG; Henrion, M; Small, M. (1990). Uncertainty: A Guide to Dealing with Uncertainty
in Quantitative Risk and Policy Analysis. New York, NY: Cambridge University Press.
Morgan, MK; Sheldon, LS; Croghan, CW; Jones, PA; Robertson, GL; Chuang, JC; Wilson, NK;
Lyu, CW. (2005). Exposures of Preschool Children to Chlorpyrifos and Its Degradation
Product 3,5,6-Trichloro-2-pyridinol in their Everyday Environments. Journal of Exposure
Analysis and Environmental Epidemiology 15: 297-309.
Morgan, MK; Sheldon, LS; Thomas, KW; Egeghy, PP; Croghan, CW; Jones, PA; Chuang, JC;
Wilson, NK. (2008). Adult and Children's Exposure to 2,4-D from Multiple Sources and
Pathways. Journal of Exposure Science and Environmental Epidemiology 18: 486-494.
Morgan, WJ; Martinez, FD. (1992). Risk Factors for Developing Wheezing and Asthma in
Childhood. Pediatric Clinics of North America 39: 1185-1203.
Morgenstern, H; Thomas, D. (1993). Principles of Study Design in Environmental
Epidemiology. Environmental Health Perspectives 101: S23-S38.
Moya, J; Bearer, CF; Etzel, RA. (2004). Children's Behavior and Physiology and How it Affects
Exposure to Environmental Contaminants. Pediatrics 113: S996-S1006.
NAS (National Academies of Sciences, Engineering, and Medicine). (2017). Communicating
Science Effectively: A Research Agenda. Washington, D.C.: The National Academies
Press.
Ndebele, P. (2013). The Declaration of Helsinki, 50 Years Later. Journal of the American
Medical Association 310: 2145-2146.
Needham, LL; Sexton, K. (2000). Assessing Children's Exposure to Hazardous Environmental
Chemicals: An Overview of Selected Research Challenges and Complexities. Journal of
Exposure Analysis and Environmental Epidemiology 10: 611-629.
NEJAC (National Environmental Justice Advisory Council). (2004). Ensuring Risk Reduction in
Communities with Multiple Stressors: Environmental Justice and Cumulative
Risks/Impacts. Washington, D.C.: National Environmental Justice Advisory Council.
https://www.epa.gov/sites/production/files/2015-04/documents/ensuringriskreducationnejac.pdf.
Neubig, RR. (1990). The Time Course of Drug Action. In WB Pratt; P Taylor (Eds.), Principles
of Drug Action: The Basis of Pharmacology, Third Edition (pp. 297-364). New York,
NY: Churchill Livingstone Inc.
Nieuwenhuijsen, MJ. (2015). Exposure Assessment in Environmental Epidemiology. Oxford,
U.K.: Oxford University Press.
North, DW. (1997). Risk Characterization: A Bridge to Informed Decision Making. Fundamental
and Applied Toxicology 39: 81-88.
Northridge, ME, (Ed.). (2011). December Special Issue. American Journal of Public Health 101:
S5-S364.
NPS (National Park Service). (1999). Preservation on the Reservation [And Beyond]. U.S.
Department of the Interior. https://www.nps.gov/archeology/cg/fa_1999/Subsist.htm.
NRC (National Research Council). (1983). Risk Assessment in the Federal Government:
Managing the Process. Washington, D.C.: The National Academies Press.
http://www.nap.edu/openbook.php?isbn=0309033497.
NRC. (1989a). Biologic Markers in Pulmonary Toxicology. Washington, D.C.: The National Academies Press. http://www.nap.edu/catalog.php?record_id=1216.
NRC. (1989b). Improving Risk Communication. Washington, D.C.: The National Academies Press. http://www.nap.edu/openbook.php?isbn=0309039436.
NRC. (1993). Pesticides in the Diets of Infants and Children. Washington, D.C.: The National Academies Press. http://www.nap.edu/openbook.php?isbn=0309048753.
NRC. (1994). Science and Judgment in Risk Assessment. Washington, D.C.: The National Academies Press. http://www.nap.edu/openbook.php?isbn=030904894X.
NRC. (1996). Understanding Risk: Informing Decisions in a Democratic Society. Washington, D.C.: The National Academies Press. http://www.nap.edu/openbook.php?isbn=030905396X.
NRC. (1997). Use of the Gray Literature and Other Data in Environmental Epidemiology. Volume 2 of Environmental Epidemiology. Washington, D.C.: The National Academies Press. http://www.nap.edu/catalog.php?record_id=5804.
NRC. (2006a). Health Risks from Dioxin and Related Compounds: Evaluation of the EPA Reassessment. Washington, D.C.: The National Academies Press. http://www.nap.edu/catalog.php?record_id=11688.
NRC. (2006b). Human Biomonitoring for Environmental Chemicals. Washington, D.C.: The National Academies Press. http://www.nap.edu/catalog.php?record_id=11700.
NRC. (2007). Models in Environmental Regulatory Decision Making. Washington, D.C.: The
National Academies Press. http://www.nap.edu/catalog.php?record_id=11972.
NRC. (2009). Science and Decisions: Advancing Risk Assessment. Washington, D.C.: The
National Academies Press, http://www.nap.edu/catalog.php?record_id=12209.
NRC. (2012a). Exposure Science in the 21st Century: A Vision and a Strategy. Washington,
D.C.: The National Academies Press. http://www.nap.edu/catalog/13507/exposure-science-in-the-21st-century-a-vision-and-a.
NRC. (2012b). Science for Environmental Protection: The Road Ahead. Washington, D.C.: The
National Academies Press, https://www.nap.edu/catalog/13510/science-for-
environmental-protection-the-road-ahead.
NRC and IOM. (2005). Ethical Considerations for Research on Housing-Related Health Hazards
Involving Children. B Lo; ME O'Connell (Eds.). Washington, D.C.: The National
Academies Press. http://books.nap.edu/catalog.php?record_id=11450.
Oberdorster, G; Stone, V; Donaldson, K. (2007). Toxicology of Nanoparticles: A Historical
Perspective. Nanotoxicology 1: 2-25.
Olin, SS; Sonawane, BR. (2003). Workshop to Develop a Framework for Assessing Risks to
Children from Exposure to Environmental Agents. Environmental Health Perspectives
111: 1524-1526.
OMB (Office of Management and Budget). (2006). Questions and Answers When Designing
Surveys for Information Collections. Washington, D.C.: Office of Information and
Regulatory Affairs, OMB.
https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/pmc_survey_guida
nce_2006.pdf.
OSTP (Office of Science and Technology Policy). (2013). Memorandum for the Heads of
Executive Departments and Agencies. Washington, DC: Office of Science and
Technology Policy.
Ozkaynak, H. (2009). Iterative Use of Models and Measurements to Develop Scientific
Understanding. National Research Council (NRC) Workshop on Exposure Science in the
21st Century, June 18-19, Washington, D.C.
http://cfpub.epa.gov/si/si_public_record_Report.cfm?dirEntryId=217025&CFID=17959795&CFTOKEN=53822540.
Ozkaynak, H; Frey, HC; Burke, J; Pinder, RW. (2009). Analysis of Coupled Model Uncertainties
in Source-to-Dose Modeling of Human Exposures to Ambient Air Pollution: A PM2.5
Case Study. Atmospheric Environment 43: 1641-1649.
Ozkaynak, H; Frey, HC; Hubbell, B. (2008). Characterizing Variability and Uncertainty in
Exposure Assessments Improves Links to Environmental Decision Making. Air and
Waste Management Association Magazine for Environmental Managers, Air and Waste
Management Association, 16-20, Pittsburgh, PA.
Ozkaynak, H; Wyatt, RM; Needham, LL; Akland, G; Quackenboss, J. (2005). Exposure
Assessment Implications for the Design and Implementation of the National Children's
Study. Environmental Health Perspectives 113: 1108-1115.
Ozkaynak, H; Zartarian, V; Greim, H; Yu, H. (2011). Collaborative Project on Exposure
Assessment. The 2nd International Conference on Risk Assessment, January 26-28,
Brussels, Belgium.
Parkin, RT. (2004). Communications with Research Participants and Communities: Foundations
for Best Practices. Journal of Exposure Analysis and Environmental Epidemiology 14:
516-523.
Paustenbach, D; Galbraith, D. (2006). Biomonitoring and Biomarkers: Exposure Assessment
Will Never Be the Same. Environmental Health Perspectives 114: 1143-1149.
Paustenbach, DJ. (1985). Occupational Exposure Limits, Pharmacokinetics, and Unusual Work
Schedules. In LJ Cralley; LV Cralley (Eds.), Patty's Industrial Hygiene and Toxicology,
Volume IIIA (pp. 111-277). New York, NY: John Wiley and Sons.
Payne-Sturges, DC; Schwab, M; Buckley, TJ. (2004). Closing the Research Loop: A Risk-Based
Approach for Communicating Results of Air Pollution Exposure Studies. Environmental
Health Perspectives 112: 28-34.
PCCRARM (Presidential/Congressional Commission on Risk Assessment and Risk
Management). (1997). Report Information. PCCRARM.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=55006.
Pellizzari, E; Lioy, P; Quackenboss, J; Whitmore, R; Clayton, A; Freeman, N; Waldman, J;
Thomas, K; Rodes, C; Wilcosky, T. (1995). Population-Based Exposure Measurements
in EPA Region V: A Phase I Field Study in Support of the National Human Exposure
Assessment Survey. Journal of Exposure Analysis and Environmental Epidemiology 5:
327-358.
Perera, FP; Herbstman, J. (2011). Prenatal Environmental Exposures, Epigenetics, and Disease.
Reproductive Toxicology 31: 363-373.
Perera, FP; Rauh, V; Whyatt, RM; Tsai, WY; Tang, D; Diaz, D; Hoepner, L; Barr, D; Tu, YH;
Camann, D; Kinney, P. (2006). Effect of Prenatal Exposure to Airborne Polycyclic
Aromatic Hydrocarbons on Neurodevelopment in the First 3 Years of Life among Inner-
City Children. Environmental Health Perspectives 114: 1287-1292.
Phillips, MB; Sobus, JR; George, BJ; Isaacs, K; Conolly, R; Tan, YM. (2014). A New Method
for Generating Distributions of Biomonitoring Equivalents to Support Exposure
Assessment and Prioritization. Regulatory Toxicology and Pharmacology 69: 434-442.
Pleil, JD; Sheldon, LS. (2011). Adapting Concepts from Systems Biology to Develop Systems
Exposure Event Networks for Exposure Science Research. Biomarkers 16: 99-105.
Ponsonby, AL; Symeonides, C; Vuillermin, P; Mueller, J; Sly, PD; Saffery, R. (2016).
Epigenetic Regulation of Neurodevelopmental Genes in Response to In Utero Exposure
to Phthalate Plastic Chemicals: How Can We Delineate Causal Effects? Neurotoxicology
55: 92-101.
Presidential Commission for the Study of Bioethical Issues. (2011a). "Ethically Impossible."
STD Research in Guatemala from 1946-1948.
https://bioethicsarchive.georgetown.edu/pcsbi/node/654.html.
Presidential Commission for the Study of Bioethical Issues. (2011b). Moral Science: Protecting
Participants in Human Subjects Research.
https://bioethicsarchive.georgetown.edu/pcsbi/node/558.html.
Price, PS; Curry, CL; Goodrum, PE; Gray, MN; McCrodden, JI; Harrington, NW; Carlson-
Lynch, H; Keenan, RE. (1996). Monte Carlo Modeling of Time-Dependent Exposures
Using a Microexposure Event Approach. Risk Analysis 16: 339-348.
Price, PS; Young, JS; Chaisson, CF. (2001). Assessing Aggregate and Cumulative Pesticide
Risks Using a Probabilistic Model. Annals of Occupational Hygiene 45(Suppl 1): S131-
S142.
Quackenboss, JJ; Pellizzari, ED; Shubat, P; Whitmore, RW; Adgate, JL; Thomas, KW; Freeman,
NC; Stroebel, C; Lioy, PJ; Clayton, AC; Sexton, K. (2000). Design Strategy for
Assessing Multi-Pathway Exposure for Children: The Minnesota Children's Pesticide
Exposure Study (MNCPES). Journal of Exposure Analysis and Environmental
Epidemiology 10: 145-158.
Quandt, SA; Doran, AM; Rao, P; Hoppin, JA; Snively, BM; Arcury, TA. (2004). Reporting
Pesticide Assessment Results to Farmworker Families: Development, Implementation,
and Evaluation of a Risk Communication Strategy. Environmental Health Perspectives
112: 636-642.
Renn, O. (1986). Decision Analytic Tools for Resolving Uncertainty in the Energy Debate.
Nuclear Engineering and Design 93: 167-179.
Resnik, DB. (2012). Environmental Health Ethics. New York, NY: Cambridge University Press.
Reverby, SB. (2009). Examining Tuskegee: The Infamous Syphilis Study and Its Legacy. Chapel
Hill, NC: University of North Carolina Press.
Rice, C; Birnbaum, LS; Cogliano, J; Mahaffey, K; Needham, L; Rogan, WJ; vom Saal, FS.
(2003). Exposure Assessment for Endocrine Disruptors: Some Considerations in the
Design of Studies. Environmental Health Perspectives 111: 1683-1690.
Rippin, G. (2001). Design Issues and Sample Size when Exposure Measurement Is Inaccurate.
Methods of Information in Medicine 40: 137-140.
Rothenberg, SE; Feng, X; Li, P. (2011). Low-Level Maternal Methylmercury Exposure through
Rice Ingestion and Potential Implications for Offspring Health. Environmental Pollution
159: 1017-1022.
Russell, ML; Moralejo, DG; Burgess, ED. (2000). Paying Research Subjects: Participants'
Perspectives. Journal of Medical Ethics 26: 126-130.
SAB (Science Advisory Board). (2000). Toward Integrated Environmental Decision-Making.
(EPA/SAB/EC/00-011). Washington, D.C.: SAB, U.S. EPA.
http://yosemite.epa.gov/sab/sabproduct.nsf/D33811633594B9D78525719B00656478/$File/ecirp011.pdf.
Salganik, MJ. (2006). Variance Estimation, Design Effects, and Sample Size Calculations for
Respondent-Driven Sampling. Journal of Urban Health 83: S98-112.
Saltelli, A; Tarantola, S; Campolongo, F; Ratto, M. (2004). Sensitivity Analysis in Practice: A
Guide to Assessing Scientific Models. New York, NY: Wiley.
Sarewitz, D. (2004). How Science Makes Environmental Controversies Worse. Environmental
Science & Policy 7: 385-403.
Schell, LM; Hubicki, LA; DeCaprio, AP; Gallo, MV; Ravenscroft, J; Tarbell, A; Jacobs, A;
David, D; Worswick, P; Akwesasne Task Force on the Environment. (2003).
Organochlorines, Lead, and Mercury in Akwesasne Mohawk Youth. Environmental
Health Perspectives 111: 954-961.
Schulte, PA; Singal, M. (1989). Interpretation and Communication of the Results of Medical
Field Investigations. Journal of Occupational Medicine 31: 589-594.
Sechena, R; Liao, S; Lorenzana, R; Nakano, C; Polissar, N; Fenske, R. (2003). Asian American
and Pacific Islander Seafood Consumption - A Community-Based Study in King County
Washington. Journal of Exposure Analysis and Environmental Epidemiology 13: 256-
266.
Sexton, K; Adgate, JL; Church, TR; Greaves, IA; Ramachandran, G; Fredrickson, AL; Geisser,
MS; Ryan, AD. (2003). Recruitment, Retention, and Compliance Results from a
Probability Study of Children's Environmental Health in Economically Disadvantaged
Neighborhoods. Environmental Health Perspectives 111: 731-736.
Sexton, K; Wagener, DK; Selevan, SG; Miller, TO; Lybarger, JA. (1994). An Inventory of
Human Exposure-Related Data Bases. Journal of Exposure Analysis and Environmental
Epidemiology 4: 95-109.
Sharlin, HI. (1986). EDB: A Case Study in Communicating Risk. Risk Analysis 6: 61-68.
Sheldon, LS. (2010). Chapter 42. Exposure Framework. In R Krieger (Ed.), Hayes' Handbook of
Pesticide Toxicology, Third Edition, Volume 1 (pp. 971-976). Waltham, MA: Academic
Press.
Sheldon, LS; Cohen Hubal, EA. (2009). Exposure as Part of a Systems Approach for Assessing
Risk. Environmental Health Perspectives 117: 1181-1194.
Shin, HM; Ernstoff, A; Arnot, JA; Wetmore, BA; Csiszar, SA; Fantke, P; Zhang, X; McKone,
TE; Jolliet, O; Bennett, DH. (2015). Risk-Based High-Throughput Chemical Screening
and Prioritization using Exposure Models and in Vitro Bioactivity Assays. Environmental
Science & Technology 49: 6760-6771.
Simon, TW. (1999). Two-Dimensional Monte Carlo Simulation and Beyond: A Comparison of
Several Probabilistic Risk Assessment Methods Applied to a Superfund Site. Human and
Ecological Risk Assessment: An International Journal 5: 823-843.
Slovic, P. (1986). Informing and Educating the Public about Risk. Risk Analysis 6: 403-415.
Slovic, P; Fischhoff, B; Lichtenstein, S. (1979). Rating the Risks. Environment: Science and
Policy for Sustainable Development 21: 14-39.
Sobus, J; Morgan, MK; Pleil, JD; Barr, DB. (2010). Chapter 45. Biomonitoring Uses and
Considerations for Assessing Human Exposures to Pesticides. In R Krieger (Ed.), Hayes'
Handbook of Pesticide Toxicology, Third Edition, Volume 1 (pp. 1021-1036). Waltham,
MA: Academic Press.
Sohn, MD; McKone, TE; Blancato, JN. (2004). Reconstructing Population Exposures from Dose
Biomarkers: Inhalation of Trichloroethylene (TCE) as a Case Study. Journal of Exposure
Analysis and Environmental Epidemiology 14: 204-213.
Spiegelhalter, D; Pearson, M; Short, I. (2011). Visualizing Uncertainty about the Future. Science
333: 1393-1400.
St-Amand, A; Werry, K; Aylward, LL; Hays, SM; Nong, A. (2014). Screening of Population
Level Biomonitoring Data from the Canadian Health Measures Survey in a Risk-Based
Context. Toxicology Letters 231: 126-134.
Stahl, CH. (2014). Out of the Land of Oz: The Importance of Tackling Wicked Environmental
Problems without Taming Them. Environment Systems and Decisions 34: 473-477.
Stahl, CH; Cimorelli, AJ. (2005). How Much Uncertainty Is Too Much and How Do We Know?
A Case Example of the Assessment of Ozone Monitor Network Options. Risk Analysis
25: 1109-1120.
Stapleton, HM; Misenheimer, J; Hoffman, K; Webster, TF. (2014). Flame Retardant
Associations between Children's Handwipes and House Dust. Chemosphere 116: 54-60.
Stirling, A. (2010). Keep it Complex. Nature 468: 1029-1031.
Tan, C; Liao, K; Clewell, H. (2005). Physiologically Based Pharmacokinetic Modeling as a Tool
to Interpret Human Biomonitoring Data. CIIT Activities 25: 1-8.
Tan, YM; Liao, KH; Clewell III, HJ. (2007). Reverse Dosimetry: Interpreting Trihalomethanes
Biomonitoring Data Using Physiologically Based Pharmacokinetic Modeling. Journal of
Exposure Science and Environmental Epidemiology 17: 591-603.
Thomas, RS; Philbert, MA; Auerbach, SS; Wetmore, BA; Devito, MJ; Cote, I; Rowlands, JC;
Whelan, MP; Hays, SM; Andersen, ME; Meek, ME; Reiter, LW; Lambert, JC; Clewell
3rd, HJ; Stephens, ML; Zhao, QJ; Wesselkamper, SC; Flowers, L; Carney, EW; Pastoor,
TP; Petersen, DD; Yauk, CL; Nong, A. (2013). Incorporating New Technologies into
Toxicity Testing and Risk Assessment: Moving From 21st Century Vision to a Data-
Driven Framework. Toxicological Sciences 136: 4-18.
Thompson, KM; Bloom, DL. (2000). Communication of Risk Assessment Information to Risk
Managers. Journal of Risk Research 3: 333-352.
Tornero-Velez, R; Davis, J; Xue, J; Setzer, RW. (2010). Physiologically-Based Pharmacokinetic
Models of Pyrethroids: Bayesian Calibration and their Use in Interpreting Probabilistic
Exposure Data. Research Triangle Park, NC: National Exposure Research Laboratory,
National Center for Computational Toxicology, Office of Research and Development,
U.S. EPA. http://www.regulations.gov/#!documentDetail;D=EPA-HQ-OPP-2010-0383-0014.
Touma, JS; Isakov, V; Ching, J; Seigneur, C. (2006). Air Quality Modeling of Hazardous
Pollutants: Current Status and Future Directions. Journal of the Air and Waste
Management Association 56: 547-558.
Tufte, ER. (2001). The Visual Display of Quantitative Information. 2nd Edition. Cheshire, CT:
Graphics Press.
Tulve, NS; Egeghy, PP; Fortmann, RC; Xue, J; Evans, J; Whitaker, DA; Croghan, CW. (2011).
Methodologies for Estimating Cumulative Human Exposures to Current-Use Pyrethroid
Pesticides. Journal of Exposure Science and Environmental Epidemiology 21: 317-327.
U.S. EPA (Environmental Protection Agency). (1984). EPA Policy for the Administration of
Environmental Programs on Indian Reservations. Signed by Administrator William D.
Ruckelshaus, November 8. Washington, D.C.: U.S. EPA.
https://www.epa.gov/sites/production/files/2015-04/documents/indian-policy-84.pdf.
U.S. EPA. (1986a). Guidelines for Carcinogen Risk Assessment. (EPA/630/R-00/004. 51 Fed.
Reg. 185: 33992-34003, September 24). Washington, D.C.: Office of Health and
Environmental Assessment, Office of Research and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=54933.
U.S. EPA. (1986b). Guidelines for Estimating Exposures. (51 Fed. Reg. 34042-34054,
September 24). Washington, D.C.: Office of Health and Environmental Assessment,
Office of Research and Development, U.S. EPA.
U.S. EPA. (1986c). Guidelines for Mutagenicity Risk Assessment. (EPA/630/R-98/003. 51 Fed.
Reg. 34006-34013, September 24). Washington, D.C.: Office of Health and
Environmental Assessment, Office of Research and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=23160.
U.S. EPA. (1986d). Guidelines for the Health Assessment of Suspect Developmental Toxicants.
(51 Fed. Reg. 34028-34040, September 24). Washington, D.C.: Office of Health and
Environmental Assessment, Office of Research and Development, U.S. EPA.
U.S. EPA. (1986e). Guidelines for the Health Risk Assessment of Chemical Mixtures.
(EPA/630/R-98/002. 51 Fed. Reg. 34014-34013, September 24). Washington, D.C.:
Office of Health and Environmental Assessment, Office of Research and Development,
U.S. EPA. http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=22567.
U.S. EPA. (1987a). Selection Criteria for Mathematical Models Used in Exposure Assessments:
Surface Water Models. (EPA/600/8-87/042). Washington, D.C.: Office of Health and
Environmental Assessment, Office of Research and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=41961.
U.S. EPA. (1987b). The Total Exposure Assessment Methodology (TEAM) Study: Project
Summary. (EPA/600/S6-87/002). Washington, D.C.: Office of Acid Deposition,
Environmental Modeling and Quality Assurance, U.S. EPA.
http://nepis.epa.gov/Adobe/PDF/2000TTY7.PDF.
U.S. EPA. (1987c). The Total Exposure Assessment Methodology (TEAM) Study: Summary
and Analysis. Volume 1. (EPA/600/6-87/002a). Washington, D.C.: Office of Research
and Development, U.S. EPA. L. A. Wallace, Project Manager.
http://nepis.epa.gov/Adobe/PDF/2000UC5T.PDF.
U.S. EPA. (1988a). Guidance for Conducting Remedial Investigations and Feasibility Studies
Under CERCLA. Interim Final. (EPA/540/G-89/004 Publication 9355.3-01).
Washington, D.C.: Office of Emergency and Remedial Response, U.S. EPA.
https://rais.ornl.gov/documents/GUIDANCE.PDF.
U.S. EPA. (1988b). Selection Criteria for Mathematical Models Used in Exposure Assessments:
Ground-Water Models. (EPA/600/8-88/075). Washington, D.C.: Office of Health and
Environmental Assessment, Office of Research and Development, U.S. EPA.
http://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=30001HMJ.TXT.
U.S. EPA. (1989a). Getting Ready: Scoping the RI/FS. (Publication 9355.3-901FS1).
Washington, D.C.: Office of Solid Waste and Emergency Response, U.S. EPA.
https://semspub.epa.gov/work/HQ/174409.pdf.
U.S. EPA. (1989b). Risk Assessment Guidance for Superfund. Volume I: Human Health
Evaluation Manual (Part A). Interim Final. (EPA/540/1-89/002). Washington, D.C.:
Office of Emergency and Remedial Response, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-09/documents/rags_a.pdf.
U.S. EPA. (1991a). Conducting Remedial Investigations/Feasibility Studies for CERCLA
Municipal Landfill Sites. (EPA/540/P-91/001). Washington, D.C.: Office of Emergency
and Remedial Response, U.S. EPA. https://semspub.epa.gov/work/HQ/175660.pdf.
U.S. EPA. (1991b). Risk Assessment Guidance for Superfund. Volume I: Human Health
Evaluation Manual (Part B, Development of Risk-Based Preliminary Remediation
Goals). Interim. (EPA/540/R-92/003 Publication 9285.7-01B, December). Washington,
D.C.: Office of Emergency and Remedial Response, U.S. EPA. https://epa-
prgs.ornl.gov/radionuclides/HHEMB.pdf.
U.S. EPA. (1991c). Risk Assessment Guidance for Superfund. Volume I: Human Health
Evaluation Manual (Part C, Risk Evaluation of Remedial Alternatives). Interim.
(Publication 9285.7-01C, October). Washington, D.C.: Office of Emergency and
Remedial Response, U.S. EPA. https://rais.ornl.gov/documents/HHEMC.pdf.
U.S. EPA. (1992a). Consumption Surveys for Fish and Shellfish: A Review and Analysis of
Survey Methods. (EPA/822/R-92/001, February). Washington, D.C.: Office of Water,
U.S. EPA. https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=20003KQE.TXT.
U.S. EPA. (1992b). Guidance for Data Useability in Risk Assessment (Part A). Final.
(Publication 9285.7-09A). Washington, D.C.: Office of Emergency and Remedial
Response, U.S. EPA. https://semspub.epa.gov/work/05/424356.pdf.
U.S. EPA. (1992c). Guidelines for Exposure Assessment. (EPA/600/Z-92/001. 57 Fed. Reg.
22888-22938, May 29). Washington, D.C.: Risk Assessment Forum, U.S. EPA.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=15263.
U.S. EPA. (1992d). Supplemental Guidance to RAGS: Calculating the Concentration Term.
(Publication 9285.7-081). Washington, D.C.: Office of Solid Waste and Emergency
Response, U.S. EPA. https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=9100UGVL.TXT.
U.S. EPA. (1994a). Guidance Manual for the Integrated Exposure Uptake Biokinetic Model for
Lead in Children. (Publication 9285.7-15-1). Washington, D.C.: Office of Solid Waste
and Emergency Response, U.S. EPA. https://www.epa.gov/superfund/lead-superfund-
sites-software-and-users-manuals#guidance.
U.S. EPA. (1994b). Methods for Derivation of Inhalation Reference Concentrations (RfCs) and
Application of Inhalation Dosimetry. (EPA/600/8-90/066F). Washington, D.C.:
Environmental Criteria and Assessment Office, Office of Health and Environmental
Assessment, Office of Research and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=71993.
U.S. EPA. (1995a). Appendix A: Policy for Risk Characterization at the U.S. Environmental
Protection Agency. In Risk Characterization Handbook. (EPA/100/B-00/002).
Washington, D.C.: Science Policy Council, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-
10/documents/osp_risk_characterization_handbook_2000.pdf.
U.S. EPA. (1995b). New Policy on Evaluating Health Risks to Children. Signed by
Administrator Carol M. Browner and Deputy Administrator Fred Hansen, October 20.
Washington, D.C.: Office of the Administrator, U.S. EPA.
http://www2.epa.gov/sites/production/files/2014-
05/documents/health_policy_cover_memo.pdf.
U.S. EPA. (1996a). Community Advisory Groups: Partners in Decisions at Hazardous Waste
Sites: Case Studies. (EPA/540/R-96/043 Publication 9230.0-75). Washington, D.C.:
Office of Solid Waste and Emergency Response, Community Involvement and Outreach
Center, U.S. EPA. https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=1000281S.TXT.
U.S. EPA. (1996b). Environmental Health Threats to Children. (EPA/175/F-96/001).
Washington, D.C.: Office of the Administrator, U.S. EPA.
http://www2.epa.gov/sites/production/files/2014-
05/documents/national_agenda_to_protect_childrens_health_from_environmental_threats
.pdf.
U.S. EPA. (1996c). Food Quality Protection Act of 1996. (Public Law 104-170. Signed August
3, 110 STAT 1489). https://www.govinfo.gov/content/pkg/PLAW-104publ170/pdf/PLAW-104publ170.pdf.
U.S. EPA. (1996d). Soil Screening Guidance: User's Guide. Attachment A. Conceptual Site
Model Summary. (Publication 9355.4-23). Washington, D.C.: Office of Solid Waste and
Emergency Response, U.S. EPA. https://semspub.epa.gov/work/HQ/175226.pdf.
U.S. EPA. (1996e). Summary Report for the Workshop on Monte Carlo Analysis. (EPA/630/R-
96/010). Washington, D.C.: Risk Assessment Forum, Office of Research and
Development, U.S. EPA.
https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=30004YTZ.TXT.
U.S. EPA. (1997a). Guidance on Cumulative Risk Assessment. Part 1. Planning and Scoping.
Washington, D.C.: Science Policy Council, U.S. EPA.
http://www2.epa.gov/sites/production/files/2015-01/documents/cumrisk2_0.pdf.
U.S. EPA. (1997b). Guiding Principles for Monte Carlo Analysis. (EPA/630/R-97/001).
Washington, D.C.: Risk Assessment Forum, U.S. EPA.
http://www2.epa.gov/sites/production/files/2014-11/documents/montecar.pdf.
U.S. EPA. (1997c). Standard Operating Procedures (SOPs) for Residential Exposure
Assessments. Washington, D.C.: Office of Pesticide Programs, U.S. EPA.
https://archive.epa.gov/scipoly/sap/meetings/web/html/sopindex.html.
U.S. EPA. (1998). Guidance for Conducting Fish and Wildlife Consumption Surveys.
(EPA/823/B-98/007). Washington, D.C.: Office of Water, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-01/documents/guidance-fish-wildlife-
survey.pdf.
U.S. EPA. (1999a). Quality Assurance Project Plan Requirements for Secondary Data Research
Projects. Washington, D.C.: U.S. EPA. https://www.epa.gov/sites/production/files/2015-
07/documents/found-data-qapp-rqts.pdf.
U.S. EPA. (1999b). Report of the Workshop on Selecting Input Distributions for Probabilistic
Assessments. (EPA/630/R-98/004). Washington, D.C.: Risk Assessment Forum, Office
of Research and Development, U.S. EPA.
http://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=30004ZPJ.TXT.
U.S. EPA. (1999c). Sociodemographic Data Used for Identifying Potentially Highly Exposed
Populations. (EPA/600/R-99/060). Washington, D.C.: National Center for Environmental
Assessment, U.S. EPA. http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=22562.
U.S. EPA. (2000a). Assigning Values to Non-Detected/Non-Quantified Pesticide Residues in
Human Health Food Exposure Assessments. Washington, D.C.: Office of Pesticide
Programs, U.S. EPA. https://archive.epa.gov/pesticides/trac/web/pdf/trac3b012.pdf.
U.S. EPA. (2000b). Data Quality Objectives Process for Hazardous Waste Site Investigations:
EPA QA/G-4HW. (EPA/600/R-00/007). Washington, D.C.: Office of Environmental
Information, U.S. EPA.
https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=200132AN.TXT.
U.S. EPA. (2000c). EPA Quality Manual for Environmental Programs. (CIO 2105-P-01-0).
Washington, D.C.: Office of Environmental Information Quality Staff, U.S. EPA.
https://www.epa.gov/sites/production/files/2013-10/documents/2105p010.pdf.
U.S. EPA. (2000d). Guidance for Data Quality Assessment. Practical Methods for Data
Analysis: EPA QA/G-9. QA00 Update. (EPA/600/R-96/084). Washington, D.C.: Office
of Environmental Information, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-06/documents/g9-final.pdf.
U.S. EPA. (2000e). Policy and Program Requirements for the Mandatory Agency-Wide Quality
System. (EPA Order CIO 2105.0). Washington, D.C.: Office of Environmental
Information, U.S. EPA. https://www.epa.gov/sites/production/files/2013-
10/documents/21050.pdf.
U.S. EPA. (2000f). Presenter's Manual for "Superfund Risk Assessment and How You Can
Help." A40-Minute Videotape. (EPA/540/R-99/013 Publication 9285.7-29).
Washington, D.C.: Office of Solid Waste and Emergency Response, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-11/documents/vdmanual.pdf.
U.S. EPA. (2000g). Risk Characterization Handbook. (EPA/100/B-00/002). Washington, D.C.:
Office of Science Policy, Office of Research and Development, U.S. EPA.
http://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=40000006.TXT.
U.S. EPA. (2000h). Summary Report for the Workshop on Issues Associated with Dermal
Exposure and Uptake. (EPA/630/R-00/003). Washington, D.C.: Risk Assessment Forum,
U.S. EPA. http://cfpub.epa.gov/ncea/raf/wrkshpderm.htm.
U.S. EPA. (2001a). Asian American and Pacific Islander Initiative Outreach Strategy.
(EPA/202/K-01/003). Washington, D.C.: Office of Administration and Resource
Management, U.S. EPA.
https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=40000BSF.TXT.
U.S. EPA. (2001b). Draft Protocol for Measuring Children's Non-Occupational Exposure to
Pesticides by all Relevant Pathways. (EPA/600/R-03/026). Research Triangle Park, NC:
National Exposure Research Laboratory, Office of Research and Development, U.S.
EPA. http://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=10004SF1.TXT.
U.S. EPA. (2001c). EPA Requirements for Quality Assurance Project Plans: EPA QA/R-5.
(EPA/240/B-01/003). Washington, D.C.: Office of Environmental Information, U.S.
EPA. https://www.epa.gov/sites/production/files/2016-06/documents/r5-final_0.pdf.
U.S. EPA. (2001d). EPA Requirements for Quality Management Plans: EPA QA/R-2.
(EPA/240/B-01/002). Washington, D.C.: Office of Environmental Information, U.S.
EPA. https://www.epa.gov/sites/production/files/2016-06/documents/r2-final.pdf.
U.S. EPA. (2001e). Exploration of Perinatal Pharmacokinetic Issues. (EPA/630/R-01/004).
Washington, D.C.: U.S. EPA. https://www.epa.gov/sites/production/files/2014-
11/documents/perinatal_pharmacokinetic.pdf.
U.S. EPA. (2001f). General Principles for Performing Aggregate Exposure and Risk
Assessments. Washington, D.C.: Office of Pesticide Programs, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-07/documents/aggregate.pdf.
U.S. EPA. (2001g). Risk Assessment Guidance for Superfund. Volume I: Human Health
Evaluation Manual (Part D, Standardized Planning, Reporting and Review of Superfund
Risk Assessments). Final. (Publication 9285.7-47). Washington, D.C.: Office of
Emergency and Remedial Response, U.S. EPA.
https://www.epa.gov/sites/production/files/2018-03/documents/175137.pdf.
U.S. EPA. (2001h). Risk Assessment Guidance for Superfund. Volume III: (Part A, Process for
Conducting Probabilistic Risk Assessment). (EPA/540/R-02/002 Publication 9285.7-45).
Washington, D.C.: Office of Emergency and Remedial Response, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-09/documents/rags3adt_complete.pdf.
U.S. EPA. (2001i). Stakeholder Involvement and Public Participation at the U.S. EPA: Lessons
Learned, Barriers, and Innovative Approaches. (EPA/100/R-00/040). Washington, D.C.:
Office of Policy, Economics, and Innovation, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-09/documents/stakeholder-involvement-
public-participation-at-epa.pdf.
U.S. EPA. (2002a). Calculating Upper Confidence Limits for Exposure Point Concentrations at
Hazardous Waste Sites. (Publication 9285.6-10). Washington, D.C.: Office of Emergency
and Remedial Response, U.S. EPA.
https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P100CYCE.TXT.
U.S. EPA. (2002b). Guidance for Comparing Background and Chemical Concentrations in Soil
for CERCLA Sites. (EPA/540/R-01/003 Publication 9285.7-41). Washington, D.C.:
Office of Emergency and Remedial Response, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-11/documents/background.pdf.
U.S. EPA. (2002c). Guidance for Quality Assurance Project Plans: EPA QA/G-5. (EPA/240/R-
02/009). Washington, D.C.: Office of Environmental Information, U.S. EPA.
https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=20011HPE.TXT.
U.S. EPA. (2002d). Guidance on Choosing a Sampling Design for Environmental Data
Collection for Use in Developing a Quality Assurance Project Plan: EPA QA/G-5S.
(EPA/240/R-02/005). Washington, D.C.: Office of Environmental Information, U.S.
EPA. https://www.epa.gov/sites/production/files/2015-06/documents/g5s-final.pdf.
U.S. EPA. (2002e). Guidance on Environmental Data Verification and Data Validation: EPA
QA/G-8. (EPA/240/R-02/004). Washington, D.C.: Office of Environmental Information,
U.S. EPA. https://www.epa.gov/sites/production/files/2015-06/documents/g8-final.pdf.
U.S. EPA. (2002f). Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility,
and Integrity of Information Disseminated by the Environmental Protection Agency.
(EPA/260/R-02/008). Washington, D.C.: Office of Environmental Information, U.S.
EPA. https://www.epa.gov/sites/production/files/2017-03/documents/epa-info-quality-
guidelines.pdf.
U.S. EPA. (2002g). Lessons Learned on Planning and Scoping for Environmental Risk
Assessments. Washington, D.C.: Planning and Scoping Workgroup, Science Policy
Council Steering Committee, U.S. EPA.
http://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P1008PP7.TXT.
U.S. EPA. (2002h). Overview of the EPA Quality System for Environmental Data and
Technology. (EPA/240/R-02/003). Washington, D.C.: U.S. EPA.
https://www.epa.gov/sites/production/files/2015-08/documents/overview-final.pdf.
U.S. EPA. (2002i). A Review of the Reference Dose and Reference Concentration Processes.
(EPA/630/P-02/002F). Washington, D.C.: Risk Assessment Forum, U.S. EPA.
http://www2.epa.gov/sites/production/files/2014-12/documents/rfd-final.pdf.
U.S. EPA. (2003a). Assessment Factors: A Summary of General Assessment Factors for
Evaluating the Quality of Scientific and Technical Information. (EPA/100/B-03/001).
Washington, D.C.: Assessment Factors Workgroup, Science Policy Council, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-01/documents/assess2.pdf.
U.S. EPA. (2003b). CSFII Analysis of Food Intake Distributions. (EPA/600/R-03/029).
Washington, D.C.: National Center for Environmental Assessment, Office of Research
and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=56610.
U.S. EPA. (2003c). Example Exposure Scenarios. (EPA/600/R-03/036). Washington, D.C.:
National Center for Environmental Assessment, U.S. EPA.
https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=300062M5.TXT.
U.S. EPA. (2003d). Framework for Cumulative Risk Assessment. (EPA/630/P-02/001F).
Washington, D.C.: Risk Assessment Forum, Office of Research and Development, U.S.
EPA. https://www.epa.gov/sites/production/files/2014-
11/documents/frmwrk_cum_risk_assmnt.pdf.
U.S. EPA. (2003e). Framework for Implementing EPA's Public Involvement Policy.
(EPA/233/F-03/001). Washington, D.C.: Office of Policy, Economics, and Innovation,
U.S. EPA. https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=40000PHY.TXT.
U.S. EPA. (2003f). National Human Exposure Assessment Survey (NHEXAS). Office of
Research and Development, U.S. EPA.
https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&TIMSType=&count=
10000&dirEntryId=18200&searchAll=&showCriteria=2&simpleSearch=0&startIndex=70001.
U.S. EPA. (2003g). Public Involvement Policy of the U.S. Environmental Protection Agency.
(EPA/233/B-03/002). Washington, D.C.: Office of Policy, Economics, and Innovation,
U.S. EPA. https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=40000P9G.TXT.
U.S. EPA. (2003h). Survey Management Handbook. (EPA/260/B-03/003). Washington, D.C.:
Office of Information Analysis and Access, U.S. EPA.
http://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P1005GNB.TXT.
U.S. EPA. (2004a). Air Toxics Risk Assessment Library, Volume 1: Technical Resource
Manual. Chapter 11. (EPA/453/K-04/001A). Research Triangle Park, NC: Office of Air
Quality Planning and Standards, U.S. EPA.
https://www.epa.gov/sites/production/files/2013-08/documents/volume_1_reflibrary.pdf.
U.S. EPA. (2004b). ChemSTEER-Chemical Screening Tool for Exposures and Environmental
Releases. Beta Version. Office of Pollution Prevention and Toxics, U.S. EPA.
https://www.epa.gov/tsca-screening-tools/chemsteer-chemical-screening-tool-exposures-
and-environmental-releases.
U.S. EPA. (2004c). Office of the Science Advisor Staff Paper. An Examination of EPA Risk
Assessment Principles and Practices. Staff Paper Prepared for the U.S. EPA by Members
of the Risk Assessment Task Force. (EPA/100/B-04/001). Washington, D.C.: Office of
the Science Advisor, U.S. EPA. https://semspub.epa.gov/work/10/500006305.pdf.
U.S. EPA. (2004d). Risk Assessment Guidance for Superfund. Volume I: Human Health
Evaluation Manual (Part E, Supplemental Guidance for Dermal Risk Assessment). Final.
(EPA/540/R-99/005 Publication 9285.7-02EP). Washington, D.C.: Office of Superfund
Remediation and Technology Innovation, U.S. EPA.
https://semspub.epa.gov/work/10/500011570.pdf.
U.S. EPA. (2005a). EPA Policy 2151.0: Privacy Policy. U.S. EPA.
http://www2.epa.gov/privacy/epa-policy-21510-privacy-policy.
U.S. EPA. (2005b). Example Exposure Assessment Scenarios Tool and Associated Report.
(EPA/600/R-03/036). Washington, D.C.: National Center for Environmental Assessment,
Office of Research and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=85843.
U.S. EPA. (2005c). Guidance on Selecting Age Groups for Monitoring and Assessing Childhood
Exposures to Environmental Contaminants. (EPA/630/P-03/003F). Washington, D.C.:
Risk Assessment Forum, Office of Research and Development, U.S. EPA.
http://www2.epa.gov/sites/production/files/2013-09/documents/agegroups.pdf.
U.S. EPA. (2005d). Guidelines for Carcinogen Risk Assessment. (EPA/630/P-03/001F).
Washington, D.C.: Risk Assessment Forum, Office of Research and Development, U.S.
EPA. https://www.epa.gov/sites/production/files/2013-
09/documents/cancer_guidelines_final_3-25-05.pdf.
U.S. EPA. (2005e). Human Health Risk Assessment Protocol for Hazardous Waste Combustion
Facilities. (EPA/530/R-05/006). Washington, D.C.: Office of Solid Waste and
Emergency Response, U.S. EPA.
http://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P10067PR.TXT.
U.S. EPA. (2005f). A Pilot Study of Children's Total Exposure to Persistent Pesticides and Other
Persistent Organic Pollutants (CTEPP). (EPA/600/R-04/193). Research Triangle Park,
NC: Human Exposure and Atmospheric Sciences Division, National Exposure Research
Laboratory, Office of Research and Development, U.S. EPA.
https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&dirEntryId=88702.
U.S. EPA. (2005g). Review of the National Ambient Air Quality Standards for Particulate
Matter: Policy Assessment of Scientific and Technical Information. OAQPS Staff Paper.
(EPA/452/R-05/005). Research Triangle Park, NC: Office of Air Quality Planning and
Standards, U.S. EPA.
http://www.epa.gov/ttn/naaqs/standards/pm/data/pmstaffpaper_20050630.pdf.
U.S. EPA. (2005h). Supplemental Guidance for Assessing Susceptibility from Early-Life
Exposure to Carcinogens. (EPA/630/R-03/003F). Washington, D.C.: Risk Assessment
Forum, Office of Research and Development, U.S. EPA.
http://www.epa.gov/ttnatw01/childrenssupplement_final.pdf.
U.S. EPA. (2005i). Uniform Federal Policy for Quality Assurance Project Plans. Evaluating,
Assessing, and Documenting Environmental Data Collection and Use Programs. Part 1:
UFP-QAPP Manual. (EPA/505/B-04/900A). Washington, D.C.: Intergovernmental Data
Quality Task Force, U.S. EPA.
https://www.epa.gov/sites/production/files/documents/ufp_qapp_vl_0305.pdf.
U.S. EPA. (2006a). Approaches for the Application of Physiologically Based Pharmacokinetic
(PBPK) Models and Supporting Data in Risk Assessment. (EPA/600/R-05/043F).
Washington, D.C.: National Center for Environmental Assessment, Office of Research
and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=157668.
U.S. EPA. (2006b). Consulting With Indian Tribal Governments at Superfund Sites: A
Beginner's Booklet. Washington, D.C.: Office of Research and Development, National
Center for Environmental Assessment, U.S. EPA.
https://semspub.epa.gov/work/HQ/175860.pdf.
U.S. EPA. (2006c). Data Quality Assessment: Statistical Methods for Practitioners: EPA QA/G-
9S. (EPA/240/B-06/003). Washington, D.C.: Office of Environmental Information, U.S.
EPA. https://www.epa.gov/sites/production/files/2015-08/documents/g9s-final.pdf.
U.S. EPA. (2006d). A Framework for Assessing Health Risk of Environmental Exposures to
Children. (EPA/600/R-05/093F). Washington, D.C.: National Center for Environmental
Assessment, Office of Research and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=158363.
U.S. EPA. (2006e). Guidance on Systematic Planning Using the Data Quality Objectives
Process: EPA QA/G-4. (EPA/240/B-06/001). Washington, D.C.: Office of Environmental
Information, U.S. EPA.
https://www.epa.gov/sites/production/files/documents/guidance_systematic_planning_dq
o_process.pdf.
U.S. EPA. (2006f). Guide to Considering Children's Health When Developing EPA Actions:
Implementing Executive Order 13045 and EPA's Policy on Evaluating Health Risks to
Children. Washington, D.C.: Office of Policy, Economics, and Innovation, U.S. EPA.
https://www.epa.gov/sites/production/files/2014-
05/documents/epaadpguidechildrenhealth.pdf.
U.S. EPA. (2006g). System Life Cycle Management Policy (EPA Order CIO 2121-P-03.0).
Washington, D.C.: Office of Environmental Information, U.S. EPA.
http://www2.epa.gov/sites/production/files/2013-11/documents/cio_2121-p-03.0.pdf.
U.S. EPA. (2007a). Amendments to Superfund Hazard Ranking System Guidance Incorporating
Native American Traditional Lifeways. (Publication 9200.0-66). Washington, D.C.:
Office of Solid Waste and Emergency Response, U.S. EPA.
https://semspub.epa.gov/work/HQ/175862.pdf.
U.S. EPA. (2007b). Better Assessment Science Integrating Point and Non-point Sources
(BASINS). Office of Water, U.S. EPA. https://www.epa.gov/ceam/better-assessment-
science-integrating-point-and-non-point-sources-basins.
U.S. EPA. (2007c). Concepts, Methods, and Data Sources for Cumulative Health Risk
Assessment of Multiple Chemicals, Exposures and Effects: A Resource Document.
(EPA/600/R-06/013F). Cincinnati, OH: National Center for Environmental Assessment,
Office of Research and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=190187.
U.S. EPA. (2007d). Dermal Exposure Assessment: A Summary of EPA Approaches.
(EPA/600/R-07/040F). Washington, D.C.: National Center for Environmental
Assessment, Office of Research and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=183584.
U.S. EPA. (2007e). Expert Elicitation White Paper. Office of the Science Advisor, U.S. EPA.
https://cfpub.epa.gov/si/si_public_record_Report.cfm?dirEntryId=155023&CFID=18993
341&CFTOKEN=32202511&jsessionid=3830500bdd7b8a21cc9286d331222623e186.
U.S. EPA. (2007f). Framework for Metals Risk Assessment. (EPA/120/R-07/001). Washington,
D.C.: Risk Assessment Forum, Office of the Science Advisor, U.S. EPA.
https://www.epa.gov/sites/production/files/2013-09/documents/metals-risk-assessment-
final.pdf.
U.S. EPA. (2007g). Guide for Measuring Compliance Assistance Outcomes. (EPA/300/B-
07/002). Washington, D.C.: Office of Enforcement and Compliance Assurance, U.S.
EPA. https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P10008LK.TXT.
U.S. EPA. (2007h). Review of Worker Exposure Assessment Methods. Minutes of the Federal
Insecticide, Fungicide, and Rodenticide Act Scientific Advisory Panel Meeting, January
9-12, 2007, SAP Minutes No. 2007-03, Washington, D.C.
https://archive.epa.gov/scipoly/sap/meetings/web/pdf/january2007finalmeetingminutes.p
df.
U.S. EPA. (2007i). Summary Report of a Peer Involvement Workshop on the Development of an
Exposure Factors Handbook for the Aging. Washington, D.C.: National Center for
Environmental Assessment, Office of Research and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/CFM/recordisplay.cfm?deid=171923.
U.S. EPA. (2007j). User's Guide for the Integrated Exposure Uptake Biokinetic Model for Lead
in Children (IEUBK) Windows®. EPA 9285.7-42. (EPA/540/K-01/005). Washington,
D.C.: Office of Superfund Remediation and Technology Innovation, U.S. EPA.
https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P1002RKA.TXT.
U.S. EPA. (2008a). Reregistration Eligibility Decision for Pentachlorophenol. (EPA/739/R-
08/008). Washington, D.C.: Office of Prevention, Pesticides and Toxic Substances, U.S.
EPA. https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P1002CL2.TXT.
U.S. EPA. (2008b). Reregistration Eligibility Decision for Triclosan. (EPA/739/R-08/009).
Washington, D.C.: Office of Prevention, Pesticides and Toxic Substances, U.S. EPA.
https://archive.epa.gov/pesticides/reregistration/web/pdf/2340red.pdf.
U.S. EPA. (2008c). Scientific and Ethical Approaches for Observational Exposure Studies.
(EPA/600/R-08/062). Research Triangle Park, NC: National Exposure Research
Laboratory, Office of Research and Development, U.S. EPA.
https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&dirEntryId=191443.
U.S. EPA. (2008d). White Paper: Integrated Modeling for Integrated Environmental Decision
Making. (EPA/100/R-08/010). Washington, D.C.: U.S. EPA.
https://www.epa.gov/sites/production/files/2015-
02/documents/im4iedm_white_paper_final_epa100r08010.pdf.
U.S. EPA. (2009a). A Conceptual Framework for U.S. EPA's National Exposure Research
Laboratory. (EPA/600/R-09/003). Washington, D.C.: National Exposure Research
Laboratory, Office of Research and Development, U.S. EPA.
https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&dirEntryId=203003.
U.S. EPA. (2009b). Expert Elicitation Task Force White Paper. Washington, D.C.: Science and
Technology Policy Council, U.S. EPA.
https://yosemite.epa.gov/sab/sabproduct.nsf/0/F4ACE05D0975F8C68525719200598BC7
/$File/Expert_Elicitation_White_Paper-January_06_2009.pdf.
U.S. EPA. (2009c). Expert Elicitation Task Force White Paper—Addendum: Selected Recent
(2006-2008) Citations. Washington, D.C.: Office of the Science Advisor, U.S. EPA.
https://yosemite.epa.gov/sab/sabproduct.nsf/fedrgstr_activites/F4ACE05D0975F8C6852
5719200598BC7/$File/Expert_Elicitation_White_Paper-
Addendum_of_Recent_References-January_2009.pdf.
U.S. EPA. (2009d). Guidance on the Development, Evaluation, and Application of
Environmental Models. (EPA/100/K-09/003). Washington, D.C.: Council for Regulatory
Environmental Modeling, Office of the Science Advisor, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-04/documents/cred_guidance_0309.pdf.
U.S. EPA. (2009e). Revised Risk Assessment Methods for Workers, Children of Workers in
Agricultural Fields, and Pesticides with No Food Uses. Washington, D.C.: Office of
Pesticide Programs, U.S. EPA. https://www.regulations.gov/document?D=EPA-HQ-
OPP-2009-0889-0002.
U.S. EPA. (2009f). Risk Assessment Guidance for Superfund. Volume I: Human Health
Evaluation Manual (Part F, Supplemental Guidance for Inhalation Risk Assessment).
Final. (EPA/540/R-070/002 Publication 9285.7-82). Washington, D.C.: Office of
Superfund Remediation and Technology Innovation, U.S. EPA.
https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P1002UOM.TXT.
U.S. EPA. (2010a). Data Sources Available for Modeling Environmental Exposures in Older
Adults. (EPA/600/R-12/013). Research Triangle Park, NC: National Exposure Research
Laboratory, Office of Research and Development, U.S. EPA.
https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&dirEntryId=241306.
U.S. EPA. (2010b). Guidelines for Preparing Economic Analyses. (EPA/240/R-10/001).
Washington, D.C.: National Center for Environmental Economics, Office of Policy, U.S.
EPA. https://www.epa.gov/sites/production/files/2017-08/documents/ee-0568-50.pdf.
U.S. EPA. (2010c). USEPA Contract Laboratory Program: National Functional Guidelines for
Inorganic Superfund Data Review (ISM01.2). (EPA/540/R-10/011 Publication 9240.1-
51). Washington, D.C.: Office of Superfund Remediation and Technology Innovation,
U.S. EPA. https://nepis.epa.gov/Exe/ZyPURL.cgi?Dockey=P1006PUX.TXT.
U.S. EPA. (2011a). Detroit Exposure and Aerosol Research Study (DEARS). National Exposure
Research Laboratory, U.S. EPA.
https://cfpub.epa.gov/si/si_public_record_report.cfm?Lab=NERL&dirEntryId=56188.
U.S. EPA. (2011b). EPA Policy on Consultation and Coordination with Indian Tribes.
Washington, D.C.: U.S. EPA. https://www.epa.gov/sites/production/files/2013-
08/documents/cons-and-coord-with-indian-tribes-policy.pdf.
U.S. EPA. (2011c). EPA Social Media Policy. (EPA Order CIO 2184.0). Washington, D.C.:
Office of Environmental Information, U.S. EPA.
http://www2.epa.gov/sites/production/files/2013-11/documents/social_media_policy.pdf.
U.S. EPA. (2011d). Exposure Factors Handbook: 2011 Edition. (EPA/600/R-09/052F).
Washington, D.C.: National Center for Environmental Assessment, Office of Research
and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/risk/recordisplay.cfm?deid=236252.
U.S. EPA. (2011e). Highlights of the Exposure Factors Handbook. (EPA/600/R-10/030).
Washington, D.C.: National Center for Environmental Assessment, Office of Research
and Development, U.S. EPA.
http://cfpub.epa.gov/ncea/risk/recordisplay.cfm?deid=221023.
U.S. EPA. (2011f). Plan EJ 2014. Washington, D.C.: Office of Environmental Justice, U.S. EPA.
https://nepis.epa.gov/Exe/ZyPDF.cgi/P100DFCQ.PDF?Dockey=P100DFCQ.PDF.
U.S. EPA. (2011g). Public Involvement. Office of Policy, National Center for Environmental
Assessment, U.S. EPA.
U.S. EPA. (2011h). Recommended Use of Body Weight 3/4 as the Default Method in Derivation
of the Oral Reference Dose. (EPA/100/R-11/0001). Washington, D.C.: Office of the
Science Advisor, Risk Assessment Forum, U.S. EPA.
https://www.epa.gov/sites/production/files/2013-09/documents/recommended-use-of-
bw34.pdf.
U.S. EPA. (2012a). Appendix B, C, D. National Risk Management Research Laboratory, U.S.
EPA.
U.S. EPA. (2012b). Benchmark Dose Technical Guidance. (EPA/100/R-12/001). Washington,
D.C.: Risk Assessment Forum, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-
01/documents/benchmarkdoseguidance.pdf.
U.S. EPA. (2012c). Biomonitoring — An Exposure Science Tool for Exposure and Risk
Assessment. (EPA/600/R-12/039). Research Triangle Park, NC: National Exposure
Research Laboratory, Office of Research and Development, U.S. EPA.
http://cfpub.epa.gov/si/si_public_file_download.cfm?p_download_id=506887.
U.S. EPA. (2012d). Center for Subsurface Modeling Support. CSMoS Ground-Water Modeling
Software. National Risk Management Research Laboratory, Ada, OK. U.S. EPA. Last
modified December 10.
https://cfpub.epa.gov/si/si_public_record_Report.cfm?dirEntryID=19569.
U.S. EPA. (2012e). Microbial Risk Assessment Guideline: Pathogenic Microorganisms with
Focus on Food and Water. (EPA/100/J-12/001 USDA/FSIS/2012-001). Washington,
D.C.: Interagency Microbiological Risk Assessment Guideline Workgroup, U.S. EPA.
USDA. https://www.epa.gov/sites/production/files/2013-09/documents/mra-guideline-
final.pdf.
U.S. EPA. (2012f). Standard Operating Procedures for Residential Pesticide Exposure
Assessment. Washington, D.C.: Health Effects Division, Office of Pesticide Programs,
Office of Chemical Safety and Pollution Prevention, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-08/documents/usepa-opp-
hed_residential_sops_oct2012.pdf.
U.S. EPA. (2013a). ProUCL Version 5.0.00 Technical Guide. Statistical Software for
Environmental Applications for Data Sets with and without Nondetect Observations.
(EPA/600/R-07/041). Washington, D.C.: Office of Research and Development, U.S.
EPA. https://www.epa.gov/sites/production/files/2015-
03/documents/proucl_v5.0_tech.pdf.
U.S. EPA. (2013b). Reaffirmation of the U.S. Environmental Protection Agency's 1995 Policy
on Evaluating Health Risks to Children. Signed by Administrator Gina McCarthy,
October 31. Washington, D.C.: Office of the Administrator, U.S. EPA.
http://www2.epa.gov/sites/production/files/2014-
05/documents/reaffirmation_memorandum.pdf.
U.S. EPA. (2013c). Risk Communication Tool. U.S. EPA, Superfund Community Involvement
Tools and Resources. https://19january2017snapshot.epa.gov/superfund/community-
involvement-tools-and-resources_.html.
U.S. EPA. (2014a). Child-Specific Exposure Scenarios Examples. (EPA/600/R-14/217F).
Washington, D.C.: National Center for Environmental Assessment, Office of Research
and Development, U.S. EPA.
https://cfpub.epa.gov/ncea/risk/recordisplay.cfm?deid=262211.
U.S. EPA. (2014b). E-FAST-Exposure and Fate Assessment Screening Tool 2014
Documentation Manual. Washington, D.C.: Exposure Assessment Branch, Office of
Pollution Prevention and Toxics, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-04/documents/efast2man.pdf.
U.S. EPA. (2014c). E-FAST-Exposure and Fate Assessment Screening Tool. Version 2014.
Washington, D.C.: Office of Pollution Prevention and Toxics, U.S. EPA.
https://www.epa.gov/tsca-screening-tools/e-fast-exposure-and-fate-assessment-screening-
tool-version-2014.
U.S. EPA. (2014d). EPA Policy on Environmental Justice for Working with Federally
Recognized Tribes and Indigenous Peoples. Washington, D.C.: U.S. EPA.
https://archive.epa.gov/partners/web/pdf/ej-indigenous-policy.pdf.
U.S. EPA. (2014e). Exposure SAP White Paper: New High-Throughput Methods to Estimate
Chemical Exposure Final. Scientific Advisory Panel, U.S. EPA.
https://www.regulations.gov/document?D=EPA-HQ-OPP-2014-0331-0005.
U.S. EPA. (2014f). Framework for Human Health Risk Assessment to Inform Decision Making.
(EPA/100/R-14/001). Washington, D.C.: Risk Assessment Forum, Office of the Science
Advisor, U.S. EPA. https://www.epa.gov/sites/production/files/2013-
09/documents/cancer_guidelines_final_3-25-05.pdf.
U.S. EPA. (2014g). Human Health Evaluation Manual, Supplemental Guidance: Update of
Standard Default Exposure Factors. (Publication 9200.1-120). Washington, D.C.: Office
of Solid Waste and Emergency Response, U.S. EPA.
https://www.epa.gov/sites/production/files/2015-11/documents/oswer_directive_9200.1-
120_exposurefactors_corrected2.pdf.
U.S. EPA. (2014h). Probabilistic Risk Assessment to Inform Decision Making: Frequently
Asked Questions. (EPA/100/R-09/001B). Washington, D.C.: Risk Assessment Forum,
Office of the Science Advisor, U.S. EPA.
https://www.epa.gov/sites/production/files/2014-11/documents/raf-pra-faq-final.pdf.
U.S. EPA. (2014i). Risk Assessment Forum White Paper: Probabilistic Risk Assessment
Methods and Case Studies. (EPA/100/R-14/004). Washington, D.C.: Risk Assessment
Forum, Office of the Science Advisor, U.S. EPA.
https://www.epa.gov/sites/production/files/2014-12/documents/raf-pra-white-paper-
final.pdf.
U.S. EPA. (2015a). Alaska Native Village and Rural Communities Program Annual Report. U.S.
EPA. https://www.epa.gov/sites/production/files/2016-
04/documents/2015annualreport_anv_final_3_3115.pdf.
U.S. EPA. (2015b). The HAPEM User's Guide, Hazardous Air Pollutant Exposure Model,
Version 7. U.S. EPA, Office of Air Quality Planning and Standards.
https://www.epa.gov/sites/production/files/2015-12/documents/hapem7usersguide.pdf.
U.S. EPA. (2015c). Peer Review Handbook. 4th Edition. (EPA/100/B-15/001). Washington,
D.C.: Science Policy Council, U.S. EPA.
https://www.epa.gov/sites/production/files/2016-
03/documents/epa_peer_review_handbook_4th_edition.pdf.
U.S. EPA. (2016a). Consumer Exposure Model (CEM) Draft User Guide. Version 1.4.1.
Washington, D.C.: Office of Pollution Prevention and Toxics, U.S. EPA.
https://www.epa.gov/sites/production/files/2016-
10/documents/cem_v_1.4.1_user_guide.pdf.
U.S. EPA. (2016b). Lead's Impact on Indoor Air Quality. U.S. EPA. Last modified September 6.
https://www.epa.gov/indoor-air-quality-iaq/leads-impact-indoor-air-quality.
U.S. EPA. (2016c). Plan to Increase Access to Results of EPA-Funded Scientific Research.
Version 1.1. Washington, D.C.: U.S. EPA.
https://www.epa.gov/sites/production/files/2016-
12/documents/epascientificresearchtransperancyplan.pdf.
U.S. EPA. (2016d). Stochastic Human Exposure and Dose Simulation (SHEDS) to Estimate
Human Exposure to Chemicals. https://www.epa.gov/chemical-research/stochastic-
human-exposure-and-dose-simulation-sheds-estimate-human-exposure.
U.S. EPA. (2016e). Superfund Community Involvement Handbook. Washington, D.C.: Office of
Emergency and Remedial Response, U.S. EPA.
https://semspub.epa.gov/work/HQ/100000070.pdf.
U.S. EPA. (2017a). Air Pollutants Exposure Model Documentation (APEX, Version 5), Volume
1: User's Guide. (EPA-452/R-17-001a). U.S. EPA, Office of Air Quality Planning and
Standards. https://www.epa.gov/sites/production/files/2017-07/documents/apex5_users-
guide-voll_0.pdf.
U.S. EPA. (2017b). Air Pollutants Exposure Model Documentation (APEX, Version 5), Volume
2: Technical Support Document. (EPA-452/R-17-001b). U.S. EPA, Office of Air Quality
Planning and Standards. https://www.epa.gov/sites/production/files/2017-
07/documents/apex5_users-guide-vol2_0.pdf.
U.S. EPA. (2017c). Ecological Committee on FIFRA Risk Assessment Methods (ECOFRAM).
U.S. EPA. Last modified December 27. https://www.epa.gov/pesticide-science-and-
assessing-pesticide-risks/ecological-committee-fifra-risk-assessment-methods.
U.S. EPA. (2017d). Policy and Guidance. U.S. EPA. Last modified August 17.
https://www.epa.gov/laws-regulations/policy-guidance.
U.S. EPA. (2018a). About EPA's Quality System. U.S. EPA. Last modified May 8.
https://www.epa.gov/quality/about-epas-quality-system.
U.S. EPA. (2018b). Defining Pesticide Biomarkers. U.S. EPA. Last modified February 20.
https://www.epa.gov/pesticide-science-and-assessing-pesticide-risks/defining-pesticide-
biomarkers.
U.S. EPA. (2018c). Environmental Data Standards. U.S. EPA. Last modified January 23.
https://www.epa.gov/measurements-modeling/resources-assessing-
measurements#standards.
U.S. EPA. (2018d). Methods, Models, Tools, and Databases. U.S. EPA. Last modified May 3.
https://www.epa.gov/research/methods-models-tools-and-databases.
U.S. EPA. (2018e). Resources for Planning Projects that Use Existing Data. U.S. EPA. Last
modified August 1. https://www.epa.gov/quality/resources-planning-projects-use-
existing-data.
U.S. EPA. (2019a). Federal Guidance for Radiation Protection. U.S. EPA. Last modified August
6. https://www.epa.gov/radiation/federal-guidance-radiation-protection.
U.S. EPA. (2019b). Laws and Regulations. U.S. EPA. Last modified September 6.
https://www.epa.gov/laws-regulations.
U.S. EPA. (2019c). Risk Assessment. U.S. EPA. Last modified April 30.
https://www.epa.gov/risk#risk.
U.S. EPA. (2019d). Superfund Community Involvement. Washington, D.C.: Office of Solid
Waste and Emergency Response, Office of Superfund Remediation and Technology
Innovation, U.S. EPA. Last modified March 26.
https://www.epa.gov/superfund/superfund-community-involvement.
U.S. FDA (Food and Drug Administration). (2015). Plan to Increase Access to Results of FDA-
Funded Scientific Research. U.S. FDA.
https://www.fda.gov/downloads/ScienceResearch/AboutScienceResearchatFDA/UCM43
5418.pdf.
U.S. GAO (Government Accountability Office). (1983). Siting of Hazardous Waste Landfills
and their Correlation With Racial and Economic Status of Surrounding Communities.
(GAO/RCED-83-168). Washington, D.C.: U.S. Government Accountability Office.
http://www.gao.gov/assets/150/140159.pdf.
U.S. GAO. (2005). Environmental Justice: EPA Should Devote More Attention to
Environmental Justice When Developing Clean Air Rules. (GAO/05-289). Washington,
D.C.: U.S. Government Accountability Office.
http://www.gao.gov/new.items/d05289.pdf.
U.S. HHS (Department of Health and Human Services). (2016). Access to Scientific Data and
Publications. In Open Government Plan. Version 4.0. U.S. HHS.
https://www.hhs.gov/open/2016-plan/accessing-data-and-publications.html.
UCC (United Church of Christ). (1987). Toxic Waste and Race in the United States: A National
Report on the Racial and Socio-Economic Characteristics of Communities with
Hazardous Waste Sites. New York, NY: Commission for Racial Justice United Church of
Christ.
http://d3n8a8pro7vhmx.cloudfront.net/unitedchurchofchrist/legacy_url/13567/toxwrace8
7.pdf?1418439935.
Upton, AC. (1988). Evolving Perspectives on the Concept of Dose in Radiobiology and
Radiation Protection. Health Physics 5: 605-614.
Vaeth, M; Skovlund, E. (2004). A Simple Approach to Power and Sample Size Calculations in
Logistic Regression and Cox Regression Models. Statistics in Medicine 23: 1781-1792.
Van Dyke, MV; LaMontagne, AD; Martyny, JW; Ruttenber, AJ. (2001). Development of an
Exposure Database and Surveillance System for Use by Practicing OSH Professionals.
Applied Occupational and Environmental Hygiene 16: 135-143.
VanderWalde, A. (2005). Undue Inducement: The Only Objection to Payment? The American
Journal of Bioethics 5: 25-27.
Verweij, M; Thompson, M. (2006). Clumsy Solutions for a Complex World: Governance,
Politics and Plural Perceptions. Basingstoke, Hampshire, UK: Palgrave Macmillan.
Visschers, VHM; Meertens, RM; Passchier, WWF; de Vries, NNK. (2009). Probability
Information in Risk Communication: A Review of the Research Literature. Risk Analysis
29: 267-287.
Vojta, PJ; Friedman, W; Marker, DA; Clickner, R; Rogers, JW; Viet, SM; Muilenberg, ML;
Thorne, PS; Arbes Jr., SJ; Zeldin, DC. (2002). First National Survey of Lead and
Allergens in Housing: Survey Design and Methods for the Allergen and Endotoxin
Components. Environmental Health Perspectives 110: 527-532.
Volstad, JH; Roth, NE; Mercurio, G; Southerland, MT; Strebel, DE. (2003). Using
Environmental Stressor Information to Predict the Ecological Status of Maryland Non-
Tidal Streams as Measured by Biological Indicators. Environmental Monitoring and
Assessment 84: 219-242.
Wallsten, TS; Budescu, DV; Erev, IDO; Diederich, A. (1997). Evaluating and Combining
Subjective Probability Estimates. Journal of Behavioral Decision Making 10: 243-268.
Wambaugh, JF; Setzer, RW; Reif, DM; Gangwal, S; Mitchell-Blackwood, J; Arnot, JA; Jolliet,
O; Frame, A; Rabinowitz, J; Knudsen, TB; Judson, RS; Egeghy, P; Vallero, D; Cohen
Hubal, EA. (2013). High-Throughput Models for Exposure-Based Chemical
Prioritization in the ExpoCast Project. Environmental Science & Technology 47: 8479-
8488.
Wambaugh, JF; Wang, A; Dionisio, KL; Frame, A; Egeghy, P; Judson, R; Setzer, RW. (2014).
High Throughput Heuristics for Prioritizing Human Exposure to Environmental
Chemicals. Environmental Science & Technology 48: 12760-12767.
Weigel, BM. (2003). Development of Stream Macroinvertebrate Models That Predict Watershed
and Local Stressors in Wisconsin. Journal of the North American Benthological Society
22:123-142.
Weintraub, M; Birnbaum, LS. (2008). Catfish Consumption as a Contributor to Elevated PCB
Levels in a Non-Hispanic Black Subpopulation. Environmental Research 107: 412-417.
Weise, KL; Smith, ML; Maschke, KJ; Copeland, HL. (2002). National Practices Regarding
Payment to Research Subjects for Participating in Pediatric Research. Pediatrics 110:
577-582.
Weisel, C; Zhang, J; Turpin, B; Morandi, MT; Colome, S; Stock, TH; Spektor, DM; Korn, L;
Winer, AM; Kwon, J; Meng, QY; Zhang, L; Harrington, R; Liu, W; Reff, A; Lee, JH;
Alimokhtari, S; Mohan, K; Shendell, D; Jones, J; Farrar, L; Maberti, S; Fan, T. (2005).
Relationships of Indoor, Outdoor, and Personal Air (RIOPA). Part I, Collection Methods
and Descriptive Analyses. Research Report (Health Effects Institute) Nov: 1-107.
Wendler, DS. (2006). Assent in Paediatric Research: Theoretical and Practical Considerations.
Journal of Medical Ethics 32: 229-234.
Wendler, DS; Shah, S. (2003). Should Children Decide Whether They Are Enrolled in
Nonbeneficial Research? The American Journal of Bioethics 3: 1-7.
Werner, C; Bedford, T; Cooke, RM; Hanea, AM; Morales-Napoles, O. (2017). Expert
Judgement for Dependence in Probabilistic Modelling: A Systematic Literature Review
and Future Research Directions. European Journal of Operational Research 258: 801-819.
Wernette, D; Nieves, LA. (1992). Breathing Polluted Air. EPA Journal 18: 16-17.
Wertheimer, A. (2011). Rethinking the Ethics of Clinical Research: Widening the Lens. New
York, NY: Oxford University Press.
White, MC; Berger-Frank, S; Campagna, D; Inserra, SG; Middleton, D; Millette, MD; Noonan,
CW; Peipins, LA; Williamson, D; Health Investigations Communications Work Group.
(2004). Communicating Results to Community Residents: Lessons From Recent ATSDR
Health Investigations. Journal of Exposure Analysis and Environmental Epidemiology
14: 484-491.
Whitmore, RW; Pellizzari, ED; Zelon, HS; Michael, LC; Quackenboss, JJ. (2005).
Cost/Variance Optimization for Human Exposure Assessment Studies. Journal of
Exposure Analysis and Environmental Epidemiology 15: 464-472.
Whittle, A; Shah, S; Wilfond, B; Gensler, G; Wendler, D. (2004). Institutional Review Board
Practices Regarding Assent in Pediatric Research. Pediatrics 113: 1747-1752.
WHO (World Health Organization). (1983). Environmental Health Criteria 27: Guidelines on
Studies in Environmental Epidemiology. Geneva, Switzerland: International Programme
on Chemical Safety (IPCS) Harmonization Project, WHO.
http://www.inchem.org/documents/ehc/ehc/ehc27.htm.
WHO. (2004). IPCS Risk Assessment Terminology. Part 1, IPCS/OECD Key Generic Terms
Used in Chemical Hazard/Risk Assessment; Part 2, IPCS Glossary of Key Exposure
Assessment Terminology. Geneva, Switzerland: IPCS Harmonization Project, WHO.
http://www.who.int/ipcs/methods/harmonization/areas/ipcsterminologyparts1and2.pdf?ua
=1.
WHO. (2005). Principles of Characterizing and Applying Human Exposure Models. Geneva,
Switzerland: IPCS Harmonization Project, WHO.
http://whqlibdoc.who.int/publications/2005/9241563117_eng.pdf.
WHO. (2006). Environmental Health Criteria 237: Principles for Evaluating Health Risks in
Children Associated with Exposure to Chemicals. Geneva, Switzerland: IPCS
Harmonization Project, WHO. http://www.who.int/ipcs/publications/ehc/ehc237.pdf.
WHO. (2008). Uncertainty and Data Quality in Exposure Assessment. Part 1, Guidance
Document on Characterizing and Communicating Uncertainty in Exposure Assessment;
Part 2, Hallmarks of Data Quality in Chemical Exposure Assessment. Geneva,
Switzerland: IPCS Harmonization Project, WHO.
http://www.inchem.org/documents/harmproj/harmproj/harmproj6.pdf.
WHO. (2012). Harmonization Project Strategic Plan: Harmonization of Approaches to the
Assessment of Risk from Exposure to Chemicals. Geneva, Switzerland: IPCS
Harmonization Project, WHO. http://www.who.int/ipcs/methods/harmonization/en/.
WHO. (2015). Human Biomonitoring: Facts and Figures. Copenhagen, Denmark: WHO
Regional Office for Europe.
http://www.euro.who.int/__data/assets/pdf_file/0020/276311/Human-biomonitoring-
facts-figures-en.pdf.
Whyatt, RM; Garfinkel, R; Hoepner, LA; Andrews, H; Holmes, D; Williams, MK; Reyes, A;
Diaz, D; Perera, FP; Camann, DE; Barr, DB. (2009). A Biomarker Validation Study of
Prenatal Chlorpyrifos Exposure within an Inner-City Cohort during Pregnancy.
Environmental Health Perspectives 117: 559-567.
Williams, PRD; Hubbell, BJ; Weber, E; Fehrenbacher, C; Hrdy, D; Zartarian, V. (2010). Chapter
3. An Overview of Exposure Assessment Models Used by the U.S. Environmental
Protection Agency. In G. Hanrahan (Ed.), Modelling of Pollutants in Complex
Environmental Systems, Volume II (pp. 61-131). Hertfordshire, U.K.: ILM Publications.
https://pdfs.semanticscholar.org/61ad/1b7ee18f3b3f77ec3ed36c38803049a49750.pdf.
Williamson, DM; Millette, D; Beauboeuf-Lafontant, T; Henry, JP; Atherton, C. (2005).
Including Residents in Epidemiologic Studies of Adverse Health Effects in Communities
with Hazardous Exposures. Journal of Environmental Health 67: 23-28.
Woodruff, TJ; Parker, JD; Kyle, AD; Schoendorf, KC. (2003). Disparities in Exposure to Air
Pollution during Pregnancy. Environmental Health Perspectives 111: 942-946.
Woodward, M. (1999). Epidemiology: Study Design and Data Analysis. Texts in Statistical
Science. Boca Raton, FL: Chapman & Hall/CRC.
Wu, YC; Fisher, J; Neal, A. (2016). Infant Toxicology: Overview and Considerations for the
Safety Assessment of Products for Infants. In Food Toxicology. CRC Press.
Xue, J; Zartarian, VG; Ozkaynak, H; Dang, W; Glen, G; Smith, L; Stallings, C. (2006). A
Probabilistic Arsenic Exposure Assessment for Children Who Contact Chromated
Copper Arsenate (CCA)-Treated Playsets and Decks, Part 2: Sensitivity and Uncertainty
Analyses. Risk Analysis 26: 533-541.
Young, BM; Tulve, NS; Egeghy, PP; Driver, JH; Zartarian, VG; Johnston, JE; Delmaar, CJ;
Evans, JJ; Smith, LA; Glen, G; Lunchick, C; Ross, JH; Xue, J; Barnekow, DE. (2012).
Comparison of Four Probabilistic Models (CARES®, Calendex™, ConsExpo, and
SHEDS) to Estimate Aggregate Residential Exposures to Pesticides. Journal of Exposure
Science and Environmental Epidemiology 22: 522-532.
Zartarian, VG; Bahadori, T; McKone, T. (2005). Adoption of an Official ISEA Glossary. Journal
of Exposure Analysis and Environmental Epidemiology 15: 1-5.
Zartarian, VG; Ott, WR; Duan, N. (2007). Chapter 2. Basic Concepts and Definitions of
Exposure and Dose. In WR Ott; AC Steinemann; LA Wallace (Eds.), Exposure Analysis
(pp. 33-63). Boca Raton, FL: CRC Press.
http://www.crcnetbase.com/doi/book/10.1201/9781420012637.
Zartarian, VG; Xue, J; Ozkaynak, H; Dang, W; Glen, G; Smith, L; Stallings, C. (2006). A
Probabilistic Arsenic Exposure Assessment for Children who Contact CCA-Treated
Playsets and Decks, Part 1: Model Methodology, Variability Results, and Model
Evaluation. Risk Analysis 26: 515-531.
Zartarian, VG; Xue, J; Tornero-Velez, R; Brown, J. (2017). Children's Lead Exposure: A
Multimedia Modeling Analysis to Guide Public Health Decision-Making. Environmental
Health Perspectives.