United States Environmental     EPA Science Advisory           EPA-SAB-EHC/IHEC-02-004
Protection Agency               Board (1400A)                  December 2001
  REVIEW OF THE OFFICE
  OF RADIATION AND
  INDOOR AIR'S DRAFT
  METHODOLOGY FOR
  RANKING INDOOR AIR
  TOXICS: AN SAB
  REPORT

  A REVIEW BY THE
  ENVIRONMENTAL HEALTH
  AND INTEGRATED HUMAN
  EXPOSURE COMMITTEES OF
  THE EPA SCIENCE ADVISORY
  BOARD (SAB)

-------
                                     December 14, 2001
EPA-SAB-EHC/IHEC-02-004

Honorable Christine Todd Whitman
Administrator
U.S. Environmental Protection Agency
1200 Pennsylvania Avenue, NW
Washington, DC 20460
              Subject:       Review of the Office of Radiation and Indoor Air's Draft Methodology
                            for Ranking Indoor Air Toxics: An SAB Report

Dear Governor Whitman:

       A Joint Committee of the EPA Science Advisory Board, including Members and Consultants
from the Environmental Health and Integrated Human Exposure Committees, met on July 19, 2001, to
review a draft methodology for generating an order-of-magnitude,  screening-level ranking of key indoor
air toxics.  The methodology was developed by EPA's Office of Radiation and Indoor Air (ORIA) as
an outgrowth of the methodology used to select key pollutants for the National Air Toxics
Program/Urban Air Toxics Strategy.

       The Charge for the review, the Joint Committee's findings, and some comments from the EPA
Science Advisory Board's Executive Committee addressed the following issues:

       a)     Is the overall methodology suitable for the purposes of the ranking analysis (i.e.,
              development of an "order-of-magnitude," screening-level ranking and selection of key
              air toxics indoors)?

       In general, the Joint Committee finds that the proposed methodology used in the document
       appears to be appropriate  (subject to the caveats noted below) only for the purpose of
       providing a preliminary "order-of-magnitude," screening-level ranking of a limited selection  of
       toxics. The specific application is  seriously flawed in a way that may bias conclusions on
       priorities. As it is currently applied, the document's title is too general and implies a
       comprehensiveness that it does not contain.  A more accurate title for the report in its current
       form would be "Ranking Selected Indoor Organic and Metallic Air Toxics." Monitoring data

-------
are not available for many indoor air pollutants, leading to the omission from the ranking
exercise of numerous toxicants of known public health concern.  These omissions result from
limitations in the available data, and associated limitations in the analytical methods, sampling
approaches, and/or toxicological assessments. The resultant ranking biases must be addressed
by identifying the data gaps, so that better exposure data can be generated in the most
important areas. In terms of analytical approaches, the sources of indoor air toxics (outdoor or
indoor sources) may well be a factor affecting consumer risk and exposure reduction response,
but this model does not deal with this issue.  The type of building (e.g., office, residence,
school) is another important parameter, but is also not addressed. Since the human body does
not artificially divide exposure between indoor and outdoor sources, it may be most
appropriate to consider total potential exposure without distinction of the indoor/outdoor
source.  Some available data on personal exposures should be used to test the rankings to see if
the same or different rankings  are generated where there is additional information. The
document must make it clear to the reader that lack of data or measurements for a given agent
means only that data were not available or were not considered, not that the agent is
considered to be of lesser (or greater) risk.

The Joint Committee noted that even an uncertain and unstable preliminary ranking system
would usually be preferable to no ranking system at all, unless information gaps bias the ultimate
conclusions.  Having no ranking system could lead to random choice of pollutant for study or a system
that depends on the "chemical-of-the-week" syndrome or some other non-risk based set of
criteria. In its review, the SAB Executive Committee noted that a number of more
sophisticated analyses have been undertaken previously to ascertain the risks of air
pollutants, and that these analyses compensated for the lack of monitoring with modeling and
other risk assessment techniques. These efforts to rank pollutants and/or quantify their risks
make the current Indoor Air Toxics Ranking document appear to be somewhat dated.

The Joint Committee wishes to again emphasize that the results must only be used for
preliminary relative ranking, i.e., to identify the "top" (highest risk) ranked or first tier chemicals
of those available to be ranked, versus ones ranked in the middle or lower tiers. Although an
order-of-magnitude ranking will work, using the results as a surrogate for absolute risk is
inappropriate because of the noted uncertainties in the database. To be explicit, the results
should not be used for absolute ranking.

The SAB has recently completed review of the National Air Toxics Assessment which relied
heavily upon sophisticated modeling. The Joint Committee is not entirely comfortable with this
document's explanation of the  superiority of monitoring data to model results.  Models, if
properly  calibrated and validated, can sometimes compensate for deficiencies in monitoring
data caused by changes in exposure (e.g., the cancellation of pesticide registrations mentioned),
short-term vs. long-term monitoring, etc.  Given the severe limitations of existing direct
monitoring data, it might be advisable to consider supplementing the approach with a "screening

-------
level" indoor fate and exposure model to draw upon other sources of information (i.e.,
emissions data, chemical use data, activity data, ...).

b)     Are the criteria used to select the monitoring studies for the analysis appropriate? Are
       the studies chosen for the ranking analysis suitable, and are there other studies that you
       believe should be included in this analysis?  Were the methods used to select and
       statistically analyze the data within the studies useful to the analysis?

The criteria listed in the draft document seem to be consistent with the objectives of the report.
However, these criteria need to be much better defined.  And, as noted above, the referenced
studies do not include most of the identified indoor chemicals of public health concern.  A
number of indoor pollutants that have been measured repeatedly  and are known to be
important (e.g., carbon monoxide, radon,  asbestos, fine particulates, nitrogen oxides, ozone,
and compounds associated with environmental tobacco smoke) are not included in this
"Ranking."

c)     Is the methodology for selection of the "risk-based concentrations" (RBC) (based on
       that presented in the Technical Support Document for the National Air Toxics
       Program/Urban Air Toxics Strategy) useful in the context of this analysis?

The Joint Committee felt that the methodology for the selection of RBC was reasonable for
purposes of a preliminary screening level ranking, but that the limitations of the methodology
must be better explained. An appendix listing all the possible RBC for each chemical derived
from each of the different data sources should be added, as well as a discussion of limitations in
the toxicity studies on which the RBC were based.

d)     How well have we described and addressed the adequacy, limitations, and uncertainties
       of the analysis, including:
       1)      Incomplete  data on indoor concentrations and hazard/risk indices
       2)      Difficulties in  determining the representativeness/accuracy of the "typical" levels
               indoors
       3)      The use of short-term monitoring data to represent chronic exposure periods
       4)      Issues related to the age of the data
       5)      Variations in the methods used by the various agencies to arrive at the health
               indices, which are the basis for the "risk-based concentrations?"

With a few exceptions, the document adequately describes and discusses the major
uncertainties of the analysis in qualitative terms.  Improvements in the treatment that would
enhance the utility of the document and its transparency to readers are detailed in the Joint
Committee's report. Limitations and uncertainties will be more or less important depending on

-------
       the decisions that will be influenced by the ranking results and the environment in which the
       decisions are made.

              The Joint Committee also addressed some issues not specifically posed by the Charge,
       and advanced several recommendations, including:

       a)     Make the document clear as to the specific purposes for which it can be used, and by
              whom.  This information is central to evaluation of the adequacy and appropriateness of
              the document.

       b)     Specifically consider sensitive populations, including children, people with diseases such
              as asthma or chronic obstructive pulmonary disease, pregnant females, the elderly, etc.

       c)     Perform some type of validation, which could range from a simple check to see that the
              relative ranking makes sense, to a quantitative assessment for those agents for which the
              ranking suggests action is warranted.

       EPA is currently developing an indoor air toxics strategy to reduce risks from toxic air
pollutants indoors, using non-regulatory, voluntary actions. The EPA Science Advisory Board has
supported an increased emphasis upon, and allocation of resources to address, the health importance of
indoor toxics exposures and offers its expertise and experience to assist with the formulation of
the strategy through all stages of its development.

       We look forward to a written response to the Joint Committee's recommendations. Please contact us if
we may be of further assistance.

                             Sincerely,
                                    /signed/

                             Dr. William Glaze, Chair
                             EPA Science Advisory Board
                                    /signed/

                             Dr. Henry Anderson, Chair
                             Integrated Human Exposure Committee
                             EPA Science Advisory Board

-------
       /signed/

Dr. Mark Utell, Chair
Environmental Health Committee
EPA Science Advisory Board

-------
                                         NOTICE
       This report has been written as part of the activities of the EPA Science Advisory Board, a
public advisory group providing extramural scientific information and advice to the Administrator and
other officials of the Environmental Protection Agency. The Board is structured to provide balanced,
expert assessment of scientific matters related to problems facing the Agency. This report has not been
reviewed for approval by the Agency and, hence, the contents of this report do not necessarily
represent the views and policies of the Environmental Protection Agency, nor of other agencies in the
Executive Branch of the Federal government, nor does mention of trade names or commercial products
constitute a recommendation for use.
Distribution and Availability: This EPA Science Advisory Board report is provided to the EPA
Administrator, senior Agency management, appropriate program staff, interested members of the
public, and is posted on the SAB website (www.epa.gov/sab). Information on its availability is also
provided in the SAB's monthly newsletter (Happenings at the Science Advisory Board).  Additional

-------
copies and further information are available from the SAB Staff [US EPA Science Advisory Board
(1400A), 1200 Pennsylvania Avenue, NW, Washington, DC 20460-0001; 202-564-4533].

-------
                                        ABSTRACT

       A Joint Committee of the EPA Science Advisory Board met on July 19, 2001, to review a
draft methodology for generating a ranking of indoor air toxics. The methodology was developed by
EPA's Office of Radiation and Indoor Air.

       The Joint Committee found that the methodology used in the Ranking document appears to be
appropriate for the purpose of providing a preliminary "order-of-magnitude" screening-level ranking for
selected indoor air toxics. However, due to limitations in the available data used to generate the
specific rankings, data were not available for a number of prevalent indoor air pollutants (carbon
monoxide, radon, asbestos, fine particulate matter, nitrogen oxides, ozone, and environmental tobacco
smoke), and pesticides appeared to be under-represented. If the Agency makes the decision to apply
the methodology, the utility of the ranking results will be limited to the chemicals included.
Nevertheless, even an uncertain and unstable preliminary ranking system, limited to a subset of
pollutants, would usually be preferable to no ranking system at all (random choice of pollutants for
study) or a system that depends on the chemical-of-the-week syndrome or some other non-risk-based
set of criteria, unless information gaps significantly bias the ultimate conclusions.

       The Joint Committee also suggested that EPA should: state clearly the specific
purposes for which the methodology can be used; give special consideration to sensitive
populations; perform a sensitivity analysis to identify factors having the greatest influence on the ranking;
state clearly that lack of data for a given compound should not be taken to mean that the compound is
of lesser or greater risk than compounds for which data were provided; perform some measure
of validation; and perform periodic reviews to take advantage of newly published data.
KEYWORDS: Indoor air pollutants; air toxics; risk; risk based concentrations; ranking; screening;
pesticides.
                                              iii

-------
                          U.S. Environmental Protection Agency
                               EPA Science Advisory Board
         Environmental Health Committee/Integrated Human Exposure Committee
                                     Joint Meeting
                                      July 19, 2001

CO-CHAIRS
Dr. Henry A. Anderson, Chief Medical Officer, Bureau of Environmental Health, Division of Public
       Health, State of Wisconsin Department of Health and Family Services, Madison, WI

Dr. Mark J. Utell, Professor of Medicine and Environmental Medicine, Pulmonary Unit,
       University of Rochester Medical Center, Rochester, NY

SAB MEMBERS
Dr. Annette Guiseppi-Elie, Senior Consultant, Corporate Remediation Group, DuPont
       Spruance Plant, DuPont Engineering, Richmond, VA

 Dr. Paul Foster, Program Director, Endocrine, Reproductive and Developmental Toxicology,
       Chemical Industry Institute of Toxicology, Research Triangle Park, NC

Dr. Michael Jayjock, Senior Research Fellow, Rohm and Haas Co., Spring House, PA

Dr. George Lambert, Associate Professor and Center Director,  Center for Child and
       Reproductive Environmental Health, Environmental and Occupational Health Sciences Institute,
       UMDNJ, Piscataway, NJ

Dr. Grace LeMasters, Professor, Division of Epidemiology and Biostatistics, University of Cincinnati,
       Cincinnati, OH

                                            iv

-------
Dr. Abby Li, Senior Neurotoxicologist, Regulatory & Toxicology Manager, Monsanto,
       Regulatory Division, St. Louis, MO

Dr. Ulrike Luderer, Assistant Professor, Department of Medicine, University of California at Irvine, CA

Dr. Randy Maddalena, Scientist, Lawrence Berkeley National Laboratory, Environmental Energy
       Technologies Division, Indoor Environment Department, Berkeley, CA

Dr. Barbara J. Petersen, President, Novigen Sciences, Inc., Washington, DC

Dr. Jed M. Waldman, Chief, Indoor Air Quality Section, California Department of Health Services,
       Berkeley, CA

Dr. Charles J. Weschler, Adjunct Professor, Department of Environmental and Community Medicine,
       UMDNJ, Piscataway, NJ

CONSULTANT
Dr. Stephen Brown, Risks of Radiation Chemical Compounds (R2C2), Oakland, CA

EPA SCIENCE ADVISORY BOARD STAFF
Mr. Samuel Rondberg, Designated Federal Officer, EPA Science Advisory Board (1400A), 1200
       Pennsylvania Avenue, NW, Washington, DC

Ms. Dorothy Clark, Management Assistant, EPA Science Advisory Board (1400A), 1200
       Pennsylvania Avenue, NW, Washington, DC

-------
                                TABLE OF CONTENTS
1 EXECUTIVE SUMMARY	1

2 INTRODUCTION	5
       2.1 Background 	5
       2.2 Charge	5

3 DETAILED RESPONSES  	7
       3.1 Suitability of the Overall Methodology for the Ranking Analysis  	7
              3.1.1. Is the methodology suitable for the purposes of a screening-level
                     ranking?	7
              3.1.2  Is the methodology as described suitable for the "selection of key
                     air toxics indoors"?	9
       3.2 Use of Studies for the Ranking Analysis	11
               3.2.1  Are the criteria used to select the monitoring studies for the analysis appropriate?	11
              3.2.2  Are the studies chosen for the ranking analysis suitable, and are there
                     other studies that you believe should be included in this analysis?	13
              3.2.3  Were the methods used to select and statistically analyze the data
                     within the studies useful to the analysis?	14
       3.3 Methodology for Selection of the "Risk-based" Concentrations  	15
       3.4 Adequacy, Limitations, and Uncertainties of the Analysis	19
              3.4.1  Incomplete Data on Indoor Concentration and Hazard/Risk Indices	21
              3.4.2  Difficulties in Determining the Representativeness/Accuracy of the "Typical"
                     Levels  Indoors	22
              3.4.3  The Use of Short-term Monitoring Data to Represent Chronic
                     Exposure Periods	22
              3.4.4  Issues Related to the Age of the Data	23
              3.4.5  Variations in the Methods Used by the Various Agencies to Arrive
                     at the Health Indices	23
       3.5 Additional Issues	24

REFERENCES	R-1
                                              vi

-------
                               1  EXECUTIVE SUMMARY
       A Joint Committee, including Members and Consultants from the Environmental Health
Committee and the Integrated Human Exposure Committee, met on July 19, 2001, to review a draft
methodology for generating an order-of-magnitude, screening-level ranking of key indoor air toxics.
The methodology was developed by EPA's Office of Radiation and Indoor Air (ORIA) as an
outgrowth of the methodology used to select key pollutants for the National Air Toxics Program/Urban
Air Toxics Strategy.

       The Charge for the review, and the Joint Committee's findings, include the following issues:

       a)     Is the overall methodology suitable for the purposes of the ranking analysis (i.e.,
              development of an "order-of-magnitude," screening-level ranking and selection of key
              air toxics indoors)?

       In general, the Joint Committee finds that the methodology used in the Ranking document
       appears to be appropriate for the purpose of providing a preliminary "order-of- magnitude,"
       screening-level ranking for selected indoor air toxics.  Although it is recognized that indoor air
       may present a significant health risk, data are not available for a number of prevalent indoor air
       pollutants. As such, any method for ranking indoor air toxics will have significant limitations,
       and might suffer from "information bias," reaching false conclusions because of data gaps.  The
       most serious problem seems to be omissions from the ranking of numerous toxicants of concern
       (e.g., "stealth" and criteria air pollutants listed below), and the limited inclusion of currently
       registered residential use pesticides. These omissions are due to limitations in the available data
       used to complete the ranking, which may in turn be due to limitations in the analytical methods,
        sampling approaches, and/or toxicological assessments. Efforts must be made to examine the
       biases caused by these limitations.  The most important application of this tool may well be to
       define data gaps, so that better data can be generated in the most important areas.
       Furthermore, the ranking method can be improved by incorporating some indication of the
       likely ranges of exposures measured indoors.

       The current methodology will work for the Agency and provide it with a preliminary
       screening-level evaluation and relative ranking of the chemical agents included in the analysis.
       However, if the Agency makes the decision to apply the methodology, the utility of the ranking
       results will be limited to the chemicals included; the analysis is not sufficiently comprehensive to rank
       indoor pollutants in general, since many important indoor air pollutants are not addressed.
       Nevertheless, even an uncertain and unstable preliminary ranking system, limited to a subset of
       pollutants, would usually be preferable to no ranking system at all (random choice of
       pollutants for study) or a system that depends on the chemical-of-the-week syndrome  or some

-------
other non-risk-based set of criteria, unless information gaps significantly bias the ultimate
conclusions.

The report must define "air toxics" in the context of the ranking exercise and also explicitly
explain why biologicals, radon and particulates are not included. Ideally, these important
residential pollutants should be placed in the proper context (and most likely included in the
ranking analysis).  Also, the document should be revised to make it clear to the reader that lack
of data or measurements for a given agent means only that data were not available or were not
evaluated, not that the agent is considered to be of lesser (or greater) risk.

b)     Are the criteria used to select the monitoring studies for the analysis appropriate? Are
       the studies chosen for the ranking analysis suitable, and are there other studies that you
       believe should be included in this analysis? Were the methods used to select and
       statistically analyze the data within the studies useful to the analysis?

The criteria listed in the draft  document are consistent with the  objectives of the report.
However, these criteria must be much better defined.

Although the referenced studies span a large range of chemicals, they do not include most of the
identified indoor chemicals of public health concern.  A number of indoor pollutants that have
been measured repeatedly and are known to be important are not included in this "Ranking."
These include: carbon monoxide, radon, asbestos, fine particulate matter, nitrogen oxides,
ozone, and selected compounds associated with environmental tobacco smoke.  In addition, as
noted earlier, pesticides appear to be under-represented.

Additional explanation is needed regarding the studies that were not selected and why they
were excluded. The report states that studies were not selected that included monitoring data
that "contained specific chemical sources (e.g. smoking or specific products or materials)."  The
risk agents that were excluded should be clearly identified in the document along with the
reason for exclusion. A limitation of the studies is that monitoring in several studies occurred
during a very limited period, yet these values are used as lifetime daily exposure levels.
Therefore, the mean value used for chronic exposure could be an overestimate or an
underestimate depending on how representative the sampling period is  of average yearly
exposure for the chemical in question. This problem can only be corrected by  obtaining better
probability-based data that take into account regional and seasonal differences.

c)     Is the methodology for selection of the "risk-based concentrations" (RBC) (based on
       that presented in the Technical Support Document for the National Air Toxics
       Program/Urban Air Toxics Strategy) useful in the context of this analysis?

-------
       The Joint Committee concluded that the methodology for the selection of RBC was reasonable
       for purposes of a preliminary screening level ranking for selected toxics, but that the limitations
       of the methodology must be better explained.  An appendix listing all the possible RBC for each
       chemical derived from each of the different data sources should be added,  as well as a
       discussion of limitations in the toxicity studies on which the RBC were based.

       d)      How well have we described and addressed the adequacy, limitations, and uncertainties
               of the analysis, including:

               1)      Incomplete data on indoor concentrations and hazard/risk indices
               2)      Difficulties in determining the representativeness/accuracy of the "typical" levels
                      indoors
               3)      The use of short-term monitoring data to represent chronic exposure periods
               4)      Issues related to the age of the data
               5)      Variations in the methods used by the various agencies to arrive at the health
                      indices, which are the basis for the  "risk-based  concentrations?"

       Limitations and uncertainties will be more or less important depending on the decisions that will
       be influenced by the results and the environment in which the decisions are made.

       The results should only be used for a preliminary relative ranking, i.e., to identify the "top
       ranked" (those that potentially present the most substantial risks among the chemicals included
       in the ranking exercise) or first tier chemicals versus ones ranked in the middle or lower tiers.
       Although an order-of-magnitude ranking will work, using the results as a surrogate for absolute
       risk is inappropriate because of the uncertainty in the database.  To be explicit,  the results
       should not be used for absolute ranking.

       The Joint Committee also addressed some issues not specifically posed by the Charge, and
made the following suggestions:

       a)      The document will be useful for initial screening, but it should be made clear as to the
               specific purposes for which it can be used, and by whom.  This information is central to
               evaluation of the adequacy of the document.

       b)      In keeping with USEPA guidelines, this exercise should take into consideration
               sensitive populations, including children, people with chronic diseases such as asthma or
               chronic obstructive pulmonary disease, pregnant females, the elderly, etc.  One Member
               noted, however, that it was not clear how this goal could be accomplished without the
               application  of considerably greater resources than had been devoted to this effort. This
               Member suggests that, given such resources, a feasible  option might be to simply
               highlight those substances for which there are known highly susceptible groups not

-------
       covered by the usual safety factors in the derivation of RBCs, or known higher
       exposures.

c)     A "sensitivity analysis" to identify decisions and data gaps that have the greatest
        influence on the ranking ratios would be useful.

d)     The document should state clearly that lack of data for a given compound should not be
       taken to mean that the compound is of lesser or greater risk than  compounds for which
       data were provided.

e)     Before implementing any action the Agency should perform some measure of validation.
       This may range from a simple check to see that the relative ranking makes sense to a
       quantitative assessment for chemicals for which the strategy would suggest action is
       warranted.  Any quantitative evaluation should build on existing data and previous
       evaluations. It is important to recognize and appropriately document that this ranking may
       be flawed because not all relevant chemicals could be included.

f)      As the Agency is well aware, there are numerous studies that continue to develop data.
       The Agency should not wait for these data to support the current strategy, but the
       strategy should be subject to periodic (perhaps annual) review to take advantage of newly
       published data.

-------
                                    2  INTRODUCTION
2.1 Background

       EPA is currently developing an indoor air toxics strategy to reduce risks from toxic air
pollutants indoors, using non-regulatory, voluntary actions.  To help focus its efforts on the most
substantial risks, the Office of Radiation and Indoor Air (ORIA) has developed a draft methodology to
generate an "order-of-magnitude" screening-level ranking and selection of key air toxics indoors.  The
ranking analysis used a methodology similar to that used to select key pollutants for the National Air
Toxics Program/Urban Air Toxics Strategy, as presented in the Technical  Support Document (TSD,
2000) for that program.  Ten monitoring studies, chosen to represent typical concentrations of the
pollutants found indoors, form the basis of the ranking.  These data are combined with health-based
indices (i.e., Risk-Based Concentrations, or RBCs, as defined in the TSD) to obtain ranking indices for
both acute and chronic effects.
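
       For illustration only, the following minimal sketch (in Python, using hypothetical chemical
names, concentrations, and RBC values rather than data from the draft document) shows the general
form of such a ranking calculation, with each ranking ratio computed as a measured indoor
concentration divided by the corresponding RBC:

# Illustrative sketch of the screening-level ranking calculation described above.
# All chemical names and numeric values are hypothetical placeholders, not data
# from the ORIA draft document.

chemicals = {
    "chemical_A": {"mean_indoor_conc": 12.0, "chronic_rbc": 3.0},   # ug/m3
    "chemical_B": {"mean_indoor_conc": 0.5, "chronic_rbc": 10.0},   # ug/m3
    "chemical_C": {"mean_indoor_conc": 4.0, "chronic_rbc": 4.0},    # ug/m3
}

# Ranking ratio = mean indoor concentration / RBC; larger ratios rank higher.
ranking = sorted(
    ((name, d["mean_indoor_conc"] / d["chronic_rbc"]) for name, d in chemicals.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for rank, (name, ratio) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: ranking ratio = {ratio:.2f}")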

       The ranking analysis will allow ORIA to identify those indoor pollutants that may present a
greater risk indoors (based on the available data), and then focus risk reduction efforts on the greatest
opportunities for reducing  risks through voluntary, non-regulatory risk management approaches.

2.2 Charge

       a)     Is the overall methodology suitable for the purposes of the ranking analysis (i.e.,
              development of an "order-of-magnitude," screening-level ranking and selection of key
              air toxics indoors)?

       b)     Are the criteria used to select the monitoring  studies for the analysis appropriate? Are
              the studies chosen for the ranking analysis suitable, and are there other studies that you
              believe should be included in this analysis? Were the methods used to select and
              statistically analyze the data within the studies useful to the analysis?

       c)     Is the methodology for selection of the "risk-based concentrations" (based on that
              presented in the Technical Support Document for the National Air Toxics
              Program/Urban Air Toxics Strategy) useful in the context of this analysis?

       d)     How well  have we described and addressed the adequacy, limitations, and uncertainties
              of the analysis, including:

               1)     Incomplete data on indoor concentrations and hazard/risk indices
              2)     Difficulties in determining the representativeness/accuracy of the "typical" levels
                      indoors

-------
3)     The use of short-term monitoring data to represent chronic exposure periods
4)     Issues related to the age of the data
5)     Variations in the methods used by the various agencies to arrive at the health
       indices, which are the basis for the "risk-based concentrations?"

-------
                              3  DETAILED RESPONSES
3.1 Suitability of the Overall Methodology for the Ranking Analysis

       The first element of the Charge asked "Is the overall methodology suitable for the purposes of
the ranking analysis (i.e., development of an "order-of-magnitude," screening-level ranking and
selection of key air toxics indoors)?" The response to this issue is divided into two sections:

  3.1.1.  Is the methodology suitable for the purposes of a screening-level ranking?

       The proposed approach could provide "order-of-magnitude" type rankings, and the Joint
Committee agreed that the incorporation of both exposure and toxicity measures was appropriate. The
Joint Committee notes that there are uses for quick screening tools that utilize surrogates for exposure
and associated risk. However, it must be clearly noted that such screening tools themselves do not
assess exposure or risk. Therefore, the Members felt it is critical that the report clearly indicate the
limited circumstances under which it is appropriate to apply the tool, as well as examples of when it
would be inappropriate (as are discussed below). As a general comment, we might note that, as it is
currently applied, the document's title is too general; a more accurate title to the report in its current
form would be "Ranking Selected Indoor Organic and Metallic Air Toxics."

       Moreover, the document should be clearer about how well an uncertain surrogate for risk
performs in attempting to rank pollutants with respect to "real" risk.  Presumably, an ideal ranking
would rank highest those pollutants for which complete abatement would produce the greatest benefit in
reduced cancer and non-cancer health effects in the U.S. population. No one really knows what these
"real" risks are, so we use quotation marks and think of risk instead as what a state-of-the-art unbiased
risk assessment would estimate. The quantitative nature (and the overall quality) of the ranking may
then consequently degrade and become more qualitative in nature as the risk assessment is  simplified by
ignoring  some of the parameters of risk.  Such "lost" parameters would typically include the number of
people exposed at each level of exposure and average or typical concentration levels.  If the ranking
index changes substantially from rank N to rank N+1 in comparison to the uncertainties in the data and
the factors by which exposure differs from  concentration, then those uncertainties and simplifications
will have relatively little impact on the ranking. Otherwise,  the ranking may have very limited utility.
Nevertheless, even a preliminary, uncertain and unstable ranking system will usually be preferable to no
ranking system at all (possibly leading to a random choice  of pollutant for study) or a system that
depends  on the chemical-of-the-week syndrome or some  other non-risk based set of criteria.
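
       To illustrate the point about rank stability, the following minimal sketch (in Python, with
hypothetical ranking indices and an assumed factor-of-ten uncertainty, neither drawn from the draft
document) checks whether the gap between adjacent ranking indices exceeds the assumed uncertainty:

# Hypothetical ranking indices (concentration / RBC), ordered from highest to lowest.
ranking_indices = [30.0, 25.0, 2.0, 1.5, 0.01]

# Assumed multiplicative uncertainty on each index; a factor of 10 is used here only
# to mirror an "order-of-magnitude" screening exercise.
UNCERTAINTY_FACTOR = 10.0

for n, (higher, lower) in enumerate(zip(ranking_indices, ranking_indices[1:]), start=1):
    gap = higher / lower
    robust = gap > UNCERTAINTY_FACTOR
    print(f"rank {n} vs rank {n + 1}: gap = {gap:.1f}x, "
          f"order robust to a factor-of-{UNCERTAINTY_FACTOR:g} uncertainty: {robust}")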

       The method makes no estimate of the potential population exposures (e.g. numbers of people)
nor for the frequency or duration of exposure. Duration of exposure is potentially important.  Some
indication of the likely ranges of exposure in the population would make the ranking more useful -
perhaps by including a measure of the range  of body burdens in the ranking process.

                                              7

-------
       EPA combined carcinogens and non-carcinogens together in the ranking of chemicals because
of a stated need to set priorities for all of the compounds, regardless of the endpoint used. The Joint
Committee recognizes this need, but recommended that it may still be useful to create and present a
separate chronic Risk-Based Concentration (RBC) list for non-carcinogens and carcinogens. First, the
risk assessment approaches for carcinogens and non-carcinogens are quite different.  Second,
separating non-carcinogens from carcinogens will provide more focus for chemicals that have important
non-carcinogenic effects that could be swamped out by combining carcinogens and non-carcinogens,
even when using the 10⁻⁴ risk level.

       Agents have been identified using 10 different studies, chosen as including measurements
representative of "typical" concentrations of indoor pollutants. However, the analytical method chosen
for a given study determines which subset of indoor pollutants is measured.  For example, although all
of the indoor environments sampled are expected to contain pesticides, only two studies actually
measured indoor pesticides (EPA, 1990; Gordon, 1999). These studies were designed to sample,
detect and quantify pesticide levels; the others were not. An analogous statement applies for polycyclic
aromatic hydrocarbons (Sheldon 1992b) or metals (Clayton 1993). Consequently, not all known
indoor pollutants are captured by these  ten studies; only those that can be measured by the particular
analytical procedures employed were detected.  Not only do different studies capture different
pollutants, but even taken together these ten studies miss  certain pollutants known to be present.  For
example, pyruvic acid  is a human bioeffluent (208 mg/day/person; NRC, 1992)  and will be present in
any indoor environment that contains people. Yet none of these ten studies reported concentrations for
pyruvic acid; none of them were designed to sample and quantify this compound. Pyruvic acid is not
expected to be a human health concern at typical indoor levels, but other undetected/unreported
pollutants are less benign.  Such pollutants include small, unsaturated aldehydes, certain highly oxidized
compounds, thermally  sensitive compounds, and short lived, highly reactive species that are not readily
detected by analytical methods routinely applied to indoor air (Weschler and Shields, 1997a; Wolkoff
et al., 1997).  Other examples of potentially important toxicants include acrolein, methacrolein,
butadiene, peroxyacetyl nitrate, brominated ethers, Criegee biradicals, the hydroxyl radical (Weschler
and Shields, 1996;  1997b) and methyl peroxy radicals. Given the above discussion, the document
should be revised to  make it clear to the reader that lack of data or measurements for a given
agent means only that data were not available or were not evaluated, not that the agent is
considered to be of lesser (or greater) risk.

       The Joint Committee recognized the limitations of the existing data and further noted that this
exercise is really a ranking of those agents that have already been sampled and chemically analyzed.
This implies that somehow these substances were already determined to have some level of concern in
the indoor environment and that others are not of concern. In point of fact, other potentially important
agents have not been determined because of difficulties in analytical methodology or because they were
simply (and understandably) not addressed by the available studies, which were done for purposes other
than comparative rankings.

-------
       The reliability of this method is entirely dependent upon the reliability of the underlying data for
both exposure and risk based concentrations (see below for further discussion of reliability of data
sources).  Data were available that would permit estimation of a rank value for only 59 of more
than 1000 potential indoor air pollutants.  In developing this method, the available studies were
reviewed.  Only a limited number of studies were of sufficient quality to use for this purpose (more than
50 studies were discarded). For some of the agents, there was inadequate indoor air monitoring (or the
substance was detected less than 10% of the time). Much of the data are relatively old and may not be
relevant to current indoor air pollutants. For example, the data on pesticide levels are more than 10
years old, and the EPA-approved uses for these chemicals have changed dramatically during that
period. Many residential uses of those pesticides are no longer permitted, and, at the same time, new
substances have been approved. It should also be noted, however, that many of these agents are very
long-lived in the environment, and measurable levels may persist in houses that have been treated with
them for years to decades after the last treatment  (Delaplane and Lafage, 1990).  Therefore, the data
on these insecticides, although 10 years old, are not as irrelevant as they might first appear,  although,
ideally, one would like to know the persistence of each such agent.  Other examples include
chlorofluorocarbons, which are being phased out as a consequence of the Montreal Protocol, and
trichloroethylene, whose use has declined because of both health concerns and the Montreal Protocol.

       The sources of indoor air toxics (outdoor or indoor sources) drive consumer risk and exposure
reduction response, but this model does not incorporate any measure of source-driven exposure.  It
may also be that the type of building (e.g., office, residence, school) is as important as other parameters
and that the rankings would be more useful if the data were analyzed in terms of specific building types.
From a purely biological standpoint, the human body does not artificially divide exposure between
indoor and outdoor sources, and it may be most appropriate to consider total potential exposure
without distinction of the indoor/outdoor source.  Some available data on personal exposures should be
used to test the rankings, e.g., where there is additional information, do we reach the same or different
rankings?

  3.1.2  Is the methodology as described suitable for the "selection of key air  toxics indoors"?

       The suitability of the method for assessing "air toxics" is dependent on the definition of "air
toxics."  The Joint Committee notes that many airborne substances (including biologicals, radon and
particulates) found in the residential environment are excluded from the current ranking method. The
report must define "air toxics" in the restricted context of this methodology, and also explain why
biologicals, radon and particulates are not included.  Ideally, these important residential pollutants
should be placed in the proper context (and most likely included in the ranking analysis). It appears to
the Joint Committee that the methodology would be equally applicable to all residential pollutants.
Alternatively, the scope could be redefined to convey the more limited class of substances that are
ranked.

-------
       The overall methodology does not adequately account for the fact that the indoor
concentrations of some "key" pollutants are marginally characterized. For example, most of the
pesticide data are from just one study, conducted in two cities (EPA 1990). It addressed only a limited
subset of the housing stock, sampled between 1986 and 1988 before some of these pesticides were
withdrawn from commerce.  This one study yielded 6 of the top 16 compounds in Figure C7 (indoor
mean/chronic case 1 Risk Based Concentration ) and 6 of the top 14 compounds in Figure C13
(indoor-outdoor mean/chronic case 1 RBC).1

       Although the referenced studies span a large range of chemicals, they do not include
most of the identified indoor chemicals of concern. A number of indoor pollutants that have
been measured repeatedly and are known to be important are not included in this "Ranking."
These include: carbon monoxide, radon, asbestos, fine particulate matter, nitrogen oxides,
ozone, and selected compounds associated with environmental tobacco smoke.  Similarly,
some broad classes of chemicals, such as residential pesticides, have only limited, and
outdated representation. Although these substances may have been omitted from this
ranking by design, the Joint Committee feels that it would be instructive to apply the ranking
method to these "common" indoor air pollutants, if only to provide a set of benchmarks for
understanding the rankings for the other substances.

       The presentation of results in the report was admirably clear and straightforward. However, for
chemicals for which data are limited, the Joint Committee recommends that, in the Figures (4.1, 4.2,
and 4.3), an alternative symbol (other than the one for "Mean") be used when there is only one study.
This is the case for metals (Clayton 1993), for pesticides (with the exception of chlorpyrifos and
diazinon) (EPA 1990), and for polycyclic aromatic hydrocarbons (Sheldon 1992).

       The degree to which the data are nationally representative is critical. This issue includes
geographical representativeness as well as representativeness of the target populations. Of particular concern to the Joint
Committee is the need for unique rankings for exposures to children, since children have different
activity patterns that need to be considered. There should be some consideration of those chemicals
that may have a higher potential for exposure for children (e.g. substances preferentially found in
carpets). (Further comments about special consideration of children's exposures are provided in
section 3.5 of this report.)

       The overall methodology for ranking the chemicals involved determining a risk based
concentration for cancer and non-cancer endpoints. The risk based concentrations were obtained from
recognized sources such as EPA IRIS (Integrated Risk Information System), EPA's Acute Exposure
Guideline Levels, the American Industrial Hygiene Association, etc.  Although a flowchart that
prioritized these sources was consistently applied for all the  chemicals, the actual values selected came
        1 Only chlorpyrifos and diazinon are reported in Gordon 1999; all of the other pesticides come from EPA,
1990.

                                             10

-------
from variable sources with different levels of peer review and reliability, different approaches in
selecting the most sensitive endpoint of concern and different application of uncertainty factors.  The
difference in reliability and consistency of risk management decisions within and across these different
organizations can have an important impact on the relative ranking of chemicals. In addition, it is
unclear to what extent severity of effect is taken into account in deriving the risk-based
concentrations.  The Joint Committee recognizes the difficulty of addressing these limitations, and, as
stated above, advances it as an ideal.  Nevertheless, an important step forward toward achieving this
ideal is to make sure that this report provides the critical factors that describe how the risk based
concentrations were derived. At a minimum, the Joint Committee recommends that for non-cancer
endpoints, the report tabulate the critical endpoint, the type of study (e.g. dog chronic, rat teratology,
human study), the Lowest Observed Adverse Effects Level (LOAEL) and No Observed Adverse
Effects Level (NOAEL), and a brief explanation of the uncertainty factors that were applied (e.g., 10
intraspecies, 10 interspecies, 5 subchronic to chronic). For cancer endpoints, a brief description of the
tumor type and  study used, as well as the unit risk should be included.
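
       For illustration, the sketch below (in Python) shows the general form of such a non-cancer
derivation, dividing a point of departure by the product of the applied uncertainty factors; the NOAEL
and uncertainty-factor values are hypothetical and are not taken from the draft document or from IRIS:

def reference_concentration(noael_mg_m3, uncertainty_factors):
    """Derive a non-cancer reference (risk-based) concentration by dividing the point
    of departure (here a NOAEL) by the product of the applied uncertainty factors."""
    total_uf = 1.0
    for factor in uncertainty_factors.values():
        total_uf *= factor
    return noael_mg_m3 / total_uf

# Hypothetical example: rat chronic inhalation NOAEL of 5 mg/m3 with 10x interspecies,
# 10x intraspecies, and 3x database uncertainty factors.
ufs = {"interspecies": 10, "intraspecies": 10, "database": 3}
print(reference_concentration(5.0, ufs))  # approximately 0.017 mg/m3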

       In summary, the Joint Committee concludes that the method is suitable for initial screening-level
ranking of selected toxic chemicals, but the participants are concerned about important omissions
associated with  the approach. The most serious problem seems to be omissions in the ranking of
numerous toxicants of concern (e.g., "stealth" and criteria air pollutants listed above). These are due to
limitations in the available data used to complete the ranking, which are in turn due to limitations in the
analytical methods, sampling approaches, and/or toxicological assessments.  The biases caused by
these limitations must be addressed. The most important application of this tool may well be to define
data gaps, so that better data can be generated in the  most important areas. Furthermore, the ranking
method can be improved by incorporating some indication of the likely ranges of exposures measured
indoors. It would be helpful to have a table that lists categories of information available for each of the
ranked chemicals in order to identify individual  chemical data gaps.

3.2 Use of Studies for the Ranking Analysis

       The second Charge element asked "Are the criteria used to select the monitoring studies for the
analysis appropriate? Are the studies chosen for the ranking analysis suitable, and are there other
studies that you believe should be included in this analysis? Were the methods used to select and
statistically analyze the data within the studies useful to the analysis?" These three inter-related
questions are addressed separately below:

  3.2.1 Are the criteria used to select the monitoring studies for the analysis appropriate?

       The three criteria are listed on page 4 of the draft report:

       a)      Results presented were representative of typical concentrations in indoor non-industrial
               environments. Studies were not selected that contained monitoring data from buildings

                                              11

-------
               chosen because they had indoor air quality complaints, contained specific chemical
               sources (e.g., smoking or specific products or materials), were located near known
               outdoor sources (e.g., university laboratories or mining sites), etc.

       b)      Reasonably high confidence in validity of results, based on sample and analysis
               methods, and quality assurance procedures.

       c)      Data are of type and format suitable for inclusion in the risk ranking matrix.

       These criteria are consistent with the objective of the report.  However, they need to be much
better defined. In addition, the ORIA should discuss how the Building Assessment Survey and
Evaluation (BASE) and School Intervention Studies (SIS) studies, which have not been published, meet
the criteria established for the literature studies. By  improving the discussion of the criteria used by the
EPA to select studies, the Agency can be much more specific about what they want to rank and, more
important, what they think they can (or cannot) rank.

       The first criterion defines the breadth of the approach.  Although the report identifies "typical
concentrations in indoor non-industrial environments" as the focus of the ranking, several other factors
should be included when using "representative" as a selection criterion.  At a minimum, the first criterion
should specify where (urban regions, agricultural regions, the contiguous U.S., ...); who (adults,
children, male, female, a probability based sample of the non-institutionalized U.S. population, ...);
when (retrospective analysis, prospective analysis, long-term average, short-term average, ...); and for
what chemical(s) (all chemicals, measurable chemicals, volatile organic compounds, metals, pesticides,
...) and media (indoor/outdoor air, personal air, house dust, surfaces,  foods, ...).  This is also the  place
to identify the exposure pathways that are included in the ranking process (inhalation of indoor air) and
which are excluded (dietary and non-dietary ingestion, dermal, all outdoor pathways and indoor
pollutants of outdoor origin).

       Additional explanation is also needed regarding the studies that were not selected. The report
states that studies were not selected that included monitoring data that "contained specific chemical
sources (e.g., smoking or specific products or materials)." The risk agents that were excluded should
be clearly identified in the document along with the reason for exclusion.  In some cases,  the chemicals
may have been excluded because a separate effort was made to specifically address these chemicals
(e.g., radon). If so, this should be clearly stated and referenced. In other cases, a few sentences are
needed to clarify some apparent discrepancies in selection of literature studies.  For example, the report
states that monitoring data that contained specific chemical sources such as smoking were omitted, yet
several of the literature studies that were included clearly measured chemical exposure in households
that had smokers.  In addition, the BASE study evaluated data from 100 randomly selected office
buildings which did not strictly follow the described selection process for literature studies.
                                               12

-------
       In defining the second criterion of what contributes to a "reasonably high confidence in validity
of results," the Agency should include the level of peer review for the study/data. This recommendation
is in addition to the adequacy of the sample and analysis method and quality assessment/quality control
procedures that are already specified as important. The Joint Committee did not examine the BASE
and SIS studies, but the revised ranking methodology document should include a discussion noting the
type of peer review to which these studies were subjected.  Even though both of these studies were
published as EPA reports, it is imperative that the full data set be made available so they can be
independently evaluated.

       For the third criterion, it would be helpful to state exactly what format is needed and what types
of data transformations might be acceptable. For example, the arithmetic mean is identified in the
report as the most desirable measure of central tendency. However, a number of studies only report
the geometric mean and geometric standard deviation.  This criterion might specify that for these cases,
the EPA will assume that the data are lognormally distributed and use the reported geometric mean and
geometric standard deviation to estimate the arithmetic mean. EPA indicated in the presentation at the
public meeting that they conducted a comprehensive literature search first and then narrowed down the
number of studies from 65 to 10.  EPA should explain this process in the report and list the  studies that
were considered and failed to meet the selection criteria in an appendix or at least report the years that
were searched. Sufficient detail about how and when the search was performed should be provided so
that when/if the study is updated then the search effort won't need to be duplicated.
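
       For illustration, assuming the underlying concentrations are lognormally distributed, the
conversion from a reported geometric mean and geometric standard deviation to an estimated arithmetic
mean could be handled as in the sketch below (in Python; the example values are hypothetical):

import math

def arithmetic_mean_from_lognormal(geometric_mean, geometric_sd):
    """Estimate the arithmetic mean of a lognormally distributed quantity from its
    reported geometric mean (GM) and geometric standard deviation (GSD).
    For a lognormal distribution, AM = GM * exp((ln GSD)**2 / 2)."""
    return geometric_mean * math.exp(0.5 * math.log(geometric_sd) ** 2)

# Hypothetical example: a study reports GM = 2.0 ug/m3 and GSD = 2.5.
print(arithmetic_mean_from_lognormal(2.0, 2.5))  # approximately 3.0 ug/m3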

  3.2.2 Are the studies chosen for the ranking analysis suitable, and are there other studies
that you believe should be included in this analysis?

       From the exposure standpoint, the suitability of the studies depends on the overall purpose of
the analysis, which should be spelled out in the study selection criteria as discussed above.  If the
question is whether the studies provide an informative case for demonstrating the ranking methodology
with a limited  set of chemicals, then the  selected studies are adequate. However, if the goal is to
provide a ranking across the universe of chemicals in the indoor environment then the selected studies
clearly fall short of the mark and the results are inappropriate. Although it ultimately depends on how
"representative" is defined in the study selection criteria, a set of studies that represent a probability-
based sampling of all indoor non-industrial environments in the U.S. during the past, present or future
does not exist and will almost certainly not exist any time soon.  Given the severe limitations of direct
monitoring data, ORIA should consider supplementing the approach with a "screening level" indoor fate
and exposure model to draw upon other sources of information (i.e., emissions data, chemical use data,
activity data, ...).

       Care should be taken to ensure that the "compound" identified in the monitoring studies matches
the "compound" addressed in the ranking analysis studies. This statement applies to the metals, not the
airborne organic compounds.  In the case of the metals, the speciation is very important: oxidation
state and associated ligands (e.g., in the case of transition metal complexes, the organic compounds

coordinated to the metal center). For example, manganese (Mn) has been identified in the appropriate
monitoring study (Clayton et al., 1993) by x-ray fluorescence. This analytical method provides no information
on the actual chemical(s) that contain Mn.  Mn has significantly different bioavailability in its different
chemical forms.  Without knowing Mn's speciation in indoor air, it is not possible to properly match its
airborne concentration to a risk.

  3.2.3  Were the methods used to select and statistically analyze the data within the studies
useful to the analysis?

        A limitation is that monitoring in several studies occurred during a very short
period, yet these values are used as lifetime daily exposure levels. Therefore, the mean value used for
chronic exposure could be an overestimate or an underestimate depending on how representative the
sampling period is of average yearly exposure for the chemical in question. This problem can only be
corrected by obtaining better, probabilistically based data that take into account regional and seasonal
differences.  These limitations aside, the mean is a more stable estimate than the 95th upper limit for
purposes of determining relative rank because the mean reflects the central tendency and is less
influenced by the range of values in the data set.

        The treatment of uncertainty in the report is somewhat inconsistent. Although the ranking ratios
are calculated and plotted for each data source providing a range of values, information about the
variance associated with these measurements for each building/study is lacking. In addition to variability
across similar building types, the sources,  distribution processes and removal mechanisms for indoor
pollutants will vary between residences, office buildings and schools (this was noted in Section 6.1 of
the report). However, this variability/uncertainty is not captured in the ranking ratio.  Even if the EPA
assumes, for policy reasons, that there is no uncertainty in risk-based concentration (RBC), uncertainty
reported for the measured concentrations  can and should be propagated through  the calculations to
provide estimated confidence intervals for the ranking ratio.  (See section 3.4 of this report for a full
discussion of uncertainty issues.)
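
       As a minimal illustration of such propagation (assuming, for simplicity, that the RBC is treated
as fixed and that only the reported uncertainty in the mean concentration is carried through), a
confidence interval for the ranking ratio could be sketched as follows; the input values are hypothetical:

    import math

    def ranking_ratio_ci(mean_conc, se_conc, rbc, z=1.96):
        """Approximate 95% confidence interval for the ranking ratio
        (mean concentration / RBC), propagating only the uncertainty
        reported for the measured concentration."""
        ratio = mean_conc / rbc
        rel_se = se_conc / mean_conc            # relative standard error
        lower = ratio * math.exp(-z * rel_se)   # multiplicative interval keeps bounds positive
        upper = ratio * math.exp(z * rel_se)
        return ratio, lower, upper

    # Hypothetical inputs: mean = 12 ug/m3, standard error = 4 ug/m3, RBC = 30 ug/m3
    print(ranking_ratio_ci(12.0, 4.0, 30.0))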

        EPA used different values for means, undetected samples, and upper limit primarily because the
various  studies reported data differently. If the primary goal is to determine relative ranking of
chemicals, then it would seem that consistency of values used would be desirable. There were different
opinions among  SAB members as to the relative contribution of this difference to the ranking in light of
other uncertainties. As a specific example, one-eighth of the limit of quantification was assigned to
undetected samples in some cases and one-half of the limit of quantification in others.  The rationale
was to use values that were internally consistent with each of the studies.  It is possible that the value
used for non-detects could make a significant difference in the calculation of exposure and hence in the
risk-based ratio, especially for those chemicals with large numbers of non-detects. How much of a
difference this makes depends on the risk-based concentration for each chemical. In other words, the
contribution of the variability resulting from difference in assignment of values for non-detects is not
simply 4-fold.  Until a sensitivity analysis is conducted, it is difficult to determine how significant these
differences would be to the ranking analysis. Given that there were only 10 literature studies that
required follow-up, it would have been possible for EPA to obtain the raw values in order to conduct a
uniform analysis.  Since EPA will be using these studies as a basis for recommending action, it would be
prudent to have the data supporting these literature studies in hand and to undertake the above sensitivity
analyses.
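
       A simple version of such a check can be sketched as follows (Python, with hypothetical
monitoring values); it compares the estimated mean exposure when non-detects are assigned
one-eighth versus one-half of the limit of quantification:

    def mean_with_substitution(detects, n_nondetect, loq, fraction):
        """Arithmetic mean when each non-detect is replaced by fraction * LOQ."""
        values = detects + [fraction * loq] * n_nondetect
        return sum(values) / len(values)

    # Hypothetical data set: 4 detected values (ug/m3), 16 non-detects, LOQ = 1.0 ug/m3
    detects, n_nd, loq = [2.0, 3.5, 1.2, 5.0], 16, 1.0
    for f in (0.125, 0.5):
        print(f, mean_with_substitution(detects, n_nd, loq, f))

In this contrived example the two substitution rules change the estimated mean by well under a factor
of two even though the substituted values themselves differ four-fold, which illustrates the point made
above.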

       The difference between indoor and outdoor concentrations is commonly used as a surrogate for
identifying indoor sources. Joint Committee Members expressed concerns about using this simplistic
model, which, as indicated in the report, can overestimate the influence of outdoor sources, resulting in a
lower ranking for a given indoor pollutant. For the chemicals included in this ranking, using the
indoor/outdoor difference  did not seem to significantly alter the ranking for the chemicals in the upper
20%. Therefore, to reduce the chance of removing a potentially important chemical from the list, we
recommend that all of the chemicals measured in the indoor air be included in the ranking process but
those suspected of being predominantly of outdoor origin should be flagged or identified in the text.
Characterizing the source of the pollutant is important, but it is too complicated and poorly understood
to include in the "order-of-magnitude" screening method presented here. Removing the indoor/outdoor
results would also have the benefit of reducing the number of outcomes to three (Chronic/Cancer;
Chronic/Non-Cancer; and Acute/Combined) rather than six.

       One of the key strengths of this report is that it highlights the limitations of existing monitoring
data.  To take full advantage of this strength, the chemicals that  were considered initially, but removed
from the ranking process should be identified in a separate table or an appendix. If a chemical was
removed from the ranking because of inadequate monitoring data or a lack of toxicity data, then that is
very useful information, and it should be noted.  Detection of a chemical less than 10% of the time may
be an indication that exposure to that chemical is episodic, but real, so completely removing these
chemicals may be misleading both to the decision maker and the public, particularly when these are low
frequency, high concentration events and the outcome of concern is acute.

       There seems to be an implicit emphasis on volatile organic compounds and adults in that only
indoor air concentrations are considered. Expanding the ranking approach to include surrogate data for
other exposure pathways (i.e., house dust and surface wipes related to non-dietary ingestion and
dermal contact by children) would improve the way semi-volatile chemicals and metals are considered.
However, including semi-volatile organic compounds and metals appropriately would significantly
increase the complexity of the ranking procedure (semi-volatile organics are present in the gas phase as
well as in the condensed phase, e.g., on the surfaces of particles and carpets, and are partitioned between
these two phases). If this is beyond the scope of the report, then it should be noted that a number of
exposure media and exposure pathways were excluded from the analysis (see discussion of study
selection criteria).

       As previously noted, it would be helpful to include a sensitivity analysis to identify the decisions
and data gaps that have the greatest influence on the ranking ratios. A range of sensitivity analysis
methods are available (Saltelli and Chan, 2000), and many of them can be used without a significant
investment of time and resources.

3.3 Methodology for Selection of the "Risk-based" Concentrations

        The Joint Committee was generally satisfied that the methodology is reasonable for the
purposes of ranking.  The use of a level of cancer risk equivalent to exposure at the reference dose is a
rational way of making cancer and non-cancer risk analyses comparable. The use of two risk levels
(10"6 and 10"4) is a reasonable way of showing the sensitivity of the analysis to risk management
preferences. EPA rarely uses risk levels outside that range as criteria for the acceptability of exposure.
The use of a hierarchical scheme of data preference is commonplace for ranking systems.  There were
a few concerns and several suggestions provided by the Committee.
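
       As a minimal illustration of how the two cancer risk levels translate into risk-based
concentrations under the assumed low-dose linearity (the inhalation unit risk used here is hypothetical,
not a value taken from the draft report):

    def cancer_rbc(unit_risk_per_ug_m3, target_risk):
        """Risk-based concentration (ug/m3) for a target lifetime cancer risk,
        assuming low-dose linearity (risk = unit risk x concentration)."""
        return target_risk / unit_risk_per_ug_m3

    # Hypothetical unit risk of 1e-5 per ug/m3
    for risk in (1e-6, 1e-4):
        print(risk, cancer_rbc(1e-5, risk))     # 0.1 ug/m3 and 10 ug/m3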

       Figures 4.1 through 4.3 in the draft report were very helpful in reducing complicated
procedures to a straightforward format.  Further details explaining the methodology presented in these
figures for generating RBCs, along with operational definitions for key terms such as RBC, are needed.
Requiring readers to consult reference documents to understand these essential terms is unwieldy.

        Overall, the RBC seem appropriately conservative given that the purpose of this process is to
provide a screening-level ranking of indoor air toxics.  Preference was given to more protective risk
estimates rather than less protective exposure limits like occupational exposure limits, which are not
designed with the most sensitive individual or with the potential for lifetime exposure in mind.  On the
other hand, many of the sources on which the RBC were based are likely to have used toxicology
studies on adult animals. If developmental toxicity studies were  included, however, they are apparently
traditional developmental toxicology studies in which embryos are examined towards the end of
gestation.  These studies do not evaluate more subtle developmental toxicity such as effects on the
reproductive, immune, and nervous system that are manifested later in life. Thus, it could not be readily
determined whether the RBC was based on data or risk management decisions that took into
consideration potential differences in susceptibility between children and adults.  The report should
include a table that lists the critical endpoint, study type and species, and brief description of uncertainty
factors or unit risk used to derive the RBC. EPA should also address how the RBC, and ultimately the
rank order, is or is not relevant to children. Given that children and pregnant adults may be the most
susceptible populations in the indoor environment, additional consideration should be given to the
impact of the rankings on these two groups.  Almost all the Members of the Joint Committee find merit
with this concept, i.e., providing a dual  ranking priority system (one designed for susceptible
populations and another for less susceptible groups). Two Members disagreed, however, noting that
the derivation of the RBC takes into account sensitive sub-populations and is sufficiently conservative
for this order-of-magnitude ranking scheme, and that further analyses of specific chemicals should
evaluate effects on sensitive populations.

       A quality control check was performed on four chemicals. Two were straightforward,
because RBC from the Integrated Risk Information System (IRIS) were used. When RBC were
gathered from other databases, the process was not easily reproduced. One possible explanation for
this lack of replication may be related to the frequent updates of the California EPA (CalEPA)
database. Thus, if the dates on which the RBCs were abstracted from the databases are provided as
footnotes in Table B3, this confusion will be avoided. One or two examples outlining generation of the
ranking ratios from beginning to end are needed to ensure better understanding.

       The Joint Committee was concerned with the dated information on IRIS.  If California
Environmental Protection Agency (CalEPA) databases are a more current data source, then perhaps
the order of preference should be altered.  However, the inherent policy decisions in both databases
should be evaluated before making such a  decision. Information as to the quality control checks
already completed by the EPA on the entire methodology should be provided.

       Concern was expressed that use of a purely hierarchical selection process when there are
several available RBCs seems to waste information. Why not compare the different available RBCs
and make an assessment as to the weight of the evidence? Criteria could include how up-to-date the
studies are that were used to determine the RBCs, what assumptions were made in converting animal
data to human data, etc. The discussion of limitations on page 19 addresses this  somewhat in that it
explains that for most compounds there was only one available RBC.  However, the example of
benzene (for which there were several RBCs) indicates a three-order-of-magnitude difference in RBC
from among four sources. The Joint Committee recommends that ORIA include an appendix showing
the different possible RBCs for those compounds for which there were multiple options, as was done in
the California Office of Environmental Health Hazard Assessment (OEHHA) Air Toxics Risk
Assessment Guidelines for cancer unit risk values. In this regard, the Committee also recommends that
the endpoint on which the RBC is based be included in the tables.

       Another issue identified concerned the question of why the ranking of sources for chronic and
acute RBCs changed compared to the Technical Support Document. The Joint Committee noted the
following changes:

       a)     For the acute RBC, Cal OEHHA Reference Exposure Levels have been moved down
              from second to fourth, and the American Industrial Hygiene Association Emergency
              Response Planning Guidelines (ERPGs) have moved from third to second.

       b)     For the chronic RBCs the  Cal OEHHA Reference Exposure Levels have been moved
              up and the EPA Health Effects Assessment Summary Table (HEAST) moved down in
              ranking. Which of these, if any, were derived with the general population, including
              more sensitive individuals, in mind?  Those factors would be the most appropriate to
              use for the current purpose.

       c)     National Institute for Occupational Safety and Health (NIOSH) Immediately Dangerous
              To Life and Health (IDLH) moved from fourth to third. For the NIOSH IDLH, has the
              value derived from dividing by 10 been compared to the acute one-hour mild values for
              compounds for which there are IDLHs, ERPGs, and Reference Exposure Levels
              available, to determine whether they are comparable?

       For carcinogens, the risk estimates that were given priority were derived using linear multistage
modeling, which assumes no threshold effects, and thus predicts higher unit risks than other models.
For extrapolation from animals to humans, doses were converted based on surface area (0.67 power of
body mass), rather than body mass. The former is the more protective approach.  Finally, for cancer,
the more protective 95% upper confidence limits rather than means were used. For non-carcinogens,
preference was again appropriately given to the more conservative risk estimates.  The EPA Reference
Concentrations (RfC), Agency for Toxic Substances and Disease Registry Minimal Risk Levels
(ATSDR MRLs), and Cal OEHHA Reference Exposure Levels were used for determination of chronic
non-cancer RBC. Most of these are derived by applying a standard uncertainty factor of 10 for
interspecies extrapolation and another factor of 10 for inter-individual extrapolation to the No
Observed Adverse Effect Level (NOAEL) for a chemical, resulting in a protective limit.  Combining
the cancer risk estimates and the non-cancer risk estimates is a good approach for a screening-level
process, and the use of two cancer risk levels permits capturing non-cancer chronic health
effects that would otherwise have been "swamped out" by using only the 10⁻⁶ risk level.
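
       The conventions summarized above can be illustrated with a minimal sketch (Python); the
numerical inputs are hypothetical, and the functions illustrate the general conventions rather than
reconstructing any agency's exact procedure:

    def human_equivalent_dose(animal_dose_mg_per_kg, animal_bw_kg, human_bw_kg=70.0):
        """Body-surface-area scaling (body weight to the 0.67 power): dose per kg
        scales as BW**(0.67 - 1), the more protective of the two conventions noted."""
        return animal_dose_mg_per_kg * (animal_bw_kg / human_bw_kg) ** (1.0 - 0.67)

    def reference_value(noael, uf_interspecies=10.0, uf_intraindividual=10.0):
        """Apply the standard 10x interspecies and 10x inter-individual
        uncertainty factors to a NOAEL."""
        return noael / (uf_interspecies * uf_intraindividual)

    # Hypothetical examples: a 10 mg/kg-day rodent dose (rat, 0.25 kg) and a NOAEL of 5 mg/m3
    print(human_equivalent_dose(10.0, 0.25))    # human-equivalent dose, mg/kg-day
    print(reference_value(5.0))                 # 0.05 mg/m3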

       Ranking is not sensitive to a consistent bias in health-based concentration criteria.  That is, if all
EPA unit risk factors are overstated by the same factor, then the pollutants will not be mis-ranked.
However, if health indices are inconsistently conservative (either within the EPA IRIS system or
across agencies), the potential for mis-ranking arises. This deficiency of using criteria with conservative,
but inconsistent, biases is well known to be a problem for ranking systems, but probably cannot be
avoided in the absence of a data set based on central or "best" estimates of toxicity criteria.
Furthermore, the rankings cannot be interpreted to assess absolute risk. These issues should be
discussed in the document.

       A voluminous amount of information was well summarized in Tables B1 - B9. These tables
were presented in a straightforward and easily interpretable manner, but they should include relevant
footnoting.  Apparent inconsistencies in the tables were not explained. For example, Table B1 lists four
studies for styrene, with four having indoor building data. One of the studies indicated (in Table B1 of
Daisey's 1994 article) that 12 buildings were studied. The frequency of detection is indicated as 88%,
but no number of indoor observations is listed.  These data appear inconsistent and confusing, but
probably can be explained easily with a footnote.  Also, another table might be added to summarize
each chemical, organized by the ranking ratio it achieved via each methodology. This new table (B10)
will assist the reader in assimilating the important information from tables B4 through B9 without having
to flip back and forth.

       Each ranking ratio methodology produced a different set of ranking ratios for the majority of the
chemicals.  The top ranked chemical, formaldehyde, was the exception, generating a rank of 1 on each
table. The rankings for certain specific air toxics were surprising to some Members, particularly for the
acute ranking.  For example, ethanol and acetone ranked 12 and 13 in Table B5, whereas acute
toxicity from these substances in indoor air seemed unlikely to these Members. The explanation
probably lies in the linearity implicit in the ranking, as it does not deal with thresholds of toxicity. Thus,
the high ranking of ethanol and acetone is being driven by airborne concentrations. Some comment on
this limitation of the rankings is desirable, as there was concern about the ultimate interpretation of the
process and the results by both scientists and consumers.

       In conclusion, the Joint Committee felt that the methodology for the selection of RBC was
reasonable for purposes of a preliminary screening level ranking for selected chemicals, but that the
limitations of the methodology could be better explained.  First, an appendix listing all the possible RBC
for each chemical derived from each of the different data sources should be added, allowing some of
the information lost by using a strictly hierarchical approach to selection of the RBC to be retained.
Second, a discussion of limitations in the toxicity studies on which the RBC were based should include
some indication that studies evaluating effects on sensitive subpopulations such as children and pregnant
women were probably lacking for most chemicals. Third, the endpoint on which each RBC was based
should be included in Table B3. Finally, the table should be modified so that readers can determine
what version of a given data set was used to generate a specific RBC.

3.4 Adequacy, Limitations, and  Uncertainties of the Analysis

       The Joint Committee first provides an answer to the general question of Charge 4 and then
addresses each of the more specific sub-questions posed by the Charge.

       Clearly, the adequacy of the analysis depends on how well it can serve its purpose. Limitations
and uncertainties will be more or less important depending on the decisions that will be influenced by
the results and the environment in which the decisions  are made.  It does not make sense to devote  too
much effort to improve the ranking system if that would significantly decrease the Office of Radiation
and Indoor  Air's (ORIA) resources for actually dealing with indoor air toxics. On  the other hand, if
ORIA's decisions will greatly impact those responsible for indoor air quality in residences, schools,  and
office buildings, then a flawed ranking can lead to serious mis-allocation of public resources.

       According to the request for review provided to the SAB, the draft document was developed
to help focus ORIA's efforts on "the most substantial risks" as EPA develops its indoor air strategy.
The document attempts to present an "order-of-magnitude", screening-level ranking using similar
methodology to that used to select key pollutants for the National Air Toxics Program/Urban Air
Toxics Strategy. EPA's indoor air strategy will likely use non-regulatory, voluntary incentives to reduce
risks from indoor pollutants.  The document itself states that its purpose is to "provide  a screening-level
prioritization scheme for air toxics indoors [to identify] those pollutants that may present a greater risk
indoors . . ."

       However, exactly what options will be prioritized remains unclear. Can ORIA develop a
control strategy for any indoor pollutant, or only those with more complete data sets? Is population risk
(in the sense of the annual incidence of debilitating health effects) the principal concern? How important
are pollutants that might not affect a large population, but would place disproportionately high risks on
a smaller population, such as the most highly exposed group or a vulnerable group such as children?
To what extent can ORIA gather more information to improve the ranking, or must it rely on existing
data? A ranking of research priorities would be different than a ranking of action priorities based on
current information.

       ORIA should be sure that the quality of the ranking system matches the needs of the uses to
which it will be put.  As it stands, the system only addresses that part of the universe of indoor air toxics
that are "under the lamppost" in the sense of having sufficient data available for ranking with the current
algorithm. The Joint Committee noted that use of default values or model results for missing data could
expand the universe to be ranked, but of course with correspondingly uncertain results. Such a strategy
could at least help identify those pollutants that could be important, and suggest where research might
have the greatest payoff. As it stands, the system is more useful as a screening exercise to identify
those pollutants that are not  likely to be high in risk relative to the highest ranking of the qualifying
pollutants. It may not be adequate to identify a few indoor air toxics that deserve significant resources
for development of a control strategy.

       With a few exceptions, the document adequately describes and discusses the major
uncertainties of the analysis in qualitative terms.  Improvements in the treatment that might enhance the
utility of the document and  its transparency to readers include:

       a)      A better statement about what constitutes adequacy, limitations, and uncertainties for a
                ranking system.  In the opinion of the Joint Committee, the key question is how often
                the Agency might focus on an indoor air pollutant that poses relatively low "real" risk at
               the expense of deferring attention to an indoor air pollutant with relatively high "real"
               risk.  (See our comments about risk-based ranking in section 3.1.1 of this report to
               understand  why the word "real" is in quotation marks.) Only limitations and
               uncertainties that lead to substantial mis-ranking are important in judging the adequacy
               of the ranking method and data.

       b)      Some discussion of quantitative measures of uncertainty is needed. Although the Joint
               Committee recognizes that the available data are not extensive and prevent easy
               quantitative characterization of uncertainty, the document could at least compare the
               typical uncertainty in average concentrations (as represented by the standard deviation
                on the mean concentration) with the range of ranking indices. For example, Figures
                C-7 to C-9 suggest that the ranking index varies from about 3x10² to 1x10⁻⁴ for the
                chronic Case 1 analysis, a range of over six orders of magnitude. If the uncertainties in
                the concentration data are indeed "order of magnitude" in the sense of being within a
                factor of 10 of the true population- and time-weighted average concentration, then that
                uncertainty would only change rankings by perhaps 10 places, and rarely would a
                pollutant ranked in the bottom third of the list actually deserve ranking in the top third
                (see the illustrative sketch following this list). Uncertainties of a factor of 10 in the RBC
                will have essentially the same impact on the quality of the ranking.  Of course, if ORIA
                can only address one or two of the indoor air pollutants at a time, the influence of
                uncertainty will be greater than if it can address 20% of the list at a time.

     c)      The Joint Committee is not entirely comfortable with the document's explanation of the
             superiority of monitoring data to model results. Models, if properly calibrated and
             validated, can sometimes compensate for deficiencies in monitoring data caused by
             changes in exposure (e.g., the cancellation of pesticide registrations mentioned), short-
             term vs.  long-term monitoring, etc.

     d)       The uncertainty section does not mention children or other subpopulations. It is
             important to describe how they are or are not included in the analysis.  The report does
             not provide sufficient information to determine whether the rank order is relevant for
             children.  At a minimum, the report should address this or consider it a limitation of the
             analysis.

     e)      The treatment of uncertainty in the report is somewhat inconsistent. Although the
             ranking ratios are calculated and plotted for each data source, thereby providing a
             range of values, information about the variance associated with these measurements for
             each building/study is lacking. In addition to variability across similar building types, the
             sources,  distribution processes and removal mechanisms for indoor pollutants will vary
             between residences, office buildings and schools (this was noted in Section 6.1 of the
             report).  However, this variability/uncertainty is not captured in the ranking ratio.  Even
             if the EPA assumes that there is no uncertainty in RBCs for policy reasons, uncertainty
             reported for the measured concentrations can and ideally should be propagated through
             the calculations to provide estimated confidence intervals for the ranking ratio.

     f)      Until a sensitivity analysis is conducted, it will remain difficult to determine how
             significant differences in the treatment of non-detects, the measure of central tendency,
             and other study design choices are to the ranking analysis. As noted earlier in this
             report, a range of sensitivity analysis methods is available, and many of them can be
             used without a significant investment of time and resources.
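
       To make the comparison suggested in item b) concrete, the following sketch (Python, using
simulated values rather than the draft's data) perturbs a set of hypothetical ranking indices spread
over roughly six orders of magnitude by multiplicative errors on the order of a factor of 10 and
reports how far individual ranks move:

    import random

    random.seed(1)

    # Hypothetical ranking indices spanning roughly six orders of magnitude
    n = 60
    true_index = [10 ** random.uniform(-4.0, 2.5) for _ in range(n)]
    true_rank = sorted(range(n), key=lambda i: -true_index[i])

    # Perturb each index by a multiplicative error (about 95% within a factor of 10)
    noisy_index = [x * 10 ** random.gauss(0.0, 0.5) for x in true_index]
    noisy_rank = sorted(range(n), key=lambda i: -noisy_index[i])

    noisy_position = {chem: pos for pos, chem in enumerate(noisy_rank)}
    shifts = [abs(pos - noisy_position[chem]) for pos, chem in enumerate(true_rank)]
    print("largest rank shift:", max(shifts), "median shift:", sorted(shifts)[n // 2])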

3.4.1  Incomplete Data on Indoor Concentration and Hazard/Risk Indices.

       The consensus of the Joint Committee is that the analytical methodology is appropriate but the
available data are definitely lacking relative to providing a screening level analysis for indoor air toxics.
It is clear that not all, and perhaps not even most, of the chemical species salient to human health risk
are included in the current database.  This limitation is born of the paucity of exposure and health effects
data. Thus the analysis is useful for a well-defined universe of specifically identified agents but cannot claim to
screen existing risk from indoor air pollutants in general. It is therefore important to recognize and
document more fully the fact that this ranking is flawed because not all relevant chemicals are included.
The document points to the lack of data for "thousands of chemicals," but perhaps this could be placed
in better context for what it means for the use of the results by this ranking method.  Similarly, there
should be a clearer explanation of why agents like radon and biologicals are not addressed.

       One approach to including more relevant air toxics into the analysis is to consult with those
within the EPA working on Design for the Environment projects.  This group has studied important
indoor air sources and has facilitated the development of the Wall Paint Exposure Model as a
state-of-the-science modeling tool that predicts the long-term time course of indoor air concentration
from paint concentration (EPA, 1999).

       The most challenging part of doing a more comprehensive analysis of indoor air toxicants will
be in the identification and characterization of the most important species.  General air monitoring in a
screening analysis for hundreds of volatile,  semi-volatile and oxygenated species would be very useful.
Several organizations have pioneered a number of techniques relevant to this area that may be of value
to the Agency.

       On the hazard/risk indices, a discussion of the specific methods used in developing hazard/risk
indices from the various sources and their inherent limitations and/or biases would be appropriate. The
use of a hierarchy is acceptable, once it can be shown that there is no systematic bias or that any such
biases are addressed.

  3.4.2  Difficulties in Determining the Representativeness/Accuracy of the "Typical" Levels
Indoors

       Representativeness and accuracy of the "typical" indoor levels are very important in identifying
those indoor pollutants that present substantial risks indoors. As noted earlier, this begs for a definition
of "typical" and "representativeness," because it is accepted that these measurements are not accurate.
It would appear that as many varied settings were used as available, e.g., residences, offices and
schools.  Combining these different data would produce a larger database and improve statistical
power, but it would make drawing a conclusion about "typical and representative" levels even more
difficult because the environments are so different.  Evaluating specific indoor settings separately would
be better for drawing conclusions about representativeness for a given setting (homes only, schools only, etc.).
Other than this, it should be made clear that these are simply attempts to rank indoor air concentrations
and make no claims about representativeness.

       Useful estimates of "typical" levels are possible, given a sufficiently large database of
representative subjects. This is essentially a statistical question; however, it is fairly obvious that the
limited data available in this work are not large enough to assure a high level of confidence in these
estimates, and perhaps confidence limits around the estimates will help.
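
       A minimal sketch of such confidence limits (Python, with hypothetical short-term measurements
from a handful of buildings) might look like the following; with so few observations the interval is
necessarily wide, which is itself informative for the ranking:

    import math
    import statistics

    def mean_confidence_interval(samples, t=2.0):
        """Approximate confidence interval for the mean of a small sample;
        t is a rough critical value (about 2 for a 95% interval at moderate n)."""
        m = statistics.mean(samples)
        se = statistics.stdev(samples) / math.sqrt(len(samples))
        return m, m - t * se, m + t * se

    # Hypothetical measurements (ug/m3)
    print(mean_confidence_interval([1.8, 2.4, 0.9, 3.1, 2.2, 1.5]))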

  3.4.3  The Use of Short-term Monitoring Data to Represent Chronic Exposure Periods

       Although the Joint Committee is satisfied that short-term measurements are reasonable to use to
represent long-term averages for the purposes of ranking, additional discussion of the possibility of bias
in the draft document, as well as suggestions for dealing with bias when it is identified, would be
welcome. For example, if all the studies for a particular pollutant were conducted in summer when
ventilation rates might be higher and indoor concentrations from indoor sources lower, then their
rankings would be biased low in comparison to a pollutant with more representative year-round
measurements.  A similar problem might exist if different LOQ strategies were employed for different
pollutants.

       Another concern is that some toxicants could have more significant effects depending on when (in
the life cycle of the exposed human) exposures take place, e.g., causing birth defects in the fetus or
neuro-developmental changes in infants. In this context, short-term measurements may not relate
accurately to significant exposures, unless the studies were looking specifically at sensitive populations
(see also the discussion of sensitive populations in section 3.5 of this report).

       Any attempt to propose action would require a more detailed evaluation of the relevance of the
timing of health effects based on exposure.

  3.4.4  Issues Related to the Age of the Data

       EPA acknowledges that the pollutant concentration data on which the ranking is based are
dated. This problem is inherent in any ranking situation in which the conditions of exposure are
changing with time. Therefore, the conclusions can stand, if used to define relative ranking, but in this
instance more than any other, validation is required to ensure that unwarranted action is not being
proposed.

       The results should only be used for relative ranking, i.e., to identify the "top" (those that
potentially present the most substantial risks) ranked or first tier chemicals versus ones ranked in the
middle or lower tiers.

       Although an order-of-magnitude ranking will work, using the results as a surrogate for absolute
risk is inappropriate because of the uncertainty in the database.

       The results should not be used for absolute ranking. Before implementing any action, EPA
should perform some measure of validation. This may range from a simple check to see that the relative
ranking makes sense to a quantitative assessment for chemicals proposed for control strategies.  Any
quantitative evaluation should build on existing data and previous evaluations.

       Finally, as the Agency is well aware, there are numerous studies under way that will develop
relevant data.  Examples include toxicity testing data being generated under the high production volume
program and exposure data being generated by the National Urban Air Toxics Research Center on
apportionment between indoor, outdoor and personal exposures. It is not proposed that the Agency
wait on these data to support the current strategy but that the strategy be subject to periodic (perhaps
annual) review to take advantage of published data.

  3.4.5 Variations in the Methods Used by the Various Agencies to Arrive at the Health
Indices

       The discussion of the influence of different approaches to health indices among the agencies
could be improved by noting whether there are consistent differences (e.g., are the ATSDR MRLs
consistently higher than EPA Reference Concentration when both agencies have published results for
the same pollutant?). If that were true, then a pollutant ranked with an ATSDR MRL might fall lower
on the list than a similarly risky pollutant ranked with an EPA Reference Concentration.

       The Joint Committee suggests that the hierarchy of RBC methods be "calibrated" by comparing
a number of materials that have RBC in all or most of the available methods. These RBC could then be
compared to each other to determine the level and type of any systematic differences between them. For
example, one could describe a distribution of ratios of estimates from one source to another, and the
parameters of the distribution might be useful in determining adjustment factors that would "even out" the
estimates from each source in a less biased ranking scheme.
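
       One minimal way to carry out such a calibration (Python, with hypothetical RBC values rather
than entries from the draft's tables) is to summarize the distribution of log ratios for chemicals that
appear in two sources:

    import math
    import statistics

    def log_ratio_summary(rbc_source_a, rbc_source_b):
        """Mean and standard deviation of log10(RBC_a / RBC_b) for chemicals
        present in both sources; a mean far from zero suggests a systematic bias."""
        ratios = [math.log10(rbc_source_a[c] / rbc_source_b[c])
                  for c in rbc_source_a if c in rbc_source_b]
        return statistics.mean(ratios), statistics.stdev(ratios)

    # Hypothetical RBC values (ug/m3) from two sources for four shared chemicals
    source_a = {"chem1": 3.0, "chem2": 0.5, "chem3": 20.0, "chem4": 1.0}
    source_b = {"chem1": 1.0, "chem2": 0.2, "chem3": 50.0, "chem4": 0.8}
    print(log_ratio_summary(source_a, source_b))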

       An important limitation  of the toxicity component of the ranking is that the severity of effect, or
level of concern, is not considered in this screening level ranking.  Taking severity into account is not an
easy task because it requires subjective assessment. However, at a very basic level, additional columns
or a new table should be added that identifies the critical effects that are the basis for the risk based
concentrations, the uncertainty factors  applied, and the unit risk for carcinogens.  It should also be
noted that the underlying assumption of life-time chronic exposure may not be appropriate for all
chemicals evaluated for chronic toxicity. A consideration of actual duration and level  of exposure can
make an important difference to the toxicological outcome and hence to whether the risk-based
concentration used is relevant.

       The differences among the sources for the RBC need to be more clearly stated rather than
referring to the Technical Support Document for Hazardous Air Pollutants (outdoors).  It is important
to recognize the inherent policy positions that are taken in each method and ensure that these are
explicitly noted.  An evaluation to show the level and direction of "bias" (i.e., does one database
consistently provide higher or lower values) would provide an additional basis for determining whether
overall the hazard/risk indices are consistent and provide meaningful results. The question to be
addressed is whether the different indices are supportive of each other or divergent and, if the latter,
whether there is a plausible, defensible reason.

3.5 Additional Issues

       The Joint Committee identified several issues and concerns not specifically addressed in the
Charge:

       a)     2,3,7,8-Tetrachlorodibenzo-p-dioxin was not in the tables but is referred to in the text.

       b)     EPA recently developed the National Air Toxics Assessment (NATA) and subjected it
              to SAB review.  It is a first cut at a risk assessment of air toxics from outdoor sources.
              Interestingly, neither the NATA nor this proposed methodology document cites the
              other. One of the criticisms of NATA is that it does not address total exposure
              because it does not deal with indoor sources and one of the criticisms of this indoor
              report is that it does not address total exposure, eliminating consideration of outdoor
              sources. Some of the methodology is different across the two documents. It is not
              possible to redo each of these documents with consistency, but each should
              acknowledge the other and discuss the issue of air toxics risk from the viewpoint of the
              total exposure of the person.

       c)     The authors of the report are not listed and there is no indication of other peer review.
              Traditionally, names of authors and reviewers are provided to give credit to the hard
              work involved, but also to let other reviewers understand the likely technical attention
              paid to elements beyond the scope of the SAB review. For example, were  any
              authors/reviewers expert in toxicology, exposure and environmental air monitoring to
              enable judgments on the quality of the data used from unpublished studies and different
              agency risk based concentrations?

       d)     The document will be used for screening, but it is not clear for what additional future
              purposes and by what entities.  This information is central to evaluation of the adequacy
              of the document.

       e)     As noted above, children's specific health issues were not considered, nor were issues
              pertaining to  any group of humans that may have heightened sensitivity to these
              chemicals. This is probably due to a lack of data on these chemicals and their relative
              effects on the developing animal or the developing human.

In considering indoor air pollutants, child-specific factors have to be taken into
account if the prioritization is to have its greatest reliability and acceptance.

1)     Children may have higher risks from a given exposure than do adults, due to
       their neurodevelopmental status or smaller size. The child may be exposed to
       chemicals that are found at higher concentration at infant/child height than at
       adult heights.  The higher concentration of these chemicals at the lower heights
       in rooms may be due to the air pollutants being emitted from materials that are
       found at lower heights such as floor coverings (rugs, varnish, etc), or chemicals
       that are sprayed on the floor (pesticides), or pollutants that are heavier than air
       and are found at higher concentration at lower levels. However, such exposure
       assessments are complex, since convective mixing in most indoor settings may
       be more than sufficient to prevent this type of stratification for contaminants
       present at part per billion levels.  Furthermore, the different exposure routes for
       children, such as dermal and via ingestion, need to be considered.

       The child also has a higher exposure on a physiological and pharmacokinetic
       basis. The child has a higher tidal volume and relatively higher respiratory surface
       area per kilogram as compared to the adult or the elderly.  This results in the
       child breathing in more air pollutants and absorbing more chemicals from the air
       than the adult breathing the same air pollutants.  Once they are absorbed, the
       child may clear the chemicals at a slower rate than the adult (although it should
       be recognized that higher rates of metabolism could lead to more rapid
       detoxification and consequent reduced toxicity).

2)     Children may be more sensitive to the toxic effects of pollutants for several
       reasons. First, children are disproportionately burdened with certain diseases,
       such as asthma, that might make them more susceptible to the pulmonary
       effects of indoor air toxics. Second, many organ systems, such as the central
       nervous system and the reproductive system, continue to develop after birth.
       Even short-term exposures during critical developmental windows can
       permanently alter the function of these organ systems.

The prioritization exercise did not take any of the above issues into consideration.
Regarding animal studies, few of the studies examined the developing animal. Few if
any of the studies on humans involved adolescents, children, infants, or newborns, so
their heightened sensitivity and susceptibility were not addressed. In the discussions of
the data and prioritization, there was no identification of the chemicals from which
the human child would be at greater risk as compared to the adult.

In keeping with USEPA guidelines, this exercise should take into consideration
sensitive populations, which include children, people with diseases such as asthma or
chronic obstructive pulmonary disease, pregnant women, etc.

Realizing that the published animal and human data are probably not adequate to
quantitatively estimate the heightened or reduced sensitivity of children as compared to
adults, it would be a useful exercise for the Agency to identify those chemicals for
which children may be at greater or lesser risk and, if possible, determine a relative risk
(lesser, slightly greater, moderately greater, very much greater risk) as compared to the
adult.  One Member noted, however, that it was not clear how this goal could be
accomplished without the application of considerably greater resources than had been
devoted to this effort.  This Member suggests that, given such resources, a feasible
option might be to simply highlight those substances for which there are known highly
susceptible groups not covered by the usual safety factors in the derivation of RBCs, or
known higher exposures.

                                     REFERENCES
Clayton, C.A., Perritt, R.L., Pellizzari, E.D., Thomas, K.W., Whitmore, R.W., Ozkaynak, H.,
       Spengler, J.D., and L. A. Wallace. 1993. Particle Total Exposure Assessment Methodology
       (PTEAM) study: Distributions of Aerosol and Elemental Concentrations in Personal, Indoor,
       and Outdoor Air. J Expos Anal Environ Epidemiol. 3(2):227-250.

EPA (US Environmental Protection Agency).  1999. Wall Paint Exposure Model (WPEM).
       http://www.epa.gov/opptintr/exposure/docs/wpem.htm

EPA (US Environmental Protection Agency).  1990. Nonoccupational pesticide exposure study
       (NOPES). Research Triangle Park, NC. US EPA. EPA/600/3-90/003.

Gordon, S., Callahan, P.J., Nishioka, M., Brinkman, M. C., O'Rourke, M. K., Lebowitz, M.D., and D.
       J. Moschandreas.  1999.  Residential environmental measurements in the National Human
       Exposure Assessment Survey (NHEXAS) pilot study in Arizona: preliminary results for
       pesticides and VOCs. J Exp Anal Environ Epidemiol 9(5):456-470.

NRC (National Research Council).  1992. Guidelines for developing spacecraft maximum allowable
       concentrations for space station contaminants.  National Academy Press, Washington, DC.

Saltelli, A., and K. Chan, Eds. 2000. Sensitivity Analysis. Wiley Series in Probability and Statistics.
       New York, John Wiley & Sons, LTD.

Sheldon, L., Clayton, A., Keever, J., Perritt, R., and D. Whitaker. 1992. PTEAM: Monitoring of
       phthalates and PAHs in indoor and outdoor air samples in Riverside, California. Sacramento.
       California Air Resources Board. Contract No. A933-14.

TSD (Technical Support Document). 2000. Ranking and Selection of Hazardous Air Pollutants for
       Listing under Section 112(k) of the Clean Air Act Amendments of 1990: Technical Support
       Document. U.S. EPA, http://www.epa.gov/ttn/uatw/urban/urbanpg.html.

Weschler, C. J.  2000. Ozone in indoor environments: concentration and chemistry. Indoor Air,  10:
       269-288.

Weschler, C. J., and H. C. Shields.  1997a. Potential reactions among indoor pollutants, Atmos.
       Environ., 31: 3487-3495.

Weschler, C. J.,  and H. C. Shields.  1997b. Measurements of the hydroxyl radical in a manipulated,
       but realistic, indoor environment, Environ. Sci. Technol., 31: 3719-3722.

Weschler, C. J., and H. C. Shields.  1996.  Production of the hydroxyl radical in indoor air, Environ.
       Sci.  Technol. 30: 3250-3258.

Wolkoff, P., Clausen, P.A., Jensen, B., Nielsen, G.D. and C. K. Wilkins. 1997. Are we measuring
       the relevant indoor pollutants? Indoor Air 7: 92-106.