September 26, 2000
EPA-SAB-EHC-LTR-00-007

Honorable Carol Browner
Administrator
U.S. Environmental Protection Agency
1200 Pennsylvania Avenue, NW
Washington, DC 20460

       Subject: Review of the Draft Report to the Congress "Characterization of Data Uncertainty and
       Variability in IRIS Assessments, Pre-Pilot vs Pilot/post-Pilot"

Dear Ms. Browner:

       The Environmental Health Committee (EHC) of the US EPA Science Advisory Board (SAB)
met on August 30, 2000, in Washington, DC.  The purpose of the meeting was to fulfill a
Congressional directive to review (and to provide advice and comment on) the Agency's study of the
Integrated Risk Information System (IRIS).

       The IRIS data base contains EPA's consensus scientific position on potential adverse human
health effects that may result from chronic exposure to specific chemical substances in the environment.
First publicly available in 1988, the earliest IRIS assessments provided the results of the EPA
deliberations culminating in consensus health hazard conclusions.  With the passage of time, the
assessments gradually included more detailed data and information on the process leading to the
reported conclusions. As a consequence of analyzing the IRIS program and responding to comments
received about IRIS, the Agency decided to test some improvements through a Pilot Program. The
Pilot primarily addressed the assessment, documentation, peer review,  and Agency consensus process
that precedes IRIS data base entries. EPA developed (or updated, for existing entries) IRIS
assessments for ten Pilot substances.  The Pilot process consisted of a) a call for technical information
on these substances from the public via a Federal Register Notice [April 2, 1996], b) a search of
the current literature, c) development of draft IRIS summaries and support documents, d) internal peer
review (i.e., within EPA), e) external peer review (outside EPA), f) a new Agency consensus review
process, and management approval, g) preparation of final IRIS summaries and support documents,
and h) entry of the assessment into the IRIS data base.

       In response to the directive in the October 1999 report from Congress (HR 106-379)
regarding EPA's appropriations for FY2000, EPA undertook an evaluation of the documentation of
data variability and uncertainty in IRIS assessments developed before and after the Pilot program (the
specific language in the Congressional report is provided in Enclosure A).

       EPA's Office of Research and Development (ORD) National Center for Environmental
Assessment (NCEA) consulted with the SAB Executive Committee (EC) on November 29, 1999,
about its proposed approach to this study. At this Consultation, individual Members of the EC
provided comments but, following SAB standard procedures, no consensus report was generated.
The proposed approach involved assembling a team of independent, qualified individuals, external to
EPA, who would evaluate a representative set of IRIS assessments. One particular comment
made to ORD/NCEA was that it might maximize the number of assessments reviewed in depth by
limiting the number of independent reviews per assessment to three; a range of opinions would still be
obtained, given the experts' range of subject-area expertise.  The study, as ultimately
undertaken, reflected the adoption of this and other comments received from various EC Members.

       The extent of documentation of variability and uncertainty in IRIS assessments was established
in two steps, through a stratified random sampling procedure. The first step was to classify a random
10% sample of pre-Pilot IRIS assessments (52 of 522), and all assessments carried out after 1995, into
3 categories of documentation: none/minimal, some/moderate, or extensive.

       The second step was to select a random sample of IRIS assessments for an in-depth
examination of their treatment of variability and uncertainty. The in-depth review then focused on 16
IRIS assessments, subdivided into 8 from the pre-Pilot assessments and 8 from the assessments
developed after 1995 ('Pilot/post-Pilot' assessments). Within these 2 subsets the assessments were
randomly selected to represent the some/moderate and extensive documentation categories as evenly
as possible. ORD/NCEA arranged for a contractor to select this sub-sample.

       ORD/NCEA's contractor assembled and coordinated a set of independent experts to carry out
the in-depth review. These experts were selected on the basis of their familiarity with EPA's human
health risk assessment methodologies and with IRIS, their knowledge of current practices for evaluating
and documenting uncertainty and variability in data used in health assessments, and their expertise in
how these factors relate to sensitive subpopulations, including children. They represent a range of
professional affiliations and of health science backgrounds spanning cancer and non-cancer toxic
endpoints. The experts evaluated the documentation of uncertainty and variability in assessments on the
basis of the data available at the time each assessment was conducted, focusing on the presentation of
the available data and its variability, and on the discussion of confidence and uncertainty, including any
uncertainty factors applied.  The final report comprises the individual and collective findings and
conclusions of the six evaluators, as well as ORD's summary and conclusions.

-------
       The Charge for this review, and the EHC's findings on each element follow below.

       The first of the three elements of the Charge asked the Committee to comment on how
well the study conformed to the study plan developed after consulting with the SAB EC.

       The Committee agreed that the Agency did a good job implementing the study plan laid out in
the July 19, 2000 NCEA report (National Center for Environmental Assessment. 2000. Study Plan:
Characterization of Data Uncertainty and Variability in IRIS Assessments, Pre-Pilot vs.
Pilot/Post-Pilot, Post-SAB Consultation and Update. Environmental Protection Agency), in terms of
the number of reviewers evaluating each IRIS chemical assessment, randomized process for selection
of chemicals, number of chemicals evaluated, selection of reviewers and overall scope of the review.
The standardized questions asked of the reviewers and the methodology used to evaluate, summarize
and report results were consistent with the NCEA study protocol.

       One significant deviation from the NCEA plan was in the number of IRIS substances selected
with "extensive" and "some" documentation of uncertainty in the "pre-pilot" and "pilot/post-pilot"
groups.  The Pilot program reviewed ten IRIS substances in order to test improvements in assessment,
documentation, peer review, and the Agency consensus process (EPA, 1996).  Considerable effort
was made to describe uncertainty in pilot and subsequent IRIS assessments, and, as a result, all but one
of the 15 available "pilot/post-pilot" assessments were found in the internal Agency review to have
"extensive" documentation of uncertainty. In contrast, only  3 of the 52 selected "pre-pilot" assessments
were found to be in this category. According to the NCEA study plan, the 8 pre-pilot and 8 pilot/post-pilot
assessments chosen for in-depth review were each to include an equal number with "some" and
"extensive" documentation of uncertainty. Since this was not possible, the contractor filled 3 of the 8
pre-pilot slots (all 3 "extensive" pre-pilot assessments available) and 7 of the 8 pilot/post-pilot slots
from the "extensive" category. The EHC found this to be a reasonable deviation from the study plan.

       Although the study conformed to the general guidance laid out in the NCEA plan, the
Committee would like to highlight a few points regarding its implementation:

       a)     The study's definitions of "variability" and "uncertainty."  Although the definition
              of "uncertainty" used for the study followed  that used by the risk assessment community
              (National Research Council.  1994.  Science and Judgment in Risk Assessment.  NRC,
              Committee on Risk Assessment of Hazardous Air Pollutants, National Academy Press,
              Washington, DC; National Research Council.   1996. Understanding Risk: Informing
              Decisions in a Democratic Society.  NRC Committee on Risk Characterization,
              National Academy Press, Washington, DC.), the definition of "variability" did not. In a
              strict sense, uncertainty refers to lack of knowledge, while variability refers to the
              changeable nature of reality - for example, with time,  space, and the perspectives of
              individuals. Variability as used in the report was seen to encompass "any aspect of the
               risk assessment process that can have varying results, including the potential
        interpretations of the available data, the availability of data collected under different
       experimental protocols, and the availability of different models and methods" (NCEA
       Study Plan).  Thus variability, as used by the study, covered both uncertainty and what
       is traditionally covered by variability. The importance of keeping the two terms distinct
        when assessing and describing risk has been emphasized in a Congressionally mandated
        review of EPA risk assessment activities conducted by a National Research Council
        Committee (National Research Council, 1994, cited above).

       Although the definition of variability used in the study may be  seen as overly broad, it
       may have resulted from an interpretation of the Congressional  language calling for an
       evaluation of the IRIS documentation of "the range of uncertainty and variability of the
       data."

       This issue led some SAB Committee Members to express concern that the study did
       not fully address what may have been (or, to speculate, perhaps should have been) the
       underlying concern of Congress.  Congress asked about "uncertainty and variability of
       the data." However, since neither the Congress nor the EPA  study plan provided a
       completely satisfactory definition of those terms, EPA chose to interpret the
       Congressional request to apply mainly to the information underlying the IRIS values,
       not to the values themselves.  An alternative and more salient interpretation would
        focus on the extent to which the IRIS documentation provides a) a reasonable
        description of the intrinsic uncertainty in a given human health risk assessment, and b)
       an estimate of the extent of variability of human risk. For example, it might be possible
       to state that the IRIS RfD was thought to be below the individual threshold for adverse
       health effects for 99.99% of the population, and that an RfD ten times higher was
       thought to be protective for only 99% of the population.

b)     The study's review of IRIS documentation of human variability in response to
       exposure to the IRIS substance. Variability in risk, particularly among individuals, is
       recognized as an important factor to consider in making decisions about risk (see, e.g.,
       the previous NRC references). The study was not implemented so as to adequately review
       IRIS qualitative or quantitative descriptions of interindividual differences in
       susceptibility.  Evaluation of IRIS descriptions of individual susceptibility and of variability
       in risk at different life stages would have been consistent with the study plan.

c)     The representativeness of the sample.  The far greater proportion of pilot/post-pilot
       substances evaluated, relative to pre-pilot substances, was appropriate given the
       underlying study goals of gauging improvements in uncertainty descriptions in IRIS
       documents and identifying examples of "good" assessments.

-------
       d)      The guiding questions for reviewers.  Although there were differences in the way
               each reviewer approached the questions asked, there seemed to be reasonable
               consistency on general points. Defining some of the general terms and providing
                structure for the reviewers might have resulted in greater consistency in the reviewers'
                findings on IRIS treatment of uncertainty. Asking reviewers to grade assessments could
                have reduced the opportunity for misinterpretation of the reviewers' findings.

       e)      Bias.  The contractor reported that a process was followed to ensure that the
               reviewers were "free of bias or conflicts of interests" (Versar, Inc. 2000.
               Characterization of Data Uncertainty and Variability in IRIS Assessments.  Pre-Pilot
               vs. Pilot/Post-Pilot. Prepared for EPA National Center for Environmental Assessment.
               (p. 11)). Although it is possible to avoid conflict of interest, avoidance of bias is
               probably not possible.  All scientists carry bias due, for example, to discipline,
                affiliation, and experience. Often, the discussions of expert committees are initiated
                with a disclosure and discussion of biases. Fuller disclosure of sources of reviewer bias in
                this study (e.g., beyond Table 2-1 in the contractor's report) would have provided
                useful information for interpreting study results.  In addition, having more reviewers
                would have helped to ensure balance.

       The second element of the Charge inquired whether the EHC concurred with
the findings of the external reviewers concerning selected agents incorporated in IRIS.

       The Committee agreed that the reviewers had followed their mandate and reached overall
conclusions that were reasonable. The Committee noted that the findings of reviewers on specific
points varied, in several cases considerably, even when the discussions of uncertainty were extensive.
This was to be expected. There is not currently any scientific consensus on how uncertainty in risk
should be described, and practitioners of risk assessment differ on what constitutes a good and
adequate discussion of uncertainty. Still, the Committee concurred with the general conclusion that the
description of uncertainty could be significantly improved for most pre-pilot chemicals, and that such
descriptions have improved significantly since the initiation of the pilot program. The Committee also
agreed with general recommendations for improvement of characterizations of uncertainty and
variability (see the recommendations addressed in the third element of the Charge, below).  The
reviewers' comments were by and large insightful, and contained several useful suggestions for improvement.

       In summarizing the content of the reviews, it should be noted that the reviewers had a number
of positive things to say about the IRIS reports, especially those that have been written since 1995.
The Committee concurs with the overall summary from the outside reviewers' report that "There is no
question that EPA's years of labor in providing biologically-based, consensus IRIS toxicity values to
the scientific community has been of inestimable value, at the very least because the process has been
instrumental in clarifying issues and suggesting research needs in the developing field  of risk assessment.
IRIS is indeed a useful tool for public health risk assessment."

-------
       The outside reviewers commended the EPA for the improvement in the characterization of
uncertainty and variability. They indicated that more recent (i.e., Pilot/Post-pilot) assessments were
"distinctly more comprehensive... and included more description and better discussion of data gaps
and end points such as reproductive/developmental or neurological effects, as well as physicochemical
information relevant to pharmacokinetics and toxicity and more complete synopses of conclusions for
each supporting study (p.36)." They considered the Toxicological Review documents that have
accompanied the recent IRIS reports to be valuable in that they have provided a more comprehensive
discussion of various  aspects of studies that bear on variability and uncertainty than was available for
older reports.

       The last element of the Charge requested that the Committee comment on what
further improvements, if any, might the Agency make in IRIS documentation in response to
the study results.

       In responding, the Committee noted that the draft report does not come to any overall
conclusion about the adequacy of uncertainty and variability information in the IRIS documentation. In
the pre-pilot sample, half (12 of 24) of the reviewers' ratings of the treatment of uncertainty and
variability were judged "negative." One-third (8) were rated as "positive," and the remainder (4) were
rated as "mixed."  The Committee believes this indicates the IRIS documentation of uncertainty and
variability could be significantly improved for the pre-pilot chemicals. The pilot/post-pilot results were
only somewhat more encouraging (9, 5, and 10 for positive, mixed, and negative, respectively),
although the accompanying text suggested that the reviewers may have judged the pre-pilot IRIS
documentation less harshly because it often met the standards prevalent at the time it was prepared.

       Even in its present form, IRIS could be strengthened in its characterizations of data uncertainty
and variability, and a greater effort needs to be expended in addressing this important issue.  The
Committee therefore recommends that EPA attempt to improve the IRIS database by including
more information on uncertainty and variability in every chemical summary that would have been rated
less than extensive by the reviewers. Given limited resources for such a task, priority should be given to
chemicals for which controversy over the IRIS evaluations is most acute. An examination of the
reasons for discrepancies between the EPA evaluators and the expert peer panel evaluations of the
study sample might help in refining the protocol.

       To undertake that task most effectively, EPA should first develop a detailed protocol of steps
for completing an adequate documentation of uncertainty and variability and then rigorously train the
managers of IRIS assessments in that protocol.  The protocol should indicate what aspects of
uncertainty and variability should, at a minimum, be discussed (e.g., interspecies  and intraspecies
differences in susceptibility; uncertainties introduced by using predictive models rather than clearly
applicable human data). The protocol should also present criteria for deciding whether any meaningful
discussion of uncertainty is possible with the available data (e.g., are results in at least two species of
laboratory animals via relevant exposure routes needed to characterize the uncertainties due to
interspecies variability?).

       The Agency should also develop a strategy for reducing uncertainties where these severely
compromise the utility of IRIS evaluations. Although it may be beyond the IRIS mandate to
recommend the development of entirely new datasets, it may be possible to improve the precision and
accuracy of its toxicity numbers by more insightful use of existing information, for example by:

       a)      Continuing the development of ways to use data-driven uncertainty factors rather than
               default values

       b)      Refining methods for examining curvilinearity and/or thresholds in dose-response
               relationships

       c)      Integrating information from multiple relevant studies of adequate quality, rather than
               using only one study as the basis for the toxicity numbers

       d)      Performing a balanced assessment of known  human variability in susceptibility to
               various classes of chemical compounds, and  using the results to improve the discussions
               of human variability for other chemicals within those classes

       More broadly, the Committee recommends that EPA should investigate the feasibility of
providing more information that can help answer the underlying question about the uncertainties and
variabilities in human health  risk assessments based on the IRIS toxicity numbers.  One proposal
suggested by some reviewers is to characterize the toxicity of chemicals through distributional analyses
of toxicity, as well as of exposure, in human health risk assessments. In essence, good environmental
policy should be able to answer the questions "How many people might be harmed by current patterns
of exposure?," and "What are reasonable limits on this estimate?" Whether the toxicity numbers  should
be replaced by uncertainty/variability distributions, confidence limits on the point estimates now
presented, or simply enhanced by presentation of quantitative or qualitative discussions of uncertainty
and variability is not as yet clear.

       The request from the Congress indicates that it is driven by "...concern about the accuracy of
information in the IRIS data base..." It is recognized both within  and outside of the Agency that the
major problem with IRIS is that most evaluations are at least 10 years old and that they fail to reflect
more recent improvements and Agency practices in risk assessment. The evaluation of the adequacy of
the uncertainty and/or variability analysis for representative agents responds to the specific language in
the congressional request, but it does not fully address the more important quality issue. In order for
IRIS to be of greatest value to the Agency, the database must be current, there should be a
mechanism for subjecting the IRIS data to external, independent scientific peer review, and the
database should be capable of timely and continuous revision.  Another criticism of IRIS is that it does not include data for
many of the agents for which information is needed within the Agency offices. The mandate for adding
new agents, plus the need to revise the documentation on the current agents, exceeds the resources
allocated by the EPA to this task.  Because the IRIS database is critical to the Agency and extremely
important to outside stakeholders, the Congress should consider allocating resources earmarked
for this specific purpose. In the interim, the Agency should consider collaborative efforts
with outside institutions, such as the National Academy of Sciences, to expedite the generation of IRIS
files.  To facilitate this, EPA could provide Internet as well as Federal Register listings of the current
status of updates and prioritization information.

       Many of the pre-pilot IRIS documents provide information on the specific toxicological and/or
epidemiological studies that support the IRIS recommendations, whereas most of the post-pilot agents
have more extensive toxicologic reviews.  The reviews cover the issues of ancillary studies,
transparency and uncertainty/variability evaluation in more detail and they are scientifically more
informative. There is considerable overlap between IRIS toxicology review and the Agency for Toxic
Substances and Disease Registry (ATSDR) Toxicology Profiles, the International Agency for Research
on Cancer (IARC) cancer documents, the EPA's Acute Exposure Guideline Level program
documentation, the documentation for national and international occupational exposure levels, and the
World Health Organization and the Organization for Economic Cooperation and Development databases, as
well as those created and maintained by state governments, environmental groups, industry, and other
list-generating groups. The IRIS staff should make the best possible use of the IARC, ATSDR, and
other documents so as to avoid duplication of effort and make their own reviews easier to conduct, and
should also seek to cross-reference these other reviews. In this way, EPA could focus on improving
the quality of  input data, eliminating redundant compilations of the same data and developing single
"gold standard" evaluations for all important compounds. This long-term goal may be difficult to
achieve.  In the near term, efforts should emphasize the development of IRIS documents on chemicals
with significant environmental exposures that are not currently in IRIS, or for which the IRIS entry is
believed to be inaccurate, out-of-date, or non-informative.

       IRIS could provide an evaluation of the epidemiologic and toxicologic data, and these
evaluations could be used by all stakeholders as the basis for their recommendations for regulatory,
occupational,  or environmental levels. One suggestion to enhance the quality of the toxicologic
evaluations is to make the IRIS process open to public stakeholder review in a more formal manner.
This could be similar to the process that the EPA OPP Special Review and Registration Division and the
USDA Office of Pest Management follow for re-registration of pesticides, in which an open meeting is
held to discuss the risk assessment documents.  The purpose of such a meeting is to make sure that key
information and data that impact the final risk assessment are available to EPA. Non-profit
organizations  such as ILSI or Toxicology Excellence for Risk Assessment could be involved in
organizing the panels to debate and uncover the range of scientific opinion and critical data that impact
the risk assessment. The EPA SAB or NAS/NRC could provide peer review.

-------
       The Committee noted also that the quality of the EPA's interpretation of the weight of evidence,
and the use of this information to select appropriate uncertainty parameters or quantitative risk
assessment approaches, are critical to the success of the IRIS database in aiding regulators and industry
in adequately protecting the public. However, these factors are very difficult to measure, and they were
not the primary focus of the IRIS study. Instead, the emphasis was on documentation of the scientific
evidence supporting the decisions that were made.  The IRIS Study is indeed an important step
forward in making the process more transparent. The SAB agrees with the reviewers that, in general,
pilot/post-pilot IRIS assessments were more detailed and provided more chemical-specific information.
However, there were several cases in which individual reviewers from the IRIS study as well  as SAB
Members were aware of critical data that were not included in the IRIS risk assessment discussion. It
is not the expectation that every reference associated with the chemical should be cited in order for the
IRIS database to be considered complete. It is also understandable that the budget and time
constraints make it difficult to thoroughly evaluate the large amount of published data often available on
each chemical. However, key studies of high quality that have impact on interpretation of the  weight of
evidence and risk assessment need to be discussed and considered.

       Finally, the Committee noted that the reviewers only occasionally discussed whether or not the
IRIS files cited children as a subpopulation that might be more sensitive than the general population,  and
that the ORD/NCEA summary did not mention this issue at all. This issue is central to whether or not
the uncertainty factors assigned for intraspecies (human) variability are sufficient to cover such potential
childhood sensitivity. EPA should decide relatively quickly how it will deal with the concern that
children might be at greater risk from certain environmental chemicals than adults. Pesticide risk
assessments are required by law to include an "extra" uncertainty/safety factor of three- to ten-fold
whenever there are toxicological concerns or when data on the safety of a pesticide for children are
lacking.  Some observers believe that a similar factor should be included in every IRIS assessment
lacking childhood-specific data, whereas others believe  that the current uncertainty factor for
intraspecies (human) variability is adequate.   The correct answer is undoubtedly chemical-specific, and
the Agency needs to decide whether, and if so, how, to modify IRIS toxicity numbers for potential
childhood sensitivity.

       Although not part of the formal Charge, the Committee wished to comment on several other
issues.  First, the Committee is of the opinion that the report should be prefaced by some statements
that will assist the reader in understanding the IRIS review process, and it should also cite the SAB's
report on the extent to which these assessments document the range of uncertainty and the variability of
the data.

       Lastly, we recommend that EPA establish protocols for the whole IRIS process, not just the
uncertainty and variability parts noted above.  Having such protocols would contribute to three
important goals EPA should work towards:

-------
a)     making the total process by which all the available information is integrated to arrive at
       the toxicity numbers presented in IRIS more transparent, e.g., why certain studies were
       selected for inclusion over others

b)     instituting a standardized approach to determining what information will be considered
       in the IRIS evaluations (e.g., the inclusion of unpublished studies or studies not fully
       conforming to good laboratory practice in addition to those that fully meet current
       criteria)

c)     developing a standardized process to determine which agents should be added to the
       IRIS database, perhaps using some of the following criteria:

       1)      likelihood that a large population is exposed
       2)      high likelihood that a large population of children is exposed
       3)      judgment that the agent is hazardous at low doses
       4)      judgment that the agent is not being considered by other public health entities
       5)      pertinence to the agency's overall mission, in that significant exposures are likely to occur
       6)      extant toxicity findings in two or more species
       7)      extant clinical or epidemiology studies of sufficient power and quality showing a
               trend in toxicity

We appreciate the opportunity to review these issues and look forward to your response.
                             Dr. Morton Lippmann, Interim Chair
                             Science Advisory Board
                             Dr. Mark Utell, Chair
                             Environmental Health Committee
                             Science Advisory Board

-------
                                      ENCLOSURE A
  Report Language from the Senate Appropriations Committee accompanying the EPA budget for FY
                                           2000:

"The Committee is concerned about the accuracy of information contained in the Integrated Risk
Information system [IRIS] data base which contains health effects information on more than 500
chemicals. The Committee directs the Agency to consult with the Science Advisory Board (SAB) on
the design of  a study that will a) examine a representative sample of IRIS health assessments
completed before the IRIS Pilot Project, as well as a representative sample of assessments completed
under the project and b) assess the extent to which these assessments document the range of
uncertainty and variability  of the data. The results of that study will be reviewed by the SAB and a
copy of the study and the SAB's report on the study sent to the Congress within one year of enactment
of this Act."

-------
                          U.S. Environmental Protection Agency
                                 Science Advisory Board
                            Environmental Health Committee
                    Integrated Risk Information System (IRIS) Review
                                    August 30, 2000

CHAIR
Dr. Mark J. Utell, Chair, Department of Medicine, Director, Pulmonary Unit, and Professor of
       Medicine and Environmental Medicine, University of Rochester Medical Center, Rochester,
       NY

MEMBERS
Dr. Stephen L. Brown, Risks of Radiation and Chemical Compounds,  Oakland, CA

Dr. John Doull, Professor Emeritus, Department of Pharmacology, Toxicology and Therapeutics,
       University of Kansas Medical Center, Kansas City, KS

Dr. George Lambert, Associate Professor - Center Director, UMDNJ-Robert Wood Johnson
       University Hospital, New Brunswick, NJ

Dr. Grace K. Lemasters, Director, Division of Epidemiology & Biostatistics, Department of
       Environmental Health, University of Cincinnati, Cincinnati, OH

Dr. Abby A. Li, Neurotoxicology Technical Leader, Monsanto Company, St. Louis, MO

Dr. Michele Medinsky, Toxicology Consultant, Durham, NC

Dr. Roy E. Shore, Director, New York University Medical School, Division of Epidemiology and
       Biostatistics, New York, NY

Dr. Lauren Zeise, Chief, Reproductive and Cancer Hazard Assessment Section, Office of
       Environmental Health Hazard Assessment, California Environmental Protection Agency,
       Oakland, CA

SCIENCE ADVISORY BOARD STAFF
Mr. Samuel Rondberg, Designated Federal Officer, U. S. Environmental Protection Agency Science
       Advisory Board (1400A), Washington, D.C. 20004

Ms. Wanda Fields, Management Assistant, Environmental Protection Agency, Science Advisory
       Board (1400A), Washington, D.C. 20004

-------
                                         NOTICE

       This report has been written as part of the activities of the Science Advisory Board, a public
advisory group providing extramural scientific information and advice to the Administrator and other
officials of the Environmental Protection Agency. The Board is structured to provide balanced, expert
assessment of scientific matters related to problems facing the Agency.  This report has not been
reviewed for approval by the Agency and, hence, the contents of this report do not necessarily
represent the views and policies of the Environmental Protection Agency, nor of other agencies in the
Executive Branch of the Federal government, nor does mention of trade names or commercial products
constitute a recommendation for use.
Distribution and Availability: This Science Advisory Board report is provided to the EPA
Administrator, senior Agency management, appropriate program staff, interested members of the
public, and is posted on the SAB website (www.epa.gov/sab). Information on its availability is also
provided in the SAB's monthly newsletter (Happenings at the Science Advisory Board).  Additional
copies and further information are available from the SAB Staff.

-------