United States Environmental Protection Agency
EPA Science Advisory Board (1400A)
Washington, DC
www.epa.gov/sab
EPA-SAB-RSAC-01-009
September 2001

       IMPLEMENTATION OF THE
       ENVIRONMENTAL
       PROTECTION AGENCY'S
       PEER REVIEW PROGRAM:
       AN SAB EVALUATION OF
       THREE REVIEWS

       A Review by the Research
       Strategies Advisory Committee
       (RSAC) of the EPA Science
       Advisory Board (SAB)

-------
                     UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
                                    WASHINGTON, D.C. 20460

                                        September 26, 2001

EPA-SAB-RSAC-01-009
                                                                  OFFICE OF THE ADMINISTRATOR
                                                                   SCIENCE ADVISORY BOARD
Honorable Christine Todd Whitman
Administrator
U.S. Environmental Protection Agency
1200 Pennsylvania Avenue, NW
Washington, DC 20460

              Subject:      Implementation of the Environmental Protection Agency's Peer
                           Review Program: An SAB Evaluation of Three Reviews

Dear Governor Whitman:

       On June 26 and 27, 2001, the Research Strategies Advisory Committee (RSAC) of the US
EPA Science Advisory Board (SAB) met to review three examples that illustrate the
implementation of the US Environmental Protection Agency's (EPA) peer review program.

       The committee was asked to address the following questions: a) Are the reviews and
resulting advice timely? b) Do the peer reviews make a difference? c) To what extent are the
review comments responded to and acted on by the Program Office/Region? and d) Does the
RSAC have additional comments/guidance for EPA on how to improve the effectiveness of the
peer review process?

       Based on its evaluation of the three examples and detailed discussion with EPA staff and
participants, the major findings of the RSAC study are as follows:

       a)     RSAC's study was limited to a small number (i.e., three) of products. While
              some valuable lessons can be learned from a limited sample such as this, there is
              a need for continual review of the robustness and effectiveness of the peer
              review process to ensure that the process is effective, that it is making a
              difference, and that it informs decision making in a timely manner. RSAC
              recommends that the Agency develop a continuing in-depth analysis to fully
              examine trends in the use of peer review at EPA, evaluate the impacts of peer
              review on decision making, and identify additional opportunities for improving
              the benefits of peer review at the Agency.

       b)     During the course of this review, the committee observed that there are important
              products not being peer reviewed, such as the technical resource documents for
              MACT standards and the TRI lead rule.

       c)     Peer review is being vigorously conducted by the Agency and it is clearly making
              a difference in those examples that RSAC examined.

       d)     RSAC found no examples of lack of independence of the peer reviews examined.
              RSAC encourages the Agency to continue to take necessary steps to avoid even
              the potential appearance of lack of independence.

-------
       e)     RSAC reviewed guidance documents in this evaluation. RSAC recommends that
              the Agency also evaluate and more systematically document how peer review is
              being employed to address the use of science in the specific rules and general
              environmental decision-making. This type of review is at the interface of science
              and policy and, therefore, is a different type of peer review than the normal
              scientific peer review process.

       f)     An area for potential improvement is the need to develop a uniform process for
              collecting, systematically documenting, and archiving information on responses to
              peer review comments so that they can be used in the future by those not
              participating in the original peer review process.

       We appreciate the opportunity to review and provide advice on the EPA's peer review
program. The Research Strategies Advisory Committee would be pleased to expand on any of
the findings described in our report, and we look forward to your response.

                                           Sincerely,


              /Signed/                         /Signed/

       Dr. William H. Glaze, Chair          Dr. Raymond C. Loehr, Chair
       EPA Science Advisory Board        Research Strategies Advisory Committee
                                          EPA Science Advisory Board

-------
                                         NOTICE
       This report has been written as part of the activities of the EPA Science Advisory Board,
a public advisory group providing extramural scientific information and advice to the
Administrator and other officials of the Environmental Protection Agency. The Board is
structured to provide balanced, expert assessment of scientific matters related to problems facing
the Agency. This report has not been reviewed for approval by the Agency and, hence, the
contents of this report do not necessarily represent the views and policies of the Environmental
Protection Agency, nor of other agencies in the Executive Branch of the Federal government, nor
does mention of trade names or commercial products constitute a recommendation for use.

Distribution and Availability: This EPA Science Advisory Board report is provided to the EPA
Administrator, senior Agency management, appropriate program staff, interested members of the
public, and is posted on the SAB website (www.epa.gov/sab). Information on its availability is
also provided in the SAB's monthly newsletter (Happenings at the Science Advisory Board).
Additional copies and further information are available from the SAB Staff [US EPA Science
Advisory Board (1400A), 1200 Pennsylvania Avenue, NW, Washington, DC 20460-0001; 202-
564-4533]

-------
                                      ABSTRACT
       The Research Strategies Advisory Committee (RSAC) of the EPA Science Advisory
Board (SAB) met June 26 and 27, 2001 to review three examples that illustrate the implementation
of EPA's peer review program. The Committee was asked to address the timeliness of the reviews,
whether they make a difference, to what extent the review comments are responded to and acted
upon, and whether the RSAC has additional comments or guidance for the Agency to improve
the effectiveness of the peer review process.

       Based on its evaluation of the three examples and detailed discussion with EPA staff and
participants, the RSAC found that peer review is being extensively conducted by the Agency and
is clearly making a difference in those examples that were examined. For the three examples
examined, the RSAC found no obvious examples of lack of independence among the reviewers. An
area of potential improvement is the need to develop a uniform process for collecting,
documenting, and archiving information on responses to peer review comments. The RSAC
observed that, while this was not the focus of this review, there are important products which are
not being peer reviewed. Among the recommendations made, the RSAC recommended that the
Agency develop an ongoing in-depth analysis to more fully examine trends in the use of peer
review at EPA, evaluate the impacts of peer review on decision making, and explore additional
opportunities for improving the benefits of the peer review process over time at the Agency.
       Keywords: Peer Review

-------
                 US ENVIRONMENTAL PROTECTION AGENCY
                         EPA SCIENCE ADVISORY BOARD
              RESEARCH STRATEGIES ADVISORY COMMITTEE
CHAIR
Dr. Raymond C. Loehr, Professor, University of Texas at Austin, Department of Civil Engineering, Austin,
       TX

PAST CHAIR
Dr. William Randall Seeker, Senior Vice President, General Electric Energy and Environmental Research
       Corp., Irvine, CA

MEMBERS
Dr. William Adams, Director, Environmental Science, Environmental Department, Kennecott Utah Copper
       Corp., Magna, UT

Dr. Richard J. Bull, President, MoBull Consulting, Kennewick, WA

Dr. Philip Hopke, Robert A. Plane Professor of Chemistry, Clarkson University, Department of Chemical
       Engineering, Potsdam, NY

Dr. Genevieve M Matanoski1, Professor of Epidemiology, Johns Hopkins University, School of Hygiene
       and Public Health, Department of Epidemiology, Baltimore, MD

Dr. Maria  Morandi, Associate Professor of Environmental Science, University  of Texas Health Science
       Center at Houston, School of Public Health, Houston, TX

Dr. Ishwar Murarka, Chief Scientist and President, Ish Inc., Sunnyvale, CA

Dr. William Smith, Professor of Forest Biology,  School of Forestry and Environmental Studies, Yale
       University, New Haven, CT

Dr. Mark Utell1, Professor of Medicine and Environmental Medicine,  University of Rochester Medical
       Center, Rochester, NY

Dr. James  E. Watson, Professor,  Department  of  Environmental Sciences and Engineering  Department,
       University of North Carolina at Chapel Hill, Chapel Hill, NC

Dr.  Lauren  Zeise,  Chief,  Reproductive  and   Cancer  Hazardous   Assessment  Section,  California
       Environmental Protection Agency, Oakland, CA

EPA SCIENCE ADVISORY BOARD STAFF
Dr. K.  Jack Kooyoomjian,  Designated  Federal  Officer,  US  Environmental  Protection Agency, EPA
       Science Advisory Board (1400A), 1200 Pennsylvania Avenue, NW, Washington, DC 20460

Ms.  Dorothy M. Clark, Management Assistant, US Environmental Protection Agency, EPA Science
       Advisory Board (1400A), 1200 Pennsylvania Avenue, NW, Washington, DC 20460
       1 Not present at meeting.


-------
                              TABLE OF CONTENTS
1.  INTRODUCTION	1
       1.1 Background and Schedule  	1
       1.2 Charge to the Committee	1
       1.3 The RSAC Study Process  	2
       1.4 Format of this Report 	5

2.0 RESPONSE TO CHARGE QUESTIONS 	6
       2.1 Are the reviews and resulting advice timely?	6
       2.2 To what extent are the review comments responded to and acted
             on by the Program Office/Region? 	6
       2.3 Do the peer reviews make a difference in the quality of the product?	8
       2.4 Does the RSAC have additional comments/guidance for EPA on
             how to improve the effectiveness of the peer review process?	9
             2.4.1 General Findings and Recommendations 	9

Appendix A - Water Quality Criteria Methodology for Human Health	  A-1
       A-1. Were the reviews and resulting advice timely?  	  A-1
       A-2. Do the peer reviews make a difference?  	  A-1
       A-3. To what extent are the review comments responded to and
             acted on by the Program Office/Region?	  A-5
       A-4. General Observations and Recommendations and Next Step	  A-5

Appendix B - The Draft National Bioaccumulation Factor For Methylmercury  	B-1
       B-1. Background Information On Review	B-1
       B-2. Are the reviews and resulting advice timely?	B-1
       B-3. Do the peer reviews make a difference?	B-2
       B-4. To what extent are the review comments responded to and acted on by
             the Program Office/Region?  	B-3

Appendix C - Risk Characterization Handbook Peer Review	  C-1
       C-1. Introduction	  C-1
       C-2. Are the reviews and resulting advice timely?	  C-2
       C-3. Do the peer reviews make a difference?  	  C-2
       C-4. To what extent are the review comments responded to and
             acted on by the Program Office/Region?	  C-4
       C-5. Does the RSAC have additional comments/guidance for EPA on
             how to improve the effectiveness of the peer review process?	  C-5

REFERENCES CITED	R-1

ACRONYMS

-------
                                  1.  INTRODUCTION
1.1 Background and Schedule

       The Research Strategies Advisory Committee (RSAC) of the EPA Science Advisory Board
was requested to conduct a review of the overall peer review process and the U.S. Environmental
Protection Agency's (Agency's) associated efforts to develop guidance to implement the policy.
RSAC decided to conduct this review in two phases. The first phase focused on EPA's peer review
processes and policies. This initial RSAC review was limited to an overall evaluation of the peer
review process gleaned from the Peer Review Handbook (1998), several GAO reports, a number of
letters and memos from senior management in the Agency, and presentations and interactions with
Agency staff during a September 23-24, 1999 public meeting. A report on this first review was
approved by the Executive Committee in November 1999 (SAB, 1999).

       The second phase described in this report focused on the implementation of the processes and
policies, and the impact of the peer review policy on the Agency's decision-making.  RSAC was
asked to conduct a subsequent in-depth analysis to sample trends in the use of peer review in EPA, the
impacts of the peer reviews, and to identify additional opportunities for enhancing the benefits from peer
review in the form of quality, credibility, relevance, timeliness, and the Agency's leadership position.
This was done by considering a number of programs and products and selecting three examples to
review in detail.

1.2 Charge to the Committee

       EPA's charge to the RSAC was as follows:

       a)     Are the reviews and resulting advice timely?

              RSAC was asked to evaluate the impact of the peer review process for a series of case
              studies including information on:

              1)     The scientific and technical character of the work products,

              2)     The scope and depth of the peer review conducted,

              3)     The peer review results/recommendations generated, and

              4)     The use and impact of peer review results in Agency decisions (e.g., how did
                     peer review result in better decisions)?

       b)     Do the peer reviews make a difference?

              Making a difference includes providing useful advice in a timely manner:

              1)     Was peer review conducted for the quality, adequacy and completeness of the
                     data developed and obtained from the literature for the study and Agency
                     decisions?

              2)     When was the peer review conducted?

              3)     Did the peer reviewers have adequate time to conduct an in-depth review?


-------
              4)     How were the peer review results utilized to improve EPA's decision?

              5)     To what extent are the review comments responded to and acted on by the
                     Program Office/Region? and

              6)     Does the RSAC have additional comments/guidance for EPA on how to
                     improve the effectiveness of the peer review process?

1.3 The RSAC Study Process

       The Agency's Peer Review Policy was signed in 1993 (Reilly, 1993). Following a 1997
review of its implementation, peer review guidance was developed in 1998 in the form of the Peer
Review Handbook that was formally adopted by the Science Policy Council that same year (U.S.
EPA, 1998). Subsequently, the General Accounting Office (GAO) reviewed the peer review
implementation process, raising questions about the independence of reviews done by EPA (see
collectively U.S. GAO, 1994; 1996; and 1999).

       The Research Strategies Advisory Committee (RSAC) also evaluated EPA's Peer Review
Program in a two-step process.  The first step was RSAC's September 23-24, 1999 review of
whether or not key components of a sound peer review process are in place at the EPA, whether
appropriate tools and training are available, and whether management commitment exists to carry out
EPA directives for peer review (SAB, 1999). The Committee was pleased to see the Agency's
diligence with respect to Peer Review. From the materials presented to the RSAC, EPA's Peer
Review Process was well articulated and appeared to be fundamentally sound and, with a few
exceptions, working as intended.

       RSAC noted that peer review processes seemed to be well established at the EPA and were
continuing to improve through a mechanism of continued internal examination, led by the Office of
Research and Development (ORD), and process changes carried out by Decision-makers at the
direction of the Science Policy Council (SPC). The key driver for the Peer Review Program was
EPA's management leadership. RSAC made several suggestions to strengthen the peer review process,
the most important of which was to expand the scope to the evaluation of interagency and international
products considered important to environmental decision-making. RSAC also recommended that peer
review be extended to the up-front review of scientific and technical planning products such as strategic
plans, analytic blueprints, research plans, and environmental goals documents, noting that major
economic and technology products, and social science research products can and should be subjected
to the peer review process in a manner similar to natural science products.  Products that are policy-
analytic, in that they are not purely science-based but involve the application of policy and values,
should also be peer reviewed to ensure that appropriate methods and procedures have been used,
including an explicit treatment of assumptions and value judgments, adequate sensitivity analysis, and
adequate treatment of uncertainty.  Thus, the explicit need to review major social science products (in
addition to economic products), and policy-analytic products should be added to the Peer Review
Handbook. Finally, there should be a requirement for completion of training before a person can be
designated as a Peer Review Leader.

       The RSAC decided that a follow-up review would be needed to evaluate the extent, adequacy
and timeliness by which peer review is carried out by various program offices, Regions and EPA
laboratories.

       On February 16, 2000, RSAC held a teleconference to plan phase two of the evaluation of
EPA's peer review program, the effectiveness review.  The committee explored several ways to do the
study and held follow-up discussions during its February 24 public meeting where the Committee
identified the need to consider products that were:

-------
       a)     High and low importance work products,

       b)     From a range of offices,

       c)     Complex and simple products developed by one office and by two or more offices,

       d)     Developed in support of decisions that had to be made in both long- and short-time
              frames,

       e)     Both program-directed and core science, and

       f)      Peer reviewed through various mechanisms including letter, contractor and FACA
              review.

       The above criteria were used to evaluate the entries on the Agency's 1997 peer
review inventory. The Committee selected the following peer reviews as candidates for further study:

       a)     Guidance for PM 2.5 Speciation (OAR),

       b)     Integrated Atmospheric Network (Region 5),

       c)     Criteria for Requiring in utero Cancer Studies (OPP),

       d)     Regional Environmental Monitoring & Assessment: Galveston Bay (R6),

       e)     Cancer Expert System (OPPTS),

       f)      Dispersion Modeling of Toxic Pollutants in Urban Area (OAR),

       g)     Fossil Fuel Combustion Report to Congress (OSW), and

       h)     Tributyl Tin Draft Aquatic Toxicity (OW)

       The charge to RSAC with respect to its review of the peer review process at EPA was:

       a)     Is EPA peer reviewing the right products?

       b)     Are the peer reviews conducted appropriately?

       c)     Do the peer reviews make a difference?

       d)     Does EPA peer review all the science it uses (e.g., data  submitted from  parties outside
              the Agency)?

       e)     Does the RSAC have additional comments/guidance for EPA?
       We contacted the appropriate offices and asked them to provide background material and a
brief analysis to help the Committee address the charge questions. In response, the offices provided
such a large volume of relevant materials that the Committee was unable to conduct its review by this
mechanism.

       Thus, at its December 2000 meeting the Committee revisited the means by which it might
conduct the review and asked EPA for staff help to better focus and frame the issue. Dr. Kevin
Teichman, Associate Director for Science in the Office of Research and Development's Office of

-------
Science Policy (OSP), volunteered to help marshal the Agency's assistance. RSAC noted that, while
it originally considered eight candidate products in order to get a broad range of Agency products, it
would be best to start with a smaller number that could be reviewed in a reasonable time period
before committing to a larger effort.

       Noting that the key is to see how peer review comments improved the science used to inform
decisions, the Committee felt that the best way to proceed would be to look at regulations related to the
case studies and work back from the final products to the drafts that were peer reviewed, to see how
they were improved by the review. Dr. Teichman suggested that the case study candidates might
include: a) residual risk, b) sediment criteria, c) atrazine, d) mobile source rule, e) ecological guidelines,
f) risk characterization, and g) some regional product.

       At the March 6-7, 2001 RSAC meeting, Ms. Connie Bosma, Office of Science Policy Program
Support Staff Chief, suggested that, for each case study selected, RSAC look at: a) the peer review
comments, b) how the Agency handled the comments, c) how the document was changed to reflect the
comments, d) how the changed document impacted Agency decisions, and e) the lessons learned and
implications for the future.

       Dr. Teichman and Ms. Bosma identified 10 candidates for RSAC's evaluation: a) Report on
Bioaccumulation of Mercury, b) Human Health Methodology for Deriving Ambient Water Quality
Criteria, c) The Silver Study, d) Risk Assessment for Chlorinated Aliphatics Listing Determination, e)
Chemical Assessment for Atrazine,  f) Human Subjects Testing, g) Risk Characterization Handbook, h)
Science Algorithms of the EPA Models-3 Community Multiscale Air Quality (CMAQ) Modeling
System, i) Mercury Study Report to Congress, and j) National Ambient Air Quality Standards for
Particulate Matter.

       Of these, RSAC selected four as possible candidates for the review. These candidates appeared
able to be reviewed in the time RSAC had available and with the experience of its members. Ms.
Bosma arranged for Agency staff to brief the committee on these case studies: a) Report on
Bioaccumulation of Mercury, b) Human Health Methodology for Deriving Ambient Water Quality
Criteria, c) Chemical Assessment for Atrazine, and d) Risk Characterization Handbook.

       Following the briefings the committee selected three of the cases for their review: a) Report on
Bioaccumulation of Mercury, b) Human Health Methodology for Deriving Ambient Water Quality
Criteria, and c) Risk Characterization Handbook.

       As to the process of narrowing down to the final three cases, all were examples of major
documents developed at the science/policy interface. The RSAC determined early in the process that
it did not have the time or resources to review a representative sample. An earlier cut removed from
consideration reviews of papers being published in the peer-reviewed literature, because the RSAC
felt that these were probably not a problem area, since peer review is an integral part of the
publication process. The RSAC also decided not to review the mercury report, in part because the
SAB was part of that review. During deliberations about which products to review, RSAC considered
a certain level of familiarity with the documents as one of the criteria to facilitate its work at this
stage, since it was clear that, without this criterion, the effort would be a complex undertaking. The
type of peer review used was not a selection criterion; as a result, the case studies ended up, by
chance, having similar methods of peer review. The final selections were activities that were setting
precedents for the use of science in developing policy.

-------
1.4 Format of this Report

       Following this Introduction, the report provides specific responses to the questions in the charge
to the Committee (Chapter 2). In the attached Appendices are the individual assessments of three peer
review processes.

-------
                    2.0 RESPONSE TO CHARGE QUESTIONS


2.1 Are the reviews and resulting advice timely?

       RSAC reviewed the time sequence for each of these three peer review case studies. In these
three cases, external peer review was organized by a consulting firm under contract with EPA. This
contractor had general oversight of the process, including participation in the selection of peer
reviewers, compilation of preliminary comments, organization of the review workshop or meeting, and
preparation of the document summarizing the peer review comments. The time lines included the time
required for the peer review after the reviewers received the charge questions and the time required for
the Agency to generate a response to the peer review comments.  The three peer review case studies
involved products of different complexities and the process from issuance of the peer review charge
questions to completion of the review ranged in duration from 3 to 5 months. Given the characteristic
time constraints on the type of people typically asked to  serve on such review panels, the schedules
were adequate to provide an in-depth review and were reasonably timely in execution. In addition, the
Agency generally acted in a timely manner in having the peer review report completed and in
responding to the peer review comments.

       Typically, there is a more protracted period to manage the entire peer review process. The
Agency must allow sufficient time to select the contractors, develop the work assignment, select experts
together with the contractor, and to respond to the reviewers' comments. The whole process took up
to two years for one of the case studies evaluated in this study. However,  given the complexity of the
peer review products considered in this study, the timeliness of the entire  peer review process was
reasonable.

       Each of the case studies examined by RSAC used an external peer review that was organized
and facilitated by an Agency contractor. In this model, the Agency only provides the areas of expertise
that are needed to review the product and the contractor then suggests potential panel members to the
peer review manager. The Agency can then disagree with suggestions and ask for appropriate
changes. It is not possible for RSAC to judge the general timeliness of peer review for all products
due to the limited type of case studies evaluated (all Agency contractor-facilitated).

2.2 To what extent are the review comments responded to and acted on by the Program
Office/Region?

       In all three cases, RSAC concludes that the Agency did respond to the peer review comments
by making modifications to the draft document. However, there were some notable differences in the
way these changes were documented. There were written responses to the peer reviews for both the
Draft National Bioaccumulation Factors For Methylmercury and for the Draft Water Quality
Criteria Methodology for Human Health. However, there was no specific documentation of the
response to the reviews in the case of the Risk Characterization Handbook, where changes were
incorporated in the text without a separate specific answer to each  comment in a stand-alone
document.  The Peer Review Handbook indicates that the "peer review record must contain a
document describing the Agency's response to the peer review comments" (page 74, first paragraph).
The absence of such a document in the case of the Risk Characterization Handbook (RCH) meant
not only that it was necessary to review the draft and the heavily revised final document carefully in
order to assess the changes made in response to the reviewers' comments, but also that the
disposition of specific comments was not always clear.

-------
       It is important for several reasons that a written response to the reviews be available. First,
everyone interested in the issues being raised in the document should be able to ascertain how the
major criticisms of the document were dealt with.  The lack of specific responses to some peer review
comments raises the issue of how responses to peer review comments and suggestions are tracked and
documented.  Concise and easily available documentation on changes and the rationale for responses to
peer review recommendations would provide more transparency in the process. In addition, individual
reviewers can disagree on specific issues, so that there exist conflicting recommendations in some
cases, or the peer reviewers can be wrong in their assessments of an issue. It is obvious that there is a
process of decision about which recommendations to adopt and how to incorporate the changes that
are adopted in the revised document. There must be clear and systematic documentation of this
decision process, including an explicit rationale for the response to each peer review comment, and
there should be a systematic way of compiling such documentation.

       Thus, for the particular case of the RCH, the complete record for peer review should have
included: the Draft document, the charge for peer review, preliminary peer review comments, the
summary of the peer review Workshop, the specific responses to the peer review comments, and the
final version of the RCH.

       The availability of such documentation leaves a trail for the next set of staff who have to work
on the particular area or rule being reviewed. In many cases, a particular topic is left alone for a period
of time after documents have been reviewed and rules promulgated.  It is important for the next staff
members who have to pick up the threads years later to see what was said by the peer reviewers as
well as the rationale of the document authors for adopting or ignoring the suggested changes.  Such a
written record provides  a much better basis for revising and improving work products in the future
when new information becomes available.

       In those cases where a written record was provided to illuminate the changes made in response
to the reviews, it is clear that the reviews resulted in substantial changes to the documents as detailed in
the next section.  In the case of the bioaccumulation factors for mercury, the Program Office (Office of
Water - Health and Ecological Criteria Division) responded in an appropriate manner to the peer
review comments as summarized below. Most members of the peer review panel indicated that
derivation of single-value trophic level-specific national BAFs for methylmercury that would be
applicable to all waters of the U.S. under all conditions would be difficult,  if not impossible.  The peer
review panel recommended developing BAFs on a more  local or regional scale, if not on a site-specific
basis. In response to the peer comments on the need for site-specific BAFs (which were central to the
whole review), EPA did not change the Draft National Reports, but concluded that trying to derive a
universal BAF for mercury/methylmercury is problematic, which would call into question the validity of
water quality criteria derived using the national BAFs. Hence, the Agency decided to make a major change in
the way the water quality criterion is to be  derived for mercury.  EPA decided to use a fish tissue
residue-based approach to setting water quality criteria for mercury (i.e., a given concentration in fish
and shellfish shall not be exceeded). EPA  published this revised approach for establishing water quality
criteria for mercury in January 2001. The approach includes a site-specific tissue residue measurement
as well as the potential to develop a site-specific aquatic BAF if the tissue criterion is exceeded.  This
reflects a significant change in  the regulation resulting from the peer review process.

       In the case of the Draft Water Quality Criteria Methodology for Human Health, the
Agency appeared to respond appropriately to most comments raised by the peer review panel. The
major issues in which this was not done were those issues where agency-wide consensus had not been
reached (e.g., the proposed cancer risk assessment guidelines, in large part). Less frequently, the
Agency had to point out factual errors in the peer review report - for example, some assertions that the
risk assessment methodology was done differently under the Safe Drinking Water Act.  Some



-------
disagreements seemed to arise as a result of falling into the policy arena rather than from questions of
science. For example, the methodological approaches to the RfD and PdP/SF were essentially the same
even though they have different philosophical bases.  This fact was raised by the review, but the Agency's
response was couched in terms that different endpoints would occur at different doses. The answer
was true, but did not address the question being posed. Despite the fact that disagreements were
identified and some responses did not directly address the issue, the Agency appeared very responsive
to outside comments on the development of Water Quality Criteria Methodology for Human Health.

2.3 Do the peer reviews make a difference in the quality of the product?

       In all three case studies, RSAC found that the peer reviews had substantial effects on the final
product. In the case of the Ambient Water Quality Methodology for Human Health, the peer review
comments were responded to individually in a single document.  The suggestions were
substantive and required a considered response. In some cases changes were directly incorporated
into the product, while in others the product was edited in ways that made the Agency's intent much
clearer.

       In the case of mercury bioaccumulation, the whole approach of developing bioaccumulation
factors that would be applied nationally was abandoned in favor of developing data that address the
processes that may be operative at particular sites.  In effect, tolerances were set at the level of mercury
that would be allowed in fish flesh. Site-specific data on bioaccumulation are then used to back-calculate
to a regulatory action.  Therefore, the peer review had a major impact on the guidance for
determining bioaccumulation factors.
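       The back-calculation described above can be sketched numerically. The sketch below is
illustrative only: the function name and all numeric values (tissue tolerance, BAF) are hypothetical
assumptions, not figures from the reviewed documents.

```python
# Hypothetical sketch of back-calculating a water concentration from a
# fish tissue tolerance using a site-specific bioaccumulation factor.
# All values are illustrative, not EPA criteria.

def back_calculate_water_criterion(tissue_criterion_mg_per_kg, site_baf_l_per_kg):
    """Back-calculate a water concentration from a fish tissue criterion.

    A bioaccumulation factor relates tissue to water concentration
    (BAF = C_tissue / C_water), so C_water = C_tissue / BAF.
    """
    return tissue_criterion_mg_per_kg / site_baf_l_per_kg

# Illustrative inputs: a tissue tolerance of 0.3 mg/kg methylmercury and
# a site-specific BAF of 2.7e6 L/kg measured for one water body.
water_conc = back_calculate_water_criterion(0.3, 2.7e6)
print(f"{water_conc:.2e} mg/L")
```

       Note that a higher measured site-specific BAF back-calculates to a lower allowable water
concentration, which is why the reviewers' emphasis on site-specific data matters to the resulting
regulatory action.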

       The Risk Characterization Handbook was substantially improved via responses to its peer
review. In this case, RSAC was not supplied a document that detailed the nature of the Agency's
responses.  Nevertheless, the Committee confirmed that appropriate changes were made, and the clarity
of the revised handbook was much improved in response to the peer review.

       RSAC strongly  supports the establishment of a consistent system for capturing both the
reviewers' comments and Agency responses to these comments.  This should be part of the institutional
memory of the Agency. Virtually everything the Agency does will be subjected to update and revision
in the future.  These comments not only establish that the process was followed properly, but will serve
as an invaluable aid to staff updating this guidance.  Essentially, these documents provide insights into
the thinking that was done in prior iterations as a starting point for revisions. Frequently, the thinking
behind the guidance is as important as the guidance itself because it can make the object or purpose of
the guidance much clearer.

       The peer review activities examined by RSAC were limited to examples that were managed by
a contractor. This was not a deliberate choice by the committee; projects were selected primarily on the
basis of the importance of the product in the development of policy. Some comments on the
contractor-managed process are presented below. However, the committee points out that its conclusions
may not be fully applicable to peer reviews managed in a different manner.

       The general conclusion of our review is that peer review had a substantial impact on the quality
of the products that were selected. In some cases specific improvements in the use of science resulted.
In other cases, the comments prompted the Agency to clarify the basis for definitions and guidance in
the document. There were also areas in which the document could not be finalized
because the final guidance needed to incorporate information from other activities of the Agency (e.g.,
the proposed revisions in the Agency's Cancer Risk Assessment guidelines).  These collective inputs
resulted in substantial improvements in the final products.

-------
2.4 Does the RSAC have additional comments/guidance for EPA on how to improve the
effectiveness of the peer review process?

2.4.1 General Findings and Recommendations

       a)     Peer Review is being extensively conducted by the Agency and it is clearly making a
              difference in those examples that RSAC examined.

       b)     RSAC found no examples of lack of independence of the reviews examined. RSAC
              encourages the Agency to continue to take necessary steps to ensure that even the
               potential appearance of a lack of independence is avoided. In the three case studies
               examined, RSAC observed that the contractor appeared to act independently in
               examining the credentials of candidate peer reviewers and in selecting the peer
               reviewers finally chosen.

       c)     RSAC reviewed guidance documents in this study. RSAC recommends that the
              Agency evaluate and systematically document how Peer Review is being employed to
              address the use of science in the rules and decision-making. This type of review is at
              the interface of science and policy and therefore is different from the normal scientific
              peer review process.

       d)     An area for potential improvement is the need to develop a uniform process for
              collecting, archiving, and systematically documenting information on responses to peer
              review comments so that disposition of reviewers' comments is fully transparent and the
              documentation on responses to peer review can be used in the future by those outside
              of the original peer review process.

       e)     While this was not the focus of this review, the committee observes that there are
               important products that are not being peer reviewed (e.g., technical resource documents
               for MACT standards, the TRI lead rule, and residual risk documents).

       f)      RSAC's review was limited to a small number of products. There is a need for continual
               review of the robustness and effectiveness of the peer review process to ensure that the
               process is effective and making a difference in a timely manner. RSAC recommends that
               the Agency develop a continuing in-depth analysis to fully examine trends in the use of
               peer review in EPA, to evaluate the impacts of the peer reviews on decision-making,
               and to explore additional opportunities for improving the benefits of peer review at
               the Agency.

-------
      Appendix A - Water Quality Criteria Methodology for Human Health

       Development of the Ambient Water Quality Criteria has been a key activity within the Water
Programs of the Agency.  The prior draft guidelines for human health methodology were published in
1980. Consequently, the new draft guidelines updated over 20 years of environmental research and
understanding of problems in water and changes that have occurred in the environmental policy arena.

       The RSAC review of the peer review of the methodology was guided by the questions included
in the draft charge dated 2/28/01.

A-1. Were the reviews and resulting advice timely?

       The time line for this review has the following key elements:

       a)     Draft Technical Support Document (TSD) and Federal Register Notice (FRN) July
              1998.

       b)     Peer review initiated in April,  1999,

       c)     Peer review workshop, May  17-19, 1999,

       d)     Peer review workshop report, September  1999,

       e)     Response to Peer Review Comments, August 2000, and

       f)      Guidelines issued October 2000.

       Considering the complexity of these guidelines, the time line was reasonable. In its charge to
the Peer Review Workshop, the Agency indicated its intent to issue the guidelines by the end of 1999.
At the same time, however, the Agency had indicated the issuance of the methodology would not occur
until the cancer risk assessment guidelines had been finalized. To our knowledge, those guidelines
remain in draft form. RSAC was unable to identify any external drivers on the development of the
guidelines in the documentation supplied. The  peer review report was completed within 4 months of the
Peer Review Workshop. RSAC finds this advice to be quite timely,  considering the complexity of the
Water Quality Methodology for Human Health.

A-2. Do the peer reviews make a difference?

       The peer review identified a number of major and minor points on the draft human health
methodology TSD and FRN.  The Agency published a formal response that makes it relatively easy to
follow the discussion. It is not the intent of this report to detail the merits of any debate that occurred,
but to see how the Agency responded. It is probably important to point out that the peer review
committee created some confusion in the way it addressed problems. For example, the discussion of
the peer review panel on issue 1 of the cancer section (5.2.1) was largely directed at policy issues
rather than science issues.  The peer review group also discussed issues under categories different from
those of the Agency without making it clear why that was done.

       Selected points and responses are identified under four categories: accepted peer review
recommendations, partial acceptance with explanation, disagreement with recommendations with
explanation, and dismissed and/or simply ignored points made in peer review.  It should be noted that a

-------
collection of individual comments was provided to the Agency, but these were not included in the
material provided and were, in any case, considered beyond the scope of the present review.

1. Accepted peer review recommendations:

       a)     The peer review committee indicated that EPA needed to develop guidance for when
              to use the NOAEL/LOAEL, benchmark or categorical regression methodologies for
              RfD development. The Agency concurred and indicated that they were developing
              such a document and that it was undergoing internal review (5.2.3).

       b)     The peer review workgroup discouraged the use of less-than-90-day data and
               recommended the addition of an extra UF of 10 where its use was explained and
               necessary.  The Agency concurred with this recommendation (5.2.3).

       c)     The Agency accepted the recommendation that there be a case-by-case consideration
              of a non-threshold mode of action for certain chemicals that cause noncancer effects
              when deriving RfDs (5.2.3).

       d)     The Agency accepted the Workgroup's recommendations that inhalation and dermal
              exposure should be considered in deriving criteria and their suggestions on how to
              derive criteria from such considerations.  They also indicated that they would
              acknowledge the guidelines developed in states.

       e)     The peer review workgroup suggested that the Federal Register and Guidance Document
               equations be shown at the same level of detail and that a list of contaminants that occur in
               fish tissues be provided.  The Agency accepted the suggestion and will work on a list
              after the guidance has been finalized.

       f)     There were three recommendations identified under section 6.5 of the review comments
              that related to data reliability, encouraging States and Tribes to do the best that they
               can, and to consider risks to individuals as well as populations. In general the Agency
               agreed with the comments, although there were nuances in the Agency's comments and in
               the reviewers' original comments that were not captured by the other "side".

2. Acceptance or partial acceptance with explanation:

       a)     The peer review committee expressed concern that there were separate methods for
              use of pharmacokinetic modeling for noncarcinogens vs. linear or nonlinear
              carcinogens. The main issue was an apportionment of the UF of 10 between
              toxicokinetic vs. toxicodynamic variables. The committee stated that the toxicodynamic
              factor should be independent of the toxicokinetic factor. Moreover, they
              recommended it be applied to both low-dose linear carcinogens as well as non-
              carcinogens. The Agency explained that this was being addressed elsewhere in the
              Agency where there were attempts to harmonize assessment approaches for different
              endpoints (5.2.2).

       b)     Relative source contribution prompted a lengthy discussion on the part of the peer review
               workgroup and an extensive discussion from the Agency. The Agency agreed to place
               more explicit information in its decision tree. However, it also pointed out that the
               Agency is limited in its ability to coordinate the RSC with other agencies, and it
               indicated that it would attempt to clarify the discussion of RSC within the document.


                                             A-2

-------
               A major point of disagreement was the fact that incremental risk (because of the use of
               the linear extrapolation methods) is used for cancer, versus the different treatment required
               if thresholds are involved, as discussed elsewhere in this review (6.2).

       c)      Extensive discussion focused on use of USDA data and aspects of data use (e.g.,
               uncooked data) from the exposure handbook and extended into considerations of
              minority and other groups in the population that might have different fish consumption
              habits. This included discussion of the combined body weight water consumption
              parameters to better capture exposure of children.  There was agreement in principle,
              but extensive technical  discussions in the response. While the Agency did not accept all
              these recommendations, their response was reasoned and documented (6.2).

       d)      The peer review workgroup brought up several issues on Monte Carlo and other
               statistical techniques, with which the Agency agreed. However, the Agency pointed out
               that, by basing the criteria on the population at risk, it was confident that its general
               procedures were sufficiently conservative to protect the population as a whole.  The
               Agency also pointed out that the CWA requirements and goals do not make development
               of reasonable maximum exposure or maximally exposed individuals useful (essentially
               they permit dischargers) (6.3).

       e)      The peer review group seemed to suggest that EPA needs to provide special
              procedures for the States and Tribes to create water quality standards that do not
              require Federal Resources.  The Agency indicated this was their guidance and that the
              States and Tribes were not constrained in their ability to develop AWQC to reflect
              local and regional conditions (6.3).

       f)      Some discussion arose  related to a method for aggregating exposure from various
              sources as opposed to route specific margin of exposure approaches. There was
              particular concern about cumulative exposure to several chemicals with similar toxic
              endpoints.  The Agency provided a list of publications relevant to this point and pointed
              out the difficulties of developing meaningful guidance at this point in time. (6.4)

       g)      There was significant discussion and response related to the question of who (subgroup
              or percentile) the Agency is trying to protect with its AWQC. In general the Agency
              acknowledged the point, but noted the difficulty of establishing an accurate mean risk
               that has any real meaning. This is complicated by a lack of information on many of the
               meaningful distributions within the population, of which exposure is only one variable,
               needed to estimate cases of disease. They quote the Guidelines for Exposure Assessment
               as saying that "the estimate's value lies in framing hypothetical risk in an understandable
               way rather than any literal interpretation of the term 'cases'."  Additional discussion
               ensues around questions of whether 10^-4 is protective of human health for those who
              have fish consumption at the 95th percentile.  From the SAB's perspective, these are
              important policy discussions for which the scale is in the eye of the beholder and are not
              appropriate topics for scientific debate unless there were more explicit attempts at
              garnering public opinion, as opposed to that of special interests with political agendas.
               This is not to say that there are not scientific inputs into the process and that there
              should be  discussion of the most appropriate data.  It is simply stating that the risk
              targets selected are a policy decision that is unlikely to be scientifically tested for
              accuracy in the vast majority of cases.

3. Disagreement with recommendations with explanation:



-------
       a)      It was noted that the RfD and PdP/SF equations were operationally the same equation.
                The Agency explained that it was important to separate these procedures as the RfD
               always refers to the most sensitive non-cancer endpoint. In the application of the
               PdP/SF approach it must address the tumor site under consideration. Therefore, the
               two points may not be the same (5.2.1).

       b)      The peer-review team was concerned that the relative source term was included in the
               RfD range, but not in the linear cancer model (sections 5.2.1 and 5.2.2). This was
               clearly explained in the text of the document and the response to the comments.  The
               main reason was that this program addresses incremental risk of ambient water
               contamination, not an apportionment of risk. In the case of threshold effects it is
               necessary to be certain that total exposure does not exceed the RfD, essentially a
               surrogate for a threshold. There was also the issue that the linear extrapolation model
               was thought to be sufficiently protective to not require this correction.

       c)      The peer review workgroup did not think the state of the science supported a
               quantitative adjustment for severity of effect of an RfD. EPA replied that they were
                considering the situation where the mode of action was known and the sequence of events
                in the development of an adverse health effect.  It was not apparent that the review
               workgroup was addressing this problem (5.2.3).

       d)      Methods proposed for non-linear carcinogens do not consider individual variability.
               EPA explains that it has adopted the NRC (1994) recommendation that "the
                conservatism inherent in a linear-no-threshold model obviates the need for any explicit
                consideration of interindividual variability." Extra factors might be considered if a special
                population is under consideration.

       e)      Methods for carcinogens and non-linear carcinogens do not estimate risk.  The Agency
               explains that this can be estimated based on any distribution of actual or potential
               environmental exposures for non-linear dose extrapolations.  The Agency again
               explained that they would conform to policy issues made  on an Agency wide basis in
               this regard. The issue of threshold assumption in one case vs. the other is also
               described in some detail on page 16 of the Agency's reply dealing with incremental risk.

4. Dismissed or ignored comments:

       a)      The peer review workgroup commented that the application of an RfD range, rather
                than a default point estimate, arbitrarily set at a half-log range on either side of the RfD,
                had no basis (5.2.2).  The response admitted that this lacked a basis in data, but the
                Agency maintained the RfD range concept because it allowed some flexibility for
                site-specific or contaminant-specific situations.  It would have been better to simply
                indicate that this was a policy call.

       b)      The review workgroup indicated that the body weight ratio raised to the 3/4 power
                should be applied to the RfD derivation as well as to the cancer and PdP/UF
                approaches (p. 5-11).  The Agency did not acknowledge this point.
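       The body weight ratio raised by the workgroup refers to allometric (BW^3/4) scaling, under
which total dose is assumed to scale with body weight to the 3/4 power. A minimal numeric sketch
follows, using illustrative species weights and doses that are not taken from the reviewed documents:

```python
# Illustrative allometric (BW^3/4) scaling sketch; the example dose and
# body weights are assumptions, not values from the reviewed documents.

def human_equivalent_dose(animal_dose_mg_per_kg, bw_animal_kg, bw_human_kg):
    """Scale a per-kg animal dose to a per-kg human-equivalent dose.

    If total dose scales as BW^(3/4), then dose per kg of body weight
    scales as BW^(-1/4), so HED = dose * (BW_animal / BW_human) ** 0.25.
    """
    return animal_dose_mg_per_kg * (bw_animal_kg / bw_human_kg) ** 0.25

# Example: a 10 mg/kg dose in a 0.25 kg rat scaled to a 70 kg human.
hed = human_equivalent_dose(10.0, 0.25, 70.0)
print(f"{hed:.2f} mg/kg")  # roughly a 4-fold reduction from the animal dose
```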

-------
A-3.  To what extent are the review comments responded to and acted on by the Program
Office/Region?

       In general, the Agency appeared to respond appropriately to most opinions raised by the peer
review panel. The major issues in which this was not done were those issues where agency-wide
consensus had not been reached (e.g. the proposed cancer risk assessment guidelines, in large part).
Less frequently, the Agency had to point out factual errors in the peer review report - for example,
some assertions that the risk assessment methodology was done differently under the Safe Drinking
Water Act.

       Some disagreements seemed to arise as a result of falling back into the policy arena. For
example, comment 3a above was directed at the methodological approach to RfD and PdP/SF
approaches and the Agency's response was couched in terms that different endpoints would occur at
different doses. The answer was true, but did not address the central question.

       Conversely, the peer review workgroup occasionally did not address an issue in the same
framework as the Agency.  In issue 3c, the Agency explained that it was considering severity in the
context of events progressing from detectable molecular disturbances to the actual health effect,
recognizing that the doses at which these parameters might be detected could differ and that this
had to be taken into account. The peer review workgroup seems to have interpreted the question as
choosing between endpoints (e.g., liver toxicity vs. cardiovascular effects vs. mild rashes).

       Despite the fact that disagreements were identified and some responses did not directly address
the issue, the Agency appeared very responsive to outside comments on the development of Water
Quality Criteria Methodology for Human Health. The peer review was organized by a contractor and
involved a finite number of experts familiar with the field. Considering the complexity of this program,
this approach is much more focused on technical issues than would have been the case if broader
involvement had been sought.  Thus, it provides a necessary adjunct to the broader Public Comment
input into the process. The RSAC is of the opinion that the process utilized for peer review input was
very appropriate.

A-4. General Observations  and Recommendations and Next Step

       The contractor-run peer review process clearly can be an efficient method to review work
products.  However, it raises the question of the independence of the review.  Since the individual
who is ordering the review has input into the make-up of the panel, there is the possibility of excluding
individuals who are technically qualified but are known to have opinions that may differ from those
of the Agency.  The problem is that the contractors do not always have the expertise to identify all of
the competent reviewers and to build a panel with a balance of views in the case of controversial or
uncertain issues.

       There are two possible ways to resolve this potential problem.  First, an independent group
within the Agency could be established and given responsibility for the review process.
In the same way that the National Center for Environmental Research (NCER) externally reviews a
wide range of proposals for scientific research, an analogous group could be established that would
have the expertise to work with external contractors to build competent panels without any potential for
inadvertent bias. Alternatively, the Agency could work to build a pool of contractors with higher levels
of expertise who would thus be able to function independently from the Agency office that is
ordering the review.

-------
  Appendix B - The Draft National Bioaccumulation Factor For Methylmercury


B-1. Background Information On Review

       a)     Technical Charge for Expert Peer Review:  The peer review group for the Draft
               National Bioaccumulation Factors for Methylmercury was instructed that the
               review would be performed in accordance with the EPA Peer Review Handbook.

       b)     EPA requested comments on: (1) Are you aware of any useable mercury-specific
              bioaccumulation data that may not have been identified by the literature search
              strategy? (2) Of the data used in the document, are there any that you feel are
               inappropriate for deriving a mercury BAF? (3) Given the lack of data, is the use of
               translators to convert one form of mercury in water to another (e.g., total mercury to
               total dissolved methylmercury) appropriate? (4) Given the available data, is it appropriate to
              combine the trophic level and water-body type (e.g., lentic and lotic) specific BAFs into
              single trophic level-specific BAFs, or should the BAFs remain separated by water body
              and trophic level? (5) Does the uncertainty discussion adequately identify and describe
              major sources of uncertainty in the BAFs that would be useful to risk managers?

       c)     EPA also provided RSAC with a separate 2-page document entitled "Technical
              Charge for Expert Peer Review." It is not clear that these questions were for the same
              peer review.  The charge questions related to bioaccumulation were identified as
              follows: (1) The recommended methodology guidance to estimate BAFs (as opposed
              to BCFs) as it relates to the hierarchy of four methods used to derive BAFs; (2) the
              appropriateness of procedures for estimating consumption-weighted default lipid value,
               the equation to derive the freely dissolved fraction of a chemical (including estimates of
               K_DOC and K_POC) and the choice of food web structures used to calculate food chain
               multipliers; (3) available approaches and data to account for metabolism in the
              determination of a BAF value, and to predict food chain multipliers; (4) Any other
              models that EPA should consider for inclusion in the revised methodology for estimating
              bioaccumulation; and (5) whether the draft BAF methodology is an improvement over
              the 1980 methodology and, in particular, whether it is likely to be more predictive of
              bioaccumulation.
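       Two of the quantities named in the charge questions above, a chemical translator and the
freely dissolved fraction, can be sketched with simplified partitioning relationships. The function
names, the simplified form of the equations (no correction factors), and all parameter values below
are assumptions for illustration, not the Agency's actual methodology:

```python
# Simplified, illustrative partitioning sketches; parameter values are
# invented, and real guidance may include additional correction factors.

def freely_dissolved_fraction(k_poc, poc_kg_per_l, k_doc, doc_kg_per_l):
    """Simplified freely dissolved fraction of a chemical in water:
    f_fd = 1 / (1 + K_POC*[POC] + K_DOC*[DOC]), with partition
    coefficients in L/kg and organic-carbon concentrations in kg/L."""
    return 1.0 / (1.0 + k_poc * poc_kg_per_l + k_doc * doc_kg_per_l)

def apply_translator(total_mercury, translator_fraction):
    """A chemical translator converts one measured form to another,
    e.g. total mercury to dissolved methylmercury, as a simple fraction."""
    return total_mercury * translator_fraction

# Invented example values.
f_fd = freely_dissolved_fraction(k_poc=1e6, poc_kg_per_l=5e-7,
                                 k_doc=1e5, doc_kg_per_l=3e-6)
print(round(f_fd, 3))  # fraction of the chemical freely dissolved
```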

B-2. Are the reviews and resulting advice timely?

       Time sequence for the various publication / peer review steps:

       a)     May 10, 2000 - Draft Report, Section I National Bioaccumulation Factors For
               Methylmercury and Draft Report, Section II Default Chemical Translator for Mercury
              and Methylmercury,

       b)     June, 2000 - Charge to peer review committee,

       c)     August 23, 2000 - Peer Review Comments Report prepared for U.S. EPA by Versar,
              Inc., and

       d)     January 3, 2001 - Response to Peer Review comments on the Draft National
               Bioaccumulation Factors for Methylmercury, prepared by Erik Winchester (U.S. EPA),
               to Criterion File: Water Quality Criteria for Methylmercury.



-------
       The peer review panel recommended developing BAFs on a more local or regional scale, if not
on a site-specific basis. In response to the peer comments on the need for site-specific BAFs (which
were central to the whole review), EPA did not change the Draft National Reports, but concluded that
trying to derive a universal BAF for mercury/methylmercury is problematic, which would call into
question the validity of water quality criteria derived using the national BAFs. Hence, the Agency decided to make
a major change in the way the water quality criterion is to be derived for mercury. EPA decided to use
a fish tissue residue-based approach to setting water quality criteria for mercury (i.e., a given
concentration in fish and shellfish shall not be exceeded). EPA published this revised approach for
establishing water  quality criteria guidelines for methylmercury in January 2001.  The approach includes
a site-specific tissue residue measurement as well as the potential to develop a site-specific aquatic
BAF if the tissue criterion is exceeded. This reflects a significant change in the regulation resulting from
the peer review process.

       RSAC's review of the time sequence of the Draft Bioaccumulation Factor Report, the Peer
Review, and the response to Peer Review Comments Report indicates that the Agency acted in a timely
manner in having the report peer reviewed and in responding to the peer review comments.

B-3. Do the peer reviews make a difference?

       a)     Was peer review conducted for the quality, adequacy and completeness of the
              data developed and obtained from the literature for the study and Agency
              decisions? And, did the review consider the scientific and technical character of
              the work products?

       The first question posed to the peer review group was: are you aware of any useable mercury-
specific bioaccumulation data that may not have been identified by the literature search strategy?
Additionally, the Agency asked the peer review group to comment on whether any of the data used in
the document were inappropriate for deriving a mercury BAF. Taken together, these questions indicate
that the Agency was looking to determine the completeness and accuracy of the data obtained from the
literature and used in making its assessment.

       b)     When was the peer review conducted?

       The peer review was conducted in August 2000 and followed the development of two reports
entitled: Draft Report, Section I, National Bioaccumulation Factors for Methylmercury and Draft
Report, Section II, Default Chemical Translator for Mercury and Methylmercury. This occurred prior
to the Agency making a final decision as to how BAFs should be derived for the purpose of setting a
national water quality criterion for mercury and methylmercury. This was the appropriate time-period
to conduct a peer review.

       c)     Did the peer reviewers have adequate time to conduct an in-depth review?

       Information on the amount of time the reviewers had to review the reports and prepare
comments was not provided.

       d)     How were the peer review results utilized to improve EPA's decision?

       The peer review process assisted the Agency in making a proper assessment of methods and
data available to establish bioaccumulation factors for aquatic organisms for mercury and
methylmercury. The BAFs are utilized in establishing water quality criteria for mercury and hence are
very important and have national significance.  The Agency used the peer review process to assure that
appropriate methodology was being employed before setting water quality regulations for mercury.

B-4. To what extent are the review comments responded to and acted on by the Program
Office/Region?

       The Program Office (Office of Water - Health and Ecological Criteria Division) responded in
an appropriate manner to the peer review comments as summarized below.  Most members of the peer
review panel indicated that derivation of single-value trophic level-specific national BAFs for
methylmercury that would be applicable to all waters of the U.S. under all conditions would be difficult,
if not impossible.

-------
           Appendix C - Risk Characterization Handbook Peer Review

C-1. Introduction

       The Risk Characterization Handbook (RCH) was written to provide guidance to Agency
personnel on how to collect, evaluate, and utilize the available qualitative and quantitative information
to characterize health and ecological risks. The intention of the Handbook is to provide an Agency-
wide document for risk characterization across the EPA. The peer review coordinator of this work
product was Dr. Dorothy Patton who was then Executive Director of the Science Policy Council at
ORD's Office of Science Policy.  She has since retired from the Agency. In order to obtain a
perspective on the review process for this document, we contacted her and she provided useful
information. The documents reviewed included:

1)     Draft Risk Characterization Materials Prepared for Peer Review Scheduled for March 24-25,
       1999 - EPA/600/R-99/025, Office of Research and Development, Cincinnati, Ohio (March
       1999),

2)     Workshop for Peer Review of the Draft EPA Risk Characterization Guidance and Case
       Studies - Premeeting Comments, Alexandria, Va. (March 24-25, 1999),

3)     Summary Report, Peer Review Workshop - Draft EPA Risk Characterization Guidance and
       Case Studies, USEPA, Office of Science and Policy, Office of Research and Development,
       Washington D.C., Prepared by Eastern Research Group, Inc., Contract No 68-C-98-1148,
       Work Assignment No. 99-03 (May 21, 1999),

4)     Science Policy Council Handbook - Risk Characterization, EPA 100-B-00-002, Office of
       Science and Policy, Office of Research and Development, Washington, DC, (December
       2000), and

5)     Assorted documents (memoranda, electronic communication, review notes, etc. dated from
       July, 1999 through December 2000) provided by Dr. Jack Fowle.

       The Draft Risk Characterization Materials Prepared for Peer Review Scheduled for March 24-
25,  1999, contains the Draft Risk Characterization Handbook including Case Studies, and the charge
to the peer reviewers.  The Draft Handbook was the culminating product of cross-Agency efforts since
1995, and it built upon the Agency's 1995 Risk Characterization Policy, the 1995 Risk
Characterization Implementation Plans prepared by the Offices/Regions, and prior definitions and
conceptual frameworks proposed by the NAS and others.

       The Draft Handbook was peer reviewed in a workshop held March 24-25, 1999, in
Alexandria, VA. The review was organized by a contractor, Eastern Research Group, Inc. Prior to the
workshop, the thirteen reviewers were requested to provide individual pre-meeting comments on the
Draft Handbook guidance and each of four case studies. Separate charge questions were issued for
the Draft Handbook guidance and each specific case study. All reviewers were requested to comment
on the Draft Handbook guidance. Subpanels of three reviewers each were asked to address questions
about the consistency of the specific risk characterization elements for each case study with the guiding
principles of transparency, clarity, consistency and reasonableness (TCCR) set forth in the Draft
Handbook guidance.  The pre-Workshop comments were compiled in a single document that was
made available to Workshop participants.

-------
       During the Workshop, Dr. Patton provided three additional charge questions for the reviewers
that focused on the utility of the guidance and case studies both within and outside the Agency.  The
contractor prepared the Summary Report of the discussions during the peer review Workshop, the
pre-meeting comments, and comments from Workshop observers; this document was published on May
21, 1999. The Draft Handbook was revised and published in final form in December 2000.

C-2. Are the reviews and resulting advice timely?

       In this case, the decision was made to use an external panel review run by an Agency
contractor. In this model, the Agency specifies the areas of expertise needed to review the product.
The contractor then suggests panel members to the peer review manager, and the Agency can disagree
with the suggestions and ask for changes. Dr. Patton described the issues
related to this choice including the ability of the external consultant panel to provide faster turnaround on
the review.  Thus, as best we can tell, there was a timely review of this product.

       The review of the Draft Risk Characterization Handbook was held on March 24-25, 1999.
The draft document was sent to the review panel prior to the meeting, but the documentation does not
indicate the date.  The pre-meeting comments on the draft document were made available to the
Workshop participants, but there is no information about the date either. However, given the typical
time constraints on the type of people asked to serve on such expert panels, and considering the
detailed pre-meeting review comments provided by the reviewers, the timeline for mailing information  to
reviewers, receiving and compiling pre-meeting comments in a single document, announcing the
Workshop in the  Federal Register, and holding the review meeting appears to have been adequate to
provide an in-depth review. There was a two-month window for the preparation of the Workshop
comments, which also is reasonable.

       The Draft document was revised based on the external peer review comments and with further
input from the Program and Regional Offices. A memorandum requesting comments from the offices on
the revised document (Memorandum from Drs. Jack Fowle and Kerry Dearfield; Subject: Review of
Revised Risk Characterization Handbook, Date: August 17, 2000) suggests that the revision of the
draft may have taken almost fifteen months. The available documentation also indicates that comments
from the program offices were received as late as November, 2000, just one month prior to the
publication of the final RCH.  Thus, internal and external peer review of the draft RCH may have added
two (plus) years to the publication of the final document. Although this length of time would appear to
be excessive, it is probably justified by the major revisions performed on the draft document.

C-3. Do the peer reviews make a difference?

        Was peer review  conducted for the quality, adequacy and completeness of the data
       developed and obtained from the literature for the study and Agency decisions? And, did
       the review consider the  scientific and technical character of the work products?

       The RCH is a "how to" document, so the charge questions on the guidance component of the
document focused on issues of clarity and usefulness to risk assessors.  The responses to the case
studies charge questions provided significant input on the scientific and technical aspects of each case,
not just on the consistency of each case with the guidance principles.

        When was the peer review conducted?

        See answer to Question 1 above.

-------
       Did the peer reviewers have adequate time to conduct an in-depth review?

       As stated earlier, the time allowed to the reviewers cannot be determined from the  available
documentation. The contractor would have set the time frame for the peer reviewers to provide comments.
As stated earlier, the comments provided by the review panel were extensive and detailed, so the time
allowed to them appears to have been adequate.

       How were the peer review results utilized to improve EPA's decision?

       The Draft version of the handbook was heavily edited and revised as recommended by the peer
review panel.  This is most evident in the major revisions made to the guidance component of the
document.  Specific major recommendations and the corresponding changes included reorganization,
addition of definitions and improved clarity in definitions, inclusion of uncertainty and bias considerations
as part of risk characterization, more balanced consideration of human health and ecological risk
characterization elements, increased direction on how to perform risk characterization, addition of a
typology for risk characterization, and others. The revisions were highly responsive to the peer
reviewers' comments and suggestions. The final version of the document is much clearer than the draft
and contains the missing elements pointed out by the reviewers, so peer review made a significant
difference in the quality of this particular product. To the extent that the peer reviewers were experts in
risk assessment, including its risk characterization component, it can be concluded that the final version
of the Handbook is a better tool for risk assessors than its predecessor. Since the names of the peer
reviewers would be known to scientists in risk assessment and related fields, and since the materials for
peer review and the pre- and post-Workshop comments are public documents, it would have been
useful to include the affiliations of the peer reviewers in each of the documents summarizing comments.

       The case studies were also revised following many of the peer review recommendations,
including both content and format. However, compared to the revisions in the guidance, there is an
apparent lower rate of adoption of suggested changes. In part this may be due to the nature of a case
study as compared to the user guidance.  The comments that were not specifically addressed are not
major issues, but lack of documentation on responses to each specific comment raises the question of
the transparency of the rationale behind the adoption of recommended changes.  For example, in the
case of the Waquoit Bay Watershed risk characterization, there were some specific comments from
reviewers that were not incorporated in the final version of the case study (for example, a better
description of the nature of the chemical emissions from a military facility that is a Superfund site located
in the watershed). One of the reviewers even recommended that this case study be replaced by another
with a more complete risk characterization, and provided three examples that could be used as
substitutes. Some of these concerns expressed by reviewers appear to have been addressed by
addition of a brief statement under the Conclusions section to the effect that this case study should be
considered as  a "problem formulation" and that "a more complete ecological risk assessment is
forthcoming". The available documentation does not provide the rationale for the disposition of the
peer review comments in this case.

       The apparent lack of specific responses to some peer review comments raises the issue of how
the Agency's  responses to peer  review comments and suggestions are tracked and documented. While
the Peer Review Handbook does not specify the format of the response to peer review, it states that
the peer review record should contain a document of responses to peer review comments (i.e., "The
peer review record must contain a document describing the Agency's response to the peer review
comments," Peer Review Handbook, page 74, first paragraph). Lack of a concise and more easily
traceable documentation on the  rationale for responses to peer review comments and recommendations
diminishes the transparency of the peer review process. It is obvious that there is a process of decision
making about which recommendations to adopt and how to incorporate the adopted changes in the
revised document. There may be also some documentation on this rationale (e.g., personal notes,
memoranda, etc.), but there is no apparent systematic way of compiling such documentation on the
rationale for decisions on peer review comments.  While the process requires final approval by the peer
review manager, and this approval is awarded only after the peer review comments have been
satisfactorily addressed, it would be useful to have a document that summarizes the responses to peer
review comments following the model of an author's response to peer reviewers' comments on a
manuscript accepted with revisions for publication in a journal. Although the preparation of such a
document would appear to be an added burden, it is really just documentation of the discussions,
rationale, and final decisions [e.g., 1) agree with peer reviewer's comment and have adopted the
recommendation by...; 2) agree with peer reviewer but cannot adopt the recommendation at this time
because...; or 3) disagree with peer reviewer recommendation because... ]  compiled in a single
document.
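The three-way disposition scheme described above lends itself to a simple structured record. As a purely illustrative sketch (the class and function names below are hypothetical, not part of any Agency system), a single response document could be compiled from individual dispositions as follows:

```python
from dataclasses import dataclass
from enum import Enum


class Disposition(Enum):
    """The three response categories described in the text."""
    AGREE_ADOPTED = "agree; recommendation adopted"
    AGREE_NOT_ADOPTED = "agree; cannot adopt at this time"
    DISAGREE = "disagree with recommendation"


@dataclass
class CommentResponse:
    """One peer review comment and the Agency's documented rationale."""
    reviewer: str
    comment: str
    disposition: Disposition
    rationale: str


def compile_response_document(responses):
    """Compile individual dispositions into a single, traceable response document."""
    lines = []
    for i, r in enumerate(responses, start=1):
        lines.append(f"{i}. [{r.reviewer}] {r.comment}")
        lines.append(f"   Response: {r.disposition.value} -- {r.rationale}")
    return "\n".join(lines)
```

Such a record is no more than the discussions and final decisions the process already produces, captured in one place rather than scattered across personal notes and memoranda.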

       While a central repository of individual documents relevant to the disposition of peer review
comments would be helpful, the RSAC understands that it may pose special administrative problems.
The Agency has made a decision to establish peer review records in the individual program and
regional offices as the most efficient manner to maintain these records. For instance, in the particular
case of the RCH there would be a complete set of easily accessible documents in the peer review record:
the draft document, the charge for peer review, preliminary peer review comments, the summary of the
peer review Workshop, the specific responses to the peer review comments, and the final product in a
central repository. This approach would be particularly useful when there are conflicting
recommendations from individual peer reviewers because it would provide a clear and accessible
record of which specific recommendation was adopted instead of another, as well as the rationale for
the selection. The document of responses to peer reviewers' comments would contribute to increasing
the transparency of the peer review process and how the Agency uses science. At a minimum, the
Agency should track where the peer review records reside in the individual programs, and regional
offices, and it should be clear that those entities have the responsibility and administrative burden to
maintain such records for access by interested parties. Electronic storage on a URL  site would clearly
facilitate reasonable access by interested parties, and would avoid duplicative files and additional
staffing that would be expected with a central repository.

C-4. To what extent are the review comments responded to and acted on by the Program
Office/Region?

       The available documentation does not indicate if the program  offices/regions were directly
involved with the response to peer review comments or the preparation of the revised document. The
Peer Review Handbook calls for involvement of decision makers in the peer review process and
"Decision Maker approval of the approach to addressing the peer review comments" (Peer Review
Handbook, page 74). Drs. Jack Fowle and Kerry Dearfield requested comments from the program
offices and regions on specific elements of the revised document that dealt with the major issues of the
peer review charge, received comments back from several of the offices, and incorporated some of the
offices suggestions in the final document. Therefore, there is some record of responses on peer review
from the program offices, although their participation was voluntary rather than required.  There is a
need for making the office/region participation in this component of peer review more systematic, as
well as including the related documentation in the peer review record.

       Some of the offices and regions expressed enthusiasm about the revised document and  there
were multiple suggestions for improvement in clarity. The preponderance of the comments dealt with
editorial issues.  Thus, this process appears to be an internal peer review, and the program offices and
regions were not requested to evaluate the impact of external peer review on the scientific content,
clarity, and usability of the revised document as compared to the pre-peer review draft.

       In the case of the RCH, the program offices/regions had significant input into the development of
the 1999 draft document (Jack Fowle, personal communication). The RCH became official in
December, 2000, so there is a relatively short time frame for judging impact from its use.  There is at
this time some evidence that program offices have adopted elements of the handbook. However, since
a document becomes final only after peer review (sometimes more than one round of peer review), the
program offices/regions would have to adopt the final version anyway, so implementation and use of
the revised guidelines is not necessarily a reflection of positive impact of peer review on the quality of
final products. It would be useful therefore to specifically request comments  from the program offices
and regions on the impact of peer review on revised documents.

C-5. Does the RSAC have additional comments/guidance for EPA on how to improve the
effectiveness of the peer review process?

       The contractor-run peer review process clearly can be an efficient method to review work
products. However, it continues to raise the question of the independence of the review. Since the
individual who is ordering the review has input into the make-up of the panel, there is the perception
outside of the Agency of the possibility of excluding individuals who are technically qualified and are
known to have opinions that may be different from those of the Agency.  The concern is that the
contractors do not always have the expertise to identify all of the competent reviewers and to build a
panel with a balance of views in the case of controversial or uncertain issues.

       There are two possible ways to resolve this potential problem. First, an independent group
within the Agency could be established that would be given responsibility for the review process.
In the same way that NCER can externally review a wide range of proposals for scientific research, an
analogous group could be established that would have the expertise to work with external contractors
to build  competent panels without any potential of inadvertent bias. Alternatively, the Agency could
work to develop a pool of contractors with higher levels of expertise, who would thus be able to
function independently from the Agency office that is ordering the review.

       The RSAC has been advised by Agency staff that the Agency actually does have formal
controls in place to provide independence and  objectivity in selection of peer reviewers in contract
situations. For instance, it is clear Agency policy in the peer review guidance that when EPA hires a
contractor to perform a peer review, EPA must allow the contractor independence in conducting the
peer review. It is Agency practice and policy to keep a distance between the individual who is ordering
the review and the contractor making the panel selection.  This policy and practice is in place to
specifically avoid the possibility of undue influence into the make-up of the panel, and to minimize the
possibility of excluding individuals who are technically qualified and are known to have opinions that
may be different from those  of the Agency.  EPA staff can and does provide guidance to the contractor
on the types of expertise they think is most likely needed for the peer review, and may even provide a
list of names that serve as examples (if a list is provided, it is  representative and is not a short list of
names).  However,  it is entirely up to the contractor to assemble the panel. The final peer review
candidates are then submitted to the Agency's  Contracting Officer as part of the contracting rules.  As a
practical matter, EPA may then suggest that the panel may not be "balanced" or could be missing
needed expertise, but cannot zero in on a particular peer reviewer (unless a clear and unacceptable
conflict  of interest is known). EPA can ask to redo the make-up of the panel as a whole, but it is up to
the contractor to assemble the final peer review panel or group.

-------
       In this particular review, along with the other case-studies examined, the RSAC observed that
there appeared to be independence by the contractor in examination of the credentials of the peer
reviewers and selection of the peer reviewers finally chosen. There are many contractors the EPA has
used that have experience in this process and have at their disposal many known experts from which to
draw appropriate peer review panels, including for complex and controversial topics. If the Agency sets
up a body to review panels, this would move closer to EPA directing the make-up of the
contractor's panel, which is not acceptable.

-------
                              REFERENCES CITED
Browner, C. 1994. Peer Review Program, Washington, DC, Memorandum issued June 7, 1994

National Academy of Sciences. 1999. Evaluating Federal Research Programs: Research and the
       Government Performance and Results Act, Washington, DC: National Academy Press,
       January 1999.

National Research Council. 1995. Interim Report of the Committee on Research and Peer Review
       in EPA, Washington, DC: National Academy Press, March 1995.

National Research Council. 2000. Strengthening Science at the U.S. Environmental Protection
       Agency: Research Management and Peer Review Practices, Washington, DC: National
       Academy Press

Reilly, W. 1993. Peer Review Policy, Washington, DC, Memorandum issued January 19, 1993

SAB. 1999. An SAB Report: Review of the Peer Review Program of the Environmental
       Protection Agency, US EPA Science Advisory Board, Washington, DC, EPA-SAB-RSAC-
       00-002, November 22, 1999

U.S. EPA. 1998.  Science Policy Council Handbook: Peer Review, Science Policy Council,  EPA
       100-B-98-001, January 1998

U.S. EPA. 2000. Science Policy Council, Peer Review Handbook, 2nd edition, Office of Science
       Policy, Office of Research and Development, EPA 100-B-00-001, December 2000

U.S. EPA. 2000a. Science Policy Council Handbook - Risk Characterization, EPA 100-B-00-
       002, Office of Science and Policy, Office of Research and Development, Washington, DC,
       December 2000

U.S. General Accounting Office. 1994. Peer review: EPA Needs Implementation Procedures and
       Additional Controls, GAO/RCED-94-89, Washington, DC: U.S. Government Printing Office,
       February 1994

U.S. General Accounting Office. 1996. Peer review: EPA's Implementation Remains Uneven,
       GAO/RCED-96-236, Washington, DC: U.S. Government Printing Office, September 1996

U.S. General Accounting Office. 1999. Federal Research: Peer Review Practices at Federal
       Science Agencies Vary, GAO/RCED-99-99, Washington, DC: U.S. Government Printing
       Office, March 1999

-------
                                    ACRONYMS
AWQC        Ambient Water Quality Criteria
BAF         Bioaccumulation Factor
BCF         Bioconcentration Factor
CWA         Clean Water Act
EPA         U.S. Environmental Protection Agency
FACA        Federal Advisory Committee Act
FRN         Federal Register Notice
GAO         General Accounting Office
KDOC        Octanol Water Partition Coefficient (e.g., KDoc, KPOc) (equations dealing
            with the octanol water partition coefficient)
LOAEL       Lowest Observed Adverse Effect Level
MACT        Maximum Achievable Control Technology
NAS         National Academy of Sciences
NCER        National Center for Environmental Research, U.S. EPA/ORD
NRC         National Research Council of the National Academy of Sciences
NOAEL       No Observed Adverse Effect Level
OAR         Office of Air and Radiation, U.S. EPA
OPP         Office of Pesticides Programs, U.S. EPA
OPPTS       Office of Pollution Prevention and Toxic Substances, U.S. EPA
ORD         Office of Research and Development, U.S. EPA
OSP         Office of Science Policy, U.S. EPA
OSW         Office of Solid Waste, U.S. EPA
OSWER       Office of Solid Waste and Emergency Response, U.S. EPA
OW          Office of Water, U.S. EPA
PdP         Point of Departure (e.g., PdP/SF, PdP/UF)
PM          Particulate Matter
RAF         Risk Assessment Forum
R           EPA Region (e.g., R6 is EPA Region 6)
RCH         Risk Characterization Handbook
RfC         Reference Concentration
RfD         Reference Dose
RSAC        Research Strategies Advisory Committee of the U.S. EPA Science Advisory
            Board
RSC         Relative Source Contribution
SAB         EPA Science Advisory Board
SF          Safety Factor
SPC         Science Policy Council, U.S. EPA
TCCR        Transparency, Clarity, Consistency and Reasonableness
TRI         Toxics Release Inventory
TSD         Technical Support Document
UF          Uncertainty Factor
U.S.        United States
U.S.D.A.    United States Department of Agriculture

-------