United States               Science Advisory          EPA-SAB-EC-ADV-99-011
Environmental               Board (1400)              July 1999
Protection Agency           Washington, DC            www.epa.gov/sab

          AN SAB ADVISORY ON THE "WHITE PAPER" ON THE
          NATURE AND SCOPE OF ISSUES ON ADOPTION OF
          MODEL USE ACCEPTABILITY CRITERIA

          CONDUCTED BY THE ENVIRONMENTAL MODELS
          SUBCOMMITTEE OF THE SCIENCE ADVISORY BOARD

                                      July 30, 1999
EPA-SAB-EC-ADV-99-011

Honorable Carol M. Browner
Administrator
U. S. Environmental Protection Agency
401 M Street, SW
Washington, DC 20460

       Subject:      Advisory on the "White Paper on the Nature and Scope of Issues on
                     Adoption of Model Use Acceptability Criteria"

Dear Ms. Browner:

       The Environmental Models Subcommittee (EMS), hereinafter referred to as the
"Subcommittee", met February 23 and 24, 1999 to review the draft "White Paper on the Nature
and Scope of Issues on Adoption of Model Use Acceptability Criteria".  The Subcommittee
conducted this review in order to provide the Agency with advice and insights on the adequacy
of this proposed approach to evaluating regulatory environmental models with respect to their
ability to produce defensible, scientifically-based and high quality results that meet EPA's needs.

       The review  meeting was conducted in public session under the provisions of the Federal
Advisory Committee Act (FACA). EPA provided the Subcommittee with the "White Paper"
before the meeting  and briefed the Subcommittee during the meeting.  The Subcommittee was
impressed with the  depth of knowledge exhibited and the level of cooperation shown during the
presentation and briefing,  and has prepared this letter and the accompanying report.  The letter
summarizes EMS' key findings and recommendations. The attached report provides a more
complete description of the Subcommittee's advice.

Charge 1: Please comment on the adequacy of this approach for helping model developers
       explain their  models clearly, articulate major assumptions  and uncertainties,
       identify reasonable alternative interpretations, and separate scientific  conclusions
       from policy judgments.

       The "White Paper"'s general approach and the specific points raised in it are very
constructive and can provide the basis for a more  effective and consistent process of model
development and application across the Agency. The issue of distinguishing scientific
conclusions from policy judgments is not directly addressed in the "White Paper", but the
recommended  protocol for model validation may be of assistance to model evaluators in this
regard. It is often tempting for modelers who have come to "believe"  in the results of their
efforts to promote these scientific conclusions within the realm of policy. The "White Paper"'s

protocol places great emphasis on the primary role of "task specification" in directing how a
model should be evaluated.  Specification of the task for which the model is to be developed is
the prerogative of the decision maker(s); it is his/her obligation to specify in detail, as
appropriate, the terms and conditions to be fulfilled by the model. Provided there is adherence to
this aspect of the protocol, i.e., task specification, scientific conclusions and
assumptions should be seen to be entirely separate from policy judgments.

       The Subcommittee suggests that the Agency might consider positive incentives to the
Program Offices and Regions that develop models to encourage them to report, document and
exchange information on their model Quality Assurance (QA) procedures. They should also be
encouraged to report the successes they have achieved through effective model use, and the
lessons learned. This could be accomplished through use  of a highly visible and accessible web
page, where offices are given the opportunity to self-report their methods and procedures for
ensuring that models contribute effectively to decision support.

       Part of the struggle to coordinate model evaluations across the Agency seems to be the
lack of a common nomenclature. The models acceptability "White Paper" could help this
situation by defining key terms, and then using these definitions consistently throughout the
document as well as in its future work.

Charge 2:  Is this proposal comparably useful for models for health and for ecological risk
       assessments as well as for pollution prevention? If not, please identify special  needs
       for any of these general areas.

       The basic principles for developing and evaluating different environmental models are the
same for health and for ecological risk assessments as well as for pollution prevention.  However,
the proposal is written generically and would be strengthened by including specific references to
these other applications in order to make it clear that the "White Paper" is not restricted to fate
and transport models. One potentially important difference between exposure models and those
developed for pollution prevention analysis is that the sphere of pollution prevention lies
principally within the private sector where the same degree of willingness to submit models to an
examination of structure, complexity, and uncertainty may not always be present.

       The "White Paper" emphasizes that even though models are evolving from simple models
that estimate exposure to those designed to perform more complex risk assessments, EPA
provides no guidance about how to deal with these more complicated situations. Obviously, it is
important that the scale and complexity of ecological models used in risk assessment be
compatible and consistent with the scale and complexity implied by regulatory needs.

Charge 3: Please comment on the adequacy and utility of the proposal for helping decision-
       makers, other risk managers (e.g., assessors and their managers), and the public
             i.      understand models used in a regulatory context
             ii.     evaluate the appropriate use for the results from models in decision
                    making
             iii.    understand the "unseen" aspects of the modeling including choices
                    made during regulatory use and the rationale for those choices

       The "White Paper" addresses the need to consider these aspects.  However, in its current
form, it lacks the broader view of what needs to be included and the associated steps required for
implementation.  EPA model development can benefit greatly from targeted stakeholder
participation to obtain insight into the range of applications, available data and constraints that
exist in different locales throughout the United States.

       The discussion in relation to model use and evaluation in the Office of Air (OAR) might
be particularly useful to help others understand model use in a regulatory context. OAR appears
to have addressed many of the issues raised in the "White Paper". Several of the case studies
presented in the report provide examples of the use of models in a regulatory or decision-making
context.

       Underlying these model-centric themes set out above, EPA needs to ensure that the
public, the regulatory community and local decision-makers realize  the role that value judgments
play in the selection of a model and the way a model is used.  Thus, it is important to be very
diligent in informing the public, state regulators and local  decision-makers on this aspect of
models. In the Program Offices, EPA should consider developing educational materials to assist
stakeholders in the selection, understanding and use of models that address a program's
mandates. In addition to improving user literacy, this educational outreach should identify the
target community for eventual feedback.  Tracking model selection and model use by state and
local decision-makers will provide a valuable data set to EPA  regarding the efficacy of its
programs.

Charge 4:  Please comment on the utility of the proposal to help those outside EPA
       understand the Agency's modeling goals and to help evaluate EPA's progress
       toward achieving those goals

       In order to help organizations outside the EPA understand the Agency's modeling goals,
and allow them to evaluate EPA's progress towards achieving its goals, the information must be
accurate, up-to-date and publicly available. The "White Paper" indicates that the CREM will
provide guidance to EPA on model evaluation in the  form of a protocol. Establishment of a
model clearinghouse by the CREM will allow model users to document the model evaluation
process, and those outside the EPA will have the opportunity to access this information and
understand the Agency's modeling efforts.

Charge 5:  Please comment on the overall utility and adequacy of the proposed "Strategy
       for Defining Uncertainty in Model Elements" (Section 5.1) and supporting "How to"
       guidances (p.7) for judging model acceptability

       While this question generated much discussion among Subcommittee members, none of
it undermined the Subcommittee's basic response, which is that the utility and adequacy of the
"White Paper"'s proposed strategy are entirely appropriate.

       The "White Paper" should make it clear that (a) uncertainties in a model propagate
forward into prediction uncertainty; (b) decisions should be shown to be robust in the presence
of such prediction uncertainty; and (c) procedures are available for ranking the various
contributing sources of uncertainty, so that steps may be taken to reduce the consequences of
the most critical of these as the model is successively improved over time.

       The Subcommittee recognizes the difficulty many will have in grasping the concepts and
arguments underlying the discussion of the "White Paper" (as evident in our responses to other
Charge Questions). The Subcommittee feels, therefore, that there may indeed be a need for
producing written materials expressing these issues in a format more accessible to a wider
audience.  However, the Subcommittee wishes to record its recognition that the issues of model
evaluation are neither trivial nor inherently easy to completely address; therefore, great care
will be needed to understand and explain them in lay terms.

Charge 6:  EPA welcomes any additional comments or suggestions

       The Subcommittee suspects that when the guidelines for model acceptability are first
implemented, there will be a backlog of Agency models whose quality must be evaluated in the
broad format recommended by the "White Paper". The Agency should give consideration to the
details of any procedure for clearing this backlog and to the procedure for taking advantage of
this opportunity for updating models.

       In summary, the Subcommittee finds that the guidance in the "White Paper" is generally
useful for addressing the quality  and reliability aspects for EPA's environmental regulatory
models. In addition, the Subcommittee finds that model quality issues have been
comprehensively addressed.  Furthermore, the "White Paper" includes the beginnings of a
clarification of how peer review could be interfaced with the more computationally-oriented
facets of an evaluation.  However, at this point it lacks guidance and
information about what needs to be included and associated steps required  for implementation to
be useful for decision makers and the public (e.g.,  a communication strategy for obtaining user
feedback and establishing a dialog in model development with stakeholders). The Subcommittee
suggests that model information be related to the totality of the specific decision-making use, and
in this context it should strive to  achieve "transparency" in both technical and non-technical
respects (e.g., policy decisions).  The Subcommittee also recommends that  the Agency Program
Offices and Regions consider investing in the development of a host of high-quality outreach

educational materials, tailored to different audiences, on the general topic of models as decision
support tools.

       The Environmental Models Subcommittee looks forward to continued work with the
Agency as it refines its guidance for model acceptability, and we look forward to the response of
the Assistant Administrator for Research and Development to the advice contained in this
Advisory.
                                  Sincerely,
Dr. Ishwar Murarka, Chair                           Dr. Joan Daisey, Chair
Environmental Models Subcommittee                    Science Advisory Board
Science Advisory Board

                                       NOTICE

       This report has been written as part of the activities of the Science Advisory Board, a
public advisory group providing extramural scientific information and advice to the
Administrator and other officials of the Environmental Protection Agency. The Board is
structured to provide balanced, expert assessment of scientific matters related to problems facing
the Agency. This report has not been reviewed for approval by the Agency and, hence, the
contents of this report do not necessarily represent the views and policies of the Environmental
Protection Agency, nor of other agencies in the Executive Branch of the Federal government,
nor does mention of trade names or commercial products constitute a recommendation for use.

                                     ABSTRACT

       The general approach contained in the "White Paper on the Nature and Scope of Issues
on Adoption of Model Use Acceptability Criteria" and the specific points raised in it are very
constructive.  The "White Paper" can provide the basis for a more effective and consistent
process of model development and application across the Agency. However, there is a lack of a
common nomenclature surrounding model application and usage. The models acceptability
"White Paper" could help by defining key terms, and then using these definitions consistently
throughout the document as well as in its future work.  In addition, the "White Paper" needs a
broader view of what needs to be included for effective model development and the associated
steps required for implementation. EPA can benefit greatly from targeted stakeholder
participation to obtain insight into the range of applications, available data and constraints that
exist in different locales throughout the U.S.  EPA also needs to ensure that the public, the
regulatory community and local decision-makers appreciate the role that value judgments play in
the selection of a model and the way a model is used. EPA Program Offices should consider
developing educational materials to assist stakeholders in the selection, understanding and use of
models to address their program's mandates. Tracking model selection and model use by state
and local decision-makers will provide a valuable data  set to EPA regarding the efficacy of its
programs.  The Subcommittee supports the establishment of the Committee for Regulatory
Environmental Modeling (CREM) and a model clearinghouse by the CREM. This will allow
model users to document the model evaluation process so that others can understand it.  As an
additional benefit, it will allow those outside the EPA to access this information and will
provide them with an opportunity to give feedback.

               U.S. ENVIRONMENTAL PROTECTION AGENCY
                          SCIENCE ADVISORY BOARD
          ENVIRONMENTAL MODELS SUBCOMMITTEE OF THE
                           EXECUTIVE COMMITTEE

CHAIR
Dr. Ishwar Murarka, Chief Scientist and President, ISH Inc., Cupertino, CA

MEMBERS
Dr. Steven M. Bartell, Senior Associate, Cadmus Group, Inc., Oak Ridge, TN

Dr. Calvin Chien, Senior Environmental Fellow, E.I. DuPont Company, Wilmington, DE

Dr. Kai-Shen Liu, Epidemiologist, California Department of Health Services, Environmental
      Health Laboratory Branch, Berkeley, CA

Dr. Paulette Middleton, Associate Director, Environmental Science and Policy Center, RAND
      Corporation, Boulder, CO

CONSULTANTS
Dr. M. Bruce Beck, Professor & Eminent Scholar, Warnell School of Forest Resources,
      University of Georgia, Athens, GA

Dr. Linfield Brown, Professor, Department of Civil and Environmental Engineering, Tufts
      University, Medford, MA

Dr. Arthur J. Gold, Professor, Department of Natural Resources Science, University of Rhode
      Island, Kingston, RI

Dr. Helen Grogan, Research Scientist, Cascade Scientific, Inc., Bend, OR

Dr. Wu-Seng Lung, Professor, Department of Civil Engineering, University of Virginia,
      Charlottesville, VA

Dr. Jana Milford, Associate Professor, Department of Mechanical Engineering, University of
      Colorado, Boulder, CO

Dr. Mitch Small, Professor, Department of Civil Engineering & Public Policy, Carnegie Mellon
      University, Pittsburgh, PA

Dr. Thomas Theis, Professor & Chair, Department of Civil and Environmental Engineering,
      Clarkson University, Potsdam, NY
SCIENCE ADVISORY BOARD STAFF

Dr. John R. Fowle III, Deputy Staff Director/Designated Federal Officer, Environmental
       Protection Agency, Science Advisory Board, Washington, DC

Ms. Karen Martin, Deputy Designated Federal Officer, Environmental Protection Agency,
       Science Advisory Board, Washington, DC

Mrs. Dorothy M. Clark, Management Assistant, Environmental Protection Agency, Science
       Advisory Board, Washington, DC

                              TABLE OF CONTENTS


1.  EXECUTIVE SUMMARY

2.  INTRODUCTION

3.  OVERVIEW COMMENTS AND RESPONSE TO CHARGE
       3.1    Overview Comments and Observations
       3.2    Responses to Charge Questions
              Charge 1:  Please comment on the adequacy of this approach for helping model
                     developers explain their models clearly, articulate major assumptions and
                     uncertainties, identify reasonable alternative interpretations, and separate
                     scientific conclusions from policy judgments
              Charge 2:  Is this proposal comparably useful for models for health and for
                     ecological risk assessments as well as for pollution prevention? If not,
                     please identify special needs for any of these general areas
              Charge 3:  Please comment on the adequacy and utility of the proposal for helping
                     decision-makers, other risk managers (e.g., assessors and their managers),
                     and the public
              Charge 4:  Please comment on the utility of the proposal to help those outside EPA
                     understand the Agency's modeling goals and to help evaluate EPA's
                     progress toward achieving those goals
              Charge 5:  Please comment on the overall utility and adequacy of the proposed
                     "Strategy for Defining Uncertainty in Model Elements"
              Charge 6:  EPA welcomes any additional comments or suggestions

4.  CONCLUSION

REFERENCES CITED

                            1.  EXECUTIVE SUMMARY
       The Environmental Models Subcommittee (EMS) reviewed the draft "White Paper on the
Nature and Scope of Issues on Adoption of Model Use Acceptability Criteria" which has been
developed to provide guidance on the development and use of environmental regulatory models at
EPA. The Subcommittee addressed six charge questions.

Charge 1: Please comment on the adequacy of this approach for helping model developers
       explain their models clearly, articulate major assumptions and uncertainties, identify
       reasonable alternative interpretations, and separate scientific conclusions from policy
       judgments.

       The "White Paper"'s general approach and the specific points raised in it are very
constructive and can provide the basis for a more effective and consistent process of model
development and application across the Agency. It is often tempting for modelers who  have come
to "believe" in the results of their efforts to promote  these scientific conclusions within  the realm
of policy.  The "White Paper"'s protocol places great emphasis on the primary role of "task
specification" in directing how a model should be evaluated. Specification of the task for which
the model is to be developed is the prerogative of the decision maker(s); it is his/her obligation to
specify in detail, as appropriate, the terms and conditions to be fulfilled  by the model. Provided
there is adherence to this aspect of the protocol, i.e., task specification, scientific conclusions and
assumptions should be seen to be entirely separate from policy judgments.

       The Subcommittee  suggests that the Agency might consider positive incentives to the
Program Offices and Regions that develop models to encourage them to report, document and
exchange information on their model Quality Assurance (QA) procedures. They should also be
encouraged to report the successes achieved through effective model use, and the lessons learned.
This could be accomplished through the use of a highly visible and accessible web page, where
offices are given the opportunity to self-report their methods and procedures for ensuring that
models contribute effectively to decision support.

       The Subcommittee was concerned that there be a balance between achieving consistency
across the Agency in performing model evaluations and avoiding prescriptiveness in pursuit of
such consistency.  An effective way to accomplish this would be through the
establishment of an entity such as the proposed Committee for Regulatory Environmental
Modeling (CREM).

       Part of the struggle  to coordinate model evaluations across the Agency seems to be the
lack of a common nomenclature.  The models acceptability "White Paper" could help this
situation by defining key terms, and then using these definitions consistently throughout the
document as well as in its future work.

Charge 2:  Is this proposal comparably useful for models for health and for ecological risk
       assessments as well as for pollution prevention? If not, please identify special needs
       for any of these general areas.

       The basic principles for developing and evaluating different environmental models are the
same for health and for ecological risk assessments as well as for pollution prevention. However,
the proposal is written generically and would be strengthened by including specific references to
these other applications in order to make it clear that the "White Paper" is not restricted merely to
fate and transport models.  One potentially important difference between exposure models and
those developed for pollution prevention analysis is that the sphere of pollution prevention lies
principally within the private sector where the same degree of willingness to submit models to an
examination of structure, complexity, and uncertainty may not always be present.

       Models designed to assess the effects of environmental pollutants on human health and the
environment are more complicated than those used to estimate exposure to environmental
contaminants. No one model can "do it all", so a number of models are needed to estimate
contaminant concentrations precisely, to assess human exposure and body burden correctly, to
establish a reasonable dose-response curve, and to reasonably project the health risk to the
exposed population.  The "White Paper" emphasizes that even though models are evolving from
simple models that estimate exposure to those designed to perform more complex risk
assessments, EPA provides no guidance about how to deal with these more complicated situations.
Obviously, it is important that the scale and complexity of ecological models used in risk
assessment be compatible and consistent with the scale and complexity implied by regulatory
needs.

Charge 3: Please comment on the adequacy and utility of the proposal for helping decision-
       makers, other risk managers (e.g., assessors and their managers), and the public
             i.  understand models used in a regulatory context;
             ii. evaluate the appropriate use for the results from models in decision
                    making;
             iii. understand the "unseen" aspects of the modeling including choices made
                    during regulatory use and the rationale for those choices

       The "White Paper" addresses the need to consider these aspects. However, in its current
form, it lacks the broader view of what needs to be included and the associated steps required for
implementation.  EPA model development can benefit greatly from targeted stakeholder
participation to obtain insight into the range of applications, available data and constraints that
exist in different locales throughout the United States.

       The discussion in relation to model use and evaluation in the Office of Air (OAR) might be
particularly useful to  help  others understand model use in a regulatory context. OAR appears to
have addressed many of the issues raised in the "White Paper".  Several of the case studies
presented in the report provide examples of the use of models in a regulatory or decision-making
context.

       Underlying the model-centric themes set out above, EPA needs to ensure that the public,
the regulatory community and local decision-makers realize the role that value judgments play in

the selection of a model and the way a model is used. Thus, it is important to be very diligent in
informing the public, state regulators and local decision-makers about this aspect of models.  In the
Program Offices, EPA should consider developing educational materials to assist stakeholders in
the selection, understanding and use of models that address a program's mandates. In addition to
improving user literacy, this educational outreach should identify the target community for
eventual feedback. Tracking model selection and model use by state and local decision-makers
will provide a valuable data set to EPA regarding the efficacy of its programs. It is important to
reemphasize that educational outreach is not a small  task and will require EPA to make a serious
commitment of resources.  Posting the results of models used in specific applications on the
Internet would provide the opportunity for an informed public to view and understand model
selection and application in a regulatory context.

Charge 4:  Please comment on the utility of the proposal to help those outside EPA
       understand the Agency's modeling goals and to help evaluate EPA's progress toward
       achieving those goals

       In order to help organizations outside the EPA understand the Agency's modeling goals,
and allow them to evaluate EPA's progress towards achieving its goals, the information must be
accurate, up-to-date and publicly available. The "White Paper" indicates that the CREM will
provide guidance to EPA on model evaluation in the form of a protocol. Establishment of a model
clearinghouse by the CREM will allow model users to document the model evaluation process, and
those outside the EPA will have the opportunity to access this information and understand the
Agency's modeling efforts.

Charge 5:  Please comment on the overall utility and adequacy of the proposed "Strategy for
       Defining Uncertainty in Model Elements" (Section 5.1) and supporting "How to"
       guidances (p.7) for judging model acceptability

       While this question generated much discussion among Subcommittee members, none of
it undermined the Subcommittee's basic response, which is that the utility and adequacy of the
"White Paper"'s proposed strategy are entirely appropriate.

       The "White Paper" should make it clear that (a) uncertainties in a model propagate
forward into prediction uncertainty; (b) decisions should be shown to be robust in the presence
of such prediction uncertainty; and (c) procedures are available for ranking the various
contributing sources of uncertainty, so that steps may be taken to reduce the consequences of
the most critical of these as the model is successively improved over time.
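       As a purely illustrative sketch (not drawn from the "White Paper" or the Subcommittee's advice), points (a) and (c) can be demonstrated with a hypothetical toy exposure model: input uncertainties are propagated into prediction uncertainty by Monte Carlo sampling, and contributing sources are ranked by how much prediction variance each removes when held fixed. The model, the input names, and the distributions below are invented for demonstration only.

```python
import random
import statistics

# Hypothetical toy exposure model, invented for this sketch:
# prediction = concentration * intake rate / body weight.
def model(c, ir, bw):
    return c * ir / bw

# Assumed (illustrative) means and standard deviations of the inputs.
MEANS = {"c": 10.0, "ir": 2.0, "bw": 70.0}
SDS = {"c": 3.0, "ir": 0.5, "bw": 10.0}

def prediction_variance(n=5000, fixed=None):
    """Monte Carlo propagation of input uncertainty into prediction variance.

    Inputs named in `fixed` are held at a constant value instead of sampled,
    which reveals how much output variance that input contributes.
    """
    fixed = fixed or {}
    runs = []
    for _ in range(n):
        draw = {k: fixed[k] if k in fixed else random.gauss(MEANS[k], SDS[k])
                for k in MEANS}
        runs.append(model(**draw))
    return statistics.variance(runs)

random.seed(1)
total = prediction_variance()  # point (a): total prediction uncertainty

# Point (c): rank uncertainty sources by the share of prediction variance
# removed when each input is fixed at its mean.
ranking = sorted(
    ((name, 1 - prediction_variance(fixed={name: MEANS[name]}) / total)
     for name in MEANS),
    key=lambda t: t[1],
    reverse=True,
)
for name, share in ranking:
    print(f"fixing {name} removes ~{share:.0%} of prediction variance")
```

       The same pattern applies to any model whose inputs can be sampled; reducing the uncertainty in the top-ranked input is the most effective step in successively improving the model over time.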

       The Subcommittee recognizes the difficulty many will have in grasping the concepts and
arguments underlying the discussion of the "White Paper" (as evident in our responses to the other
Charge Questions). The Subcommittee feels, therefore, that there may indeed be a need for
producing written materials expressing these issues in a format more accessible to a wider
audience. However, the Subcommittee wishes to record its recognition that the issues of model
evaluation are neither trivial nor inherently easy to address; therefore, great care will be needed to
understand  and explain them in lay terms.

Charge 6:  EPA welcomes any additional comments or suggestions

       The Subcommittee suspects that when the guidelines for model acceptability are first
implemented, there will be a backlog of Agency models whose quality must be evaluated in the
broad format recommended by the "White Paper". The Agency should give consideration to the
details of any procedure for clearing this backlog and to the procedure for taking advantage of this
opportunity for updating models.

                                2.  INTRODUCTION
       The Environmental Models Subcommittee (EMS) met February 23 and 24, 1999 to review
the draft "White Paper on the Nature and Scope of Issues on Adoption of Model Use
Acceptability Criteria".  This review was carried out by EMS in order to provide the Agency with
advice and insights on the adequacy of this proposed approach to evaluate regulatory
environmental models with respect to their ability to produce defensible, scientifically-based and
high quality results that meet the needs of the Agency.

       The SAB was provided with a copy of the "White Paper on the Nature and Scope of Issues
on Adoption of Model Use Acceptability Criteria" prior to the public meeting. The charge to the
Subcommittee contained six questions focusing on the concepts and application of the "White
Paper" to facilitate future Agency use of models to inform regulatory environmental decision-
making.

Charge 1: Please comment on the adequacy of this approach for helping model developers
       explain their models clearly, articulate major assumptions and uncertainties, identify
       reasonable alternative interpretations, and separate scientific conclusions from policy
       judgments.

Charge 2: Is this proposal comparably useful for models for health and for ecological risk
       assessments as well as for pollution prevention? If not, please identify special needs
       for any of these general areas.

Charge 3: Please comment on the adequacy and utility of the proposal for helping decision-
       makers, other risk managers (e.g., assessors and their managers), and the public
             i.  understand models used in a regulatory context
             ii. evaluate the appropriate use for the results from models in decision making
             iii. understand the "unseen" aspects of the modeling including choices made
                    during regulatory use and the rationale for those choices.

Charge 4: Please comment on the utility of the proposal to help those outside EPA
       understand the Agency's modeling goals and to help evaluate EPA's progress toward
       achieving those goals.

Charge 5:  Please comment on the overall utility and adequacy of the proposed "Strategy for
       Defining Uncertainty in Model Elements" (Section 5.1) and supporting "How to"
       guidances (p.7) for judging model acceptability.

Charge 6:  EPA welcomes any additional comments or suggestions.

-------
          3.  OVERVIEW COMMENTS AND RESPONSES TO THE CHARGE
3.1    Overview Comments and Observations

       The Subcommittee finds the general approach of the "White Paper" sound, and the
specific points raised in it very constructive; implementation can provide the basis for a more
effective and consistent process of model development and application across the Agency.  The
major concerns about the "White Paper" center on public outreach.  Many readers will have
difficulty grasping the concepts and arguments underlying the paper's discussion; therefore,
great care is needed when explaining these concepts in understandable terms.

3.2    Responses to Charge Questions

Charge 1: Please comment on the adequacy of this approach for helping model developers
       explain their models clearly, articulate major assumptions and uncertainties, identify
       reasonable alternative interpretations, and separate scientific conclusions from policy
       judgments.

       The "White Paper"'s general approach and the specific points raised in it are very
constructive and can provide the basis for a more effective and consistent process of model
development and application across the Agency. The  guidance applies equally well to model users
and to environmental analysts in general, not just to model developers. To the extent that CREM
can achieve buy-in from various EPA offices involved in  model development and use, the effort is
more likely to be viewed as enhancing the effort of individual offices, rather than as "yet another"
bureaucratic imposition.  Thus, the Subcommittee suggests that the Agency might consider positive
incentives to the Program Offices and Regions that develop models to encourage them to report,
document and exchange information on their model Quality Assurance (QA) procedures. They
should also be encouraged to report the successes they have achieved through effective model use,
and the lessons they have learned. This could be accomplished through the use of a highly visible
and accessible web page, where offices are given the opportunity to self-report their methods and
procedures for ensuring that models contribute effectively to decision support.

       The Model Evaluation Case Histories in Appendix C of the "White Paper on the Nature
and Scope of Issues on Adoption of Model use Acceptability Criteria" provide good examples of
how this reporting could be organized and displayed. These case histories, in general, contain the
following components:

       a)     Regulatory Niche & Purpose (i.e., Task Specification)

       b)     Model Selection

       c)     Data sources for inputs

       d)     Assumptions and inputs based on scientific judgment vs. those reflective of value
                    judgments and policy decisions

-------
       e)     Calibration/Validation/Testing

       f)      Sensitivity and Uncertainty Analysis

       g)     Needs for further research and model(s) improvement

       h)     Peer Review

       The models acceptability "White Paper" in and of itself will not have much effect on
encouraging model developers to "explain their models clearly", but it is certain that
implementation of the paper's recommended approach to model evaluation (in particular steps
2-4) will have a positive effect.  The protocol for model validation set forth in the "White Paper"
focuses on five aspects (or steps) of model creation and application:  structure, complexity,
parameter uncertainty, sensitivity, and quantitative evaluation. The protocol is quite
comprehensive, at least for fate, transport, and effects models (those most often used for
regulatory purposes), and offers a good framework for model developers to explain their models,
and their major assumptions and uncertainties.  To be acceptable for specific EPA-defined tasks,
the model developer needs to follow the guidance for addressing uncertainty, peer review, and
evaluation as the model is being developed (e.g., TRIM.FaTE development).  Certainly the model
developer will have choices to make among structural (mathematical) representations of certain
biological, chemical, toxicological, etc. phenomena (this being step 1 of the "White Paper"'s
proposed approach).  If the model incorporates  only one of the alternative representations, then
the justification for its inclusion must be articulated in the model assumptions.  If model code
allows the user to select from alternative structures, then the model developer must provide
guidance for the user on how to make the appropriate selection.
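
       The recommendation above, that model code offering alternative structural
representations must guide the user's selection, can be illustrated with a minimal sketch.  All
function names, formulations, and parameter values below are hypothetical and are not drawn
from any EPA model; the point is only that an explicit, documented choice among structures
keeps the assumption visible rather than buried in the code.

```python
# Hypothetical sketch: a model exposing two alternative structural
# representations of a decay process, selected explicitly by the user.
def first_order_decay(C, k, dt):
    """dC/dt = -k*C: decay proportional to concentration."""
    return C - k * C * dt

def monod_decay(C, vmax, Ks, dt):
    """dC/dt = -vmax*C/(Ks + C): decay that saturates at high concentration."""
    return C - (vmax * C / (Ks + C)) * dt

STRUCTURES = {"first_order": first_order_decay, "monod": monod_decay}

def step(structure, C, dt, **params):
    # Requiring an explicit named choice (which can then be recorded in the
    # model documentation) keeps the structural assumption visible.
    if structure not in STRUCTURES:
        raise ValueError(f"unknown structure {structure!r}; "
                         f"choose from {sorted(STRUCTURES)}")
    return STRUCTURES[structure](C, dt=dt, **params)

C1 = step("first_order", C=10.0, dt=0.1, k=0.5)
C2 = step("monod", C=10.0, dt=0.1, vmax=2.0, Ks=5.0)
```

A developer following the "White Paper" guidance would accompany such a switch with written
criteria for when each structure is appropriate.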

       The issue of distinguishing scientific conclusions from policy judgments is not directly
addressed in the "White Paper", but again the recommended protocol for model validation may be
of assistance to model evaluators in this regard.  It is often tempting for modelers who have come
to "believe" in the results of their efforts to promote these scientific conclusions within the realm
of policy.  The "White Paper"'s protocol places great emphasis on the primary role of "task
specification"  in directing how a model should be  evaluated. Specification of the task for which
the model is to be developed is the prerogative of the decision maker(s); it is his/her obligation to
specify in detail, as appropriate, the terms and conditions to be fulfilled by the development of the
model. Provided there is adherence to this aspect of the protocol, i.e., task specification,
scientific conclusions and assumptions should be seen to be entirely separate from policy
judgments.

       The Subcommittee was concerned that a balance be struck between consistency across the
Agency in performing model evaluations and avoiding prescriptiveness in the pursuit of such
consistency.  The Agency should set criteria for what needs to be included in these
assessments and provide illustrative examples of how they can be done. However, the steps in the
model assessment should not be overly prescriptive. An effective way to accomplish this would be
through the establishment of the proposed CREM. Because of the diversity  of modeling
applications in the Agency, Program Offices and Regions need to be able to  select from a menu of
useful evaluation tools. However, guidance is definitely needed regarding a framework for the
assessment of models.  Again we reiterate that the "White Paper" is correct to emphasize that the


-------
first step is task specification, and that full documentation of peer review, performance evaluation,
sensitivity analysis and uncertainty analysis is needed. In addition, assessments should also
demonstrate the power of the tests used to differentiate between models that are or are not
adequate for the specified tasks.

       Part of the struggle to coordinate model evaluations across the Agency seems to be the
lack of a common nomenclature. The models acceptability "White Paper" could help this situation
by defining key terms, and then using these definitions consistently throughout the document as
well as in its future work. There seems to be a persistent mix-up of the terms "validation" and
"verification". The current use of the term "validation" is an example of the potential for
confusion and misunderstanding. In some places it seems to be used for the overall process of
assessing the adequacy of a model for a particular application. Elsewhere it is used to refer to the
comparison of model results with experimental and observational data. The term "performance
evaluation" may be more appropriate for the latter activity. The Agency should also consider
maintaining the distinction between model "uncertainty" and modeling "errors". For example, the
transcription of mathematical equations into code may very well have errors (that should be
corrected if we know about them) but not "uncertainty". The term "verification" is also used to
describe these translational errors. The term "uncertainty" could be used to express likelihood or
probability to provide a statistical measure of variability or difference between model predictions
and real world observations.  However,  "uncertainty" can only be reduced with the aid of new
information when it improves the estimates for parameters used to carry out computations using
already developed models. Specific  examples of especially confusing terminology appear on p. 27
where the "White Paper" lists "uncertainties" in model tests arising from the range of statistics
used in an assessment or "uncertainty" about how a test will be made. The Subcommittee believes
"inconsistency" (among analysts), not "uncertainty", was intended here; once a method or
particular test has been chosen, that decision cannot be "uncertain".
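
       The terminological distinction drawn here can be made concrete: what the Subcommittee
calls "performance evaluation" yields discrepancy statistics between predictions and
observations, which are measures of error or disagreement rather than of uncertainty.  The
paired values below are purely illustrative, not data from any EPA application.

```python
import math

# Hypothetical paired model predictions and field observations.
predicted = [2.1, 3.4, 5.0, 6.2, 7.9]
observed = [2.0, 3.0, 5.5, 6.0, 8.5]

residuals = [p - o for p, o in zip(predicted, observed)]
n = len(residuals)

bias = sum(residuals) / n  # mean signed error; sign shows over/under-prediction
rmse = math.sqrt(sum(r * r for r in residuals) / n)  # typical error magnitude

print(f"bias = {bias:+.3f}")   # here about -0.080 (slight under-prediction)
print(f"rmse = {rmse:.3f}")    # here about 0.405
```

Uncertainty, by contrast, would enter as probability distributions on inputs or parameters, to be
propagated through the model, not computed from a finished prediction-observation comparison.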

Charge 2: Is this proposal comparably useful for models for health and for ecological risk
       assessments as well as  for pollution prevention?  If not, please identify special needs
       for any of these general areas.

       The basic principles for developing and evaluating different environmental models are the
same.  The "White Paper" captures the major problems of the current practice of model adoption
and recommends various ways  of improvement.  The proposal is useful for health and for
ecological risk assessments as well as for pollution prevention. However, the proposal is written
generically and would be strengthened by specifically including references to these other
applications in order to make it clear that the "White Paper" is not restricted to fate and transport
models. There are special needs for  models which are developed and applied in different domains.
In order to comply with environmental and occupational regulations, we have decades of
experience in the estimation of contaminant concentrations and their temporal and spatial
variations in different media  (e.g., air, water, soil, food etc.). Models developed for the prediction
of contaminant dispersion may  also be components of pollution prevention analyses and health or
ecological risk assessments. As such, models to predict contaminant concentrations, while often
complex and often exhibiting significant uncertainty, can usually be evaluated using well-
established protocols for code validation and comparison with observed laboratory and field data.
In contrast the biological and ecological mechanisms involved in risk assessment are much more
uncertain and we often lack the ability to define, much less measure, key system outputs and state

-------
variables.

       One potentially important difference between exposure models and those applied for
pollution prevention analysis is that the sphere of pollution prevention lies principally within the
private sector where the same degree of willingness to submit models to an examination of
structure, complexity, and uncertainty may not always be present.  In such cases, issues related to
the proprietary nature of manufacturing, marketing strategies and internal costs may be present,
complicating the review process.  Such models must accurately capture the nature of the processes
under evaluation, but also must be able to accurately assess cost alternatives. They are not
typically used directly in support of public regulatory functions, but instead are used in the private
sector to justify allocation of resources and to compare the return-on-investment alternatives.  Of
course these models must ultimately attain the same level of confidence as those developed under
EPA or government auspices, if they are to be used to develop public policy, and so must be
carefully evaluated. However, the application of the validation protocol may not be as direct and
may necessitate different approaches with respect to the format, expertise and  background of
reviewers used, as well as in the dissemination of results.

       Models designed to assess the effects of environmental pollutants on human health are
more complicated than those used to estimate the distribution of contaminants in the environment.
Health effects of pollutants are determined by the contaminant concentration, human exposure,
body burden, dose-response relationship and characteristics of the exposed population.  Ideally, a
good model for health risk assessment should be able to handle all these components with equal
precision.  However, no one model can "do it all", so a number of models is needed to estimate
contaminant concentrations precisely, to assess human exposure and body burden correctly, to
establish a reasonable dose-response curve, and to reasonably project the health risk of the
exposed population. The "White Paper" emphasizes that even though models  are evolving from
simple models to estimate exposure results to those designed to perform more complex risk
assessments, EPA provides no guidance about how to deal with more complicated situations.
Multi-contaminant, multi-media, and multi-pathway models have been mentioned repeatedly
(important for considering exposures), while models for health and ecological risks are
multi-endpoint. As noted in the Subcommittee's earlier Advisory on the TRIM.FaTE model,
"Advisory on the Total Risk Integrated Methodology (TRIM)" (SAB,  1998), evaluating such
models presents formidable difficulties, especially with respect to the availability of
comprehensive field data.

       For health risk, a model should be able to assess chronic health effects (carcinogenic and
non-carcinogenic) based on long-term integrated exposure while predicting acute health effects
based on short-term peak exposure.  For ecological risk, all animals and plants  in an ecosystem can
be affected.  The endpoints of health and ecological risks are important factors that should be
addressed in the "Model Use Acceptability Criteria". If the endpoints are not well-defined, it is
impossible to evaluate the performance of the model in the conventional sense of "matching history",
although the model's composition would still be subjected to formal evaluation (as covered in the
"White Paper"). Expanding the scope of the proposal to include ecological risk assessment
models would also require the development and implementation  of procedures  that address the
ecological scale and complexity of such models. Obviously, it is important that the scale and
complexity of ecological models used in risk assessment be compatible and consistent with the
scale and complexity implied by regulatory needs.

-------
Charge 3: Please comment on the adequacy and utility of the proposal for helping decision-
       makers, other risk managers (e.g., assessors and their managers), and the public
              i.   understand models used in a regulatory context
              ii.  evaluate the appropriate use for the results from models in decision making
              iii. understand the "unseen" aspects of the modeling including choices made
                    during regulatory use and the rationale for those choices.

       The "White Paper" addresses the need to consider these aspects. However, in its current
form, it lacks the broader view of what needs to be included and the associated steps required for
implementation.  EPA needs, therefore, to provide the principal guidance on how to develop,
select, use appropriately decision-support models and to be aware of their limitations. Models are
filled with complex principles, statistics and mathematics.  Model parameterization and
comparison of model results with field data are usually discussed in terms of probability, scale and
levels of uncertainty. The basic language surrounding model evaluation is not common to many
state regulators, local decision-makers and the public. The materials addressed in Appendix D of
the "White Paper", for example, will be relevant for decision-makers and risk managers in
understanding the important issues in model development, application, and evaluation.  But again,
it is doubtful that this will be understood by the general public.  There should, therefore, be  great
concern when  state and local regulators are attracted to complicated models that generate "hard"
numbers under the false belief that the complexity of a model is tantamount to its worth or
validity, regardless of the available data or the particulars of a given situation. EPA model
development can benefit greatly from targeted stakeholder participation to obtain insight into the
range of applications, available data and constraints that exist in different locales throughout the
U.S.

       Outreach needs to be directed to stakeholder audiences outside of EPA, including state
regulators, planners, local decision-makers and the public. As recommended in its Advisory on
establishing the CREM "Advisory on the Charter for the Council for Environmental Regulatory
Modeling (CREM)" (SAB, 1999a), the Subcommittee strongly suggests that the Program Offices
and Regions be charged with this outreach function.

       The Agency  should consider investing in the development of a host of high-quality
outreach educational materials on the general topic of models as decision support tools. These
materials should be tailored to different audiences, and ideally would focus on different aspects of
decision support models, including such topics as:

       a)      what a model is

       b)      types of models

       c)      the regulatory realities and situations that generate the need for models

       d)      how models are developed and tested

       e)      how the validity of a model is determined

       f)      how models have been useful in previous applications


-------
       g)      how to compare models

       h)      the limitations of models

       i)      how models have been misused

       Section 3 of the "White Paper" describes how various offices within EPA have applied
models in their decision-making. The discussion in relation to model use and evaluation in the
Office of Air (OAR) might be particularly useful to help others understand model use in a
regulatory context. OAR appears to have addressed many of the issues raised in the "White
Paper". Several of the case studies presented in the report provide examples of the use of models
in a regulatory or decision-making context.  Perhaps additional case studies might be developed to
serve as examples of how models were used to support regulatory decision-making. Otherwise the
"White Paper" may not provide general guidance concerning the use of models in a regulatory
decision-making context.

       Underlying these model-centric themes set out above, EPA needs to ensure  that the public,
the regulatory  community and local decision-makers realize the role that value judgments play in
the selection of a model and the way a model is used. EPA is constantly confronted with the task
of modeling situations where data are limited and major gaps exist in our process-level
understanding.  In these situations, there is real controversy over the usefulness of quantitative
models vs. indices of risk and the applicability of "worst case" scenarios vs. other scenarios.
Different sectors of our society often support vastly different modeling approaches, because the
choice of a model may have major consequences on the regulatory climate surrounding their
interests. Thus, it is important to be very diligent in informing the public, state regulators and local
decision-makers about this aspect of models.  The public needs to hear the arguments for simple,
worst-case, decision-support models as well as the arguments surrounding the development and
use of more sophisticated risk-based models.

       In the Program Offices, EPA should consider developing educational materials to assist
stakeholders in the selection, understanding and use of models that address a program's mandates.
In addition to improving user literacy, this educational outreach should identify the  target
community for eventual feedback. Tracking model selection and model use by state and local
decision-makers will provide a valuable data set to EPA regarding the efficacy  of its programs.
The key to this program must be a  constant reassessment and refinement of the guidance and
communication to users.

       It is important to reemphasize that educational outreach is not a small task and will require
EPA to make a serious commitment of resources.  Education needs to reach beyond Washington
to inform those "in the trenches".  National program managers need to ensure that the educational
materials are crafted well  and also develop mechanisms to assess the materials for coherence,
quality and consistency.  Posting the results of models used in specific applications  on the Internet

-------
would provide the opportunity for an informed public to view and understand model selection and
application in a regulatory context.

       The background material in Section 3 of the "White Paper" describing the various
approaches to modeling issues as understood and implemented by the various offices within the
Agency might be particularly useful to help decision-makers and risk managers evaluate the
appropriate use of model results, in a general sense. While the issue has been comprehensively
addressed, the document provides no guidance on the specific evaluation of models in relation to
model quality. However, Section 5 of the "White Paper", which discusses the nature and
contribution  of various sources of uncertainty in the modeling process, may be useful in this
context of assisting managers to evaluate the results of model applications. It may assist these
managers in  enhancing their appreciation of the "unseen" aspects of the modeling enterprise.  For
example, similar issues of model evaluation are outlined in EPA/540R-94-039, including the
scientific foundation of model structure, adequacy of parameter estimation, verification, and
empirical comparisons; these identify important aspects of the modeling process that are not
always obvious to the community of decision-makers.

Charge 4: Please comment on the utility of the proposal to help those outside EPA
       understand the Agency's modeling goals and to help evaluate EPA's progress toward
       achieving those goals.

       In order to help organizations outside the EPA understand the Agency's modeling goals,
and to allow them to evaluate EPA's progress towards achieving its goals the information must be
accurate, up-to-date and publicly available.  The proposal indicates that the CREM will provide
guidance to EPA on model evaluation in the form of a protocol. Establishment of a model
clearinghouse by the CREM will allow model users to document the model evaluation process, and
those outside the EPA will  have the opportunity to access this information and understand the
Agency's modeling.

Charge 5: Please comment on the overall utility and adequacy of the proposed "Strategy for
       Defining Uncertainty in Model Elements" (Section 5.1) and supporting "How to"
       guidances (p. 7) for judging model acceptability.

       While this question generated much discussion among Subcommittee members, none of
that discussion undermined the Subcommittee's basic response: the "White Paper"'s proposed
strategy is both useful and adequate. However, the Subcommittee does have some
recommendations to make. These deal with matters of clarity and the need to be aware of some
important gaps in the strategy.

       First, as we have already recommended (in our response to Charge Question 1) care should
be taken with use  of the word "uncertainty" in the "White Paper". The Subcommittee believes that
on several occasions in the paper it would have been more correct to talk of "error",
"inconsistency", or "disagreement", as opposed to "uncertainty". We recommend that serious
consideration be given to preparing a glossary for inclusion in the final version of the paper.

       Second, while the "White Paper" itself acknowledges that attitudes towards "validation"
have changed this decade, there is still a need to ensure that the model acceptability guidelines,


-------
when published, are consistent with the contemporary consensus. As presently drafted the paper
lacks references to crucial literature published in the last 4-5 years.

       Third, the proposed strategy for evaluating models is appropriate and sufficiently
comprehensive. However, the steps in the analysis of uncertainty do not extend (explicitly) into
the decision context. The "White Paper" should make it clear that (a) uncertainties in a model
propagate forward into prediction uncertainty, (b) decisions should be shown to be robust in the
presence of such prediction uncertainty, and (c) procedures are available for ranking the
various contributing sources of uncertainty, so that steps may be taken to reduce the
consequences of the most critical of these as the model is successively improved over time.
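
       Points (a) and (c) above can be sketched with a simple Monte Carlo exercise.  The toy
steady-state model and the parameter distributions below are assumptions invented for
illustration; they do not represent any EPA model.  Point (b) would then be addressed by
checking that the decision at hand does not change across the resulting spread of predictions.

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

# Toy steady-state concentration model C = L / (Q * k): a stand-in for a
# fate-and-transport calculation with load L, flow Q, and decay rate k.
def model(L, Q, k):
    return L / (Q * k)

# Assumed (hypothetical) parameter distributions.
draws = {
    "L": lambda: random.gauss(100.0, 20.0),
    "Q": lambda: random.gauss(50.0, 5.0),
    "k": lambda: random.gauss(0.5, 0.05),
}
nominal = {"L": 100.0, "Q": 50.0, "k": 0.5}
N = 10_000

# (a) Parameter uncertainty propagates forward into prediction uncertainty.
preds = [model(draws["L"](), draws["Q"](), draws["k"]()) for _ in range(N)]
pred_std = statistics.stdev(preds)
print(f"prediction mean {statistics.mean(preds):.2f}, std {pred_std:.2f}")

# (c) Rank contributing sources: vary one parameter at a time while
# holding the others at their nominal values.
source_std = {}
for name in draws:
    outs = []
    for _ in range(N):
        args = dict(nominal)
        args[name] = draws[name]()
        outs.append(model(**args))
    source_std[name] = statistics.stdev(outs)
    print(f"std from {name} alone: {source_std[name]:.2f}")
```

In this invented example the load L dominates the prediction spread, so new information about L
would reduce uncertainty most; that is exactly the kind of ranking of sources described above.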

       Fourth, if a judgment on the acceptability of a given model is to be made, it will be
necessary to make such a judgment on the basis of incommensurate forms of information and
evaluative diagnostics, for example, from peer review, on the quantitative uncertainties of model
parameter estimates, on the statistics of the overall match of the model's outputs with history, and
so on. The Subcommittee is not aware of procedures for facilitating the process of coming to the
required, summary judgment and accordingly recommends that the "White Paper"  acknowledge
this gap clearly.

       Last, turning to the charge regarding "How to" guidances, the Subcommittee recognizes the
difficulty many will have in grasping the concepts and arguments underlying the discussion of the
"White Paper" (as evident in our responses to other Charge Questions). The Subcommittee feels,
therefore, that there may indeed be a need for producing written materials expressing these issues
in a format more accessible to a much wider audience. However, the Subcommittee wishes to
record its recognition that the issues of model evaluation are neither trivial nor inherently easy to
understand; great care will be needed when explaining them in lay terms.

       The Subcommittee also recognizes that there will be cases when no quantitative modeling
effort is warranted.  In these situations the effort should stop at the conceptual and perhaps
qualitative level of model development.

Charge 6 : EPA welcomes any additional  comments or suggestions.

       The Subcommittee notes that when the guidelines for model acceptability are first
implemented, there will be a backlog of Agency models whose quality must be evaluated in the
broad format recommended by the "White Paper". Although these models are  already in existence
(and have been used), future users will still need to know which of them have been evaluated as
acceptable. The Agency should give consideration to the details of any procedure for clearing this
backlog.

       With respect to this backlog of existing models, in particular, implementation of the
acceptability guidelines will afford opportunities for updating the theoretical basis of each model
(determining whether it still reflects the state of the science) and the appropriateness of the input
data, given
contemporary sampling and instrumentation schemes. Again, consideration should be given to the
procedure for taking advantage of this opportunity for updating.

-------
       The Subcommittee is aware that the Agency's Quality System Management Plan was
recently reviewed by the SAB's Environmental Engineering Committee (SAB, 1999b). The well-
designed program outline  contained in the Models Acceptability White Paper could be extended to
serve as the basis of the modeling elements component of the Agency's Quality System.

-------
                                  4.  CONCLUSION

       The Subcommittee finds that the guidance in the "White Paper" is generally useful for
addressing the quality and reliability of models.  Model quality issues have been comprehensively
addressed.  The general approach and the specific points raised in it are very constructive, and can
provide the basis for a more effective and consistent process of model development and
application across the Agency.  Furthermore, the paper includes the beginnings of a clarification
of how peer review could be interfaced with the more computationally oriented facets of an
evaluation. However, at this point the paper lacks guidance and information about what needs to
be included and associated steps required for implementation to be useful for decision makers and
the public (e.g., a communication strategy for obtaining user feedback and establishing a dialog in
model development with stakeholders).  The Subcommittee suggests that model information be
related to the totality of the specific decision-making use and, in that context, should strive to
achieve "transparency" in both technical and non-technical respects (e.g., policy decisions).

       The Subcommittee recommends that EPA define key terms, and use them consistently
throughout the document, and that the "White Paper" include a broader view of what needs to be
included  for effective model development and the associated steps required  for implementation.
The Subcommittee also recommends that the Agency Program Offices and Regions consider
investing in the development of a host of high-quality outreach educational materials, tailored to
different  audiences, on the general topic of models as decision support tools. The  Subcommittee
recommends that EPA seek targeted stakeholder participation to obtain insight into the range of
applications, available data and constraints that exist in different locales throughout the United
States.  EPA should also ensure that the public, the regulatory community, and local
decision-makers realize the role that value judgments play in the selection of a model and the way
a model is used.  The Subcommittee supports the establishment of the Council for Regulatory
Environmental Modeling (CREM) and a model clearinghouse by the CREM to allow model users
to document the model evaluation process, and those outside the EPA to access this information
and to provide feedback.

-------
                             REFERENCES CITED
EPA, 1994. Validation Strategy for the Integrated Exposure Uptake Biokinetic Model for Lead in
       Children. EPA/540R-94-039.  Office of Emergency and Remedial Response, US EPA,
      Washington, DC. December, 1994.

SAB, 1998. Advisory on the Total Risk Integrated Methodology (TRIM). EPA-SAB-EC-ADV-99-003.
      Science Advisory Board. US EPA, Washington, DC. December, 1998.

SAB, 1999a. Advisory on the Charter for the Council on Regulatory Environmental Modeling
      (CREM). EPA-SAB-EC-ADV-99-009. Science Advisory Board. US EPA, Washington,
      DC. June, 1999.

SAB, 1999b. Science Advisory Board Review of the Implementation of the Agency-Wide Quality
       System. EPA-SAB-EEC-LTR-99-002. Science Advisory Board. US EPA, Washington,
      DC. February, 1999.

-------
                         DISTRIBUTION LIST
Deputy Administrator
Assistant Administrators
Deputy Assistant Administrator for Science, ORD
Director, Office of Science Policy, ORD
EPA Regional Administrators
EPA Laboratory Directors
EPA Headquarters Library
EPA Regional Libraries
EPA Laboratory Libraries
Library of Congress
National Technical Information Service
Congressional Research Service

-------