UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
                               WASHINGTON, D.C. 20460
                                                            OFFICE OF THE ADMINISTRATOR
                                                               SCIENCE ADVISORY BOARD

                                  August 22, 2006

EPA-SAB-06-009

The Honorable Stephen L. Johnson
Administrator
U.S. Environmental Protection Agency
1200 Pennsylvania Avenue, N.W.
Washington, D.C. 20460

       Subject:  Review of Agency Draft Guidance on the Development, Evaluation, and
                Application of Regulatory Environmental Models and Models Knowledge Base
               by the Regulatory Environmental Modeling Guidance Review Panel of the EPA
               Science Advisory Board

Dear Administrator Johnson:

       The EPA Regulatory Environmental Modeling (REM) Guidance Review Panel of the
Science Advisory Board has completed its review of the Agency's Council on Regulatory
Environmental Models (CREM) Draft Guidance on the Development, Evaluation, and
Application of Regulatory Environmental Models, dated November, 2003 (also referred to as the
Draft Guidance), and the Models Knowledge Base (MKB), an online database of environmental
models.

       The Panel commends the Agency's REM initiative, which provides a much needed vision
for modeling across all EPA programs and offices.  The Draft Guidance in particular provides a
comprehensive overview of modeling principles and best practices. The Panel notes that the
Agency has been very responsive to previous SAB advice on environmental modeling, and
recommends that special recognition be accorded to Agency CREM participants for their
leadership. However, the Panel is concerned that the CREM activities have been funded through
an ad hoc approach and that the REM vision is not matched by a commensurate, and steady,
allocation of resources on the part of the Agency.  It is therefore recommended that the Agency
provide a meaningful commitment of resources to the REM initiative.

       The Panel also commends the Agency for recognizing the need for and beginning
development on the Models Knowledge Base (MKB).  This type of resource has been needed for
some time and even in its draft form, the MKB provides an easily accessible resource for the
modeling community that, if maintained and used, will significantly improve the development
and application of models both internal and external to the Agency.

-------
       The Panel's report emphasizes a number of ways in which the Draft Guidance and MKB
can be improved, including:

    •   Care in articulating the audience to which the Draft Guidance is directed;

    •   The need to develop and apply models within the context of a specific problem;

    •   Caution in the way that information on modeling uncertainty is evaluated and
       communicated, and the need for the Draft Guidance to more fully discuss uncertainty and
       sensitivity analysis methods;

    •   More consistency in conforming the terminology used in the Draft Guidance and MKB to
       previous uses and meanings through the REM Glossary; and

    •   The need to gather, and in many cases to develop, additional information to be included
       in the MKB, including the framework, evaluation, and limitations of models included, and
       to implement a mechanism within the MKB that allows the community of users to submit
       feedback on their experiences.

       In summary, the SAB finds that the Draft Guidance on the Development, Evaluation,
and Application of Regulatory Environmental Models is an important document, and the Models
Knowledge Base an important tool that will guide the Agency and others in developing and using
models for environmental purposes.  In the Panel's judgment it is essential that these efforts be
revised and updated regularly in order for their full value to the Agency to be realized. The
Panel stands ready to provide additional advice  and review on this effort as it continues to
progress.


                                  Sincerely,

             /signed/                                 /signed/

        Dr. M. Granger Morgan                Dr. Thomas L. Theis
        Chair                                 Chair
        Science Advisory Board                REM Guidance Review Panel
                                              Science Advisory Board

-------
                                       NOTICE

        This report has been written as part of the activities of the EPA Science Advisory Board
(SAB), a public advisory group providing extramural scientific information and advice to the
Administrator and other officials of the Environmental Protection Agency.  The SAB is
structured to provide balanced, expert assessment of scientific matters related to problems facing
the Agency. This report has not been reviewed for approval by the Agency and, hence, the
contents of this report do not necessarily represent the views and policies of the Environmental
Protection Agency, nor of other agencies in the Executive Branch of the Federal government, nor
does mention of trade names of commercial products constitute a recommendation for use.
Reports of the SAB are posted on the EPA website at http://www.epa.gov/sab.

-------
                     U.S. Environmental Protection Agency
                             Science Advisory Board
        Regulatory Environmental Modeling (REM) Guidance Review Panel
CHAIR
Dr. Thomas L. Theis, Professor, Civil and Materials Engineering, and Director, Institute for
Environmental Science & Policy, University of Illinois at Chicago, IL

MEMBERS
Dr. Linfield C. Brown, Professor, Civil and Environmental Engineering Department,
Tufts University, Medford, MA

Dr. Joseph DePinto, Senior Scientist, Limno-Tech, Inc., Ann Arbor, MI

Dr. Panos Georgopoulos, Professor, Environmental and Occupational Medicine, UMDNJ -
Robert Wood Johnson Medical School, Piscataway, NJ

Dr. Steven Heeringa, Research Scientist, Institute for Social Research (ISR), University of
Michigan, Ann Arbor, MI

Dr. Bruce K.  Hope, Senior Environmental Toxicologist, Oregon Department of Environmental
Quality (DEQ), Air Quality Division, Portland, OR

Dr. Alan J. Krupnick, Senior Fellow & Director, Quality of the Environment Division,
Resources for  the Future, Inc., Washington, DC

Dr. Randy L. Maddalena, Scientist, Environmental Energy Technologies Division, Indoor
Environment Department, Lawrence Berkeley National Laboratory, Berkeley, CA

Dr. June Fabryka-Martin, Staff Scientist, Los Alamos National Laboratory, Los Alamos, NM

Mr. David Merrill, Principal, Gradient Corporation, Cambridge, MA

Dr. Paulette Middleton, President, Panorama Pathways, Inc., Boulder, CO

Dr. Mitchell J. Small, The H. John Heinz III Professor of Environmental Engineering,
Department of Civil & Public Policy, Carnegie Mellon University, Pittsburgh, PA

Dr. Douglas G. Smith, Principal Environmental Health Scientist, Risk Assessment Department,
ENSR International, Inc., Westford, MA

Dr. James H.  Smith, Lead Technical Project Manager, State Photochemical Smog Group, Texas
Commission on Environmental Quality, Austin, TX
                                          ii

-------
Dr. Richard L. Wetzel, Chair, Department of Biological Sciences and Professor of Marine
Science, Virginia Institute of Marine Sciences (VIMS), College of William & Mary, Gloucester
Pt, VA

Dr. Peter Wilcoxen, Associate Professor of Economics and Public Administration, Syracuse
University,  Syracuse, NY

SCIENCE ADVISORY BOARD STAFF
Dr. K. Jack Kooyoomjian, Designated Federal Officer, US EPA Science Advisory Board,
Washington, DC
                                           iii

-------
                     U.S. Environmental Protection Agency
                             Science Advisory Board
CHAIR
Dr. M. Granger Morgan, Carnegie Mellon University, Pittsburgh, PA

SAB MEMBERS
Dr. Gregory Biddinger, ExxonMobil Biomedical Sciences, Inc, Houston, TX

Dr. James Bus, The Dow Chemical Company, Midland, MI

Dr. Trudy Ann Cameron, University of Oregon, Eugene, OR

Dr. Deborah Cory-Slechta, University of Medicine and Dentistry of New Jersey and Rutgers
State University, Piscataway, NJ

Dr. Maureen L. Cropper, University of Maryland, College Park, MD

Dr. Virginia Dale, Oak Ridge National Laboratory, Oak Ridge, TN

Dr. Kenneth Dickson, University of North Texas, Denton, TX

Dr. Baruch Fischhoff, Carnegie Mellon University, Pittsburgh, PA

Dr. A. Myrick Freeman, Bowdoin College, Brunswick, ME

Dr. James Galloway, University of Virginia, Charlottesville, VA

Dr. Lawrence Goulder, Stanford University, Stanford, CA

Dr. Rogene Henderson, Lovelace Respiratory Research Institute, Albuquerque, NM

Dr. Philip Hopke, Clarkson University, Potsdam, NY

Dr. James H. Johnson, Howard University, Washington, DC

Dr. Meryl Karol, University of Pittsburgh, Pittsburgh, PA

Dr. Catherine Kling, Iowa State University, Ames, IA

Dr. George Lambert, Robert Wood Johnson Medical School/ University of Medicine and
Dentistry of New Jersey, Piscataway, NJ

Dr. Jill Lipoti, New Jersey Department of Environmental Protection, Trenton, NJ
                                         IV

-------
Dr. Genevieve Matanoski, Johns Hopkins University, Baltimore, MD

Dr. Michael J. McFarland, Utah State University, Logan, UT

Dr. Jana Milford, University of Colorado, Boulder, CO

Dr. Rebecca Parkin, The George Washington University, Washington, DC

Mr. David Rejeski, Woodrow Wilson International Center for Scholars, Washington, DC

Dr. Joan B. Rose, Michigan State University, E. Lansing, MI

Dr. Kathleen Segerson, University of Connecticut, Storrs, CT

Dr. Kristin Shrader-Frechette, University of Notre Dame, Notre Dame, IN

Dr. Robert Stavins, Harvard University, Cambridge, MA

Dr. Deborah Swackhamer, University of Minnesota, Minneapolis, MN

Dr. Thomas L. Theis, University of Illinois at Chicago, Chicago, IL

Dr. Valerie Thomas, Georgia Institute of Technology, Atlanta, GA

Dr. Barton H. (Buzz) Thompson, Jr., Stanford University, Stanford, CA

Dr. Robert Twiss, University of California-Berkeley, Ross, CA

Dr. Terry F. Young, Environmental Defense, Oakland, CA

Dr. Lauren Zeise, California Environmental Protection Agency, Oakland, CA
SCIENCE ADVISORY BOARD STAFF
Mr. Thomas Miller, Designated Federal Officer, US EPA Science Advisory Board,
Washington, DC

-------
                                  Table of Contents

EXECUTIVE SUMMARY	1

BACKGROUND	4

1.   BEST PRACTICES	8

  1.1.    INTERPRETATION OF "BEST AVAILABLE AND PRACTICABLE SCIENCE"	8
  1.2.    GENERAL COMMENTS	8
  1.3.    PROBLEM SPECIFICATION	9
  1.4.    MODEL CALIBRATION AND SENSITIVITY ANALYSIS	10
  1.5.    MODEL POST-AUDIT	11
  1.6     DOCUMENT ORGANIZATION	11

2.   GOALS AND METHODS	14

  2.1.    INTRODUCTION	14
  2.2.    INTENDED AUDIENCE AND SCOPE OF USE	14
  2.3.    GLOSSARY	16
  2.4.    MODEL DOCUMENTATION, PROJECT DOCUMENTATION, AND USER MANUAL	17

3.   GRADED APPROACH	18

  3.1.    DEFINITION OF "GRADED APPROACH"	18
  3.2.    MODELING COMPLEXITY AND ASSOCIATED EVALUATION NEEDS	18
  3.3.    EVALUATING MODEL RESPONSE	19
  3.4.    USE OF MULTIPLE AND LINKED MODELS	20
  3.5.    USE OF MODEL-DERIVED DATA	21

4.   PRACTICAL ADVICE FOR DECISION-MAKERS	22

  4.1.    GENERAL COMMENTS ON UNCERTAINTY	22
  4.2.    SENSITIVITY ANALYSIS VIS-A-VIS UNCERTAINTY ANALYSIS	24
  4.3.    UNCERTAINTY ANALYSIS PRACTICES/METHODS (REM GUIDANCE SECTION C.6)	25
  4.4.    VALUE OF INFORMATION - IDENTIFYING "UNCERTAINTIES THAT MATTER"	27
  4.5.    COMMUNICATING UNCERTAINTY	28

5.   IDENTIFICATION AND STRUCTURE OF OPTIMAL SET OF INFORMATION FOR ALL USERS	30

  5.1.    GENERAL COMMENTS AND SUGGESTIONS	30
  5.2.    SPECIFIC SUGGESTIONS BY THE PANEL	33
  5.4.    LISTING OF KEY PUBLICATIONS AND APPLICATIONS OF MODELS	35
  5.5.    CLARIFICATION OF MKB ENTRY SHEET ITEMS C1-C3	35

6.   DATA DICTIONARY AND  DATA STRUCTURE	37

  6.1.    GENERAL COMMENTS	37
  6.2.    MODEL PERFORMANCE INFORMATION	38
  6.3.    ADDITIONAL RECOMMENDATIONS	39

7.   QUALITY OF INFORMATION PROVIDED ABOUT THE MODELS	42

  7.1     GENERAL COMMENTS	42
  7.2     VISION FOR THE KNOWLEDGE BASE	43
  7.3     QUALITY ASSURANCE AND QUALITY CONTROL	44
  7.4     LAYOUT AND NAVIGATION OF KNOWLEDGE BASE	46
  7.5     UPDATING THE KNOWLEDGE BASE	46
  7.6     THE ROLE OF THE KNOWLEDGE BASE AS A "MODEL SELECTION TOOL"	47

REFERENCES	48
                                           VI

-------
APPENDIX A - ENHANCEMENTS TO THE GLOSSARY	52

APPENDIX B - THE CREM MODELS KNOWLEDGE BASE DATA ENTRY SHEET	54

APPENDIX C - PANEL MEMBERS EXPERIENCES USING THE MKB	56

  C-1  CALPUFF (THE ILLUSTRATIVE AIR MODEL)	56
  C-2  THE INTEGRATED PLANNING MODEL (IPM - THE ILLUSTRATIVE ECONOMIC MODEL)	59
  C-3  AQUATOX (THE ILLUSTRATIVE WATER QUALITY MODEL)	60
  C-4  OTHER MODELS	61

APPENDIX D - ACRONYMS	63
                                        vii

-------
                             EXECUTIVE SUMMARY

       The Regulatory Environmental Modeling (REM) Panel of the SAB has reviewed the
Agency's Draft Guidance on the Development, Evaluation, and Application of Regulatory
Environmental Models, dated November 2003 (referred to hereafter as the Draft Guidance), and
the Agency's Models Knowledge Base (referred to as the MKB).  Major points of consensus are
summarized below.1


       The Panel commends the Agency's REM initiative, which provides a much needed
vision for modeling across all EPA offices. The Draft Guidance in particular provides a
comprehensive overview of modeling principles and best practices. The Panel notes that the
Agency has been very responsive to previous SAB advice on environmental modeling, and
recommends that special recognition be accorded to Agency REM participants for their
leadership. The Panel believes that the Regulatory Environmental Models (REM) program at
EPA will provide leadership and guidance for improving the quality of model development,
evaluation, and application in the use of environmental models for decision support. As a part of
this program, the MKB will provide a web-based database of information on selected models,
including key operational and scientific features, model downloads, guidance for use, and
examples of model applications provided by model developers. Nevertheless, the Panel is
concerned that the activities of EPA's Council on Regulatory Environmental Modeling (CREM)
have been funded through an ad hoc approach and that the REM vision is not matched by a
commensurate, and steady, allocation of resources. It is therefore recommended that the Agency
provide a meaningful commitment of resources to the REM initiative.


       The Draft Guidance is comprehensive, and will most likely be read and used by a wide
variety of audiences including model developers, analysts, managers at various levels,  decision-
makers, and other stakeholders who come from Federal, State, and private sectors. Yet it is
written primarily for, and most comprehensible to, those who develop models and/or those who "use" or run
models to generate output. Accordingly the Panel recommends that the Agency clarify carefully
the use of the Draft Guidance for a variety of audiences, describing or suggesting how  it can be
used beneficially by different participants in a modeling project. In the same vein, the Panel
finds that the use of modeling terminology is sometimes inconsistent with Agency past uses, or
usage common in the modeling community. It is recommended that these inconsistencies be
recognized through developing and using a common reference, the Glossary, in which these and
other terms are carefully defined. The current Glossary in the Draft Guidance should be
expanded to make it as comprehensive as possible.
1 This report contains the consensus views of the REM Panel on the current state of the REM program within the
Agency, as presented in the Draft Guidance and the MKB documents. The report is organized by responses of the
Panel to charge questions posed by the Agency. Generally speaking, each set of responses consists of statements
and explanatory materials that present the Panel's point of view on a given topic, which are followed by formal
recommendations, or in some cases commendations. For ease in discerning the plain meanings and actions of the
Panel, these recommendations and commendations are set in boldface. Less urgent, but still important, observations,
suggestions, and concerns are not set in boldface and/or are contained in the appendices of the report.

-------
       In the Panel's view it is important that the specifics of the problem posed be explicitly
stated and agreed upon by all stakeholders, and be used to guide model conceptual development,
complexity, data needs, and interpretation of output. Toward this end, the Panel suggests an
alternative version of Figure 1 (page 7) in the Draft Guidance in which Problem Specification is
given greater emphasis (page 12 in this review). The Panel believes that this alternative figure
better reflects the central role of stakeholders in the public policy process, and provides a more
accurate representation of the modeling process and its iterative nature.


       As noted in the Draft Guidance the evaluation of uncertainty in the application of models
is an important element in both understanding a system and in presenting results to decision-
makers, a point with which the Panel concurs. Indeed the use of Quantitative Uncertainty
Assessment (QUA) methods is a desirable, and often necessary component of modeling, but
experience suggests that the use of increasingly complex QUA techniques without an equally
sophisticated framework for decision-making and communication may only increase
management  challenges. Accordingly the Panel recommends that the Draft Guidance strongly
advise modelers to select particular QUA methods only after becoming aware of how the
decision-maker plans to use the information on uncertainty. This is an important component of
the Problem Specification as well.

       The Panel finds that the Draft Guidance provides a generally adequate discussion of
sensitivity analysis methods; however, it is deficient in articulating a more tangible set of
alternatives for assessing model uncertainty, and a clearer distinction between sensitivity and
uncertainty analysis. While references cited provide an array of applicable methods to address
model uncertainty, the Draft Guidance does not provide sufficient discussion, context, and
recommendations necessary to provide a model user/decision-maker with "practicable"
information relating to appropriate uncertainty analysis methods and how to convey the results
of such analyses. In addition, recommendations for uncertainty analysis could suggest focusing
resources on those processes to which the model state variables are most sensitive and that are least
certain in terms of their formulation and/or parameterization. The topic of the propagation of
uncertainty in modeling frameworks relying upon linked models is not addressed in the Draft
Guidance, and warrants specific discussion. The Panel also recommends that both the Draft
Guidance and the MKB provide more practicable information through inclusion of "case study"
examples of where and how EPA is currently incorporating QUA in environmental models as an
integral component of decision-making.
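
       As a notional illustration of what such a quantitative uncertainty assessment can involve
when models are linked, the sketch below propagates input uncertainty by Monte Carlo sampling
through a hypothetical two-model chain (an emissions model feeding a simple dilution model).
The model forms, parameter distributions, and units are assumptions made purely for illustration;
they are not drawn from the Draft Guidance or from any EPA model.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000  # number of Monte Carlo samples

# Hypothetical upstream model: annual pollutant load (kg/yr).
# The emission factor and activity level are treated as uncertain inputs
# (illustrative distributions only).
emission_factor = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n)  # kg per unit activity
activity = rng.normal(loc=500.0, scale=50.0, size=n)                  # activity units per year
load = emission_factor * activity                                     # kg/yr

# Hypothetical downstream model: steady-state concentration from simple
# dilution, C = load / flow, with uncertain annual flow.
flow = rng.lognormal(mean=np.log(3.0e6), sigma=0.4, size=n)           # m^3/yr
concentration = (load / flow) * 1000.0                                # kg/m^3 -> mg/L

# Percentiles of the linked-model output show how the input uncertainty
# propagates through both models to the quantity of regulatory interest.
p5, p50, p95 = np.percentile(concentration, [5, 50, 95])
print(f"Concentration (mg/L): 5th = {p5:.3f}, median = {p50:.3f}, 95th = {p95:.3f}")
```

The percentiles reported at the end are the kind of summary a decision-maker might request, and
the Panel's point above is that the choice of such a method and output format should follow from
how the decision-maker intends to use the uncertainty information.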

       The Panel commends the Agency for recognizing the need for and beginning
development on the MKB. This type of resource has been needed for some time and even in its
draft form, it provides an easily accessible resource for the  modeling community that, if
maintained and used, will significantly improve the development and application of models both
internal and external to the Agency. In its review of the MKB, the Panel arrived at several
suggestions for modifying the data entry sheet that are given in our response to Charge Question
5.  Perhaps the most important recommendation is the need to clarify and in some cases gather
additional information on models including their framework (which in the Panel's opinion needs
to be redefined),  evaluation, and limitations. The Model Evaluation section of the Model Science
MKB information page considers many of the key issues needed to evaluate the scientific rigor
behind the underlying model development and previous applications, and addresses many of the

-------
elements of good modeling practice that are emphasized in the Draft Guidance.  Indeed, the
Panel views an important purpose of the MKB as providing an incentive for model developers
and stakeholders to conduct and openly communicate their efforts in model evaluation. From this
perspective, the Panel recommends some additional pieces of information that should be elicited
and reported, including:

        1)  Documented examples of peer review for the model, including reviews
           conducted by the EPA, other agencies or panels, and papers presented in
           the peer reviewed literature. Key limitations and needs for improvement
           that were identified in these evaluations should be reported;

        2)  Benchmarking studies in which the model's predictions and/or accuracy were
           compared with other models;

        3)  Provision of a mechanism that actively solicits feedback from the user community
           regarding application experience and model performance, both inside and outside the
           agency, beyond voluntary e-mails to designated contacts for individual models; and

        4)  Information on revision tracking, which should be incorporated into the MKB.

       The Panel also recommends that the Agency follow its own standard QA/QC program
procedures for ensuring quality of all of the underlying information in the MKB system. A
meaningful commitment to QA/QC is necessary to ensure the quality of information in the
MKB, without which it is doubtful the MKB will achieve its potential value and utility. This
QA/QC function will require the allocation of an appropriate level of resources on the part of
the Agency.

       Finally, this report contains specific experiences of Panel members (Appendix C) on the
use of the MKB for three specific models that it contains. These experiences can help guide
efforts by the Agency as they continue to modify the MKB in the future.

-------
                                       BACKGROUND2

        The impetus for much of the Council for Regulatory Environmental Modeling's (CREM)
current activities derives from the Data Quality Act and the Act's requirement that EPA and
other executive agencies establish mechanisms to allow the public to raise questions about
information they issue.3 Because environmental models and their analytical results were
generally construed to fall within the Act's ambit, EPA's Administrator charged the CREM to
establish guidelines to clarify the Agency's views on model quality.4 While the Data Quality Act
was passed in 2000, the history of EPA's and the SAB's interest in the nexus between policy and
environmental models actually dates back a few decades, as described in the following
paragraphs.

       In December 1984, the Chairman of the Executive Committee of the SAB first addressed
the issue of best modeling practices in a letter to the EPA Administrator, recommending that a
"systematic effort of model validation be initiated, including identification  of the appropriate
balance between monitoring and modeling." In 1989, the SAB's Environmental Engineering
Committee, noting common problems among the models brought before the Committee for
review, recommended that EPA establish "a central coordinating group within the EPA to assess
the status of environmental models currently used or proposed for use in regulatory assessment,
and to provide guidance in model selection and use by others in the Agency."5 In subsequent
years, SAB addressed a variety of modeling issues, such as the need for generic models to
account for site-specific circumstances,6 and the need to conduct sensitivity and uncertainty
analyses to better characterize modeling uncertainties.7

        Among the efforts to respond to SAB suggestions, EPA established an ad hoc committee
in 1992 to address challenges related to generating and using models. This  committee, the
Agency Task Force on Environmental Regulatory Modeling (ATFERM), produced guidance on
2 CREM Background Materials:  A web version of the CREM-related background information, with links to
pertinent documents, is available at www.epa.gov/crem/sab.
3 U.S. Congress. 2001. The Data Quality Act, Section 515 of the Treasury and General Government
Appropriations Act for Fiscal Year 2001, Pub. L. No. 106-554.
4 U.S. EPA. 2003a. Council for Regulatory Environmental Modeling, Administrator Memorandum, 2003.
Available at http://www.epa.gov/osp/crem/library/whitman.PDF.
5 U.S. EPA. SAB. 1989. Resolution on the Use of Mathematical Models by EPA for Regulatory Assessment and
Decision-Making, by the Modeling Resolution Subcommittee of the Environmental Engineering Committee,
Science Advisory Board, EPA-SAB-EEC-89-012, January 13, 1989. Available at
http://www.epa.gov/osp/crem/library/sab_89resolution_models.pdf.
6 U.S. EPA. SAB. 1990. Review of the CANSAZ Flow and Transport Model for Use in EPACMS, Report of the
Saturated Zone Model Subcommittee of the Environmental Engineering Committee (EEC), Science Advisory Board
1990, EPA-SAB-EEC-90-009, March 27, 1990. Available at http://www.epa.gov/osp/crem/library/sab_cansaz.pdf.
7 U.S. EPA. SAB. 1995. Commentary on Bioaccumulation Modeling Issues, Report from the Bioaccumulation
Subcommittee, Science Advisory Board, EPA-SAB-EPEC/DWC-COM-95-006, September 29, 1995. Available at
http://www.epa.gov/osp/crem/library/sab_bioaccumulation.pdf.
                                             4

-------
the peer review of models,8 suggested model acceptability criteria, and proposed a charter for a
Council for Regulatory Environmental Modeling (CREM).

       In 1999, the SAB recommended that EPA establish policies and procedures for the
development, validation and use of environmental regulatory models. SAB further suggested that
EPA should collaborate with model users not just inside the Agency, but outside as well, seeking
their feedback to continually improve model development and use.

       In February 2000, EPA's Administrator formally established the CREM to continue the
initiatives toward building consensus and consistency in modeling efforts by the Agency.9  In
February 2003, the Administrator stated her expectations for the CREM to lead EPA in, among
other things:

   1.  providing "guidance for the development, assessment, and use of environmental models;"
       and
   2.  making "publicly accessible an inventory of EPA's most frequently used models, which
       will include information on a model's use, development, validation, and quality
       assessment."

It is with regard to these two items that the CREM has now turned to the SAB's Regulatory
Environmental Modeling Guidance Review Panel for advice. Specifically, the CREM has
submitted the following charge questions to the Panel.

Specific Charge Questions

Charge Question 1:  Has EPA sufficiently and appropriately identified the best practices,  such
that decisions based on models developed and used in accordance with these practices may be
said to be based on the best available, practicable science?

Charge Question 2:  Has EPA sufficiently and appropriately described the goals and methods,
and in adequate detail, such that the guidance serves as a practical, relevant, and useful tool for
model developers and users? If not, what else would you recommend to achieve these ends?

Charge Question 3:  Has EPA sufficiently and appropriately proposed a graded approach, such
that users of the guidance can determine the appropriate level of evaluation for a particular
model use? If there are deficiencies in the proposed approach, what would you recommend to
correct it, and why?
8 U.S. EPA. 1994. Agency Guidance for Conducting External Peer Review of Environmental Regulatory Modeling,
1994. Available at http://cfpub.epa.gov/crem/modelpr.cfm.


9 U.S. EPA. 2000. Framework for the Council on Regulatory Environmental Modeling, Available at
http://www.epa.gov/osp/crem/library/crem%20framework.htm.

-------
Charge Question 4: Has EPA sufficiently and appropriately provided practicable advice for
decision-makers who must deal with the uncertainty inherent in environmental models and their
application? What additional advice should EPA consider in dealing with uncertainty, and why?
A number of researchers recommend a Bayesian approach to help decision-makers incorporate
uncertainty into their decisions and to do so in a transparent fashion (see, e.g., Attachments B
and C). Is the use of methods such as Bayesian networks an effective and practicable way for
EPA decision-makers to incorporate uncertainty within their decisions and to communicate this
uncertainty to stakeholders? If so, how? Are there alternative methods available?

Models Knowledge Base: As noted above, the SAB recommended that the CREM coordinate
EPA efforts to collaborate and seek input from model developers and users both inside and
outside EPA. One mechanism to implement this collaboration is through a web-accessible
knowledge base for environmental models. EPA has developed such a knowledge base to
communicate more clearly the data, algorithms, assumptions, and uncertainties underlying each
model; to facilitate the use of individual models or the combined use of multiple models; and to
enable developers and analysts to more easily identify information needs.

Charge Question 5: The Panel should consider that environmental models will be used by
people whose technical sophistication will vary widely. EPA has therefore attempted to cull
information about models that broadly serve the needs of all users, using a data template to
collect this information (see Attachment D). Has EPA identified, structured and developed the
optimal set of information to request from model developers and users, i.e., the amount of
information that best minimizes the burden on information providers while maximizing the
utility derived from the information?

Charge Question 6: EPA has developed a data dictionary  and  database structure to organize
the information it has collected on environmental models (see Attachments E and F). Has EPA
provided the appropriate nomenclature needed to elicit specific information from model
developers that will allow broad intercomparisons of model  performance and application without
bias toward a particular field or discipline?

Charge Question 7:  To facilitate review for this particular charge question, the Panel should
focus on three models that represent the diversity of model information housed within the
Models Knowledge Base. These models are:  (1) AQUATOX, a water quality model, with
information found at http://cfpub.epa.gov/crem/; (2) the Integrated Planning Model, a model to
estimate air emissions from electric utilities, with information found at http://cfpub.epa.gov/crem/;
and (3) NWPCAM, an economic model, with information at
http://cfpub.epa.gov/crem/crem_report.cfm?deid=74918.10
10 The final model selections from the MKB for observation and examination by the Panel include CALPUFF (The
Illustrative Air Model - see Appendix C-1 in this Report); IPM (Integrated Planning Model - The Illustrative
Economic Model - see Appendix C-2 in this Report); and AQUATOX (The Illustrative Water Quality Model -
see Appendix C-3 in this Report). Other models are discussed generally in Appendix C-4 of this Report.

-------
       Using these three models as examples and emphasizing that EPA is not seeking a review
of the individual models, but rather the quality of the information provided about the models,
EPA poses the following questions to the Panel. Through the development of this knowledge
base, has EPA succeeded in providing:

       (7a) Easily accessible resource material for new model developers that will help to
       eliminate duplication in efforts among the offices/regions where there is overlap in the
       modeling efforts and sometimes communication is limited?

       (7b) Details of the temporal and spatial scales of data used to construct each model as
       well as endogenous assumptions made during model formulation such that users may
       evaluate their utility in combination with other models and so that  propagation of error
       due to differences in data resolution can be addressed?

       (7c) Examples of "successful" models (e.g., widely applied, have been tested, peer
       reviewed etc.)?

       (7d) A forum for feedback on model uses outside Agency applications and external
       suggestion for updating/improving model structure?

-------
                                1.     BEST PRACTICES

Charge Question 1:  Has EPA sufficiently and appropriately identified the best practices, such
that decisions based on models developed and used in accordance with these practices may be
said to be based on the best available, practicable science?


          1.1.       Interpretation of "Best Available and Practicable Science"

       In developing and applying a model for supporting a regulatory action or decision, it is
important to meet the criterion stated in Charge Question 1--"based on the best available,
practicable science." To the Panel, this means that the model uses the best current science that is
consistent with the model's intended use, whether that use is regulatory, management or
scientific. The term "practicable" refers to consideration of problem specification and
programmatic constraints (data quality and availability, and limitations of time and resources) in
selection of model complexity (i.e., spatial, temporal, and process resolution). Thus in the
context of Figure 2 (page 11) of the Draft Guidance document, the Panel suggests that the
location of the minimum (both in the x- and y-directions) in the uncertainty versus model
complexity curve will depend on the problem specification and programmatic constraints. The
Panel believes that when the complexity of a model is most appropriate for the problem and the available
data and resources, the minimum possible uncertainty is obtained and, hence, the best available,
practicable science is being used. The Panel interprets this question as asking whether the Guidance
aids the modeler in finding that level of model complexity.
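
       This trade-off can be illustrated with a purely notional calculation, sketched below. The
functional forms and the data-quality factor are assumptions made only to show the qualitative
behavior of Figure 2; they are not taken from the Draft Guidance.

```python
import numpy as np

# Notional illustration of the uncertainty-versus-complexity trade-off shown
# in Figure 2 of the Draft Guidance. The functional forms and the data-quality
# factor below are assumptions for illustration only.
complexity = np.linspace(1.0, 20.0, 200)   # arbitrary index of model complexity

def total_uncertainty(complexity, data_quality_factor):
    """Structural error shrinks as complexity grows; parameter/input error
    grows as more sparsely supported parameters must be estimated."""
    structural = 10.0 / complexity
    parametric = data_quality_factor * complexity ** 1.5 / 20.0
    return structural + parametric

for factor in (0.5, 1.0, 2.0):             # larger factor = poorer data support
    curve = total_uncertainty(complexity, factor)
    best = complexity[np.argmin(curve)]
    print(f"data-quality factor {factor}: total uncertainty is minimized "
          f"near complexity index {best:.1f}")
```

The point of the sketch is simply that the complexity at which total uncertainty is minimized
shifts as data quality and other programmatic constraints change, which is why the Panel ties the
choice of model complexity to the Problem Specification.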


          1.2.       General Comments

       In general, the Panel finds the REM initiative provides a common and much needed
vision for modeling across  all of the offices within the Agency. The draft document in
particular provides a comprehensive overview of modeling principles and best practices, in
a concise manner. The Panel also finds that the Agency has been responsive to previous
SAB advice on modeling practices and commends  the REM participants for their
leadership. In particular the Panel applauds the emphasis in the document on using the peer
review process to ensure that a Regulatory Environmental Model is using the best available,
practicable science. However, the Panel is concerned that the CREM activities have been
funded through an ad hoc approach and that the REM vision is not matched by a
commensurate, and  steady, allocation of resources on the part of the Agency. It is therefore
recommended that the Agency provide a meaningful commitment of resources to the REM
initiative. The Panel believes that successful implementation of this recommendation will
require a commitment from the top of Agency management, will require institutional change in
the Agency, will take significant time to implement, and will require the establishment of a
formal institutional mechanism responsible for review, oversight and coordination of model use
in EPA.

-------
       The Panel encourages the Agency to recommend that any regulatory modeling project
include peer review as part of its Quality Assurance Project Plan (QAPP). Furthermore, the
Panel suggests that the peer review plan implement ongoing peer review through all stages of the
modeling process, not just after the model application.  Such a proactive practice will assist in
avoiding technical errors or omissions that are often difficult or impossible to rectify after the
project is over. Also, the Panel favors an open modeling process for Regulatory Environmental
Models, in which modeling decisions and results are shared with stakeholders through model
development and application. This practice avoids a situation where the model fails to address
the regulatory questions as conceived by the various  stakeholders in the process.

       Consistent with the above discussion concerning ongoing peer review and interaction
between modelers and stakeholders and to reflect the recommendations of the Panel presented in
more detail below, the Panel suggests an Alternative Figure 1 to the EPA's Figure 1 shown in the
Draft Guidance (U.S. EPA. 2003).  The Alternative Figure 1 represents the same general logic
and information flow provided in the EPA's original Figure 1, but it has been amended to
enhance the detail of some of the particular steps.  It has also been expanded to represent the
Panel's perception of the interaction with stakeholders in both the identification and specification
of the problem to be solved and in the  ongoing review of the quality of the regulatory modeling
tools.
          1.3.       Problem Specification


       The Panel appreciates the distinctions made in the Draft Guidance between model
framework development and model application. Nevertheless, the Panel finds that this distinction
is not consistently maintained throughout the document. For example, the terms "application
tool" in Section 2 means problem-specific model implementation whereas "model application"
in Section 4 means model-based decision making. The Panel recommends that the term
application tool be replaced with "problem-specific implementation."

       The Panel believes that Problem Specification is a critical element of any modeling
project. It guides the development of the conceptual model and it governs the model complexity.
It must, therefore, include a clear and complete statement of policy, management, and/or
scientific objectives, model spatial and temporal domain and resolution characteristics, as well as
program constraints (e.g., legal, institutional, data, time and costs).  This process must involve
interactions among all stakeholders. The Panel recommends that Problem Specification be
given greater emphasis in the Draft Guidance by elevating it to a separate, initial step in
the modeling process, as shown in the Alternative Figure 1 offered below.

       In accordance with this observation, the Panel offers the following suggestions that
should be included for completeness and clarity in the expanded problem specification portion of
the Draft Guidance for each of the above aspects of problem specification:

-------
       1) Regulatory or research objectives are statements of what questions a model has to
          answer. The statement of modeling objectives should include the state variables of
          concern, the stressors (model inputs) driving those state variables and their control
          options, appropriate temporal and spatial scales, user acceptance of the model, and
          very importantly, the degree of accuracy and precision of the model. The discussion
          of Data Quality Objectives (DQOs) in the document is good, but the relationship
          between total uncertainty, accuracy, and precision of the model needs to be further
          clarified.

       2) An alternative description of model types as a component of problem specification
          should compare and contrast: empirical vs.  mechanistic,  static vs. dynamic,
          simulation vs. optimization, deterministic vs.  stochastic,  lumped vs. distributed.

       3) Specifying the model domain characteristics includes: identification of the
          environmental domain being modeled; specification of transport and transformation
           processes within that domain that are relevant to the policy/management/research
          objectives; specification of important time and space scales inherent in transport and
          transformation processes within that domain in comparison with the time and space
          scales of the problem objectives; and any peculiar conditions of the domain that will
          affect model selection or new model construction.

       4) Problem specification should include a discussion of the potential programmatic
          constraints. These address time and budget, available data or resources to acquire
          more data, legal and institutional considerations, computer resource constraints, and
          experience and expertise of the modeling staff.

       These factors, collectively, allow the modeler to determine the "complexity" of a model
that is necessary and  sufficient for the application under consideration (see recommended
definition of model complexity in Charge Question 2 response).


          1.4.      Model Calibration and Sensitivity Analysis

       The Panel applauds the overall treatment of model quality assurance and evaluation in
Appendices B  and C  of the Draft Guidance. However, the Panel recommends that the process
of "model calibration" receive increased attention regarding guiding principles and best
practices, both in the main text of the document and in the appendices. While calibration of
air models may not always be feasible or justified, it is an integral part of water quality modeling
and one of the more poorly understood steps in the modeling process. For example, the
document could discuss how sensitivity analysis can be used  during the calibration process.

       Most process-oriented environmental models are underdetermined; that is, they contain
more uncertain parameters than state variables that can be used to perform a calibration.
Therefore, good model calibration practice uses sensitivity analysis to determine key processes
for a given problem-specific implementation and then recommends empirical determination of
                                           10

-------
the rate of those key processes as part of the calibration process in addition to measuring the time
and space profile of relevant state variables. This practice can help further constrain a model for
which parameterization by calibration (i.e. ground-truthing with empirical data or statistical
techniques with data to estimate unknown parameters) is difficult. An example of this practice
would be to measure the rate of photosynthesis (process) in a lake in addition to the biomass of
phytoplankton (state variable).
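
       A minimal sketch of this practice is shown below, using a one-at-a-time perturbation of a
hypothetical phytoplankton biomass model. The model form, parameter names, and nominal rates
are assumptions made solely for illustration; they are not part of the Draft Guidance or of any
particular EPA model.

```python
def phytoplankton_biomass(growth_rate, loss_rate, initial_biomass=1.0,
                          days=30.0, dt=0.1):
    """Hypothetical lake phytoplankton model, dB/dt = (growth - loss) * B,
    integrated with a simple Euler scheme. Units are illustrative only."""
    biomass = initial_biomass
    for _ in range(int(days / dt)):
        biomass += (growth_rate - loss_rate) * biomass * dt
    return biomass

nominal = {"growth_rate": 0.25, "loss_rate": 0.18}   # assumed per-day rates
base = phytoplankton_biomass(**nominal)

# One-at-a-time sensitivity: perturb each parameter by +10 percent and compare
# the relative change in the state variable (biomass) with the relative change
# in the parameter (a normalized sensitivity coefficient).
for name, value in nominal.items():
    perturbed = dict(nominal, **{name: value * 1.10})
    response = phytoplankton_biomass(**perturbed)
    sensitivity = ((response - base) / base) / 0.10
    print(f"{name}: normalized sensitivity = {sensitivity:.2f}")
```

In this notional case the biomass prediction is far more sensitive to the growth rate than to the
loss rate, which is the kind of result that would argue for measuring the photosynthesis rate
directly, as in the lake example above, rather than relying on calibration alone.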


          1.5.       Model Post-Audit

       The practice of model post-auditing is defined as the ongoing observation of the response
of the system to the actual implementation of a policy or management action relative to the
model's forecast of how that system would respond, and is crucial to the ongoing improvement
of environmental models. The Panel recommends that the Draft Guidance acknowledge the
value of post-auditing of models and associated data collection. This practice deserves a
section of its own in the model application area (note the addition of a reference to post-
auditing in Alternative Figure 1).  That section should also discuss the role of regulatory
modeling in adaptive management of environmental systems.


          1.6        Document Organization

       The Panel believes that there are best practices for the development of a generic model
framework (for example, WASP, QUAL2E, and AQUATOX); however, most users of the Draft
Guidance will not be model developers. Therefore, the document should contain additional best
practices that should be followed for a site-specific or problem-specific implementation of a
model framework.  In order to clarify the guiding principles that should be  considered for
each type of project, the Panel recommends that the Agency consider organizing the  Draft
Guidance according to the steps involved in carrying out a modeling project from inception
to completion as indicated in Alternative Figure 1.

       The Panel identifies the steps in Alternative Figure 1 to be: Problem Specification; Model
Identification/Selection (the document should recognize that a site-specific modeling project may
be conducted by either new model construction or by selection of an existing model framework);
Model Development (including problem-  and site-specific model conceptualization, model
formulation and configuration, and model calibration); Model Evaluation (through peer review,
data quality assessment, model code verification, model confirmation/corroboration, sensitivity
analysis, and uncertainty analysis); Model Application (including diagnostic  analysis,11 problem
11 Diagnostic use of models has great value for both model evaluation and problem-specific application. For
example, plotting the cumulative distribution of observations of a state variable on the same plot as the cumulative
distribution of model computation of that state variable on the same spatial and temporal scale is valuable for
identifying whether the model is biased at high or low concentrations. As another example, development of a model
mass balance diagram of a given state variable over appropriately chosen space and time scales (e.g., whole lake
water column over the course of a year) is useful for identifying significant mass flow pathways, for addressing
specific management questions, and for helping to guide monitoring programs.
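A simple sketch of the first diagnostic described above is given below; the observed and modeled values are
fabricated placeholders used only to show the mechanics of comparing empirical cumulative distributions of a
state variable on a common scale.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Placeholder "observed" and "modeled" concentrations for one state variable,
# matched in space and time (illustrative values only, not real data).
observed = rng.lognormal(mean=1.0, sigma=0.5, size=200)
modeled = rng.lognormal(mean=1.1, sigma=0.4, size=200)

# Compare the two empirical cumulative distributions at common quantiles.
quantiles = np.linspace(0.05, 0.95, 19)
obs_q = np.quantile(observed, quantiles)
mod_q = np.quantile(modeled, quantiles)

# If the model were unbiased across the range, the paired quantiles would
# track each other; systematic divergence at the upper or lower quantiles
# points to bias at high or low concentrations.
for q, o, m in zip(quantiles, obs_q, mod_q):
    print(f"quantile {q:.2f}: observed {o:6.2f}   modeled {m:6.2f}   ratio {m / o:4.2f}")
```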
                                            11

-------
solution, and application support for decision-making); and, after implementation of a regulatory
action, Model Post-Audit. These activities should be covered in a QAPP for any given modeling
project. Furthermore, the entire Modeling Process  should be detailed in a report that includes
documentation of all of the above steps in the process.
       [Alternative Figure 1. The Panel's suggested alternative to Figure 1 of the Draft Guidance: the figure
links the public policy process, environmental controls, and the environmental model development and
application process, and places the interactions of stakeholders (e.g., the public, environmental and industry
groups, courts, and government agencies) at the center.]
  On the left side, a few additional elements of the "public policy process" are suggested to clarify the stages of
modeling that occur after a decision is made including the use of monitoring programs and post audit reviews of the
outcome of previous or new regulatory actions to support model improvement. Alternate Figure 1 also expands on
the role of models in supporting regulatory decisions, identifying needed environmental controls, and implementing
these controls through enforcement actions when necessary. In addition, the centralized interactive role of all types
of stakeholders is emphasized.  These stakeholders include source facility owners and other responsible parties,
neighboring property owners and other directly affected members of the public, courts and interested government
agencies or related entities, and advocacy groups representing various environmental, industry, or trade
organizations.
                                                  12

-------
        The expanded format for the right side of the diagram illustrating the Environmental Model Development
and Application Process also maintains the same basic logic of the original EPA Figure 1, but the individual steps
have been expanded somewhat, including details for problem specification, model selection, model calibration, and
uncertainty analysis to represent the recommendations of the Panel.

        Finally, the added emphasis provided by the addition of the continuous "Model Review Process"
emphasizes the strong support of the Panel for the processes already occurring in much of the REM development
program. The Panel commends the continued and expanded application of this model review process to the
further development of the Models Knowledge Base.
                                                  13

-------
                             2.     GOALS AND METHODS

Charge Question 2:  Has EPA sufficiently and appropriately described the goals and methods,
and in adequate detail, such that the guidance serves as a practical, relevant, and useful tool for
model developers and users? If not, what else would you recommend to achieve these ends?


          2.1.       Introduction

       The general goals of the Draft Guidance are clearly stated (page 6), i.e., to provide
guidance on how to assess the quality of regulatory environmental modeling. The assessment is
to be made on the basis of a number of "performance criteria" or "specifications" (page 3) that
characterize the three major components of regulatory environmental modeling; namely (1)
model development, (2) model evaluation, and (3) model application.  The Draft Guidance
provides specific (and alternative) methods by which the performance criteria for each of these
three components may be assessed.

       The Panel agrees that the Draft Guidance is an excellent start to defining the
process of and providing the measurement tools for quality assurance in regulatory
environmental  modeling. Furthermore, the Panel makes particular note of the critical
importance of problem specification at the beginning of any modeling project. Problem
specification supplies the modeling objectives and constraints that thereafter guide
implementation of the modeling steps described in the Draft Guidance.


          2.2.       Intended Audience and Scope of Use

       The Draft Guidance identifies the intended audience as being composed of two general
categories: model developers and model users. Upon closer reading, however, other important
modeling constituencies are explicitly or implicitly identified, each with distinctly different roles
in the modeling process, leading the Panel to conclude that the term "model user" is overly broad
and imprecise. For this reason, the Panel believes the Draft Guidance should elaborate on the
distinction between the model users who generate model output (those who set up, parameterize,
run, calibrate, etc., particularly with model framework software such as WASP or QUAL2E), and
those who are managers and are principally users of model output. They are both users, but play
different roles in regulatory environmental modeling, and as such are likely to use this Draft
Guidance to assess different quality criteria. It would also help to clarify the intent of the Draft
Guidance and its relationship to its different regulatory audiences (at least 2 groups): regulatory
decision makers, and regional and state "assessors"/advisors for permit applicants.  Panel
discussions also suggested including other stakeholders in this audience, e.g., those to whom the
results will apply or affect. For less experienced audiences, the Draft Guidance may be
insufficiently explanatory.  The Panel recommends that the Agency clarify the use of the
                                           14

-------
Draft Guidance for the variety of intended audiences and suggests that the Agency identify
which sections will be most useful to the various stakeholders in a modeling project.

       A general concern about the overall Draft Guidance is its scope of use.  The Panel finds
that it provides a valuable resource to modelers in a wide range of disciplines, but unlike typical
EPA guidance documents, it does not lay out a step-by-step course of action. Instead, it
identifies a set of key "best practices" which should be adhered to, along with supporting
materials. Because this Draft Guidance differs in scope and content from other "guidance,"
and because the term "guidance" has specific connotations in certain areas of model
application, the Panel suggests that EPA consider using a term such as "guiding principles"
instead of "guidance," both in the body of the Draft Guidance and in its title.

       A second general issue related to the scope of the Draft Guidance is that much of the
introductory parts of the Draft Guidance refer exclusively to regulatory applications of models,
yet it is clear that the intent of the REM process is to bring consistency to all environmental
applications of models (e.g., regulatory support, research, resource assessment, evaluating
alternative management actions, economic evaluations, etc.). Therefore, the Panel
recommends that the Draft Guidance, including its stated purpose, be revised to reflect
these additional uses.

       A final issue regarding scope concerns the types of models to which the Draft Guidance
is intended to apply. The executive summary states "this Guidance provides recommendations
for environmental models drawn from Agency white papers, EPA's Science Advisory Board,
and peer-reviewed literature."  The Panel presumes that the intended application is to a broad
range of models. However, this intention (if correct) is not clearly  articulated in the "Scope of
Guidance" in the Introduction to the Draft Guidance, nor are the classes of models (i.e.,
economic, behavioral, physical, engineering design, health, ecological, and fate/transport
models) explicitly identified. This concern is particularly apparent in the Models Knowledge
Base (see also CQ5), where much of the information elicited is highly focused on models for
pollutant fate, transport, exposure, and effects.  Models that address economic activity, behavior,
and emissions  are differentiated by other key criteria, including whether they predict at the level
of the individual, household, firm, sector, region, or national or global economy; whether they
are normative (predicting how people should behave under various assumptions of rationality
and information) or descriptive (reporting how people actually do behave); and whether they
address the costs or benefits of environmental regulations.

       Clearly the Draft Guidance is primarily intended to address regulatory environmental
models, particularly those models used for policy analysis, national regulatory decision-making,
and implementation applications. However, it should also be noted that it applies equally to a far
broader category of models than its original targeted audience, and hence most  of the Draft
Guidance is expected to be useful for other modeling audiences as well.

       According to the EPA's CREM home page, "The Models Knowledge Base is intended to
be a living demonstration of the recommendations from the Guidance for Environmental Models.

                                           15

-------
In this way, these two products work in tandem to describe and document good modeling
practices." In pursuit of this goal, the Panel recommends that the Draft Guidance clearly
articulate the broad range of model types to which it is to apply earlier in the document,
and ensure that the guiding principles for problem specification, model development,
model evaluation, and model application reflect this diversity of types.


          2.3.       Glossary

       One of the keys to a workable Draft Guidance for quality assurance in environmental
modeling is that the various modeling constituencies share a common language and definition of
key ideas and terms. The Panel believes the Agency has made a commendable effort in
attempting to establish a common vocabulary for the purpose of environmental modeling. The
glossary is an excellent component of this Draft Guidance for providing the basis of that shared
understanding.

       However, there is room for improvement and a need for consistency, not only in the
glossary, but also in the text.  For example, some of the terminology and definitions are subject
to multiple interpretations, which is to be expected for a document that combines vocabularies
from a variety of fields. The Panel notes that the Draft Guidance's use of certain terms, e.g.
"guidance," as described in the preceding section, is at times at variance with past definitions,
including some of the Agency's own previous modeling documents, many of which are cited in
the  references. The Agency should clarify the Draft Guidance's use of terminology and
definitions that may not always be used consistently.

       The current terminology used to describe the graded approach needs to be clarified.  For
example, "managerial controls" should be replaced with a more generic terms such as "level of
effort" or "allocation of resources." Another problematic area is the potentially misleading or
overly generalized use of common statistical terms such as "reliability" and "sampling errors."
Where the Agency's use of terms is intentionally different from prior or accepted use, they
should be noted as such, and a brief, appropriate rationale should be provided.

       The Panel suggests that the Glossary be expanded to include more terms to make it as
comprehensive as possible. Some key terms that should be added are: "validation" (add a note:
see  model validation), "documentation," "user manual," "proprietary models," "secondary
applications," "flow chart (code)," etc.  Some panel members questioned whether the glossary
definitions are the consensus of those in the Agency, or in the modeling community, or both?
For example, "corroboration" is an interesting and appealing substitute for "validation," but one
that is not yet widely used in practice. Appendix A contains specific suggestions for enhancing
the  utility of the Glossary.

          2.4.       Model Documentation, Project Documentation, and User Manual

       A variety of types and levels of "documentation" are required for a successful modeling
project. The Draft Guidance discusses model documentation only in the model application
component, i.e., as comprehensive project documentation to address "transparency" issues (see the
box "Recommended Elements for Model Documentation" in Section 4: Model Application, on
page 26 of the Draft Guidance). However, there is a need for model documentation during
development, especially for complex modeling frameworks. In addition, the Draft Guidance
makes no mention of the need for an adequate user manual (or user guide) for the "analyst"
group of model users. It is unclear whether this is assumed to be part of the overall modeling
project documentation.  Some Panel members believe it is separate and distinct from model
project documentation, and is essential.

       In addition to the items already included in the box on page 26 of the Draft Guidance, the
Panel believes it is important to note the need for documentation of choices made during model
development, and for a model user manual.

                              3.     GRADED APPROACH

Charge Question 3: Has EPA sufficiently and appropriately proposed a graded approach, such
that users of the guidance can determine the appropriate level of evaluation for a particular
model use? If there are deficiencies in the proposed approach, what would you recommend to
correct it, and why?


          3.1.       Definition of "Graded Approach"

       The concept of a "graded approach" is implicit throughout the Draft Guidance, as it
should be. Usually "graded" is expressed implicitly through the use of the descriptor
"appropriate." The term "graded approach" first appears under "Model Evaluation" (introduced
on page 18). However, the graded concept applies to all phases of modeling—development,
evaluation and application—not just evaluation. The Panel believes that the concept of a
graded approach should be introduced earlier in the document, before the discussion of model
development, as an example of the overarching concepts that apply to all of the modeling stages.
Accordingly, the Panel recommends that the material on the graded approach be modified
to reflect that model development, evaluation and application should always be conducted
using a graded  approach that is adequate and appropriate to the decision at hand,  as
determined by the Problem Specification process described in the Panel discussion of
Charge Question #1. This introduction should then be followed by a brief  discussion of how
"graded" applies throughout the modeling process. For example, in the context of model
development,  "graded" refers to the extent to which existing models are modified to fit the
problem specification or that screening models are used where appropriate,  instead of more
complex models.


          3.2.       Modeling Complexity and Associated Evaluation Needs

       The scope (i.e., spatial, temporal and process detail) of models that can be used for a
particular application can range from the simplest models to the very complex depending on the
problem specification and data availability, among other factors. In addition to providing some
additional comment on where the model continuum starts (i.e., what is the simplest model to be
considered in  the Draft Guidance or in the MKB), the Draft Guidance needs to comment in more
detail on the level of evaluation or "grade" of evaluation that might be appropriate for models of
varying degrees of complexity.  Currently, the discussion on page 18 dealing with the graded
approach to evaluation is brief and the discussion of model complexity on page 11 only  touches
on evaluation complexity. In addition to the example of a "screening test" noted as a case where
less rigorous model evaluation is required, examples of more complex situations should also be
addressed in order to clarify the  extended scope of evaluation that may be needed in different
cases.

       The Draft Guidance also needs to alert the reader that external circumstances can affect
the rigor required in model evaluation. For example, in cases where the likely result of the
modeling will be costly control strategies, court actions, or alienation of some sectors of the
population, detailed model evaluation may be necessary. In those cases, all aspects of the
modeling will come under close scrutiny, and it is incumbent upon the modeler to probe deeply
into the model's inner workings (sometimes called "process analysis") to support subsequent
regulatory decisions. This level of deeper model evaluation also would be appropriate when
modeling unique or extreme situations not previously encountered.

       The draft document should also note that gradation in evaluation can apply within
complex model applications. For example, in modeling urban air quality, most areas use a
regional modeling domain nested to provide higher resolution over the region of primary interest
(e.g., Amar et al., 2004). Clearly the most intensive performance evaluation should be directed
towards the object of the modeling (the "fine grid"), but at least some level of evaluation should
be applied to more  distant areas (the "coarse grid"). The Panel finds that the Draft Guidance
acknowledges the  scope and complexity of the models being used, but recommends that it
provide more examples of appropriate evaluation steps for different models and model
systems (i.e., combinations of models linked to address a particular issue) and for their
particular applications.  The Panel recommends that the Draft Guidance broaden the
discussion of the graded evaluation approach to discuss  evaluation requirements for
additional circumstances such as using models in potentially litigious applications or in
unfamiliar or unique situations.

       Model evaluation in almost every situation involves expert judgment, examination of
model output under changes in key driving variables, intercomparison with other similar models,
sensitivity and uncertainty analysis, and comparison with observational data. The
Draft Guidance needs to discuss the appropriateness of using the more qualitative evaluation
steps such as expert judgment to "screen" the model performance and application
appropriateness (i.e., how well does the numerical model agree with the conceptual model under
current and scenario conditions) before launching into more formal and complex, or higher
grade, intercomparisons with observations  or sensitivity analyses. In addition, the Draft
Guidance should offer examples of some particular practical methods, complementary to
evaluation (e.g., use of relative reduction factors and ensemble modeling) that can be used to
address uncertainty in the decision-making process.


          3.3.       Evaluating Model Response

       The Draft Guidance provides a comprehensive discussion of methods for evaluating a
model's performance in terms  of its ability to replicate historical situations. However, in
regulatory applications the most important feature of a model usually is its response to changes
in its input (e.g., response to growth and/or control of emissions). Aside from a discussion of
post-audit, the guidance provides little direction for model users to evaluate whether a model will
respond correctly to changes in critical  inputs. Certainly a solid performance evaluation of how
well the model replicates historical events, including analyses of the model's processes as well as
its predictions, is an important component of evaluating its response. However, additional
analyses focused on evaluating the performance of model response should also be conducted
when the goal of the modeling is to predict a future state under expected or hypothesized changes
to inputs.

       EPA provides a good discussion on evaluating model response in its recently-released
draft final Guidance on the Use of Models and Other Analyses in Attainment Demonstrations for
the 8-hour Ozone NAAQS [U.S. EPA, 2005]. Recommended techniques include retrospective
analyses (similar to post-audit), use of various probing tools, comparison to observation-based
models, and conducting sensitivity analyses for both the base and predictive cases using a variety
of assumptions (a detailed discussion of these techniques is beyond the scope of this review).
The Panel recommends that the guidance be expanded to specifically discuss evaluation of
model response, and to include suggested techniques such as those provided in [U.S. EPA,
2005].


          3.4.       Use of Multiple and Linked Models

       Many environmental problems require use of multiple models, with the models often linking
together and interacting to varying degrees. For example, air quality modeling often links
meteorological, emissions, and air chemistry/transport models. Integrated assessments that attempt to
evaluate the multiple, interdependent benefits and costs of a problem, such as the overall value of the
Clean Air Act (as is done in EPA's studies under Section 812 of that act; U.S. EPA, 1997, 1999) and the
work of the Grand Canyon Visibility Transport Commission (GCVTC, 1996), require linkage of a
wide variety of atmospheric, environmental, economic, and social models.

       In cases in which multiple models are linked together to address a particularly complex
issue, each model needs to be evaluated individually to assure that the model  is being used
within its proper domain and that it is performing properly in the context of the integrated
assessment. In addition, evaluation of the full modeling  system needs to take place to make sure
that the overall analysis is adequate and appropriate for the application. Just because individual
modeling components are behaving properly does not necessarily mean that the fully linked
system will provide authentic overall analyses. When using such a system of models, it is
essential to beware of compensating errors, which can lead to "getting the right answer for the
wrong reason."

       A classic example of compensating errors occurs in air quality modeling applications,
where emission rates of pollutants are developed using an emissions model and meteorological
parameters are generated with a meteorological model. Pollutant concentrations are then
simulated using a dispersion model, using as inputs the emissions and meteorological model
outputs. Modeled wind speeds that are too slow will lead to over-prediction of pollutant
concentrations by the dispersion model, while modeled emission rates that are too low will lead
to under-prediction of pollutant concentrations. These errors  can be mutually  offsetting,
producing modeled pollutant concentrations that meet accepted performance standards.
However, such fundamental flaws in a model's formulation will likely cause the modeling system
to respond incorrectly to changes in the inputs (e.g., application of emission controls).
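
       As a purely illustrative sketch of this effect, the following Python calculation (with
hypothetical values and a deliberately simplified dilution relationship, not any particular EPA
model) shows how two offsetting input biases can yield good base-case performance while
distorting the predicted response to an emission control:

    # Hypothetical illustration of compensating errors in a linked modeling system.
    # Assume concentration scales as C = k * Q / u (emission rate over wind speed).
    k = 2.0                                   # assumed dispersion factor

    # "True" conditions: two source categories (60 + 40) and wind speed 4.0
    C_true_base = k * (60.0 + 40.0) / 4.0     # 50.0

    # Linked models: emissions model halves category B (60 + 20); winds 20% slow
    C_model_base = k * (60.0 + 20.0) / 3.2    # 50.0 -- base case agrees despite two errors

    # A control that eliminates category B exposes the compensation:
    C_true_ctrl = k * 60.0 / 4.0              # 30.0 (a 40% reduction)
    C_model_ctrl = k * 60.0 / 3.2             # 37.5 (only a 25% reduction predicted)
    print(C_true_base, C_model_base, C_true_ctrl, C_model_ctrl)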

       The Panel recommends that the Draft Guidance acknowledge that many
applications require the linkage of multiple models and that this linkage has implications
for assessing uncertainty and applying the suite of models. Each component model as well
as the full system of integrated models needs to be evaluated for a given application.


          3.5.       Use of Model-Derived Data

       The Panel commends the Agency for recognizing that the definition of data includes
data sets generated from modeling exercises as well as from the literature and existing
databases. However, the guidance also needs to clearly discuss treatment of uncertainty
associated with the application of these diverse model-generated data as well as data sets
derived directly from observations.

       Data derived from modeling analysis that are then used for another modeling application
also must be evaluated for uncertainties, caveats, and limitations in applicability. The evaluation
then must be carried with the data throughout their future uses. One example of this need for
propagation of data uncertainties and limitations is the use of emission inventories in regional air
quality modeling. The emission inventories often are the result of complex data collection,
analysis and emissions modeling. The inherent uncertainties in the emissions data and the
emissions modeling need to be quantified in some manner. Model users must recognize that the use of
data as input for the next phase of modeling carries uncertainties, thereby affecting the next
modeling steps. Sometimes, these uncertainties can be treated explicitly and quantitatively, but at
other times the uncertainties can only be acknowledged qualitatively. Regardless, the
uncertainties need to be noted and considered throughout the modeling system. This complex
relationship between data and models needs to be discussed in the Draft Guidance.

               4.     PRACTICAL ADVICE FOR DECISION-MAKERS

Charge Question 4: Has EPA sufficiently and appropriately provided practicable advice for
decision-makers who must deal with the uncertainty inherent in environmental models and their
application?  What additional advice should EPA consider in dealing with uncertainty, and why?
A number of researchers recommend a Bayesian approach to help decision-makers incorporate
uncertainty into their decisions and to do so in a transparent fashion. Is the use of methods such
as Bayesian networks an effective and practicable way for EPA decision-makers to incorporate
uncertainty within their decisions and to communicate this uncertainty to stakeholders? If so,
how? Are there alternative methods available?


          4.1.       General Comments on Uncertainty

       Experience suggests that shifts toward new, more informative, but potentially more
complex, quantitative uncertainty assessment (QUA) methods inevitably present decision makers
with challenges. A greater knowledge of uncertainty, absent an equally sophisticated framework
for decision-making and communication, may only increase management challenges. More
sophisticated  QUA techniques do not automatically create more sophisticated regulatory
decision-making. Thus the effective incorporation of uncertainty in decisions by decision
makers, and the acceptance of these decisions by stakeholders, will not be accomplished with
different or ever more elaborate QUA tools alone.

       Specific methods for performing sensitivity and uncertainty analysis are discussed in
Section C.5 and Section C.6, respectively,  of the Draft Guidance. The guidance appropriately
recommends a sequential approach to evaluating the sensitivity of the model to its components
and boundary values, to be followed by more in-depth investigation of components and potential
interactions that prove to exert the greatest influence on the variability of model outcomes. This
is a sound recommendation for developing an understanding of sensitivity in complex models
with many factors and many possible interaction effects among those factors. In addition to the
work by Saltelli et al. (2000) cited in the report, other authors have proposed experimental test
frameworks (Kleijnen, 2005) for formally  examining sensitivity to individual effects and
interactions in multi-parameter models. The matrix of statistical methods in Section C.5.7
provides a convenient comparison of the strengths and weaknesses of a progressively more
complex set of approaches to sensitivity analysis.
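
       A minimal Python sketch of such a first-pass, one-at-a-time screening (the toy response
function and parameter ranges below are assumed purely for illustration and are not drawn from
the Draft Guidance) is:

    import numpy as np

    # One-at-a-time screening: perturb each input over its plausible range while
    # holding the others at nominal values; inputs with the largest output swings
    # are candidates for more formal variance-based or experimental-design analysis.
    def response(x):
        emis, decay, mix_height = x
        return emis * np.exp(-decay) / mix_height      # toy model, assumed form

    nominal = np.array([100.0, 0.3, 800.0])
    ranges = {"emissions": (70.0, 130.0),              # assumed plausible bounds
              "decay rate": (0.1, 0.5),
              "mixing height": (500.0, 1200.0)}

    for i, (name, (lo, hi)) in enumerate(ranges.items()):
        x_lo, x_hi = nominal.copy(), nominal.copy()
        x_lo[i], x_hi[i] = lo, hi
        print(f"{name:14s} output swing = {abs(response(x_hi) - response(x_lo)):.3f}")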

       The merits of various methods for QUA have been discussed, debated, enthused over,
and at times derided, including everything from simple bounding analyses through 1-D and 2-D
Monte Carlo analyses, to Bayesian techniques. However, the REM Guidance should remind
readers that incorporation of uncertainty into decisions is not just a function of finding the right
mathematical or modeling QUA "tool." Because scientists and researchers are often more
comfortable focusing on the "hard science" of models/tools than on the "soft science" that
governs the decision-making process, too little attention is given to problem formulation (in
its fullest meaning), risk communication, or the perspective of decision makers (Thompson and
Bloom, 2000). The Panel cautions that searching for the "right" modeling tool (or uncertainty
analysis) may miss the point, namely, that models for regulatory purposes are a service to
decision makers, and are not intended as a substitute for the hard task of selecting the "right"
answer.

       Before deciding on a QUA tool, it is incumbent on the modeler to seek input from
decision makers and stakeholders as to how and to what extent they may  accommodate
uncertainty in their regulatory decisions. To a scientist, expressing and quantifying uncertainty is
a good thing. But the single point value has a long history of use in regulatory decision-making.
Asking decision makers and stakeholders how they view scientific uncertainty, how they would
like to see it expressed, and how they see it being used in the decision-making process is the
necessary precursor to effective and transparent use of any QUA method. In short:

    a)  How much discretion does the decision maker have in addressing uncertainty? During
        policy development, or for an action not directly governed by statute or rule, the decision
        maker may have considerable leeway to do so. Once a statute or rule is in place, there may
        be much less or no such leeway. Procedural regulations seem particularly resistant to the
        incorporation of uncertainty. Many regulations work with reference to a fixed point (a
        "bright-line" standard) and, despite an awareness that uncertainty exists in where this
        "fixed" point is actually located, decisions are simply based on whether or not the outcome
        is above or below that value.

    b)  How will stakeholders react to knowledge of uncertainty, and how will this reaction shape
        the decision-making process? To a stakeholder, expressions of uncertainty can be
        interpreted to mean that experts "don't know," or can imply inadequate effort,
        incompetence, or a lack of credibility on the part of the responsible party, which undercuts
        support for regulatory decisions. Knowledge of uncertainty also allows opposing interests
        in a regulatory decision to focus on the highest or lowest value, regardless of its
        probability. Because there are often significant costs associated with choosing one
        specific value over another, arguments can erupt over differences in values that are,
        because of "uncertainty," statistically indistinguishable.

       The definition of the term "uncertainty" has been a source of considerable confusion in
EPA documents and discussions of models used in environmental risk assessment. The REM
Draft Guidance attempts to clarify the use of the term by: 1) identifying types of uncertainty
(model, data, application niche) in Section 3.1.3.1; 2)  distinguishing uncertainty from natural
variability in model inputs and parameters for different modeling applications; and 3) defining
uncertainty analysis (parameters) as distinct from sensitivity analysis (model form and
importance of model factors).

       The Panel recommends that the Agency more clearly identify and discuss the various
sources of uncertainty in model application, including:

   a)  Stochastic variability, over space, time, and/or from individual to individual.
       Uncertainty arises from incomplete or improper representation of stochastic variability
       and the associated uncertainty in future system outcomes (e.g., of weather);

   b)  Model (structure) uncertainty, including errors due to missing or improperly
       formulated process equations, inadequate spatial or temporal resolution, and incorrect
       model use;

   c)  Model input uncertainty, resulting from data measurement errors, inconsistencies
       between measured values and those used by the model (e.g., in their level of
       aggregation/averaging), and parameter value uncertainty; and

   d)  Scenario uncertainty, resulting from incomplete knowledge of current or future
       economic, regulatory, or physical conditions for which the model is applied.

       In addition to identifying sources of uncertainty, the Guidance should also discuss the
implications of propagating uncertainties within model frameworks where models use the output
of one model as input to another, or where model frameworks are assemblages of individual
models.
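
       A minimal Monte Carlo sketch of such propagation, in which the uncertain output of one
model is carried forward as an uncertain input to the next rather than being collapsed to a single
"best estimate," is shown below (all distributions and values are assumed purely for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    # Stage 1: an "emissions model" whose output is uncertain (lognormal spread
    # standing in for data and model-structure uncertainty; values are assumed).
    emissions = rng.lognormal(mean=np.log(100.0), sigma=0.25, size=n)

    # Stage 2: a downstream model that takes the stage-1 output as input and adds
    # its own uncertain dilution parameter.
    dilution = rng.normal(loc=4.0, scale=0.5, size=n)
    concentration = emissions / dilution

    # The downstream result now carries both stages of uncertainty.
    print(np.median(concentration), np.percentile(concentration, [5, 95]))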

       The Guidance provides some useful but overly brief advice (Guidance §4.1.2) on how this
uncertainty might be effectively  communicated to decision makers and stakeholders. Much  more
emphasis must be placed on performing a robust and iterative problem formulation with
modelers, decision makers, and stakeholders and on correctly conveying model results using
non-technical, non-quantitative, and non-condescending communication techniques.

       Any transparency  of QUA methods is only possible if decision makers and stakeholders
are engaged early on by inclusive, effective communication and outreach strategies. The Panel
recommends that the REM Guidance strongly advise modelers to begin model development
or use only after they have obtained an awareness  of how a decision maker plans to use the
information on uncertainty that they will be providing. This is an important component of
the Problem Specification as well.


          4.2.       Sensitivity Analysis vis-a-vis Uncertainty Analysis

       Section C.5 would benefit from improved clarity in the distinction between sensitivity
and uncertainty analysis. For example, in Section C.5.1, the REM guidance obscures the
distinction between the goals of sensitivity analysis and uncertainty analysis, where it states
".. .the distinction between these two related disciplines may be irrelevant" (p. 50). While the
Panel agrees that the two are interrelated and sometimes confused, the distinction should be
clarified in the guidance.

       Sensitivity analysis is an examination of the overall model response to a perturbation of
model inputs. The analysis thus can be used to  inform model users, decision makers and
stakeholders on where to focus the most resources in terms of developing a better understanding
and characterization of the uncertainties for particular components of the model identified as
"most sensitive" to perturbations of underlying model parameters. Rather than perpetuating any
possible confusion between the focus or goal of these two analyses, the REM guidance should be
more transparent in describing the purpose of each, their interrelationship, and the distinction
between them. For example, the discussion in Section C.5.5 relating to Monte Carlo analysis
currently reads more like a discussion of uncertainty analysis, rather than sensitivity analysis.

       As noted in Cullen and Small (2004), sensitivity analysis is an important adjunct of
uncertainty analysis, determining the impact of particular model inputs and assumptions on the
estimated risk. Sensitivity analysis is often conducted as a precursor to uncertainty analysis,
helping to identify those model assumptions or inputs that are important. If the model outcome is
not sensitive to a particular input or set of inputs, there is no need to examine these inputs as part
of a more sophisticated uncertainty analysis. Sensitivity analysis is revisited in the subsequent
phases of an uncertainty analysis to identify those inputs and assumptions that are significant
contributors to the overall variance of the  output and/or critical to pending decisions (for an
example of the latter, see Merz et al., 1992), thereby identifying the uncertainties that matter. In
this manner, priorities can be established for further research and data collection efforts.
Therefore, the Panel recommends that the guidelines articulate a more tangible set of
alternatives for addressing model sensitivity/uncertainty. In particular, recommendations
for uncertainty  analysis should identify the need to focus resources on those processes to
which the model state variables are most sensitive and, in addition, are less certain in terms
of their formulation and/or parameterization.


          4.3.       Uncertainty Analysis Practices/Methods (REM Guidance Section C.6)

       Section C.6 of the Draft Guidance on uncertainty analysis is incomplete in relation to the
coverage given to sensitivity analysis in Section C.5. Returning to the discussion of types of
uncertainty in Section 3.1.3.1, this section tries to address the "niche uncertainty" under the label
of model suitability and "data uncertainty" through a weakly defined discussion of frequentist
and Bayesian interpretations of probability. Unlike the rather detailed  discussion of methods for
corroboration and model sensitivity analysis, there is little true guidance on how to evaluate
uncertainty in model parameters and the effect of this uncertainty in decision-making based on
model outcomes.

       The current Draft Guidance touches on the notion of a Bayesian framework and the use
of prior knowledge and expert advice to reflect uncertainty in the model inputs (including
parameter values). However, it does not distinguish carefully between Bayesian estimation of
posterior distributions and associated inferences, and decision-theoretic approaches that
incorporate explicit loss functions for certain errors in inference. It would be very useful to have
a "Box" example of an uncertainty analysis in which there is an established prior for an
"uncertain" model parameter, a likelihood for the input data, and an updated posterior
distribution for that parameter. Thus, the Panel recommends that the
REM Guidance (and MKB) provide more practicable information through inclusion of
"case study" examples of where and how EPA is currently incorporating uncertainty
analysis in environmental models as an integral component of decision-making. In
addition, the Panel recommends that Section C.6 be enriched to a level comparable to that
of Section C.5 on sensitivity.
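
       As a minimal sketch of the kind of "Box" example suggested above, the following
conjugate normal-normal update in Python combines an expert-judgment prior with a small set of
new observations; all numerical values are assumed purely for illustration:

    import numpy as np

    prior_mean, prior_sd = 2.0, 1.0          # assumed expert-judgment prior for the parameter
    obs = np.array([2.8, 3.1, 2.6, 3.0])     # hypothetical new measurements
    obs_sd = 0.5                             # assumed (known) measurement error

    prior_prec = 1.0 / prior_sd**2           # precisions add in the conjugate update
    data_prec = obs.size / obs_sd**2
    post_prec = prior_prec + data_prec

    post_mean = (prior_prec * prior_mean + data_prec * obs.mean()) / post_prec
    post_sd = post_prec ** -0.5
    print(f"posterior mean = {post_mean:.2f}, posterior sd = {post_sd:.2f}")

    # A diffuse prior or plentiful data lets the data dominate the posterior;
    # sparse or noisy data leaves the posterior dominated by the prior.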

       The Panel agrees that Bayesian approaches are one of several candidate methods suitable
for quantifying data uncertainty in appropriate situations. Bayesian methods are certainly
appropriate for treating uncertainty in environmental modeling and may be particularly effective
in modeling applications where empirical data on the distribution of model parameters in real
applications are sparse and expert judgment may provide the most realistic assessment of the
prior distributions. A Bayesian treatment of a simple model application or a more complex
model with a network of dependencies (conditional relationships) is a theoretically appealing
approach to incorporate prior uncertainty into posterior distributions of model outcomes (e.g.
exposures, concentrations, expenditures, morbidity, mortality, etc.). Current software and
iterative estimation algorithms have removed many of the computational barriers that once stood
in the way of Bayesian treatment of a model application. Yet the removal of computational
barriers does not eliminate the need for a solid understanding of the scientific basis for the model
and in fact may require a heightened understanding (subjective, expert knowledge) of the prior
distributions of parameters. Furthermore, adoption of Bayesian uncertainty analysis methods
does not reduce the importance of sensitivity analysis to establish the importance of the model
components and their interactions. The effectiveness of the Bayesian approach will be greatest
when information on the prior distributions is accurate and new data to support the model
application are plentiful. If the prior information is weak or uninformative or the amount of new
data available for model parameter estimation is large, the model results will be dominated by
the new data.  If the new data inputs to the model are weak, the posterior distributions for outputs
will be dominated by the prior distribution assumptions.

       The Panel endorses the recognition that QUA should be an inherent consideration
when using models to support regulatory decisions. Yet, given the enormous breadth of
modeling paradigms (spatial  and temporal  scope and degree of complexity), the Panel remains
cautious in its recommendations regarding specific methods of QUA (e.g., "frequentist" vs.
Bayesian as suggested in the  charge question). The nature and complexity of any particular
model, its application within  a particular regulatory program, availability  of data and resources,
etc. will all influence the choice of QUA that is appropriate.  Thus, as with all other aspects of
modeling, a graded approach is warranted for conducting uncertainty analyses. In some
applications, simple sensitivity analyses may be all that is required. Analyses supporting
regulatory decisions with far-reaching impacts should use QUA tools to provide the public and
stakeholder community with a greater appreciation for the uncertainty range in the model output
decision variables that ultimately define regulatory decision points.

          4.4.       Value of Information - Identifying "Uncertainties that Matter"

       After identifying model inputs and assumptions that contribute significantly to variance
in the output, it is necessary to consider how to use this knowledge (Cullen and Small, 2005).
Value of Information (VOI) techniques seek to identify situations in which the cost of reducing
uncertainty is outweighed by the expected benefit of the reduction. In short, VOI is helpful in
identifying model inputs that are significant because: a) they contribute significantly to variance
in the output, and b) they change the relative desirability of the available alternatives in the
decision under consideration. The Panel recommends that the REM Guidance acknowledge
the potential utility of VOI techniques available to assess the importance of the variability
and uncertainty contributed by individual inputs to the expected value (or conversely, the
"loss") associated with a decision under uncertainty (Raiffa, 1968; Morgan and Henri on,
1990; Finkel and Evans, 1987; Massmann, et al., 1991; Dakins et al., 1996; Yokota and
Thompson, 2004).
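
       As a minimal sketch of a VOI calculation (the expected value of perfect information,
EVPI, for a two-action decision; the probabilities and losses below are hypothetical):

    # Uncertain state: is the true exposure "high" or "low"?
    probs = {"high": 0.3, "low": 0.7}

    # Loss (cost) of each action under each state, in arbitrary units.
    loss = {("regulate", "high"): 10, ("regulate", "low"): 10,    # control cost either way
            ("no action", "high"): 40, ("no action", "low"): 0}   # damages only if high

    actions = ("regulate", "no action")
    states = ("high", "low")

    # Best expected loss acting now, under the current uncertainty.
    best_now = min(sum(probs[s] * loss[(a, s)] for s in states) for a in actions)   # 10

    # Expected loss if the true state were known before acting.
    with_info = sum(probs[s] * min(loss[(a, s)] for a in actions) for s in states)  # 3

    evpi = best_now - with_info   # 7: an upper bound on what reducing the uncertainty is worth
    print(best_now, with_info, evpi)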

       While the Panel understands that the REM guidance is not intended to be prescriptive in
its effort to provide an overview of QUA methods, it currently does not provide sufficient context
for an end user (e.g., a modeler within the regulatory community) to be able to determine
the level of QUA that would be appropriate within a particular context or application. The REM
Guidance might consider providing a more concrete decision framework to help guide the choice
of appropriate/available QUA methods. As a starting point, the REM Guidance should include
examples of, or references to, the nature and degree of QUA currently being implemented or
adopted within various EPA programs. For example, the Panel is aware of the extensive
uncertainty analysis that is an integral component of the 3MRA modeling system. While this one
example should not be taken as an endorsement of a particular QUA approach, the MKB would
provide one means of assembling a "library" of such examples with the nature of the QUA, the
data requirements, limitations, etc. This would provide at least some options by way of example
that model users and decision makers could turn to as a resource beyond the cited statistical
references.

       The appeal of QUA is that it can be used to provide quantitative estimates of the "degree
of confidence" when using model results as a component of regulatory decisions.  Nevertheless
the results should be presented with some caution. It might be tempting to assign a high degree
of confidence in the uncertainty analysis based on the adoption of a highly elaborate or complex
analysis. Yet, the validity  of the QUA is of course dependent on the quantity and quality of the
information available for the analysis. The choice of an appropriate QUA method (frequentist,
1-D versus 2-D Monte Carlo, Bayesian, etc.) can only be made if the intended audience of the
REM Guidance understands the data requirements and the associated level of effort to conduct
each of the various types of QUA. While the REM Guidance describes best
practices for model development/evaluation, the guidelines do not contain a similar set of "best
practices" for evaluating, presenting, and incorporating model uncertainty into decision-making.
While references cited in the REM Guidance provide an array of applicable methods to
address model uncertainty, the draft guidelines do not provide sufficient discussion,
context, and recommendations needed to provide a model user/decision-maker with
"practicable" information relating to appropriate uncertainty analysis methods and how to
convey the results of such analyses.

       The Draft Guidance should offer some practical methods that can be used to address
uncertainty within the decision-making process. For example, one is the concept of Weight-of-
Evidence (WoE), in which the model is only one (albeit an important) component in a suite of
analyses feeding into the decision framework. A second possible approach is to use the model in
a relative, rather than absolute, predictive mode. This approach uses "relative reduction factors"
multiplied by observed (measured) conditions in place of absolute predictions. In theory, such an
approach can avoid or cancel out systematic biases in the model formulation, hence reducing the
uncertainty in the predictions used for decision-making. A third example approach to dealing
with uncertainty is  the use of ensemble modeling. This approach involves running several
different models and using a composite of the results. While ensemble modeling can be very
resource-intensive, it may be worth considering for  applications or decisions involving extreme
cost or risk. These example approaches, among others, could be included in the REM
Guidance to provide decision-makers with practical examples of methods for incorporating
uncertainty in the decision framework.
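
       As a purely arithmetic sketch of the relative reduction factor approach described above
(all values below are hypothetical):

    observed_design_value = 92.0     # measured current concentration (e.g., ppb)
    model_base = 105.0               # model estimate for current conditions (biased high)
    model_future = 84.0              # model estimate under the proposed controls

    rrf = model_future / model_base                   # 0.80: relative change predicted
    projected_future = observed_design_value * rrf    # 73.6, anchored to the observation

    # A systematic bias common to both model runs largely cancels in the ratio,
    # reducing its influence on the projected future value.
    print(rrf, projected_future)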


          4.5.      Communicating Uncertainty

     Independent of the choice of particular QUA tools, the Panel recommends that the REM
Guidance provide more discussion on the importance of the manner in which results of
QUA are communicated to the decision-maker (and to public/stakeholders). Graphical
methods often serve to convey complex statistical/probabilistic results in a more understandable
manner, and the REM Guidance should consider including a range of examples in the document.
Again, the MKB may be useful as a library of such examples.

     As the analyst/modeler and the decision-maker are usually not the same individual, it is
important to accompany results with the key assumptions and caveats encompassed in the
analysis.  How can uncertainty or probabilistic results be interpreted to help identify the
uncertainties that matter most, and to point the analyst to further study or data collection
activities that can be most beneficial in reducing these critical uncertainties? As noted earlier,
often only a relatively small subset of inputs is responsible for a majority of the variance in a
model's output. Morgan and Henrion (1990), Cullen and Frey (1999) and others describe the use
of summary statistics, visual methods, regression approaches and other sensitivity analysis tools
to help find the most important input uncertainties. Broader approaches to risk communication
and methods for testing the effectiveness of alternative presentations are discussed in Finkel
(1990), Bostrom et al. (1992), Morgan et al. (1992), Fischhoff et al. (1998), Thompson and
Bloom (2000), and Cullen and Small (2005).

     The preponderance of QUA methods focus on what the REM Guidance defines as "data
uncertainty." Quantitative "model uncertainty" and  "application niche uncertainty" present
significant challenges that are rarely feasible to address. In addition, empirical or observational
data are themselves subject to uncertainty depending on the quantity and quality of those data,
and it is important to recognize these uncertainties in the context of evaluating the importance of
model uncertainties. In the case of directly observed data, there are uncertainties associated with
the measurement techniques and with the data analysis processes themselves. In the case of data
that are generated by modeling, uncertainties arise as a result of modeling analyses that produced
the data. A common example is the difficulty of comparing environmental data (collected at a
particular point in time and space) to a model prediction based on averaged conditions for a grid
cell with spatial parameters and time steps necessarily much different from the conditions under
which the measurement was made. As discussed earlier, a clear description that discusses the
main sources of uncertainty, including an indication of the types of uncertainty that are most
readily addressed, would be helpful in communicating these concepts to the reader. Therefore
the Panel recommends that the REM Guidance be clear on the types of model uncertainty
that most QUA tools address.

      These data uncertainties mean that using data to evaluate models is very much an imperfect
process. As a result, a discrepancy between observed data and model simulations does not
necessarily mean that the model is wrong or not useful. It is particularly important to communicate this
concept to decision-makers who may favor discounting modeling results if the comparisons
between observations and models are less than perfect. In addition, when analysis of data is used
in lieu of modeling results because the modeling results do not completely agree with
observations, the potential errors and/or uncertainties in the data used for the analysis must be
acknowledged. In some cases, these uncertainties may actually be more significant than the
uncertainties determined for the modeling itself.

      The complex nature of data uncertainties and modeling uncertainties needs to be carefully
communicated to decision-makers. To promote this discourse as part of the general practice
of modeling, the Panel recommends that the Draft Guidance stress the importance
of communicating model sensitivity and uncertainty both in the context of model evaluation
and when interpreting and applying model outcomes in the context of decision-making.

       5.     IDENTIFICATION AND STRUCTURE OF OPTIMAL SET OF
                            INFORMATION FOR ALL USERS

Charge Question 5:  The Panel should consider that environmental models will be used by
people whose technical sophistication will vary widely. EPA has therefore attempted to cull
information about models that broadly serve the needs of all users, using a data template to
collect this information (see Attachment D). Has EPA identified, structured and developed the
optimal set of information to request from model developers and users, i.e., the amount of
information that best minimizes the burden on information providers while maximizing the utility
derived from the information?


          5.1.       General Comments and Suggestions

       As indicated in Attachment D of the MKB (included in this report as Appendix B), the
major categories of information collected for the models in the REM Models Knowledge Base
include:

              A. General Information, regarding the model name, contact information,
                 overview, and web link;

              B. User Information, concerning technical requirements and basic instructions
                 for obtaining and using the model;

              C. Model Science, including the conceptual basis for the model and discussion
                 of evaluation steps that have been undertaken and documented for the model
                 (code verification, corroboration with observed data, sensitivity and
                 uncertainty analysis); and

              D. Model Criteria, summarizing applicable regulations and the problem
                 domain(s) addressed by the model, including types of pollutants, sources,
                 environmental media, and key fate and transport and exposure and effects
                 processes.

       The information solicited in the  current data entry sheet addresses most of the critical
elements needed by potential users to assess the overall relevance and utility of a model in the
MKB,  and does so in an effective and efficient manner. However, some additional general
subcategories of information should be added to the data entry sheet.

            A. General Information
    The general information entries for the MKB data sheet include:
       1.      Model Name (and acronym),
       2.      Model Overview/Abstract,
       3.      Contact Information, and
       4.      Model's Home Page.

This information is appropriately informative and concise, and the examples the Panel
considered in the current MKB provide useful introductions to the models.

       B.  User Information
   The user information entries include:
      1.      Technical Requirements
                a.  Computer Hardware,
                b.  Operating Systems,
                c.  Programming Languages, and
                d.  Other Requirements and Features.
      2.      Download Info (with URL)
      3.      Using the Model
                a.  Basic Model Inputs,
                b.  Basic Model Outputs,
                c.  User's Guide, and
                d.  Other User Documents.

         The information requested is useful and appropriate. Most users will not need to
know the programming language used by the model, since they will access, download, and
use an executable version of the model. Nonetheless, this information could be useful for
some users and provides helpful context for system requirements. The MKB should indicate
whether the underlying programming language(s) must be obtained or licensed for use of the
model.

         Under the "Using the Model" section of data entry, the Panel believes that it
would be useful to indicate the level of expertise, both environmental and computer,
needed to understand and use the model, and the level of user support provided for the
model by its developers, the Agency, or other sources. This information is provided for a
number of the models currently in the MKB  as part of the User's Guide or Other User
Documents fields.  Still, it would be useful to explicitly ask for this information as part of the
data entry sheet.

       C.  Model Science
   The model science categories include:

      1.      Conceptual Basis of the Model,
      2.      Scientific Detail for the Model,
      3.      Model Framework (equations  and/or algorithms), and
      4.      Model Evaluation (verification (code), corroboration (model), sensitivity
             analysis, uncertainty analysis).

         The requested information addresses many of the key elements needed to
document and assess the scientific basis for a model. However, the Panel does recommend
some modifications and additions to the list above. First, defining the Model
Framework as the 'equations and/or algorithms' for the model (as is also done in the
Model Glossary) appears counter to the usual use of the word "framework." This term
is usually associated with the broader conceptual basis for the model or (by some, see
the U.S. EPA, 2003 and in particular, EPA's Modeling QAPP Draft Guidance, page 54)
as "the model and its supporting hardware and operating system." A clearer request
for the underlying model equations and/or algorithms would be provided by using the
descriptor "Model Structure and Calculation Methods." Second, the corroboration (model)
entry under Model Evaluation should explicitly address the model's ability to predict
observed monitoring data.

         The Model Evaluation section of the Model Science entry considers many of the
key issues needed to evaluate the scientific rigor behind the underlying model development
and previous applications, and addresses many of the elements of good modeling practice
that are emphasized in the Draft Guidance. Indeed, the Panel views an important purpose of
the MKB as providing an incentive for model developers and purveyors to conduct and
openly communicate their efforts in model evaluation. From this perspective, the Panel
recommends some additional pieces of information that should be elicited and
reported, including:

      1) Documented examples of peer review for the model, including
         reviews conducted by the EPA, other agencies or panels, and
         papers presented in the peer reviewed literature. Key limitations
         and needs for improvement that were identified during these
         evaluations should be reported, and

      2) Benchmarking studies in which the model's predictions and/or its accuracy
         are compared with those of other models.

   The Panel also recommends the inclusion of a section, following Model Evaluation,
for the model developer to summarize key limitations of the model and plans or needs
for modifications and improvements. This type of self-critique would both inform users and
motivate ongoing improvement of the models in the MKB.

        D.  Model Criteria
   The model criteria elicited and reported include the major categories of:

      1.     Regulations,
      2.     Releases to the Environment,
      3.     Ambient Conditions,
      4.     Exposure or Uptake, and
      5.     Changes in Human Health or Ecology.

          The Panel notes that the criteria elicited are highly focused on models for
   pollutant fate, transport, exposure, and effects. Much of this information is not
   appropriate for models that address economic activity, behavior, and emissions. These
   models are differentiated by other key criteria, including whether they predict at the level
   of the individual, household, firm, sector, region, or national or global economy; whether
   they are normative (predicting how people should behave under various assumptions of
   rationality and information) or descriptive (reporting how people actually do behave);
   and whether they address the costs or benefits of environmental regulations. As such, the
   Criteria should first note the genre of the model, whether economic/behavioral vs.
   physical or engineering science models (though some models, e.g., for predicting
   emissions, could combine elements of both), and include different subsets of information
    for these.

          5.2.       Specific Suggestions by the Panel

1) Under Regulations, those entering information into the MKB should be given the
   opportunity to identify "Other Regulatory or Decision Support Applications." These
   could include US regulations, such as NEPA or Natural Resource Damage Assessments
   under CERCLA, or international agreements or treaties, such as those for ocean disposal
   or controls on persistent organic pollutants (POPs).  It could also include non-regulatory
   decision support applications, such as for risk communication efforts by state
   environmental or public health agencies, or life-cycle assessment in support of green
   design decisions by firms.

2) Under the Releases to the Environment Section, a differentiation should be made between
    models for natural systems (emphasized in the current list) and engineered environments,
    such as buildings, treatment plants, and water distribution systems. (Models for the latter,
    such as EPANET, have received increased attention in recent years due to concerns
    regarding drinking water quality at the tap from accidental or purposeful (i.e., terrorist
    actions) contamination, and should be sought for inclusion in the MKB.) Also, under
    Source Type, area source models should be explicitly noted to include larger scale sources,
    e.g., for non-point source runoff in watersheds, biogenic emissions in regional air quality
    models, or distributed natural or anthropogenic sources to groundwater.

3) Under Ambient Conditions, the Panel feels that the terms included under Processes
    (transport, transformation, accumulation, and biogeochemical), while useful information
    for many fate-and-transport models, are specific enough that they need not be included in
    these general model criteria. The Panel suggests that this information be replaced with the
   following, more-general criteria:
          a)  Time scales addressed in the model and whether the model predicts for
              dynamic or static conditions,
             b)  Spatial scales or economic units addressed in the model and whether it
                 provides a primarily distributed vs. lumped representation of the modeled
                 system, and
             c)  Whether the model is deterministic, predicting single values for model
                 outputs, or stochastic, predicting a range or distribution of values to
                 characterize variability and/or uncertainty.

   4)  Under Changes in Human Health or Ecology, the options should be expanded to include
       natural resource or materials damage, to consider effects, e.g., on visibility, historic
       buildings, or property value.

    5)  Model Applications: The Panel recommends that a further major category of
        information be elicited and reported, beyond the major items A-D. The additional
        category would be listed as "E. Model Applications," and should
       direct users to specific examples of regulatory or non-regulatory applications of the
       model (distinguishing between the two) in the public record and the peer-reviewed
       scientific literature.

          5.3.       Track Versions of Models

       The Panel recommends that revision tracking be incorporated into the MKB. Such a
feature would have several benefits. First, it better reflects the realities of modeling than the
current framework in which models are implicitly treated as unchanging. Second, it facilitates a
tighter connection between policy analysis and modeling: the documentation for an analysis
would specify a particular model version whose characteristics could be retrieved from the
database. Third, it would provide valuable insight into the evolution of models over time. It
would be possible to observe the extent to which changes in a model are driven by:
developments in the underlying science; the availability of new data; the availability of new
software or algorithms; the demand for new features; and the correction of programming bugs.

       Revision tracking  could be implemented as follows:

       1)  A version field and a date field would be added to the data entry form. The contents
          of the version field would be a character string supplied by the model developer. The
          string should contain enough information that the developer (or a subsequent
          maintainer) could reconstruct and rerun that same version of the model at a later time.
          The date field would  be the date at which that version of the model was released or
          placed in service, and

       2)  Each time a new version of the model is added to the database, there should be one or
          more fields describing the significant changes in the model from its previous version.
          In addition, all other fields associated with the model should default to their settings
          from the previous version. However, it should be possible to provide an updated
          version of any field without losing the corresponding field from the previous version
          of the model.
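
       One illustrative way the two items above might be represented (the field, class, and
function names below are hypothetical, not a prescribed MKB design) is the following Python
sketch:

    from dataclasses import dataclass, field, asdict
    from datetime import date

    @dataclass
    class ModelVersion:
        model_name: str
        version: str                      # character string supplied by the developer
        release_date: date                # date released or placed in service
        changes_from_previous: str = ""   # significant changes from the prior version
        overrides: dict = field(default_factory=dict)   # only the fields updated this time

    def effective_record(previous: dict, current: ModelVersion) -> dict:
        """All fields default to the previous version unless explicitly updated."""
        record = dict(previous)                         # carry forward the prior entries
        record.update(current.overrides)                # apply only the changed fields
        record.update({k: v for k, v in asdict(current).items() if k != "overrides"})
        return record

    v2 = ModelVersion("ExampleModel", "2.0", date(2006, 3, 1),
                      changes_from_previous="Revised deposition algorithm",
                      overrides={"users_guide": "users_guide_v2.pdf"})
    print(effective_record({"users_guide": "users_guide_v1.pdf", "contact": "..."}, v2))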

       The documentation burden imposed on model developers would be small. In particular,
models whose development has been sponsored, at least in part, by EPA will already have
significant changes spelled out in grant proposals or cooperative agreements. Ideally, the MKB
would also include information on bugs fixed between versions. With revision tracking in place,
the main page for each model would have a link to "Previous Versions," which would take users
to a page showing the dates and revision numbers of all previous vintages of the model in the
MKB. Each previous version should be a clickable link leading to the list of changes embodied in
that version (item 2 above) and to other information specific to that version of the
model.

          5.4.       Listing of Key Publications and Applications of Models

       The Panel believes that it would be useful to include a list of key references for each
model: publications and reports where the model is described or documented, and important
applications. Model developers will be able to provide this information easily, and it will allow
potential users to: (a) find out more about a model; (b) avoid duplicating previous research;
and (c) see example applications. This information would also address the concern raised in
charge question 7c by showing how widely used and thoroughly peer-reviewed each model is.


          5.5.       Clarification of MKB Entry Sheet Items C1-C3

       The distinction among items C1, C2, and C3 in the MKB Data Entry Sheet should be
made clearer, and the information requested by these items should correspond more closely to
the parallel sections of the Draft Guidance that discuss this information. Questions C1 and C3 are
intended to match Sections 2.2 and 2.3 of the Draft Guidance, but most model builders and users
will probably regard those sections as overlapping considerably. Section 2.2 (Conceptual Model
Development) in the Draft Guidance, for example,  requests a clear statement and description of
each element of the conceptual model, plus documentation of the science behind the model,
including: its mathematical form, key assumptions, the model's scale, feedback mechanisms, etc.
It seems, in short, to be asking for essentially complete documentation for the model. Because of
such great breadth of coverage, the types of information covered by Section 2.2 are solicited by
items C1, C2, and C3 on the Data Entry Sheet. Subsequently, the Draft Guidance, in Section 2.3
(Model Framework Construction), begins with a discussion of some of the same information: a
formal mathematical specification of the concepts and procedures of the model. Assuming the
information provided under C3 is intended to parallel that discussed in Section 2.3, it is not clear
how the mathematical formulation requested here differs from that requested under C1.

       It appears that the intent of C1-C3 is the following. The answer to C1 would be a broad
conceptual overview of the model that would be relatively free of technical detail (no equations)
and would be accessible to  readers from a wide range of backgrounds. It would usually include a
diagram showing the relationship between major components of the model. The answer to C2
would provide the technical detail missing from C1 (namely, the model's key equations) and
would have specialists as its intended audience. It would provide the theoretical basis for the
model. The answer to C3 would describe the model's numerical implementation (data,
algorithms, computer programming). This approach would be useful but needs to be spelled out
more clearly in instructions accompanying the form. It would also integrate well with version
tracking: the answer to C3 will usually change with each revision of the model; the answer to C2
will change periodically; and the answer to C1, which defines the essence of the model, will
generally be stable.
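
       One way to picture this interpretation, and its interaction with revision tracking, is as the
three-tier record sketched below (in Python). The class and field names are hypothetical
illustrations for discussion, not elements of the Data Entry Sheet or the MKB.

    from dataclasses import dataclass, replace

    @dataclass
    class ModelScience:
        """Three tiers of documentation, paralleling Data Entry Sheet items C1-C3."""
        conceptual_basis: str     # C1: non-technical overview; generally stable across revisions
        scientific_detail: str    # C2: key equations and theoretical basis; changes periodically
        implementation: str       # C3: data, algorithms, and code; changes with most revisions

    def routine_revision(entry, new_implementation, new_detail=None):
        # A typical revision replaces C3, touches C2 only when the theory has changed,
        # and carries C1 forward unchanged.
        new_science = new_detail if new_detail is not None else entry.scientific_detail
        return replace(entry, implementation=new_implementation, scientific_detail=new_science)
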
                6.     DATA DICTIONARY AND DATA STRUCTURE

Charge Question 6:  EPA has developed a data dictionary and database structure to organize
the information it has collected on environmental models (see Appendices E and F of the Draft
Guidance). Has EPA provided the appropriate nomenclature needed to elicit specific
information from model developers that will allow broad inter-comparisons of model
performance and application without bias toward a particular field or discipline?


          6.1.       General Comments

       The discussion of the elements of this question is based primarily upon relatively terse,
and sometimes vague, information provided by the REM Data Dictionary and the REM Entity
Relationship Diagram. The Panel's review of the Data Entry Sheet (CQ5) and related
documentation of several individual models appearing in the REM Models Knowledge Base
(MKB) was also considered in addressing this question. This has led the Panel to recommend that the
technical issues concerning the specific design of the MKB be addressed by either  (a) a
separate knowledge base topical report, or (b) an additional appendix to the current Draft
Guidance, to allow the main report to concentrate on the Agency's overall plan for the use
of this important tool, without ignoring the  details of its functional design.

       The Panel's expectation is that the developers of the MKB database structure would also
perform the necessary QA review of their Data Dictionary and entity relationships to assure that
they are properly drawn and functioning. This aspect is virtually impossible for the Panel to
evaluate thoroughly on the basis of the limited detail on the database structure in the
two documents provided. It is similarly difficult for Panel members (who are not information
technology specialists) to provide much useful advice without a better understanding of the
strategy and implementation of the design. Perhaps the separate topic report or MKB Appendix
could include all of this definition information and an outline of the database design strategy.
Panel members were not certain, however, that this would be helpful. As noted below, review of the individual
model documentation in the MKB provided the Panel with the most insight on the effective
results of the application of these tools within its system.

       Although the Glossary presented in Appendix A of the Draft Guidance is an undisputed
"plus" for the documentation effort, very few of the terms in the Data Dictionary are repeated
there, as may be expected and appropriate, given the specialized nature of database terminology
that is usually unique to the particular database software program for which it was specified. For
a database, its functional terminology use has to be clear and internally consistent, regardless of
its conformance to the "outside world." It has been noted elsewhere that several  of the Glossary
terms have varying definitions, as used in different sections of the Draft Guidance and MKB
references—even though they are intended to conform to the Guidance definitions put forth in
the Glossary. Although it initially appeared that ongoing efforts may have to include variant
definitions (with footnotes to indicate model association), the use of "special guidance-specific"
definitions for some terms may be satisfactory if the authors of the guidance carefully review
their use of terminology for consistency of use, and alter the text accordingly. As suggested
above, however, the MKB Data Dictionary can function independently and quite satisfactorily,
as long as the translation of the Data Entry Sheet terminology to database definitions is precisely
specified. The Panel therefore recommends that the Agency follow its own standard
QA/QC program procedures for ensuring the quality of all of the underlying information
in the MKB system. From evidence presented to the Panel, it appears that this has already been
substantially completed for the functions currently defined. As new functions are added to
support new features, including those recommended elsewhere in this report, it will of course be
necessary to expand and update this Data Dictionary and repeat many of the QC checks to verify
functionality.

      The Panel has varying opinions on whether the overall Glossary should include all of the
Data Dictionary terminology to assure that referencing is clear to all users. For the reasons
outlined above, it appears as though this would potentially add more opportunity for confusion
than enlightenment. Therefore, the recommended approach that would isolate the Data
Dictionary in its own self-standing report would seem most advantageous at the current
time. Regardless of the location of this documentation, the Panel reiterates its encouragement to
extend the QA/QC procedures followed to establish the initial quality of the MKB into the larger
QA program.  This is needed to maintain the information, as well as the hardware and software
systems needed to implement it.


          6.2.       Model Performance Information

      This charge asks about including database information that is "unbiased." However, as
indicated by the presentations made by Region 5 and 10 representatives before the Panel,  there is
also a need for a place in the database for additional "classification" information, which may go
beyond that requested from the developer, and which may appear to support "apparent
advocacy" or be otherwise "biased," if it includes "recommendation" information. This would be
a subsection of the database specifically devoted to information that helps Agency regulatory-
model application staff and "outside applicants" to identify the "most appropriate" candidate
models.  (A new "model selection program" that is under development by ORD was
demonstrated at the Panel's review meeting. It appeared to be a potentially valuable tool, but
several Panel  members cautioned that it should produce  an output file that includes a matrix of
candidate models, rather than a single "recommendation," so that the user of the tool can more
fully consider which of several candidates best fits the problem application at hand). Much final
model-selection decision making is presently achieved by regional or state agency discussions
that come to agreement on the most appropriate  site-specific model choice for major projects at a
particular decision point. However, as noted further below, the MKB would be more valuable if
cumulative EPA problem application experience could be more consistently represented in the
database, along with the present basic model description information.

       The Panel concurs on the importance of eliciting and including information on
historical model performance and particular application  experience from various model users
(both other modelers and decision makers), as well as model developers. This was not especially
motivated by any desire to minimize "biases" in reporting. There was some concern that
developers of a model may not be in a position to fully (or objectively) judge its behavior in
various contexts. Avoiding or minimizing bias would seem to require gathering reviews from as
broad a user base as possible. It now appears that the current approach, which utilizes only
information volunteered by the model developers, would tend to ensure that individual "biases"
are included, without any real opportunity to neutralize them. This situation may be the
unintentional result of using a more open narrative format for developers to explain features of
the model. It may be noted that the Panel review of the current Data Entry Sheet, the Data
Dictionary, and the Entity Relationship Diagrams did not suggest that there were any particular
features that would "bias" the selection or representation of models. Instead, as noted both above
and below, the reviewers were interested in seeing more information, as this could include
application experience with "competing" models.

       In fact, the inclusion of additional information on the history of performance suggested
by several Panel members would be more likely to include "opinions" as to the quality of
performance, hopefully supported by comparison with appropriate measurement data sets. This
extra information was viewed as important to prospective model users, even though it would be
likely to also include some "biased" information. As long as  instances of "preconceptual bias"
can be identified and flagged or filtered, the availability of previous application experience
(especially successes) would be a valuable component of the MKB information set. (Given the
wide variety of models included, this "openness" may be helpful  to both agency and "outside"
users; but perhaps some form of warning of the risk of potential bias should be included with any
new "performance history " element, so that the new users are fully aware of this limitation).
The Panel recommends that the Agency clarify the intended roles of the "inside" and
"outside" users of the MKB system and how that affects the priorities for the user
interacting with the system (including supplemental, even if "biased," application history
information).


          6.3.       Additional Recommendations

       To address the detailed issues of CQ 6 more specifically, the Panel reviewers observed that
the dictionary and database do capture much of the information necessary to assess model
performance, but there were some notable exceptions (a sketch illustrating how the suggested
additions might be captured follows the list below):

       1) CONCEPT: This results from problem formulation, but may or may not convey to
          the user useful information about the problem or set of problems (Draft Guidance
          §2.1) for which the model was developed. Another field should be added,  "Problem
          Specification" (as noted in Section 1.2 of this review,  to concisely capture descriptive
          information about the original application problem.

       2) DECISIONDOCS:  As written, this field seems to focus on how to use (run?) the
          model, how to produce output, and what experience there has been with running the
          model. This (or a new) field should include information or links to examples of when,
          how, and where the model was used to support an actual decision or decisions.
          Qualitative opinion on how the model performed would be acceptable/desirable.
          What benefits and problems did decision makers and stakeholders experience when
          using the model? This element should include a date entry so potential users can
          better judge the currency of the model.

       3)  DOWNLOADINFO: This should include information on the size of the model
          (zipped and unzipped), whether it is one file or a collection of files, and whether its
          setup will require changes in system files.

       4)  DIR ENTRY STATUS and REVISION_DATE: It is not clear what "last reviewed"
           means: whether the date given is for when the model itself was reviewed or for when
           its entry in the dictionary was last updated. There should be
          information on when the model itself was last reviewed by its developer, as well as
          documentation (or links to such) of any and all changes, including errata and
          enhancements. It would also be useful to have documentation of problems
          encountered or improvements suggested by actual users of the model. All of this may
          be considered in MODELCONTACTINFO, but the database appears to be placing
          any "institutional memory" of the model's behavior in a person, who may or may not
          be available. The reviewers thought that there should also be fields consistently
          indicating whether model documentation is available online, who is responsible for
          preparing and maintaining this documentation, and the date it was last reviewed
          and/or updated.

       5)  EVALUATION: This includes four questions, but without performance information, the
          first three seemed less useful (recognizing that they might represent the only
          information available for newer models).

       6)  MODEL_CATALOG: The table information given in the Data Dictionary is too cryptic to
           tell whether any model performance information would fall into the descriptions
           provided there; and

       7)  PROG_LANGUAGE:  This should also indicate whether any other software
          (particularly proprietary, e.g., ArcINFO)  is required to operate the model.
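
       The sketch below (in Python) illustrates, in schematic form, how the additional
descriptors suggested above might be captured. All of the class and field names shown are
hypothetical illustrations of the Panel's suggestions; none of them are existing Data Dictionary
elements.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class DecisionUse:
        """One documented use of a model to support an actual decision (cf. DECISIONDOCS)."""
        date: str                    # lets potential users judge the currency of the example
        description: str             # when, how, and where the model supported the decision
        performance_notes: str = ""  # qualitative opinion on how the model performed

    @dataclass
    class AdditionalDescriptors:
        """Hypothetical descriptors supplementing the existing Data Dictionary entries."""
        problem_specification: str                       # original application problem (cf. CONCEPT)
        decision_uses: List[DecisionUse] = field(default_factory=list)
        download_size_zipped_mb: Optional[float] = None  # cf. DOWNLOADINFO
        download_size_unzipped_mb: Optional[float] = None
        single_file: Optional[bool] = None               # one file or a collection of files
        modifies_system_files: Optional[bool] = None
        model_last_reviewed: Optional[str] = None        # date the model itself was last reviewed
        entry_last_updated: Optional[str] = None         # date the MKB entry was last updated
        documentation_url: Optional[str] = None
        documentation_maintainer: Optional[str] = None
        required_software: List[str] = field(default_factory=list)  # e.g., proprietary GIS packages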

       Panel reviewers drew on their observations in reviewing the AQUATOX (see Appendix
C-3), CALPUFF (see Appendix C-1), IPM (see Appendix C-2), and other models (see Appendix
C-4) in reaching their conclusions about the performance of the identified database elements.
Overall, the system appeared to be generally well designed, but with opportunity remaining for
expanding its focus to include more consistent information on model use experience and
performance in a format that would make it more uniformly easy for users to compare models of
interest for a particular candidate application. There are several key features
that the Panel would like to see improved or expanded so that the MKB can be most effectively
used by the EPA and its stakeholders. The existing Data Dictionary and Database Structure
appear to be adequate to address existing features of the current MKB. However, as this tool is
expanded to include new features recommended by either this Panel or the Agency's developers,
it will be necessary to add new structural elements and data elements; and this will require an
ongoing additional QA/QC effort. Therefore, the Panel recommends that the following issues
should receive further consideration and attention:

     1) A consistent QA review of the current content of the information contained in the
        MKB [some model feature/description errors (at the user interface level) were
        noted by Panel members, see Appendix C of this report],

     2) Follow-up requests to developers who supplied original information to supply
        missing data for the minimum set of descriptors that the Agency decides are
        essential to proper model selection,

     3) Entries into the data dictionary should be clearly defined and made as consistent as
        reasonably possible with the text in the Draft Guidance and data entry forms; and

     4) Provision of a mechanism that actively solicits feedback from the user community
        regarding application experience and model performance, both inside and outside
        the Agency, beyond voluntary e-mails to designated contacts for individual
        models.

          7.     QUALITY OF INFORMATION PROVIDED ABOUT THE
                                           MODELS

Charge Question 7:  To facilitate review for this particular charge question, the Panel should
focus on three models that represent the diversity of model information housed within the Models
Knowledge Base. These models are: (1) Aquatox, a water quality model; (2) Integrated
Planning Model (IPM), a model to estimate air emissions from electric utilities; and (3)
NWPCAM, an economic model.12

Using these three models as examples and emphasizing that EPA is not seeking a review of the
individual models, but rather the quality of the information provided about the models, EPA
poses the following questions to the Panel.  Through the development of this knowledge base,
has EPA succeeded in providing:
(7a) easily accessible resource material for new model developers that will help to eliminate
duplication in efforts among the offices/regions where there is overlap in the modeling efforts
and sometimes communication is limited?
(7b) details of the temporal and spatial scales of data used to construct each model as well as
endogenous assumptions made during model formulation such that users may evaluate their
utility in combination with other models and so that propagation of error due to differences in
data resolution can be addressed?
 (7c) examples of "successful" models (e.g., widely applied, have been tested, peer reviewed
etc.)?
(7d) a forum for feedback on model uses outside Agency applications and external suggestions for
updating/improving model structure?


          7.1        General Comments

       The Panel commends the Agency for developing the Models Knowledge Base
(MKB) and strongly supports its continued improvement. This type of resource has been
needed for some time and even in its draft form, the MKB provides an easily accessible resource
for the modeling community that, if maintained and used, will significantly improve the
development and application of models both internal and external to the Agency.

       In answering questions 7b-7d,  the Panel focused primarily on two suggested models (i.e.,
AQUATOX and IPM) along with a third model selected by the Panel (CALPUFF).  (The choice
of models was governed by the past experiences of Panel members.) However, it was necessary
to go beyond these models to address Charge Question 7a. The Panel interprets this question as
being asked in the context of a model developer who might use the MKB to screen existing
Agency models for use in a specific application, or to find existing model technology to include
in a new model to support a specific decision. In this case the Panel found it necessary to identify
a number of similar models (i.e., atmospheric dispersion models or water quality models) and to
assess, first, the number of models available to choose from and, second, the consistency,
transparency, and comparability of the data for these similar models.

       12 The final model selections from the MKB for observation and examination by the Panel include CALPUFF
(The Illustrative Air Model - see Appendix C-1 in this Report); IPM (Integrated Planning Model - The Illustrative
Economic Model - see Appendix C-2 in this Report); and AQUATOX (The Illustrative Water Quality Model - see
Appendix C-3 in this Report). Other models are discussed generally in Appendix C-4 of this Report.

       In answering CQ 7a, the Panel finds that the MKB has the potential to provide readily
accessible information about models; however, the amount and quality of information can be
improved. For CQ 7b, the Panel recognizes that the information provided in the MKB is not
highly detailed. As a result, a sufficient level of detail about the scales of data used and the assumptions
made during the formulation of any specific model in the MKB cannot be obtained from this tool
alone. However, the MKB does allow for the initial identification of candidate models with links
and references for obtaining further information.

       For CQ 7c, the Panel agreed that the three models considered in this review were all good
examples of successful models both in their regulatory role and in the way they are presented in
the Knowledge Base. For the final Charge Question, the Panel was not satisfied with the current
form of feedback mechanism for the Knowledge Base. More detailed observations, suggestions
and recommendations follow.
          7.2       Vision for the Knowledge Base

       The issues surrounding which models to include in the MKB are not trivial; the Panel
recognizes that this choice can have significant implications for the application of this tool in
support of decision makers. The Panel is concerned that without a clear vision, the MKB may
increase the burden on Regional and State offices by implying that a particular model is
"endorsed"  by the Agency. The disclaimer on the main page of the MKB makes it clear that
models in the Knowledge Base are not endorsed by the Agency but the Panel suggests that
this disclaimer be clearly presented at the top of each "Model Report" page as well.

       Part of the Vision for the MKB should specify the role of this resource in the
development or life cycle of models. More specifically, there needs to be a clear statement about
what models are included in the Knowledge Base and what models or types of models are
excluded (if any). This will require that the Agency provide a clear definition of "Regulatory
Model," or else that it move away from this restrictive terminology towards a more inclusive
title. The Panel recognizes that in addition to providing a repository or library  of mature models
that are actively used by the Agency, the MKB can also play an important role in the
development of new models and the improvement of existing models. For this reason, the
Panel recommends that the Agency include models at all stages of their life cycle with a
process for identifying to users those models that are new, actively being developed,
currently used for decision-making and nearing retirement.

       An important aspect of any model repository from the perspective of a model developer
or new model user is that it be as comprehensive as is feasible. In other words, users must be
confident that when they use the MKB to identify an appropriate model for a task, it is likely that
all relevant models have been considered. The draft MKB provides a good start but needs to
continue to incorporate additional models used by the Agency. Many of the Agency's Offices,
Programs, and Regions have developed their own clearinghouses for models; the Agency should
make an effort to bring these existing databases under the umbrella of the Knowledge Base. The
Panel recommends that the Agency identify these parallel Agency supported databases
(e.g., the Support Center for Regulatory Air Models (SCRAM), the Center for Exposure
Assessment Modeling (CEAM), etc.) and develop a plan to incorporate them into  the MKB.
If it is not feasible to incorporate these existing databases at this time, then the Panel
suggests providing a current list of - and links to - these additional databases on the main
page and the search page of the MKB. In addition, there are ongoing efforts outside the
Agency that are focused on developing common model documentation protocol (Benz and
Knorrenschild 1997) and a searchable web-based registry for existing models (Benz et al. 2001)
that may provide useful insight during the continued development of the MKB.

       The process of identifying and including existing models is clearly an important step to
ensure that the Knowledge Base is comprehensive. It is also important to continue to populate
this MKB with new models as they emerge. To accomplish this, the Panel recommends that
the Agency incorporate new models into the Knowledge Base as part of their initial
application within the Agency.  The information in the MKB for a given model is, or  should be,
part of the model development process so submitting this information as part of a model's initial
application should not be an added burden to the model developers. Nevertheless, the Panel
recognizes that it may be necessary for the Agency to provide additional incentive (or penalties)
as part of their plan to encourage what is currently a voluntary effort by modelers to put their
models in the MKB.

       To ensure the continued improvement of what appears to be an extraordinarily valuable
model information system, the Panel recommends that the Agency consider appointment of a
Knowledge Base "System Librarian." This appointment might come from within the Agency or
from an appropriately qualified contractor. The position would emphasize consistency  in data
collection and input of new information as well as system QA to improve information reliability
with time, making the MKB a national resource for quality comparative information on both new
and established models used in the regulation of the environment.


          7.3       Quality Assurance and Quality Control

       In addition to its role as an institutional memory, the MKB, in its current form,  is clearly
a tool designed and developed to support regulatory decisions by delivering useful information
about prospective models for specific applications. The database itself  is not unlike other
"models" developed to support regulatory decisions. As noted in CQ6,  the development of the
MKB and the information provided in it should be subject to the same level of quality control
and quality assurance that any Agency modeling effort is expected to include. Therefore, in
addition to the Vision Statement discussed earlier, the Panel recommends that the Agency
provide a link on the main page of the Knowledge Base that takes the user to the Agency's
plan for ensuring the quality (integrity, utility, and objectivity) of information provided. At
a minimum, this should contain the following elements:

       1) Problem specification that identifies the drivers for setting up the MKB (i.e. reduce
          duplication of effort, improve networking, facilitate model development, satisfy
          training needs, etc.);

       2) Clear identification of the user community or "clients" for the MKB. There was some
          ambiguity among the Regional representatives at the face-to-face meeting about
          whether the Knowledge Base satisfied their specific modeling needs and as a result
          there appeared to be a lack of "buy in" from the Regions;

       3) Specific performance criteria for the MKB information, along with selection criteria
          for models in the database, and identification of who will be responsible for ensuring
          that these criteria are met; and

       4) If non-Agency models are eventually included in the MKB (see previous bullet on
          selection criteria) then the QA/QC plan should identify how these models will be
          treated or presented and who will absorb the burden of oversight for these models.

       The level of detail provided for each model should also be balanced. In the draft MKB,
the details provided for models differ widely. An example of a model where information is very
sparse is TRACI. Scientific detail is often just a statement of units used in the model (e.g., the
SWTMODEL includes only the following statement under Scientific Detail: "The model uses
fixed units (S.I.)," and is missing the Conceptual Basis altogether). In other cases, it is not apparent
that the sections include comparable information. For example, it is often difficult to distinguish
between the Conceptual Basis, Scientific Detail and the Model Framework sections. The Panel
recommends that improved guidance be provided as part of the data entry sheet to ensure
that the correct type of information  is input into each field. This will  also facilitate search
functions by making sure those submitting the information realize what fields are searched.

       It may be necessary to request a keyword list from the model developer. As an  example
of this last point, the Panel found that the CALPUFF model was not identified in the keyword
search using the phrase "air dispersion." Although "air" and "dispersion" are in the title or
abstract, the phrase "air dispersion" is missing and as a result  the model  is not identified when
the search is based on this common phrase. In another case, a  search for "vapor intrusion"
models (currently a timely topic) found no matches in the MKB.  A search for "indoor air"
models produced three matches, but none that appeared usable for the vapor-intrusion set of
problems. This illustrates that there is still some significant work ahead to verify that the priority
regulatory problems being addressed in Regional offices of EPA today are adequately considered
in selecting candidate models to be included in the MKB.
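
       The short sketch below (in Python) illustrates why a developer-supplied keyword list
would help: a phrase search restricted to the title and abstract misses a model whose abstract
never contains the phrase, while the same search extended to explicit keywords finds it. The
example record is invented for illustration and is not actual MKB content.

    def phrase_search(models, phrase, use_keywords=False):
        """Return the titles of models whose searchable text contains the phrase."""
        phrase = phrase.lower()
        hits = []
        for m in models:
            text = (m["title"] + " " + m["abstract"]).lower()
            if use_keywords:
                text += " " + " ".join(m.get("keywords", [])).lower()
            if phrase in text:
                hits.append(m["title"])
        return hits

    # Invented example record: the abstract mentions "dispersion" but never "air dispersion".
    models = [{
        "title": "CALPUFF",
        "abstract": "A multi-layer, non-steady-state puff dispersion model.",
        "keywords": ["air dispersion", "air quality", "long-range transport"],
    }]

    print(phrase_search(models, "air dispersion"))                     # [] -- missed
    print(phrase_search(models, "air dispersion", use_keywords=True))  # ['CALPUFF'] -- found
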
          7.4       Layout and Navigation of Knowledge Base

       In addition to the recommendations already provided in Section 5, the Panel identified
several pieces of information that should be elicited when a model is introduced into the
Knowledge Base. In this section, the Panel provides observations about the current layout of the
MKB and provides suggestions for where  new information should be presented.

       The current layout of the MKB is logical and generally easy to maneuver (with some
exceptions noted later). The Panel found that much of the summary level material was readily
accessible on the three main Report pages. The more detailed information is generally available
through appropriate links. However, the Panel notes that in several cases, including the
CALPUFF model, information is not provided for specific fields and rather than leave these
fields blank, they are apparently removed  from the Report. For example, the "Model
Framework" and the "Model Evaluation" fields are often missing. The Panel recognizes that the
Agency attempted to "cull information about models that broadly serve the needs of all  users.
but once this minimum information is identified, it should be provided for all models. The Panel
recommends that if information is not provided for specific fields, those fields should be left
blank rather than be removed from the Report. A  blank field provides clear information
about a model while a missing field is ambiguous.
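
       A brief sketch (in Python) of the recommended behavior follows: every field in the
agreed minimum set appears in the rendered report, with blanks shown explicitly rather than
dropped. The field list and the placeholder wording are illustrative assumptions, not features of
the current MKB.

    # Illustrative minimum set of report fields; the actual set would be defined by the Agency.
    MINIMUM_FIELDS = ["Conceptual Basis", "Scientific Detail",
                      "Model Framework", "Model Evaluation"]

    def render_report(model_data):
        """Render every minimum field, showing blanks explicitly instead of dropping them."""
        lines = []
        for name in MINIMUM_FIELDS:
            value = model_data.get(name, "").strip()
            lines.append(name + ": " + (value if value else "(no information provided)"))
        return "\n".join(lines)

    print(render_report({"Conceptual Basis": "Gaussian puff formulation."}))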

       Overall, it was possible to use the MKB to obtain general information about the  existence
and availability of frequently used models and more detailed information about a specific model.
However, a real understanding about how  a given model works and what its specific strengths
and weaknesses are would appear to require either going into the detailed documentation or
contacting an actual user. Navigating the Knowledge Base was somewhat cumbersome, in that
apparently different links go to the same destination, links to critical information (e.g., model
change bulletins) are obscure, and the return link from the exit disclaimer page forwards the user
to the keyword search page. In addition, several different pages (10 in the case of CALPUFF)
needed to be accessed to gain a sense of model operation and capabilities. Perhaps
accommodating the somewhat bewildering array of models and their varying characteristics is
the cause of these navigational inefficiencies. Nevertheless,  the Panel recommends that the
MKB system be reconfigured so as to streamline access to model information.


          7.5     Updating the Knowledge Base

       The Panel recognizes that the MKB is a "living demonstration of the recommendations
from the Guidance for Environmental Models." This suggests that the Knowledge Base will
evolve and adapt to the specific needs of the user community. The comments above also support
the premise that this will be an ongoing process of optimization. Optimizing the MKB will
ultimately require an understanding of the user community and an active and transparent
feedback mechanism. To facilitate this, the Panel recommends that voluntary user profile
and registration information be requested so that use profiles can be developed. This
information can also provide a mechanism for announcements to be distributed when necessary.

       Improving the MKB and the models contained in it will ultimately depend on the quality
of feedback from "external users" and the ability of new users to access this information. The
MKB is currently limited to a single contact and does not provide any suggested format for
comments, nor does it provide for open dialogue and discussion of users' modeling experiences.
This seriously limits the Agency's ability to adapt the MKB and improve its utility. This lack of
an open forum also limits the model developers from gaining experience from model users and it
limits the ability of new modelers to learn about specific experience and application of a
particular model The Panel recognizes the challenges associated with hosting an open forum
on an Agency web site but recommends that the Agency reconsider including a transparent
user feedback mechanism that will facilitate an open dialogue for the models in the MKB.


          7.6     The Role of the Knowledge Base as a "Model Selection Tool"

       The Panel is not entirely convinced of the utility of a model selection tool or expert
system that accesses the MKB to facilitate model selection. However, the Panel suggests that if
such a tool is developed for application at the EPA Regions, labs and states, then the effort
should be considered "model development" and as such should clearly follow the principles
presented in the Draft Guidance.

       If such a model selection tool is developed, it will likely be used early in  the life of a
project, thus identifying and evaluating specific needs in a way that would facilitate a ranking of
models that would otherwise be difficult to achieve. Therefore the Panel recommends that any
tool developed by the Agency to facilitate model selection based on the Knowledge Base
should simply present the models in a comparative matrix in  the form of a side-by-side
comparison table such as one might see in the car sales industry.
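
       The sketch below (in Python) illustrates the kind of side-by-side presentation the Panel
has in mind: candidate models appear as columns of a comparison matrix rather than being
reduced to a single recommendation. The attribute names and model entries are invented for
illustration and are not actual MKB content.

    def comparison_matrix(candidates, attributes):
        """Format candidate models side by side: one column per model, one row per attribute."""
        width = 22
        header = "Attribute".ljust(width) + "".join(c["name"].ljust(width) for c in candidates)
        rows = [header, "-" * len(header)]
        for attr in attributes:
            row = attr.ljust(width)
            row += "".join(str(c.get(attr, "")).ljust(width) for c in candidates)
            rows.append(row)
        return "\n".join(rows)

    candidates = [
        {"name": "Model A", "Spatial scale": "10 m - 100 km", "Averaging time": "hourly"},
        {"name": "Model B", "Spatial scale": "1 - 50 km", "Averaging time": "annual"},
    ]
    print(comparison_matrix(candidates, ["Spatial scale", "Averaging time"]))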

       Appendix C provides more detailed information about Panel members' experiences in
accessing and using specific models.

                                  REFERENCES

Amar, P.; R. Bornstein; H. Feldman; H. Jeffries; D. Steyn; R. Ramartino; and Y. Zhang (2004).  Review
of CMAQ Model, December 17-18, 2003. Submitted March 1, 2004. Available at:
www.epa.gov/cair/pdfs/PeerReview_of_CMAQ.pdf

Benz, J. and M. Knorrenschild, (1997) "Call for a common model documentation etiquette".
Ecological Modelling,  97, 141-143.

Benz, J. and R. Hoch and T. Legovic (2001) "ECOBAS — modelling and documentation" Ecological
Modelling 138 3-15.

Bostrom, A., B. Fischhoff and M.G. Morgan.  1992. "Characterizing Mental Models of
Hazardous Processes: A Methodology With an Application to Radon," J. Social Issues, 48(4),
85-100.

Cullen, A.C., and Frey, H.C., 1999, Probabilistic Techniques in Exposure Assessment: A
Handbook for Dealing With Variability and Uncertainty in Models and Inputs, ISBN 0-306-
45956-6, Plenum Press, NY.

Cullen, A.C. and M.S. Small.  2005.  "Uncertain Risk: The Role and Limits of Quantitative
Assessment," In Risk Analysis and Society: An Interdisciplinary Characterization of the Field.
Edited by T. McDaniels and M. Small, Cambridge University Press, Cambridge, UK.

Dakins, M.E., Toll, J.E., Small, M.J., Brand, K.P. 1996. Risk-based environmental remediation:
Bayesian Monte Carlo analysis and the expected value of sample information, Risk Analysis, 16:
67-79.

Finkel, A.M. and Evans, J.S., 1987, "Evaluating the benefits of uncertainty reduction in
environmental health risk management," J Air Pollut Control Assoc, 37:1164-1171.

Finkel, A.M. 1990. Confronting uncertainty in risk management, Center for Risk Management,
Resources for the Future, Washington, D.C.

Fischhoff, B., D.  Riley, D.C. Kovacs and M. Small. 1998. "What information belongs in a
warning?" Psychology and Marketing, 15(7):  663-686.

Grand Canyon Visibility Transport Commission (GCVTC) (1996) Recommendations for
Improving Western Vistas: Report of the Grand Canyon Visibility Transport Commission to the
United States Environmental Protection Agency.  Dated June 10, 1996. Available at:
http://wrapair. org/WRA P/Reports/GCVTCFinal. PDF

Kleijnen, J.C. 2005. "An overview of the design and analysis of simulation experiments for
sensitivity analysis". European Journal of OperationalResearch,Vo\. 164, No. 2, pp.287-300.
                                          48

-------
March, J.G. and Simon, H.A., 1958, Organizations, John Wiley and Sons, New York.

Massmann, J., R.A Freeze, L. Smith, T. Sperling, B. James. 1991. Hydrogeological decision
analysis: 2. Applications to ground-water contamination. Ground Water, 29(4): 536-548.

Merz, J., M.J. Small and P. Fischbeck. 1992. "Measuring decision sensitivity: A combined
Monte Carlo-logistic regression approach," Medical Decision Making, 12: 189-196.

Morgan, M.G., Henrion, M., and Morris,  S.C. 1980. Expert Judgment for Policy Analysis,
Brookhaven National Laboratory, BNL 51358.

Morgan, M.G. and Henrion, M. 1990. Uncertainty: A Guide for Dealing With Uncertainty in
Quantitative Risk and Policy Analysis, Cambridge University Press, Cambridge, UK.

Morgan, M.G., B. Fischhoff, A. Bostrom, L. Lave, C. Atman. 1992. "Communicating Risk to
the Public," ES&T, 26(1 1), 2048-2056.

Raiffa, H., 1968, Decision Analysis: Introductory Lectures on Choices Under Uncertainty,
Addison-Wesley Publishing, Reading, MA.

Ramaswami, A., J.A. Milford and M.J. Small. 2005. Integrated Environmental
Modeling: Pollutant Transport, Fate and Risk in the Environment. John Wiley & Sons, New
York.

Saltelli,  A., K. Chan and M. Scott, eds., 2000. Sensitivity Analysis, John Wiley and Sons, New
York

Thompson, K.M., D.L. Bloom. 2000. "Communication of risk assessment information to risk
managers," Journal of Risk Research, 3(4): 333-352.

U.S. Congress. 2001. The Data Quality Act, Section 515 of the Treasury and General
Government Appropriations Act for Fiscal Year 2001, Pub. L. No. 106-554.

U.S. EPA. (no date). CREM Background Materials: A web version of the CREM-related
background information, with links to pertinent documents, is available at:

U.S. EPA. (no date). On-Line Models Knowledge Base (MKB, or KBase). Link is available at:
http://cfpub.epa.gov/crem/knowledge_base/knowbase.cfm

U.S. EPA. 1994. Agency Guidance for Conducting External Review of Environmental
Regulatory Modeling. Available at:

U.S. EPA. 1997. Final Report to Congress on Benefits and Costs of the Clean Air Act, 1970 to
1990. Report EPA 410-R-97-002. Available at: http://www.epa.gov/air/sect812/

U.S. EPA. 1999. Final Report to Congress on Benefits and Costs of the Clean Air Act, 1990 to
2010. Report EPA 410-R-99-001. Available at: http://www.epa.gov/air/sect812/

U.S. EPA. 2000. Framework for the Council on Regulatory Environmental Modeling. Available
at: http://www.epa.gov/osp/crem/library/crem%20framework.htm

U.S. EPA. 2001. Memorandum from Dr. Gary Foley to Dr. Donald G. Barnes entitled "Request
for Science Advisory Board Review of a draft outline of a proposed document entitled 'Guidance
on Recommended Practices in Environmental Modeling, '" October 4, 2001

U.S. EPA. 2002.  Memorandum from Christine Todd Whitman entitled "Strengthening Science
at the Environmental Protection Agency, " May 24, 2002. Available at:
http://epa.gov/osa/pdfs/saduties.pdf

U.S. EPA. 2003a. Memorandum from Administrator Christine Todd Whitman entitled "Council
for Regulatory Environmental Modeling," February 7, 2003 [designating Dr. Paul Gilman as the
EPA Science Advisor and asking him to revitalize the CREM and accelerate its efforts.]

U.S. EPA. 2003. Draft Guidance on the Development, Evaluation, and Application of Regulatory
Environmental Models, Prepared by The Council for Regulatory Environmental Modeling
(CREM), November 2003, 60 pages. Available at:
http://www.epa.gov/ord/crem/library/CREM%20Guidance%20Draft%2012_03.pdf

U.S. EPA. 2005. Guidance on the Use of Models and Other Analyses in Attainment
Demonstrations for the 8-hour Ozone NAAQS (Draft Final), February 17, 2005, EPA Support
Center for Regulatory Air Models. Available at
U.S. EPA SAB. 1989. Resolution on the Use of Mathematical Models by EPA for Regulatory
Assessment and Decision-Making, Modeling Resolution Subcommittee of the Environmental
Engineering Committee, Science Advisory Board, EPA-SAB-EEC-89-012, January 13, 1989.

U.S. EPA. SAB. 1990. Review of the CANSAZ Flow and Transport Model for Use in EPACMS,
Report of the Saturated Zone Model Subcommittee of the Environmental Engineering
Committee, Science Advisory Board, EPA-SAB-EEC-90-009, March 27, 1990. Available at:
U.S. EPA. SAB. 1995. Commentary on Bioaccumulation Modeling Issues, Report from
Bioaccumulation Subcommittee, Science Advisory Board, EPA-SAB-EPEC/DWC-COM-95-
006, September 29, 1995. Available at:
http://www.epa.gov/osp/crem/library/sab_bioaccumulation.pdf

U.S. EPA. SAB. 2002. Panel Formation Process: Immediate Steps to Improve Policies and
Procedures: An SAB Commentary, EPA Science Advisory Board, EPA-SAB-EC-COM-02-003,
May 17, 2002

Yokota, F. and K.M. Thompson. 2004. "Value of information analysis in environmental health
risk management decisions: Past, present, and future," Risk Analysis, 24(3): 635-650.
HOTLINKS FOR SELECT SOURCES ARE AS FOLLOWS:

AQUATOX: Available at: httg;//_cfjub^
CALPUFF: Can be accessed through the MKB (KBase) link at:
http://cfpub.epa.gov/crem/knowledge_base/knowbase.cfm

EPA Center for Exposure Assessment Modeling (CEAM) Web Site at:

Integrated Planning Model (IPM): Available at:
http://cfpub.epa.gov/crem/crem_report.cfm?deid=74919

Models Knowledge Base (MKB, or KBase): Link is available at:
http://cfpub.epa.gov/crem/knowledge_base/knowbase.cfm

National Water Pollution Control Assessment Model (NWPCAM): Available at:
http://cfpub.epa.gov/crem/crem_report.cfm?deid=74918

U.S. EPA. 2003. Draft Guidance on the Development, Evaluation, and Application of Regulatory
Environmental Models, Prepared by The Council for Regulatory Environmental Modeling
(CREM), November 2003, 60 pages. Available at:
http://www.epa.gov/ord/crem/library/CREM%20Guidance%20Draft%2012_03.pdf
                    Appendix A - Enhancements to the Glossary

       Consensus on a common nomenclature is a key requirement for implementing a
consistent Agency-wide approach for environmental model development, use and quality
assurance. The Glossary in the draft document is a preliminary step towards this goal.  However,
several aspects of the Glossary would benefit from additional technical and editorial attention:

       1)    The reader is likely to be frustrated when looking up underlined terms from the
   text if the terms are not listed in the Glossary in the same form in which they appear in the text,
   e.g.: Spatial and temporal domain (p. 9; listed under Domain in glossary), code verification (p.
   12), model evaluation (p. 16), model validation (pp. 16 and 43;  also appears on p. 30 in the
   definition for corroboration), integrity (p. 16), proprietary models (p. 23).

       2)    Several terms are defined in the Glossary slightly differently from their
   definitions in the text; it is suggested that the definition be the same in both locations. Module
   (Box 2 on p.  37); Terms from Box 3: Applicability and Utility,  Clarity and Completeness,
   Evaluation and Review, Objectivity, Uncertainty and Variability.  Application Niche
   Uncertainty (p. 21).

       3)    Several terms are not in alphabetical order in the Glossary: Expert Elicitation,
    False Negatives, Forms (models), Model, Parameter Uncertainty, Quality, Variability.

       4)    Several additional terms should be added to the Glossary (and underlined in the
   text) and either defined at that location, or else cross-referenced to another existing term in the
   Glossary for the definition (as has been done for "Parameter Uncertainty"): Acceptance
   Criteria (Box 3), Bayesian view (p. 56), Beta test, bootstrap sampling (p. 48), Bug (computer),
   Configuration tests, Data, Data Acceptance Criteria (p. 43), Empirical data (p. 21 and 45),
   Errors, hyperplane (p. 51), Integration Tests (App B), Monte Carlo analysis (p. 53), Normal
   Distribution (p. 45), Paradigm (App C), Parameterize, Peer Review, Platform, Post-processing
   (model output),  Qualifiers (for analytical data) (Box 5 on p. 43), Quality Assurance, Regimes
   (p. 48), Representativeness (p. 20; Box 5 on p. 43), structural error (p. 21), Type I error (p.
   45), Type II error (p.45), User interface (p. 33, used in definition for Object-Oriented
   Platform).

       5)    Cross-references to more specific terms in the Glossary should be added to the
   definitions for generic terms, e.g.:
          a)  Decision errors: See also False Negatives, False Positives,
          b) Errors:  See also Accuracy, Bias, Data Uncertainty, Confounding Errors, Data
             Uncertainty, False Negatives, False Positives, Measurement Errors, Model
             Framework Uncertainty, Noise, Uncertainty,  Uncertainty Analysis, Variability,
             and
          c)  Model: See also Conceptual Model, Deterministic Model, Empirical Model,
             Mechanistic Model, Screening Model, Simulation Model, Statistical Model,
             Stochastic Model.

    6)     The definition of the term "model complexity" should be expanded to emphasize
theoretical and process issues first (e.g. basis of the model; spatial and temporal resolution).
The mathematical, numerical, and computational aspects of complexity should assume a
secondary posture.

    7)     In addition to the Glossary, since environmental modeling is generally
multidisciplinary, the Agency should consider adding a List of Acronyms to the Draft
Guidance. Appendix E of this report provides an initial list.
    Appendix B - The CREM Models Knowledge Base Data Entry Sheet

Instructions

1.  Please complete this data entry sheet for each model that you want to be included in the
   CREM Models Knowledge Base. You may use as much space as necessary.

2.  You are encouraged to include URLs to other sources of information, graphics, and other
   pertinent documents (in PDF or other formats).

3.  The data entry sheet for the IPM model is provided as an example.

4.  Any questions? Need assistance? Please contact Neil Stiber (202-564-1573).
(A) General Information
1. Model Name:
2. Model Overview / Abstract:
3. Contact Information (name,
affiliation, e-mail, phone #):
4. Model's Home Page:
(B) User Information
1. Technical Requirements:
a) Computer Hardware:
b) Operating Systems:
c) Programming Languages:
d) Other Req'ts and Features:
2. Download Info (with URL):
3. Using the Model:
a) Basic Model Inputs:
b) Basic Model Outputs:
c) User's Guide:
d) Other User Documents:
(C) Model Science
1. Conceptual Basis of the
Model:
2. Scientific Detail for the Model:
3. Model Framework (equations
and/or algorithms):
4. Model Evaluation (verification
(code), corroboration (model),
sensitivity analysis, uncertainty
analysis):
(D) Model Criteria
Please use the shaded boxes on the left to select all criteria that are relevant to the model. Criteria
should be selected based on an appropriate level of generality / specificity. Please note that selection of
specific criteria (e.g., "Pollutant Type"); necessarily includes the more general (e.g., "Releases to the
Environment") but, not the more specific (e.g., "Physical").
       Regulations
              Clean Air Act (CAA)
              Clean Water Act (CWA)
              Safe Drinking Water Act (SDWA)
              Resource Conservation and Recovery Act (RCRA)
              Comprehensive Environmental Response, Compensation, & Liability Act (CERCLA)
              Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA)
              Toxic Substances Control Act (TSCA)
       Releases to the Environment
              Pollutant Type
                      Physical (e.g., radiation, heat, particles, fibers, noise)
                      Chemical (e.g., organic, inorganic, toxics)
                      Biological (e.g., microbial)
              Source Type
                      Point source (e.g., tank, spill, stack, discharge pipe)
                      Area source (e.g., spray, fertilizer, lagoon, holding area)
                      Mobile source (e.g., automobiles, trains, ships, airplanes)
       Ambient Conditions
              Media	
             	Ground (e.g., soil, sediment)
                      Water (e.g., surface water, ground water)
                      Air
                      Ecosystem
              Processes
                      Transport (e.g., advection, bulk, dispersion, diffusion)
                      Transformation (e.g., chemical reaction, partitioning, biodegradation)
                      Accumulation (e.g., deposition, sedimentation)
                      Biogeochemical (e.g., cycling, growth, consumer-resource)
       Exposure or Uptake
              Exposure Characterization
                      Location
                      Frequency and Duration
                      Pathway (e.g., inhalation, digestion, dermal, injection)
               Body Burden - Dose (e.g., pharmacokinetics, retention, transformation)
       Changes in Human Health or Ecology
              Human Health Indicators
                      Mortality
                      Chronic and Acute Diseases
              Ecological Indicators
                      Population Changes
                      Acute & Chronic Disease Occurrence
                      Land Use Change
                    Appendix C - Panel Members' Experiences Using the MKB

       This appendix summarizes comments related to the form and function of the Models
Knowledge Base with specific emphasis on models selected to facilitate the review and
response for Charge Question 7. Because the following narrative is meant to convey the
individual reviewers' experiences with the MKB during the review, the narrative has not been
heavily edited.

           C-1    CALPUFF (The Illustrative Air Model)

       The CALPUFF example evaluation starts from the "Models Knowledge Base" page,
and then goes to the listing of available models, and from that to the CALPUFF model report.
With respect to CQ 7(a), if the user were not looking for a specific model, it would be hard to
decide, using this list alone, how to choose from among the several seemingly air-related
models listed (however, the keyword search capability is helpful for this). A model overview
on the "general information" page provides information that addresses, in part,  CQ 7(b). Going
to the "user information" page gives information on downloading and the availability of user's
guides. Here the heading "Using the Model" is slightly misleading in that it implies
information on how the model is used to make decisions but is actually about how a modeler
would run the model. This section also provides no citations or links as to application of model
results  in actual decision making. Although the "Recommendations for Regulatory Use"
section is informative, it also provides no citations or links as to how model results have fared
in actual decision-making. The "Model Evaluation" section is clearly about evaluation of the
model as a model and not as a decision support tool.

       The MKB does provide sufficient information to accomplish goal 7a for the CALPUFF
model in that it allows users of the database to locate candidate models that might serve
their purpose. However, it should not be considered a substitute (e.g., in summary
report form) for the detailed information that has to be retrieved from the open literature in
order to compare potentially relevant models for an application. It would be impractical for the
MKB to provide the level of information necessary for users to determine which models are
suitable for every application, but it can certainly help eliminate duplication by providing a
limited number of candidates to consider. Evaluating these candidate models requires
consistency in the presentation of information.

       The MKB cannot reasonably be expected to provide sufficient detail to  fully address a
model users/developer's questions about CALPUFF. However, it can and should answer basic
questions such as "at what temporal and spatial scales has the model been shown to operate
successfully?" and (for air models in the GAQM) "at what scales are these models considered
to be preferred or acceptable alternatives?" This information  should be  sufficient to guide users
of the MKB to ask the right questions, but probably  cannot provide complete answers, since
understanding the "endogenous assumptions made during model formulation" will require
detailed understanding of the model algorithms beyond its scope.

       The models presently in the MKB differ widely in terms of ranges, attributes,
objectives, etc. The completeness and focus of the "model report" information also vary widely
relative to the amount of information provided. For example, under User Information,
essentially all that is provided for CALPUFF is links to the SCRAM and to the developer's
web site, but for some other vendor-supplied models, summary information is provided in the
MKB itself (plus appropriate links). Because vendors may provide information on models as
they see fit, it would be beneficial to have at least a summary of basic information about each
model in the MKB. As indicated in the Panel's Report, this information should include
computational requirements (including operating systems supported and requirements for other
software), descriptions of input data requirements, and descriptions of model output.
Additional useful information could include some examples where the model was successfully
applied, along with references and contact information to facilitate further research into the
suitability of models for specific applications.

       As another example of the need for consistency, on the CALPUFF site under the "user
information" section, the link to "Technical Requirements" is missing. To facilitate
identification of all candidate models for a specific task, each model should have the same
major sections. Similarly, the Framework section on the Model Science page is missing for
CALPUFF (as well as for AquaTox). Even if sections are left blank, they  should be included
for every model to facilitate use of the MKB. The main page of the CALPUFF model
developer's website provides little information about the science of the model but does nicely
summarize model updates, provides links to its regulatory status, a download, and training
opportunities. The "regulatory status" page provides information similar to that found on the
EPA "model science" page but goes further by offering links to notices and reports on
regulatory use. This also highlights the need for some support by the Agency to synthesize
information provided by the model developer in order to provide a consistent format and level
of detail.

       Navigating the CALPUFF pages was somewhat awkward. The Environmental
Indicators search feature was the least useful since it presupposes knowledge of how the
Agency defines and uses such indicators. One of the download  links from the "user
information" page leads to EPA's SCRAM website, as does a similar link for "model
homepage" on the "general information" page. The SCRAM website is apparently the only
point at which it is possible to access the critical "Model Change Bulletin" and "Model Status"
records, which are somewhat obscurely included only as "Notes" in smaller font. There
appears to be considerable overlap in these two sets of information and the question arises why
they couldn't be combined in one more accessible location (e.g., on the "user information"
page). The link to the NTIS site is probably necessary but models without online
documentation would appear to be at a disadvantage. Getting to CALPUFF on the SCRAM
website from either the "general information" or "user information" pages provides one with a
link to the model developer's website, which is maintained by a contractor rather than by the EPA. A link directly to
this website is also on the "user information" page. Thus, there  are three apparently different
links on two different pages all leading to the same destination, a non-EPA website. This seems
unnecessarily convoluted. It is not entirely clear until this point that genuinely useful
information on the model resides with a contractor and not with the Agency.

       Something seemed to be wrong with the keyword search feature on the MKB primary
panel, since entering "air dispersion" produced only three results, all related to the RAIMI.
This search should produce several hits, including CALPUFF. The Panel recognizes that the
search is performed only on the title and abstract, so if the word or phrase is missing from
these fields it will not be found. For CALPUFF, the abstract does not include the word "air,"
so the model is not picked up by a search for "air dispersion." The "browse for models by
selecting for environmental indicators" feature seems to have no search criterion that locates
CALPUFF either.
Also, after inadvertently selecting "Exit Disclaimer" on the CALPUFF User Information page,
the "Return to Previous Page" takes the user to the "Browse to Knowledge Base" page rather
than the previous page.
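
       The behavior described above is consistent with a phrase match performed only over the
title and abstract fields. The following sketch in Python is illustrative only; the record contents
and field names are hypothetical and are not the MKB's actual data or code. It shows how such
a search misses CALPUFF for the phrase "air dispersion," and how extending the match to an
additional keyword field would recover it.

    # Minimal sketch (not the MKB's actual implementation): illustrates why a
    # keyword search restricted to title and abstract can miss a relevant model.
    # The example records and field names are hypothetical.
    records = [
        {
            "title": "CALPUFF",
            "abstract": "A multi-layer, multi-species non-steady state puff "
                        "dispersion model for long range transport of pollutants.",
            "keywords": ["air dispersion", "long range transport"],
        },
        {
            "title": "RAIMI",
            "abstract": "Integrates emissions inventories, air dispersion models, "
                        "risk models, and population models.",
            "keywords": ["air dispersion", "cumulative risk"],
        },
    ]

    def search_title_abstract(query, models):
        """Search only title and abstract, as the MKB appears to do."""
        q = query.lower()
        return [m["title"] for m in models
                if q in m["title"].lower() or q in m["abstract"].lower()]

    def search_all_fields(query, models):
        """Also search a keyword field, so models like CALPUFF are found."""
        q = query.lower()
        return [m["title"] for m in models
                if q in m["title"].lower()
                or q in m["abstract"].lower()
                or any(q in k.lower() for k in m["keywords"])]

    print(search_title_abstract("air dispersion", records))  # ['RAIMI'] only
    print(search_all_fields("air dispersion", records))      # ['CALPUFF', 'RAIMI']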

       On the CALPUFF model developer's website, a reference is made to the Guidelines on
Air Quality Models (GAQM), while in the MKB, there is a reference to "Appendix W." In
fact, both refer to the same document. The MKB should be clear that Appendix W to 40 CFR
Part 51 and the GAQM are the same. Both the Model Knowledge Base and the model
developer's web sites should provide links to the GAQM.

       The MKB  includes many highly successful models (including CALPUFF), but it is not
clear how users will  be able to determine for themselves which ones are  "successful." Clearly
models "preferred" in the GAQM qualify, but  a similar gold standard may not exist for other
media. Other GAQM models may be assumed to have achieved some measure of "success." A
list of the applications of a model could be useful in providing a measure of its success. To
allow one to judge the level of success of a particular model, the summary report should
provide a very simple summary of the "applicability range" of the model. For example, the
summary report states that "CALPUFF is intended for use on scales from tens of meters from
a source to hundreds of kilometers," but does not mention the fact that the minimum temporal
resolution of the model (hourly averages) restricts its applicability to a simulation range that
does not include important short-term phenomena (e.g., emergency events such as accidental
spills), dispersion of heavy gases, etc.

       As indicated  in the Panel's report, especially important information that should be
included in the MKB are i) all input/output formats, ii) all software tools (public domain and
proprietary - as well as potential substitutes) that are needed in order to fully utilize the
model's capabilities, iii) available databases of inputs (potentially outputs from other models),
and iv) past evaluation (especially cross-evaluation) studies involving the model(s) of
concern. The MKB provides the opportunity to turn abstract discussions in the Guidance into
specific examples; however, in order to achieve this, more  detailed and consistent information
needs to be included in the MKB.
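
       One way to make this concrete is to define a minimum consistent record that every model
entry would populate, even if some fields are marked as unavailable. The sketch below is
hypothetical; the field names and example values are assumptions for illustration, not an
existing MKB schema.

    # Hypothetical minimum record for an MKB model entry; the fields mirror the
    # categories recommended in the Panel's report, not an actual MKB schema.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ModelEntry:
        name: str
        conceptual_basis: str            # mechanistic, empirical, or hybrid
        spatial_scales: str              # e.g., "tens of meters to hundreds of km"
        temporal_resolution: str         # e.g., "hourly averages"
        input_formats: List[str]         # i) input formats
        output_formats: List[str]        # i) output formats
        required_software: List[str]     # ii) tools needed to fully use the model
        input_databases: List[str]       # iii) available databases of inputs
        past_evaluations: List[str]      # iv) evaluation / cross-evaluation studies
        regulatory_status: Optional[str] = None
        documentation_url: Optional[str] = None
        contact: Optional[str] = None
        user_feedback: List[str] = field(default_factory=list)

    # Even a sparse entry forces every section to exist, which supports
    # side-by-side comparison of candidate models.
    calpuff_like = ModelEntry(
        name="CALPUFF (illustrative values only)",
        conceptual_basis="mechanistic puff dispersion",
        spatial_scales="tens of meters to hundreds of kilometers",
        temporal_resolution="hourly averages",
        input_formats=["meteorological fields", "emissions inventory"],
        output_formats=["hourly concentration fields"],
        required_software=["meteorological preprocessor", "postprocessing utilities"],
        input_databases=["prognostic meteorological model output"],
        past_evaluations=["tracer-study comparisons"],
    )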

       The role of the EPA as the "model contact" is not clear for the feedback forum. The
appropriate or desired role of the model contact as either an Agency or external interface for
the model should be  made clear at this stage of the development of the MKB. It would also
seem that a more direct link to the actual developer and maintainer of the model would be
helpful. The MKB appears to have no formal feedback mechanism other than contacting Mr.
Pasky Pascual. Feedback from model users could be extremely valuable to others who have
specific modeling needs. The information would help users answer the charge questions posed
in 7a-c.  The MKB could solicit comments from users of the models, and post these comments
on a bulletin board.  Postings should allow for anonymity,  as some model users might not want
to be identified personally as users of the models - for example it is not unusual for busy
modelers to get phone calls from graduate students wanting help running complex
environmental models for thesis projects.
          C-2     The Integrated Planning Model (IPM - The Illustrative Economic
               Model)

       The write-up on IPM in the MKB is very thorough. It is clear, concise, and helpful as a
first description of what this model contains and what it is used for. It turns out that almost all
of the write-up is a verbatim cut-and-paste from the IPM Model Documentation. This is
sufficient as long as the appropriate items are covered at sufficient depth. However, in
examining the IPM Model Documentation, page 2-5 begins a section on Key Methodological
Features (e.g., details of how the load duration curve is specified and information on how the
dispatch order is determined) that could be simplified and incorporated into the MKB to bring
the  reader one level further down in detail. Thus, to maintain consistency in the level of detail
presented in the MKB, it may be necessary for existing documentation to be re-written with a
consistent format across all models. It is recognized that this would likely require a scientific
editor/webmaster dedicated to the task of working with the model developer to prepare the
documentation for upload onto the MKB.

       The Panel recognizes that the MKB alone is unlikely to provide sufficient information
for  new model developers who may require a detailed understanding of potentially competing
models. This type of information can only be obtained, if at all, from model documentation.
The IPM site, which can be accessed from the MKB, does contain links to such detailed
documentation. In this sense, new modelers may benefit. On the other hand, an internet search
or a search of the EPA's website would immediately bring up such documentation without the
need for the MKB. New developers would be particularly keen on knowing the IPM's
limitations and assumptions, none of which seem to be available. IPM in particular is
extremely well entrenched in the Air Office and would therefore be unlikely to attract "new
model developers."

       The level of detail on "endogenous assumptions" for a given model depends on the
information provided by the model developer, so at some level this may be beyond the control
of the developers of the MKB. Evaluating a model's utility relative to other models requires
first that competing models be identified through the MKB, and second that the MKB provide
enough information, at a comparable level of detail, so that appropriate choices about which
model to use can be made. A model with higher spatial resolution is expected to be more
accurate than one of lower resolution, but choices about resolution always involve tradeoffs,
such as in model complexity, data availability, model flexibility, and the types of questions a
model is designed to answer. The charge question does not encourage this kind of thinking
(although earlier questions may), and the database is likewise silent on providing information
to aid in this type of thinking.

       For IPM, spatial resolution is clearly given - all 48 contiguous states plus DC are covered,
along with a number of identified coal-producing regions. The forecasting horizon of the
model is clear; however, the temporal discretization is not explicitly stated. Exogenous
assumptions are not provided in full directly on the MKB model page, but model documentation
accessed through links would surely provide this information for this model. There is a list of
key assumptions (e.g., perfect foresight, pure competition) in the IPM Model Documentation
document; this information should be provided in the MKB. Again, as noted earlier, modelers
should be asked to provide a write-up for the MKB of significant limitations of their models in
terms of simplifications, strong assumptions, and factors that have been ignored and/or are
outside the scope of the model.

       The Panel agrees that the IPM model is a good example of a "successful" model. A
forum for feedback on model uses outside Agency applications and a means of collecting
external suggestions for updating/improving model structure are currently inadequate.
          C-3    AQUATOX (The Illustrative Water Quality Model)

       A new model developer would find the documentation and descriptive material on the
technical and theoretical aspects of AQUATOX very helpful in eliminating duplication of
effort. Processes in the model are well documented in the MKB and the associated model
documentation provided on the AQUATOX web site.

       The technical documentation of Release 2.0 is reasonably thorough with regard to
process documentation and assumptions inherent in the model. However, the format of the
report does not follow the recommended elements for model documentation given in the Draft
Guidance. The Panel would prefer to see a separate "Model Development" chapter that
includes a conceptual model, a complete disclosure of all model assumptions and resulting
caveats, and data used to convert the conceptual model to a mathematical model. Release 2.0
does specify that it can only be used in a non-dimensional or one-dimensional mode and does
discuss the temporal scales of use. There are certainly limitations to use of the model imposed
by these assumptions; the document does discuss these.

       This model has not had a long history of application in its current form, although it does
have a long history of application in previous incarnations (e.g., as CLEAN, CLEANER, or
PEST). The user manual presents several examples of applications of the model; however,
only one of them (Onondaga Lake) shows system data that allow the user to assess
the success of these applications. On the web site, model "validation" examples are offered in
an EPA report published in 2000 that includes Onondaga Lake, PCBs in Lake Ontario, and
agricultural runoff in the Coralville Reservoir. It does appear that these evaluation exercises
compare AQUATOX with data and previous models for these systems, which is good. There is
no discussion of regulatory use of the model. The documentation does make the point that this
is a multi-stressor, multi-response model.


       Finally, the model web site provides an opportunity to become a registered user; however,
it is not clear that this is the portal to provide feedback to the Agency on outside application
experience or suggestions.
          C-4    Other Models

       As noted in the Panel's report, it was necessary to evaluate other models in the MKB in
order to assess the level and consistency of detail and ease of use. The following comments are
general observations from this survey.

       The Panel found that figures and diagrams were particularly helpful in the section
describing the model's conceptual basis as used in the IPM. The information provided for a
number of the models is not necessarily in line with the definition of "Conceptual basis" as
described in the Guidance. The descriptions range in detail from providing a statement of what
the model does to what inputs are required but not always clear on what the conceptual basis is
(i.e., is it mechanistic, or empirical, or something in between). The BLP model has only two of
the four sections in the model use section. There also appears to be some confusion between
"Scientific Basis" and "Model Framework," which is illustrated by the similar level of
information provided in the  Scientific Basis section for CALPUFF and the Model Framework
section of the IPM. With the IPM it appears that the text was pasted into the sections on
conceptual basis, and that the framework section was used to capture overflow text. This
reconstruction suggests confusion in populating the MKB system, either on the part of the
person who filled out the original Data Entry Sheet or the person who uploaded this
information from the data sheet into the MKB system.

       It would be useful  if the web page on "User Information" provided an indication of the
level of user expertise required to apply the model. For example, the IPM states that "The
model's core LP code is run  by ICF Consulting..." while at the other extreme, the
THERdbASE states that "User needs only moderate level of technical education and/or
modeling experience." This type of information is valuable for users planning to actually apply
the models, beyond just learning what is available.

       The Panel found that the level of detail provided in the MKB is very different across
models. An example of a model entry that is very sparse is TRACI. Scientific detail is often just a
statement of the units used in the model (e.g., the SWIMODEL entry includes only the following
statement under Scientific Detail: "The model uses fixed units (S.I.)," and is missing the
Conceptual Basis altogether). The NWPCAM report is missing the model evaluation section.
This speaks to the issue of quality control across the MKB. If the Agency is going to take
responsibility for the quality of information provided on these pages, then there will need to be
some oversight provided to the various  people inputting data in order to get an acceptable level
of consistency for the information provided. Or, as indicated earlier, there may be a need for a
dedicated scientific editor.

       The Panel has recommended that the MKB include more detail on model version. A
good example of a version-tracking matrix or table is given on the PRIZM version index page,
which is found by following the links to the model web site through the EPA Center for
Exposure Assessment Modeling (CEAM) site (http://www.epa.gov/ceampubl/products.htm)
and selecting the model from the menu.
       It is important that the information in the MKB be kept current. It would be helpful for
keeping the information up to date if an annual automated message were sent to the individuals
listed as model contacts, requesting updates or reviews of the material in the MKB. As an
incentive, this message could be accompanied by a report on the number of accesses made to
the specific model.
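
       A minimal sketch of such an annual reminder, assuming a simple contact list and a local
mail relay, is shown below. The file layout, addresses, and access counts are hypothetical;
this is not an existing MKB capability.

    # Hypothetical annual-reminder sketch; contact data, addresses, and access
    # counts are illustrative only, not an existing MKB capability.
    import csv
    import smtplib
    from email.message import EmailMessage

    def send_annual_reminders(contacts_csv="mkb_contacts.csv",
                              smtp_host="localhost"):
        """Read a contact list (model, contact_email, annual_accesses) and send
        each model contact a request to review the model's MKB entry."""
        with open(contacts_csv, newline="") as f, smtplib.SMTP(smtp_host) as smtp:
            for row in csv.DictReader(f):
                msg = EmailMessage()
                msg["From"] = "mkb-admin@example.gov"   # placeholder address
                msg["To"] = row["contact_email"]
                msg["Subject"] = f"Annual review request: {row['model']} entry in the MKB"
                msg.set_content(
                    f"Please review the Models Knowledge Base entry for "
                    f"{row['model']} and submit any updates.\n"
                    f"As an incentive: the entry was accessed "
                    f"{row['annual_accesses']} times in the past year."
                )
                smtp.send_message(msg)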

       The user community for the MKB may provide a very effective policing mechanism for
maintaining model quality, especially when money is at stake. This provides a clear opportunity
and incentive for improving the models the MKB contains. However, this requires a more
transparent feedback mechanism, which is currently lacking. The Panel recognizes that, once
this resource is developed, the MKB will be a good candidate for technology transfer over the
long term.
                         APPENDIX D-ACRONYMS
AA-ship - Assistant Administrator-ship (within the U.S. EPA)

ADV - Advisory

ANSI - American National Standards Institute

AQCS - Analytical Quality Control Services

AQUATOX - A tool for performing ecological risk assessments for aquatic ecosystems. It is a personal computer (PC)-based multi-stressor and multi-response ecosystem model that simulates the transfer of biomass and chemicals from one compartment of the ecosystem to another. It does this by simultaneously computing important chemical and biological processes over time. It predicts the fate of various pollutants, such as nutrients, organic toxicants and various chemicals in aquatic ecosystems, as well as the direct and indirect effects on the resident organisms and their effects on the ecosystem, including fish, invertebrates, and aquatic plants. It has the potential to help establish the cause and effect relationships between chemical water quality and the physical environment and aquatic life.

ArcINFO - A Geographic Information Modeling System

ASQC - American Society for Quality Control

ASTM - American Society for Testing and Materials

BLP - Buoyant Line and Point source Gaussian plume dispersion model designed to handle unique modeling problems associated with air dispersion phenomena

BMPs - Best Management Practices

CAA - Clean Air Act

CALPUFF - A multi-layer, multi-species non-steady state puff air dispersion model that simulates the effects of time- and space-varying meteorological and air quality conditions on pollution transport, transformation, and removal for assessing long range transport of pollutants and their impacts.

CEAM - Center for Exposure Assessment Modeling (U.S. EPA/ORD)

CERCLA - Comprehensive Environmental Response, Compensation, and Liability Act

CFR - Code of Federal Regulations

CLEAN - Crops, Livestock and Emissions from Agriculture in the Netherlands: A Modeling Tool to Evaluate Policy Options for Reduction of Mineral Surplus, Ammonia Emissions to Air and Nitrogen and Phosphate Emissions to Soil

CLEANER - Collaborative Large-Scale Engineering Analysis Network for Environmental Research. A networked infrastructure of environmental field facilities that enables formulation and development of engineering and policy options for the restoration and protection of environmental resources.
CMAQ - Community Multi-Scale Air Quality Model designed to simulate and model a wide range of physical and chemical processes relating to air quality at particular scales in the lower atmosphere.

CONCEPT - World Health Organization Concept Model of Children's Environmental Health Indicators, which emphasizes the complex relationships between environmental exposures and children's health

CQ - Charge Question

CREM - Council for Regulatory Environmental Modeling

CWA - Clean Water Act

D - Dimension (e.g., as in 1-D, 2-D, etc.)

DECISIONDOCS - A Central Database and Clearing House Information System for Communication, Outreach, Terminology, Environmental Data for Monitoring, TMDLs, Water Quality, Ground Water Monitoring, etc.

DFO - Designated Federal Officer

DOWNLOADINFO - A Listing of U.S. EPA Environmental Models to Provide Information on Dispersion Models Supporting Regulatory Programs Required by U.S. Law

DQO - Data Quality Objectives

EEC - Environmental Engineering Committee (U.S. EPA/SAB/EEC)

EPA - Environmental Protection Agency (U.S. EPA)

EPANET - Environmental Protection Agency Network simulation model. A Windows program that performs extended-period water network modeling simulation of hydraulic and water-quality behavior within pressurized pipe networks. It tracks the flow of water in each pipe, the pressure at each node, the height of water in each tank, and the concentration of a chemical species throughout the network using a simulation period comprised of multiple time steps. In addition to chemical species, water age and source tracing can also be simulated.

FACA - Federal Advisory Committee Act (Public Law 92-463)

FACT - Flow and Contaminant Transfer Model

FIFRA - Federal Insecticide, Fungicide, and Rodenticide Act

FR - Federal Register

GAQM - General Air Quality Model (also Guideline on Air Quality Models)

GCVTC - Grand Canyon Visibility Transport Commission

HTTP - Hypertext Transfer Protocol (world wide web protocol)

IQG - Information Quality Guidelines
IPM - Integrated Planning Model. This model is used by the U.S. EPA to analyze the projected impact of environmental policies on the electric power sector in the 48 contiguous states and the District of Columbia. It is a multi-regional, dynamic, deterministic linear programming model of the U.S. electric power sector. It provides forecasts of least-cost capacity expansion, electricity dispatch, and emission control strategies for meeting energy demand and environmental, transmission, dispatch, and reliability constraints.

KBase (also MKB) - Models Knowledge Base

LP - Linear Programming

MKB (also KBase) - Models Knowledge Base

NAAQS - National Ambient Air Quality Standards

NAS - National Academy of Sciences

NCSU - North Carolina State University

NEPA - National Environmental Policy Act

NERL - National Exposure Research Laboratory (U.S. EPA/ORD/NERL)

NIST - National Institute of Standards and Technology

NMSE - Normalized Mean Square Error

NRC - National Research Council (of the National Academy of Sciences)

NSF - National Science Foundation

NTIS - National Technical Information Service

NWPCAM - National Water Pollution Control Assessment Model. It combines water quality modeling with economic analyses to translate concentration estimates to measures of "beneficial use attainment" used to characterize water quality for policy purposes. This is a national-level water quality modeling system that can simulate water quality changes and economic benefits that result from pollution control policies. It can develop place-specific water quality estimates for most of the nation's inland region.

OAT - Office of Air Toxics (of the U.S. EPA)

OECA - Office of Enforcement and Compliance Assurance (U.S. EPA/OECA)

OECM - Office of Enforcement and Compliance Monitoring

OMB - Office of Management and Budget (U.S. OMB)

ORD - Office of Research and Development (U.S. EPA/ORD)

PCBs - Polychlorinated Biphenyls

PDF - Portable Document Format (also Probability Distribution Function, depending on context)

PEST - Non-linear parameter estimation software for any numerical model

POPs - Persistent Organic Pollutants

PRIZM - A risk assessment model for pesticides to estimate environmental concentrations in surface waters (e.g., PRIZM/EXAMS).
QA - Quality Assurance

QA/QC - Quality Assurance/Quality Control

QAPP - Quality Assurance Project Plans

QC - Quality Control

QUA - Quantitative Uncertainty Assessment

QUAL2E - An enhanced stream water quality model which is applicable to well-mixed dendritic streams. It simulates the major reactions of nutrient cycles, algal production, benthic and carbonaceous demand, atmospheric reaeration, and their effects on predicting temperature fluctuations on the dissolved oxygen balance. It is intended as a water quality planning tool for developing total maximum daily loads (TMDLs) and can also be used in conjunction with field sampling for identifying the magnitude and aquatic characteristics of non-point sources.

QUAL2EU - An enhancement to QUAL2E which allows users to perform three types of uncertainty analyses, namely sensitivity analysis, first order error analysis, and Monte Carlo simulation.

RAIMI - Regional Air Impact Modeling Initiative. A regional air impact modeling tool which is a set of software tools developed by U.S. EPA Region 6 to integrate emissions inventories, air dispersion models, risk models, and population models. EPA and state and local agencies can use this risk-based tool to evaluate the cumulative health impact on local communities of virtually an unlimited number of emissions sources. It has the ability to both predict potential risk to individual neighborhoods and differentiate from hundreds of pollution sources the few where attention will yield the greatest health benefit. Results are generated in a fully transparent fashion such that risk levels are traceable to each source, each exposure pathway (e.g., inhalation, ingestion), and each contaminant, allowing for prioritization of remedial action based on the potential impact of a contaminant or source on human health.

RCRA - Resource Conservation and Recovery Act

REM - Regulatory Environmental Modeling

REM Panel - Regulatory Environmental Modeling Panel (U.S. EPA/SAB/REM Guidance Review Panel; also referred to as "the Panel")

REV - Review

SAB - Science Advisory Board (U.S. EPA/SAB)

SCRAM - Support Center for Regulatory Air Models

SDWA - Safe Drinking Water Act

SI - International System of Units (from NIST)
SWIMODEL - A dynamic rainfall-runoff storm water management simulation model (also referred to as SWMM), primarily but not exclusively for urban areas, for single event or long-term (continuous) simulation. Flow routing is performed for surface and sub-surface conveyance and groundwater systems, including the option of fully-dynamic hydraulic routing. Non-point source runoff quality and routing may also be simulated, as well as storage, treatment and other best management practices (BMPs).

THERdbASE - Total Human Exposure Risk Database and Advanced Simulation Environment model. An integrated database and analytical modeling software system for use in exposure assessment calculations and studies.

TMDL - Total Maximum Daily Load

TRACI - Tool for the Reduction and Assessment of Chemical and Other Environmental Impacts. This tool assists in impact assessment for sustainability metrics, life cycle assessment, industrial ecology, process design and pollution prevention.

TRIM FATE - Total Risk Integrated Methodology Model FATE Module [an overall modeling framework intended to provide a flexible method for integrating the release(s) of pollutants from single or multiple sources to their multimedia, multipathway movement in order to predict exposure to pollutants and to estimate human health and ecological risks].

TSCA - Toxic Substances Control Act

UK - United Kingdom

URLs - Uniform Resource Locators

VOI - Value of Information

WASP - Water Quality Analysis Simulation Program. This is a generalized framework for modeling contaminant fate and transport in surface waters and is used in TMDL water quality modeling applications. It is based on the flexible compartment modeling approach, and can be applied in one, two, or three dimensions. It is designed to permit easy substitution of user-written routines into program structure. Problems typically studied include biochemical oxygen demand and dissolved oxygen dynamics, nutrients and eutrophication, bacterial contamination, and organic chemical and heavy metal contamination.

WoE - Weight-of-Evidence
End of Document