Summary Minutes of the US Environmental Protection Agency
Science Advisory Board
Review of the Draft SAB Panel Report on EPA's Regulatory
Environmental Models Guidance
Public Teleconference Meeting
April 26, 2006
1:30 pm - 4:00 pm (Eastern Time)
Meeting Location: Via Telephone Only
Purpose of the Meeting: The meeting was held to allow the Chartered SAB to review
and approve the subject draft report. The meeting agenda is in Attachment A.
Members Participating in the Meeting:
Dr. M. Granger Morgan, Chair
Dr. Gregory Biddinger
Dr. Trudy Ann Cameron
Dr. Baruch Fischhoff
Dr. A. Myrick Freeman
Dr. James Galloway
Dr. James Johnson
Dr. Phil Hopke
Dr. Cathy Kling
Dr. Jill Lipoti
Dr. Michael McFarland
Dr. Jana Milford
Dr. Rebecca Parkin
Dr. Kathleen Segerson
Dr. Deborah Swackhamer
Dr. Thomas L. Theis
Dr. Rob Stavins
Dr. Joan Rose
Dr. Gene Matanoski
Dr. Rogene Henderson
Dr. Valerie Thomas
Dr. Lauren Zeise
Others Participating in the Meeting:
SAB Staff: Dr. Jack Kooyoomjian, DFO; Mr. Thomas Miller, DFO; Dr. Vanessa Vu,
Director; Dr. Kathleen White, DFO; Dr. Anthony Maciorowski
EPA Staff and Public: See Attachment B for a list of those who contacted the DFO
noting their interest.
Public Commenters: None
MEETING SUMMARY
Wednesday, April 26, 2006
This meeting was announced in the Federal Register (71 FR 18326-18327); see
Attachment C in the physical file and on the SAB website.
Mr. Thomas Miller, SAB Designated Federal Officer, convened the meeting and
identified those on the call. He noted that: 1) the meeting was an official meeting of the
Chartered Science Advisory Board, chaired by Dr. Granger Morgan; 2) the meeting
complied with the requirements of FACA and EPA policy for expert advisory committees;
and 3) the SAB members participating in this meeting had submitted updates to their
confidential statements of financial interest and the Deputy Ethics Official for the SAB
Staff Office had determined that Members do not have "conflict of interest" or
"appearance of a lack of impartiality" issues within the meaning of the relevant ethics and conflict
of interest requirements that apply to this advisory activity.
Mr. Miller stated that Members' responsibilities during this meeting were to
evaluate the draft SAB Panel report and decide whether the report:
a) adequately addresses the Agency charge questions;
b) is clear and logical; and
c) draws conclusions and makes recommendations that are supported by the
body of the Panel's report.
Mr. Miller noted that SAB proceedings provide an opportunity for public
observation and participation and that participation can be through providing written
comments to the SAB or by making short oral statements during the public meeting. Mr.
Miller noted that for this meeting, no members of the public had requested time for oral
statements, nor had any written public comments been received.
Mr. Miller then introduced the SAB Chair, Dr. Granger Morgan, who carried out
the agenda.
A. REVIEW OF THE DRAFT PANEL REPORT ON EPA'S REM GUIDANCE
Dr. Morgan welcomed the participants and noted the focus of the meeting was to
review the draft report from the REM Panel (see Attachment D, Review of Agency 'Draft
Guidance on the Development, Evaluation, and Application of Regulatory Environmental
Models' and 'Models Knowledge Base' by the REM Guidance Review Panel of the EPA SAB).
Dr. Morgan then introduced Mr. Pascual, of the EPA Office of Research and
Development and Chair of the Council for Regulatory Environmental Modeling (CREM),
to make comments on behalf of the Agency.
Mr. Pascual thanked those within EPA who had provided the vision to pursue this
effort, and thanked the SAB and its staff for their work. He stated his opinion that the draft report
thoroughly and fully answered the agency's charge to the SAB. He noted that EPA takes
the recommendations in the draft report seriously and that staff are already moving forward
to respond to what they heard in the Panel's review meetings and read in the draft report.
He offered three examples where EPA was already reacting:
1) The "Guidance": EPA will be considering the SAB advice, along with advice
of the NAS that is now looking at the issue of best practices for modeling even
more broadly, in an integrated fashion and using insights to improve the final
guidance.
2) The "Knowledge Base": They are moving to increase the interactions with
outside groups that will make it easier to integrate across various groups that
are working on such models. In this, the Agency is restructuring to use global
and non-proprietary technologies that will allow all interested parties easier
access.
3) Agreeing with the Panel that the Guidance and the Knowledge Base will affect
more than just those who do modeling, EPA is poised to begin an effort to help
inform modelers, analysts, managers and stakeholders about REM so that they
can better understand and use the results of modeling in their roles. The effort
will begin with a workshop in November 2006.
Dr. Morgan then asked Dr. Thomas Theis, Chair of the SAB REM Guidance
Review Panel, to comment. Dr. Theis noted that there were three areas where multiple SAB
Members' comments on the draft report were similar. He identified the three areas and
suggested how he would address each:
1) Members noted that the transition from the Introduction to the Body of the
Report was rough. Dr. Theis will edit the document to integrate some historical
background on EPA's activities under the Council for Regulatory
Environmental Modeling and also include information on the Agency Charge
for this current SAB effort.
2) Many Members believe the Letter to the Administrator is too long. He noted
that he had struggled with this during drafting and that one factor giving this
impression is that the introductory material takes up half a page before the
substantive information begins. He also noted that the letter was double-
spaced. He will attempt once again to shorten the letter; the final letter will
be single-spaced, which should help.
3) Many Members commented that the report's urging of EPA to provide
consistent resources for the REM effort needed to appear in the body of the
report as well. The Panel gave this comment serious consideration and, even
though this view has also been expressed in previous SAB reports, believed it
was important to state in this effort. He will add information on this to the
body of the report.
Dr. Morgan then asked the SAB Lead Reviewers to comment on the draft report.
Attachment E provides a compilation of the written comments of Board members on the
draft report. Members generally deferred to those comments and during the meeting only
mentioned specific comments that they believed to be in need of highlighting.
a) Dr. James Galloway stated that he believed the Panel had done an excellent job
on the report; he deferred to his written comments with no further comments to offer.
b) Dr. Michael McFarland agreed that the draft report was of high quality and, in
light of the encouraging response from Mr. Pascual, felt he did not need to
emphasize anything further from his written comments.
c) Dr. Trudy Cameron noted the many line edits she had provided and highlighted
several significant issues, including: i) the need to clarify the use of the
term "bias" (page 53), which refers to a specific statistical concept (she suggested
"apparent advocacy" as the terminology to use); ii) the implication that
intrusion on modelers' time by graduate students justifies anonymity for those
who do modeling; and iii) the need for caution in suggesting privatization of
the MKB: this is a case of a public good that is not likely to be supported by
outside groups, who lack sufficient incentive to provide such goods.
d) Dr. Jana Milford focused on her general comments and stated that more work
was needed to make clear that the guidance applies to more than just fate and
transport modeling. Members had varied opinions about how clearly the report
conveyed this. Some thought that the guidance should note that certain models
might not be able to conform to it. Her second issue noted the need for more
attention to how to address uncertainties that are difficult to quantify (e.g.,
scenario uncertainties in forecast model applications). Dr. Theis noted that the
Panel had discussed this issue carefully and that the Panel report attempted to
make the point that the guidance should apply more broadly than just the fate
and transport area.
e) Dr. Gregory Biddinger largely deferred to his written comments, which highlight
places in the report where the initial articulation of advice is strong but then
becomes diffuse as the explanation proceeds. He emphasized that EPA should
discuss what constitutes a stakeholder under the different conditions that prevail
from model development to model application; they are not necessarily the same
stakeholders.
f) Dr. Lauren Zeise noted that the report was well done and emphasized three
points: i) the letter might point out that, as uncertainty analysis becomes more
complex, EPA might need to develop new risk management frameworks that
can accommodate such analyses better than current frameworks do; ii) the need
to clarify the "post-application audit" issue, possibly by adding a separate
box for this function and reconsidering its placement in the overall figure; and
iii) the possibility of clarifying the "structural model uncertainty" issue by
giving it a separate treatment that clarifies its distinctiveness within overall
uncertainty. Members also discussed whether the draft report's handling of
uncertainty suggested a "dumbing-down" of the analysis so that it could be
understood by risk managers, who are usually not model practitioners. They
want to ensure that this tone is not conveyed in the final report.
Dr. Morgan then called on other SAB Members to point out specific issues that
they felt needed to be addressed during this meeting. Comments mentioned by specific
members included:
a) Dr. Rebecca Parkin emphasized the need to clarify who is included in the
group referred to as stakeholders.
b) Dr. Gene Matanoski noted that the report introduction should clarify that the
SAB's role over time in model development has been that of an advisor, not a
co-developer of models and model guidance with EPA.
c) Dr. Valerie Thomas agreed on the need for a clear discussion of stable resources
for EPA's modeling efforts in the body of the report and not just the Executive
Summary. She also asked that the report clearly distinguish between the advice
in this draft report and the advice given in the previous "SAB Modeling
Resolution of 1989." It would also be good to note how the Agency has moved
forward relative to advice in past SAB advisory efforts on modeling.
d) Dr. Joan Rose pointed out that fate and transport modeling for microorganisms
is in its infancy. She noted the need to discuss the distinction between modeling
efforts that are in their "infancy" and those that are mature in their development
and application.
e) Dr. Vanessa Vu, SAB Director, noted that Appendix D on Panel Formation was
not necessary because that issue is thoroughly discussed on the SAB Staff Office
website.
f) Dr. Granger Morgan reiterated his written comments that the level and
sophistication of the treatment of uncertainty should be appropriately matched to
the problem at hand, and to the way the results will be used. He argued that
whenever uncertainty is an important element in a problem, it should at a minimum
be acknowledged and receive some basic quantitative analytical treatment. He also
suggested that the report should call for an expanded discussion of model
uncertainty.
ACTION: Dr. Morgan then noted that his sense of the Board's consensus, based
on the discussion, was that the report meets the objectives stated in the charge; that Board
Members had suggested many modest clarifying edits, along with several substantive
changes; and that all of this could be accomplished by the Panel Chair, with the DFO's
assistance, after which the report would be read again by the Lead Reviewers for
conformance with the suggested edits in the Member comments and then sent on to the
Administrator. Dr. Hopke made a motion incorporating these points, and it was seconded.
Dr. Morgan asked Board Members to vote on the motion by noting any objection to the
motion as made. No dissent was voiced, and the Chair declared that the motion had
passed. The report is approved subject to the conditions in the motion on editing and final
reading by the Lead Reviewers.
B. TOPICS FOR THE SAB 2006 ANNUAL MEETING AND WORKSHOP
Dr. Morgan then introduced the final topic for the meeting, noting that each year the
SAB conducts an annual meeting in December that is normally focused on a
workshop topic of the Board's choosing. Dr. Morgan referred to the candidate topics
below for Member consideration. He noted that the intention for today was to discuss the
pros and cons of each topic and to decide how to go forward with planning.
Workshops are part of the SAB's efforts to advance and keep pace with complex
technical and emerging issues, and they provide a forum for Board and Committee
members to interact with EPA and external scientific experts and discuss environmental
areas of interest to EPA. Past Workshops have focused on:
(a) December 2003 - Short overview sessions on a variety of topics, including control
of transboundary air pollutants, emerging contaminants, invasive species,
nanotechnology, and genomics.
(b) December 2004 - "Nanotechnology, Biotechnology, and Information Technology -
Implications for Future Science." This was a more in-depth exploration of the
nanotechnology topic discussed in the overview sessions during the
December 2003 workshop mentioned above.
(c) December 2005 - "Science for Valuing the Protection of Ecological Systems and
Services," which discussed the progress and advances developed on this issue
by the SAB Committee for the Valuation of Ecological Systems and Services.
Topics that have been suggested as candidates for the SAB 2006 annual meeting follow
(Member discussions are included with each topic):
a) The SAB is establishing an Ad Hoc Committee to evaluate the extent of
environmental problems presented by reactive nitrogen and to make
recommendations toward a more integrated approach to nitrogen research and risk
management. The SAB could focus its workshop on nitrogen to kick off this
initiative.
Members discussed whether this topic was as narrow as it seemed and agreed
that it had the potential to be broader than it appeared. Some believed it was
not a good topic; others suggested it would be a critical topic for the
future. Dr. Galloway, who will chair the SAB "Nitrogen Panel," noted that the
panel itself was only now being formed and would not likely be ready to
organize a December workshop. The topic was deferred pending that group's
establishment.
b) The Board is embarking on a new study to assist the Agency in identifying and
building on lessons learned from past responses to major natural and man-made
environmental disasters (e.g., WTC, the anthrax incident on Capitol Hill,
Hurricanes Katrina and Rita). As part of this effort, the Board could hold a
workshop inviting outside experts from the private and public sectors to discuss
and learn from their programs for dealing with environmental threats.
Members discussed the importance of the issue and whether it could include
things such as bird flu and HIV. They noted that it was timely but wondered
whether the SAB's efforts could be deep enough given the amount of research
and other activity already going forward in this area. They noted the
importance of learning from other sectors' activities in this area, and the
need for a significant component of the activity to focus not just on
evacuation from impacted areas but also on the needs that must be satisfied to
allow people to return to those areas. Dr. Morgan argued that the focus
should be learning how other public and private organizations prepare for
and deal with disasters. He suggested that the focus not be on EPA's recent
experiences with the WTC, the anthrax incident on Capitol Hill, and Hurricanes
Katrina and Rita. Members concluded that the project is timely and
warrants continued consideration. The focus is likely to be on learning
from sectors outside EPA about their experience and plans for preparing
for and reacting to such events.
c) In December 2003, the SAB sponsored a mini-session on global transboundary air
pollutants. The SAB could hold a workshop to allow in-depth discussions of this
topic and to explore the implications of transboundary pollutant transport on EPA's
science and research needs.
Members discussed the scope of the project and considered whether it was
a global issue or whether the real issue of interest to the SAB would be the
US effects of global transboundary pollution. Dr. Hopke noted that very
little was being done on the issue and that it was thus premature for the SAB
to focus on it. The issue will be deferred for now.
d) There has been considerable advancement in the science of risk assessment and its
use in environmental policy-making. Advances have occurred in understanding
and evaluation of many of the specific components included in assessments and in
how these components are integrated. The NRC, GAO, EPA, and other
independent organizations have evaluated EPA's risk assessment processes and
these have been helpful in moving EPA's practice forward over the years. In spite
of all this evaluation and improvement, we still have many examples of
assessments that are embedded in strong controversy (e.g., PFOA, dioxin, arsenic).
Many have suggested that it is necessary to step back and look at EPA's risk
assessment practices within the context of how they are used in support of EPA's
mission. This is often referred to as the science-policy interface, and it reflects the
hard reality that science is influenced by the legal structure of policy making as
much as science influences policy making itself. The dilemma is that there is no
clear line that separates science from policy. The SAB could hold a workshop to
discuss the interface between science and science policy choices as a prelude to a
possible exploration of ways to make quantum improvements, not incremental and
disconnected ones, in risk assessment.
Members discussed the issue, with several noting strong support for the
topic because of ongoing issues in the NAAQS area, because of its fit with the
SAB's role and the difficulty panels often have in discerning where their
science focus ends and policy takes over, and because of how the recent EPA
risk assessment staff paper might relate to the project. Others expressed
concern that the topic was much too broad as stated. The project needs a
refinement in its focus to narrow the scope. There is also a possibility that
topics "b" and "d" might be combined in some way. Members will
continue to work on refining the project.
Dr. Morgan will work with SAB Staff to refine projects "b" and "d" and then
decide how to move forward. They will keep Board Members informed as planning
proceeds.
The meeting was adjourned by the Designated Federal Officer.
Respectfully Submitted:
/ Signed /
Thomas O. Miller
Designated Federal Officer
US EPA Science Advisory Board
Certified as True:
/ Signed /
Dr. M. Granger Morgan
Chair, EPA Science Advisory Board
Attachments:
A Meeting Agenda
B Table of Interested Public (including EPA)
C FR Announcement (in physical file only)
D Draft REM Panel Report
E Compilation of SAB Member Written Comments on the Draft
ATTACHMENT A
US Environmental Protection Agency
Science Advisory Board
Review of the Draft SAB Panel Report:
Review of Agency 'Draft Guidance on the Development, Evaluation, and Application of
Regulatory Environmental Models' and 'Models Knowledge Base'
Agenda
Public Teleconference Meeting
April 26, 2006
1:30 pm - 4:00 pm (Eastern Time)
Meeting Location: Via Telephone Only
Members of the public may obtain the call-in number by calling 202-343-9999
Wednesday, April 26, 2006
1:30 pm  Convene the Teleconference Call: Announcements, Summarize Agenda, Attendance
         Mr. Thomas Miller, Designated Federal Officer

1:40 pm  Welcome and Introduction
         Dr. Granger Morgan, SAB Chair
         a. Comments by EPA
            To Be Determined, EPA ORD National Center for Environmental Research

1:50 pm  Review of Draft Report
         Review Panel: REM Guidance Review Panel
         Chair: Dr. Thomas Theis
         Lead Reviewers:
         1) Dr. Gregory Biddinger
         2) Dr. Trudy Cameron
         3) Dr. James Galloway
         4) Dr. Michael McFarland
         5) Dr. Jana Milford
         6) Dr. Lauren Zeise

3:00 pm  Disposition of the Draft Report
         Board Chair and Members

3:15 pm  Discussion of Candidate Topics for the 2006 SAB Annual Meeting/Workshop
         Chair and Members

4:00 pm  Adjourn (time approximate)
ATTACHMENT B
Public Requests for REM Meeting Information
No.  Name            Organization
1    B. Sachau       NA
2    Gina Williams   SBC Global
3    Noha Gaber      US EPA
4    Patt Phibbs     Bureau of National Affairs
5    Susan Reith     US EPA
6    Beth Eliason    State of Vermont
7    Namsoo Suk      State of New Jersey
8    John Holmes     NAS/NRC
9    Brian Hennesey  US EPA
ATTACHMENT C
http://www.epa.gov/fedrgstr/EPA-SAB/2006/April/Day-11/sab5324.htm
ATTACHMENT D
Review of Agency 'Draft Guidance on the Development, Evaluation, and Application of
Regulatory Environmental Models' and 'Models Knowledge Base' by the REM Guidance
Review Panel of the EPA SAB
http://www.epa.gov/sab/pdf/rem_draft_02-24-06.pdf
ATTACHMENT E
April 26, 2006
Compilation of Member Comments on the Draft REM Report
A. LEAD REVIEWERS:
1. Dr. Gregory Biddinger:
Other than the exceptions noted below, the SAB Review of Agency Draft Guidance on the
Development, Evaluation and Application of Regulatory Environmental Models and
Models Knowledge Base addressed the Agency charge questions in a clear and logical
manner, and the conclusions and recommendations drawn were supported.
Cover letter
1. The concern raised regarding adequate resources in the 2nd paragraph does not
seem to have come from a response to the Agency guidance. In addition, the draft
SAB report does not have a section in the table of contents on this point. Was this
based on discussions with the agency during meetings? Personally, I can believe it
is true, but it should be supported somehow.
2. The paragraph 5 recommendations around problem specification and stakeholders
seem out of place. See comments below under charge question 1. This paragraph
needs to be modified to align with the importance of the recommendation
provided in section 1.2 of the draft report.
Charge question 1.
1. Suggest that the first ten lines of the general comments section 1.2 be moved to the
front of the section as an introductory paragraph.
2. The remainder of section 1.2 actually raises the need to expand the model
guidance from general models to include site-specific considerations, and also to
raise the role of stakeholders to a level of central importance. This section should
be renamed to be more explicit regarding its content; something like "Expanded
Guidance Scope or Boundaries" would be more appropriate.
3. The alternative Figure 1 discussion does not carry through to the letter to the
administrator. The discussion in the 5th paragraph of the cover letter (page 2 of the
letter, lines 18-23) does not convey the importance that is provided in sections 1.2
and 1.3. It is handled a bit better in paragraph 5 (page two of the summary, lines 10
to 16). In essence, the report recommends an expanded scope for the guidance, from
the development of general models to the broader considerations of selecting
and adapting models for site- or problem-specific applications. This is an important
and global recommendation that does not carry through. These are minor changes,
but I expect the messages to be much stronger if they are made.
4. The recommendation that stakeholders play a central role should be discussed in
further detail. There needs to be guidance on how to select appropriate
stakeholders depending on whether the model being developed has general
applications versus specific ones. Suggest a few sentences recognizing that the agency will
have to include a discussion of what constitutes a stakeholder under different
conditions of development or application, from national policy to local permit
decisions.
5. Section 1.2 makes a point about the importance of peer review (page 9, lines 10-11)
throughout the model development/application process. The agency makes the
same point in Appendix C of their guidance and graphically presents the point in
figure C1.1. Suggest you recognize that in conjunction with this point.
Charge question 2
1. Section 2.2, under Goals and Methods, raises a number of points related to the
agency's need to expand their focus or scope in drafting this guidance (or guiding
principles). The SAB review comments suggest recognizing the following:
a. Model users may be those who simply use the output and do not run the
models.
b. Modelers working outside a regulatory context should be an intended
audience.
c. The guidance needs to cover a broad range of modeling types other than
just environmental models.
It is not clear to me that there is value in making this guidance so encompassing
that it covers all audiences. This document seems to me to be more in line with
previous framework documents written for Ecological Risk Assessment. That
document set the groundwork for a whole series of subsequent documents,
including a separate primer for managers on how to use the output of ERAs and
critical-issue papers on topics such as uncertainty. I wonder if the panel is asking
the agency to do more with this single document than is appropriate. They are
charged to focus on regulatory environmental models and not the other models listed.
Maybe a more appropriate recommendation would be to plan and describe in this
document a series of subsequent guidance documents covering other audiences
and model types. It might be better for them to write strong guidance for a
narrower audience and then expand in subsequent work once they have a solid
basis to work from.
2. Section 2.4 notes the need for documentation during the development of the model,
not just when it is complete. I agree with that point and suggest that you link it
with your recommendation on peer review throughout the model development
process. It would clearly aid such integrated peer reviews.
Charge question 3.
1. Section 3.2, in the last paragraph on page 24, makes the point that there is a need
to discuss the use of qualitative assessment tools, such as expert judgment, to test
model appropriateness before moving to more quantitative tools. The agency does
make note of qualitative approaches under section 3.1.3.2 of their report, covering
the topic of model corroboration. It seems appropriate to recognize that and build
from there on what more you would like to see in the guidance.
2. The point in section 3.4 on the need to provide some discussion in this guidance
about linking models to create a larger modeling tool is well taken. I would also
suggest that this might be a good example of where a more detailed guidance
document on this specific topic might be worth recommending. This would give
them the option of providing high-level guidance here and more rigorous guidance
in a following document.
Charge Question 4 - Still reviewing
1. Section 4.0 included many good recommendations and many interesting and useful
suggestions for approaches and references. Many of these good points may deserve
more explicit designation as "recommendations" and bolding in the text. The following
are a few, but I suggest that the authors revisit this text and make sure some key points
are not left with less emphasis than is warranted.
a. In the 3rd and 4th paragraphs (page 29) of section 4.1, the point is made that the
guidance needs to direct focus to sources of uncertainty in the decision-
making process other than just the modeling. The discussion provided suggests
that the guidance should direct the modeler to consider the needs of the
decision-maker and relevant stakeholders in determining how much
uncertainty is acceptable in model design and execution. This seems to beg for a
specific recommendation.
b. Later in section 4.1 (page 31, lines 19-22), the review suggests the guidance
should include a discussion about the propagation of uncertainties when working
with multiple models. This is a very important issue and, in regulatory analysis,
very often the real situation. This discussion is worth expanding; if not, at
a minimum I suggest it needs more emphasis as a recommendation.
c. There is a general recommendation at the end of section 4.1, but it seems to me
that many of the good points earlier in the section are lost in the generality.
The review panel may want to revisit and redraft to capture some of the above
recommendations more explicitly.
d. In section 4.2, the point about confusion and lack of clarity between sensitivity
analysis and uncertainty analysis is both important and well described.
Unfortunately, no explicit recommendation is made. This could be as simple as
bolding lines 23-25 on page 32.
e. Section 4.3 suggests that uncertainty analysis needs more complete treatment
in section C.6, and specifically that there is little guidance on how to evaluate
uncertainty in model parameters. But no recommendation is given. A more
explicit recommendation seems warranted.
2. The use of case study examples seems like a worthwhile addition, both for the guidance
and for the MKB. As noted above (see charge question 2, #1), it seems to me that the
development of guidance for Regulatory Environmental Models could follow a pattern
similar to that for Ecological Risk Assessment. In the case of the ERA guidance
documents, two volumes were developed that included a number of complete and
detailed case studies of the application of ERAs. The REM Guidance could also
follow such a series approach, and you might want to consider recommending not only
that they include a few illustrative examples but also that they develop future guidance
using detailed applications of modeling to support regulatory decisions, using these to
highlight how to do the problem formulation, model design, execution, and quality
analysis, plus the communication of modeling results.
a. As well, it might be worth considering that the MKB include a series of white
papers on the various tools used to assess model sensitivity and uncertainty, and
also white papers on critical technical issues around modeling, such as
communication of results.
Charge Question 5 -
See comment above about the value of considering white papers on types of models,
tools for analysis of modeling sensitivity and uncertainty and also white papers on
critical issues.
Charge Question 6 - No comments
Charge Question 7 - No comments
2. Dr. James Galloway:
Thank you for the opportunity to serve as a Lead Reviewer of the Draft Guidance on the
Development, Evaluation, and Application of Regulatory Environmental Models and
Models Knowledge Base prepared by the Regulatory Environmental Modeling Guidance
Review Panel of the EPA Science Advisory Board. My overall impression is that the panel
has done an excellent job in thoroughly reviewing the report and in the process has been
of great service to the agency. My comments therefore are focused more on how the
information is presented than on its quality.
Following are my responses to my three charges as a lead reviewer.
1. Have the original charge questions to the SAB Panel been adequately addressed in the
draft report? It is my assessment that the original charge questions to the SAB Panel are
adequately addressed in the draft report. The responses to each of the seven charge
questions are clear and extensive. The panel has been thorough in not only reviewing what
was written but in also suggesting alterations or additions to the text and the supporting
figures.
2. Is the draft report clear and logical? In general, the draft report is well written and
clearly sets out the panel's recommendations. I do recommend the following
improvements. First, at the end of each of the sections dealing with a specific charge
question, there should be a summary of the panel's recommendations. Second, the
Executive Summary should state each charge question along with the summary from the
body of the report. Third, the letter to the Administrator is about three pages, which in my
mind is too long. It would be more effective if it were reduced in length by about a page.
Lastly, as noted in the report, Appendix C has not been heavily edited, given the individual
nature of the responses. Given the diffuse nature of the information provided, the panel
might wish to consider condensing the key points from the Appendix and merging them
into the body of the report. It would make the overall report shorter and the
information in the report more centrally located.
3. Are the conclusions drawn and/or recommendations made by the panel supported by
information in the body of the draft report? It is my assessment that the panel's
recommendations are supported by the information in the report.
In summary, I commend the panel for doing an excellent job on the review. It is thorough,
well-written and should be of great value to the agency.
3. Dr. Michael McFarland:
General Comments: In general, the SAB draft report is well written, logical and
appropriately referenced. The SAB draft report provides a clear and comprehensive
response to each of the seven charge questions posed by the Agency. In all of its
responses, the SAB Panel furnishes the Agency with a number of useful and pragmatic
recommendations that, if implemented, would result in considerable improvement in the
scientific defensibility of the Agency's use of model derived information in regulatory
decision-making.
The SAB Panel is to be commended for its highlighting of the Agency's scientific
accomplishments in preparing the "Draft Guidance on the Development, Evaluation and
Application of Regulatory Environmental Models and Models Knowledge Base", which
included acknowledging the Agency's responsiveness to earlier SAB advice on model
formulation, development and implementation. Moreover, in recognizing the range of
deficiencies in the draft guidance, the Panel has eschewed the common practice of merely
accentuating the document's technical limitations and has, in all instances, provided the
Agency with practical steps that would substantively improve the Agency's modeling
activities and those decisions that are supported by model output.
The following are my specific responses to the quality review charge questions. It should
be noted that, as a non-modeler, my comments should be seen as those of a generalist
whose knowledge of the models and modeling terminology referenced in the draft
document is somewhat limited.
Response to Charge Questions
1. Are the original charge questions adequately addressed in the draft report? The
original charge questions are adequately addressed in the draft
report. In formulating its responses, the Panel has demonstrated a broad and practical
understanding of a range of technical issues germane to the Agency's generation and
use of model-derived information in support of regulatory program decisions.
Moreover, the Panel has furnished a number of detailed and pragmatic
recommendations in its response to each of the charge questions. Finally, an
overarching and valuable recommendation offered by the SAB Panel is the
reformulation of Figure 1. In my opinion, the improvements highlighted in alternative
Figure 1 represent substantive opportunities for the Agency to establish a scientifically
defensible framework for future model formulation, development and implementation.
2. Is the draft report clear and logical? The SAB draft report provides a clear and logical
basis in identifying and describing those scientific, technical and programmatic issues
that have the potential to undermine the validity of using models and model-derived
information to support Agency decisions. The SAB draft report cover letter and
executive summary are well written and highlight those salient issues that Agency
senior management should consider in ensuring the scientific and regulatory
defensibility of decisions that are supported by modeling data and associated
information. The main body of the report provides clear, comprehensive and logical
responses to each of the charge questions. Where appropriate, the SAB Panel has
supported its charge question responses with practical examples, peer-reviewed
references and Panel member modeling experience.
3. Are the conclusions drawn and/or recommendations made supported by information
found in the body of the report? The SAB Panel's draft document has identified and
described a number of important conclusions focused on enhancing the value and
reliability of the Agency's model-derived information as well as a range of practical
recommendations formulated to address its current use and limitations. The SAB
Panel is to be commended for clearly supporting each of its conclusions and
recommendations within the main body of the report. The SAB Panel has provided
detailed descriptions of the broad range of scientific, technical and programmatic
challenges facing the Agency with regard to its current modeling programs. Finally,
the Panel's recommendations describe practical approaches for addressing a number of
critically important cross-Agency modeling issues and concerns including: 1)
uncertainty quantification and communication, 2) integration of appropriate levels of
peer review, 3) systematic model formulation and development, 4) model transparency
and 5) ensuring model output is based on the best available science.
4. Dr. Jana Milford:
General Comments: Due to time constraints, I focused my review on the panel's review of
the Draft Guidance. I did not closely review the panel's comments on the Models
Knowledge Base. With a few exceptions, I found the draft report to adequately address the
charge questions and to be generally clear and logical, and found the recommendations to
be supported by information in the body of the report. Overall, I feel the report could be
improved by redrafting, to make the recommendations and conclusions more direct. This
is most important in the letter to the administrator and the executive summary. I did not try
to suggest editorial changes, but tried to point out in my comments the places where I felt
improvement was especially needed. Two significant substantive concerns I have about
both the Draft Guidance and the panel's review are that (1) more attention needs to be paid
to the question of whether the Guidance adequately addresses (or should address) models
other than pollutant fate and transport models, and (2) more attention needs to be paid to
how to address uncertainties, such as scenario uncertainties in forecast model applications,
which are relatively difficult to quantify.
Letter to the Administrator: p. 1, line 32. I did not see the back-up for the "concern" that
"the REM vision is not matched by a commensurate, and steady, allocation of resources."
This seems like a very important concern, which warrants clear and open discussion of the
signs or consequences of this lack of sufficient resources, and the reasons for it. The fact
that the panel discusses in the introduction to its report the recommendations it made in the
1980's on regulatory modeling underscores the concern, but only in a very indirect way. If
there is a problem here, couldn't it be discussed more directly?
p. 2, lines 10-16. The point that the Draft Guidelines are not accessible to many in its
potential audience is important. This paragraph should be rewritten to state this more
clearly and directly, and to recommend that the Draft Guidelines be rewritten to be made
more widely accessible, not to recommend that the Agency "clarify" how the document
should be used.
p. 2, lines 25-31 and p. 3, lines 1-9. The recommendations made in this paragraph are
important, but not clearly or directly phrased. Could the letter state more directly that the
Guidelines need to provide more context, examples, and recommendations on appropriate
uncertainty analysis and communication of uncertainties?
Executive Summary
p. 1, lines 19-22. Same comment as above on Letter, p. 1, line 32.
p. 1, lines 24-27 and p. 2, lines 1-8. Same comment as above on Letter, p. 2, lines 10-16.
p. 2, lines 29-31 and p. 3, lines 1-9. Same comment as above on Letter, p. 2, lines 25-31
and p. 3, lines 1-9.
p. 3, lines 3 and 10. It's not clear what the panel means by "practicable". Is the term used
to mean accessible, or useful, or ... ?
p. 3, line 21. The ES needs to explain why "framework" needs to be redefined.
p. 3, line 27. What is meant by "purveyors"?
The question of how well the Draft Guidance extends to models other than pollutant fate
and transport models, which is discussed on p. 18, is important, and warrants mention in
the ES.
Report
p. 7, lines 1-2. The panel leaves us hanging. What was the outcome of the SAB's 1989
model resolution? If it's worth mentioning the resolution, isn't it worth summarizing the
Agency's response (or lack thereof) over the ensuing 17 years?
Charge Question 1. Best Practices. I found this section of the report to adequately address
the charge question, to be clear and logical, and to provide adequate support for the
recommendations and conclusions made.
Charge Question 2. Goals and Methods.
p. 16, lines 26-27 and p. 17, lines 1-15. The discussion in this paragraph seems to relate to
the concern expressed in the Letter and Executive Summary that the Draft Guidance is not
likely to be very accessible to many "users" of model results who are not modelers. I think
this is a serious concern and warrants fixing, e.g., to expand the use of illustrative examples
in the Guidance, rather than merely clarifying how different audience members might use
the Guidance.
p. 18, lines 6-20. The panel notes (and I agree) that while the Guidance could have been
meant to apply to a wide variety of models, it seems to have been developed based
primarily on literature, experience, and prior recommendations for pollutant fate and
transport models, as opposed to economic models or engineering process models. I think
this point warrants further consideration and elaboration in the panel's review. Are the
Best Practices identified in the Guidance appropriate or even applicable for models other
than pollutant fate and transport models? Or put another way, would the Guidance be very
different if other types of models had been more fully considered? The panel recommends
that the Guidance "articulate the broad range of model types to which it is to apply" and
"ensure that the guiding principles ... reflect this diversity of model types." However, I
think it may be difficult to develop concise, comprehensive, and understandable guidance
that covers the full breadth of models EPA employs. Would it make more sense to
recommend that the Draft Guidance the panel reviewed be represented as applicable to a
more limited range of models (e.g., pollutant fate and transport models and their close
relatives), with separate guidance developed for other types of models, if necessary?
Charge Question 3. Graded Approach. I found this section of the report to adequately
address the charge question, to be clear and logical, and to provide adequate support for the
recommendations and conclusions made.
Charge Question 4. Advice for Decision-Makers.
p. 31, lines 16-17. I'm glad the panel identified "scenario uncertainty" as an important
source of uncertainty in modeling that should be clearly identified in the Draft Guidance.
But doesn't this particular source of uncertainty warrant further discussion by the panel and
in the Draft Guidance? EPA's applications of models (including pollutant fate and
transport models) are often made in forecast mode (e.g., using REMSAD to examine
whether the Clean Air Interstate Rule will suffice to bring Pittsburgh into attainment with
the PM NAAQS), where huge uncertainties are associated with future economic,
regulatory, and physical conditions. Quantitative uncertainty analysis techniques that are
tractable for model parameters and inputs developed for historical conditions may not
work well for "scenario uncertainties." Yet if these scenario uncertainties are significant, a
complex and expensive QUA that focuses only on model input uncertainty would have
little meaning for decision-makers. The panel suggests something along these lines on p.
39, lines 16-17, when it recommends that "the REM Guidance be clear on the types of
model uncertainty that most QUA tools address." However, I think the point needs more
explicit articulation and emphasis. Additionally, the panel might be able to significantly
assist EPA by pointing the Agency to best practices for dealing with scenario uncertainty.
My colleague, Roger Pielke, Jr., argues that a large part of the reason we academic
modelers have had such a difficult time getting practitioners who have real decisions to
make to utilize formal uncertainty analysis techniques is that these techniques often fail
to address the most critical uncertainties in real-world decisions: those having to do with
uncertainty in forecasts of socio-economic and technological trajectories.
p. 32, lines 2-7. I'm not sure what the panel means by the recommendation that the
Guidance "advise modelers to begin model development or use only after they have
obtained an awareness of how a decision maker plans to use the information on uncertainty
that they will be providing." Is the point that modelers need to understand how uncertainty
in model results factors into decisions about a particular issue, and take that into account in
selecting or developing and applying a model? In any case, could the recommendation be
phrased more directly?
Charge Question 5. Identification and Structure of Optimal Information. This section of
the report is clear and adequately responds to the charge question.
5. Dr. Lauren Zeise:
This report is well done. The original charge questions to the SAB Panel were adequately
addressed in the draft report. Overall, the draft report is well constructed, clear, logical, at
just the right level of detail for the type of document reviewed, and the quality of the
commentary is excellent. The conclusions drawn and recommendations made are
supported in the draft report text.
Specific, mostly editorial comments
The letter makes the important point that the use of increasingly complex quantitative
uncertainty analysis without a sophisticated framework for decision-making and
communication may only make decision making more challenging. It then emphasizes the
report's practical advice for guidance to the modeler, which is fitting for the SAB panel
report. However, I wonder whether the letter would be the place to point out to the
Administrator the need to develop risk management frameworks that might be better able
to cope with the results of uncertainty analyses. The report takes the existing decision-
making as a given, but perhaps the letter need not. In this transmission letter from the SAB
chair and REM Panel chair the observation could be made that this appears to be an area
where efforts are sorely needed.
The end of the Introduction to the Panel report needs a punch line to tie the REM report to
the series of recommendations and bring the reader back to the issue at hand, the review of
the REM report.
The Panel stresses the importance of post-application audits and recommends that this
function be given a section of its own under model application. Alternative Figure 1 on
page 14 shows the audit on the public policy process side, as part of a policy observation
box, with an arrow leading into the problem identification and stakeholder boxes. While
this is a bit of a contradiction with the text, it is a logical spot to refer to it. But it could
have its own box on the Model Development and Application side of the figure, perhaps
with a dashed arrow leading into it and arrows going from it to the model identification
and development boxes, since there would also be a significant science effort involved in
the audit.
Regarding the discussion at the end of page 29, the panel takes as a given the current
decision making framework and does not take on the issue that work on decision-making
frameworks would enable better use of uncertainty information in decision-making. The
panel report calls for communication among modelers, risk managers, and
stakeholders regarding how they view scientific uncertainty and how they would like to
see it expressed, and that should help produce more effective uncertainty assessments. However,
a general coordinated and formalized approach toward use of uncertainty information by
decision makers seems needed, beyond the problem specific approach suggested by the
Panel. This may be a bit beyond the scope of the Panel review though.
Letter, Page 2, line 3. I would add "advocacy groups" and "general public" to the list, or
use the groups named in the asterisk to Alternative Figure 1.
The Panel makes the important observation that the complexity of the optimal modeling
framework depends on the problem specification and resource constraints and goes beyond
Figure 2 in the REM report. The sentence on page 8 at lines 18-20 is a bit hard to take in. I
think it may be better to italicize "for the problem and available resources" than "the best
available, practicable science," to emphasize the point being made.
Page 9, line 10. "encourages the document to urge" - the wording is a bit awkward.
Page 11, lines 1-2. Suggest adding another sentence indicating the nature of the
clarification that the Panel is seeking.
Page 18, line 12. Suggest adding in "ecological" and perhaps "fate and transport" and
taking out "scientific," which is overly broad.
The report gives a fairly comprehensive treatment to model uncertainty. The advice on the
other three sources of uncertainty listed on page 31 is more limited. Structural model
uncertainty is addressed at different places in the Panel report. The Panel's message/advice
on treatment of structural model uncertainty in the REM report may be more effective if
placed in a separate section.
In the Panel report, it probably would be better to define model (structure) uncertainty as
something like "structural model uncertainty." The term model uncertainty is being used to
mean this but also the overall uncertainty, and perhaps in one place model input
uncertainty.
Page 39, line 20, the word "necessarily" seems to be missing. Mismatches of observations
and model simulation can signal problems in the modeling effort.
p. 25, lines 16-17. It is unclear whether the square-bracketed "US EPA" is a placeholder
to remind the writer to spell out a title.
6. Dr. Trudy Cameron:
It is desirable, to the extent possible, to require a standardized method for
documenting and archiving the myriad different models used in formulating environmental
policies. However, I am somewhat concerned that there will always be a percentage of
models that cannot easily be shoe-horned into a standard format. Perhaps it would be
sensible to allow for non-conforming models to be flagged as such and to permit variances
from standard documentation protocols whenever the benefits from standardization do not
seem to outweigh the costs. I have in mind the difficulty of adapting a protocol to an
atypical model. For example, it is likely that the attributes that convey a conceptual
description of an economic model might not correspond exactly to the attributes that
convey a conceptual description of a model of fate and transport. Will there be an "escape
clause" that permits sufficiently non-conforming models to reported differently, if
necessary? One can only waste so much energy forcing a square peg into a round hole.
Details:
The Glossary might be improved by including not just definitions of terms as they are used
in the MKB, but explanations where these same terms have different meanings in
particular disciplines, so that any potential confusion is cleared up when this is the case.
p. 4, line 19: "meaningful" is never a very illuminating adjective. "...the allocation of
sufficient resources..." would be better.
p. 6, line 10: "identify key areas [needing additional] study"
p. 6, line 15: What are these "[model] development and application skills"?
p. 6, line 18: if models used now are not personal computer-based models, what are they?
Supercomputer-based? Are mainframes still the rule? Unix workstations?
p. 8, line 16: Make it clear that there is a z-direction (if there is one). Merely mentioning
"...the x- and y-directions) in the uncertainty versus model complexity curve" is
confusing.
p. 8, line 18: "...Panel believes that when a model['s] complexity is ..."
p. 8, line 21: "...whether the guidance [that has been] provided [does in fact aid] the modeler
in finding..."
p. 9, line 11: Define QAPP in the text the first time it is used. Readers who forget the
definition after its first use can refer to a glossary.
p. 9, line 13: not just after the model['s] application.
p. 9, line 14: Why would crucial technical errors or omissions be difficult or impossible to
rectify after the project is over? Is a model only ever presented as a fait accompli?
p. 9, line 23: The Alternative Figure 1 [represents] the same general logic...
p. 10, line 7: there should probably be a hyphen in "model-based" as an adjective for
decision making...
p. 10, line 14: most people mean "costs" when they refer to "economics." Economists
prefer to reserve the term "economics" as a shorthand for "the study of the allocation of
scarce resources among competing end uses." If you mean "costs," use "costs."
p. 10, line 26: "appropriate [temporal and spatial] scales, [user acceptance of the model],
and very importantly, the degree of accuracy..."
p. 12, line 1: by "model calibration," do you mean the same thing as "ground-truthing with
empirical data" (as in using statistical techniques with data to estimate unknown
parameters that may merely be given assumed values in other instances)?
p. 12, line 13: likewise, does "parameterization by calibration" mean "empirical estimation
of model parameters"?
p. 12, line 23: by "post-auditing," do you mean ex post validation via forecasting or
backcasting of predicted values, achieved by estimating a model based on a subset of the
data and using it to predict, out-of-sample, some realized outcomes that have actually
already been observed?
p. 19, line 27: "the current terminology used to describe ["the] graded approach["] needs to
be clarified.
p. 20, line 24: The "or not" is rarely necessary. "It is unclear whether this is assumed to be
part of the overall modeling project documentation."
p. 22, line 16: "...be introduced earlier in the document[,] before the discussion of model
development, as [an example of an] overarching [concept that is relevant to] all of the
modeling stages.
p. 22, line 24: "...or that screening models are used[, where appropriate,] instead of
more[-]complex models.
p. 23, line 6: "i.e. what is the simplest [construct] to be considered as a ["model"] in the
REM Draft Guidance... Models Knowledge [B]ase..."
p. 23, line 23: "This level of deeper model evaluation [would also] be appropriate when
[attempting to transfer a model] to unique or extreme [circumstances, relative to those
wherein it has previously been used.]
p. 24, line 8: "potentially litigious applications" sounds strange. How about "using models
in applications where the results may be contested in a court of law."
p. 24, line 20: "relative reduction factors and ensemble modeling" may not be terms that
are globally familiar to all readers. As an econometrician, I find them foreign.
p. 24, line 26: by "its ability to replicate historical situations" do you mean "within-sample
predictive validity"? Or do you mean out-of-sample "backcasting"?
p. 26, line 7: "Just because individual modeling components are behaving properly does
not necessarily mean that the full system will provide authentic overall analyses." Perhaps
use "Just because the separate components that can be linked together to form an overall
model each seem to be performing properly does not necessarily mean that the overall
model will make reliable predictions."
p. 26, line 8: perhaps use "countervailing errors"
p. 26, line 17: perhaps use "can be mutually offsetting" rather than "can counterbalance
each other"
p. 26, line 20: "[However,] the fundamental flaws in the model['s] formulation [may cause
it] to respond incorrectly to [other simultaneous] changes in the inputs..."
p. 29, line 20: "... and are not intended as a substitute for the [sometimes] hard task of
selecting the 'right' answer. [A degree of regulatory discretion may still be required.]"
p. 30, line 12: "To [some stakeholders,] expressions of uncertainty can be [interpreted] as
an indication that experts "don't know."
p. 30, line 26-27: Is uncertainty defined as the degree of statistical precision in the
estimated model parameters (embodied in the parameter variance-covariance matrix)?
Sensitivity analysis seems to concern things such as the selection of a functional form for
the model. However, what is meant by "model factors"? This term is unfamiliar to me.
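If parameter precision is the intended meaning here, it might be made concrete along
these lines (ordinary least squares on synthetic data; everything below is an assumed
example, not the Guidance's method):

    # Hedged illustration: parameter uncertainty as the OLS parameter
    # variance-covariance matrix. Data and model form are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x = rng.normal(size=n)
    y = 1.0 + 3.0 * x + rng.normal(scale=2.0, size=n)

    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.solve(X.T @ X, X.T @ y)    # point estimates
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])   # residual variance
    vcov = sigma2 * np.linalg.inv(X.T @ X)      # variance-covariance matrix
    print("estimates:", beta)
    print("standard errors:", np.sqrt(np.diag(vcov)))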
p. 31, line 20: Do "implications" surround things? Perhaps you could just say "propagation
of uncertainties in each component through a set of linked models" (since linked models
have been introduced previously).
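A minimal sketch of what "propagation through a set of linked models" could mean in
practice (two invented components, Monte Carlo draws on one uncertain input):

    # Hedged illustration: propagating input uncertainty through two
    # linked components by Monte Carlo. Both components are invented.
    import numpy as np

    rng = np.random.default_rng(2)

    def emissions_model(activity):
        # Hypothetical component 1: activity level -> emissions.
        return 0.8 * activity

    def concentration_model(emissions):
        # Hypothetical component 2: emissions -> ambient concentration.
        return 1.5 * emissions ** 0.9

    activity = rng.normal(100.0, 10.0, 10_000)   # uncertain upstream input
    conc = concentration_model(emissions_model(activity))
    lo, hi = np.percentile(conc, [5, 95])
    print(f"mean {conc.mean():.1f}, 5th to 95th percentile {lo:.1f} to {hi:.1f}")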
p. 31, line 24: "too[-]brief advice"
p. 31, line 26: How do you "perform a problem formulation"? Perhaps just "Much more
emphasis must be placed on robust and iterative problem formulation. This process must
involve modelers, decision-makers, and stakeholders. More emphasis must also be placed
upon accurately conveying model results using non-technical (and often non-quantitative)
language that is fully accessible to all interested parties."
p. 33, line 10: "...thereby identifying the uncertainties that [may] matter."
p. 34, lines 9-12: There is duplication of the sentence beginning "It would be very useful
to have a 'Box' example." The two versions have different endings, so it is not clear
which one was intended to be employed.
p. 35, line 16: "Bay[e]sian"
p. 36, lines 16 and 20: "proscriptive" means "specifically forbidden," or something to
that effect. I believe you intend to use "prescriptive"?
p. 36, line 18: "e.g. [for a] modeler within the regulatory community..."
p. 36, line 23+: "For example[,] the Panel is aware of the extensive uncertainty analysis ....
While it is clear that this one example should not be taken to [define a universal standard
for] QUA, the MKB would provide ... such examples with [each instance described by] the
nature of the QUA .... This would provide at least some [prototypes] that model users and
decision makers could [consider (]beyond the cited statistical references[)]."
p. 37, line 4: The appeal of QUA is that it can be used to provide quantitative estimates of
the "degree of confidence" [to be placed in] model results [when they are used] as a
component of regulatory decisions. Nevertheless, [QUA] results should be presented with
some caution. It might be tempting [to attribute] a high degree of confidence [to an]
uncertainty analysis [if it is a] highly elaborate or complex analysis. [However,] the
validity of the QUA is of course dependent on the quantity and quality of the information
[employed in] the analysis. The choice of [an] appropriate QUA method ... effort to
conduct [ ] various types of QUA. As compared to the REM, the guidelines [for QUA]
do not contain a similar set of "best practices" [concerning how to evaluate, present, and
incorporate] model uncertainty [into] decision-making.
p. 37, line 17: ...recommendations [ ] to provide a model-user/decision-maker...
p. 37, lines 26-28: This is too hard to understand. I am not persuaded that massaging the
data can "avoid or cancel out systematic biases in the model formulation". What is meant
by the distinction between "observed (measured) conditions" and "absolute predictions"?
p. 37, line 28: "A third [possible] approach [for] dealing with uncertainty is [to use]
"ensemble modeling"..." Given the quotations around "ensemble modeling," it seems that
this is intended to be the initial description of these methods. However, the subject was
already mentioned on page 24, without a definition. The order of these events should thus
probably be changed.
p. 38, line 2: When you refer to a "composite" of the results, is this actually some sort of
meta-analysis of the range of possible predictions across the range of possible models?
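One simple reading of such a composite is sketched below: the same scenario is run
through several alternative (here invented) model forms, and the spread of predictions is
reported alongside their mean:

    # Hedged illustration of a "composite" across an ensemble of
    # alternative models: report the mean and spread of predictions.
    import numpy as np

    def model_a(x): return 2.0 + 0.50 * x   # invented alternative 1
    def model_b(x): return 1.5 + 0.60 * x   # invented alternative 2
    def model_c(x): return 2.5 + 0.45 * x   # invented alternative 3

    x0 = 10.0                                # scenario of interest
    preds = np.array([m(x0) for m in (model_a, model_b, model_c)])
    print(f"composite mean {preds.mean():.2f}, "
          f"range {preds.min():.2f} to {preds.max():.2f}")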
p. 38, line 4: "... [may be] worth considering for applications or decisions involving
extreme cost [or the reduction of very large] risks." Do you actually mean that they should
be used when the benefits do not so vastly exceed the costs that the choice is a no-brainer,
even if benefits and costs are inaccurately measured? Or are you thinking only about the
types of regulations that involve a safety standard (as defined by current science) and the
only question is how to achieve this designated standard at the lowest cost to society?
p. 38, line 4: "These [candidate] approaches could be included, among others [as part of]
the REM Guidance to provide decision-makers [with some] practical examples of methods
[for] incorporating uncertainty in the decision framework."
p. 38, line 12: "...to the decision-maker (and [to the] public/stakeholders) should
[probably include] a range of [illustrative] examples [ ]. Again the MKB may be useful as
an [archive for] such examples."
p. 38, line 17: As the analyst/modeler and [the] decision[-]maker are usually not the same
individual, it is important [that any results should be accompanied by] the key assumptions
and caveats [embodied] in the analysis.
p. 38, line 21-22: "activities that [may] be most beneficial .... [It is often the case that]
only a relatively small subset of inputs is responsible for a majority of the variance in a
model['s] output."
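The point that a few inputs often dominate output variance can be illustrated by a crude
screening exercise (the model and input distributions below are invented): freeze each
input at its mean and see how much of the output variance disappears:

    # Hedged illustration: apportioning output variance to inputs by
    # freezing one input at a time. Model and distributions are invented.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 10_000
    inputs = {"a": rng.normal(1.0, 0.50, n),
              "b": rng.normal(1.0, 0.05, n),
              "c": rng.normal(1.0, 0.05, n)}

    def model(a, b, c):
        # Hypothetical model; input "a" dominates by construction.
        return 3.0 * a + b + 0.5 * c

    base_var = model(**inputs).var()
    for name in inputs:
        frozen = dict(inputs)
        frozen[name] = inputs[name].mean()   # remove this input's spread
        share = 1.0 - model(**frozen).var() / base_var
        print(f"variance share of {name}: {share:.2f}")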
p. 38, line 25: "Broader approaches [to] risk communication...."
p. 39, line 9: "uncertainties arise as a result of [the models] that produced the data..."
p. 39, lines 10-11: "... of comparing environmental data (collected at a particular point in
time and space) to a model prediction based on..."
p. 39, line 22: "...concept to decision-makers who may [be inclined to discredit] modeling
results if the comparisons between observations and [model predictions] are less than
perfect."
p. 39, line 26: "In some cases[,] these uncertainties [may actually] be more significant than
the uncertainties [introduced by] the modeling itself."
p. 41, line 21: "...and basic [instructions] for obtaining and using the model."
p. 42, line 7: "The information [solicited] in the current data entry sheet [covers] most of
the critical elements..."
p. 42, line 10: "... subcategories of information should [probably] be added to the data
entry sheet."
p. 42, line 14: "1. Model Name [and Acronym]"
p. 42, line 16: "3. Contact Information [for Model Custodian]"
p. 43, line 10: "access, download, and use [an already-compiled and] executable version of
the model."
p. 43, line 12: "...must be obtained or licensed [to permit] use of the model."
p. 43, line 15: "section of the data entry [process], the Panel believes..."
p. 42, line 21: "...explicitly ask for this information as [a specific item in] the data entry
sheet."
p. 45, line 1: "... that were identified [during] these evaluations should be reported..."
p. 45, lines 4-5: "...Benchmarking studies in which the model's predictions and/or its
accuracy are compared with [those/that of] other models."
p. 45, line 12: This seems an odd use of the term "criteria." Do you mean
"characteristics"?
p. 45, line 23: ".. .not appropriate for models that address economic activity, behavior, and
emissions [outputs]." What kind of "behavior" is implied in this statement? Do you mean
the behavior of households, or firms? Do you mean adaptation by firms in the face of
changing relative prices as a result of regulation (e.g. input substitution)? By models that
"address" economic activity, do you mean computable general equilibrium (CGE) models
that capture not only the initial impact of regulations, but also the subsequent rounds of
effects as changes are propagated through markets that are interconnected?
p. 46, line 8: Is the intention in this passage to encourage a data-entry format that invites
model proprietors to be ambitious in defining the broadest possible "market" for their
models?
p. 46, line 21: due to concerns regarding [threats to "]drinking water quality at the tap["]
from accidental contamination [(e.g. the Walkerton incident in Canada) or terrorist activity
(e.g. Homeland Security)...].
p. 47, line 23: "and [should direct] site users to specific examples ... in the public
record and[/or in] the peer-reviewed scientific literature."
p. 48, line 18: "...could reconstruct and rerun that [same] version of the model at a
later time..."
p. 48, line 27: The form might provide fields for "supersedes X," with a link to that older
version of the model.
p. 51, line 26: "useful advice", rather than "useful advise"
p. 52, line 7: "...for the documentation effort [ ] very few of the terms in the Data
Dictionary [are] repeated there."
p. 53, lines 6-9: This last sentence is awkward and runs on; perhaps it could be split
into two sentences.
p. 53, lines 13, 16: Rather than using the word "bias" which also denotes a very specific
statistical concept, would it be possible to use the notion of "apparent advocacy"?
p. 54, line 24: Has the term "preconceptual bias" been invented at this point in the review
document? Or is it standard usage in some discipline other than those with which I am
familiar? Just as Amazon.com offers reader reviews and ratings, along with some
information about the experience of the reviewer (I believe), this database could also offer
opinions about models. If the name of the reviewer is not given, perhaps just some
identifying information about job category and professional experience could be elicited
to accompany the opinion.
p. 59, line 23: "As a result, [it is difficult to obtain (from this tool alone) a] sufficient level
of detail about scales of data used and assumptions made during the formulation of any
specific model in the MKB."
p. 61, line 6: "developed their own [clearinghouse] for models"
p. 61, line 14: "developing common model documentation [protocols]..."
p. 61, line 25-26: "may be necessary for the Agency to provide additional [incentives (or
penalties)] as part of their plan to encourage what is currently a voluntary effort by
modelers to put their [models] in the MKB."
p. 62, line 3: Perhaps the job would be better described as that of an "archivist" rather than
a librarian.
p. 63, line 12: "The level of detail [about what?] provided by each model...."
p. 63, line 24: "was not identified with the [keyword] search using the phrase
p. 64, line 28: "[However, a real understanding about] how a given model works and what
are its specific strengths and weaknesses would appear to require...."
p. 65, line 4: "return link from [the] exit disclaimer page [actually sends] the user to the
[keyword] search page."
p. 65, lines 6-8: [Perhaps these navigational inefficiencies are merely an artifact of the
somewhat bewildering array of models and their varying characteristics.]
p. 65, line 24: [The MKB currently limits contact information to that for a single
responsible individual. It does not provide any suggested format for comments. Neither
does it provide for open dialogue or discussion of different users' modeling experiences.]
This seriously limits the Agency's ability to adapt the MKB and improve its utility.
{Why?} The lack of an open forum also limits [opportunities for] model developers [to
take advantage of feedback from model users ...]
p. 66, line 11: "... and as such should clearly follow the [principles established] in the
Guidance on Environmental Models."
p. 66, line 14: "If such a model selection tool is developed, it will likely be used early in
the life of a project. [At this stage, it would be difficult to identify specific needs or to
assess the tradeoffs among these different needs in a way that would facilitate a ranking of
models.]"
Incomplete references are noted in the document. I assume that these will be supplied.
p. 73, lines 20-23: This is too jargon-laden. What are "process issues"? What is "kinetic
resolution"? "... should take/assume a secondary posture"? What is a secondary posture?
Perhaps it would be best to describe "first-order problems/issues" and "second-order
problems/issues" (or something to that effect).
p. 79, line 22: "how model results have [fared] in actual decision-making."
p. 80, line 11: "considered to be preferred or acceptable alternatives to preferred models?"
This needs to be fixed. It makes no sense this way.
p. 80, line 15: "beyond its scope" {beyond the scope of what?}
p. 80, line 24: "As indicated in the [Panel's] Report..."
p. 81, line 6: "The top page of [the] CALPUFF model developer's website...." Do
websites have top pages, or main pages?
p. 82, line 16: "Guideline[s] on Air Quality Models"?
p. 82, line 30: "... is intended for use on [geographical] scales from tens of meters[, to
hundreds of kilometers,] from a source. However, it does not mention...."
p. 83, line 1: "...a simulation range that does not include important short-term
phenomena..." {Is an accidental spill always a short-term phenomenon?}
p. 83, line 5: "As indicated in the Panel's report, especially important information [about
models] that should be [archived] in the MKB includes (i)..."
... past evaluation[ ] (especially cross-evaluation) studies ... [more-detailed] and
[more-consistent] information needs to be included in the MKB. {Or, do you mean more
exactly conformable information?}
p. 83, line 14: The role of the EPA as the "model contact" is not clear for the feedback
forum. {What does this mean?} The appropriate or desired role of the model contact[,] as
either an internal (Agency) or external (public) interface for the model[,] should be made
clear.... {I'm not sure what this means.}
p. 83, lines 24-26: "it is not unusual for busy modelers to get phone calls from graduate
students wanting help running complex environmental models for thesis projects." {What
are we to impute about the relative social values of the time of the "busy modelers" and
the "graduate students"? Is this a presumption that graduate theses represent ventures with
no redeeming social value?} This passage suggests that model developers are not
responsible for explaining their models (at least somewhere) at a level that can be
understood by graduate students. Even if modelers are "busy," some of them enjoy public
funding in order to develop their models. In these circumstances, especially, there is some
obligation to create a useful product that can be readily understood by others with
adequate training or experience.
p. 84, line 1: "The [write-up] on [the] IPM..."
...This is sufficient as long as the appropriate items are covered [in] sufficient depth.
... Page 2-5 of the IPM Model Documentation begins a section ... is determined). This
section could be simplified and incorporated into the MKB to {bring the reader one level
further down in detail}. {What does this last bit mean? Could it be expressed differently?}
p. 84, line 24: Why does the fact that IPM is entrenched in the Air Office imply that it
would be "unlikely to attract 'new model developers'"? How is 'new model developer'
defined?
p. 84, line 31: "A high[ly] spatially resolved model..."
p. 85, line 9: What is meant by "time step"? Does this mean "degree of time
disaggregation" or "periodicity of the time-series information"?
p. 85, line 17: "...outside the scope [of what?]."
p. 85, lines 20-22: [However, two aspects of the documentation of the IPM are currently
inadequate: (a) there should be a better forum for feedback concerning uses of the model
outside
Agency applications, and (b) there should be a better way to collect suggestions from
external users for updating or improving the structure of the model.]
p. 86, line 4: "... very helpful [if one goal is to eliminate duplication of effort. Processes
represented in the model are well-documented] in the MKB and the associated ..."
p. 86, line 16: "...limitations [on] the [model's] use [as a consequence of] these
assumptions...."
p. 87, line 13: "...describing the [model's] conceptual basis..."
p. 88, line 13: Is the "Scientific Editor" envisioned as being the same person as the
"Librarian" mentioned earlier (or the "Archivist" I suggested above)? Are resources
available to have multiple specialists minding the MKB?
p. 88, line 24: If there are vast numbers of accesses to a particular model description in the
MKB, this may indeed serve as an incentive for the model's proprietors to update the
information in the MKB. However, if there are only a few accesses, this information may
serve as a disincentive. Perhaps if accesses are lower than some threshold, this information
is not very helpful to the goal of encouraging regular updating.
p. 88, line 28: "The user community may provide a very effective policing mechanism to
maintain model quality, especially when money is at stake." {Please elaborate a little.
Whose money is at stake? Are costs to be borne by the user community? By the public
(consumers, workers, investors)? If self-policing is to be relied upon, it is necessary to
determine who gains, and how much, by this policing, as well as who bears the costs of
this policing.}
p. 89, lines 3-4: The report should tread very cautiously in suggesting that the MKB should
be privatized. This is a public good that has value to individual users, but this value is
probably not commensurate with the fixed costs of maintaining the whole MKB system.
This is why public goods, in general, are sources of market failure. Even a "non-profit
organization" does not necessarily have a sufficient incentive to provide public goods.
B. OTHER BOARD MEMBER COMMENTS:
Dr. Myrick Freeman:
I have read the Draft SAB Panel Report on the Agency's Draft Guidance. In my
judgment, the answers to the three charge questions for reviewers are "Yes, Yes, and Yes."
I did note three minor editorial changes:
p. 5, line 23-4: the reference to 2 1/2 decades, apparently since
1989. By my count this would be 1 1/2 decades.
p. 34, line 8: I think "distribution" should be singular.
p. 37, line 24: "is to use of the ..." should be either "is the use of
..." or "is to use the ..."
Dr. James Johnson:
My biggest concern with the report is that the introduction section leaves the reader hanging.
At a minimum it should include the footnote on page 1.
The second concern is the use of calibration in the text and corroboration in alternate
Figure 2.
Dr. Cathy Kling:
I've read the review panel's report on the "Draft Guidance on the Development,
Evaluation, and Application of Regulatory Environmental Models and MKB." This is a
very well done review. It is clear and comprehensive.
I have a single comment that the committee is welcome to take or leave:
I found the commentary in the introduction section entitled "Background Material"
to be odd and somewhat out of place. It read to me as complimenting the agency
on taking the advice of the SAB, and as being self-congratulatory about the
importance and impact of the SAB's previous work (the material about the EEC's
Modeling Resolution). I'm not sure what connection there is between this section
and the remainder of the report.
Again, I think the overall report is very well done. I especially like the material concerning
the treatment of uncertainty and the role of models in decision making.
Dr. Granger Morgan:
Overall the review looks to me to be in very good shape.
I am concerned that the current discussion in the review suggests that whether and to what
extent a model should incorporate an analysis and treatment of uncertainty should be
entirely driven by the analytical sophistication of the decision makers and the extent to
which the current regulatory decision framework allows for a consideration of uncertainty.
While I certainly agree that these factors should be a consideration in the choice of the
level and nature of the treatment of uncertainty that is undertaken, I do not believe that they
should completely dominate.
If a problem involves considerable uncertainty it should not be completely ignored or
suppressed simply because decision makers are not sophisticated in thinking about
uncertainty, or would be bothered to learn that there is uncertainty. Such suppression is a
recipe to keep naive decision makers naive, and inadequate regulatory decision
frameworks, inadequate. Followed strictly, such advice would slow, or perhaps even begin
to reverse, the dramatic progress the Agency has made over the past three decades in
thinking about and dealing with uncertainty.
Rather, I would like to see the discussion on pages 29 and 30 (and in the executive
summary) reworked to indicate that while the level and sophistication of the treatment of
uncertainty should be appropriately matched to the problem at hand, and to the way the
results will be used, whenever uncertainty is an important element in a problem, it should
at a minimum be acknowledged and receive some basic quantitative analytical treatment. I
do very much agree that analytical sophistication for its own sake should be avoided.
I like the distinction that is drawn between different kinds of uncertainty on page 31 of the
SAB draft review. To my quick reading of the EPA document itself, I did not see any
serious discussion of what to do about "model (structure) uncertainty." I urge the review
panel to suggest that some discussion of this topic be included in the EPA document. In
many cases, this source of uncertainty swamps all others, and yet is not considered or
discussed, even in qualitative terms.
Finally, I ask the review committee to take another look at Figure C.5.1. The pie diagram
does not make sense to me given the shape of the response surface shown. Also, it looks to
me like the orientation of the plane in Figure C.5.2 should be rotated to correspond to the
slope of the response surface. At the moment it is not properly aligned, making it very
hard for a reader who does not already understand, to figure out what is intended.
Dr. Kathleen Segerson:
I have only very small comments on the draft SAB Panel Report: (1) The Introduction
provides background information on the Modeling Resolution, but doesn't explicitly link
that effort to the current efforts under review. I suspect that the current effort grew out of
the recommendations on p. 6, but that is not stated explicitly. Providing some context
linking the two efforts would be helpful.
(2) I particularly applaud the report's discussion of uncertainty, including the need to
identify how the information about uncertainty will be used and the distinction between
sensitivity analysis and uncertainty analysis. A small comment on this latter issue: on p.
32, lines 25-26, the report states that "the discussion in Section C.5.5 relating to Monte
Carlo analysis currently reads more like a discussion of uncertainty analysis, rather than
sensitivity analysis." Perhaps this statement needs more explanation, since many
economists (myself included) view Monte Carlo analysis as a form of uncertainty analysis.
(3) The report notes in several places that the criteria and discussion included in the EPA
draft documents seem to focus on models for pollution fate and transport and exposure. It
notes the need to consider other models, such as economic models designed to predict
behavior and the resulting emissions or other environmental impacts. I would agree that
the modeling guidance and knowledge base need to include these other types of models,
which can be important in regulatory as well as other settings. I would add another
category of models that might also be considered for inclusion, namely, ecological models.
There is increasing interest in the ecological impacts of EPA actions (see the CVPESS
work) and a need for ecological models (e.g., ecological production functions) that can
predict, for example, how a given water quality change will affect a fish or insect
population.
(4) I think the question of the selection criteria to be used in deciding what to include (or
not include) in the MKB is key. The SAB Panel report notes the need to identify criteria
(p. 63) but doesn't suggest what those criteria should be. Can the panel give EPA any
advice on this?
(5) The Panel notes the need to provide incentives to encourage the voluntary effort by
modelers to put their models in the MKB (p. 61). Does the Panel want to recommend that
this be a requirement for models developed under EPA funding (e.g., STAR grants)?
(6) A minor editorial comment: In several places, the word "however" is used as a
conjunction (synonymous with "but" in the middle of a sentence) rather than as an adverb
(e.g., p. 2 line 21-22). I've always thought that this is not grammatically correct.
Dr. Deborah Swackhamer:
1. Have the original charge questions to the SAB Panel been adequately addressed in the
draft report? The Charge Questions have been addressed very well by the committee's
report, and in fact in many cases they have gone beyond the Charge Questions (this isn't
necessarily bad, just an observation).
2. Is the draft report clear and logical? The report is generally clear and logically organized. I
found the Letter to the Administrator to be too long - it is the same length as the Exec
Summary, and has the same tone, where in fact they should be oriented to different
audiences. The Exec Summary would benefit from having a sentence or two that says that
the Committee was asked to address 7 charge questions. The introduction would greatly
benefit by telling the reader that there are 7 charge questions and that the report is
organized to address each of them in subsequent chapters. The chapters themselves (esp 1-
7) would benefit from having the recommendations summarized up front; there is a
tendency for the report to meander.
3. Are the conclusions drawn and/or recommendations made by the panel supported by
information in the body of the draft report? Absolutely. This is a very well done and
thorough report. The recommendations and discussion are supportive of the overall
effort, yet highly constructive. Each recommendation is fully discussed in the body of
the report.
Dr. Valerie Thomas:
1. Were the charge questions adequately addressed? Yes.
2. Is the draft report clear and logical? The Introduction of the Draft Report presents the
1989 SAB Modeling Resolution recommendations. However, there is no clear discussion
of the extent to which the EPA has achieved these resolutions. Nor is there discussion of
whether the current draft report is a reprise of that Resolution or is focusing on different
issues. This makes the draft report unclear; it is difficult to find the logical connection
between the Introduction and the rest of the report.
The connection needs to be made between the 1989 SAB Modeling Resolution, the
resulting developments at EPA, and the current SAB review. The one small connecting
link is on p. 9 lines 2-3: "the Panel finds that the Agency has been responsive to previous
SAB advice on modeling practices." What does "the Agency has been responsive" mean?
That the Agency followed all the recommendations? Some of them? Which ones? Or
simply that the Agency responded to the Modeling Resolution with a letter or comments?
In at least some cases, the draft report goes farther than the 1989 Modeling Resolution.
For example, recommendation 3 of the Modeling Resolution, on model validation (p. 6 line
6), does relate to the discussion of Model Post-Audit (p. 12) although the Post-Audit
discussion addresses models of system change, which seems to be beyond what was
considered in the 1989 Resolution.
It would be helpful to have a clearer statement of whether the EPA
and the SAB have now moved beyond the recommendations of the 1989 Modeling
Resolution, or whether EPA and SAB are still working to address those issues, or whether
this draft report addresses a largely distinct set of modeling issues.
pp. 54-55. The discussion of "inclusion of additional information on model performance"
(p. 54 line 20 - p. 55 line 5) is not clear. The meaning of p. 54 lines 2-4 is not at all clear;
perhaps these paragraphs should be cut.
3. Are the conclusions and recommendations supported by information in the body of the
draft report? The draft report states (p. 1, lines 19-22) that "the panel is concerned that the
REM vision is not matched by a commensurate, and steady, allocation of resources on the
part of the Agency. It is therefore recommended that the Agency provide a meaningful
commitment of resources to the REM initiative." No information in the body of the draft
report addresses the allocation of resources to the REM initiative. In Appendix C, there is
the suggestion of the need for oversight and for a Scientific Editor; on p. 89, there is the
suggestion that EPA might be better off turning the MKB over to the private sector. If this
Appendix C discussion is the basis for the recommendation for more resources, it should
be moved up into the main body of the draft report.
Dr. Robert Twiss:
I concur in the REM report (with deference to conclusions that might be raised in the call).