UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
WASHINGTON D.C. 20460
OFFICE OF THE ADMINISTRATOR
SCIENCE ADVISORY BOARD
November 25, 2008
EPA-SAB-09-005
The Honorable Stephen L. Johnson
Administrator
U.S. Environmental Protection Agency
1200 Pennsylvania Avenue, N.W.
Washington, D.C. 20460
Subject: Consultation on Pollution Prevention Measures
Dear Administrator Johnson:
EPA's Office of Pollution Prevention and Toxics (OPPT) currently uses a suite of
measures to evaluate the effectiveness of EPA's pollution prevention (P2) programs.
OPPT requested that the Science Advisory Board (SAB) provide consultative advice on
sound methodologies for measuring P2 program performance results, particularly
recurring results. In response to OPPT's request, the SAB Environmental Engineering
Committee (EEC), augmented with additional SAB members, conducted a consultation
on this subject at a public meeting on September 3-4, 2008. (See Enclosure 1 for the
Committee's roster.)
The Committee commends OPPT on the creative development and substantial
progress of the seven P2 programs. These programs have resulted in avoidance of
emissions, energy consumption, and resource depletion and have provided other benefits
documented by the metrics developed by OPPT. The P2 programs are among the most
forward-looking and important programs of the EPA, defining a new model for
environmental protection and restoration based on voluntary initiative that is prompted by
incentives for cost savings and higher product and process performance. These are also
among the most difficult programs to implement: they depend on voluntary participation
by partner organizations; the data needed to assess individual programs are not readily
available; the information can be sensitive because its disclosure could advantage
competitors; and effective approaches are often specific to a particular product or
process. Measuring progress achieved at the program level is challenging because, for a
variety of good reasons, results are measured in different ways and on different time
scales. As a consequence, results are often difficult
to compare and aggregate. Nevertheless, the P2 programs of OPPT have made
significant progress, and the metrics developed to date at least partially document this
success.
An SAB consultation is a mechanism for individual technical experts to provide
comments for the Agency's consideration early in the development of a technical
product. A consultation does not require achievement of consensus among the
Committee members or preparation of a detailed report. In this consultation, however,
the Committee would like to highlight key points developed in its discussions of the five
charge questions (Enclosure 2); these points are summarized in the document enclosed
with this letter (Enclosure 3).
The Committee would like to thank EPA presenters for their expertise,
perspectives, and insights that assisted the Committee's understanding of the P2
programs. Thank you for the opportunity to provide early advice on this important topic.
As this is a consultation, the SAB does not expect a formal response from the Agency.
Sincerely,
/Signed/
David A. Dzombak, Chair
Environmental Engineering Committee
Science Advisory Board
cc: Deborah L. Swackhamer,
Chair, Science Advisory Board
Enclosure 1
U.S. Environmental Protection Agency
Science Advisory Board
Environmental Engineering Committee Augmented for Pollution
Prevention Measure Consultation
CHAIR
Dr. David A. Dzombak, Walter J. Blenko Sr. Professor of Environmental Engineering,
Department of Civil and Environmental Engineering, College of Engineering, Carnegie
Mellon University, Pittsburgh, PA
MEMBERS
Dr. Viney Aneja, Professor, Department of Marine, Earth, and Atmospheric Sciences,
School of Physical and Mathematical Sciences, North Carolina State University, Raleigh,
NC
Dr. James Boyd, Senior Fellow, Director, Energy & Natural Resources Division,
Resources for the Future, Washington, DC
Dr. John C. Crittenden, Richard Snell Presidential Professor, Department of Civil and
Environmental Engineering, Ira A. Fulton School of Engineering, Arizona State
University, Tempe, AZ
Dr. T. Taylor Eighmy, Assistant Vice President for Research and Director of Strategic
Initiatives, Office of the Vice President for Research, University of New Hampshire,
Durham, NH
Dr. Madhu Khanna, Professor, Department of Agricultural and Consumer Economics,
University of Illinois at Urbana-Champaign, Urbana, IL
Dr. Cindy M. Lee, Professor, Department of Environmental Engineering and Earth
Sciences, Clemson University, Anderson, SC
Dr. Reid Lifset, Director and member of the faculty, Industrial Environmental
Management Program, School of Forestry and Environmental Studies, Yale University,
New Haven, CT
Dr. Jill Lipoti, Director, Division of Environmental Safety and Health, New Jersey
Department of Environmental Protection, Trenton, NJ
Dr. Michael J. McFarland, Associate Professor, Department of Civil and
Environmental Engineering, Utah State University, Logan, UT
Dr. James R. Mihelcic, Professor, Civil and Environmental Engineering, State of Florida
21st Century World Class Scholar, University of South Florida, Tampa, FL
Dr. Horace Moo-Young, Dean and Professor, College of Engineering, Computer
Science, and Technology, California State University, Los Angeles, CA
Dr. Catherine Peters, Associate Professor, Department of Civil and Environmental
Engineering, Princeton University, Princeton, NJ
Dr. Susan E. Powers, Associate Dean and Professor, Department of Civil and
Environmental Engineering, Wallace H. Coulter School of Engineering, Clarkson
University, Potsdam, NY
Dr. Mark Rood, Professor, Department of Civil and Environmental Engineering,
University of Illinois, Urbana, IL
Dr. Bryan Shaw, Commissioner, Texas Commission on Environmental Quality, Austin,
TX
Dr. John R. Smith, Division Manager, Environmental Science and Sustainable
Technology, Alcoa Inc., Alcoa Technical Center, Alcoa Center, PA
SCIENCE ADVISORY BOARD STAFF
Ms. Kathleen White, Designated Federal Officer, 1200 Pennsylvania Avenue, NW,
Washington, DC, Phone: 202-343-9878, Fax: 202-233-0643, (white.kathleen@epa.gov)
Enclosure 2
Charge for the Consultation on
Pollution Prevention Program Measures
with the
Environmental Engineering Committee
of the
EPA's Science Advisory Board
EPA Office of Pollution Prevention and Toxics
September 3, 2008
Pollution Prevention Program Measures
The Pollution Prevention (P2) Program seeks feedback from the SAB on considerations
associated with technical elements for the development of a sound methodology for measuring
recurring performance results for P2 Program Centers. A variety of measures could be used,
some of which are depicted in the illustration below.
[Figure: Generic Depiction of Performance Results. A line graph of toxic materials generated
versus fiscal year (1999-2005), showing a baseline curve without P2 Program intervention and a
lower curve with the Pollution Prevention Program in place, annotated with the New Annual
Result from the P2 Program in FY2004, Annual Recurring Results, and Cumulative Recurring
Results.]
This illustration addresses a single hypothetical P2 Program Center which may undertake one or
more activities in any given year. It presents a baseline curve of toxic materials generated over a
period of years without any P2 Program intervention; then a separate curve depicting generation
over the same span of years as a result of P2 Program interventions. One measure of results
would be to capture new annual results, the measure of new interventions in a year, calculated
as the difference between the pounds of toxic materials generated in one year and in the
previous year. Many P2 Center interventions can also generate results over a span of
years. One could consider an annual recurring result, which is the vertical distance between the
curves during any single year. Alternatively, cumulative recurring results are depicted by the
shaded area between the curves. It is the recurring results measures that are the focus of this
consultation with the Science Advisory Board.
In developing sound methodology for measuring different performance results, the P2 Program
is particularly interested in understanding technical factors that are important to consider when
measuring recurring results. This understanding should help us, for example, to decide on
appropriate modifications to the methodology when facing constraints on the collection of results
data, particularly on a year-after-year basis. The P2 Program has developed some understanding
of how to count recurring results in certain contexts, and is seeking additional input in order to
develop sound methodology for counting recurring results across P2 Program Centers. The
methodology will need to meet the following technical parameters:
1) completeness - a methodology for recurring results must work for all Centers because all
Centers must report performance results into a unified Agency reporting system; and
2) comparability - a methodology for recurring results must support the ability to generate
time-series data to analyze changes in performance over time, must support performance
comparisons between Centers, must be sufficiently flexible to accommodate the design
and implementation differences among Centers, and must be sufficiently neutral and
transparent so that report users can understand the factors that may contribute to
differences in performance among Centers. (A sketch of one possible unified reporting
record follows below.)
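To make these two parameters concrete, here is a minimal, hypothetical sketch of a unified reporting record that all Centers could report into; it is not an actual EPA schema, and every field, name, and value below is an assumption for illustration only.

```python
# Hypothetical sketch of a unified P2 reporting record (not an EPA schema).
# Completeness: every Center reports into the same structure.
# Comparability: shared measure names and fiscal years support time-series
# analysis and cross-Center comparison; method_notes supports transparency.
from dataclasses import dataclass

@dataclass
class P2Result:
    center: str        # P2 Program Center reporting the result
    fiscal_year: int   # reporting period, enabling time-series data
    measure: str       # e.g., "lbs_hazardous_reduced", "gallons_water_reduced"
    value: float       # magnitude of the result, in the measure's units
    method_notes: str  # how the Center derived the number

records = [
    P2Result("Center A", 2004, "lbs_hazardous_reduced", 5000.0, "facility reports"),
    P2Result("Center B", 2004, "lbs_hazardous_reduced", 3200.0, "survey estimate"),
]

# Like measures from the same year can be compared or aggregated across Centers.
total_fy2004 = sum(r.value for r in records
                   if r.measure == "lbs_hazardous_reduced" and r.fiscal_year == 2004)
print(total_fy2004)  # 8200.0
```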
Key Terms
Please refer to Annex 1 (Glossary) for a list of key terms that will be used throughout this
document.
Background
The P2 Program has a number of reasons for wanting to improve its measures of performance.
The P2 Program is guided by statute to measure and report its performance. The Pollution
Prevention Act (PPA) of 1990 directs the Agency to develop an approach to measuring P2 (42
USC 13101). The Government Performance and Results Act (GPRA) directs federal agencies to
develop strategic plans outlining what programs intend to accomplish, how they measure their
progress, and how they communicate that information about their performance to Congress and
to the public, among other requirements. In response to GPRA, the Office of Management and
Budget (OMB) developed the standardized Program Assessment Rating Tool (PART) to
evaluate the effectiveness of program performance in budget submissions for individual
programs and program groupings. Beyond these statutory reasons for measurement, in this
results-oriented era it is important to measure performance in order to demonstrate the value
of P2 programs to government and private sector stakeholders and thereby foster support and
participation.
Supporting Materials
Additional information on the P2 Program, its Centers, and other topics relevant to performance
measures and recurring results is provided in a supporting materials document. The additional
information includes:
- a narrative overview of the seven P2 Centers and their streams of results, and
- a table on data management within the P2 Centers.
In addition, weblinks are provided for the following:
- the Pollution Prevention Act of 1990,1
- Executive Order 13423, Strengthening Federal Environmental, Energy, and
Transportation Management,2
1 http://www.epa.gov/oppt/p2home/pubs/p2policy/act1990.htm
2 http://www.ofee.gov/eo/EO_13423.pdf
- the Pollution Prevention Program PART Program Assessment (Fall 2006).3
Please keep in mind that some of these materials (e.g., PART assessment questions and answers)
are provided as background only and are not directly relevant to this consultation.
Charge to the SAB
The P2 Program is seeking the consultative input of the SAB Environmental Engineering Committee on the
technical elements necessary to consider in the development of a sound methodology for
measuring P2 program performance results, particularly recurring results. The Program is at a
fairly early stage of methodological development, and is seeking the benefit of Committee
members' thinking and expertise at this stage of the process. The Program is specifically
interested in receiving input from the SAB on the questions presented below on the technical
elements identified thus far.
1. Over what time period is it appropriate to count recurring results?
It is generally accepted that entities adopting a pollution prevention practice will frequently
keep such a practice in place for more than a year, generating pollution reduction results over
a span of years. The life span over which it is appropriate to count these results likely varies
by industry. What technical elements should be considered when determining how long to
count recurring results? What data sources exist to estimate the typical lifecycle of a
commercial technology and how might this typical lifecycle vary by industry?
Are there other factors that are relevant to consider when identifying the end of a benefits
stream of results from a P2 practice?
2. What are appropriate data sources?
What data sources are appropriate for substantiating recurring results from P2 Centers whose
technical assistance activities are referral-driven and facility-driven, but not sector-driven?
Discussion: Could innovation rates, and/or the birth and death rate of firms, be used towards
building a basis for counting recurring results from Centers that work with small- and
medium-sized businesses broadly across sectors on a technical assistance basis? What data
sources exist to estimate the typical length of time an innovation is retained in small and in
medium-sized U.S. businesses? What other factors can be considered when determining how
to substantiate recurring results for technical assistance programs that work broadly across
sectors? (One simple way firm survival rates might enter such a count is sketched below.)
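A minimal sketch of that idea, assuming (hypothetically) that recurring results from an intervention can be discounted by an annual firm survival rate; the 90% survival rate and all other figures are invented for illustration, not published statistics.

```python
# Hypothetical sketch: discounting recurring results by firm survival, one way
# the birth/death rate of firms might feed a counting methodology.

annual_result = 1000.0     # lbs reduced by an intervention in its first year
firm_survival_rate = 0.90  # assumed fraction of adopting firms surviving each year
years_counted = 5          # chosen life span over which results are counted

# Expected recurring result in year k, discounted by cumulative survival.
recurring = [annual_result * firm_survival_rate ** k for k in range(years_counted)]
print([round(r, 1) for r in recurring])  # [1000.0, 900.0, 810.0, 729.0, 656.1]
print(round(sum(recurring), 1))          # expected cumulative result: 4095.1 lbs
```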
How should we address gaps in data sources over time when producing time-series data?
Discussion: See Table 3 in the supporting materials comparing our data sources and
literature search data sources and identifying gaps to explore.
3 http://www.whitehouse.gov/omb/expectmore/summary/10004304.2006.html
3. How should aggregation issues be addressed?
Assuming practices have different life cycles, and thus different time periods over which results
would presumably be counted, what are the technical issues faced when aggregating results
for all practices at a Center or across all Centers? (The sketch below illustrates one such issue.)
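As a hypothetical illustration of one such issue, the sketch below aggregates annual recurring results from practices with different life spans; the practice names, figures, and life spans are invented assumptions.

```python
# Hypothetical sketch of an aggregation issue: practices with different life
# spans drop out of the aggregate at different times, so the Center-level
# total can dip even when the remaining practices are unchanged.

practices = [
    # (name, lbs reduced per year, adoption year, assumed life span in years)
    ("solvent substitution",  500.0, 2001, 10),
    ("rinse-water recycling", 300.0, 2002, 3),
    ("process heat recovery", 200.0, 2003, 7),
]

def aggregate_annual_recurring(year):
    """Sum recurring results from practices still within their life span."""
    return sum(lbs for _, lbs, start, life in practices
               if start <= year < start + life)

for y in range(2001, 2008):
    print(y, aggregate_annual_recurring(y))
# Rinse-water recycling expires after 2004, so the FY2005 aggregate dips
# from 1000 to 700 lbs although the other two practices are unchanged.
```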
4. How do other programs and organizations consider or measure
recurring results, and can we benchmark and learn from their
experience?
The experiences of other programs and organizations regarding recurring results may be
informative to the P2 Program's decision on a path forward on its own methodology
development. The P2 Program would like to understand how other programs or
organizations considered or measured recurring results and, if so, whether there are existing
benchmarks that the P2 Program can utilize. The Program has conducted an initial
benchmarking exercise and is interested in expanding the scope and depth of information
searched and compiled. The Program is particularly interested in additional information on
the following:
- Are there other programs and organizations that count recurring results? What is the scope
and purpose of these programs, and are they similar to the voluntary P2 Program?
- What data do these programs collect and what is (are) the source(s) of the data?
- What are the typical performance results that are measured and what is the methodology
used to compute the performance results?
- Do any of their experiences relate to specific P2 Program Centers?
- Are there other indicators as to how recurring benefits are viewed (regardless of whether
they are reflected in measurement approaches)?
See Section III of the supporting materials for the P2 Program's initial findings from a
benchmark literature search, summarized in Table 2 and organized by program/organization,
with a description of each approach to recurring results. Section III provides a discussion of the
P2 Program's initial benchmark literature search findings, including the relevance of specific
findings to the individual seven P2 Program Centers as well as across the P2 Program.
Information gathered includes government programs (state and federal), some industry
practices (at the individual company level), as well as global environmental measurement
principles. The benchmark information in Section III is provided solely as a source of ideas
building on the experience of others who have addressed performance measurement questions
similar to those OPPT is facing. This information is not intended to be part of this consultation
in any other manner.
5. How can we best use the initial benchmark literature search findings to
help us further inform our investigation on recurring results and begin
methodology development?
The P2 Program is seeking to relate findings of the initial benchmark literature search to the
data sources that differentiate our Centers and are relevant to resolving how to approach
recurring results across the P2 Program. Additionally, the Program is interested in how to
use the initial findings to further focus and prioritize our investigation on recurring results for
the purpose of methodology development. The P2 Program has identified the following
preliminary list of task-related considerations on which we are seeking consultative advice.
See also Section IV of the supporting materials for more information.
- In analyzing the current data sources for each Center, how does the availability or
limitations in data sources for an individual Center (such as publicly available market
data or statutory barriers to data collection) affect P2 Program options for measuring
recurring results program-wide? Table 3 in Section IV of the supporting materials
compares P2 Program data sources with data sources inferred from examples in the
literature.
- Where data gaps exist for certain Centers, are there other data sources that the Centers
could consider (such as economic, market trends, market demographics) in
methodology development to allow for measuring recurring results consistently across
the P2 Program?
- In continuing a benchmarking exercise, are there other benchmarks we might
consider?
ANNEX 1 - Glossary
P2 Centers. The P2 Program has seven Centers or distinct program units that produce
performance results. Each P2 Center has its own program design, targets, and goals. All P2
Centers use common outcome performance measures: pounds of hazardous materials
reduced, BTUs of energy reduced/million metric tons of carbon equivalent reduced (the
program is switching from BTUs to MMTCE), gallons of water reduced, and dollars saved.
Not every Center reports on all four measures, but all of them report on at least three of
these measures.
Details on the Centers appear in attached background documents, identified immediately
below.
Program intervention (or P2 intervention). This is a Center activity that produces a specific
environmental outcome (pounds reduced, energy saved, etc.) at a point in time. There is also
a view that it includes the ongoing administrative investments in building and/or maintaining
the overall project that produces the series of specific environmental outcomes.
Results. For purposes of this consultation, "results" refers to program performance results
expressed in terms of specific indicator measures. These specific P2 Program indicator
measures are pounds of hazardous materials (releases to air, water, land, and material inputs)
reduced, billion BTUs of energy reduced (this measure is being changed to million metric
tons of carbon equivalent reduced), gallons of water reduced, and dollars saved from P2
interventions. Provided on the following pages is a graph that further illustrates how a P2
Center may represent annual and recurring results. Equations and examples are also
provided to illustrate how the various types of results may be interpreted from the example
graph.
Baseline Year. The baseline year is the year from which results in following years are
measured. In the example below, there are no Annual Results in the baseline year (FY2000).
(New) Annual Results. New Annual Results or simply Annual Results refer to one year's worth
of performance results from a program intervention that occurred in a given year.
Example: To calculate the New Annual Result in FY2004 in the graph below, subtract the
amount of toxic material generated in FY2004 (with the P2 Program in place) from the
amount of toxic material generated in FY2003.
e.g., New Annual Result FY2004 = 16,500 - 15,000 = 1,500 lbs of toxic materials reduced
Annual Recurring Results (Cumulative Annual Results). Annual Recurring Results are
benefits that occur in a single year from P2 interventions initiated in previous years. The P2
Programs include any New Annual Results in this total as well. Note: Annual Recurring
Results are equivalent to Cumulative Annual Results which are the sum of New Annual
Results year-after-year, starting from a baseline year of measurement.
Assuming a baseline year of FY2000, New Annual Results begin to occur in FY2001 and
continue every year through FY2004. Then the Annual Recurring Results for the P2 Center
in FY2004 would be:

$$\text{Annual Recurring Results}_{FY2004} = \sum_{k=2001}^{2004} \text{Annual Results}_{FYk}$$
Example: To calculate the Annual Recurring Result in FY2004 in the graph below, add the
Annual Results in each year. This calculation simplifies to subtracting the total amount
of toxic waste generated with the P2 Program in place from the baseline level of waste.
e.g., Annual Recurring Result FY2004 = 20,000 lbs - 15,000 lbs = 5,000 lbs of toxic
materials reduced.
Cumulative Recurring Results. Cumulative recurring results are the sum of recurring results
over two or more years, starting from the baseline year of measurement.
Starting with a baseline year of FY2000, the cumulative recurring results for the P2 Center in
FY2004 would be:
$$\text{Cumulative Recurring Results}_{FY2004} = \sum_{k=2001}^{2004} \text{Annual Recurring Results}_{FYk}$$
(Note: In FY2001, annual recurring results are equivalent to new annual results.)
Example: To calculate the Cumulative Recurring Results in FY2004 in the graph below, add
the Annual Recurring Results in each year.
e.g., Cumulative Recurring Results FY2004 = (20,000 - 19,000) + (20,000 - 17,800) +
(20,000 - 16,500) + (20,000 - 15,000) = 11,700 lbs of toxic materials reduced.
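For readers who prefer code to equations, here is a minimal sketch that computes the three glossary measures from the hypothetical example data (baseline of 20,000 lbs/year; with-program generation of 19,000, 17,800, 16,500, and 15,000 lbs in FY2001-FY2004); the function and variable names are illustrative assumptions, not part of the P2 Program's methodology.

```python
# Sketch of the glossary's three measures, using the hypothetical data from
# the example graph (baseline year FY2000). Names are assumptions.

baseline = 20000  # lbs of toxic materials generated per year without the P2 Program

# lbs generated each fiscal year with the P2 Program in place
with_program = {2000: 20000, 2001: 19000, 2002: 17800, 2003: 16500, 2004: 15000}

def new_annual_result(year):
    """One year's worth of results from interventions occurring in `year`:
    the year-over-year drop in generation."""
    return with_program[year - 1] - with_program[year]

def annual_recurring_result(year):
    """Benefits in a single year from all interventions since the baseline
    year: the vertical distance between the two curves."""
    return baseline - with_program[year]

def cumulative_recurring_results(start, end):
    """Sum of annual recurring results over `start`..`end`: the shaded area
    between the curves."""
    return sum(annual_recurring_result(y) for y in range(start, end + 1))

print(new_annual_result(2004))                   # 1500 lbs
print(annual_recurring_result(2004))             # 5000 lbs
print(cumulative_recurring_results(2001, 2004))  # 11700 lbs
```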
[Figure: Depiction of Performance Results. A line graph of toxic materials generated versus
fiscal year (1999-2005), showing the baseline without P2 Program intervention and the curve
with the Pollution Prevention Program in place, annotated with the Annual Recurring Result
FY2004, the Cumulative Recurring Results FY2004, the New Annual Result from the P2
Program FY2004, and the two summation equations defined above.]
Enclosure 3
EPA SAB Consultation on Pollution Prevention Program Measure
Methodology
September 3-4, 2008
Summary of Key Points
Charge Question 1: Over what time period is it appropriate to count recurring results?
1. It is appropriate to count results over the life of a technology (product or process)
when use of the technology is stable and not largely displaced by a new technology.
An argument can be made for counting results only during the period of the
technology's distribution in the marketplace, but the majority opinion on the
Committee favored counting results over the life of the technology's use.
2. Appropriate time periods depend on the particular technology. For example, a
relevant time period for some devices could be the mean time to failure and
replacement with a new technology.
3. By focusing on the technology rather than the company, OPPT can avoid
counting problems associated with the birth and death of companies.
4. The baselines or benchmarks used for counting are extremely important; they
should be carefully selected, and the reasoning for their selection should be both
clear and defensible. Because EPA P2 programs are not the only source of
innovation, a defined baseline needs to account for the conventional innovation
rate and ranges of outcomes.
5. OPPT currently uses amounts of materials that are not emitted to the environment
and reduction in energy consumption as measures. Resources not expended and
cumulative impacts avoided could also be used as measures of results that
consider preservation of natural capital more broadly.
Charge Question 2: What are appropriate data sources?
6. OPPT has chosen appropriate data sources and evaluated them. OPPT also has
opportunities to use other EPA data sources not directly related to P2 programs,
e.g., air and water permitting data, and TRI data.
7. OPPT should continue to strengthen processes for data validation.
8. OPPT should continue to develop uniform methods of data collection across the
P2 centers.
9. The EEC recognizes that, because of the government's interest in not over-
burdening individuals and organizations with requests for information and the
need to limit barriers to voluntary participation in P2 programs, collection of
additional data is neither a simple matter nor a choice fully within the domain of
OPPT or EPA. However, if it were possible, it would be useful to collect data
that document how EPA is accelerating innovation and economic
competitiveness. The acceleration of innovation and related economic
competitiveness are also measures of the P2 programs' results.
10. Ideally, and over the long term, more meaningful measures of the results of
programs, including pollution prevention programs, should incorporate risk
averted in addition to chemical releases averted. Unfortunately, the data required
to implement this measure for all EPA P2 programs do not exist and will not exist
any time soon. Similarly, measures of energy conservation could be improved by
weighting energy sources by human health and environmental impacts because
not all amounts of energy have equal impacts on resources and the environment.
11. To support better analyses when complete suites of data are lacking, OPPT should
examine the scientific literature on missing data and develop methodologies based
on this literature, e.g., the use of proxy data for particular facilities or processes
(a simple sketch follows below).
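As one hypothetical illustration of a missing-data methodology from that literature, the sketch below fills a reporting gap with a simple proxy (the mean of values reported by comparable facilities in the same year); the facility names and figures are invented.

```python
# Hypothetical sketch of proxy-based gap filling (simple mean substitution
# from comparable facilities), one elementary missing-data approach.

# Reported lbs reduced by facility and fiscal year; None marks a gap.
reported = {
    "facility_A": {2003: 1200, 2004: 1300, 2005: None},
    "facility_B": {2003: 1100, 2004: 1250, 2005: 1400},
    "facility_C": {2003: 900,  2004: 1000, 2005: 1150},
}

def fill_gaps_with_proxy(data):
    """Replace each missing value with the mean of the other facilities'
    reported values for the same year (a simple proxy)."""
    filled = {facility: dict(years) for facility, years in data.items()}
    for facility, years in data.items():
        for year, value in years.items():
            if value is None:
                peers = [d[year] for f, d in data.items()
                         if f != facility and d[year] is not None]
                filled[facility][year] = sum(peers) / len(peers)
    return filled

print(fill_gaps_with_proxy(reported)["facility_A"][2005])  # 1275.0
```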
Charge Question 3: How should aggregation issues be addressed?
12. While aggregation makes it difficult to convey the influence of P2 programs on
systems, it is acceptable if similar data that describe, for example, source
reductions or changes in consumption, are being aggregated. As noted in point 10,
health risks averted could be a valuable common measure, but it is not yet
possible to determine this for the range and variety of technologies in the P2
programs. This is primarily due to the lack of available data, the constraints on
collecting the relevant data, and the lack of resources with which to do so.
13. OPPT should work to segregate product and process impacts, because a process
that produces a product can have a different lifetime than the product itself. Also,
the duration and degree of the environmental impact of the process can be quite
different from that of the product.
14. If OPPT considers different ways of normalizing results, e.g., per unit of
production or per functional unit (per action, facility, household, etc.), it is likely
to find a measure robust enough that changes in production can be accounted for
more realistically when describing the impact of results from P2 activities over a
wide range of programs, as sketched below.
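A minimal sketch, assuming hypothetical production figures, of why normalized results (per unit of production) can tell a different story than absolute results:

```python
# Hypothetical sketch of normalizing results per unit of production. Absolute
# generation falls 1,500 lbs here, but intensity (lbs per unit produced) is
# unchanged: the apparent reduction tracks lower output, not a P2 improvement.

records = [
    # (fiscal year, lbs of toxic material generated, units produced)
    (2003, 16500, 110000),
    (2004, 15000, 100000),
]

for year, lbs, units in records:
    intensity = lbs / units  # lbs of toxic material per unit of production
    print(year, round(intensity, 4))  # 2003: 0.15, 2004: 0.15
```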
Charge Question 4: How do other programs and organizations consider or measure
recurring results?
15. Companies track performance in order to change behavior, with objectives set for
the desired behavior change. Using this model, OPPT could consider setting
specific targets for P2 programs.
16. Experience with other organizations demonstrates that, for P2 metrics to be
effective, they need to be clear and simple.
17. OPPT should establish metrics only for activities that can be clearly and
defensibly influenced by the P2 programs.
18. Experience with other organizations demonstrates that what is measured really
matters, as performance metrics can both motivate and demotivate behavior.
19. OPPT may find it helpful to examine the Global Reporting Initiative
(www.globalreporting.org) and how this organization handles recurring results
and includes life cycle considerations.
20. The Intergovernmental Panel on Climate Change has developed methods of
accounting for recurring and cumulative emission reductions achieved by control
measures for greenhouse gases. The IPCC methods may serve as useful models
for counting recurring results for P2 programs.
21. Industrial trade associations have developed methods for counting recurring and
cumulative results of pollution prevention activities, e.g., the Aluminum
Association, the American Chemistry Council, and the World Business Council
for Sustainable Development.
22. The U.S. Centers for Disease Control and Prevention and the World Health
Organization track recurring results from interventions for disease prevention.
The methodologies developed may be useful in determining methods for counting
recurring results from P2 program interventions.
Charge Question 5: How can EPA best use the initial benchmark literature search
findings to help further inform recurring results method development?
23. Information in peer-reviewed journals should be explored in much greater depth.
24. It would be useful to explore how other organizations (e.g., IPCC) reflect
uncertainty in new or recurring results from interventions.
25. The EPA should examine international sources to inform recurring results method
development, e.g., emission accounting compendia developed by the Institute for
International Studies and Training (IIST) and by the European Union.
Other Key Observations
26. There are multiple audiences for the P2 program metrics: measures of P2 can be
used to evaluate OPPT's programs, as OMB does in PART evaluations. Another
important audience is the companies and organizations being solicited to participate
in the program. While the EEC recognizes that development of metrics to document
performance is a priority, it suggests that some thought be given to developing
additional metrics, directed to the participants in these voluntary programs, that
capture innovation, economic competitiveness, and the evolution of the P2
discipline.
27. It is important to convey the barriers to data collection faced by the P2 programs.
The programs involve voluntary participation and data submittal by the
organizations that partner with EPA. Too much pressure for data submittal could
result in the loss of participation by the partner organizations. Further, federal
information collection request (ICR) regulations limit the surveys EPA can
conduct.
28. OPPT has been careful to be conservative in its estimates of impacts of P2
program interventions. This care is commendable, but the Committee encourages
OPPT to be less conservative in its data compilation and analysis in order to
demonstrate more clearly the range of benefits. The current bias toward
underestimating benefits is counterproductive with respect to attracting more
organizations as participants in the P2 programs. In this respect there is harm in
underestimating benefits. Defensible approaches to estimation of benefits that are
not so conservative could help expand P2 program participation more rapidly.