UNITED STATES ENVIRONMENTAL PROTECTION AGENCY
                                      WASHINGTON, D.C. 20460
                                      August 21, 2001
                                                                    OFFICE OF THE ADMINISTRATOR
                                                                     SCIENCE ADVISORY BOARD
EPA-SAB-EEC-COM-01-005

Honorable Christine Todd Whitman
Administrator
U.S. Environmental Protection Agency
1200 Pennsylvania Avenue, NW
Washington, DC 20460
              Subject:       Measures of Environmental Technology Performance: A
                            Commentary by the EPA Science Advisory Board

Dear Governor Whitman:

       At its November 1997 retreat, the EPA Science Advisory Board's Executive Committee
encouraged its standing committees to undertake more self-initiated efforts.  This commentary is
one of several Environmental Engineering Committee (EEC) initiatives undertaken in response
to that guidance.

       Briefly, the EEC recommends that the Agency build on existing strengths in technology
evaluation and quality management to provide easy access to reliable information about a wider
suite of measures of environmental technology performance. It does so because access to such
information is increasingly important for evaluating the effectiveness of risk reduction programs
and projects.

       Three trends cause the Committee to call your attention to the need for an expanded suite
of measures of environmental technology performance. These are:

       a)     the expanding use of non-regulatory approaches to environmental protection;

       b)     the increasing demand from the international community for effective
              environmental technology; and

       c)     the growing desire for sustainable environments.

       Determining progress towards some of these goals means addressing metrics that have
not traditionally been considered. Some examples of metrics which have been considered in the
past include: capital costs, operating costs, removal rates, concentrations in discharges or
emissions, and reliability. Some examples of additional metrics which could be considered are
life cycle costs, the nature and quantities of waste generated, energy use, and some measure of
social acceptability.

       Collecting the information is one thing; making it easily available to decision-makers is
another. The need for wide dissemination of relevant information and criteria for evaluating
environmental technology performance has been recognized by other organizations. For
example, the Federal Remediation Technologies Roundtable is an interagency working group
which exchanges information on the use and development of innovative hazardous waste
characterization, monitoring and treatment technologies. The exchange synthesizes the technical
knowledge that Federal Agencies have compiled and provides a more comprehensive record of
performance and cost. Because technology cost and performance are affected by waste
characteristics and operating conditions, the relevant factors are technology specific and the
Roundtable has identified the most important parameters for various technologies.

       Public and private decision-makers in the United States and abroad use evaluations of
environmental technology performance to determine whether a given technology can potentially
address a given problem. If information on relevant measures of technology performance is not
available to decision-makers, sub-optimum decisions could be made in technology selection, and
sub-optimum decisions may result in increased risks to human health and the environment.
Because of its demonstrated strengths in technology evaluation and quality management, EPA
has the opportunity to contribute to such information-based decision-making by collecting and
disseminating additional information on environmental technology performance.

       The more widely accepted evaluations tend to be government-sponsored, and EPA is
the major, although not sole, provider of such evaluations. EPA evaluates the performance of
environmental technologies and reports the results to decision-makers and, within the limits the
Agency has established, it does so with skill and credibility. Already a world leader in applying
the concepts and practices of quality assurance to data collection and analysis, EPA has
pioneered the extension of these concepts to evaluating technology performance.  Because of the
trends mentioned above and the resulting opportunities for environmental protection, the
Committee now recommends that the Agency  develop a more comprehensive suite  of measures
for evaluating environmental technology performance. These measures can be used by the
Agency and others to develop the necessary information for decision-makers charged with
selecting technologies for use.

       In preparing this commentary, the Committee has used the expertise of individual
members and consultants; experience gained since  1995 in four reviews relating to
environmental technology evaluation; reviews of the Agency's quality management system and
its implementation; the participation of two EEC members in the November 1999 EPA-
sponsored Industrial Ecology Workshop; the March 2000 review of the Environmental
Technology Verification Program; and interactions  with EPA staff and managers of other
relevant national programs.

       Although Attachment A provides the background, supporting details, and related
recommendations, the Subcommittee would like to highlight the following two
recommendations here:

       a)     The EPA should identify, with stakeholders, the technology areas most in need of
              shared information on technology performance and begin developing measures
              and information systems which would allow that sharing to occur in a meaningful
              way.

       b)     The EPA should reconsider, at regular intervals, whether to permit technology
              evaluation programs, such as the SITE program, to make cross-cutting analyses of
              technologies.

       We look forward to your written response to the Committee's recommendations to make
environmental technology performance measures more comprehensive and useful. Please
contact us if we may be of further assistance.

                            Sincerely,
                                   /Signed/

                             Dr. William Glaze, Chair
                            EPA Science Advisory Board
                                   /Signed/

                            Dr. Hilary Inyang, Chair
                            Environmental Engineering Committee
                            EPA Science Advisory Board
                                   /Signed/

                            Dr. Edgar Berkey, Chair
                            Subcommittee on Measures of Technology Performance
                            Environmental Engineering Committee
                            EPA Science Advisory Board

                U.S. ENVIRONMENTAL PROTECTION AGENCY
                            EPA Science Advisory Board
                       Environmental Engineering Committee

CHAIR
Dr. Hilary I. Inyang, Duke Energy Distinguished Professor and Director, Global Institute for
       Energy and Environmental Systems, University of North Carolina, Charlotte, NC (1)

MEMBERS
Dr. Edgar Berkey, Vice President and Chief Science Officer, Concurrent Technologies
       Corporation, Pittsburgh, PA

Dr. Calvin C. Chien, Senior Environmental Fellow, E. I. DuPont Company, Wilmington, DE

Dr. Barry Dellinger, Patrick F. Taylor Chair and Professor of Chemistry, Louisiana State
       University, Baton Rouge, LA

Mr. Terry Foecke, President, Waste Reduction Institute, St. Paul, MN

Dr. Nina B. French, President, SKY+ Ltd., Napa, CA

Dr. Domenico Grasso, Rosemary Bradford Hewlett Professor and Chair, Picker Engineering
       Program, Smith College, Northampton, MA

Dr. Byung Kim,  Staff Technical Specialist, Ford Motor Company, Scientific Research
       Laboratories, Dearborn, MI

Dr. Gordon Kingsley, Assistant Professor, Georgia Tech, School of Public Policy, Atlanta, GA

Dr. John P. Maney, President, Environmental Measurement Assessment, Gloucester, MA

Dr. Michael J. McFarland, Associate Professor, Utah State University, River Heights, UT

SCIENCE ADVISORY BOARD STAFF
Kathleen W. Conway, Designated Federal Official, U.S. EPA Science Advisory Board
       (1400A), 1200 Pennsylvania Avenue, NW, Washington, DC 20460

Mary L. Winston, Management Assistant, U.S. EPA Science Advisory Board (1400A), 1200
       Pennsylvania Avenue, NW, Washington, DC 20460
       (1) At the time of the review, University Professor and Director, Center for Environmental
Engineering Science and Technology (CEEST), University of Massachusetts, Lowell, MA

                                         NOTICE
       This report has been written as part of the activities of the EPA Science Advisory Board,
a public advisory group providing extramural scientific information and advice to the
Administrator and other officials of the Environmental Protection Agency. The Board is
structured to provide balanced, expert assessment of scientific matters related to problems facing
the Agency. This report has not been reviewed for approval by the Agency and, hence, the
contents of this report do not necessarily represent the views and policies of the Environmental
Protection Agency, nor of other agencies in the Executive Branch of the Federal government, nor
does mention of trade names or commercial products constitute a recommendation for use.
Distribution and Availability: This EPA Science Advisory Board report is provided to the EPA
Administrator, senior Agency management, appropriate program staff, interested members of the
public, and is posted on the SAB website (www.epa.gov/sab).  Information on its availability is
also provided in the SAB's monthly newsletter (Happenings at the Science Advisory Board).
Additional copies and further information are available from the SAB Staff [US EPA Science
Advisory Board (1400A), 1200 Pennsylvania Avenue, NW, Washington, DC 20460-0001; 202-
564-4533].

                                      Attachment A
                Measures of Environmental Technology Performance

                           1.  Existing EPA Programs and Policies

       EPA evaluates technologies — Since its formation, the EPA has evaluated and reported
on the performance of environmental technologies.  Most Agency evaluations are conducted to
meet a specific regulatory need.  Additionally, two specialized programs exist to conduct formal
and independent evaluations - the Superfund Innovative Technology Evaluation (SITE) Program
and the Environmental Technology Verification (ETV) Program.

       Because the Agency is likely to continue to evaluate environmental technologies, it will
be advantageous for the Agency to consider how its evaluation program can be changed to better
serve the needs of decision makers. To make technology selection decisions, decision-makers
will need answers to questions such as these:

       a)     Are measures of environmental technology performance being adequately
              addressed and integrated into the Agency's role?

       b)     Are site, environmental and operating conditions being considered in testing
              protocols?

       c)     Are procedures in place for assuring that performance measures are realistic and
              adequate for decision-makers?

       d)     Do descriptions of performance in final Agency reports convey all the essential
              measures and pertinent information?

       e)     What are the installation, operation and maintenance life cycle costs?

       f)      How can the desired information be obtained cost effectively?

       EPA has important and useful policies on  quality - Providing environmental
technology performance data of known and usable quality is a significant challenge, especially
because the environmental technologies of concern to EPA vary significantly in size,
complexity, intended use, and the media in which they operate. The technologies range from
relatively simple monitoring or sensing instruments to more complex treatment systems for
wastewater, solid and hazardous waste, and air pollution control.

       Although EPA is a world leader in applying the concepts and practices of quality
assurance to obtaining and using environmental data, the application of quality assurance
principles and practices to the evaluation of environmental technology performance is a recent
development which the Agency needs to fully implement. This is likely to improve technology
selection efforts and enhance the transparency of decisions to stakeholders. Also, it is necessary
to include measures of performance that realistically indicate to decision-makers how a
technology is likely to perform in real-life situations.

       If, as the Committee believes, future decision-makers will typically demand a more
comprehensive suite of measures, then technology evaluators would have to consider appropriate
ways to determine technology performance within the expanded evaluation program and to make
the information easily available to decision-makers.  In such instances, EPA policy encourages
use of a structured planning process, such as the Data Quality Objectives (DQO) process, for
evaluating
environmental technology performance.

       Application of a systematic planning process, such as the Agency's Data Quality
Objectives process, can ensure that:

       a)      measurements are appropriate for achieving project objectives

       b)      data quality is known, and appropriate for performing analyses such as life-cycle
               costing for full-scale technology implementation

       c)      data are defensible and reproducible

       The systematic planning process would establish clear goals for the evaluations. If the
Agency required that all evaluation reports incorporate additional measures of performance, then
decision-makers would have a better basis for judging how a technology will perform outside the
range of conditions tested; extrapolations of results from one set of circumstances and scenarios
to others could then be possible. This is of high utility because resource constraints usually
make it impossible to test technologies across a complete factorial set of conditions when they
are considered for use beyond the initial set of conditions under which they were tested. This is
particularly important because some technologies that perform well in limited-term tests may
have excessive life-cycle costs if they are implemented in the field.

                         2. The Need for a Wider Suite of Measures

       Evaluations that provide maximum value to the decision-makers describe the quality of
the performance data being measured, including the bias and variability of the data under
varying operating conditions and situations.  The Agency could require that technology
evaluations include sufficient measures of performance to provide decision-makers with
information on how a technology will perform under realistic and likely conditions of use.

       If the quality of the performance data is not fully addressed, if test conditions are too
tightly prescribed (and are consequently unrealistic), or if performance under varying conditions
is not determined, then the resulting evaluations of technology performance have limited value
as decision aids, especially when conditions for proposed use of a technology are somewhat
different from  those under which tests were performed. Thus, providing adequate information to
decision-makers requires that measures of performance used to describe how a technology
performs are sufficiently comprehensive.

       A useful performance description should address two particularly critical elements, among
others. First, there should be explicit treatment of the experimental uncertainties, which are a
part of all technical measurements. Second, it should provide the parameters or variables that
help describe real-life use of a technology. A successful suite of measures will also meet the
following criteria:

       a)      The measures are based on a variety of realistic and well documented
               circumstances under which a technology is to be used - or the limited
               circumstances of testing are clearly documented and emphasized

       b)      The measures identify all key variables that affect the performance and life-cycle
               costs of a technology

       c)      The measures provide an indication of how rugged a technology is with respect to
               these variables.

       d)      The measures include purchase, installation, operation and maintenance costs

       e)      The measures convey in practical terms the level of performance that a
               technology can meet

             3. Stakeholder Involvement in Determining Performance Measures

       Strong stakeholder involvement in the development of verification protocols and test
plans, a key aspect of the DQO planning process, has been a strength of the ETV Program.
Stakeholder involvement could improve other evaluations by  helping determine the most
relevant performance measures for decision-making. To determine better measures of
environmental technology performance, it would be useful to include stakeholders such as:

       a)      regulators

       b)      regulated communities

       c)      technology users

       d)      technology developers

       e)      professional and trade associations

       f)      environmental groups

       g)      financial investment groups

       h)      insurance underwriters.

       Each of these groups is concerned with deciding whether environmental technologies can
satisfy given requirements. They are in an excellent position to help the Agency define what
information is really needed.

                             4.  Doing More with What We Have

       In its 1996 review of the SITE program, the EEC noted that, while there were several
cases where competent analyses had been performed on several individual technologies in a
single technology family, there were no cross-cutting analyses comparing them to one another
and drawing general conclusions. Yet, the staff was clearly capable of such analysis.  This
logical next step in technology evaluation was not taken.

       There were policy reasons for this, in addition to the fact that cross-comparisons would
likely not have been welcomed by all the technology vendors. The vendors
have found that participation in the SITE or ETV program has facilitated commercialization of
their technologies.  However, individual vendors may not find being compared by the
government with other similar technologies  to be beneficial.  Communities, state and local
regulatory officials, and consulting engineers, on the other hand, are likely to find that kind of
cross-cutting analysis very helpful.

       While the Committee favors the cross-cutting analyses and comparisons, it recognizes
that this decision involves balancing the needs of different groups. Therefore, the Committee
recommends that the Agency formally revisit this issue from time to time.