EPA/600/3-91/023
March 1991
THE INDICATOR DEVELOPMENT STRATEGY
FOR THE ENVIRONMENTAL MONITORING
AND ASSESSMENT PROGRAM
By
Charles M. Knapp1, David R. Marmorek2, Joan P. Baker3, Kent W. Thornton4,
Jeffrey M. Klopatek5, and Donald F. Charles6
Prepared for:
Environmental Research Laboratory
Office of Research and Development
U.S. Environmental Protection Agency
Corvallis, Oregon
1 Technical Resources, Inc., Davis, California
2 ESSA Ltd., Vancouver, British Columbia
3 Western Aquatics, Inc., Durham, North Carolina
4 FTN Associates, Little Rock, Arkansas
5 Arizona State University, Tempe, Arizona
6 Indiana University, c/o U.S. EPA Environmental Research Laboratory, Corvallis, Oregon
-------
NOTICE
The research described in this document has been funded by the U.S. Environmental Protection Agency.
The document has been prepared at the EPA Environmental Research Laboratory in Corvallis, Oregon,
through contract #68-00-0021 with Technical Resources, Inc., purchase order OBO 387 NTTA to Arizona
State University, and cooperative agreement CR817489 with Indiana University. It has been subjected to
the Agency's peer and administrative review and approved for publication. Mention of trade names or
commercial products does not constitute endorsement or recommendation for use.
-------
TECHNICAL REPORT DATA
(Please read Instructions on the reverse before completing)
1. REPORT NO.
EPA/600/3-91/023
2.
3. RECIPIENT'S ACCESSION NO.
PB91-168500
4. TITLE AND SUBTITLE
The Indicator Development Strategy for the Environmental Monitoring and Assessment Program
5. REPORT DATE
March 1991
6. PERFORMING ORGANIZATION CODE
7. AUTHOR(S)
C.M. Knapp1, D.R. Marmorek2, J.P. Baker3, K.W. Thornton4, J.M. Klopatek5, and D.F. Charles6
8. PERFORMING ORGANIZATION REPORT NO.
9. PERFORMING ORGANIZATION NAME AND ADDRESS
1TRI, 2ESSA, 3Western Aquatics, 4FTN, 5AZ State U., 6US EPA, ERL-Corvallis, OR
10. PROGRAM ELEMENT NO.
11. CONTRACT/GRANT NO.
12. SPONSORING AGENCY NAME AND ADDRESS
US Environmental Protection Agency
Environmental Research Laboratory
200 SW 35th Street
Corvallis, OR 97333
13. TYPE OF REPORT AND PERIOD COVERED
Published Report
14. SPONSORING AGENCY CODE
EPA/600/02
15. SUPPLEMENTARY NOTES
1991. U.S. Environmental Protection Agency, Environmental Research Laboratory, Corvallis, OR.
16. ABSTRACT
The overall goal of the Environmental Monitoring and Assessment Program (EMAP)
is to provide a quantitative assessment of the current status and long-
term trends in the condition of the nation's ecological resources on
regional and national scales. This document outlines a strategy for
indicator selection, development, and evaluation within EMAP. Its
objectives are twofold: (1) to present general guidelines, criteria, and
procedures for indicator selection and evaluation, and (2) to establish an
organizational framework for coordinating and integrating indicator
development and use within EMAP. It should serve both to promote internal
consistency among EMAP resource groups and to provide a basis for external
review of the proposed indicator development process.
17. KEY WORDS AND DOCUMENT ANALYSIS
a. DESCRIPTORS
b. IDENTIFIERS/OPEN-ENDED TERMS
c. COSATI Field/Group
18. DISTRIBUTION STATEMENT
Release to Public
19. SECURITY CLASS (This Report)
Unclassified
21. NO. OF PAGES
99
20. SECURITY CLASS (This Page)
Unclassified
22. PRICE
EPA Form 2220-1 (Rev. 4-77)
-------
TABLE OF CONTENTS
Section Page
EXECUTIVE SUMMARY vii
1. INTRODUCTION 1
2. BACKGROUND 5
2.1 EMAP Overview 5
2.1.1 Perspective 5
2.1.2 EMAP Objectives 5
2.1.3 EMAP Approach 6
2.1.4 Design Attributes 6
2.1.5 EMAP Activities . . 8
2.2 Ecological Resource Classification 10
2.3 Role of Indicators in Ecological Monitoring 10
2.3.1 Endpoints as the Foundation for Assessment 10
2.3.2 Indicators within EMAP 11
2.3.3 Indicator Utilization 12
3. FRAMEWORK FOR INDICATOR DEVELOPMENT 15
3.1 General Strategy for an EMAP Resource Group .... 15
3.2 Intergroup Integration . . 20
3.2.1 Internal Integration 20
3.2.2 External Integration . . . . . 21
4. INDICATOR DEVELOPMENT PROCESS 23
4.1 Phase 1: Identification of Environmental Values and Assessment Endpoints ... 23
4.1.1 Environmental Values . . ... 23
4.1.2 Assessment Endpoints 24
4.1.3 Response Indicators 24
4.1.4 Stressors 26
4.2 Conceptual Models 28
4.3 Criteria for Indicator Selection . 29
4.3.1 Purpose of Indicator Selection Criteria 33
4.3.2 Indicator Selection Criteria 33
4.4 Phase 2: Identification of Candidate Indicators 35
4.4.1 Objectives 35
4.4.2 Approach 36
4.4.2.1 Generating Lists of New Candidates 36
4.4.2.2 Conducting Preliminary Screening of Candidate Indicators 36
4.4.2.3 Establishing and Maintaining a Computerized Data Base 38
4.4.3 Evaluation 38
4.5 Phase 3: Selection of Research Indicators 38
4.5.1 Objectives 38
4.5.2 Approach . 39
4.5.2.1 Literature Review . . 41
4.5.2.2 Critical Review of Conceptual Models 41
4.5.2.3 Expert Workshops 41
4.5.2.4 Indicator Data Base Expansion and Update 42
4.5.3 Evaluation 42
iii
-------
4.5.4 Research Plan Update 42
4.6 Phase 4: Evaluation of Research Indicators to Select Probationary Core
Indicators 43
4.6.1 Objectives 43
4.6.2 Approach 43
4.6.2.1 Formulation of Research Question 46
4.6.2.2 Literature Reviews 46
4.6.2.3 Identification of Useful Data Bases 46
4.6.2.4 Analysis of Existing Data 48
4.6.2.5 Analysis of Expected Indicator Performance 48
4.6.2.6 Example Assessments 49
4.6.2.7 Limited-scale Field Pilot Studies 49
4.6.3 Evaluation 52
4.6.4 Update of Indicator Status Documents and Research Plan 52
4.6.5 Indicator Data Base Update 52
4.7 Phase 5: Selection of Core Indicators 53
4.7.1 Objectives .... 53
4.7.2 Approach 53
4.7.2.1 Regional Demonstration Project Design and Implementation 56
4.7.2.2 Annual Statistical Summary 57
4.7.3 Evaluation 57
4.7.4 Update of Research Plan and Indicator Status Documents 58
4.7.5 Indicator Data Base Update 58
4.8 Phase 6: Reevaluation and Modification of Indicators 58
4.8.1 Objectives 58
4.8.2 Approach 60
4.8.3 Evaluation 60
4.8.4 Update of Research Plan and Indicator Status Documents . 61
5. INTEGRATION AMONG RESOURCE GROUPS 63
5.1 Conceptual Model of Indicator Integration 64
5.2 Categories of Indicators that Facilitate Integration .... 66
5.2.1 External or Off-site Indicators 66
5.2.2 Linking Indicators 67
5.2.3 Common or Shared Indicators 67
5.2.4 Migratory Indicators 67
5.3 Use of Conceptual Models to Facilitate Integration 68
5.4 Coordination of the Indicator Development Process Among Resource Groups 68
5.5 Problems Associated with Differences in Indicator Spatial and Temporal Scales .... 71
6. INDICATOR COORDINATOR 75
6.1 Need for an Indicator Coordinator 75
6.2 Role of the Indicator Coordinator 75
6.2.1 Facilitate Communication 75
6.2.2 Promote Implementation of the Indicator Development Strategy 76
6.2.3 Create and Maintain an Indicator Data Base 76
6.2.4 Review Indicator Research Proposals 77
7. PROCEDURES FOR INITIATING INDICATOR RESEARCH 79
8. REFERENCES 81
APPENDIX A - INDICATOR DATA BASE 83
iv
-------
FIGURES
Figure Page
2-1 Four-tier structure of EMAP and the major activities associated with each
of the tiers 7
2-2 Potential interactions among the various elements of EPA's Environmental
Monitoring and Assessment Program 9
3-1 The indicator development process, showing the objectives, methods,
and evaluation techniques used in each phase 16
4-1 Example of relationships among environmental values, assessment endpoints,
and indicators 27
4-2 General conceptual model linking a response indicator (woodland extent)
with the environmental value of sustainable biodiversity 30
4-3 Conceptual model of the estuarine ecosystem ... .... 31
4-4 Example of preliminary screening to identify candidate indicators 37
4-5 Example of an evaluation of candidate indicators to identify research indicators .40
4-6 Example of an evaluation of research indicators to identify probationary
core indicators ... .44
4-7 Example of an indicator testing and evaluation strategy (for the 1990
EMAP-Estuaries Demonstration Project in the Virginian Province) . 51
4-8 Example of an evaluation of probationary core indicators to identify core
indicators 54
4-9 Cumulative frequency distributions (CDF) for Index of Biotic Integrity
in streams in Ohio during four months of 1986. ... 55
5-1 Methods of indicator integration across EMAP resource groups 65
5-2 Conceptual model of the agroecosystem ecological resource with associated
inputs and outputs 69
5-3 Precipitation sulfate inputs versus surface water sulfate concentrations
for National Surface Water Survey subregions without significant sulfate
absorbing soils 72
-------
TABLES
Table Page
3-1 Environmental Values Selected by Different EMAP Resource Groups 18
4-1 Examples of Potential Assessment Endpoints 25
4-2 Association between EMAP-Agroecosystem Assessment Endpoints and Indicators 28
4-3 Indicator Selection Criteria 34
4-4 Example Questions for Evaluating Research Indicators -47
vi
-------
EXECUTIVE SUMMARY
THE INDICATOR DEVELOPMENT STRATEGY
FOR THE ENVIRONMENTAL MONITORING AND ASSESSMENT PROGRAM
THE ENVIRONMENTAL MONITORING AND ASSESSMENT PROGRAM
The U.S. Environmental Protection Agency (EPA) initiated the Environmental Monitoring and Assessment
Program (EMAP) in 1989 to provide improved information on the current status and long-term trends in
the condition of the nation's ecological resources. When fully implemented, EMAP will be an
inter-agency, multi-resource program designed to quantitatively address six critical questions:
1. What is the current status, extent, and geographic distribution of ecological resources (e.g.,
forests, agroecosystems, arid lands, wetlands, lakes, streams, and estuaries) in the United
States?
2. To what levels of environmental stress and pollutants are these ecological resources exposed
and in what regions are the problems most severe?
3. What proportions of these resources are degrading or improving, where, and at what rate?
4. What are the most likely causes of poor or degrading condition?
5. What ecological resources are at greatest current or future risk from environmental stresses
and pollutants?
6. Is the overall condition of ecological resources responding as expected to control and
mitigation programs?
ECOLOGICAL INDICATORS
EMAP's success will depend, in large part, on its ability to characterize ecological condition, or by
human analogy, the health of ecological resources on regional and national scales. Because factors
such as ecological condition and health often are difficult to measure or cannot be measured directly,
EMAP will monitor a set of indicators that collectively describe the overall condition of an ecological
resource. The term indicator has been adopted within EMAP to refer to the specific environmental
attributes to be measured or quantified through field sampling, remote sensing, or compiling of existing
data. Achieving EMAP's objectives will require not only indicators of ecological condition (termed
response indicators), but also indicators of pollutant exposure (exposure indicators), habitat quality
(habitat indicators), and both human and natural sources of stress that might be affecting ecological
condition (stressor indicators).
vii
-------
There are many indicators of potential value, but only a subset of these environmental attributes can be
monitored given the available funding and desired regional scope of EMAP. Identification of the best set
of indicators for use in a broad-scale regional status and trends network is, therefore, a major activity
within EMAP.
OBJECTIVES OF THIS DOCUMENT
This document outlines a strategy for indicator selection, development, and evaluation within EMAP. Its
objectives are twofold: (1) to present general guidelines, criteria, and procedures for indicator selection
and evaluation, and (2) to establish an organizational framework for coordinating and integrating
indicator development and use within EMAP. It should serve both to promote internal consistency
among EMAP resource groups and to provide a basis for external review of the proposed indicator
development process.
INDICATOR DEVELOPMENT PROCESS
The proposed process for indicator development within EMAP consists of six phases:
Phase 1: Identifying issues (environmental values and apparent stressors) and assessment
endpoints.
Phase 2: Identifying candidate indicators that are linked to the identified endpoints and responsive
to stressors.
Phase 3: Screening candidate indicators, based on a set of indicator evaluation criteria and
conceptual models, to select as research indicators those that appear most likely to fulfill
key requirements.
Phase 4: Quantitatively testing and evaluating the expected performance of research indicators in
field pilot studies, or through analysis of existing data sets, to identify the subset of
developmental indicators suitable for regional demonstration projects.
Phase 5: Demonstrating developmental indicators on regional scales, using the sampling frame,
methods, and data analyses intended for the EMAP network, to identify the subset of
core indicators suitable for full-scale implementation.
Phase 6: Periodically reevaluating and refining the core indicators as needed within the national
monitoring network.
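The phased winnowing described above can be sketched, purely for illustration, as a sequence of filters applied to a shrinking candidate pool. The indicator names and screening tests below are hypothetical, not EMAP selections:

```python
# Illustrative sketch of the phased indicator winnowing process.
# All indicator names and screening tests are hypothetical examples,
# not actual EMAP indicators or selection criteria.

def winnow(candidates, phase_tests):
    """Apply each phase's screening test in turn, keeping only survivors."""
    pool = list(candidates)
    for phase, passes in phase_tests:
        pool = [ind for ind in pool if passes(ind)]
        print(f"{phase}: {len(pool)} indicator(s) remain")
    return pool

candidates = [
    {"name": "dissolved oxygen", "linked_to_endpoint": True, "low_variability": True},
    {"name": "leaf litter mass", "linked_to_endpoint": True, "low_variability": False},
    {"name": "cloud cover", "linked_to_endpoint": False, "low_variability": True},
]

phase_tests = [
    ("Phase 3 (research indicators)", lambda i: i["linked_to_endpoint"]),
    ("Phase 4 (probationary core)", lambda i: i["low_variability"]),
]

core = winnow(candidates, phase_tests)
# survivors would proceed to Phase 5 regional demonstration
```

The point of the sketch is only the structure: each phase applies stricter tests to the survivors of the previous phase, so the pool shrinks toward a final core suite.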
Monitoring activities in EMAP will be conducted by seven resource groups, each of which will focus on
one of seven ecological resource categories: surface waters, the Great Lakes, estuaries, wetlands,
forests, agroecosystems, and arid lands. Each EMAP resource group's indicator program will be kept
on track through the use of research plans, peer reviews, and an indicator data base. Annual and
viii
-------
five-year comprehensive research and monitoring plans will be produced, describing EMAP's results to
date, the evidence and rationale for all decisions made within each phase of the indicator development
process, future directions and promising new indicators, and specific plans for indicator research and
evaluation. The current status of all indicators and the rationale for all decisions regarding indicator
selection or rejection also will be documented in an indicator data base.
CONCEPTUAL FRAMEWORK
It is critical to the success of EMAP that the environmental attributes monitored are appropriate to the
program's objectives. The first phase of the indicator development process, therefore, is intended to
establish a framework for indicator selection and interpretation, by identifying the environmental values,
assessment endpoints, and environmental stressors of primary concern and delineating the conceptual
models for each ecological resource category or class. EMAP response indicators must correspond to
or be predictive of an assessment endpoint, which is a quantitative or quantifiable expression of an
environmental value. Changes in response indicators should reflect a corresponding change in an
assessment endpoint, and thus in the value of an ecological resource. Exposure, habitat, and stressor
indicators, and possibly response indicators, will provide the basis for diagnosing plausible causes of
observed poor or changing ecological condition. Conceptual models delineate the linkages between
environmental values (assessment endpoints), major ecological resource components and processes,
and the external stressors and factors that influence ecological resource condition. These models serve
three primary purposes:
1. To explicitly define the framework for indicator interpretation (e.g., relationships between the
selected indicators and the assessment endpoints of interest).
2. To identify any "gaps" within the proposed suite of indicators (i.e., missing indicators or links for
which additional or new indicators are needed).
3. To guide the data analysis strategy for diagnosing plausible causes of poor or degrading eco-
logical condition.
Therefore, in Phase 1 of the indicator development process, three major activities will be initiated:
1. Listing environmental values associated with each ecological resource category.
2. Listing major problems and external stressors currently impacting or threatening each resource
category.
3. Developing conceptual models that link assessment endpoints and stressors to the major eco-
logical resource components and processes, and subsequently to EMAP indicators.
ix
-------
INDICATOR SELECTION AND EVALUATION
In Phase 2, based on literature reviews and interactions with scientists conducting relevant research, all
potentially useful measures of ecological resource status and the natural and anthropogenic factors that
influence status will be identified as candidate indicators for EMAP. The identification of candidate
indicators is a continual process. Each ecological resource group must continually reassess its
proposed suite of indicators for completeness, consider new candidate indicators that may arise as a
result of advances in ecological research and monitoring technologies, and revisit potential indicators
previously rejected or postponed as new information or technologies alter the context of indicator
evaluation.
During Phases 3 through 5, candidate indicators will be critically evaluated and iteratively filtered to
identify the best possible suite of core indicators for implementation in the EMAP network. The process
of indicator testing and prioritization will be guided by specific criteria for indicator selection and peer
reviews of decisions made at each phase. The use of clearly defined criteria will increase the objectivity,
consistency, and depth of indicator evaluations. The amount, quantification, and quality of data
necessary to satisfy each of the critical criteria will increase at each phase in the indicator development
process.
Phase 3 (selecting research indicators) will rely on literature reviews and expert knowledge (e.g.,
workshops with outside scientists) to qualitatively evaluate the likelihood that each candidate indicator
will be able to satisfy the indicator selection criteria and thus be worthy of additional attention. Enough
information will be needed to determine which indicators merit further research, but such investigations
will not actually be carried out. Expert judgement, therefore, plays a particularly important role in the
selection of research indicators.
In Phase 4 (selecting probationary core indicators), the selection criteria are applied more stringently to
quantitatively address questions concerning spatial and temporal indicator variability, optimal sampling
protocols, and data interpretability. Indicator performance will be assessed by analyzing existing data
sets and/or by conducting field pilot studies. The ecological resources sampled for field pilots will
encompass the full range of environmental conditions expected in EMAP, but will not necessarily be
sites on the EMAP sampling grid. Issues to be addressed in field pilot studies include the following:
Characterizing ecological resource units to identify the optimal sampling period and to quantify
the most probable temporal variability of each indicator within that sampling period.
Conducting extensive spatial sampling within an ecological resource unit to quantify indicator
spatial variability and select the optimal sampling protocol.
-------
Sampling along gradients, from "polluted" to "unpolluted" or "impacted" to "natural" sites to
evaluate (1) the responsiveness of the indicator to stress, (2) the specificity of the indicator to
particular types of stress or change, and (3) the repeatability of the indicator response in
different regions or ecological resource classes.
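The variance questions raised in these pilot-study tasks amount to partitioning an indicator's variability into within-site (temporal) and among-site (spatial) components. A minimal sketch of that partitioning, using hypothetical repeat-visit data, follows:

```python
# Minimal sketch of partitioning indicator variability into within-site
# (temporal) and among-site (spatial) components from repeat pilot-study
# visits. All site names and values are hypothetical.
from statistics import mean

# visits[site] = indicator values from repeat visits within one sampling period
visits = {
    "site A": [4.1, 4.3, 4.0],
    "site B": [6.8, 7.1, 6.9],
    "site C": [5.2, 5.0, 5.4],
}

site_means = {s: mean(v) for s, v in visits.items()}
grand_mean = mean(site_means.values())

# within-site (temporal) variance: mean squared deviation from each site's mean
within = mean((x - site_means[s]) ** 2 for s, v in visits.items() for x in v)

# among-site (spatial) variance: mean squared deviation of site means
among = mean((m - grand_mean) ** 2 for m in site_means.values())

print(f"temporal (within-site) variance: {within:.3f}")
print(f"spatial (among-site) variance:  {among:.3f}")
```

An indicator whose among-site component dominates its within-site component, as in this toy data, is the kind of indicator that can discriminate site condition despite sampling-period noise.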
Phase 5 (identifying core indicators) is a demonstration, not a research effort. Thus, by the time Phase 5
is implemented, there must be a high degree of confidence that the selected developmental indicators
will indeed ultimately be accepted as core indicators. Phase 5 addresses issues that can only be
answered by testing the developmental indicators using the EMAP sampling frame, methods, and data
analyses. Important functions of Phase 5 include: (1) confirming the pilot study results on a regional
scale, (2) developing the EMAP infrastructure for conducting regional monitoring efforts, (3) confirming
the utility and interpretability of the EMAP outputs for assessing the regional status of ecological
resources, and (4) determining whether the proposed EMAP sampling grid density is sufficient to assess
regional patterns and associations between indicators of ecological condition and anthropogenic
stressors.
Scientific advances and technological improvements will occur throughout the duration of EMAP. This
may necessitate modifying certain indicators and replacing some indicators with others that provide
improved information or equivalent information at reduced cost, as well as adding indicators that
address emerging issues of importance. Phase 6 (reevaluating and modifying the suite of core
indicators), therefore, will be a continual process. EMAP must balance the need for continuity of
methods (to maximize trend detection capability) with the benefits of refining or replacing indicators that
fail to perform optimally. The core suite of indicators will be revised only after a thorough assessment
indicates that it is clearly necessary. The advantages of the new indicator or method must be significant
and well documented. Field pilot studies and demonstration projects will be conducted to calibrate the
relationship between the old and new indicators (or measurement techniques), and both the old and new
(or modified) indicators will be monitored long enough to ensure data comparability before phasing out
the original indicator.
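In its simplest form, the old-versus-new calibration described above is a regression of co-measured values from the overlap period. A minimal sketch, with hypothetical paired readings:

```python
# Sketch of calibrating a replacement indicator against the original one
# using paired measurements from an overlap period, so archived old-method
# values can be expressed in new-method units. Values are hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    return a, b

old_method = [10.0, 12.0, 14.0, 16.0]  # original indicator readings
new_method = [21.1, 24.9, 29.0, 33.0]  # co-measured replacement readings

a, b = fit_line(old_method, new_method)
print(f"new ≈ {a:.2f} + {b:.2f} * old")
```

With such a fitted relationship, trends begun under the old measurement technique can be continued in the new units without losing trend-detection capability.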
INTEGRATION ACROSS EMAP RESOURCE GROUPS
At present, individual EMAP resource groups have the primary responsibility for selecting and evaluating
indicators within each of the seven major ecological resource categories (agroecosystems, arid lands,
forests, wetlands, inland surface waters, the Great Lakes, and estuaries). Integration and coordination of
the indicator development process across EMAP resource groups is necessary, however, to fully achieve
the program goals. The primary approach for achieving an integrated set of indicators is through a
coordinated effort of communication and information exchange. Within EMAP, the Indicator Coordinator,
working closely with the EMAP Integration and Assessment task group, has the responsibility for
xi
-------
facilitating and encouraging these activities as they relate to the selection and evaluation of EMAP
indicators. Five major tasks are planned:
1. Compile and cross reference lists of assessment endpoints, environmental values, stressors,
and indicators proposed by each group, to identify areas of similarity, commonality, or
inconsistency in approach.
2. Conduct one or more workshops involving looking-outward (interaction) matrix exercises to
identify gaps or important linking and stressor indicators that have been overlooked.
3. Develop conceptual models that identify cross-linkages and relationships among ecological
resource categories. Development of integrated conceptual models will (1) help to formalize
expected relationships among indicators in different resource categories, (2) ensure
consistency in indicator definitions, (3) encourage the identification and use of linking,
common, and migratory indicators, (4) identify commonalities in approach and indicator use
among EMAP resource groups, and (5) ensure that all important processes and linkages are
considered within the EMAP monitoring network.
4. As appropriate, propose alternative assessment endpoints and indicators that would provide
similar information, but would be common or improve comparability with endpoints and
indicators being monitored by other EMAP resource groups.
5. For indicators selected by more than one group, examine and compare the proposed field
sampling and measurement methods and suggest modifications as needed to improve
comparability among groups.
Communication among groups will be ensured through regular meetings and workshops, and through
the exchange of written materials and research plans. The lists, matrices, and models related to each
of the above tasks will be continually updated, as needed, as the indicator development process
within each EMAP resource group advances and evolves.
Indicators monitored by different resource groups may not be measured during the same sampling
period or be co-located in the same sampling unit. This could result in both temporal and spatial
displacement of indicators and could hinder diagnostic analyses examining linkages that cross resource
boundaries. Data analysis techniques that rely primarily on regional-scale data aggregations and
associations must therefore be developed to deal with non-co-located and/or spatially displaced data.
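One such regional-scale approach is to aggregate each indicator to common regional units before examining associations. A minimal sketch, with hypothetical site data:

```python
# Sketch: associating two non-co-located indicators by aggregating each to
# a common regional scale before comparing them. Region names and values
# are hypothetical.
from statistics import mean

# stressor measured at one set of sites, response at another, but both
# tagged with the region in which each site falls
stressor_sites = [("North", 8.2), ("North", 7.8), ("South", 3.1), ("South", 2.9)]
response_sites = [("North", 40.0), ("North", 44.0), ("South", 71.0), ("South", 69.0)]

def regional_means(site_values):
    """Group site values by region and average within each region."""
    by_region = {}
    for region, value in site_values:
        by_region.setdefault(region, []).append(value)
    return {r: mean(v) for r, v in by_region.items()}

stress = regional_means(stressor_sites)
response = regional_means(response_sites)

# paired regional values can now be examined for association
pairs = [(stress[r], response[r]) for r in sorted(stress)]
print(pairs)
```

Aggregation discards site-level detail, but it yields co-registered regional values from indicators that were never sampled at the same locations or times.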
THE INDICATOR COORDINATOR
As noted earlier, the responsibility for coordinating the indicator development process among ecological
resource groups has been assigned to the Indicator Coordinator. The Indicator Coordinator will play a
pro-active role in facilitating communication and the flow of information among groups and promoting
the implementation of the indicator development strategy. Four primary functions for the Indicator
Coordinator have been identified:
xii
-------
1. Centralized information point/communication: The Indicator Coordinator will ensure that all
relevant information is continually exchanged among the EMAP resource groups and task
groups. The Indicator Coordinator will review all documents relating to indicators, and pass on
relevant information to the respective EMAP resource groups. Also, the Indicator Coordinator
will serve as the centralized contact point and source of information on EMAP indicators for the
external scientific community.
2. Strategy implementation: The Indicator Coordinator will be responsible for ensuring that the
steps and procedures for inter-group integration, outlined earlier, are completed. The Indicator
Coordinator will work with the EMAP resource groups and the Integration and Assessment task
group to (1) develop integrative conceptual models to assist with indicator selection and
evaluation, (2) identify linking indicators, and (3) encourage the use of compatible sampling
and measurement methods.
3. Indicator data base: The Indicator Coordinator will be responsible for technical oversight of the
content, accuracy, and completeness of the indicator data base as a record of the indicator
development process.
4. Research review: Research to identify new indicators or evaluate existing indicators within
EMAP will be initiated and directed primarily by the EMAP resource groups. However, the
Indicator Coordinator will play an active role in reviewing research proposals as well as final
project reports. In addition, the Indicator Coordinator will maintain an updated, integrated
listing of priority research needs relating to EMAP indicators. The Indicator Coordinator will
also monitor ongoing and planned research on indicators outside EMAP by maintaining regular
contact with appropriate research agencies and programs.
xiii
-------
xiv
-------
1. INTRODUCTION
The Environmental Monitoring and Assessment Program (EMAP) has been initiated by the U.S.
Environmental Protection Agency's (EPA) Office of Research and Development. EMAP is intended to be
an umbrella program under which EPA can work as part of an interagency effort to monitor and
periodically assess the condition of the ecological resources of the United States. When fully
implemented, EMAP will be an integrated, multi-resource program that can be used to quantitatively
determine the status of ecological resources at various geographic scales and over long periods of time,
and to detect changes in the status of these resources on a regional and national basis.
EMAP's success depends on its ability to characterize ecological condition (or health), and to identify
likely causes of adverse changes. Because concepts such as ecological "condition" or "health" are often
difficult to measure directly, EMAP will monitor a set of environmental indicators that, acting as surro-
gates for less easily measured ecosystem characteristics, will collectively describe the overall condition
of an ecological resource. An indicator is defined as an environmental attribute that, when measured or
quantified over appropriate temporal and spatial scales through field sampling, remote sensing, or com-
pilation of existing data, quantifies the magnitude of a stress, the status of a habitat characteristic, the
degree of exposure of a resource to a stressor, or the degree of response of an ecological resource to
an exposure. Because of the importance of indicators in interpreting ecosystem condition, the selection,
development, and evaluation of these indicators for use in a broad-scale regional status and trends net-
work are major EMAP activities.
This document provides an overview of EMAP, discusses the role of indicators in EMAP, and outlines a
strategy for indicator development and evaluation within EMAP. Its objectives are twofold: (1) to
present general guidelines, criteria, and procedures for indicator selection and evaluation, and (2) to
establish an organizational framework for coordinating and integrating indicator development and use
within EMAP. It should serve both to promote internal consistency among EMAP resource groups and
to provide a basis for internal and external review of proposed EMAP indicators.
This document focuses on indicator development and its use within EMAP, but the issues discussed and
the guidelines developed may have broader applications. Detailed, process-oriented research on indi-
cators will be conducted within other related research programs (e.g., EPA's Core Research Program).
Since these efforts can be expected to develop information that can complement or supplement the
capabilities of the EMAP indicators, EMAP's objectives will be best served through a close linking of the
EMAP indicator development process to these ongoing and planned research programs.
Conceptually, indicator development is a process of sequential hypothesis testing to determine whether
proposed indicators can be used to estimate ecological conditions on a wide spatial scale and
over a long time period, using synoptic survey monitoring methods. When applied to a wide array of
potential indicators, this testing process is intended to develop an integrated suite
of indicators for monitoring the fundamental elements and processes of ecosystems in different cate-
gories of ecological resources (e.g., forests, wetlands). The procedures described in this document are
intended to guide this testing process from the identification of potentially useful indicators (candidate
indicators) through adoption of acceptable indicators for use in the monitoring program (core indica-
tors). Implementation of this strategy, and the timing of the development of indicators within each of the
EMAP ecological resource groups, is the responsibility of the resource groups, with the assistance of the
EMAP Indicator Coordinator.
In addition to developing indicators for routine monitoring, EMAP resource groups will need to track
fundamentally catastrophic stressors (e.g., hurricanes, wildfires, El Nino, volcanic eruptions) that could
affect either the resource condition or the underlying structure and function of the ecosystems being
monitored by the groups. Although catastrophes may or may not have influenced current conditions,
their occurrence may invalidate the relationships on which the resource groups have developed their
indicators. Tracking such events will allow the resource groups to explain radical changes in resource
condition, and to evaluate the adequacy of their indicator suite, as necessary. Procedures for tracking
these catastrophic events are not addressed by the Indicator Development Strategy.
Because EMAP was initiated in 1989, it is still in the early stages of program design and planning. As
the program matures, the indicator development process is also likely to evolve and improve. This
document and the guidelines presented are not intended to be rigid; they will be revised and updated as
needed. The document is formatted as follows:
Section 2 provides background on EMAP and the role of indicators in ecological monitoring.
Section 3 presents an overview of the indicator development strategy.
Section 4 provides detailed procedures for selecting and evaluating indicators and
documenting decisions made during these evaluations.
Section 5 discusses procedures for inter-group coordination and integration.
Section 6 defines the role of the EMAP Indicator Coordinator.
Section 7 outlines procedures for initiating indicator research to support the EMAP monitoring
network.
Section 8 lists the references cited.
Some of the background material presented in this report relies heavily on Hunsaker and Carpenter
(1990). In some cases, text from this source has been modified only slightly to reflect recent changes in
EMAP or to alter its emphasis to address indicator development issues. For additional information
regarding the use of indicators in the various EMAP activities, readers should refer to that document and
to the research plans for each of the ecological resource groups.
In the process of writing this document, we received many comments, ideas, and expressions of
concern about issues that were either beyond the scope of the indicator strategy or impossible to
address fully, given our limited resources and time. We expect that EMAP participants will consider
these issues and take necessary actions, and that new developments will be reflected in revised versions
of this strategy. Some of the issues of greatest concern that require further attention are:
EMAP indicator concepts and terminology
Roles and responsibilities of the Indicator Coordinator
Uses and development of conceptual models for selecting, explaining, and interpreting
indicators (they need to be more specific and better adapted to spatial and temporal scales
relevant to EMAP)
Ways to express ecosystem condition in terms of "health"
Coordination and integration of the use of indicators among EMAP groups, one goal of which
is to provide a stronger ecosystem perspective
2. BACKGROUND
2.1 EMAP OVERVIEW
In 1988, EPA's Science Advisory Board recommended implementing a program within the Agency to
monitor ecological status and trends, as well as to develop innovative methods for anticipating emerging
problems before they reach crisis proportions. More recently, the Administrator of EPA established an
agency priority for the 1990s of measuring for results; that is, confirming that the nation's annual
expenditure on environmental issues is producing significant results in maintaining and improving
environmental quality (Reilly, 1989). In an effort to identify emerging problems before they become
widespread or irreversible, and to foster evaluation of the success of policies and regulatory programs,
EPA's Office of Research and Development began planning the Environmental Monitoring and Assess-
ment Program (EMAP). Initiated in 1989, EMAP was created in response to the need for better
assessments of the condition of the nation's ecological resources.
2.1.1 Perspective
EMAP is intended to be an umbrella program, under which EPA participates in an interagency effort (i.e.,
federal, state, local and private agencies) to monitor and report on the condition of the nation's
ecological resources. When fully implemented, EMAP is intended to answer six critical questions:
What is the current status, extent, and geographic distribution of our ecological resources (e.g.,
forests, agroecosystems, arid lands, wetlands, lakes, streams, and estuaries)?
To what levels of environmental stress and pollutants are these ecological resources exposed
and in what regions are the problems most severe?
What proportions of these resources are degrading or improving, where and at what rate?
What are the possible reasons for adverse or beneficial conditions?
What ecological resources are at current and future risk from environmental stresses and
pollutants?
Are adversely affected ecological resources improving overall in response to control and
mitigation programs?
2.1.2 EMAP Objectives
To answer the questions in Section 2.1.1, EMAP has adopted an interdisciplinary approach to design
and implement an integrated resource monitoring program with the following objectives:
Estimate current status, extent, changes, and trends in indicators of the condition of the
nation's ecological resources on a regional basis with known confidence.
Monitor indicators of pollutant exposure and habitat condition and seek associations between
human-induced stresses and ecological condition that identify possible causes of adverse
effects.
Provide periodic statistical summaries and interpretive reports on status and trends of the
environment to the EPA Administrator and the public.
Meeting the first two of these objectives requires identifying indicators of the condition of ecological
resources, pollutant exposure, habitat loss or degradation, and both human and natural sources of
stress (e.g., climate change, introductions of exotics) that might be associated with degraded or
changing ecological condition. This report addresses the strategy that will guide the development of
potential indicators and the procedures that will be used to develop a comprehensive set of indicators
for implementation in EMAP.
2.1.3 EMAP Approach
EMAP has been designed to provide information needed to conduct a 'top-down' or effects-driven
approach to risk assessment (Messer, 1990). In 'top-down' risk assessments, the observation of an
effect stimulates efforts to identify plausible hazards or stressors that may have caused the effect, by
focusing on changes in system status and associations among various indicators of stress exposure and
response. This approach enhances the likelihood of detecting cumulative impacts of natural and
anthropogenic influences on ecological resources.
2.1.4 Design Attributes
EMAP is intended to provide reliable and unbiased estimates of the status and trends in the condition of
our nation's ecological resources by monitoring indicators of these ecological resources and assessing
the relationships among these indicators. To accomplish this goal requires the availability of indicator
data of known precision and quantifiable confidence limits at regional scales of resolution. In addition,
these data must cover temporal periods of years to decades. EMAP has been designed to provide a
probability-based sample of ecological resource condition that can yield the necessary data quality. The
following paragraphs summarize the major attributes of the design in the context of indicator develop-
ment activities. For additional detail on the EMAP design, see Overton et al. (1990).
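The data-quality requirement described above (status estimates with known confidence limits) can be
illustrated with a simple normal-approximation confidence interval for the proportion of a resource in a
given condition, as estimated from a probability sample. The following Python sketch uses invented
numbers and is not an EMAP procedure:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation confidence interval for a proportion
    estimated from a probability sample of size n (z=1.96 gives ~95%)."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical example: 120 of 400 sampled lakes rated subnominal.
p, lo, hi = proportion_ci(120, 400)
print(f"Estimated proportion subnominal: {p:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# prints: Estimated proportion subnominal: 0.30 (95% CI 0.26-0.34)
```

Note that the interval narrows with sample size, which is one reason the grid density discussed below
can be adjusted for subpopulations of interest.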
The EMAP design is hierarchical, and has four distinct tiers (Figure 2-1). Tier 1 is the broadest level and
has the greatest spatial coverage among the tiers. This tier describes the extent of ecological resources
Figure 2-1. Four-tier structure of EMAP and the major activities associated with each of the tiers
(after Paulsen et al., 1990). [Figure not reproduced: Tier 1, Landscape Characterization; Tier 2,
Status and Trends Monitoring; Tier 3, Sampling at Increased Spatial or Temporal Resolution;
Tier 4, Process Research.]
available for sampling and monitoring in the other tiers. The landscape and ecological resources within
40-km2 hexagons surrounding approximately 12,600 gridpoints in the conterminous United States will be
characterized, based largely on remote sensing and existing maps. Similar sampling grids will eventually
be developed for Alaska and Hawaii. Tier 2 uses probability methods to select a subset of the ecolog-
ical resource units contained within these 40-km2 hexagons, in proportion to their occurrence and a
subjective assessment of their importance. The Tier 2 subset will be used for field sampling and moni-
toring of indicators of ecological condition. Tier 3 activities provide for increased sampling intensity,
either temporally or spatially, to ensure adequate information about status and trends for subpopulations
of interest (e.g., redwood forests, low alkalinity lakes) or to provide diagnostic information beyond that
available from Tier 2 efforts. Tier 4 includes process-level research that may be conducted at specific,
nonrandomly selected sites to develop new methods or to pilot test potential indicators.
Tier 2 activities rely heavily on the specific indicators implemented in each of the components of the
program. The proposed Tier 2 sampling design involves periodic visitation to selected points in the
nationwide grid to collect samples and data on ambient conditions. The nominal density of these grid
points is one point per 640 km2, which can be increased or decreased to meet specific needs. Selection
of specific sites for monitoring will strongly depend on the location of the ecological resources within the
grid. The grid used in Tier 2 activities can be extended to provide global as well as national coverage
and can be enhanced for state coverage.
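The Tier 2 selection of resource units "in proportion to their occurrence and a subjective assessment of
their importance" is, in effect, weighted probability sampling. The following Python sketch illustrates the
idea with hypothetical units and weights; it is a simplified sequential draw whose inclusion probabilities
are only approximately proportional to weight, not the actual EMAP grid design (for that, see Overton et
al., 1990):

```python
import random

def weighted_sample(units, weights, k, seed=0):
    """Select k distinct units, each draw made with probability
    proportional to its weight (sequential draws without replacement)."""
    rng = random.Random(seed)
    units = list(units)
    weights = list(weights)
    chosen = []
    for _ in range(k):
        total = sum(weights)
        r = rng.uniform(0, total)
        cum = 0.0
        for i, w in enumerate(weights):
            cum += w
            if r <= cum:
                # Remove the selected unit so it cannot be drawn again.
                chosen.append(units.pop(i))
                weights.pop(i)
                break
    return chosen

# Hypothetical hexagon contents: weight = occurrence x judged importance.
units = ["lake_A", "lake_B", "wetland_C", "stream_D", "forest_E"]
weights = [5, 1, 3, 2, 4]
print(weighted_sample(units, weights, k=2))
```

Heavily weighted units (here, lake_A and forest_E) are the most likely to appear in the Tier 2 subset.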
Other agencies, including the U.S. Departments of Agriculture, Commerce, Energy, and Interior, have
active, ongoing monitoring programs that address some of EMAP's needs for certain data. EMAP will
develop procedures for directly integrating data and components from these monitoring programs into
the EMAP grid, where the form and nature of the data are appropriate. In cooperation with other
agencies, EMAP will supplement existing networks to fill critical data gaps. The spatial and temporal
constraints of the sampling design are discussed in Overton et al. (1990).
2.1.5 EMAP Activities
The development of ecological indicators is one of eight primary activities under EMAP, as shown in
Figure 2-2. In addition to the strategic development, evaluation, and testing of indicators (Ecological
Indicators), these activities include:
Design and evaluation of integrated statistical monitoring frameworks and protocols for
collecting data (Monitoring Systems Design).
Nationwide characterization of the extent and location of ecological resources (Landscape
Characterization).
Demonstration studies and implementation of an integrated sampling network (Operational
Monitoring).
Development of quality assurance and quality control procedures, and new methods (Methods
Development).
Data storage, retrieval, management, and dissemination (Information Management).
Statistical analytical procedures (Environmental Statistics).
Periodic statistical reports and interpretive assessments on the status and trends in condition of
the nation's ecological resources (Integrated Assessments).
The arrows in Figure 2-2 depict direct information linkages or dependencies that require interaction
among these activities. Ecological indicators are clearly linked to all other activities in EMAP, which
denotes the fundamental importance of the indicator development process to the overall success of
EMAP.
Figure 2-2. Potential interactions among the various elements of the EPA's Environmental Monitoring
and Assessment Program. Notice the heavy interactions of Ecological Indicators with the other elements.
2.2 ECOLOGICAL RESOURCE CLASSIFICATION
EMAP will provide regional and national estimates of the condition of ecological resources that have
been hierarchically classified into ecological resource categories, classes and, in many cases, sub-
classes. The seven ecological resource categories include broadly defined components of the nation's
ecological resources: surface waters, the Great Lakes, estuaries, wetlands, forests, agroecosystems,
and arid lands. EMAP activities in each of these categories are the responsibility of an ecological
resource group. Specific ecosystem types within the category (e.g., oak-hickory forest), are referred to
as ecological resource classes. Subclasses of ecological resource classes represent further subdivi-
sions, such as the oak-hickory-pine subclass of the oak-hickory class of the forest ecological resource
category. EMAP's objectives require that the questions posed in Section 2.1.1 be answered for all of the
resource classes.
2.3 ROLE OF INDICATORS IN ECOLOGICAL MONITORING
Determining the status of ecological resources at a known statistical level of confidence requires a sound
statistical design, indicators that cover the observed range of ecological conditions, and methods for
distinguishing among nominal, subnominal, and marginal resource conditions. The following sections
discuss indicators and their usage in EMAP as well as in broader applications. This section relies heavily
on the concepts discussed in Hunsaker and Carpenter (1990).
2.3.1 Endpoints as the Foundation for Assessment
EMAP will monitor the status and trends in ecosystem characteristics, or attributes. It is extremely
important that the attributes being monitored in EMAP represent valued characteristics of the ecosystem
and that changes in these attributes reflect the kinds of changes actually occurring in the ecosystem.
Identification of valued attributes provides the initial step in the development of indicators, as described
in Section 4. The following paragraphs discuss important concepts regarding ecosystem attributes in
relation to indicator development.
Valued ecosystem attributes can be referred to as endpoints of concern, or assessment endpoints
(Suter, 1990). Assessment endpoints, comprising both an entity (e.g., species richness) and an attribute
(e.g., sustainable), are formal expressions of the actual environmental value that is to be protected, and
as such should have unambiguous operational definitions, have social or biological relevance, and be
accessible to prediction or measurement (e.g., extent of habitat for an endangered species, availability of
sufficient habitat diversity to support big game, crop productivity). In many cases, they are identical to
the ecosystem attribute of concern (e.g., crop productivity), and measurement of the desired ecosystem
attribute is straightforward.
However, assessment endpoints often cannot be directly or conveniently measured. In such cases, indi-
cators are used as surrogates for the assessment endpoints. To be useful, measurement endpoints
must correspond to or be predictive of the assessment endpoint (e.g., reproductive performance of an
endangered species, ecotone/edge ratio, crop biomass). Measurement endpoints can be thought of as
indicators of condition for the valued ecosystem attribute. For example, lakes are valued, among other
reasons, for their recreational fishing; the associated assessment endpoint is fishability, and the
measurement endpoint is catch of legal-sized sportfish per standard unit effort. Lakes are also valued
for swimming, aesthetics, and drinking water supply. Each of these attributes is affected by the lake's
trophic status. Thus, trophic status has been identified as a major measurement endpoint for surface
waters. Trophic status may also be used as an assessment endpoint if the trophic status of the eco-
logical resource is the actual attribute of concern (e.g., oligotrophic high-altitude lakes).
2.3.2 Indicators within EMAP
A major use of indicators in EMAP will be to assess the condition, or health, of ecological resources.
Rapport (1989) lists three approaches or criteria commonly used to assess ecosystem health: (1) identi-
fication of systematic indicators of ecosystem functional and structural integrity, (2) measurement of
ecological sustainability or resiliency (i.e., the ability of the system to handle stress loadings, either
natural or anthropogenic), and (3) an absence of detectable symptoms of ecosystem disease or stress.
Thus, ecological health is defined as both the occurrence of certain attributes deemed to be present in a
healthy sustainable resource and the absence of conditions that result from known stressors or prob-
lems affecting the resource.
An objective of EMAP monitoring is to determine the proportion of the resource in good condition or, by
human analogy, healthy, as opposed to ecosystems that are unhealthy, or in poor condition. To avoid
semantic problems that could arise from using words such as good or acceptable, the terms nominal,
subnominal, and marginal have been adopted to refer to healthy and unhealthy conditions and the
transition between these conditions, respectively. Ongoing investigations within the scientific community
and EPA are addressing ways to better measure and express ecosystem condition.
A key element of EMAP's approach is the linkage of indicators to assessment endpoints. Important
information about assessment endpoints falls into one of the following categories: (1) condition of the
ecosystem, (2) exposure of the endpoint to potential stressors, and (3) availability of conditions
necessary to support the desired state of the endpoint. To provide appropriate linkage between
assessment endpoints and indicators, indicator development in EMAP will produce indicators that fall
into one of four types (Hunsaker and Carpenter, 1990):
1. Response indicators represent characteristics of the environment measured to provide evi-
dence of the biological condition of a resource at the organism, population, community, eco-
system, or landscape level of organization.
2. Exposure indicators provide evidence of the occurrence or magnitude of contact of an eco-
logical resource with a physical, chemical, or biological stressor.
3. Habitat indicators are physical, chemical, or biological attributes measured to characterize
conditions necessary to support an organism, population, community, or ecosystem (e.g.,
availability of snags; substrate of stream bottom; vegetation type, extent, and spatial pattern).
4. Stressor indicators are natural processes, environmental hazards, or management actions that
effect changes in exposure and habitat (e.g., climate fluctuations, pollutant releases, species
introductions). Information on stressors will often be measured and monitored by programs
other than EMAP.
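These four indicator types can be represented as a simple tagged catalogue. The Python sketch below
is illustrative only: the example indicators are drawn loosely from this document, and the class layout is
an assumption, not an EMAP data standard:

```python
from dataclasses import dataclass
from enum import Enum

class IndicatorType(Enum):
    RESPONSE = "response"   # biological condition of a resource
    EXPOSURE = "exposure"   # contact of a resource with a stressor
    HABITAT = "habitat"     # conditions needed to support biota
    STRESSOR = "stressor"   # processes/hazards driving exposure and habitat

@dataclass
class Indicator:
    name: str
    type: IndicatorType
    resource_category: str  # e.g., "forests", "surface waters"

catalogue = [
    Indicator("index of biotic integrity", IndicatorType.RESPONSE, "surface waters"),
    Indicator("tissue contaminant concentration", IndicatorType.EXPOSURE, "estuaries"),
    Indicator("snag availability", IndicatorType.HABITAT, "forests"),
    Indicator("pollutant release inventory", IndicatorType.STRESSOR, "agroecosystems"),
]

# Group the catalogue by indicator type for review.
by_type = {t: [i.name for i in catalogue if i.type is t] for t in IndicatorType}
print(by_type[IndicatorType.HABITAT])  # → ['snag availability']
```

A structure of this kind also supports the indicator data base described in Appendix A, where each
entry must record its type and resource category.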
Potential indicators are identified using conceptual models of ecosystems, followed by systematic evalu-
ation and testing to ensure their linkages to the assessment endpoints and their applicability within
EMAP, as described in Section 4. The models used may be based either on current understanding of
the effects of stresses on ecosystems or on the structural, functional, and recuperative features of
healthy ecosystems.
2.3.3 Indicator Utilization
The evaluation of ecosystem condition will not rely on any single indicator, but on the full set of
monitored response, exposure, habitat, and stressor indicators. One approach to using complete sets of
indicator information is the development of formal indices that composite or aggregate more than one
indicator into a single variable. For example, Karr et al. (1986) developed the Index of Biotic Integrity
(IBI) to describe conditions in freshwater streams. Properly developed, indices of ecosystem condition
can more easily be compared across regions than can the measurements from which they are derived
(e.g., Hughes, 1989). However, the process of indicator aggregation can be highly controversial and
mathematically complex; the results tend to be extremely dependent on the indicators and the aggre-
gation procedures used (Westman, 1985). The utility of indices for EMAP assessments of ecosystem
condition is an important concept for further investigation in the indicator development activities of each
ecological resource group.
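As an illustration of such aggregation, the sketch below scores hypothetical metrics on the 1/3/5
convention used in IBI-style indices and sums them into a single value. The metrics and thresholds are
invented for this example; the code is a schematic of the aggregation pattern, not Karr's actual IBI:

```python
def score_metric(value, poor, good):
    """Assign a 1/3/5 score by comparing a metric to two thresholds
    (the scoring convention used in IBI-style indices)."""
    if value < poor:
        return 1
    if value < good:
        return 3
    return 5

def composite_index(site, thresholds):
    """Sum per-metric scores into a single index value."""
    return sum(score_metric(site[m], lo, hi) for m, (lo, hi) in thresholds.items())

# Hypothetical stream site and (poor, good) thresholds per metric.
thresholds = {
    "native_species": (5, 12),        # species counts
    "intolerant_species": (1, 4),
    "percent_insectivores": (20, 45)  # percent of individuals
}
site = {"native_species": 10, "intolerant_species": 2, "percent_insectivores": 50}
print(composite_index(site, thresholds))  # → 11 (scores 3 + 3 + 5)
```

The sensitivity noted in the text is visible even here: shifting a single threshold can move a site's
index by two points, so threshold choices must be documented and defended.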
In addition to knowing resource condition, it is also desirable to identify plausible causes of degrading
conditions. EMAP monitoring data will be used to examine the statistical association, on a regional
scale, between ecosystem conditions and plausible causes of these conditions, using stressor, exposure,
and habitat indicator data. Although these correlative analyses cannot establish causality, they do serve
to narrow the range of probable causes for observed regional patterns and trends in resource status.
More detailed monitoring and research efforts to determine cause-and-effect relationships (e.g., activities
in Tiers 3 and 4) can then be focused on those geographical areas, stressors, and resource classes of
greatest concern.
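At their simplest, such association analyses amount to a rank correlation between an exposure or
stressor indicator and a response indicator across sampled sites. The following pure-Python sketch
computes a Spearman correlation on invented regional data; as noted above, such a correlation
suggests, but cannot establish, causation:

```python
def rank(values):
    """Return 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for a tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical regional data: deposition (exposure) vs. species richness (response).
deposition = [2.1, 5.4, 8.0, 3.3, 9.2, 6.7]
richness = [24, 18, 11, 22, 9, 14]
print(round(spearman(deposition, richness), 2))  # → -1.0
```

A strong negative association like this one would flag acidic deposition as a plausible cause to be
pursued with the more intensive Tier 3 and Tier 4 work described above.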
Interest in the use of indicators and indices extends beyond EMAP. Ott (1978) identified six basic uses
of environmental indices of ecosystem health:
1. Prioritizing funding for dealing with environmental problems
2. Ranking locations (regional comparisons)
3. Conducting environmental trend analysis
4. Providing public information
5. Condensing and focusing scientific research
6. Enforcing standards
The National Academy of Sciences (1975) also highlighted the need for ecological indicators to monitor
the environment (ecosystem health) and to judge the effectiveness of environmental protection pro-
grams. Research programs evaluating the extent and magnitude of effects from specific stressors (e.g.,
ozone effects on crop production, pesticide effects on nontarget plants and animals, acidic deposition
effects on terrestrial and aquatic communities) often rely extensively on indicators. Finally, it is expected
that ecological indicators will become increasingly useful in investigations into the effectiveness of
alternative environmental management policies.
Further information on the use and interpretation of indicators within EMAP is provided within Hunsaker
and Carpenter (1990).
3. FRAMEWORK FOR INDICATOR DEVELOPMENT
Indicators in EMAP must be developed with consistency in strategy across ecological resource
groups, completeness in the overall set of indicators (so that significant ecological changes on regional
scales do not escape detection), and creativity in the program over time (so that the program can evolve
to accommodate new knowledge). Strategic planning of indicator development must occur both within
and among EMAP resource groups. These perspectives are addressed in the following two sections.
3.1 GENERAL STRATEGY FOR AN EMAP RESOURCE GROUP
This section provides an overview of the general strategy of indicator development for an ecological
resource group; the strategy is considered in much greater detail in Section 4. Figure 3-1, an expansion
of Figure 2-7 in the EMAP Indicators Report (Hunsaker and Carpenter, 1990), summarizes the steps
each EMAP resource group must complete to advance its indicators to the point of regional and national
implementation in EMAP. As shown in Figure 3-1, the overall process of indicator development within
each EMAP resource group consists of six phases:
1. Phase 1: Identify environmental values, apparent stressors, and assessment endpoints.
2. Phase 2: Develop a set of candidate indicators that are linked to the identified endpoints and
are expected to be responsive to stressors.
3. Phase 3: Screen the candidate indicators to identify those with reasonably well established
data bases, methods, and responsiveness to be further evaluated as research indicators.
4. Phase 4: Quantify the expected performance of research indicators to identify probationary
core indicators.
5. Phase 5: Quantify the performance of probationary core indicators on a regional scale to
select core indicators.
6. Phase 6: Reevaluate and modify the set of core indicators.
Although this strategy has been written to address the development of individual indicators, it should
also be used by each resource group to assess its full suite of indicators. Often multiple indicators may
be under development at the same time. The objective of this process is to develop a comprehensive
suite of indicators that complement each other and provide a clear picture of the status and trends of
ecological resource condition through time. It is also anticipated that, due to limited financial or human
resources, time, or scientific knowledge, an ecological resource group may concurrently be developing
multiple indicators through different phases of this process, and at the same time may have suspended
evaluation of other indicators until additional time, money, or knowledge becomes available.
Figure 3-1. The indicator development process, showing the objectives, methods, and
evaluation techniques used in each phase. [Figure not reproduced: in Phases 1 and 2, issues and
assessment endpoints are identified and candidate indicators developed through expert knowledge,
literature review, and conceptual models; in Phase 3, candidates are prioritized against qualitative
criteria and peer review to select research indicators; in Phase 4, expected performance is evaluated
(analysis of existing data, simulations, pilot tests, indicator comparisons, example assessments)
against quantitative criteria to select developmental indicators; in Phase 5, actual performance is
evaluated at a regional scale through regional demonstration projects and statistical summaries to
select core indicators; in Phase 6, regional and national monitoring is implemented with periodic
reevaluation, correlation of old indicators with proposed replacements, reassessment of endpoints,
and feedback from peers and agencies.]
Each EMAP resource group's indicator program will be kept on track through the use of research plans,
peer reviews, and interaction with other resource groups. Each EMAP resource group will produce
research and monitoring plans that will be subject to written peer review. These plans will describe the
evidence and rationale for all decisions made within each phase, identify the specific types of information
needed to complete the evaluation of indicators being considered under each of the phases, and pre-
sent the next year's research plans for each phase. Annually, the current status of all indicators and
rationale for all decisions made during each phase will be documented in an indicator data base
(described in Appendix A). This data base will be used to facilitate rapid review of both the current state
and the evolution of all indicators used by each of the ecological resource groups.
In addition to annual peer reviews of program status and progress, every five years a peer review work-
shop will be held to examine each ecological resource group's comprehensive five-year research and
monitoring plans. These plans will describe EMAP's results to date, outline future directions and
promising new indicators, and provide research plans for each of the six indicator development phases.
The first two phases of the indicator development process are meant to generate ideas for endpoints
and indicators. The processes used in these phases should therefore encourage broad-scale, lateral
thinking, with the focus on breadth rather than depth of coverage.
Phase 1 (identifying environmental values, potential stressors, and assessment endpoints) requires a
broad perspective on both desired ecosystem attributes (as expressed by resource managers, scientists,
private industry, legislators, and the general public) and ecosystem stresses (which may occur on local
to global spatial scales, and over short- to long-term temporal scales). Proper identification of
assessment endpoints also requires well-developed conceptual models of the ecosystems of concern, to
ensure that identified endpoints are connected to the current and anticipated stresses of concern.
Developing these models is one of the most important aspects of the indicator development process, and
may be one of the most difficult. The process of developing these models, and the considerations that
each model must address (e.g., the degree of quantification required, whether the models focus primarily
on ecosystem structure or on processes), are the responsibilities of each of the EMAP resource groups.
A summary of the environmental values initially identified by the different EMAP ecological resource
groups (Table 3-1) shows a strong overlap, reflecting commonality in the perceptions of key issues
among these groups. This commonality highlights the need for integration and coordination among the
ecological resource groups to ensure that all important information is collected efficiently, as discussed
in Sections 5 and 6, respectively.
Phase 2 (identifying candidate indicators) similarly requires a broad sampling of scientific opinion,
through both detailed literature reviews and interactions with scientists conducting relevant research.
Table 3-1. Environmental Values Selected by Different EMAP Resource Groups(a)
EMAP Resource Group Environmental Values
Estuaries Ability to support harvestable and contaminant-free fish,
maintenance of habitat structure, aesthetics
Surface waters Fishability, biological integrity, trophic condition
Wetlands Area of wetlands, water quality functions, water quantity
(hydrologic) functions, ecological support for aquatic and
terrestrial organisms
Forests Spatial extent, sustainability, productivity, aesthetics,
biodiversity
Arid lands Sustainability, productivity, biodiversity, water balance
Agroecosystems Productivity, sustainability, biodiversity, contamination
Great Lakes Fishability, water quality, trophic conditions
(a) These values direct the selection of assessment endpoints and response indicators; in some cases they are synonymous
(after Hunsaker and Carpenter 1990).
This is a continuing process, as scientific and technological advances will generate new candidate indi-
cators, or improve the feasibility of previously rejected indicators (see return arrows in Figure 3-1). In
identifying candidate indicators, it is useful to consider various components of ecosystem health (e.g.,
species composition, physical structure, and ecological function) and various levels of biological organ-
ization (e.g., landscape, ecosystem, community, population, or individual). As in the previous phase,
each ecological resource group should use one or more explicit conceptual models linking appropriate
response indicators and stressor information with assessment endpoints. Conceptual models should
serve as reference points, both for identifying needed indicators for assessing ecological resource
condition, and for guiding data analyses and pilot tests during later phases.
The next three phases of the indicator development process provide critical evaluation and iterative
filtering of the set of candidate indicators to obtain a defensible, practical set of core indicators.
Whereas Phases 1 and 2 are designed to include all possible relevant indicators, the next three phases
are designed to systematically eliminate indicators that fail to satisfy specific criteria for adoption, that
are not amenable to complete evaluation, or that do not perform as well as alternative indicators of the
same resource conditions.
The process of testing and prioritization is guided by a set of indicator selection criteria, by peer review
of the decisions, and by the research plans prepared in each phase. Indicator selection criteria were
developed through workshop discussions with all EMAP ecological resource groups using the criteria
presented in the EMAP Indicators Report (Hunsaker and Carpenter, 1990) as a starting point. These cri-
teria are described in detail in Section 4.3; they cover issues such as responsiveness, appropriateness
for regional application, ability to integrate effects, quantifiable spatial and temporal variation,
interpretability, and cost effectiveness. The evidence and rationale needed to satisfy the indicator
selection criteria and the peer review process that oversees these decisions become more stringent with
each phase. For example, evidence from the literature regarding responsiveness along laboratory or
field exposure gradients is sufficient at the research stage, but quantitative evidence of responsiveness in
most of a region's habitats is required before the indicator can be accepted as developmental. Also, the
emphasis of the criteria shifts from concerns about the indicator's responsiveness, to issues relating to
the feasibility of sampling the indicator using the EMAP integrated monitoring design. During any of
these phases, evaluation of a specific indicator may be suspended due to insufficient data or technology
for evaluation, or an indicator may be either accepted or rejected for the next phase of evaluation or
implementation.
Phase 3 (selecting research indicators) relies on literature review and expert knowledge (e.g., work-
shops) to provide further qualitative evaluations and to initiate quantitative indicator assessments.
Quantitative assessments in this phase should focus on whether or not adequate information exists (or
can be readily obtained) to assess the likelihood that candidate indicators will be able to satisfy the
indicator selection criteria, and to begin evaluating the expected responsiveness of the candidate
indicators to changes in the assessment endpoints. Key considerations at this stage are whether or not:
(1) the indicator can be measured effectively using an index sample, and (2) there is strong spatial and
temporal evidence of the responsiveness of the indicator in monitoring an identified assessment end-
point. The primary product of this phase is the set of research indicators for evaluation in Phase 4. All
decisions are documented in the indicator data base and the annual research plan.
In Phase 4 (selecting probationary core indicators), the selection criteria are applied more stringently to
quantitatively address questions concerning spatial and temporal variability, data interpretability (in
regard to EMAP's objectives to monitor status and detect trends and associations), and proposed
methods. Satisfying the criteria quantitatively will require more intensive analysis than was provided in
previous phases. Such analyses may require a variety of activities, such as (1) intensive searches for
useful (often unpublished) data bases, (2) analyses of these data, (3) simulations to determine minimum
detectable trends, preferred index periods, ability to track sensitive subpopulations, etc., (4) example
assessments to evaluate the utility of the complete suite of indicators, and (5) field studies to test and
evaluate indicators across a suite of regional habitats (pilot studies). All results and decisions will be
documented in periodically updated research plans and annually in the indicator data base. The primary
product of this phase is the set of probationary core indicators.
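The minimum-detectable-trend simulations listed under Phase 4 can be sketched in modern terms. The following Python fragment is illustrative only; the function names, the one-sided slope test, and the Gaussian noise model are assumptions of this sketch, not methods prescribed by EMAP. It estimates by Monte Carlo the probability of detecting a linear trend in an annual indicator index, given interannual noise:

```python
import random
import statistics

def ols_slope(ys):
    """Least-squares slope of a yearly series against year index 0..n-1."""
    n = len(ys)
    mx = (n - 1) / 2.0
    my = statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in range(n))
    sxy = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    return sxy / sxx

def detection_power(trend, sd, years, n_sims=2000, seed=42):
    """Monte Carlo power for a one-sided trend test: the critical slope is
    the 95th percentile of slopes simulated under no trend; power is the
    fraction of trended simulations whose fitted slope exceeds it.
    `trend` is in indicator units per year; `sd` is interannual noise."""
    rng = random.Random(seed)
    null_slopes = sorted(
        ols_slope([rng.gauss(0.0, sd) for _ in range(years)])
        for _ in range(n_sims))
    critical = null_slopes[int(0.95 * n_sims)]
    detections = sum(
        ols_slope([trend * t + rng.gauss(0.0, sd) for t in range(years)]) > critical
        for _ in range(n_sims))
    return detections / n_sims
```

For a fixed noise level, power rises with record length and with the trend-to-noise ratio, which is why minimum detectable trend, preferred index periods, and required monitoring duration are evaluated together.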
Phase 5 (identifying core indicators) addresses issues that can be answered only by testing the proba-
tionary core indicators using the EMAP integrated sampling design and data analysis protocols at
regional scales. Regional demonstrations will be used to test whether or not data collected on these
indicators are regionally interpretable and to confirm the results of site-specific pilot studies on regional
scales. Data from regional demonstration projects will be assessed through peer, agency, and public
review of statistical summaries, associated interpretive reports, and special study reports. The primary
product of this phase is a set of core indicators for implementation in routine EMAP monitoring efforts.
Phase 6 (reevaluating and modifying the suite of core indicators) is an ongoing process that begins
upon initial implementation of core indicator monitoring at regional and national spatial scales. This
continual process of reinspecting the indicator suite ensures completeness of indicator coverage of
important environmental values, assessment endpoints, and stressors; incorporation of appropriate
advances in technology and information; and adequate ability to detect changes and identify trends in
the status of ecological resources. In this phase, it is important that EMAP balance continuity of
methods (to maximize trend detection capability) with procedures for refining or replacing indicators that
fail to perform optimally. This phase is implemented through procedures for critically reviewing the
performance of core indicators through time, evaluating alternative indicators to address emerging issues
and inadequate core indicator performance, adding new indicators as deemed desirable, and substi-
tuting superior indicators for inadequate core indicators.
3.2 INTERGROUP INTEGRATION
Integration both among EMAP resource groups and between EMAP and other state and federal agencies
is critical to the success of EMAP. Organizationally, the responsibilities and authorities of individuals
involved in these integrative activities rest with management personnel in the EMAP resource groups and
with the Indicator Coordinator. Many line-of-responsibility issues remain to be resolved for these indi-
viduals. However, the primary concerns related to internal and external integration are discussed in the
following sections.
3.2.1 Internal Integration
At present, individual EMAP resource groups have the primary responsibility for implementing the indi-
cator development process. However, to fully achieve EMAP's goals will require integration and coordin-
ation of these activities across EMAP resource groups. Important issues to be addressed include (1)
consistency in the definition and use of response, exposure, habitat, and stressor indicators, (2) con-
sistency in collecting and applying off-frame stressor information, (3) inclusion of special indicators within
EMAP that integrate across EMAP resource groups (e.g., migrating birds), (4) encouraging the use of
common indicators and compatible sampling and analysis methods, and (5) co-locating sampling units
for special studies. Thus, it is essential that all EMAP resource groups formally communicate on a regu-
lar basis to facilitate intergroup information exchange, avoid redundant data collection efforts, and
improve the amount of information available for each EMAP resource group to use in assessing status
and trends in ecological resource condition. In addition, because the EMAP resource groups proceed at
different rates in implementing their programs, intergroup integration to foster learning will improve the
efficiency and effectiveness of the overall program. Section 5 provides a detailed discussion of the
proposed process and approach for achieving an integrated EMAP indicator development program.
3.2.2 External Integration
The Science Advisory Board's Ecological Monitoring Subcommittee recently stressed the importance of
interagency coordination and integration to the success of EMAP (U.S. EPA Science Advisory Board,
1990). Although integrating results from other monitoring efforts into EMAP is both efficient and
essential, interagency cooperation should also include information and expertise sharing. For example,
in addition to valuable data that can be obtained from the USDA Forest Service's Forest Inventory and
Analysis program, Forest Service personnel can be active participants in the indicator development
process for EMAP-Forests. Similarly, the National Oceanic and Atmospheric Administration (NOAA) and
numerous other federal agencies can contribute to the indicator development efforts of EMAP resource
groups.
Formal arrangements will be established by EMAP resource groups as necessary to assist the indicator
development process and ensure that EMAP develops appropriate tools to monitor the condition of eco-
logical resources. For example, it may be appropriate for EMAP resource groups to obtain indicator
information from sources that could include state agencies (e.g., for regional resource management
actions), other federal agencies (e.g., USDA for soil erosion rates and crop production data), and other
EPA programs (e.g., Office of Water for pollutant discharge information).
4. INDICATOR DEVELOPMENT PROCESS
The following pages describe in detail the phases of the indicator development process outlined in
Section 3.1 and Figure 3-1. Sections 4.1 and 4.2 describe Phase 1, identification of environmental
values, assessment endpoints, and major stressors (Section 4.1), and development of conceptual
models (Section 4.2). The criteria for indicator selection are listed in Section 4.3, and then applied as
the indicator proceeds from the candidate to the core stage in Phases 2 through 5 (Sections 4.4-4.7).
Finally, issues and procedures for indicator reevaluation and modification (Phase 6) are discussed in
Section 4.8.
The objective of this section is to define a common framework within which indicator development will
proceed in each ecological resource group. This framework will be fostered in all EMAP resource
groups through the activities of the Indicator Coordinator (see Section 6), who will provide training and
otherwise facilitate the use of this strategy in developing indicators. It is not necessary that all indicators
for a single EMAP resource group proceed through this development process at the same rate, nor is it
necessary for all EMAP resource groups to proceed at the same rate in developing indicators.
4.1 PHASE 1: IDENTIFICATION OF ENVIRONMENTAL VALUES AND ASSESSMENT ENDPOINTS
It is critical to the success of EMAP that the environmental attributes monitored are appropriate to the
program's objectives, defined in Section 2.1.2. The first phase of the indicator development process,
therefore, is intended to establish a framework for indicator interpretation by identifying the environ-
mental values, assessment endpoints, critical ecosystem components and processes, and environmental
stressors of primary concern for each EMAP resource group. This phase defines the boundaries of the
problem, the functional relationships among indicators and assessment endpoints, and stressor inputs
for the conceptual model (Section 4.2), and thus the basis for indicator selection and evaluation.
4.1.1 Environmental Values
Ecological resources have both intrinsic and extrinsic values, ranging from the societal value placed on
the protection of pristine ecosystems to more direct economic values derived from resource harvests,
such as agricultural and timber production and commercial fisheries, and to inherently ecological values
or characteristics that are necessary for ecosystem function, such as nutrient cycling and population
reproduction rates. Results from the EMAP monitoring network will be used to track changes in the con-
dition of the nation's ecological resources. A logical first step, therefore, is to develop a listing of the
major environmental values associated with each EMAP resource category. Forest ecosystems, for
example, are of value for timber production, wildlife habitat, water storage, erosion control, and
aesthetics. Wetland ecosystems may moderate downstream flooding, improve water quality, control
erosion, and provide breeding, shelter, and feeding habitat for both aquatic and terrestrial organisms. A
preliminary listing of the environmental values identified by each EMAP resource group is provided in
Table 3-1.
4.1.2 Assessment Endpoints
Assessment endpoints are quantitative or quantifiable expressions of environmental values (see Section
2.3). In some cases, the assessment endpoints may be identical to the environmental values; for
example, sustainable crop production is both an environmental value and an assessment endpoint for
agroecosystems. It not only expresses an important societal value associated with agricultural lands,
but can also be quantified. In other cases, environmental values may not be amenable to quantitative
assessment (e.g., aesthetics). In these instances, one or more distinct assessment endpoints must be
defined that are related to the environmental value but more amenable to prediction or measurement.
For example, the abundance of harvestable sportfish may be an appropriate assessment endpoint for
evaluating fishability, an important environmental value for surface waters.
Suter (1990) lists five characteristics of good ecological assessment endpoints:
1. Social relevance
2. Biological relevance
3. Unambiguous operational definition
4. Accessibility to prediction and measurement
5. Susceptibility to the environmental stressors of concern (including those that are unknown)
In addition, an assessment endpoint should be susceptible to the cumulative effects of complexes of
stressors, including both currently known and unknown stressors (James R. Karr, pers. comm.). A
complete operational definition of an endpoint requires both a subject (e.g., bald eagles or endangered
species in general) and a characteristic of the subject (e.g., local extinction or a percentage reduction in
range). Examples of potential regional assessment endpoint characteristics are noted in Table 4-1.
4.1.3 Response Indicators
EMAP response indicators must correspond to or be predictive of an assessment endpoint. Indicators,
however, must be directly measurable on the EMAP monitoring network. It is possible, as with agro-
ecosystem production, that the response indicator directly measures either the assessment endpoint or
a specific portion of it; or the assessment endpoint and indicator may be equivalent. Often, however,
Table 4-1. Examples of Potential Assessment Endpoints(a)

Traditional:
  Population: extinction, abundance, yield/production, frequent gross
    morbidity, contamination, massive mortality
  Community: market/sport value, recreational quality, change to less
    useful/desired type
  Abiotic: air and water quality standards

Characteristics of regions:
  Population/species: range
  Productive capability: soil loss, nutrient loss, regional production
  Pollution to other regions: pollution of discharged water, pollution of
    exported air
  Susceptibility: pest outbreaks, fire, flood, low flows
  Landscape aesthetics
  Long-term climate changes: continental glaciation, sea level rise,
    drought, increased UV radiation

(a) From Suter (1990). A complete operational definition of an assessment endpoint requires both a subject (e.g., bald eagles)
and a characteristic of that subject, such as the variables listed. Assessment endpoints can represent either ecosystem
elements (e.g., grassland species composition) or processes (e.g., germination rates).
the assessment endpoint cannot be directly measured and multiple response indicators may need to be
monitored to estimate or evaluate changes in the assessment endpoint. Examples of the linkages
between environmental values, assessment endpoints, and response indicators are simplistically
portrayed in Figure 4-1. Table 4-2 illustrates the association between EMAP-Agroecosystem assessment
goals and indicators.
4.1.4 Stressors
In addition to monitoring status and trends of ecological condition, the EMAP data also will be used for
diagnostics, to identify plausible causes of adverse or subnominal conditions. Together with stressor
information, exposure and habitat indicators provide the basis for linking plausible causes to observed
effects. The starting point for assembling stressor information, in Phase 1, is the listing of the major
problems currently impacting or threatening the resource and the possible associated stressors. For
example, EMAP-Arid Lands has identified the following as some of the major environmental problems of
concern: (1) loss of riparian habitat, (2) over-grazing and the introduction of exotic species, (3)
increased fire frequency and effects of global warming, and (4) the reduction of water supplies. EMAP-
Estuaries noted the following stressors: (1) additions of excessive amounts of pollutants to the air and
water, (2) modification and destruction of ecologically important habitats, such as wetlands and forested
areas along the shoreline, (3) changes in land use that increase the amount and types of pollutants that
reach coastal environments, and (4) over-harvesting of fish and shellfish populations. Explicitly defining
potential stressors serves to increase the relevance of the selected response indicators to current and
future environmental concerns.
Most of the data on stressors will not be collected by EMAP (i.e., they will be off-frame data collected by
other programs). Listing potential stressors, and placing them in context with response, exposure, and habitat
indicators, is the first step in allocating efforts to acquire and organize off-frame data.
Lists of environmental values, assessment endpoints, and major stressors are not static, but rather must
be periodically reevaluated as new issues emerge, environmental values shift, and experience is gained
with the use and interpretation of EMAP monitoring data. In addition, unforeseen stressors may begin to
operate on ecological resources, or ecosystem relationships may change. Either of these circumstances
could require alterations to the suite of indicators in order for monitoring of changes in the status and
trends in resource condition to continue.
[Figure 4-1 comprises three columns (A, B, and C), each linking an environmental value to an assessment
endpoint and one or more indicators. Column A: Crop Production; Sustainable Crop Production; Corn Yield.
Column B: Fishability; Abundance of Harvestable Sportfish; Numbers of Sportfish Greater than Minimum
Size Limit Caught per Standard Unit of Sampling Effort. Column C: Wetland Hydrologic and Water Quality
Functions; Extent of Sustainable Wetlands; Area of each Wetland Class, Hydroperiod, Vegetation
Community Composition, and Accretion of Sediment and Organic Matter.]
Figure 4-1. Example of relationships among environmental values, assessment endpoints, and indicators. Column A represents a
situation where an indicator directly measures a portion of the assessment endpoint and environmental value of
concern. Column B demonstrates a direct relationship between a single indicator and its associated assessment
endpoint and environmental value, which cannot be directly measured. Column C depicts a situation where multiple
indicators are required to provide needed information about the assessment endpoint and environmental value of
concern.
Table 4-2. Association between EMAP-Agroecosystem Assessment Endpoints and Indicators(a)

Assessment endpoints: Sustainability of Commodity Production; Contamination of
Natural Resources; Quality of the Agricultural Landscape.

Indicators: crop productivity, soil productivity, nutrient holding capability,
erosion, contaminants, microbial component, irrigation water quantity,
irrigation water quality, density of beneficial insects, pest density, foliar
symptoms, agricultural chemical usage, socio-economic factors, exports
(chemical, sediment), status of biomonitor species, land use, landscape
descriptors, wildlife populations.

[In the original table, an X marks each indicator associated with a given
assessment endpoint; the matrix layout is not reproduced here.]

(a) After Meyer et al. (1990)
4.2 CONCEPTUAL MODELS
Conceptual models define the linkages between assessment endpoints, stressors, and important eco-
system components and processes. The delineation of the conceptual model for each resource class is
an essential part of the indicator development process. The model serves four primary purposes:
1. To explicitly define the framework for indicator interpretation, for example, how the response
indicators relate to the assessment endpoints, the role that they play in determining endpoint
status, and how they will be used to assess that status.
2. To identify any gaps within the proposed set of indicators, that is, missing indicators for the
assessment endpoints or links for which additional or new indicators are needed.
3. To guide the data analysis strategy for diagnosing plausible causes of subnominal conditions.
4. To promote an integrated program and facilitate coordination among EMAP resource groups.
Indicators used in EMAP must be linked to ecosystem resources through conceptual models. These
models are important representations of scientific understanding of the ecological resource for
monitoring purposes. They must be descriptive and should clearly demonstrate linkages between the
indicators and the environmental values being monitored. Developing conceptual models is not a simple
task, nor can models be extracted from the literature for all the ecological resources of concern.
Furthermore, the temporal and spatial scales of these models can prejudice monitoring results (Wiens,
1989). Developing such models, however, is an extremely important exercise that is required to
substantiate the choice of a particular indicator. For example, annual wood increment can be linked
directly to forest productivity and can be incorporated into a conceptual model. Data can be readily
obtained at the temporal scale appropriate to EMAP. On the other hand, soil microbial respiration is
more difficult to link to forest productivity, and is fraught with interpretation problems at the temporal and
spatial scales.
Conceptual models can be constructed at many scales, ranging from simple, single-linkage models (e.g.,
Figure 4-2) to complex ecosystem models identifying the full complement of ecosystem functional and
structural attributes. Each of these approaches may be useful, and should be dev: ?d or reviewed as
appropriate by individual resource groups. However, the critical EMAP conceptual model for indicator
development is of intermediate complexity, and focuses on indicators and the relationships among indi-
cators, and between indicators, assessment endpoints, and external stressors. An example of such a
model developed for the estuarine environment is provided in Figure 4-3.
Like the lists of values, endpoints, and stressors, the conceptual model linking these components should
not be viewed as static. The utility, validity, and completeness of the model should be continually
reevaluated as part of the data interpretation process. Both the lists of issues and assessment endpoints
described in Section 4.1 and the conceptual models should be subject to external comment and review
in the workshops conducted and the research plans prepared during Phases 2 and 3 (identification of
candidate and research indicators).
4.3 CRITERIA FOR INDICATOR SELECTION
The identification of environmental values and assessment endpoints represents only the first of six
phases of indicator evolution, as detailed in Figure 3-1. Four succeeding phases of indicator develop-
ment and evaluation will occur before indicator implementation in the full-scale EMAP program:
[Figure 4-2 plots three panels against time: natural processes (climate, precipitation), a response
indicator (woodland extent), and an assessment endpoint (sustainable biodiversity).]
Figure 4-2. General conceptual model linking a response indicator (woodland extent) with the environmental value of sustainable
biodiversity. Data from the literature suggest that if climate changes and precipitation decreases, the woodland will
initially expand its range and out-compete other vegetation types, then decline. The result will be a decline in both
habitat type and species number, thus decreasing the biological diversity of the region.
[Figure 4-3 diagrams inputs, exposure and habitat indicators (e.g., salinity, temperature, depth, sediment
RPD), response indicators (e.g., pathology, growth, reproduction), and environmental values (e.g., biotic
integrity, species production, human use, aesthetics) for the estuarine ecosystem.]
Figure 4-3. Conceptual model of the estuarine ecosystem. Solid lines indicate material flows and dashed lines indicate interaction.
Dissolved oxygen can be considered both an exposure and a response indicator.
1. Phase 2: identification of a set of candidate indicators
2. Phase 3: selection of indicators for further research
3. Phase 4: evaluation of research indicators
4. Phase 5: selection of core indicators
Once the full-scale EMAP program is in place, reevaluation of the core indicators (Phase 6) will be an
ongoing process intended to periodically confirm the appropriateness of each indicator, and to modify,
add, or replace indicators as necessary.
For the indicator selection process to be scientifically defensible, each ecological resource group must
use a consistent evaluation procedure, employ specific criteria to judge each proposed indicator, and
document each step in the evaluation process. This section outlines the criteria that will be used
throughout EMAP to guide the evaluation and selection of indicators. Details on the application of these
criteria at each phase in the indicator development process are provided in Sections 4.4-4.7.
Each phase in the evaluation of an indicator involves two stages: (1) an assessment of the sufficiency of
the available data to support an evaluation of the indicator and (2) if sufficient data exist, a screening of
the indicator based on the selection criteria. This screening results in one of three outcomes: (1) accep-
tance for consideration at the next stage of evaluation or implementation, (2) temporary suspension of
consideration due to insufficient data, technology, time, or resources for proper evaluation, or (3) rejec-
tion for failure to satisfy one or more of the selection criteria. The latter two outcomes may lead to new
approaches to collecting, synthesizing, and analyzing data.
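The two-stage evaluation above reduces to a simple decision rule. The following Python sketch is purely illustrative (the outcome labels, function name, and example criterion names are assumptions of this edit, not EMAP terminology):

```python
from enum import Enum

class Outcome(Enum):
    ACCEPT = "accept for the next phase of evaluation or implementation"
    SUSPEND = "suspend evaluation pending data, technology, time, or resources"
    REJECT = "reject for failing one or more selection criteria"

def screen_indicator(data_sufficient, criteria):
    """Stage 1: without sufficient data, evaluation is suspended.
    Stage 2: any failed criterion (False) rejects the indicator; any
    not-yet-evaluable criterion (None) suspends it; otherwise it advances.
    `criteria` maps criterion name -> True, False, or None."""
    if not data_sufficient:
        return Outcome.SUSPEND
    if any(passed is False for passed in criteria.values()):
        return Outcome.REJECT
    if any(passed is None for passed in criteria.values()):
        return Outcome.SUSPEND
    return Outcome.ACCEPT
```

The rule captures why suspension, rather than rejection, is the common fate of candidate indicators: a single unevaluable criterion is enough to defer a decision without discarding the indicator.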
As discussed in the following sections, both the focus and the level of scrutiny given to each indicator
during evaluation change with each evaluation phase. This affects the assessment of the quality and
quantity of data needed for indicator evaluation and the standards applied in indicator screening. It is
likely that many candidate indicators will end up in a state of suspended evaluation, to be revived at
some future date when evidence, time, and resources are sufficient to thoroughly evaluate them. Gener-
ally, a candidate indicator will be rejected only if it fails certain critical criteria and there is no anticipated
improvement in the indicator's weaknesses over the next decade. In contrast, in the Phase 5 evaluation,
it is likely that a much higher proportion of indicators will either advance or be rejected, rather than
being suspended, because sufficient data on regional responsiveness and feasibility will be available to
make a firm decision. The results of all data sufficiency and screening evaluations are to be recorded in
an indicator data base, described in detail in Sections 4.4-4.7.
4.3.1 Purpose of Indicator Selection Criteria
The use of clearly defined criteria increases the objectivity, consistency, and depth of indicator
evaluations. Criteria also guide scientists in developing new indicators and facilitate the documentation
of indicator screening decisions. Although certain decisions made in the evaluation process will be
subjective, the goal of the selection procedure is to provide an appropriate amount of information at
each step of the evaluation, so that another, independent evaluation, by peer reviewers, for example,
would be able to quickly assess the validity of the original decisions. A record of the decision process
also simplifies the subsequent identification of additional information needed to complete the evaluation
of suspended indicators.
4.3.2 Indicator Selection Criteria
The indicator selection criteria, listed in Table 4-3, consist of sets of critical and desirable criteria that
should be used by EMAP resource groups to test for acceptance or rejection of potential indicators.
Specific tests will be developed by each resource group in a manner that is appropriate for the indica-
tors under consideration. Table 4-3 is based on discussions by EMAP scientists at the Indicator Strategy
Workshop (June 1990), and subsequent efforts by the authors of this report, to simplify the initial set of
criteria developed in Messer (1990).
In general, the critical criteria are those considered essential for satisfying EMAP's first two objectives.
The first two critical criteria (regional responsiveness and unambiguous interpretability) should be the
dominant focus of evaluations in Phases 2 and 3, which lead to the selection of research indicators.
Though these two criteria continue to be important in the evaluation of subsequent phases, other critical
criteria relating to the utility and feasibility of sampling the indicator within EMAP (index period stability,
simple quantification, low year-to-year variation, environmental impact) rise in importance.
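The signal-to-noise concern can be made concrete with a rough variance-ratio sketch. The following Python fragment is an assumption of this edit, not an EMAP formula; the actual variance-component analyses used in the program are more elaborate. It compares the variance of yearly regional means ("signal", the quantity a trend acts on) against the within-year sampling variance of those means ("noise"):

```python
import statistics

def signal_to_noise(yearly_site_values):
    """yearly_site_values: one list of site measurements per year.
    Signal: variance of the yearly regional means.
    Noise: average within-year sampling variance of those means
    (site-to-site variance divided by the number of sites that year).
    A high ratio suggests real change is distinguishable from
    sampling variation; requires >= 2 years and >= 2 sites per year."""
    means = [statistics.fmean(year) for year in yearly_site_values]
    signal = statistics.variance(means)
    noise = statistics.fmean(
        statistics.variance(year) / len(year) for year in yearly_site_values)
    return signal / noise
```

An indicator whose regional means shift far more between years than sampling error alone would explain yields a large ratio; an indicator dominated by site-to-site scatter yields a small one, and would fail the criterion.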
Indicators that fulfill some or all of the desirable criteria have obvious advantages over those that do not.
These advantages may include an improved assessment of associations between stresses and ecologi-
cal conditions (the second objective of EMAP), an increased timespan over which the indicator can be
quantified, higher information value per unit cost, greater ease of implementation, or special value for
early detection of widespread ecological changes. The desirable criteria should be applied to assist in
distinguishing among alternative Indicators for the same assessment endpoint. or to assist In obtaining
the best set of indicators If all desirable indicators cannot be developed for implementation (e.g., due to
funding constraints).
Table 4-3. Indicator Selection Criteria

Critical Criteria

Regionally responsive: Must reflect changes in ecological condition, pollutant exposure, or habitat
condition, and respond to stressors across most pertinent habitats within a regional resource class.

Unambiguously interpretable: Must be related unambiguously to an assessment endpoint or relevant
exposure or habitat variable that forms part of the ecological resource group's overall conceptual model
of ecological structure and function.

Simple quantification: Can be quantified by synoptic monitoring or by cost-effective automated
monitoring.

Index period stability: Exhibits low measurement error and stability (low temporal variation) during an
index period.

High signal-to-noise ratio: Must have sufficiently high signal strength (when compared to natural annual
or seasonal variation) to allow detection of ecologically significant changes within a reasonable time
frame.

Environmental impact: Sampling must produce minimal environmental impact.

Desirable Criteria

Sampling unit stable: Measurements of an indicator taken at a sampling unit (site) should be stable over
the course of the index period (to conduct associations).

Available method: Should have a generally accepted, standardized measurement method that can be
applied on a regional scale.

Historical record: Has an existing historical data base, or one can be generated from accessible data
sources.

Retrospective: Can be related to past conditions by way of retrospective analyses.

Anticipatory: Provides an early warning of widespread changes in ecological condition or processes.

Cost effective: Has low incremental cost relative to its information.

New information: Provides new information; does not merely duplicate data already collected by
cooperating agencies.
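For a resource group tracking its screening decisions in software, the criteria of Table 4-3 could be encoded as a simple registry that an evaluation tool consults. The sketch below is purely illustrative; the field names and short summaries are paraphrases by the editors, not an EMAP-prescribed structure.

```python
# Illustrative sketch: the Table 4-3 criteria as a registry a screening tool
# could consult. Names and summaries are hypothetical paraphrases.
from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    name: str
    critical: bool   # True = critical criterion, False = desirable
    summary: str

CRITERIA = [
    Criterion("regionally_responsive", True,
              "Reflects condition changes across most habitats in a regional resource class"),
    Criterion("unambiguously_interpretable", True,
              "Relates unambiguously to an assessment endpoint in the conceptual model"),
    Criterion("simple_quantification", True,
              "Quantifiable by synoptic or cost-effective automated monitoring"),
    Criterion("index_period_stability", True,
              "Low measurement error and low temporal variation during the index period"),
    Criterion("high_signal_to_noise", True,
              "Signal strong enough to detect ecologically significant change"),
    Criterion("environmental_impact", True,
              "Sampling produces minimal environmental impact"),
    Criterion("available_method", False,
              "Generally accepted, standardized method applicable regionally"),
    Criterion("historical_record", False,
              "Historical data base exists or can be assembled"),
    Criterion("cost_effective", False,
              "Low incremental cost relative to information gained"),
]

def critical_names():
    """Names of the critical criteria, which every core indicator must satisfy."""
    return [c.name for c in CRITERIA if c.critical]
```

Separating the critical flag from the description lets later phases tighten the tests applied to each criterion without restructuring the registry.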
The amount, quantification, and quality of data necessary to satisfy each of the critical criteria increase
at each stage. During the evaluation of candidate indicators, it is not critical to satisfy each criterion
completely; rather, there should be reason to believe that the criterion can be satisfied when the appro-
priate data and models for detailed analyses are assembled in later stages of the evaluation process.
The unavailability of detailed, extensive data bases and models should not result in the rejection of a
candidate or research indicator. By Phase 5, there must be strong evidence demonstrating that the
indicator fulfills each of the critical criteria and preferably some of the desirable criteria. Examples and
further specifics on the application of the indicator selection criteria at each phase of the indicator
development process are presented in the following sections.
4.4 PHASE 2: IDENTIFICATION OF CANDIDATE INDICATORS
4.4.1 Objectives
The identification of candidate indicators (Phase 2 of the indicator development process shown in Figure
3-1) provides the raw material for the subsequent phases of indicator screening and refinement. Candi-
date indicators include all the potential measures of ecological condition (response indicators) and the
natural or anthropogenic factors that could influence that condition (exposure and habitat indicators).
Identifying candidate indicators calls for scientifically well-grounded creative thinking, review and
evaluation of published literature, and investigation of available data to prepare lists of potentially useful
indicators. Although discrimination among potential indicators at this stage may be appropriate in some
cases, it will usually be better to err on the side of listing too many candidate indicators than to overlook
a potentially useful indicator.
Identification of candidate indicators is an ongoing process that must be documented. It is necessary
for each EMAP resource group to continually reassess its suite of indicators for completeness, to
reassess indicators previously rejected or suspended pending new findings, and to identify additional
potentially useful candidate indicators. Newly developed indicators may either augment or substitute for
existing indicators. New candidate indicators serve to capture advances in environmental sciences and
monitoring technologies (e.g., new methods of remote sensing), as well as to consolidate insights gained
through analysis of data collected by EMAP and other research programs.
The EMAP resource groups made preliminary identifications of numerous candidate indicators and have
conducted initial assessments of them, as summarized in Hunsaker and Carpenter (1990). These groups
have progressed through at least the first iteration of Phase 2 (see Figure 3-1). Though most of the
groups' efforts are being directed towards the later phases of indicator screening, there is a need to
periodically reassess and update the list of candidate indicators.
4.4.2 Approach
This phase of indicator development involves three steps:
1. Generating lists of candidate indicators
2. Preliminary screening to eliminate ineffective or impractical indicators
3. Recording each candidate indicator in a computerized data base
4.4.2.1 Generating Lists of New Candidates
The key to continual replenishment of the set of candidate indicators is active and effective communi-
cation with the scientific community. Although workshops were heavily utilized in the initial development
of candidate indicators, other approaches should increase in importance in Phase 2. These approaches
include annual systematic literature reviews to identify potential improvements to the current suite of
indicators, attendance at major conferences, solicitation of involvement of the scientific community
through presentations and published articles, and continued personal contact with leading scientists
researching relevant topics.
The annual literature reviews should supplement rather than duplicate previous syntheses of information,
and provide preliminary evaluations according to the criteria in Table 4-3. Though these reviews are not
intended to involve new data collection or analyses, they should examine ongoing, unpublished work to
the extent possible.
4.4.2.2 Conducting Preliminary Screening of Candidate Indicators
The list of candidate indicators should be as comprehensive as possible. Application of the criteria in
Table 4-3 should not be too restrictive at this stage, since it is important for the candidate list to include
all potential indicators. Critical evaluation of candidate indicators in subsequent stages of the indicator
development process will eliminate inappropriate indicators or suspend evaluation of indicators that
cannot be fully evaluated.
The main issue in this phase is whether or not each indicator appears likely to satisfy the criteria, given
that enough data become available for a thorough evaluation. Figure 4-4 provides an example of how
nonstringent application of the indicator selection criteria can still be useful in refining the list of
candidate indicators. At this stage, it is not necessary to consider the relative merits of similar indicators
that could potentially be used to measure the same assessment endpoint.
Reject if:
1) Only locally applicable
2) Unlikely to provide important
and useful information about
assessment endpoints
3) No appropriate sampling or
measurement methods foreseen
Accept if:
1) Likely to be regionally responsive
2) Likely to provide important
and useful information about
assessment endpoints
3) Likely to have a sufficient body of
data to assess responsiveness,
variability, and quantifiability
4) Insufficient knowledge exists to
justify rejection
Candidate Indicators
(see Figure 4-5)
Figure 4-4. Example preliminary screening to identify candidate indicators (see text, Section
4.4.2, for further explanation).
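The preliminary screen of Figure 4-4 amounts to a short decision rule. The function below is a hypothetical simplification by the editors; it collapses the figure's boxes into boolean flags and cannot capture the expert judgment the text calls for.

```python
def screen_candidate(only_locally_applicable: bool,
                     informs_assessment_endpoints: bool,
                     methods_foreseen: bool,
                     knowledge_sufficient_to_judge: bool) -> str:
    """Phase 2 preliminary screen paraphrasing Figure 4-4 (hypothetical sketch).

    Returns 'reject' or 'accept'. A real screening also weighs expert
    judgment that a boolean checklist cannot represent.
    """
    # Reject only when enough is known to justify rejection.
    if knowledge_sufficient_to_judge and (
            only_locally_applicable
            or not informs_assessment_endpoints
            or not methods_foreseen):
        return "reject"
    # Otherwise err on the side of inclusion (see Section 4.4.3).
    return "accept"
```

Note that insufficient knowledge leads to acceptance, reflecting the text's advice that it is better to list too many candidates than to overlook a useful one.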
4.4.2.3 Establishing and Maintaining a Computerized Indicator Data Base
During the evaluation of candidate indicators, each ecological resource group should develop and main-
tain a computerized list of all indicators that have been evaluated at this level of screening, including
categories for those that have been accepted, rejected, or suspended. This list will form the initial
template for the development of a computerized indicator data base. The data base should contain
information about each indicator that has been considered, the sources of information employed in
evaluation, the current status of the evaluation, reasons for accepting, suspending, or rejecting each
indicator as it moves through each phase of evaluation, and references to both extramural and EMAP
documents that provide more detailed information and analyses.
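The contents described above suggest a minimal record layout for the Indicator Data Base. The dictionary below is an illustrative assumption by the editors, not a prescribed schema; the example indicator and field names are hypothetical.

```python
# Minimal sketch of an Indicator Data Base record (fields and example values
# are hypothetical, not an EMAP-prescribed schema).
indicator_record = {
    "name": "benthic macroinvertebrate index",      # hypothetical example
    "resource_group": "surface waters",
    "status": "candidate-active",                   # accepted, rejected, or suspended
    "phase": 2,                                     # current phase of evaluation
    "rationale": "likely regionally responsive; methods standardized",
    "information_sources": ["published literature review, 1990"],
    "references": [],                               # extramural and EMAP documents
}

def set_status(record: dict, new_status: str, rationale: str) -> dict:
    """Record a status change together with the reasons for the decision,
    returning a new record so earlier states remain available for audit."""
    record = dict(record)
    record["status"] = new_status
    record["rationale"] = rationale
    return record
```

Keeping the rationale with each status change is what makes the data base usable for the decision audits described in Section 4.3.1.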
4.4.3 Evaluation
Little critical evaluation effort should be expended in identifying candidate indicators, and little additional
evaluation is desirable at this stage. It is not necessary to provide substantial amounts of evidence as to
the behavior of the indicator, nor is it necessary to conduct peer reviews of the selection process and
documentation. Satisfaction of criteria at a minimal level of scrutiny (as per Figure 4-4) is sufficient for
an indicator to be included as a candidate. More problems may arise from not including a good candi-
date indicator than from listing too many indicators as candidates. At this stage of the indicator
development process, it is more important to establish a process for identifying candidates, develop an
innovative set of possible indicators, and establish a data base to be used in documenting subsequent
assessments of the indicator, than to spend time eliminating indicators from further consideration.
4.5 PHASE 3: SELECTION OF RESEARCH INDICATORS
4.5.1 Objectives
Whereas Phase 2 of the indicator development process has the objective of generating new indicators,
Phase 3 begins the process of indicator screening. The primary objective of Phase 3 is to prioritize
evaluation activities by selecting candidate indicators with sufficient promise to merit further research,
rejecting those candidates which clearly do not fulfill the EMAP indicator selection criteria, and placing in
suspended status those candidates that either have not yet been evaluated or are considered to be at a
less advanced stage of development than the selected candidates.
The indicators selected during Phase 3 will become research indicators that will be subjected (in Phase
4) to intensive data analyses, simulations, and possibly laboratory or limited-scale field pilot tests to
determine their applicability for regional demonstration. A research indicator can be operationally
defined as an indicator that appears to fulfill the EMAP indicator selection criteria based on published
information, but requires more detailed, quantitative assessments before being included in a regional
demonstration project.
It is important in this phase to gather all readily available information to evaluate candidate indicators
against the selection criteria (i.e., their variability, interpretability, and methods for sampling and
measurement). The level of intensity of this evaluation is intermediate between Phases 2 and 4. Enough
information is needed to determine which indicators merit further investigation as research indicators, but
such investigations will not actually be carried out. Expert judgment therefore plays a particularly
important role.
4.5.2 Approach
Figure 4-5 provides a general algorithm for deciding whether to advance or reject a given candidate indi-
cator. This figure is an adaptation of the indicator selection criteria list (Table 4-3), focusing on the
issues most relevant to this phase. All of the activities of Phase 3 focus either on providing enough
information to assess the issues raised in this figure or on documenting the decisions made.
The evaluation of a candidate indicator has three outcomes, as shown in Figure 4-5: rejection, accep-
tance (advancement to research status), or suspension of evaluation. If the assembled information is
sufficient to conclude that the candidate indicator has any of the seven critical weaknesses listed on the
left side of the figure, it should be rejected. The criteria for acceptance (right side of the figure) are
generally the converse of the rejection criteria. Criteria 2 through 6 are essential for a candidate
indicator to advance to research status. Notice that absolute proof of desired indicator qualities is not
required at this stage. The other two criteria (#1 and #7) are qualifiers. If a candidate indicator has
overwhelming importance for assessing endpoint status and trends (criterion #1), then it should receive
priority in a resource group's list of research activities, even though some of the other indicators may
have stronger evidence of responsiveness or more standardized methods. The final criterion (#7) recog-
nizes that some indicators may not totally fulfill criteria 2-6, but still merit advancement due to their small
incremental cost. Candidate indicators without enough information to support either rejection or accep-
tance should be placed in suspended status. The activities required to evaluate candidate indicators and
implement the decision algorithm in Figure 4-5 include the following four steps:
1. Complete literature reviews that focus on quantifying indicator response characteristics.
2. Assess indicator utility using conceptual models.
3. Conduct structured workshops to evaluate indicators.
4. Update the Indicator Data Base.
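The three-outcome logic described above can be paraphrased in code. The function below is a hypothetical simplification by the editors: the seven rejection criteria and acceptance criteria 2-6 are collapsed into summary flags, and acceptance criterion 1 (overwhelming importance), which governs priority rather than outcome, is omitted.

```python
def evaluate_candidate(rejection_criterion_met: bool,
                       essential_criteria_met: bool,   # acceptance criteria 2-6
                       small_incremental_cost: bool,   # acceptance criterion 7
                       evidence_sufficient: bool) -> str:
    """Phase 3 outcome paraphrasing Figure 4-5 (hypothetical simplification).

    Returns 'reject', 'research' (advance to research status), or 'suspend'.
    """
    if not evidence_sufficient:
        return "suspend"      # cannot justify either rejection or acceptance
    if rejection_criterion_met:
        return "reject"       # any of the seven critical weaknesses suffices
    if essential_criteria_met or small_incremental_cost:
        return "research"     # criterion 7 can carry an indicator that does
                              # not totally fulfill criteria 2-6
    return "suspend"          # e.g., awaiting tests of alternative indicators
```

The ordering matters: insufficient evidence short-circuits to suspension, mirroring the text's rule that candidates lacking support for either decision are held rather than dropped.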
Candidate Indicators
(from Figure 4-4)
Reject if:
1) Not responsive to changes in resource condition
2) Unlikely to provide important and useful information about assessment endpoints
3) Redundant with other measures
4) Natural temporal or spatial variation of indicator too high, even during index period
5) No appropriate sampling or measurement methods foreseen
6) Unlikely to obtain valid measurements at each site within a resource class
7) High cost for additional evaluation
Accept if:
1) Overwhelming importance to assessing status and trends
2) Critical component of conceptual model linking stressors and assessment endpoints
3) Responsiveness demonstrated along lab or field exposure gradients
4) Likely to be useful as an index sample, but may need additional testing
5) Methods available and fairly standardized, although may need additional testing
6) Likely to obtain valid measurements at each site
7) Small cost for additional testing
Research Indicators
(See Figure 4-6)
Suspend if:
1) Insufficient data available to complete the evaluation
2) Data suggest other indicators might be more appropriate for the same purpose, but further testing of the other indicators is required
Figure 4-5. Example of an evaluation of candidate indicators to identify research indicators (see text, Section 4.5.2, for further explanation).
No field activities or data analyses need be conducted during this phase. The results of the information
syntheses and evaluations should be documented in the indicator data base and research plans for
indicator development.
4.5.2.1 Literature Review
The literature review initiated in Phase 2 (Identification of Candidate Indicators) should be expanded to
address issues pertaining to both individual indicators (Figure 4-5) and the larger issue of the
completeness of the overall suite of indicators for a resource class. As previously discussed, reviews are
conducted both within EMAP, covering the full range of indicators being considered, and by outside
scientists, generally focusing on specific indicator types related to the investigator's area of expertise. These
reviews should be updated annually, to ensure that new data are included in the selection of research
indicators. The reviews of individual candidate indicators should be organized around the issues raised
in Figure 4-5 and around the other criteria in Table 4-3.
4.5.2.2 Critical Review of Conceptual Models
Particularly important in establishing priorities among potential research indicators is the degree to which
an indicator provides information to an assessment endpoint, fills gaps in the current set of indicators,
contributes to a balanced EMAP design that includes different indicator types, or provides a link among
EMAP resource groups. These questions about an indicator's role can best be addressed by critical
analysis of each EMAP resource group's conceptual models (prepared in Phase 1), as well as analysis of
other conceptual models that link several resource classes together (discussed in Section 5).
4.5.2.3 Expert Workshops
The primary method for assessing candidate indicators is an annual technical workshop to be conducted
by each EMAP resource group. These workshops, comprising small working groups of scientists, have
the primary objective of applying expert judgment to the individual indicator issues raised in Figure 4-5,
the larger issues pertaining to the overall suite of indicators, and the priorities for research activities.
These working groups may also identify gaps in the indicator suite and generate additional ideas for
candidate indicators. To facilitate effective workshops, the Technical Director of an EMAP resource
group should prepare information summarizing the status of indicator development and distribute it
before the workshop. This information should include, at a minimum: (1) a description of EMAP in
general, (2) the EMAP resource group's current implementation design, (3) examples of results to date,
(4) a list of the current research indicators and copies of their fact sheets, and (5) the proposed list of
new research indicators and justification for their selection.
4.5.2.4 Indicator Data Base Expansion and Update
Decisions about the selection of research indicators must be recorded in a timely manner into the
Indicator Data Base. The results of, and rationale for, changes in indicator status (see Section 4.4.2),
whether the decisions occurred at the staff level or in a workshop, should be clearly documented. After
completion of this step, the status of all previously identified candidate indicators should be changed to
reflect these decisions. All indicators that have progressed through this phase of the indicator selection
process will fall into one of the following three status categories: research-active, candidate-hold
(suspended), or candidate-rejected.
4.5.3 Evaluation
As described at the start of Section 4.5.2 (Approach), candidate indicators should be selected for further
research if they: (1) appear likely to fulfill the indicator selection criteria presented in Section 4.3 and
Figure 4-5 (i.e., have a reasonable chance of becoming a core indicator), and (2) fill an existing gap in
the EMAP indicator suite, improve the balance among existing indicators, or represent an improvement
in an existing EMAP core indicator (see Section 4.8). This prioritization is a major output of the annual
technical workshop, and is evaluated through peer review of the annual research plans and five-year
research and monitoring plans.
Each ecological resource group should establish a small panel of outside experts to serve as peer
reviewers for their specific program activities and plans. Individuals should serve for multiple years, with
overlapping periods of assignment. The panel can function to provide oversight peer review of EMAP
resource group activities and program directions and serve as reviewers for indicator research proposals
(see Section 4.4.2). The oversight peer review panel should meet once per year to (1) review and
discuss the peer review comments on the annual (and five-year) research plans, and (2) provide con-
structive comments and guidance on all phases of the general program.
4.5.4 Research Plan Update
Every five years, research and monitoring plans will be prepared by each of the resource groups in
EMAP, under the direction of the resource group's director, and each resource group's program will be
thoroughly reviewed at a peer review workshop. Results of Phase 3 activities will feed into these plans
as descriptions of selected research indicators, the rationale for their selection, the results of previous
indicator testing and evaluation efforts, and proposed research activities to overcome past problems and
advance the testing and evaluation of selected research indicators. Other sections of the plans will
address proposed pilot testing and regional demonstration activities planned for Phases 4 and 5
(described in Sections 4.6 and 4.7). Annual updates, addressing the specific research and indicator
evaluation activities proposed for the following year, are also likely to be required.
4.6 PHASE 4: EVALUATION OF RESEARCH INDICATORS TO SELECT PROBATIONARY CORE
INDICATORS
4.6.1 Objectives
In Phase 4, research indicators will be screened to achieve two general objectives: identification of pro-
bationary core indicators (i.e., indicators ready for full-scale regional demonstration), and evaluation of
the expected performance of proposed indicators relative to EMAP's overall objectives. Identification of
probationary core indicators requires the quantitative evaluation of research indicator performance
across different EMAP sampling units. This objective can be accomplished through literature reviews,
analyses of existing data, simulations of expected indicator performance over varying temporal and
spatial scales, statistical evaluations of minimum detectable trends, limited-scale field pilot tests and
laboratory experiments, and assessments of the logistical requirements for field sampling. The second
objective, evaluating the expected performance of indicators relative to EMAP's overall objectives,
obviously overlaps with the identification of probationary core indicators, but the aim is somewhat
different. Here, the focus is on the ability of indicators to meaningfully reflect assessment endpoints; to
detect associations between response indicators, exposure/habitat indicators, and stressor information;
and to possibly be combined into useful indices of ecological condition. For example, it is important to
consider whether or not the regional cumulative frequency distributions of individual response indicators
and indices are likely to be stable over the index period. Another consideration regarding selection of
usable indicators or indices is whether or not ways can be devised to extract the signal of changing
condition from noisy data gathered from inherently variable ecosystems.
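Whether a changing-condition signal can be extracted from noisy data is ultimately a question of signal-to-noise ratio and record length. As a rough sketch, the minimum detectable linear trend for an indicator measured once per year can be approximated with a standard normal power calculation; the formula and the default z-values (two-sided alpha = 0.05, 80% power) are textbook statistical assumptions, not EMAP specifications.

```python
# Rough sketch: minimum detectable linear trend for an indicator measured once
# per year for n years with residual (noise) standard deviation sigma.
# z-values are textbook defaults (two-sided alpha = 0.05, power = 0.80),
# not EMAP specifications.
import math

def min_detectable_slope(sigma: float, n_years: int,
                         z_alpha: float = 1.96, z_beta: float = 0.84) -> float:
    """Smallest annual trend (indicator units per year) detectable given the noise."""
    # Sum of squared deviations of years 1..n about their mean: n(n^2 - 1)/12
    sxx = n_years * (n_years ** 2 - 1) / 12.0
    return (z_alpha + z_beta) * sigma / math.sqrt(sxx)
```

Doubling the noise doubles the minimum detectable trend, while lengthening the record shrinks it roughly as n to the -3/2 power, which is the quantitative reason the high signal-to-noise criterion is treated as critical.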
4.6.2 Approach
Figure 4-6 illustrates several key questions that drive many of the indicator evaluation activities in this
phase. To fulfill the first criterion on the right side of Figure 4-6, there must be quantitative evidence that
an indicator can: (1) respond to changing stressor levels, (2) respond in most resource classes, and (3)
have a signal-to-noise ratio stable enough during the index period not to mask this responsiveness. This
criterion demands that indicator testing consider both spatial and temporal variation within the index
period. For example, fulfilling criterion #2 on the right side of Figure 4-6 requires data on the costs and
logistical constraints associated with sampling, and criterion #4 demands estimates of natural annual
variation, using simulations and statistical analyses. In addition, since properties and relationships can
Research Indicators
(from Figure 4-5)
Reject if:
1) Poor responsiveness along known damage or exposure gradients in regional field studies
2) Signal-to-noise ratio is so small that significant changes cannot be detected within a reasonable time following the change
Accept if:
1a) For response indicators, index samples show quantitative responsiveness along known exposure gradients in most or all regional habitats within a resource class
1b) For exposure, habitat, and stressor indicators, index period measurements show high correlation with factors known to influence ecological resource condition
2) Likely to be feasibly sampled on a regional scale
3) Overwhelming importance for evaluating assessment endpoints or diagnosing change in resource condition
4) Year-to-year natural variation small enough to allow detection of meaningful effects in a reasonable time frame
Probationary Core Indicators
(See Figure 4-8)
Suspend if:
1) Insufficient evidence to complete evaluation in this phase
2) Data suggest potential for use, but a significant amount of new research is needed to develop the indicator
Figure 4-6. Example of an evaluation of research indicators to identify probationary core indicators (see text, Section 4.6.2, for further explanation).
change over time as environmental conditions alter underlying mechanisms, there is a need to provide a
process for continually reevaluating and reassessing the adequacy of each resource group's indicator
suite for meeting the overall program goals (see Section 4.8).
Different intensities of investigation of these issues produce different levels of proof. The overall strategy
should be to pursue the simplest approaches first, until there is enough evidence to decide on accep-
tance or rejection. Insufficient evidence will lead to suspension of the indicator at the research stage.
Because of the expense of performing quantitative evaluations of indicators, research activities should
focus first on those indicators with the strongest relationship to assessment endpoints.
To implement the decision algorithm in Figure 4-6 and address the adequacy of indicators with respect
to EMAP's overall assessment objectives, we propose that the evaluation of research indicators follow an
eight-step process:
1. Formulate research questions
2. Complete literature reviews targeted to these questions
3. Identify useful data bases
4. Analyze existing data
5. Perform analyses of expected indicator performance
6. Produce example assessments
7. Conduct limited-scale field pilot studies
8. Update the Indicator Data Base
These eight steps are described in the following sections.
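Because the steps carry increasing cost, the evaluation is naturally expressed as a gated pipeline in which each step runs only if the indicator survived the previous one. The sketch below is schematic; the step names and pass/fail checks are placeholders, not actual EMAP evaluation code.

```python
# Schematic sketch of the cost-ordered evaluation: each step runs only if the
# indicator survived the previous one. Step functions are placeholders.
def run_evaluation(indicator: str, steps) -> list:
    """Apply steps in sequence; stop at the first step the indicator fails.

    `steps` is a list of (name, check) pairs ordered by increasing cost;
    each check returns True (pass) or False (fail). Returns the names of
    the steps actually executed.
    """
    executed = []
    for name, check in steps:
        executed.append(name)
        if not check(indicator):
            break   # no point paying for more expensive steps
    return executed

# Hypothetical usage: an indicator failing at data analysis never reaches
# the costly field pilot study.
steps = [
    ("literature review", lambda ind: True),
    ("analyze existing data", lambda ind: False),
    ("field pilot study", lambda ind: True),
]
```

The early exit is the point: the expensive field pilot is attempted only for indicators that survive the cheaper literature and data-analysis gates.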
Since steps 1 through 7 involve increasing costs, they should be completed in sequence, with the higher
order tasks being implemented only if needed (i.e., the indicator passes evaluation in earlier steps). To
the greatest degree possible, the performance of research indicators should be evaluated using existing
data. However, much of the existing data are not appropriate at the temporal and spatial scales
required by EMAP. Most of the data needed to evaluate research indicators may need to be gathered
from field pilot studies at non-EMAP sites, because spatial representativeness is not evaluated until
Phase 5. However, because assessment of logistical constraints is also an objective of this phase, it
may be beneficial to evaluate some of the proposed indicators at EMAP sites. The next phase of
evaluation involves demonstration projects, and will probably be significantly more costly and logistically
difficult. This implies that by the time Phase 4 is completed, there should be a high degree of
confidence that the selected probationary core indicators will indeed be ultimately accepted as core
indicators. Therefore, it is essential to resolve all issues that can help in avoiding unnecessary expense
and effort during the demonstration project.
4.6.2.1 Formulation of Research Questions
The first task in evaluating whether a research indicator should be accepted as a probationary core
indicator is to formulate specific research questions. Figure 4-6 provides an initial set of questions, and
others are provided in Tables 4-3 and 4-4. The questions should be as detailed and indicator-specific as
possible and should consider issues of interpretation, methods, and variability.
Key questions of interpretation must be answered at this time to determine the merits of implementing
each indicator. These questions include:
How will the data be used?
How will the monitoring results from the indicator be used in the EMAP Annual Statistical
Summaries?
How will indicators be combined to determine the condition of assessment endpoints?
How will the indicators aid in identifying the likely causes for patterns and trends in ecological
condition?
In addition to these general questions, it may be necessary to formulate questions unique to the specific
indicator data requirements (e.g., sufficiency of sample size, environmental impacts associated with the
sample collection effort).
4.6.2.2 Literature Reviews
The existing literature should be reviewed and used, to the greatest degree possible, to prepare a
summary response to each research question listed above. Questions that the available literature
(including unpublished studies) may best be able to address are issues of interpretability, sampling
methods, and analytical techniques. This literature review should be more exhaustive than the reviews
previously conducted for Phases 2 and 3 (Sections 4.4 and 4.5). After this step, it will be determined
whether additional data or indicator research projects are needed to address the questions, what
specific information is lacking, and what approaches could be used to acquire it.
4.6.2.3 Identification of Useful Data Bases
The next step is to determine what types of data and data bases currently exist that may be useful for
addressing unanswered issues for each of the research questions defined in Step 1. For example, retro-
spective data, such as diatom assemblages and tree ring chronologies, may be useful in assessing the
Table 4-4. Example Questions for Evaluating Research Indicators
Questions related to data interpretation
How will the data be used?
How will the monitoring results be summarized in the Annual Statistical Summaries?
How do the data relate to the assessment endpoints?
How does the indicator contribute to defining the percentage of the ecological resource
considered to be degrading, improving, or impacted?
How will the indicator aid in identifying likely causes for observed patterns or trends in
ecological resource status?
Does the indicator provide important information about the status of the ecological resources
of concern?
Questions addressing methods for sample and data collection
Are accepted methods available to collect the samples and data?
At what time of the year should measurements be made (definition of the index period)?
At what location in the resource sampling unit should an indicator be measured (identification
of the index sampling site/area)?
What field sampling methods should be used for sample collection, and what are the logistical
constraints on the use of these methods?
What are the best analytical techniques for measurement of the indicator?
Does this indicator provide a cost effective way of obtaining the needed information?
Questions addressing variability of the measurements
How precisely can the indicator be measured?
What is the background spatial variability among concurrent samples collected at different
locations within a resource sampling unit?
What is the background spatial variability among non-concurrent samples collected within the
index window, but from different ecological resource sampling units within the same region?
What is the spatial variability among samples collected within the index period in different
regions during the same year?
What is the temporal variability among samples collected from the same ecological resource
sampling unit, during the same year, and at the same locations, but during different potential
index periods?
What is the temporal variability among samples collected within the same region, during the
same index period, but spanning a number of years?
How responsive is the indicator to change or stresses?
Miscellaneous questions
Can sufficient measurements of the indicator (e.g., numbers of target organisms) be collected,
given the sampling design, logistical constraints, and the need to minimize the environmental
impacts of the sampling process?
Can the information be used to conduct retrospective investigations?
natural temporal variability (noise) of an indicator, such as sedimentation rate, water chemistry con-
stituents, moisture availability, or forest productivity. Data from long-term plot experiments can be very
useful in assessing local spatial and temporal variability (e.g., Franklin et al., 1990). Data on indicator
values at regional reference sites or data on indicator responses across gradients of damage or stress
are very valuable in addressing many of the questions in Table 4-4. Similarly, data sets that include
more intensive sampling than that being considered for EMAP can be extremely valuable in defining the
best index period given intra-annual variability, identifying the best index location for sampling, and
assessing spatial variability.
4.6.2.4 Analysis of Existing Data
The data bases identified in Step 3 should be analyzed to address, as far as possible, the key research
questions. These data should be used to explore plausible patterns and trends in ecological resources
and to assess whether or not these patterns are reflected in the values of particular indicators. Data sets
from spatial exposure gradients or changing exposures over time are particularly valuable. Analyses of
data bases should be used to investigate questions related to the appropriate form for data presentation,
methods for statistical summarization, apparent redundancy among different indicators, responsiveness
of indicators to specific types of stresses, temporal and spatial variability, possible index periods, and the
relative merits of alternative sampling and analytical methods (including considerations of sampling and
analytical error). These data analyses provide the raw material for the next step, simulations of expected
indicator performance.
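One of the screening tasks named above, checking for apparent redundancy among indicators, can be sketched with standard tools. The sketch below is illustrative only: the indicator names, the synthetic data, and the 0.9 correlation threshold are invented assumptions, not EMAP specifications.

```python
import numpy as np

def redundant_pairs(data, names, threshold=0.9):
    """Flag indicator pairs whose absolute Pearson correlation across
    sampling units exceeds a (hypothetical) redundancy threshold."""
    corr = np.corrcoef(data, rowvar=False)  # indicators in columns
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) >= threshold:
                pairs.append((names[i], names[j], corr[i, j]))
    return pairs

# Synthetic example: "chlorophyll" and "algal_biomass" are nearly collinear,
# so they should be flagged; "turbidity" is independent of both.
rng = np.random.default_rng(0)
chl = rng.normal(10, 2, 200)
data = np.column_stack([chl,
                        chl * 1.5 + rng.normal(0, 0.5, 200),  # near-duplicate
                        rng.normal(5, 1, 200)])               # independent
flagged = redundant_pairs(data, ["chlorophyll", "algal_biomass", "turbidity"])
print(flagged)
```

A flagged pair would not automatically justify dropping an indicator; it only marks candidates for the closer examination described in the text.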
4.6.2.5 Analysis of Expected Indicator Performance
Analyses should be conducted to assess the performance of research indicators during hypothesized
index periods. Analyses may use simple hand-calculator analytical techniques to determine levels of
detectable effect for given levels of confidence and temporal variation, more sophisticated statistical
assessments, or simulation models. The analyses may be repeated (and improved) with any additional
data collected in field pilot studies (Section 4.6.2.7).
The results of these analyses can be used to estimate the preferred index period for sampling, the time
needed for an indicator to detect changes of a specified magnitude, or the usefulness of a response
indicator for defining the regional extent of degraded systems. Data acquired from frequently monitored
sites can be used to assess indicator stability during different index periods. Spatially intensive survey
data, where available, can be used to obtain estimates of spatial variability in indicators for specific times
of the year, and these data can be subsampled at various densities, both less than and greater than
those of the EMAP frame, to assess the stability of regional cumulative frequency distributions. This kind
of analysis may suggest forms of spatial stratification (e.g., new resource classes) that had not previously
been considered.
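The subsampling idea can be sketched as follows. This is a hedged illustration under invented assumptions: the lognormal "survey" data, the subsample sizes, and the use of a maximum-CDF-distance summary are choices made for the example, not EMAP requirements.

```python
import numpy as np

def max_cdf_distance(full, sub):
    """Maximum vertical distance between two empirical CDFs,
    evaluated over the pooled observed values (a KS-type statistic)."""
    grid = np.sort(np.concatenate([full, sub]))
    f = np.searchsorted(np.sort(full), grid, side="right") / len(full)
    s = np.searchsorted(np.sort(sub), grid, side="right") / len(sub)
    return np.max(np.abs(f - s))

rng = np.random.default_rng(1)
survey = rng.lognormal(mean=2.0, sigma=0.5, size=2000)  # intensive survey

# Compare CDFs from subsamples at several densities against the full
# survey CDF; stable distributions imply the sparser frame may suffice.
distances = {}
for n in (50, 200, 800):
    sub = rng.choice(survey, size=n, replace=False)
    distances[n] = max_cdf_distance(survey, sub)
print(distances)
```

Small distances at densities comparable to the EMAP frame would support the stability of the regional cumulative frequency distribution under the proposed sampling intensity.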
Ideally, simulation models would be run for sufficient numbers of monitoring locations to explore the
change in cumulative frequency distributions of indicators during different index periods and also over
longer time frames of several years. Simulated data streams for longer time periods can be fed into
statistical programs to determine minimum detectable trends in response indicators (e.g., 2% change per
year in indicator value over 10 years). Such simulations should reflect the varying sensitivity of different
subpopulations (e.g., the tails of the cumulative frequency distribution may be more likely to respond to
changing exposures). If only limited data are available, it may be possible to use bootstrapping tech-
niques or simple process models to generate hypothetical data with reasonable spatial and temporal
variability.
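The minimum-detectable-trend idea can be sketched with a simple Monte Carlo simulation. Everything numeric here is an invented assumption for illustration (the 2% per year decline, the interannual noise level, the number of trials, and the crude fixed t cutoff); it is not a prescription for EMAP analyses.

```python
import numpy as np

def detects_trend(y, years, t_cut=2.31):
    """Crude trend test: is |t| for the least-squares slope above a
    fixed cutoff (about two-sided alpha = 0.05 for 8 df)?"""
    n = len(years)
    x = years - years.mean()
    slope = (x @ (y - y.mean())) / (x @ x)
    resid = y - y.mean() - slope * x
    se = np.sqrt((resid @ resid) / (n - 2) / (x @ x))
    return abs(slope / se) > t_cut

rng = np.random.default_rng(2)
years = np.arange(10.0)
n_trials = 500
hits = 0
for _ in range(n_trials):
    # Regional mean indicator: 2% decline per year plus interannual noise.
    y = 100.0 * (0.98 ** years) + rng.normal(0, 3, 10)
    hits += int(detects_trend(y, years))
power = hits / n_trials
print(power)
```

The fraction of trials in which the trend is detected estimates the power to see a 2% per year change over 10 years under the assumed noise; repeating the exercise across noise levels maps out the minimum detectable trend.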
4.6.2.6 Example Assessments
Example assessments explore the types of data analysis, data presentation, indicator responsiveness,
and indicator variability that can be expected. These assessments may be conducted using either
plausible (i.e., simulated) or real data. Conducting an example assessment is intended to assist in
selecting among alternative indicators, identifying redundant indicators, identifying gaps in the suite of
indicators, deciding how data from each indicator would be used in EMAP for assessing status and
trends, and exploring the ability of the indicator suite to ascribe plausible causes to observed patterns
and trends in the region's percentage of subnominal areas. An EMAP resource group's conceptual
model, which links stressors with assessment endpoints, should be used throughout the example
assessment to guide the analysis and interpretation of both empirical and simulated data.
4.6.2.7 Limited-scale Field Pilot Studies
In general, field pilot studies should be used to gather any additional data needed to address the
questions defined in Step 1, except for issues that can only be addressed through a regional demon-
stration project (Phase 5, described in Section 4.7). As observed above, many or all of the sampling
sites for field pilot studies need not be on the EMAP sampling grid, particularly if data can be collected
more efficiently at other sites while still addressing the critical issues of ecological health with enough
confidence to permit proceeding to the regional demonstration.
If necessary, questions identified in Step 1 should be redefined to specifically address the hypotheses
that need to be tested during the pilot study. Examples of subjects to concentrate on in a pilot study
include:
Intensive temporal sampling to define the best boundaries for the index period (e.g., fall
turnover in lakes, seasonal low water in wetlands) and to quantify the within-index period
sampling variability.
Extensive spatial sampling within a regional resource class to determine the value of data col-
lected at index sampling sites relative to more intensively or randomly located sampling sites,
to quantify indicator variability within the index sampling area (e.g., the central basin of the
lake), where permanently fixed monitoring sites cannot be established.
Sampling along gradients, from polluted to unpolluted or from impacted to natural sites, to (1)
evaluate the responsiveness of the indicator to stress, (2) aid in defining nominal and
subnominal classifications (or similar schemes for data interpretation), and (3) evaluate the
specificity of the indicator to particular types of stress or change and the repeatability of the
indicator response in different regions or ecological resource classes.
The optimal pilot study design will depend on the specific questions to be addressed. However, two
examples from the EMAP-Estuaries resource group illustrate the types of studies that may be useful.
Definition of the Index period. Levels of dissolved oxygen (DO) in estuaries are highly
variable, yet DO also serves as an important exposure and response indicator for assessing
estuarine condition. Therefore, field studies were conducted in 1990 to determine (1) the
optimal boundaries for the summer index sampling period and (2) the utility of point-in-time
measurements of DO. At about 100 sites in the Virginian Province, three point-in-time
measurements of DO were collected during three sampling intervals (early, mid-, and late
summer). Comparison of the DO cumulative distribution functions for the three periods
provides information on the regional stability of the DO indicator. In addition, DO was
measured continuously at a subset of 30 sites, selected by experts as sites expected to
experience problems with low DO. These continuous records will be used to both refine the
index period and to evaluate the utility of point-in-time measurements as an indicator of the
frequency, severity, and extent of low DO episodes.
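The comparison of DO distribution functions across the three intervals can be sketched as below. The DO values are synthetic stand-ins (the real 1990 Virginian Province data are not reproduced here), and the interpretation threshold is left to the analyst.

```python
import numpy as np

def ecdf_distance(a, b):
    """Maximum vertical distance between two empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(fa - fb))

rng = np.random.default_rng(3)
n_sites = 100
# Synthetic point-in-time DO (mg/L) at ~100 sites in three intervals;
# here mid- and late summer share a distribution, early summer runs higher.
do_early = rng.normal(7.5, 1.5, n_sites).clip(0)
do_mid   = rng.normal(6.0, 1.5, n_sites).clip(0)
do_late  = rng.normal(6.0, 1.5, n_sites).clip(0)

d_mid_late  = ecdf_distance(do_mid, do_late)
d_early_mid = ecdf_distance(do_early, do_mid)
print(round(d_mid_late, 2), round(d_early_mid, 2))
```

A small distance between the mid- and late-summer CDFs, combined with a larger early-summer distance, would argue for restricting the index period to mid through late summer.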
Indicator responsiveness to stressors. Using expert judgement, 24 sampling sites were
selected to reflect important gradients of both pollutant exposure (DO gradient) and habitat
(salinity) within two geographic regions (latitudinal gradient) (Figure 4-7). A variety of indicators
(e.g., benthic biomass, species abundance) were sampled at each site, three times during the
summer index period. Response indicators that consistently reflect the effects of pollutant
gradients across a range of habitats and regions are obviously preferred, and fulfill the prime
criterion for acceptance as EMAP probationary core indicators (see Figure 4-6).
In choosing sampling sites and regions for the pilot study, an attempt should be made to include the full
range of conditions expected in EMAP. Answers to some of the above questions (e.g., best index
period) may vary from region to region or among ecological resource classes or types. If so, it may be
necessary to include multiple regions in the pilot, or to evaluate the indicator in regions that are
expected to represent the extreme conditions for the indicator. Multi-regional pilot testing should be
limited to investigation of issues that cannot be resolved within a single region, and should be designed
to obtain only the minimum information needed to complete the assessment.
[Figure: sampling sites arrayed in a two-region matrix of contaminant concentration (low to high)
and a salinity gradient (polyhaline to lower salinity).]
Figure 4-7. Example of an indicator testing and evaluation strategy (for the 1990 EMAP-Estuaries Demonstration Project in the
Virginian Province).
-------
4.6.3 Evaluation
As soon as sufficient evidence has been assembled, research indicators should be evaluated against the
selection criteria described in Figure 4-6 and Table 4-3. This evaluation is intended to be more stringent
and specific than previous evaluations. Throughout this phase, high priority should be given to indi-
cators that have definite regional applicability, relevance to assessment endpoints, and importance as
integrators across EMAP resource groups.
The peer review panel for each EMAP resource group should review the following documents that result
from the evaluation of research indicators to identify probationary core indicators: literature reviews,
summary of data analyses, pilot study reports, indicator status sheets, and an indicator evaluation report
(see expert workshops discussion in Section 4.5.2). The indicator evaluation report should be formatted
to present similar information for each indicator evaluated and should be arranged to present all infor-
mation about each indicator in a concise manner. Attempts should be made to publish the results in the
open literature, both to provide another level of peer review, and to make advances in knowledge widely
available to the scientific community.
4.6.4 Update of Indicator Status Documents and Research Plan
The indicator status report should be updated to summarize the results of each evaluation. This report
should draw upon all of the sources of information used or generated in the evaluation of research
indicators and should summarize the available data on research indicator application, interpretation,
evaluation, and testing. The indicator status sheets should be updated to reflect all decisions and
summarize their justifications.
As discussed in Section 4.5.4, the annual research plans and more comprehensive five-year research
and monitoring plans must be updated to describe the selected probationary core indicators, their
associated justifications, and the activities proposed for Phase 5 to evaluate probationary core indicators
for inclusion in the core EMAP program. With respect to Phase 4, the five-year plan will describe the
current list of probationary core indicators, summarize the results of prior research indicator evaluation
and testing efforts, and describe plans for further evaluating new indicators that have advanced from
research to developmental status.
4.6.5 Indicator Data Base Update
The only new category of information added to the data base during this phase is the location of field
pilot tests, if conducted. The results of evaluations should be used, however, to expand, improve, or
verify data for each of the previously established information categories in the data base. Citations
should direct the reader to more detailed information contained in summary reports and pilot study
reports. Each indicator record should be updated to indicate changes in status, as well as the justi-
fication for the change. All research indicators that have progressed through this phase should be
classified as either developmental-active, research-hold, or research-rejected.
4.7 PHASE 5: SELECTION OF CORE INDICATORS
4.7.1 Objectives
Following the detailed scrutiny of research indicators, it is necessary to confirm that the selected
probationary core indicators are appropriate for implementation in the EMAP core program. Regional
demonstration projects are used for this purpose since they allow full-scale testing of the utility and
applicability of the indicator. During this phase of the indicator evaluation process, the objectives of the
demonstration projects focus more on confirming the validity of the selected indicator over a broad
range of conditions than on eliminating indicators that fail to satisfy fundamental criteria relating to
responsiveness and interpretability.
The specific objectives of this phase are similar to those described for evaluating research indicators, but
they focus more on regional scale feasibility and utility and less on indicator procedures and methods,
which should already be well defined by this phase. A key function of this phase is to determine
whether the proposed density of resource sampling units is sufficient to assess associations between
regional patterns in ecological condition and anthropogenic stresses. Activities during this phase will
build up the EMAP infrastructure for conducting regional monitoring activities, through the field imple-
mentation activities that are necessary to conduct regional demonstrations. Also during this phase, the
first outputs are obtained from monitoring by EMAP resource groups.
4.7.2 Approach
Figure 4-8 illustrates some of the key issues that need to be resolved for identification of core indicators.
Criteria #1 and #3 on the right side of Figure 4-8 are the critical tests for each probationary core indi-
cator: regional feasibility and stability of the regional cumulative frequency distribution over the index
period. Figure 4-9 illustrates similarity among cumulative frequency distributions, suggesting that a summer index
period would be appropriate for monitoring the Index of Biotic Integrity, and that late spring monitoring
may not be expected to represent the same general conditions as the summer index period. Criterion
#2 (regional utility) is obviously important, but not critical, since some indicators (e.g., those sensitive to
global warming) may not be useful for several decades, although establishing baseline data early may be
Probationary Core Indicators
(From Figure 4-6)
Reject if:
1) Regional cumulative frequency distribution not sufficiently stable throughout index period
2) Low signal-to-noise ratio within individual sampling units
3) Logistically infeasible for regional implementation
Accept if:
1) Feasibility for regional implementation demonstrated
2) Utility at regional scale demonstrated; provides new information over existing data
3) Regional cumulative frequency distribution shows high signal-to-noise characteristics during the index period
4) For exposure, habitat, and stressor indicators, ability to assess associations with response indicators demonstrated
5) Values from individual resource sampling units show high signal-to-noise ratios during the index period
Suspend if:
1) Data from regional demonstration project suggest the need for development of refined methods or other (external) scientific refinements
Core Indicators
Figure 4-8. Example of an evaluation of probationary core indicators to identify core indicators (see text, Section 4.7.2, for further
explanation).
[Figure: cumulative distribution curves of IBI in Ohio streams, 1986, with separate curves for June,
July, August, and September plotted against IBI score.]
Figure 4-9. Cumulative frequency distributions (CDF) for Index of Biotic Integrity in streams in
Ohio during four months of 1986. The dissimilarity of CDFs for spring and summer
months suggests that June should not be included in the index period for IBI in
Ohio streams (after Paulsen et al., 1990).
critical to detecting changes in these indicators at a later time. For response indicators, criteria #4 and
#5 on the right side of Figure 4-8 (ability to assess associations, stability of values from individual
sampling units over the index period) are desirable for determining the probable causes of subnominal
conditions in parts of the region, but are not critical to the determination of ecological condition in a
region. However, exposure indicators must have high signal-to-noise ratios at each sampling unit over
the index period, since the primary function of exposure indicators is to assist in determining plausible
causes of subnominal conditions in parts of a sampled region. Hence, failure to demonstrate sampling
unit stability is enough to cause the rejection of exposure indicators.
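Signal-to-noise in this sense can be sketched as the ratio of variance among sampling units (the differences a regional description should resolve) to variance among repeat visits within a unit over the index period. The data below are synthetic, and this variance-ratio formulation is one illustrative choice among several possible definitions.

```python
import numpy as np

def signal_to_noise(values):
    """values: 2-D array, rows = sampling units, columns = repeated
    visits within the index period. Returns variance among unit means
    divided by the mean within-unit variance."""
    among = np.var(values.mean(axis=1), ddof=1)
    within = np.mean(np.var(values, axis=1, ddof=1))
    return among / within

rng = np.random.default_rng(4)
n_units, n_visits = 60, 3
unit_means = rng.normal(50, 10, n_units)          # real differences among units
# A stable indicator varies little within a unit over the index period;
# an unstable one is swamped by within-unit (temporal) noise.
stable   = unit_means[:, None] + rng.normal(0, 2, (n_units, n_visits))
unstable = unit_means[:, None] + rng.normal(0, 25, (n_units, n_visits))
print(round(signal_to_noise(stable), 1), round(signal_to_noise(unstable), 2))
```

An exposure indicator behaving like the "unstable" case would fail the sampling-unit stability requirement described above, regardless of how well its regional distribution behaved.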
Each indicator must be evaluated for all pertinent resource classes. Rejection of an indicator for one
resource class should not affect decisions regarding the utility of that indicator for other resource classes
(e.g., failure of nutrient concentration indicators to adequately characterize the condition of one agro-
ecosystem resource class should not result in rejection of this indicator in other agroecosystem resource
classes).
The acquisition of information to follow the decision algorithm in Figure 4-8 will involve the following
three-step process:
1. Designing and conducting regional demonstration projects
2. Completing an Annual Statistical Summary
3. Updating the Indicator Data Base
4.7.2.1 Regional Demonstration Project Design and Implementation
This step constitutes implementation of the probationary core indicators in one or a few regions, using
the sampling frame proposed for the EMAP core program. This implementation will require the indicator
development team to work closely with EMAP field crews and other members of the ecological resource
group to ensure that any new indicators being tested are implemented appropriately.
To verify the findings from the field pilot studies (Phase 4) in new regions or across regions, it may be
appropriate to include intensive temporal or spatial sampling at a subset of EMAP sites. Specific
objectives and approaches for these tests would be similar to those defined for the pilot studies in
Section 4.6.2. In some cases, the differences among regions and/or ecological resource classes may
be large enough to require additional demonstration projects, for other areas or ecological resource
classes, prior to full-scale implementation. However, the amount of testing required is expected to
decline substantially as additional regions and ecological resource classes are added. An example of a
regional demonstration project, the National Stream Survey - Phase I Pilot Study, was conducted during
the National Surface Water Survey. The project was performed during the first year of the National
Stream Survey on streams in a single region of concern, the Southern Blue Ridge Province. Although
most of the analytical methods and logistic methods for collecting and analyzing stream samples had
been developed and tested previously, three questions could not be answered without testing on a
regional scale:
1. Could streams selected using a regional probability-based experimental design be reliably
sampled on a routine basis to provide estimates of regional stream status?
2. Could an index period be defined during which streams would display low natural variability?
3. Could data of known high quality be collected using a probability-based stream survey?
The National Stream Survey was implemented in the Southern Blue Ridge Province to answer these
questions in a geographic region where overcoming logistical problems would indicate a high probability
of implementation success in all other regions of concern, and where a large enough data base could be
developed to identify an appropriate index period and assess the expected quality of data collected in
such a survey. Following successful completion of this project, the same approach was implemented by
the National Stream Survey throughout the eastern United States in the following year. This regional
demonstration project was useful as a relatively low-cost method of investigating questions that could
only be answered on a regional scale, without incurring the cost and complexity of implementation on a
program-wide basis.
4.7.2.2 Annual Statistical Summary
The Annual Statistical Summary is a major output from each ecological resource group. Results from
the demonstration study should be analyzed as proposed for each EMAP resource group's core pro-
gram to confirm the utility of the data. Although the degree of evaluation possible is extremely limited
during the first year of data collection, this assessment will allow confirmation of the basic rationale for
including each indicator. Subsequent annual summaries will be increasingly important for evaluating the
abilities of each indicator to identify changes and trends in the status of ecological resources.
4.7.3 Evaluation
Critical issues for evaluating probationary core indicators are listed in Figure 4-8, although the complete
set of indicator selection criteria (Table 4-3) should be re-examined. The primary objective of this
evaluation is to determine to what degree data obtained through the use of the indicator, as analyzed in
4.7.2.2, aid in achieving the overall EMAP assessment objectives: (1) estimating the current status,
extent, changes, and trends in the condition of the nation's ecological resources and (2) identifying
associations between human-induced stresses and ecological condition. Probationary core indicators
that are found to produce satisfactory results and to contribute to the achievement of these assessment
objectives will be accepted as core indicators for full implementation.
The results of these evaluations, and the data gathered at this and subsequent stages of implementation,
will be subjected to extensive peer, agency, and public review. Unless the indicator fails to advance
from probationary core to core status, the outputs from the demonstration studies will be incorporated
into the Annual Statistical Summary and associated interpretive reports, which will be part of the
continuing legacy of EMAP. When an indicator fails to advance to core status, the outputs from its
regional demonstration studies will instead be prepared as summaries of the reasons for rejection and
included in the Indicator Data Base.
4.7.4 Update of Research Plan and Indicator Status Documents
All indicator status documents, including the research plan, the indicator fact sheets, and the indicator
status report, should be updated to identify newly proposed core indicators, and to identify those
probationary core indicators that are rejected or suspended for inclusion in the core group, along with
the justifications for these decisions. Similar notations should be made in the five-year research and
monitoring plan. Decisions to accept probationary core indicators as core indicators should be sub-
jected to peer review before full-scale implementation of these indicators in EMAP.
4.7.5 Indicator Data Base Update
The Indicator Data Base should be updated annually with additional insights derived from analysis of
data from the regional demonstration. A listing of core indicators with appropriate
documentation will be compiled within this data base.
4.8 PHASE 6: REEVALUATION AND MODIFICATION OF INDICATORS
4.8.1 Objectives
Scientific advances and technological innovations will occur during the ongoing implementation of EMAP
and may improve the precision, accuracy, representation, cost effectiveness, and overall applicability of
EMAP indicators. This may necessitate modifying specific indicators, replacing indicators with others
that provide improved information or equivalent information at reduced cost, or adding indicators that
address emerging issues of importance. To accommodate these changes, it will be necessary to specify
appropriate procedures. This section presents a preliminary outline of a systematic approach to
indicator reevaluation and revision that will ensure the use of the best possible set of indicators for
achieving EMAP objectives. This section should be revised and expanded as EMAP begins to mature
and modification of the set of core indicators is considered.
An EMAP resource group's set of core indicators should be revised only after a thorough assessment
indicates that it is clearly necessary (i.e., when revision results in a significant improvement in the quality
of the assessment of status and trends of ecological condition, without diminishing the continuity of the
assessment record). Once the current EMAP program has been assessed, and a recommendation has
been made to modify the list of core indicators or replace a current indicator, the recommendation and a
plan for the transition will be included in the annual research plan, which is the vehicle for overall
programmatic peer review of EMAP. Recommended changes will not be implemented until after the
recommendation is approved by the peer review process. Once a determination has been made to
modify the set of indicators, the primary objective is to accomplish a smooth transition. Continuity of the
data base and the assessment resource is extremely important for ensuring that the ecological moni-
toring effort is detecting any trends or changes in condition. Situations that may require reexamination
of the core indicators include the following scenarios:
A new indicator may be identified that appears to be superior to the EMAP core indicator
currently in use for measuring an assessment endpoint. The decision to replace the current
indicator with the new one and to discontinue monitoring the current indicator must be made
after obtaining adequate information to ensure continuity of the assessment record and com-
parability of the new assessments with those that have relied on the old indicator.
The environmental conditions have changed such that underlying mechanisms are altered.
Because of this, previous linkages between the indicators and environmental values may not be
representative of the existing situation.
A method improvement may occur that promises to provide similar quality data at lower cost,
or higher quality data at a similar cost using the improved method. The impact of using the
improved method to assess endpoints and to detect trends must be assessed before replacing
the original method, to ensure that data quality actually equals or exceeds that available using
the current method.
Assessment method evolution may also result in changes in the data analysis, presentation, or evalua-
tion procedures for the Annual Statistical Summary (see Paulsen et al., 1990), such as developing a new
index or redefining the threshold for assessing ecological condition (nominal/subnominal) for selected
resource classes. Although these changes will not directly affect the set of indicators, they may result in
an opportunity for modifying or adding core indicators. Therefore, the impact of these changes on the
assessment process should be evaluated as early as possible, to increase the ability of the indicators to
provide needed information.
4.8.2 Approach
The primary approach for evaluating core indicators is routine review, evaluation of assessment outputs,
and searches for new ideas for indicators. This calls for continual tracking of the published literature and
ongoing research programs (see Sections 4.4 and 4.5), to identify promising new information; it is an
ongoing, institutionalized Phase 1 effort, as described in Section 3.1.
Once potential new or revised indicators or methods are identified, the process of investigation and
assessment of the idea should proceed through Phases 2-5 of the indicator development process, as
described in Sections 4.4 through 4.7. Implementation of this review process ensures that indicators or
methods cannot be revised or replaced without (1) carefully conducting the evaluations needed to
ensure that the new indicator or method provides a meaningful improvement in assessment capabilities
and (2) quantifying the relationship between the new and old indicators or methods (i.e., calibrating the
new indicator).
Evaluation of proposed changes to core indicators (using the approaches described in Sections 4.4-4.7)
should be conducted with the added objectives of evaluating the relative merits of the new and old
indicators or methods and quantifying the relationship between the two. This evaluation requires that
field pilot studies and demonstration programs be designed and conducted to test for comparability and
relative responsiveness of the two indicators or methods under a range of conditions. Field demonstra-
tions should be conducted to test alternative indicators or methods in one to several regions for a
number of years, to verify the consistency of that relationship. Field pilot studies and regional
demonstration projects will be conducted to calibrate the relationship between the old and new
indicators (or measurement techniques), and both the old and new (or modified) indicators will be
monitored long enough to ensure comparability of the data sets from both indicators, before phasing out
the old indicator. Once the spatial and short-term temporal relationships between the alternatives are
well established, simultaneous collection of data for both indicators may be desirable for an extended
period of time at a limited number of sites to ensure the similarity of the relationship over an extended
time period.
4.8.3 Evaluation
Revision of core indicators requires all assessments described in Sections 4.4 through 4.7 to be con-
ducted and all criteria for adoption of the changes to be satisfied. The advantages of new indicators or
methods must be significant and must represent improvements over existing indicators. These advan-
tages must be well documented, and the documentation of the research efforts (laboratory, pilot, and
demonstration studies) must include the following information:
Quantification of the calibration between the old and new indicators or methods under the full
range of conditions observed during EMAP monitoring to date.
Evaluation of how the proposed change in indicators or methods would affect the Annual
Statistical Summary and data interpretation and integration (may require recalculation of
indices).
Each EMAP resource group will formally reevaluate its indicator suite every five years (see Section 4.5)
under the direction of the resource group's Technical Director. Proposed revisions to the EMAP core
indicators, and the associated justifications, should be included in the five-year plan. These proposed
revisions will be subjected to peer review along with the rest of the program at this time. Therefore,
revisions to the core indicator suite can occur only during the overall program evaluation conducted
every five years.
4.8.4 Update of Research Plan and Indicator Status Documents
The research plan, indicator status report, and data base will be updated at each stage of evaluation of
proposed new indicators and methods. It will also be necessary to ensure that the documentation for
each of the new indicators identifies the reasons for the investigation (e.g., identified gaps, inadequate
precision of current indicators). All information developed through comparison of alternative methods
should also be summarized in these documents.
5. INTEGRATION AMONG RESOURCE GROUPS
As discussed in Section 2.2, seven broad ecological resource categories have been defined within
EMAP: Surface Waters, the Great Lakes, Estuaries, Wetlands, Forests, Agroecosystems, and Arid Lands.
At present, individual ecological resource groups have the primary responsibility for selecting and
evaluating EMAP indicators to address these ecological resource categories. Section 4 outlines the
process of indicator development for an individual resource group. Integration of indicators and
monitoring data across these resource groups is necessary, however, to fully achieve the program goals.
This section describes the issues and steps required to ensure that effective integration and coordination
among ecological resource groups occur during the indicator development process.
Integration occurs at two levels: (1) during indicator selection, to ensure that all important inter-group
linkages are considered, and (2) during data interpretation. Because this document focuses on indicator
selection, the second level of integration is beyond its scope. Tasks relating to an integrated inter-
pretation of the EMAP monitoring results are the responsibility of the EMAP Integration and Assessment
Task Group. Procedures for EMAP integration and assessment will be described elsewhere. However,
the utility of each indicator for interpreting resource status and trends is an important consideration in
the indicator selection process. Thus, close cooperation between ecological resource groups and the
Integration and Assessment group is essential. Integrating monitoring results among ecological resource
categories will enable EMAP to address a wide range of issues, including:
Source apportionment and diagnostic analyses across resource boundaries (e.g., nonpoint
sources to surface waters)
The status of whole regions, encompassing all ecosystem types
The extent and magnitude of environmental problems that impact multiple ecological resource
categories
The effectiveness of regulatory actions
Emerging environmental problems and new questions that EMAP can address
The primary approach to achieving an integrated set of indicators across all resource groups is through
communication and information exchange. Within EMAP, the Indicator Coordinator has been assigned
responsibility for facilitating and encouraging these activities as they relate to the selection and
evaluation of EMAP indicators. The role of the Indicator Coordinator is discussed in greater detail in
Section 6. The following subsections describe (1) types of indicators that integrate across ecological
resource categories (Section 5.2), (2) extension of the conceptual models described in Section 4.2 to
encompass multiple resource categories and linkages among resource groups (Section 5.3), (3) coordination
of the indicator development process among groups (Section 5.4), and (4) problems arising from
displacement of indicators in space and time (Section 5.5).
5.1 CONCEPTUAL MODEL OF INDICATOR INTEGRATION
Integration of indicator development and application of indicators across all EMAP resource groups
involves consideration of a number of factors, including (1) maintaining inter-group communication and
interaction to foster development of indicators that will integrate ecosystem level information among
different ecological resource groups, (2) assimilating new knowledge, (3) ensuring consistency in the
definition of indicator types, (4) providing for consistency in the collection and use of off-frame stressor
indicator data, (5) collaborating in identifying special response indicators that integrate across EMAP
resource groups (e.g., wide ranging or migratory organisms that use multiple habitats, food sources,
etc.), and (6) co-locating sampling units for special studies. Figure 5-1 illustrates these factors. This
figure has been expanded from concepts originally presented by Messer (1990) to illustrate interaction
between two EMAP resource groups, "A" and "B."
Within EMAP, diagnosing plausible causes of observed trends is secondary to documenting the status
and detecting trends in ecological resource condition. Diagnoses will be facilitated, however, if the
pathways of interaction between ecological resources are explicitly identified (i.e., what stresses does
each ecological resource receive as a result of processes or conditions in other resource categories?).
For example, the nutrient balance of a lake may be highly dependent on nutrient fluxes in the sur-
rounding forest. Nutrient flux from the forest may be measured as a response indicator in the forest, but
as a stressor indicator for the lake. Such identification will help clarify the off-site stressor indicator data
requirements of each EMAP resource group and the level of mutual assistance that is needed to acquire
such information. Consistency in using off-site information will improve the abilities of all EMAP resource
groups to detect spatial and temporal associations among exposure, habitat, and response indicators
and the natural and anthropogenic stressors affecting them, particularly inter-system problems and issues.
Substantial consistency already exists among EMAP resource groups in the definition of response, expo-
sure, habitat, and stressor indicators. This parallelism provides EMAP with opportunities for identifying
plausible causal relationships on large regional scales. For example, if the EMAP-Surface Waters
resource group detects that nutrients are significantly increasing in streams across a broad region, but
data from the EMAP-Agroecosystems resource group indicate no increase in nutrient export from agri-
cultural lands, then other non-point sources (e.g., atmospheric loadings) or point sources (e.g.,
discharges) may be responsible for the observed trends in aquatic nutrients. Conversely, these noncom-
plementary data may indicate that the conceptual models being used are inappropriate or incomplete,
and should be reexamined. Investigation of associations among indicators to identify potential causes of
[Figure 5-1 diagram: EMAP Resource Groups "A" and "B," each with on-frame exposure (E) and
habitat (H) indicators (measurement endpoints), linked by spatial associations and by special
response indicators (e.g., birds) that span both groups.]
This figure shows the types of integration possible for two hypothetical EMAP resource groups, "A" and "B." Response,
exposure, and habitat indicators (R, E, and H) represent data collected from EMAP (on-frame), and stressor indicators (S)
represent data collected from off-frame investigations. The same relationships apply for all seven resource groups. The types
of integration, in general order of priority for EMAP implementation, are:
1. Cross-resource group consistency in off-frame stressor information (external indicators), and intergroup exchange of
such data.
2. Cross-resource group consistency in exposure and habitat indicators (e.g., nutrients, chemical contaminants), and
indicators that link two or more resource categories.
3. Consistency in thrust of response indicators (e.g., relative abundance of selected animal species).
4. Special response indicators that integrate across resource groups (e.g., birds).
5. Co-location of sampling units for special studies.
Figure 5-1. Methods of indicator integration across EMAP resource groups.
such trends may require cooperative studies by different EMAP resource groups at co-located sites
(sites in the same hexagon monitored by more than one EMAP resource group). These special studies
may be conducted as part of Tier 3 or Tier 4 EMAP activities.
At present, different EMAP resource groups are at different stages of implementation: EMAP Great
Lakes has identified some core indicators for implementation; EMAP-Near Coastal is conducting a
regional demonstration study (Phase 5); EMAP-Forests is undertaking pilot studies (Phase 4); other
EMAP resource groups are in Phase 3 or 4 of their indicator selection activities. The differences in
phasing among EMAP resource groups are valuable for learning, as they allow the pioneering groups
to pass on lessons learned to other groups. This process began at an indicator strategy workshop in
Las Vegas, Nevada, in June 1990, and should be maintained and fostered in EMAP. Circulation of
annual research plans, communication of lessons learned through the Indicator Coordinator, and
informal inter-group discussions will all be very important for maintaining learning (Section 6.2).
5.2 CATEGORIES OF INDICATORS THAT FACILITATE INTEGRATION
Section 2 identifies four types of indicators: response, habitat, exposure, and stressor. Although all
indicators fall within one of these four indicator types, many indicators can also be classified in terms of
their contributions to integrative understanding of status and trends and their abilities to contribute to
diagnostic evaluations. For these purposes, four categories of integrative indicators have been defined:
external (or off-site), linking, common, and migratory. The following paragraphs discuss these cate-
gories of indicators.
5.2.1 External or Off-site Indicators
External or off-site indicators reflect external stresses or pressures that arrive from outside a sampling
grid cell (hexagon) to affect ecological resource conditions within the grid cell. These indicators are
most often anthropogenic in origin (e.g., pesticide applications), but they also include natural forcing
functions, such as precipitation or solar radiation, which in turn may be affected by anthropogenic
factors (e.g., global climate change) or indirectly by the ecological resources themselves. Many external
indicators are anthropogenic, including: human population densities, livestock grazing pressures, atmos-
pheric deposition, emissions of atmospheric pollutants, applications of fertilizers or other nutrients,
numbers of fishing and hunting permits, and numbers of discharge permits. Generally, these external
indicators are measured or estimated by other EMAP groups, EPA programs, or agencies, rather than by
the EMAP resource groups. Gathering and using these data may be extremely important in developing
indicators. However, considerable effort may be needed to assemble these data and put them in the
proper format.
5.2.2 Linking Indicators
Linking indicators interface one EMAP resource group with another and are often an output from one
resource category and an input to another (e.g., nitrogen present as fertilizer applications in
agroecosystems and as subsequent runoff to surface waters and wetlands). Thus, linking indicators are
measured on the EMAP frame, and the data can be used by more than one EMAP resource group. For
example, an index of soil erosion measured by the Forest or Agroecosystem resource group would
provide the Wetlands, Surface Waters, and Estuaries resource groups with an indicator of the potential
export of sediment, nutrients, or pesticides. Soil and sediments represent sinks for chemical con-
taminants in all ecosystems. As a result, soil and sediment contaminant data can be used as important
links between ecological resources. For example, soil and sediment contaminant measurements would
be of importance not only in evaluating forest status but also in assessing potential effects on aquatic
receiving systems. However, linking indicators may not be sampled at co-located sites or with common
metrics, thus complicating data analyses, synthesis, and integration, as discussed in Section 5.4.
5.2.3 Common or Shared Indicators
Common or shared indicators are measured in multiple resource categories using similar techniques.
Examples include wildlife biomarkers, landscape attributes, and commonly used metrics of population or
community status, such as relative species abundance. By using consistent sampling and analysis tech-
niques in all resource categories, interpretation of multi-resource patterns in ecosystem status and trends
is facilitated. Landscape-level indicators (e.g., mosaic diversity, patch fractal dimensions) may be
applicable for many or all resource categories as measures of habitat quality or as surrogates for other
indicators that are more difficult to measure (e.g., wildlife density). Biomarkers (e.g., DNA alterations,
cholinesterase levels) are common indicators that can be used as a metric of exposure to metals or
organic constituents, whether the organism is a plant, fish, or mammal.
5.2.4 Migratory Indicators
Migratory indicators are measures of organisms that move across resource boundaries, from one
resource category to another and back again (e.g., honey bees, migratory birds, white-tailed deer).
Migratory indicators would be expected to reflect changes in exposure or habitat in one or more
ecological resources, and in some cases might indicate cumulative impacts in several resource classes
or categories within or outside a region.
Also, response indicators that integrate the effects of ecological resource conditions in multiple regions,
resource classes, or resource subclasses (e.g., some birds, amphibians, top carnivores) may be
particularly important for detecting the cumulative effects of changes in more than one resource category.
Observing such indicators may lead to the detection of stress pathways that had not previously been
recognized (e.g., DDT and reduced reproduction in raptors due to eggshell thinness).
5.3 USE OF CONCEPTUAL MODELS TO FACILITATE INTEGRATION
As discussed in Sections 3 and 4, conceptual models are an important tool for formalizing possible
relations among indicators, assessment endpoints, and stressors, and for identifying data or knowledge
gaps that could be filled through the selection and development of additional indicators. In a similar
manner, conceptual models also play a key role in identifying indicator-endpoint-stressor relationships
and interactions across resource groups.
Each EMAP resource group will be asked to prepare a conceptual model that emphasizes the major
inputs, outputs, and structural and functional attributes of interest for the resource class. Figure 5-2
provides a preliminary example of a model from the EMAP-Agroecosystems resource group depicting,
among other linkages, soil erosion as an output from agroecosystems and a potential input to wetland,
surface water, and near-coastal environments. These resource-specific conceptual models then provide
the basis for integration of needs and results, the framework for which is defined in Section 4.2.
As a first step toward integration among ecological resource group efforts, the individual models
developed for each group (e.g., see Figure 4-3) will be compared. This process will (1) help to formalize
expected relationships among indicators in different resource groups, (2) ensure consistency in the
definitions of response, exposure, habitat, and stressor indicators, (3) encourage the identification and
use of linking, common, and migratory indicators, (4) identify commonalities in approach and indicator
use among resource groups, and (5) ensure that important processes and linkages are considered
within the EMAP monitoring network. Developing, updating, and revising the structural aspects and the
inputs and outputs of the individual conceptual models will form the focal point for workshops and small
working group discussions, and also the framework for coordinating EMAP indicator development, as
outlined in Section 5.4.
5.4 COORDINATION OF THE INDICATOR DEVELOPMENT PROCESS AMONG RESOURCE
GROUPS
Five major tasks are planned to facilitate coordination and integration of the indicator development
process among ecological resource groups:
[Figure 5-2 diagram:
Inputs — A. Management Practices: chemicals (pesticides, other organics, fertilizers), atmospheric
inputs, water (irrigation), and soil manipulation (tillage); B. Natural: pests, precipitation,
radiation/sunlight (UV-B), soil development, temperature, relative humidity, and evapotranspiration.
Agroecosystem — (1) food and fiber production, (2) natural vegetation, and (3) animal life, each
comprising soil, vegetation, and animals.
Outputs — harvests (crops, animals, wildlife); surface runoff (salts, nutrients, pesticides, sediment);
leaching to groundwater (nutrients, pesticides, fertilizers); erosion (sediments, nutrients/pesticides);
and atmospheric emissions (methane, nitrous oxide, pesticides, dust).]
Figure 5-2. Conceptual model of the agroecosystem ecological resource with associated inputs
and outputs.
1. Compile and cross reference lists of assessment endpoints, environmental values, stressors,
and indicators proposed by each resource group to identify areas of similarity or commonality.
Assessment endpoints and environmental problems in different resource categories are, in
general, highly interdependent and linked to common stressors. Foliar damage, fish loss, and
estuarine eutrophication, for example, can all be related to atmospheric deposition. Compila-
tion of the proposed endpoints, stressors, and indicators serves as the first step towards
identifying areas of overlap or, on the other hand, inconsistencies in approach among resource
groups. A preliminary listing of the environmental values identified by each resource group is
provided in Table 3-1.
2. Conduct one or more workshops involving looking-outward (interaction) matrix exercises to
identify linking and stressor indicators that have become necessary or have been overlooked.
Interaction matrices are commonly used to develop and chart linkages in computer or simula-
tion models. This same principle or approach can be used to identify possible linkages among
resource groups for indicator development. The conceptual models identifying major inputs
and outputs for each resource group (see Figure 5-2) provide a starting point for these
discussions.
3. Develop conceptual models that identify cross-resource linkages and relationships (see Section
5.3 and Figure 5-1).
4. As appropriate, propose alternate assessment endpoints and indicators that would provide
information similar to that proposed by the ecological resource group, but would be common to, or
improve comparability with, endpoints and indicators being monitored by other resource
groups. For example, sustaining biodiversity is an environmental value common to all resource
groups. To the degree possible, therefore, it makes sense to assess biodiversity using similar
assessment endpoints and indicators in each group. The greater the compatibility of indi-
cators, assessment endpoints, and off-frame stressor information used in the different resource
groups, the easier and more direct will be program integration and cross-resource analyses.
Direct comparison of responses and effects among resource categories allows for a weight of
evidence approach to diagnosing possible causal factors and mechanisms, and thus greater
confidence in the EMAP results.
5. For indicators selected by more than one group, examine and compare the proposed field
sampling and measurement methods and suggest modifications as needed to improve compar-
ability among groups. Comparable methods and units are also important if comparisons are to
be made across resource groups. In some cases, further research may be needed to develop
methods applicable to several resource categories. For example, nutrient and pesticide
exports from terrestrial systems are typically measured using different techniques and
expressed in different units than are estimates of inputs of these constituents to aquatic
systems. Selection of the optimal approach for satisfying the needs of both the terrestrial and
aquatic resource groups may require additional simulation analyses and/or field testing.
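The looking-outward matrix exercise in Task 2 can be sketched as a simple cross-reference table in which rows are source resource groups, columns are receiving groups, and each cell lists candidate linking indicators. The groups and linkages below are illustrative examples drawn from this section, not an official EMAP matrix.

```python
# Illustrative looking-outward (interaction) matrix: each cell (source, receiver)
# lists candidate linking indicators flowing from one resource group to another.
# Entries are examples mentioned in this section, not an official EMAP product.
linkages = {
    ("Agroecosystems", "Surface Waters"): ["nutrient runoff", "pesticide runoff", "sediment"],
    ("Agroecosystems", "Wetlands"): ["nutrient runoff", "sediment"],
    ("Forests", "Surface Waters"): ["nutrient flux", "soil erosion index"],
    ("Surface Waters", "Estuaries"): ["nutrient loading", "contaminant transport"],
}

def looking_outward(group):
    """Return the linkages in which `group` is the source (its outputs to others)."""
    return {recv: inds for (src, recv), inds in linkages.items() if src == group}

for receiver, indicators in looking_outward("Agroecosystems").items():
    print(f"Agroecosystems -> {receiver}: {', '.join(indicators)}")
```

Scanning the matrix row by row surfaces the outputs each group should report as linking indicators; scanning column by column surfaces the off-frame or linking inputs each group needs from others.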
Efforts related to each of the above tasks will be ongoing, and the lists, matrices, and models will
be updated as needed. Communication among groups will be ensured through regular meetings and
workshops involving the technical leads for indicator development from all ecological resource groups.
Many of the tasks will also assist in interpreting the EMAP monitoring results, and thus will be conducted
cooperatively with the EMAP Integration and Assessment Task Group.
5.5 PROBLEMS ASSOCIATED WITH DIFFERENCES IN INDICATOR SPATIAL AND TEMPORAL
SCALES
Indicators monitored by different ecological resource groups will typically not be sampled during the
same index period or be co-located in the same sampling unit. This will result in the displacement of
indicators in both space and time. Methods for dealing with this displacement, during data analysis or
by supplementing the network design (e.g., during Tier 3; see Section 2.1.4), are currently being
investigated as part of the EMAP design activities.
For some stressor and exposure indicators, temporal displacement might be desirable; the observed
response may be displaced in time from the perturbation that caused the response. For example, soil
erosion indices measured during the spring period in agroecosystems and forests, combined with
nutrient export coefficients, might correspond better with estimates of summer chlorophyll concen-
trations in lakes than would export estimates for the summer period. Selection of the optimal index
period for indicator measurements must consider, therefore, hypothesized stress-response relationships
and the potential displacement that might be required to associate a dependent response variable with
an independent stressor or exposure indicator. Expert opinion, obtained during workshops (see
Sections 4.4 and 4.5), and peer review will be used to assist in evaluating the importance and effects of
temporal displacement.
Spatial displacement is, perhaps, more difficult to evaluate and address. Paired comparisons are
generally used for association, and regression analysis is used to relate dependent and independent
variables. Indicators linking multiple resource categories, however, may not be co-located. Data
analysis techniques are being developed, therefore, to deal with non-co-located data, relying largely on
aggregation of regional or subregional EMAP results before conducting diagnostic analyses. For
example, aggregation of data by subregions was used during the National Acid Precipitation Assessment
Program to identify a linear relationship between sulfate deposition and surface water sulfate
concentrations (Figure 5-3).
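The subregional aggregation approach described above can be sketched as follows: average each indicator's non-co-located measurements by subregion, pair the subregion means, and regress the dependent indicator on the independent one. The data here are synthetic, loosely patterned on the NAPAP sulfate example, and are not actual program results.

```python
"""Sketch of associating non-co-located indicators by subregional aggregation."""
from collections import defaultdict
import numpy as np

def subregion_means(records):
    """Average measurements by subregion; records are (subregion, value) pairs."""
    groups = defaultdict(list)
    for subregion, value in records:
        groups[subregion].append(value)
    return {s: float(np.mean(v)) for s, v in groups.items()}

# (subregion, value) pairs -- synthetic deposition and lake sulfate data.
deposition = [("NE", 11.0), ("NE", 13.0), ("SE", 8.0), ("SE", 10.0), ("MW", 5.0), ("MW", 7.0)]
lake_sulfate = [("NE", 120.0), ("NE", 130.0), ("SE", 90.0), ("SE", 100.0), ("MW", 60.0), ("MW", 70.0)]

dep = subregion_means(deposition)
so4 = subregion_means(lake_sulfate)
common = sorted(dep.keys() & so4.keys())
slope, intercept = np.polyfit([dep[s] for s in common], [so4[s] for s in common], deg=1)
print(f"surface-water SO4 = {intercept:.1f} + {slope:.1f} * deposition")
```

Aggregating to subregion means sacrifices site-level resolution but allows association analyses even when the two indicators share no common sampling units.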
Analysis of associations at regional or subregional scales is consistent with EMAP's design objectives of
determining regional patterns and trends in the status of ecological resources. Most environmental anal-
yses to date, however, have focused on causal relationships at local or site-specific scales. Extension of
these techniques to larger spatial scales will require new perspectives and approaches for data aggrega-
tion and analysis, and perhaps the development of new indicators better suited for application and inter-
pretation on regional scales. The potential utility of regional-level analyses also hinges, in part, on the
degree of intra- versus inter-regional indicator variability, and the spatial scale over which indicator
values and variability are relatively homogeneous.
[Figure 5-3. Relationship between sulfate deposition and surface water sulfate concentrations,
aggregated by subregion (National Acid Precipitation Assessment Program).]
Indicator integration both within and across ecological resource categories represents a major challenge;
however, the benefits resulting from a successfully integrated program are substantial. The process of
integration will require a considerable amount of cooperation, communication, and coordination. The
Indicator Coordinator (discussed in Section 6) and the Integration and Assessment Task Group will work
jointly to ensure that these necessary levels of interaction and cooperation occur.
6. INDICATOR COORDINATOR
6.1 NEED FOR AN INDICATOR COORDINATOR
It is clear from the preceding sections that indicator development within EMAP will require extensive
cooperation and coordination both among ecological resource groups and between these groups and
other EMAP components (e.g., task groups dealing with landscape characterization, atmospheric deposi-
tion, and statistical design). It will also be necessary to coordinate the monitoring responsibilities of
different EMAP resource groups when changes in land use or other landscape changes alter the
resource classes being monitored by these groups (e.g., when forests become agroecosystems, or
wetlands are created). Furthermore, interactive relationships with other, non-EMAP indicator research
programs (e.g., within EPA's Core Research Program) would be highly beneficial. The responsibility for
coordinating these efforts, both within EMAP and between EMAP and other research programs, has
been assigned to the EMAP Indicator Coordinator and staff.
6.2 ROLE OF THE INDICATOR COORDINATOR
The Indicator Coordinator will play a pro-active role in facilitating communication and the flow of
information on indicators among groups, promoting implementation of the indicator development
strategy, supporting the integration of indicator development and research results, creating and
maintaining an indicator data base, and reviewing indicator research proposals. These functions of the
Indicator Coordinator are described in the following paragraphs. In addition, the Indicator Coordinator
will identify, enlist, and supervise appropriate staff; prepare annual budget requests; and prepare reports
of indicator coordinator activities.
6.2.1 Facilitate Communication
One of the two primary roles of the Indicator Coordinator will be to ensure that all relevant information
about indicators is continually exchanged between the ecological resource groups and the appropriate
components of the EMAP hierarchy (e.g., Steering Committee, Statistical Design Group, Integration and
Assessment Task Group). Within the EMAP organization, the Indicator Coordinator will work closely with
all Technical Directors, Technical Coordinators, and Assistant Directors, and their designated repre-
sentatives. Close contact will be maintained with each ecological resource and task group and with
appropriate elements of EPA's Core Research Program and other relevant research programs. The
Indicator Coordinator will also review all EMAP and EPA documents relating to indicators, and pass on
relevant information to the respective resource groups. In addition, the Indicator Coordinator will provide
regular status reports on indicator development to the EMAP Steering Committee and EPA program
managers. At EMAP management meetings, the Indicator Coordinator will represent concerns relating
to indicator development and evaluation. Finally, the Indicator Coordinator will act as a centralized
contact point and source of information on EMAP indicators for the external scientific community.
Requests for information or funding will be directed to the appropriate resource or task group as
needed.
6.2.2 Promote Implementation of the Indicator Development Strategy
The second primary role of the Indicator Coordinator will be to help ensure that the steps and
procedures outlined in Sections 4 and 5 of this document are implemented by each ecological resource
group. The Indicator Coordinator will work with the EMAP resource groups and the Integration and
Assessment groups to (1) develop integrative conceptual models to assist with indicator selection and
evaluation, (2) identify linking indicators (that serve as an output from one resource category and an
input to one or more other resource categories), and (3) ensure the use of comparable sampling and
measurement methods for common or shared indicators. Interactions among resource groups will be
promoted through periodic (approximately twice per year) inter-group meetings and workshops on
indicators as well as regular exchange of written materials and information. At least one workshop per
year will focus on indicator issues of broad relevance to EMAP. Based largely on discussions at
this workshop, the Indicator Coordinator will update this Indicator Development Strategy on a yearly
basis until at least 1995. The Indicator Coordinator will attend key meetings (e.g., annual reviews)
organized by the individual ecological resource groups and related task groups (e.g., landscape
characterization). As needed, the Indicator Coordinator can be called upon by individual EMAP resource
groups to provide assistance in obtaining information or cooperation in monitoring indicators from other
EMAP groups. As noted in Section 5, many of these tasks will be conducted in close cooperation with
the Integration and Assessment Task Group, responsible for ensuring a coordinated and integrated
interpretation of the EMAP indicator monitoring results. The Indicator Coordinator should assist EMAP
resource groups in preparing descriptions of their needs for indicators and their long-term plans for
meeting those needs.
6.2.3 Create and Maintain an Indicator Data Base
The purpose and format of the indicator data base are described in Appendix A. Management of this
data base will be coordinated with the EMAP Information Management Center (Information Management
Group, 1990). The Indicator Coordinator, however, will be responsible for technical oversight of the
content, accuracy, and completeness of the data base as a record of the indicator development
process. The Indicator Coordinator will work with the Information Management Center to develop the
data base design and procedures for quality control, and to facilitate access to the data base by all
interested parties. The data base will be used to obtain up-to-date information on the status of all
indicators and as a means of cross-checking indicators among ecological resource groups. Beginning
with the first complete quarter following appointment, the Indicator Coordinator will generate quarterly
reports from the data base and distribute them to all ecological resource groups and other appropriate
EMAP personnel on the first day of January, April, July, and October of each year.
6.2.4 Review Indicator Research Proposals
Research to identify new indicators or evaluate existing indicators within the EMAP framework will be
initiated and directed primarily by the EMAP resource groups. Such research will be initiated either from
within EMAP or by interested researchers in other agencies or institutions, as described in Section 7. A
role of the Indicator Coordinator will be to work with the Steering Committee to develop a process to
receive, review, and select for funding proposals for indicator research from EMAP resource groups and
from outside EMAP. The Indicator Coordinator will play an active role in reviewing research proposals
as well as final project reports. On an as-requested basis, the Indicator Coordinator will organize and
implement external peer review of proposals received by EMAP resource groups or the Steering Com-
mittee. As EMAP develops, the Indicator Coordinator may initiate proposals to develop linking, shared,
or migratory indicators, or any other indicators that may be needed by EMAP, but do not clearly fall
within the responsibility of a single EMAP resource group. The Indicator Coordinator will also monitor
ongoing and planned research on indicators outside EMAP by maintaining regular contact with appro-
priate research agencies and programs. The EMAP ecological resource groups and other relevant per-
sonnel will be provided with updates on major findings or new initiatives. A second objective of the
regular contacts with other research programs is to encourage the funding of indicator research of direct
relevance to EMAP.
7. PROCEDURES FOR INITIATING INDICATOR RESEARCH
Two types of indicator research are envisioned within EMAP: supplemental research and new initiatives.
Supplemental research is conducted specifically to fulfill a defined need within the indicator development
process for selected candidate, research, or probationary core indicators. Activities include literature
reviews, data base searches, simulation modeling, methodology development, and participation in field
pilot studies. By contrast, new initiative research is directed toward identifying new indicators needed to
fill important gaps or data needs within the EMAP monitoring network. Funding levels and the relative
effort applied for each type of research are decided principally by the individual resource groups and the
Steering Committee. Research can be proposed and performed by any qualified researcher in EMAP
groups, federal and state agencies, universities, or other research organizations or businesses. The
Indicator Coordinator's (see Section 6) role is to advise, encourage, and facilitate research on indicators,
especially those that integrate across resource groups.
Requests for specific supplemental research tasks originate within the ecological resource groups.
Solicitations will be announced as needed. New initiative research, however, may arise as either
solicited or unsolicited proposals. Requests for proposals will be issued annually by the ecological
resource groups. To the degree possible, these solicitations will be coordinated closely with research
planning and proposal requests from EPA's Core Research Program. Interactions with EPA's Core
Research Program are primarily the responsibility of the Indicator Coordinator. Unsolicited proposals for
research on indicators pertinent to a single EMAP resource group should be sent to the Technical
Director of the appropriate group. Research proposals involving indicators that integrate across
resource groups (external, linked, or shared indicators) should be sent directly to the Indicator
Coordinator.
Proposal reviews will be conducted jointly by members of the EMAP ecological resource group spon-
soring the research, the Indicator Coordinator, and a panel of outside experts. The Indicator
Coordinator is responsible for ensuring that consistently high standards of research quality are
maintained across all resource groups. In addition, the Indicator Coordinator must be notified and will
keep track of all funding decisions and provide regular reports on indicator research, planned and
ongoing, to the EMAP Steering Committee.
This strategy document provides information that can be used to evaluate unsolicited and competitive
proposals for research on specific indicators. It does not, however, provide guidance or criteria for
prioritizing proposals for different types of indicators, or for establishing priorities for indicator research
between different EMAP resource groups. These processes require evaluation of the relative importance
of the various assessment endpoints being considered by specific EMAP resource groups. These
evaluations must take into account user needs (both within the specific resource group and across
resource groups), relative importance of issues, and political and funding realities. Most of the
information for setting priorities should come from the EMAP resource group research plans. Additional
information needed to guide the preparation and evaluation of research proposals can be obtained from
the Indicator Data Base and annual statistical summaries of the different resource groups.
8. REFERENCES
Einhaus, R.L., D.M. McMullen, R.L. Graves, and P.H. Friedman. 1990. Environmental Monitoring and
Assessment Program Quality Assurance Program Plan. U.S. EPA, Office of Research and Devel-
opment. Environmental Monitoring Systems Laboratory, Cincinnati, OH.
Fava, J.A., W.J. Adams, R.J. Larson, G.W. Dickson, and W.E. Bishop. 1987. Research priorities in
environmental risk assessment. Soc. Environ. Toxic. Chem. Washington, D.C.
Franklin, J.F., C.S. Bledsoe, and J.T. Callahan. 1990. Contributions of the long-term ecological research
program. Bioscience 40:509-523.
Hughes, R.M. 1989. Ecoregional biological criteria. Pages 147-151 in Water Quality Standards for the
21st Century. U.S. EPA, Office of Water, Washington, D.C.
Hunsaker, C.T., and D.E. Carpenter (eds.). 1990. Ecological indicators for the Environmental Monitoring
and Assessment Program. EPA 600/3-90/060. U.S. EPA, Office of Research and Development,
Research Triangle Park, NC.
Information Management Group. 1990. Environmental Monitoring and Assessment Program Information
Management Program Plan-FY 90/91. Draft Report. U.S. EPA Environmental Monitoring Systems
Laboratory, Las Vegas, NV.
Karr, J.R., K.D. Fausch, P.L. Angermeier, P.R. Yant, and I.J. Schlosser. 1986. Assessing biological
integrity in running waters: A method and its rationale. Spec. Pub. No. 5. Illinois Natural History
Survey, Champaign, IL. 28 pp.
Messer, J.J. 1990. EMAP Indicator Concepts. Pages 2-1 through 2-26 in C.T. Hunsaker and D.E.
Carpenter, eds. 1990. Ecological Indicators for the Environmental Monitoring and Assessment
Program. EPA 600/3-90/060. U.S. EPA, Office of Research and Development, Research Triangle
Park, NC.
Meyer, J.R., C.L. Campbell, T.J. Moser, J.O. Rawlings, and G. Hess. 1990. Indicators of the ecological
status of agroecosystems. Presented at the International Symposium on Ecological Indicators, Ft.
Lauderdale, FL.
National Academy of Sciences. 1975. Planning for Environmental Indices (Report of Planning Com-
mittee on Environmental Indices to the Environmental Studies Board). Washington, D.C.
O'Neill, R.V. 1988. Hierarchy theory and global change. Pages 29-45 in T. Rosswall, R.G.
Woodmansee, and P.G. Risser, eds. Scales and global change: Spatial and Temporal Variability in
Biospheric and Geospheric Processes. John Wiley & Sons, New York.
Ott, W.R. 1978. Environmental Indices: Theory and Practice. Ann Arbor Science Publ., Ann Arbor, MI.
371 pp.
Overton, W.S., D.L. Stevens, C.B. Pereira, P. White, and T. Olsen. 1990. Design Report for EMAP,
Environmental Monitoring and Assessment Program (Draft). A report to the U.S. EPA Environ-
mental Research Laboratory, Corvallis. Oregon State University, Corvallis, OR.
Paulsen, S.G., D.P. Larsen, P.R. Kaufmann, T.R. Whittier, J.R. Baker, D.B. Peck, J. McGue, D. Stevens,
J.L. Stoddard, R.M. Hughes, D. McMullen, J. Lazorchak, and W. Kinney (with contributions by:
S. Overton, J. Pollard, D. Heggem, G. Collins, A. Selle, M. Morrison, C. Johnson, S. Thiele, R. Hjort,
S. Tallent-Halsell, K. Peres, S. Christie, and J. Mello). 1990. EMAP-Surface Waters Monitoring and
Research Strategy - Fiscal Year 1991. Peer Review Draft. Prepared for U.S. EPA Environmental
Research Laboratory, Corvallis, OR.
Rapport, D.J., H.A. Regier, and T.C. Hutchinson. 1985. Ecosystem behavior under stress. Amer. Nat.
125:617-640.
Rapport, D.J. 1989. What constitutes ecosystem health? Perspectives in Biology and Medicine
33(1):120-132.
Reilly, W.J. 1989. Measuring for environmental results. Pages 2-4 in EPA Journal, May/June, 1989.
Riitters, K., K. Hermann, and R. Van Remortel. 1990. Forest Task Group Annual Statistical Summary,
Hypothetical Example. Prepared for U.S. EPA, Office of Research and Development, Research
Triangle Park, NC.
Sala, O.E., W.J. Parton, L.A. Joyce, and W.K. Lauenroth. 1988. Primary productivity of the central
grassland region of the United States. Ecology 69:40-45.
Suter, G.W. 1990. Endpoints for regional ecological risk assessments. Environ. Manage. 14:9-23.
Westman, W.E. 1985. Ecology, Impact Assessment, and Environmental Planning. Academic Press,
New York.
Wiens, J.A. 1989. Spatial scaling in ecology. Functional Ecology 3:385-397.
U.S. EPA Science Advisory Board. 1990. Evaluation of the Ecological Indicator Report for EMAP.
Report of the Ecological Monitoring Subcommittee of the Ecological Processes and Effects
Committee. EPA-SAB-EPEC-91-001. Washington, D.C.
APPENDIX A
INDICATOR DATA BASE
Each EMAP resource group will compile and maintain information about its indicator development
activities in the Indicator Data Base (IDB), which will be managed by the EMAP Information Management
Center (Information Management Group, 1990). As described in Section 6, the Indicator Coordinator will
be responsible for technical oversight of the content, accuracy, and completeness of the data base as it
records the indicator development process.
The IDB should be used to store up-to-date information about each indicator being evaluated or
considered by each EMAP resource group. The data base should contain all pertinent information about
each indicator ever considered by the resource group, including at least the level of detail presented on
the indicator fact sheets in the appendices to the Indicator Report (Hunsaker and Carpenter, 1990).
Once an indicator is listed, it should never be deleted from the data base, although it can be deleted
from further consideration (status: rejected) or revised and refined during later stages of indicator
development.
As information is developed in the process of evaluating indicators in each of the phases of this strategy,
it should be added promptly to the IDB. Data base listings for each indicator should be initiated during
the identification of candidate indicators (Phase 2) and updated during each of the subsequent phases
of the indicator development process (described in Sections 4.5-4.8). Indicator information entered into
the data base should be kept simple and short, to make the data base easy to update. It is extremely
important for each ecological resource group's indicator data base to be updated frequently. Not all
informational categories accommodated by the data base will be evaluated during the first phases of
indicator development; therefore, it will not be possible to complete all data base entries while updating
records for each indicator. The appropriate results from each phase of the development process should
be entered into the data base following completion of that phase of development.
Although the design of the data base has not been completed, the structure and format of the indicator
data base must be consistent across all EMAP resource groups. This consistent structure will allow for
information exchange and synthesis. Data base entries should be brief and reasonably easy to
complete. Possible information categories in the final IDB could include the following:
EMAP Resource Group: This is the EMAP resource group that is evaluating the indicator
(e.g., Arid Lands, Forests).
Indicator Title: This is a brief, descriptive name (e.g., agricultural pest density, tree growth
efficiency, fish gross pathology).
Endpoint Assessed by the Indicator: This is the name of the assessment endpoint (taken
from the group's conceptual model) and whether the indicator is a direct metric of the endpoint
or is one of several metrics necessary for assessment of the endpoint (see Figure 4-1).
Indicator Type: This entry is composed of the indicator type and the descriptor (e.g.,
Response-Community Structure; Exposure-Bioassay). Indicator types include response,
exposure, habitat, and stressor. A given indicator may fit into more than one type. Descriptors
include those structural or functional features of the indicators that most clearly and concisely
describe the purpose of measuring (e.g., ecosystem process rates, community structure, popu-
lations, sensitive species, bioassay, ambient concentrations, tissue concentrations, pathology,
pathogens, landscape, habitat, biomarkers, exotics, genetically engineered microorganisms).
Other descriptors (e.g., retrospective) should be added for clarification.
Status: This information includes two components: (1) the indicator's present position in the
selection process and (2) the current level of evaluation activity associated with the indicator.
The position in the selection process is assigned one of four conditions: candidate, research,
developmental, or core, depending on the level of evaluation the indicator has passed (all indi-
cators addressed in Phase 1 should be identified as potential candidate indicators, even though
they may be rejected in Phase 2). The current level of evaluation activity represents one of
three conditions: active, rejected, or suspended. Active indicators are currently being
assessed or are scheduled for research and evaluation in the near-term. Rejected indicators
have been identified as unacceptable for further evaluation, at least at the present time.
Indicators that are suspended appear to be promising, but are not actively being evaluated due
to limitations in the availability of funding, time, data, or technology. For example, Research-
Active denotes that the indicator has reached the research stage of evaluation and is actively
being tested. Core-Rejected denotes an indicator previously accepted as a core indicator, but
subsequently deleted, replaced, or modified.
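As an illustrative sketch only (the IDB design is not final, and these type names are hypothetical, not taken from the EMAP design), the two-component status described above could be encoded as a pair of enumerations whose values combine into labels such as Research-Active or Core-Rejected:

```python
from enum import Enum

class Position(Enum):
    """Position in the selection process (Section 4)."""
    CANDIDATE = "Candidate"
    RESEARCH = "Research"
    DEVELOPMENTAL = "Developmental"
    CORE = "Core"

class Activity(Enum):
    """Current level of evaluation activity."""
    ACTIVE = "Active"
    REJECTED = "Rejected"
    SUSPENDED = "Suspended"

def status_label(position, activity):
    """Combine the two components into a single label, e.g., 'Research-Active'."""
    return f"{position.value}-{activity.value}"
```

Representing the status as two independent components, rather than one flat list of labels, keeps the twelve possible combinations consistent as indicators move through the selection process.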
Application: This is a concise description of the applicability of the indicator. This identifier
should indicate the degree of interclass (as opposed to local or single ecological resource
subclass) applicability. This entry should also display the degree of linkage of the indicator to
other EMAP resource groups (e.g., more than one group measuring the same indicator, such
as Agroecosystems and Surface Waters both monitoring pesticide levels), and the degree to
which the indicator integrates effects among resource classes (e.g., migratory birds that utilize
both estuarine and wetland habitats).
References: A brief listing of the sources of relevant information about the indicator should be
included. This information should include the source of information leading to the selection of
the indicator. It may include citations of workshop reports, unpublished reports, published
literature, and personal communications. This section is not intended to be a comprehensive
listing at this stage; however, it is important that all documentation be available to justify the
process of moving an indicator from one position to the next.
Justification: The summary of the reasons for accepting or rejecting the indicator as a
research indicator should be concise, yet contain sufficient detail to allow verification of the
validity of the decision. Providing the reasons for rejecting or suspending the indicator is just
as important as the justification for acceptance. Justifications for acceptance and rejection
should be based on the criteria listed in Figure 4-5, though there may be other valid reasons
for either decision. Other reasons for acceptance include:
- The proposed indicator fills an important gap covering an aspect of environmental condition
that is not yet covered by a core indicator.
- The proposed indicator promises to provide higher quality data (or data of equivalent quality
at lower cost) than is being provided by existing indicators.
- The indicator provides important information for diagnosing ecological condition in another
EMAP resource group's program.
The most probable reasons for suspending an indicator are:
- Although the proposed indicator is potentially useful, research efforts are concentrating on
other types of indicators (i.e., it is impossible to pursue all promising indicators at once).
- Additional basic information is needed about the candidate indicator before it can be
properly evaluated as a research indicator.
Index Period: The preferred time period for measuring the proposed indicator should be
addressed. Notes should be included on both the empirical justification for the proposed index
period and practical problems associated with sampling during this period. Reference should
be made to temporal variation of the indicator during the index period at different geographic
locations.
Measurements: The database should list possible field and laboratory methods, identify the
preferred methods, and provide references for each approach. The analytical and logistical
details of each method should not be presented, but remarks highlighting differences in the
type, quality, or cost of the data should be included.
Variability: Information and expert judgments concerning the relationship between natural
spatial and year-to-year temporal variability in the indicator and the expected magnitude of a
change in ecological condition should be included in the data base. Estimates of the
measurement error associated with indicators (both sampling and analytical methods) should
also be included.
Primary Problems: A list of the major issues that need resolution through subsequent EMAP
research should be presented. This listing should identify the expected magnitude of the effort
(i.e., literature search, pilot studies) needed to resolve the issues.
References: Both summary documents prepared for EMAP and selected key primary refer-
ences should be recorded. These lists should be frequently updated.
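Taken together, the informational categories above amount to a simple record schema. One possible sketch, offered only as an illustration while the final IDB design remains open (all field names here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class IndicatorRecord:
    """One possible IDB entry; fields mirror the categories described above."""
    resource_group: str            # e.g., "Arid Lands", "Forests"
    title: str                     # e.g., "tree growth efficiency"
    endpoint: str                  # assessment endpoint from the conceptual model
    indicator_type: str            # e.g., "Response-Community Structure"
    status: str                    # e.g., "Candidate-Active", "Core-Rejected"
    application: str = ""          # interclass applicability and linkages
    justification: str = ""        # reasons for acceptance, rejection, or suspension
    index_period: str = ""         # preferred sampling window and its rationale
    measurements: list = field(default_factory=list)   # candidate field/lab methods
    variability_notes: str = ""    # spatial/temporal variability, measurement error
    primary_problems: list = field(default_factory=list)  # issues needing research
    references: list = field(default_factory=list)        # key sources

# A record is initiated in Phase 2 and updated after each later phase:
rec = IndicatorRecord(
    resource_group="Forests",
    title="tree growth efficiency",
    endpoint="productivity",
    indicator_type="Response-Ecosystem Process Rates",
    status="Candidate-Active",
)
rec.status = "Research-Active"  # updated once the indicator passes to the research stage
```

Because entries are never deleted once listed, updates change only the status and the accumulating notes; the record itself persists across all phases of development.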