United States
Environmental Protection
Agency
EPA 842-B-08-004
Indicator Development for Estuaries
Cover photo: Maryland Coastal Bays by Joseph Hall II
ACKNOWLEDGEMENTS
This Indicator Development for Estuaries manual was prepared by the U.S.
Environmental Protection Agency (EPA), Office of Water (OW) with contract support
from Lynn McLeod, Battelle. The EPA Project Manager for this document was Joe Hall,
who provided overall management, guidance and project coordination. The document
was reviewed by the National Estuary Programs, EPA Regions and Headquarters staff.
Special appreciation is extended to Kerry St. Pe' and Dean Blanchard for their assistance
with the Barataria-Terrebonne NEP Case Study and to Philip Trowbridge and the New
Hampshire NEP for assistance in preparing their case study. Thanks go to the National
Estuary Program (NEP) Directors and their staff for contributing information and data,
and to the following reviewers, who provided written materials, technical information,
reviews and recommendations throughout the preparation of this document.
Barry Burgan, Office of Water
Dean Carpenter, Albemarle-Pamlico National Estuary Program
Diane Gould, Northeast EPA Region
Barbara Keeler, Region 6
Cheryl McGovern, Region 9
Shana Miller, Peconic Estuary Program
TABLE OF CONTENTS AND ACRONYMS
TABLE OF CONTENTS
EXECUTIVE SUMMARY ES-1
INTRODUCTION 1
What are Indicators? 1
Why Should Indicators be Developed? 4
Who is Developing Indicators? 5
Who Should Use Indicators? 9
Lessons Learned During Previous Efforts Toward Indicator Development 13
Indicator Development Process 15
PLANNING THE PROGRAM 17
Step 1: Determine the Spatial Scale of the Program 19
Step 2: Establish a Steering Committee 20
Step 3: Identify the Purpose and Need for Indicators 21
Step 4: Identify the Key Issues 22
Step 5: Conduct a Baseline Assessment of Each Issue 25
CONCEPTUAL MODELS DEVELOPMENT 27
Use of Conceptual Models in Indicator Development 28
Development of Conceptual Models 29
INDICATOR SPECIFICATION 35
Conceptual Relevance 40
Feasibility of Implementation 40
Response Variability 41
Interpretation and Utility 42
MONITORING PLAN DEVELOPMENT AND MODIFICATION 47
INDICATOR IMPLEMENTATION 53
Formal Adoption and Funding 53
Communication Among Organizations 54
Monitoring Plan Implementation 55
Data Collection and Analysis Plans 56
Communication of Indicator Findings 56
INDICATOR REASSESSMENT 65
SUMMARY 67
REFERENCES 71
APPENDICES
APPENDIX A-1 BARATARIA-TERREBONNE PROGRAM CASE STUDY A-1
APPENDIX A-2 NEW HAMPSHIRE ESTUARIES PROJECT CASE STUDY A-9
APPENDIX A-3 NORTHEAST COASTAL INDICATORS WORKSHOP CASE STUDY A-17
APPENDIX B INDICATORS DEVELOPED BY VARIOUS GROUPS B-1
APPENDIX C RESOURCES ON INDICATORS C-1
FIGURES
FIGURE 1. MAP OF THE ESTUARIES IN THE NATIONAL ESTUARY PROGRAM (NEP) AND
NATIONAL ESTUARINE RESEARCH RESERVE (NERR) SYSTEM 7
FIGURE 2. NATIONAL COASTAL ASSESSMENT SYNTHESIS OF WATER CLARITY DATA ... 13
FIGURE 3. INDICATOR DEVELOPMENT PROCESS 15
FIGURE 4. EXAMPLE OF A COMMON ECONOMIC INDICATOR: DOW JONES
INDUSTRIAL AVERAGE FROM 1975 TO 2005 28
FIGURE 5. CONCEPTUAL MODEL OF ESTUARINE ECOSYSTEM WITH MULTIPLE STRESSORS
AND RESPONSES 29
FIGURE 6. THE PSR CONCEPTUAL MODEL 30
FIGURE 7. EXAMPLE OF A PSR CONCEPTUAL MODEL FOR NUTRIENT INPUTS AND
ASPECTS OF EUTROPHICATION 32
FIGURE 8. EXAMPLE OF A PSR/E CONCEPTUAL MODEL FOR NUTRIENT INPUTS AND
ASPECTS OF EUTROPHICATION 32
FIGURE 9. EXAMPLE OF MULTIPLE LEVELS OF INDICATORS ASSOCIATED WITH
EUTROPHICATION AND THE INPUTS OF NUTRIENTS 36
FIGURE 10. FIVE STEPS IN DESIGNING A MONITORING PROGRAM 48
FIGURE 11. MULTISTAGED OR TIERED SAMPLING DESIGN OF THE MWRA WATER
QUALITY MONITORING PROGRAM 52
FIGURE 12. CASCO BAY ESTUARY PARTNERSHIP 2005 STATE OF THE BAY REPORT 58
FIGURE 13. SAN FRANCISCO REPORT CARD 1996-1999 59
FIGURE 14. MWRA 5 YEAR PROGRESS REPORT 2000-2004 61
TABLES
TABLE 1. EXAMPLES OF VARIOUS INDICATOR EVALUATION GUIDELINES 39
TABLE 2. SAMPLING OF INDICATORS AND THEIR RESPECTIVE ASPECTS UNDER THE
FOUR CRITERIA: RELEVANCE, FEASIBILITY, VARIABILITY, AND UTILITY ... 44
ACRONYMS
ANCMS Atlantic Northwest Coastal Monitoring Summit
BOD biological oxygen demand
BTES Barataria-Terrebonne Estuary System
BTNEP Barataria-Terrebonne National Estuary Program
CBBEP Coastal Bend Bays Estuary Program
CBEP Casco Bay Estuary Partnership
CCMP Comprehensive Conservation Management Plan
CHNEP Charlotte Harbor National Estuary Program
CPUE catch-per-unit-effort
CSO combined sewer overflow
CWA Clean Water Act
CZMA Coastal Zone Management Act
CZMP Coastal Zone Management Program
DDE dichlorodiphenyldichloroethylene
DDT dichlorodiphenyltrichloroethane
DO dissolved oxygen
EMAP Environmental Monitoring and Assessment Program
EPA U.S. Environmental Protection Agency
FEMA Federal Emergency Management Agency
GBEP Galveston Bay Estuary Program
GIS geographic information system
GLNPO Great Lakes National Program Office
GOM Gulf of Maine
GOMOOS Gulf of Maine Ocean Observing System
GPRA Government Performance and Results Act
LISS Long Island Sound Study
LSU Louisiana State University
LTEI Long Term Environmental Indicators
MOU Memorandum of Understanding
MWRA Massachusetts Water Resources Authority
NCA National Coastal Assessment
NCCR National Coastal Conditions Report
NCIW Northeast Coastal Indicators Workshop
NEP National Estuary Program
NERR National Estuarine Research Reserve
NGO non-governmental organization
NHEP New Hampshire Estuaries Project
NMFS National Marine Fisheries Service
NOAA National Oceanic and Atmospheric Administration
NPS National Park Service
NRC National Research Council
OCPD Oceans and Coastal Protection Division
OCRM Ocean and Coastal Resource Management
OECD Organisation for Economic Co-operation and Development
ORD Office of Research and Development
PAH polycyclic aromatic hydrocarbon
PCB polychlorinated biphenyl
PNNL Pacific Northwest National Laboratory
PSR Pressure-State-Response
PSR/E Pressure-State-Response-Effects
QA/QC quality assurance/quality control
SCCWRP Southern California Coastal Water Research Project
SFEI San Francisco Estuary Institute
SJBE San Juan Bay Estuary
SOLEC State of the Lakes Ecosystem Conference
TAC Technical Advisory Committee
TBT tributyltin
TEP Tillamook Bay Estuary Program
U.S. United States
USGS U.S. Geological Survey
VSP visual sampling plan
EXECUTIVE SUMMARY
The National Estuary Program (NEP) was established by Congress in 1987 under Section
320 of the Clean Water Act, to promote and restore the health of nationally significant
estuaries, while concurrently supporting beneficial uses of the estuary's natural resources.
Under the NEP, the Administrator of the U.S. Environmental Protection Agency (EPA) is
authorized to convene Management Conferences to identify priority problems within
these estuaries and develop a Comprehensive Conservation and Management Plan
(CCMP) to address those problems. Since the program's inception, 28 NEPs around the
Nation have been nominated and accepted into the program.
Each NEP is responsible for tracking the progress of CCMP implementation and for
monitoring associated ecological conditions in the estuary. Many NEPs share common
priority problems or key management issues, including habitat, pathogens, freshwater
inflow, nutrients, fish and wildlife, invasive species and toxics. However, each NEP's
goals and issue-specific management actions are unique and, therefore, the specific data
collected to track CCMP implementation progress and monitor ecological conditions vary
widely among the NEPs. The indicators developed are unique, ranging from horseshoe
crabs in the Delaware Estuary to alligator nests in the Barataria-Terrebonne National
Estuary Program.
Most of the NEPs share two or more of the key management issues, but may approach
them differently based on differing cultural, economic and political characteristics. Each
NEP reports on the status of indicator development and implementation yearly.
Overview of Environmental Indicators
"Environmental Indicators are specific, measurable markers that help assess the
condition of the environment and how it changes over time. Both short term changes and
general trends in those markers can indicate improved or worsening environmental
health. "(Based on Barbara Keeler, personal communication, April 18, 2006)
"Monitoring the status of an estuary is a complex undertaking. Measuring water and
living resource quality at all times, in all locations, and at all depths would be
prohibitively expensive." (EPA, 1994) Tracked over time, indicators can provide cost-
effective information on the status and trends of a system and the effectiveness of
management actions. Indicators let us express complex information as simple and useful
measures of status and trends. Indicators can provide measures of the success of
management actions and allow for mid-course corrections. They can provide qualitative
and quantitative measures that can be useful on local, regional or national scales both on
a temporal and spatial basis. Indicators can be used to inform diverse audiences,
including environmental managers, scientists, resource managers and the public.
EPA's Ocean and Coastal Protection Division (OCPD) evaluated the usefulness of data
being collected by individual NEPs as national environmental indicators. EPA decided to
focus an initial evaluation on two key estuarine challenges: habitat degradation/loss and
nutrient overloading. To achieve this objective, OCPD formed an NEP Indicators
Workgroup to review and assess NEP data. The Workgroup concluded that indicator
information collected by the NEPs could be useful on a local and regional, as well as a
national, scale.
As a result of this effort and the growing importance of indicator development, OCPD
decided to offer technical support to the NEPs for indicator development. Once the NEP
selects appropriate indicators and the Management Committee formally adopts them,
they are incorporated into the Monitoring Plan. The broad experience of the NEPs in
indicator development led to the preparation of this "Indicator Development for
Estuaries" manual, which provides a framework and a logical, stepwise process for
selecting, validating and implementing indicators. It became clear that the NEPs'
valuable expertise could be shared with other NEPs currently developing indicators and
with other estuaries facing some of the same issues.
The Manual
The Manual is organized to provide the user with a logical, stepwise process in
developing and implementing indicators for the estuarine environment. It is organized
under seven major headings:
Introduction
Provides the background for the identification and use of indicators;
Planning the Program
Covers spatial scale, establishing a steering committee, key management
issues, and baseline assessments;
Conceptual Models Development
Discusses conceptual model development and use;
Indicator Specification
Presents the evaluation criteria for indicators: conceptual relevance, feasibility
of implementation, response variability, and interpretation and utility;
Monitoring Plan Development and Modification
Covers development and revision of the monitoring plan;
Indicator Implementation
Covers formal adoption, funding, communication, monitoring plan
implementation, and data collection and analysis plans;
Indicator Reassessment
Covers reassessment every five years or less and reevaluation of each indicator
as needed.
The Manual is tabbed for easy access to the chapter of interest and allows the user to
focus on the appropriate step in the process.
Case studies of the Barataria-Terrebonne NEP, New Hampshire NEP Indicator
Development Process and Northeast Coastal Indicator Workshop are provided to give the
reader examples of how other programs have approached indicator development
following this process. Additionally, to provide the reader with a quick overview, further
understanding of programs, and references to indicator development, a list of indicators
selected by NEPs and other programs and a list of available indicator-focused resources
have also been included.
INTRODUCTION
This manual has been prepared to provide information on indicator development and to
offer a framework for the development of indicators for use in coastal waters. The goal is
to provide:
Background information on indicators and why indicators should be developed.
Information on indicator development by Federal programs and the advantage of
developing indicators for use on more than just a local or regional scale.
Information on who should develop indicators.
Lessons learned by programs.
Step-by-step process of how to select indicators.
Program needs for indicator development as related to the stage of program
development.
General information on developing monitoring plans for indicators, and
incorporating and implementing indicator programs.
Throughout the document, statements and examples from the U.S. Environmental
Protection Agency's (EPA) National Estuary Programs (NEPs) and other Federal,
regional, and local programs are highlighted.
WHAT ARE INDICATORS?
The definition of an indicator varies from program to program. The following are
examples of the definitions of "indicator" used in differing applications:
"Environmental Indicators are specific, measurable markers that help assess the
condition of the environment and how it changes over time. Both short term
changes and general trends in those markers can indicate improved or worsening
environmental health. "(Based on Barbara Keeler, personal communication, April
18, 2006)
"6. Ecol. A plant or animal that indicates, by its presence in a given area, the
existence of certain environmental conditions." (Random House, 2001)
"An Indicator is a particular characteristic or reference marker used to measure
whether an outcome is being achieved." (EPA, 1994)
"Indicators are objective descriptions of a particular aspect of our natural,
economic, or social environment." (The Heinz Center, 2003)
It is clear that the varied definitions of an indicator reflect the application, the complexity
of language used, and the degree of precision required based on programmatic context.
Implementation of indicators depends on the systems to which the indicators are being
applied.
Indicators are used to summarize complex information into a simplified and useful form
to facilitate the measurement of status and trends. Indicators communicate information,
quantify responses, and simplify information about complex data. Indicators can be a
cost-effective, accurate alternative to monitoring the individual components of a system.
Therefore, indicators can be an effective means of assisting groups in tracking the
progress of their programs (EPA, 2003a).
"When tracked over time, an indicator can provide information on trends in the condition
of a system. In order to develop an appropriate environmental indicator, it must be
directly linked to the cause, effect, or action it is tracking. Ideally, indicator development
should be preceded by the development of an assessment question" (EPA, 2003a).
Specifically, indicators should be linked to the issues and goals specific to an estuary
program's Comprehensive Conservation and Management Plan.
For NEPs: indicators should be linked to the issues and goals specific to the
estuary program's Comprehensive Conservation and Management Plan.
As stated above, indicators can assist the programs in tracking progress toward their
goals. Indicators that are not linked to an estuary program's goals and objectives will not
support efforts to assess the progress of management actions. Where possible, local and
regional indicators can augment national assessments; therefore, to the degree possible,
comparable indicators should be developed to support all levels of objectives.
Indicators are invaluable for measuring achievement of milestones and progress in
meeting environmental goals. Indicators can also function as early warning signals for
detecting relatively small adverse changes in environmental quality. For example, the
change in air and ocean temperatures throughout the world has been used for years as an
indicator of global warming, while the change in land use within an area can be an
indicator of changes in human activities. Although these require very different types of
measurements, both are indicators of human influence on our ecosphere.
The following definitions illustrate the use of different levels and types of indicators:
Worldwide Indicator
An indicator with worldwide applicability as a response to a common stressor (e.g.,
global warming) or as an indicator with value regardless of geographic location (e.g.,
water temperature).
Cultural/Societal Indicator
An indicator that can measure human activity: specifically, the impact of human activity
on ecosystem integrity or human response to ecosystem stressors. Examples of the former
include population, impervious land cover, and wetland filling; examples of the latter
include fish consumption advisories and beach closure days.
Economic Indicator
An indicator that normally shows general trends in the economy. Examples of an
economic indicator include unemployment levels, the Consumer Price Index, industrial
production, bankruptcies, and stock market prices.
Ecological Indicator
An indicator that characterizes measurable (quantifiable) characteristics related to the
structure, composition, or functioning of ecological systems (EPA, 2003b). Generally
biotic in nature, these can be specific individual measurements, an index of measures,
or a model that characterizes an ecosystem or one of its critical components (EPA,
2003b). An important aspect of an ecological indicator is that it quantitatively estimates
the condition of ecological resources, the magnitude of stress, the exposure of biological
components to stress, or the amount of change in condition (EPA, 2003b).
Environmental Indicator
An indicator that measures the state of air, water, and land resources, pressure on those
resources, and the resulting effect on ecological and human health. An environmental
indicator shows progress in making air cleaner and water purer and in protecting the land
(EPA, 2003b). This type of indicator measures environmental conditions (e.g., human
health, quality of life, and ecological integrity) or stressors that provide useful
information on patterns and trends.
Delaware Inland Bays Program: Definition of an Environmental Indicator
"As commonly employed, an environmental indicator is a discrete measure of one
aspect of environmental quality that can be used alone or in combination with other
indicators to deliver a message or tell a story related to the overall environmental
health of an ecosystem." (Price and Huerta, 2001)
Charlotte Harbor NEP (CHNEP): Definition of an Environmental Indicator
"An environmental indicator is defined here as a measure, an index of measures or a
model that characterizes the ecosystem or one of its components." (CHNEP, 2004)
Programmatic Indicator
An indicator that tracks a program, policy, or administrative response to an environmental problem. These
performance measures may or may not lead to detectable improvements in environmental
conditions.
Each of these indicator types can be broadly applied or can be useful in certain situations.
In the examples given above, global warming is considered a worldwide indicator, while
changes in human activities are considered a cultural/societal indicator. This manual
focuses on the development of ecological or environmental indicators on a local,
regional, or national level. Even so, the steps outlined can be used to develop indicators
for other applications.
For more information on cultural/societal and economic indicators, the following
websites are suggested:
Cultural Indicators: Contact the United Nations Educational, Scientific, and
Cultural Organization - http://www.unesco.org/culture/worldreport/html_eng/wcr5.shtml
Societal Indicators: Government Performance and Results Act -
http://www.ed.gov/offices/OUS/PES/gpra/OPM.html
Economic Indicators: see
http://www.investorwords.com/cgi-bin/getword.cgi?1643&economic%20indicator
WHY SHOULD INDICATORS BE DEVELOPED?
In the late 1960s, the United States began to develop an awareness of the importance of
preserving and protecting our nation's coastal waters, including the Great Lakes. Data
from all over the United States showed that industrial and human practices had degraded
the nation's coastal waters, along with the lives and livelihoods of populations living
along the coast.
Programs and Other Initiatives
For over 40 years, the nation has worked to improve its coastal waters by enacting
important legislation (see below) and developing a range of programs and initiatives that
protect the coastal environment. Among these are programs that focus attention on
identifying impacts that degrade the U.S. coasts on an estuarine, regional, and national
level. Once the impacts are identified and their causes understood, these same programs
work to develop plans to prevent further degradation of the area and develop ways to
improve these ecosystems to a desirable condition. One tool that is used to track the
environmental response to implementation of these programs is the environmental
assessment program; a key component of such a program is the inclusion of indicators.
Legislation
In 1972, Congress enacted both the Federal Water Pollution Control Act (renamed in 1977
as the Clean Water Act [CWA]) and the Coastal Zone Management Act (CZMA) to begin
protecting and cleaning our coastal waters. These acts and their revisions also created
several national initiatives to improve our estuaries of national significance, including the
NEPs and National Estuarine Research Reserve (NERR) programs. Other agreements and
acts have created other programs such as the Great Lakes Program to focus on specific
bodies of water.
Clean Water Act: The CWA established a structure through which EPA implements
and regulates discharges of pollutants into the waters of the United States and
develops pollution control programs, such as setting wastewater standards for
industry. The CWA granted EPA the authority to set water quality
standards for all contaminants in surface waters. A revision in 1987 created the
NEP to (1) identify nationally significant estuaries that are threatened by
pollution, development, or overuse, and (2) promote comprehensive planning for
and conservation and management of nationally significant estuaries (for more
information see http://www.epa.gov/region5/water/cwa.htm).
Coastal Zone Management Act: The CZMA established a program through the
National Oceanic and Atmospheric Administration (NOAA) to "preserve, protect,
develop, and where possible restore or enhance the resources of the coastal zone
for this and succeeding generations" (CZMA of 1972 as amended by P.L. 104-
105 The Coastal Zone Protection Act of 1996, Section 303(1); NOAA, 2005). The
CZMA established the NERRs and a process for coastal states to develop Coastal
Zone Management Programs (CZMPs). The CZMPs provide "mechanisms to
improve the cooperation and coordination among state agencies and with other
levels of government and the public" (The Heinz Center, 2003).
These two acts were, and still are, the leading legislation for the protection and
restoration of America's coastal environment. Through the adoption of these acts, many
programs have started to monitor, protect, and restore the U.S. coastal areas and marine
resources.
Since the development of the CWA and CZMA, Federal agencies and states have been
working to improve their coastal waters as specified by these acts, but no specific
measurement of the improvements has been conducted. In 1993, the Government
Performance and Results Act (GPRA) called for "Federal agencies to undertake efforts to
measure their performance and the effectiveness of their programs" (The Heinz Center,
2003), including those programs mentioned above. The process focused on developing a
series of indicators that could track the effectiveness of these programs and provide
quantifiable measures that demonstrate the response of our nation's coastal waters
overall. Since the enactment of GPRA, programs like the National Coastal Assessment
(NCA) have been implemented by EPA to measure improvements nationwide (see
http://www.epa.gov/emap/nca/ for more information on the NCA).
WHO IS DEVELOPING INDICATORS?
Organizations throughout the world and the United States have begun developing
indicators, including programs by the World Bank, the Organisation for Economic
Co-operation and Development (OECD), and Federal, state, and local agencies. Some
programs only develop indicators that can be used in a specific location, while others are
developing indicators to track changes in ecological conditions throughout entire regions.
Several Federal programs have initiatives to develop indicators. The following
discussions provide short descriptions of some of these initiatives.
EPA's Environmental Indicator Initiative
On November 13, 2001, EPA Administrator Christine Todd Whitman announced an
"Environmental Indicators Initiative" to improve EPA's ability to report on the status of
and trends in environmental conditions and their impacts on human health and the
nation's natural resources (EPA, 2005a). The Indicators Initiative also identified where
additional research, data quality improvements, and information were needed. EPA's
long-term goal is to improve indicators and the data that are used to guide the Agency's
strategic plans, priorities, performance reports, and decision-making (EPA, 2005a).
EPA's Office of Environmental Information and the Office of Research and
Development (ORD) are the lead contacts for this program.
One of the key products of the Environmental Indicators Initiative is EPA's Draft Report
on the Environment 2003 (EPA, 2003b). The document reports on the environmental
conditions and human health concerns of the environment, using available national-level
data and indicators. The report includes data on human health, ecological conditions,
clean air, "pure water," and better-protected land. Under "human health," the report
explores trends in diseases, human exposure to environmental pollutants, and diseases
thought to be related to environmental pollution (EPA, 2003b). The nation's "ecological
condition" is determined by looking at land use and cover, living resources, and
pressures on living resources and our sustainable natural resources. To establish a
national baseline for "clean air," the report examines outdoor air quality (its impact on
human health and ecosystems) and indoor air quality impacts on human health. The
"pure water" theme examines drinking water and food safety, recreational water use, the
condition of the nation's water resources, and the living resources sustained by them. To
ensure "better protected land" in the future, the report explores existing land cover and
use, activities that affect the condition of the American landscape, the location and
condition of degraded land, and various conservation and management practices (EPA,
2003b). The 2003 report is available at http://www.epa.gov/indicators/roe/index.htm.
EPA's National Estuary Program
EPA established the NEP to promote and restore the health of nationally significant
estuaries, while simultaneously supporting all beneficial uses of the estuaries' natural
resources. Under the NEP, the Administrator of the EPA is authorized to convene
Management Conferences to identify priority problems within these estuaries and
develop a Comprehensive Conservation Management Plan (CCMP) to address those
problems. At present, there are 28 NEPs throughout the United States and 27 NERRs.
Figure 1 shows the biogeographic coverage of the NEPs and the general vicinity of the
NERRs.
Estuaries of the National
Estuary Program and Subregions
of the National Estuarine Research
Reserve System
NEP
Albemarle/Pamlico Sounds
Barataria-Terrebonne
Barnegat Bay
Buzzards Bay
Casco Bay
Charlotte Harbor
Coastal Bend Bays
Delaware Estuary
Delaware Inland Bays
Galveston Bay
Indian River Lagoon
Long Island Sound
Lower Columbia River
Maryland Coastal Bays
Massachusetts Bays
Mobile Bay
Morro Bay
Narragansett Bay
New Hampshire Bays
New York/New Jersey Harbor
Peconic Bay
Puget Sound
San Francisco Bay
San Juan Bay
Santa Monica Bay
Sarasota Bay
Tampa Bay
Tillamook Bay
NERRS
1) Wells Reserve, ME
2) Great Bay, NH
3) Waquoit Bay, MA
4) Narragansett Bay, RI
5) Hudson River, NY
6) Jacques Cousteau, NJ
7) Delaware
8) Chesapeake Bay, MD
9) Chesapeake Bay, VA
10) Old Woman Creek, OH
11) North Carolina
12) North Inlet-Winyah, SC
13) Ace Basin, SC
14) Sapelo Island, GA
15) Guana Tolomato
Matanzas Reserve, FL
16) Jobos Bay, PR
17) Rookery Bay, FL
18) Apalachicola, FL
19) Weeks Bay, AL
20) Grand Bay, MS
21) Mission-Aransas, TX
22) Tijuana River, CA
23) Elkhorn Slough, CA
24) San Francisco, CA
25) South Slough, OR
26) Padilla Bay, WA
27) Kachemak Bay, AK
Figure 1. Map of the estuaries in the National Estuary Program (NEP) and National
Estuarine Research Reserve (NERR) System
Over the past few years, EPA's Oceans and Coastal Protection Division (OCPD)
determined the need to evaluate the usefulness of data being collected by individual
NEPs as national environmental indicators of NEP progress, inclusive of indicators
associated with restoration actions undertaken and changes in overall ecological
condition. NEP indicators must be directly linked to the cause, effect, or action that is
proposed in the CCMP or monitoring plan. EPA considers the establishment of
assessment questions and the development of a framework or model of the system
relevant to the assessment question(s) important to the process of developing a suite of
indicators. It is the responsibility of each NEP to track the progress of CCMP
implementation and monitor associated ecological conditions in the estuary. Many NEPs
share common priority problems; however, each NEP's goals and issue-specific actions
are unique and, therefore, the specific data collected to track CCMP implementation
progress and monitor ecological conditions varies widely among the NEPs (NCIW,
2004). Both the Barataria-Terrebonne NEP (Appendix A-1) and New Hampshire NEP
(Appendix A-2) followed the process of developing indicators based on the goals and
objectives of their CCMPs. Appendices A-1 and A-2 highlight the indicator development
process of these two NEPs.
NEP indicators must be directly linked to the cause, effect, or action that is
proposed in the CCMP or monitoring plan. EPA considers the establishment of
assessment questions and the development of a framework or model of the
system relevant to the assessment question(s) important to the process of
developing a targeted suite of indicators.
EPA's Great Lakes Program
EPA's Great Lakes National Program Office (GLNPO) works with agencies in Canada to
manage the shared resources of the Great Lakes under the Boundary Waters Treaty of
1909, the 1987 Great Lakes Water Quality Agreement, and portions of the CWA and the
Clean Air Act. Through this program, EPA works with various Federal and state agencies
to manage the ecosystems of the Great Lakes, including addressing issues such as
"reducing toxic substances, protecting and restoring important habitats, and protecting
human/ecosystem species health" (EPA, 2004). Each lake has its own Lakewide
Management Plan, which has been developed to manage the top issues within that lake.
Since 1994, the U.S. and Canadian governments have hosted biennial State of the Lakes
Ecosystem Conferences (SOLECs), which have focused on reporting the health of the
Great Lakes using indicators. "The SOLEC process is a rare opportunity to bring
stakeholders together to identify common objectives and data needs, and to encourage
cooperative data collection, evaluation, and reporting." (Environment Canada, 2005).
NOAA National Coastal Management Performance Measurement System
The National Coastal Management Performance Measurement System is part of an
ongoing effort by the NOAA to work with coastal states to assess the effectiveness of the
CZMA as carried out by coastal management programs and NERRs. This system
responds to Congressional requests to assess the national impact of coastal management
programs and to report to the Appropriations Committees on progress in meeting the
objectives of the CZMA. NOAA's Office of Ocean and Coastal Resource Management
(OCRM) is responsible for developing and implementing the performance measurement
system. OCRM has worked with the coastal management programs and reserves to
develop contextual and performance indicators related to coastal hazards, habitats, public
access, coastal community development, coastal dependent uses, coastal water quality,
government coordination and decision-making, education, stewardship, and research. In
2004, OCRM implemented a phased approach for collecting information on the identified
indicators. For the coastal management programs, Phase I will likely implement most of
the performance indicators in a subset of states. The reserves will phase in
indicators over time, with Phase I limited to indicators with known data available. In
addition to assessing management outcomes, NOAA will prepare annual assessments of
activities funded under the CZMA. NOAA is also working with the states, other Federal
agencies, and stakeholders to develop a consistent framework for a national state of the
coast report that will serve as a report card on the condition of America's coastal
resources (NCIW, 2004).
National Park Service (NPS) Vital Signs Monitoring Program
Fundamental to fulfilling the NPS mission of managing park resources "unimpaired for
the enjoyment of future generations" is knowing the condition of natural resources in
each national park. The National Parks Omnibus Management Act of 1998 established
the framework for fully integrating natural resource monitoring and other science
activities into the management processes of the National Park System. Section 5934 of
the Act requires the Secretary of the Interior to develop a program of "inventory and
monitoring of National Park System resources to establish baseline information and to
provide information on the long-term trends in the condition of National Park System
resources." In the Appropriations Bill for Fiscal Year 2000, Congress reinforced this
message by calling on the NPS to implement a "systematic, consistent, professional
inventory and monitoring program ... that is regularly updated to ensure that the Service
makes sound resource decisions based on sound scientific data." The 2001 NPS
Management Policies specifically directed the Service to inventory and monitor natural
systems in national park units, and to use the results of monitoring and research to
develop appropriate management actions. The NPS has implemented a three-tiered
strategy to institutionalize natural resource inventory and monitoring throughout the
agency: (1) completion of basic resource inventories upon which monitoring efforts can
be based; (2) creation of experimental prototype monitoring programs to evaluate
alternative monitoring designs and strategies; and (3) implementation of operational
monitoring of critical parameters (i.e., "vital signs") in all natural resource parks. To
implement vital signs monitoring, all parks with significant natural resources (about
270 nationwide) have been grouped into 32 monitoring networks linked by geography
and shared natural resource characteristics. Network parks share funding and professional
staff to plan, design, and implement an integrated long-term monitoring program (NPS,
2003; NCIW, 2004).
WHO SHOULD USE INDICATORS?
Any program that monitors a condition can develop an indicator. One example of a
monitoring program that uses indicators is a weather forecast. Meteorologists use several
measurements and techniques (e.g., temperature, wind speed, and precipitation) to
forecast the weather. Each item used is an indicator of something. If the temperature is
below freezing and the radar says there is precipitation, then more than likely snow, sleet,
or freezing rain is falling in the area. Thus, indicators can be used by anyone.
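The weather example above can be written down as a simple decision rule. The following
minimal sketch is purely illustrative and not part of any program described in this manual;
the function name, threshold, and example inputs are hypothetical. It shows how two routine
measurements combine into a single derived indicator, the same idea estuary programs apply
to monitoring data.

# Toy sketch: combine two measurements into a "frozen precipitation" indicator.
def frozen_precipitation_indicator(temperature_f, radar_shows_precipitation):
    """Return True when snow, sleet, or freezing rain is likely."""
    FREEZING_POINT_F = 32.0
    return radar_shows_precipitation and temperature_f <= FREEZING_POINT_F

# Example use: 28 degrees F with precipitation on radar.
print(frozen_precipitation_indicator(28.0, True))   # True
print(frozen_precipitation_indicator(45.0, False))  # False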
Today, a large percentage of the nation's population lives within coastal areas, which has
created environmental pressure on coastal resources. Each coastal program that is
developed to address these environmental
pressures, such as the NEPs and NERRs,
develops goals for its area. Along with these
goals, measurement programs and indicators are
established. The use of indicators supports the
determination of whether an ecosystem is
sustainable by helping to track the status and
trends of an ecosystem. Typically, coastal
programs choose indicators that track progress in a local area. However, several agencies
may join their efforts, such as those instituted by the Gulf of Maine [GOM] Council, to
develop indicators on a regional level.
Estuary programs should consider including indicators from the National Coastal
Conditions Report (NCCR I and II [EPA 2001 and 2005b]) to assist in collecting data on
the overall health of the nation's coastal areas.
Federal
agencies, including EPA, are interested in indicators that also determine the overall
national health of coastal ecosystems. Although the application of indicators ranges in
scale, the need for indicator development is the same whether the indicators are being
established for local, regional, or national efforts.
At the regional level, coastal programs such as the NEPs develop CCMPs. The purpose
of a CCMP is to identify issues and the management strategies needed to best address
and resolve them. As part of the CCMP development, program objectives are defined
(for example, "Ensure public health associated with contact recreation and seafood
consumption" [CBBEP, 1998]). To determine
whether these objectives have been met, monitoring
programs are developed to measure progress. As
part of these monitoring programs, indicators are
selected for measurement. Indicators provide the
basis to answer the CCMP questions. Together,
indicators and a monitoring plan ensure that policies and management efforts are
effective in tracking the status of an ecosystem. Appendix A-1 and Appendix A-2 provide
more details on the Barataria-Terrebonne NEP and New Hampshire NEP process of
developing monitoring programs and indicators.
For NEPs: Indicators should provide the basis to answer their CCMP questions.
Coastal waters are not defined by state borders, making it critical that neighboring
communities cooperate to address environmental concerns. Joint efforts are required to
identify and prioritize issues and questions. The need for regional indicators has become
a forefront issue as the necessity for coordinated monitoring increases. Regional
indicators serve to bring consistency to the process of informing decision-makers and the
public on the status of the area or region. This type of effort helps address gaps between
monitoring and management, such as consistent monitoring approaches, data reporting to
ensure the work is relevant, and allocation of resources. For regional indicators to be
successful, the use of the indicators must be consistent throughout the system to show
overall trends.
A national approach to developing indicators will provide an integrated assessment
framework for scientists, decision-makers, managers and, ultimately, the public. Federal
agencies are required by the GPRA to report the status of the nation's coastal waters and
their national programs. The nation's decision-makers want to know what the present
conditions of estuarine resources are in the United States, how the conditions are
changing, and what causes those changes. Therefore, a set of indicators must be
developed to correlate data from the nation's coastal waters into one data set.
National Coastal Assessment Indicators
Water Quality Index
Nutrients
Nitrogen (dissolved inorganic nitrogen)
Phosphorus (dissolved inorganic phosphorus)
Chlorophyll-a
Water clarity
Dissolved oxygen (DO)
Sediment Quality Index
Sediment toxicity: 10-day toxicity test with the amphipod Ampelisca abdita
Sediment contaminants
Metals: arsenic, cadmium, chromium, copper, lead, mercury, nickel,
silver, zinc
Organic compounds: acenaphthene, acenaphthylene, anthracene,
fluorene, 2-methyl naphthalene, naphthalene, phenanthrene,
benz(a)anthracene, benzo(a)pyrene, chrysene, dibenzo(a,h)anthracene,
fluoranthene, pyrene, low-molecular-weight polycyclic aromatic
hydrocarbon (PAH), high-molecular-weight PAH, total PAHs,
4,4'-dichlorodiphenylethylene (4,4'-DDE), total
dichlorodiphenyltrichloroethane (DDT), total polychlorinated biphenyls
(PCBs)
Total organic carbon
Benthic Index
Benthic community diversity
Presence and abundance of pollution-tolerant species
Presence and abundance of pollution-sensitive species
Coastal Habitat Index
Average of the mean long-term decadal wetland loss rate (1780-1990) and the
present decadal wetland loss rate (1990-2000) (see the sketch following this list).
Fish Tissue Contaminants Index
Metals: arsenic, cadmium, mercury, selenium
Organic compounds: chlordane, DDT, dieldrin, endosulfan, endrin,
heptachlor epoxide, hexachlorobenzene, lindane, mirex, toxaphene, PAH
(benzo(a)pyrene), PCB
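The Coastal Habitat Index above reduces to a simple average of two decadal wetland loss
rates. The following minimal sketch illustrates that arithmetic; the function name and the
example rate values are hypothetical placeholders, not NCA data.

# Minimal sketch of the Coastal Habitat Index arithmetic described above: the average
# of the long-term decadal wetland loss rate (1780-1990) and the present decadal
# wetland loss rate (1990-2000).
def coastal_habitat_index(long_term_loss_rate, present_loss_rate):
    """Average of the two decadal wetland loss rates (e.g., percent lost per decade)."""
    return (long_term_loss_rate + present_loss_rate) / 2.0

# Hypothetical example: 1.2 percent per decade long term, 2.0 percent per decade recently.
print(coastal_habitat_index(1.2, 2.0))  # 1.6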
[Figure 2 consists of two maps of water clarity monitoring sites. Site criteria are based
on light penetration at a 1-meter depth. West (1999-2000): Good = >20%; Fair = 10% to
20%; Poor = <10%. Northeast (2000): Good = >20% in NE, >25% in CB, >10% in DB;
Fair = 10% to 20% in NE, 20% to 25% in CB, 5% to 10% in DB; Poor = <10% in NE,
<20% in CB, <5% in DB.]
Figure 2. National Coastal Assessment Synthesis of Water Clarity Data
(EPA, 2005b)
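As a minimal sketch of how the Figure 2 site criteria work, the following illustrative code
classifies a site using the general West thresholds for light penetration at a 1-meter depth
(Good > 20%, Fair 10% to 20%, Poor < 10%). The subregion-specific Northeast thresholds
are omitted, and the function and variable names are hypothetical, not drawn from the NCA.

# Illustrative sketch of the general water clarity site criteria shown in Figure 2:
# percent light penetration at a 1-meter depth classified as Good, Fair, or Poor.
def classify_water_clarity(light_penetration_percent):
    """Classify a site: Good > 20%, Fair 10% to 20%, Poor < 10%."""
    if light_penetration_percent > 20.0:
        return "Good"
    if light_penetration_percent >= 10.0:
        return "Fair"
    return "Poor"

# Hypothetical site values.
for percent in (25.0, 15.0, 5.0):
    print(percent, classify_water_clarity(percent))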
Although these indicators are reviewed on a larger (national) scale, the same indicators
are also useful on the regional and local level (see the Sneaker Index callout box on
page 35 for an example of how water clarity is used on a local level). It is suggested that
these indicators be considered when developing local indicator sets, so that local data can
be compared with this national data set.
LESSONS LEARNED DURING PREVIOUS EFFORTS TOWARD INDICATOR
DEVELOPMENT
A number of programs have spent considerable time and effort over several years to
develop appropriate indicators. Since this process can be daunting to any new group, it is
always helpful to find out what other programs experienced, especially any lessons
learned. For the Northeast Coastal Indicators Workshop (NCIW), conducted in January
2004, the Maine State Planning Office prepared "Tapping the Indicators Knowledge-
base" (Pidot, 2003). This document summarizes information on lessons learned collected
from several Federal, state, and local programs throughout the United States. The key
findings of this document are summarized below. The details can be found at
http://www.gulfofmainesummit.org/docs/Lessons_Learned_Report.pdf.
Lessons Learned from the Northeast Coastal Indicator Survey
Developing indicators and indicator-based products is a lengthy process.
Query the members of the target audience throughout the process.
Involve a wide range of individuals from the beginning.
Select indicators with good prospects for long-term monitoring.
Replace an indicator if it does not produce meaningful results.
Allow time for important decisions.
Report clear and direct linkages between the indicators and the results/needs.
Develop separate simplified reports for managers and policy-makers.
Indicators need to be sold to the managers and policy-makers.
Each lesson learned is important to every program attempting to develop indicators
because they are all interconnected. As noted in the first bullet, development of indicators
is not something that can be done in a day or two. To develop indicators that will be
useful to the program, each group must carefully look at its issues, ecological system, and
available data to determine the best indicator for that situation. It will take time to pull
this information together in a way that can be reviewed. However, this effort is important
to ensure not only that the indicators selected have good prospects for long-term
monitoring and effective results, but also that the indicators are clearly linked to the
items that need to be
reported. Part of the reason indicator development takes time is because members of the
target audience need to be queried, and a wide range of individuals must be involved to
ensure that the questions the public and environmental managers need answered are
addressed. In the case of the NEPs, this step is conducted for their CCMP development;
however, the data necessary to choose indicators may not be consolidated during CCMP
development.
Adequate information must be collected prior to indicator development
so that indicators with good prospects for long-term monitoring and
effective results are selected.
Another lesson learned is that once indicators are developed, the process does not stop.
What looks good in theory does not always work in practice; therefore, once the data
collection begins, the indicators should be further evaluated to determine whether the
indicators are producing meaningful results and are useful to the end users. The indicators
selected and information collected also need to be reported to the managers; therefore, the
process of developing indicators should not be rushed, but it should also not be avoided.
If the indicators supply useful information, indicator development can help save program
funds or justify additional funds.
The last important lesson learned is that there are distinct advantages to indicator
development; however, if poor choices are made, there can be some disadvantages and
consequences. Indicators can help programs track changes efficiently, thus being more
cost-effective and less time-consuming than monitoring a number of items. However, if
the indicators selected do not communicate the information needed, then money can be
wasted and important data needed to determine whether changes have occurred can be
lost. Therefore, indicators must be selected wisely and reviewed often to ensure they
meet the needs of the program.
Long Island Sound Study (LISS): Lessons Learned
The biggest challenge during indicator development was the significant commitment of
time necessary for developing indicators (Pidot, 2003).
Casco Bay Estuary Partnership (CBEP): Lessons Learned
"...with a small budget and staff, Diane Gould reported that the CBEP staff has been
challenged by the necessity of spreading itself out over all of the issues and topics
deemed important" (Pidot, 2003).
INDICATOR DEVELOPMENT PROCESS
As noted in the lessons learned section, there are several necessary
steps to follow when developing indicators. These steps generally fit
into a consistent sequence (Figure 3) that, when followed, results in
robust, useful indicator sets. Each step in the process will be
discussed in more detail throughout the remainder of this manual. In
some instances, guidance documents previously developed by EPA
provide greater detail on the steps. In cases where other documents
already exist on these detailed processes, this manual will supply
some of the highlights of the documents but will rely on the original
documents to supply the entire process.
Many programs, such as the NEPs and NERRs, may have already
completed a number of the steps outlined in this manual. Thus, to
make this manual easier to follow and more user-friendly, we will
use the flowchart in Figure 3 in the margins of the next few sections
to show the step to which the accompanying text is referring. As the
different steps in the process are explained in the text, a tab in the
side margin will indicate where the text applies in the process (see
example on page 16). This will allow groups to tab directly to the
steps they are interested in.
Figure 3. Indicator Development Process: Program Planning; Conceptual Model
Development; Indicator Specification; Monitoring Program Development;
Implementation; Reassessment
Example:
Program Planning
Case studies have been included from the Barataria-Terrebonne Program
(Appendix A-l), the New Hampshire Estuaries Project (NHEP) (Appendix A-2), and the
NCIW (Appendix A-3). These case studies represent successful programs that developed
indicators in a local and regional area. In addition, as we move through the steps toward
indicator development, examples of additional programs will be given to assist new
programs in understanding the process.
PLANNING THE PROGRAM
For groups to be successful in their mission to improve or protect the
ecosystems within their regions, programs must be created with a set
of goals and the result in mind. Therefore, each program must be
designed around a clear purpose. For example, a purpose might be to
collect data that will inform scientists and managers about important
aspects of a region they are working to protect.
Programs such as the NEPs, CZMPs, and NERRs were designed as
partnerships between the Federal government and states working
toward protecting, restoring, and sustaining development of the
nation's coast through joint resources, funds, and management
authorities. These programs also work to provide research, data, and
education to sustain conservation and development of the coasts. When
these collaborative efforts begin, a management plan is created to
focus a program's efforts toward its goals.
In accordance with the requirements of Section 320 of the CWA (EPA, 2000a), NEPs
develop a CCMP to document the partnership's plan for improving the estuary (see the
callout box on the next page for more information on developing a CCMP). During
development of the CCMP, the NEPs conduct a comprehensive review of the key
management issues for their estuary. The CCMP identifies the estuary's priority
problems, causes, and linkages to changes in the estuary. It also identifies the
environmental quality goals and objectives of the program and explains the actions the
NEP plans to take to abate or correct the problem. Background information on the estuary
is included, such as "the status and trends of the estuary's water quality, natural
resources, and uses" (EPA, 1992). The CCMP is not the indicator plan, but indicators are
developed based on CCMP and monitoring plan management questions.
Similar steps are also followed when developing monitoring programs. In Managing
Troubled Waters, the National Research Council (NRC) developed a seven-step process
for developing and implementing monitoring programs:
1. Define program expectations and goals: This includes identifying public
concerns along with current regulations and focusing the objectives on pertinent
environmental and health regulations.
2. Define the strategy of the study: Developed by addressing specific questions to
be answered. Scientists and managers must focus the questions being asked on the
monitoring that is to be conducted, which will deliver the information required.
These focus questions will vary from program to program.
3. Conduct relevant studies and research: Provide the groundwork for the
construction of the monitoring design through development of methods, models,
and techniques.
4. Develop a sampling and measurement program: The purpose of this step is to
produce a sampling design that identifies what measurements should be
monitored and where and when the measurements should be taken.
5. Implement the study: The implementation of the study will provide information
and data for scientists and managers; however, the data will need to be analyzed
and converted into useful information for managers and decision-makers to
utilize.
6. Synthesize the data.
7. Report the results of a monitoring program to a varied audience consisting of
managers, decision-makers, and the public (NRC, 1990).
The most important aspect of the process is that each step builds upon the previous steps.
Therefore, when developing a program, it is important to revisit and rethink the steps in
the process. Over time, the objectives and goals,
monitoring techniques, and data available may change, as
well as many other aspects of the process. When these
changes occur, the plan should be updated to reflect the
most current concerns.
The most important aspect of the process is that each step builds upon the previous steps.
Steps to Develop a CCMP, Monitoring Plan, and Indicators
The CCMP encompasses the management objectives established by the program. There
are four phases to follow when developing a CCMP:
Phase 1: Convening a management conference and establishing a structure of
committees and procedures for conducting the group's work.
Phase 2: Characterizing the estuary to determine its health, reasons for its
decline, and trends for future conditions; assessing the effectiveness of existing
efforts to protect the estuary; and defining the highest priority problems to be
addressed in the CCMP.
Phase 3: Specifying action plans in the CCMP to address priority problems
identified through characterization and public input. The CCMP should build on
existing Federal, state, and local programs as much as possible.
Phase 4: Monitoring the implementation of the CCMP, reviewing progress, and
redirecting efforts where appropriate.
Once the CCMP is developed, the NEP will draft a monitoring plan in accordance with
its CCMP. The monitoring plan implements the management objectives and carries out
action plans. Indicators are developed to address the specific estuary needs defined in
the monitoring plan. NEPs work through a long process to develop and implement
priority corrective actions and compliance measures to restore and maintain the health
of an estuary. (EPA, 1993)
The following five steps are helpful when beginning the indicator development process
and are discussed in more detail below:
Determine the spatial scale of the program
Convene a steering committee
Identify the purpose and need for indicators
Identify the issues
Conduct a baseline assessment of each issue
For NEPs, the CCMP should be used for Steps 1, 3, 4, and 5; therefore, only Step 2 is
required to start the indicator development process.
STEP 1: DETERMINE THE SPATIAL SCALE OF THE PROGRAM
The assessment of the nation's coasts occurs on a number of different levels. Local
programs assess one or more specific issues for their local area (e.g., NERRs); regional
programs assess differences over a slightly larger area (e.g., NEPs, Gulf of Mexico
Program, Southern California Coastal Water Research Project [SCCWRP]); and national
programs assess changes in the overall coastal condition throughout the nation (e.g.,
NCA). The first step in the process is to determine the level at which the group is
interested in interacting. This will determine who will be included in discussions
regarding program development.
For example, a local group may be interested in tracking efforts to restore wetlands
throughout a town or county. In this instance, the group will include representatives from
the local agencies working to solve this problem but may also include representatives
from the state level to get a perspective on how other groups throughout the state are
handling this issue, or how the state agency itself is addressing the issue. Other programs,
such as the NEPs and NERRs, need to track issues on a local, state, and national level.
These groups would need to consider including local monitoring groups, state agencies,
and people involved at the national level.
Whenever possible, it is always best to try to align local and regional programs with
programs at a higher (i.e., national) spatial scale. This allows for future comparisons with
data collected over the larger area. If the group is interested only in local issues, it may
not feel it needs to consider regional initiatives, so some convincing may be necessary.
The benefit of aligning a program with a larger effort can be seen when unexpected
problems or changes arise. For instance, maybe the local group is interested only in
studying invasive species in the local area. Aligning the program with a regional program
may come in handy when a sudden unexpected change in species counts occurs with no
apparent direct cause. Groups aligned with the regional sampling can then compare their
local data with data collected on the regional level. This assists the program with
determining whether the change was a local phenomenon that needs to be studied further
or a regional issue experienced by other programs.
STEP 2: CONVENE A STEERING COMMITTEE
Steering committees should be formed during the initial phases of the indicator
development process so that they can be a part of the entire process. The earlier in the
process the steering committee is involved, the more efficient and effective the indicator
development process will be in achieving the desired program outcomes.
Steering committees are normally formed with a mix of people from different
backgrounds, agencies, and organizations. Because the committee members are an
integral part of the indicator development process, it is important that each person on the
committee be included for a specific reason (for example, his or her expertise in a
technical area or understanding of monitoring programs in the region). Committee
members also must be actively involved in each step of indicator development, not only
as reviewers of the final result. Groups that have an effective steering committee have
found that it is easier to establish indicators and obtain the desired outcome by the end of
the process.
The most important aspect of an effective steering committee is to convene the right
balance of managers, policy-makers, researchers, and the public so that all are
represented. Representatives from the area's key monitoring and management groups
should be included, along with members of local environmental groups and the public.
The people involved do not have to be scientists with previous indicator knowledge.
Members such as managers and policy-makers should be selected for their ability to
inform decision-makers on funding and regulations and should be able to provide support
for the future. Researchers, scientists, and educators who possess a strong knowledge of
the ecosystem and science should be included on the steering committee to make
informed decisions on indicators. It is also important to include the public for several
reasons. Most importantly, public support is critical to the success of the indicator development process, and public participation provides public opinion on the ecosystem. Ultimately, the
public is the final recipient of the program's findings on the state of the ecosystem.
Once the steering committee is formed, members should be briefed on the goals of the
indicator development process. If a definition of the word "indicator" has not already
been developed, the committee members should be asked to do so based on the needs of
their program. The committee should also assist in developing a list of topics, questions,
and conceptual models to develop indicators. The members do not need to develop all of
the information themselves, but they should agree on the topic areas and review the
questions and conceptual models developed to ensure that they agree on what is included.
In the case of the NEPs, the steering committee should use the topics and questions from
the CCMP so that indicators will be developed to answer the NEP's management
questions.
Workshops have been found to be successful events where steering committee members
can gather with other participants to present the indicator development information that
has been prepared and to receive feedback on whether they are on the right track. Just as
it is important to have the right people as members of the steering committee, it is crucial
to have the appropriate workshop participants to complete the indicator development
process. Although the indicator development process continues long after the workshop
has ended, everyone involved in the process has a responsibility to continue the work.
The key to a successful steering committee is communication. Regular communication of
information on indicator development can be accomplished through e-mail distributions,
conference calls, meetings, and workshops. Members should be required to commit to the
development process, which could include bi-weekly or monthly meetings, whether
through conference calls or attending the meetings in person. E-mail updates on the
progress of the process should be distributed promptly based on a time frame established
by all members (for example weekly, bi-weekly, or monthly).
Great Lakes Program - Steering Committee
"The process involved over 130 people that could be identified by name." "Experts,
including researchers, academics, and managers, were included in each working
group. They sought out individuals for inclusion in these groups based [on] expertise,
rather than attempting to equally represent all sectors of the environmental world
(policy, research, industry, etc...)." (Pidot, 2003)
STEP 3: IDENTIFY THE PURPOSE AND NEED FOR INDICATORS
Step 3 in the process should answer the following questions: (1) Why are we developing
indicators? and (2) Why is there a need for them? The answers to these questions are the
starting blocks for the rest of the program, so getting consensus on these answers is
important.
Normally, the purpose and need for the program are not difficult to determine because
most groups were motivated by a specific issue or group of issues that needs to be
addressed. Some programs have their purpose and need specified as part of their charter.
For example, the NEPs have their purpose
and need specified by Section 320(b)(6) of
the CWA, which states that NEPs must
".. .monitor the effectiveness of actions
taken in pursuit of the plan." In this
particular instance, "plan" refers to the
CCMP developed by each NEP. Other programs have similar goals under GPRA and
other statutes. The important step is agreeing on and documenting the purpose and need.
For NEPs: the purpose and need for indicator development is to track progress towards the goals outlined in their CCMP.
The actual purpose of the program will depend on the complexity and scope of the issues
the group is attempting to address. If the group is addressing only a single issue, then the
purpose and need statement will focus on just that issue. For example, maybe the group is
focused on lowering the concentration of fecal coliform throughout the estuary. In that
case, the purpose and need statement for the program might be:
Purpose: To monitor the change in fecal coliform levels throughout the estuary.
Need: At present, the amount of fecal coliform entering the estuary is causing a
health hazard to the local population that is exposed to the water. This program is
needed to track changes in fecal coliform levels throughout the estuary to
determine whether levels are increasing or decreasing based on recent efforts to
prevent fecal coliform contamination.
The following is an example of a purpose and need statement developed for a program
aimed at monitoring more than one issue.
Purpose: To give the region the ability to compare data, assess the regional
status of the environment, and provide early warning of potential problems.
Need: To track the status and trends in ecosystem integrity throughout the region
through collaborative partnerships. To provide information for policy,
management and advocacy decisions at regional and local scales.
The more focused the purpose and need statements are, the more focused the resulting
program will be. In addition, it is important that all parties involved in the program
development understand the purpose and need statements clearly and are reminded of
them throughout the process, so that a program can be developed to meet these goals.
Great Lakes Program - SOLEC Goal
"The goal of [SOLEC] is to assemble a basin-wide suite of scientifically valid
indicators that will be most useful and understandable in determining the health
of the Great Lakes ecosystem to the interested public." (Bertram and Stadler-
Salt, 2000)
STEP 4: IDENTIFY THE KEY ISSUES
Step 4 in the process uses the purpose and need statements to identify the issues,
management objectives, and questions the program will address. For many programs, this
was addressed when their management plans (i.e., NEP CCMPs) were developed. Critical
attributes for issue identification are:
1. The issues must directly link to the purpose and need statements;
2. The issues must consider public, scientific, and management concerns in a measurable fashion; and
3. Details on the issues should be stated in terms of management objectives and questions that point to the critical information needs (EPA, 1993).
For NEPs: the key issues for indicator development
should be the same as those identified in their CCMP.
The process of identifying issues can be simple or intricate, depending on the program
goals. If the program has only one goal, such as eliminating hypoxia events from
occurring within the estuary, then it will develop management objectives around this one
issue. For more complex programs, the number of issues addressed will depend on the
key issues affecting the ecosystem and what the program plans to cover. In this instance,
the steering committee will need to define the priority issues within the estuary along
with the coinciding management objectives. The document Successful Coastal
Management Solutions outlines seven key management issues that estuaries should
consider (EPA, 2003c):
1. Habitat
2. Pathogens
3. Freshwater inflow
4. Nutrients
5. Fish and wildlife
6. Introduced species (invasive species)
7. Toxics
Develop Management Objectives
Management objectives are specific actions designed to quantify/qualify the changes
intended by the program for each priority issue. For example, if the issue is coliform
contamination within the estuary, the management objectives for that issue might be:
To decrease the number of boats discharging their holding tanks within the
boundaries of the estuary by 70 percent within the next 3 years.
To decrease the number of failing septic systems throughout the estuary's
watershed by 50 percent within the next 15 years.
To decrease the number of overflow instances from municipal sewer plants in the
area by 25 percent within the next 10 years.
To decrease the amount of runoff containing animal waste entering the estuary by
25 percent within the next 10 years.
Each of these management objectives has a specific goal and time period against which
progress can be measured. In some instances, a quantitative value may not be associated
with an issue. In these instances, it is important to be as specific as possible in order to
ensure the program has some baseline condition to measure against.
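Because each objective above pairs a baseline condition with a numeric target and a deadline, progress toward it can be tracked with simple arithmetic. The sketch below (in Python) is a minimal, hypothetical illustration of that bookkeeping; the class name, the assumed baseline count of 200 discharging boats, and the current count are invented for this example and are not drawn from any NEP dataset.

```python
from dataclasses import dataclass

@dataclass
class ManagementObjective:
    """A quantified management objective with a baseline, target, and deadline."""
    description: str
    baseline: float           # indicator value at the start of the program
    target_reduction: float   # fractional reduction sought, e.g. 0.70 for 70 percent
    deadline_year: int

    def target_value(self) -> float:
        # Value the indicator must reach by the deadline.
        return self.baseline * (1.0 - self.target_reduction)

    def percent_of_goal_achieved(self, current: float) -> float:
        # Share of the intended reduction achieved so far (capped at 0-100 percent).
        intended_change = self.baseline - self.target_value()
        if intended_change == 0:
            return 100.0
        achieved = (self.baseline - current) / intended_change * 100.0
        return max(0.0, min(100.0, achieved))

# Hypothetical example patterned on the boat-discharge objective in the text.
objective = ManagementObjective(
    description="Boats discharging holding tanks within the estuary",
    baseline=200,            # assumed baseline count of discharging boats
    target_reduction=0.70,   # 70 percent reduction sought
    deadline_year=2008,
)
print(round(objective.target_value(), 1))                # target count of boats
print(round(objective.percent_of_goal_achieved(130), 1)) # percent of the intended reduction achieved
```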
These management objectives are then used to form questions that the selected indicators
will address. The goal of the NEP is to determine the effectiveness of its CCMP and the
implementation of the management objectives. Both the Barataria-Terrebonne and New
Hampshire NEPs developed indicators based on questions formed from their CCMP
management objectives; details on this process are provided in Appendix A-l and
Appendix A-2.
Basic Steps for Action Plan Development
State the problem, identifying the probable causes and sources.
State the program goals related to the problem and its sources.
Set specific, measurable objectives to attain the goals.
Determine the universe of possible management activities, both new and existing,
for consideration.
Select the activity that will work, that the public will support, and that can be
implemented within a reasonable time and with reasonable resources.
Establish specific action plans needed to abate and control the problem or to
protect the resources.
Implement and monitor results, collecting data on measurable indicators of
progress.
Report on progress, costs, and results.
Review, re-evaluate, and redirect efforts as needed (EPA, 2005c).
Define Questions to be Answered by Indicators
Under each management objective, a question or series of questions is used to answer
whether the management objective has been met or how much progress has been made
toward accomplishing the objective. The questions can be developed by simply turning
the management objective into a question or a series of questions that look at different
aspects of the objective.
For NEPs: Management objectives and questions should have been developed in the CCMP. If not, they should be connected with issues identified in the CCMP.
Question development is an important task because the selected indicators must answer
the questions. Therefore, the questions must be specific enough that someone can look at
a series of data and develop an answer to that question.
For example, the management objective might be:
To determine the health of fisheries with regard to ecosystem integrity.
The associated questions could be:
1. What are the trends in and the status of commercially important fisheries stocks?
2. What are the effects of fishing on non-commercial species and their associated communities?
3. What are the effects of fishing and non-fishing activities on marine habitat and fisheries productivity?
4. What are the trends in the socioeconomic characteristics of fishing?
If the indicators being developed will be used at more than one level (i.e., nationally and locally), then there may be separate questions for each level of use of the indicator.
It is important that each question be clear and understandable. This will allow an
appropriate indicator to be selected, i.e., one that will answer the question. That answer
will then be used with information from the other questions to answer whether the
management objective was met.
New Hampshire Estuaries Project - Goals and Objectives
"Those charged with developing indicators for the New Hampshire Estuaries began by considering the goals and objectives written into the estuary management plan. Each objective was rephrased as a monitoring question, for which one or more indicators were selected based on their ability to [provide] appropriate answers. The hypothetical data required to track each of those indicators was then described and compared with actual data sets produced by existing monitoring programs." (Pidot, 2003)
STEP 5: CONDUCT A BASELINE ASSESSMENT OF EACH ISSUE
Once the management issues and objectives are selected and outlined, the next step in the
process is conducting a baseline assessment of each issue. Mature programs have
normally already accomplished this task, but should review the information to make sure
it is up to date. For new programs, how well this task can be accomplished will depend
on how well the issue has been studied in the area.
A baseline assessment of an issue compiles and analyzes all available information on that
subject for that area. It defines the present conditions of that issue for that particular area.
If the issue is a new one, then an initial monitoring program might need to be conducted
to determine the starting point; for others, the baseline assessment may only need to
consist of a review of the most recent reports on the issue. It is important to understand
current conditions so that trends can be identified. For example, if the group were
concerned about changing dissolved oxygen (DO) levels within the estuary from year to
year, the baseline assessment would need to include information on DO levels throughout
the estuary over the past year and, if possible, from previous years, so that it can be determined how levels have changed over time.
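To make the DO example concrete, the short sketch below summarizes a year of hypothetical station readings into the kinds of baseline statistics a program might carry forward (annual mean, annual minimum, and the fraction of readings below a screening level). The readings and the 5.0 mg/L screening level are assumptions for illustration only, not values from an actual monitoring program.

```python
import statistics

# Hypothetical monthly bottom-water DO readings (mg/L) from one station over one year.
do_readings = [9.1, 8.7, 8.2, 7.5, 6.4, 5.1, 4.2, 3.8, 4.9, 6.2, 7.8, 8.8]
threshold = 5.0  # assumed screening level, mg/L

baseline = {
    "annual_mean": round(statistics.mean(do_readings), 2),
    "annual_minimum": min(do_readings),
    "fraction_below_threshold": sum(v < threshold for v in do_readings) / len(do_readings),
}
# Reports the annual mean, the annual minimum, and the fraction of readings below 5.0 mg/L.
print(baseline)
```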
The baseline assessment should also include information on current monitoring being
conducted, including what is measured; when, where and how often it is measured; how
it is measured; and who conducts the monitoring. It is also helpful to know how often the
monitoring programs report their data. When choosing an indicator, it is important to
understand whether current monitoring conducted in the area will adequately answer the
objectives. A number of programs have focused their indicator development on
parameters currently measured through mandatory monitoring programs. The reason for
this approach is that the baseline data are already available and the organization or
agency already has a mandate to conduct the sampling, sample analysis, and data
analysis. Other programs choose their indicators based on best scientific knowledge, then
determine whether monitoring occurs in the area for that parameter. If the parameter was
not monitored and was determined to be a priority, a monitoring plan could then be
developed for it.
A high-profile baseline assessment was conducted by the Massachusetts Water Resources
Authority (MWRA) in conjunction with the construction of a sewage treatment plant
outfall in Massachusetts Bay. The outfall, which was brought on-line in September 2000,
discharges secondary-treated effluent into Massachusetts Bay. MWRA has been
monitoring the bay and Boston Harbor since 1992. The monitoring conducted prior to
September 2000 was part of the baseline assessment of Massachusetts Bay and Boston
Harbor. The baseline monitoring conducted allowed managers and scientists to gain vast
knowledge about water quality, nutrients, benthos, sediment quality, and fish and
shellfish in Massachusetts Bay and Boston Harbor. The extensive baseline assessment
that MWRA conducted, which led to the comparison of pre- and post-outfall conditions
within Massachusetts Bay and Boston Harbor, enabled scientists, managers, and
decision-makers to make informed decisions on regulatory issues and responses needed.
There is a strong national push to establish a consistent effort in conducting baseline
assessments and monitoring. Establishing a national monitoring effort would allow data
to be easily compared and provide practical value for scientists and managers. To be fully
effective, monitoring data collected by state, territorial, tribal, and local governments,
non-governmental organizations (NGOs), and volunteers will need to be coordinated with
the national monitoring network (U.S. Commission on Ocean Policy, 2004). Currently,
the responsibility for monitoring and assessing marine resources is divided among a
number of Federal, state, and local agencies, and other NGOs. A more unified approach
with comprehensive monitoring can provide scientists and managers with the knowledge needed to evaluate ecosystem change and to understand whether their goals and objectives are effectively being met.
San Juan Bay NEP - Baseline Information
"The proposed study will concentrate on establishing detailed Long-Term
Environmental Indicators for the SJBE (LTEI-SJBE) by initially collecting
baseline information from the system, establishing the indicators, and further
enabling the analysis of achieved programmatic goals." (Otero, 2002)
CONCEPTUAL MODEL DEVELOPMENT
[Process diagram: Program Planning → Conceptual Model Development → Indicator Specification → Monitoring Program Development]
The purpose of an indicator is to summarize complex information in a simplified and useful manner and to facilitate the identification of status
and trends. In a common analogy to the field of medicine, the patient
represents a system or phenomenon of interest. Indicator development
is conducted by linking a complex collection of subsystems with many
compartments and interactions, just like the multitude of physiological
systems of the human body. Indicators act as "vital signs" used to
measure the state of the system, just as temperature and pulse are used
to assess the overall health of a patient.
Indicators are used to convey information, quantify responses, and
simplify information about complex ideas. They are assumed to be a
cost-effective and accurate alternative to monitoring individual
components of a system. Indicators can be quantitative or qualitative in
nature and are useful at many scales, both temporally and spatially. When tracked over
time, an indicator can provide information on trends in the condition of a system.
Perhaps the most well-known indicators are those describing the condition of the
U.S. economy, such as the Dow Jones Industrial Average. To capture the complexity of a
system, multiple relevant indicators can be aggregated into an "index." The Dow Jones
Industrial Average, for example, serves as a measure of the entire U.S. market, covering a
diverse mix of businesses in each market sector - financial services, technology, retail,
entertainment, and consumer goods (Figure 4).
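The same aggregation idea can be sketched for environmental data: individual indicators are first rescaled to a common 0-100 score and then combined into a single index value. The sketch below is a generic, hypothetical illustration; the equal weighting, the scoring ranges, and the example measurements are assumptions chosen for demonstration, not a published estuarine index.

```python
def score(value, worst, best):
    """Linearly rescale a raw measurement to a 0-100 score.

    `worst` and `best` are the raw values mapped to scores of 0 and 100;
    they are assumed reference points chosen only for this illustration.
    """
    span = best - worst
    scaled = (value - worst) / span * 100.0
    return max(0.0, min(100.0, scaled))

# Hypothetical raw measurements and scoring ranges for three indicators.
observations = {
    "dissolved_oxygen_mg_per_L": (6.5, 2.0, 8.0),    # (value, worst, best)
    "water_clarity_secchi_m":    (1.2, 0.2, 3.0),
    "chlorophyll_a_ug_per_L":    (12.0, 60.0, 5.0),  # lower chlorophyll scores higher
}

scores = {name: score(*args) for name, args in observations.items()}
composite_index = sum(scores.values()) / len(scores)  # equal weights assumed
print(scores)
print(round(composite_index, 1))
```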
To be useful, indicators must answer the questions being asked (see page 24) while being
grounded within a conceptual framework that conveys not only what is being measured,
but why and in what context. The Dow Jones, for instance, is an index within the
framework of the U.S. stock market. In general, the higher the value of the Dow Jones
index, the better the U.S. stock market is doing.
Following up on the management goals/objectives/questions developed in the previous section, this section focuses on the use and development of conceptual models
in indicator identification and development.
Figure 4. Example of a Common Economic Indicator - Dow Jones Industrial Average from 1975 to 2005 (weekly mean index data compiled from http://www.djindexes.com)
USE OF CONCEPTUAL MODELS IN INDICATOR DEVELOPMENT
Conceptual models interpret systems by organizing information on the structure and
interactions of the system into an easily understood and sometimes visual format, which
simplifies the process of identifying appropriate indicators. These models identify key
ecological compartments and linkages between those compartments. Within the
conceptual model, the various perturbations (Pressures) are put into context with system
ecology and potential responses. Several types of conceptual models can be used to
organize and identify environmental indicators. These models run the gamut from simple
text describing an ecological system to complex, multifaceted flow charts that detail
many of the compartmentalized aspects and interactions occurring within a particular
ecosystem (see Figure 5 for an example).
New York/New Jersey Harbor Estuary Program - Indicator Development
"There is no program that monitors habitat function directly. However, one indirect
way to determine whether habitats are functioning properly is to examine the
population sizes of organisms that those habitats support." (Steinberg, Suszkowski,
Clark, and Way, 2004)
Figure 5. Conceptual Model of Estuarine Ecosystem with Multiple Stressors and Responses
DEVELOPMENT OF CONCEPTUAL MODELS
Several different types of frameworks have been created for developing conceptual
models. One of the more prominent frameworks categorizes environmental indicators as (1) pressures and stressors that degrade ecological condition, (2) the state of ecological conditions, and (3) society's responses aimed at improving ecological condition. As seen in this categorization, environmental indicators can be used to measure ecological condition, but they may also be used to measure progress towards meeting goals, milestones, and objectives.
These indicators are often referred to as "programmatic indicators," measuring
implementation of actions, funding milestones, and changing laws, policies, and
regulations. The following section presents several frameworks that can be used to
organize environmental indicators, both programmatic and ecological, to monitor and
track estuarine health and restoration efforts. As noted previously, this manual focuses on
ecological indicators, but similar frameworks and processes apply to the development of
other types of indicators.
Pressure-State-Response (PSR) and Pressure-State-Response-Effect (PSR/E)
Frameworks
Used internationally and nationally, the PSR framework is a conceptual framework
developed by the OECD for environmental monitoring. The PSR framework (see
Figure 6) represents the associations among the pressures exerted by human activities on
the environment (Pressure); the changes in the quality and quantity of natural resources
(State); and the societal responses to these changes through environmental and other
policies (Response) (OECD, 1993).
Figure 6. The PSR Conceptual Model (OECD, 1993)
Pressure indicators are measurements of the pressures exerted on the environment by
human activity, whether direct (i.e., proximate pressures) or indirect (i.e., indirect
pressures). Examples of pressure indicators include emissions from cars, discharges from
municipal wastewater treatment plants, and runoff from agricultural operations. State
indicators describe the quality of the environment and the quality and quantity of natural
resources. State indicators generally are measurable quantities, such as water quality
parameters, concentrations of air or water toxicants, the extent of viable wetlands, or the
functionality or productivity of wetlands. Response indicators relate how society is
responding to environmental changes and concerns by protecting and restoring the
environment and preventing environmental damage. Societal responses may range from
economic incentives such as taxation and subsidies to enforcement with legislative and
management programs. The framework assumes that there is a causal relationship
between each of the components that links human activity to environmental impacts.
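One lightweight way to make a PSR conceptual model explicit, and to keep candidate indicators tied to the compartment they describe, is to record the assumed pressure-state-response linkages in a small data structure. The sketch below is a hypothetical illustration built around the nutrient/eutrophication example used later in this section; the indicator names are placeholders, not a prescribed list.

```python
# A minimal sketch of a PSR conceptual model as nested dictionaries.
# Each compartment lists candidate indicators; "links" records the assumed
# causal chain from pressures through states, and from responses back to pressures.
psr_model = {
    "pressure": {
        "point_source_nutrient_load": "Annual nitrogen load from permitted dischargers",
    },
    "state": {
        "chlorophyll_a": "Seasonal mean phytoplankton biomass",
        "bottom_do_minimum": "Summer bottom-water dissolved oxygen minimum",
    },
    "response": {
        "loading_limit": "More stringent permit limits on nutrient loading",
    },
    "links": [
        ("point_source_nutrient_load", "chlorophyll_a"),
        ("chlorophyll_a", "bottom_do_minimum"),
        ("loading_limit", "point_source_nutrient_load"),
    ],
}

def downstream_of(model, indicator):
    """Return the indicators assumed to respond, directly or indirectly, to `indicator`."""
    found, frontier = set(), [indicator]
    while frontier:
        current = frontier.pop()
        for src, dst in model["links"]:
            if src == current and dst not in found:
                found.add(dst)
                frontier.append(dst)
    return found

# Which indicators should move if the management response works?
# Contains point_source_nutrient_load, chlorophyll_a, and bottom_do_minimum (a set, so order may vary).
print(downstream_of(psr_model, "loading_limit"))
```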
Building on the existing PSR framework, the EPA Office of Policy, Planning and
Evaluation modified the PSR framework to include interactions among pressure, state,
and response indicators, called "effects" indicators (PSR/E) (EPA, 1995). The principles
of the PSR/E framework have been adopted by EPA's ORD, which focuses its indicator
research on the state and effects components of the PSR framework. ORD's indicators
are science-based, rather than policy-based, and the guidance document Evaluation
Guidelines for Ecological Indicators presents examples of three different types of
indicators (EPA, 2000b).
With regard to the PSR and PSR/E approaches, the models can be relatively simple,
focusing on only primary or secondary effects/interactions, or they may be more
complex, including many factors influencing and being impacted within a system. The
simpler the model, the more clearly defined the relationships among the PSR or PSR/E components.
The main drawback in using a simple model is that a component of the real ecosystem
that is not taken into account may play a critical role in how the ecosystem responds to or
is affected by pressures or response actions.
It is important that conceptual models be easily understandable by both scientists and
managers and that the models include enough information to make educated choices on
what might be used as an indicator. For example, nutrients are a crucial ingredient in the
biogeochemical functioning of an estuarine system. However, too much of a good thing,
in this case anthropogenic nutrient inputs, could drive the system toward eutrophication
with elevated biomass (organic material) and, eventually, lower bottom-water DO levels
or even hypoxic conditions. This is just one example of the interactions of pressures on the state of an estuarine system, but it conveys the simple idea that additional input of nutrients could lead to low DO. In this case, the annual point source nutrient load may be a useful indicator of the pressures on the system. The annual or seasonal phytoplankton biomass or DO minima would be an indicator of the state of the system. If the management response is to decrease point source loading, then all three might be useful in understanding the success of the action, both directly (nutrient loading) and indirectly through the effect on the system (biomass and DO).
Tillamook Bay Estuary Program (TEP) - State Indicators
"TEP made a conscious decision to focus on 'state' indicators. State indicators were selected because they best describe the quality of the environment, and integrates the effects of pressures and responses over time." (TEP, n.d.)
This example is presented in Figures 7 and 8 within the more formal PSR and PSR/E
frameworks. The primary difference between these frameworks is that the PSR/E
framework formalizes the effects of the response actions into the conceptual model.
Although it is not a specified component in the PSR framework, continued monitoring of
pressure or state variables/indicators is implicit and serves to provide an understanding of
the effect of management responses. In Figure 7, the management actions result in some
change in both pressures and state as signified by the returning arrows. In Figure 8, the
impact of these actions is specified as expected effects to both pressure and state
variables (bottom box). A more complex version using multiple variables would follow
the same process but would have many more interconnections between pressures, states,
responses, and effects. At some point, the model becomes less useful and it would be
preferable to use an ecological framework to describe the conceptual model, as discussed
in the next section.
Figure 7. Example of a PSR Conceptual Model for Nutrient Inputs and Aspects of Eutrophication.
Figure 8. Example of a PSR/E Conceptual Model for Nutrient Inputs and Aspects of Eutrophication.
Note: Figures 7 and 8 were developed for this manual using example indicators.
Ecological Framework
Another environmental indicator framework that is related to the PSR/E framework is
presented in the NRC's guidance document Ecological Indicators for the Nation (NRC,
2000). The NRC proposes national indicators of ecological condition that are influenced
by multiple stressors. These indicators may be used to estimate the ability of a nation's
ecosystems to continue to provide goods (e.g., food and building materials) and services
(e.g., flood protection and recreation) for the survival of the society. These indicators fall
into three categories:
1. Indicators of ecosystem extent and status;
2. Indicators of ecological capital;
3. Indicators of ecosystem functioning.
Indicators of ecosystem extent and status include measurements of land cover and land
use. Indicators of ecological capital measure the biotic and abiotic natural capital, or raw
materials, of the nation. Biotic raw materials include the number and distribution of
native species, and the number of introduced or exotic and invasive species, while abiotic
raw materials include soil and nutrients. Indicators of ecosystem functioning measure
ecosystem processes or end results of processes, such as productivity and nutrient-use
efficiency and nutrient balance. The interactions between raw materials and the
ecosystem processes are initially described in a conceptual model of the estuarine
ecosystem in order to develop relevant indicators to model the system.
In order to develop an appropriate environmental indicator, it must be directly linked to
the cause, effect, or action it is tracking. Ideally, indicator development should be
preceded by the development of an assessment question. An example assessment
question relevant to the objective of this report is "What percent of the estuary is
hypoxic?" The next critical step is the development of a framework or model of the
system relevant to the assessment question. In the example, the estuary may be exhibiting
hypoxic conditions due to oxygen depletion following excessive algal growth, loss of seagrass, industrial
pollutant discharges, invasive species changing ecosystem dynamics, or nutrient
overloading.
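For the example assessment question "What percent of the estuary is hypoxic?", the calculation itself is straightforward once a DO threshold and spatially representative samples are in hand. The sketch below is a hypothetical, area-weighted illustration; the 2.0 mg/L threshold, the segment areas, and the DO values are assumptions for demonstration only.

```python
# Hypothetical estuary segments: (area in square kilometers, bottom DO in mg/L).
segments = [
    (4.0, 6.8),
    (3.5, 5.1),
    (2.0, 1.6),   # below the assumed hypoxia threshold
    (1.5, 1.9),   # below the assumed hypoxia threshold
    (5.0, 7.4),
]
hypoxia_threshold = 2.0  # assumed threshold, mg/L

total_area = sum(area for area, _ in segments)
hypoxic_area = sum(area for area, do in segments if do < hypoxia_threshold)
percent_hypoxic = hypoxic_area / total_area * 100.0

print(f"{percent_hypoxic:.1f}% of the estuary area is hypoxic")  # 21.9% for these example values
```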
Ideally, a conceptual model should be developed based on the current understanding of the structure and function of the system in question (an estuarine ecosystem example is provided in Figure 5). The model considers
temporal and spatial dynamics, evaluates recuperative capacities of the resource to
combat stressors, and identifies where stressors are introduced to the system and may
potentially impact resources. The model should present a thorough understanding of the
inputs and outputs of the system that will lead to a selection of indicators in which to
perform the research. Common mistakes encountered while developing indicators include
selecting indicators that are not linked to the assessment questions, developing indicators
prior to posing an assessment question, and settling for indicators based on the currently
available data.
INDICATOR SPECIFICATION
[Process diagram: Conceptual Model Development → Indicator Specification → Monitoring Program Development]
Once the management goals/questions have been defined and at least one conceptual model has been developed, the process focuses on selecting appropriate indicators for addressing each question and model compartment. These indicators can be either quantitative measures (e.g., DO levels) or qualitative measures (e.g., aesthetics; see the Sneaker Index callout box below). Indicators can also be direct measurement indicators, index indicators, or complex multi-metric indicators. Direct measurement indicators, such as DO or nutrient concentrations, directly correlate the measurements of the indicator (DO) to the effect on the environment (hypoxia). Index indicators (multiple indicators), such as the index of benthic condition, integrate measures of community composition and diversity and discriminate between impacted and unimpacted areas. Complex, multi-metric indicators are a composite index that integrates various structural and functional attributes of an ecosystem and provides an overall assessment of ecosystem condition (EPA, 2000b). An example of a multi-metric indicator is the characterization of a stream fish assemblage that measures the effects of a variety of stressors across different time scales and levels of ecological organization and evaluates the impact of fish consumption by the general public. The development of this type of indicator is based on the multi-metric Index of Biotic Integrity originally developed by Karr (Karr, 1981; Karr et al., 1986). Therefore, each of these indicator types varies by the type of information and extent of analysis involved in its development.
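To make the distinction between direct measures and multi-metric indicators concrete, the sketch below scores a few hypothetical fish-assemblage metrics against assumed breakpoints and sums them into a composite score, loosely in the spirit of an Index of Biotic Integrity. The metric names, breakpoints, and values are illustrative assumptions, not the published IBI metrics.

```python
def score_metric(value, low_break, high_break, higher_is_better=True):
    """Assign the 1/3/5-style score used by many multi-metric indices.

    `low_break` and `high_break` are assumed breakpoints separating poor,
    fair, and good condition for this illustration.
    """
    if not higher_is_better:
        # Flip the scale so the same comparisons apply when lower values are better.
        value = -value
        low_break, high_break = -high_break, -low_break
    if value >= high_break:
        return 5
    if value >= low_break:
        return 3
    return 1

# Hypothetical metrics for one sampling site.
metrics = {
    "native_species_richness": score_metric(14, 8, 16),
    "percent_tolerant_individuals": score_metric(35, 20, 45, higher_is_better=False),
    "percent_top_carnivores": score_metric(6, 5, 10),
}

composite_score = sum(metrics.values())  # each metric scores 3 here, so the composite is 9
print(metrics, "composite:", composite_score)
```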
"The symbolic value of an indicator may outweigh its value as a literal measure." (Cobb and Rixford, 1998)
Sneaker Index
"The name Sneaker Index was originally coined by Sen. C. Bernard
(Bernie) Fowler, around 1988. Sen. Fowler was deeply concerned about
the future of Maryland's Patuxent River. To evaluate the condition of
the river water, he began to measure how deep he could wade into the
water and still see his sneakers; thus came the name 'Sneaker Index'.
People understood this form of assessment very easily. Consequently,
the public accepted it." (Price and Huerta, 2001)
A range of possible indicators stemming from eutrophication issues is presented in
Figure 9. In this case, the input of nutrients to a system can have a variety of impacts that
range from primary, to secondary, to even tertiary symptoms. Each level of symptoms in
Figure 9 carries with it additional effects from other stressors. These indicators integrate
impacts not only across multiple stressors, but often across wide spatial areas and over time
due to cumulative effects. A number of factors must be considered for the selection of
indicators suitable for each area/region of interest (parameters and metrics).
[Figure 9 depicts external nutrient inputs (nitrogen and phosphorus) leading to primary symptoms (decreased light availability, extreme chlorophyll a concentrations, problematic epiphytic and macroalgal growth, algal dominance changes, increased organic decomposition), secondary symptoms (loss of SAV, harmful algal blooms, low dissolved oxygen and biological stress), and potential effects and use impairments (loss of habitat, fish kills, algal toxins, offensive odours, and impacts on commercial and recreational fishing, human health, swimming, aesthetic values, and tourism).]
Figure 9. Example of Multiple Levels of Indicators Associated with Eutrophication and the Inputs of Nutrients (Bricker, Ferreira, and Simas, 2003)
Great Lakes Program SOLEC 1996 - Science-Based Indicators
At SOLEC 1996, constituents decided to create "a basin-wide, systematic framework
using science-based indicators." "Small working groups of experts were assembled and
asked to both 'extract' indicators from Great Lakes studies pertinent to their topic, and
to identify new indicators to fill crucial gaps. According to the interviewees, breaking
the indicator development process into manageable topic areas, and assigning each
piece to a separate working group, made for a fairly efficient process." (Pidot, 2003)
To determine whether an indicator provides consistent information for evaluating both
short- and long-term conditions and supporting management decisions, EPA has
established guidelines using a four-phase approach for evaluating potential and
acknowledged indicators (EPA, 2000b). The four-phase criteria are as follows:
1. Conceptual Relevance or Soundness
Is the indicator relevant to the assessment question and to the resource at risk?
The choice of indicators is dependent upon initial questions and conceptual
models for the relevant area.
2. Feasibility of Implementation (Current and Future)
Are the methods for long-term sampling and measuring the environmental
variables technically feasible, appropriate, and efficient for use in a monitoring
program? Evaluation of the indicators must focus on both the short- and long-term
feasibility of monitoring, the associated costs, and the complexity of analysis and
data interpretation.
3. Response Variability
Are human errors of measurement and natural variability over time and space
sufficiently understood and documented? Indicators will likely integrate both
anthropogenic and natural factors - can the spatial and temporal variability of
each factor be determined (regional vs. local, short-term or long-term, etc.)?
4. Interpretation and Utility
Will the indicator convey information on resource conditions that is meaningful to
environmental decision-makers? In addition, is the indicator currently monitored
or likely to be easily monitored in the future?
These phases describe an idealized progression for indicator development that flows from
fundamental concepts, to methodology, to examination of data from pilot or monitoring
studies, and finally to consideration of how the indicator serves the program objectives.
The guidelines are presented as sequential steps that can be used iteratively to refine the
selected indicator.
Both the NRC and EPA's Environmental Monitoring and Assessment Program (EMAP)
have put forth their own sets of criteria for evaluating the appropriateness of indicators
for environmental systems (EMAP, 1994; NRC, 2000). Table 1 compares indicator
evaluation criteria recommended by these two programs with those suggested in EPA
(2000b) guidelines. Although some of the individual criteria vary among the three sets of guidelines, all of them fall within the four phases described above, and several criteria overlap across programs. The essential elements for
evaluating the suitability of an indicator are whether the indicator is measurable using
available technology, is relevant and responds to the assessment question, and provides
information for management decision-making. Additionally, the best indicators are able
to quantify information so its significance is more readily apparent and simplify
information about complex phenomena to improve communication between researchers,
managers, and ultimately the public.
Long Island Sound Study - Indicator Development
Indicator development began with a review of monitoring programs already
collecting data in the Long Island Sound region. First, developers exclusively
looked at existing programs and did not consider which information might be
most useful to managers or scientists. LISS also reviewed the work of other
groups that had completed indicator-based State of the Environment Reports to
gain a sense of what choices were made by others with similar projects. A list of
approximately 100 potential indicators was created from the review. Indicators
were selected from this list based on the extent and quality of data immediately
available, as well as their relevance to Long Island Sound management
objectives. (Pidot, 2003)
Tillamook BayIndicator Selection Criteria
In addition to the selection criteria noted above, Tillamook Bay applied the
following criteria:
1. Correlated to environmental conditions and/or responses
2. Representative of system-wide conditions
3. Understandable and relevant to audience
a. Directly applicable to resource management
b. Linked to public concern or interest
4. "Monitorable"
a. Quantifiable
b. Repeatable
c. Affordable
d. Practical
(TEP, n.d.)
Table 1. Examples of Various Indicator Evaluation Guidelines¹
(Criteria in each set of guidelines fall under four general criteria groups: conceptual relevance or soundness; feasibility of implementation (current and future); response variability; and interpretation and utility.)
EPA (2000b): Relevance to the assessment; relevance to ecological function; data collection methods; logistics; information management; quality assurance; monetary costs; estimation of measurement error; temporal variability within the field season; temporal variability across years; spatial variability; discriminatory ability; data quality objectives; assessment thresholds; linkage to management action.
NRC (2000): General importance; conceptual basis; necessary skills; data archiving; cost, benefits, and cost-effectiveness; data requirements; temporal and spatial scales of applicability; robustness; statistical properties; data quality; reliability; international compatibility.
EMAP (1994): Unambiguously interpretable; available method; minimal environmental impact; amenable to synoptic survey; cost effective; index period stability; high signal-to-noise ratio; ecologically responsive; nominal-subnominal criteria; retrospective; anticipatory; historical record; new information.
¹In the original table, criteria that are common to more than one program are italicized.
CONCEPTUAL RELEVANCE
The indicator must provide information that is relevant to societal concerns about
ecological condition. The indicator should clearly pertain to one or more identified
assessment questions. These, in turn, should be germane to a management decision and
clearly relate to ecological components or processes deemed important in ecological
condition. Often, the selection of a relevant indicator is obvious from the assessment
question and from professional judgment. However, a conceptual model can be helpful to
demonstrate and ensure an indicator's ecological relevance, particularly if the indicator
measurement is a surrogate for measurement of the valued resource. This phase of
indicator evaluation does not require field activities or data analysis. Later in the
process, however, information may come to light that necessitates re-evaluation of the
conceptual relevance, and possibly indicator modification or replacement. Likewise, new
information may lead to a refinement of the assessment question. (EPA, 2000b)
The first step in indicator identification and development flows directly from the
appropriate conceptual models identified for the specific estuary, ecosystem, or regional
area of concern. These models may be specific to a particular segment of the ecosystem
or more detailed, including multiple trophic levels and habitats. The suite of possible
indicators also covers a wide range from parameter-specific to integrations of multiple
metrics/parameters. In all cases, however, the indicator needs to be directly relevant to
the resources at risk or the management questions being addressed. A compendium of
indicators is included in Appendix B. This list, although quite comprehensive, is not
necessarily complete; additional indicators may be valid in a particular system.
The strategies for selecting indicators based on conceptual models are as varied as the
programs themselves, but most focus on some form of brainstorming. This activity can
occur internally with NEP or other groups, externally utilizing the experience and
knowledge of area scientists who are brought together as a Technical Advisory
Committee (TAC) or similar types of advisory groups, or publicly with a wide range of
stakeholders participating. Each level of involvement has benefits and drawbacks.
Internal staff discussions can be focused, expedient, and driven by knowledge of the next
three steps in the process. Expanding discussions to include a TAC will likely extend the
timeframe of the process; however, it will also expand the knowledge base and may
provide a more comprehensive list of indicators. Public workshops are certain to take the
most time, but in addition to the benefit of likely producing a more comprehensive list of
indicators that will be easily communicated, workshops also provide a mechanism of
public education and a buy-in to the process.
FEASIBILITY OF IMPLEMENTATION
Adapting an indicator for use in a large or long-term monitoring program must be
feasible and practical. Methods, logistics, cost, and other issues of implementation
should be evaluated before routine data collection begins. Sampling, processing and
analytical methods should be documented for all measurements that comprise the
indicator. The logistics and costs associated with training, travel, equipment and field
and laboratory work should be evaluated and plans for information management and
quality assurance developed. (EPA, 2000b)
The factors that determine the feasibility of indicator implementation fall into two general
categories: available infrastructure/expertise and costs. The availability of the infrastructure necessary for sample/data collection, analysis, and management is directly related to costs, but where that infrastructure already exists, such costs likely have been covered by previous budgets. If existing
monitoring program infrastructure is not present, then the feasibility of implementing a
wide variety of indicators is limited. It is expected that most systems will have a
modicum of ongoing monitoring activities and that the current system in place not only
provides data relevant to some of the selected indicators, but also has the capacity to be
modified to implement additional monitoring efforts. Again, the cost/benefits of each
indicator will need to be evaluated based on available funding sources, both current and
with an eye to the future for any long-term metrics.
RESPONSE VARIABILITY
It is essential to understand the components of variability in indicator results to
distinguish extraneous factors from a true environmental signal. Total variability
includes both measurement error introduced during field and laboratory activities and
natural variation, which includes influences of stressors. Natural variability can include
temporal (within the field season and across years) and spatial (across sites)
components. Depending on the context of the assessment question, some of these sources
must be isolated and quantified in order to interpret indicator responses correctly. It may
not be necessary or appropriate to address all components of natural variability.
Ultimately, an indicator must exhibit significantly different responses at distinct points
along a condition gradient. If an indicator is composed of multiple measurements,
variability should be evaluated for each measurement as well as for the resulting
indicator. (EPA, 2000b)
There are two primary sources of variability in environmental data: analytical and
natural. Although it is important to understand the variability inherent in specific
analyses/measurements, that variability is not described herein. EPA (2000b) provides a
detailed discussion of analytical variability and its context in indicator development. For
this manual, it is expected that the variability from most methods of data/sample
collection and analysis can be minimized or at least quantified by following explicit
quality assurance/quality control (QA/QC) protocols. To this end, it is critical to have a
QA/QC plan in place for any monitoring activity. Not only will it allow for assessment of
field and laboratory variability, but the data quality objectives outlined in a typical
QA/QC plan will also be useful during subsequent interpretation activities.
Natural variability occurs over many temporal and spatial scales, and a comprehension of
natural variability is crucial to both understanding the system and selecting appropriate
indicators. Ecosystem characteristics vary over time scales from hourly to interannual;
selection of the optimal time scale is important in developing monitoring approaches and
interpreting the data.
In most cases, the spatial scale that is of most concern to managers is their local area, but
this may be as small as a localized area within an embayment, an entire embayment, a
larger bay, or a large regional coastal area. Not only is the scale of the area of concern
important, but important factors influencing localized areas are also often regional (e.g.,
coastal currents), hemispheric (e.g., North Atlantic Oscillation, El Nino/Southern
Oscillation), or even global (e.g., climate change) in scale.
In these contexts, the expectation is that the natural variability over time and space is
such that an anthropogenic signal can be discerned. The natural variability either has to
be relatively small or well-defined in comparison to expected changes due to human
pressures. To this end, when selecting indicators to track ecosystem health and response
to management actions, numerous questions should be considered concerning the
temporal and spatial scale variability of environmental data. For example:
1. Are there natural seasonal patterns in the data?
2. What is the most representative time period from which to measure or average
data?
3. Is the local expression of the indicator indicative of localized impacts or driven by
larger regional forces?
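One simple way to approach the first two questions is to remove the expected seasonal pattern before looking for change: compute a monthly climatology from the available record and examine the anomalies that remain. The sketch below is a minimal illustration using invented monthly DO values; a real analysis would use the program's own record and more rigorous statistics.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (year, month, DO mg/L) observations spanning three years.
records = [
    (2003, 7, 4.8), (2003, 8, 4.2), (2004, 7, 4.5), (2004, 8, 3.9),
    (2005, 7, 4.1), (2005, 8, 3.4),
    (2003, 1, 9.5), (2004, 1, 9.3), (2005, 1, 9.4),
]

# Monthly climatology: the average DO observed in each calendar month.
by_month = defaultdict(list)
for year, month, do in records:
    by_month[month].append(do)
climatology = {month: mean(values) for month, values in by_month.items()}

# Anomalies: how far each observation departs from its month's typical value.
anomalies = [(year, month, round(do - climatology[month], 2)) for year, month, do in records]
for year, month, anomaly in anomalies:
    print(year, month, anomaly)
```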
INTERPRETATION AND UTILITY
A useful ecological indicator must produce results that are clearly understood and
accepted by scientists, policy makers, and the public. The statistical limitations of the
indicator's performance should be documented. A range of values should be established
that defines ecological condition as acceptable, marginal, and unacceptable in relation
to indicator results. Finally, the presentation of indicator results should highlight their
relevance for specific management decisions and public acceptability. (EPA, 2000b)
In this last step for indicator evaluation, the expected needs that the indicator must fulfill
become a bit more diverse (see Table 1). The main need is for an a priori understanding
or establishment of a threshold level or range of values that is considered 'good' or 'bad'
with which to evaluate current conditions or trends based on a particular indicator. In the
best-case scenario, this level or range of values would be based on a long-term data set, whether baseline or historical.
In the absence of data specific to the system of interest, comparisons to other systems
may suffice. These comparative systems could be impaired or pristine or likely
somewhere in between, but should have enough similarities to be germane to the system
of interest. Best professional judgment can also be a valid source when no other data are
available. Regulatory levels or management goals could also serve as a threshold for
many quantitative indicators.
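Once such a threshold or range of values is agreed upon, applying it is mechanical. The sketch below classifies indicator values as acceptable, marginal, or unacceptable against assumed example cutoffs; the cutoff values are illustrative placeholders, not regulatory criteria or recommended thresholds.

```python
def classify(value, marginal_cutoff, acceptable_cutoff, higher_is_better=True):
    """Classify an indicator value against two assumed condition cutoffs."""
    if not higher_is_better:
        # Flip the scale so the same comparisons apply when lower values are better.
        value, marginal_cutoff, acceptable_cutoff = -value, -marginal_cutoff, -acceptable_cutoff
    if value >= acceptable_cutoff:
        return "acceptable"
    if value >= marginal_cutoff:
        return "marginal"
    return "unacceptable"

# Hypothetical cutoffs: bottom DO (higher is better) and chlorophyll a (lower is better).
print(classify(6.1, marginal_cutoff=3.0, acceptable_cutoff=5.0))                             # acceptable
print(classify(4.0, marginal_cutoff=3.0, acceptable_cutoff=5.0))                             # marginal
print(classify(15.0, marginal_cutoff=20.0, acceptable_cutoff=10.0, higher_is_better=False))  # marginal
```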
The selection of indicators will always be site-specific, but the process by which
indicators are selected is nearly always the same and more or less follows the four steps
described above.
Table 2 lists a sampling of potential indicators and their relevance, feasibility, expected
variability, and interpretation utility. Although the details in the table are limited, these
examples provide a starting point and model for this approach.
For example, DO is a key indicator and integrator of water quality in coastal waters. As a
basic necessity for aquatic life, DO levels directly affect ecosystem health. Diaz and
Rosenberg (1995) state that no other environmental variable of such ecological
importance to coastal marine ecosystems has changed so drastically in such a short period
of time as DO. These authors argue that while hypoxic environments have existed
through geological time, their occurrence in shallow coastal and estuarine areas appears
to be increasing and the cause seems most likely to be accelerated by human activities
(Nixon, 1995; Bricker et al., 1999). Thus, DO is obviously relevant to understanding
human impacts on our coastal ecosystems.
The measurement of DO is straightforward for both in situ sensors and water samples
(Winkler titrations), and the methods are quite accurate. DO is typically measured as part
of coastal water quality monitoring programs and is relatively inexpensive in comparison
to other data-gathering efforts. Historic data are often available, current monitoring
programs are normally measuring DO, and data will continue to be easily and
economically obtained into the future. All these factors indicate that DO is a very feasible
indicator.
As mentioned, the analytical variability in DO analysis is tightly constrained, as methods
are quite accurate and precise. The amount of DO contained in marine waters at
saturation is a function of physical, chemical, and biological conditions. Cold waters hold
more DO than warm waters at a given salinity. Seawater at equilibrium at a given
temperature contains substantially less DO than freshwater. Thus, DO concentrations
naturally follow a seasonal pattern of winter maxima and summer minima that is directly
related to temperature but is also influenced by biological processes. Because this natural
variability in DO concentrations is well characterized, and because historic and present
monitoring programs further describe these trends and provide a baseline, an anthropogenic
signal in this indicator is likely to be observable.
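One hedged way to look for such a signal, assuming a long-term monthly DO record is available, is to compute a monthly climatology and examine departures (anomalies) from it. The sketch below uses only the Python standard library; the year, month, and DO values are invented for illustration.

# Sketch: remove the natural seasonal cycle from a DO record so that departures
# (anomalies) from the long-term monthly baseline become easier to see.
from collections import defaultdict

records = [
    (2001, 7, 4.8), (2002, 7, 4.6), (2003, 7, 4.7),   # historical July values (invented)
    (2001, 1, 9.5), (2002, 1, 9.6), (2003, 1, 9.4),   # historical January values (invented)
    (2004, 7, 3.1),                                    # a recent July value to evaluate
]

# Build the monthly climatology (baseline) from the historical portion of the record.
by_month = defaultdict(list)
for year, month, do in records[:-1]:
    by_month[month].append(do)
climatology = {m: sum(v) / len(v) for m, v in by_month.items()}

# Anomaly of the most recent observation relative to its monthly baseline.
year, month, do = records[-1]
anomaly = do - climatology[month]
print(f"{year}-{month:02d}: DO {do} mg/L, anomaly {anomaly:+.2f} mg/L vs. climatology")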
Biological production and utilization of DO in coastal waters has a well-known
theoretical relationship to nutrient supplies. Increased nutrient supplies often lead to
increased photosynthetic production of organic matter by phytoplankton or other algae.
This increase in production often results in super-saturated DO levels in the upper water
column. Alternatively, a dominance of heterotrophic activity, especially microbial
respiration, can lead to greatly under-saturated conditions. Highly productive waters may
experience super-saturated conditions during the day and under-saturated conditions at
night, especially just before sunrise as respiration has been occurring for maximum
duration.
Table 2. Sampling of Indicators and their Respective Aspects under the Four Criteria: Relevance, Feasibility, Variability, and Utility

Indicator: Nutrient loading
Relevance: Point and non-point source inputs are one of the primary factors in eutrophication.
Feasibility: Point sources are required to measure nutrients by permits, and most monitoring programs include these relatively inexpensive measures.
Variability: Analytical variability is minimal and known. The inputs are also well-constrained (large natural variability in ambient waters, but not loading).
Utility: One of the responses of management is to set loading limits. Thus, baseline and post-action changes can be measured and changes in the ambient waters measured.

Indicator: Dissolved oxygen
Relevance: Integrator of many water quality processes and directly relevant to marine species (and fishermen).
Feasibility: Easily measured and among the normal suite of measurements.
Variability: Analytical variability is minimal and known. Natural variability can be large, but the seasonality of the signal is typically known and changes in seasonal DO minima could be detected.
Utility: Given the understanding of this parameter, interpretation of the data is relatively straightforward (though ancillary information on physical current structure and bathymetry is very helpful).

Indicator: Frequency of toxic/nuisance phytoplankton blooms
Relevance: Public health and aesthetic issue. Shellfish closures are also a monetary incentive for monitoring these species.
Feasibility: Often part of state monitoring programs (e.g., Maine Department of Marine Resources). Requires a local researcher with experience; otherwise can be very expensive.
Variability: Little analytical variability, assuming counts and identifications are made by experienced personnel. Natural variability can be large, but often well-known due to historical data and shellfish closures or other public health records.
Utility: Frequency of these blooms has increased; it is unclear from the literature whether this is due to increased monitoring effort or a result of anthropogenic impacts.
Indicator: Acres of existing seagrass and habitat restored
Relevance: Importance to fisheries and sensitive to nutrients. Integrator of eutrophication processes (decreased light, increased epiphyte growth) and other anthropogenic pressures (trawling, development, increased sedimentation, etc.).
Feasibility: Established direct (divers) and indirect (in situ instruments and remote sensing) methods exist for mapping the density and extent of seagrass beds. Can be expensive, but can be conducted on a cyclical basis to minimize annual costs.
Variability: Increased variability with the indirect measurements that quantify over a larger range, but this can be minimized by ground-truthing sampling. Interannual variability is a direct indicator of habitat loss or gain.
Utility: Necessary for establishing baseline conditions and for monitoring the effectiveness of restoration programs. Once a baseline distribution map is available, it can be revisited at 3- to 5-year intervals to gauge changes in this valuable habitat resource.

Indicator: Benthic indices (health, abundance, taxonomic identification and diversity)
Relevance: Benthos is an integral part of the ecosystem and tends to be the repository of much of the organic material and contaminants from anthropogenic inputs. Need to develop linkages between stressors and benthic impacts.
Feasibility: As with the phytoplankton, this type of indicator can be very expensive if not part of an ongoing monitoring plan. Unlike plankton, the benthos could be monitored less frequently if appropriate and still provide a clear indication of improvement or degradation.
Variability: The benthos is a highly variable environment, and this is reflected in the data. This variability can be minimized by implementing a QC program, by understanding the relative temporal and spatial variability across the system, and by tailoring the sampling schema to capture only the specific time and area of interest to both focus the effort and minimize these sources of variability.
Utility: Many types of indices are listed in the literature. The more effort taken in selecting an appropriate index, the more useful the results will be. Critical in establishing 'baseline' conditions and for managers tasked with both assessing ecological condition and mitigating impacts caused by anthropogenic inputs.
Indicator: Fish/shellfish consumption warnings
Relevance: Designed to protect public health, usually using a risk-based approach to contaminant levels. Directly impacts the public's perception of water quality and toxics.
Feasibility: Typically issued by a state agency; the monitoring, analysis, and assessment of risk are all conducted by the state. Data publicly available (historic and into the future).
Variability: Primary sources of variability are controlled or at least taken into account in the risk-based system. State-to-state variability may exist, but relative numbers will likely be comparable over time.
Utility: One of the end-of-the-line type indicators; if warnings increase or decrease, a clear message is understood by the public. The more localized the range of the animals, the more pertinent to individual estuaries or locations.
Another factor that affects DO concentration in estuarine and coastal waters is mixing (or
lack thereof). Deeper waters, where vertical density differences exist (especially sub-
pycnocline waters), may become hypoxic during the summer when DO solubility is
lowest and ample supplies of labile organic carbon are available (due to sinking of
senescent phytoplankton) to support microbial respiration and benthic respiration in the
bottom waters. DO utilization in deeper stratified waters may outpace DO replenishment
through transport of atmospheric DO and mixing and any potential net gains of DO from
photosynthesis. DO concentration in coastal waters is a dynamic property that varies
spatially and temporally, depending on physical, seasonal, biotic, and anthropogenic
influences. Thus, the foundation for interpreting the DO indicator is sound and readily
available. Not surprisingly, DO is one of the most widespread indicators in use for water
quality objectives.
-------
MONITORING PLAN DEVELOPMENT AND MODIFICATION
The development of monitoring plans has been discussed in detail in
other guidance manuals (EPA, 1992). This section highlights the steps
discussed elsewhere, describes how monitoring activities fit into the
indicator paradigm, and focuses on how ongoing monitoring programs
may need to be modified to better address indicator program needs.
EPA's Monitoring Guidance for the National Estuary Program (EPA,
1992) specifies five steps for designing a monitoring program
(Figure 10):
1. Develop monitoring objectives and performance criteria
2. Establish testable hypotheses and select statistical methods
3. Select analytical methods and alternative sampling designs
4. Evaluate expected monitoring program performance
5. Design and implement a data management plan
The first two steps are somewhat analogous to the processes outlined earlier in this
manual for indicator development. The development of management goals for indicators
and the indicators themselves can be used as the monitoring objectives and performance
criteria for a monitoring program (Step 1). The conceptual models are in essence the
basis for formulating testable hypotheses (Step 2). The selection of methods and
sampling designs will be driven by available equipment/expertise, regulatory
requirements, location of sensitive areas, and local geomorphology, to name a few factors
(Step 3). Programmatic indicators will be critical in evaluating monitoring program
performance (Step 4). The design and implementation of a data management plan
(Step 5) is a key part of any monitoring program, but with regard to indicators, the only
connection is the need for the data management schema to be able to record and track
data associated with indicators and their calculation.
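As a sketch of what recording and tracking indicator data might look like in practice, the record layout below is an illustrative assumption rather than a schema specified in EPA's guidance; the field names (such as qa_flag) and the sample values are hypothetical.

# Sketch of a minimal record layout for tracking indicator data and calculations.
# Field names are illustrative assumptions, not a schema from EPA (1992).
from dataclasses import dataclass
from datetime import date

@dataclass
class IndicatorRecord:
    indicator: str        # e.g., "dissolved oxygen"
    station: str          # monitoring station identifier
    sample_date: date
    value: float
    units: str
    method: str           # analytical method, kept for later comparability checks
    qa_flag: str = "ok"   # quality-assurance status

record = IndicatorRecord("dissolved oxygen", "STATION-01", date(2005, 7, 15),
                         4.2, "mg/L", "in situ sensor")
print(record)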
Many sources of information for developing a monitoring plan from scratch are available,
such as EPA's 1992 guidance document and Managing Troubled Waters (NRC, 1990).
These and other documents lay out the objectives, approach, and detailed examples for
monitoring program development. Any new program should take into account current
and potential future indicators and include measurements that are both directly and
indirectly relevant to the indicators. Not only should the parameters included as part of
specific indicators be measured, but also ancillary information pertinent to understanding
the conceptual model and information necessary for interpreting trends in the indicators.
[Figure 10 depicts the monitoring design cycle, from developing and refining monitoring objectives through evaluating and assessing program performance and communicating monitoring results to redirect the management program.]
Figure 10. Five Steps in Designing a Monitoring Program (EPA, 1992)
Rather than revisit the steps involved with monitoring plan development, this section
focuses on utilizing data from ongoing long-term programs and adapting current
monitoring programs as necessary to fit the proposed indicator paradigm. Most groups
looking at indicators begin the process by focusing on parameters that are already being
monitored. What also needs to occur is a reevaluation of the monitoring plans to make
sure the data being collected on the selected indicators are sufficient to answer the
question. If they are not, programs could end up selecting indicators that will not address
the scientific needs.
The expectation is that there is an existing, clearly defined, long-term monitoring
program(s) in place in the area of interest. The first step is to list what variables are
currently monitored and identify where, when, and how often they are monitored. Does
the list of variables and the spatial and temporal extent of the sampling provide enough
information and resolution to feasibly characterize an indicator(s)? If so, move on to the
next indicator of choice and run through the same process. If not, decide whether the
indicator warrants the cost of enhancing the monitoring program to make the additional
measurements needed. At this point in the process, the scientific relevancy and utility of
the indicator has already been established, but if the measurements are not made in the
existing monitoring program, there may be limited historical data with which to compare.
This lack of data would diminish the overall worth of the indicator in question. If,
however, it was still deemed a scientifically necessary component, then the decision
comes down to relevance versus costs. Modification of the indicator may be a viable and
less costly approach when long-term data sets are available but do not include the specific
measurements the indicator requires.
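A hedged sketch of this first step, comparing what an existing program already monitors against what a candidate indicator requires, is shown below; the parameter names, station counts, and sampling frequencies are assumptions for illustration.

# Sketch: check whether an existing monitoring program already covers the
# variables and sampling frequency an indicator requires.
# Parameter names and frequencies are illustrative assumptions.

existing_program = {
    "dissolved oxygen": {"frequency_per_year": 12, "stations": 15},
    "nitrate":          {"frequency_per_year": 12, "stations": 15},
    "chlorophyll a":    {"frequency_per_year": 6,  "stations": 8},
}

indicator_needs = {
    "dissolved oxygen":  12,   # required samples per year
    "chlorophyll a":     12,
    "light attenuation": 6,
}

for parameter, required in indicator_needs.items():
    current = existing_program.get(parameter)
    if current is None:
        print(f"{parameter}: not monitored - new measurements (and funding) needed")
    elif current["frequency_per_year"] < required:
        print(f"{parameter}: monitored, but only {current['frequency_per_year']}x/yr "
              f"(indicator needs {required}x/yr)")
    else:
        print(f"{parameter}: existing monitoring is sufficient")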
In many cases, there will be multiple monitoring programs from which to draw
information for indicators. This is especially true for the development of regional
indicators. Coordinating data and efforts across various monitoring
programs not only provides a regional context for data and indicators, but also may
provide significant cost savings to the agencies or groups currently conducting the
monitoring. The steps are similar to the
approach for an individual program. The first
step is to obtain a list of what is presently
monitored by each program. The next is to
ensure that comparable methods have been
used and that the units are standardized before
the data are combined or compared. Whether
comparing current data to historical data sets
or one monitoring program's data to another, it is necessary to be aware of incongruent
data sets. It may be possible to rectify data sets after the fact by conducting
interlaboratory comparisons. This is recommended only in cases where different, yet
valid, methods have been used. Interlaboratory comparisons are certainly recommended
for ongoing monitoring programs to ensure comparability into the future (see the
SCCWRP callout box below).
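A minimal sketch of the unit-standardization step, assuming two hypothetical programs report dissolved oxygen in different units (mg/L and mL/L), is shown below. The conversion factor of roughly 1.42903 mg per mL of oxygen gas is a standard physical constant; the station names and values are invented.

# Sketch: harmonize units from two monitoring programs before combining the data.
# Program A reports DO in mg/L; Program B reports DO in mL/L.
# 1 mL/L of O2 is approximately 1.42903 mg/L; station names and values are invented.

ML_PER_L_TO_MG_PER_L = 1.42903

program_a = [("A-01", 6.5, "mg/L"), ("A-02", 4.1, "mg/L")]
program_b = [("B-07", 4.2, "mL/L"), ("B-09", 3.0, "mL/L")]

def to_mg_per_l(value, units):
    if units == "mg/L":
        return value
    if units == "mL/L":
        return value * ML_PER_L_TO_MG_PER_L
    raise ValueError(f"Unrecognized units: {units}")

combined = [(station, round(to_mg_per_l(value, units), 2), "mg/L")
            for station, value, units in program_a + program_b]
print(combined)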
Long Island Sound Study: Data Comparison
Two issues arose once the monitoring data were collected for assessment:
(1) the monitoring protocols of New York and Connecticut were not consistent,
and (2) information was needed on a watershed basis but collected by town and
zip code. (Pidot, 2003)
At times, little thought is given to statistical design during the development of monitoring
programs. This is often because there is a specific localized focus or interest. For
example, water quality monitoring can focus on an outfall for permit compliance, or
seagrass monitoring can focus on a specific resource location rather than more random coverage
encompassing areas of that resource over an entire embayment. EPA's Monitoring
Guidance for the National Estuary Program (1992), and references therein, provide
details on statistical design of monitoring programs. In order to have a robust indicator,
the monitoring data used need to appropriately describe the spatial and temporal scales of
interest.
There are four basic spatial sampling schemes: random, systematic, stratified, and
multistage. A random sampling design locates samples independently at random
locations within an area of interest.
Southern California Coastal Water Research Project: Interlaboratory Comparison
SCCWRP was designed "to gather the necessary scientific information so that
member agencies can effectively, and cost-efficiently, protect the Southern
California marine environment" (SCCWRP, 2005). To characterize the area,
several laboratories collect and analyze samples throughout the area; then
SCCWRP compiles and compares the data to develop an overall picture of the
ecosystem. At the beginning of the SCCWRP process, problems were noted with
data inconsistencies. To ensure that the overall assessment of the area was correct,
all laboratories submitting data to SCCWRP needed to be processing and analyzing
samples in ways that resulted in compatible data. SCCWRP met this challenge by
performing intercalibration exercises and, in some instances, standardizing methods.
The interlaboratory calibration data were used to compare the accuracy of data
developed before and after the standardized methods. Prior to standardizing
methods, the data ranged 20-fold between the lowest and highest values (top table),
while data after standardization were more uniform (bottom table).
[Table: Data Prior to Intercalibration and Standardization - Santa Monica Bay Sediments, First Round. PAH concentrations (24 individual compounds plus total PAHs) reported by six laboratories (LAB-1 through LAB-6); values for a given compound varied widely, by up to roughly 20-fold, across laboratories.]

[Table: Data After Intercalibration and Standardization - Santa Monica Bay Sediments, Final Round. The same PAH compounds reported by the six laboratories after intercalibration and method standardization; values were substantially more uniform across laboratories.]
(Weisberg, 2002)
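The kind of comparison described in the callout can be sketched as computing, for each compound, the fold-range (highest divided by lowest reported value) across laboratories before and after standardization. The values below are invented placeholders, not the Santa Monica Bay results.

# Sketch: quantify between-laboratory spread for each compound as the ratio of the
# highest to the lowest reported value (the "fold" range described in the text).
# The lab results below are invented placeholders, not SCCWRP data.

first_round = {"naphthalene": [40, 200, 800, 95, 310, 150]}
final_round = {"naphthalene": [150, 180, 165, 190, 170, 160]}

def fold_range(values):
    reported = [v for v in values if v > 0]     # ignore non-detects coded as 0
    return max(reported) / min(reported)

for compound in first_round:
    before = fold_range(first_round[compound])
    after = fold_range(final_round[compound])
    print(f"{compound}: {before:.1f}-fold spread before, {after:.1f}-fold after")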
This type of design is the easiest to implement but may not provide the most cost-effective
approach or achieve a true understanding of the entire system, as the coverage is random
(fine for standard error and other statistics, but not when clear geographic gradients are
known a priori). A systematic design has sampling locations spread over equal intervals
across the region and provides representative coverage of an area. Stratified sampling
separates a region into multiple areas and allows for different sampling intensity in each
area based on the expected variability or areas of concern. This approach allows for more
cost-effective sampling as more resources can be applied to known areas of concern and
less in areas that are relatively homogeneous (e.g., many stations in a confined area in the
vicinity of an outfall, but fewer in a larger area further offshore). See the Visual Sampling
Plan (VSP) callout box below for information on a helpful software tool developed
specifically for designing statistically based sampling plans.
Visual Sampling Plan (VSP)
If a program needs help assessing the spatial schemes of sampling the area of interest,
free software is available that can help. EPA, in conjunction with the
U.S. Department of Energy and U.S. Department of Defense, has developed a
program called Visual Sampling Plan, which provides "simple, defensible tools for
defining an optimal, technically defensible sampling scheme for characterization"
(PNNL, 2005). VSP, which can be downloaded from http://dqo.pnl.gov/vsp/, can be
used to design a cost-effective monitoring program to meet specific statistical criteria
or can be used to evaluate a current monitoring program. One benefit of using VSP to
design monitoring programs is that it "provides immediate feedback of the projected
results of selected statistical sampling plans by overlaying random sampling locations
or grids directly onto the site map" (PNNL, 2005). In addition, it "provides graphic
decision tools such as graphs of probability of hot spot detection vs. total sampling
costs" (PNNL, 2005). See http://dqo.pnl.gov/vsp/ for more details.
The last strategy described in EPA (1992) is multistage, or tiered, sampling. This applies
to the areas sampled: the first stage might be the entire region, the second stage
representative areas within the region, and the third stage specific areas of concern. Not
only could sampling be done on one or more of the stages, but the types of
parameters measured could also be spread over different stages. This is often the case with
monitoring programs. There are many stations where a basic suite of measurements are
collected (low effort and low costs), and then a subset where more costly and time-
intensive measurements are made. An example of this is provided in Figure 11, which
shows the sampling design for the MWRA water quality monitoring program. This
multistage sampling design spreads out the parameters measured across multiple stations
and also has different frequencies with which stations are sampled. The nearfield stations,
which are within a 10-kilometer-square area of concern around the MWRA outfall, are
sampled 17 times per year, while the remaining 'farfield' stations are visited only 6 times
per year.
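To make these schemes concrete, the sketch below generates example station locations for random, systematic, and stratified designs over an idealized 10 km by 10 km study area; a multistage design would simply apply one of these schemes within nested areas. The study area, strata, and sample counts are invented for illustration.

# Sketch: generate example station locations for three of the spatial sampling
# schemes described above over an idealized 10 km x 10 km study area.
import random

random.seed(1)

# Random design: stations placed independently at random over the whole area.
random_design = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(9)]

# Systematic design: stations on a regular grid at equal intervals.
systematic_design = [(x, y) for x in (1.7, 5.0, 8.3) for y in (1.7, 5.0, 8.3)]

# Stratified design: more stations in a small area of concern (e.g., near an
# outfall) and fewer in the relatively homogeneous offshore area.
nearfield = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(6)]
farfield = [(random.uniform(2, 10), random.uniform(2, 10)) for _ in range(3)]
stratified_design = nearfield + farfield

for name, stations in [("random", random_design),
                       ("systematic", systematic_design),
                       ("stratified", stratified_design)]:
    print(name, [(round(x, 1), round(y, 1)) for x, y in stations])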
[Figure 11 is a map of MWRA water column monitoring stations. The station key distinguishes stations sampled for all nutrients; all nutrients and plankton (5 depths); dissolved inorganic nutrients only; dissolved inorganic nutrients plus dissolved oxygen; and zooplankton only.]
Figure 11. Multistaged or Tiered Sampling Design of the MWRA
Water Quality Monitoring Program
-------
INDICATOR IMPLEMENTATION
Once indicators have been selected and a monitoring plan developed, the
indicator program needs to be implemented. The process of implementing
an indicator program will vary, depending on how many organizations are
involved in the process and the overall goals of the program. In some
instances, indicator programs are implemented by a group of
organizations working toward the same goals; in a few instances, only a
single organization is involved. This
section focuses on some of the important
aspects of implementing an indicator
program that involves more than one
organization, but several of these steps
also apply if only one organization is
implementing the program. The steps that
will be covered under implementation are:
Formal adoption and funding of the program
Communication among organizations
Monitoring plan implementation
Data collection and analysis plans
Reporting of indicator findings
FORMAL ADOPTION AND FUNDING
The first step in implementing an indicator program is getting it formally adopted by the
organization(s). This means that the organization plans to do its best to implement the
program using available funds. Most programs implemented by agencies and groups have
been mandated in some way by an act of Congress, through a state legislature, or as part
of an agreement with another organization that supplied the funding. Thus, the goals and
reasons for conducting the work are set by what the group has been tasked to accomplish.
In the NEP, formal acceptance of indicator implementation requires approval by the
program's management or TAC. The agreements sometimes include signed
Memorandums of Understanding (MOUs), which specify the goals and obligations the
groups have agreed to try to reach. MOUs are particularly useful when trying to
implement an indicators program that stretches beyond the area of one monitoring group.
An MOU allows members of regional programs to have an exact understanding of what they
have agreed to when joining the program. It also gives a regional group an understanding
of what it should expect from its constituents. Each MOU is written based on the
individual programs and groups involved. Either way, the important point is that someone
in each organization agrees to seek the funding and staff to implement that organization's
portion of the program so that it can deliver the necessary data to reach the end result.
Formal adoption of a program is important, but so is funding. It is unlikely that any single
agency or organization will have enough funding to accomplish every task. One goal for
many indicator programs is to reduce the amount of money spent by determining whether
questions raised for that program have already been answered elsewhere and, if so,
obtaining the answers to those questions from those other sources. Programs developing
indicators for additional questions should plan to find the funding to cover the new work.
Get buy-in on plans from agencies so that they can help fund programs. Try to find other
groups that may already be monitoring the parameter and see if data can be shared. Other
programs have used the development of a list of indicators to negotiate for additional
monitoring funds. Lack of funds for monitoring does not have to be a reason to forgo
developing indicators.
Great Lakes Program: Management Involvement
"The interviewees strongly suggest bringing managers into the process early on, both
so that the product is as useful to them as possible, and to create a sense of ownership
which might increase managerial use." (Pidot, 2003)
COMMUNICATION AMONG ORGANIZATIONS
Communication among all parties within any program is one of the most important
aspects of a successful indicators program. Communication must occur in order to
develop an appropriate list of indicators, implement the monitoring plans, and report
_ results. Successful programs result because everyone
involved knows exactly what needs to be done, when
it needs to be accomplished, and who is doing the
work. Most importantly, if a problem arises, it is
important that it be discussed early on and that all
parties work to solve the issue. For instance, if an
indicator is selected to monitor a situation, but
someone discovers that the indicator is not properly
documenting the changes as intended, this should be immediately communicated to the
group so that the situation can be evaluated and money is not spent on an indicator that
does not work. Another problem that must be communicated is lost or unavailable data. If
the program is relying on the data to make a judgment about a portion of the
environment, the entire group should be notified that the data are not available or that
help is needed in collecting them. It is important that communication occur freely and openly
within the program to ensure its success.
Communication with stakeholders throughout the area is also important. This includes
not only the organizations or agencies involved in the program, but also the public.
Programs that demonstrate usefulness and answer questions that environmental managers
and the public are interested in tend to get more funding. Therefore, from the beginning
of the program, those involved with its development need to sell its usefulness. The group
also needs to show timely results. Thus, the results of the indicators program need to be
analyzed and reported promptly so that area managers can use the information to make
decisions on next steps. Data from a couple of years past may not even be reviewed by an
environmental manager or the public because they are considered outdated. Thus, the
indicators program needs to develop a communication plan to ensure that information
flows easily within the program and that data can be used by others outside of the
program.
MONITORING PLAN IMPLEMENTATION
As previously noted, once the indicators have been selected and a monitoring plan
developed, the program needs to be implemented. In some instances, the monitoring is
already being conducted under other programs and the data only need to be collected
and analyzed for their intended use. In other instances, the monitoring will need to begin
in new areas or for new parameters. It is assumed that the developed monitoring plan
specifies who will be monitoring which parameters and when. If it does not, then a plan
should be developed. Some indicator plans may call for the collection of a number of new
parameters. In these cases, depending on the funding available, a tiered approach to
implementing the monitoring plans may need to be taken.
When developing a monitoring program, it is important to recognize that, depending on how
the indicators were selected, some indicators may not yet be monitored and the
program may not be able to afford to monitor all of the indicators at once.
A monitoring plan can still be written to include all of the indicators selected, but should
point out that new indicators will be implemented as funding becomes available. A plan
could also be developed to add sampling for one or more of the selected indicators to the
monitoring program during each future year of sampling or at other specified times. This
tiered approach can then be used to negotiate for additional funding from other programs
and the state legislature.
Ongoing monitoring is essential to assess the health of ocean and coastal ecosystems and
detect changes over time. More than any other measure, monitoring provides
accountability for management actions (U.S. Commission on Ocean Policy, 2004).
DATA COLLECTION AND ANALYSIS PLANS
Within the monitoring plan and MOUs, statements should be included regarding how
data will be collected and analyzed. Sometimes it is easy to collect and analyze the
samples, but difficult to compile the final data in one place for analysis. These steps need
to be part of the plan. Groups collecting data for indicators have used both centralized
and distributed data locations successfully. The form selected depends on program needs,
funding, and accessibility to the databases. Evaluation of secondary data is critical.
REPORTING OF INDICATOR FINDINGS
Accurate and appropriate reporting of indicator results and data is critical to justify the
program and to ensure that it is credible. Moreover, data collected and analyzed, but not
properly reported, are of no value to scientists, managers, regulators, or the public.
Early in the program planning process, each indicator and monitoring project should
develop a plan for reporting and communicating findings that supports the program's
objectives. The plan may include a range of documents that convey the project's
activities, data, and findings. These can range from brochures and flyers for public
dissemination and relatively simple data reports to comprehensive interpretive reports
that focus on progress and convey information to management and scientists. The plan
should clearly convey the purpose of the different reports and modes of communication,
their focus and content, the timeframes for publication, and distribution mechanisms.
Reporting plans differ for each program, as project objectives and communications needs
vary. Reports will generally need to be customized for different stakeholders (e.g.,
scientists, managers, the public). It is important to get the information to the stakeholders
in a format they can understand and that will be useful for their particular needs. Formats
such as scientific reports, report cards, science meetings, and newspaper articles and
news conferences have been used successfully in different estuary programs. Each
estuary program should plan on including this broad range of documentation to report on
its indicators and progress.
The audience for which the indicator reporting may be intended generally falls into three
general categories.
Public. Reporting to the public requires information to be presented in a concise,
public-friendly format with less technical content and with straightforward
presentations. The objective is generally to keep the public informed, to conduct
public relations, and to generate support for management activities.
Examples of Reporting to the Public
"State-of-the-Bay" report
Report cards
Flyers
Newspaper articles
Web site
Long Island Sound Study: Reporting to the Public
"Mark Tedesco felt that the process of putting together a report that was primarily
directed at the public was actually quite healthy for the project as it forced the
developers to clearly and concisely describe the trends they had uncovered, and to draw
some conclusions that could be easily presented." (Pidot, 2003)
Casco Bay Estuary Partnership: Reporting to Management
"Since many decision makers will often not read lengthy documents, it is essential,
[according to Diane Gould,] to have a summary highlighting the report results and
detailing their significance directed specifically at policy makers and managers." (Pidot,
2003)
Great Lakes Program: Reporting Status and Trajectories
"The assessment for each indicator...provide both a 'status' component (Good, Fair,
Poor, Mixed) and a 'trajectory' component (Improving, Unchanging, Deteriorating,
Undetermined)." (SOLEC, 2004)
Management/Regulators. Reporting to program management and environmental
regulators generally includes providing both highly concise summaries and
"light" technical reporting. The objective is generally to provide updates that
directly relate to past management actions by assessing the progress and success
of management activities, and to provide recommendations and justification for
future management activities, along with supporting information and data.
Examples of Reporting to Management/Regulators
"State-of-the-Bay" report
Progress report
Report cards
Technical summaries
Scientific Uses. Reporting for scientific use generally includes scientific,
technical interpretive reports, which provide data that can be used by the scientific
community for detailed analysis. The objective of these reports is to make data
available and develop an in-depth understanding of the environmental
conditionsan understanding which, in turn, may also be used for public and
management reporting.
Examples of Reporting for Scientific Uses
Comprehensive data reports
Interpretive reports, with data appendices
Web sites with databases
Peer-reviewed papers and publications
The following sections are intended to provide broad guidance on how to make program
findings available and the level of detail that is appropriate in various reports. They are
not intended to prescribe ways to write a specific type of technical report or other
document, or how to summarize indicator information for the public. No format or
approach fits all programs. Fortunately, many programs and organizations are already
actively reporting results from their environmental studies. The reporting and
communication from these other programs and organizations can serve as excellent
examples of reporting that can be considered and modified to meet the needs of a
specific program. Again, each program should have its own well-considered reporting
plan to address specific, well-defined objectives of the program. Some programs will
emphasize scientific reporting of the results, while others may be more heavily weighted
towards informing the public and public outreach. In the aggregate, experience from
many programs demonstrates that successful programs incorporate the full spectrum of
reports and written materials for communication to scientists, managers/regulators, and
the public. Regardless of report type, a process of conceptualizing, outlining, annotating,
drafting and polishing each report should be practiced.
Reporting to the Public
There are many and varied examples of effective
reports that convey the state of an estuary to the
general public. Examples include the State of the Bay
reports (Figure 12) by the CBEP (Casco Bay Estuary Partnership, 2005a)
(http://www.cascobay.usm.maine.edu/SOTB.html); the Pulse of the Estuary
reports by the San Francisco Estuary Institute (SFEI, 2005)
(http://www.sfei.org/rmp/pulse/2005/RMP05_PulseoftheEstuary.pdf); and the
State of Boston Harbor reports by the MWRA (2002)
(http://www.mwra.state.ma.us/harbor/enquad/pdf/2002-09.pdf). These types of reports are
useful for communicating to those in the public who
are actively involved in issues related to the program
and wish to receive more information than the general
public. In many cases, these reports have helped
define the key issues and been used to form the basis
of more technically sophisticated reports to
management and the scientific community.
Figure 12. Casco Bay Estuary Partnership 2005 State of the Bay Report
Figure 13. San Francisco Report Card, 1996-1999
Report cards (San Francisco Estuary
Project, 1999) (Figure 13) can be a
valuable way to summarize program
actions and related indicator responses.
However, they can become tedious and
carry the risk of oversimplification,
which may result in misuse of the
information presented. Thus, care must
always be taken when simplifying
information. Moreover, simplification
must not happen at the expense of
accuracy and should recognize the
potential for misinterpretation. In
addition to report cards, informational
flyers can be highly effective in
summarizing specific components of a
program in a simple, eye-catching
format that can reach a wide audience.
Programmatic summaries (e.g., annual
updates) are also effectively
communicated through concise flyers.
Newspaper articles, news releases, and
news briefings are other means of
communicating to the public, as long as
care is taken to ensure accurate
representation of the information.
Finally, well-designed program web sites can be an excellent mode of communicating to
the public, providing updates on activities, and providing an archive for access to
historical documents. An example of an effective web site is the Chesapeake Bay
Program's site (http://www.chesapeakebay.net/). Other examples include web sites by the
MWRA (http://www.mwra.state.ma.us/), the St. Johns River Water Management District
(http://sjr.state.fl.us/), the San Francisco Estuary Institute (http://www.sfei.org/), and the
CBEP (http://www.cascobay.usm.maine.edu/).
Developing Public Materials. Primary among the challenges associated with developing
public materials is ensuring accurate communication of information in a manner the
general public can understand. The suggested writing level for these reports is an
8th-grade reading level.
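One way to check a draft against that target, assuming the widely used Flesch-Kincaid grade-level formula and a very rough syllable count, is sketched below; the sample sentences are invented.

# Sketch: estimate the reading grade level of draft public-outreach text using the
# Flesch-Kincaid grade-level formula with a rough vowel-group syllable count.
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

draft = "The bay is cleaner than it was ten years ago. Oxygen levels in the water rose."
print(f"Estimated grade level: {fk_grade(draft):.1f}")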
Often estuary program staff are challenged to find creative ways to present information.
When developing public materials, it is important to focus on answering those questions
that are foremost on the public's mind in straightforward language and with concise
images. Reports for the public should emphasize, but not be limited to, the
"what and why" questions, with less attention to "who, when, and how." For
example:
Is the water safe to swim in or drink? What has improved or gotten worse? Why?
Are fish/shellfish safe to eat? If not, why, and what can be done about it?
Have the changes that estuary programs have requested worked toward defined
goals? For example,
Have fertilizer reductions and sewerage plant upgrades aimed at reducing
nutrient levels worked towards improving DO levels in the estuary?
When the dam was demolished, did the fish return upstream?
Have the rebuilt wetlands or open lands that have been conserved helped the
estuary program in any way?
What needs to happen next to improve the estuary? How can the public help (besides
providing more money)?
While many in the public are primarily interested in whether the financial investment and
effort they have put in to save the estuary have merit, some will want more in-depth
reports. They often want to know that there is a plan to move forward.
Suggested forms for public reports have been conveyed previously. How that information
is communicated also must be carefully considered. Any graphs used should be simple
and easy to follow. Simple one-dimensional bar or line graphs seem to be the best at
showing changes over time. Limited and carefully prepared information on statistical
considerations can be effective (i.e., indicating a trend is statistically significant rather
than a detailed explanation of the statistical methods). Pictures, diagrams, and artist
renditions are also helpful in documents prepared for the general public (and also for
more technically enlightened audiences), especially when describing various estuarine
species and habitat restoration projects. Text should describe the problem's history,
the current situation, and the actions required to reach the "optimal" or
desired end. If the project is long-term, developing mid-progress milestones that can be
celebrated will help maintain public interest and involvement.
Questions invariably arise about how best to handle inquiries from the news media.
Depending on the circumstances of the interaction, but especially for formal press
briefings or news releases, information sheets should be prepared in advance and should
include details on the information being conveyed. This will help ensure that journalists
have the correct numbers and other pertinent information, rather than having them rely
solely on their notes.
Reporting to Management/Regulators
Different types of state-of-the-bay and state-of-the-estuary reports are often excellent
guides for developing written and oral reports to management/regulators (Casco Bay
Estuary Partnership, 2005a; SFEI, 2005; MWRA, 2002), and may by themselves be
effective for communicating information. In contrast to reports for public consumption,
reports prepared for management/regulators often include recommendations and require
technical and other justifications to support these recommendations. The reports prepared
for managers generally have more detail and content than public reports and support the
more public-oriented reports. The level of detail provided in management reports will
also vary, depending on the managers'/regulators' oversight responsibility. One example
of such a report is the 5 Year Progress Report: 2000-2004, prepared by MWRA for the
governor and legislature of Massachusetts (MWRA, 2006) (Figure 14).

Figure 14. MWRA 5 Year Progress Report 2000-2004

Report cards (see Figure 13) can be valuable for providing summary-level information to
management/regulators but have the same limitations and risks associated with
disseminating such materials to the public.
Developing Management-/Regulator-Focused
Reports. Management/regulator-focused reports
address similar questions as those raised in public
reports. They tend to provide more details and
supporting information and focus on answering
questions regarding whether environmental conditions
or responses conform with an agency's mission or
goals or a manager's oversight function. Reports for managers/regulators should address
"what, when, where, and why" concerns and also address the "how" (either measurement,
interpretive, or environmental) issues pertinent to the program's objectives. Depending
on the specifics of the program, consideration of "who" (e.g., responsible parties,
ecological entities) may come into play. This means there normally needs to be an
accounting of objectives as they relate to the agency's overall goals, the tasks that have
been completed or started to date, the amount of funding that has gone toward these
efforts, and the status toward reaching the final goal. For estuaries within the NEP, this
may mean linking progress made over a certain timeframe back to the specific goals
outlined in the CCMP.
Reporting to the Scientific Community
The different types of state-of-the-bay and state-of-the-estuary reports can also be an
excellent resource for the scientific community, and often form the basis for further in-
depth analysis. Conversely, in-depth scientific reports and peer-reviewed papers often
validate the content of the higher-level interpretive and synthesized reports prepared for
managers and the public. Typically, the flow of reports is from detailed scientific
reporting to the higher-level syntheses and integration at the management and public
levels. Regardless, each of these audiences has influence over the content and direction of
reports across the entire program.
Generally, science-based interpretive reports provide the details of the monitoring,
research, and assessments that take place within the program. While there are no standard
formats for interpretive reports, each should include a section that introduces the report's
subject and objective(s), describes the method(s) used to collect and analyze the data,
presents the results and findings, discusses the results, and develops conclusions. A
concise executive summary is a valuable tool for these reports, as it helps inform
managers and the interested public. Depending on the project, the reports should
incorporate recommendations regarding changes to the project/program and further
studies. The level of detail in a report depends on where and how it will be published. An
interpretive report often includes in-depth considerations, while a peer-reviewed paper
provides a succinct presentation of the findings, with the degree of detail depending
greatly on the publisher.
Interpretive reports are also developed with many different formats, including highly
graphical and "reader-friendly" formats that, in many ways, are an expansion of a state-
of-the-bay report. One good example of such a report is the "Baywatchers II" report,
prepared by the Coalition for Buzzards Bay (Buzzards Bay Project National Estuary
Program, 1999). Examples of technical reports with additional technical rigor include the
National Coastal Condition Report II (EPA, 2005b), the State of the Estuary: A Report
on Conditions and Problems in the San Francisco/Sacramento-San Joaquin Delta
Estuary document (San Francisco Estuary Project, 2002), and the Regional Monitoring
Program (RMP) for Trace Substances report (SFEI, 1999). Technical reports are
particularly valuable for the rest of the scientific community when raw and summarized
data are included as appendices. Well-designed program web sites can also be valuable to
the scientific community, both in terms of being a repository for documents and also for
housing and making available for general use data that may be accessed and downloaded
by scientists.
Developing Scientific Community-Focused Reports. Unlike public- and management-
focused reports, reports focused toward the scientific community are geared specifically
toward reporting, interpreting, and synthesizing data in depth. These reports typically
address in detail the "who, what, when, where, how, and why" questions. Scientists want
to know everything, from the methods used to collect and analyze the samples, to how
the data were treated for interpretation, to how the new data fit into scientific theories,
hypotheses, and previously obtained data. These reports are normally highly technical,
with figures and tables that support presentation of the findings, discussion, and
conclusions. These reports are equally important as (some would say more important
than) the public and management reports because they form the basis of future
evaluations and conclusions regarding the overall condition, variability, and changes in
the estuary. Reporting the actual data in these scientific reports is also crucial for future
data comparisons. These reports often form the basis of peer-reviewed publications.
Estuary programs should strive to ensure that reports prepared in support of their program
maximize the development of information from the data collected.
Authors of scientific and technical reports that address environmental indicators should
clearly communicate how the data from each selected indicator is linked to a specific
outcome, represents broader environmental concerns, and supports decision-making. This
documents how an indicator is useful to the estuary program and how it provides the
necessary information to the program. If the authors do not provide this information, the
link between the parameters and the interpretive results may be lost, and estuaries may
spend funds unnecessarily.
Report Data Quality/Timeliness
Inaccurate data and interpretation can lead estuary programs to make incorrect decisions
based on those data or findings. Whether the report is for the public, management,
regulators, or the scientific community, it should be prepared carefully and should be
based on accurate and complete data. An effective means of
developing reports that meet program expectations is to have the authors develop an
outline (preferably annotated) for each report in advance. Experience has also found that
each report should be developed under a known level of data QA and interpretation
verification (e.g., peer review). To this end, technical, QA, and editorial reviews should
be defined for each report and practiced by the estuary program.
It is also important that the reports be generated in a timeframe that will allow their
findings/conclusions to be useful to management, regulators, and decision-makers. Data
that are reported years after they have been collected can be useless if major changes are
occurring within the estuary. Good practices are to have data available within 6 to
12 months of sample collection and interpretive reports completed within 1 year. An
excellent example of an effective reporting schedule can be found under the MWRA
Harbor and Outfall Monitoring Project, where data are required to be available within
3 months of collection and interpretive reports within 6 to 8 months of the end of the
monitoring year. Such reporting enables implementation of preventative or corrective
measures when a problem is just beginning to develop, not years later. Thus, it is
important that estuary programs include in their reporting plans a schedule for reporting
data. Another example of timely reporting is the 2006 draft Assessment Strategy
developed for the Florida Everglades Restoration Monitoring and Assessment Plan. At a
minimum, programs should provide data reports and preliminary findings at least every
2 years. This will provide the data needed for scientists to make decisions but will allow
the program a little longer period (no more than every 5 years) to develop the larger
programmatic or public reports.
The purpose of any report card, newsletter, management overview, or estuary data report
is to convey the intended message to the intended audience.
A report that is useless to its audience will ultimately be useless to the estuary program
that developed it.
-------
INDICATOR REASSESSMENT
Most programs that develop a suite of indicators spend months, if not
years, trying to select the most representative parameters and develop a
robust monitoring program to support them. However, the process does
not stop once the indicator measurement program is implemented.
Continual assessment and reassessment of the performance of the
program is the necessary next step. Reassessment of an indicator program, conducted at
a minimum of every 5 years, ensures that the indicators are meeting expectations.
Reassessing an indicator program is not always a simple or clear-cut
process. Some indicators answer specific questions (e.g., monitoring DO
levels to determine long-term increases/decreases in the water column or
compliance with a state standard). Other indicators address the status of
a broader question that cannot be easily answered (e.g., monitoring
catch of a species to estimate fish stock size). Even though an indicator was carefully
selected, it is possible that it does not adequately address the question. For example, if a
program is specifically concerned with metals inputs to sediment, a possible indicator
may be to measure the amount of two or three key metals in the sediments of an area over
time. If, after a period of time, the monitoring program finds that the concentration of
metals in sediments is not changing as expected, concerns are raised as to why. In this
case, the program needs to reassess the appropriateness of the metals monitored or
conduct additional studies to determine why expected changes did not occur. These could
be related to uncertainty in loading, physical changes in the sediment, geochemical
processes, or the inappropriate selection of the indicator metal. Thus, the program needs
to reassess and should potentially select a different indicator to effectively track changes
in metals input to the sediment.
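A sketch of that reassessment check, assuming an annual record of a sediment metal concentration and using an ordinary least-squares slope as a simple trend test, is shown below; the values and the 0.5 mg/kg-per-year cutoff are invented for illustration, and a program might prefer a nonparametric test such as Mann-Kendall.

# Sketch: test whether a monitored sediment metal is changing as expected by
# fitting a simple linear trend to annual concentrations (values are invented).
from statistics import mean

years = [2000, 2001, 2002, 2003, 2004, 2005]
copper_mg_per_kg = [48.0, 47.5, 49.1, 48.2, 47.9, 48.4]   # essentially flat

def slope(xs, ys):
    x_bar, y_bar = mean(xs), mean(ys)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

trend = slope(years, copper_mg_per_kg)
print(f"Trend: {trend:+.2f} mg/kg per year")
if abs(trend) < 0.5:   # illustrative cutoff for 'not changing as expected'
    print("No clear change detected - reassess the indicator or investigate why.")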
Each program should develop a reassessment plan that is designed to review the
usefulness of the selected indicators. The reassessment should be conducted at least
every 5 years to ensure that funds are being spent economically and indicators are
answering the intended questions. The initial step in the reassessment process is to review
the current issues of importance. This review should allow issues that have been
addressed to be removed, concerns to be modified, and new issues to be added.
Galveston Bay NEP: Indicator Refinement
"By consensus, the [Galveston Bay Council] will determine the final official set of
indicators to be used by GBEP for inclusion on reports and public outreach materials.
This is not to say that further refinements will not take place in the future as better
datasets are found, monitoring programs improve or expand, and advances in research
are made." (GBEP, 2004)
The next step in the reassessment process is to evaluate the questions the program must
answer. Previous questions should be examined to determine whether they have been
answered. New issues should generate specific questions. For the issues and questions
that are still relevant, the next step is to determine whether the corresponding indicators
remain valid. If so, the program should confirm the adequacy of the monitoring plan. If,
during the review process, issues and questions were added, or if an indicator was no
longer valid, the program needs to develop indicators appropriate to the questions and
revise the monitoring plan. The key to any program review is relevant and recent
information on the issues and questions. Revisions may include selecting a new parameter whose
measurement may be more cost-effective, or revising a methodology to provide a better
understanding of the issue. Using outdated information may result in incorrect choices for
the most appropriate indicators. Finally, the last step in the reassessment process is to
implement the indicator program and revised monitoring plan.
-------
SUMMARY
Since the 1960s, the necessity of preserving and protecting our nation's coastal waters
has been recognized. The nation has worked diligently to enact legislation (the CWA and
CZMA) and develop programs (e.g., NEPs and the NERR system) and initiatives to
protect our coastal environment. As a result, plans and environmental assessment
programs have been developed to prevent further degradation and address ways to
improve ecosystems.
In 1993, GPRA called for "Federal agencies to undertake efforts to measure their
performance and the effectiveness of their programs" (The Heinz Center, 2003). In
response, a series of indicators were developed that could track the effectiveness of these
programs and provide quantifiable measures that demonstrate the response of our
nation's coastal waters. Since the enactment of GPRA, programs such as EPA's NCA
have been implemented to measure improvement of coastal estuaries nationwide.
In support of programs and initiatives focused on preserving, protecting, restoring, and
improving estuaries, EPA has developed this "Indicator Development for Estuaries"
manual to help further those efforts. As previously stated, the intent of this manual is to
provide an interactive process, considerations, and lessons learned to assist coastal and
estuarine programs, including the NEPs, in ecological and environmental indicator
development on a local, regional, or national level.
To summarize this manual and provide assistance to programs, the following checklist is
provided as a supporting tool for indicator development, covering the important areas of
program planning through implementation and reassessment. Additionally, three case
studies have been included in Appendix A to demonstrate how local and regional
programs have conducted their indicator development process.
Finally, Appendices B and C are included as supplemental information to assist programs
with their indicator development. Appendix B lists some of the indicators chosen by
various programs, including NEPs. Appendix C lists some available resources on
indicator development. This list is not meant to be all-inclusive, but rather a sampling of
available documents.
INDICATOR DEVELOPMENT CHECKLIST
Check Box
Steps/Considerations
Program Planning (unless already conducted as part of the estuaries
CCMP development)
Decide on the spatial scale of the program
Convene a steering committee
Reach agreement or consensus on the purpose and need for indicators
Identify the key issues
Develop management objectives
Define questions to be answered by indicators
Conduct baseline assessment on each issue
Conceptual Models Development
Determine the type of conceptual model to use
Develop conceptual models for each issue
Indicator Specification
Collect information on monitoring programs being conducted in the area
(e.g., who, what, where, when, how often, using what methods)
Determine how indicators should be selected (e.g., based on parameters
already being monitored; scientifically sound parameters)
Develop possible indicators list
Select indicators to answer key questions based on the following criteria:
Conceptual relevance/soundness: Is the indicator relevant to the
assessment question and to the resource at risk? Is it scientifically
sound?
Feasibility of implementation (current and future): Are the
methods for long-term sampling and measuring the environmental
variables technically feasible, appropriate, and efficient for
monitoring use?
Response variability: Are human errors of measurement and
natural variability over time and space sufficiently understood and
documented?
Interpretation and utility: Will the indicator convey information on
resource conditions that is meaningful to environmental decision-
makers? Is the indicator currently monitored and likely to be
monitored in the future (i.e., a sustainable indicator)?
Monitoring Plan Development and Modification
Develop or revise current monitoring plan to incorporate selected
indicators
Identify indicators critical to the evaluation of monitoring program
performance
Design and implement data management plan
Indicator Implementation
Formally adopt program
Obtain funding
Initiate communication among organizations
Convene an implementation oversight committee
Develop a data analysis and assessment plan
Implement the monitoring plan
Report indicator findings
Indicator Reassessment
Reassess selected indicators a minimum of every 5 years
-------
REFERENCES
ANCMS (Atlantic Northwest Coastal Monitoring Summit). 2003. ANCM Summit Fact
Sheet #1. February 2003. http://www.gulfofmaine.org/nciw/Fact_Sheet.pdf
Bertram, P. and N. Stadler-Salt. 2000. Selection of Indicators for Great Lakes Basin
Ecosystem Health. Version 4. For the State of the Lakes Ecosystem Conference. March
2000.
Bricker, S.B., C.G. Clement, D.E. Pirahalla, S.P. Orlando, and D.R.G. Farrow. 1999.
National Estuarine Eutrophication Assessment: Effects of Nutrient Enrichment in the
Nation's Estuaries. NOAA, National Ocean Service, Special Projects Office and the
National Centers for Coastal Ocean Science. Silver Spring, MD, 71 pp.
Bricker, S.B., J.G. Ferreira, and T. Simas. 2003. An Integrated Methodology for
Assessment of Estuarine Trophic Status. Ecological Modelling 169:39-60.
Buzzards Bay Project National Estuary Program. 1999. Bay Watchers II. Nutrient
Related Water Quality of Buzzards Bay Embayments: A Synthesis of Baywatchers
Monitoring 1992-1998. The Coalition for Buzzards Bay. December, 1999.
Casco Bay Estuary Partnership. 2005. State of the Bay 2005. U.S. EPA. Fall 2005.
CBBEP (Coastal Bend Bays Estuary Program). 1998. Implementation Strategy for the
Coastal Bend Bays Plan. Coastal Bend Bays Estuary Program. SFR-60/CBBEP-2.
August 1998.
CFTNEP (Charlotte Harbor National Estuary Program). 2004. Draft Environmental
Indicators for the Charlotte Harbor National Estuary Program. Technical Report 04-4.
North Fort Myers, FL. May 26, 2004.
Cobb, C.W. and C. Rixford. 1998. Redefining Progress: Lessons Learned From The
History of Social Indicators. November 1998.
Diaz, R.J. and R. Rosenberg. 1995. Marine Benthic Hypoxia: A Review of its Ecological
Effects and the Behavioral Responses of Benthic Macrofauna. Oceanography and Marine
Biology: An Annual Review 33:245-303.
EMAP. 1994. Environmental Monitoring and Assessment Program: Indicator
Development Strategy. Barber, M.C., ed. Office of Research and Development,
Environmental Research Laboratory. Athens, GA. EPA/620/R-94/002.
Environment Canada. 2005. SOLEC 2000. Retrieved February 10, 2005, from
http://www.on.ec.gc.ca/solec/solec2000-e.html
EPA (U.S. Environmental Protection Agency). 1992. Monitoring Guidance for the
National Estuary Program: Final. U.S. Environmental Protection Agency, Office of
Water (WH-556F). EPA 842-B-92-004. September 1992.
EPA. 1993. National Estuary Program Guidance: Base Program Analysis. U.S.
Environmental Protection Agency. Office of Water (WH556F). EPA 842-B-93-00.
March 1993.
EPA. 1994. Measuring Progress of Estuary Programs: A Manual. U.S. Environmental
Protection Agency, Office of Water (4504F). EPA 842-B-94-008. November 1994.
EPA. 1995. A Conceptual Framework to Support Development and Use of
Environmental Information in Decision-Making. EPA 239-R-95-012. United States
Environmental Protection Agency, Office of Policy Planning and Evaluation, April,
1995.
EPA. 2000a. Section 320, Amendment to the Clean Water Act, 1987. U.S. Environmental
Protection Agency. Washington, DC. January 2000.
EPA. 2000b. Evaluation Guidelines for Ecological Indicators. Office of Research and
Development. Washington, D.C. EPA/620/R-99/005. May 2000.
EPA. 2001. National Coastal Condition Report I. U.S. Environmental Protection Agency,
Office of Research and Development/Office of Water, Washington, DC 20460. EPA-
620/R-01/005, September 2001. http://www.epa.gov/owow/oceans/nccr/downloads.html
EPA. 2003a. Usefulness of National Estuary Program (NEP) Data as National
Environmental Indicators. U.S. Environmental Protection Agency, Ocean and Coastal
Protection Division. Prepared by Battelle under Contract 68-C-03-041, WAO-12.
September 5, 2003.
EPA. 2003b. Draft Report on the Environment 2003. Technical Document. Appendix D.
Glossary of Terms. U.S. Environmental Protection Agency. Washington, DC.
EPA. 2003c. Successful Coastal Management Solutions. National Estuary Program. U.S.
Environmental Protection Agency. Washington, DC.
EPA. 2004. Great Lakes National Program Office. Retrieved November 2004, from
http://www.epa.gov/glnpo/about.html.
EPA. 2005a. Environmental Indicators Initiative. Retrieved February 10, 2005, from
http://www.epa.gov/indicators/.
EPA. 2005b. National Coastal Condition Report II. U.S. Environmental Protection
Agency, Office of Research and Development/Office of Water, Washington, DC 20460.
EPA-620/R-03/002, December 2004. Retrieved February 15, 2004, from
http://www.epa.gov/owow/oceans/nccr2/.
EPA. 2005c. Community-Based Watershed Management. Lessons Learned From The
National Estuary Program. U.S. Environmental Protection Agency. Washington, DC.
GBEP (Galveston Bay Estuary Program). 2004. Galveston Bay Estuary Program's
Preliminary List of Indicators. Supplied to the U.S. Environmental Protection Agency.
October 12, 2004.
Karr, J.R. 1981. Assessment of biotic integrity using fish communities. Fisheries 6:21-27.
Karr, J.R., K.D. Fausch, P.L. Angermeier, P.R. Yant, and I.J. Schlosser. 1986. Assessing
biological integrity in running waters: a method and its rationale. Illinois Natural History
Survey Special Publication 5, Urbana Illinois.
MWRA (Massachusetts Water Resources Authority). 2002. The State of Boston Harbor:
Mapping the Harbor's Recovery. MWRA. 2002.
MWRA. 2006. "5 Year Progress Report: 2000-2004". MWRA report to Governor and
Legislature. January, 2006.
NCIW (Northeast Coastal Indicators Workshop). 2004. National Indicator Development
Initiatives. Background paper developed by the NCIW Steering Committee.
Nixon, S.W. 1995. Coastal Marine Eutrophication: A Definition, Social Causes, and
Future Concerns. Ophelia 41: 199-219.
NOAA (National Oceanic and Atmospheric Administration). 2005. Coastal Zone
Management Act of 1972. Retrieved February 10, 2005, from
http://coastalmanagement.noaa.gov/czm/czm_act.html.
NPS (National Park Service). 2003. NPS Natural Resources Monitoring web site,
http://science.nature.nps.gov/im/monitor
NRC (National Research Council). 1990. Managing Troubled Waters: The Role of
Marine Environmental Monitoring. National Academy Press, Washington, D.C.
NRC. 2000. Ecological Indicators for the Nation. National Academy Press, Washington,
DC.
OECD (Organisation for Economic Co-operation and Development). 1993. Core Set of
Indicators for Environmental Performance Reviews. Environmental Monograph No. 83.
Otero, E. 2002. Environmental Indicators on the San Juan Bay Estuary (EISJBE). Draft
Document 5-8-02.
Pidot, L. 2003. Tapping the Indicators Knowledge-base: "Lessons Learned" by
Developers of Environmental Indicators. Report and Supplement: The Interviews.
Prepared for the State of the Gulf Summit Steering Committee. Maine State Planning
Office. August 2003.
PNNL (Pacific Northwest National Laboratory). 2005. Visual Sampling Plan (VSP)
Description. http://dqo.pnl.gov/vsp/vspdesc.htm.
Price, K.S. and S. Huerta. 2001. Proposed Indicators for the Inland Bays. Delaware
Inland Bays Program, Center for the Inland Bays. July 13, 2001.
Random House. 2001. Random House Webster's Unabridged Dictionary, Second
Edition.
San Francisco Estuary Project. 1999. Bay-Delta Environmental Report Card. San
Francisco Estuary Project. CCMP Workbook. March, 1999.
San Francisco Estuary Project. 2002. State of the Estuary: A Report on Conditions and
Problems in the San Francisco/Sacramento-San Joaquin Delta Estuary. San Francisco
Estuary Project. Prepared by the Association of Bay Area Governments (Oakland, CA),
in cooperation with U.S. EPA. June, 2002.
SCCWRP (Southern California Coastal Water Research Project). 2005. Southern
California Coastal Water Research Project home page at http://www.sccwrp.org/.
SFEI (San Francisco Estuary Institute). 1999. Regional Monitoring Program (RMP) for
Trace Substances. 1997 Annual Report. SFEI. June, 1999.
SFEI. 2005. Pulse of the Estuary 2005: Monitoring and Managing Water Quality in the
San Francisco Estuary. SFEI. 2005.
SOLEC (State of the Great Lakes Ecosystem Conference). 2004. State of the Great Lakes
2005 - Draft. Draft Discussion at SOLEC 2004, October 6-8, 2004.
TEP (Tillamook Bay National Estuary Program). N.D. Indicator Development Process.
Table and Summary supplied to the U.S. Environmental Protection Agency in 2004.
The Heinz Center. 2003. The Coastal Zone Management Act: Developing a Framework
for Identifying Performance Indicators. The H. John Heinz III Center for Science,
Economics and the Environment.
U.S. Commission on Ocean Policy. 2004. An Ocean Blueprint for the 21st Century. Final
Report of the U.S. Commission on Ocean Policy - Pre-Publication Copy. Washington,
DC. 2004.
Weisberg, S. 2002. Cooperative Regional Monitoring: Is it worth the trouble?
Presentation given by Steve Weisberg at the Atlantic Northwest Coastal Monitoring
Summit. December 2002.
-------
APPENDIX A-l
BARATARIA-TERREBONNE PROGRAM
CASE STUDY
PROGRAM OBJECTIVES AND HISTORY
In September 1990, the State of Louisiana and the U.S. Environmental Protection Agency
(EPA) developed a cooperative agreement and formed the Barataria-Terrebonne National
Estuary Program (BTNEP). The goal of the program is to launch a collaborative effort
that focuses government, private, and commercial resources toward the protection of the
basins.
One of the first actions the program initiated was the development of a Comprehensive
Conservation Management Plan (CCMP), which detailed specific action plans to promote
and preserve the Barataria-Terrebonne Estuary System (BTES). The plan identified
issues, assessed status and trends, developed strategies, recommended corrective actions,
and implemented and funded plans. Overall, the BTNEP CCMP outlined 12 goals:
Preserve and restore wetlands and barrier islands
Realistically support diverse, natural biological communities
Develop and meet water quality standards that adequately protect estuarine
resources and human health
Promote environmentally responsible economic activities that sustain estuarine
resources
Generate national recognition and support
Implement comprehensive education and awareness programs that enhance
public involvement and maintain cultural heritage
Create an accessible, comprehensive database with interpreted information for the
public
Create clear, fair, practical, and enforceable regulations
Develop and maintain multi-level, long-term, comprehensive watershed planning
Be compatible with natural processes
Forge common-ground solutions to estuarine problems
Formulate indicators of estuarine ecosystem health and balance estuary use
(BTNEP, 1996).
Along with these goals, BTNEP identified seven priority problems causing impacts to the
estuary.
1. Hydrologic modification
2. Sediment reduction
3. Habitat loss
4. Eutrophication
5. Pathogen contamination
6. Toxic substances
7. Living resources
When the CCMP was approved by the EPA, an organizational structure was established
for the implementation of the program. This included performing day-to-day tasks,
reporting information to the public, making policy decisions, and developing meetings
and workshops. In 2001, EPA requested that all National Estuary Programs (NEPs)
develop indicators to measure the progress of their programs. Based on this request,
BTNEP began to develop an indicator set.
INDICATOR DEVELOPMENT PROCESS
Steering Committee Involvement
BTNEP began the indicator development process by forming a planning committee with
representatives from Federal, state, and university participants who volunteered their time
toward the effort. The committee developed a workshop and formulated background
materials. The background workshop materials included goals and objectives of the
workshop, initial focus questions, and an indicator selection matrix. The planning
committee included the following participants:
Dean Blanchard, BTNEP
Rex Caffey, Louisiana State University Agricultural Center
Rod Emmer, Federal Emergency Management Agency (FEMA)
Dianne Lindstedt, National Marine Fisheries Service (NMFS)
Nancy Rabalais, Louisiana Universities Marine Consortium
Kerry St. Pe, BTNEP
Greg Steyer, U.S. Geological Survey (USGS)
Glenn Thomas, Louisiana Department of Wildlife and Fisheries
Monica Young, EPA
Brent Ache, Battelle
Identify the Purpose and Need for Indicators
BTNEP's indicator development effort focused on the following purpose and need.
Purpose: To develop indicators to periodically review and report the vital signs
of the BTES.
Need: BTNEP needs to protect, restore, and sustain the BTES for today and for
future generations. Indicators are needed to measure the amount of success
BTNEP has accomplished toward these goals.
Issues and Management Objectives
The issues and management objectives were previously outlined in BTNEP's CCMP.
These were then used to develop indicators.
Baseline Assessment of Each Issue
Prior to the workshop, the planning committee created an indicator matrix. The matrix
was organized by the seven priority problems, and indicators were ranked by level of data
availability as high, medium, or low. The matrix also included whether and what type of
data were available to support the indicator, as well as the major pro and con
considerations for choosing the indicator.
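For illustration only, one row of such a matrix could be recorded as a simple structured entry like the sketch below; the priority problem, candidate indicator, rankings, and pro/con notes are invented and are not taken from the BTNEP matrix.

    # Minimal sketch (invented entries) of one row of an indicator selection
    # matrix organized by priority problem, data availability, and pros/cons.
    matrix_row = {
        "priority_problem": "Eutrophication",
        "candidate_indicator": "Chlorophyll-a concentration over time",
        "data_availability": "high",  # ranked high, medium, or low
        "supporting_data": "Long-term state ambient water quality monitoring",
        "pros": ["Directly reflects algal biomass", "Existing long-term dataset"],
        "cons": ["Sensitive to short-term weather variability"],
    }

    for key, value in matrix_row.items():
        print(f"{key}: {value}")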
Indicator Development Workshop
On June 13-14, 2001, an indicator development workshop was held in Gonzales,
Louisiana. The workshop assembled individuals with a vested interest in monitoring or
managing BTES who could recommend a suite of indicators that best represents the
environmental condition of BTES while also being meaningful to the estuary's residents
and public officials.
Workshop participants were separated into four breakout groups for indicator
development discussions. Three of the groups were based on the seven priority problems;
the fourth group addressed regional demographics, sustained recognition, citizen
involvement, and economic growth. The four breakout groups addressed the following
issues:
Hydrologic modification, reduced sediment flows, habitat loss
Changes in living resources
Eutrophication, pathogen contamination, and toxic substances
Quality of life: community, economy, and awareness
Each breakout group was given the same set of goals to develop indicators. They were
also instructed to identify indicators to address the specific focus questions. The goals
were to:
Develop a suite of no more than about 20 indicators that are both meaningful to
the target audience and supported by datasets produced by current monitoring
efforts, and that describe:
Key components representative of ecological condition related to the seven
CCMP priority problems.
Key demographic, economic, and awareness components of the region's
natural resource-based economy and quality of life.
Identify potential indicator opportunities based on planned future monitoring in
the BTES.
Identify critical indicator (and associated monitoring) gaps and needs for the
BTES.
Discuss indicators based on the Pressure-State-Response (PSR) framework, which
uses stressors, condition, and management actions to categorize environmental
indicators.
Discuss the format for presenting indicators in the indicator report
(Battelle, 2001).
Indicator Specification and Monitoring
Indicators were developed based on the focus questions and availability of monitoring
data; however, the indicators selected were not necessarily supported by a current dataset
or monitoring program. Participants were asked to discuss indicators that either
specifically addressed a focus question or were supported by monitoring data. Therefore,
three categories were established to group indicators: Supported, Future Indicator, and
Gap/Need.
Supported: Potential indicator supported by existing status and trends monitoring
and assessment.
Future Indicator: Potential indicator that will be supported by planned future
status and trends monitoring and assessment.
Gap/Need: Potential indicator not supported by existing or planned status and
trends monitoring and assessment (Battelle, 2001).
The suite of indicators developed at the workshop constitutes the best indicators
currently supported by existing monitoring programs and associated datasets. All
indicators selected met the following selection criteria:
Valid
Relevant: State, pressure, or response indicators relevant to one or more of the
seven CCMP priority problems (or the region's natural resource-based economy
and quality of life, as addressed in the CCMP).
Appropriate Scale: Representative of the entire BTES (or some significant sub-
unit) over an appropriate time scale.
Sensitive / Responsive: Natural variability can be reasonably explained; quickly
reflects changes in the environment (Battelle, 2001).
Understandable
Meaningful: Interpretable and meaningful to BTES residents and their political
representatives (i.e., simple presentation format).
Trend: Demonstrates or will demonstrate a trend (increase, decrease, or stable)
from a reference condition.
Measurable: Periodic assessment, on the scale of 1 to 2 years, is supported
(Battelle, 2001).
Available
Supported (or Future): Supporting dataset is long-term trend monitoring,
immediately usable, and with a reasonable expectation that monitoring will
continue.
Data Quality: Supporting dataset quality is acceptable.
Data Provided (Cost Issue): Dataholder agrees to provide the simple data
aggregation or the analyzed/modeled results of the dataset (Battelle, 2001).
Indicators Developed
Below is a listing of the focus questions and indicators that participants identified based
on available data in the region (Battelle, 2001).
Hydrologic Modification, Reduced Sediment Flows, and Habitat Loss Indicators
Question 1. Are we losing land in the BTES, and where?
Indicator(s):
Land-water ratios in the BTES by fresh-, brackish-, intermediate-, and saltmarsh
habitat type over time.
Question 2. Why are we losing land in the BTES?
Indicator(s):
Marsh health and vigor (above and below ground)
Flooding frequency and duration
New vertical accretion
Nutria damage
Question 3. Are fish and wildlife habitats being protected and restored?
Indicator(s):
Number of acres restored in the BTES over time.
Changes in Living Resources Indicators
Question 1. Are fish and wildlife populations healthy?
Indicator(s):
Shrimp abundance in the BTES over time (one of the three significant commercial
species, or combined harvest).
Oyster abundance on public seed grounds in the BTES over time.
Red drum abundance in the BTES over time.
Community diversity in the BTES over time (trawl samples).
Mottled duck abundance in the BTES over time.
Christmas bird counts over time (which combines both migratory and non-
migratory bird species).
Freshwater catfish abundance in the BTES over time.
Largemouth bass abundance in the BTES over time.
Alligator nests in the BTES over time.
Question 2. Are invasive species a problem?
Indicator(s):
Nutria population and marsh damage estimates in the BTES over time.
Cost of invasive species control in the BTES over time.
Question 3. Are seafoods safe to eat?
Indicator(s):
Seafood safety indicator, to be selected from (1) area of oyster closures in the
BTES over time; (2) health department fish consumption advisories in the BTES
over time; or (3) mercury in edible fish tissue data collected in the BTES.
Question 4. What threatened or endangered species can we use to assess the health of our
estuary?
Indicator(s):
Bald eagle population and nests in the BTES over time.
Brown pelican population and nests in the BTES over time.
Eutrophication, Pathogen Contamination, and Toxic Substances Indicators
Question 1. Are our waters healthy?
Indicator(s):
Chlorophyll-a in the BTES over time.
Area of dead zone (off coastal Louisiana) over time.
Number of petroleum spill reports in the BTES area over time.
Question 2. Are pathogen and toxic substance concentrations increasing or decreasing?
Indicator(s):
Fecal coliform bacteria concentrations at key recreational sites in the BTES over
time.
Fecal coliform bacteria concentrations at key oyster growing water sites in the
BTES over time.
Number of pumpout and dumpstation facilities in BTES over time.
Number of fish advisories for mercury in the BTES over time.
Atrazine concentration in BTES surface waters over time.
Quality of Life: Community, Economic, and Awareness Indicators
Question 1. How are natural-resource-based business patterns changing?
Indicator(s):
Value of tourism in the BTES.
Value of citrus, row crop, cattle, and sugar cane agriculture in the BTES.
Value of oil and gas infrastructure in the BTES and value of product moved
through the BTES over time.
Aggregate dockside value of commercial fisheries landed in BTES parishes over
time and number of commercial fishing licenses over time.
Aggregate landings of recreational fishing in BTES parishes over time.
Number of recreational fishing guide/charter licenses in the BTES parishes over
time.
Question 2. How are environmental changes affecting our quality of life and
community's sustainability?
Indicator(s):
Number and duration of unacceptably high chloride levels in source (input) water
to regional drinking water plants (at least the Lafourche Parish and Terrebonne Parish
plants) over time.
Value of flood insurance claims in BTES parishes from FEMA over time.
Question 3. How is public support for a healthy estuary changing?
Indicator(s):
Number of educational brochures distributed annually by the BTNEP over time.
Number of volunteers participating annually in the following four programs over
time: beach sweep, storm drain stenciling, marsh grass planting, and Christmas
tree restoration.
Reporting Indicator Findings
The findings from the workshop were incorporated into an indicators report (2002),
which was distributed to the public and to Federal, state, and local agencies. Furthermore,
BTNEP plans to release an updated indicators report every 3 years, and it is expected that
the indicator list will grow over time as more monitoring data become available.
Revision of the Monitoring Program and Indicators
Prior to development of the indicator report, the focus questions were narrowed from the
12 questions developed at the workshop to 10. From the
workshop, 38 indicators were developed. However, BTNEP narrowed the indicators to
34, which were reported in the indicator report. BTNEP plans to reassess its indicator
program every 5 years.
The information noted throughout this case study came from the following documents
and discussions with BTNEP staff.
Battelle. 2001. Workshop Summary: BTNEP Indicators Development Workshop,
Holiday Inn, Gonzales, Louisiana, June 13-14, 2001. A publication of the Barataria-
Terrebonne National Estuary Program, Thibodaux, Louisiana, June 2001.
BTNEP. 1996. The Executive Summary: Program objectives, action plans, and
implementation strategies at a glance. CCMP - Part 1 of 4. June 1996. Available from
http://www.btnep.org/default.asp?id=30.
-------
APPENDIX A-2
NEW HAMPSHIRE ESTUARIES PROJECT
CASE STUDY
PROGRAM OBJECTIVES AND HISTORY
The New Hampshire Estuaries Project (NHEP) was formed in July 1995 when the State
of New Hampshire and the U.S. Environmental Protection Agency (EPA) developed a
cooperative agreement. The program's mission is to protect, enhance, and monitor the
environmental quality of the state's estuaries.
The first task that the NHEP initiated was the development of a Comprehensive
Conservation Management Plan (CCMP). The plan identified the goals and objectives of
the NHEP, assessed status and trends, included research and technical development
needs, outlined plan implementation, and identified funding. Overall, the NHEP CCMP
(2000) focused on five areas of concern:
Water Quality: Identify and eliminate or reduce pollution sources that degrade
water quality.
Land Use, Development, and Habitat Protection: Work with municipalities
within the estuaries watershed to ensure that land use policies and new
developments consider impacts on estuarine water quality and habitats.
Shellfish Resources: Open shellfish beds that have been closed due to pollution
or lack of testing to certify shellfish safety for human consumption.
Habitat Restoration: Protect and restore viable and diverse habitats in the
estuarine region.
Outreach and Education: Raise awareness and engage communities,
government agencies, organizations, and individuals in responsible use and
stewardship of the estuaries.
The CCMP was completed in 2000. The plan presented goals, objectives, and specific
actions to protect, enhance, and monitor New Hampshire's estuarine resources. The plan
also included a process for implementing the actions, which included organizing tasks,
reporting information to the public, making policy decisions, developing meetings and
conferences, and securing funds. In 2001, EPA requested that all National Estuary
Programs (NEPs) develop indicators to measure the progress of their programs. Based on
this request, NHEP began to develop an indicator set.
INDICATOR DEVELOPMENT PROCESS
Technical Advisory Committee Involvement
During the fall and winter of 2001-2002, the NHEP Coastal Scientist and Technical
Advisory Committee (TAC) developed a suite of environmental indicators to track
progress toward the NHEP's management goals and objectives.
The first step toward developing environmental indicators for the NHEP was to translate
the goals and objectives from the management plan into questions that could be answered
by environmental monitoring. For example, the management plan objective, "Achieve
water quality in Great Bay and Hampton Harbor that meets shellfish harvest standards"
was translated to the question, "Do NH tidal waters meet fecal coliform standards of the
NSSP for approved shellfish areas?" For some management objectives, multiple
monitoring questions were identified due to the complexity of the factors affecting
attainment of the goal. For example, the objective related to achieving water quality that
meets shellfish harvest standards depends on reducing both dry-weather and wet-weather
pollution sources. Therefore, two additional monitoring questions were developed: "Has
wet weather bacterial contamination changed significantly over time?" and "Has dry
weather bacterial contamination changed significantly over time?"
The next step was to refine the monitoring questions into a suite of environmental
indicators. The difference between environmental indicators and monitoring questions is
that indicators have precise definitions of their hypotheses, statistical methods,
measurable goals, data sources, data quality objectives, and data analysis methods.
Establishing these definitions ensures that the indicators will be interpreted consistently
and clearly. As indicators were proposed, they were vetted using the EPA's Office of
Research and Development (ORD) guidelines for ecological indicators (EPA, 1999) to
determine their level of development.
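For illustration only, the sketch below shows what such a precise indicator definition might look like when captured as a structured record; the field names and example values are hypothetical and are not drawn from the NHEP monitoring plan.

    # Minimal sketch (hypothetical fields and values) of a structured
    # indicator definition of the kind described above.
    from dataclasses import dataclass

    @dataclass
    class IndicatorDefinition:
        name: str                    # short identifier
        hypothesis: str              # what change or condition is being tested
        statistical_method: str      # how the data will be analyzed
        measurable_goal: str         # target tied to a management objective
        data_source: str             # monitoring program supplying the data
        data_quality_objective: str  # precision or power requirement
        analysis_method: str         # how results will be summarized

    # Illustrative example only, not an actual NHEP definition.
    example = IndicatorDefinition(
        name="Dry-weather bacteria trend",
        hypothesis="Dry-weather bacterial concentrations are declining",
        statistical_method="Trend test on annual geometric means",
        measurable_goal="Statistically significant downward trend",
        data_source="State shellfish-growing-area monitoring program",
        data_quality_objective="Detect a 50% change over 10 years with 80% power",
        analysis_method="Compare annual geometric means across years",
    )
    print(example.name, "-", example.hypothesis)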
Finally, the NHEP Coastal Scientist gathered data and prepared a series of indicator
reports. The process of working with the data provided more insight and opportunities to
refine the indicator definitions.
Purpose and Need for Indicators
NHEP needed environmental indicators for two purposes. First, indicators are used to
report on progress toward management plan goals and objectives. Second, the indicators
are used to report on status and trends in water quality and estuarine resources through
periodic "State of the Estuaries" reports to the public and other coastal stakeholders.
Indicator Specification and Monitoring
The TAC followed the ORD guidelines (EPA, 1999) as guidance for developing
indicators. The guidelines included:
Conceptual Relevance: Relevance to both the ecological condition and a
management question.
Feasibility of Implementation: Feasibility of methods, logistics, cost, and other
issues of implementation.
Response Variability: Exhibition of significantly different responses at distinct
points along a condition gradient.
Interpretation and Utility: Ability to define the ecological condition as
acceptable, marginal, or unacceptable in relation to the indicator results.
Indicators
According to the NHEP's monitoring plan (2004), the indicators were classified into
three tiers based on the above criteria and the number of criteria met. The three
tiers were developed to better define which indicators would answer the monitoring
questions stated in the monitoring plan, which in turn report on progress toward the
management objectives.
Environmental Indicator: A parameter that meets all four ORD criteria for
developing indicators. The measurable goals set for these indicators are tied to the
management goals and objectives. For cases where "baseline" was the measurable
goal, the best available baseline data were used, not just data from 2000 (the start
date for implementation of the NHEP management plan).
Supporting Variable: A parameter that meets the first three ORD criteria but
cannot be used to interpret environmental or ecological quality independently. Some
of these variables were still considered essential to the NHEP monitoring plan
because they provided important information for interpreting trends in other
indicators. The difference between supporting variables and environmental indicators
is that supporting variables lack measurable goals.
Research Indicator: A parameter that meets the first ORD criterion ("conceptually
relevant") but lacks clear methods and means of interpretation at the present time.
Some research indicators were retained in the monitoring plan because they have the
potential to address monitoring questions that are not covered by other indicators.
NHEP will research these potential indicators in the future.
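For illustration only, the logic of this three-tier classification can be written as a small function that takes the set of ORD criteria an indicator meets; the criterion labels and the example call are illustrative and are not part of the NHEP plan.

    # Minimal sketch of the three-tier logic described above.
    ORD_CRITERIA = (
        "conceptual_relevance",
        "feasibility_of_implementation",
        "response_variability",
        "interpretation_and_utility",
    )

    def classify(criteria_met):
        """Assign a tier based on which ORD criteria an indicator meets."""
        if all(c in criteria_met for c in ORD_CRITERIA):
            return "Environmental Indicator"   # meets all four criteria
        if all(c in criteria_met for c in ORD_CRITERIA[:3]):
            return "Supporting Variable"       # meets the first three
        if "conceptual_relevance" in criteria_met:
            return "Research Indicator"        # conceptually relevant only
        return "Not retained"

    # Hypothetical example: an indicator meeting the first three criteria.
    print(classify({"conceptual_relevance",
                    "feasibility_of_implementation",
                    "response_variability"}))  # -> Supporting Variable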
For some NHEP management objectives, it was not possible to establish environmental
indicators because the objective is administrative in nature. "Administrative objectives"
describe actions that should be taken rather than environmental conditions to be achieved.
Therefore, NHEP's progress on these objectives was tracked by "administrative
indicators" that document the activities the NHEP or its partners have undertaken relative
to the objective. For example, for the NHEP objective to "encourage 42 coastal
communities to actively participate in addressing sprawl," the administrative indicator
reports the number of communities engaged in smart growth activities and the NHEP
actions to promote smart growth.
Issues and Management Objectives
Nearly all of the NHEP management objectives (35 of 38, or 92 percent) have been tied
to at least one indicator, with a breakdown as follows: 20 of the 38 (53 percent) will be
tracked using environmental indicators and 15 of the 38 (39 percent) will be tracked
using administrative indicators. For the remaining three management objectives, research
indicators have been identified. The NHEP also tracks 18 supporting variables that will
be used to help interpret the indicators. In total, the NHEP reports on 34 environmental
indicators, 14 administrative indicators, 18 supporting variables, and 10 research
indicators. The reason why there are so many more indicators than management
objectives (76 vs. 38) is that many objectives have been assigned multiple indicators and
supporting variables to answer multiple monitoring questions or to report on different
facets of the objective.
Environmental Indicators
The suite of indicators presented in the NHEP monitoring plan (2004) was chosen to
answer the monitoring questions discussed in the plan. The indicator numbers are not
listed sequentially because the indicators provided below were chosen by the TAC from a
larger set of indicators that was originally developed.
A. Indicators of Bacteria Pollution
Monitoring Goal: To determine the status and trends of the sanitary quality of shellfish-
growing and recreational waters.
BAC1. Acre-days of shellfish harvest opportunities in estuarine waters
BAC2. Trends in dry-weather bacterial indicator concentrations
BAC4. Tidal bathing beach postings
BAC5. Trends in bacteria concentrations at tidal bathing beaches
BAC6. Violations of Enterococci standard in estuarine waters
BAC7. Freshwater bathing beach postings
BAC8. Bacteria load from wastewater treatment plants
B. Indicators of Toxic Contaminants
Monitoring Goal: To determine the status and trends of toxic contaminants in water,
sediment, and biota of coastal New Hampshire.
TOX1. Shellfish tissue concentrations relative to Food and Drug Administration
standards
TOX8. Finfish and lobster edible tissue concentrations relative to risk-based
standards
TOX2. Public health risks from toxic contaminants in fish and shellfish tissue
TOX3. Trends in shellfish tissue contaminant concentrations
TOX4. Trends in finfish tissue contaminant concentrations
TOX5. Sediment contaminant concentrations relative to National Oceanic and
Atmospheric Administration (NOAA) guidelines
TOX6. Trends in sediment contaminant concentrations
TOX7. Benthic community impacts due to sediment contamination
C. Indicators of Nutrients and Eutrophication
Monitoring Goal: To determine the status and trends of the eutrophic conditions in New
Hampshire's coastal and estuarine waters.
NUT1. Annual load of nitrogen to Great Bay from wastewater treatment facilities
(WWTF) and watershed tributaries
NUT2. Trends in estuarine nutrient concentrations
NUT3. Trends in estuarine particulate concentrations
NUT5. Exceedances of instantaneous dissolved oxygen (DO) standard
NUT6. Exceedances of the daily average DO standard
NUT7. Trends in biological oxygen demand (BOD) loading to Great Bay
NUT8. Percent of the estuary with chlorophyll-a concentrations greater than state
criteria
D. Indicators of Shellfish Resources
Monitoring Goal: To determine the status and trends of molluscan shellfish populations
in New Hampshire's coastal and estuarine waters.
SHL1. Area of oyster beds in Great Bay
SHL2. Density of harvestable oysters at Great Bay Beds
SHL3. Density of harvestable clams at Hampton Harbor flats
SHL4. Area of clam flats in Hampton Harbor
SHL5. Standing stock of harvestable oysters in Great Bay
SHL6. Standing stock of harvestable clams in Hampton Harbor
SHL7. Abundance of shellfish predators
SHL8. Clam and oyster spatfall
SHL9. Recreational harvest of oysters
SHL10. Recreational harvest of clams
SHL11. Prevalence of oyster disease
SHL12. Prevalence of clam disease
E. Indicators of Land Use and Development
Monitoring Goal: To determine the status and trends of land use and development in
coastal New Hampshire.
1. LUD1. Impervious surfaces in coastal subwatersheds
2. LUD2. Rate of sprawl: high-impact development
3. LUD3. Rate of sprawl: low-density residential development
4. LUD4. Rate of sprawl: fragmentation
F. Indicators of Habitat Protection
Monitoring Goal: To determine the status and trends of habitat protections in New
Hampshire's coastal and estuarine waters.
HAB6. Protected conservation lands
HAB3. Protected, undeveloped shorelands
HAB4. Protected, unfragmented forest blocks
HAB5. Protected rare and exemplary natural communities
G. Indicators of Critical Habitats
Monitoring Goal: To determine the status and trends of critical species and habitats in
New Hampshire's coastal and estuarine waters.
1. HAB1. Salt marsh extent and condition
2. HAB2. Eelgrass distribution
3. HAB11. Unfragmented forest blocks
H. Indicators of Critical Species
Monitoring Goal: To determine the status and trends of critical species in New
Hampshire's coastal and estuarine waters.
1. HAB7. Abundance of juvenile finfish
2. HAB8. Anadromous fish returns
3. HAB9. Abundance of lobsters
4. HAB10. Abundance of wintering waterfowl
I. Indicators of Habitat Restoration
Monitoring Goal: To determine the status and trends of habitat restoration in New
Hampshire's coastal and estuarine waters.
1. RST1. Restored salt marsh
2. RST2. Restored eelgrass beds
3. RST3. Restored oyster beds
Reporting Indicator Findings
The NHEP publishes four data reports ("indicator reports") that illustrate the status and
trends in the various indicators. These reports are technical in nature. Each report focuses
on a different suite of indicators: shellfish, water quality, land use and development, and
habitats and species. All of the indicators are presented to the TAC, which selects a
subset to be presented to the NHEP management committee. From that subset, between
10 and 20 indicators are chosen for inclusion in the "State of the Estuaries" report. This
report is published every
3 years.
The combination of the technical reports for the scientific community and the simpler
State of the Estuaries report for other users is useful for getting indicator information to
as many people as possible.
Monitoring Program
The NHEP developed a monitoring plan for each indicator. The data quality objectives
for each indicator were matched to an appropriate sampling and analysis design using
power analysis. Sampling design details are listed in the NHEP monitoring plan.
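For illustration only, the sketch below shows the general kind of calculation a power analysis involves: estimating how many samples are needed to detect a specified effect at a chosen significance level and power. The effect size, significance level, and power target are hypothetical and do not represent the NHEP's actual data quality objectives.

    # Minimal sketch (hypothetical objectives): sample size needed to detect a
    # specified effect with a two-sample t-test.
    from statsmodels.stats.power import TTestIndPower

    # Hypothetical data quality objective: detect a difference of 0.5 standard
    # deviations between two periods with 80% power at a 5% significance level.
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                       alternative="two-sided")
    print(f"Approximately {n_per_group:.0f} samples per group are needed.")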
Indicator Implementation
The NHEP TAC is tasked with initiating, overseeing, tracking, evaluating, and updating
the implementation of the monitoring plan. According to the NHEP monitoring plan
(2004), the plan will be "fully implemented" when the NHEP is able to report on at least
one indicator for each management objective. Currently, 35 of 38 management objectives
are tied to at least one indicator.
Formal Adoption and Funding
The latest version of the NHEP monitoring plan (version 4) was approved by the NHEP
TAC in June 2004. This plan contains forecasts of funding needs through 2015. The
NHEP uses these forecasts to allocate monitoring funds each year.
Communication
The NHEP's goal is to communicate the results of environmental monitoring to four
audiences: the EPA, the NHEP Management Conference, the scientific community, and
the public, which is broadly defined to include coastal decision-makers, watershed
organizations, and interested citizens.
Data Collection and Analysis Plan
The NHEP monitoring plan contains information on data collection and analysis for each
indicator. As with most of the NEPs, the NHEP coordinates with agencies and
organizations who participate in monitoring activities in order to avoid duplication of
effort. This coordinated effort makes the most of current monitoring efforts and available
data. The NHEP maintains an inventory of all estuarine and coastal monitoring programs
in the state. The NHEP monitoring plan incorporates data collected by over a dozen
programs.
Revision of the Monitoring Program and Indicators
The NHEP Coastal Scientist and TAC review the monitoring programs and indicators
each year. The monitoring plan is updated periodically as new indicators are developed or
monitoring programs change.
The information noted throughout this case study came from the following documents.
NHEP. 2000. Comprehensive Conservation Management Plan. New Hampshire Estuaries
Project. 2000.
NHEP. 2002. Environmental Indicator Report: Water Quality. New Hampshire Estuaries
Project. December 27, 2002.
NHEP. 2003. Environmental Indicator Report: Land Use and Development. New
Hampshire Estuaries Project. April 30, 2003.
NHEP. 2003. Environmental Indicator Report: Species and Habitats. New Hampshire
Estuaries Project. April 30, 2003.
NHEP. 2003. Environmental Indicator Report: Shellfish. New Hampshire Estuaries
Project. October 14, 2003.
NHEP. 2003. The State of the Estuaries. New Hampshire Estuaries Project. September
2003.
NHEP. 2004. Monitoring Plan. New Hampshire Estuaries Project. June 2004.
All NHEP documents can be downloaded from www.nhep.unh.edu.
-------
APPENDIX A-3
NORTHEAST COASTAL INDICATORS
WORKSHOP CASE STUDY
In 2001, representatives of Federal, state, local and non-governmental organizations
(NGOs) from eastern Canada and the New England region met to discuss issues that were
common throughout the Gulf of Maine region. Their vision was of a sustainable
Northeast Atlantic ecosystem that ensures environmental integrity and that supports and
is supported by economically viable, healthy human communities. Based on this initial
discussion and the need for information on the ecosystem, the idea emerged of forming a
coordinated regional program to monitor the coastal waters from eastern Canada to the
Long Island Sound region of New York. This situation was unique because the effort
was not mandated by Federal or state regulations; it was a collaborative idea among
environmental managers of the region. The goal that developed was a group that would
voluntarily coordinate its members' current monitoring programs to determine the
overall ecological health of the northwest Atlantic region.
In 2002, the group began to formally discuss what the program would focus on and
whether organizations throughout the region felt a coordinated program could be
developed. The first step was development of a steering committee, which included staff
from various Federal, provincial, and state governments throughout the northeast United
States and eastern Canadian region. The committee initially chose to focus on three areas
of coastal environmental monitoring: nutrient overenrichment; toxics/contamination; and
habitat loss, degradation and restoration. Participants of the steering committee focused
their efforts on developing a straw coordinated regional monitoring strategy and
collecting information on current monitoring, regional concerns, and future focus areas
(e.g., questions that should be answered through the coordinated monitoring effort).
The information development step included the preparation of white papers and other
documents by the steering committee for each of the three focus areas. This information
was presented to a larger contingent of environmental managers, policy-makers,
scientists, and the region's public at the first of two workshops held in December 2002.
At the workshop, the steering committee presented its ideas for a regional coordinated
monitoring program and why it thought such a program would be important to the region.
The group was also brought together to:
Develop an ecologically driven basis for coordinating selected monitoring
programs in Atlantic Northeast coastal waters,
Develop a framework for a regional monitoring network, and
Identify new regional monitoring needs and corresponding research needs that
respond to the region's pressing management needs.
The major conclusion from the workshop was that a coordinated regional monitoring
network was needed and could be developed. Participants recommended that the
coordinated regional monitoring network be set up with the following form and
functions:
Form:
1. Geography: Nova Scotia/New Brunswick to Long Island Sound. Additional
information from other areas may be needed to support some parameters (e.g.,
atmospheric deposition).
2. Type of organization: regional public/private nonprofit or charitable
organization that incorporates existing mandates.
3. Partners: government, NGOs, businesses, academics, regional organizations.
4. Structure: steering committee or board that includes state/provincial
agencies, environmental groups, dischargers, researchers, and the public.
5. Governance/decision-making: where appropriate, voluntary compliance,
consensus, and legislative mandates (existing and new).
6. Operating budget: start with seed funding; then, after positive results have
been shown, plan on incremental growth. If funding becomes available, move
toward major initiatives.
7. Funding sources: new grants and contracts (e.g., government, foundations).
Larger monitoring groups involved would use some of their resources toward
involvement in the program in return for additional information on areas of
concern.
8. Staffing: focused full-time regional coordinator, growing to additional staff.
Function:
1. Scope/reach: government, volunteer, and academic programs and more as
appropriate to answer the questions.
2. Scale: depends on the final questions being asked.
3. Links to research: identifies priorities linked to monitoring; active proponent
of regional research; identifies new issues and problems.
4. Program design and implementation/methods: coordinate programs to meet
regional needs; apply performance-based standardized protocols as
appropriate.
5. Data management: start with web links to databases with spatial references
and metadata. As the program proceeds, standardized formats for data and
policies for making data available and reported should be developed.
6. Data synthesis and communication: integrated multifactor regional
assessments with links to management, public, and NGO needs; educational
and marketing materials; and smaller-scale assessments or larger trends and
assessments by selected issues.
7. Services provided: regional multivariate
Although other programs integrate regulatory and management needs and responsibilities
into their programs, the consensus of the participants was that this regional program
should not go beyond coordinating, collecting, and disseminating monitoring data.
Instead, a coordinated monitoring group could first provide data that regulators would
find useful in assessing water quality and management needs. If the regional program
provides useful advice and creates a valuable forum for discussion on how each
jurisdiction can better manage its waters or make recommendations for comprehensive
management that cannot be handled at the state/province level, regulators should be more
open to participation.
For this program to work, the participants felt that the major monitoring groups needed to
be involved in this process. These included: the U.S. Environmental Protection Agency's
(EPA's) National Coastal Assessment (NCA), the Gulf of Maine Ocean Observing
System (GOMOOS), Gulfwatch, Plum Island Sound Long Term Ecosystem Research,
the Massachusetts Water Resources Authority (MWRA), National Estuary Programs
(NEPs), National Estuarine Research Reserves (NERRs), the National Park Service
(NPS), aquaculture monitoring programs, and industry (e.g., power plants,
manufacturing). Participation by large monitoring programs was noted as being necessary
to provide the critical mass needed to move forward. This does not mean that other
smaller programs or new programs are not needed. However, due to the lack of funding
in most areas, data will need to be extracted from existing programs, and then augmented
where needed.
Based on the conclusions from this workshop, the steering committee was expanded and
a set of goals created to further the development of the program. The expanded
committee initially focused on getting the message of its efforts out to monitoring
programs throughout the region. The committee also used the information collected at the
workshop to develop conceptual models, questions, and information on possible
indicators throughout the region. The committee expanded its focus beyond the three
issues addressed at the first workshop to include fisheries, land use, and climate change.
This information was used to support a second workshop, conducted in January 2004,
that focused on gaining consensus on a list of key indicators for which regional data
would be compiled and used to track trends in ecosystem integrity throughout the Northwest
Atlantic region. This workshop focused on:
Reviewing current efforts to coordinate monitoring and indicator development
throughout the region.
Developing indicators that apply to the northeast coastal region of the United
States (from New York to Maine) and Canada (Gulf of Maine) under six
categories: fisheries, eutrophication, contaminants, coastal development,
aquatic habitat, and climate change.
Discussing how indicators could be created and managed, including
incorporation into existing programs, in the near future.
Informing area agency managers of the results of the workshop to get buy-in
on the necessity of the coordinated program and to collect information on
what the managers need from the program.
Participants discussed the progress made to develop the coordinated regional program
and what should be done to get the program formally started. In addition, key managers
from several of the top agencies and organizations throughout the region were invited to
hear the findings and suggestions of the workshop and to provide input on next steps that
might ensure successful program implementation.
After this workshop, a Memorandum of Understanding (MOU) was developed and
distributed among interested programs. It focused on sharing data for the coordinated
effort. Members of the steering committee also took on additional tasks to move the
program forward.
INDICATOR DEVELOPMENT PROCESS
Steering Committee Involvement
For this effort, the steering committee was a key success factor in developing a
coordinated monitoring network with indicators. Commitment of staff time by agencies
and organizations from each of the states and provinces proved to be the major catalyst in
the design and development of this program. The steering committee included
participants from:
Battelle
Connecticut Department of Environmental Protection
Environment Canada
Maine Department of Environmental Protection
Maine Sea Grant Program
Maine State Planning Office
Massachusetts Coastal Zone Management
MWRA
National Oceanic and Atmospheric Administration (NOAA)
National Marine Fisheries Service (NMFS)
EPA Headquarters
EPA Region 1
U.S. Geological Survey (USGS)
Wells NERR
Members of the steering committee were responsible for assisting with the development
and design of the regional network, but they also assisted in informing their managers
and others of the importance and usefulness of this program. Each member worked hard
to make this program a success. Some assisted by developing materials for the workshop
that communicated both the overall goals of the program and the information needed to
make decisions toward those goals. Others assisted by taking the message of
coordination to others to get programs interested in being a part of the network; still
others helped by trying to find funding for the program. Without the assistance of each
person, the program would not have moved forward.
Purpose and Need for Indicators
The steering committee determined the purpose of this program to be to track the status
and trends in ecosystem integrity throughout the Northwest Atlantic region through
collaborative partnerships. The need of the program was to provide information for
policy, management, and advocacy decisions at a regional scale.
Identify the Issues
Several environmental issues are widespread in the region. Early in the development
process, the steering committee decided to focus on a limited number of these issues and
to add topics as the program progressed. Initially, three topics were chosen based on the
Gulf of Maine Council on the
Marine Environment Action Plan 2001 to 2006 (http://www.gulfofmaine.org/council/
action_plan/action_plan2001-06.pdf). Nutrient overenrichment, toxics/contamination,
and habitat loss, degradation and restoration were covered at the first workshop in
December 2002. Participants in the first workshop raised concerns about three additional
topics: fisheries, land use, and climate change. At their request, these three topics were
included in the second workshop, held in
January 2004, along with the first three topics from the initial workshop.
Assessment of Each Issue
Each issue included in the process was assessed by reviewing available literature and
compiling the information into a statement of present status. In most instances,
monitoring programs throughout the region had reports describing the status of individual
areas, which were used to extrapolate an overall picture of the region. Although a
measurable baseline could not be specified, in most instances enough regional
information was available to allow future changes, whether beneficial or adverse, to be
noted.
As noted earlier, the steering committee developed straw documents on the issues,
questions, and possible indicators that could be used to track these issues throughout the
region. They also collected information on monitoring programs throughout the region
along with information on the types of data each program collected.
Conceptual Models
Conceptual models were developed by the steering committee in a variety of formats.
Some were written descriptions, while others were tables or pictures. Common to all of
the models was that they noted the pressures on the system, the state of the system as it
was known at that time, and the response of the system to those pressures. Figure 8
within the main body of this manual was one of the models developed.
Indicators
The focus of the January 2004 workshop was the review of questions that needed to be
answered by the program and indicators that could possibly supply the necessary data to
evaluate changes in each of the six topic areas. Below is a listing of the questions and
indicators that participants suggested the network focus on answering in their initial
efforts, based on available data from the region.
Fisheries
Overarching Question: What is the health of the fisheries with regard to ecosystem
integrity, including targeted and non-targeted species, habitat, and fisheries activities?
Question 1. What are the trends in and the status of exploited fisheries stocks?
Indicator(s):
Proportion of stocks at or above targeted abundance or biomass
Age/size structure of species from surveys and/or landings
Spatial distribution of fisheries species
Spatial and Temporal Scales: Range of species or stocks; annual to every 3-5 years
Question 2. What are the effects of fishing on non-targeted species and their associated
communities?
Indicator(s):
Characteristics of bycatch and discards
Population levels for selected species
Species diversity
Spatial and Temporal Scales: Regional based on populations or stock, biogeographic
boundaries; seasonal
Question 3. What are the effects of fishing and non-fishing activities on marine habitat
and fisheries productivity?
Indicator(s):
Area closed to fishing, both pelagic and/or benthic
Benthic diversity
Spatial distribution of bottom fishing
Spatial and Temporal Scales: Region-wide (based on biogeographic boundaries); 1 to
5 years depending on habitat, to annual, to continuous
Question 4. What are the trends in the socioeconomic characteristics of fishing?
Indicator(s):
Days at sea
Fleet composition
Commercial and recreational fishing economic value
Angler satisfaction
Overcapitalized fleets
Natural capital value
Market value for consumers
Contaminants
Question 1. How are contaminants in the region changing?
Indicator(s):
Area of sediments that have contaminant levels above sediment quality guidelines
Level of contaminants in representative non-migratory organisms
Area of shellfish bed closure by state by year
Days of beach closure due to bacterial contamination by state by year
Spatial and Temporal Scales: Specific water body scales; event to annual to decadal
Question 2. How is the input of contaminants changing over time and space?
Indicator(s):
Annual chemical load to water bodies by state
Number of bacterial source investigations and sources eliminated by year by state
Spatial and Temporal Scales: Water bodies region-wide; annual to source specific
Question 3. Are management actions changing the extent and severity of human health
effects?
Indicator(s):
Incidences of human disease caused by consumption of fish and shellfish and by
recreational contact
Level of contaminants in representative fish/shellfish and at-risk humans
Annual number of beach and shellfish closures (reopenings)
Spatial and Temporal Scales: Water bodies region-wide; annual to source specific
Question 4. How well are contaminant management actions protecting ecosystem
integrity?
Indicator(s):
Sediment quality measure by triad approach
Incidence of disease
Reproductive success
Quality of habitats as affected by contaminants
Spatial and Temporal Scales: Water bodies region-wide; annual to decadal scales
Eutrophication
Question 1. What are the extent, severity, and trends of eutrophication impacts?
Indicator(s):
Dissolved oxygen (DO)
Chlorophyll-a
Submerged aquatic vegetation
Water clarity
Spatial and Temporal Scales: Estuary-wide; seasonal to annual
Question 2. What are the sources of nutrients, can they be controlled, and how are they
changing?
Indicator(s):
Measured and modeled loads
Land use/cover (load proxy)
Population (load proxy)
Spatial and Temporal Scales: Regional; seasonal to annual to decadal
Question 3. What is the state of management measures and how can they be optimized?
Indicator(s):
DO
Chlorophyll-a
Submerged aquatic vegetation
Water clarity
Measured and modeled loads
Land use/cover (load proxy)
Population (load proxy)
Aquatic Habitat
Question 1. How is the extent, distribution, or use of coastal habitats (watersheds,
estuaries, nearshore, and offshore) changing over time?
Indicator(s):
Extent per habitat type over time
Large-scale mapping, small-scale ground surveys
Distribution per habitat type
Inventory of human use
Area, percent of public vs. private
Area, percent designated for permanent habitat protection
Question 2. How is the ecological condition of coastal habitats changing over time?
Indicator(s):
Community structure
Measure of change of relative abundance of species within habitat
Trophic structure
Species of concern
Question 3. What are the causes of coastal habitat change over time?
Indicator(s) of most important potential causes of habitat loss and degradation
(physical and hydrologic alteration, nutrient loading, resource extraction,
contaminants, climate change, sediment input)
Extent and percent habitat area altered by tidal restrictions
Boat registrations
Seagrass Nutrient Pollution Index
Indicators relating to other causes assumed covered by other groups
Coastal Development
Question 1. What is the type, pattern, and rate of land use change?
Indicator(s):
Percent change in land cover to more intensive uses
Demographic changes (population, etc.)
Types of land uses and change
Question 2. How are these changes impacting the integrity of coastal ecosystems?
Indicator(s):
Integrity of coastal ecosystems for:
Threatened and endangered coastal species
Migratory species
Invasive species
Question 3. How is the region responding to changes in coastal ecosystems?
Indicator(s):
Type, location and pace of land conservation
Type, location and pace of habitat restoration
Land management (planning, regulatory, etc.)
Climate Change
Question 1. What are the causes?
Indicator(s): None identified
Question 2. What are the impacts of climate change on weather, atmospheric and ocean
circulation, ecosystems, and society?
Indicator(s):
Precipitation trends
Storm frequency and intensity
Water temperature (surface and bottom)
Relative sea level rise
Spatial and Temporal Scales: Regional; annual to decadal
Question 3. What are the impacts of climate change on biotic ecosystems?
Indicator(s):
Warm vs. cold water finfish species diversity
Planktonic diversity
Wetlands extent, distribution and composition
Marine disease indices (e.g., MSX [multinucleated sphere unknown], dermo, shell
disease)
Spatial and Temporal Scales: Regional; annual
Monitoring Program
This program was not created to specifically monitor the indicators chosen. Participants
plan to request cooperative assistance from programs already monitoring specific areas of
the region. The data will be collected in one place so that they can be reviewed in total
and a decision made on the health of the region's ecosystem. Thus, a monitoring program
was not designed or implemented for this program, but programs may be asked to modify
their present sampling schemes to include areas not currently monitored.
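Because the regional program relies on pooling data that partner programs already collect, the aggregation step can be illustrated simply. The following sketch, written in Python with pandas, shows one way the pooled review might be set up; the program names, file names, and column layout (sample_date, indicator, value) are hypothetical assumptions for illustration and are not drawn from the workshop materials.

import pandas as pd

# Hypothetical exports from partner monitoring programs; the file names and
# column layout (sample_date, indicator, value) are illustrative only.
PARTNER_FILES = {
    "MWRA": "mwra_stations.csv",
    "LISS": "liss_stations.csv",
    "GOMOOS": "gomoos_buoys.csv",
}

def pool_partner_data(files):
    """Stack partner data sets into one table so the region can be reviewed in total."""
    frames = []
    for program, path in files.items():
        df = pd.read_csv(path, parse_dates=["sample_date"])
        df["program"] = program  # keep track of the contributing program
        frames.append(df)
    pooled = pd.concat(frames, ignore_index=True)
    # One simple region-wide summary: the annual median of each indicator across programs.
    return (pooled
            .groupby([pooled["sample_date"].dt.year, "indicator"])["value"]
            .median()
            .rename("regional_median"))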
Indicator Implementation
To ensure that an integrated decision-making system is developed, several participants
suggested that groups that are already established and working (e.g., the Gulf of Maine
[GOM] Council, with financial support for the program coming from elsewhere, and the
Long Island Sound Study [LISS]) be used to get the coordinated monitoring program
started rather than starting from scratch. These groups could help move the effort forward
at a quicker pace. Once the common needs of the program are defined, monitoring
programs not yet involved with the group will be approached to join.
One important item the participants identified is that, once the program is started, the
quality of the data being collected and any data gaps will need to be determined. Quality
could be assessed through an intercalibration exercise; then, if needed, the program can
move toward standardized methods. Everyone agreed that data are easier to compare
when they are collected in a consistent way. The other important element the group will
need to include is a feedback loop.
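As a rough illustration of what an intercalibration exercise can quantify, the sketch below (Python) compares results that two programs might report for the same split samples and flags parameters whose relative percent difference exceeds a tolerance. The tolerance value, data layout, and example numbers are assumptions for illustration, not workshop products.

def relative_percent_difference(a, b):
    """Relative percent difference (RPD) between two measurements of one split sample."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

def flag_intercalibration_gaps(paired_results, tolerance_pct=30.0):
    """paired_results: list of (parameter, program_1_value, program_2_value) tuples."""
    flagged = []
    for parameter, value_1, value_2 in paired_results:
        rpd = relative_percent_difference(value_1, value_2)
        if rpd > tolerance_pct:
            flagged.append((parameter, round(rpd, 1)))
    return flagged

# Made-up split-sample results: total nitrogen in mg/L, chlorophyll-a in ug/L.
splits = [("total nitrogen", 0.42, 0.46), ("chlorophyll-a", 5.1, 8.9)]
print(flag_intercalibration_gaps(splits))  # -> [('chlorophyll-a', 54.3)]

Parameters flagged this way would be candidates for standardized methods before their data are pooled.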
Formal Adoption and Funding
In most instances, it was agreed that it will be difficult to get ongoing monitoring
programs (i.e., GOMOOS, MWRA, LISS, Massachusetts Bay NEP) to change their focus
and financially support a new effort. To make this a success, the group will need to
secure "buy-in" from Federal (i.e., EPA, Environment Canada, NOAA, and NMFS) and
state agency leaders. It was felt that MOUs would need to be developed to ensure that
programs do not back out of the group. It was also suggested that MOUs specify the
agreement to standardize data collection and analysis methods (where needed).
Communication
The group suggested that this aspect could best be addressed through the use of various
groups that are already working rather than having new groups created (e.g., GOM's
Gulfwatch program, GOMOOS, LISS, MWRA, NCA, Mercury Deposition Network). To
assist with communication, an implementation plan, program inventory, program
description including objectives, and monitoring and data management protocols should
be developed to ensure that everyone involved understands how the group will proceed.
Then, on a predetermined basis, indicator reports and status of the environment reports
should be written to communicate the findings of the group.
Monitoring Plan Implementation
Not applicable.
Data Collection and Analysis Plan
For this program, the stakeholders will have to develop a fairly detailed data collection
and analysis plan. Because data will be coming from a variety of programs, the plan will
need to include how the data will be supplied, to whom they will be supplied, and how
often. At this time, the stakeholders are still working out these issues.
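One way to capture the "who supplies what, to whom, and how often" details as they are worked out is a small machine-readable inventory. The sketch below (Python) shows a hypothetical entry of that kind; every field name and value is an assumption made for illustration rather than an agreed element of the plan.

from dataclasses import dataclass

@dataclass
class DataSubmission:
    """One row of a data collection and analysis plan inventory (illustrative fields only)."""
    provider: str      # program supplying the data
    indicator: str     # indicator the data support
    recipient: str     # who the data are supplied to
    frequency: str     # how often the data are delivered
    file_format: str   # agreed exchange format

# Hypothetical entry; real values would be negotiated among the stakeholders.
example = DataSubmission(
    provider="MWRA",
    indicator="Dissolved oxygen",
    recipient="regional data coordinator",
    frequency="annual",
    file_format="CSV",
)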
Reporting Indicator Findings
Most workshop participants felt that it is very important to communicate the findings of
the program to managers and the public to show the value of the efforts made. To support
managers in making decisions, the group noted that the following items would be of
assistance:
Develop periodic assessments and maps.
Develop data integration and interpretation tools.
Produce products that have integrated assessments that can draw conclusions
and relate changes to stressors.
Provide a vehicle for workshops, seminars, and other opportunities to share
knowledge.
Provide reports on the socioeconomics of impacts and actions/inactions.
The public, on the other hand, is more interested in knowing things such as: What is the
status of the environment (across a variety of spatial scales and ecological
compartments), and is it improving or not? What are the scales of influence? What are
the trends? What are the responses throughout the system? Are the responses local or
regional? By what amount? How sensitive are various biogeographic areas? Are
management strategies working? Reports directed at these answers will also be
considered for publication.
Documentation of environmental condition may take the form of easily understood
"state-of-the-environment" reports. These reports might be geographically based, issue-
based, or both. The consensus of the group was that this regional program should not go
beyond coordinating, collecting, and disseminating monitoring data. Data interpretation
and management planning will be left to the regulators already managing the areas, but
the coordinated monitoring group would provide data that regulators would find useful in
assessing water quality and management needs. If the regional program provides useful
advice and creates a valuable forum for discussion on how each jurisdiction can better
manage its waters, or makes recommendations for comprehensive management that
cannot be handled at the state/province level, regulators should be more open to
participation.
Revision of the Monitoring Program and Indicators
Participants of the workshops agreed that an assessment of the overall program should be
conducted every 5 years to ensure that the program is meeting its overall goals. This
would include an assessment of the issues being monitored, the questions being
answered, the monitoring programs being used, and the indicators being tracked. In
addition to the 5-year reassessment, the program's data could be assessed annually or
biennially through external peer reviews of products generated by the program.
The information noted throughout this case study came from personal knowledge of the
process (personal communication Lynn McLeod, Battelle, 2005) and the following
documents:
ANCMS. 2003a. ANCM Summit Fact Sheet #1. February 2003. Available from
http://www.gulfofmaine.org/nciw/Fact_Sheet.pdf
ANCMS. 2003b. ANCM Workshop Summary Report. February 2003. Available from
http://www.gulfofmaine.org/nciw/ancms2002.asp
NCIW. 2004. NCIW Workshop Summary. Available from
http://www.gulfofmaine.org/nciw/FinalWorkshopSummary.pdf
APPENDIX B
INDICATORS DEVELOPED BY VARIOUS GROUPS
The following is a list of indicators chosen by various groups for monitoring progress
within estuaries or coastal areas around the United States. The list is divided into six
categories:
Fisheries
Contaminants/water quality
Contaminants/sediments
Land use change
Aquatic habitat
Other
Fisheries
Trends, abundance, and diversity (total number of species, total number of
individuals, and biomass) in fish, shellfish, and crustaceans
Toxic tissue concentration and trends of contamination (metals, polycyclic
aromatic hydrocarbon [PAHs], polychlorinated biphenyls [PCBs], pesticides,
dioxins, furans, and dichlorodiphenyltrichloroethane [DDT]) in lobsters, shellfish,
and fish
Public health risks from toxic contaminants in fish and shellfish tissue
Fish diseases (observation of fish diseases [fin erosion, tumors, etc.] or individual
fish samples)
Changes in the health, ecology, or other effects on recreational fish
Weights of fish populations
Number of fish kills
Biodiversity of bottom-dwelling species and mid-water species
Status of shellfish beds (changes in acreage of closed and open shellfish flats)
Changes in the health, ecology or other effects on landings (catch and effort, catch-
per-unit-effort [CPUE])
Shellfish habitat (acres of shellfish beds classified as suitable for harvesting and for
seed stock)
Shellfish harvest (bushels of oyster harvested annually and dollar value of the
harvest)
Density of harvestable clam flats
Abundance of shellfish predators
Weight of shellfish landings
Disease linked to contaminated shellfish
Recreational harvest of clams
False mussel stands
Oysters (population, restored beds [acres], bed acres restored with disease resistant
American oyster stock, disease, bed acreage)
Recreational harvest of oysters
Oyster density on public seed grounds over time
Oyster abundance and health on private leases
Bacteriological water quality of oyster harvesting waters
Lobster harvest (pounds of lobster harvested)
Lobster permits (permits and licenses)
Anadromous fish runs
Anadromous fish returns
Annual number of fish migrating downstream
Number of stream miles opened through fish passage enhancement projects
Stream miles opened to migratory fish
Public use and access
fish advisories
fish tissue - persistent, bioaccumulative, and toxic contaminant levels
percent of streams impaired for fishing
shellfish bed closures
Right whale populations: number of right whales
Macroinvertebrates (freshwater)
Biological production and respiration (phytoplankton and zooplankton
productivity, abundance, and composition, bacterial production, respiration)
Macrophyte abundance and composition
Estimated economic impact of recreational fishing over time
Value and number of licenses of commercial and recreational fishing
Change in number of saltwater fishing licenses
Acres of commercial shellfish areas (total acres of open, restricted, closed, and
prohibited commercial shellfish areas)
Commercial fishing pressure (weight [pounds] of commercial catch)
Recreational fishing pressure (recreational CPUE for targeted resident fish species)
Presence/absence of disturbance indicator species, non-native fish
Occurrence of non-native crabs
Contaminants/Water Quality
Water quality (temperature, salinity, pH, turbidity, dissolved oxygen [DO],
chlorophyll-a, total dissolved solids, total suspended solids, nutrients, phosphorus,
nitrates, metals, organics)
Light attenuation (secchi disk depths, or some equivalent measure of light
attenuation)
Point source and non-point source nitrogen loading
Trophic state index of water
Isohaline locations
Atrazine concentration in surface waters over time
Atmospheric deposition
Number and duration of high-chloride events in source water to the Clotilda
drinking water plant over time
Occurrence of harmful algal blooms (species, extent, duration, ecological and
human health effects)
Water bodies on Department of Environmental Protection's planning or verified
lists for impairments
Tributyltin (TBT) concentration levels (trends in TBT water column
concentrations)
Types and amounts of floatable debris (ocean-side barrier island and estuaries)
Change in ambient shallow ground-water with respect to established drinking water
standards
Changes in specific conductance
Amount of contaminant inputs from major point sources and tributaries
Composition of aquatic debris (floating and in coastal areas)
Regions of concern: areas with known chemical contaminant-related impacts
Number of pumpout and dumpstation facilities over time (boat waste)
Combined sewer overflow (CSO) abatement (frequency of CSO events and volume
and duration of overflow events)
Toxic contaminants in stormwater runoff and receiving waters
Trends in permitted discharge flow and number of National Pollutant Discharge
Elimination System permits
Sewage disposal and septic tank loads
Industry reported releases and transfers of chemical contaminants
Releases and transfers of chemical contaminants from Federal facilities
Dischargers in significant noncompliance
Municipal facilities in the watershed using nutrient reduction technology
Communities implementing stormwater best management practices
Conversion of septic systems to central sewer
Removal of direct discharges into the bays
Concentrations of fecal bacteria (fecal coliform, Enterococci, E. coli) in surface
water as a proportion of criteria/screening levels
Seagrass acreage change (temperature, pH, total suspended solids, DO, nitrogen,
biological oxygen demand (BOD), phosphorus, secchi depth, salinity)
Non-toxic sewage treatment plants
Number of volunteer water quality monitoring stations
Concentration of toxics in sediment and biota (number of water bodies on 303(d)
list-in general or for contaminants of concern)
Safety at swimming beaches: Enterococcus levels and number of beach closures
Trends in dry weather bacterial indicator concentrations (fecal coliform, E. coli,
Enterococcus)
Trends in wet weather bacterial indicator concentrations
Bacteria load from wastewater treatment plants (fecal coliform, total coliform,
flow)
Macroalgal biomass (measure of benthic productivity)
Pollution trends
Acres of cropland under nutrient management
Eelgrass nutrient pollution index
Distribution of nuisance macroalgae
Other toxic substances in groundwater (nitrate)
Pesticides in ground and surface waters
Number of petroleum and chemical spill reports over time (total volume spilled)
Contaminants/Sediments
Sediment chemistry (using U.S. Environmental Protection Agency [EPA] National
Coastal Assessment [NCA] data)
Sediment toxic contamination (metals, PAHs, PCBs, pesticides, dioxins, furans,
butyltins, and halogenated hydrocarbons)
Sediment toxicity (toxicity of sediment elutriate to Ampelisca)
Benthic community structure, composition (species and numerical data), and health
(total number of benthic species, total number of individuals [abundance], and
biomass)
Benthic index for mud flat, salt flats, and subtidal unvegetated (population density
of selected infauna, concentration of contaminants of concern, salinity, grain size,
and DO)
Sediment trends in rivers entering the bay: flow adjusted concentration and
monitored loads
Water quality - contaminated sediment (benthic toxicity and organic toxicity)
Concentrations of selected contaminants in sediment as a proportion of probable
effects level
Suspended sediments
Freshwater macroinvertebrate community (wide array of sample statistics including
a summary index of biotic integrity)
Benthos (marine)
Sediment contaminant concentrations relative to National Oceanic and
Atmospheric Administration (NOAA) guidelines
Trends in sediment contaminant concentrations
Other toxic substances in biota (chlordane, DDT, metals, PCBs, polychlorinated
dibenzodioxins, polychlorinated dibenzofurans, mercury, cadmium)
Macroinfauna species
Meiofaunal species
Organic pollutants in sediment (volatile and semivolatile organic compounds,
polychlorinated naphthalenes)
Sediment composition
Percent organic carbon in sediment
Land Use
Land Use/land cover (riparian zones, wetland area, agriculture near water, amount
of edges, dominance, miles of roads, amount of agriculture and urban area,
contagion, fractal dimension, recovery time, edge amount per patch size, land
cover transition matrix, corridors between patches, diffusion rates, inter-patch
distances, actual vs. potential vegetation, percolation thresholds, largest patch, loss
of rare land cover, habitat for endangered species)
Coastal habitat restoration (acreage and diversity of coastal habitats restored to
healthy and historic ecological functions, tidal wetlands, freshwater wetlands,
estuarine embayments, coastal and inland forest, beaches and dunes, cliffs and
bluffs, coastal grasslands, intertidal flats, rocky intertidal zones, submerged aquatic
vegetation and shellfish reefs)
Habitat restoration (number of restoration projects (a) planned; (b) in progress;
(c) implementation completed)
Percent forest cover (acreage of tree cover)
Habitat opportunity (number of reconnections between open water and diked or
leveed former tidal habitats)
Habitat loss (number of dredging, fill and shoreline permits issued)
Habitat protection and conservation (number of protection and conservation
projects (a) planned; (b) in progress; (c) implementation completed)
Environmental lands acquisition (acreage of wetlands and environmentally
sensitive lands acquired)
Net change in habitats (sum of number of completed restoration and compensatory
mitigation projects minus the sum of all habitat loss projects from dredge, fill,
diking, etc.)
Location of land loss
Protected open space
Wetland loss (acres of wetlands lost and index of biological integrity)
Trends in number, type, or location of wetlands created, enhanced, or preserved
Land-water ratios by fresh-, brackish-, intermediate-, and saltmarsh habitat type
over time
Specific land-use delineation for developed and agricultural areas
Change in shoreline habitat/sensitive areas
Change in stream flow (freshwater inputs)
Salt marsh extent and condition
Acreage of land converted to alternate use
Unfragmented blocks of land (unfragmented blocks of land > 250 acres and >2,000
acres)
Indicators of freshwater wetland functions
Population and land use trends
average lot size
number of households
farmland acres
public parkland
developed land
Public use and access
public access points
potable water withdrawals
human and industrial water consumption
projected future water demands
Extent of turf grass
Number of sewered and unsewered homes
Remediated stormwater sites
Change in number of bay and tributary public access points/areas (boat launches,
parks, fishing piers)
Change in number and location of marine pumpout facilities
Change in commercial landings and commercial boat licenses
Change in recreational landings: number and size
Change in amount of impervious surface (aerial photography and geographic
information system (GIS) mapping)
Interior to edge ratio
Hydrologic/bathymetric change
Municipal waste water permit violations
Number of 303(d) listed streams
Number and percentage of shorelines hardened (bulkheading)
Number of types of development permits
Quality, quantity, and identification of outfalls
Rate of sprawl - low-density residential development (road miles per capita in the
coastal watershed)
Estimated vehicle nitrogen oxide emissions vs. vehicle miles traveled
Rate of sprawl - fragmentation (habitat fragmentation per capita in the coastal
watershed)
Population levels or relative abundance of key plant and animal species
Number of listed, rare or endangered species by year as related to habitat acreage
Aquatic Habitat
Submerged aquatic vegetation habitat (abundance, change, health, distribution, and
density by species)
Area of brown marsh
Acres of marsh damaged by non-native nutria over time
Change in base flow of tributary streams over time
Saltwater intrusion
Water levels
Percent exotics within saltwater marshes and location
Acreage of subbasins that no longer contribute flows to their historic receiving
water bodies
Acreage of subbasins returned to historic receiving water bodies
Net difference between the acreage of subbasins that no longer contribute flows to
their historic receiving water bodies and the acreage of subbasins returned to
historic receiving water bodies
Tidal wetlands, tidal wetlands buffer habitats (wetland acres restored/preserved,
number of successful wetland mitigation sites, acreage of wetlands buffered,
wetland acres, riparian buffer [miles])
Overall restoration initiatives (number of acres preserved, restored, enhanced,
habitat acres on corporate properties)
Fish passage/blockages (stream miles opened, stream blockages removed)
Sneaker index (water clarity and turbidity)
Percent change in inflows from major tributaries
Annual gaged freshwater inflows compared to inflow recommendations
Riparian integrity (percent of riparian zone (50-meter and 100-meter buffer) with
native vegetation)
Distribution of coarse and soft bottoms
Diversity and composition of riparian insect assemblages
Deposition in the estuary (sediment deposition and accumulation, changes in bay
bathymetry and tidal prism)
Erosion in the watershed (changes in channel cross sections due to aggradation and
deposition of sediment)
Portion of channels where newly deposited sediments pass suitability criteria
(contaminants in sediments)
Percentage of navigation projects that contain one or more of the following:
environmental dredging for the purpose of toxics reduction, brownfields
remediation, habitat acquisition, habitat restoration, improvement of appropriate
public access, or beneficial reuse of dredged material
Other toxic substances in sediments (PCBs, DDT, PAHs, arsenic, copper, lead,
mercury, silver, radionuclides)
Habitat quality
Biological resources
Other
Coastal, nesting, threatened, and endangered bird trends, abundance and diversity
(population estimates of birds by species)
Trends, abundance, and diversity in waterfowl
Change in number of waterfowl hunting licenses
Population condition of endangered species (population size and/or reproductive
success [breeding/fledgling pairs, etc.])
Cost of invasive species control
Species diversity (wildlife)
Rare plant and animal populations
Native species assessment (number of estuarine-linked species listed under Federal
or state Endangered Species Act programs)
Percent non-native species
Number, frequency, and occurrence of non-native species
Acreage of non-native sub-macrophytes
Overall restoration initiatives (habitat acres impaired by invasive species, habitat
acres controlled for invasive species)
Invasive species (species composition and abundance)
Seals (number of seals)
Seal tissue toxics (PCBs, dioxins, furans, pesticides, and heavy metals)
Boating use
Water allocation
Soil types
Alligator nests
Black bear abundance
Reptile and amphibian population abundance
Atmospheric and other pollution inputs (organic pesticides, PCBs, trace metals and
byproducts of combustion)
Population within 50 miles of the watershed (measure population growth and
demographic trends to determine potential human use of the resources)
Trends in shipping traffic versus vessel fuel spills and vessel incidents
Value of shipping cargo, recreational boating, energy production wells, nature
tourism
Muck removal (volume and acreage of muck deposits removed)
All dredged material being used beneficially
Watershed population levels (measure population growth and demographic trends
in the Long Island Sound watershed)
Comprehensive Conservation Management Plan (CCMP) progress
Best management practices activity
Population of watershed municipalities
Percent of communities implementing development that works
Percent of communities implementing policies of public participation
Beach clean-up volunteers (number of volunteers)
Debris collected during International Coastal Cleanup (composition of debris,
weight of debris, miles of shoreline cleaned)
Website visitors (number of times the web site is accessed by the public per year)
Number of environmental organizations (dates/times, participation numbers, and
number of events)
Number of environmental activities-specific (dates/times, participation numbers,
and number of events)
Number of environmental science courses/sections
Number of kids reached in classrooms, field projects, on-river, service learning
Number of school districts
Number of teachers in estuary-related training courses
Number of adults completing environmental science training
Number of teachers working with estuary partnership or estuary partnership
curriculum
Number of non-formal K-12 environmental projects, events
Number of partnerships between schools and outside entities
Number of class visits to learning centers for an organized experience related to the
estuary
Number of educational materials distributed over time
Number of volunteers involved: restoration projects, clean up projects, and water
quality monitoring (number of projects, number of volunteers, number of first time
volunteers, number of returning volunteers, and is the demand growing)
Number of media hits
Number of recycling programs
Number of license plates sold
Cumulative number of businesses recognized as stewards of the estuary
Number and value of flood insurance claims over time
Revenues and jobs generated by tourism over time
APPENDIX C
RESOURCES ON INDICATORS
American Enterprise Institute for Public Policy Research. 2003 Index of Leading
Environmental Indicators. 2003.
Andreasen, J.K., R.V. O'Neill, R. Noss, and N.C. Slosser. Considerations for the
Development of a Terrestrial Index of Ecological Integrity. Ecological Indicators.
2001. 1(1):21-35.
Atlantic Coast Environmental Indicators Consortium. Environmental Indicators in the
Estuarine Environment. Funded by EPA STAR Program. Available from
http://www.aceinc.org/.
Bakkes, J.A., G.J. van den Born, J.C. Helder, R.J. Swart, C.W. Hope, and J.D.E. Parker.
An Overview of Environmental Indicators: State of the Art and Perspectives.
Report commissioned by the United Nations Environment Programme. 1994.
Battelle. Usefulness of National Estuary Program (NEP) Data as National Environmental
Indicators. Submitted to EPA Ocean Coastal Protection Division. Work
Assignment 0-12, Contract No. 68-C-03-041. September 5, 2003.
Bay Area Alliance for Sustainable Communities. State of the Bay Area: A Regional
Report. Available from http://www.bayareaalliance.org/indicators.pdf.
Bernard, J. State of the Science Ecological Indicators Report-Unpublished Draft.
Prepared for the EPA. 2002.
Bhattacharya, B., S.K. Sarkar, and R. Das. Seasonal Variations and Inherent Variability
of Selenium in Marine Biota of a Tropical Wetland Ecosystem: Implications for
Bioindicator Species. Ecological Indicators. 2003. 2:367-375.
California EPA and California Resources Agency. Environmental Protection Indicators
for California. 2002. Available from
http://www.oehha.ca.gov/multimedia/epic/Epicreport.html#bigfile.
Canada's Sustainability Indicators Initiative. The Environment and Sustainable
Development Indicators Initiative of the National Round Table on the
Environment and the Economy. 2004. Available from
http://www.sustreport.org/indicators/nrtee_esdi.html.
Caughlan, L. and K.L. Oakley. Cost Considerations for Long-term Ecological
Monitoring. Ecological Indicators. 2001. 1:123-134.
Center for Disease Control and Prevention. Environmental Public Health Indicators.
2003. Available from http://www.cdc.gov/nceh/indicators/EPHI.pdf.
Central Texas Sustainability Indicators Project. Central Texas Sustainability Indicators
Initiative. Available from http://www.centex-indicators.org/.
Charlotte Harbor National Estuary Program. Environmental Indicators Draft Report.
2004.
Chesapeake Bay Program. Bay Trends & Indicators. Available from
http://www.chesapeakebay.net/status.cfm?view=All&subjectarea=INDICATORS
Chesapeake Bay Program. The State of the Chesapeake Bay: A Report to the Citizens of
the Region. 2002. EPA903-R-02-002.
Commission on Geosciences, Environment and Resources. Ecological Indicators for the
Nation. Washington, D.C.: National Academy Press. 2000. Available from
http://books.nap.edu/catalog/9720.html.
Dale, V.H. and S.C. Beyeler. Challenges in the Development and Use of Ecological
Indicators. Ecological Indicators. 2001. 1(1):3-10.
Delaware Estuary Program. Environmental Indicators. September 2000. Available from
http://epa.gov/owow/estuaries/coastlines/jan02/envindicator.html.
Dumanski, J. and C. Pier. Application of the Pressure-State-Response Framework for the
Land Quality Indicators (LQI) Programme. Available from
http://www.fao.org/docrep/W4745E/w4745e08.htm.
Environment Canada. Environmental Indicators [Web Page]. Available from
http://www.ecoinfo.ec.gc.ca/env_ind/indicators_e.cfm.
EPA (U.S. Environmental Protection Agency). The Ambient Air Monitoring Program.
2002. Available from http://www.epa.gov/air/oaqps/qa/monprog.html.
EPA. Aquatic Habitat Indicators and their Application to Water Quality Objectives
within the Clean Water Act. EPA/Idaho Water Resources Research Institute.
1999. EPA 910-R-99-014. Available from http://yosemite.epa.gov/R10/
ecocomm.nsf/37aa02ee25d11ce188256531000520b3/74476bac1ae7e9fb88256b5f
00598b43/$FILE/Ahi_fma.pdf.
EPA. Draft Report on the Environment. 2003. U.S. Environmental Protection Agency,
Office of Environmental Information and the Office of Research and
Development. EPA-260-R-02-006. June 2003. Available from
http://www.epa.gov/indicators.
EPA. EMAP Research Strategy. Environmental Monitoring and Assessment Program.
July 2002, EPA 620/R-02/002.
EPA. Environmental Indicators of Water Quality in the United States. 1996, EPA 841-R-
96-002.
EPA. Estuarine and Coastal Marine Waters: Bioassessment and Biocriteria Technical
Guidance. Office of Water, Washington, D.C. 2000. Report No. EPA-822-B-00-
024. Available from http://www.epa.gov/ost/biocriteria/States/estuaries/
estuaries.pdf.
EPA. Evaluation Guidelines for Ecological Indicators. EMAP Environmental Monitoring
and Assessment Program. 2000. EPA/620/R-99/005. Available from
http://www.epa.gov/emap/html/pubs/docs/resdocs/ecol_ind.pdf.
EPA. Great Lakes Monitoring: Indicators [Web Page]. Available from
http://www.epa.gov/glnpo/glindicators/index.html.
EPA. Index of Watershed Indicators: An Overview. 2002. Available from
http://www.epa.gov/iwi/iwi-overview.pdf.
EPA. Initial Analysis of Issues/Questions from Indicator Workshops. 2002. Accessed
11/23/2004. Available from http://www.epa.gov/cgi-bin/epaprintonly.cgi.
EPA. National Coastal Condition Report. 2001. EPA Report No. EPA-620/R-01/005.
Available from http://www.epa.gov/owow/oceans/nccr/downloads.html.
EPA. National Estuary Program [Web Page]. Available from http://www.epa.gov/nep/.
EPA. Risk Screening Environmental Indicators [Web Page]. Available from
http://www.epa.gov/opptintr/rsei/.
EPA. State of the Great Lakes Ecosystem Conference. Available from
http://epa.gov/glnpo/solec.
EPA and Environment Canada. The State of the Great Lakes. 2003.
EPA Science Advisory Board. A Framework for Assessing and Reporting on Ecological
Condition: An SAB Report. Washington D.C. 2002. Available from
http://www.epa.gov/sab/pdf/epec02009.pdf.
Florida Coastal Management Program. Florida Assessment of Coastal Trends. Available
from http://www.pepps.fsu.edu/FACT.
Fulton, E.A., A.D.M. Smith, and C.R. Johnson. Effect of Complexity on Marine
Ecosystem Models. Marine Progress Series. 2003. 253:1-16.
Fundy Forum. Hot Topics in the Bay of Fundy. Accessed 2003. Available from
http://www.fundyforum.com/issues.html.
Gosselin, P., C. Furgal, and A. Ruiz. U.S.-Mexico Border Field Office of the Pan
American Health Organization. Environmental Health Indicators for the U.S.-
Mexico Border Concept Document. 2002. Available from
http://www.fep.paho.org/english/env/Indicadores/Environmental%20Public%20H
ealth%20Indicators.pdf.
Green Mountain Institute for Environmental Democracy. New England Environmental
Goals and Indicators Project [Web Page]. Available from http://www.gmied.org.
Healthy Community Initiative of Greater Orlando. Legacy 2002 - Greater Orlando
Indicators Report. 2002. Available from
http://www.hcbs.org/files/18/882/Legacy_Report.pdf.
Heinz Center for Science, Economics and the Environment. The Coastal Zone
Management Act: Developing a Framework for Identifying Performance
Indicators. 2003. Available from http://www.heinzctr.org/NEW_WEB/PDF/
CZMA.pdf.
Heinz Center. The State of the Nation's Ecosystems: Measuring the Lands, Waters, and
Living Resources of the United States: Cambridge University Press, Cambridge
UK. 2002. Available from http://www.heinzctr.org/ecosystems/report.html.
Hill, B.H., A.T. Herlihy, P.R. Kaufmann, S.J. DeCelles, and M.A. Vander Borgh.
Assessment of Streams of the Eastern United States Using a Periphyton Index of
Biotic Integrity. Ecological Indicators. 2003. 2:325-338.
Iliopoulou-Georgudaki, J., V. Kantzaris, P. Katharios, P. Kaspiris, T. Georgiadis, and B.
Montesantou. An Application of Different Bioindicators for Assessing Water
Quality: A Case Study in the Rivers Alfeios and Pineios (Peloponnisos, Greece).
Ecological Indicators. 2003. 2:345-360.
International Council for the Exploration of the Sea (ICES). Environmental Status of the
European Seas, 2003. Available from
http://www.ices.dk/reports/germanqsr/23222_ICES_Report_samme.pdf.
Kaplan, M.B., and L.J. McGeorge. The Utility of Environmental Indicators for
Policymaking and Evaluation from a State Perspective: The New Jersey
Experience. Risk Policy Report. 2001. 8(5):39-41. Available from
http://www.scc.rutgers.edu/cei/Resources/may4sg.pdf.
Kleinschmidt Associates. Ecological Indicators for Narragansett Bay and its Watersheds.
A report prepared for the Partnership of Narragansett Bay. 2003. Available from
http://www.kleinschmidtusa.com/ng/default.htm.
Kleinschmidt Associates. Partnership for Narragansett Bay Ecological Indicators
Framework Workshop. West Greenwich, RI. 2003. Available from
http://www.kleinschmidtusa.com/ng/default.htm.
Kondratyev, S., T. Gronskaya, N. Ignatieva, I. Blinova, I. Telesh, and L. Yefremova.
Assessment of Present State of Water Resources of Lake Ladoga and Its Drainage
Basin Using Sustainable Development Indicators. Ecological Indicators. 2002.
2:79-92.
Kurtz, J.C., L.E. Jackson, and W.S. Fisher. Strategies for Evaluating Indicators Based on
Guidelines from the Environmental Protection Agency's Office of Research and
Development. Ecological Indicators. 2001. 1(1):49-60.
Lake Champlain Ecosystem Indicators Project. Developing Ecosystem Indicators and an
Environmental Score Card for the Lake Champlain Basin Program. Accessed
2005 Jan 17. Available from http://www.uvm.edu/envnr/indicators/.
Lower Columbia River Estuary Partnership. Environmental Indices. Available from
http://www.lcrep.org/indices.htm.
Lura Consulting. Report to the Expert Panel: Societal Responsibility Indicators - Strategic
Goals and Objectives. A Work in Progress. Prepared for Environment Canada and
the U.S. Environmental Protection Agency. 2001.
Manoliadis, O.G. Development of Ecological Indicators - A Methodological Framework
Using Compromise Programming. Ecological Indicators. 2002. 32:1-8.
Massachusetts Executive Office of Environmental Affairs. Massachusetts Bay CCMP.
2003. Accessed 2004 Nov 22. Available from
http://www.mass.gov/envir/massbays/ccmp.htm#ccmp.
Meador, M.R., L.B. Brown, and T. Short. Relations between Introduced Fish and
Environmental Conditions at Large Geographic Scales. Ecological Indicators.
2003. 3:81-92.
MWRA. State of Boston Harbor. Available from
http://www.mwra.state.ma.us/harbor/html/2002-09.htm.
National Academy of Sciences. Institutions for Effective Management of the
Environment. Report (part 1) of the Environmental Study Group to the
Environmental Studies Board of the National Academy of Sciences, National
Academy of Engineering. Washington D.C. 1970.
National Advisory Council for Environmental Policy and Technology. EPA - Managing
Information as a Strategic Resource: Final Report and Recommendations of the
Information Impacts Committee. Washington D.C. 1998. EPA 100-R-98-002.
National Council for Science and the Environment. Recommendations for Improving the
Scientific Basis for Environmental Decisionmaking. A report from the First
National Conference on Science, Policy and the Environment. Washington D.C.
2000. Available from http://www.ncseonline.org/2000conference/
PDFmasters/2000ncspeRecommendations_txtonly.pdf.
National Estuarine Research Reserve. National Estuarine Research Reserve System Wide
Monitoring Program. Available from http://nerrs.noaa.gov/.
National Science and Technology Council. Integrating the Nation's Environmental
Monitoring and Research Networks and Programs: A Proposed Framework. A
Report by the Committee on Environmental and Natural Resources. Washington
D.C. 1997.
Neumann, M., J. Baumeister, M. Leiss, and R. Schulz. An Expert System to Estimate the
Pesticide Contamination of Small Streams Using Benthic Macroinvertebrates as
Bioindicators Part 2: The Knowledge Base of LIMPACT. Ecological Indicators.
2003. 2:391-401.
Neumann, M., M. Leiss, and R. Schulz. An Expert System to Estimate the Pesticide
Contamination of Small Streams Using Benthic Macroinvertebrates as
Bioindicators Part 1: The Database of LIMPACT. Ecological Indicators. 2003.
2:379-389.
New York-New Jersey Harbor Estuary Program. New Jersey Hudson Bay Environmental
Indicators Initiatives [Web Page]. Available from
http://www.harborestuary.org/reports/harborh.htm.
Office of Ocean and Coastal Resource Management. National Coastal Management
Performance Measurement System. Available from
http://www.ocrm.nos.noaa.gov.
O'Malley, R., K. Cavender-Bares, W.C. Clark. Providing "Better" Data: Not as Simple as
It Might Seem. Environment. 2003. 45(4):8-18. Available from
http://www.findarticles.com/p/articles/mi_m1076/is_4_45/ai_101290775.
Oregon Progress Board. Oregon State of the Environment Report. Available from
http://egov.oregon.gov/DAS/ODB/soer2000index.shtml.
Oregon Progress Board. Pacific Northwest Salmon Habitat Indicators. Available from
http://ecy.wa.gov/biblio/99301.html.
Orfanidis, S., P. Panayotidis, and N. Stamatis. An Insight to the Ecological Evaluation
Index (EEI). Ecological Indicators. In Press, 64:1-7.
Orfanidis, S., P. Panayotidis, and N. Stamatis. Ecological Evaluation of Transitional and
Coastal Waters: A Marine Benthic Macrophytes-based Model. Mediterranean
Marine Science. 2001. 2(2):45-65.
Paul, J.F., K.J. Scott, D.E. Campbell, J.H. Gentile, C.S. Strobel, R.M. Valente,
S.B. Weisberg, A.F. Holland, and J.A. Ranasinghe. Developing and Applying a
Benthic Index of Estuarine Condition for the Virginian Biogeographic Province.
Ecological Indicators. 2001. 1:83-99.
Pew Ocean Commission. America's Living Oceans: Charting a Course for Sea Change.
2003. Available from http://www.pewoceans.org/oceans/oceans_report.asp.
Pidot, L. Tapping the Indicators Knowledge-Base: "Lessons Learned" by Developers of
Ecological Indicators. Prepared for the State of the Gulf Summit Steering
Committee. 2003.
Popp, J., D. Hoag, and D.E. Hyatt. Sustainability Indices with Multiple Objectives.
Ecological Indicators. 2001. 1(1):37-47.
Portney, P. Reforming Environmental Regulation: Three Modest Proposals. Columbia
Journal of Environmental Law. 1988, 13.
President's Council on Sustainable Development. Sustainable Development in the United
States. Accessed 2004. Available from
http://clinton1.nara.gov/White_House/EOP/pcsd/.
Quantitative Ecosystem Indicators for Fisheries Management International Symposium,
March 31- April 3, 2004. Paris, France. Available at
www.ecosystemindicators.org.
Ribaudo, M.O., D.L. Hoag, M.E. Smith, and R. Heimlich. Environmental Indices and the
Politics of the Conservation Reserve Program. Ecological Indicators. 2002,
Robson, M.G. and C.J. Whitaker. New Roles for Science in Developing Environmental
Indicators: A New Jersey Example. 2001. Available from
http://www.scc.rutgers.edu/cei/Resources/may2sg.pdf.
San Francisco Estuary Institute. Bay Area EcoAtlas and Pulse of the Bay Report.
Accessed 2005 Jan 17. Available from http://sfei.org/.
Segnestam, L. Environmental Performance Indicators: A Second Edition Note.
Environmental Economics Series. 1999. Paper No. 71.
Suter II, G. W. Applicability of Indicator Monitoring to Ecological Risk Assessment.
Ecological Indicators. 2001. 1:101-112.
Tampa Bay Estuary Program. Baywide Environmental Monitoring Report: Summary and
Conclusions. Accessed 2005 Jan 17. Available from
http://www.tbep.org/baystate/bemr.html.
Texas Commission on Environmental Quality. State of Texas Environment Report.
Available from http://www.tnrcc.state.tx.us/.
Texas Public Policy Foundation. Texas Index of Leading Environmental Indicators 2000.
2000. Available from http://www.epa.gov/air/oaqps/qa/monprog.html.
Thompson, B. and A. Gunther, (San Francisco Estuary Institute). Development of
Environmental Indicators of the Condition of San Francisco Estuary. 2004, SFEI
Contribution 113. Available from
http://www.sfei.org/cmr/reports/Indicatorreport_final.pdf.
Tillamook Bay National Estuary Project. Tillamook County Performance Partnership:
Goals. Available from http://www.co.tillamook.or.us/gov/estuary/tcpp/goals.html.
Top 10 by 2010. Southeastern Louisiana Top 10 by 2010 Indicators Report. 2002.
Available from http://indicators.top10by2010.org/pdf/Top10Report.pdf.
Trowbridge, P. Environmental Indicator Report: Land Use and Development. New
Hampshire Department of Environmental Services. 2003.
Trowbridge, P. Environmental Indicator Report: Shellfish. New Hampshire Department
of Environmental Services. 2002.
Trowbridge, P. Environmental Indicator Report: Species and Habitats. New Hampshire
Department of Environmental Services. 2003.
Trowbridge, P. Environmental Indicator Report: Water Quality. New Hampshire
Department of Environmental Services. 2002.
Trowbridge, P. New Hampshire Estuaries Project Monitoring Plan. New Hampshire
Department of Environmental Services. 2004. Available from
http://www.nh.gov/nhep/publications/pdf/nhepmonitoringplan-nhep-04.pdf.
U.S. Government Accountability Office. Environmental Indicators: Better Coordination
Is Needed to Develop Environmental Indicator Sets that Inform Decisions. Report
to Congressional Requesters. November 2004. GAO-05-52.
UNESCO. A Reference Guide on the Use of Indicators for Integrated Coastal
Management - ICAM Dossier 1, IOC Manuals and Guides No. 45. 2003. Available
from http://ioc.unesco.org/icam/files/Dossier.pdf.
USGS (U.S. Geological Survey). Relative Sea Level Trends. Available from
http://pubs.usgs.gov/of/2002/of02-233/ppvariables.htm.
USGS. The Status and Trend of Our Nation's Biological Resources. Available from
http://biology.usgs.gov/s+t/SNT/index.htm.
USGS. Sustainable Water Resources Roundtable. Available from
http://water.usgs.gov/wicp/acwi/swrr/.
van Buren, J., T. Smit, G. Poot, A. van Elteren, and O. Kamp. Testing of Indicators for
the Marine and Coastal Environment in Europe, Part 1: Eutrophication and
Integrated Coastal Zone Management. Prepared for the European Environment
Agency. 2002.
Watzin, M. University of Vermont. Developing Ecosystem Indicators and an
Environmental Score Card for Lake and Watershed Management: Method and
Case Study from Lake Champlain, USA and Canada. 2003.
United States Environmental Protection Agency
Office of Water
Washington, DC 20460
EPA842-B-08-004
http://www.epa.gov/owow/estuaries
September 2008, 2nd Edition