EPA
United States Environmental Protection Agency
EPA 905-K-19-001
April 2019
Application of
Quality Assurance and
Quality Control Principles
to Ecological Restoration
Project Monitoring
Product of the Interagency Ecological Restoration Quality
Committee under the direction of the U.S. EPA



Finally, we wish to acknowledge the technical reviews conducted by Anna Lopata, USFWS; Bahram
Gharabaghi, University of Guelph; Bill Route, U.S. National Park Service; Bradley Potter, USFWS; Brook
Herman, USACE; Dave Parry, Indiana Department of Environmental Management; Don Uzarski, Central
Michigan University; Emily Silverman, USFWS; Ingrid Karklins, Environmental Survey Consulting; Jason
Morrison, U.S. Forest Service; John Shuey, The Nature Conservancy; Lauren Overdyk, Credit Valley
Conservation; Lynde Dodd, USACE; Matt Cooper, Northland College; Paul Bollinger, Bollinger
Environmental, Inc.; Raymond D'Hollander, Parsons Corporation; Robin Weber, Narragansett Bay
National Estuarine Research Reserve; Roy Moser, Credit Valley Conservation; Valerie Brady, University of
Minnesota-Duluth, whose thoughtful comments contributed greatly to the quality of the guide.
This document can be cited as: U.S. Environmental Protection Agency. 2019. Application of Quality
Assurance and Quality Control Principles to Ecological Restoration Project Monitoring. Publication No.
EPA/905/K-19/001. Chicago, IL: Great Lakes National Program Office.
DISCLAIMER
This document was prepared with support from CSRA under its Scientific & Technical Support contract
with EPA (contract number EP-C-17-024).
These guidelines were collaboratively developed by the U.S. Environmental Protection Agency (EPA),
U.S. Geological Survey (USGS), and U.S. Army Corps of Engineers (USACE) to help implement effective
quality assurance and quality control (QA/QC) strategies for monitoring ecological restoration projects
conducted under the Great Lakes Restoration Initiative (GLRI). This document does not impose legally
binding requirements on EPA, states, tribes, or the regulated community; and it may or may not be
applicable to a particular situation depending on the circumstances. Federal and state decision
makers retain the discretion to adopt approaches on a case-by-case basis that may differ from this
guidance where appropriate. Any decisions regarding a particular restoration project should be made
based on the applicable statutes and regulations. Therefore, interested parties are free to raise
questions and objections about the appropriateness of the application of this guide to a particular
situation, and the project sponsor will consider whether the recommendations or interpretations in
this guide are appropriate in that situation based on the law and regulations.
Mention of trade names, products, or services does not convey official approval, endorsement, or
recommendation by the Interagency Ecological Restoration Quality Committee (IERQC).
CONTACT INFORMATION
For additional information, questions, or comments about this document, please contact Louis Blume
(EPA) using the information provided below.
Louis Blume
U.S. EPA
77 West Jackson Boulevard
Chicago, IL 60604-3507
Tel: 312-353-2317
blume.louis@epa.gov
TABLE OF CONTENTS
Acknowledgments	ii
Disclaimer	iii
Contact Information	iv
List of Exhibits	viii
Acronyms and Abbreviations	xi
Executive Summary	xiii
Chapter 1 Introduction	1-1
1.1	Background	1-1
1.2	Purpose and Intended Audience	1-2
1.3	Guidance Scope	1-2
1.4	Content and Organization	1-3
1.5	Application of QA/QC to Ecological Restoration Projects	1-5
1.6	Quality Assurance Roles and Responsibilities	1-6
Chapter 2 Fundamental Principles Concerning Quality Assurance / Quality Control (QA/QC) and
Ecological Restoration Project Monitoring	2-1
2.1	What is Quality?	2-1
2.2	Benefits of and Challenges to Implementing QA/QC in Ecological Restoration Projects	2-4
2.3	Graded Approach to the Application of QA/QC	2-7
2.4	QA/QC Documentation	2-8
2.5	Data Quality Indicators	2-9
2.6	Sources of Variability: Sampling and Measurement Error	2-11
2.7	Decision Error	2-13
2.8	Developing a Monitoring Strategy	2-14
2.9	Additional Readings	2-18
Chapter 3 Planning for Data Collection	3-1
3.1	Restoration Project Goals	3-2
3.2	Restoration Project Objectives	3-4
3.3	Sampling Objectives	3-8
3.4	Data Quality Indicators and Acceptance Criteria	3-14
3.5	Use of Existing Data	3-27
3.6	Planning for Data Collection - Checklist	3-33
3.7	Additional Readings	3-34
Chapter 4 Preparing for Data Collection	4-1
4.1	Standard Operating Procedures	4-1
4.2	Training and Certification of Field Personnel	4-12
4.3	Preparing for Field Logistics	4-19
4.4	Arranging Laboratory Analyses	4-27
4.5	Preparing for Data Collection - Checklist	4-31
4.6	Additional Readings	4-32
Chapter 5 Quality Control During Field Activities	5-1
5.1	Field Practices and QA Crews	5-1
5.2	Quality Control Field Checks	5-3
5.3	Use and Reporting of QC Check Results	5-20
5.4	Quality Control during Field Activities - Checklist	5-24
5.5	Additional Readings	5-25
Chapter 6 Data Review	6-1
6.1	Planning Data Review Activities	6-1
6.2	Data Verification and Validation	6-10
6.3	Handling Data Discrepancies and Errors	6-27
6.4	Data Certification	6-32
6.5	Data Review - Checklist	6-34
6.6	Additional Readings	6-36
Chapter 7 Data Assessment, Analysis and Reporting	7-1
7.1	Planning Data Assessment, Analysis and Reporting Activities	7-2
7.2	Assessing Impacts of Data Quality on Use	7-3
7.3	Analyzing and Interpreting Data	7-18
7.4	Use of Analyzed Data within the Continual Planning Process and Adaptive Management
Process Cycles	7-24
7.5	Preparing and Reviewing Project Reports	7-27
7.6	Data Assessment, Analysis, and Reporting - Checklist	7-33
7.7	Additional Readings	7-35
Chapter 8 The Relationship between Quality Management and Adaptive Management Strategies	8-1
8.1	Definition	8-2
8.2	Role of Adaptive Management for Ecological Restoration Projects	8-3
8.3	Practicalities of Adaptive Management in Ecological Restoration Projects	8-6
8.4	Incorporating QA/QC Data into Adaptive Management	8-8
8.5	Additional Readings	8-9
References	References-1
Appendix A: Data Management Best Practices for Ecological Restoration Projects	A-1
A.1 Data Management Planning	A-3
A.2 Data Acquisition and Collection	A-11
A.3 Data Processing	A-17
A.4 Data Summary and Analysis	A-23
A.5 Data Preservation	A-25
A.6 Data Sharing and Publishing	A-30
A.7 Metadata	A-33
A.8 Quality Assurance in Data Management	A-36
A.9 Backup and Secure Data	A-37
A.10 Additional Readings	A-38
A.11	Appendix A References	A-38
Appendix B: The Assessment of Data Quality Indicators in Ecological Restoration Monitoring	B-1
B.1	Measurement Scales and their Statistical Properties of Value	B-3
B.2 Data Collection and Measurement Error	B-5
B.3 Preparing Data for Quality Assessment	B-7
B.4 Precision, Bias, and Accuracy	B-8
B.5 Completeness	B-17
B.6 Detectability	B-18
B.7 Representativeness and Comparability	B-19
B.8 Measuring Repeatability	B-19
B.9 Additional Readings	B-21
B.10 Appendix B References	B-22
Appendix C: Quality Assurance Project Plan (QAPP) Template for Ecological Restoration Projects	C-1
Section A - Project Management	C-2
Section B - Data Generation and Acquisition	C-11
Section C - Assessment and Oversight	C-16
Section D - Data Validation and Usability	C-17
Tables	C-19
Exhibits	C-22
Appendix D: Quality Assurance Project Plan (QAPP) Review Checklist for Ecological Restoration
Projects	D-1
Glossary	Glossary-1
LIST OF EXHIBITS
Exhibit 1-1. Relationship between the Monitoring Project Lifecycle and the Plan/Do/Check/Act
Quality Management Lifecycle	1-4
Exhibit 1-2. Example Organizational Chart for an Ecological Restoration Project	1-7
Exhibit 1-3. Example Quality Assurance Roles and Responsibilities for Restoration Project Team Members. 1-8
Exhibit 2-1. Quality Assurance and Quality Control Comparison	2-3
Exhibit 2-2. Total Quality Costs (adapted from Project Management Institute 2014)	2-7
Exhibit 2-3. Common Data Quality Indicators for Environmental Monitoring Data	2-9
Exhibit 2-4. Influence of Precision and Bias on Accuracy	2-10
Exhibit 2-5. Common Sources of Error in Environmental Sampling/Monitoring Programs	2-12
Exhibit 2-6. Types of Decision Error	2-13
Exhibit 2-7. Example Power Curve	2-17
Exhibit 3-1. Process for Establishing Data Quality Criteria	3-1
Exhibit 3-2. Example Restoration Goals and Corresponding SMART Restoration Project Objectives	3-7
Exhibit 3-3. Example Sampling Objectives for Corresponding Restoration Project Goals and SMART
Restoration Project Objectives	3-11
Exhibit 3-4. Types of Data Frequently Collected in a Field Setting for Ecological Restoration Monitoring ....3-16
Exhibit 3-5. Example Variables and Acceptance Criteria for Precision, Bias and Accuracy	3-19
Exhibit 3-6. Example Acceptance Criteria for Representativeness, Comparability, Completeness and
Detectability for Plant Cover Data	3-23
Exhibit 3-7. Example Stepwise Procedure for the Selection of Acceptance Criteria for an Ecological
Restoration Monitoring Effort	3-25
Exhibit 3-8. Examples of Existing Data	3-27
Exhibit 4-1. Recommended SOP Contents	4-4
Exhibit 4-2. Example Applications of Selected SOP Items	4-5
Exhibit 4-3. Example Data Collection SOPs Needed to Support Sampling Objectives	4-9
Exhibit 4-4. Repeatability Statistics for Tree Crown Density and Crown Dieback	4-13
Exhibit 4-5. Training and Certification for Observer-Determined Data Collection	4-18
Exhibit 4-6. Planning and Preparation for Field Logistics	4-19
Exhibit 4-7. Example Selection of Alternate Sample Locations	4-21
Exhibit 4-8. Example Field Operation Supplies and Materials for Vegetation Plots or Avian Surveys ....4-23
Exhibit 4-9. Example Completed Data Collection Form	4-24
Exhibit 4-10. Steps for a Successful Laboratory Contract*	4-30
Exhibit 5-1. Good Field Practices during Routine Data Collection	5-2
Exhibit 5-2. Characteristics, Purpose and Timing of Hot Checks	5-4
Exhibit 5-3. Example Hot Check Review Sheet	5-7
Exhibit 5-4. Characteristics, Purpose and Timing of Calibration Checks	5-8
Exhibit 5-5. Comparison of Cold, Blind and Precision Checks	5-11
Exhibit 5-6. Example Cold Check Data Comparison - Tree Species Identification	5-14
Exhibit 5-7. Summary of QC Checks for Observer-Determined Data	5-17
Exhibit 5-8. QC Strategies for Stable Variables	5-19
Exhibit 5-9. Iterative Process Incorporating Results of QC Checks	5-23
Exhibit 6-1. Example Strategies for Conducting Data Review Activities at Each Stage of the Data
Collection, Reporting, Transcription and Management Process	6-3
Exhibit 6-2. Example List of Materials and Documents Needed to Review Data	6-7
Exhibit 6-3. Example Variable-Specific Data Verification and Validation Procedures	6-19
Exhibit 6-4. Length-Weight Relationships (inches-pounds) for Large Wild Sport Fish	6-22
Exhibit 6-5. Using Compliance Checks to Verify Ecological Restoration Monitoring Data	6-23
Exhibit 6-6. Using Range Checks to Validate Ecological Restoration Monitoring Data	6-24
Exhibit 6-7. Using Consistency Checks to Validate Ecological Restoration Monitoring Data	6-25
Exhibit 6-8. Identifying, Handling and Documenting Questionable Data	6-28
Exhibit 6-9. Example Data Flagging Codes	6-31
Exhibit 7-1. Examples of Potential Limitations on Data Use as Identified During Data Review Efforts
that Affect All DQIs1	7-5
Exhibit 7-2. Project Specifications for Example Scenarios 1 and 21	7-9
Exhibit 7-3. Potential Data Analysis Approaches for Ecological Restoration Projects	7-21
Exhibit 7-4. Example Project Lifecycle with Key QA Components	7-25
Exhibit A-1. Data Management Lifecycle	A-1
Exhibit A-2. Comparison of Three Computer Environments Commonly Used to Manage Information .. A-6
Exhibit A-3. Example Process Flow Chart for Quality Review of Primary Data	A-9
Exhibit A-4. Data Sources Acquired by a Typical Ecological Restoration Project	A-12
Exhibit A-5. Structural Elements that Define a Comprehensive Standard Operating Procedure	A-13
Exhibit A-6. Essential Event-Related Data and Their Recommended Data Formats for Recording	A-14
Exhibit A-7. Important Considerations for Identifying Processing Needs for Several Types of Data
Commonly Acquired During an Ecological Restoration Project	A-18
Exhibit A-8. A Simplified Data Workflow	A-24
Exhibit A-9. File Formats Recommended for Long-Term Storage and Archiving	A-28
Exhibit A-10. Example List of Organizations Providing Resources for Publishing and Sharing
Environmental Data	A-32
Exhibit A-11. Metadata Standards Endorsed by the Federal Geographic Data Committee	A-35
Exhibit B-1. Source Partitioning of Total Error in Ecological Restoration Monitoring	B-1
Exhibit B-2. Measurement Scales and their Statistical Properties	B-4
Exhibit B-3. Examples of Common Measurements or Observations in each of the Nominal, Ordinal,
Interval, and Ratio Scale	B-5
Exhibit B-4. Standard Statistical Equations to Assess Bias, Precision and Accuracy	B-10
ACRONYMS AND ABBREVIATIONS
AMP	Adaptive Management Plan
ANOVA	Analysis of Variance
ANSI	American National Standards Institute
AOU	American Ornithologists' Union
ASQ	American Society for Quality
BACI	Before-After/Control-Impact
CAS	Chemical Abstracts Service
CFR	Code of Federal Regulations
COC	Chain of Custody
CSV	Comma-Separated Value
CV	Coefficient of Variation
DBH	Diameter at Breast Height
DMP	Data Management Plan
DMS	Data Management System
DOI	Digital Object Identifier
DQI	Data Quality Indicator
EAB	Emerald Ash Borer
EPA	U.S. Environmental Protection Agency
EU	Experimental Unit
FIA	Forest Inventory and Analysis
FGDC	Federal Geographic Data Committee
FQA	Floristic Quality Assessment
FQI	Floristic Quality Index
GIS	Geographic Information System
GPS	Global Positioning System
GL-DIVER	Great Lakes DIVER Explorer Query Tool
GLHHFTS	Great Lakes Human Health Fish Tissue Study (conducted by U.S. EPA, OW)
GLNPO	Great Lakes National Program Office (part of the U.S. EPA)
GLRI	Great Lakes Restoration Initiative
GRTS	Generalized Random Tessellation Stratified
Ha	Alternative Hypothesis
Ho	Null Hypothesis
IBI	Index of Biotic Integrity
IERQC	Interagency Ecological Restoration Quality Committee
ISO	International Organization for Standardization
ISBER	International Society for Biological and Environmental Repositories
ITIS	Integrated Taxonomic Information System
IUPAC	International Union of Pure and Applied Chemistry
LTER	Long-Term Ecological Research
M-IBI	Index of Macro-Invertebrate Biotic Integrity
MAIS	Macroinvertebrate Aggregated Index for Streams
ME	Mean Error
MOA	Memorandum of Agreement
MOU	Memorandum of Understanding
MSE	Mean Square Error
N	Nitrogen
NARA	National Archives and Records Administration
NOAA	National Oceanic and Atmospheric Administration
NO3	Nitrate
NFWF	National Fish and Wildlife Foundation
NGO	Non-governmental Organization
OMB	U.S. Office of Management and Budget
OSTP	U.S. Office of Science and Technology Policy
OW	Office of Water (part of U.S. EPA)
P	Phosphorus
PDCA	Plan Do Check Act
PDR	Portable Data Recorder
QA	Quality Assurance
QAOT	Quality Assurance Oversight Team
QAPP	Quality Assurance Project Plan
QC	Quality Control
QMP	Quality Management Plan
QMS	Quality Management System
QS	Quality System
RC	Reed canarygrass
RDBMS	Relational Database Management System
RMSE	Root Mean Square Error
SD	Standard Deviation
SMART	Specific, Measurable, Achievable, Results-Oriented, and Time-Sensitive
SOP	Standard Operating Procedure
TXT	Tab-Delimited Text
USACE	U.S. Army Corps of Engineers
USDA	U.S. Department of Agriculture
USFS	U.S. Forest Service
USFWS	U.S. Fish and Wildlife Service
USGS	U.S. Geological Survey
VAR	Variance
WGS	World Geodetic System
XML	Extensible Markup Language
EXECUTIVE SUMMARY
This guidance is intended to assist managers of ecological restoration projects with developing and
implementing effective quality assurance (QA) and quality control (QC) strategies. If designed and
implemented properly, such QA/QC approaches will improve the quality of data collected, increase the
certainty of project decision making, and ultimately save time and money. Although many resources are
available to assist project managers with quality systems, this guidance focuses specifically on applying
QA/QC concepts to monitoring the effectiveness of ecological restoration projects, with the recognition that
these projects have unique QA/QC challenges. Anticipated additional users of this guidance include ecological
restoration specialists and stakeholders representing federal and state agencies, non-governmental
organizations (NGOs), civic and local groups, and the academic community. Guidance is provided on QA/QC
considerations throughout project planning and preparation, as well as data collection, review and
evaluation. Details, instructions, resources and examples are provided within each of the individual chapters.
Because every restoration project is unique and requirements vary widely among organizations, the
information is provided in a guidance document rather than an instruction manual. Notably, we have
included information about the philosophy behind the recommended QA/QC strategies so that users can
adapt the recommendations to the particular needs of their project and funding organization.
Chapter 1 provides the background, purpose and scope of the guidance document as well as a summary
of the content and a description of the project and quality management lifecycles within which the
monitoring and QA/QC activities are conducted within an adaptive management framework.
Information that can be used by project planners and others to define the QA/QC roles and
responsibilities of individuals involved throughout an ecological restoration project is also provided.
Chapter 2 provides a summary of fundamental principles that underlie much of the remaining chapters.
It serves as a primer for readers with limited backgrounds in concepts pertaining to quality management
or statistics and a quick reference tool or refresher for those with more experience in these fields. As
such, it addresses basic quality management and statistical principles and considerations, including the
intent and distinction between QA and QC activities, potential implementation challenges, the graded
approach to quality management, documentation requirements, use of data quality indicators (DQIs),
sources and types of errors to consider, and a high-level summary of aspects to consider when
developing a monitoring strategy for ecological restoration projects.
Chapter 3 provides specific guidance and examples for planning data collection activities, including
establishing appropriate restoration project goals, objectives and strategies; using project objectives to
determine sampling objectives (which in turn informs the sampling design); and defining associated data
quality acceptance criteria. Prior to initiating an ecological restoration project, project managers should
determine the type, amount and quality of data needed to support decision making, beginning with
determining general restoration project goals that convey the project's purpose and direction and
describe expected results. Project goals are used to determine specific and quantitative project
objectives that provide the basis for planning restoration strategies, including treatment and
maintenance, as well as monitoring that can be used to evaluate project success. Project objectives also
provide the basis for developing sampling objectives, which are clear, succinct statements defining the
ecological attributes monitored, the magnitude of change desired, and the acceptable risks associated
with making incorrect decisions. These sampling objectives are used to develop sampling designs,
determine sample sizes, and establish necessary data quality criteria. Data quality should be assessed
according to DQIs of precision, bias, accuracy, completeness, comparability, detectability and
representativeness. For each observation or measurement, project planners should define acceptable
limits for DQIs that are needed to meet stated sampling objectives. Chapter 3 concludes with guidance
on QA/QC strategies that project planning teams should address when using existing data to support
planning or supplement new data that will be generated during the project.
Chapter 4 describes a suite of QA strategies that project planning teams should address before data
collection crews are deployed to the field. Recommended strategies include preparing for data collection
by identifying, developing or modifying standard operating procedures (SOPs) that are tailored to the
needs of the project, verifying that personnel are properly trained and certified, and preparing for field
logistics. All SOPs, manuals or other written procedures should be (1) designed to meet the project's
sampling objectives and data quality criteria and (2) standardized to ensure reproducibility and minimize
errors. Project personnel collecting the data should be adequately trained and certified and should be able
to demonstrate competency. Project managers should allow adequate time for training and certification in
the project timeline, confirm that the training is relevant to the field location and conditions, and
document all training and certification records. With respect to field operation plans, project personnel
should address site permits, schedules, equipment, supplies and other field logistics before a sampling
season is scheduled to begin. Before work starts, field crews should have information on sampling
locations, SOPs, QA plans, and approaches to address unforeseen circumstances.
Chapter 5 recommends and describes several QC checks (hot checks, calibration checks, cold checks,
blind checks and precision checks) that can be implemented prior to and during data collection to
ensure that (1) field activities conform to requirements specified in project planning documents, and (2)
the quality of collected data meets sampling objectives and acceptance criteria for DQIs. These QC
checks provide a means for evaluating the ability of routine field crews to collect data as well as the
precision, bias, and overall accuracy of these data.
Chapter 6 provides guidance regarding the review of ecological restoration monitoring data. Data
generated and collected during these projects form the basis of critical decisions, including whether
objectives were achieved and what adaptive management strategies might be needed to improve
project outcomes. Data quality review is crucial for supporting such decision making. Throughout this
process, data reviewers reconcile the data with pre-established requirements and acceptance criteria to
evaluate data quality, identify limitations, and assist managers in understanding any data uncertainties.
Data review should involve identifying possible data discrepancies or errors, deciding whether to accept,
correct or flag the data, instituting any necessary corrective actions, and documenting resulting
decisions and actions.
Chapter 7 discusses various aspects of data assessment, analysis and reporting, including the QA/QC
strategies that are important to ensure data are used and analyzed appropriately, analysis techniques
are properly selected and executed, and reports accurately reflect project results. Guidance is provided
on data assessment, data analysis and project reporting.
Chapter 8 provides a discussion of adaptive management and, specifically, the relationship between
quality management and adaptive management strategies. Adaptive management provides a
framework for structured decision making while ensuring sufficient flexibility for restoration project
managers to adjust restoration actions as needed throughout the project lifecycle. The success of
adaptive management in ecological restoration projects will be impacted by certain considerations,
including the need for accurate monitoring baselines; the limitations of surveillance monitoring; and the
application of modeling, data substitution, and recordkeeping strategies. Project planners and managers
can help improve the effectiveness of an adaptive management framework by establishing clear
objectives and ensuring the implementation of robust QA/QC activities and documentation throughout
the quality management or project lifecycle.
Checklists are included at the end of Chapters 3 - 7 to summarize key take-away messages
and serve as quick reference tools concerning quality management strategies associated with planning
data collection activities (Chapter 3), preparing to implement the plans (Chapter 4), collection of data
and samples in the field (Chapter 5), verifying and validating the quality of resulting data (Chapter 6),
and assessing, analyzing and reporting results of the validated dataset (Chapter 7).
A suite of appendices is also included at the end of this guidance. Appendix A builds on the guidance
and information provided in Chapters 1-8 by providing additional information on important data
management considerations that should be addressed during project planning and throughout project
implementation. Appendix B supplements the information in Chapter 7 by providing guidance on
statistical procedures that are commonly used to evaluate the reliability of data acquired during
ecological restoration monitoring. Appendix C provides a template that can be used as a tool to
document project-specific quality management strategies in a Quality Assurance Project Plan (QAPP).
The template is designed to comply with EPA Requirements for Quality Assurance Project Plans (EPA
2001), but is tailored to better fit the needs of projects involving ecological restoration and/or the
control of invasive species. The template reflects recommendations provided throughout this guidance
document, and it includes cross-references to specific locations in this guidance where each topic is
addressed. Finally, Appendix D provides a checklist that can be used to assist with the review of QAPPs.
In summary, the QA/QC principles described in this document include (1) defining and documenting
ecological restoration project goals, project objectives, and sampling objectives, (2) designing and
implementing a monitoring program that will provide data needed to determine if the objectives have
been met, (3) identifying and implementing QA practices and QC checks to ensure the plan is properly
followed and data quality targets are achieved, and (4) reviewing the quality of data to confirm it is
sufficient for the intended data use. Application of these principles is valuable only if the collected
data are used in a way that allows restoration ecologists to confirm project success and draw on lessons
learned, determine if adjustments to the restoration or monitoring designs are needed, and/or
implement adjustments to improve overall project or program outcomes.
CHAPTER 1 INTRODUCTION
1.1 BACKGROUND
Government agencies and other organizations direct
substantial resources towards addressing serious
threats to ecosystems such as habitat loss, invasive
species, toxic pollutants, shifts in wildlife populations
and alterations to natural water levels and flow
regimes. For example, since the Great Lakes
Restoration Initiative (GLRI) was launched in 2010,
more than 2,000 projects have been funded in support
of its Habitat and Wildlife Protection and Restoration
and Invasive Species focus areas.1 Other projects are conducted
each year by a diverse range of federal, state, and local
agencies and non-governmental organizations (NGOs)
interested in protecting, monitoring, or improving
ecosystems outside the GLRI purview.
Considerable resources are needed to plan and
implement these ecological restoration projects and,
after restoration activities have been implemented,
additional resources are needed to assess their effectiveness in achieving the desired outcomes (Thayer et
al. 2003). The success of each project depends on a number of factors, not the least of which is the quality
of the ecological data that are used to (1) define pre-restoration conditions, (2) ensure planned activities
are implemented correctly, and (3) assess post-restoration success. Although practitioners and decision
makers rely heavily on the quality of these data,
•	ecological data collection activities are inherently difficult to control, and
•	little guidance exists on strategies for mitigating this challenge in the field or determining if the
resulting data are reliable enough to support sound decisions. In the absence of such guidance,
many projects are compromised by poor or incomplete data.
In June 2012, the U.S. Environmental Protection Agency (EPA) convened an Interagency Ecological
Restoration Quality Committee (IERQC) to address this challenge. The IERQC provides a collaborative
environment to share quality-related concepts, practices, guidance, methods and tools to ultimately
improve ecological restoration projects. Although the committee's focus is on projects that are funded
by the GLRI, tools developed by the committee are also applicable to ecological restoration projects
undertaken for other purposes.
1 The number of projects supporting these focus areas can be obtained from information on EPA's website dedicated to the Great
Lakes Restoration Initiative (https://www.glri.us/projects).
1.2 PURPOSE AND INTENDED AUDIENCE
This guidance is intended to encourage and facilitate the adoption of effective quality assurance (QA)
and quality control (QC) strategies in support of ecological restoration projects. Anticipated users include
ecological restoration specialists and stakeholders representing federal, state and tribal agencies, NGOs,
civic and local groups, and the academic community. Although it is assumed that users will have some
background in and knowledge of basic ecological restoration practices and QA/QC concepts, Chapter 2
includes a brief review of QA/QC principles that are discussed throughout the remainder of the document.
The practices, procedures, information, and concepts outlined in this guidance can provide the following
benefits to practitioners and stakeholders:
•	Save time and resources by enhancing the consistency of documentation and procedures in current
and future projects.
•	Improve data quality for ecological measurements and observations, aid in evaluating project
success, and incorporate long-term effectiveness monitoring as feedback to adaptive management.
•	Encourage a common approach to QA/QC across multiple entities involved in ecological restoration
projects to improve data comparability over time and support comparison of various restoration
strategies.
•	Serve as a consolidated collection of the best QA/QC practices for ecological restoration projects
across multiple agencies.
Section 2.2 provides a more in-depth discussion of the potential benefits that can be realized from
implementation of effective quality management strategies in monitoring programs. It also discusses the
importance of considering the total cost of quality over the life of a project, a concept that involves
considering the total cost of (1) QA/QC investments made before and during the project to prevent or
mitigate problems and (2) resources incurred during and after the project to address problems and
failures. Much of the guidance in Chapters 3 - 7 is focused on the first half of the equation (QA/QC
investments to prevent problems) in order to avoid problems and failures that may prove to be far more
costly in the long run (e.g., costs associated with re-work, scrapping data or an entire project, further
degradation of ecological resources or wasted financial resources that can occur when incorrect
conclusions are made based on flawed data). Project planning teams should carefully consider the
information offered in this guidance and select the QA/QC strategies that best meet their project needs
and resources (see Section 2.3 Graded Approach to the Application of QA/QC).
1.3 GUIDANCE SCOPE
This document presents QA/QC best practices compiled from the IERQC agencies; it reflects the
combined knowledge and experience of IERQC members and provides guidance on how to:
•	apply basic QA/QC concepts,
The phases of this monitoring project lifecycle support continual improvement of the restoration
design through an adaptive management framework. As shown in Exhibit 1-1, each phase of this
lifecycle aligns with the Plan-Do-Check-Act (PDCA) cycle that forms the basis of many Total Quality
Management systems. Sometimes referred to as "plan-do-check-adjust," PDCA is an iterative four-step
model for defining new or improved processes and implementing continuous improvement of processes
and products (ASQ 2018). It is also known by other names such as the plan-do-study-act (PDSA) cycle,
the Shewhart cycle, or the Deming cycle, and in each case, the word "cycle" is sometimes replaced with
the word "circle." Regardless of the specific nomenclature, ecological restoration projects have ample
opportunities to incorporate the PDCA model into the broader project lifecycle.

Exhibit 1-1. Relationship between the Monitoring Project Lifecycle and the Plan/Do/Check/Act
Quality Management Lifecycle

[Figure: the Plan, Prepare, Collect, Review and Evaluate phases of the monitoring project lifecycle
shown alongside the corresponding Plan/Do/Check/Act quality management cycle, with a dotted arrow
leading back into the "Plan" component.]

Chapter 2 provides a brief overview of quality-related principles that underlie the remaining chapters
of this guidance. Chapters 3-7 present key QA/QC strategies to be considered during each of the Plan,
Prepare, Collect, Review and Evaluate phases of an ecological restoration project. Chapter 8 discusses
how the QA/QC strategies described in the preceding chapters support implementation of effective
adaptive management strategies for ecological restoration projects. Four appendices are provided at
the end of the guidance. Appendix A discusses best practices for managing monitoring data, Appendix B
provides guidance on statistical procedures that are commonly used to evaluate the quality of
monitoring data, Appendix C provides a template for documenting project-specific QA/QC plans, and
Appendix D provides a checklist designed to facilitate the review of such plans.

All project activities should be carefully planned and documented before data collection begins, and
then implemented throughout the project lifecycle. In fact, some organizations will not allow data
gathering activities to begin until the QA/QC strategies for all phases of the project lifecycle have been
documented and approved in a signed quality planning document.

Note that the project and PDCA management lifecycles illustrated in Exhibit 1-1 above are not intended
to represent an endless cycle of the same project. The dotted arrow leading into the "Plan" component
of the project lifecycle is intended to illustrate how knowledge gained from a project can be used to
support the process of planning for improvements within the project or for a similar project. This
concept of integrating improvements into a project or similar projects is consistent with the use of
adaptive monitoring and adaptive management strategies as described in Chapter 8.
1.5 APPLICATION OF QA/QC TO ECOLOGICAL RESTORATION PROJECTS
The notion of applying quality management principles to environmental programs is not new, and tools for
doing so have long been available for many types of environmental applications, most notably those
involving the collection and analysis of samples for chemical parameters or the use of environmental
technology. Many of these concepts, however, have not been widely applied or adopted for ecological
restoration monitoring projects. Because much of the data gathered involve field observations (e.g., visual
and/or auditory identifications of species, visual estimates of canopy cover), some ecological restoration
practitioners assume that such field observations are not subject to evaluations of data quality (Stapanian
et al. 2016). To the contrary, all data, including field observations, have some level of associated
uncertainty that can and should be evaluated. (The degree of allowable uncertainty will be determined by
the intended use of these data.) Field observations can and should be standardized in how they are
collected, furthering comparability between crews, sites and years.
In developing this guidance, we have built upon QA/QC strategies that are widely used by a variety of
organizations that collect samples for chemical analysis, adapting them where needed to support the
unique challenges associated with field data gathered during ecological restoration projects. Towards
that goal, this guidance:
•	presents well-established principles to aid users in explicitly defining restoration goals and
objectives for each project;
•	recommends the use of a systematic planning process to develop sampling objectives that will
support the project objectives and be used to determine a sampling design for the monitoring
program and to identify quality requirements for data that will be collected;
•	recommends the use of data quality acceptance criteria that specify the level of quality needed for
each measurement or observation, and data quality indicators (e.g., precision, bias, accuracy,
representativeness, comparability, completeness, detectability) that can be used to determine if this
level of quality has been achieved;
•	advocates the use of standard operating procedures (SOPs), training and certification, audits and
inspections, data verification, data validation and data management procedures as effective
strategies for assuring and controlling the quality of ecological restoration project data;
•	advocates the integration of all components of the project lifecycle, including QA/QC applications
into an overarching decision-support environment, most commonly referred to as Adaptive
Management; and
•	recognizes the importance of incorporating climate resiliency strategies in project quality design and
decision making (EPA 2014a).
1.6 QUALITY ASSURANCE ROLES AND RESPONSIBILITIES
This section provides information that can be used by project planners and others to define the QA/QC
roles and responsibilities of individuals involved in an ecological restoration project. Everyone involved
in the project bears some responsibility for ensuring and controlling quality at various stages of
restoration, ranging from those who define the requirements to those who are responsible for
implementing those requirements.
All restoration projects require a clear delineation of the project's organization and how responsibilities
are assigned within that organization. This includes identifying:
•	each organization involved in the project;
•	functional groups within the project;
•	roles within each functional group;
•	key staff involved within each functional group;
•	project responsibilities associated with each role; and
•	designated authority(ies) for making modifications and adapting when necessary to address
unforeseen problems along with responsibilities for:
o communicating those changes to the project team, and
o updating project planning documents, forms, data systems, and other materials to reflect the
modifications.
All individuals supporting an ecological restoration project must fully understand their roles and
responsibilities and to whom they should be reporting. They should also be familiar with who is
responsible for making specific decisions related to financial matters, field activities or other issues.
Defining roles and responsibilities is a primary component of the project planning process and helps
project planners build a cohesive team with clear and effective communication methods. Since every
project is unique, the specific roles and responsibilities will be influenced by several factors, including
the project size and scope, the organizations involved, and the skills of staff members. For example,
some projects implemented by smaller organizations with limited resources may require an individual to
assume more than one role, while others may have additional roles that are not represented in this
section (or are represented in this section but have a different description).
Note: While there is no doubt that primary responsibilities have to be defined, defining roles too
narrowly can have a negative effect on quality by isolating specific individuals or groups and
discouraging members from looking at quality concerns outside of their own group. Hierarchies
often deemphasize the idea that quality-related responsibilities are shared responsibilities. In
some cases, the less rigid the organizational structure, the better an organization is able to work
collaboratively. This benefits data quality through all stages of the project lifecycle when a focus
on quality strategies is an organizational goal.
The organizational chart shown in Exhibit 1-2 outlines common roles and Exhibit 1-3 provides examples
of QA/QC responsibilities for the project team members who fulfill those roles. The roles and
responsibilities depicted in these exhibits are for illustrative purposes only and are intended to serve as
examples to provide context for subsequent discussions throughout this guidance document. Note that
the QA/QC staff shown in Exhibit 1-2 are independent of the staff responsible for data collection.
Because QA and QC are critical tools for preventing and detecting fraud, QA staff may interact with - but
should ideally operate independently of - the staff involved in data collection.
Exhibit 1-2. Example Organizational Chart for an Ecological Restoration Project

[Organizational chart, from top to bottom:
•	Project Partners / Steering Committee
•	Funding Agency Project Manager, supported by the Funding Agency Quality Manager and by Subject Matter Specialists (e.g., Statistician, GIS Specialist, Field Instructor, Data Manager, Data Reviewer, Data Analyst)
•	Funding Recipient Project Manager, supported by the Funding Recipient Quality Manager
•	Reporting to the Funding Recipient Project Manager: the Restoration Crew Team Leader and the Monitoring Field Crew Team Leader (each overseeing a Routine Crew and an Expert/QA Crew), and the Laboratory Manager and Laboratory QA Manager (overseeing Technicians; the laboratory section and these roles are not a focus in this guidance document)]

NOTE: This figure serves as an example of organizational structure for an ecological restoration project for illustrative
purposes only. The generalized structure is used as a guideline for discussions throughout this document. Note that
references to the "planning team" typically refer to roles within the top two rows of this chart with contributions from other
roles across the entire organization.
In response, the OMB issued its Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility,
and Integrity of Information Disseminated by Federal Agencies.2 In doing so, it defined quality based on
the following three characteristics:
•	Objectivity: (1) The information itself must be accurate, reliable and unbiased, and (2) the manner
in which the information is presented must be accurate, clear, complete and unbiased.
•	Utility: The information must be useful for the intended users.
•	Integrity: The information may not be compromised through corruption or falsification, either by
accident, or by unauthorized access or revision.
Note that the OMB definition of quality is consistent with and can be viewed as an elaboration of the
ISO definition cited above, in that any data that meet OMB's requirements for objectivity, utility and
integrity can be said to fulfill requirements and thus be of acceptable quality.
It also is important to note that U.S. federal agencies are required to adopt standards for data quality
that are consistent with the OMB definition and to develop processes for reviewing the quality of all
information before it is disseminated. These agencies also are required to establish administrative
mechanisms that (1) enable the public to seek and, where appropriate, obtain corrections to
disseminated information that does not comply with the OMB guidelines; and (2) provide an appeals
process for those who disagree with an agency's verdict on a data-quality challenge.
In other words, all data and other information generated during federally-funded ecological restoration
projects are subject to the requirements of the Data Quality Act. Although the specific processes for
complying with the Act vary among agencies, all agencies are responsible for disseminating data that
meet OMB quality requirements concerning objectivity, utility and integrity. This guidance is designed to
provide tools that will assist in fulfilling these responsibilities.
2.1.2 What is the Difference between Quality Assurance and Quality Control?
The terms quality assurance (QA) and quality control (QC) are often used interchangeably. Although
related, they address different aspects of quality management. Exact definitions vary among
organizations, but most recognize that QA is focused on preventing problems, while QC is focused on
identifying defects and confirming that project requirements are met. As a result, many people view QA
as process-oriented and QC as product-oriented.
For the purpose of this guidance, we have adopted the following definitions for these concepts, drawn
from both ISO and the American Society for Quality/American National Standards Institute standard
that addresses quality systems for environmental data (ASQ 2014; ISO 2015a):
2 The final guidelines and subsequent corrections were printed in the Federal Register on January 3, 2002 (67 FR 369-378) and on
February 5, 2002 (67 FR 5365), respectively.
[Photo: Prescribed burn in Dow Field, Nichols Arboretum, Ann Arbor, Michigan. This burn was a training exercise for students in an ecological restoration course at the University of Michigan. Photo credit: Bob Grese]
To be effective, quality must be planned into a project. All project planning activities involve a trade-
off between the project's scope (design), cost and schedule. These trade-offs are often referred to as
the "triple constraints of project management" or the "project management triangle/' in which each
side of the triangle represents a constraint. A change to one side will affect the other two, and
quality is impacted by decisions about aIi three. In recent years, this project management triangle
has been updated to include six constraints that include quality, risk and resources, in addition to the
original three (cost, scope and schedule). In this new model, quality is not simply impacted by
changes to other project constraints; it is considered a constraint itself (Project Management
Institute 2014). Regardless of which model one prefers, it is clear that quality plays a significant role
in project planning and is subject to competing pressures.
Project managers, including those responsible for ecological restoration projects, are often under
intense pressure to deliver results within a specified timeframe and/or budget. When the project scope
cannot be accomplished on time and within budget, managers may be tempted to reduce costs by
eliminating or reducing the level of QA/QC. Such a strategy is ill-advised, in that doing so may solve
short-term needs, but create longer term problems. The following quote from the well-known architect
Frank Lloyd Wright underscores this point: "You can use an eraser on the drafting table or a sledge
hammer on the construction site." This concept is known as the cost of quality, which recognizes the
cost of NOT creating a quality product or service. Note that the term "cost of quality" is often
misunderstood. It is not the price of creating a quality product; it is a methodology for quantifying the
total cost of QA/QC activities and deficiencies in the quality of a product or service (ASQ 2017). If work
has to be re-done, the cost of quality increases.
Examples of this in ecological restoration projects include:
•	repeating ecological measurements or observations made by an improperly trained field crew;
•	re-entering project data into a compromised database;
•	falsely concluding that a project was successful when it was not, leading to further ecological
degradation and wasted financial resources; or falsely believing that a project was unsuccessful
when it was actually successful, resulting in continued expenditures in project design and failure to
implement successful designs at other sites (Stapanian et al. 2016; Stankey et al. 2005);
•	assuming project success within the limited time period of project funding, instead of providing for
long-term maintenance and ensuring the preservation of ecological benefits; and
•	failing to incorporate restoration design elements that take into account the effects of climate
change on the ecological system being restored.
To avoid these problems, planning teams must carefully consider the total quality-related costs incurred
over the life of the project. These include costs for investing in measures to prevent nonconformance
with project requirements, costs associated with evaluating the quality of project activities and results,
and costs associated with failing to meet project requirements. Examples of such costs are shown in
Exhibit 2-2 below.
Exhibit 2-2. Total Quality Costs (adapted from Project Management Institute 2014)

COST OF CONFORMANCE (investments prior to and during the project to avoid failures):
•	Prevention Costs (conducting an effective ecological restoration project): training; documenting processes; calibrating/checking equipment; time to do it right.
•	Evaluation Costs (assessing quality): inspections/audits; data review.

COST OF NONCONFORMANCE (investments during and after the project because of failures):
•	Internal Failure Costs (failures found by the project team): re-work; scrapping data or the entire project.
•	External Failure Costs (failures found by clients or other organizations): liabilities/warranty work; lost program funding or business; further degradation of ecological resources/wasted financial resources caused by falsely concluding the project was successful.
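One common way to summarize Exhibit 2-2 is the prevention-appraisal-failure decomposition used in the quality management literature. The expression below is a sketch using the exhibit's four cost categories, not a formula prescribed by this guidance:

$$C_{\text{total quality}} = \underbrace{C_{\text{prevention}} + C_{\text{evaluation}}}_{\text{cost of conformance}} + \underbrace{C_{\text{internal failure}} + C_{\text{external failure}}}_{\text{cost of nonconformance}}$$

Framing total quality cost this way makes the trade-off explicit: investments in the first two terms are made deliberately before and during the project, while the last two terms are incurred involuntarily when quality fails.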
2.3 GRADED APPROACH TO THE APPLICATION OF QA/QC
In a well-designed QA/QC program, the cost of conforming to requirements is lower than the cost of
non-conformance. Accordingly, this guidance advocates a graded approach to the application of QA/QC,
which is consistent with the quality management philosophy adopted by ASQ, ANSI, EPA and others. As
stated in the ASQ/ANSI E4 standard (ASQ 2014), the graded approach "is the process of applying
management controls to an activity according to the intended use of the results and the degree of
confidence needed in the quality of the results." In other words, the level of QA/QC performed should be
commensurate with such factors as the project goals and objectives, project importance, risks
associated with decision errors, resources and schedules.
Although many governmental agencies encourage the use of a graded approach, there are no national
implementation guidelines. This is largely because its application varies according to the unique needs of
each organization. For example, some organizations are focused on regulatory development, others on
compliance monitoring or enforcement, and others on non-regulatory monitoring activities.
Organizations also differ in the maturity of their quality system and in the resources that are available to
support development and implementation of a graded approach. In addition, the diversity of project
types to which a graded approach may be applied is so extensive that it essentially precludes
development of one-size-fits-all criteria (Blume et al. 2013). This is true even within large organizations.
Users of this guidance should consult their funding organizations to determine if the organization has
an established protocol for applying the graded approach; if no such protocol exists, users should
work with the funding organization to develop an approach that is appropriate for their project. In
doing so, organizations should bear in mind that, although a lack of QC data does not necessarily
mean that project data will be of poor quality, it does limit the ability to document or defend the
reliability of the data when used to support project decisions.
2.4 QA/QC DOCUMENTATION
Documentation of organizational and project-specific policies, requirements and procedures is a key
requirement of any quality system. Exact approaches to addressing these requirements vary among
organizations, and can take various names and formats including, for example, Quality
Manuals, Quality Plans, Quality Management Plans, Quality Assurance Project Plans, Quality Assurance
Program Plans and Quality Control Plans. Some organizations incorporate their quality management
strategies in broader documents such as study plans, research and surveillance plans, project plans and
data management plans.
For simplicity, this guidance document generally relies on the approach advocated by the ASQ and EPA,
which requires a:
•	Quality Management Plan (QMP) to document an organization's quality management policies and
how the organization will plan, implement, and assess its quality system, and a
•	Quality Assurance Project Plan (QAPP) to document the specific QA/QC strategies that will be used
for each unique project or service (ASQ 2014; EPA 2000a, 2000b, 2008).
The QAPP (or equivalent document), when combined with other supporting documents discussed
throughout this guidance (e.g., SOPs, data management plans), represents a project's quality documentation.
These project-level documents are then applied under the umbrella of an organization's QMP.
Because organizations differ in their approach to documenting quality management activities, users are
encouraged to consult with their sponsoring organizations to determine if specific documentation
requirements must be met. As noted in Chapter 1, some organizations will not allow data gathering
activities to begin until QA/QC strategies for all phases of the project lifecycle have been documented
and approved in signed quality planning documentation. For example, the U.S. Army Corps of Engineers
(USACE) relies on use of a Quality Control Plan that documents the roles and responsibilities of
individuals involved in each project, projected schedules, and quality management practices. USACE
requires all branch chiefs within the applicable USACE District to sign the Quality Control Plan before the
construction phase of any project, including ecological restoration projects. The EPA applies a similar
concept with QAPPs, which must be signed by the applicable project and QA manager before initiating
any work involving the collection, analysis, or use of environmental information or the performance of
environmental technology. To facilitate compliance with these requirements, EPA's Great Lakes National
Program Office (GLNPO) has developed a QAPP template for use in documenting plans for ecological
restoration and invasive species control projects (see Appendix C) and a checklist that can be used as a
tool when reviewing QAPPs (see Appendix D).
Exhibit 2-4. Influence of Precision and Bias on Accuracy
[Figure: four panels plotting repeated stem density measurements (stems/m2) for Crews 1-4, each panel showing the crew mean, the range of observed values, and the true value of 10 stems/m2. Crew 1: positive bias/high variability (low accuracy); Crew 2: negative bias/low variability (low accuracy); Crew 3: unbiased/high variability (low accuracy); Crew 4: unbiased/low variability (high accuracy).]
Exhibit 2-4 shows four hypothetical field sampling crews, each with different combinations of
measurement precision and bias.
•	Crew 1 tended to overestimate the stem density, with a mean close to 15 stems/m2. Crew 1 measurements also tended to vary widely around that mean, including some results that were below the true value of 10 stems/m2. Due to the combination of a high bias and high variability, Crew 1's data are inaccurate.
•	Crew 2 measurements did not vary widely, with stem densities clustered much more closely to the
crew's mean of 5 stems/m2. However, their mean was well below the true value of 10 stems/m2.
This negative bias renders Crew 2's data inaccurate, despite the low variability of their
measurements.
•	On average, Crew 3 did not over- or underestimate the stem density, with a mean that was approximately equal to the true value of 10 stems/m2. However, this crew exhibited high variability, such that the results varied from approximately 4 to 15 stems/m2. As a result of this high variability, Crew 3's data also are inaccurate.
•	Crew 4 achieved a mean that was approximately equal to the true value of 10 stems/m2 with all
results tightly clustered around the mean. Because their measurements were unbiased and
exhibited low variability, Crew 4's data are considered accurate.
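For illustration, the following Python sketch quantifies the concepts shown in Exhibit 2-4 using two of the hypothetical crews (the measurement values below are invented for this example): bias is estimated as the difference between a crew's mean and the true value, precision as the spread of repeated measurements, and accuracy as a combination of the two, summarized here as root-mean-square error.

    import statistics

    TRUE_VALUE = 10.0  # true stem density (stems/m2)

    # Hypothetical repeated measurements by two crews (stems/m2)
    crews = {
        "Crew 1": [15.2, 18.9, 9.1, 16.4, 12.7, 19.8],  # high bias, high variability
        "Crew 4": [9.8, 10.3, 10.1, 9.9, 10.2, 10.0],   # unbiased, low variability
    }

    for name, values in crews.items():
        mean = statistics.mean(values)
        bias = mean - TRUE_VALUE       # systematic error (direction and magnitude)
        sd = statistics.stdev(values)  # random error (spread around the crew mean)
        # Root-mean-square error combines both components into one accuracy measure
        rmse = (sum((v - TRUE_VALUE) ** 2 for v in values) / len(values)) ** 0.5
        print(f"{name}: mean={mean:.1f}, bias={bias:+.1f}, SD={sd:.2f}, RMSE={rmse:.2f}")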
2.6 SOURCES OF VARIABILITY: SAMPLING AND MEASUREMENT ERROR
All data associated with ecological restoration projects are subject to both sampling and measurement
error. Project planning teams must be aware of these sources of error and consider their impacts on the
level of confidence one can place in the data. Both concepts are illustrated in Exhibit 2-5, briefly
explained below, and widely discussed elsewhere (e.g., Lohr 2010; Lesser 1999; Gy 1998; Kish 1965).
Sampling error can be thought of as the difference between the characteristics identified in a sample
of a population (observed or measured values) and the actual characteristics of the entire population
(true values).
•	For example, in a project to restore shoreline habitats of edible mussels, one might choose to use
the number of blue mussels (Mytilus edulis) as an indicator of success. Because it is impractical to
count every mussel present in a relatively large area, the area is divided into transects or quadrats,
and counts of blue mussels within the transects or quadrats are extrapolated to estimate the total
number present in the region. The difference between this estimated value and the actual value
obtained if one had counted every mussel in the shoreline habitats is defined as the sampling error.
•	As another example, consider a project to increase the relative abundance of migratory songbirds
within a designated area in a specific timeframe. In this example, sampling error could occur if the sampling design failed to specify sufficient sampling frequencies, times or locations.
Generally, sampling error can be reduced by (1) increasing the number of sample units within the target
population (e.g., measuring the number of blue mussels at more plots or transects), or (2) applying a
sampling design methodology that more accurately represents the distribution characteristics of the target
population (e.g., modifying the placement and size/shape of the sample unit).
Measurement error refers to the difference between a measured or observed value and the true value,
and it is caused by individuals, instruments or processes (e.g., methods, protocols, SOPs) used to obtain
the measurements or observations.
•	In the blue mussel example, sampling error can occur if the areas selected for sampling do not
accurately represent the entire population. In contrast, measurement error can occur when the
individuals, instruments or processes used fail to correctly count the number of blue mussels.
Examples include cases where mussels are so densely packed together that it is hard to count each
one in a given area, or an insufficiently trained crew member cannot differentiate ribbed mussels (a
non-target species) from blue mussels. The difference between the actual number of blue mussels
present in the area examined and the number of mussels reported represents measurement error.
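The two error sources can be separated in a simple simulation. The Python sketch below (the population size, quadrat counts and 10% undercount rate are all invented values) illustrates how sampling error arises from extrapolating a subset of quadrats to the whole area, while measurement error arises from miscounting within the quadrats actually examined.

    import random

    random.seed(42)

    # Hypothetical population: true mussel counts in each of 500 quadrats
    true_counts = [random.randint(0, 40) for _ in range(500)]
    true_total = sum(true_counts)

    # Sample 50 quadrats at random and extrapolate to the full area
    sampled = random.sample(true_counts, 50)
    estimate_no_meas_error = sum(sampled) / 50 * 500
    sampling_error = estimate_no_meas_error - true_total

    # Add measurement error: assume each quadrat count is low by ~10%
    # (e.g., densely packed mussels are missed or misidentified)
    measured = [round(c * 0.9) for c in sampled]
    estimate_with_meas_error = sum(measured) / 50 * 500
    measurement_error = estimate_with_meas_error - estimate_no_meas_error

    print(f"True total: {true_total}")
    print(f"Sampling error:    {sampling_error:+.0f}")
    print(f"Measurement error: {measurement_error:+.0f}")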
2.8 DEVELOPING A MONITORING STRATEGY
In general, sampling error can be reduced and statistical power increased by choosing an appropriate
study design and monitoring strategy (sampling design) to measure change in one or more ecological
attributes (Morrison et al. 2008; Zhang 2007). Measurement error can be more easily controlled by
planning and implementing the QA/QC practices described throughout this guidance, such as
standardization of data collection and management procedures, training staff in the use of those
procedures, conducting QC checks to verify that the procedures are being followed, and ensuring that
project data reflect pre-determined standards of quality.
This guidance document is not intended to provide detailed information on study design concepts or on how to develop a monitoring strategy. Project planners should work with a qualified
statistician when considering their design and monitoring strategy needs. (For less complex projects, a
biologist or ecologist with experience in statistical study design and interpretation may be able to fill this
role.) In addition, the monitoring strategy should be designed to support an adaptive management
framework as discussed in Chapter 8. In lieu of providing detailed guidance on this topic, we
recommend considering the following aspects when developing a monitoring strategy:
•	Define the measurements and observations: Be specific regarding the data to be collected. For
example, if you are monitoring for a change in vegetation structure in response to restoration
planting, you may be interested in stem density that includes all woody plant species, native or
non-native species, or a particular stem height or diameter class (e.g., stems less than 1.4 meters in height or < 5 centimeters in diameter at breast height). Alternatively, if you are monitoring for a change in wildlife abundance as a result of habitat restoration, you may consider measuring bird
species absolute abundance, relative abundance, or an index based on the frequency of
occurrence for one or more indicator species. The appropriate measurements and observations
will be determined based, in part, on the conceptual model imposed (see Section 3.2.1) and the
selected study design and sampling design strategies. When making these decisions, distinguish
between primary variables of interest (i.e., those that will be used to determine if objectives have
been achieved) and ancillary variables that describe who collected the data and when, weather
conditions at the time of collection, and other information that can be used to help evaluate data
quality and interpret results.
•	Consider the need for baseline and reference condition sampling: Ecological restoration projects
attempt to restore components of an impaired ecosystem to a target condition (see Section 3.2.2).
In order to demonstrate this transition, baseline condition sampling and reference condition
sampling are often necessary components of the monitoring strategy and study design. When
conducted, baseline condition sampling is typically performed within the framework of a before-
after-control-impact (BACI) study design to assess if the site being restored has achieved a targeted
change or is trending toward a targeted threshold condition (Morrison et al. 2008). There are
several variations on the application of the BACI study design, including the use of one or more
reference sites in addition to the site being restored. A reference site is often selected because it
reflects physiographic and ecological characteristics similar to the site being restored; however, it
does not receive restoration. When included in the study design, a reference site is typically paired
with the site being restored and identical sampling that includes baseline sampling occurs
simultaneously at each to monitor change over time. This can be an effective strategy to (1) control
experimental bias, and (2) develop the empirical evidence necessary to relate any change in
condition of the restored site with the restoration efforts. Such a design consideration can provide
data needed to estimate the "effect-size" required to evaluate a project's sampling objectives, and
to support decision making within an adaptive management framework. (Refer to Chapter 8 for a
discussion of adaptive management.) Reference condition sampling can also be conducted at a site
that represents optimal, un-impacted, or undisturbed conditions to establish characteristics
(benchmarks) that represent the target condition.3 Establishment and monitoring of such un-
impacted reference sites allows the success of the project to be evaluated against how closely the
restored site resembles an un-impacted (natural) reference condition (Morrison et al. 2008).
•	Determine the study design: Select the study design that is most appropriate for addressing your project goals and objectives (i.e., hypotheses) and that maximizes the ability to produce unbiased and precise estimates of the ecological response. There are three broad categories of empirical research,
distinguished primarily by elements of research or study design and the degree to which statistical
inference can be applied. They include:
o Manipulative or true experiment - designs that include the manipulation of one or more
conditions (factors) randomly assigned as experimental treatments and where the treatments,
including a control, are replicated,
o Quasi-experimental - designs that include the manipulation of one or more conditions, but
typically lack random assignment and/or true replication, and
o Mensurative - designs that lack experimental manipulation and that may or may not
incorporate random selection and replication of independent sample units. Depending upon the
design elements incorporated, mensurative research can more precisely be defined as
observational or descriptive; for environmental monitoring projects, these designs are greatly
limited in their ability to make inference regarding causality.
Ecological restoration can be described as a quasi-experiment, given that these projects are based on the assumption that restoration actions will produce a predicted ecological response. With quasi-experiments, the degree to which statistical inference can be applied is generally limited when
compared to a true experiment (e.g., the ability to establish cause-effect relationships). However,
the ability to make statistical inference can be increased in quasi-experimental designs by
controlling experimental bias and increasing precision in estimates of true effect-size. The latter can be done by including reference condition sampling at one or more randomly selected, independent control sites concurrent with sampling before and after implementing restoration actions. It is important that these control sites be similar to the pre-restoration site in terms of size, geographic setting or proximity, and key ecological characteristics. When these design elements are included in ecological restoration monitoring, they are commonly referred to as a BACI study (see previous bullet). An inherent assumption of BACI designs is that temporal variation in a response variable among control sites that have not been restored will be similar to, and more consistent than, the temporal change in the same variable at the restoration site (Morrison et al. 2008; Block et al. 2001).
3 Specifically, the term reference sampling in ecological restoration projects can refer to the use of either (1) a "natural" reference site that represents targeted conditions or is used to establish ecological benchmarks of progress toward a targeted state, or (2) a site that has been determined to be similar to the restoration site (pre-restoration) so that it represents the same conditions as the site being restored. The latter type is often referred to as a "control" site and receives no restoration. Both sites are monitored in the same way; the difference in change is considered the "effect-size" and accounts for natural variation across the restoration site and the reference (control) site.
•	Define the sampling units and timeframe for data collection: Sampling units may be natural (e.g., a
tree, a pond) or user-defined (e.g., plots, transects, net hauls, soil cores). The units should not
overlap (unless using a hierarchical or nested design), and they may need to meet certain
assumptions of statistical independence. To minimize variance, they also should reflect the optimal
period of time that sampling can occur and result in consistent estimates for the variable(s) of
interest. Sample units should attempt to capture the characteristics and natural variability within
the area, target population, community or site of interest rather than simply among sample units.
Thus, the size, shape and orientation of the sample units are important considerations (Elzinga, Salzer and Willoughby 1998).
•	Determine the sample size: The required sample size will depend on the anticipated total variability
(sampling and measurement/observation), acceptable error rates (Types 1 and 2), and desired
detectable magnitude of change (targeted effect-size or threshold exceedance). Sample size
decisions should not be based solely on project budget considerations, as insufficient sample sizes
can significantly limit the ability to make informed decisions regarding the project's progress and/or
success. Exhibit 2-7 depicts the relationship between statistical power, variability and sample size
for a hypothetical project that seeks to restore optimal willow density (at least 2.5 stems/m2) along
a stream riparian shoreline. The Y-axis shows the statistical power to detect a 20% exceedance of the 2.5 stems/m2 target (with the dotted line depicting the goal of at least 90% power); the X-axis shows the number of measurements needed to achieve that power; and the two curves show the relationship between power and sample size for two
levels of variability. For the scenario with lower variability (blue curve), 90% power can be achieved
with approximately 21 measurements. For the scenario with higher variability (red curve), 90%
power can be achieved with approximately 36 measurements. In the event it is not feasible to increase sampling effort at one or more of the initially selected sites, statistical power can be increased by adding reference sites that are in comparable "pre-restoration" condition to the site being restored (Morrison et al. 2008). A sketch of this power calculation follows.
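The type of calculation behind Exhibit 2-7 can be reproduced with standard power analysis tools. The Python sketch below is illustrative only: the one-sided test, the 5% Type 1 error rate, and the two standard deviations are assumptions chosen to roughly approximate the exhibit's curves.

    from statsmodels.stats.power import TTestPower

    target = 2.5          # stems/m2 threshold
    delta = 0.2 * target  # 20% exceedance = 0.5 stems/m2

    analysis = TTestPower()
    for label, sd in [("lower variability", 0.76), ("higher variability", 1.00)]:
        n = analysis.solve_power(
            effect_size=delta / sd,  # standardized effect size (Cohen's d)
            alpha=0.05,              # Type 1 error rate (assumed)
            power=0.90,              # desired statistical power
            alternative="larger",    # one-sided test for exceedance
        )
        print(f"{label} (SD={sd}): n = {n:.0f} measurements")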
•	Choose a sampling design: There are many ways to position sampling units within an area to be sampled, but a sound sampling design shares three characteristics: (1) random placement (each location has a known probability of being sampled), (2) good interspersion throughout the area to be sampled, and (3) independence (conditions in one sample unit do not influence conditions in another). Random sampling allows inferences to be made to the area,
population, community, or site of interest and reduces bias. Positioning of sample units to ensure
2.9 ADDITIONAL READINGS
•	Christensen, S.W., Brandt, C.C., and McCracken, M.K. 2011. Importance of data management in a
long-term biological monitoring program. Environmental Management, 47(6), 1112-1124.
•	Stokes, E., A. Johnson and M. Rao. 2010. Monitoring Wildlife Populations for Management. Training
Module 7 for the Network of Conservation Educators and Practitioners. American Museum of
Natural History and the Wildlife Conservation Society, Vientiane, Lao PDR.
http://www.fosonline.org/resource/monitoring-wildlife-populations
•	U.S. Department of the Interior. 2003. Quality Assurance Guidelines for Environmental Measurements. U.S. Department of the Interior, Bureau of Reclamation, QA/QC Implementation Workgroup. https://www.usbr.gov/tsc/techreferences/mands/mands-pdfs/QualityAssuranceGuidelines_2003.pdf
•	U.S. Environmental Protection Agency. 2016. "How EPA Manages the Quality of its Environmental Data." Last modified August. https://www.epa.gov/quality.
3.1 RESTORATION PROJECT GOALS
The planning process begins with the identification of goals that describe the desired future conditions
or ideal states an ecological restoration effort will attempt to achieve. Goals should be descriptive and
convey a purpose. They should define the general direction for a project but should not define specific
desired future conditions in measurable terms. The latter is accomplished through identification of
restoration project objectives (Section 3.2) that will support the overall goals.
3.1.1 Preparing to Define Restoration Project Goals
When preparing to define goals for a restoration
project, it is helpful, if not essential, to have
background information on the:
•	project location, boundaries, and ownership;
•	targets of restoration (e.g., ecosystems,
natural communities, rare or protected
species);
•	threats, need for restoration, and desired
state and anticipated benefits;
•	current and past condition of the site;
•	adjacent land use;
•	social, political, and physical context of the
project; and
•	expectations from funding sources,
stakeholders and project staff.
This background information regarding the project site, restoration needs, and planning team
members will help set the stage for the selection of restoration goals. In some cases, restoration goals
are defined by decision makers in other organizations, and the project planning team is responsible
for defining objectives and strategies needed to achieve those goals. For example, a legislature may
issue funding to "restore the aquatic habitat of Icky Inlet and make it safe and healthy for recreational
activities" and then direct those funds to a federal, state or local agency to manage. When this occurs,
project planning teams are responsible for defining specific objectives that can be used to
demonstrate the project outcome will support the mandated goals—a process that will be aided by a
thorough understanding of the relationship between restoration goals and objectives, as described in
Sections 3.1 and 3.2.
[Photo: Installation of willow cuttings for revegetation of the riparian area along Elm Creek, Minnesota. Photo credit: Britta Suppes, Joe Magner, Chris Lenhart.]
• Establish an urban park woodland with a diversity of native tree species considered to be resilient to
the effects of the invasive Emerald Ash Borer (EAB, Agrilus planipennis).
Note how succinctly each of these restoration goals addresses the four key elements. In the first
example, the stream riparian shoreline is the subject or resource of concern, willow density and stream
channel width (erosion) are the attributes of interest, the conceptual target is an optimal willow density
that results in reduced stream bank erosion and restored habitat for stream riparian wildlife species,
and the action or effort to be made is to "restore" optimal willow density. In the second example, a 15-
acre wet prairie plant community is the resource of concern, the attributes of interest are the cover and
floristic quality of the plant community, the conceptual target or condition is an improvement to these
attributes, and the desired actions are to "restore" the native species and "improve" the floristic quality.
Similarly, the urban park woodland is the resource of concern in the third example, a diversity of tree
species considered to be resilient to the effects of the EAB is the attribute of interest, the conceptual
target is an EAB resilient "native woodland" plant community, and the action to be made relative to the
target is to "establish" an urban park native woodland resilient to the detrimental effects of EAB. It is
important to note here that, although project goals may address multiple outcomes (e.g., restore
optimal willow density and reduce bank erosion), the more focused and specific the goal, the fewer
assumptions are needed to establish clear objectives to achieve that goal.
When defining restoration project goals, the planning team should carefully consider what can be
realistically accomplished. This includes understanding the importance of defining the conceptual target based on the ecological scale at which the restoration actions can be expected to produce the predicted outcome. Some
restoration activities may emphasize habitats, such as restoring coastal wetlands, while others may be
directed at species, such as promoting the recovery of threatened or endangered species. At larger scales, restoration activities may emphasize ecological processes, such as restoring the hydrologic connectivity of an entire watershed. The selection of goals should recognize that an ecosystem may not
be able to return to a former state because of unknown stressors or changes in society and the
environment (Clewell, Rieger and Munro 2005). Goals may be unrealistic if they are based on an
outcome that is beyond the temporal or ecological scale of a project's influence, and could set the stage
for actual or perceived project failure. Development of realistic restoration goals, however, provides a
needed framework from which to build specific project objectives.
3.2 RESTORATION PROJECT OBJECTIVES
Once the planning team has defined restoration project goals, team members can determine specific
objectives that will support those goals, serve as the foundation for planning restoration and monitoring
activities, and provide interim benchmarks (i.e., quantitative endpoints) for each phase of the project as
the basis for measuring progress. The team will need to gather information about the targeted
ecosystem that can help them define specific components of the restoration project objectives and
maximize the effectiveness of subsequent restoration activities. The following subsections describe
these processes and provide examples of well-written objectives, each with a specific, measurable basis
for supporting the overarching restoration project goals.
3.2.1 Preparing to Define Restoration Objectives
In order to define restoration objectives, the team must gather and evaluate information regarding the
environmental stressors that may be compromising biota and other resources at the project site and the
ecological resources affected by these stressors (Mulder et al. 1999). Examples of stressors include fire,
alteration of hydrologic cycles, habitat fragmentation, sedimentation, road construction and pollution. A
pre-restoration assessment of the relevant biotic and abiotic conditions, as well as baseline monitoring
data on variables such as water quality, can be very helpful in characterizing both the stressors and the
ecological resources affected by them. Other resources that may be helpful include land use plans and
knowledge about the life history of species, ecological reference sites, and historical records (Clewell, Rieger and Munro 2005).
We recommend the development of conceptual models as a tool for organizing information and
documenting key assumptions (EPA 2006b). Conceptual models are often depicted as a statement or a
diagram linking external driving forces and stressors with ecological effects and the measurable or
observable attributes that characterize the state of impacted ecological resources. These models are
representations of ecosystem components, processes and drivers, and they can facilitate understanding
of key ecosystem interactions and function, bring common understanding among interested parties, and
clarify assumptions (Ogden et al. 2005; Woodward et al. 2008). Building on the 15-acre wet prairie
example in Section 3.1.3, the project team has determined that tile drainage and introduction of
invasive reed canarygrass (stressors) have resulted in a degraded wet prairie (ecological effect) as
measured by the relative cover of existing plant species (attribute). Based on this conceptual model, a
reasonable restoration objective might be to increase the relative cover of native wet prairie species in
the wetland by X%. Once the objective is defined, specific restoration strategies and activities necessary
to accomplish the objective can be defined, and a monitoring program can be designed that will allow
project managers to determine if the objective has been met.
3.2.2 Selection of Restoration Objectives
As described previously (see Section 3.1.2), restoration project goals identify the resource of concern,
attributes of interest for that resource, conceptual targets, and an action relative to that target.
Restoration objectives build upon these goals by providing specific measurable targets for desired future
conditions. Each restoration objective should be: specific, measurable, achievable, results-oriented and
time-sensitive (SMART) (Doran 1981). These attributes should be identified for each variable of interest
that is included in a restoration goal. This is accomplished by specifying the direction and quantity of
change desired, pinpointing the specific geographic area for the restoration activity, and identifying a
time frame (or project phase) to accomplish this change or see the anticipated ecosystem response.
SMART restoration objectives, when developed to support short-, mid- and long-term outcomes, are a
necessary component of project-level adaptive management (see Chapter 8), and provide the
quantitative benchmarks or endpoints necessary to support decision making within an adaptive
management framework throughout the project lifecycle.
The desired ecological change can be stated in a project objective as either a targeted change from a
baseline condition (i.e., an observed trend or improvement) or the achievement of a targeted
threshold (Clewell, Rieger and Munro 2005; Elzinga, Salzer and Willoughby 1998).
•	Targeted Change: A change from a baseline condition would be expressed either as a difference
between the results of initial and final data collection efforts (for example, an increase of 10% in
population size between baseline and final monitoring efforts) or as an observed trend in results
over multiple efforts (for example, an average 2% increase in population size per year). This type of
objective is based on a comparison of future monitoring results to baseline conditions.
•	Targeted Threshold: In contrast, restoration project objectives that define the desired change in
terms of a target threshold would define the final condition based on a single numeric goal (e.g., a
final population size of X species/acre). In this case, the objective is based on a sampling design and
sampling objectives that compare future monitoring results to the specified target or threshold. For
this type of objective, it is assumed that prior studies have established that baseline conditions do
not meet the target/threshold.5
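These two formulations imply different statistical comparisons. The Python sketch below is a rough illustration (the data are invented, and a t-test is only one of several defensible analysis choices): a targeted change compares final results against baseline results, whereas a targeted threshold compares final results against a fixed numeric target.

    from scipy import stats

    baseline = [4.1, 5.2, 3.8, 4.6, 5.0, 4.3, 4.8, 4.4]  # e.g., stems/m2 at baseline
    final    = [5.6, 6.1, 5.2, 6.4, 5.8, 5.5, 6.0, 5.7]  # same units after restoration

    # Targeted change: did the condition improve relative to baseline?
    t_change, p_change = stats.ttest_ind(final, baseline, alternative="greater")
    print(f"Change from baseline: t={t_change:.2f}, p={p_change:.4f}")

    # Targeted threshold: does the final condition exceed a fixed target?
    threshold = 5.0  # hypothetical numeric goal
    t_thresh, p_thresh = stats.ttest_1samp(final, threshold, alternative="greater")
    print(f"Exceedance of {threshold} target: t={t_thresh:.2f}, p={p_thresh:.4f}")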
3.2.3 Examples of Restoration Objectives
Exhibit 3-2 provides examples of SMART restoration project objectives that are designed to support the
project goal examples provided in Section 3.1.3.
Note that in Exhibit 3-2, each of the example project objectives directly supports an attribute of
interest and conceptual target in its corresponding goal by defining one or more specific outcomes
that can be measured or observed within a specific timeframe. For the first project goal, the first
objective specifies a measurable amount of shoreline (1,000 linear meters) that will be covered by a
specific, measurable amount of willow (average density of 2.5 stems per square meter), and that
should be present within a specific timeframe (after two years). The conceptual target identified (an
optimal willow density that results in reduced stream bank erosion and restored habitat for stream
riparian wildlife species) is addressed by specific, measurable, reportable and achievable objectives. A
similar relationship exists between the restoration goals defined in the second and third examples and
their corresponding project objectives.
The above examples are for illustrative purposes only, and many projects are likely to have more than
one goal, along with several objectives to support each goal. When this is the case, project planners may
find it helpful to list the project objectives according to a hierarchy based on whether they address a
short-, mid- or long-term goal (outcome). This is particularly relevant when the mid- and long-term
outcomes are only possible if and when the short-term outcomes are achieved.
5 There is some risk in assuming that prior studies have demonstrated that baseline conditions do not meet the desired target or
threshold. If effective QA/QC strategies were not used in these prior studies, your decision to initiate a restoration action may be based
on faulty initial condition estimates. Refer to Section 3.5 of this chapter for guidance on evaluating the quality of existing data.
3.3 SAMPLING OBJECTIVES
As mentioned above, the project planning team will need to create a monitoring program that will allow
project managers to determine if the restoration activities were effective in achieving the specified
restoration goal(s) and project objectives. Development of such a monitoring program will require:
1)	identification of specific sampling objectives (which also can be thought of as "monitoring
objectives") and
2)	development of a detailed sampling (monitoring) plan that will provide data that can be used to
determine if the sampling objectives are met.
Both activities require some understanding of the sources of variability that can arise in any monitoring
program, and of the types of decision errors that may result from the monitoring data (see Sections 2.6 and
2.7, respectively, for a brief overview of these concepts). Given the significant amount of resources invested
in ecological restoration projects and the importance of monitoring data in determining project success, we
strongly recommend that project planning teams consult with an experienced statistician for assistance.6
3.3.1 Establishing Sampling Objectives
As discussed in Section 2.7, Type 1 and Type 2 errors can lead to wasted resources, further degradation,
and missed opportunities. Therefore, sampling objectives should clearly state the level of uncertainty
that stakeholders are willing to accept for making an incorrect decision.
Type 2 errors are also dependent on the size of the change that actually occurred. Consequently, when
establishing sampling objectives, project planners need to determine the size of an effect (effect size)
that will be meaningful, and recognize that the "false negative" (Type 2/missed change) error rate will
reflect a change of this size or greater. In other words, the probability of concluding that no change had
occurred when in fact a change of at least this size did occur would be less than or equal to the Type 2
error rate. Monitoring programs, therefore, should be designed to ensure a sufficient sample size and
yield enough usable data to (1) allow a high probability of identifying when the specified change occurs,
and (2) avoid the additional costs and resources that would be needed to detect a smaller (and not
necessarily meaningful) difference at the same high probability.
The established meaningful effect size needs to be compatible with the restoration project objectives.
Specifically, if an objective is expressed as a change between a baseline and final condition, then the
effect size must be a specific amount of change, expressed as either an absolute, relative, or
standardized effect size that is consistent with the restoration goal.7 Similarly, if the objective is defined
as a comparison to a target or threshold, the effect size would be a difference (either absolute or
relative) from that target/threshold. The decision of selecting the appropriate effect size is complicated
6	In accordance with the graded approach discussed in Chapter 2, a biologist or ecologist with experience in statistical study design and
data interpretation may be sufficient for less complex monitoring projects.
7	Targeted changes specified in absolute terms are expressed in directly measured units, such as mm, mg/L or total count. Targeted
changes specified in relative terms are expressed as a percent difference from the baseline. Targeted changes specified in standardized
terms are expressed as a unitless metric, standardized based on the observed variability.
stakeholders in our hypothetical project are interested in a reasonable probability of detecting a change
that is considered to be biologically meaningful. In addition to specifying the biologically meaningful
effect size and Type 2 error rate, our hypothetical stakeholders want to be at least 95% confident they
will not falsely conclude there was a change when, in fact, there was no difference in abundance. In
other words, the stakeholders want to limit the Type 1 error rate to 5% and the Type 2 error rate for a
40% or greater change to 20%. The sampling strategy would then be designed to collect sufficient data
to meet these requirements.
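Given assumptions about baseline abundance and its variability, requirements such as these translate directly into a required sample size. The Python sketch below is illustrative only: the baseline mean and standard deviation are invented, and a two-sample (before/after) t-test formulation is assumed, whereas other study designs would change the calculation.

    from statsmodels.stats.power import TTestIndPower

    baseline_mean = 50.0  # assumed baseline abundance (individuals per survey)
    baseline_sd = 25.0    # assumed standard deviation among survey replicates
    delta = 0.40 * baseline_mean  # 40% change = meaningful effect size

    n = TTestIndPower().solve_power(
        effect_size=delta / baseline_sd,  # standardized effect size
        alpha=0.05,                       # Type 1 error rate (95% confidence)
        power=0.80,                       # 1 minus the 20% Type 2 error rate
        alternative="two-sided",          # a change could be in either direction
    )
    print(f"About {n:.0f} surveys per period (before and after) are needed")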
3.3.2 Examples of Sampling Objectives
The goal in developing effective sampling objectives is to avoid ending up with an inadequate sampling
design that would make it difficult to determine whether a restoration project objective has been met.
Towards this goal, well-defined sampling objectives specify (1) the degree of change that must be
detected to define project success, and (2) the degree of certainty needed when stating that the change
has been detected. Exhibit 3-3 provides example sampling objectives that might be derived to support
the example restoration project goals and objectives presented in Sections 3.1 and 3.2, based on
traditional hypothesis testing approaches such as those discussed in Chapter 2. (Although Bayesian methodology and other approaches may be appropriate alternatives to traditional hypothesis testing, discussion of such strategies is beyond the scope of this document.)
Ideally, the specific information required to develop sampling objectives can be derived from the
restoration project objectives. This information includes the target population or resource of interest,
along with its geographic location, attributes of interest, and the anticipated direction, degree and time
frame for the response. Note that:
•	restoration goals describe why a restoration activity is being conducted;
•	restoration project objectives delineate what needs to be measured or observed; and
•	sampling objectives provide additional information that planners will use to decide how, where, and
when data are to be collected, and how many samples are required or needed.
Finally, it is important to recognize that, although sampling objectives serve as the basis for designing a
monitoring strategy, these two components are often developed in an iterative manner. For example,
when designing the strategy, the project planning team may determine that it is not possible to achieve
the original sampling objectives within budget constraints. In this situation, the sampling objectives may
need to be modified to reflect the financial limitations (e.g., accepting a lower level of confidence that
the desired change will be detected or accepting a higher risk of falsely concluding that a change has
occurred). At times, it may even be necessary to revisit and re-scope the project objectives to reflect the
constraints (e.g., reduce the size of the restoration area or reduce the magnitude of change desired).
3.4 DATA QUALITY INDICATORS AND ACCEPTANCE CRITERIA
In Section 3.3, we discussed the importance of defining sampling objectives that specify the degree of
certainty needed to support decisions based on the monitoring data collected. We also noted that
sources of uncertainty include the sampled population—largely addressed through the project's
sampling design—as well as the data collection processes that will be used. The next step in controlling
overall error is to address the data collection process itself. This begins with identifying the quality of
data that is needed to evaluate whether or not you have achieved your sampling objectives (i.e., how
good must your data be in order to achieve the desired levels of certainty in your decisions?).
In Section 3.4.4, we present a stepwise procedure for determining data quality acceptance criteria and
provide an example application of that procedure. Before doing so, however, it is important to review
the types of data being collected (Section 3.4.1) and how acceptance criteria for different data quality
indicators are described (Sections 3.4.2 and 3.4.3). The discussion builds on the text in Section 2.5,
where we introduced precision, bias, accuracy, representativeness, comparability, completeness and
detectability as commonly accepted data quality indicators (DQIs) for environmental monitoring data
(see Exhibit 2-3 for definitions of each term). In this section, we discuss acceptance criteria as
performance goals for individual DQIs.
The practice of using DQIs to set data quality acceptance criteria is well established for data generated
using laboratory methods for chemical and physical analysis of environmental samples. In contrast, the
use of DQIs has been less widely adopted for data that are generated primarily using methods of visual-
and/or aural-assessment (observer-determined) and best professional judgment. We believe that
project planning teams can and should develop acceptance criteria for the collection of all data,
including observer-determined data, as a means of ensuring and documenting that decisions are
based on acceptable levels of measurement error; the remainder of this section provides guidance for
doing so. Data quality acceptance criteria for laboratory measurements are discussed here only for
context and with an understanding that specific guidance is available elsewhere (EPA 2003a, 2006a;
Cross-Smiecinski and Stetzenbach 1994).
3.4.1 Types of Data
Before considering how to establish acceptance criteria for each DQI, it is helpful to review the different
types of data that are often collected in support of ecological restoration projects. For the purposes of
this guidance, and as shown in Exhibit 3-4, these types of data are categorized as:
•	Species, Taxa or Group, or Community Classification,
•	Class or Categorical Assignment,
•	Numerical Rank Assignment, and
•	Numerical Estimate.
These categories apply to all types of information typically targeted during ecological restoration monitoring, including primary or ancillary variables of interest, stable variables that can be expected to
produce the same result when measured repeatedly over a fixed period of time, and transitory
variables that are affected by stochastic processes and easily biased by the presence of an observer, the
sampling activities, or other disturbances.
Project planners should be aware of the statistical properties imposed on data when deciding how to
quantify an ecological variable (or type of data it represents), including any implied limits in
measurement precision as a result of the method used to quantify the variable (i.e., based on an observer's best professional judgment or by use of a graduated device or calibrated instrument). For example, distance measurements made using observer-based judgment and recorded as numeric rank
values (e.g., intervals of 0-10m, 10-20m) are less precise than measurements recorded as discrete
numeric values (e.g., to the nearest whole 1-meter integer), and both are less precise than continuous
numeric distance measurements (e.g., fractions of a meter) determined using a graduated measuring
tape, an optical or digital rangefinder, or global positioning system (GPS) device. In practice, an observer
or crew will often combine the use of best professional judgment and a graduated measuring device or
electronic instrument as part of a standard procedure to quantify a given variable. Examples include the
use of a straight ruler or digital caliper to determine the length of an anatomical feature in order to
distinguish between similar plant or animal species, a quadrat frame to estimate vegetation ground cover, a spherical densiometer to assess canopy cover, a Secchi disk to estimate water transparency depth, or standard reference material (e.g., a color chart) to estimate the percent of organic
matter contained in a soil sample. When such combined practices are utilized, project planners should
clearly define standard procedures to minimize the potential compounding of measurement error and
its effects on measurement precision. Appendix B, Section B.l provides additional information
regarding measurement scales and their statistical properties.
3.4.2 Acceptance Criteria for Precision, Bias and Accuracy
The DQIs of precision, bias and accuracy were discussed previously in Section 2.5. In summary:
•	Precision is a measure of the degree of agreement among data collection efforts under identical or
very similar conditions. When data collection relies upon observer-determined methods, precision is
commonly estimated by comparing the data collected independently by two or more crews or crew
members for the same ecological parameter or attribute. Within-crew precision can be estimated
for a single crew, or a single crew member, by having the crew (or crew member) collect
measurements or observations at the same sampling unit more than once. A true value is not
needed to estimate precision.
•	Bias is a measure of a systematic or persistent misrepresentation of a data collection process or
effort that results in error in one direction. Generally, bias includes a directional component
(whether the estimates are higher or lower than the true value being assessed) and a magnitude
(the amount that the estimate differs from the true value).
•	Accuracy is an evaluation of the degree of the closeness of the data to known or reference values
and includes a combination of random error (precision) and systematic error (bias) components.
(Refer to Exhibit 2-4 for a graphic display of the impacts of precision and bias on accuracy.)
Due to the difficulty in determining a "true value" for many ecological attributes or variables, values
collected by expert or QA crews often serve as surrogates to true values, and accuracy and bias are
commonly estimated by comparing results from routine crews to expert crews. This approach is
discussed in detail in Chapter 5. In some cases, it may be possible to evaluate accuracy and bias using an
independent method for a subset of sites or monitoring events that is more accurate and precise
(referred to here as a "reference method"). Although more complete or accurate methods may be
available for many ecological variables, these methods may not be practical because they are too time
consuming, too costly, or cause damage to the sampled environment. This is often the case when
sampling objectives require estimating transitory variables, such as wildlife response to habitat
restoration efforts or stable variables that require a large number of samples to be representative of the
area under study. For these circumstances, use of a reference method across a subset of sample units
can be an effective approach. Examples of how a "reference method" approach might be implemented
in different situations include the following:
•	Percent cover, density or abundance estimates: If the "standard" methods used in a particular
project involve visual estimates representing a numerical estimate of percent cover or abundance of
a given sessile species or slow moving organism (e.g., plants, mussels), then actual counts of
individuals for a smaller set of representative plots or subplots might be used as a "reference"
method to evaluate the accuracy (and precision) of the visual estimates.
•	Biotic index estimates: Estimates of biological indicators often employ sub-sampling to economize
the sampling effort required to determine species composition. For instance, aquatic
macroinvertebrates are often sampled using dip-nets as part of standard procedures for calculating
a biological index that is indicative of water quality. Methods typically require that captured
invertebrates be picked from vegetative debris and deposited in a sample vial until a certain number
of individuals are counted (e.g., 200 individuals) or for a pre-determined duration of time. All
macroinvertebrates collected in the vial are then later identified and taxonomically sorted and
counted to estimate the relative proportion or taxonomic composition of the macroinvertebrate
community in the sampled area. A reference method approach would involve repeating this
procedure for a subset of samples or sites by conducting complete counts for all taxa in the entire
dip-net sample, recording the total time required (to normalize sample effort), and comparing the
results to the estimates produced using routine methods.
• Biomarker measurements: Biomarkers and non-lethal tissue sampling methods are often used to
model environmental levels (concentrations) of industrial contaminants or exposure risk for
certain types of organisms. These non-lethal methods are preferred for large studies where the
environmental impact of wide-scale lethal sampling of organisms would create undue harm or be
detrimental to a species' population viability. In these cases, more invasive or potentially lethal
sampling methods could be used in a subset of locations to evaluate the accuracy of the less-
invasive routine method. An example would be the routine use of fish biopsy plugs for monitoring
mercury exposure coupled with targeted whole-body analysis to evaluate accuracy and bias.
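As a simple illustration of the first approach above (comparing routine visual estimates against reference counts on a subset of plots), the Python sketch below estimates bias and agreement from paired results. The paired values are invented, and relative percent difference is used as the agreement statistic, although other statistics may be appropriate.

    # Paired results for a subset of plots: routine visual estimate vs.
    # reference method (actual counts), e.g., percent cover of a target plant
    routine   = [35, 50, 20, 65, 40, 55]  # visual estimates (%)
    reference = [30, 48, 24, 60, 42, 50]  # reference counts (%)

    diffs = [r - ref for r, ref in zip(routine, reference)]
    mean_bias = sum(diffs) / len(diffs)  # signed average difference = bias estimate

    # Relative percent difference (RPD) per pair, a common agreement statistic
    rpds = [abs(r - ref) / ((r + ref) / 2) * 100 for r, ref in zip(routine, reference)]

    print(f"Estimated bias: {mean_bias:+.1f} percentage points")
    print(f"Mean RPD: {sum(rpds) / len(rpds):.1f}%")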
Exhibit 3-5 shows that acceptance criteria for precision, bias and accuracy of observer-determined
data generally consist of two components: an error tolerance specification and an expected frequency
of compliance in meeting that specification (Westfall and Woodall 2007). Error tolerance is defined as
the expected range for repeated measurements or observations, and is necessary to assess the
repeatability of a procedure. The expected frequency of compliance (or "compliance rate objective")
is dependent on how difficult it may be to achieve the desired error tolerance for a given variable of
interest, the level of confidence necessary for a particular variable, or the variable's importance in
defining restoration success. When applied together, the error tolerance objective and the
compliance rate objective provide a means for assessing precision, bias and accuracy.
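The Python sketch below shows one way these two components might be checked in practice; the tolerance, compliance rate objective, and paired routine/QA measurements are assumed values for illustration, not recommendations.

    # Acceptance criteria (assumed for illustration)
    TOLERANCE = 2.0         # repeated measurements must agree within +/- 2 stems/m2
    COMPLIANCE_GOAL = 0.90  # at least 90% of pairs must meet the tolerance

    # Paired remeasurements (routine crew vs. QA crew) at the same sampling units
    pairs = [(10.1, 9.5), (12.4, 14.9), (8.0, 7.2), (11.3, 10.8),
             (9.9, 13.1), (10.5, 10.0), (7.8, 8.3), (12.0, 11.1)]

    within = [abs(a - b) <= TOLERANCE for a, b in pairs]
    rate = sum(within) / len(pairs)

    status = "meets" if rate >= COMPLIANCE_GOAL else "does not meet"
    print(f"Compliance rate: {rate:.0%} ({status} the {COMPLIANCE_GOAL:.0%} objective)")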
Specific examples of quality acceptance criteria for these DQIs also are provided in Exhibit 3-5 for
several types of variables. For example, a project planning team may propose to assess the degree to
which a routine crew or crew member correctly assigns a forest classification type by having an expert
independently evaluate 10% of the areas assessed by the routine crew. If the expert's results confirm
the routine crew's results (i.e., 100% agreement at least 95% of the time), the crew's results are
considered to be accurate and reliable. In contrast, if the expert results confirm the crew's results only
20% of the time, the crew's results would be considered inaccurate and unreliable.
Such assessments also can be examined on the basis of individual crew members, rather than as a
whole. Imagine, for example, a project in which the expert results confirm the crew results (100%
agreement) 82% of the time. Further examination of the data indicates that a single crew member's
results are responsible for all of the non-compliant pairs. If that crew member is consistently making the
same type of mistake (e.g., consistently misinterpreting the scale on an analog instrument), the crew
member is contributing to bias in the data. If that crew member is making different types of mistakes
•	Use multiple routine field crew members (e.g., double-observer) to produce paired datasets for
assessing precision.
•	Pair routine field crew members with experts to collect paired datasets that can be used to assess
accuracy and bias.
•	Collect duplicate samples or retain samples to allow for repeated measurements to estimate
accuracy, bias and precision.
•	Collect voucher specimens or samples to support observer-determined data, serve as standard
reference materials, and/or facilitate crew member training.
•	Periodically assess observers' ability to meet acceptance criteria.
3.4.3 Acceptance Criteria for Representativeness, Comparability, Completeness and Detectability
Four additional commonly accepted DQIs, also discussed in Section 2.5, are representativeness, comparability, completeness and detectability.
Representativeness is determined by the degree to which data represent the characteristic of a
population being assessed (EPA 2002), and planning teams should strive to ensure sampling designs are
based on sampling units that represent the population of interest. Failing to do so can have devastating
consequences on the utility of the data collected, regardless of how precise, unbiased and accurate the
data may be. Selection of unrepresentative streams in the Pacific Northwest, for example, was shown to
be a contributing factor in the subsequent collapse of the salmon stocks, as only high quality streams
that were not representative of all streams in the region were selected for monitoring. Unaware of this
inadequacy in their sampling design, fishery managers overestimated the overall productivity of the
region's salmon stocks, leading to decisions that ultimately failed to protect the resource (Siitari, Martin, and Taylor 2014). Similarly, if a certain plant species is known to be uncommon in a project area but the
abundance recorded is high, those data should be confirmed through the collection of voucher
specimens or additional sampling within the overall study area (Stapanian et al. 2016). Plant images also
can be used for authentication by experts and can be sent as attachments to text messages to expedite
identification or verification of the species.
Comparability expresses the confidence with which the data can be compared to or combined with
other data collected using similar procedures. Within a given project, comparability among crew
members and over time can be achieved by thorough training and strict adherence to standard
operating procedures (SOPs), as described in Chapter 4.
•	In some cases, it may be possible to combine datasets that were generated using different methods,
but comparability of the methods should be carefully assessed before doing so. Field calibration
studies that include evaluations of the quality (e.g., precision and accuracy) of data generated by
multiple methods are a useful means of evaluating comparability. One such example is an
interagency calibration study (described in Section 4.4.1) that was designed to compare Lake Erie
fish abundance estimates generated by multiple agencies using different SOPs and trawl vessels
(Tyson et al. 2006). Results of that study provide a means for adjusting data collected by some of the
agencies based on a procedure designated as the "standard" while also preserving each agency's
ability to continue using procedures that are consistent with their own historical data.
•	Another important consideration for ecological restoration projects is the comparability of
taxonomic data. In general, project planning teams can promote comparability by using the most
current and regionally accepted taxonomic systems. In some cases, however, it may not be possible
to crosswalk historic taxonomic data with current keys, particularly when certain taxa have been
split into multiple groups; in such circumstances, datasets need to be maintained at the lowest
possible taxonomic resolution that is consistent between historic and current data.
•	For long-term monitoring projects, the project planning team needs to consider and implement
strategies that will allow older baseline monitoring data to be compared with monitoring data
gathered using updated methods or taxonomic systems. In the context of long-term monitoring
projects, planners also may discover that historical datasets used to define baseline conditions have
become less comparable due to changes in the stability of conventional indicators of condition or
health as a result of climate change or other impacts on the ecological system of study. For example,
measures such as the floristic quality index (FQI) and other indices of biotic integrity (IBIs) developed
in relation to existing disturbance regimes may become less comparable (and potentially less
relevant) moving forward in time.
Completeness is the proportion of collected data that can be considered valid and usable relative to the
amount required in the sampling plan, with the remainder considered as missing. The concepts of
"valid" and "usable" may vary depending on project objectives and the user of the data. Sampling
designs often require a certain number of samples to ensure that sampling objectives related to Type 1
and Type 2 errors can be met. Logistical and safety considerations or unplanned events (e.g., flooding)
can limit the ability to collect all monitoring data as planned. In other words, (1) a minimum number of
results are often required to provide the statistical power needed to support restoration project
decisions, and (2) logistical, safety, or other considerations may prevent the required number of samples
from being obtained. In addition to the amount of data that could not be collected or used,
completeness also can be affected by the nature of invalid or missing data and whether they are
"missing at random." A large number of missing observations from the same subarea or time of day
could limit the representativeness of the data, even if the overall completeness goal (based on total
number of samples and measurements) was met. Project planning teams should mitigate these types of
problems by pre-determining contingency measures that can be taken to ensure that the minimum
amount of useful data needed to support decision making is obtained. Examples might include
proactively identifying alternate sampling strategies or one or more "backup plots" to be sampled in
case planned study plots cannot be accessed.
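Completeness checks of this kind are straightforward to automate. In the Python sketch below (subarea names, planned counts and collected records are hypothetical), completeness is computed overall and by subarea, since a shortfall concentrated in one subarea can indicate data that are not missing at random.

    from collections import Counter

    planned_per_subarea = {"north": 20, "south": 20, "east": 20}

    # Valid, usable records actually collected (subarea of each record)
    collected = ["north"] * 19 + ["south"] * 18 + ["east"] * 9

    counts = Counter(collected)
    total_planned = sum(planned_per_subarea.values())
    overall = len(collected) / total_planned
    print(f"Overall completeness: {overall:.0%}")

    for subarea, planned in planned_per_subarea.items():
        rate = counts[subarea] / planned
        flag = "  <-- investigate" if rate < 0.80 else ""
        print(f"  {subarea}: {rate:.0%}{flag}")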
Detectability is a measure of the sensitivity and specificity of the sampling design, measurement
procedures, instrumentation and/or data collection personnel in detecting true differences in a target
variable at ambient levels or when the measurement or observation of a target variable is dependent
upon detecting the true occurrence of a rare, cryptic or secretive organism. Detectability is typically
expressed as a minimum absolute value ("lower detection limit") or as an estimate of probability
-------
measurements or observations (Kelly and Walters 2014). It is important to understand the impacts of such
data manipulations on your project objectives. For these reasons, we recommend that users investigate
existing data carefully to fully understand what is being reported; in many cases, this may require
contacting the authors or managing agency for additional information.
Planning how you will use existing data is no less important than planning how you will collect new data,
and the remainder of this section describes strategies to consider. While they may require a greater
investment up front, these strategies are designed to reduce overall project costs by avoiding resources
wasted in gathering existing data that, at best, are not useful and, at worst, can lead to inaccurate
conclusions or inappropriate actions. An added benefit is that these strategies can increase the
comparability and transparency of your project (Kelly and Walters 2014).
Identify and document your project objectives. Note that this strategy (along with many others) also is
required when planning to collect new data (see Sections 3.1 and 3.2). It is not possible to identify and
gather the data you need without first knowing your project objectives, including any decisions that
need to be made and the risks of reaching an inaccurate conclusion.
Determine and document the type of data you need and where you might find them. This seemingly
simple activity actually involves a number of steps to ensure that your efforts are focused only on data
that will directly support your project needs. These include:
•	Data Needs: Prepare a detailed list of the specific data elements needed to support the project
goals and objectives, and describe the scope of each element. For example, if you anticipate needing
data that reflect a full range of conditions (e.g., multiple seasons, multiple species), include such
details in your project plan. If your project includes development of one or more databases to
capture existing data from other sources, identify and define each database field. The intent is to
ensure that all individuals involved in data gathering and handling understand exactly what data are
needed and to avoid misunderstandings about what a particular data element means.
•	Potential Data Sources: Identify potential sources of existing data that might support your
project objectives. Examples include topographical maps, photographs, land use databases, data
from studies previously conducted in the area of interest, meteorological data, published literature
sources, etc. If literature searches are required, describe the search engines that will be used and
key search terms. If databases will be used, describe each database in terms of who developed and
operates it, the type of data it contains, and any search/query parameters to be used when
extracting data from that source. Similarly, describe any other potential sources of data and the
rationale for considering or using them. If you plan to obtain data by contacting specific individuals
or organizations, document these plans.
•	Potential Data Constraints: Identify any legal, security or other restrictions that might limit your
application of or access to the data you need, and determine if these constraints can be resolved.
Geospatial and remotely sensed data typically are made available with associated metadata
documentation that can provide very detailed descriptions. Metadata often define use and/or
application constraints that may include statements qualifying any limitations based on temporal
variation, sensor detection limits, atmospheric or location error, or an inappropriate maximum
scale of application. Other constraints associated with geospatial or remotely sensed data may
relate to lack of "ground truthing" or other applications (e.g., simulation modeling) used to
validate model results.
•	Criteria for Selecting Data Sources: If you are able to identify a number of potential data sources,
define criteria that you will use to determine which sources are most likely to meet your needs
and prioritize their use. These source selection criteria will vary according to the unique needs of
your project; examples include the comparability, reliability, applicability, format, access
constraints, or even the quantity of data available in the candidate source. Regardless of the
criteria you choose, explain the rating system that will be applied. For example, a project team
may choose to use the age of the source as a criterion for applicability, with a qualitative rating
scale in which data that are less than 3 years old are rated high, data between 3 and 8 years old
are rated medium, and older data are rated low because they may be less representative of
current conditions. As another example, the project team may apply a rating approach to the
format of the data source, with electronically available sources rated higher than sources
containing data that must be entered manually. (A minimal scoring sketch illustrating this kind of rating system appears after this list.)
•	Data Value Selection Strategy: Once you have screened potential data sources, you may find that
several of the sources yield values for the data element(s) you need, only one source provides
values for the data elements of interest, no sources yield values for the data elements of interest, or
some sources address multiple data elements of interest. Therefore, it is helpful to define and
document criteria and procedures that your project team can use to determine which value(s) are
most appropriate for use. For example, if water quality measurements are available for your project
site, and these data represent state monitoring results as well as volunteer monitoring data, will you
prioritize one set of values over the other or capture all of these data?
•	Resolution of Data Gaps: Projects that rely on existing data are often cyclical because it is difficult to
gather all the data needed in a single step. This is true whether the existing data are being used for
project planning purposes (e.g., to determine baseline conditions or determine project scope) or
during project implementation (e.g., to fill gaps in data that will be collected by the project team).
Initial data gathering efforts often yield important information, but also (1) leave gaps for data that are
not available or could not be located and/or (2) reveal additional data needs that were not previously
considered. For this reason, it is useful to establish a process that the team can follow to identify and
address those gaps. Doing so during the planning stage will help the project team prepare for rather
than respond to unanticipated data needs during or after project implementation.
•	Documentation and Recordkeeping: We recommend that you also plan how you will document the
results of your source selection process, including any sources that you decided against using and
the rationale for not using them. Failure to do so can lead to accusations of "cherry picking" the
data. This is especially important for projects that may face legal challenges and federally-funded
projects that are subject to Data Quality Act requirements.
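As a brief illustration of how such selection criteria might be made explicit and reproducible, the Python sketch below scores hypothetical data sources against the age and format criteria described earlier in this list. The three-tier age thresholds follow the example in the text; the numeric weights, source names and ages are illustrative assumptions.
```python
def rate_age(age_years: float) -> str:
    """Qualitative applicability rating based on source age (per the example above)."""
    if age_years < 3:
        return "high"
    if age_years <= 8:
        return "medium"
    return "low"

def rate_format(fmt: str) -> str:
    """Electronic sources rate higher than sources requiring manual data entry."""
    return "high" if fmt == "electronic" else "low"

# Hypothetical candidate sources: (name, age in years, format).
sources = [
    ("State WQ portal", 2, "electronic"),
    ("1998 thesis appendix", 21, "paper"),
    ("County land-use report", 5, "electronic"),
]

SCORE = {"high": 2, "medium": 1, "low": 0}  # assumed numeric weights
for name, age, fmt in sources:
    total = SCORE[rate_age(age)] + SCORE[rate_format(fmt)]
    print(f"{name}: age={rate_age(age)}, format={rate_format(fmt)}, score={total}")
```
Writing the rating system down in this form also satisfies the documentation point above: the criteria, weights and resulting scores can be archived alongside the source selection records.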
Identify and document how you will manage the data you gather. In most cases, data gathered from
different sources reflect a variety of formats, reporting conventions, and other differences that require
manipulations to make the data suitable for use in your project. The project planning team should
identify standard conventions and data handling procedures in advance to ensure that all project team
members understand and implement what is necessary to achieve project objectives.
• Standardization of Data Elements: Data gathered from multiple sources are often presented in
different units and may be associated with a data element code or identity based on different
nomenclature systems. Examples include, but are not limited to, the following:
o Some datasets may present spatial data in degrees, minutes and seconds, while others present
the same information in decimal degree format. Additionally, latitude and longitude values can
be based on different geodetic systems. The most widely used is World Geodetic System (WGS),
used by the Global Positioning System and last updated in 2004 (WGS 84, also known as WGS
1984, EPSG:4326). Earlier schemes included WGS 72, WGS 66, and WGS 60. Positional data
extracted from existing sources that predate implementation of the 2004 standard will likely
need to be converted to ensure data comparability, as will data extracted from sources that use
an entirely different system. (A brief conversion sketch appears after this list.)
o Taxonomic nomenclature is a common type of data element in ecological restoration. In order
to standardize the representation of species taxonomic identification within and across data
management systems, the use of standardized numeric codes (e.g., Taxonomic Serial Number
maintained by the Integrated Taxonomic Information System (ITIS)), alpha-numeric codes (e.g.,
Symbol Code maintained by the U.S. Department of Agriculture (USDA) Plant Database), or
alpha codes (e.g., "alpha-code" listing maintained by the American Ornithologists' Union (AOU))
can facilitate accurate comparisons and data element documentation. In addition to
standardization of taxonomic codes, project teams also may need to consider inconsistencies in
taxonomic resolution. Example considerations include using the lowest common resolution
among available datasets, excluding data that are not at the appropriate resolution, or using
multiple resolutions with appropriate adjustments to indicators or metrics.
o Chemical data are often reported using different nomenclature, including trade names,
International Union of Pure and Applied Chemistry (IUPAC) names, Chemical Abstract Service
(CAS) names and CAS numbers. Project teams should standardize nomenclature to harmonize
organic and inorganic chemical data that may be gathered and used in the project. For example,
including a CAS number field in every record containing chemical data provides a means for
comparing data for the same compound from different sources, even when each source uses a
different naming convention.
o Project teams also should identify the standard units that will be used for each data element
and the specific processes that will be used to make any necessary conversions or comparisons.
In doing so, project teams should consider simple imperial/metric conversions (e.g., ounces to
grams) as well as the practical ability to convert all units and element identifiers to a common
standard. For example, some chemical results may be reported in wet weight, while others are
reported in dry weight; these results are not directly comparable without additional information
that may or may not be available.
•	Data Capture: Planning activities should include consideration and documentation of the processes
that will be used to manually enter data obtained from existing sources (i.e., data entry), and/or to
merge or upload data from existing electronic sources into the project database. Both types of
activities provide ample opportunities for error, including transcription errors associated with data
entry and large-scale errors that can arise during file transfers when delimiters are not properly
placed. Data gathering activities should include steps to both mitigate these problems and verify
that errors have not occurred.
•	Data Storage and Manipulation: Project planning teams should identify how existing data and its
associated metadata will be stored, who will be responsible for access and maintenance, and how it
will be incorporated with other project data. This includes documenting the hardware, software and
personnel requirements for managing and incorporating the existing data into the project, and the
quality management strategies that will be employed to ensure the integrity of the data is not
compromised during data storage, access/retrieval, updates or other manipulation. Additional
guidance regarding management of project data is provided in Appendix A.
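As a brief illustration of the coordinate standardization discussed above, the sketch below converts degrees-minutes-seconds values to decimal degrees and reprojects a legacy WGS 72 position onto WGS 84 using the third-party pyproj library (an assumption; other geodetic libraries would serve equally well). The coordinates are illustrative.
```python
from pyproj import Transformer  # third-party; pip install pyproj

def dms_to_decimal(degrees: int, minutes: int, seconds: float, hemisphere: str) -> float:
    """Convert degrees/minutes/seconds plus hemisphere to signed decimal degrees."""
    dd = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    return -dd if hemisphere in ("S", "W") else dd

lat = dms_to_decimal(41, 52, 44.0, "N")   # 41.878889
lon = dms_to_decimal(87, 38, 10.0, "W")   # -87.636111

# Reproject a legacy WGS 72 position (EPSG:4322) onto WGS 84 (EPSG:4326);
# always_xy=True enforces longitude, latitude axis order.
to_wgs84 = Transformer.from_crs("EPSG:4322", "EPSG:4326", always_xy=True)
lon84, lat84 = to_wgs84.transform(lon, lat)
print(f"WGS 84 position: {lat84:.6f}, {lon84:.6f}")
```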
Incorporate data quality validation and data analysis. After all data have been screened, gathered and
entered into the database, it is important to examine the overall dataset to confirm it supports the
project objectives, and document any quality issues associated with individual or overall results. In some
cases, the gathered data may not meet your original quality objectives, but are the only data available. If
such a situation arises, it is important to document the data limitations and their associated impacts on
data analysis and the resulting decisions or conclusions (e.g., in planning documents, internal project
files, data review files and final reports). Chapter 6 provides additional information on data review and
documentation strategies.
The project team also should determine and document exactly how the existing data will be used to
support project decisions. This includes identifying and explaining what calculations will be performed
(e.g., mean tree diameter, % change in crop cover), the exact data that will be used to perform these
calculations, and the methodology for making the calculations. If calculations will be performed after
elimination of data outliers, the project plan should define how outliers will be determined and the basis
for their exclusion.
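An outlier rule is easiest to apply consistently when it is written down as an explicit procedure. The Python sketch below applies a common 1.5 × interquartile-range screen before computing a mean tree diameter; the rule, thresholds and sample values are illustrative assumptions, and any actual rule should be defined in the project plan in consultation with a statistician.
```python
import statistics

def iqr_fences(values, k=1.5):
    """Return (low, high) outlier fences using the common 1.5 x IQR rule."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

diameters_cm = [18.2, 19.5, 20.1, 21.0, 21.4, 22.3, 23.0, 58.7]  # illustrative
low, high = iqr_fences(diameters_cm)
kept = [d for d in diameters_cm if low <= d <= high]
excluded = [d for d in diameters_cm if not (low <= d <= high)]

print(f"Mean diameter, all values:    {statistics.mean(diameters_cm):.1f} cm")
print(f"Mean diameter, outliers out:  {statistics.mean(kept):.1f} cm")
print(f"Excluded values (documented): {excluded}")
```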
Assess the process. Finally, the project team should consider mechanisms that can be used to verify that
data are located, extracted and validated in accordance with the project plans. For example, it may be
helpful to include periodic assessments of any existing data that have been selected by the team to
verify that decisions regarding the utility of the potential data sources, methods for documenting the
data sources, and any data quality issues are appropriate. Similarly, it may be helpful to implement
manual or automated queries that are designed to help identify errors that might arise from electronic
data transfer processes.
CHAPTER 4 PREPARING FOR DATA COLLECTION
In Chapter 3, we focused on QA strategies for developing a monitoring program designed to evaluate
the effectiveness of ecological restoration activities and help guide future decisions. These strategies
begin with carefully defining specific, measurable, achievable, results-oriented and time-sensitive
(SMART) project objectives that support the overall restoration project goals, followed by development
of sampling (monitoring) objectives and a monitoring program designed to determine if the desired
change has occurred within acceptable limits of uncertainty (error). We noted that sources of
uncertainty include the sampled population itself (largely addressed through the monitoring program's
sampling design) and the data collection processes used. We also recommended that project planning
teams identify data quality indicators (DQIs) and corresponding acceptance criteria as QC tools to define
and control measurement error. In this chapter, we build on the planning foundation laid in Chapter 3
by describing a suite of QA strategies that project planning teams should address before crews are
deployed to the field. These include:
•	identifying, developing or modifying standard operating procedures (SOPs) that are tailored to the
needs of the monitoring program (Section 4.1);
•	verifying that personnel who will be responsible for conducting field activities have received training
and demonstrated competency in the activities they will perform before they are allowed to gather
data without supervision (Section 4.2);
•	addressing site permit and other field logistics needs before the sampling season is scheduled to
begin (Section 4.3); and
•	making arrangements for analysis of field samples by laboratories that have demonstrated
competency in performing the required determinations (Section 4.4).
4.1 STANDARD OPERATING PROCEDURES
Much of the data collected in support of ecological restoration projects is based on best professional
judgment, which is prone to measurement error due to subjectivity associated with observer
assessments, and the limited ability to quantify and/or control many environmental factors. For
ecological restoration monitoring to be effective, SOPs need to be accurate, complete, concise,
understandable and available to all project staff prior to data collection and handling activities. The
importance of well-written SOPs cannot be overstated; failing to provide these documents can waste
significant time and resources. Similarly, if SOPs are provided but the instructions are not
appropriate, understood and followed consistently, the resulting data will likely be inaccurate, with
significant impacts on the quality of results, conclusions and decisions.
In practice, SOPs may alternatively be identified as manuals, methods, protocols, work instructions or
other names. These documents play a critical role in maximizing the quality of ecological restoration
monitoring data by:
•	promoting efficiency;
-------
community and the individual species of concern, and (4) applicable to anticipated ranges in
environmental conditions and associated spatial and temporal variability. Project planners may use
existing SOPs that have been previously developed for national, regional, or project-specific
applications, and adapt those SOPs as necessary to address the specific needs of their project.
Comparability: Many of the variables measured or observed in ecological restoration projects are
procedurally defined, meaning that each value is tied directly to the procedure used to collect it. For
instance, model results for fish population estimates can differ significantly depending upon whether
fish are marked and recaptured using electrofishing or netting. Even slight variations within each
procedure (such as the voltage used, the shocking time in electrofishing surveys or net placement) can
result in substantial variability. To ensure data comparability within a project, SOPs must be adapted or
developed, and followed consistently throughout the project. Strict adherence to SOPs across different
field crews, different field sites, and different seasons or years is important in meeting comparability
needs. Well-documented SOPs, including those used to select sampling site locations, can assist in
evaluating the comparability of data between different projects; an evaluation of the SOPs used can
determine whether the results collected can be appropriately compared.
Completeness: SOPs for field and laboratory activities typically include step-by-step procedures for
collecting samples or data, as well as forms for recording observations, measurements, and ancillary
information. Together, these are important tools in maximizing the percentage of collected data that
will be considered valid and usable. The procedures are designed to ensure collection of representative
and accurate data when followed correctly, and the reporting forms are designed to facilitate accurate
and consistent capture of data, including qualifying or ancillary information such as the date and time of
collection, weather conditions, and vegetative status of target plant species (e.g., flowering, fruiting,
senescence or die-back). Thus, when used correctly, SOPs can increase the chances of obtaining a
complete set of valid data consistent with the monitoring design.
Precision, Bias and Accuracy: Field and laboratory SOPs should include detailed, step-by-step procedural
instructions for all activities associated with collecting, handling and shipping samples, making
measurements and observations, and documenting results. Before using them in a project, the
procedures should be tested to verify that they are clear enough to be understood and applied
consistently by different people with different levels of experience. Such testing can help identify areas
that require further clarification to minimize bias and enhance precision and accuracy among project
personnel. SOPs also should (1) document the acceptance criteria that have been established, and (2)
identify the QC checks (see Chapter 5) or samples that will be used to evaluate whether the data
gathered meet the specified criteria.
Detectability: In many cases, the details provided in an SOP can have a direct impact on detectability.
Specific criteria for identifying a species, for example, will impact species counts. If the criteria include
descriptions or definitions for species identification that are either too limited or too broad, resulting species
counts could include false negatives (Type 2 error, or β) or false positives (Type 1 error, or α).
Planning teams may decide to modify SOPs based on lessons learned. For example, an SOP for collecting
understory vegetation in the USFS' National Forest Health Monitoring program originally included
estimating the proportion of cover from trees > 3 meters tall in a 1-m² quadrat. After the pilot season, it
was determined that the precision among crews was unacceptable, and the desired information could
be derived from other, more reliable measurements that were being collected at the same site.
Removing this measurement saved considerable time and money and provided a more reliable dataset
(Stapanian, Cline, and Cassell 1994, 11.1-11.44; Gartner and Schulz 2009, 55-78).
Similarly, planning teams may be confronted with a decision to switch to a newer, cheaper, faster or
more accurate procedure. While the decision to switch has obvious advantages, the impact of such
changes on the ability to evaluate long-term trends in the data should be carefully considered. Any
changes in data collection methodology should be documented and, if possible, a comparison study
between the old and the new procedures should be conducted to allow for interpretation of trends and
changes using data gathered with both the original and new methodology. Comparison studies are
recommended, particularly in cases where long-term use of a given procedure exists or is anticipated.
These studies can involve collecting data using both the new and old procedures for one or multiple field
seasons. While this will increase the costs and level of effort in the short term, it can provide huge
benefits in the long term. Before deciding to modify data collection procedures, planning teams should
consult with a qualified statistician and potential data users. In addition, and as noted above, a revision
history page should be included to summarize the changes made so that future data users can easily
identify and evaluate the impact of any changes that have been made over time.
In large-scale projects involving multiple agencies or parties, it may be impractical for the activities of all
participants to conform to the same SOP. In certain cases, results can be "corrected" or adjusted based
on a designated standard. An example is provided below.
•	In western Lake Erie, catches from annual bottom trawl surveys conducted by several agencies are
used to estimate lake-wide abundances of yellow perch (Perca flavescens) and walleye (Sander
vitreus) (Forage Task Group 2013). Each agency uses a different vessel and different SOPs (e.g.,
combination of net configuration, trawling speed, and time the trawl is on bottom) that reflect
agency-specific procedures that have been used for decades. For this reason, combining results is
difficult, and an interagency calibration study was performed to address this challenge (Tyson et al.
2006). As a result of the study, one vessel was designated as the "standard" vessel, which allows
catches of yellow perch and walleye from the other vessels to be adjusted according to the catch
from the standard vessel. Consistency with each agency's long-term data also is maintained because
the individual SOPs used by each agency were not changed.
Even in smaller scale projects (e.g., stream reach, coastal shoreline, estuary), certain circumstances may
prevent collaborators from complying with the same standards and procedures. Two examples are
provided below.
•	Restoration of estuarine ecosystems that border two or more political boundaries (e.g., national,
state, tribal, county) may require collaborators to comply with policies regarding potential impacts
of sampling procedures that risk incidental take of regulated species or disturbance of historic
landmarks or archeological sites.
-------
•	The Great Lakes Inventory and Monitoring Network website,
http://science.nature.nps.gov/im/units/glkn/monitor/index.cfm, posts SOPs covering numerous
topics, including sampling designs, field preparation, hiring and training data collectors, locating
sampling points, conducting bird counts, monitoring vegetation, and data management and reporting.
•	EPA manages several surveys of the nation's aquatic resources; each survey uses standardized field
and laboratory methods designed to result in unbiased estimates. Methods and manuals used in
each of these surveys can be found at the following links:
o https://www.epa.gov/national-aquatic-resource-surveys/ncca for the National Coastal
Condition Assessment
o https://www.epa.gov/national-aquatic-resource-surveys/nrsa for the National Rivers and
Streams Assessment
o https://www.epa.gov/national-aquatic-resource-surveys/nla for the National Lakes Assessment
o http://water.epa.gov/type/wetlands/assessment/survey/index.cfm for the National Wetlands
Condition Assessment
•	Environment Canada has compiled a suite of protocols designed to help research teams detect,
describe, and report on ecosystem changes and promote standardized study designs, sampling
procedures, sample and data analysis, and reporting. These protocols are available at
https://ec.gc.ca/faunescience-wildlifescience/default.asp?lang=En&n=E19163B6-1#Ecologicalmonitoringprotocols.
•	The U.S. Geological Survey (USGS) Forest Vegetation Monitoring Protocol for National Parks in the
North Coast and Cascades Network (2009) (https://pubs.usgs.gov/tm/tm2a8/pdf/tm2a8.pdf) includes
29 SOPs covering everything from records management through project data and posting, and
includes SOPs for hiring, orienting and training personnel; establishing and marking monitoring
plots; preparing information packets and equipment; recording visit details; measuring and
mapping; handling field forms; entering, verifying, reviewing and certifying data; developing
metadata; and debriefing and close out.
•	Wisconsin Frog and Toad Survey.
http://wiatri.net/inventory/frogtoadsurvey/Volunteer/PDFs/WFTS_manual.pdf
•	Field and Laboratory Methods for using the MAIS (Macroinvertebrate Aggregated Index for Streams)
in Rapid Bioassessment of Ohio Streams.
http://www.epa.state.oh.us/portals/35/credibledata/references/MAIS_training_manual_2007.pdf
•	Ohio Rapid Assessment Method for Wetlands Quality (Mack 2001).
•	Xerces Society Bee Monitoring Protocol. http://www.xerces.org/wp-content/uploads/2014/09/StreamlinedBeeMonitoring_web.pdf
•	Published Bird Survey Protocol.
http://images.library.wisc.edu/EcoNatRes/EFacs/PassPigeon/ppv59no03/reference/econatres.pp59n03.rhowe.pdf
4.2 TRAINING AND CERTIFICATION OF FIELD PERSONNEL
Strategies to control the quality of ecological
restoration project data begin in the project
planning phase and continue throughout all phases
of the project's lifecycle. Data quality is particularly
sensitive during the data collection phase when
personnel are involved in field sampling,
measurement and observation activities; these
personnel must have the basic skills and training
necessary to ensure they are able to safely and
accurately perform assigned tasks. Inadequately
trained individuals can increase data variability and
inflate measurement error, significantly impacting
project decision making and outcomes. This is particularly important for observer-determined data that
do not rely on a calibrated piece of equipment. Human observation can be subjective and variable by its
nature, and proper and continuous training is critical to ensure that data are collected based on
objective assessments. Even calibrated instruments can yield improper results when used by
insufficiently trained individuals. Ensuring that field crews are properly trained will increase project
efficiency and mitigate unanticipated adverse impacts on schedules and budgets. In general terms,
appropriate training consists of instruction, practice, and skills evaluation and/or certification prior to
data collection. If crew capabilities are not periodically assessed throughout data collection, initial
training should be supplemented by periodic refresher or "booster" training. Details regarding these
aspects of training are provided throughout this section.
4.2.1 Crew Qualifications
Many ecological restoration projects require field crew members to have a level of existing expertise
along with project-specific training. For example, bird surveys often require individuals who are experts
in detecting and identifying birds using refined visual and aural skills under less than ideal conditions;
studies have shown that human bias is one of the most noteworthy factors affecting the accuracy of
trend estimates for songbird populations (Kepler and Scott 1981, 366-371; Baker and Sauer 1995),
demonstrating the need for a well-trained and experienced field crew.
Some field data collection activities may require less expertise and skill than others. In forest surveys, for
example, assessing crown density (how much light is blocked by the tree crown) is challenging, and
focused, extensive training is required. In contrast, assessing crown dieback (estimate of mortality of
branches with fine twigs) is often easy for field crews, and the percentage of such observations falling
within a project's error tolerance limits is usually high. The USFS Forest Inventory and Analysis (FIA) National
Assessment of Data Quality for Forest Health Indicators concluded that, while training and protocols
were producing acceptable levels of repeatability for crown dieback, either more training was needed or
the acceptance criteria needed to be reevaluated. Exhibit 4-4 shows the results of 2,221 data records
that were included in this assessment, along with corresponding statistical results.
[Photo: Wetland Vegetation Sampling (Environment Canada)]
-------
for trainees to learn the rationale behind the SOPs so that they may better adapt to slight changes in
circumstance.
Once trainees have had a chance to implement SOPs and practice data collection in the field, project
goals and objectives should be reviewed again to provide the crew with a chance to ask questions based
on what has been learned and experienced. As noted in Section 4.2.3, effective training programs
include sessions in both the classroom and the field. For ecological restoration projects, training should
provide individuals with information and skills pertaining but not limited to:
•	site access and establishing plot boundaries;
•	species to be observed, counted, and described;
•	variables to be measured or observed;
•	methods and strategies for collecting information;
•	the rationale supporting collection strategies;
•	maintenance and calibration of instruments and equipment;
•	methods for collecting samples or materials for further identification by experts or laboratories;
•	data recording/entry processes;
•	use of standardized data sheets;
•	use of handheld equipment, including GPS and PDRs;
•	procedures for handling data (including samples and specimens) in the field; and
•	data backup, entry, verification and validation.
Training should be tailored to address region- and project-specific conditions to provide individuals with
a clear sense of what to expect in the field. If the project covers a broad geographic area, training can be
tailored to each specific region within the area.
Finally, successful data operations require a culture of safety in addition to a culture of quality.
Therefore, all training should include discussions of safety. Trainers should stress the importance of
creating a "safety culture" in the field, both at the field sites and during travel to and from the sites.
Trainees should learn how to avoid, react to and respond to unsafe conditions; they also should learn
how to assess risk, and be briefed on what conditions constitute a safety risk, such as working late at
night or working for extended shifts. Although the primary purpose of safety training is to protect crew
members, it is worth noting that safe conditions have a significant impact on ensuring data quality;
unsafe conditions can easily lead to collection of incomplete, non-representative and/or inaccurate
information. Examples of field-related risks that could adversely impact data collection include boat and
water safety, vehicular safety, sun exposure and dehydration, hunting activities, hypothermia, allergic
reactions to plants or stinging and biting insects, and wild animals.
4.2.5 Certifying Crew Competency
Before beginning actual field work, each new crew member, no matter the level of experience, should
be able to demonstrate their ability to collect data of sufficient quality under actual field conditions.
Training should include evaluations at the end of each session to rate participants' levels of
understanding and proficiency. Various metrics, such as the use of calibration plots and hot checks (see
Section 5.2) can be used to assess field readiness. During development of forest health indicators for the
FIA Program, for example, crew members were certified in understory vegetation when their results
agreed with the expert at least 90% of the time on species identification and within 20% on vegetation
cover at least 80% of the time (Stapanian, Cline, and Cassell 1994; Stapanian unpublished data).
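Certification criteria such as the FIA example can be evaluated mechanically once trainee and expert results are paired. The Python sketch below applies the two thresholds quoted above (at least 90% agreement on species identification; cover estimates within 20 percentage points at least 80% of the time); the paired records and species codes are fabricated for illustration, and the function is a hypothetical helper rather than part of the FIA program.
```python
def certify(records, id_threshold=0.90, cover_tol=20.0, cover_threshold=0.80):
    """Apply FIA-style certification thresholds to paired trainee/expert records.

    Each record is (trainee_species, expert_species, trainee_cover, expert_cover).
    """
    n = len(records)
    id_rate = sum(1 for t, e, _, _ in records if t == e) / n
    cover_rate = sum(1 for _, _, tc, ec in records if abs(tc - ec) <= cover_tol) / n
    passed = id_rate >= id_threshold and cover_rate >= cover_threshold
    return passed, id_rate, cover_rate

# Fabricated paired observations: (trainee species, expert species, % cover each).
records = [
    ("ACSA3", "ACSA3", 35.0, 30.0),
    ("QURU", "QURU", 12.0, 15.0),
    ("FAGR", "FAGR", 8.0, 40.0),    # cover estimate outside the 20-point tolerance
    ("TSCA", "TSCA", 22.0, 20.0),
    ("BEAL2", "BEPA", 5.0, 5.0),    # species misidentified
]
passed, id_rate, cover_rate = certify(records)
print(f"ID agreement {id_rate:.0%}, cover within tolerance {cover_rate:.0%}: "
      f"{'certified' if passed else 'retrain and retest'}")
```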
Monitoring of Great Lakes coastal wetlands includes the formal training and certification of all new field
personnel regardless of prior experience. For example, the anuran and bird field crews are trained and
tested to minimize errors in species identification, data entry and the locating of sampling points with
GPS receivers (Uzarski et al. 2017). As another example, crew members who conduct point counts in
Great Lakes Network parks are required to complete an online certification program developed by the
University of Wisconsin-Green Bay, which allows (1) trainees to certify their skills in identifying birds by
sight and sound, and (2) the parks to verify that certification.
Some projects may require that each field crew responsible for collecting observer-determined data on
vegetation or animal species include at least one expert skilled in the field of botany or the specific
animal taxon or species. Although field crew members with prior experience will not need as much
training as new crew members, they will need to be trained regarding the project goals, objectives,
SOPs, data acceptance criteria, and location. They also will need to demonstrate sufficient capability for
collecting data for the specific project. The USFS often uses hot checks and calibration plots (see Section
5.2) during and/or as a last step of training to evaluate crew capabilities prior to actual data collection.
In addition to a hands-on, on-site evaluation and confirmation of crew capabilities prior to actual data
collection, it is recommended that all crew members participate in periodic refresher training and
recertification. Experienced crew members can assist as additional instructors with less experienced
trainees, giving them an opportunity to continue practicing observational skills such as distance
estimation, working on identification of birds by call notes and partial songs, and improving species
identification and characterization skills. Ongoing training and certification should also be linked to QC
checks that are a part of the overall project QA strategies. Hot checks, cold checks, and blind checks are
mechanisms for ensuring that data meet the data quality acceptance criteria, but these checks can also
provide opportunities for ongoing training. Immediate feedback from a QA crew during a hot check can
reinforce training principles. Results from cold and blind checks can assess training effectiveness or
determine if refresher training is needed. These checks are described in more detail in Chapter 5.
Exhibit 4-5 shows our recommended approach to ensuring that individuals are capable of and certified
for data collection in support of ecological restoration projects.
Exhibit 4-5. Training and Certification for Observer-Determined Data Collection
[Flowchart: Both new and experienced crew members receive formal training (introductory skills and information about methods and objectives), followed by testing (competency determined through established qualification requirements). Crew members who pass are certified and subject to annual recertification; those who fail are retrained and retested.]
4.2.6	Failure to Meet Certification Requirements
Each crew member will either pass or fail competency testing based on established qualification
requirements. If a crew member passes, that individual is certified and can begin data collection, with
periodic refresher training and recertification (see Section 4.2.5). If a crew member fails, it is
recommended that they be retrained and re-assessed for certification a second time. Following the
evaluations, time and resources should be devoted to corrective action and improvement to provide an
opportunity for participants to correct any deficiencies. Ideally, a proficiency scoring system could be
applied to "grade" crew member skills along a numeric or ordinal scale, and the resulting certification
values used to assign field crew teams in a way that minimizes effects caused by systematic errors.
As appropriate, individuals failing a second assessment may be re-assigned to non-data collection
activities, assigned data collection work that is directly supervised, or released from the project. In some
cases, a probationary period may be warranted, with experienced crew members accompanying new
members until appropriate certification is obtained. Because failures and their potential impact on data
can range from small to large, they are often handled on a case-by-case basis.
4.2.7	Examples
Numerous training programs and opportunities address topics directly pertaining to ecological
restoration monitoring. The list below provides just a few examples of training that has been made
available by government and private organizations.
• Observers who conduct bird counts using sight and sound in Great Lakes Network parks and the
Great Lakes Coastal Wetland Monitoring Program are required to complete a Birder Certification
Online program developed by the University of Wisconsin-Green Bay under funding and
collaboration from the Wisconsin Department of Natural Resources, the Wisconsin Bird
Conservation Initiative, the National Park Service and the U.S. Fish and Wildlife Service.
(http://www.birdercertification.org/)
-------
locations will remain consistent and be verified throughout the design, restoration and post-
restoration monitoring phases of the project. Regardless of how the sites are selected, field
reconnaissance should be conducted prior to permanently establishing each location. During this
reconnaissance, project planners should confirm that the location (1) can be safely and consistently
accessed, (2) is representative of the area and variable(s) targeted by the project, and (3) can meet
sampling objectives.
Reconnaissance efforts also should verify that all SOPs that will be used are applicable and appropriate
for the site conditions. A site's physical and biological conditions can change dramatically throughout a
given time period; therefore, it is recommended that site reconnaissance be performed during the same
season or seasons in which data collection will occur and that any changes are documented.
Environmental conditions and seasonal impacts such as snowpack, stream flow, water elevations, or
dense undergrowth may limit site access, increase safety concerns and/or impact data quality. To address
the possibility that selected sites are deemed unsuitable, or that the necessary permits or
landowner access cannot be obtained on schedule, project planning teams should consider
strategies for selecting equally representative back-up sites, consulting with a
statistician as needed. Such proactive planning for back-up locations can be critical to ensuring
the completeness of both routine and QC data collection (see Exhibit 4-7). Note that the elimination of
sampling locations should be based on predetermined criteria instead of subjective decisions by field
personnel, which can create bias. These criteria can address (1) accessibility such as landowner
permission, (2) sample unit characteristics such as trails or other unique disturbances, and (3) sampling
areas that meet the project objectives such as the presence of habitat or invasive species.
-------
benthic macroinvertebrate sampling or fish and wildlife sampling). Check with the state environmental
or natural resource agency to determine which activities planned for the project may require a permit.
In some cases, the delay time for securing permits may be lengthy, so proper project planning should
initiate the permitting process well before (e.g., six months to a year) field activities are scheduled to
begin.
Coordination with Landowners: In addition to state and federal permits that might be required for
ecological restoration activities, permission may need to be obtained from local landowners to access
field sites. Ideally, any affected landowners would already be included in project planning. Written
agreements are recommended for granting access to field locations on private land; to protect both
parties, such agreements should specify the timing and location of activities, the nature of such activities
and any special conditions. When sites are located on public land or in remote locations, it is important
to notify local officials or other authorities of any scheduled field work. Ideally, authorities should be
provided with a document identifying the project manager and their contact information; the location,
dates and times of field activities; the purpose of the activities; the number and names of field crew
members; and any relevant individual health concerns (e.g., allergies). This information is critical to
support effective search and rescue or medical evacuation. It also empowers local authorities to
accurately respond to any questions, complaints, or other inquiries by the general public who may
encounter field crew members or perceive crew activities as inconsistent with existing land
management policies (e.g., the capture of wildlife or the collection of plant specimens in a state or
federal park).
Master Schedule: A master schedule that includes when and where QC checks may occur should be
provided to project managers and QA crews (see Chapter 5 regarding various types of QC checks).
Maintenance and distribution of such schedules to those who are responsible for implementing them
can help ensure these important QA activities are not forgotten or arbitrarily placed, leading to possible
inaccuracies in QA results. For instance, a project plan may state that precision checks (see Section
5.2.3.3) will be performed for each field crew at a minimum of three plots, or that specimen duplicates
will be collected at a frequency of 5%; the master schedule should detail when and where these
activities will be performed to meet the stated frequency goals, and which staff are responsible for
performing them.
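Scheduling QC checks up front, rather than leaving them to field-crew discretion, can be as simple as drawing a fixed fraction of the planned sampling events before the season begins. The Python sketch below selects 5% of hypothetical events for specimen duplicates using a seeded (reproducible) random draw; the event list, frequency and seed are illustrative assumptions.
```python
import math
import random

# Hypothetical planned sampling events: (date, site, crew).
events = [(f"2019-06-{day:02d}", f"PLOT-{plot:02d}", "Crew A" if plot % 2 else "Crew B")
          for day in range(1, 11) for plot in range(1, 5)]  # 40 events

rng = random.Random(42)                  # fixed seed keeps the schedule reproducible
n_dupes = math.ceil(0.05 * len(events))  # 5% duplicate frequency, rounded up
for date, site, crew in sorted(rng.sample(events, n_dupes)):
    print(f"Specimen duplicate scheduled: {date} {site} ({crew})")
```
Because the draw is seeded and recorded in advance, the resulting schedule can be distributed to project managers and QA crews and later audited against what was actually collected.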
4.3.2 Field Equipment and On-Site Supplies
Proper field gear, equipment and supplies are critical for ensuring that data collection activities are
conducted safely, efficiently and in a manner that ensures consistency with project quality objectives.
Examples of equipment and on-site supplies supporting ecological restoration projects are shown in
Exhibit 4-8.
4.3.3 Field Instructions
Although SOPs and sampling plans include specifics regarding the locations, frequencies and procedures
for collecting and recording data, these documents typically do not provide some of the information
crews need for day-to-day data collection efforts, including pre-departure activities, scheduling and daily
wrap-up activities. For example:
•	Which data should a field crew member collect first, and where should it be collected?
•	Will the effort to measure or observe one variable impact the quality of another variable?
•	Are variables stable or transitory? If transitory, are they time-sensitive, or easily influenced by
disturbances, the presence of an observer, or other sampling activities?
•	What happens if crew members encounter or are exposed to a hazard or other condition impacting
data collection and/or quality?
•	Who is responsible for ensuring collected specimens are packaged, labeled and transported
properly?
•	Who is responsible for reviewing completed data collection forms to ensure all relevant data have
been collected and reviewed and all recorded information is legible?
Field operation instructions help ensure that data are collected from the appropriate locations, at the
correct time, and by the appropriate individuals; they also provide information on backup plans and contacts
if problems arise that might impact data collection or impair data quality.
daily operations guide for field crews and should be used in conjunction with other project-specific
documents, including sampling plans, site packets (Section 4.3.2) and health and safety plans. Field
operation instructions can be a simple summary or quick reference guide for a single field crew
collecting measurement or observer-determined data for a single species at a single location.
Alternatively, the instructions can cover detailed activities that impact several crews collecting samples,
measurements and observer-determined data for several species at several locations. Regardless of scope, all
instructions should include the following components:
•	The sequence and locations for implementing different SOPs during a given day
•	Schedules, roles, and responsibilities for different crew members
•	Site-specific safety considerations
•	Contact information for individuals to notify when unforeseen circumstances arise that could impact
data collection or data quality
•	Information regarding required planned QA/QC checks (see Chapter 5)
•	Alternate plans and procedures for instances in which data collection is compromised, if for
example, sites are inaccessible, inclement weather occurs, or unanticipated hazards are present.
No matter the length of the instructions, we recommend that the following information be included to
guide crews during their daily data collection activities:
Schedules: A schedule of field activities is needed to let field crew members know when and where
sampling events will occur; which field crews are responsible for each event; and the timing and location
of certain QC activities (e.g., see Chapter 5). In addition to enhancing data quality, proper scheduling can
reduce project costs by more efficiently scheduling hourly labor, reducing vehicle miles, avoiding
unnecessary duplication, and ensuring field crews are in place during refresher training or hot checks
(Section 5.3.1). If possible, guidance for the selection of substitute sites for situations where target sites
are not accessible or are unsafe for monitoring activities can also be included.
Site-Specific Considerations: Instructions should include a review of site-specific health and safety
considerations. Field data collection presents certain risks and hazards associated with site access; the
operation of motor vehicles, boats, and sampling equipment (e.g., electro-shocking); and other dangers
such as insect bites (e.g., bees, ticks) and weather effects (e.g., hypothermia, heat exhaustion,
sunstroke). Specific safety guidelines for each of the data collection activities should be reviewed and
summarized. If not already included in land access permission forms (see Exhibit 4-8), information
regarding site access considerations (e.g., contacts, landowner expectations) also should be provided.
Overview of Site-Specific Field Activities: Field instructions should provide a summary list of the SOPs to
be used and the activities to be conducted at each monitoring site, including confirmation of the site
location (e.g., using maps, scaled images and/or GPS data), marking the site (e.g., using flagging, pin
flags and/or permanent markings), and documenting the site (e.g., using digital cameras and/or rough
sketches made by the crew) to identify the exact locations where samples and data are collected. The
overview should also identify the number and types of measurements and observations to be made,
samples to be collected, and information about any additional sample volume or samples required for
QC purposes.
Post Sample and Data Collection Activities: Instructions should include a listing of the activities to be
conducted after leaving the field and returning to the base site or office, as well as a requirement for
property owners or responsible local authorities to be notified when all crew members have returned
from conducting field activities. Once crews have returned to the base site or office, each of the data
collection forms, PDRs and sample labels that have been used or completed throughout the day need to
be reviewed again immediately to ensure that they are accurate, complete and legible. All QC check
data forms and QC sample labels also need to be reviewed for accuracy, completeness and legibility.
Unless these forms and labels are filled out correctly, it will be difficult - if not impossible - to link
results to the correct location, time, ancillary information and/or QC results for evaluation of data
quality. Any samples or specimens that were collected need to be properly processed, preserved, sealed
and labeled in preparation for shipment, which should occur as soon as is practical to ensure holding
time limits are not exceeded. Sample shipment and chain-of-custody forms will need to be filled out and
included in the packaging; the instructions should provide complete shipping address information,
including the name and physical address of the receiving organization as well as a contact name and
number at that organization (note that addresses to post office boxes should not be used). During post
data collection activities, all equipment should be cleaned and/or disinfected to reduce the risk of
-------
before a sample is extracted, and a separate holding time of no more than 14 days between sample
extraction and analysis of the extracts. Failing to meet specified holding times will adversely impact data
quality, so this issue is of particular concern for analytes with short holding times (e.g., 12 hours or less),
as these typically require use of a laboratory that is within driving distance of the sampling site and that
has the capacity to begin processing samples immediately upon receipt. Sample deliveries should be
planned and coordinated to ensure samples will be delivered to laboratories during periods when the
laboratory is open (e.g., normal working hours). Project planning teams need to review specified holding
times, implement measures for immediate transfer of samples to the laboratory when holding times are
short, and ensure the laboratory has the capacity to meet the specified holding times throughout the
duration of the project. When contracting for laboratory services, planning teams also must ensure the
laboratory is basing holding times on the time of collection in the field, rather than the time of sample
receipt at the lab.
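Because holding times run from field collection rather than laboratory receipt, a simple timestamp comparison can flag at-risk samples. The Python sketch below checks elapsed time against an analyte-specific limit; the analytes and limits shown are illustrative assumptions and must be taken from the applicable analytical methods.
```python
from datetime import datetime, timedelta

# Illustrative holding-time limits keyed by analyte; use your method's values.
HOLDING_TIMES = {
    "nitrate": timedelta(hours=48),
    "total_phosphorus": timedelta(days=28),
    "pesticide_extraction": timedelta(days=7),
}

def holding_time_met(analyte: str, collected: datetime, analyzed: datetime) -> bool:
    """True if analysis (or extraction) began within the holding time,
    measured from field collection rather than laboratory receipt."""
    return analyzed - collected <= HOLDING_TIMES[analyte]

collected = datetime(2019, 6, 3, 9, 30)
analyzed = datetime(2019, 6, 5, 14, 0)   # 52.5 hours after collection
if not holding_time_met("nitrate", collected, analyzed):
    print("FLAG: nitrate holding time exceeded; qualify or reject the result")
```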
The capacity to store and handle your samples: As with holding times, specific requirements for
sample processing and storage vary widely depending on the analyte and sample matrix. Some
samples/analytes may require preservation or filtration immediately upon laboratory receipt, while
others may require only refrigeration to protect sample integrity. Project planning teams need to
review these requirements and ensure the laboratory is properly equipped to meet them. For
example, if a project will require the laboratory to store large volumes of aqueous samples at <4°C, it
is important to verify the laboratory has sufficient refrigeration capacity to do so, even during periods
of peak demand. At a minimum, the laboratory should have a dedicated sample custodian who is
available to immediately inspect and document the condition of samples upon receipt and notify your
organization of any problems.
The capacity to report laboratory results in the format and within the timeframes you need: It does
little good to have samples analyzed if you cannot receive the results in time to support your decisions
and obligations. Thus, project planning teams should determine and define the "data turnaround time"
and data format needed from the laboratory. The most common approach is to define turnaround time
as the maximum number of calendar days from the time of sample receipt at the laboratory to delivery
of the required results. Data turnaround times can vary depending on a number of factors such as the
complexity of the analyses, the complexity of the reporting requirements, and the urgency for
information. For most "routine" types of analyses, commercial laboratories usually can offer price
quotes for their "standard" turnaround time and a premium price for shorter timeframes. Note that the
data turnaround time is different than the sample holding times discussed above. For example, samples
may have a 7-day holding time, and a 30-day data turnaround time. This means that the laboratory must
analyze the samples within seven days of collection, but has up to 30 days to compile and report the
results. Project managers should clearly define the data to be reported, including metadata and QC
results, along with the reporting format that will be needed to facilitate data evaluation and transfer.
Enforcement Provisions: Even when laboratory capacity has been ascertained, unforeseen situations
may arise that can stretch the capacity to its limits, and force the laboratory to prioritize one project
over another. To mitigate such circumstances, it may be helpful to establish positive incentives for early
delivery, negative incentives for late delivery, and/or damage provisions for missed holding times.
4.4.2 Laboratory Competency
The stakes are high when it comes to laboratory results; extensive resources have been invested to
collect samples, and the quality of your project data and conclusions will be compromised if the
laboratory is not sufficiently competent in using the required instrumentation or techniques, or in
applying those techniques to your samples. The need to verify competency is so important that EPA has
established three competency policies to better ensure the quality of field and laboratory data
generated (EPA 2016). One policy governs internal EPA laboratories, one governs organizations that
perform field or laboratory measurements under EPA contract agreements, and one governs
organizations that conduct such activities under EPA grants or other assistance agreements. Together,
these policies require an assessment of competency before a laboratory is allowed to analyze samples.
Concerns about the competency of laboratories analyzing environmental samples are not limited to EPA.
For example, many states require the use of certified or accredited laboratories for specific programs
(e.g., drinking water analysis) or when using state funding.
Although it may be tempting to rely simply on laboratory accreditation or certification as a sole indicator
of competency, we caution against doing so. Among other concerns, the cost of accreditation or
certification may be prohibitive for some organizations, and thus may eliminate highly qualified and
competent organizations from consideration. Because accreditation requirements sometimes vary by
state, highly qualified laboratories may be eliminated if they are participating in a program that does not
share reciprocity with the program required by another state. Additionally, accreditation is often specific
to a single program, method, or group of analytes, and in many cases, accreditation does not even exist
for the types of laboratory analyses you may need (e.g., taxonomic identifications, analysis for emerging
contaminants of concern, and use of novel techniques). For example, a laboratory may be certified to
analyze samples for the presence of metals in drinking water, but that certification would have little
value in demonstrating their competency to analyze benthic organisms in sediment samples or organic
compounds in aqueous samples. In such cases, relying on accreditation is primarily of value in
confirming that the lab has a quality system in place and undergoes internal and external assessments to
verify that it is adhering to its procedures (for the types of analyses covered by the accreditation). Unless
all the sample types, analytes and methods in your project fall within an existing accreditation program,
we recommend that you view accreditation/certification as one of many tools that can be used. Other
tools for demonstrating laboratory competency include, but are not limited to:
•	requesting that the laboratory provide results from QC data generated using the techniques
required in your project (e.g., can the laboratory demonstrate the ability to produce clean blanks
and achieve required detection limits using those techniques?);
•	requesting that the laboratory submit descriptions of applicable instrumentation, sampling
equipment, method sensitivities, reporting practices, capacity, experience, staffing and past
performance;
•	examining results from the laboratory's participation in proficiency testing or round-robin programs
involving the data collection techniques and sample types being used in your project;
CHAPTER 5 QUALITY CONTROL DURING FIELD ACTIVITIES
The information in this chapter builds on the planning and preparation activities recommended in
Chapters 3 and 4, respectively. While those activities are essential to the success of an ecological
restoration project, they do not guarantee that monitoring activities will go as planned or that all
samples and data collected will be accurate and usable. Even well-trained personnel make errors, and
situations can arise that were not considered during the planning and preparation stages.
A variety of good field practices and QC checks can be used to identify and mitigate data collection
errors and address unforeseen problems to promote continuous improvement, help identify solutions to
challenges encountered in the field, and facilitate evaluation of data quality. This chapter provides a
QA/QC framework that can be used to (1) demonstrate how well routine field crew members are
conforming to requirements specified in the project planning documents and standard operating
procedures (SOPs), (2) obtain information that will be needed to evaluate the quality of results reported
by the routine field crews, and (3) gather information that can be used to improve data quality over
time. It also provides guidance on strategies for reporting, evaluating, and using QC check results to
facilitate continuous improvements.
The chapter is organized as follows:
•	Section 5.1 provides a brief review of good field practices that should be implemented in all
ecological restoration projects and describes characteristics of effective QA (or expert) crews.
•	Section 5.2 provides in-depth information on five QC checks that can be used to meet the unique
needs of ecological restoration.
•	Section 5.3 provides specific strategies for reporting QC check results in a manner that supports
timely improvements to data collection procedures and project decision making, and presents a
recommended approach for collecting and considering feedback from field crew members to help
support continuous process improvements.
5.1 FIELD PRACTICES AND QA CREWS
5.1.1 Good Field Practices
Field crews should be provided with all required SOPs, information, and site-specific equipment, and
should be competent in conducting data collection operations systematically and accurately, including
the correct use of appropriate and standardized data collection forms, codes, units, terminology and
charts. Data collected using these mechanisms should be legible and transferable, and should include
field notes, comments and explanations regarding any considerations that might assist with data
interpretation. Field crews also should review the collected data and completed data forms for
completeness and legibility prior to leaving a sampling site to ensure corrections are made in real time
before any changes in site conditions or lapses in memory occur.
consider using trainers or project principal investigators as QA crews, or accessing available experts for a
limited period of time to train routine field crew members to function as QA crews, pairing the experts
with routine field crew trainees (allowing for simultaneous assessments of selected sample plots).
Results of the paired assessments should be evaluated, and the differences discussed and resolved,
before newly trained QA field crew members are used to perform QC assessments of routine field crew
data collection.
5.2 QUALITY CONTROL FIELD CHECKS
This section describes five QC field checks that are similar in function to QC samples that are collected
by field teams and analyzed by laboratories. These field QC checks include hot checks, calibration
checks, cold checks, blind checks and precision checks. Together, these checks provide a means for (1)
verifying the readiness of routine field crews, (2) "calibrating" crew members, (3) evaluating compliance
with specified quality acceptance criteria, and (4) assessing variability among crews and individual crew
members. QC checks can also be used to identify and mitigate data collection errors, support continuous
improvement, help identify solutions to challenges encountered in the field, and increase the
confidence in (and understanding of) the quality of data collected.
Section 5.2 is organized as follows:
•	Section 5.2.1 provides detailed information on hot checks, including an example review form that
would be completed by a QA crew member while conducting a hot check
•	Section 5.2.2 describes calibration checks, including some key topics project planners should
consider with respect to these checks
•	Section 5.2.3 describes three different checks that can be used to estimate measurement error
including:
o Cold checks (Section 5.2.3.1)
o Blind checks (Section 5.2.3.2)
o Precision checks (Section 5.2.3.3)
•	Section 5.2.4 provides a summary of the checks as well as some general considerations when
selecting checks and a decision tree for selecting appropriate checks for stable variables
Project planners should consider and select appropriate QC field checks for their project, and include
instructions regarding the selected checks in project documentation.
5.2.1 Hot Checks
Hot checks are a type of field QC check used to objectively and systematically determine if data
collection activities are being implemented as planned. Hot checks are often referred to as "field audits"
or "technical systems audits," (or in laboratory settings, as "laboratory audits") and can be used as part
of routine field crew training or during data collection to examine, in real time, the procedures being
performed by routine field crew members. This type of check can help determine whether routine field
project's sampling objectives and data quality acceptance criteria. These checks can also function as the
equivalent of an instrument calibration, in that they provide an early opportunity to ensure that each
field crew member is able to use the specified procedures to collect data that meet pre-established data
quality acceptance criteria.
During these checks, a QA crew observes a routine field crew collecting data in real time, and provides
constructive feedback aimed at improving their data collection techniques. Although the primary activity
for the QA crew is to observe field crew activities, dialogue between the field and QA crews is also
needed to share ideas and provide feedback.
QA crews should:
•	convey attitudes that are helpful, constructive, positive and unbiased;
•	conduct evaluations in a manner that provides an opportunity for improvements in project data and
personnel performance; and
•	offer practical solutions to problems and engage in conversation with field crew members to better
understand any challenges encountered.
Hot checks also provide an excellent opportunity for field crew members to ask any questions regarding
data collection activities that may have arisen since their training and certification, and for less
experienced crew members to discuss and resolve unanticipated challenges. These feedback
opportunities are inherent to hot checks and are helpful in identifying noteworthy practices that should
be shared with other crew members. When hot checks are applied during a field season, subsequent resolution of
important deficiencies may need to be verified quickly through a follow-up hot check of the same field
crew or crew member.
As with all QA activities, hot checks benefit from proper planning, including appropriate selection of
individuals for the QA crew(s); preparation of checklists, questionnaires, or reports for each protocol
being used; and the creation of printed or electronic forms for collating assessment results. Project
planners should consider the following additional topics when planning to implement hot checks during
the field season or sampling period (USDA 2012).
•	Conduct hot checks early in the project cycle. Although hot checks should be conducted throughout
an entire sampling period, they are particularly useful when conducted early to identify issues that
could result in errors that might otherwise go undiscovered until late in the data collection cycle.
•	Prioritize inexperienced or problematic members. Newly certified crew members or members who
experienced problems during field training or testing may need to be prioritized for hot checks over
more qualified members.
•	Review all variables. Hot checks should include a review of all measured or observed target and
ancillary variables.
•	Address issues promptly. Data should be corrected onsite as issues are identified, and related
problems should be documented and later evaluated to determine the cause.
•	Consider the big picture. In cases where site conditions have been impacted and data can no longer
be collected as specified in the SOP, QA crews can still use the opportunity to ask questions, review
and discuss SOPs, and document the results of these discussions.
In addition to assessing whether SOPs are being followed correctly, the QA crew must also determine
whether the quality of the data being collected meets established acceptance criteria. Throughout the
hot check, the QA crew should compare their observations with what the field crew is recording. Any
discrepancies should be noted on reporting forms (e.g., checklists) and discussed with the field crew.
Checklists or questionnaires used by QA crews should be developed and approved for use in advance to
ensure all necessary components of the hot check are addressed and the hot check is conducted in an
orderly and efficient manner. Ideally, these forms include information such as:
•	first and last names of routine and QA crew members,
•	location of the hot check,
•	date and beginning and ending times of the hot check,
•	steps for selecting or locating a transect or sampling site,
•	steps of the SOP(s) under review, and
•	data quality acceptance criteria.
These documents also
should include project- and site-specific questions and allow for comments related to particular
activities for each SOP under review. It is important to review each step of the SOPs to determine if
protocols are being followed as specified, and provide feedback if and when issues arise. As the routine
crew collects data, the QA crew observes the activities and fills out the appropriate checklist. An
example checklist for a hot check evaluating data collection in support of a ground cover vegetation
survey is provided in Exhibit 5-3.
University of Wisconsin Arboretum staff demonstrate seed collecting technique at Curtis Prairie to
students participating in Make a Difference Day at the Society for Ecological Restoration's World
Conference in Madison, WI in 2013. Photo credit: Nancy Aten
5.2.2 Calibration Checks
The list below describes topics project planners should consider with respect to calibration checks:
•	Conduct calibration checks prior to routine field crew measurements and observations. The use of
calibration checks requires QA crews to collect data, independent of the project data, before routine
field crew measurements and observations occur. Calibration check data are then collected by routine
field crews/crew members after the plots have been characterized by the QA crew, and compared
to the results previously reported by the QA crew.
•	Consider using repeated calibration checks. Field crews/crew members can be asked to repeat
measurements or observations of one or more target variables again as part of a regular, repeated
calibration check to periodically assess and confirm the precision of their measurements and
observations.
•	Assess crew variability. Between-crew and within-crew variability can be assessed by having
different crews/crew members conduct calibration checks within the same time frame. For each
parameter assessed, between-crew variability is estimated from a pooled variance from all crews;
within-crew variability is assessed by pooling the variance from all visits to the plot by a single crew
(Stapanian et al. 2016). A simplified computational sketch follows this list.
•	Evaluate and enhance field crew capabilities. Calibration check results can be used to assess the
ability of crews/crew members to consistently collect accurate data (when compared to expert
results) and provide opportunities for correction or improvement prior to collection of routine data.
Calibration checks are also particularly useful for the certification of crew members after training
and/or before the start of a new sampling event (see Section 4.2), and can be used at the end of a
sampling event or field season to document that field crews have maintained their quality standards
in data collection.
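As referenced in the crew variability bullet above, the following sketch (Python, with entirely hypothetical stem counts) illustrates one simple way pooled within- and between-crew variances might be computed when each crew makes the same number of visits to a calibration plot. It is an illustrative simplification, not a restatement of the method in Stapanian et al. (2016).

    from statistics import mean, variance

    # Hypothetical repeated stem-count results from three crews, each visiting
    # the same calibration plot twice (illustrative values only).
    crew_visits = {
        "crew_A": [42, 44],
        "crew_B": [39, 41],
        "crew_C": [45, 44],
    }

    # Within-crew variability: pool the sample variances of each crew's repeat
    # visits (a simple mean of variances, which assumes equal visit counts).
    within = mean(variance(v) for v in crew_visits.values())

    # Between-crew variability: variance of the crew means about the grand mean.
    crew_means = [mean(v) for v in crew_visits.values()]
    between = variance(crew_means)

    print(f"Pooled within-crew variance: {within:.2f}")   # 1.50
    print(f"Between-crew variance:       {between:.2f}")  # 5.25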
Project planners should use caution when implementing calibration checks to ensure the integrity of the
target variables is maintained over multiple visits by crews or crew members. Repeated visits to
calibration plots can easily impact the target variables due to trampling and other disturbances by crew
members, particularly those variables associated with the understory vegetation and soil or substrate
integrity. In addition, if calibration checks are conducted on a limited set of variables multiple times by
the same individual(s), the individual(s) are likely to recall previous results, which may bias any
assessment of precision and accuracy. This is of most concern for variables with simple, easily
recalled results, such as the number of white pine > 5 cm diameter at breast height (DBH), estimates of
percent cover, vegetation species identification, and percent slope.
5.2.3 QC Checks to Estimate Measurement Error
In this section, we describe three types of QC checks — cold checks, blind checks, and precision checks
— that can be used to evaluate how well data comply with the corresponding acceptance criteria to
ultimately assess measurement error. Cold checks and blind checks are conducted by QA crews who are
distinct from the routine field crews and whose measurements and observations are considered to be
the equivalent of a "true value." In each case, the QA crew visits a plot after data have been collected by
the routine field crew and measures or observes the variables and characteristics using the same SOP(s)
as the routine field crew.
The original values generated by the routine field crew are then compared to the values collected by the
QA crew to determine differences or deviations. The paired values and calculated differences are
compared against the applicable acceptance criteria to (1) identify and possibly correct issues related to
data collection procedures (including implementation) or personnel training, (2) assess the overall
quality of the specific data elements addressed by the checks, and (3) develop more appropriate
acceptance criteria. Precision checks are typically performed when the use of a QA crew is not feasible,
and instead rely on the routine field crews to repeat measurements or observations (USDA 2012). These
checks are limited because no expert crew is involved to provide a theoretical "true" value. While
they can provide an assessment of precision, they cannot be used to evaluate bias or overall accuracy.
As with calibration checks (Section 5.2.2), these three checks are used to evaluate the ability of crew
members to collect the required data using the SOPs; they are not used to test the ability of a crew(s) to
establish a plot site. Evaluations of plot establishments are typically addressed during training (see
Section 4.2) and/or hot checks (see Section 5.2.1). As shown in Exhibit 5-5, these three types of QC
checks differ slightly in how they are conducted, why they are conducted and how they impact the
collected data. Each is discussed in detail in subsections 5.2.3.1 through 5.2.3.3.
The overarching principles listed below should be considered by project managers when planning the
type and frequency of these checks as they are built into data collection activities:
•	Conduct QC checks promptly after field data collection. To minimize the effects of temporal
variability that might prevent an accurate assessment of data quality, cold checks, blind checks and
precision checks should be conducted as soon as possible following collection of the original data.
Although timing will depend on the project, variable and/or characteristic being observed or
measured, site conditions, and resources, these checks should be scheduled in a time frame that
minimizes the effects of temporal variability between crews. For example, if field crews are
conducting monitoring activities during a growing season, the project planners should determine the
optimal time for QA crews to revisit the site based on local phenology and other environmental
factors germane to the project's location.
•	Address key variables. These QC checks should address the primary or key set of variables that are
being observed or measured, as determined by project quality objectives. Depending on project
needs and available resources, these checks also can address supplementary or ancillary data that
are associated with the primary variables of interest.
•	Ensure relevance of QC check. Whereas cold checks are typically used to both assess and improve the
quality of project data that are being collected, blind checks and precision checks are typically only
used to provide an overall assessment of the quality of project data once data collection is complete
and project data are compiled. However, depending on how the results of these checks are used, the
results of all three can contribute to both the collection and assessment of project data.
5.2.3.1 Cold Checks
Similar to hot checks, cold checks are objective and systematic assessments that are typically used to
determine if data collection activities are being implemented as planned and are suitable to achieve
data quality acceptance criteria. During a cold check, the QA crew visits a site and measures or observes
all or a subset of the variables that have been previously and recently assessed by a routine field crew.
Whereas QA crews conduct hot checks while a field crew is still present, cold checks are performed
shortly after the field crew has completed its activities and reported its data. Thus, cold checks provide
an opportunity to evaluate routine field crew results and correct errors, but do not provide a means for
observing the routine field crew's implementation of the procedures. Cold checks also differ from hot
checks in that field crews are not informed of which sites or plots will be revisited. While hot checks
provide a potential opportunity for routine field crews to "be on their best behavior," cold checks and
blind checks (see Section 5.2.3.2) are more likely to provide an unbiased assessment of routine field
crew data collection.
We recommend that project planners consider the following principles when establishing requirements
for the use of cold checks (USDA 2012):
•	Completion time: Cold checks should be completed as soon as possible following the initial routine
field crew data collection to identify problems and errors.
•	Prioritization: Cold checks should be prioritized for routine field crews that have incorporated
improvements based on results of a previous hot check. This provides an opportunity to confirm the
improvements are being implemented correctly.
•	Site selection: Although it is desirable for QA crews to randomly select plots for cold checks, the
desire for randomness may be outweighed by environmental and other logistical challenges, or by
chance events (e.g., unexpected disturbance or absence of the target ecological
phenomenon), that preclude repeat measurements or observations.
•	Data management: Cold check datasets (including hard copy and electronic data forms) generated
by the QA crew should be maintained separately from the routine field crew datasets.
During a cold check, the QA crew often has the routine field crew's data in hand to facilitate comparisons
of the field crew results with results collected by the QA crew. This differs from blind checks (see Section
5.2.3.2), during which QA crews do not have and have not seen the original field data. In cases where the
collected data are comparable, the QA crew can check off the data to indicate agreement; in cases where
discrepancies are found, the QA crew should provide notations that include what they believe to be the
correct value. Any changes made to the original data should be discussed with additional QA reviewers
prior to implementing the change. In cases where resources are limited, project managers might decide to
modify cold checks by having the QA crew refer to the routine field crew data only after they have
completed and compiled the results of the cold check. This approach also mitigates potential bias that
could be introduced due to knowledge of the previously recorded observations and measurements;
although the data are not technically independent, they might be considered along with blind check data
(see Section 5.2.3.2) for use in evaluating overall project data quality.
The following QA crew procedures are recommended when conducting a cold check:
1.	Identify data collection site: The first step upon arrival is to locate and confirm the specific site
and/or sample unit(s) (e.g., plot, subplot, transect, quadrat, stream reach) for data collection by
locating any permanent markers including any necessary reconstruction of sampling lines or areas
using flagging or other non-permanent markers.
2.	Collect data: The QA crew then collects data (following the same SOPs used by the routine field
crew), being sure to address all targeted variables initially assessed by the routine field crew.
3.	Compile results: The QA crew transcribes their results into a data form that already includes the
routine field crew data.
4.	Compare data: While still at the site, the QA crew compares the cold check data to the original data
collected by the routine field crew. Data discrepancies are immediately identified and flagged. Note:
For QA crew results to be considered unbiased, comparison of results should occur following
completion of cold check data collection.
5.	Evaluate differences: Discrepancies between QA crew and routine field crew data are evaluated and
compared to the data quality acceptance criteria for each variable. The achieved compliance rates
for the field crew depicted in Exhibit 5-6, for example, were 95% (19/20) for identification of genus
and 80% (16/20) for identification of species. In certain cases, particularly for transitory variables, quantitative
assessment of differences between QA crew and routine field crew results may not be directly
comparable, and may require use of reference tables, multiple plots, alternative statistical methods
and/or computer software.
6.	Confirm findings: The findings identified during Step 5 should be confirmed. In the example
provided in Exhibit 5-6, the QA crew might want to revisit trees 6, 10, 11, and 17 to validate the
correctness of their assessments.
7.	Assess causes: QA crew assesses potential site-related causes for discrepancies and documents
findings.
8.	Confirm cold check is complete: QA crew reviews data form(s) for completion, ensures all notes and
flagged results are correctly documented, and confirms that any collected samples are properly
labeled and secured for transport.
Exhibit 5-6 provides an example comparison between routine field crew data and cold check data, for
tree species identified on a plot. The data quality acceptance criteria are included, with a tolerance for
no errors in species identification and a corresponding compliance rate of at least 99% for genus and
95% for species. In conducting their comparison, the QA crew should have noted one genus and species
error (tree 11) and three additional species errors (trees 6, 10, and 17).
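For illustration, the sketch below (Python, with hypothetical match/mismatch data patterned on Exhibit 5-6) computes the genus and species compliance rates achieved in Step 5 and compares them to the acceptance criteria described above.

    # Hypothetical paired results patterned on Exhibit 5-6: for each of 20 trees,
    # True means the routine field crew's identification matched the QA crew's.
    genus_match = [True] * 20
    species_match = [True] * 20
    genus_match[11 - 1] = False                 # one genus error (tree 11)
    for tree in (6, 10, 11, 17):                # four species errors
        species_match[tree - 1] = False

    def compliance_rate(matches):
        """Percent of paired results that agree."""
        return 100.0 * sum(matches) / len(matches)

    for name, matches, criterion in [
        ("genus", genus_match, 99.0),
        ("species", species_match, 95.0),
    ]:
        rate = compliance_rate(matches)
        status = "meets" if rate >= criterion else "fails"
        print(f"{name}: {rate:.0f}% compliance ({status} the {criterion:.0f}% criterion)")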
5.2.3.2 Blind Checks
on data quality derived from blind check results are meant to apply to the entire project dataset.
Exceptions would be cases where specific sites are known to have problems or data quality concerns.
We recommend that project planners consider the following principles when establishing requirements
for the use of blind checks:
•	Completion time: As with cold checks, blind checks need to be conducted as soon as feasible after
the routine field crew collected the data. Also as with cold checks, timing will depend on the project
or variable being measured or observed, site and weather conditions, and resources. As an example,
the U.S. Forest Service strives to conduct blind checks within two weeks of the original data collection
effort (USDA 2012) so that differences identified between crew results can be attributed to
variability in measurement or observation, and not confounded with natural temporal variability.
•	Scope: Blind check results should provide enough data for each variable (e.g., 10 or greater) being
monitored to allow for evaluation of the quality of the overall (i.e., complete) dataset.
•	Data management: No corrections should be made to the original data based on the results of
blind checks. Blind check datasets need to be maintained separately, yet linked to the routine field
crew data in the project database. Similar to analytical results of duplicate or replicate samples,
blind check data should be treated exactly as the routine field crew data for data entry and
subsequent review, including data verification and validation (see Chapter 6 for a discussion of
data review steps).
The following QA crew procedures are recommended when conducting a blind check:
1.	Identify data collection site: The first step upon arrival is to locate and confirm the specific site
and/or sample unit(s) (e.g., plot, subplot, transect, quadrat, stream reach) for data collection by
locating any permanent markers including any necessary reconstruction of sampling lines or areas
using flagging or other non-permanent markers.
2.	Collect data: The QA crew then collects data (following the same SOPs used by the routine field
crew), including required voucher specimens and/or photos, being sure to address all targeted
variables initially assessed by the routine field crew.
3.	Compile results: The QA crew transcribes their results into a data form that is separate from and
independent of the routine field crew data.
4.	Confirm blind check is complete: The QA crew reviews data form(s) for completion, ensures all
notes are correctly documented, and confirms that any collected samples and/or specimens are
properly labeled and secured for transport.
Once both the routine field crew data and the QA crew's blind check data have been reviewed, they can
be compared to assess whether data quality acceptance criteria have been met.
5.2.3.3 Precision Checks
Not all restoration projects have the ability, whether due to limited funding or limited access to experts,
to have separate QA crews conduct hot, cold or blind checks. Nor can many of the target variables
associated with ecological restoration projects (e.g., wildlife species counts, vegetation structure, soil
moisture) be expected to remain stable even over short time periods. For these situations, we
recommend the systematic use of precision checks that are conducted by routine field crew(s) as a
necessary component of the sampling design. Observer-determined data must be collected in a
reproducible manner in order for the data to be reliable and useful in supporting project decisions, and the
simplest evaluation of data quality is a precision check.
Precision checks are conducted by having one or more field crews or members conduct re-
measurements or observations of the same target variables at the same site. Estimates of within-crew
precision can be made if a crew repeats measurement or observation of target variables for any given
site where that same crew had previously collected data. Estimates of between-crew precision can be
made if two or more crews collect data at the same site. Because these checks are conducted without
the benefit of data collected by experts for comparison, results of these checks cannot be used to
estimate the accuracy or bias of the collected data.
We recommend that project planners consider the following principles when establishing requirements
for the use of precision checks:
•	Site selection: As with other QC checks, sites for conducting precision checks should be selected so
that the results are likely to be representative of typical data collection efforts.
•	Timing and variability: Dates for monitoring will need to be coordinated to meet scheduling
requirements and to ensure that crews collect data within a specific window to minimize the
impacts of temporal variations on estimates of both within- and between-crew precision. For
example, Kercher, Frieswyk, and Zedler (2003) describe a between-crew precision test for two
sampling teams of botanists who evaluated species richness and cover estimates on twelve wet
meadows in Dane County, Wisconsin, USA. After sampling by the first team, the second team made
an independent assessment of ten 1 m² quadrats on each meadow. The corners of the quadrats
were marked to assist the second team in finding them. Results indicated that species richness and
cover estimates were similar, but one of the teams tended to have higher estimates for cover than
the other. A simple paired-precision calculation is sketched after this list.
•	Data management: As with blind checks, data collected by the routine field crew or member are not
available to the individuals conducting the precision check. Each dataset is treated as if it were the
original during the process of data review and checking, and data are reviewed and processed in a
manner similar to blind check data, with an emphasis on evaluating precision of the dataset against
pre-established quality objectives.
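As referenced above, the sketch below (Python, with hypothetical paired cover estimates) illustrates one common way to quantify between-crew precision, the relative percent difference (RPD) between paired results; the 20% acceptance criterion shown is an assumed value for illustration only.

    # Hypothetical paired percent-cover estimates from two crews at five quadrats.
    crew_1 = [62.0, 45.0, 30.0, 71.0, 55.0]
    crew_2 = [58.0, 47.0, 33.0, 69.0, 50.0]

    def rpd(a, b):
        """Relative percent difference between a pair of results."""
        return abs(a - b) / ((a + b) / 2.0) * 100.0

    rpds = [rpd(a, b) for a, b in zip(crew_1, crew_2)]
    print(["%.1f%%" % r for r in rpds])   # ['6.7%', '4.3%', '9.5%', '2.9%', '9.5%']

    # Assumed acceptance criterion for illustration: RPD <= 20% for every pair.
    print("All pairs within criterion:", all(r <= 20.0 for r in rpds))   # True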
•	Incorporate results of QC checks into real-time decision making as part of feedback to inform
improvements in data collection.
•	Choose a QC check strategy that provides a reasonable assessment of data collection procedures
and/or data (e.g., minimum number of sites, variables at each site, QC checks) to support the
credibility of any conclusions based on the results of QC checks.
When selecting the appropriate QC checks, project planners also need to consider if the target variables
are stable or transitory (as described in Chapter 3). With the exception of hot checks (Section 5.2.1) and
precision checks (Section 5.2.3.3), the QC checks discussed in this chapter are applicable to assessing
the quality of procedures used to collect data during monitoring of stable variables. Hot checks are
conducted concurrently with routine data collection and are, therefore, an especially useful QC tool
when the sampling design requires monitoring of transitory variables. Precision checks offer similar
advantages for transitory variables, but are limited to providing information about precision, without
the added benefit that hot checks offer in terms of assessing accuracy.
The following QA/QC strategies should be considered when projects require monitoring of transitory
variables:
•	Implement hot checks or concurrent precision checks.
•	Conduct classroom or simulated field trials to estimate routine field crew accuracy, bias, and
precision (and for potential use in determining data quality acceptance criteria).
•	Periodically assess routine field crew ability to meet acceptance criteria.
•	Use multiple routine field crew members to produce paired datasets for assessing precision.
•	Pair routine field crew members with experts to collect paired datasets that can be used to assess
accuracy and bias.
•	Collect physical samples that can be split or duplicated to estimate accuracy, bias, and precision.
Exhibit 5-8 below presents considerations to apply when selecting strategies to address data
collected for stable variables.
Exhibit 5-8. QC Strategies for Stable Variables (decision tree, rendered as text)

QC purpose: assess SOP implementation, assess crew efficiency, assess crew accuracy, provide on-site
training, or provide feedback in real time.
→ Use a Hot Check (Section 5.2.1).

QC purpose: standardize variable measurement/observation or conduct training under field conditions
without impacting sample units.
→ Use a Calibration Check (Section 5.2.2). Apply to selected variable(s) or a complete sample unit; used
for training and/or certification, and to determine quality acceptance criteria.

QC purpose: assess conformance of routine data collection results to stated data quality acceptance
criteria.
→ First ask: are the target variable(s) under assessment resilient to crew disturbance and expected to
be temporally stable and reproducible?*
	NO: treat as transitory variables (see the strategies above).
	YES, to determine accuracy and/or bias: use a Blind Check (Section 5.2.3.2) if unbiased data are
	needed,** or a Cold Check (Section 5.2.3.1) otherwise.
	YES, to determine precision: use a Precision Check (Section 5.2.3.3).

* Includes results from physical and biological samples that can be split, duplicated or vouchered.
** Access to previous monitoring results not permitted prior to assessment.
As just one example of the application of QC strategies described in this chapter, we refer again to the
example sampling objectives described in Chapter 3 (Exhibit 3-3 and Exhibit 3-7) and Chapter 4 (Exhibit
4-3) related to the restoration of native willow. In this project, the monitoring design might include the
overall measurement of 30 plots using three routine field crews, with each of the three crews measuring
ten of the plots. The project planners could require each crew to be tested prior to the field season by a
QA crew made up of field crew trainers, using hot checks. Blind checks could then be performed during
the field season to document data quality. Because each routine field crew is measuring ten plots, a
minimum of one blind check per crew might be selected as the re-measurement strategy.
5.3 USE AND REPORTING OF QC CHECK RESULTS
This section provides a general overview regarding the use and reporting of results of the QC checks
described throughout this chapter. As noted, the results of these QC checks can be used to:
•	provide feedback to improve data collection procedures and field crew capabilities;
•	inform decisions regarding data collection activities; and
•	facilitate assessments of data quality.
All of the QC checks described in this chapter result in some form of documentation of results, including
notations, checklists, data forms, or reports. The procedures used to report the results are project-
specific and will depend on the type, timing and purpose of the QC checks conducted. To facilitate and
expedite use of QC check results, findings and reports should be provided to individuals involved in
decision making, including the leaders and managers described below.
•	Field crew leaders are often responsible for ensuring the correctness and completeness of all data
from sampling site(s), and benefit from QC check findings to improve current or future data
collection efforts.
•	Project managers need to know if a field crew or member is failing to meet data quality acceptance
criteria so they can make effective and well-informed decisions and take corrective actions as needed.
•	Project data managers and QA managers ensure that corrections identified during the QC checks
are reflected in the project database.
•	Project QA managers are typically responsible for making and/or reviewing data usability
assessments (see Chapter 7) and rely on QC check information as a critical component in doing so.
The results of these checks should be clearly linked to the corresponding project data, and provided to
the appropriate responsible parties in sufficient time to (1) facilitate and improve data collection
procedures, assessments and quality; and (2) support project decision making.
5.3.1 Feedback for the Routine Field Crew
Feedback based on the results of hot checks and calibration checks can be provided to field crew
members at various stages, including as (1) immediate feedback, or (2) feedback based on compiled
QC check results. Regardless of when feedback is received by the routine field crew members,
the information can result in improvements to training and certification processes, data collection
procedures, and the QC check documentation itself.
Both hot checks and calibration checks (Section 5.2.1 and Section 5.2.2, respectively) are often used
during training and certification. Immediately reporting these check results to trainers and field crew
members can improve crew performance prior to initiating or continuing data collection activities. In cases
On a broader level, an important component of any ecological restoration project is to allow for
continuous improvements to data collection processes and data quality. Continuous improvements can
be achieved by (1) increasing the effectiveness and efficiency of field SOPs (and resulting data quality)
through process improvements that reflect QA crew feedback provided to routine field crew members,
and (2) implementing debriefing procedures that help obtain objective and constructive feedback from
field crew members.
5.3.3	Use of QC Check Results to Inform Decisions
The extent to which QC checks are used to inform decisions regarding data collection activities and data
corrections varies widely, and is highly dependent on the timing, significance, and intended use of the
results. Results of all QC checks should be included in the project database along with all original
uncorrected field data. This allows for decisions to be made as needed, including during the data review
process (see Chapter 6) based on all available information. In some cases, depending on numerous
factors, data corrections may be needed sooner rather than later. For example, if results of a cold check
indicate that a routine field crew member measured tree stem DBH incorrectly, producing an inaccurate
estimate of basal area, the result may need to be corrected to reflect the corresponding "true" stem
diameter. This information also informs decisions regarding whether additional visits to the site or field
team should be made - or more significantly, whether the procedures used should be adjusted or
described in more detail. Each project should have documented procedures that are planned ahead and
in place for handling such corrections. In general, data errors that are identified during QC checks will
need to be corrected in the project database along with a notation to explain the reason for the
correction. These errors could include data found to be outside data quality acceptance criteria.
Each non-conformance finding pertaining to documented procedures or data quality acceptance criteria
should be reviewed, particularly if the deficiency could result in unacceptable data quality. QA crews can
often identify the root causes of a non-conformance finding (e.g., additional training needs, procedural
changes) and provide recommendations to project managers that would help address the underlying
problem. Resolution of data quality deficiencies may need to be verified by a subsequent hot check (or
other type of QC check) within a sufficient period of time to ensure appropriate changes are
implemented in a manner that improves data quality.
5.3.4	Assessment of Data Quality
QC check results are crucial for evaluating and documenting data quality. Specifically, these results allow
data to be evaluated in terms of data quality indicators (DQIs), such as precision, bias, accuracy,
representativeness, comparability, completeness, and detectability, and their related data quality
acceptance criteria (see Section 3.4). Details regarding data review and assessments including the
assessment of QC check results are provided in Chapters 6 and 7. In addition, Appendix B describes
several statistical procedures that support these assessments. Depending on the specific project and
purpose of data collection, potential impacts of QC check results include demonstrating that data quality
acceptance criteria have been met, ensuring implementation of corrective actions, and improving data
quality (see Exhibit 5-9).
Results of precision checks (see Section 5.2.3.3) provide the simplest means for evaluating whether
data collected by field crews meet data quality acceptance criteria for precision. Because there are no
QA crew datasets associated with these checks, results are reported separately, and subsequently
reviewed and verified using identical procedures and considerations. Results of cold checks and blind
checks also can be used to assess precision, but because these results are collected by experts, their
use is more appropriate for evaluating bias and accuracy. The ability of the data to meet the data
quality acceptance criteria for the remaining DQIs also can be assessed using results from the suite of
QC checks described in this chapter.
Exhibit 5-9. Iterative Process Incorporating Results of QC Checks (flowchart, rendered as text)

1.	Field crews collect routine data according to standard operating procedures.
2.	QA crews conduct checks and collect QC data.
3.	QC data are compared to routine data to evaluate achievement of data quality acceptance criteria.
4.	If the data meet the criteria: report the data.
5.	If the data do not meet the criteria: determine corrective actions; implement corrective actions
(e.g., retrain staff, recalibrate instruments, re-evaluate and adjust criteria, report results and
corrective actions); and report and flag the data.
An example of the iterative process shown in Exhibit 5-9 is a case where precise tree volume data were
needed. One variable that fed into the calculation was tree length to a 4-inch top diameter; the specified
acceptance criterion for this variable was that the reported length should fall within 1 foot of the true
value, 90% of the time. The project manager worked with the database manager on a method for
storing the data, and with experts to develop methods for collecting the data. Crews were trained, but
the data collected were neither sufficiently accurate nor precise enough to meet the quality
enough to support such decisions. Ancillary data can be used to help evaluate data quality and interpret
results as shown in the following examples.
Ancillary variables describing... can be used to:
•	where (e.g., stream reaches, transects, plots) and when (e.g., date, time start/end) data were
collected: verify that reported data were collected from the specified locations and within the
required timeframes;
•	who collected the data (i.e., identification of field crew teams and members), in conjunction with
QA crew data: detect bias among results reported by different field crews;
•	camera trap placement (e.g., angle and distance of target relative to the camera): explain
anomalies when confirming the validity of scientific assumptions (Parsons et al. 2015); and
•	environmental conditions (e.g., air temperature, wind, cloud cover, precipitation) at the time of
data collection: establish whether field activities were conducted in accordance with
pre-established criteria, and thus help support (or refute) data collected for a primary variable of
interest.
For instance, sampling during peak flows can lead to increased variation in the total counts of stream
bed invertebrates, and the reliability of bird census data can diminish when wind speed is excessive
(Larsen et al. 2004). To address the latter concern, the North American Breeding Bird Survey protocols
specify that wind speed must not exceed 12 mph (USGS 2017a). As discussed in Section 6.2.3.1, we
recommend that project planning teams develop variable-specific procedures that describe how each
primary variable, and its corresponding ancillary and QC data will be verified.
Note: In some cases, ancillary data that do not directly support the sampling objectives and monitoring
strategy are collected because they are cost effective to obtain while personnel are in the field and may
provide useful information for related projects. If such additional variables are collected, project
planning teams also need to decide when, if, and how to review the results. At a minimum, we advise
allocating enough resources to verify that these additional ancillary data are complete and comply with
requirements because (1) concurrent review with other project data is likely to be most cost effective
and (2) the ability to investigate and resolve problems tends to diminish over time.
6.1.3.2 Materials Needed for the Review
As shown in Exhibit 6-2, data reviewers will need to examine requirements governing the collection or
generation of the data they will be reviewing, results of field and laboratory QC checks, and other
information (e.g., post-season debriefing results) that may shed light on the data being reviewed. Having
access to the requirements and all field and laboratory records during data review is vital.
•	Plans, maps, SOPs and other instructions provide a baseline for confirming requirements are met.
•	Reporting forms submitted by routine field crews, QA crews and laboratories (digitally or on paper)
summarize results for the variables of interest and provide an efficient means for verifying all
determining if reported values are scientifically logical. From a practical perspective, data verification
and validation activities often overlap and can occur at all stages of data collection and transfer,
beginning with verifying the capabilities of the individuals collecting or generating the data. Specific
approaches to data verification and validation may vary by organization, project or data types.
Therefore, this section describes general process-related strategies (Section 6.2.1), discusses specific
activities that are performed during data review (Section 6.2.2), and provides examples of techniques
that are useful in implementing these activities (Section 6.2.3).
6.2.1 General Process-Related Strategies
We believe the three strategies described below are helpful, regardless of project scope, size, and
complexity.11
1. Begin by verifying all required data are present and legible. In essence, this involves conducting a
completeness check (described below in Section 6.2.2.1) to verify that field and/or laboratory
personnel reported results for all data they were supposed to generate or collect. Such checks
tend to be less time-consuming than those for other data verification and validation activities, so
it is usually most efficient for data reviewers to confirm they have all the required data in hand
before immersing themselves in details and discovering they have to stop, request, and wait for
missing information.
If data are illegible or missing, the data reviewer should initiate corrective action activities
immediately. For illegible results, this usually involves contacting field or laboratory personnel to
request clarification and documenting the corrections as described in Section 6.3, "Handling Data
Discrepancies and Errors." For missing values, corrective action may include re-sampling if (1) the
problem was detected while crews are still in the field, (2) there are sufficient resources to conduct
the re-sampling, and (3) the re-sampling efforts are consistent with the study design. If samples
requiring laboratory analysis were sent to a laboratory but never analyzed, corrective action should
include consideration of whether the analyses can be completed before analytical holding times
expire. In some cases, project planning teams may prefer to accept and qualify results generated
slightly outside of recommended holding times in lieu of having no results at all.12
11	For the purpose of this discussion, we assume qualifications of field and laboratory personnel have already been established as
described in Chapter 4, and all results have been reviewed before leaving the field or submitting laboratory results as discussed in
Section 5.1.
12	Analytical holding times are typically established by testing the stability of a variable in replicate aliquots over specified intervals of
time. If no statistically significant changes are detected, holding times are typically set to the maximum amount of time evaluated. If
there is reason to believe the recommended holding times represent the longest period of time evaluated rather than an actual point
at which values were determined to significantly change, it may be more helpful to qualify results generated outside of holding times
than to have no data at all.
Note: When using spreadsheet software to process data, project teams should document
procedures that all staff must follow to minimize the introduction and compounding of error that
normally would be avoided when working within a relational database management system that
provides advanced version and error control features. Refer to Appendix A for a more in-depth
discussion of effective data management practices.
6.2.2 Data Verification and Validation Activities
In addition to the general process-related strategies described above, certain activities apply to all data
reviews, regardless of the project size, scope and complexity. These activities are described in the
following subsections, and include determining if the reported results (1) are complete, (2) reflect
correct application of procedures, (3) meet specified data quality acceptance criteria, (4) were
documented and reported correctly, and (5) make scientific sense. Many of these activities can be
streamlined significantly through the use of electronic data reporting tools that are pre-populated with
look-up tables and application of macros, if/then statements, or range checks that can highlight non-
compliant or questionable results.
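For illustration, the sketch below (Python, with hypothetical species codes and range limits) shows how pre-populated look-up tables and simple range checks can flag non-compliant or questionable records.

    # Hypothetical look-up table and range limits used to flag questionable records.
    VALID_SPECIES_CODES = {"ACRU", "QUAL", "PIST", "ULRU"}
    DBH_RANGE_CM = (1.0, 200.0)   # assumed plausible diameter-at-breast-height range

    records = [
        {"tree": 1, "species": "ACRU", "dbh_cm": 34.2},
        {"tree": 2, "species": "QXAL", "dbh_cm": 51.0},   # species code typo
        {"tree": 3, "species": "PIST", "dbh_cm": 640.0},  # implausible DBH
    ]

    for rec in records:
        flags = []
        if rec["species"] not in VALID_SPECIES_CODES:
            flags.append("unknown species code")
        if not DBH_RANGE_CM[0] <= rec["dbh_cm"] <= DBH_RANGE_CM[1]:
            flags.append("DBH outside plausible range")
        if flags:
            print(f"Tree {rec['tree']}: " + "; ".join(flags))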
6.2.2.1 Verifying Completeness
In order to confirm all required data were collected, generated, and reported, data reviewers should
verify that:
1.	data are present from all required sampling locations;
2.	data are present for all required variables and corresponding ancillary and QC data;
3.	all planned samples and voucher specimens were collected and reported;
4.	any required data reporting forms were used and filled out legibly and completely;
5.	raw data are present and legible for each variable, as applicable (e.g., field notes, bench sheets,
laboratory notebooks, raw instrument outputs, written narratives);
6.	all required/supporting QC data are present, including:
a.	samples collected for lab analysis, such as instrument, method and matrix QC (e.g., field or trip
blanks, instrument calibration blanks, method blanks, field duplicates), and
b.	samples collected for species identification, such as plant or animal voucher specimens, tissue
samples for DNA analysis, audio recordings and digital photographs;
7.	field QC checks (e.g., hot checks, cold checks, blind checks, precision checks, calibration checks)
were implemented and documented;
8.	field crew and laboratory analyst feedback regarding data quality concerns has been documented;
and
9. data quality issues identified by field or laboratory QC checks were evaluated, and the affected data
have been tagged with corresponding explanations.
Note that the items listed above should reflect the timing and scope of the data being reviewed. For
example, when verifying the completeness of routine field crew data reported from a single site
assessment on a single day, a data reviewer would address the first five items listed above. In contrast, a
reviewer examining a batch of routine field crew data, QA crew data, and laboratory data for a group of
sampling events would address the first eight items. The last item is usually applied to data that have
already been reviewed and determined to have data quality issues that require correction or annotation
of results; the purpose of a completeness check at this stage is to verify that such data review
recommendations are captured in the final dataset.
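For illustration, the sketch below (Python, with hypothetical site and variable names) shows a minimal completeness check of the kind described in items 1 and 2 above: comparing the planned site/variable combinations against those actually reported.

    # Hypothetical completeness check: every planned (site, variable) combination
    # should appear in the reported data.
    planned_sites = {"ELM-01", "ELM-02", "ELM-03"}
    required_vars = {"percent_cover", "stem_count", "soil_moisture"}

    reported = {
        ("ELM-01", "percent_cover"), ("ELM-01", "stem_count"), ("ELM-01", "soil_moisture"),
        ("ELM-02", "percent_cover"), ("ELM-02", "stem_count"),
        ("ELM-03", "percent_cover"), ("ELM-03", "stem_count"), ("ELM-03", "soil_moisture"),
    }

    expected = {(s, v) for s in planned_sites for v in required_vars}
    missing = sorted(expected - reported)

    for site, var in missing:
        print(f"Missing result: site {site}, variable {var}")
    # -> Missing result: site ELM-02, variable soil_moisture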
6.2.2.2 Verifying Compliance with Required Procedures
All project results should be carefully reviewed to verify they were collected from the correct location(s),
during the correct time(s), using the correct forms and procedures, and that any anomalies, corrections
or other issues were properly addressed. This includes verifying that:
•	field crews followed the specified protocols;
•	environmental samples were properly preserved and handled from collection to laboratory
processing;
•	laboratories used the specified procedures for sample analysis and analyzed samples within
appropriate holding times;
•	data were collected at specified frequencies and within specified time frames (e.g., seasons);
•	data were collected by individuals having the required experience and training;
•	sample locations are consistent with specified locations;
•	field and laboratory calculations were performed correctly;
•	field notes or laboratory narratives provide explanations of any difficulties encountered, deviations
from procedures, or deviations from QC requirements;
•	identified errors have been corrected, and each correction has been initialed or signed by the person who made the change; and
•	data flags that field or laboratory personnel are required to apply to results deviating from requirements are present and accurate.
Photo: Installation of log vanes and a bankfull bench on Elm Creek, Minnesota, in the southern glaciated corn belt region. (Photo credit: Britta Suppes, Joe Magner, Chris Lenhart)
6.2.2.4 Verifying the Integrity of Results
Data handling activities introduce ample opportunity for (1) inadvertent corruption of results, or (2)
incomplete or inaccurate linkage of related results. For example, errors as small as a single stray value
can cause significant data shifts when one set of data is merged with another. Similarly, data transfers between different types of documentation (e.g., from field notes to forms) can introduce transcription and typographic errors, omissions, duplication or erroneous data associations (linkages).
To help identify and correct such problems, we recommend designing data management systems in a manner that documents all modifications by data reviewers, managers or other project staff, such as an audit log that records the individual who revised the data, the date and time the revision was made, and the value(s) before and after the revision. (Refer to Appendix A for additional recommendations concerning data management.)
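The following is a minimal sketch of this audit-log pattern, assuming a simple CSV log; the field names and schema are illustrative, not a prescribed format.

    # Sketch: append-only audit log recording who changed what, when, and
    # the before/after values. The schema is a hypothetical example.
    import csv
    import datetime

    def log_revision(log_path, record_id, field, old_value, new_value, reviewer):
        with open(log_path, "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.datetime.now().isoformat(timespec="seconds"),
                reviewer, record_id, field, old_value, new_value,
            ])

    # Example: a reviewer corrects a transcribed stem count.
    log_revision("audit_log.csv", "P-07", "stem_count", 38, 28, "J. Reviewer")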
The following subsections provide guidance on several techniques that can be used to implement the
data verification and validation activities described in Section 6.2.2. These include the development and
use of variable-specific data review procedures (Section 6.2.3.1) and the use of reference tables,
compliance checks, range checks and consistency checks (Section 6.2.3.2).
6.2.3.1 Variable-Specific Data Review Procedures
One highly effective technique for conducting the activities described in Section 6.2.2 is to develop and use tables that describe specific verification and validation procedures for each variable. Such tables:
•	help ensure that each variable is reviewed consistently over time by different staff members;
•	facilitate development of automated or manual compliance checks, range checks, consistency checks
and look-up tables that can be used to implement data review activities (see Section 6.2.3.2); and
•	should be incorporated into the project-specific data review procedures (see Section 6.1.5).
Exhibit 6-3 illustrates how such a table might be applied to data collected for the ground cover
monitoring example used in previous chapters. The example table includes two primary variables of
interest (Plant Species or Ground Cover Group and Cover Class Code), which are highlighted in grey, as
well as ancillary variables that provide information about the sampling location and date, field crew
members, and equipment used to pinpoint the location of data collection (e.g., GPS Unit #). These types
of ancillary variables are typically recorded near the header of any field data form (electronic or
otherwise) or sample label, and can easily be checked against these forms and labels during data review.
The table also includes ancillary variables that can help support or refute questionable results. For
example, flowering or fruiting plant structures are key identification features; a "yes" result for the
Flower/Fruit variable indicates such structures were present at the time of data collection and provides
data reviewers with added confidence in the accuracy of the primary variable results. Similarly,
verification of the Plant Voucher Specimen ID and Photo ID variables helps ensure that specimens and/or
photos taken by the field crew are correctly linked to the corresponding sample data. Once verified, the
specimens and photos can be used to confirm the accuracy of the data reported by the field crew.
Finally, the following observations should be noted:
•	Exhibit 6-3 includes verification procedures for all variables, but validation procedures are included only for the target variables and those ancillary variables that serve as diagnostic tools. Deciding what data to review, and how those data should be reviewed, is a crucial part of project planning (see Section 6.1).
•	The procedures shown in Exhibit 6-3 are for illustrative purposes only. For example, the verification
procedures for digital photographs (see the Photo ID variable) focus on confirming photos are
properly documented in terms of time, location and unique identifier. Other data (e.g., make,
model, shutter speed, exposure settings) may be useful for some projects.
•	Although the primary purpose of such variable-specific tables is to clearly identify what needs to be
reviewed and how, other information can be included. For example, Exhibit 6-3 indicates that soil
•	Plant species composition should be similar to that found in nearby locations of similar size and comparable habitat characteristics (consistency in space). These data can also be compared to corresponding data obtained at the same location during a prior monitoring event under comparable conditions (consistency across time), assuming no disturbance or treatment has changed conditions since the last monitoring event.
•	A tagged animal that is captured repeatedly in live trap surveys should have the same sex recorded
each time, although its weight, age and reproductive status might differ.
•	Stream temperature data can be evaluated by examining temperatures taken in nearby streams
with similar geomorphic and riparian habitat conditions (for spatial consistency) or temperatures
taken at the same stream location and same time of year during previous monitoring events (for
temporal consistency).
•	Laboratory results, such as macroinvertebrate counts or water chemistry measurements, can be
evaluated for external consistency by comparing results to the same parameters at similar nearby sites
or to results obtained during previous monitoring events at the same site under similar conditions.
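As a minimal sketch of a temporal consistency check (the prior-event temperatures and the three-standard-deviation tolerance below are hypothetical illustrations, not requirements of this guide):

    # Sketch: flag a current result that departs sharply from values recorded
    # at the same site and season during prior monitoring events.
    import statistics

    prior_temps_c = [14.2, 15.1, 13.8, 14.9, 15.4]   # same site, same season
    current_temp_c = 21.3

    mean = statistics.mean(prior_temps_c)
    sd = statistics.stdev(prior_temps_c)
    if abs(current_temp_c - mean) > 3 * sd:          # tolerance chosen for illustration
        print(f"Flag for review: {current_temp_c} C is >3 SD from prior mean {mean:.1f} C")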
Ecological consistency checks are used to evaluate data in relation to established scientific understanding
of ecosystems. This type of check is difficult to prescribe and requires specific expertise by those
conducting data validation activities. For example, certain plant species or plant associations might be
expected to be found in different locations at a site based on soil type and hydrology; a validation check
for ecological consistency would evaluate whether or not these species or associations were found as
anticipated. If a plant species that is normally only found in dry uplands was reported in a wetland, the
result could trigger additional scrutiny of those data, as well as any data associated with them.
Similar approaches to the evaluation of ecological consistency can be applied to data collected from
field and laboratory measurements, particularly when combined with results from field observations.
For example, certain biotic assemblages are anticipated in stream riffles of fresh-water streams based
on certain field and laboratory measurements of water quality. If reported data are not consistent with
the anticipated results, they should be evaluated and possibly flagged for potential errors.
Recommended procedures for handling such discrepancies or errors (including identifying data with
potential discrepancies or errors, deciding how to address those issues, implementing corrective actions,
and documenting these processes and decisions) are described below in Section 6.3.
6.3 HANDLING DATA DISCREPANCIES AND ERRORS
Any results found during the data review process that are inconsistent with specified requirements,
acceptance criteria, or scientific expectations should be documented, investigated and, where
appropriate, corrected. A summary of the general process is shown in Exhibit 6-8, and includes:
•	identifying questionable data;
•	making decisions regarding how to handle the questionable data identified;
• documenting the identified data, along with any corresponding decisions and actions taken; and
• implementing the corrective action.
Exhibit 6-8. Identifying, Handling and Documenting Questionable Data
•	Identify: Mark questionable data requiring further examination, using descriptors or other notations to classify suspected error types.
•	Decide: Make a decision regarding the questionable data (accept, correct, or flag) based on a review of completed field data forms, results from QC checks, staff interviews, SOPs, etc.
o	Accept: Accept the data as valid, with an explanation in the metadata.
o	Correct: Correct the data and provide an explanation in the metadata.
o	Flag: Incorporate the flag into the verified or validated database.
•	Corrective Action: Implement corrective action, as appropriate, to improve or correct data collection and corresponding results.
•	Document: Maintain documentation of error identification, any changes, and corresponding decisions and rationale.
The end result should be a transparent, verified and validated database that can be advanced for use in
data analysis described in Chapter 7. Each component of Exhibit 6-8 is described in greater detail below.
•	specifications for the issuance of certification (e.g., criteria, minimum standards, data quality
thresholds, data verification and validation requirements, data maintenance requirements, and
prescriptive actions for handling questionable data); and
•	requirements for maintaining certification status, renewing certification, and downgrading (or
revoking) the certification.
For projects that generate a large amount of data requiring rapid, recurring or point-in-time certification, developing a semi-automated certification process, driven by software code and integrated into the data management system and reporting procedures, may be a valuable investment. A semi-automated process to review, validate, document and report on data quality enables comprehensive systems analysis, with enhanced capacity to analyze, auto-correct, secure and document data quality for each component or property for which certification is sought.
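The sketch below illustrates the general shape of such a semi-automated process, assuming a simple suite of pass/fail checks; the specific checks, field names and the 95% completeness threshold are hypothetical examples.

    # Sketch: run a suite of checks against a dataset and report pass/fail
    # for each before certifying the batch.
    def completeness_ok(records, expected_n):
        return len(records) / expected_n >= 0.95      # illustrative threshold

    def values_in_range(records, field, lo, hi):
        return all(lo <= r[field] <= hi for r in records)

    def certify(records, expected_n):
        checks = {
            "completeness >= 95%": completeness_ok(records, expected_n),
            "stem_density within 0-50": values_in_range(records, "stem_density", 0, 50),
        }
        for name, passed in checks.items():
            print(f"{name}: {'PASS' if passed else 'FAIL'}")
        return all(checks.values())

    data = [{"stem_density": 3.2}, {"stem_density": 4.1}, {"stem_density": 2.8}]
    print("Certified:", certify(data, expected_n=3))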
Once all data have been verified, validated and certified, the "check" phase of the "Plan/Do/Check/Act" quality management cycle and the "review" phase of the "Plan/Prepare/Collect/Review/Evaluate" project management cycle are considered complete (see Exhibit 1-1), and the data are ready for the "act" and "evaluate" phases discussed in Chapter 7. The data analysis activities described in Chapter 7 include assessments to:
•	confirm the data conform to the assumptions used in designing the monitoring program (e.g., the
estimated amount of error used to estimate confidence in the resulting data),
•	identify any relevant patterns and trends, and
•	determine if the project and sampling objectives were achieved.
Where the data verification and validation processes described in this chapter are used to confirm
project data were gathered correctly and are scientifically sound, the data analyses described in Chapter
7 provide important information on the value of the plans and procedures used to prepare, collect and
review the data, and help project managers determine if modifications are needed to improve the
relevance and quality of the data for future monitoring efforts.
6.5 DATA REVIEW - CHECKLIST
The checklist below provides a summary list of overarching principles and aspects that should be
considered and implemented when reviewing project data. As with any checklist, the listed items should
not be interpreted or applied without comprehension of the supporting information. Users of this
checklist are encouraged to read and understand the corresponding details that are provided
throughout this chapter, and to implement these details using a graded approach that is commensurate
with a project's scope, importance and available resources.
CHAPTER 7 DATA ASSESSMENT, ANALYSIS AND REPORTING
The preceding chapter discussed procedures for verifying, validating, and certifying the extent to which
ecological restoration data were collected as intended and with the appropriate level of quality. This
chapter provides guidance on QA/QC strategies associated with data use. As with all other aspects of
ecological restoration monitoring, and as discussed in Section 7.1, these activities should be planned and documented in advance to ensure the right personnel and procedures are used and to promote accurate and scientifically defensible decisions based on project results. The remainder of the chapter
provides guidance on data assessment, data analysis and project reporting.
Data Assessment is a continuation of the data review process that involves addressing data quality flags,
assessing data quality indicators (DQIs) across measured, observed and calculated variables, and
reconciling the data quality with project assumptions and stated sampling objectives. These steps,
described in Section 7.2, facilitate data analysis and ensure that reported results reflect the appropriate
and intended use of the data. For example, during data assessment, project staff might assess how a
particular data validation flag on a species identification result might impact the accuracy of a diversity
index calculated for the project and ultimately the ability to detect a change in species diversity.
Data Analysis is the mathematical and statistical process of using collected data to describe results,
answer questions and support decision making. In essence, the data analysis process quantifies the
degree to which restoration project objectives were met. Data analysis can include simple mathematical
operations such as the calculations of descriptive statistics or the aggregation of data representing
multiple variables into a single indicator or index value. It also can include more complicated and
rigorous approaches such as hypothesis testing, regression, trend, or principal components analysis, and
modeling. Although it is beyond the scope of this guidance to address all possible approaches to the
analysis of monitoring data, Section 7.3 provides a general overview of options, discusses items that
may be of particular interest in ecological restoration projects, and provides guidance on QC steps
during the data analysis process.
The results from ecological restoration projects often provide information that is useful in improving
data quality in subsequent collection efforts and enhancing the success of future restoration activities.
Therefore, Section 7.4 provides guidance on incorporating lessons learned and other information from:
•	data review and assessment into the continual planning process to subsequently improve the
quality of data collection, and
•	data analysis into the adaptive management process to subsequently improve the success of
restoration efforts. Additional information concerning adaptive management is provided in Chapter 8.
Finally, Section 7.5 provides guidance for ensuring that project reports are accurate, appropriate for the
intended audience, and properly vetted.
Ecological restoration projects may rely upon the calculation of specific metrics of ecological diversity (e.g., species composition, richness and evenness) or composite indices such as an index of biotic integrity (IBI), floristic quality index (FQI) or index of macroinvertebrate biotic integrity (M-IBI). Project staff should establish SOPs for calculating such variables and indices just as they do for collecting the data used in the calculations.
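For illustration, the sketch below computes species richness, Shannon diversity and Pielou's evenness from a hypothetical sample of taxon counts; it is not a prescribed SOP or a required set of metrics.

    # Sketch: diversity metrics from taxon counts (hypothetical data).
    import math

    counts = {"species_A": 12, "species_B": 7, "species_C": 1}

    n = sum(counts.values())
    richness = len(counts)
    shannon = -sum((c / n) * math.log(c / n) for c in counts.values())
    evenness = shannon / math.log(richness)           # Pielou's evenness

    print(f"Richness={richness}, Shannon H'={shannon:.3f}, Evenness J'={evenness:.3f}")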
Quality (How the quality of the data assessment, analysis, and/or reporting activities will be
controlled): High quality data are of little value if they are analyzed and reported inappropriately.
Therefore, project teams should establish QA/QC strategies for the data assessment, analysis and
reporting phases of the project, just as they do for the data collection and data review phases. In most
cases, this involves reviews and spot checks. The data assessment step is itself a QC process to
determine data usability, but steps in this process should be reviewed internally - and possibly
externally - to ensure the correct procedures and assumptions were used. For data analysis, QC steps
should include spot checks of analyzed data to ensure the correct procedures were followed and results
were accurately calculated. For reporting, internal and external peer reviews may be used to ensure
quality (Section 7.4). Planning at each phase of the project should include identification of the
individuals who will perform reviews and/or spot checks and the nature and extent of the reviews.
Use (How the project data will be used in decision making): One of the most important aspects of
overall project planning is to establish a level of quality control that is commensurate with the intended
use of the data (i.e., the Graded Approach). This concept also applies to aspects of data assessment and
analysis. Data assessment and analysis procedures should be planned with the intended use and
decision-making process in mind. Project planners should ask the following questions:
•	What decisions will be made with the data?
•	How does the data analysis inform those decisions?
For example, data analysis that is used to inform the adaptive management process and adjust seasonal
sampling strategies may not require the level of rigor demanded by data analysis that supports ultimate
decisions to terminate a restoration project or initiate a new restoration project.
7.2 ASSESSING IMPACTS OF DATA QUALITY ON USE
This section provides guidance on data assessment, a continuation of the data review process described
in Chapter 6. Data assessment involves evaluating how the quality of the validated dataset impacts its
intended use, and the process answers a number of questions, including the following:
•	Should I use data that were flagged in the data review process? If so, how?
•	How do data quality flags influence my ability to make decisions based on the data?
•	Were project assumptions and sampling objectives met?
•	Do DQIs meet minimum levels of performance necessary to assess project objectives?
• What is the level of uncertainty in the project decisions that I make?
To answer these questions, this section provides guidance on quantitative and qualitative procedures
and subsequent evaluations needed to determine the reliability of monitoring data, their limitations,
and their application to inform decision making within an adaptive management framework. Guidance is
provided on addressing data quality flags, revisiting project assumptions and sampling objectives, and
evaluating DQIs. Specific guidance is offered on statistical approaches to assess whether DQIs achieve
the specified acceptance criteria needed to test null hypotheses, given the expected effect size and corresponding Type 1 and Type 2 decision errors.
7.2.1 Addressing Data Validation Concerns
Once the data review steps described in Chapter 6 are complete, an additional step is needed to
evaluate the potential impact of flagged data and determine any limitations on the use of the data.
Specific objectives of this process include the following:
1.	Identify different types of data issues and/or instances of flagged data.
2.	Determine the possible effects of these data issues on the evaluation of project objectives.
3.	Decide how to address data issues such that the impact on data analysis is minimized.
4.	Communicate corrective actions taken and data usability limitations to data users.
7.2.1.1 Types of Data Quality Flags and Impacts on Data Quality Indicators
The data review activities described in Chapter 6 may identify a variety of flagged data including, but not
limited to, the examples shown in Exhibit 7-1.
What is the direction of the error? For flagged data that result in bias, the direction of the bias should
be considered. Depending on the direction of the bias, the flagged data may or may not influence the
overall determinations of data analysis. For example, blank QC samples can indicate contamination in
water samples. If the level of contamination exceeds the QC criterion, data associated with this
collection effort may be flagged with possible positive bias due to contamination. If, however, the
preliminary conclusion of data analysis is that contaminant concentrations in the associated field
samples were below biological effect thresholds (even considering the possible bias associated with
sample contamination), the data flag may be considered inconsequential. On the other hand, if the
preliminary conclusion of data analysis is that contaminant concentrations were just above biological
effect thresholds, the data analyst should apply more caution in interpreting the flagged data. In this
case, the concentrations reported slightly above the effect thresholds may be a result of the same
contamination present in blank QC samples.
7.2.1.2 Corrective Action and Usability of Flagged Data
Section 6.3.4 discusses corrective action taken as a part of data review. These steps primarily involve
correcting data collection and reporting errors and preventing the errors from recurring in future data
collection efforts. After data have been reviewed and compiled in a certified system, the data
assessment phase provides another opportunity for corrective action. In this phase, project personnel
should evaluate the usability of the reviewed data for inclusion in the data analysis and, for each data
quality flag, must decide whether to:
1.	retain the flagged data,
2.	discard the flagged data from data analysis, or
3.	use some corrective action to limit the impact of the issue on data analysis and decision making.
Project personnel should consider the impacts of each of the above options on data analysis and decision
making. While it seems that discarding flagged data may be the most conservative approach, this option is
not without impact on the data analysis process. Reducing the quantity of data lowers statistical power, increasing the likelihood of a Type 2 error and reducing the ability to detect statistical differences within the dataset. Project
personnel should weigh the impacts of each of the above options and choose the option with the smallest
impact. Examples of corrective actions that can be taken to limit the impact include data consolidation,
data correction and confirmatory analysis. Each of these is discussed below.
Data consolidation: In some circumstances, data can be aggregated or consolidated to reduce the
impact of a given data quality flag on data analysis. For example, if a large amount of data reported by
one laboratory technician was flagged during data review because QC spot checks and archived voucher
specimens confirmed the technician was confusing two different benthic macroinvertebrate species
(species A and B) and incorrectly identifying both as the same species, the data analyst could choose one
of the following options:
• Retain the flagged data even though all samples analyzed by this individual would be biased low in
species richness and species diversity.
•	Exclude the flagged data from data analysis even though the amount of data with which to evaluate
project objectives would drop significantly and thus compromise the ability to detect actual
differences resulting from the restoration (Type 2 error).
•	Apply a corrective action that modifies the results so that identifications for the two species in question are consolidated. Enumerations of all individuals of species A and species B would be combined into a single enumeration for species A and B. This corrective action would reduce species richness and diversity systematically across all samples, but it may be the most appropriate action because it eliminates a bias that might unevenly impact one treatment more than another and lead to a potentially erroneous conclusion.
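The sketch below illustrates this consolidation option, combining enumerations for two hypothetical taxa into a single taxon across all samples so that the adjustment is applied systematically rather than to one analyst's data only:

    # Sketch: consolidate counts of two confused taxa into one taxon
    # across every sample (names and counts are hypothetical).
    samples = [
        {"species_A": 10, "species_B": 4, "species_C": 6},
        {"species_A": 7, "species_B": 0, "species_C": 9},
    ]

    for s in samples:
        s["species_A_or_B"] = s.pop("species_A") + s.pop("species_B")

    print(samples)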
Data correction: In situations where the impact of a given data flag is unknown, an investigation can be initiated to assess the potential impact and correct the data appropriately. For example, if a water quality probe used for dissolved oxygen measurements failed calibration and was shown to be biased low, a follow-up investigation could be conducted to determine the impact of this discrepancy on data analysis (e.g., calibration records and follow-up check measurements could be used to determine the average deviation between this probe and actual dissolved oxygen conditions). If the investigation revealed that the probe was biased low by an average of 0.8 mg/L oxygen, then data from this probe could be corrected by +0.8 mg/L.
Confirmatory analysis: In situations where the impact of a given data flag is unknown, a statistical
investigation could be initiated to estimate the potential impact. If this analysis confirms the impact is
small and does not directionally bias the data analysis or decision-making outcome, the data can be
retained. Using the dissolved oxygen example above, a statistical power analysis could be performed to
show that, based on the variability of dissolved oxygen data and the number of samples, a 3.6 mg/L
difference is the minimum that could be detected as statistically significant. If the observed difference
was only 1.7 mg/L, deviations of ±0.8 mg/L resulting from the probe error would not have impacted the
decision that there was no difference. (If results were biased low by 0.8 mg/L, there would still be no
observed statistical difference, and if results were biased high by 0.8 mg/L, results would still not reach
the threshold of determining a statistical difference.) The analysis could thus be used as confirmation that the data can be retained without changing the decision-making outcome. In this case, the flagged data may have affected the bias, accuracy and precision of estimates for selected variables, but did not affect the resulting conclusion.
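A sketch of this kind of power calculation is shown below, using the statsmodels library for a one-sample t-test; the standard deviation, sample size, alpha and power are hypothetical inputs, not the values from the example above.

    # Sketch: solve for the minimum detectable difference given the observed
    # variability, sample size, alpha and desired power (hypothetical inputs).
    from statsmodels.stats.power import TTestPower

    sd = 1.9      # observed standard deviation of dissolved oxygen (mg/L)
    n = 10        # number of measurements

    # Solve for the minimum standardized effect size (Cohen's d), then
    # convert it back to the measurement units.
    d = TTestPower().solve_power(nobs=n, alpha=0.05, power=0.80,
                                 alternative="two-sided")
    print(f"Minimum detectable difference: {d * sd:.1f} mg/L")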
7.2.1.3 Example Assessments of Data Limitations and Impacts on Data Use
This section provides example scenarios designed to illustrate how the data assessment process can be
used to evaluate the impact of flagged data on data analysis. Because both scenarios are based on the example willow restoration project carried throughout this guidance, Exhibit 7-2 summarizes the objectives, required data and acceptance criteria associated with the example project.
•	Type, specifics, extent and direction of the flagged data: The flagged data affected a primary
variable (willow stem density) and have the potential to directly affect the achievement of project
and sampling objectives. The extent of the flagged data includes 1 of the 3 routine field crews; therefore, 10 of the 30 plots may have been affected. QC and training logs indicate that refresher
training corrected the problem, but not until after this crew had monitored 7 plots. QC logs also
indicate that the errors resulted in an average overestimation of stem density by 1.2 stems/m2. In
summary, the issue biased the primary variable by 1.2 stems/m2 in 7 of the 30 plots. The data
analyst has the options of retaining these data as is with the associated data flags, rejecting the
flagged data, or correcting the data.
•	Impact of the non-conformity on DQIs: The data analyst investigated the impact of this issue on
each DQI.
o Completeness. If the data are retained or corrected, there is no impact on the completeness
criterion. If, however, the flagged data are rejected, completeness will be reduced to 77%, which
is below the minimum completeness criterion of 95% established in the project QA plan. While
the sampling plan was designed to accommodate a 5% loss of data, a 23% loss has the potential
to negatively impact the achievement of sampling objectives.
o Precision, Bias and Accuracy. As previously stated, this issue established a positive bias of 1.2
stems/m2 in 7 of the 30 plots; this bias affects the overall accuracy of the stem count mean. If
the affected plots were corrected by the average -1.2 stems/m2, the stem count mean (across all
30 plots) is reduced from 3.8 to 3.2 stems/m2. The analyst also compared the estimates in
precision among the 7 plots with flagged data to the precision estimated among the remaining
plots, and determined the issue produced a decrease in precision. Specifically, the analyst
concluded that the coefficient of variation (CV, also known as the relative standard deviation
or RSD) among the 23 plots with unflagged data was 12%, while the CV among the 7 plots with
flagged data was 26%. Including the 7 plots into the site mean decreased precision from a CV of
12% to a CV of 18%. If the flagged data were corrected by the average -1.2 stems/m2, the impact
on precision would be negligible (CV of 18%). This is because the error was not likely consistent
across the 7 plots, but the average correction factor was applied consistently. In other words,
the correction would adjust for the bias in the stem count and, as a result, produce only a minor change in precision.
o Representativeness, Comparability and Detectability. The data analyst determined the issue
did not have a substantive impact on representativeness. In contrast, he concluded the impact
on comparability can be stated qualitatively as significant, since results from this field crew are
biased and not comparable to the other 2 field crews. This issue also limits the comparability of
the results to results from other similar studies or projects. Detectability could be impacted
depending upon the methodology used when counting stems. For example, if the SOP defined
valid stems (for counting) as native live willow stems >30 cm in total length emerging directly
from a root base, then detectability would be reduced by miscounting the true number of stems
that meet those criteria (sensitivity) or, alternatively, by miscounting as a result of errors in determining whether a stem was valid (specificity).
•	Impact on the Sampling Objective: The original sampling design of 30 plots was selected to meet
the sampling objective of demonstrating that a mean density of 2.5 stems/m2 had been achieved
(with 90% certainty when actual stem density is at least 20% greater). However, and as discussed in
Scenario 1, the original design was based on assumptions of variability for the stem density
measurement. Now that data have been collected, the analyst was able to recalculate the statistical
power analysis using actual variability to determine the impact of excluding (censoring), including or
correcting the flagged data.
o If the flagged data are excluded, the sample size would be reduced from 30 to 23. A power
analysis based on actual sample variability revealed this reduction would reduce the probability
of concluding the targeted density of 2.5 stems/m2 had been achieved (when actually exceeded
by 20%) from 90% to 62%. If the acceptable level of certainty is held constant at 90%, the actual
stem density would need to be 47% higher (instead of 20%) than the 2.5 stems/m2 target. Based
on this analysis, the analyst concluded the sampling objective would not be met if the flagged
data are excluded.
o If the flagged data are included, there would be no change in the sample size, but the variability
would increase due to the reduced precision. In this case, the data analyst's power analysis
revealed the increase in variability would reduce the certainty with which the desired stem density
can be correctly detected (when actually exceeded by 20%) from 90% to 83%. If the acceptable
level of certainty is held constant at 90%, the actual stem density would need to be 32% higher
(instead of 20%) than the 2.5 stems/m2 target. Based on this analysis, the sampling objective
would not be met if the flagged data are included. In addition, the mean would remain biased.
o If the flagged data are corrected, there would be no change in the sample size, but the
variability would increase due to the reduced precision. In this case, power analysis results
would be similar to the results discussed above for the option of including the uncorrected data.
In summary, all three options fail to meet sampling objectives, and the data analyst should use
caution in assessing restoration project objectives.
•	Impact on the restoration project objective: Lastly, the data analyst considered how each of the
three options discussed above would impact the restoration project objective.
o If the flagged data are excluded, the mean willow stem density would be 3.2 stems/m2 (28%
greater than the target 2.5 stems/m2). However, reductions in the sample size increase the
minimum detectable difference from 20% to 47%, and excluding the flagged data may not support rejection of the null hypothesis. In this case, project staff would
be unable to confirm that restoration project objectives were met; they also would be unable to
definitively state whether this was due to failures in the restoration approach or failure to meet
the sampling objectives.
o If the flagged data are included, the mean willow stem density would be 3.8 stems/m2 (52%
greater than the target 2.5 stems/m2). In this case, the null hypothesis would be rejected, and
project staff would conclude the restoration project objective was met. In doing so, however,
project staff would likely be accepting the identified bias and ignoring evidence that the bias
significantly impacted the data quality.
o If the flagged data were corrected, the mean willow stem density would be 3.2 stems/m2 (28%
greater than the target 2.5 stems/m2). However, increases in variability increase the minimum
detectable difference from 20% to 32%, and correction of the flagged data would not support
rejection of the null hypothesis. In this case, project staff would be unable to confirm that restoration
project objectives were met. As a result, they would be unable to definitively state whether this was
due to failures in the restoration approach or failure to meet the sampling objectives.
•	Based on the data assessment, the data analyst decided to exclude the flagged data and report that
results could not confirm that restoration project objectives were met. The data analyst then met
with the project team to develop a supplementary quality document for additional sampling in year
3 that would attempt to confirm whether restoration project objectives were met a year after
project completion. In making this decision, the data analyst was rightfully concerned with retaining
flagged data that were biased, shown to impact multiple DQIs, and would bias the project
conclusions. Because correcting the bias ultimately led to the same conclusion, the analyst decided that the most defensible course was to exclude the data and develop plans for supplementing the data collection effort. While collecting additional data has a cost,
accepting a Type 1 error based on biased results also has significant costs. If the restoration
approach is truly not effective, investing in additional restoration efforts using the same flawed
approach could be much more costly.
7.2.2 Statistical Techniques for Evaluating Data Quality Indicators
Restoration data typically include results obtained in field settings by one or more trained crew
members, as well as results obtained from chemical, biological and physical analyses conducted in a
controlled laboratory environment. When combined, restoration data encompass a variety of data
types, including discrete, continuous, categorical (qualitative), or binary data. The unique properties
of each variable type should be considered when selecting the appropriate statistical procedures to
assess data quality.
As examples, consider the following:
•	Observer-determined results that are based on best professional judgment tend to be coarse in
resolution. They generally consist of discrete numeric data reflecting actual counts or proportions (e.g., 1, 15, 100); discrete classes or groups of estimated values (e.g., 0, 1-10, 11-100, 101-1000) that reflect a continuum; or categorical, non-numeric data (e.g., species
identification, land cover type). Statistically, the distributions of such data are typically different
from data collected by calibrated instruments.
• A scientific instrument or measurement device can enable field or laboratory staff to record results
for a given variable with an implied precision (regardless of whether it is appropriate to do so).
Common examples in environmental monitoring projects include: relative humidity reported as
"90.05%"; locational data reported as "WGS84: -86.75309"; or pH reported as "6.55." These types of
results represent continuous data and, though it is possible to record reported values with a large
number of significant figures, the appropriate number of significant figures (i.e., level of precision)
should reflect requirements specified in the SOPs or other QA documents.
The statistical procedures involved in evaluating the degree to which a dataset achieves sampling
objectives can vary depending on a number of factors, including the type of data (e.g., discrete,
continuous, categorical), measurement scale (e.g., nominal, ordinal, interval, ratio), number of results,
and distribution. Appendix B provides information about statistical procedures that are often suitable
for the unique types of data produced in ecological restoration projects. The material provided in
Appendix B, however, is not intended to supplant the need for data analysts to work with a statistician
who is familiar with analysis of environmental and ecological data. Instead, it is intended to provide data
analysts with information that may be helpful in discussing and understanding strategies recommended
by a qualified statistician.
7.2.3 Revisiting Project Assumptions
During the data assessment process, the analyst should revisit assumptions that were made during
project planning. These may be explicit assumptions that drove the restoration activities and sampling
design or implicit assumptions inherent to chosen options. Reevaluating these assumptions can be
critical for ensuring a proper understanding of data quality and usability, correctly interpreting results,
and informing the adaptive management process. Several common categories of assumptions are listed
below, but many others may be applicable to individual restoration projects.
Assumptions about controlling variables: At the outset of the project, the planning team determines
which variables to measure or observe based on an understanding of the ecological system and
assumptions regarding which variables will be important in controlling and responding to the restoration
actions. Throughout the course of the project, however, it may become evident that additional variables
are essential in controlling or describing restoration progress. These variables can be added to the
monitoring regime as a critical component to the adaptive management process (see Chapter 8). Likewise,
some planned variables may turn out to provide information of little or questionable value and may be
removed from long-term monitoring as a decision outcome of adaptive management.
Assumptions about data variability: When developing sampling designs, project planners typically make
assumptions about the magnitude of variability in data because these assumptions are necessary to
develop power curves that inform sample size selection. When data from the project become available,
the analyst should reevaluate stated or implied assumptions of data variability (similar to that discussed
in the example scenarios presented above). If variability is higher than expected, the chosen sample size
may not be adequate to meet sampling objectives. If variability is lower than expected, project
resources may be invested in sample sizes that are unnecessary. In either case, sample size targets may
be revised through the adaptive management process. However, data analysts must be aware of the
impacts such decisions may have on data quality and consistency. In particular, comparability,
representativeness and precision may be affected. Data analysis may also be impacted if certain
statistical methods require equal sample sizes.
Assumptions about data ranges: Assumptions regarding the natural range of target variables may have
guided decisions on analytical methods and procedures used in data collection. If actual data are outside
the range of original assumptions, the selected methods and procedures may no longer be the most
appropriate option. These decisions can be reevaluated through the adaptive management process;
however, the data analyst should consider the impact of any change on data quality.
Assumptions about project timing: Many assumptions are made during project planning regarding the
project timeline, and delays in the project can potentially impact restoration success. Project activities can
be interrupted by funding delays, permitting challenges, weather, poor planning or other unforeseen
circumstances. Planning teams should allow for seasonal flexibility by defining target and alternate periods
acceptable for specific restoration activities. If the planned buffer periods prove insufficient (e.g., due to more frequent or longer delays), the project team should assess the impact of the delays and may need
to modify restoration activities, objectives or goals. Project delays may also require modifications to the
monitoring design or extension of the monitoring period to ensure restoration will be effectively
monitored during ecological recovery.
Project timing can also significantly impact DQIs such as representativeness, comparability, and
detectability. In order to adequately detect some ecological measures, it is important that project timing
overlap critical seasonal cycles such as migration, nesting, reproduction, flooding regimes, growing
seasons, etc. If project delays push sampling past critical seasonal windows, sampling may not adequately
detect or represent the true ecological condition of the site. Inconsistent timing of sampling events from
year-to-year may also impact the comparability of data between those years and misrepresent trends in
the progress of ecological restoration.
Assumptions about the implementation of restoration actions: Most ecological restoration projects
assume that restoration activities will be conducted as planned. However, implementation of those
activities can be impacted by a number of unforeseen circumstances (e.g., severe weather or floods,
inability to sufficiently remove invasive species, uncontrolled wildlife foraging, or contamination of seed
mixtures). In such cases, the data analyst should consider the impact of incomplete or improper
restoration implementation on project and sampling objectives. If these objectives cannot be met
without fully implementing the restoration activities, the project team should use the adaptive
management process to consider whether the objectives need to be revised or whether supplemental
restoration activities are needed.
Assumptions about the speed and trajectory of recovery: Short-term and long-term monitoring plans
are typically developed based on assumptions regarding the speed and trajectory of ecological recovery
following restoration activities. If these assumptions are not met, monitoring plans may need to be
modified to accommodate the actual pace and direction of recovery. This could mean extending the
long-term monitoring period if recovery is slower than planned or adding monitoring variables if the
direction of recovery is different than anticipated.
7.2.4 Evaluating Sampling Objectives
The preceding portions of Section 7.2 discussed the elements of data assessment necessary for evaluating
the impact of data quality issues on the intended use of the data. These elements are not distinct from the
evaluation of sampling objectives; instead, they are integral components of an adaptive management
process. Although it is not necessarily a separate step, evaluation of sampling objectives represents a
culmination of the data assessment process that ties the data assessment to criteria stated in the project
planning documents. The sampling objectives were specifically developed to ensure that project objectives
can be adequately evaluated; failure to achieve the sampling objectives can jeopardize the project's ability
to determine with certainty whether project objectives have been met.
Evaluation of whether sampling objectives have been achieved involves (1) assessing the impact of data
quality concerns, (2) providing a definitive yes or no statement as to whether each objective was met,
and (3) documenting the degree to which the objective was or was not achieved - including a
description of any explanatory factors. Each of these is discussed below.
Assess the impact of data quality concerns as described in the preceding sections: Data assessment
may identify a wide range of concerns and impacts and, along with reporting those concerns, the data
analyst should focus on how those concerns specifically impact the sampling objective. For instance, the
data analyst may report that missing data resulted in a 10% decrease in statistical power, or
measurement error caused a decrease in precision and resulted in an increase of the minimum
detectable difference from 20% to 30%.
Provide a definitive yes/no statement as to whether each sampling objective was met: Using the same
willow example discussed throughout this chapter, the analyst should statistically determine whether
the variability of the usable data would demonstrate achievement of a mean density of 2.5 native willow
stems per square meter, with:
•	90% certainty when the true density is at least 20% greater than the targeted 2.5 stems/m2 and
•	a 5% chance (a) of incorrectly concluding that the 2.5 stems/m2 objective was reached when it in
fact was not.
Describe the degree of achievement: If the sampling objective was not met, the analyst should describe
quantitatively how close the data came to meeting the objective. For instance, this could be expressed
as: "given the desired power and alpha level, the data were only capable of demonstrating if a minimum
detectable difference of 30%, rather than the targeted 20%, was achieved" or "at the targeted minimum
detectable difference level of 20%, the data could only achieve 82% power, rather than the targeted 90%
power." If the sampling objective was achieved, the analyst should describe the degree to which data
quality exceeded the objective.
7.3 ANALYZING AND INTERPRETING DATA
This section provides guidance specific to data analysis and interpretation with the assumption that data
review and data assessment steps have been completed. This section is not intended to describe all
possible approaches that can be used to analyze ecological restoration data. Rather, this section
provides guidance on the common series of data analysis steps (EPA 2006a), QC considerations
throughout the process, and specific conditions that may be unique to ecological restoration projects.
7.3.1	Review Project Objectives
The overall purpose of data analysis is to determine whether project objectives have been met. A review
of these objectives at the initiation of data analysis is useful in focusing the data analysis process. Within
ecological restoration projects, there is often a plethora of data and numerous potentially interesting
trends and relationships within the data. Although these relationships may be useful in informing
ongoing and future restoration activities through the adaptive management process (Chapter 8), the
primary focus of data analysis should be on the current stated project objectives. These objectives may
have remained unchanged throughout the project, or they may have been revised through the adaptive
management process based on previous data analysis and lessons learned.
7.3.2	Conduct Statistical and Graphical Data Review
A subsequent step is to calculate basic summary statistics based on the data. These might include, for
example, determining the mean, median, percentiles, interquartile ranges, standard deviations and
coefficients of variation for each variable, as well as correlations between primary or calculated
variables. It is often helpful to prepare graphical representations, such as histograms, frequency plots,
box and whisker plots, or plots of spatial data. These statistics and graphs help data analysts gain a
mental picture of central tendencies, variability, potential temporal or spatial trends, and possible
relationships between variables. Statistical and graphical reviews can also highlight outliers or anomalies
within the data that were not identified in the data review or data assessment steps.
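As a minimal sketch of this step (the stem-density values below are hypothetical), a few lines of Python using the pandas and matplotlib libraries can produce summary statistics and a box-and-whisker plot:

    # Sketch: basic statistical and graphical review of one variable.
    import pandas as pd
    import matplotlib.pyplot as plt

    stems = pd.Series([3.1, 2.8, 4.0, 3.5, 2.9, 3.3, 9.8, 3.0],
                      name="stems_per_m2")

    print(stems.describe())                     # count, mean, std, quartiles
    print("CV (%):", 100 * stems.std() / stems.mean())

    stems.plot(kind="box")                      # the 9.8 value stands out
    plt.savefig("stems_boxplot.png")

In this hypothetical dataset, both the summary statistics and the plot draw attention to the 9.8 value, which would then be examined using the outlier procedures described in Section 7.3.2.1.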
7.3.2.1 Treatment of Outliers
Graphical representations and statistical summaries of data can reveal potential outliers. When
potential outliers are identified, the data analyst should take the following steps:
•	Confirm the validity of the result: The data analyst should consult the data reviewers to determine
if the potential outlier represents an error that was not identified during data review. Because the
outlier could represent a transcription or data entry error, data reviewers should consult original
field data sheets to confirm the result in question matches the result originally recorded by the field
crew or laboratory analyst (see Chapter 6).
•	Investigate the cause: If the potential outlier represents an accurately recorded result, the data
analyst should investigate whether there is some explanation for the potential outlier by consulting
the field or lab personnel that recorded the result and reviewing other data obtained in close
proximity and near the same time as the potential outlier. Such investigations may reveal an
explanation for the potential outlier, such as animal activity, vandalism, weather, urban structures,
natural disturbances or other causes. If a cause is identified, and the cause is an anomaly that is not
representative of the site, the data analyst can consider flagging the result as an outlier.
•	Evaluate variability: The data analyst should evaluate the range of variability observed across the
site and statistically compare the potential outlier to the distribution of the variability. Standard
statistical methods are available for analyzing variability and formally identifying outliers. Commonly used statistical outlier tests include Grubbs' test and Dixon's test (Grubbs 1969; Dean and Dixon 1951); a sketch of Grubbs' test follows this list. Beyond the formal statistical evaluation, however, the data analyst should consider whether
the potential outlier falls within natural ranges for the variable and is representative of the diversity
of the restoration site. A value that falls within natural ranges and represents the diversity of a site
should not be discarded, even if it is identified as a statistical outlier. A determining factor should be
whether the estimated value is representative of the conditions at the site.
•	Make a determination and document the decision: Based on the above steps, the data analyst
should make a decision to retain or discard potential outliers. This decision and the rationale for this
decision should be formally documented in project files and data analysis reports.
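The sketch below implements a two-sided Grubbs' test for a single outlier, using the same hypothetical stem-density values shown in the earlier summary-statistics sketch; the 0.05 alpha level is also illustrative, and the test assumes approximately normal data.

    # Sketch: Grubbs' test for a single outlier (two-sided).
    import math
    from scipy import stats

    def grubbs_statistic(values):
        mean = sum(values) / len(values)
        sd = stats.tstd(values)                  # sample standard deviation
        return max(abs(v - mean) for v in values) / sd

    def grubbs_critical(n, alpha=0.05):
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        return ((n - 1) / math.sqrt(n)) * math.sqrt(t**2 / (n - 2 + t**2))

    data = [3.1, 2.8, 4.0, 3.5, 2.9, 3.3, 9.8, 3.0]
    g = grubbs_statistic(data)
    g_crit = grubbs_critical(len(data))
    print(f"G={g:.2f}, critical={g_crit:.2f}, outlier={'yes' if g > g_crit else 'no'}")

A statistically identified outlier should still be evaluated against the natural-range and representativeness considerations described above before any decision to discard it.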
7.3.2.2 Treatment of Censored Values
In addition to outliers, another data analysis consideration that deserves attention is the treatment of
censored values. By censored values, we mean data that are reported as less than (<) a certain value or
greater than (>) a certain value. Censored values can represent results that are below a sensitivity
threshold (detection limit) or above a measurable range. Examples of censored data include nitrate concentrations reported as <0.1 mg/L (the detection limit) or most probable number bacterial counts of >1184 cfu/100 mL (the maximum measurement range for a given dilution).
When variables containing censored values are used in mathematical or statistical calculations, the data
analyst must decide how to numerically represent the censored values. This choice can potentially bias
the resulting calculation. For instance, in calculating an average for a variable with censored values at
the lower end (< values), there are several options, but each carries certain biases:
•	Ignore the censored values: This approach is not recommended, because it severely overestimates
the variable mean. In this case, only low values are excluded, so the mean is biased in the positive
direction.
•	Set the censored value at the limit: In this approach, a censored value of <0.1 would be numerically
set to 0.1 for the purpose of calculations. This approach also biases the value in the positive
direction and overestimates the mean. This approach may be appropriate in some situations;
however, the data analyst should be aware of the presence and direction of the bias.
•	Set the censored value at zero: In this approach, a censored value of <0.1 would be numerically
represented as 0 for the purpose of calculations. In contrast to the previous approaches, this
approach biases the value in the negative direction and causes an underestimate of the variable
mean. As with the previous approach, this approach may be appropriate in some situations;
however, the data analyst should be aware of the presence and direction of the bias.
•	Set the censored value at half the limit: In this approach, a censored value of <0.1 would be
numerically set to 0.05 for the purpose of calculations. Although this approach is likely to produce
the least bias in the mean, the mean may still be biased and the data analyst will not know the
direction of the bias. This approach may be appropriate in some situations but, depending upon the
decisions that may be made from the data, the analyst may elect to choose another option with
greater bias just to have the certainty of knowing the direction of that bias.
Options for the treatment of censored values at the upper end (> values) are more limited, since the
upper end is unbounded. While zero bounds the lower end, > values can theoretically approach infinity.
For this reason, typically the only viable option is to set the value at the upper limit (e.g., > 1184 would
be set to 1184). In this case, the data analyst should be aware that the value is biased in the negative
direction and the variable mean is underestimated.
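The following minimal Python sketch illustrates the substitution options described above; the string convention for reporting censored results ("<0.1", ">1184") and the sample nitrate readings are illustrative assumptions, not a prescribed format.

# Minimal sketch of substitution rules for censored values. Data are hypothetical.
import numpy as np

readings = ["0.25", "<0.1", "0.18", "<0.1", "0.32", "0.21"]  # nitrate, mg/L

def substitute(raw, rule="half"):
    """Convert reported strings to numbers under a chosen substitution rule."""
    out = []
    for r in raw:
        if r.startswith("<"):                 # left-censored (below detection limit)
            limit = float(r[1:])
            if rule == "ignore":
                continue                      # not recommended: biases the mean high
            out.append({"limit": limit, "zero": 0.0, "half": limit / 2}[rule])
        elif r.startswith(">"):               # right-censored (above range)
            out.append(float(r[1:]))          # only viable option: biases the mean low
        else:
            out.append(float(r))
    return np.array(out)

for rule in ("ignore", "limit", "zero", "half"):
    print(f"{rule:>6}: mean = {substitute(readings, rule).mean():.3f} mg/L")

Running the sketch makes the direction of each bias visible: ignoring censored results yields the highest mean, substituting zero the lowest, with the limit and half-limit rules in between.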
Recommended steps for determining the best option for handling censored values are provided below.
•	Consider whether the data gathering technique is adequate: The data analyst should first consider
whether the method or data collection technique selected to quantify the variable provides an
adequate level of sensitivity. This decision should have been considered during the project planning
phase, but if actual data ranges vary significantly from those anticipated in the planning phase, this
decision may need to be reconsidered. The decision depends upon the range and availability of
alternative methods or data collection techniques, the natural range of the variable at the site, and
the decisions that will be made from the data. Often, the decision in question is whether the
variable exceeds some biological threshold. In this case, the selected technique should have a
detection limit below the threshold that defines the decision point.
•	Consider the inherent biases: The data analyst should consider the inherent biases that result from the different options and determine which bias is acceptable. This determination depends upon the use of the data and the willingness to accept Type 1 or Type 2 errors. As an example (illustrated in the sketch following this list), consider an analyst who is using phosphate data to test a null hypothesis that the mean phosphate concentration is below a biological threshold for impairment. If the data analyst is more concerned about the consequences of a Type 1 error (false positive), they may choose a treatment option for censored values that underestimates the mean. If the data analyst is more concerned about the consequences of a Type 2 error (false negative), they may choose a treatment option for censored values that overestimates the mean.
•	Make a determination and document the decision: Based on the above steps, the data analyst
should make a decision on how to treat censored values. This decision and the rationale for this
decision should be formally documented in project files and data analysis reports.
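As a minimal illustration of this trade-off, the following Python sketch shows how the substitution choice shifts a one-sided test of whether the mean exceeds a threshold; the threshold, the phosphate results, and the number of censored observations are all hypothetical.

# Minimal sketch: effect of the substitution rule on a one-sided test of
# H0: mean phosphate <= threshold. All values are hypothetical.
import numpy as np
from scipy import stats

threshold = 0.05                                  # hypothetical impairment threshold, mg/L
detected = np.array([0.07, 0.06, 0.08, 0.05, 0.09])
n_censored, limit = 3, 0.02                       # results reported as <0.02 mg/L

for label, sub in (("zero", 0.0), ("half limit", limit / 2), ("at limit", limit)):
    data = np.concatenate([detected, np.full(n_censored, sub)])
    t, p = stats.ttest_1samp(data, threshold, alternative="greater")
    print(f"{label:>10}: mean = {data.mean():.3f}, one-sided p = {p:.3f}")

# Substituting zero pulls the mean down, making exceedance harder to conclude
# (guarding against Type 1 error); substituting at the limit does the opposite
# (guarding against Type 2 error).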
7.3.5 Implement Data Analysis Procedures
After completing the above steps, the data analyst can perform the selected statistical tests and
interpret project results. All of the previous steps contribute to ensuring that the calculations and
associated conclusions address the project objectives in a scientifically defensible manner.
7.3.6 Assess Restoration Project Objectives and Goals
The focus and the culmination of data analysis should be an assessment of restoration project objectives
and overall project goals. As described in Chapter 3, ecological restoration projects should set narrative
project goals that describe the change in ecological condition that is desired as a result of the project.
These goals are then operationalized into more specific restoration project objectives that (1) support
the overall restoration goals, (2) serve as the foundation for planning restoration and monitoring
activities, and (3) provide the basis for measuring progress. If developed appropriately, using the SMART
(specific, measurable, achievable, results-oriented, and time-sensitive) approach (Doran 1981),
assessing the achievement of restoration project objectives should be a relatively straightforward aspect
of data analysis and interpretation. For example, Exhibit 3-3 presents specific statistical interpretations
of project objectives for defined case studies. If similarly prepared, these statistical interpretations
should be easily evaluated and project objectives clearly assessed.
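As a minimal sketch of such a statistical interpretation, the following Python example evaluates the reed canarygrass objective carried through this guidance (cover below 10% after eight years) with a one-sided, one-sample t-test; the plot-level cover values are hypothetical, and the appropriate test in practice depends on the project's sampling design.

# Minimal sketch: testing whether mean cover is below a 10% objective.
# Plot-level values are hypothetical.
import numpy as np
from scipy import stats

rc_cover = np.array([6.2, 8.1, 9.5, 7.4, 11.0, 5.8, 9.9, 7.7])  # % cover, year 8
t, p = stats.ttest_1samp(rc_cover, 10.0, alternative="less")
print(f"mean cover = {rc_cover.mean():.1f}%, one-sided p = {p:.3f}")
if p < 0.05:
    print("Objective met: mean cover is significantly below 10%.")
else:
    print("Cannot conclude mean cover is below 10% at this confidence level.")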
In practice, the evaluation of overall restoration goals is somewhat more complicated and inexact.
Although project objectives are developed as the most specific expression of restoration goals, it is likely
that qualitative aspects of the project team's conception of restoration success are not adequately
captured in specific project objectives. In addition, restoration outcomes cannot always be predicted
and are often influenced by factors other than the action itself (e.g., unexpected disturbances, such as
extreme weather events, malfunctioning equipment or contaminated seed/planting stock, ineffective
herbicide, reduced sample sizes, violated assumptions). Consequently, there often remains a level of
data analysis and interpretation beyond the straightforward evaluation of project objectives. At this
level, results are evaluated in an ecological and temporal context where project planners apply best
professional judgment and a weight-of-evidence approach to interpret the overall effectiveness of the
restoration actions. In short, qualitative evaluations of project success should be considered alongside
quantitative assessments of formal project objectives. Results from this analysis are often important
inputs to the adaptive management process and can inform future actions and restoration objectives.
To demonstrate the difference in evaluating project objectives and overall restoration goals, we will use
the case study example of restoring native wet prairie that has been carried throughout this guidance
document (Exhibit 3-3). The restoration goal for this project is to "restore native wet prairie plant
species cover to improve floristic quality on a 15-acre wet prairie degraded by historic drainage and
introduction of invasive reed canarygrass, Phalaris arundinacea." One of the project objectives set to
implement this goal was to "reduce total cover of reed canarygrass (RC) to less than 10% across the 15-
acre wet prairie after eight years." After a formal assessment of this project objective, it may be
determined that data quality was insufficient to determine whether the site dropped below the 10%
threshold in reed canarygrass cover. However, monitoring may have confirmed the natural recruitment
and establishment of two native endangered wetland species as a result of restoration activities. In this
case, project staff may determine that, while it was not possible to conclude the project objective was
successfully achieved, evidence indicated that the overall restoration goal of restoring native wet prairie
plant species was achieved.
7.3.7 QA/QC for the Data Analysis Activities
Like all aspects of an ecological restoration project, data analysis activities should be subject to quality
management strategies to ensure appropriate approaches are used and accurately performed.
Recommended QA/QC strategies include the following:
•	Proper experience and training: Project personnel responsible for data analysis should have the
proper experience and training to adequately conduct the analysis. Although it is not possible to list
the exact qualifications for this position, the data analyst should have a senior position among the
project team, have training and experience in statistics, be familiar with all aspects of the
restoration project, and thoroughly understand ecological principles and processes associated with
the restoration project.
•	Development and use of SOPs: For aspects of data analysis that may be performed repeatedly, the
data analyst should develop and follow SOPs that are appropriate for specific calculations (such as
for an IBI) or for specific statistical tests (such as use of SAS software for analysis of variance). The
development and use of SOPs for these tasks provides documentation and adds transparency to the
data analysis process, provides an opportunity for review of procedures, and ensures consistency in
the implementation of data analysis procedures.
•	Spot checks of calculations and statistical results: Project personnel other than the data analyst
(such as a data reviewer) should conduct spot checks of calculations and statistical results to ensure
that errors have not occurred in the data analysis process. These spot checks should be independent
corroborations of calculations made during data analysis.
•	Confirmation of statistical assumptions: All assumptions embedded within the use of a particular statistical method should be confirmed (a sketch of two common checks follows this list). These may include assumptions regarding distributions of the data, independence of variables, randomness of samples, homogeneity of variance, or equality of sample sizes. The confirmation of statistical assumptions should be formally documented in project files and data analysis reports.
•	QA review: The overall data analysis packet (report and supporting documentation) should be
thoroughly reviewed by a trained statistician. The statistician should review the selection and use of
statistical methods for data analysis, the explicit and implicit assumptions made regarding the data,
the interpretation of statistical results, and the presentation of project findings.
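As a minimal sketch of the assumption checks noted above, the following Python example runs and reports two common tests before a two-group comparison; the data, the choice of Shapiro-Wilk and Levene's tests, and the two-group design are illustrative assumptions.

# Minimal sketch: documenting normality and homogeneity-of-variance checks.
# Group data are hypothetical.
import numpy as np
from scipy import stats

treated = np.array([4.1, 5.0, 4.7, 5.3, 4.4, 4.9])
control = np.array([3.2, 3.8, 3.5, 4.0, 3.1, 3.6])

# Normality is checked on residuals (each value minus its group mean).
residuals = np.concatenate([treated - treated.mean(), control - control.mean()])
w, p_norm = stats.shapiro(residuals)           # Shapiro-Wilk normality test
lev, p_var = stats.levene(treated, control)    # Levene's test for equal variances
print(f"Shapiro-Wilk p = {p_norm:.3f} (normality of residuals)")
print(f"Levene p = {p_var:.3f} (homogeneity of variance)")
# Record both results in the data analysis report; if either assumption fails,
# consider a transformation or a nonparametric alternative.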
7.4 USE OF ANALYZED DATA WITHIN THE CONTINUAL PLANNING PROCESS
AND ADAPTIVE MANAGEMENT PROCESS CYCLES
Assessments of project data (Section 7.2), data analysis (Section 7.3), and the preparation of project
reports (Section 7.5) provide a venue through which project managers can summarize results, suggest
opportunities to improve the quality and reliability of data collection, and improve the execution and
success of the restoration project. This valuable information can feed into two potential planning cycles:
(1) the continual QA planning process and (2) the adaptive management process.
The continual planning process is at the core of any quality system. It ensures that information gained in
collecting and reviewing data feeds back into deliberate steps to improve the quality of ongoing data
collection efforts. Information gained from the data assessment and data analysis steps discussed in this
chapter is ripe for inclusion in the continual planning process. For example, spot checks that reveal
improper identification of species can be used to plan additional species identification training sessions.
The adaptive management process is a project management approach that continually uses collected
data and lessons learned to inform decision making and adjust the project direction, if needed. The data
analysis and reporting steps covered in this chapter provide such an opportunity. For example, if data
analysis reveals that soil moisture is positively correlated with willow establishment success, project
implementation plans might be adjusted to include placement of a moisture barrier that will reduce
evaporative loss from the soil during the planting season.
Both planning cycles attempt to improve management of the project. The continual QA planning process
is focused more specifically on improving the quality of the project data, whereas the adaptive
management process focuses more broadly on improving the success of the overall ecological
restoration project. The remainder of this section discusses the continual QA planning process;
additional details on the adaptive management process are provided in Chapter 8.
7.4.1 Identifying Opportunities for Continual Improvement
As illustrated in Exhibit 7-4 below, each data collection step during a project lifecycle provides an
opportunity for improvement. Examples of potential improvements include:
•	suggesting more specific quantitative project objectives or adjustments to data quality acceptance
criteria for specific variables during the planning step;
•	improving SOPs or crew training when preparing for new field activities;
•	recommending a different sequence or frequency of QC checks during the field season;
•	suggesting that certain verification or validation checks be integrated into field data recorders to
minimize errors coming in from the field; or
•	providing recommendations for reducing or adding variables being monitored as a result of data
analysis efforts.
Exhibit 7-4. Example Project Lifecycle with Key QA Components
[Figure: a project lifecycle diagram linking planning elements (project goals and objectives; sampling objectives; data quality indicators; acceptance criteria), standard operating procedures (SOPs) and training and certification, quality control field checks (hot, precision, cold, blind, calibration), and an evaluation step (data verification; data validation; data certification), with data quality and usability findings and corrective actions feeding back into planning.]
The main point is that opportunities for quality improvement exist and should be noted in project
reports to benefit future data collection efforts.
7.4.2 Establishing Quality Improvement Processes - Basic Concepts
In addition to pursuing opportunities for improvement as they arise, project teams should consider the
following basic quality improvement concepts.
•	Plan to revise QA planning documents as needed: The development of QA planning documents such as quality assurance project plans (QAPPs) is an important part of the continual planning process; it forces managers to consider all aspects of project development before the project begins. Such planning documents establish sample collection and data review procedures that are appropriate for
the intended use of the data. However, these documents also should be viewed as dynamic
documents that can and should be revised when needed to address opportunities for improvement.
•	Strive to identify and then meet or exceed expectations: Focus on the needs, performance
expectations and opinions of all interested stakeholders or parties. A critical starting point is setting
restoration project goals and objectives as described in Chapter 3.
•	Prevent problems from occurring: As the saying goes, "an ounce of prevention is worth a pound of
cure." Examples of preventive measures to ensure quality include the development of
comprehensive SOPs, as well as adequate training of field crews in the use of these SOPs (see
Chapter 4); the implementation of QC checks early in the field season (see Chapter 5); and the
development of written procedures for data review, assessment and analysis, and assignment of
qualified individuals to perform them (Chapters 6 and 7).
•	Identify root causes: The root causes of significant problems need to be identified and corrected as
soon as possible during a field season. If field crews are consistently unable to achieve data quality
acceptance criteria for a particular variable, the root cause of this deficiency needs to be identified.
For example, a root cause analysis may indicate that the SOP needs to be improved, training is
inadequate, or more highly qualified staff should be recruited. Once a root cause or causes have
been identified, corrective actions should be implemented to resolve and prevent the problems
from recurring.
•	Encourage a no-fault attitude: A no-fault attitude encourages staff to identify problems, solutions to
problems, and other process improvement opportunities. At the beginning of a field season, staff
should be encouraged to look for and report opportunities for improvements to data collection
activities, including those related to the procedures themselves or the associated logistics (e.g.,
ideas for improving safety or reducing staff fatigue). Management should perform timely
evaluations of staff recommendations and provide feedback, which can foster team building and an
overall quality improvement attitude. Public acknowledgement of staff contributions to quality
where planned outcomes were achieved or exceeded is encouraged. The "acknowledgement"
section of reports is an opportunity to provide this credit to staff.
•	Maintain timely data review, assessment and analysis: In order to constructively contribute to the
adaptive management cycle, assessment and analysis of ongoing data collection efforts should be
conducted with a timeliness and frequency that allows for lessons learned to inform and adjust
upcoming sampling efforts. Delaying data assessment and analysis steps to the end of a project
eliminates the advantages of the adaptive management process and may limit the ultimate success
of a project.
•	Adjust: Periodic reviews or assessments often identify opportunities for improvement. As data are
reviewed, common errors by field crews or laboratory staff can be identified, particularly those
affecting data quality. These needed changes in data collection should be identified to answer
current and emerging monitoring questions. Likewise, project teams should consider ways to reduce
data collection and associated QC costs and adjust accordingly. For example, it may be possible to
update data collection techniques to take advantage of new technologies yet maintain the integrity
of previous core variables.
Internal reports: Internal reports may be useful for documenting the rationale and supporting data for
internal decisions regarding a change in the restoration approach, experimental design, sampling
methodology, data review strategies, or any other aspect of the project. Internal reports can also be
useful in documenting project milestones. For instance, project personnel may prepare a data review
report documenting their assessment of data quality before embarking on data analysis and reporting of
final results. Although internal reports are generated for a limited audience, such as project personnel,
project partners, or the funding agency, they may be or may become publicly available. Internal reports
need a level of review and QC to document accuracy and readability, but they do not typically need the
level of external peer review that might be recommended for culminating project reports.
Periodic or annual reports: Annual reports are often useful for periodically documenting progress and
interim results of larger multi-year projects to a variety of audiences (e.g., the funding agency, project
partners, general public), and the writing style and level of detail in such reports should reflect the
intended readers. Because data collection efforts are still underway, annual reports usually do not
include significant data analysis. When data analysis is included, it is more likely to represent preliminary
trends rather than definitive conclusions. For this reason, annual reports may require a higher level of
review than internal reports, but probably not the level of review necessary for final reports.
Scientific publications: If the intent of a report is to disseminate the project's findings to the broader
scientific community, the project staff should consider preparing and submitting an article for
publication in the peer-reviewed scientific literature. By targeting the article to a particular scientific
journal, the authors will ensure that the content is disseminated to the appropriate audience. Delivering
publications through the scientific literature also ensures that the typical standards for scientific peer
review are met. Before submitting manuscripts, however, project staff should seek internal and project
partner reviews of draft documents.
Final reports: Almost all ecological restoration projects will require a final report that documents the
methods and procedures used, data analysis approaches, project results, and conclusions. Most
government agencies have a responsibility to make the results of their scientific studies and projects
available to the public in a timely, clearly presented, and easily accessible manner. In general, data that are
generated under government-funded projects should at some point be made available to other groups
within the sponsoring agency, other government agencies, and taxpayers. As noted on the U.S. Geological
Survey (USGS) data management website (USGS 2017b), "Data sharing benefits the researcher, research
sponsors, data repositories, the scientific community, and the public. It encourages more connection and
collaboration between scientists, and better science leads to better decision making."
The remaining guidance in Section 7.5 primarily applies to the preparation of final reports. Other types
of reports can benefit from this guidance, but the content and QC strategies required will vary
depending on the report type and the intended audience.
7.5.2 Content of Project Reports
By the time a report is generated, the supporting data should have been thoroughly reviewed (Chapter
6), and all data limitations should be transparent and fully understood. Data assessments as described
throughout this chapter should be complete, and project reports should contain sufficient information
to explain and defend the conclusions reached, using information that is accurate, clearly presented,
and consistent with the report's discussion. For example, if a report is intended to present conclusions
regarding the success of an ecological restoration project, it should (1) describe the project objectives
along with the methods used to monitor the impacts of the restoration activities, and (2) present data in
a manner that clearly relates the data to these objectives and conclusions. Any deviations from the
original plans should be included with an explanation of why the deviations were required and their
impact on any results. Most importantly, reports should include a discussion of the quality and reliability
of the data in the context of project decisions and conclusions.
Recommendations for the general content of project data reports include, but are not limited to, the
following:
•	Acknowledgements - Including reviewers
•	Definitions (Glossary) - Specialized information/vocabulary, formulas, standard values, units of
measure
•	Executive summary
•	Project goals and objectives
•	Data collection strategies - Discussion of the project design and methods used
•	Results - Discussion and presentation of data, including results of data review, data limitations, and
data analysis
•	Conclusions - Discussion based on the results of the data analysis, including assumptions
•	Graphics, figures, tables
•	References
•	Appendices - Supplemental information, original data, or quoted matter that is too long for the
body of the report
The data must be clearly explained, regardless of whether access is provided to complete datasets or
data are summarized in tables throughout the report. This discussion should describe how the data were
generated, include all noted limitations, and provide a sufficient amount of data to support any
conclusions. Although standardized scientific notation is generally applied to represent units of
measurement, it may not apply to much of the data typically collected for ecological restoration
projects. Regardless of the format, units of measurement should be applied consistently and explicitly
stated to avoid ambiguity and confusion. Equations, calculations, extrapolations, acceptance criteria and
alternate interpretations also should be provided to proactively address questions from those interested
in (or planning to use) the information.
Whatever the content of the report, authors should assume that the contents will need to be
scientifically and legally defensible for individuals involved in project decisions, funding the data
collection effort, and using or leveraging the data for other projects or related decisions, particularly if
policy decisions could be impacted. Report preparation should include the following activities.
•	Determine the audience: Consider the type of information and the level of detail and technical
complexity that is appropriate for the intended audience of the report and tailor the writing accordingly.
•	Determine report format: Organizations either conducting a restoration effort or funding that effort
may have a manuscript format that is preferred or required. These formats customarily include how
the manuscript text, citations, footnotes, equations, tables, graphics, and photographs are to be formatted. Where applicable, report authors should follow these formatting rules and seek
editorial review to confirm they have been followed correctly.
•	Determine organizational policy: Learn and follow the policy of your organization regarding
manuscript submission such as authorship, technical review, editorial review, statistical review,
policy review and final management review.
7.5.3 Assuring the Quality of Project Reports
Project reports should be considered draft or preliminary until they have been reviewed and approved
for release to ensure the presentation of results is accurate, clear and understandable, and that the
interpretations and conclusions are scientifically sound and technically defensible. Until these
documents have been sufficiently reviewed and are deemed final for release, their status should be
clearly indicated as draft, with notations restricting their release. The accompanying text box includes some helpful suggestions for authors as a final check of their reports to ensure that the presentation of exhibits, numbering, references, and other formatting elements is complete and consistent throughout the document.
•	U.S. Geological Survey. 2017. "502.4 - Fundamental Science Practices: Review, Approval, and
Release of Information Products." U.S. Geological Survey Manual. Last modified February 17.
https://www2.usgs.gov/usgs-manual/500/502-4.html.
•	U.S. Geological Survey. 2017. "Data Management: Data Release, Sharing, and Publication." USGS
(Supporting and Enabling USGS Data Management). Last modified March 23.
https://www2.usgs.gov/datamanagement/share.php.
CHAPTER 8 THE RELATIONSHIP BETWEEN QUALITY MANAGEMENT AND ADAPTIVE
MANAGEMENT STRATEGIES
The QA/QC principles presented in the previous chapters lay the foundation for sound decision making
regarding the effectiveness of ecological restoration projects.14 However, application of these principles
and meticulous collection of quality control (QC) data during the effectiveness monitoring phase are of
little value if the fully validated data are not used in a way that allows restoration ecologists to (1)
conclude project success or draw on lessons learned; (2) determine if adjustments to the restoration or
monitoring design are needed; and (3) implement those adjustments to improve overall project or
program outcomes. Adaptive management provides a flexible, information-driven framework for decision making that supports adjusting restoration actions where and when needed (Williams, Szaro, and Shapiro 2009; Williams and Brown 2012).
Adaptive management is not a final step; it is a framework of continual improvement actions conducted
throughout a project lifecycle. Restoration often takes time, sometimes decades, requiring continuous
collection, management, and assessment of data to ensure progress toward desired outcomes.
Conversely, some ecosystem manipulations can cause a rapid change in system states and require
immediate response to unexpected outcomes. The ability to effectively apply adaptive actions depends
on the quality of all the steps preceding it, including the:
•	clarity of project objectives,
•	design of the monitoring plan,
•	quality of collected data, and
•	analysis and interpretation of the data.
Hilderbrand, Watts, and Randle (2005) present some reality checks regarding ecological restoration. Many practitioners fail to recognize that restoration attempts to create, in a few years, ecological change that typically takes tens or hundreds of years to evolve under natural succession. Restoration efforts are often built upon oversimplified concepts, ignoring uncertainties that will inevitably result in unexpected outcomes and failures. This often leads to the creation of a restored ecosystem that may be incapable of adapting to current stressors or adjusting to new ones in the future. Once these realities are firmly grasped, one can clearly see that ecological restoration is not a one-
time event. It requires continuous monitoring and feedback inside the context of an adaptive
management strategy that will increase the probability of executing a responsive, sustainable, and
successful project. Arguably, an Adaptive Management Plan (AMP) could then be thought of as an
experiment requiring a well-crafted experimental design (Fischenich and Vogt 2012). In an ideal world,
adaptive management should be utilized only after disputing parties have settled on a set of questions (hypotheses) to be tested and resolved using an adaptive methodology (Lee 1999), as would normally be accomplished with any experimental effort.
14 Briefly, these principles include (1) carefully defining and documenting the project goals, project objectives, and sampling objectives, (2) designing and implementing a monitoring program that will provide data needed to determine if the objectives have been met, (3) identifying and implementing QA practices and QC checks during the monitoring program to ensure the plan is properly followed and data quality targets are being achieved, and (4) reviewing the quality of data generated to confirm it is of sufficient quality for its intended use.
This chapter offers guidance to ecological restoration practitioners on ways to apply adaptive
management principles to their ecological restoration projects. The most effective place to build quality
assurance measures into the entire project is with an adaptive management strategy.
8.1 DEFINITION
There are numerous definitions of adaptive management in the literature (Holling 1978; Walters 1986;
Lee 1999; Anderson et al. 2003; Roux and Foxcroft 2011). One major source of uncertainty in any
scientific or engineering field is linguistic uncertainty. Terminology and jargon tend to inhibit
communication across disciplines and within the same discipline. To that end, for the purpose of this
guidance document, we define adaptive management as: a structured and sequential learning process
(decision-support) that increases knowledge and reduces uncertainty, iteratively leading to more
effective ecological restoration (Clark County 2016; Sutter 2018). We do not advocate any one particular
definition over another, but rely on each reader to adopt a definition that best meets the needs of their
project and institutional requirements. Suffice it to say, a shared aspect of all definitions is the
recognition that natural resources are ever changing and management decisions must often be made
without all the necessary information about these systems (Peters et al. 2014). An adaptive
management definition generally includes the following key components (Williams, Szaro, and Shapiro 2009; Williams and Brown 2012; Runge and Knutson 2012; Williams and Boomer 2012):
•	provides a framework for restoration,
•	addresses both short- and long-term issues,
•	recognizes the uncertainty of restoration results and the need to improve restoration actions,
•	allows for an iterative decision-making process,
•	provides an opportunity to learn throughout the process of restoration,
•	improves restoration activities based on continuous feedback (both within a single project and from
one project to the next), and
•	clarifies that restoration is based more on evidence than hypotheses.
An AMP lays out the path to follow and the corrective actions that need to be taken when there is high
uncertainty in whether a project is on a trajectory to achieve the desired outcomes. If poor quality
monitoring design or data indicates the project is on a successful trajectory when in fact it is not (Type 1
error, i.e., a false positive), ecological harm will continue. Alternatively, if poor quality monitoring design
or data indicates the project is not on a successful trajectory when in fact it is (Type 2 error, i.e., a false
negative), costly and unnecessary operational or structural changes may be undertaken. It is imperative
that quality management be built into every ecological restoration project to avoid costly financial
mistakes and irreversible ecological disasters.
An AMP requires monitoring data as input on how well a project met (or is meeting) its intended
outcome. Monitoring is an integral part of the AMP, but must also be adaptable to changing needs and
findings. Sometimes the wrong indicators have been monitored; they have not been responsive to the
restoration actions. Monitoring may go on for a decade or longer after project completion. New cost-
effective methods may become available during that time that can reduce the cost of long-term
monitoring. A method may need to be changed because it is not able to achieve the desired data quality
objectives necessary to determine project success within a desired range of uncertainty. To quote
Pirsig's classic novel on quality, Zen and the Art of Motorcycle Maintenance, "the pencil is mightier than
the pen" (1974). It should be realized that an AMP is not chiseled in stone, but must be flexible (hence
the name, "adaptive") enough to address ever changing conditions and lessons learned.
8.2 ROLE OF ADAPTIVE MANAGEMENT FOR ECOLOGICAL RESTORATION
As defined in Section 8.1, adaptive management is an approach that improves management and
enhances learning. It provides a framework for:
•	implementing ecological restoration actions within the context of uncertainty (Section 8.2.1);
•	using available knowledge and verified, validated results of monitoring efforts to adapt actions
(Section 8.2.2); and
•	increasing the effectiveness, efficiency and enduring value of projects (Section 8.2.3).
8.2.1 Framework for Implementing Ecological Restoration Actions within the Context of Uncertainty
Ecological restoration projects are long-term endeavors with substantial uncertainty. Time is a primary
variable when restoring natural communities and restoration is done within the context of a changing
environment resulting from a suite of interacting anthropogenic disturbances, including, but not limited to:
•	invasive species,
•	disruption of predator-prey relationships,
•	altered hydrogeomorphic and fluvial geomorphic regimes,
•	altered fire regimes,
•	deposition of atmospheric nitrogen and other contaminants, and
•	climate change.
ecological restoration undertaking without collecting information on the response of the system to the
restoration measures. By "placeholder," we mean that monitoring has been institutionalized and there
is a group of decision makers that are actually expecting the monitoring data with specific intentions for
its use. Chapters 3-7 provide guidance on applying effective quality management strategies to the
collection of monitoring data. However, monitoring is not a standalone activity — it must be integrated
into a much larger, structured decision-making framework.
Adaptive management provides a structured decision-making environment in which monitoring data
can reside, and a foundation upon which to communicate and incorporate the values and expert
reasoning of decision makers and specialists (Keeney 1982) using high-quality, real-time monitoring data
to examine the overall implications of restoration actions. A stand-alone monitoring program with no
clear linkages (i.e., a placeholder) to a structured decision-making framework will, over time, experience
loss of interest and hence reductions in funding. Preferably, monitoring should discover problems
relatively early in the project lifecycle so that remedial efforts can be implemented quickly as required
(Atkinson et al. 2004).
It is important to note that given all the uncertainties and complexities associated with restoration
projects, monitoring data need not be the only input into the adaptive management decision-making
process. For example, in complex systems there can be widely varying conditions with a variety of both
positive and negative feedback loops. Even the best monitoring data may not provide "the answer"
unless these data can be supplemented by other information sources, including the scientific literature.
The findings from other similar restoration efforts may also help explain current results identified during
the analysis of monitoring data.
8.2.3 Framework for Increasing the Effectiveness, Efficiency, and Enduring Value of Ecological
Restoration Projects
Evaluating the quality of restoration project construction is part of what is commonly referred to as
implementation monitoring, and most often lies within the purview of the contract management
specialist. While the primary focus of this guidance is on effectiveness monitoring used to document
status and trends in resource conditions resulting from restoration activities (Mulder et al. 1999),
implementation monitoring should be an essential component of the adaptive management framework.
Failure to properly implement the restoration design may have profound impacts on the efficacy of the
restoration efforts and the effectiveness monitoring results. Key questions include whether there are
any QA/QC guidelines in place for implementation monitoring, whether the project was completed in
accordance with specifications, and whether implementation and effectiveness monitoring efforts are
linked together somehow. This linkage is perhaps the essence of adaptive management. If the
restoration effort was not fully executed (or was not completed in accordance with specifications or in
the correct location), users of the effectiveness monitoring data may be looking for a response to
something that was either not completed, partially or incorrectly completed, or completed in the wrong
place. While this may not directly influence the data quality attributes of effectiveness monitoring
indicators collected in the field, it does suggest that QA/QC may need to be implemented during the
project design phase.
As discussed in Section 3.5, secondary data are valuable resources for establishing baseline conditions
and augmenting new monitoring efforts. Oftentimes, a different standard operating procedure (SOP) or field method is used to gather new data than was used for these historical datasets. It may be more advantageous to adopt the SOP used to collect the secondary data to avoid data comparability issues, provided that the older SOP is not flawed in some manner (e.g., unacceptable data quality, too destructive). See Section 4.1.3 for additional guidance on the evaluation
and comparison of SOPs.
8.3 PRACTICALITIES OF ADAPTIVE MANAGEMENT IN ECOLOGICAL
RESTORATION PROJECTS
The success of adaptive management strategies can be improved if projects are well planned,
implemented, and monitored. Monitoring data are the foundation upon which adaptive management is
built. Without monitoring data, there can be no assessment of whether or not a project has achieved its
objectives. The types of monitoring and the limitations associated with each are explored in the
following subsections.
8.3.1	Need for Accurate Monitoring Baselines
An effective AMP should be based on monitoring data that reflects the "typical" state of the ecosystem
being restored; it is unwise to allocate monitoring resources in response to broad-scale deviations owing
to natural biological or human influences that are not being managed (Walters 1986). For instance, a
frequent emergency response action is for organizations to establish new monitoring programs after
broad-scale changes in the system have been observed. The purpose of these new monitoring programs
is not only to demonstrate to the public a discernible sign of concern about the issue, but also to
determine the reasons for these changes. Unfortunately, it is not always easy to deduce the causes of
the changes afterwards. Environmental disturbances sufficient to be useful for variable assessment are
probably linked to changes in ecosystem properties (e.g., introduction of new organisms, demise of
species substocks, new resource extraction strategies). Although adaptive management requires
adaptive monitoring, strategies that move monitoring objectives to the most recent crises as they
appear will preclude the collection of compelling and stable data over time that lend themselves to
rigorous statistical evaluations and trend detection. The result is that new estimates for measured
variables are generated that are not characteristic of typical plots that could have been monitored to
detect change in response to restoration measures (Walters 1986).
8.3.2	Limitations of Surveillance Monitoring
When developing an AMP, project managers should avoid surveillance monitoring in lieu of
effectiveness monitoring. Surveillance monitoring (i.e., broadly monitoring many elements without
deciding what is most relevant to the project) is not driven by any sort of testable hypothesis (Nichols
and Williams 2006). It is a retrospective look at observations collected in overabundance. As Platt (1964)
states, "Biology, with its vast informational detail and complexity, is a "high-information" field, where
years and decades can easily be wasted on the usual type of "low-information" observations and
experiments if one does not think carefully in advance about what the most important and conclusive
experiments would be." Regardless of the motivation behind surveillance monitoring, it provides little
useful information for interpreting or recognizing any kind of signal from a restored ecosystem. Chapters
2 and 3 of this document describe strategies for developing hypothesis-based sampling objectives that
can be used as the basis for designing an effectiveness monitoring program that will allow project
managers to determine if their restoration activities are yielding the desired results and make
adjustments where needed.
8.3.3	Modeling Ecosystem Responses
Models may have a valuable role in adaptive management strategies. Conceptual ecological models are
useful in identifying relationships between ecosystem components and in interpreting monitoring
results (Fischenich and Vogt 2012). Predictive models are used to simulate ecosystem responses to
restoration actions and monitoring efforts can provide important information for these models. For
example, scientists engaged in rebuilding sandbars in the Grand Canyon portion of the Colorado River
fine-tune model assessments and decrease the level of uncertainty in those estimates by quantifying the
concentration of sand present in water samples as well as their associated particle sizes (Grams et al.
2015). They utilize this joint modeling-monitoring method since sand concentrations in the Paria River
tributary are outside of acceptable ranges for other procedures (e.g., acoustics) to perform dependably.
The information and programs to apply the model to calculate contributions of sand from the Paria River
are accessible to all interested parties including water administrators, stakeholders and other
participants in the community.
8.3.4	Space-for-Time Substitution
Searching for similar ecological restoration projects that have been completed at comparable locations
in the past can be a valuable assessment tool in adaptive management, particularly if high-quality
monitoring data have been collected and are available. Space-for-time substitution is a meta-analytic
method that can be used to look for temporal trends using projects that have been implemented at
various times across space. The replacement of space for time is frequently applied during model
projections when there are no available time-series datasets that have been collected over the long
term (Blois et al. 2012; Banet and Trexler 2013). Detractors of this approach submit that other
influences besides what is considered to be the primary driver may shape ecosystem responses and, as
these may differ spatially, false conclusions could result. To assess if temporal data could be replaced by
spatial data in predictive models, Banet and Trexler (2013) used information collected during Florida
Everglades monitoring. They found that after a drying event, the populations of bluefin killifish (Lucania
goodei) predicted from spatial models were comparable and at times superior to those predicted from
temporal models. Model performance was found to be acceptable, provided the model did not make
inferences outside of the extent of variation found in the initial dataset. Banet and Trexler (2013)
assessed the feasibility of a space-for-time substitution approach for evaluating ecosystem restoration
effectiveness by comparing their results to other studies. The comparison indicated that space-for-time
substitution performs well in ecosystems possessing certain properties such as low beta-diversity (ratio
between regional and local species diversity), short time intervals in the reaction of organisms to
primary drivers and high connectivity linking locations.
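As a minimal illustration of the concept, the following Python sketch regresses a hypothetical ecological metric against years since restoration across independent sites to infer a temporal trajectory; the site data and the simple linear model are illustrative assumptions.

# Minimal sketch of a space-for-time substitution: sites restored at different
# times stand in for a single site followed through time. Data are hypothetical.
import numpy as np
from scipy import stats

years_since_restoration = np.array([1, 2, 3, 5, 8, 10, 12, 15])
native_cover = np.array([12, 18, 25, 34, 48, 55, 60, 66])  # % cover at each site

fit = stats.linregress(years_since_restoration, native_cover)
print(f"slope = {fit.slope:.1f} % cover per year, r^2 = {fit.rvalue**2:.2f}")
# Interpret only within the observed range (1-15 years); as noted above,
# spatial differences among sites may confound the apparent temporal trend.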
8.3.5 Recordkeeping
A key component of adaptive management over the long term is recordkeeping. Project or program
documentation should include records regarding how decisions have been made, including any findings
that triggered management actions (Medema, McIntosh, and Jeffrey 2008). These records are of most
value when they provide a clear and understandable explanation of the whole process leading to
particular management decisions. This provides future managers with the ability to identify those
restoration methods that have been successful in the past and provides them with an understanding of
how particular choices that led to current management approaches were made. Within this context,
recordkeeping is an even more critical element of adaptive management (Lindenmayer and Likens 2010; Sutter 2018) and no other approach is clearly superior (Westgate, Likens, and Lindenmayer 2013). The
collection and preservation of quality data is a key component to the continued improvement of
adaptive management principles for ecological restoration projects.
8.4 INCORPORATING QA/QC DATA INTO ADAPTIVE MANAGEMENT
Chapters 3-6 provide guidance on incorporating QA/QC strategies throughout effectiveness
monitoring. The QA/QC data that result from implementation of these strategies should be viewed as an
integral part of adaptive management and institutionalized in the adaptive management framework. No
decisions should be made based on findings presented in the AMP without a thorough assessment of
data quality and its impact on trend detection and decision making. Chapter 7 provides guidance on the
analysis of validated project data to determine the degree to which project objectives are being met.
Management of all routine and QA/QC data gathered during the effectiveness monitoring program is
important for identifying the results of restoration activities and related decisions made under an adaptive management context. This undertaking is very challenging, considering that diverse agencies, including federal, state, and tribal entities, collect data independently (Peters et al. 2014).
Appendix A discusses data management strategies that project planning teams can employ when
designing and implementing an effectiveness monitoring program; these strategies also apply to the
overall adaptive management framework.
As mentioned in Section 8.2.3, implementation monitoring should be linked to effectiveness monitoring
within the framework of adaptive management decision making. QA/QC data associated with
implementation monitoring should be utilized in decisions regarding whether project specifications
were adhered to and to what extent there were departures from those specifications. Depending on the
organizations responsible for funding, managing and implementing the project, QA/QC criteria may be
found in Quality Assurance Project Plans (QAPPs), Quality Assurance Manuals, or other similar types of
documents (see Section 2.4). QA/QC activities should also be emphasized in the planning and
implementation of follow-up operations and maintenance (O&M), such as in O&M manuals for
restoration sites. Findings from QA inspections prior to, during and after project construction should be
presented in the AMP. Ramifications of not meeting project specifications should be discussed, along
with how these shortcomings may manifest themselves in the ecological response of the system as
observed in the effectiveness monitoring component.
8.5 ADDITIONAL READINGS
•	Buchsbaum, Robert N., and Cathleen Wigand. 2012. "Adaptive Management and Monitoring as
Fundamental Tools to Effective Salt Marsh Restoration." In Tidal Marsh Restoration: A Synthesis of
Science and Management, edited by Charles T. Roman and David M. Burdick. Series: The Science and
Practice of Ecological Restoration. Washington, D.C.: Island Press.
•	Conservation Measures Partnership. 2013. Open Standards for the Practice of Conservation.
http://cmp-openstandards.org/wp-content/uploads/2017/06/CMP-QS-V3.0-Final-minor-update-
May-2017.pdf
•	Convertino, Matteo, Christy M. Foran, Jeffrey M. Keisler, Lynn Scarlett, Andy LoSchiavo, Gregory A.
Kiker and Igor Linkov. 2013. Enhanced Adaptive Management: Integrating Decision Analysis,
Scenario Analysis and Environmental Modeling for the Everglades. Scientific Reports 3, Article
number: 2922. doi: 10.1038/srep02922.
•	The National Academies of Sciences, Engineering, and Medicine. 2017. Effective Monitoring to
Evaluate Ecological Restoration in the Gulf of Mexico. Washington, DC: The National Academies
Press, doi: 10.17226/23476.
•	Mazur, Nicki, Andy Bodsworth, Allan Curtis and Ted Lefroy. 2013. Applying the principles of adaptive
management to the application, selection and monitoring of environmental projects, University of
Tasmania, Hobart, Tasmania.
•	Murphy, Dennis D. and Paul S. Weiland. 2014. Science and structured decision making: fulfilling the
promise of adaptive management for imperiled species. Environmental Studies and Science 4:200-
207. doi: 10.1007/s13412-014-0165-0.
•	O'Donnell, T. Kevin and David L. Galat. 2007. Evaluating Success Criteria and Project Monitoring in
River Enhancement Within an Adaptive Management Framework. Environmental Management
41:90-105. doi: 10.1007/s00267-007-9010-5.
•	Rist, Lucy, Adam Felton, Lars Samuelsson, Camilla Sandstrom, and Ola Rosvall. 2013. A New
Paradigm for Adaptive Management. Ecology and Society 18(4): 63. http://dx.doi.org/10.5751/ES-
06183-180463.
•	Ruiz-Jaen, Maria C. and T. Mitchell Aide. 2005. Restoration Success: How Is It Being Measured?
Restoration Ecology Vol. 13, No. 3, pp. 569-577.
REFERENCES
Adamcik, Robert S., Elizabeth S. Bellantoni, Don C. DeLong, John H. Schomaker, David B. Hamilton,
Murray K. Laubhan, and Richard L. Schroeder. 2004. Writing Refuge Management Goals and
Objectives: A Handbook. Washington, DC: U.S. Fish and Wildlife Service.
Amacher, Michael C., William A. Bechtold, David Gartner, Mark H. Hansen, Olaf Kuegler, Paul L. Patterson,
Charles H. Perry, KaDonna C. Randolph, Bethany Schulz, Marie T. Trest, James A. Westfall, Susan
Will-Wolf, and Christopher W. Woodall. 2009. FIA National Assessment of Data Quality for Forest
Health Indicators, edited by James A. Westfall. General Technical Report NRS-53. Newtown
Square, PA: U.S. Department of Agriculture, Forest Service, Northern Research Station.
American Society for Quality. 2014. ASQ/ANSI E4:2014: Quality Management Systems for Environmental
Information and Technology Programs. Milwaukee, WI: ASQ Quality Press.
American Society for Quality. 2017. "Cost of Quality." ASQ.org. Accessed May 2. http://asq.org/learn-
about-quality/cost-of-quality/overview/overview.html.
American Society for Quality. 2018. "Plan-Do-Check-Act (PDCA) Cycle." ASQ.org. Accessed January 8.
http://asq.org/learn-about-quality/project-planning-tools/overview/pdca-cycle.html.
Anderson, J.L., R.W. Hilborn, R.T. Lackey, and D. Ludwig. 2003. "Watershed Restoration—Adaptive
Decision Making in the Face of Uncertainty." In Strategies for Restoring River Ecosystems:
Sources of Variability and Uncertainty in Natural and Managed Systems, edited by R.C. Wissmar
and P.A. Bisson. Bethesda, MD: American Fisheries Society.
Atkinson, Andrea J., Peter C. Trenham, Robert N. Fisher, Stacie A. Hathaway, Brenda S. Johnson, Steven
G. Torres, Yvonne C. Moore. 2004. Designing monitoring programs in an adaptive management
context for regional multiple species conservation plans. U.S. Geological Survey, Western
Ecological Research Center. Sacramento, CA.
Baker, Richard J., and John R. Sauer. 1995. "Statistical Aspects of Point Count Sampling." In Monitoring
Bird Populations by Point Counts, edited by C. John Ralph, John R. Sauer, and Sam Droege.
General Technical Report PSW-GTR-149. Albany, CA: Department of Agriculture, Forest
Service, Pacific Southwest Research Station.
Banet, Amanda, and Joel C. Trexler. 2013. "Space-for-Time Substitution Works in Everglades Ecological
Forecasting Models." PLoS ONE 8 (11): e81025. doi: 10.1371/journal.pone.0081025.
Bird Studies Canada. 2000. The Marsh Monitoring Program Quality Assurance Project Plan. Accessed
May 25, 2017. http://www.bsc-eoc.org/download/mmpqualplan.pdf.
Block, William M., Alan B. Franklin, James P. Ward, Joseph L. Ganey, and George C. White. 2001. "Design
and Implementation of Monitoring Studies to Evaluate the Success of Ecological Restoration on
Wildlife." Restoration Ecology 9 (3): 293-303. doi: 10.1046/j.1526-100x.2001.009003293.x.
Blois, Jessica L., John W. Williams, Matthew C. Fitzpatrick, Stephan T. Jackson, and Simon Ferrier. 2012.
"Space Can Substitute for Time in Predicting Climate-change Effects on Biodiversity." PNAS 110
(23): 9374-9379. doi: 10.1073/pnas.1220228110.
Blume, Louis J., Craig J. Palmer, Molly Middlebrook Amos, Justin Telech. 2013. Development of a Graded
Approach to Project-level Quality Documentation for Habitat Restoration Projects. PowerPoint.
Chicago, IL: U.S. EPA Great Lakes National Program Office. Accessed May 2, 2017.
http://conference.ifas.ufl.edu/ncer2013/Presentations/6-Connection/4-Friday/47-
Session/YES/0930%20Blume.pdf.
Clark County. 2016. 2016 Adaptive Management Report, Clark County Multiple Species Habitat
Conservation Plan. Clark County, NV: Clark County Desert Conservation Program.
Clark, Malcolm J. R., and Paul H. Whitfield. 1994. "Conflicting Perspectives About Detection Limits and
About the Censoring of Environmental Data." Journal of the American Water Resources
Association 30 (6): 1063-1079. doi: 10.1111/j.1752-1688.1994.tb03353.x.
Clewell, Andre, John Rieger, and John Munro. 2005. Guidelines for Developing and Managing Ecological
Restoration Projects, 2nd Edition. Tucson, AZ: Society for Ecological Restoration International.
Consolidated Appropriations Act of 2001, Pub. L. No. 106-554, § 515.
Cross-Smiecinski, Amy, and Linda D. Stetzenbach. 1994. Quality Planning for the Life Science Researcher:
Meeting Quality Assurance Requirements. Boca Raton, FL: CRC Press, Inc.
Davis, Philip M. 2012. "The Persistence of Error: A Study of Retracted Articles on the Internet and in
Personal Libraries." J Med Libr Assoc 100 (3): 184-189. doi: 10.3163/1536-5050.100.3.008.
Dean, R. B., and W. J. Dixon. 1951. "Simplified Statistics for Small Numbers of Observations." Anal. Chem.
23 (4): 636-638. doi: 10.1021/ac60052a025.
Doran, George T. 1981. "There's a S.M.A.R.T. Way to Write Management's Goals and Objectives."
Management Review 70 (11): 35.
Drew, C. Ashton, Yolanda F. Wiersma, and Falk Huettmann. 2010. Predictive Species and Habitat
Modeling in Landscape Ecology: Concepts and Applications. New York, NY: Springer Science &
Business Media.
Elzinga, Caryl L., Daniel W. Salzer, and John W. Willoughby. 1998. Measuring and Monitoring Plant
Populations. Publication No. BLM/RS/ST-98/005+1730. Denver, CO: U.S. Dept. of the Interior,
Bureau of Land Management.
Fang, Ferric C., R. Grant Steen, and Arturo Casadevall. 2012. "Misconduct Accounts for the Majority of
Retracted Scientific Publications." Proc Natl Acad Sci USA 109 (42): 17028-17033.
doi: 10.1073/pnas.1212247109.
Fischenich, Craig, and Craig Vogt. 2012. The Application of Adaptive Management to Ecosystem
Restoration Projects. Environmental Benefits Assessment Technical Notes Collection. ERDC TN-
EMRRP-EBA-10. Vicksburg, MS: U.S. Army Engineer Research and Development Center.
Forage Task Group. 2013. Report of the Lake Erie Forage Task Group, March 2013. Accessed May 22,
2017.
http://www.glfc.org/pubs/lake_committees/erie/FTG_docs/annual_reports/FTG_report_2013.pdf.
Gartner, David, and Bethany Schulz. 2009. "Vegetation Diversity and Structure Indicator." In FIA National
Assessment of Data Quality for Forest Health Indicators, edited by James A. Westfall. General
Technical Report NRS-53. Newtown Square, PA: U.S. Department of Agriculture, Forest Service,
Northern Research Station.
Gitzen, Robert A. 2012. Design and Analysis of Long-term Ecological Monitoring Studies. Edited by
Robert A. Gitzen, Joshua J. Millspaugh, Andrew B. Cooper, Daniel S. Licht. Cambridge, England:
Cambridge University Press.
Grams, Paul E., John C. Schmidt, Scott A. Wright, David J. Topping, Theodore S. Melis, and David M.
Rubin. 2015. "Building Sandbars in the Grand Canyon." EOS 96. doi: 10.1029/2015E0030349.
Grubbs, Frank E. 1969. "Procedures for Detecting Outlying Observations in Samples." Technometrics 11
(1): 1-21. doi: 10.1080/00401706.1969.10490657.
Gy, Pierre. 1998. Sampling for Analytical Purposes. West Sussex, England: John Wiley & Sons.
Hilderbrand, Robert H., Adam C. Watts, and April M. Randle. 2005. "The Myths of Restoration Ecology."
Ecology and Society 10 (1): 19. http://www.ecologyandsociety.org/vol10/iss1/art19/.
Holling, Crawford S. 1978. Adaptive Environmental Assessment and Management. New York, NY: John
Wiley and Sons.
International Organization for Standardization. 2015a. ISO 9000:2015, Quality Management Systems -
Fundamentals and Vocabulary. Geneva, Switzerland: International Organization for
Standardization.
International Organization for Standardization. 2015b. ISO 9001:2015, Quality Management Systems -
Requirements. Geneva, Switzerland: International Organization for Standardization.
International Organization for Standardization. 2017. ISO/IEC 17025:2017, General requirements for the
competence of testing and calibration laboratories. Geneva, Switzerland: International
Organization for Standardization.
Keeney, Ralph L. 1982. "Decision Analysis: An Overview." Operations Research 30 (5): 803-838.
http://www.jstor.org/stable/170347.
Kelly, Marion, and Lynn Walters. 2014. Quality Assurance Strategies for the Use of Existing Data. Training
Session for EPA National Program Office Quality Community, presented March 18 and 20.
Kepler, C. B., and J. Michael Scott. 1981. "Reducing Bird Count Variability by Training Observers." In
Studies in Avian Biology, edited by C. John Ralph and J. Michael Scott. San Jose, CA: Cooper
Ornithological Society.
Kercher, Suzanne M., Christin B. Frieswyk, and Joy B. Zedler. 2003. "Effects of Sampling Teams and Estimation
Methods on the Assessment of Plant Cover." Journal of Vegetation Science 14: 899-906.
Kish, Leslie. 1965. Survey Sampling. New York, NY: John Wiley & Sons.
Larsen, David P., Philip R. Kaufmann, Thomas M. Kincaid, and N. Scott Urquhart. 2004. "Detecting Persistent
Change in the Habitat of Salmon-bearing Streams in the Pacific Northwest." Can. J. Fish. Aquat.
Sci. 61: 283-291. doi: 10.1139/F03-157.
Lee, Kai N. 1999. "Appraising Adaptive Management." Conservation Ecology 3 (2): 3.
http://www.consecol.org/vol3/iss2/art3/.
Lesser, Virginia M., and William D. Kalsbeek. 1999. "Nonsampling Errors in Environmental Surveys."
Journal of Agricultural, Biological, and Environmental Statistics 4 (4): 473-488.
Lindenmayer, David B., and Gene E. Likens. 2010. Effective Ecological Monitoring. London, UK:
Earthscan.
Lohr, Sharon L. 2010. Sampling: Design and Analysis (Edition 2). Boston, MA: Brooks/Cole.
Lyons, James E., Michael C. Runge, Harold P. Laskowski, and William L. Kendall. 2008. "Monitoring in the
Context of Structured Decision-making and Adaptive Management." Journal of Wildlife
Management 72 (8): 1683-1692. doi: 10.2193/2008-141.
Mack, John J. 2001. Ohio Rapid Assessment Method for Wetlands, Manual for Using Version 5.0. Ohio
EPA Technical Bulletin Wetland/2001-1-1. Columbus, OH: Ohio Environmental Protection
Agency, Division of Surface Water, 401 Wetland Ecology Unit.
McDonald, Tein, George D. Gann, Justin Jonson, Kingsley W. Dixon. 2016. International Standards for the
Practice of Ecological Restoration-Including Principles and Key Concepts, First Edition.
Washington D.C.: Society for Ecological Restoration.
Medema, Wietske, Brian S. Mcintosh, and Paul J. Jeffrey. 2008. "From Premise to Practice: A Critical
Assessment of Integrated Water Resources Management and Adaptive Management
Approaches in the Water Sector." Ecology and Society 13 (2): 29.
http://www.ecologyandsociety.org/vol13/iss2/art29/.
Millard, Steven P., and Nagaraj K. Neerchal. 2001. Environmental Statistics with S-Plus. Boca Raton, FL:
CRC Press.
Morrison, Michael L., William M. Block, M. Dale Strickland, Bret A. Collier, and Markus J. Peterson. 2008.
Wildlife Study Design (2nd edition). New York, NY: Springer.
Mulder, Barry S., Barry R. Noon, Thomas A. Spies, Martin G. Raphael, Craig J. Palmer, Anthony R. Olsen,
Gordon H. Reeves, and Hartwell H. Welsh. 1999. The strategy and design of the effectiveness
monitoring program for the Northwest Forest Plan. Gen. Tech. Rep. PNW-GTR-437. Portland, OR:
U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station.
National Estuarine Research Reserve System. 2015. Tributary Water Quality and Storm Water
Monitoring, Old Woman Creek NERR. Accessed February 20, 2018.
http://www.epa.ohio.gov/Portals/35/owrc_wq_monitoring/Arend_OWRC_WMWG_2015.pdf.
National Oceanic and Atmospheric Administration. 2013. Project Design and Evaluation: Logic Model
Development for Restoration Practitioners. Charleston, SC: National Oceanic and Atmospheric
Administration, Office for Coastal Management.
Nichols, James D., and Byron K. Williams. 2006. "Monitoring for Conservation." Trends in Ecology and
Evolution 21 (12): 668-673. doi: 10.1016/j.tree.2006.08.007.
Oakley, Karen L., Susan L. Boudreau, and Sioux-Z Humphrey. 2001. "Recommended Features of
Protocols for Long-term Ecological Monitoring." In Crossing Boundaries in Park Management:
Proceedings of the 11th Conference on Research and Resource Management in Parks and on
Public Lands, edited by David Harmon. Hancock, Michigan: The George Wright Society.
Oakley, Karen L., Lisa P. Thomas, and Steven G. Fancy. 2003. "Guidelines for Long-term Monitoring
Protocols." Wildlife Society Bulletin 31 (4): 1000-1003.
Ogden, John C., Steve M. Davis, Kimberly J. Jacobs, Tomma Barnes, and Holly E. Fling. 2005. "The Use of
Conceptual Ecological Models to Guide Ecosystem Restoration in South Florida." Wetlands 25 (4):
795-809. doi: 10.1672/0277-5212(2005)025[0795:TUOCEM]2.0.CO;2.
Parsons, Arielle W., Theodore R. Simons, Kenneth H. Pollock, Michael K. Stoskopf, Jessica J. Stocking, and
Allan F. O'Connell Jr. 2015. "Camera Traps and Mark-resight Models: The Value of Ancillary Data
for Evaluating Assumptions." The Journal of Wildlife Management 79 (7): 1163-1172.
Peters, R.J., J.J. Duda, G.R. Pess, M. Zimmerman, P. Crain, Z. Hughes, A. Wilson, M.C. Liermann, S.A.
Morley, J.R. McMillan, K. Denton, D. Morrill, and K. Warheit. 2014. Guidelines for Monitoring
and Adoptively Managing Restoration of Chinook Salmon (Oncorhynchus tshawytscha) and
Steelhead (O. mykiss) on the Elwha River. Lacey, WA: U.S. Fish and Wildlife Service, Washington
Fish & Wildlife Office, Fisheries Division.
Pirsig, Robert. 1974. Zen and the Art of Motorcycle Maintenance. New York, NY: Bantam Books.
Platt, John R. 1964. "Strong Inference: Certain Systematic Methods of Scientific Thinking May Produce
Much More Rapid Progress than Others." Science 146 (3642): 347-353.
doi: 10.1126/science.146.3642.347.
Project Management Institute. 2014. A Guide to the Project Management Body of Knowledge (PMBOK
Guide, Fifth Edition). Newtown Square, PA: Project Management Institute.
https://www.pmi.org/pmbok-guide-standards
Quality Assurance, 48 C.F.R. § 46.
Quinn, Gerry P., and Michael J. Keough. 2002. Experimental Design and Data Analysis for Biologists.
Cambridge, United Kingdom: Cambridge University Press.
Roche, Dominique G., Loeske E. B. Kruuk, Robert Lanfear, and Sandra A. Binning. 2015. "Public Data
Archiving in Ecology and Evolution: How Well Are We Doing?" PLOS Biology 13 (11): e1002295.
doi: 10.1371/journal.pbio.1002295.
Roux, Dirk J., and Llewellyn C. Foxcroft. 2011. "The Development and Application of Strategic Adaptive
Management within South African National Parks." Koedoe 53 (2), Article #1049, 5 pages, doi:
10.4102/koedoe.v53i2.1049.
Runge, Michael C. and Melissa G. Knutson. 2012. "When is Adaptive Management Appropriate?"
Chapter 3 in Adaptive Management: Structured Decision-making for Recurrent Decisions.
Shepherdstown, WV: U.S. Fish and Wildlife Service, National Conservation Training Center.
Schneider, James C., Percy W. Laarman, and Howard Gowing. 2000. "Length-weight Relationships." In
Manual of Fisheries Survey Methods II: With Periodic Updates, edited by James C. Schneider.
Ann Arbor, MI: Michigan Department of Natural Resources.
Siitari, Kiera, Jim Martin, and William W. Taylor. 2014. "Information Flow in Fisheries Management:
Systemic Distortion within Agency Hierarchies." Fisheries 39 (6): 246-250.
doi: 10.1080/03632415.2014.915814.
Society for Ecological Restoration, International Science and Policy Working Group. 2004. The SER
International Primer on Ecological Restoration. Tucson, AZ: Society for Ecological Restoration.
Stankey, George H., Roger N. Clark, and Bernard T. Bormann. 2005. Adaptive Management of Natural
Resources: Theory, Concepts, and Management Institutions. General Technical Report PNW-
GTR-654. Portland, OR: U.S. Department of Agriculture Forest Service, Pacific Northwest
Research Station.
Stapanian, Martin A., Michael T. Bur, and Jean V. Adams. 2007. "Temporal Trends of Young-of-year
Fishes in Lake Erie and Comparison of Diel Sampling Periods." Environmental Monitoring and
Assessment 129 (1): 169-178. doi: 10.1007/s10661-006-9350-2.
Stapanian, Martin A., S.P. Cline, and D.L. Cassell. 1994. "Vegetation Structure Indicator for Evaluating
Forest Health." In Forest Health Monitoring Survey: A National Evaluation of Forest Health,
edited by T.E. Lewis and B.L. Conkling. Publication No. EPA/620/R-94/006.
Stapanian, Martin A., Timothy E. Lewis, Craig J. Palmer, and Molly M. Amos. 2016. "Assessing Accuracy
and Precision for Field and Laboratory Data: A Perspective in Ecosystem Restoration."
Restoration Ecology 24 (1): 18-26. doi: 10.1111/rec.12284.
State and Local Assistance Rule, 40 C.F.R. § 35. 2015.
Stevens, Don L. and Anthony R. Olsen. 2004. "Spatially Balanced Sampling of Natural Resources." Journal
of the American Statistical Association 99 (465): 262-78. doi: 10.1198/016214504000000250.
Sutter, Robert D., and Brandon T. Rutledge. 2018. "Longleaf Pine Ecosystem Monitoring and Adaptive
Management." In Ecological Restoration of Longleaf Pine, edited by L. Katherine Kirkman and
Steven B. Jack, 265-288. Boca Raton, FL: CRC Press, Taylor & Francis Group.
Taylor, John K. 1987. Quality Assurance of Chemical Measurements. Chelsea, MI: Lewis Publishers.
Thayer, Gordon W., Teresa A. McTigue, Russell J. Bellmer, Felicity M. Burrows, David H. Merkey, Amy D.
Nickens, Stephan J. Lozano, Perry F. Gayaldo, Pamela J. Polmateer, and P. Thomas Pinit. 2003.
Science-based Restoration Monitoring of Coastal Habitats, Volume One: A Framework for
Monitoring Plans Under the Estuaries and Clean Waters Act of 2000 (Public Law 106-457). NOAA
Coastal Ocean Program Decision Analysis Series No. 23, Volume 1. Silver Spring, MD: National
Oceanic and Atmospheric Administration, National Centers for Coastal Ocean Science.
Tyson, Jeffrey T., Timothy B. Johnson, Carey T. Knight, and Michael T. Bur. 2006. "Intercalibration of
Research Survey Vessels on Lake Erie." North American Journal of Fisheries Management 26 (3):
559-570. doi: 10.1577/M05-027.1.
Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards, 2
C.F.R. § 1500.11. 2015.
U.S. Department of Agriculture. 2012. Quality Assurance Plan for Forest Inventory in the South. Knoxville,
TN: U.S. Department of Agriculture, Forest Service, Southern Research Station, Forest Inventory
and Analysis.
U.S. Environmental Protection Agency. 1998. Guide to Laboratory Contracting. Washington, DC: Office of
Water.
U.S. Environmental Protection Agency. 2000a. EPA Quality Manual for Environmental Programs (CIO
2105-P-01-0). Washington, DC: Office of Environmental Information.
U.S. Environmental Protection Agency. 2000b. Policy and Program Requirements for the Mandatory
Agency-Wide Quality System (CIO 2105.0). Washington, DC: Office of Environmental
Information.
U.S. Environmental Protection Agency. 2001. EPA Requirements for Quality Assurance Project Plans (EPA
QA/R-5). Publication No. EPA/240/B-01/003. Washington, DC: Office of Environmental
Information.
U.S. Environmental Protection Agency. 2002. Guidance for Quality Assurance Project Plans (EPA QA/G-
5). Publication No. EPA/240/R-02/009. Washington, DC: Office of Environmental Information.
U.S. Environmental Protection Agency. 2003a. Guidance for Geospatial Data Quality Assurance Project
Plans (EPA QA/G-5G). Publication No. EPA/240/R-03/003. Washington, DC: Office of
Environmental Information.
U.S. Environmental Protection Agency. 2003b. "A Summary of General Assessment Factors for
Evaluating the Quality of Scientific and Technical Information." U.S. Environmental Protection
Agency (Risk Assessment). Publication No. EPA/100/B-03/001. Washington, DC: Science Policy
Council, https://www.epa.gov/sites/production/files/2015-01/documents/assess2.pdf.
U.S. Environmental Protection Agency. 2005a. Guidance on Quality Assurance for Environmental
Technology Design, Construction, and Operation (EPA QA/G-11). Publication No. EPA/240/B-
05/001. Washington, DC: Office of Environmental Information.
U.S. Environmental Protection Agency. 2005b. Quality Assurance Report for the National Study of
Chemical Residues in Lake Fish Tissue: Analytical Data for Years 1 through 4. Publication No.
EPA-823-R-05-005. Washington, DC: Office of Water.
U.S. Environmental Protection Agency. 2006a. Data Quality Assessment: A Reviewer's Guide (EPA QA/G-
9R). Publication No. EPA/240/B-06/002. Washington, DC: Office of Environmental Information.
U.S. Environmental Protection Agency. 2006b. Guidance on Systematic Planning Using the Data Quality
Objectives Process (EPA QA/G-4). Publication No. EPA/240/B-06/001. Washington, DC: Office of
Environmental Information.
U.S. Environmental Protection Agency. 2008. Quality Program Policy for Agency Products and Services
(CIO 2106.0). Washington, DC: Office of Environmental Information.
U.S. Environmental Protection Agency. 2012. National Wetland Condition Assessment Quality Assurance
Project Plan, Version 2. Publication No. EPA 843-R-10-003. Washington, DC: Office of Water.
U.S. Environmental Protection Agency. 2013. Heron in Wetlands. EPA Flickr. Accessed February 20, 2018.
https://www.flickr.com/photos/usepagov/albums/72157632912260109.
U.S. Environmental Protection Agency. 2014a. Great Lakes Restoration Initiative-Action Plan II.
Accessed May 2, 2017. https://www.glri.us/sites/default/files/glri-action-plan-2-201409-
30pp.pdf.
U.S. Environmental Protection Agency, Office of Water, Office of Science and Technology. 2014b. 2010
GLHHFTS Fish Tissue Data Dictionary for Mercury, PFC, PBDE, and PCBs. Accessed February 20,
2018. https://www.epa.gov/sites/production/files/2014-12/documents/glhhfts-data-dictionary-
for-fish-tissue-data-09-30-2014.pdf.
U.S. Environmental Protection Agency. 2015. Peer Review Handbook. Publication No. EPA/100/B-
15/001. Washington, DC: Science and Technology Policy Council.
U.S. Environmental Protection Agency. 2016. "Ensuring Measurement Competency." Last modified July
7. https://www.epa.gov/measurements/ensuring-measurement-competency.
U.S. Forest Service. 2009. Southern Research Station Author's Guide. Asheville, NC: U.S. Forest Service,
Southern Research Station, Science Delivery Group.
U.S. Geological Survey. 2009. Forest Vegetation Monitoring Protocol for National Parks in the North
Coast and Cascades Network. Chapter 8 of Section A, Biological Science; Book 2, Collection of
Environmental Data. Accessed May 5. http://pubs.usgs.gov/tm/tm2a8/pdf/tm2a8.pdf.
U.S. Geological Survey. 2017a. "Instructions for Conducting the North American Breeding Bird Survey."
USGS Patuxent Wildlife Research Center. Accessed May 19.
https://www.pwrc.usgs.gov/bbs/participate/instructions.html#WIND.
U.S. Geological Survey. 2017b. "Why Share Your Data?" USGS (Supporting and Enabling USGS Data
Management). Last modified March 23.
https://www2.usgs.gov/datamanagement/share/guidance.php.
U.S. Office of Management and Budget. 2004. Final Information Quality Bulletin for Peer Review.
Publication No. M-05-03. Washington, DC: Office of Management and Budget.
Uzarski, Donald G., Valerie J. Brady, and Matthew J. Cooper. 2015. GLIC: Implementing Great Lakes
Coastal Wetland Monitoring: Semiannual Progress Report: April 1, 2015 - September 30, 2015.
Accessed February 20, 2018. http://www.greatlakeswetlands.org/docs/Reports/GLIC-
Semi Annual-Sept-2015-final.pdf.
Uzarski, Donald G., Valerie J. Brady, Matthew J. Cooper, Douglas A. Wilcox, Dennis A. Albert, Richard P.
Axler, Peg Bostwick, Terry N. Brown, Jan J. H. Ciborowski, Nicholas P. Danz, Joseph P. Gathman,
Thomas M. Gehring, Greg P. Grabas, Anne Garwood, Robert W. Howe, Lucinda B. Johnson, Gary A.
Lamberti, Ashley H. Moerke, Brent A. Murry, Gerald J. Niemi, Christopher J. Norment, Carl R. Ruetz
III, Alan D. Steinman, Douglas C. Tozer, Ryan Wheeler, T. Kevin O'Donnell, John P. Schneider. 2017.
"Standardized Measures of Coastal Wetland Condition: Implementation at a Laurentian Great
Lakes Basin-Wide Scale." Wetlands 37 (1): 15-32. doi: 10.1007/sl3157-016-0835-7.
Walters, Carl. 1986. Adaptive Management of Renewable Resources. New York, NY: Macmillan.
Westfall, James A., and Christopher W. Woodall. 2007. "Measurement Repeatability of a Large-scale
Inventory of Forest Fuels." Forest Ecology and Management 253 (1): 171-176.
doi: 10.1016/j.foreco.2007.07.014.
Westgate, Martin J., Gene E. Likens, and David B. Lindenmayer. 2013. "Adaptive Management of
Biological Systems: A Review." Biological Conservation 158: 128-139.
http://dx.doi.org/10.1016/j.biocon.2012.08.016.
Williams, Byron K., and G. Scott Boomer. 2012. "A Typology of Adaptive Management." Chapter 4 in
Adaptive Management: Structured Decision-Making for Recurrent Decisions. Shepherdstown,
WV: U.S. Fish and Wildlife Service, National Conservation Training Center.
Williams, Byron K., Robert C. Szaro, and Carl D. Shapiro. 2009. Adaptive Management: The U.S.
Department of the Interior Technical Guide. Washington, DC: U.S. Department of the Interior,
Adaptive Management Working Group.
Williams, Byron K., and Eleanor D. Brown. 2012. Adaptive Management: The U.S. Department of the
Interior Applications Guide. Washington, DC: U.S. Department of the Interior, Adaptive
Management Working Group.
Woodward, Andrea, Edward G. Schreiner, Patrick Crain, Samuel J. Brenkman, Patricia J. Happe, Steven A.
Acker, and Catherine Hawkins-Hoffman. 2008. "Conceptual Models for Research and Monitoring
of Elwha Dam Removal - Management Perspective." Northwest Science 82 (1): 59-71.
doi: 10.3955/0029-344X-82.S.1.59.
Zhang, Chunlong. 2007. Fundamentals of Environmental Sampling and Analysis. Hoboken, NJ: John Wiley
& Sons.
•	QA strategies for addressing data management requirements in staff training programs and defining
data management roles and responsibilities;
•	design of standard operating procedures (SOPs) and field data collection forms to facilitate effective
data management; and
•	data management strategies during acquisition, processing, analysis, preservation and sharing.
In practice, QA and data management activities are closely related. Achievement of their objectives
requires careful and well-planned coordination.
The focus of this appendix is on the management of environmental information broadly defined by the
American Society for Quality (ASQ) and American National Standards Institute (ANSI) in their national
standard, known as ASQ/ANSI E4 (ASQ 2014), to include:
"any data, measurements, or calculations that describe environmental processes, location, or
conditions; ecological or health effects and consequences; or the performance of environmental
technology." - ASQ/ANSI E4
This ASQ/ANSI E4 standard further specifies that environmental information (or data) includes:
"...data collected directly from measurements, produced from models, and compiled from other
sources such as data bases or the literature... Environmental information also includes data derived
from samples collected from the environment, the results of other analytical testing (e.g.,
geophysical, hydrological) of environmental conditions, and process data or physical parameters
collected from the operation of environmental technologies." - ASQ/ANSI E4
For ecological restoration projects, environmental data include all information generated internally at
the project level (primary data sources) as well as information acquired from external sources that are
independent of the project (secondary data sources). Project-level data include
planning and reporting documentation; direct observations conducted by data collection teams;
instrument-based measurements; all relevant supporting documentation (e.g., calibration logs, field
journals, data forms); and all ecological, biological or physical samples (including the results and reports
generated during laboratory analysis).
When designing data management strategies for their project, project and data managers will achieve
their goals more effectively and efficiently if they consider the following principles of information science:
•	Discoverability - the ability of specific pieces of information to be found. For example, metadata, or
"information about information," maintained in a machine-readable format can allow information
to be more readily identified. Organizing information by putting it into alphabetical order or
including it in a search engine can also make it easier to find.
•	Accessibility - the ability to access accurate project information and data quickly. Accessibility
reflects the degree to which information is available to staff, partnership organizations and the
public, subject to privacy, sensitivity and confidentiality constraints. Accessibility for future uses
should also be anticipated.
•	Usability - the extent to which information can be used with effectiveness, efficiency and
satisfaction. Data must be accurate and sufficiently documented to be used and interpreted
correctly. Usability can be enhanced by interoperability among information systems for storage,
analysis and display.
These principles are recognized by government institutions (McCulloch and McDonald 2008; NSTC 2016)
and international organizations (Wilkinson et al. 2016) as fundamental elements of modernized data
management to ensure collaboration and facilitate reuse of scientific data products.
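For example, a minimal machine-readable metadata record can be stored alongside a dataset so that the dataset can be indexed and discovered. The sketch below is illustrative only (written in Python; the file name and fields are assumptions, not requirements of this guide):

    import json

    # Minimal machine-readable metadata record for a hypothetical monitoring
    # dataset; the field names follow common dataset-description conventions
    # but are illustrative only.
    metadata = {
        "title": "Example wetland vegetation cover survey",
        "description": "Percent cover by species at fixed plots.",
        "keywords": ["wetland", "vegetation", "restoration monitoring"],
        "spatial_coverage": {"lat_min": 41.9, "lat_max": 42.1,
                             "lon_min": -83.5, "lon_max": -83.2},
        "temporal_coverage": {"start": "2018-06-01", "end": "2018-09-30"},
        "contact": "data.manager@example.org",
        "license": "public-domain",
    }

    # Writing the record as JSON next to the data file keeps it both
    # human-readable and machine-indexable by catalogs and search tools.
    with open("veg_cover_survey.metadata.json", "w") as f:
        json.dump(metadata, f, indent=2)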
A.1 DATA MANAGEMENT PLANNING
Developing a data management approach for ecological restoration projects begins with conceptualizing
data management needs for three essential components: (1) data management policies, (2) a Data
Management System (DMS) and (3) a Data Management Plan (DMP). Often these three components are
co-developed and concurrently implemented using an adaptive and iterative approach instead of
providing all levels of detail at once (Sutter et al. 2015). A particular challenge for large-scale monitoring
of natural resources is maintaining data interoperability across programs and jurisdictions as
technology changes (Weltzin et al. 2017).
It should be noted that data management is as much about project planning and documentation as it is
about implementation of data management procedures. As such, implementation of data management
best practices occurs simultaneously with the design of restoration and monitoring strategies and other
project planning activities, and continues throughout the remaining components of the project lifecycle.
The following subsections outline and provide recommendations concerning the data management
planning process.
A.1.1 Conceptualize Data Management Needs based on Project Scope
The scale and complexity of a restoration project will determine the appropriate data management
strategies. This is consistent with the graded approach introduced in Chapter 2 and defined as "the
process of applying management controls to an activity according to the intended use of results and the
degree of confidence needed in the quality of the results" (ASQ 2014). Sutter et al. (2015) identify several
factors that operationalize the graded approach in the context of project scope - including any implied
need for legal accountability, scientific defensibility, or logistical complexity - and these factors can
serve as a guide for determining appropriate data management practices.
In general, a greater level of detail is needed in data management planning for projects that involve the
following factors:
• large geographic scale or high levels of regional, national or institutional significance in terms of
potential impacts on social, economic or environmental resources;
•	high levels of complexity in terms of (1) decision making at multiple levels, (2) greater sophistication
in scientific or construction applications, (3) regional, national or international collaboration, or (4)
dependence on a consortium of organizations and individuals;
•	lengthy projects that involve long-term planning in monitoring or construction;
•	funding from institutions that mandate high levels of project documentation (e.g., federal agencies
or private foundations that are investing significant financial resources); and
•	a high likelihood that practices, results or other outcomes will be contested in a court of law, such as
projects involving regulatory compliance or remediation and mitigation actions that impact human
and environmental health.
Several examples of implementing the graded approach are provided in Sutter et al. (2015).
A project's goals and objectives should clearly articulate its scope (Chapter 3, Sections 3.1-3.3); both
are often included in project funding proposals and project quality documentation (e.g., QAPPs, QMPs,
SOPs). Such documents can provide information necessary to identify the essential
elements and relevant institutional requirements to include in a data management plan. An assessment
of project data management needs should also identify anticipated demands for routine data processing
and analysis to support reporting at regular intervals across the timeline.
A.1.2 Data Management Components Common to All Restoration Projects
As noted above, no matter the size or scope of a project, restoration planners all share a common need
to address three essential components of data management: (1) data management policies, (2) a data
management system, and (3) a data management plan. These three components are described below.
A.1.2.1 Data Management Policies
Just as SOPs (Section A.2.1) stipulate guidelines for how data are collected, data management policies set
guidelines for how data are maintained, distributed, and used, and typically reflect higher-level policies
established by an organization. Where relevant policy on data management is deficient or absent,
project planners are encouraged to develop rules and administrative procedures that, in effect, serve as
policy during the project lifecycle. Policies should address not only the intended use of the data but also
future, unanticipated uses (secondary use). Anticipating future data uses (and
user groups) can provide insight into what supporting information may be necessary to inform the
appropriate application of the data. Policies also define how and when data quality reviews are to be
conducted, confirmed and reported. Data policies should specify the protocols required to protect the
integrity and security of project data—both during project implementation and upon final archival and
distribution of the data. This includes requirements and procedures for identifying, segregating and
protecting sensitive data, as well as policies regarding data backup, recovery and record retention.
Restoration projects that receive funding from a federal agency may be required to adopt existing
federal policies to be compliant with executive federal memoranda published by the Office of Science
and Technology Policy (OSTP 2013) and the Office of Management and Budget (OMB 2013). These
policies aim to enhance public access to results of research and information supported by the federal
government. Planners should identify and incorporate these data management policies, as well as
develop policies that are specific to the data management needs of the project. Thus, a project's data
management policies include those that are unique to the project, as well as those necessary for
compliance with applicable federal memoranda, and established institutional, corporate and/or
departmental requirements.
A.1.2.2 Data Management System
The DMS is the computer environment, including system hardware and software, used to electronically
manage the data. Designing and implementing a DMS improves management and data review
functionality, data security (including protection against data corruption and loss), and overall project
performance and efficiency. Generally, as project complexity and scope increase, so does the reliance on
integrated solutions to ensure that project data are managed effectively and securely. Restoration
projects typically employ interim strategies using conventional file-based methods on one or more
personal computers to manage data acquired during the pre-planning and planning phases for
comprehensive project design. Ideally, interim strategies are replaced as soon as feasible with long-term
solutions that include an integrated DMS leveraging the functionality of a relational database
management system (RDBMS).
A DMS should generally include (1) a computer and/or server workspace that is accessible to all
approved project participants, (2) effective and clear policies on read/write permissions, and (3) well-
defined procedures and locations for storing files and data. The DMS should be flexible so that individual
components can be modified or added without compromising the functionality of other components or
the whole system, as well as scalable so that it can support input, storage, and retrieval as the data
volume increases. An effective DMS should also allow for data to be well documented and certified (to
confirm quality and completeness) prior to public distribution. Exhibit A-2 provides a comparison of
three computer environments with respect to various considerations and system features drawn, in
part, from a case study of Great Lakes data (Kolb et al. 2013).
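As a minimal sketch of the integrity enforcement an RDBMS provides - here using Python's built-in sqlite3 module, with hypothetical table and column names - the following defines two related tables in which the database itself rejects orphaned, duplicate, or out-of-range records at insert time:

    import sqlite3

    conn = sqlite3.connect("restoration_project.db")
    conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

    # A sample-unit table and a measurement table linked by a foreign key;
    # the declared constraints reject invalid rows when they are inserted.
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS sample_unit (
        unit_id   TEXT PRIMARY KEY,
        latitude  REAL NOT NULL,
        longitude REAL NOT NULL
    );
    CREATE TABLE IF NOT EXISTS measurement (
        meas_id    INTEGER PRIMARY KEY,
        unit_id    TEXT NOT NULL REFERENCES sample_unit(unit_id),
        visit_date TEXT NOT NULL,
        pct_cover  REAL CHECK (pct_cover BETWEEN 0 AND 100),
        UNIQUE (unit_id, visit_date)
    );
    """)

    conn.execute("INSERT INTO sample_unit VALUES ('SU-01', 42.05, -83.40)")
    conn.execute("INSERT INTO measurement (unit_id, visit_date, pct_cover) "
                 "VALUES ('SU-01', '2018-06-14', 62.5)")
    conn.commit()

Attempting to insert a measurement for an undefined sample unit, a second record for the same unit and visit date, or a cover value outside 0-100 raises an error, moving these checks from convention into the database itself.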
and when necessary, be made available in synopsis form to meet page-limit constraints stipulated by the
funder (Michener 2015).
Project planners can find helpful resources (e.g., DMP templates, checklists, and other reference
material) online to assist in developing DMPs, including:
•	U.S. Geological Survey (USGS) - Supporting and Enabling USGS Data Management.
https://www2.usgs.gov/datamanagement
•	DataONE - Data Observation Network for Earth. https://www.dataone.org
•	Open Data Policy - Managing Information as an Asset. https://project-open-data.cio.gov
Project planners should consider the following list of tasks and questions as they develop and
implement their DMP:
Questions:
•	What physical or biological samples and voucher specimens will be collected, and what procedures
are necessary to ensure the integrity and security of the samples during all phases of the project
(from sample collection to analysis, and ultimately to the reporting of results)?
•	What is the estimated volume of data that will be generated over the lifespan of the project?
•	What data management needs must be addressed over the lifespan of the project?
•	How will data be collected (e.g., electronically and/or recorded on paper forms)?
•	What procedures are necessary to accommodate documentation and file-versioning as a result of
routine evaluations for quality?
•	How will completed field forms, hardcopy records, audio/video recordings, photographs, and data
downloaded from electronic instruments be stored, accessed and archived?
•	How will project instrumentation be handled, shipped and stored?
•	How will project data be analyzed and published?
•	How and in what timeframe should data be made publicly available?
•	What is the preferred format to make data publicly available (i.e., as tabulated content in a
published report, as stand-alone documented datasets, or as a complete relational database)?
Tasks:
•	Assess project-level quality documents and SOPs, and interview key participants to help identify
data management and technology needs and preferences.
•	Identify the essential data management policies, procedures and requirements.
•	Identify key data management roles and responsibilities to be assigned to project staff.
•	Identify data recording and data entry procedures that can be automated to facilitate cell-value
selection and entry of valid values, and to perform numeric value range checks (see the sketch below).
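A minimal sketch of such automated checks, written in Python with hypothetical field names, valid values and ranges, is shown below; the same logic can be embedded in an electronic data form or run as a batch check after data entry:

    # Illustrative entry-validation rules of the kind that can be built into
    # electronic data forms; the codes and range limits are examples only.
    VALID_COVER_CLASSES = {"0", "T", "1-5", "6-25", "26-50", "51-75", "76-100"}

    def validate_record(record):
        """Return a list of validation problems found in one entry record."""
        problems = []
        if record.get("cover_class") not in VALID_COVER_CLASSES:
            problems.append("cover_class is not a valid value")
        try:
            depth = float(record.get("water_depth_cm", ""))
            if not 0 <= depth <= 300:  # plausible-range check
                problems.append("water_depth_cm outside expected range 0-300")
        except ValueError:
            problems.append("water_depth_cm is not numeric")
        return problems

    print(validate_record({"cover_class": "6-25", "water_depth_cm": "45"}))  # []
    print(validate_record({"cover_class": "26", "water_depth_cm": "-9"}))    # 2 problems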
A.1.3 Develop Process Flow Charts
Charts that graphically model data flow can help project staff comprehend and communicate complex or
critical procedures, as well as how their activities fit into the big picture of data management. An
example of a process flow chart is shown in Exhibit A-3. This chart illustrates the steps involved, from
data collection to when these data are submitted for quality review and processing. Producing flow
charts during the planning stage can help ensure that all aspects of the data management process are
considered. A thorough evaluation and testing of the process, using these flow charts as a guide, can
help expose weaknesses in project planning and design that might otherwise go undetected until it is
too late to consider corrective actions.
Exhibit A-3. Example Process Flow Chart for Quality Review of Primary Data

[Flow chart: "Quality Review Process for Primary Data Submitted for Database Integration." In summary: trained crew members collect primary data by conducting measurements or observations (recorded on hardcopy or electronic data forms), by collecting physical, biological, or environmental samples (documented and handled following protocol), or by using electronic instruments (data recorded from/by a calibrated instrument). Prior to departing the sample unit location, the Crew Leader determines whether the recorded data are legible, accurate and complete; data failing this QC check are corrected before departure. The Crew Leader then submits the data package* to the QA Manager and ships collected samples to the laboratory for analysis; laboratory analysis and results receive Lab QA Manager review before the laboratory submits its report. The QA Manager reviews the data package for legibility, accuracy, and completeness; packages failing QC are returned to the Crew Leader for corrections. Hardcopy forms are scanned and transcribed into electronic format with single- or double-entry reconciliation, and electronic data are transferred from the device. The QA Manager then reviews the electronic data records for accuracy, consistency and completeness; review results are documented in metadata, and the electronic data records are submitted for database integration and additional QA/QC data review for data quality documentation and certification.]

NOTES: Example data process flow illustrating planned QA/QC review to reinforce quality management of primary data collected during ecological restoration monitoring.
* Data package refers to a collection of completed data forms and related documents (e.g., sample handling forms, copies of instrument calibration logs) organized by sampling event.
A.1.4 Assign and Train Staff
Planners should assign data management roles and responsibilities for all activities considered
instrumental in the oversight of both primary and secondary data. The project's organizational structure
should reflect key positions included in quality documentation. See Chapter 1, Section 1.6 for additional
information on QA/QC roles and responsibilities that can be assigned to individuals involved in an
ecological restoration project.
The goals of assigning and clearly documenting roles and responsibilities are to (1) establish the
structure necessary to ensure that data management and QA/QC procedures are implemented as
planned, (2) maintain data quality, (3) maximize the likelihood that finalized data products will support
the project objectives, and (4) instill a sense of accountability (and stewardship) among all staff.
Establishing accountability helps ensure that data quality management is conducted and documented in
conformance with SOPs and policies as specified in the DMP and other project quality documentation.
Broadly stated, data management requires unique skills and expertise. Data management procedures
will be effective and successful if staff are proficient in the use of relevant computer hardware and
software applications and implement best practices consistently. Staff proficiency is achieved through
comprehensive training, practice and experience. Project planners are encouraged to integrate data
management training and competency testing into their training and certification programs just as they
would for staff responsible for implementing one or more SOPs associated with data. Several academic
institutions and other organizations have developed online training courses and provide a wealth of
additional resources to help staff acquire the necessary skills and expertise required for effective data
management. Examples of effective online programs include:
•	University of Minnesota.
https://www.lib.umn.edu/datamanagement/workshops
•	Data ONE (Data Observation Network for Earth).
https://www.dataone.org/education-modules
•	Earth Science Information Partners Data Management Training Clearinghouse.
http://commons.esipfed.org/datamanagementshortcourse
Project managers should involve data management specialists with a strong understanding of
information management technologies and the capability to help select and/or design relevant
technologies early in the project planning phase. Such individuals should be able to help with the
following tasks:
•	Identify, create, and evaluate the software and systems that are best suited to the project, in terms
of availability, cost, flexibility and ease of use.
•	Offer recommendations regarding the type and structure of databases (e.g., relational databases vs.
files organized separately within a file-folder structure) and information technology (IT) interfaces
•	Taxonomic references: These references combine dichotomous keys, anatomical illustrations, and
photographs to aid plant and animal identification. The specific taxonomic keys used for
identification of biota should be recorded.
•	Standard reference materials: Reference materials can include calibrated frames for smaller cover
classes, scaled illustrations or images of cover or density, photos or drawings of disease conditions,
or soil/water color, among other examples.
•	Example completed data form: Example completed forms reinforce understanding of how to record
data.
Well-designed data collection forms, along with well-trained crews, help ensure that all data are
recorded accurately in a standardized and logical manner.
A.2.3 Data Capture and Field Logistics
The logistics of how data are collected and managed in the field and conveyed to an electronic
workspace in a controlled environment should be determined prior to training a crew for data
collection. A primary consideration is the selection of the capture system for data - and whether one
uses data collection forms printed on durable paper or an electronic form displayed on a digital device
such as a portable data recorder (PDR) (Sutter et al. 2015). PDRs can be an effective and efficient
method of recording data since they can automatically enforce data recording protocols and improve
data quality. These devices are not without problems, however - device or battery failures, and data
entry errors caused by small screens and glare, can compromise data management and data quality.
The printed data form, despite the limitation that data must later be transcribed, can be advantageous
in circumstances where accommodating device failure would be expensive (e.g., where travel costs are
high, access to the sample site is difficult, the phenomenon of interest is ephemeral, or sampling
activities can alter measurements, as with wildlife observations or cover of fragile vegetation or
community types). If used, paper data forms should become part of the permanent record of project
information and be handled in a way that preserves their future interpretability and information
content (Sutter et al. 2015).
Geospatial data require additional management considerations. In field settings, this type of data is
often represented by point locational data obtained using a GPS device. Selection of the appropriate
grade of GPS device (e.g., recreational grade vs. land surveyor grade), including its level of precision and
accuracy to estimate a location, should be determined early in the planning phase. This is also true for
data transfer (i.e., uploading or downloading of data) and post-processing methods that assist in
ensuring that data accuracy and precision are maintained, documented, and support achievement of
sampling objectives (Sutter et al. 2015). Regardless of the type of GPS device used, the unique record
identifier generated by the device (used to uniquely identify each waypoint, path or route) should be
maintained in association with each feature's descriptive attributes when downloading and importing
the GPS data into a new or existing feature class (or shapefile) attribute table. Maintaining the feature's
original unique identifier can allow for the creation of table record relationships within a RDBMS and
A.3 DATA PROCESSING
Data processing is a fundamental activity throughout the data management lifecycle; it includes the
manual handling and electronic manipulation needed to convert primary and secondary data into
formats suitable for project applications. This encompasses (but is not limited to) the methods, software
and equipment used in: data
capture, data entry, transcription and transference, data retrieval and simplification, normalizing
variables of interest, variable conversion to a common unit, and use of random or systematic
procedures to extract a subset of data records for a particular project need. Data processing activities
include the following key components:
•	documenting all processing steps in detail, and recording the steps as metadata to maintain data
transparency and ensure reproducibility (see the sketch following this list);
•	creating digital copies of completed data forms prior to any annotations made during data entry,
data validation, and quality review;
•	ensuring that copies of original or raw data are backed up and securely archived on retrievable
media prior to processing;
•	ensuring consistent handling of field samples and management of completed forms, laboratory
reports, voucher specimens and multi-media data;
•	ensuring accurate and complete migration of disparate types (and formats) of data and information
into a DMS; and
•	using flow charts (see Exhibit A-3) and other visual aids to help project staff understand the specific
steps in data management.
The following sections expand on these components by providing additional discussion on identifying
data and processing needs, effective management of data forms and physical samples, processing
specialized data, and the integration of all data into an electronic database environment or DMS.
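For the first of these components, a lightweight way to record processing steps as metadata is an append-only provenance log. The sketch below (Python; the log format and file names are illustrative assumptions) records each step with a timestamp and checksums of the input and output files so that processing can later be audited and reproduced:

    import datetime, hashlib, json

    def log_step(logfile, description, infile, outfile):
        """Append one processing step, with file checksums, to a provenance log."""
        def sha256(path):
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()
        entry = {
            "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
            "step": description,
            "input": {"file": infile, "sha256": sha256(infile)},
            "output": {"file": outfile, "sha256": sha256(outfile)},
        }
        with open(logfile, "a") as f:
            f.write(json.dumps(entry) + "\n")

    # Example (hypothetical files): record a unit-conversion step applied to
    # a raw data export.
    # log_step("provenance.log", "converted depths from ft to cm",
    #          "raw/depths_ft.csv", "work/depths_cm.csv")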
A.3.1 Identify Data and Processing Needs
When data managers anticipate all necessary processing steps, they can help ensure data integrity and
usability. Data managers can identify the types of data that need to be processed by conducting the
following activities:
•	Review project planning documentation, such as QA plans and operating procedures (SOPs, field
operation manuals).
•	Review field and laboratory journals, equipment/instrument calibration logs and preliminary
records, such as laboratory bench sheets.
•	Review all existing data forms (electronic and hardcopy formats) and standard reference documents
used by field and laboratory personnel.
according to the sampling event information, such as location, sample unit, date, time, and crew or
observer ID. Exhibit A-3 illustrates an example of this approach.
Standardized methods should be used during data entry when annotating completed data forms and
when documenting the annotated notes as data qualifiers (i.e., flags) in the DMS. See Chapter 6, Section
6.3 in the guidance document for information about using data qualifiers and flags as well as handling
data discrepancies and other errors. Prior to annotation, completed data forms should be scanned or
photographed with sufficient resolution to provide a permanent digital copy representing the original.
Annotation methods should include use of a contrasting indelible ink to document corrective actions to
an existing record. In these cases, the original information should not be eliminated; instead, the change
should be annotated by striking through the original information, neatly writing the "corrected" data or
datum, and initialing the change as near to the uncorrected (i.e., struck-through) information as
possible. Other methods include the use of sequential numbers that relate to a descriptive key written
in the margin to define the corrective action taken. Hardcopy laboratory reports can also be handled in a
similar manner. Data forms should also be notated when data have been completely transcribed and the
entry of the information from the completed form has been verified as accurate and complete.
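As a simple illustration of carrying qualifiers with the data (the flag codes below are hypothetical; actual codes should come from the project's data dictionary), each transcribed value can be stored together with its flag so the qualification travels with the datum through the DMS:

    # Hypothetical qualifier codes; actual codes should follow the project's
    # data dictionary (see Chapter 6, Section 6.3).
    QUALIFIERS = {
        "E": "value estimated in the field",
        "C": "value corrected during data entry (see annotated form)",
        "M": "value missing or illegible on the original form",
    }

    # A transcribed record keeps the reported value paired with its flag.
    record = {"unit_id": "SU-01", "pct_cover": 62.5, "pct_cover_flag": "C"}
    print(QUALIFIERS[record["pct_cover_flag"]])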
Physical Samples - Best practices include the use of consistent and thorough labeling to ensure that
samples, through their sample ID, will always remain associated with the date, time and location of
collection. The site name and collector name is of secondary importance on sample labels, assuming
that information can be determined through the sample ID. Practices now commonly require recording
the latitude and longitude obtained from a GPS device to represent the sample location with acceptable
precision and accuracy. Other practices include the use of pre-printed field labels printed with an
indelible ink, and that are securely affixed to the sample containers prior to sampling, shipping, and
short- or long-term storage. Attached labels should be durable at high or low storage temperatures.
Sample labels should include a description of handling procedures such as 'samples placed on-ice' or,
where applicable, a description of the type and volume of preservative added to stabilize the sample). In
certain applications, use of radio frequency identifiers - or electronically coded labels or tags (e.g., pit
tags) typically inserted or attached to individual animals captured for subsequent release (e.g., fish,
small mammals), can also be attached to physical samples or inserted in shipment packaging increasing
accuracy and efficiency in sample processing. Physical samples intended for transport to a laboratory for
processing should be packaged as lots or batches that represent similar field-collection procedures. A
common QC procedure is to include a field and/or trip blank sample with these routine sample lots to
detect potential influence arising from inconsistencies in sample handling. When working on EPA
projects, scientific collections are considered assets of the federal government and their ownership
carries with it trust responsibilities. EPA issued a policy (EPA 2015) to improve management and long-
term preservation of scientific collections based on the International Society for Biological and
Environmental Repositories (ISBER) Best Practices for Repositories (ISBER 2012). Both documents
provide best practices on information management for scientific collections.
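As one illustrative labeling convention (an assumption for this sketch, not a prescribed format), a sample ID can encode the project, sample unit, collection date and a sequence number, so that a label remains interpretable even if it becomes separated from other records:

    import datetime

    def make_sample_id(project, unit_id, collected, sequence):
        """Build a sample ID encoding project, sample unit, date and sequence."""
        return f"{project}-{unit_id}-{collected:%Y%m%d}-{sequence:03d}"

    sid = make_sample_id("GLRI42", "SU-01", datetime.date(2018, 6, 14), 3)
    print(sid)  # GLRI42-SU-01-20180614-003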
to fully document essential information for later retrieval. Photo media, including satellite imagery in
raster format, require substantial amounts of computer memory storage since they are not easily
reduced in size by use of file compression software.
A.3.4 Integrate Data into an Electronic Database Environment or DMS
Project data, regardless of type and format, are eventually transcribed, transferred, or converted to an
electronic format and integrated into an electronic computer environment that, even in rudimentary
form, serves as an interim strategy for the project's DMS. Exhibit A-2 presents a
comparison of three computing environments commonly used to manage and process data associated
with ecological restoration projects. Regardless of whether project data are managed within a
centralized environment, de-centralized environment, or a combination of the two, data managers
should consider maintaining all data within an integrated DMS that uses RDBMS software (e.g., Oracle®,
PostgreSQL®, MS Access®) to offer greater versioning or editing control during simultaneous access by
multiple users across a network. Maintaining a code repository within the DMS is a valuable data
management activity to ensure reproducibility of the data processing steps. Other notable benefits
include a variety of pre- and post-processing applications that can streamline data processing, analysis
and visualization, and reporting.
It is beyond the scope of this appendix to go into depth on any specific software application; however,
the widespread use of spreadsheet software (e.g., Microsoft Excel) to manage environmental
monitoring data warrants specific mention. Although spreadsheet software can include many advanced
features for storing and manipulating data, including processing, analysis and visualization, such
platforms optimize for convenience of use and data exploration at the cost of scalability, database
integrity and interoperability. Use of spreadsheet software as a DMS should therefore generally be
constrained to an interim strategy, to be incorporated into, or replaced by, RDBMS software whose core
features are specifically designed to promote scalability, database integrity, and interoperability.
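A minimal sketch of such a migration, using Python's built-in csv and sqlite3 modules (the file and column names are hypothetical), loads an interim spreadsheet export into a relational table whose constraints guard against accidental double-loading:

    import csv
    import sqlite3

    conn = sqlite3.connect("restoration_project.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS measurement
                    (unit_id TEXT, visit_date TEXT, pct_cover REAL,
                     UNIQUE (unit_id, visit_date))""")

    # Read the interim spreadsheet export (saved as CSV) into tuples.
    with open("interim_measurements.csv", newline="") as f:
        rows = [(r["unit_id"], r["visit_date"], float(r["pct_cover"]))
                for r in csv.DictReader(f)]

    # INSERT OR IGNORE skips rows that violate the UNIQUE constraint, so
    # re-running the load does not duplicate sampling events.
    conn.executemany("INSERT OR IGNORE INTO measurement VALUES (?, ?, ?)", rows)
    conn.commit()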
1)	Provide Empirical Evidence - Preserve as a permanent record of evidence to support (1) scientific
findings, management, and regulatory decisions related to human health or public policy; or (2)
project outcomes that may be contested or litigated.
2)	Protect Voucher Collection - Preserve historical, archaeological, biological, or environmental
specimens and materials as comparative or reference material.
3)	Support Secondary Use - Preserve the integrity and quality of a dataset in an incorruptible form
accessible internally or externally by other users for the explicit purpose of secondary applications.
A.5.1 Policies to Guide Data Archiving
Most public institutions and agencies have well established policies regarding long-term archiving and
preservation of digital and non-digital information they create and manage. Project planners and data
managers use these policies to determine applicable guidelines and the types of data that should be
preserved and archived. Project SOPs should be consistent with the requirements for data archiving.
Policy components of particular relevance to ecological restoration projects are often those that provide
guidance on procedures relating to data redundancy, data retention and disposition, data censorship, and
proprietary ownership, as discussed below.
Data Redundancy - This component outlines the policies that inform the types of data or files requiring
backup and the frequency with which those backups should occur. Generally, more frequent data
backups are preferable. See Section A.9 for more details on this topic.
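One simple and widely used redundancy safeguard is verifying backup copies with checksums. The sketch below (Python; the file paths are hypothetical) compares SHA-256 digests of an original file and its backup to confirm the copy is bit-for-bit identical:

    import hashlib
    import pathlib

    def checksum(path):
        """Return the SHA-256 digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    original = pathlib.Path("data/veg_cover_survey.csv")
    backup = pathlib.Path("backup/veg_cover_survey.csv")
    if checksum(original) != checksum(backup):
        print(f"WARNING: backup of {original.name} does not match the original")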
Data Retention and Disposition - The diversity and volume of data produced by restoration projects can
be substantial over the course of a project lifecycle. When considering the need to back up and archive
data across multiple time periods and physical locations, the management of such large volumes and
types of data can become cumbersome and costly. The de facto decision to "just save everything" is a
good policy when no policy exists. However, minimizing retention of unessential interim data products
and associated files can represent good policy in the context of reducing project cost and effort
associated with data storage and management. Project planners should coordinate development of data
retention and disposition (permanent disposal of data) policies with their institution or organization
(e.g., GPO 2011; National Institutes of Health 2008) as well as with vested collaborators and
stakeholders, including funding organizations.
Data Censorship - Archiving or sharing sensitive information in a publicly accessible repository is considered inappropriate by most standards. Policy components should provide guidance that identifies the specific project information that must be omitted from any archival or storage environments (including portable devices) that may be accessed by individuals who do not have approved clearance. Examples of information that may require censorship include details related to the administration of the project, such as personal information of project personnel (including volunteers) and collaborators, account identifiers of financial institutions, and parcel ownership information, among others. Other information that may be considered sensitive includes: locational data of endangered, threatened or rare species, or species vulnerable to unregulated collection or hunting; the location of
easily disturbed natural features (e.g., caves, springs, hibernacula); and the location of sensitive cultural
features (e.g., American Indian burial mounds, archaeological sites).
Proprietary Ownership - Prior to archiving data on a public server or open-source repository, consider the interests of, and the intellectual and financial investments made by, project collaborators and associated stakeholders (including funding organizations). In some cases, information may be considered privately owned and not permissible for archival, or may be covered by existing memoranda of agreement stipulating certain proprietary rights related to priority of data use (e.g., scientific publication).
When it is necessary to govern access to specific project information, project planners should work with data
managers and their respective institution (e.g., through its information technology staff) to identify internal
archival locations that are secure, but not publicly accessible, until all collaborators and stakeholders have
provided consent to do otherwise. See Section A.6.4 for a partial list of data repositories.
A.5.2 Prepare Data for Archiving
In ecological restoration projects, project managers actively manage data acquired from primary and
secondary sources throughout the entire project lifecycle. Regardless of the source, it is important to
properly prepare data for long-term archiving, addressing both the technical aspects (e.g., file structure
and format) and documentation needs (e.g., metadata), regardless of whether they will be archived
internally or archived within a public data repository. For data acquired from secondary sources, project
managers should refer to, and comply with, any policy associated with the data and existing constraints
on use and sharing.
Questions that project planners should consider when preparing data for archiving include:
•	What standard of quality should the data achieve as it relates to completeness, accuracy and
precision?
• What type of documentation and what level of detail are needed when the data are submitted for archiving?
•	What supporting information is necessary for the effective use of the data by secondary data users?
•	Are certain data considered more sensitive than others? If so, a memorandum of agreement (MOA)
or memorandum of understanding (MOU) may be required to help inform policies that guide future
access or distribution.
•	Are there proprietary concerns such as data ownership, intellectual property rights or established
policy on disposition of the data that need to be considered?
•	Who will be responsible for updates or corrections once data have been submitted for archiving?
•	How should access by external data users be managed or governed?
Preparing Digital Data:
•	Store archived data in unencrypted and uncompressed formats.
•	Store uncorrected datasets in addition to validated versions of datasets, if space permits.
•	Use ISO standards for the creation, processing and sharing of standardized metadata for digital
documents and datasets.
• Use XML or Extensible Metadata Platform (XMP) formats to ensure compliance (see Section A.7).
•	Use consistent file names and standard nomenclature across all files included in the set of data to be
archived.
•	Document all decisions and procedures associated with the conversion of data or file types into non-
proprietary or open source formats.
•	Include all project documentation required to describe the data, including project identity and point
of contact information.
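As one way to support the file-naming and documentation practices above, the hedged sketch below builds a simple fixity manifest (relative path, size, and SHA-256 checksum for every file in an archive package) so the package can be verified after transfer or storage; the directory and file names are hypothetical.

```python
import csv
import hashlib
from pathlib import Path

def build_manifest(archive_dir: str, manifest_csv: str) -> None:
    """Record name, size, and SHA-256 checksum of every file in an
    archive package so fixity can be verified after transfer or storage."""
    with open(manifest_csv, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["relative_path", "bytes", "sha256"])
        for path in sorted(Path(archive_dir).rglob("*")):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                writer.writerow([path.relative_to(archive_dir),
                                 path.stat().st_size, digest])

# build_manifest("project_archive", "manifest.csv")  # hypothetical paths
```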
Preparing Physical Materials:
•	Establish SOPs for the collection, handling, preservation, labeling and shipment of samples and
specimens in collaboration and compliance with the institution to which you plan to submit your
material for archiving (archival institution).
•	Incorporate QC oversight during collection and handling to ensure your collection will reflect the
quality expected by the archival institution.
•	Submit an entire set of specimens as a voucher collection that represents your unique study or
sampling effort (unless otherwise requested by the archival institution).
•	Establish an MOA or MOU with the archival institution that requires the return of any unwanted
specimens so that those specimens may be submitted to an alternate institution.
• Use high-resolution digital photography to photo-document representative specimens (e.g., each species, age class and gender), if applicable, as soon as practical after capture, preservation and labeling.
•	Package copies of all SOPs, digital photos, field journal notes and other relevant supplemental
information, and submit these materials as part of the voucher collection.
•	Include a cover letter that describes the purpose of the submitted material; a copy of the MOA or
MOU; and details on the project title, administrative organization, sample location(s), duration of
the sampling effort and primary contact information.
-------
the final results, whether the results met the stated objectives of the project, and whether the funds
were efficiently used.
Interested parties may request information on how the project was designed, what procedures were used,
how data were analyzed, and/or the final results and other project outcomes, all of which can be used to
help them make informed decisions and improve their work. Other researchers may be interested in
combining data from multiple projects to assess restoration results across space and time, or reusing the
data to examine long-term trends at the same site. Sharing data generally increases the value of a dataset
and enhances the community knowledge base associated with ecological restoration efforts.
A.6.2 Prepare Final Data Products
To ensure the usability of data for outside parties, all of the dataset contents need to be defined and
understandable, including variable names, units of measure, formats, coded fields, file names and
dataset titles. Variable documentation is usually presented in a table format, with the variables as rows
and the characteristics (description, units, format, etc.) as columns (Hook et al. 2010). File names and
dataset titles should be as descriptive as possible. In addition, the organization and structure of the
datasets should be presented in a consistent manner. Statements of quality and completeness are
common requirements for data sharing, metadata development and data certification (see Chapter 6,
Section 6.4 of this document). Policy requirements and submittal instructions for the chosen data
repository will dictate preparation of data products and documentation.
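As a simple illustration of the variable-level documentation described by Hook et al. (2010), with variables as rows and their characteristics as columns, the sketch below writes a minimal data dictionary to CSV; the variable names and characteristics shown are hypothetical.

```python
import csv

# Sketch of a variable-level data dictionary (variables as rows,
# characteristics as columns). Entries are illustrative only.
variables = [
    {"variable": "site_id", "description": "Unique monitoring site code",
     "units": "none", "format": "text"},
    {"variable": "veg_cover", "description": "Percent vegetative cover",
     "units": "percent", "format": "numeric, 0-100"},
    {"variable": "sample_date", "description": "Date of field visit",
     "units": "none", "format": "YYYY-MM-DD"},
]
with open("data_dictionary.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(variables[0].keys()))
    writer.writeheader()
    writer.writerows(variables)
```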
A.6.3 Assess Use Constraints
Before sharing and publishing data, the data should be evaluated to identify confidential and sensitive information, such as locational information on endangered, threatened or rare species; protected natural features and cultural resources; information that is considered private or personal (e.g., identification of individuals); or account information of financial institutions. Depending on the purpose for sharing information or the protective policies of the repository, any data identified as confidential, sensitive or personally identifiable may need to be removed from the dataset (as described in Section A.5.1). Data managers should include statements regarding data ownership, time periods
represented by the data, and use limitations with the dataset. While all potential future uses cannot be
anticipated, it is important to consider future data users and identify important constraints on how the
data can be used. This information should be clearly outlined and published along with the dataset so
that users are fully informed about appropriate use.
A.6.4 Determine Where to Publish Data
Where to publish documented datasets will be influenced by the identified communities of interest, in addition to the project manager's administrative institution, the funding organization and, when applicable, the scientific journal(s) in which results of project outcomes are published. There are several levels of data sharing, from self-publishing and organizational websites to global data repositories. Project managers should select a level of data sharing that maximizes the discoverability of the dataset for the community of interest while at the same time complying with policies to safeguard confidential and sensitive information.
-------
This information is typically structured using a standardized textual format (e.g., TXT, XML), and is either
embedded or maintained separately from the project dataset. In practice, metadata generated during the
project lifecycle may also include text, numeric output, diagrams, flow charts, images, video, sound files or
other descriptive information (Sutter et al. 2015). Project managers are responsible for recording metadata.
Metadata are essential for all projects and enhance the discoverability and future use of data. They
characterize the types of data within a dataset and facilitate the identification of data by other
researchers. Reproducibility is a primary principle of the scientific method, and a good rule of thumb
when compiling metadata is to provide sufficient information to facilitate reproducibility of the results.
Complete and well-written metadata can be viewed as an extension of good data management
practices, and should conform to the 4 Cs: correct, complete, comprehensive and comprehensible
(Sutter et al. 2015), as described below.
•	Correct: The metadata content accurately describes the data.
•	Complete: All relevant metadata elements are present.
•	Comprehensive: The metadata content fully describes the dataset.
•	Comprehensible: Someone not associated with the project can understand the metadata content.
There are four characteristics of metadata that assure that the data will be accessible to others in the future:
1. File Formats that are Non-Proprietary. Approved standards for formal metadata currently rely on expressing the content in XML to facilitate computer searching and conversion of the document into multiple formats. For current applications of XML, see the Extensible Markup Language (XML) 1.0 (Fifth Edition). Because of the complexity of using XML and the need to address all metadata elements in a consistent manner, metadata are often written using one of several software editing tools, such as Metavist (Rugg 2004), Ecological Metadata Language (Fegraus et al. 2005), EPA Metadata Editor (EPA 2017), USGS Metadata Editor (USGS 2017b), and others that provide plain-language instruction through a convenient graphical user interface. There are also proprietary applications designed specifically for the development of metadata for geospatial data (e.g., ArcGIS 10®, MapInfo®). These applications include the ability to automatically populate certain metadata elements by extracting content from the original data. However, regardless of the editing tool used or standard adopted, the final metadata file should be saved (archived) in an open-source format such as XML or TXT.
Metadata can also be generated using any text editor, such as Notepad® or WordPad® (for PC) or TextEdit® (for Apple™), when working from a standardized metadata template. Several metadata standards and templates are available to document ecological data, and the selection of a specific standard or template depends upon the type of data requiring description.
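To make the open-format recommendation concrete, the minimal sketch below writes a generic metadata record to XML using Python's standard library. The element names are illustrative only and do not follow any particular content standard; a real record should conform to the standard required by the chosen repository (e.g., FGDC, EML, or ISO).

```python
import xml.etree.ElementTree as ET

# Minimal, generic sketch of an open-format (XML) metadata record.
# Element names are illustrative; follow the required content standard.
root = ET.Element("metadata")
idinfo = ET.SubElement(root, "idinfo")
ET.SubElement(idinfo, "title").text = "Wetland vegetation monitoring, 2018"
ET.SubElement(idinfo, "abstract").text = "Plot-based percent-cover estimates."
contact = ET.SubElement(root, "contact")
ET.SubElement(contact, "name").text = "Project data manager"
ET.ElementTree(root).write("dataset_metadata.xml",
                           encoding="utf-8", xml_declaration=True)
```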
2.	File Content that Conforms to U.S. Federal or International Standards. Ecological restoration
projects funded by a U.S. federal agency may be required to use content standards endorsed by
-------
of project data for primary and secondary uses are, by definition, quality assurance strategies - to
preserve is to protect. Data protected are data that can be shared.
As with data collection, regular debriefings to review the data management process and gain insight into what is working well and what needs to be improved are recommended.
A.9 BACKUP AND SECURE DATA
Minimizing or eliminating the loss of data throughout the data management lifecycle - a fundamental
condition of data preservation - is a data management absolute. Loss and corruption of data (whether intentional or unintentional) are significant threats to data preservation. System operator error,
virus attacks, hardware failures, power failures, fire, natural disasters and other events can impact data
integrity. Project planners should develop and implement procedures to avoid and prevent data loss.
These procedures should be in place for the physical data representing voucher or sample collections, as
well as electronic data and project documentation.
Backup and recovery procedures need to be planned and implemented throughout the data management
lifecycle for data and associated metadata, including temporary and intermediate products. Backups
should be scheduled regularly during the project and replicated at different secure locations. Archived
backups protect against unanticipated events that can result in data loss. Multiple approaches are
available for on-site and off-site backups, such as copying data to external hard drives, multiple network
servers, or the cloud using internet cloud backup services. To ensure data reliability, any revisions to the
data or metadata (and the associated reason(s) for those revisions) need to be documented (see Chapter
7, Section 7.3 of the guidance document and Section A.7 of this appendix). The backup and archival
procedures should be fully coordinated and documented with data preservation policies (see Section A.5),
which typically include specific guidance and rules for version control.
Data backup and security procedures should be conducted at various stages of implementation.
Ensuring the security of sensitive or confidential information is always a consideration. As such, backups
need to be protected from unauthorized access as the data are duplicated and protected. In general,
best practices include incorporating automated routine backup procedures as an important function of the DMS. Procedures may include the use of backup and archival software, or computer programming code, that allows a data manager to establish regular time intervals for file or complete system backup to one or more central or cloud-based server storage devices. Equally important is the routine testing of data recovery protocols to verify protection against data loss and to minimize costly delays or rework.
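As a hedged illustration of such a scripted routine, the sketch below copies a data directory to a timestamped backup folder and performs a basic verification of the copy; the paths are hypothetical, and the script would be run on a schedule (e.g., with cron or Task Scheduler).

```python
import shutil
import time
from pathlib import Path

def backup(source_dir: str, backup_root: str) -> Path:
    """Copy the data directory to a timestamped folder under backup_root,
    then verify the copy by comparing file counts and total bytes."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"backup-{stamp}"
    shutil.copytree(source_dir, dest)
    src_files = [p for p in Path(source_dir).rglob("*") if p.is_file()]
    dst_files = [p for p in dest.rglob("*") if p.is_file()]
    assert len(src_files) == len(dst_files), "file count mismatch"
    assert (sum(p.stat().st_size for p in src_files)
            == sum(p.stat().st_size for p in dst_files)), "size mismatch"
    return dest

# backup("project_data", "/mnt/offsite/backups")  # hypothetical paths
```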
-------
Federal Geographic Data Committee. 1999. Content Standard for Digital Geospatial Metadata -
Biological Data Profile. Biological Data Working Group and USGS Biological Resources Division.
Publication No. FGDC-STD-001.1-1999. Washington, DC: Federal Geographic Data Committee.
Federal Geographic Data Committee. 2001. Shoreline Metadata Profile of the Content Standard for
Digital Geospatial Metadata. Publication No. FGDC-STD-001.2-2001. Washington, DC: Federal
Geographic Data Committee.
Federal Geographic Data Committee. 2002. Content Standard for Digital Geospatial Metadata:
Extensions for Remote Sensing Metadata. Publication No. FGDC-STD-012-2002. Washington, DC:
Federal Geographic Data Committee.
Fegraus, Eric H., Sandy J. Andelman, Matthew B. Jones, and Mark Schildhauer. 2005. "Maximizing the
Value of Ecological Data with Structured Metadata: An Introduction to Ecological Metadata
Language (EML) and Principles for Metadata Creation." Bulletin of the Ecological Society of
America 86 (3): 158-168. doi: 10.1890/0012-9623(2005)86[158:MTVOED]2.0.CO;2.
Government Publishing Office (GPO). 2011. 40 CFR 160.195 - Retention of Records. Code of Federal Regulations. U.S. Government Publishing Office. https://www.gpo.gov/fdsys/granule/CFR-2011-title40-vol24/CFR-2011-title40-vol24-sec160-195.
Hook, Les A., Suresh K. Santhana Vannan, Tammy W. Beaty, Robert B. Cook, and Bruce E. Wilson. 2010. Best Practices for Preparing Environmental Data Sets to Share and Archive. Oak Ridge, TN: Oak Ridge National Laboratory Distributed Active Archive Center. doi: 10.3334/ORNLDAAC/BestPractices-2010.
International Council on Archives. 2017. Guidance. Accessed April 15, 2017.
http://www.ica.org/en/guidance-0.
International Organization for Standardization. 2004. International Standard: Information technology -
Metadata Registries (MDR) - Part 1: Framework (Second Ed.). International Electrotechnical
Commission. Publication No. ISO/IEC 11179-1:2004(E). Geneva, Switzerland: International
Organization for Standardization.
International Organization for Standardization. 2007. Geographic information — Metadata - XML
Schema Implementation. Technical Specification. Publication No. ISO/TS 19139:2007. Geneva,
Switzerland: International Organization for Standardization.
International Organization for Standardization. 2009. Geographic information - Metadata - Part 2:
Extensions for Imagery and Gridded Data. Publication No. ISO 19115-2:2009. Geneva,
Switzerland: International Organization for Standardization.
International Society for Biological and Environmental Repositories (ISBER). 2012. "Best Practices for Repositories: Collection, Storage, Retrieval, and Distribution of Biological Materials for Research." Biopreservation and Biobanking.
Kolb, Tracy L., E. Agnes Blukacz-Richards, Andrew M. Muir, Randall M. Claramunt, Marten A. Koops,
William W. Taylor, Trent M. Sutton, Michael T. Arts, and Ed Bissel. 2013. "How to Manage Data
to Enhance Their Potential for Synthesis, Preservation, Sharing, and Reuse—A Great Lakes Case
Study." Fisheries 38 (2): 52-64. doi: 10.1080/03632415.2013.757975.
McCulloch, Lewis D., and Kenneth R. McDonald. 2008. NOAA's GEO-IDE Initiative - Enhancing the Discoverability, Accessibility, and Usability of Environmental Information. American Geophysical Union, 89 (53), Fall Meet. Suppl., Abstract IN33A-1157. Abstract available at: https://www.researchgate.net/publication/252672077_NOAA's_GEO-IDE_Initiative_-_Enhancing_the_Discoverability_Accessibility_and_Usability_of_Environmental_Information.
Michener, William K. 2015. "Ten Simple Rules for Creating a Good Data Management Plan." PLoS Computational Biology 11 (10): e1004525. doi: 10.1371/journal.pcbi.1004525. Available online at: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004525#sec011.
National Institutes of Health. 2008. Guidelines for Scientific Record Keeping in the Intramural Research
Program at the NIH. Bethesda, MD: U.S. Department of Health and Human Services, National
Institutes of Health, Office of the Director.
National Science and Technology Council (NSTC). 2016. Principles for Promoting Access to Federal Government-Supported Scientific Data and Research Findings through International Scientific Cooperation. A report prepared by the National Science and Technology Council, Committee on Science, Subcommittee on International Issues, Interagency Working Group on Open Data Sharing Policy. J.P. Holdren. Executive Office of the President, Office of Science and Technology Policy. Accessed February 2, 2018. https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/NSTC/iwgodsp_principles_0.pdf.
Office of Management and Budget (OMB). 2013. M-13-13. Open Data Policy-Managing Information as
an Asset. In Memorandum for the Heads of Executive Departments and Agencies, From Silvia M.
Burwell, Steven VanRoekel, Todd Park, and Dominic J. Mancini. Executive Office of the President,
Office of Management and Budget. Accessed January 15, 2018.
https://www.whitehouse.gov/sites/whitehouse.gov/files/omb/memoranda/2013/m-13-13.pdf.
Office of Science and Technology Policy (OSTP). 2013. "Increasing Access to the Results of Federally Funded Scientific Research." In Memorandum for the Heads of Executive Departments and Agencies, From J.P. Holdren. Executive Office of the President, Office of Science and Technology Policy. Accessed January 15, 2018. https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/ostp_public_access_memo_2013.pdf.
Rüegg, Janine, Corinna Gries, Ben Bond-Lamberty, Gabriel J. Bowen, Benjamin S. Felzer, Nancy E. McIntyre, Patricia A. Soranno, Kristin L. Vanderbilt, and Kathleen C. Weathers. 2014.
"Completing the Data Life Cycle: Using Information Management in Macrosystems Ecology
Research." Frontiers in Ecology and the Environment 12 (1): 24-30. doi: 10.1890/120375.
Rugg, David J. 2004. Creating FGDC and NBII metadata with Metavist 2005. General Technical Report
NC-255. St. Paul, MN: U.S. Department of Agriculture, Forest Service, North Central Research
Station.
Strasser, Carly, Robert Cook, William Michener, and Amber Budden. 2012. "Primer on Data
Management: What you always wanted to know." UC Office of the President: California Digital
Library, doi: 10.5060/D2251G48.
Sutter, Robert D., Susan Wainscott, John R. Boetsch, Craig J. Palmer, and David J. Rugg. 2015. "Practical
Guidance for Integrating Data Management into Long-term Ecological Monitoring Projects." The
Wildlife Society Bulletin 39 (3): 451-463. doi: 10.1002/wsb.548.
UK Data Service. 2017. Research Data Lifecycle. Accessed April 18, 2017. https://www.ukdataservice.ac.uk/manage-data/lifecycle.
U.S. Environmental Protection Agency. 2007. Guidance for Preparing Standard Operating Procedures
(SOPs) (EPA QA/G-6). Publication No. EPA/600/B-07/001. Washington, DC: Office of
Environmental Information.
U.S. Environmental Protection Agency. 2015. "Scientific Collections Policy: Improving the Management
of and Access to Scientific Collections." Office of the Science Advisor. Accessed May 19.
https://www.epa.gov/sites/production/files/2015-
12/documents/epa scientific collections policy final ll-30-15.pdf.
U.S. Environmental Protection Agency. 2017. "Geospatial Resources at EPA: EPA Metadata Editor."
Office of the Science Advisor. Accessed May 19, 2017. https://www.epa.gov/geospatial/EPA-
Metadata-Editor.
U.S. Geological Survey. 2017a. "Data Lifecycle Overview." USGS Data Management. Last modified March 22. https://www2.usgs.gov/datamanagement/why-dm/lifecycleoverview.php.
U.S. Geological Survey. 2017b. Core Science Analytics, Synthesis, and Libraries (CSAS&L) - Online
Metadata Editor (OME). Accessed April 18, 2017. https://www1.usgs.gov/csas/ome/.
Weltzin, Jake F., Jennifer M. Bayer, and Rebecca A. Scully. 2017. Defining opportunities for collaboration across data life cycles. Eos, 98. Available at: https://doi.org/10.1029/2017EO072689.
Wilkinson, Mark D., Michel Dumontier, IJsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie
Baak, Niklas Blomberg,...and Barend Mons. 2016. "The FAIR Guiding Principles for scientific data
management and stewardship." Scientific Data 3: 160018. doi: 10.1038/sdata.2016.18. Available
online at: https://www.nature.com/articles/sdata201618.
-------
It is generally impractical to develop a project study design that completely eliminates all sampling and
measurement error. Typically, sampling error is assessed by analyzing data pooled across multiple
sampling units, sites, and/or time. Standard equations can be used to calculate a metric or sample
statistic representing one or more population parameters (e.g., relative abundance, total biomass,
species- or community-composition, net carbon sequestration, relative total phosphorus discharge) at
the treatment or site level. Efforts to quantify and document sampling error are important for
determining the usability of a particular dataset to address project sampling objectives. It is beyond the
scope of this appendix to discuss statistical procedures used to assess DQIs in the context of sampling error,
and such procedures will not be addressed in any detail. However, there are many credible and readily
available publications that provide statistical guidance to assess a variety of error-related issues
associated with environmental data (see Additional Readings). One recommended source is the U.S.
Environmental Protection Agency (EPA) publication that provides such guidance developed for
environmental managers and natural resource practitioners: EPA QA/G-9S. 2006. Data Quality
Assessment: Statistical Methods for Practitioners. Online at: https://www.epa.gov/qualitv/guidance-
data-quality-assessment.
Sections B.l and B.2 of this appendix lay the foundation for understanding both the statistical
limitations associated with data representing different measurement scales and the unique attributes of
measurement error associated with data collected under different levels of control. Section B.3 presents
important considerations related to preparing data for quality assessment. Section B.4 describes
common statistical procedures used for the assessment of data precision, bias and accuracy by
comparing the differences between sample means, or individual measurements or observations, when
collected by the same routine crew at different times or by different routine crews, and between a
routine crew and a QA expert. Section B.5 identifies a simple method to quantify the level of
completeness in obtaining planned samples, measurements or observations. Section B.6 distinguishes
sensitivity and specificity, two fundamental components that are complementary to the assessment of detectability. Section B.7 places data quality in the context of representativeness and comparability. Section B.8 concludes the appendix by describing a simple statistical procedure that can
be used to quantify the repeatability (precision) within and between observers, crew(s) or samples, as
well as compare the efficacy between two or more procedures.
All data users are encouraged to work with a statistician familiar with environmental and ecological data
analysis to identify statistical procedures that are most appropriate for the specific application. A
statistician will be able to provide recommendations that draw from a variety of parametric and non-
parametric procedures. While these procedures are not discussed here, the recommendations can help
support analysis and planning efforts, and ensure the appropriate treatment of data given the specific
scale of measurement (i.e., nominal, ordinal, interval or ratio).
B.1 MEASUREMENT SCALES AND THEIR STATISTICAL PROPERTIES OF VALUE
Given the diversity of environmental data and their formats, it is often difficult to identify the
appropriate statistical procedures to assess their reliability. This is particularly true when the data are
not continuous - such as when data represent discrete variables, including numeric data that have been
"categorized" or simplified to a categorical variable that can be ranked or ordered. Consequently, in
order to accurately assess data quality, it is important to first identify the appropriate measurement
scale represented by each variable of interest and understand their limitations and inherent statistical
properties. Exhibit B-2 summarizes important statistical considerations between categorical and
numeric data and four main types of measurement scale (nominal, ordinal, interval, and ratio) as
proposed by Stevens (1946, 1951). Despite the debate that ensued regarding the merits of Stevens' measurement scale and his associated statistical considerations (Michell 1986; Velleman et al. 1993; Duncan et al. 2006), the measurement scale concept has been broadly accepted and applied in a variety of statistical references (Conover 1999; Zar 2010; McDonald 2014) and used by specific statistical software applications (Wilkinson et al. 1996; Carlberg 2014; Mangiafico 2016).
Velleman et al. (1993) describe measurement scale as an important concept and Stevens' terminology as often suitable; however, they argue that scale types (and the implied limits on statistical treatment) should be assigned with an awareness of how the data have been collected (i.e., measured or observed) and the questions the data are intended to address.
For the purposes of data quality assessment, environmental data are obtained through the following
two types of processes:
•	A Measurement can be defined as any numeric value associated with a unit of measurement. A
measurement represents variables typically quantified with the aid of a mechanical graduated device
or instrument (e.g., a tape measure or bulb thermometer) or an electronic analog instrument, such as
a device that displays results along a continuous scale, or digital instrument that displays results as a
discrete, numeric value. Numeric data are considered "quantitative" and, depending on sample size
and distribution, may be appropriate for use in parametric statistical methods.
•	An Observation can be defined as a categorical or numeric value that, with some exceptions,
typically lacks a unit of measurement. Examples of observations that may require a unit of
measurement include counts of occurrence or estimates of individual abundance (e.g., individuals
per taxonomic unit) and percent cover (e.g., vegetation cover per unit area). These data, regardless
of whether they are numeric or categorical, represent variables typically determined by an
observer's perception and best professional judgment. When observations represent non-numeric,
categorical data, they are often considered to be "qualitative" and generally require use of non-
parametric statistical methods.
-------
• professional, junior, or entry-level seasonal staff (and may include individuals or groups participating in citizen monitoring) who are not equally proficient in the implementation of SOPs or analytical methods, or in the calibration and use of field equipment.
2)	Field laboratory (or temporary research station) - data collected indoors (e.g., in a trailer or other
mobile laboratory setting) with reduced exposure to environmental hazards, reasonable control of
air temperature and lighting, provision of a dedicated work space, and access to precision analytical
instrumentation. Obtaining measurements or interpreting observations often involves:
•	using instrumentation ranging from field equipment to dedicated laboratory equipment
optimized for increased precision and accuracy,
•	applying analytical procedures that may or may not equal those in a fixed laboratory setting,
•	addressing potential trade-offs in terms of QA/QC procedures to accommodate logistical
constraints, and/or
•	modifying procedures to accommodate the availability of computer resources used for direct
data collection.
3)	Fixed laboratory (or permanent research station) - data generated under controlled conditions
suitable for performing complex analytical procedures (e.g., chemical, microbiological, geophysical
analyses). The laboratory may or may not be accredited by an outside organization, depending on
the specific measurements involved. Ideally:
•	Measurements or observations in these facilities are associated with a rigorous QA/QC oversight
program, where reported results provide QC assessments of precision and accuracy during
batch processing of environmental samples (e.g., blanks, replicates, calibrations, and matrix
spike recoveries).
•	Analytical methods and procedures are based on those from EPA or a voluntary consensus
standards body (e.g., ASTM, AOAC, APHA), and are implemented via SOPs that require initial and
ongoing demonstrations of performance with known precision and accuracy.
•	Reference materials, obtained from a recognized provider, are employed for QC comparison of
results during batch processing.
•	High precision instrumentation is maintained and operated in a fixed location.
•	Professional (and often full-time) staff that are proficient in SOP/analytical method
implementation and laboratory equipment calibration and use, and are familiar with all QA
procedural requirements.
Some circumstances may necessitate the use of procedures that deviate from the norm. For example,
analytical requirements may not fall under the auspices of any accrediting body, or laboratories may be
selected based on other considerations, including geographic proximity to the site or an affiliation with a
partner organization contributing to the restoration project (e.g., academic research laboratories).
Those circumstances may involve:
•	alternative analytical methods and procedures that may not be associated with a published standard
or may not have been demonstrated and validated to produce consistent performance with high
precision and accuracy;
•	reference materials that may be represented by a biological voucher or specimen for use in
taxonomic identification, biological pathology and physical anatomy;
• mid- to high-level precision instrumentation operated in either a mobile or fixed location; and/or
• professional, junior, or entry-level seasonal staff who may or may not be equally proficient in the implementation of SOPs (analytical methods) or the calibration and use of laboratory equipment.
Example scenario: An observer or crew is required to deploy a multiple-sensor sonde instrument from a
boat to measure and record the dissolved oxygen, temperature, and pH in an open lake, at a depth that
coincides with the maximum Secchi-disk depth. When completed, the observer collects a physical water
sample at the same depth for analysis of chlorophyll-a concentration at a fixed laboratory or permanent
research station. This monitoring effort is performed by multiple crews, across a large region, under
varying weather conditions and equipment availability.
The scenario described above illustrates the potential complexity associated with interpreting,
recording, and collecting reliable estimates for an environmental variable in a "field" setting. The
measurement errors associated with the data collected will reflect the contributions of both the field
and fixed laboratory settings, and include the combined error contributed by the specific method of
observation and/or measurement involved in quantifying the variable.
B.3 PREPARING DATA FOR QUALITY ASSESSMENT
The remaining sections of this appendix are organized based on the DQI of interest. There are several
steps a data analyst should consider to prepare data for quality assessment using statistical procedures.
A first step is to confirm that all data to be assessed have undergone a formal data review process and
that errors introduced during data transcription and other non-conformities have been corrected and
documented. Chapter 6 of the guidance document provides an in-depth description of the data review
process. Working with data that include correctable errors can produce misleading results during the
data quality assessment process and unnecessarily creates additional effort. It is also helpful to organize data chronologically and into logical sets according to one or more ancillary variables of interest, such as site ID, sample year, observer or crew ID, and, if documented, the device ID for any instrumentation used. This is a conventional practice that facilitates selecting a subset of a larger dataset to confirm that the data meet established criteria or levels of quality. For example, a data analyst may
be interested in evaluating survey results obtained during a specific year or time period, or collected by
a specific crew or from a specific site, or to assess batch results reported by an independent laboratory.
In these cases, and for monitoring results that produce small sample sizes (e.g., fewer than 30), the data
analyst should consider what effect a reduction of sample size will have on the precision and accuracy of
a sample mean statistic when its representativeness of the sampled population mean is dependent upon
the application of the central limit theorem (Devore 2012; Walpole et al. 2012).
It is considered standard practice to summarize any given sample (or set) of data using descriptive
statistics in order to parameterize the sample based on the minimum and maximum values, standard
deviation (a measure of the spread of the data), and, as stated previously, a measure of central
tendency such as the sample mean, median, or mode. Typically, the process of describing data coincides with the use of one or more graphical analysis tools to visually inspect the shape or distribution of the sample data, which are often evaluated to determine whether the data are approximately normally distributed. Commonly used graphical analysis tools include scatter plots, box-and-whisker plots, Pareto charts, control charts, cumulative frequency plots, and histograms, all of which can help a data analyst detect problems and potential errors (e.g., extreme or invalid values) and determine whether a discrete or continuous numeric variable may require statistical transformation in order to conform to an assumed probability distribution.
for each variable provides a "big picture" perspective from which a determination can be made
regarding whether the data quality is adequate to proceed with more rigorous statistical assessments.
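For example, a minimal screening summary of a sample might be computed as follows; the turbidity values are hypothetical, and a fuller review would add the graphical checks described above.

```python
import statistics

def describe(values):
    """Descriptive summary used to screen a sample before formal
    data quality assessment (Section B.3)."""
    return {
        "n": len(values),
        "min": min(values),
        "max": max(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "sd": statistics.stdev(values),  # sample SD, n-1 denominator
    }

# Hypothetical turbidity results (NTU) from one crew:
sample = [4.1, 3.8, 5.0, 4.4, 12.9, 4.2, 3.9]
print(describe(sample))  # the 12.9 stands out for follow-up review
```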
B.4 PRECISION, BIAS, AND ACCURACY
Precision and bias (and thus, accuracy) are related error terms and can be influenced by random and
systematic errors in a measurement process (EPA 2006). Precision is the degree of agreement among
repeated measurements or observations of the same attribute and is not dependent upon the true
value. Measurement precision represents the total variation for a given set of measurements produced
by the measuring device, the procedure used, and the proficiency of the device operator. By analogy,
this is also true for a given set of observations produced by an individual applying best professional
judgment (e.g., visual assessment) and a standard protocol (Stapanian et al. 2016). When a scientific
instrument is used that can evaluate a continuous variable with increased resolution (i.e., to finer
fractional or decimal precision), the degree of precision reported will have a direct effect on the
magnitude of the variance for that variable (Walther et al. 2005). In this context, the level of resolution
used to evaluate and report a categorical (nominal or ordinal) or numerical (discrete or continuous)
variable based on observation will similarly impact the magnitude of the variance for that variable.
As introduced in Chapter 3, Section 3.4 of the guidance document, data quality acceptance criteria for
precision, bias, and accuracy collectively, can be represented and expressed in terms of a measurement
error tolerance and a frequency of compliance rate for achieving that tolerance. All three terms are
combined in this manner as a QA strategy to increase the efficiency in QA/QC oversight during data
collection. Combining the terms also facilitates coordinating the quality assessment of results related to
a specific site or sampling unit (e.g., transect or plot array) or an event period (e.g., month or season),
much like the evaluation of batch-specific QC results in a laboratory. Therefore, the most basic method
to evaluate the combined error associated with precision, bias, and accuracy is the evaluation of the
frequency with which a measured value falls within the acceptable error tolerance for that variable
(Westfall 2009; Stapanian et al. 2016). This approach is an intuitive and practical method to assess
discrete and categorical data types as part of QA assessment and continual improvement during data
collection activities.
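A minimal sketch of the error tolerance and compliance rate method might look like the following; the measurements, tolerance, and compliance goal are hypothetical.

```python
def compliance_rate(routine, reference, tolerance):
    """Percentage of paired values whose absolute difference from the
    reference (e.g., QA expert remeasurement) is within the error
    tolerance named in the data quality acceptance criteria."""
    within = sum(abs(r - e) <= tolerance for r, e in zip(routine, reference))
    return 100.0 * within / len(routine)

# Hypothetical stem heights (cm): routine crew vs. QA expert, +/- 2 cm tolerance
routine = [102, 98, 115, 120, 87]
expert = [100, 99, 113, 125, 88]
print(compliance_rate(routine, expert, 2.0))  # 80.0 -> below, say, a 90% goal
```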
However, use of the combined error tolerance and compliance rate method may not be as rigorous or
as efficient for all types of data, measurement scales, or large numbers of repeated measurements or
observations, nor is it practical during latter stages of data quality assessment. The remainder of this
section presents common statistical procedures that can be used to complete a more robust
quantitative assessment of precision and bias for a specific variable representing a set of
measurements or observations reported in their original units (i.e., variables that are not derived from
an equation or based on a modeled response). Accuracy is assessed qualitatively by evaluating the
combined performance of precision and bias and represents the total error or uncertainty associated
with an individual measurement or set of measurements.
B.4.1 Standard Statistical Equations to Assess Precision, Bias, and Accuracy for Repeated
Measurements
Precision is the degree of agreement among repeated measurements or observations of the same
attribute under the same or very similar conditions. For situations where the true value is not known,
such as when it is not possible to provide a QA crew or an expert evaluation, it is still possible to
evaluate precision of repeated measurements within the same or between two different routine crews.
The statistical procedure evaluates the variability in crew performance to consistently measure or
estimate a true value by calculating the squares of the differences between the repeated measurements
(to remove the direction of the difference), and then averaging these results. The variance (VAR) is
expressed as the average (corrected by n-1 degrees of freedom) of these squared differences. The
calculated statistic thus represents an estimation of the variance for the variable measured or assessed
either within or between observers or crews. To return this estimate of precision to the same units or
scale of the original measurement, the square root of the variance is calculated and is termed the
standard deviation (SD).
Bias evaluates systematic differences that could lead to an underestimate or overestimate of the true
value. Bias can be calculated as the average (or mean) difference between the values obtained by a
routine crew and the values obtained by an expert or QA crew for the sample variable at the same
location.
Accuracy describes the degree of closeness of a set of measurements to a known or reference value (see
Exhibit 2-3 in Chapter 2, Section 2.5). Accuracy can be estimated as Mean Squared Error (MSE) and is
calculated as the square of the differences between measured (or observed) and true values to eliminate
the direction of these differences and therefore only takes into consideration the magnitude of the
differences. To return this estimate of accuracy to the same units or scale of the original measurement, the
square root of the MSE is calculated and is termed the root mean square error (RMSE).
Statistical equations are provided for each of these DQIs and are listed in Exhibit B-4.
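While Exhibit B-4 presents the formal equations, a hedged sketch of these computations (sample SD for precision, mean difference for bias, and RMSE for accuracy) is shown below; the routine and expert values are hypothetical.

```python
import math

def precision_sd(values):
    """Sample standard deviation (n-1 denominator) of repeated measurements."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))

def bias(routine, expert):
    """Mean signed difference between routine and expert values."""
    return sum(r - e for r, e in zip(routine, expert)) / len(routine)

def rmse(routine, expert):
    """Root mean square error: overall accuracy of routine vs. expert values."""
    return math.sqrt(sum((r - e) ** 2 for r, e in zip(routine, expert))
                     / len(routine))

routine = [10.2, 9.8, 10.5, 10.1]   # hypothetical repeated measurements
expert = [10.0, 10.0, 10.0, 10.0]   # accepted true value
print(precision_sd(routine), bias(routine, expert), rmse(routine, expert))
```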
-------
B.4.2.1 When sample sizes are sufficient to produce a sample mean representative of the sampled
population (assumes n accurately reflects population characteristics)
Percent Relative Standard Deviation - %RSD
Precision: The %RSD is a useful metric by which to compare the variance between results obtained by
repeated measurements (or observations) on the same sample by the same observer (within crew) or
between observers (between crew) - but its performance or reliability generally decreases as sample
size decreases. An important statistical consideration when using %RSD is that it should only be used
for continuous variables (or the differences when comparing between repeated measurements) that
have a true zero point, such as length, height, weight, or area, and those variables with a conceptual zero point, like diameter at breast height (DBH). However, when the mean value is close to zero, %RSD will approach infinity and is therefore sensitive to small changes in the mean. Of equal importance is that the measurements or observations be drawn randomly from the sampled population.
$$\%RSD = \frac{\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(y_{i(routine)} - \bar{y}_{(routine)}\right)^{2}}}{\bar{y}_{(routine)}} \times 100$$

Where:
n = total # routine measurements or observations
i = independent measurement or observation
y_i = value for the ith routine measurement or observation
ȳ = average value for all routine measurements or observations

Note: %RSD can be represented more simply as: 100 × SD/ȳ.
The %RSD is calculated by first obtaining the SD of a set of measurements, or the differences
between repeated measurements, dividing that result by the average of those measurements or
their differences, and then multiplying by 100 to represent it as a percentage. The %RSD is also
known as the coefficient of variation (CV).
Accuracy and Bias: With minor modification, the %RSD can be used as a procedure to assess measurement accuracy (Stapanian et al. 2016). This is accomplished by replacing the term ȳ(routine) in the numerator with the term ȳ(expert), representing the estimated average or mean value determined by a QA expert or that was determined a priori to represent an accepted true mean value.
Bias can be expressed as a percentage indicating the direction and magnitude of the difference
between the mean value of a set of measurements (or observations) and the known or true value
relative to the known or true value. It represents relative bias, and indicates the ability of an
observer, crew or procedure to over- or underestimate a variable. The equation for relative bias is:
$$Bias\,(\%) = \frac{\bar{x} - E}{E} \times 100$$

Where:
x̄ = the mean value for the set of measurements or observations
E = the expert or accepted true value of a variable
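A brief sketch of the %RSD and relative bias calculations, using hypothetical repeated counts and an assumed expert value:

```python
import statistics

def percent_rsd(values):
    """%RSD (coefficient of variation): 100 * SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def relative_bias(values, expert_value):
    """Signed percent difference of the sample mean from the accepted
    true (expert) value."""
    return 100.0 * (statistics.mean(values) - expert_value) / expert_value

shoots = [41, 44, 39, 42, 40]       # hypothetical repeated counts per quadrat
print(percent_rsd(shoots))          # ~4.7%
print(relative_bias(shoots, 45.0))  # negative -> underestimation
```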
95% Confidence Interval (CI) of the mean
Precision: The average or mean value x̄ of a given set of measurements collected randomly from a sampled population can be considered an estimate of the true population mean μ. Because of sampling variability, however, this estimate is likely to differ from the true mean. Information about the precision and reliability of the estimate can be provided by reporting an interval of plausible values, also known as a confidence interval. Computing a confidence interval first requires the selection of a confidence level 1-α, which is a measure of the degree of reliability of the interval. The higher the confidence level, the more likely the value of the true mean lies within that interval.
A 95% CI is presented here because of its conventional use in ecology. Project planners should
consider alternative confidence levels (e.g., 90%, 99%, 99.9%) based on one or more of the
following: (1) the level of error (or precision) tolerable in the estimation of a sample mean for a
given variable of interest, (2) logistics and resources involved with increased sampling effort, and (3)
the effect size necessary to correctly assess decision error for a given hypothesis test. Ideally, confidence (and statistical significance) levels should be determined a priori, before effectiveness monitoring and the subsequent statistical analysis of the effects of restoration actions.
The lower and upper limits of the 95% confidence interval are computed as follows:
$$95\%\ CI\ of\ the\ Mean = \bar{x} \pm t \times \frac{SD}{\sqrt{n}}$$

Where:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \qquad SD = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}}$$

n = total # measurements or observations
x̄ = average or mean value
x_i = value for the ith measurement or observation
SD = standard deviation of the sample estimate
t = statistic from t-distribution table
Note: A 95% confidence interval is derived by dividing the SD by the square root of the number of measurements (n) and multiplying by a critical value t with α = 0.05 and n-1 degrees of freedom.
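For illustration, the 95% CI can be computed as follows; this is a sketch assuming SciPy is available for the t-distribution critical value, and the dissolved oxygen readings are hypothetical.

```python
import statistics
from scipy import stats  # t-distribution critical values

def ci95(values):
    """95% confidence interval of the sample mean using the t-distribution."""
    n = len(values)
    mean = statistics.mean(values)
    half = stats.t.ppf(0.975, df=n - 1) * statistics.stdev(values) / n ** 0.5
    return mean - half, mean + half

# Hypothetical dissolved oxygen readings (mg/L):
print(ci95([4.1, 3.8, 5.0, 4.4, 4.6, 4.2, 3.9]))
```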
Together, the SD and the confidence interval can serve other valuable purposes; for instance, to
compare the level of precision between two different estimates of the same measurement using
different procedures (Bland and Altman 1986). Assuming that sample sizes are comparable between
each set of measurements, the measurement procedure with a SD and CI (specifically the lower and
upper limits) characterized by the least variability and narrowest range of minimum and maximum
values, respectively, represents the procedure with the greatest precision in measurement.
Accuracy: The accuracy is defined in terms of whether or not the confidence interval contains the
true population mean, which is typically unknown in practice. For the purpose of estimating a
difference between results obtained by a routine crew member and results obtained by a QA expert, an analyst could use ȳ(expert) to represent the true population mean, then check whether this value is included in the confidence interval computed from the measurements y_i(routine), replacing x_i with y_i(routine) in the equations above.
An alternative approach, which accounts for the uncertainty included in ȳ(expert), is to also compute a confidence interval for ȳ(expert), replacing x_i with y_i(expert) in the equations above. The confidence intervals for ȳ(routine) and ȳ(expert) can then be compared. If these two intervals do not overlap, then the difference between the two means is significantly different from zero. If there is an overlap, then a statistical test, i.e., a two-sample t-test (Devore 2012), needs to be performed to decide whether the difference is significant for a given α-level. This approach can be applied more generally to compare the means of two sets of data (e.g., measurements collected by the same crew on the same plots at different times, or by different crews on the same plots at similar times).
95% Confidence Interval (CI) of the Mean Difference (MD) - 95% CI of MD
The comparison of two means using the overlap of their confidence intervals is very general in that
the number of measurements do not need to equal the number of experimental units (EU) being
sampled (e.g., points, quadrats, transects). There are however situations in which there is only one
EU and two sets of measurements are made on that unit. For example, consider the scenario where each 1-m² quadrat along a 100-meter line transect has been visited by two routine crews, or by a routine crew and a QA crew, resulting in a set of n paired samples {(y_i(crew1), y_i(crew2)), i = 1, …, n}. Comparing the two samples represented by the two crews is best done by first computing the difference between measurements for each pair of observations: d_i = y_i(crew1) - y_i(crew2). A confidence interval for the mean difference between the paired samples can then be computed as
follows:
$$95\%\ CI\ of\ the\ Mean\ Difference = \bar{d} \pm t \times \frac{SD}{\sqrt{n}}$$

Where:

$$\bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i \qquad SD = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(d_i - \bar{d}\right)^{2}}$$

n = total # paired differences
d̄ = average or mean difference
SD = standard deviation of the data
t = statistic from t-distribution table
Notes: 1) If the confidence interval does not include zero, then we can be highly
confident that the two means are different. 2) When paired differences
represent measurements conducted between two different routine crews, then
the mean difference is representative of precision only.
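A short sketch of the paired-difference CI, using hypothetical percent-cover estimates made by a routine crew and a QA crew on the same quadrats (assumes SciPy for the t critical value):

```python
import statistics
from scipy import stats

def ci95_mean_difference(crew1, crew2):
    """95% CI of the mean paired difference; if the interval excludes
    zero, the two crews' results differ systematically."""
    d = [a - b for a, b in zip(crew1, crew2)]
    n = len(d)
    half = stats.t.ppf(0.975, df=n - 1) * statistics.stdev(d) / n ** 0.5
    mean_d = statistics.mean(d)
    return mean_d - half, mean_d + half

# Hypothetical percent-cover estimates on the same quadrats:
crew = [35, 40, 22, 50, 18, 44]
qa = [30, 42, 25, 45, 20, 40]
print(ci95_mean_difference(crew, qa))
```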
B.4.2.2 When comparing between two individual measurements or sample results with no assumption
regarding their representativeness of the sampled population
There are times when QA managers may wish to compare the level of precision or agreement between
two individuals or crews (e.g., within- and between-crew assessment), or between a routine crew
estimate and that of a QA expert, but where a meaningful sample average (i.e., sample mean) cannot be
determined (e.g., when there are fewer than 30 samples). This scenario is particularly common during
the early phases of project monitoring when sample sizes are low or incomplete, or when it is necessary
to validate whether results are in compliance with performance objectives, as defined by data quality
acceptance criteria. The procedures listed here may be used to evaluate performance related to specific
periods of sampling effort, crew deployment, or when there may be procedural changes as result of
revised SOPs. When applied in a laboratory environment, results are conventionally associated with
batch analysis. This same concept can easily be applied to performance evaluation of data packages or
sample units representing measurements conducted in field settings (e.g., one per transect, one per 10
plots, once per sampling event). It is important to point out that results produced from the following
assessment procedures cannot necessarily be extrapolated to a population of interest, but rather serve
as a QA strategy for the systematic performance assessment of observers or an entire crew.
Relative Percent Difference (RPD):
There are certain applications when it is useful to compare the proportional difference between two
measurements or calculated values (Stribling et al. 2008). This is performed by calculating the RPD -
a unit-less quantity where lower RPD values represent greater precision than higher RPD values.
RPD is estimated based on the absolute value of the mathematical difference between two
measured or calculated values (i.e., metric or index), then dividing that difference by their combined average. The result is multiplied by 100 to express the proportional difference as a percentage. The equation for RPD is calculated as:

$$RPD = \frac{|A - B|}{(A + B)/2} \times 100$$

Where:
A = the first measured (or calculated) value
B = the second measured (or calculated) value

Note: A and B can represent measurements repeated at different times, conducted by different observers or crew, or estimated (e.g., indexes) using different instrumentation or procedures. Absolute values may be used in the denominator when the sum of A + B would otherwise equal zero. However, when either A or B is very small or zero, RPD can produce misleading results (Stribling et al. 2008).
The use of RPD is a common approach to compare results from laboratory sample analysis, but it is
also useful as a QC inspection procedure to assess within- or between-crew performance for data
derived from field measurements, such as stem or shoot density, tree height, and an object's
distance from an observer.
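A one-line implementation of RPD, with hypothetical duplicate stem-density counts:

```python
def rpd(a, b):
    """Relative percent difference between two measured or calculated
    values; lower values indicate greater precision."""
    return 100.0 * abs(a - b) / ((a + b) / 2.0)

# Hypothetical duplicate stem-density counts (stems per plot):
print(rpd(148, 160))  # ~7.8%
```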
Percent Taxonomic Disagreement (PTD):
The taxonomic identification of plant and animal species and their associated enumeration is a
common component of ecological restoration monitoring. It typically produces two types of data,
categorical (nominal) data for species identification, and discrete counts for numbers observed per
species identified. Often, data of this type are evaluated for precision, bias, and accuracy by
comparing whole-sample identifications completed by different taxonomists, field personnel or
crews that specialize in one or more areas of botany or zoology. Data are typically generated by the
analysis of collections of preserved sampled specimens (e.g., macro- or micro-invertebrates or fish)
under controlled laboratory conditions. However, data can also be generated under field conditions
by conducting repeated evaluations of the same sample unit (e.g., quadrat, transect, or point) by the
same or different observer or crew member, or between a routine crew member and a QA expert.
Generally, the accuracy of taxonomy is qualitatively evaluated based on the a priori specification of
one or more target hierarchical levels (e.g., family, genus or species). PTD is calculated as:
Percent Taxonomic Disagreement (PTD) = [ 1 - (a / n) ] x 100
Where:	a = the number of agreements
n = the total number of organisms in the larger of the two counts
Note: n is represented by the specification of a target hierarchical level (e.g.,
family, genus or species).
A lower PTD conveys a lower number of disagreements, indicating greater similarity in count
estimates and a higher level of taxonomic precision between observers, crews, or laboratories.
The use of PTD may also apply to certain field procedures used to enumerate the occurrence of one
or more types of community classification or other categorical or nominal types of data, provided
sample units are clearly delineated and evaluation methods are adequately documented to
standardize observer interpretation for both unit classification and unit enumeration.
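As a hypothetical sketch of the calculation (the taxon names and counts below are invented for illustration), PTD can be computed from two aligned sets of identifications of the same specimens:

    # Identifications of the same five specimens by a routine crew and a QA taxonomist
    routine = ["Baetis", "Caenis", "Caenis", "Hydropsyche", "Baetis"]
    qa_expert = ["Baetis", "Caenis", "Leptophlebia", "Hydropsyche", "Baetis"]

    agreements = sum(r == q for r, q in zip(routine, qa_expert))
    n = max(len(routine), len(qa_expert))  # larger of the two counts
    ptd = (1 - agreements / n) * 100
    print(ptd)  # 20.0, i.e., one disagreement among five organisms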
Percent Difference in Enumeration (PDE):
PDE quantifies the consistency of discrete counts of a particular taxonomic unit (or other
classification) in samples and may apply in certain applications during field sampling to assess counts
determined in situ for a given sample unit. PDE is determined by calculating the absolute value of
the difference between two counts representing different observers, crews, or independent
laboratories, dividing that value by the sum of the two counts, and then multiplying by 100 to express
it as a percentage. A lower PDE conveys a greater similarity between the counts or results of
enumeration.
Percent Difference in Enumeration (PDE) = [ |n1 - n2| / (n1 + n2) ] x 100

Where:	n1 = the total count in the first sample
n2 = the total count in the second sample
Note: n is represented by the total count associated with the specification of the
target hierarchical level (e.g., family, genus, or species).
Conceptually, performance in enumeration of any discrete level of classification (e.g., individuals per
vigor class, decay class, substrate type, among others) might be evaluated with this procedure.
Minimum count total requirements may apply depending upon the classification unit of the variable
being enumerated.
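A minimal sketch of the PDE calculation (the function name and example counts are ours, for illustration only):

    def percent_difference_in_enumeration(n1, n2):
        """PDE between two independent counts of the same sample or sample unit;
        lower values indicate greater similarity in enumeration."""
        return abs(n1 - n2) / (n1 + n2) * 100.0

    # Example: two observers count 100 and 92 individuals in the same sample
    print(round(percent_difference_in_enumeration(100, 92), 2))  # 4.17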
Percent Taxonomic Completeness (PTC):
It may not always be possible to confirm the identity at a targeted taxonomic or hierarchical level for
all specimens or units in a given sample (or within a given sample unit). This can impact the
usefulness of estimates of PDE. In such instances, PTC is a metric that may be used to assess the
proportion of specimens in a sample or sample unit that taxonomists were able to assign to a pre-
determined taxonomic level. PTC is calculated as:
Percent Taxonomic Completeness (PTC) = (x / N) x 100
Where:	x = the number of specimens in a given sample correctly assigned to the
targeted (or pre-determined) taxonomic level
N = the total number of specimens in a given sample expected to be assigned to
the targeted (or pre-determined) taxonomic level
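A short sketch of PTC, here assuming unresolved specimens are recorded as None (an illustrative convention of ours, not one prescribed by this guide):

    # Genus-level assignments for a sample of eight specimens; None = unresolved
    assignments = ["Baetis", "Caenis", None, "Hydropsyche",
                   "Baetis", None, "Caenis", "Baetis"]

    x = sum(1 for taxon in assignments if taxon is not None)  # correctly assigned
    N = len(assignments)                                      # expected assignments
    ptc = x / N * 100
    print(ptc)  # 75.0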
B.5 COMPLETENESS
Completeness is a metric for judging the overall success of a data collection effort with interest in
establishing sufficient empirical evidence to test a hypothesis or evaluate the achievement of one or
more project sampling objectives. It is most effective when the sampling design is based on
statistical procedures to determine the minimum number of samples (or measurements/observations)
needed to achieve a specified level of confidence, or to achieve the sampling precision given the
variance in the sampling and measurement process. In other words, assessing completeness is not a
substitute for thoughtful sampling design. Percent completeness is calculated as shown below:
Percent Completeness (%C) = (V / T) x 100
Where:	V = the number of valid measurements or samples ("Valid" refers to
measurements, observations, or samples that meet predetermined data
quality acceptance criteria)
T = the total number of planned or targeted measurements, observations, or
samples for a given sample population (e.g., strata, treatment site, sample
period or year)
The goal for completeness should be established during the study design process. As tempting as it may
be, setting the completeness goal at 100% is usually counterproductive, since many conditions outside
of the control of the project personnel can result in the inability to collect a "valid" measurement or
observation. Common examples include persistent poor weather or unsafe conditions, faulty
instrumentation, and samples that have been either compromised during transit or held beyond
allowable holding times. For projects involving measurements of a large number of parameters (e.g.,
pollutants, conditions, or physical properties), it is often most efficient to assess completeness at the
parameter level. For example, 50 samples (representing either samples collected for laboratory analyses
or on-site sample units) analyzed for 20 parameters each means that "T" in the equation above is 1,000.
The loss of one of those 50 samples represents 2% of the potential measurements, thus equating to 98%
completeness; a completeness goal of 99% is therefore no more achievable than a goal of 100% in this
example. The closer the %C is to 1 (or 100%), the greater the degree to which the predetermined
(targeted) number of valid samples was achieved. Failure to meet %C goals can result in imprecise
estimates of population parameters for a particular site or population of interest, and can impact their
representativeness and comparability within and across sites and sample events.
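The parameter-level bookkeeping described above is simple to script. This sketch (ours) reproduces the 50-sample, 20-parameter example:

    def percent_completeness(valid, planned):
        """%C: valid measurements or samples as a share of those planned."""
        return valid / planned * 100.0

    planned = 50 * 20       # 50 samples x 20 parameters = 1,000 planned measurements
    valid = planned - 20    # losing one sample forfeits all 20 of its parameters
    print(percent_completeness(valid, planned))  # 98.0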
The QA manager may decide to establish performance objectives or data quality acceptance criteria for
%C depending upon the relative importance of maximizing valid samples, such as individual
samples or measurement variables considered necessary to produce unbiased and precise estimates for
a multi-metric indicator (e.g., macroinvertebrate IBI, Floristic Quality Indicator). See discussion in Section
B.4.2.2, where percent taxonomic completeness (or PTC) represents this type of example.
B.6 DETECTABILITY
Detectability in environmental or ecological monitoring is often represented simply as the sensitivity of
an instrument or analytical method to accurately measure the minimum absolute quantity (e.g., limit of
detection) of a given organic or inorganic compound, or the ability of an observer (e.g., taxonomist) to
accurately distinguish the identity of an organism to its lowest (i.e., most precise) level of taxonomic
classification. A more complete representation of detectability requires a description of its two principal
components: (1) sensitivity - the true positive rate or the proportion of correct detections (i.e., true
presence) when the condition is positive (e.g., when a species, ecological feature or environmental
condition is present), and (2) specificity - the true negative rate or the proportion of correct times no
detection (i.e., true absence) is determined when the condition is negative (e.g., a given species or
ecological feature or condition is truly absent). Respectively, each can be calculated as:
Sensitivity = Sn = True Positive (TP) / (True Positive + False Negative (FN))
Specificity = Sp = True Negative (TN) / (True Negative + False Positive (FP))
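For illustration, both rates can be computed directly from the four cells of a confusion matrix. This sketch (ours, with invented example counts) also notes the hypothesis-testing complements discussed below:

    def sensitivity(tp, fn):
        """True positive rate: correct detections when the condition is present."""
        return tp / (tp + fn)

    def specificity(tn, fp):
        """True negative rate: correct non-detections when the condition is absent."""
        return tn / (tn + fp)

    # Example: 45 true detections, 5 missed; 38 true absences, 12 false detections
    sn = sensitivity(tp=45, fn=5)    # 0.90; false negative rate (beta) = 1 - Sn
    sp = specificity(tn=38, fp=12)   # 0.76; false positive rate (alpha) = 1 - Sp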
Tradeoffs exist, however, when efforts are made to optimize sensitivity over specificity, given that the
two are inversely related (Allouche et al. 2006; Parikh et al. 2008). For example, Groom and Whild (2017)
found that false negative errors increased during attempts to reduce the number of false positive
observations of plant species, as a result of requiring additional confirmation prior to acceptance.
At a higher level, the lack of precision in detecting true ecological change is perhaps the largest concern
for restoration project planners, and highlights the importance of establishing objective criteria such as
significance tests and assessment of biologically meaningful effect-size to support effective decision
making. No data should be assumed error-free and 100% reliable in their application to support all
project decisions. The variability inherent in data precision, as well as in data accuracy, may lead to
decision error that can compromise the ability of project planners to discern whether their restoration
prescriptions were effective in achieving stated restoration goals and objectives. Detectability is an
important indicator of data quality, and any imprecision in estimates of its constituent components (Sn
and Sp) can contribute to decision error in ecological restoration monitoring. In hypothesis testing, the
false positive rate is equivalent to alpha or Type 1 error (1 - specificity), while the false negative rate
equals beta or Type 2 error (1 - sensitivity) (Owusu-Ansah et al. 2016). Refer to Chapter 7,
Section 7.2 for additional discussion regarding considerations for assessing detectability.
B.7 REPRESENTATIVENESS AND COMPARABILITY
Representativeness is "the degree to which the data accurately and precisely represent a characteristic
of a population parameter, variation of a property, a process characteristic, or an operational condition"
(EPA 2002). Essentially, poor performance in one or more data quality indicators can directly or indirectly
affect the representation of an estimated parameter of an ecological condition or community attribute.
Chapter 2, Section 2.8 emphasizes the importance of choosing an appropriate study design and
monitoring strategy that incorporates probability-based sampling to measure change in one or more
ecological attributes, and that can be considered representative of true change.
Comparability is a qualitative term and expresses the degree to which the data can be compared to or
combined with other data collected using similar procedures. Crew training, strict adherence to SOPs,
maintenance of all equipment to quality standards, use of consistent reporting units, and systematic
assessment of whether QC samples comply with stated performance objectives provide the greatest
benefit to QA managers for ensuring that data are comparable within and between crews, sites, and
time periods for the duration of the monitoring program. Reliance upon a performance-based approach
is recommended since it generally requires judicious use of quality performance documentation to
support a continual improvement process. This documentation can be referred to at a later time if/when
questions arise that may challenge the comparability of project results or outcomes.
B.8 MEASURING REPEATABILITY
In ecological restoration monitoring, it can be tempting to assume that observers or crews trained to
implement an identical SOP will generate comparable data of quality sufficient to support stated project
goals and objectives (i.e., their intended use). This assumption can also be extended to scientific
instrumentation that observers or crews might rely upon to quantify certain parameters, such as
dissolved oxygen or pH for a given water sample. Ideally, this assumption should be routinely validated
using quantitative methods and analysis of re-measurement (QC) data to document the level of
confidence or reliability in making this assumption. For example, observer counts of a discrete variable
such as the number of trees with DBH > 5 cm present in a 100 m2 rectangular plot may have a high level
of repeatability when measured (counted) by the same observer, or high reproducibility when counts
are conducted by different observers within a short period of time. In contrast, repeatability (and
reproducibility) may be considerably reduced as the level of difficulty in enumerating a discrete variable
increases, such as counting the number of live basal stems for an abundant shrub species with
numerous stems in the same plot size.
The level of uncertainty in data comparability can be further assessed by using a procedure to determine
repeatability (within and between observers, crew(s), samples, or procedures). This repeatability
determination addresses only the level of agreement in the precision of a sample estimate and not the
actual accuracy of the measurements themselves. Repeatability (R) is an index or measure that ranges
from 0 to 1.0, where 0 implies that results are least similar and a value of 1.0 implies that results are
most similar; it is also known as the intra-class correlation coefficient (Krebs 1989; Sokal and Rohlf
1995). In Bartlett and Frost (2008), the intra-class correlation (ICC) coefficient is described as a
"reliability parameter" that is a dimensionless quantity difficult to interpret at what value should it be
considered representative of high reliability.
The method of calculating R relies upon analysis of variance (ANOVA) procedures (described in detail in
most textbooks on statistical applications) to partition the sums of squares (SS) and mean squares (MS)
for "within" a sample of repeated measurements collected by a specific observer, crew or procedure,
and "between" (among or across) different samples of repeated measurements collected by different
observers, crews or procedures. See Koo and Li (2016) for additional guidance in the use and
reporting of ICC results as a measure of reliability. Repeatability, as presented in Krebs (1989), is
calculated as shown below:
Repeatability (R) = sA² / (sE² + sA²)

Where:	sA² = Variance between samples (i.e., repeated measurements or observations
obtained by different observers, crews, groups or procedures)
sE² = Variance within a sample (i.e., repeated measurements or observations
obtained by the same observer, crew, group or procedure)

Where:	sA² = [ MS(between samples) - MS(within a sample) ] / n0
sE² = MS(within a sample)

Note: Estimates for MS(between samples) and MS(within a sample) are derived by
calculating the sums of squares (SS) and mean square (MS) for each, and can
also be obtained directly from the respective error source presented in the
ANOVA table results.
And, where:	n0 = [ 1 / (a - 1) ] x [ Σni - (Σni² / Σni) ]

n0 = A coefficient representing the effective sample size per group
a = Number of distinct units or samples that are re-measured
ni = Total number of re-measurements (for all observers, crews, groups or
procedures) collected on unit i

Notes: 1) The effective sample size per group (n0) is determined by multiplying
the quotient of 1/(a-1) times the result of the sum of the total number of re-
measurements on unit i (ni) minus the result of dividing the sum of the total
number of re-measurements squared (ni²) by the sum of the total number of re-
measurements (ni).
2) Repeatability is a measure of the proportion of variance
in the data that occurs between samples; when R = 1.0, variance within a sample
or group will be equal to zero and measurements are perfectly repeatable (Krebs
1989).
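The ANOVA partitioning above can be sketched in Python (an illustrative sketch of ours, assuming NumPy is available; variable names follow the notation above, and the example measurements are invented):

    import numpy as np

    def repeatability(groups):
        """Repeatability (intra-class correlation) following the one-way ANOVA
        partitioning described above (Krebs 1989). 'groups' holds one sequence
        of repeated measurements per re-measured unit (plot, sample, etc.)."""
        groups = [np.asarray(g, dtype=float) for g in groups]
        a = len(groups)                            # number of re-measured units
        n_i = np.array([g.size for g in groups])   # re-measurements per unit
        N = n_i.sum()
        grand_mean = np.concatenate(groups).mean()

        ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
        ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
        ms_between = ss_between / (a - 1)
        ms_within = ss_within / (N - a)

        n0 = (N - (n_i ** 2).sum() / N) / (a - 1)  # effective sample size per group
        s2_a = (ms_between - ms_within) / n0       # variance between samples
        s2_e = ms_within                           # variance within a sample
        return s2_a / (s2_a + s2_e), ms_between, ms_within, n0, a, N

    # Example: stem counts in four plots, each re-counted by three observers
    R, ms_b, ms_w, n0, a, N = repeatability(
        [[12, 13, 12], [20, 19, 21], [7, 7, 8], [15, 14, 15]])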
95% Confidence Interval (CI) of repeatability - CI of R
Since R is an estimate of a parameter, equations are provided to estimate a 95% confidence interval
around R in order to determine the precision of the estimate (Bartlett and Frost 2008). The interval can
also be useful when comparing repeated trials to assess whether repeatability has improved
significantly over time. The following CI equations are drawn from Krebs (1989).
RL = 1 - [ n0 x MS(within a sample) x F ] / [ MS(between samples) + (n0 - 1) x MS(within a sample) x F ]

Where:	F = Critical value from the F-distribution table for the α/2 level of confidence
(e.g., F.025(ν1, ν2) when α = 0.05), with ν1 = a - 1 degrees of freedom and
ν2 = Σ(ni - 1) degrees of freedom

RU = 1 - [ n0 x MS(within a sample) x F ] / [ MS(between samples) + (n0 - 1) x MS(within a sample) x F ]

Where:	F = Critical value from the F-distribution table for the 1 - α/2 level of confidence
(e.g., F.975(ν1, ν2) when α = 0.05), with ν1 = a - 1 degrees of freedom and
ν2 = Σ(ni - 1) degrees of freedom
Notes: 1) Confidence interval limits may not be symmetrical around R. 2) The
lower (RL) and upper (RU) limits for the ninety-five percent confidence interval
for repeatability are determined by subtracting from one the result of the
numerator [n0 x MS(within a sample) x F] divided by the denominator
[MS(between samples) + (MS(within a sample) x (n0 - 1) x F)].
B.9 ADDITIONAL READINGS
•	Gitzen, Robert A., Joshua J. Millspaugh, Andrew B. Cooper and Daniel S. Licht. 2014. Design and
Analysis of Long-term Ecological Monitoring Studies (First Ed.). Cambridge, United Kingdom:
Cambridge University Press.
•	Michell, Joel. 1999. Measurement in Psychology: A Critical History of a Methodological Concept. New
York, NY: Cambridge University Press.
•	Kwak, Sang Gyu and Jong Hae Kim. 2017. "Central Limit Theorem: The Cornerstone of Modern
Statistics." Korean J Anesthesiology, 70 (2): 144-156. doi: 10.4097/kjae.2017.70.2.144.
•	Stapanian, Martin A., Forest C. Garner, Kirk E. Fitzgerald, George T. Flatman, and John M. Nocerino.
1993. "Finding Suspected Causes of Measurement Error in Multivariate Environmental Data."
Journal of Chemometrics, 7 (3): 165-176. doi: 10.1002/cem.1180070304.
•	Deutler, Tilmann. 1991. "Grubbs-Type Estimators for Reproducibility Variances in an Interlaboratory
Test Study." Journal of Quality Technology, 23 (4): 324-335.
•	U.S. Environmental Protection Agency. 2004. Wadeable Stream Assessment: Benthic Laboratory
Methods. Publication No. EPA 841-B-04-007. Washington, D.C.: Office of Environmental Information.
•	U.S. Environmental Protection Agency. 2008. National Rivers and Streams Assessment: Laboratory
Methods Manual. Publication No. EPA-841-B-07-010. Washington, D.C.: Office of Environmental
Information.
•	Wolak, Matthew E., Daphne J. Fairbairn, and Yale R. Paulsen. 2012. "Guidelines for Estimating
Repeatability." Methods in Ecology and Evolution, 3: 129-137. doi: 10.1111/j.2041-210X.2011.00125.x.
B.10 APPENDIX B REFERENCES
Allouche, Omri, Asaf Tsoar, and Ronen Kadmon. 2006. "Assessing the Accuracy of Species Distribution
Models: Prevalence, Kappa and the True Skill Statistic (TSS)." Journal of Applied Ecology, 43 (6):
1223-1232. doi: 10.1111/j.1365-2664.2006.01214.x.
Bartlett, Jonathan W. and Chris Frost. 2008. "Reliability, Repeatability and Reproducibility: Analysis of
Measurement Errors in Continuous Variables." Ultrasound in Obstetrics & Gynecology, 31: 466-475.
doi: 10.1002/uog.5256
Bland, J. Martin, and Douglas G. Altman. 1986. "Statistical Methods for Assessing Agreement between
Two Methods of Clinical Measurement." Lancet, 327 (8476): 307-310. doi: 10.1016/S0140-
6736(86)90837-8.
Carlberg, Conrad. 2014. Statistical Analysis with Microsoft Excel 2013: About Variables and Values. Que
Publishing, Indianapolis, Indiana.
Conover, William J. 1999. Practical Nonparametric Statistics (3rd Edition). John Wiley & Sons, New York, NY.
Devore, Jay L. 2012. Probability and Statistics for Engineering and the Sciences (Eighth Edition). Electronic
Version. Brooks/Cole, Boston, MA.
Luce, R. Duncan, David H. Krantz, Patrick Suppes, and Amos Tversky. 2006. Foundations of Measurement
Volume III: Representation, Axiomatization, and Invariance. Dover Publications, Mineola, New York.
Flotemersch, Joseph E., James B. Stribling, and Michael J. Paul. 2006. Concepts and Approaches for the
Bioassessment of Non-wadeable Streams and Rivers. EPA 600-R-06-127. US Environmental
Protection Agency, Cincinnati, Ohio.
Groom, Quentin J. and Sarah J. Whild. 2017. "Characterisation of False-positive Observations in
Botanical Surveys." PeerJ, 5:e3324. doi: 10.7717/peerj.3324.
Koo, Terry K. and Mae Y. Li. 2016. "A Guideline of Selecting and Reporting Intraclass Correlation
Coefficients for Reliability Research." Journal of Chiropractic Medicine, 15 (2): 155-163.
doi: 10.1016/j.jcm.2016.02.012.
Krebs, Charles J. 1989. Ecological Methodology. Harper, New York, NY.
Mangiafico, Salvatore S. 2016. Summary and Analysis of Extension Program Evaluation in R, version
1.1.1. Rutgers Cooperative Extension, New Brunswick, NJ. Online: rcompanion.org/handbook/.
McDonald, John H. 2014. Handbook of Biological Statistics (3rd. Ed.). Sparky House Publishing,
Baltimore, MD. Online: http://www.biostathandbook.com/HandbookBioStatThird.pdf.
Michell, Joel. 1986. "Measurement Scales and Statistics: A Clash of Paradigms." Psychological Bulletin,
100 (3): 398-407. doi: 10.1037/0033-2909.100.3.398
Owusu-Ansah, Emmanuel D.J., Angelina Sampson, Amponsah K. Samuel, and Abaidoo Robert. 2016.
"Sensitivity and Specificity Analysis Relation to Statistical Hypothesis Testing and Its Errors:
Application to Cryptosporidium Detection Techniques." Open Journal of Applied Sciences, 6 (4):
209-216. doi: 10.4236/ojapps.2016.64022.
Parikh, Rajul, Annie Mathai, Shefali Parikh, G Chandra Sekhar, and Ravi Thomas. 2008. "Understanding
and Using Sensitivity, Specificity and Predictive Values." Indian J Ophthalmology, 56 (1): 45-50. doi:
10.4103/0301-4738.37595.
Stapanian, Martin A., Timothy E. Lewis, Craig J. Palmer, Molly M. Amos. 2016. "Assessing Accuracy and
Precision for Field and Laboratory Data: A Perspective in Ecosystem Restoration." Restoration
Ecology, 24 (1): 18-26. doi: 10.1111/rec.12284.
Sokal, Robert R. and James F. Rohlf. 1995. Biometry: The Principles and Practice of Statistics in Biological
Research, 3rd ed. W. H. Freeman and Company, New York.
Stevens, Stanley S. 1946. "On the Theory of Scales of Measurement." Science, 103: 677-680. doi:
10.1126/science.103.2684.677.
Stevens, Stanley S. 1951. "Mathematics, Measurement, and Psychophysics." In S.S. Stevens (Ed.),
Handbook of experimental psychology. John Wiley, New York.
Stribling, James B., Kristen L. Pavlik, Susan M. Holdsworth, Erik W. Leppo. 2008. "Data Quality,
Performance, and Uncertainty in Taxonomic Identification for Biological Assessments." Journal of
the North American Benthological Society, 27 (4): 906-919. doi: 10.1899/07-175.1.
Stribling, James B. 2011. "Partitioning Error Sources for Quality Control and Comparability Analysis in
Biological Monitoring and Assessment." In Modern Approaches to Quality Control, edited by
Ahmed Badr Eldin, 59-84. doi: 10.5772/22388.
U.S. Environmental Protection Agency. 2002. Overview of the EPA Quality System for Environmental
Data and Technology. Publication No. EPA/240/R-02/003. Washington, DC: Office of
Environmental Information.
U.S. Environmental Protection Agency. 2006. Data Quality Assessment: Statistical Methods for
Practitioners (EPA QA/G-9S). Publication No. EPA/240/B-06/003. Washington, DC: Office of
Environmental Information.
U.S. Environmental Protection Agency. 2012. National Lakes Assessment: Laboratory Operations Manual
(Version 1.1 October 9, 2012). Publication No. EPA-841-B-11-004. Washington, DC: Office of Water.
Velleman, Paul F. and Leland Wilkinson. 1993. "Nominal, Ordinal, Interval, and Ratio Typologies are
Misleading." The American Statistician, 47 (1): 65-72. doi: 10.2307/2684788.
Walpole, Ronald E., Raymond H. Myers, Sharon L. Myers, and Keying Ye. 2012. Probability and Statistics
for Engineers and Scientists (Ninth Edition). Prentice Hall, Boston, MA.
Walther, Bruno A. and Joslin L. Moore. 2005. "The Concepts of Bias, Precision and Accuracy, and their
Use in Testing the Performance of Species Richness Estimators, with a Literature Review of
Estimator Performance." Ecography, 28 (6): 815-829. doi: 10.1111/j.2005.0906-7590.04112.x.
Westfall, James A. 2009. FIA National Assessment of Data Quality for Forest Health Indicators. General
Technical Report NRS-53. U.S. Department of Agriculture Forest Service, Northern Research
Station, Newtown Square, PA.
Wilkinson, Leland, Grant Blank, and Christian Gruber. 1996. Desktop Data Analysis with Systat. Prentice
Hall, Upper Saddle River, NJ.
Zar, Jerrold H. 2010. Biostatistical Analysis (5th Edition). Pearson Prentice-Hall, Upper Saddle River, NJ.
APPENDIX C: QUALITY ASSURANCE PROJECT PLAN (QAPP) TEMPLATE FOR
ECOLOGICAL RESTORATION PROJECTS
This template was created as a tool to assist with the development of quality assurance project plans
(QAPPs) for projects focused on ecological restoration and the control of invasive species. It contains an
outline of the major sections of a QAPP, based on EPA Requirements for Quality Assurance Project Plans
(EPA QA/R-5), EPA/240/B-01/003, March 2001, some of which have been revised for the purposes of this
template, based on strategies described in the Application of QA/QC Principles to Ecological Restoration
Project Monitoring (referred to herein as the "IERQC Guidance") to better fit the needs of projects
involving ecological restoration and/or control of invasive species. Each section summarizes suggested
topics for users to include in their QAPP. Additional resources include: (1) example tables to help clearly
present quality assurance (QA) and quality control (QC) approaches, (2) example text to assist with
certain topics, (3) a table at the end of each section citing locations within the IERQC Guidance where
more details can be found, and (4) helpful tips for consideration. Please note that the items listed in the
IERQC Guidance Reference tables are based on Appendix D, Quality Assurance Project Plan Checklist for
Ecological Restoration Projects. Items not covered in the IERQC Guidance are noted in the table.
Users of this QAPP template must consult EPA QA/R-5, or the more general Guidance for Quality
Assurance Project Plans (EPA QA/G-5), EPA/240/R-02/009, December 2002, as needed to obtain additional
details and guidance for development of a QAPP. Instances where the IERQC Guidance and EPA QA/R-5
differ in the use of specific terms are noted in this template.
Examples provided are for illustrative purposes only and may not apply directly to any specific project.
The tables provided are meant to assist users; they are optional and can be edited to fit specific project
needs. Project planners should ensure that the application of QA/QC approaches is commensurate with
factors such as project objectives, risks associated with decision errors, resources, and schedules. Any
information provided in this template or the IERQC Guidance (including associated appendices) does not
supersede requirements specified in EPA QA/R-5.
Acknowledgments:
This QAPP template was prepared by CSRA LLC under EPA contracts with the direction of Louis Blume,
Quality Manager of EPA Great Lakes National Program Office.
DRAFT
QUALITY ASSURANCE PROJECT PLAN
Title of Project (or portion of project addressed by this QAPP)
Prepared for:

Contract/WA/Grant No./Project Identifier 


Contract/WA/Grant # or Project Identifier
Version
Date
Page 2
SECTION A - PROJECT MANAGEMENT
A.1 TITLE OF PLAN AND APPROVAL
Quality Assurance Project Plan

Prepared by:

	 Date:	
, Project Manager / Principal Investigator
	 Date:	
, Quality Assurance Manager (or equivalent)
	 Date:	

	 Date:	
, US EPA, Project Officer
	 Date:	
, US EPA, Quality Assurance Manager
Contract/WA/Grant # or Project Identifier
Version
Date
Page 3
A.2 TABLE OF CONTENTS

Section A - Project Management	2
A.1 Title of Plan and Approval	2
A.2 Table of Contents	3
A.3 Distribution List	5
A.4 Project/Task Organization	5
A.5 Problem Definition/Background	5
A.6 Project/Task Description	6
A.7 Quality Objectives and Criteria for Measurement Data	7
A.8 Special Training Requirements or Certifications	8
A.9	Documents and Records	10
Section B - Data Generation and Acquisition	11
B.1	Sampling/Measurement Design	11
B.2 Field Data Collection and Sampling Method Requirements	11
B.3 Sample Handling and Custody Requirements	12
B.4 Laboratory Analytical Methods Requirements	12
B.5 Quality Control Requirements	13
B.6 Instrument/Equipment Testing, Inspection and Maintenance	13
B.7 Instrument/Equipment Calibration and Frequency	14
B.8 Inspection/Acceptance for Supplies and Consumables	14
B.9 Non-Direct Measurements	14
B.10	Data Management	15
Section C - Assessment and Oversight	16
C.1	Assessment and Response Actions	16
C.2	Reports to Management	16
Section D - Data Validation and Usability	17
D.1	Data Review, Verification and Validation Planning	17
D.2 Verification and Validation Methods	17
D.3 Reconciliation with User Requirements	18
Tables	19
Contract/WA/Grant # or Project Identifier
Version
Date
Page 4
Table C1. Roles and Responsibilities	19
Table C2. Project Schedule Timeline	19
Table C3. Target Variables for Field Measurement or Observation	19
Table C4. Data Quality Acceptance Criteria for Precision, Bias and Accuracy for Field
Observations/Measurements	19
Table C5. Sampling Locations	20
Table C6. Standard Operating Procedures	20
Table C7. Sampling QC	20
Table C8. Analytical QC	21
Table C9. Reports	21
Exhibits	22
Exhibit 1. Types of QC Field Checks	22
Appendices

APPENDIX D: QUALITY ASSURANCE PROJECT PLAN (QAPP) REVIEW CHECKLIST
FOR ECOLOGICAL RESTORATION PROJECTS
This checklist was created as a tool to assist with the review of quality assurance project plans (QAPPs)
for projects focused on ecological restoration and the control of invasive species. It contains the major
sections of a QAPP, based on EPA Requirements for Quality Assurance Project Plans (EPA QA/R-5),
EPA/240/B-01/003, March 2001, some of which have been revised for the purposes of this checklist,
based on strategies described in the Application of QA/QC Principles to Ecological Restoration Project
Monitoring (referred to herein as the "IERQC Guidance") to better fit the needs of projects involving
ecological restoration and/or control of invasive species.
The major sections within the QAPP review checklist and their corresponding items follow the outline
presented in Appendix C: Quality Assurance Project Plan (QAPP) Template for Ecological Restoration
Projects (for more detail on each item see the IERQC Reference tables). Users who create their QAPP
based on the template provided in Appendix C will be able to easily use this QAPP review checklist as a
quick guide during QAPP development and as a tool to better understand what components of a QAPP
will come under scrutiny during the QAPP review process.
Users of this QAPP review checklist must consult EPA QA/R-5, or the more general Guidance for Quality
Assurance Project Plans (EPA QA/G-5), EPA/240/R-02/009, December 2002, as needed to obtain
additional details and guidance for review of a QAPP.
QAPP reviewers should ensure that the application of QA/QC approaches is commensurate with factors
such as project objectives, risks associated with decision errors, resources, and schedules. Any
information provided in this checklist or the IERQC Guidance (including their associated appendices)
does not supersede requirements specified in EPA QA/R-5.
GLOSSARY
NOTE: The definitions provided below are intended specifically to apply for the purposes of this
document, and do not necessarily reflect definitions used by other documents or organizations.
Acceptance Criteria - See "Data Quality Acceptance Criteria."
Accuracy - The degree to which a measured or observer-determined value agrees with a known or
reference value; includes a combination of random error (precision) and systematic error (bias).
Adaptive Management - A structured and sequential learning process (decision-support) that increases
knowledge and reduces uncertainty, iteratively leading to more effective ecological restoration (Clark
County 2016; Sutter 2018).
Ancillary Variable - Information collected during data gathering that is considered "ancillary" to the
"primary" variables identified in project objective, sampling objective, and hypothesis statements. A
variable initially determined to be ancillary may later be reclassified as a primary variable depending on
the importance of its effect on explaining the variability of the system and achievement of project
objectives. More generally, ancillary variables include those helpful in evaluating data quality and
interpreting monitoring results, but are not directly used to determine if sampling objectives were
achieved. Examples include information about who collected the data and when, weather conditions at
the time of data collection, the orientation, position, or settings of instrumentation or equipment used
when evaluating a primary variable, and other information that may be related to the primary variables
of interest collected with the intention of controlling or accounting for co-variability during data
analysis procedures.
Baseline Inventory - A description of current biotic and abiotic elements of a site prior to restoration,
including its structural, functional and compositional attributes and current conditions (SER 2004). A
baseline inventory is implemented at the commencement of the restoration planning stage prior to the
implementation of restoration activities (or treatment) and is replicated across any included reference
or control sites (i.e., reference models) to inform planning, including restoration goals and treatment
prescriptions, and to quantify site attributes necessary to conduct baseline comparisons with targeted
change (i.e., effect-size) concurrent to or after restoration activities have been concluded (McDonald et
al. 2016). Sometimes referred to as "baseline condition sampling."
Bias - The systematic or persistent distortion of a data collection process resulting in error in one
direction.
Blind Check - A type of field QC check used to quantitatively evaluate method implementation. These
QC checks are conducted by a separate QA crew that independently measures or observes all (or a
subset) of the sampling unit or variables that have been previously assessed by a routine field crew as
part of their ongoing QC activities. A blind check serves as an objective and systematic assessment to
determine whether data collection activities are being implemented as planned and are suitable to
achieve data quality acceptance criteria. During these checks, the QA crew does not have access to the
completed routine crew data.
Calibration - Comparison of a method standard, instrument, or an observation with a standard or
instrument of higher accuracy, or an observation by an expert to detect and quantify inaccuracies and to
report or eliminate those inaccuracies by making appropriate adjustments.
Calibration Check - QC checks that involve routine field crew members measuring or observing stable
variables that have been previously determined by an expert. These checks can be performed on one or
more variables independent of a sampling unit or plot, staged under controlled conditions, or using
established calibration plots within or representative of the project area that have been fully
characterized by a QA crew. Results are used to evaluate the data collection abilities of routine field
crew members and can support determination of achievable sampling objectives and data quality
acceptance criteria.
Calibration Plot - A sample point or plot within or representative of the project area that has been fully
characterized by a QA crew for the purpose of evaluating the field readiness of routine crew members
and their ability to achieve data quality acceptance criteria.
Censored Values - Data that are reported as less than or greater than a certain value. Censored values
can represent measurement results that are below a measurement sensitivity threshold (detection limit)
or above a measurement range.
Chain-of-Custody (COC) - An unbroken trail of accountability that ensures the physical security of
samples, data, and records.
Cold Check - A type of field QC check used to qualitatively evaluate method implementation. These QC
checks are conducted by a separate QA crew that independently measures or observes all (or a subset)
of the sampling unit or variables that have been previously assessed by a routine field crew as part of
either their training efforts or an ongoing QC program. Cold checks are objective and systematic
assessments that determine whether data collection activities are being implemented as planned and
are suitable to achieve data quality acceptance criteria. The routine field crew may or may not be
present at the time of a cold check. During these checks, the QA crew has access to the completed
routine crew data.
Comparability - Confidence that data can be compared to or combined with other data collected for
similar purposes.
Completeness - A measure of the amount of valid data obtained from a data collection system
compared to the amount that was expected to be obtained under correct, normal conditions.
Compliance Check - A verification technique used during data review to identify data that deviate from
study requirements; the technique includes checks to determine if results conform to (1) specified
reporting formats and units, and (2) procedural requirements and QC acceptance criteria for data
quality indicators.
Compliance Rate Objective - The expected frequency of compliance in meeting an error tolerance
specification.
Consistency Checks - A collection of data validation techniques used during data review that address (1)
how well related variables within the dataset compare internally (internal consistency), (2) how well the
data for a given variable compares to similar but external data for that variable (external consistency),
and (3) how well the data compare to predictions of natural inherent relationships of the measured
variables (ecological consistency).
Cost of Quality - A methodology for quantifying the total cost of QA/QC activities and deficiencies in the
quality of a product or service. The methodology recognizes the cost of NOT creating a quality product
or service, or the cost of creating a poor quality product or services. If work has to be re-done to address
problems or issues with quality, the cost will increase (i.e., the cost of quality increases when a poor
quality product is initially produced).
Data Analysis - The mathematical and statistical process of using collected data to describe results,
answer questions, support decision making, and determine whether restoration project objectives
were met. Data analysis may include: mathematical manipulations involved with data reduction (e.g.,
the aggregation of data representing multiple variables into a single indicator or index value),
calculations of descriptive statistics, or the application of established statistical methods necessary to
evaluate model assumptions or to conduct hypothesis testing (e.g., ANOVA, GLM, or principal
components analysis).
Data Assessment - A continuation of the data review process that involves addressing data quality flags,
assessing data quality indicators (DQIs) across measured and calculated variables, and reconciling the
quality assessment of data with project assumptions and stated sampling objectives.
Data Certification - Ensures a secure validated database (or dataset) has been completed, documented,
and certified (if applicable) and that the data within the database are suitable for final usability
assessments in preparation for analysis, reporting, distribution, and archiving.
Data Management - A formal, structured process that promotes data quality, availability and
preservation of data for analyses, informed decision making, and data reuse. This process requires
planning, documentation, and effective implementation of day-to-day workflow and procedures to
facilitate the security, integrity and reliability of data collected to support project goals and objectives.
Data Management Plan (DMP) - A guiding document that combines policy-based requirements,
procedural guidelines, and standard operating procedures to facilitate the effective management of
project data for each component of the data management lifecycle. The plan describes strategies that
can create an efficient and effective flow of data from its original source to the final, archived database.
Data Management System (DMS) - The computer environment, including system hardware and
software, used to electronically manage the data. A DMS is used to help improve management and data
review functionality, data security (including protection against data corruption and loss), and overall
project performance and efficiency, and generally includes: (1) a computer and/or server workspace
that is accessible to all approved project participants, (2) effective and clear policies on read/write
permissions, and (3) well-defined procedures and locations for storing files and data.
Data Quality Acceptance Criteria - Qualitative and quantitative criteria used to evaluate and control
various phases (e.g., sampling, transportation, preparation, analysis) of the measurement process to
ensure the total measurement uncertainty is within the range prescribed by the sampling objectives;
data quality acceptance criteria specify the level of quality needed for each measurement or observation
and data quality indicators (e.g., precision, bias, accuracy, representativeness, comparability,
completeness, detectability) that can be used to determine if this level of quality has been achieved.
Data quality acceptance criteria are usually established during the project planning phase and usually
include a tolerance objective and a compliance rate objective. Sometimes referred to by other names,
such as "measurement quality objectives."
Data Quality Act - Passed by the U.S. Congress as a two-sentence rider to a 2001 appropriations bill.
The Act instructed the U.S. Office of Management and Budget (OMB) to issue guidelines that (1)
"Provide policy and procedural guidance to federal agencies for ensuring and maximizing the quality,
objectivity, utility, and integrity of information including statistical information disseminated by Federal
agencies" and (2) "Establish administrative mechanisms allowing affected persons to seek and obtain
correction of information maintained and disseminated by the agency that does not comply with the
guidelines." Also referred to as the "Information Quality Act."
Data Quality Indicator (DQI) - A quantitative statistic or qualitative descriptor used to interpret the
degree to which data are acceptable or useful. Commonly accepted DQIs are precision, bias, accuracy,
representativeness, comparability, completeness, and sensitivity (EPA 2002; Taylor 1987). In ecological
restoration projects that also rely on observational measurements, the term "detectability" is
recommended because it encompasses both sensitivity and specificity.
Data Validation - See "Validation."
Data Verification - See "Verification."
Detectability - A measure of the sensitivity and specificity of the sampling design, measurement
procedures, instrumentation and/or data collection personnel in detecting true differences in a target
variable at ambient levels or when the measurement or observation of a target variable is dependent
upon detecting the true occurrence of a rare, cryptic or secretive organism. Also see "Measurement
Sensitivity" and "Measurement Specificity."
Double Data Entry - A process designed to minimize data entry errors; it involves manual entry of the
same dataset (e.g., from data reporting forms) by two different individuals, automated comparison of
the two datasets, and correction of any discrepancies based on re-examination of the original data
source. Also referred to by other terms such as "dual data entry" and "two-pass verification."
Ecological Restoration - The process of assisting the recovery of an ecosystem that has been degraded,
damaged, or destroyed. It is an intentional activity that initiates or accelerates ecosystem recovery with
respect to its health (functional processes), integrity (species composition and community structure),
and sustainability (resistance to disturbance and resilience) (Clewell et al. 2005).
Environmental Data - Any measurements or information that describe environmental processes,
location, or conditions; ecological or health effects and consequences; or the performance of
environmental technology. EPA defines environmental data as both primary data (i.e., information
collected directly from measurements) and secondary/existing data (i.e., data that were collected for
other purposes or obtained from other sources, including literature, industry surveys, models, data
bases, and information systems) (EPA 2002).
Environmental Information - The American Society of Quality defines this term using the same
definition provided above for "Environmental Data" (ASQ 2014).
Existing Data - Any data that were not directly and specifically generated to support the purpose or
decision at hand. Other commonly used terms to describe existing data include "acquired data," "data
from other sources," "historical data," "secondary source data" and "tertiary source data."
False Negative - See "Type 2 Error."
False Positive - See "Type 1 Error."
Field Audit - See "Hot Check."
Graded Approach - Process of applying management controls to an activity according to the intended
use of the results and the degree of confidence needed in the quality of the results (ASQ 2014). In other
words, the level of QA/QC performed should be commensurate with such factors as the project
objectives, project importance, risks associated with decision errors, resources, and schedules.
Hot Check - A type of field QC check used to objectively and systematically determine if data collection
activities are being implemented as planned. Often referred to as "field audits" or "technical systems
audits," hot checks are used as part of training or during data collection to examine crew performance
and the efficacy of the standard operating procedures used in real time. During these checks, the QA
crew is present on the plot along with the trainee/routine crew. The QA crew observes the routine crew
activities, examines the data collected, and provides immediate feedback regarding crew/trainee
performance and SOP efficacy. These checks are also used to evaluate data quality, particularly for
transitory variables.
Information Quality Act - See "Data Quality Act."
Laboratory Qualifier - Code applied to the data by the contract analytical laboratory to indicate a
verifiable or potential data deficiency or bias.
Management Controls - A system of management functions to plan, implement, check and evaluate
activities for conformity to management and performance objectives (ASQ 2014).
Measurement Error - The difference between a measured or observed value and the true value; it is
caused by the individual, instrument or process used to make the measurements or observations and,
from a study-design perspective, also includes all error introduced during data transcriptions and
reductions.
Measurement Quality Objectives - See "Data Quality Acceptance Criteria."
Measurement Sensitivity - Also called the true positive rate, is the proportion of positives correctly
determined as positive (e.g., the observer correctly detects the true presence of species X or a targeted
ecological condition such as the true proportion of water samples that are impacted by a disturbance).
The measurement sensitivity threshold is also referred to as the detection limit.
Measurement Specificity - Represents the proportion of accurately determined true negatives, or the
proportion of negatives or absences that are correctly identified as negatives or absences (Gitzen 2012;
Drew et al. 2010).
Metadata - Consists of data (information) that "defines and describes other data" (ISO/IEC 11179).
Metadata provides information on what users need in order to accurately interpret and use a dataset
and is generally: (1) descriptive, (2) structured using a standardized textual format (e.g., TXT, XML), and
(3) embedded or maintained separately from the applicable project dataset.
Monitoring - The systematic and usually repetitive collection of information, typically used to track the
status of a variable or system (Atkinson et al. 2004).
Peer Review - The review of a draft product for quality by specialists in the same field of scientific study
who were not involved in producing the draft publication. Used to ensure the quality of published
information meets the standards of the scientific and technical community.
Performance Evaluation - A type of audit in which the quantitative data generated in a measurement
system are obtained independently and compared with routinely obtained data to evaluate the
proficiency of an instrument, observer or crew, analyst, or laboratory.
Precision - The degree of agreement among repeated measurements or observations of the same
attribute under the same or very similar conditions.
Precision Check - A type of field QC check used to quantitatively evaluate method implementation and
provide an unbiased estimate of precision. Members of the routine field crew use precision checks to
independently assess a subset of the sampling unit. These QC checks are conducted by one or more field
crews or crew members to measure or observe the same target variables for the same sample unit or at
the same location. Estimates of within-crew precision can be made if a crew repeats measurement of
target variables for a site where that same crew had previously collected data. Estimates of between-
crew precision can be made if two or more crews collect data at the same site. Precision checks are
often used when it is not feasible to have a separate QA crew. Because these checks are conducted
without the benefit of data collected by experts for comparison, results of these checks cannot be used
to estimate the accuracy or bias of the collected data.
Primary Variable - A variable used to determine if objectives are achieved. Contrast with "ancillary
variables," which describe who collected the data and when, weather conditions at the time of
collection, and other information that can be used to help evaluate data quality and interpret results.
Quality - The degree to which a set of inherent (existing) characteristics fulfils requirements (ISO 9000).
Under the Data Quality Act, the U.S. Office of Management and Budget elaborated on this by saying that
the quality of information is based on the following three characteristics: (1) Objectivity: The
information itself must be accurate, reliable, and unbiased and the manner in which the information is
presented must be accurate, clear, complete, and unbiased; (2) Utility: The information must be useful
for the intended users; and (3) Integrity: The information may not be compromised through corruption
or falsification, either by accident, or by unauthorized access or revision.
Quality Assurance (QA) - Part of quality management focused on providing confidence that quality
requirements will be fulfilled (ISO 2015a). Quality assurance may include management activities
involving planning, implementation, assessment, reporting, and quality improvement to ensure that a
process, item, or service is of the type and quality needed and expected by the customer (EPA 2000b).
Note that QA is generally process-oriented and focused on preventing defects.
Quality Assurance Project Plan (QAPP) - A document describing in comprehensive detail the necessary
QA/QC and other technical activities that need to be implemented to ensure that the results of the work
performed will satisfy the stated performance or acceptance criteria (EPA 2002; ASQ 2014).
Quality Control (QC) - Part of quality management focused on fulfilling quality requirements (ISO
2015a). Quality control includes technical activities that measure the attributes and performance of a
process, item, or service against defined standards to verify that they meet the stated requirements
established by the customer (ASQ 2014; EPA 2000b). Note that QC is generally product-oriented and
focused on identifying defects and confirming requirements were met.
Quality Documentation - Generic term used to describe any form of planning documentation that
describes the QA/QC strategies for all phases of a project lifecycle. Examples include, but are not limited
to Quality Manuals, Quality Plans, Quality Management Plans, Quality Assurance Project Plans, Quality
Assurance Program Plans, and Quality Control Plans.
Quality Indicators - Measurable attributes that are used to evaluate the quality of a particular outcome
or decision.
Quality Improvement - Coordinated activities to direct and control an organization with regard to
quality (ISO 2015a). Note: Quality improvement is a management program for improving the quality of
operations. Such management programs generally entail a formal mechanism for encouraging worker
recommendations with timely management evaluation and feedback or implementation (ASQ 2014).
Quality Management - That aspect of the overall management system of the organization that
determines and implements the quality policy. Quality management typically includes strategic
planning, allocation of resources, and other systematic activities (e.g., planning, implementation, and
assessment) pertaining to the quality system (ASQ 2014).
Quality Management System (QMS) - A structured and documented management system describing
the policies, objectives, principles, organizational authority, responsibilities, accountability, and
implementation plan of an organization for ensuring quality in its work processes, products (items), and
services. The QMS provides the framework or "blueprint" for planning, implementing, documenting, and
assessing work performed by the organization and for carrying out QA and QC activities that will help
ensure that final products and services meet or exceed the organization's objectives and expectations.
(ASQ 2014). A "Quality Management System" is also sometimes referred to as a "Quality System."
Range Check - A data validation technique used during data review to identify data that are (1)
scientifically impossible or illogical, (2) outside the normal range anticipated for the variable, or (3)
within an anticipated range, but at such high or low extremes of the range that they warrant additional
scrutiny.
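For illustration, a minimal Python sketch of such a three-tier range check follows; the variable (water temperature) and all thresholds are hypothetical and would, in practice, be drawn from project-specific reference tables.

    # Three-tier range check; thresholds are hypothetical examples.
    IMPOSSIBLE = (-5.0, 45.0)  # outside: scientifically impossible/illogical
    NORMAL = (0.0, 30.0)       # outside: beyond the anticipated range
    EXTREME = (2.0, 28.0)      # outside: within range but warrants scrutiny

    def range_check(value):
        if not (IMPOSSIBLE[0] <= value <= IMPOSSIBLE[1]):
            return "reject: scientifically impossible value"
        if not (NORMAL[0] <= value <= NORMAL[1]):
            return "flag: outside the anticipated range"
        if not (EXTREME[0] <= value <= EXTREME[1]):
            return "flag: extreme value warranting additional scrutiny"
        return "pass"

    for temp_c in [-10.0, 32.5, 29.0, 18.2]:
        print(temp_c, "->", range_check(temp_c))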
Record - A completed document that provides objective evidence of an item or process. Records may
also include photographs, drawings, magnetic tape, and other data recording media.
Reference Tables - Also known as reference lists, look-up tables, or mapping tables. Reference tables
are used as the basis for conducting compliance, range, and consistency checks during data review
activities. These tables may be static, dynamic, universal, or project-specific, depending on the variable.
Representativeness - The degree to which data represent the characteristic of a population being
assessed.
Restoration Project Goals - Used to describe the desired future conditions or ideal states that an
ecological restoration effort will attempt to achieve. Such goals should be descriptive and convey a
purpose that includes the attribute of interest and the action required to effect change in that attribute.
They should define the general direction for a project but should not define specific desired future
conditions in measurable terms.
Restoration Project Objectives - Specific, measurable, achievable, results-oriented, and time-sensitive
(SMART) statements that build on the restoration project goals by describing desired changes in
condition in delineated areas at a project site. These desired changes can be stated either as a targeted
change from a baseline condition (i.e., an observed trend or improvement) or as the achievement of a
targeted threshold.
Sampling Error - The difference between the characteristics identified in a sample of a population
(observed or measured values) and the actual characteristics of the entire population (true values).
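The following minimal Python sketch illustrates the concept with simulated data; the distribution and sample size are arbitrary choices for demonstration only.

    # Sampling error: a sample statistic vs. the true population value.
    import random

    random.seed(1)
    population = [random.gauss(50.0, 10.0) for _ in range(100_000)]
    true_mean = sum(population) / len(population)

    sample = random.sample(population, 30)
    sample_mean = sum(sample) / len(sample)

    print(f"true mean:      {true_mean:.2f}")
    print(f"sample mean:    {sample_mean:.2f}")
    print(f"sampling error: {sample_mean - true_mean:.2f}")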
Sampling Objectives - Qualitative and quantitative statements that clarify study objectives, define the
appropriate type of data, and specify tolerable levels of potential decision errors that will be used as the
basis for establishing the quality and quantity of data needed to support decisions. Sampling objectives
support the restoration project objectives and are used to determine a sampling design for the monitoring
program and to identify quality requirements for the data to be collected. Well-defined
sampling objectives specify (1) the degree of change that must be detected in order to define project
success, and (2) the degree of certainty needed when stating that the change has been detected.
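For example, these two components translate directly into a sample-size (power) calculation. The sketch below assumes the Python statsmodels package is available; the effect size, alpha, and power values shown are hypothetical.

    # Required sample size from (1) the degree of change to detect and
    # (2) the degree of certainty needed. All values are hypothetical.
    from statsmodels.stats.power import TTestIndPower

    effect_size = 0.5  # standardized degree of change to detect
    alpha = 0.05       # tolerable Type 1 (false change) error rate
    power = 0.80       # 1 - beta: certainty of detecting a real change

    n_per_group = TTestIndPower().solve_power(
        effect_size=effect_size, alpha=alpha, power=power
    )
    print(f"samples needed per group: {n_per_group:.0f}")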
Sensitivity - See "Measurement Sensitivity."
Specificity - See "Measurement Specificity."
Stable Variable - A variable or ecological phenomenon that can be measured repeatedly over a fixed
time period and can be expected to produce the same result for each measurement.
Standard Operating Procedure - Detailed instructions that are designed and implemented to ensure the
uniformity and consistency of a specific activity or set of activities.
Tolerance Objective - The expected range of variation for repeated observations of the same variable (i.e., acceptable repeatability).
Transitory Variable - A variable or ecological phenomenon that is likely to change between repeated
measurements or observations over a fixed time period, and that cannot be measured (or observed)
repeatedly with the expectation of producing the same result. These variables can be affected by time,
stochastic or random effects, and disturbance due to the presence of an observer and/or the sampling
procedures employed.
Type 1 Error - Incorrectly concluding that a change occurred when it did not; also known as a False
Positive or a False Change, and represented by the Greek letter alpha (α).
Type 2 Error - Failing to recognize that a change occurred; also known as a False Negative or a Missed
Change, and represented by the Greek letter beta (β).
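Both error rates can be estimated by simulation. The sketch below, which assumes the NumPy and SciPy packages are available, uses a two-sample t-test with hypothetical means, sample sizes, and trial counts.

    # Estimate Type 1 and Type 2 error rates by simulation.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    alpha, n, trials = 0.05, 20, 2_000

    # No real change: rejections are Type 1 (false change) errors.
    type1 = sum(
        ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
        for _ in range(trials)
    )
    # Real change of 0.5: non-rejections are Type 2 (missed change) errors.
    type2 = sum(
        ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue >= alpha
        for _ in range(trials)
    )

    print(f"Type 1 (false change) rate:  {type1 / trials:.3f}")  # ~ alpha
    print(f"Type 2 (missed change) rate: {type2 / trials:.3f}")  # ~ beta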
Validation - Confirmation, through the provision of objective evidence, that the requirements for a specific
intended use or application have been fulfilled (ISO 2015a; ASQ 2014). For ecological restoration data,
this includes ensuring that the data are scientifically valid, and that they support broader project and
sampling objectives. In other words, data validation asks, "Does it make sense?"
Verification - Confirmation, through the provision of objective evidence, that specified requirements have
been fulfilled (ISO 2015a; ASQ 2014). For ecological restoration data, this includes verification that
specified procedures (SOPs) were followed, data generated or gathered during the project met specified
data quality requirements (e.g., data quality indicators met the acceptance criteria established for each
sampling objective), data entry and data calculations were performed correctly, and that data integrity
has been protected. In other words, data verification asks, "Did they do it right?"
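As an illustration, the Python sketch below performs two simple verification checks: recomputing a recorded calculation and testing a precision indicator against its acceptance criterion. The field names and the 10% relative percent difference (RPD) limit are hypothetical.

    # Two verification checks; names and limits are hypothetical.
    record = {"live_stems": 42, "total_stems": 56, "pct_survival": 75.0}

    # Check 1: was the recorded calculation performed correctly?
    recomputed = 100.0 * record["live_stems"] / record["total_stems"]
    assert abs(recomputed - record["pct_survival"]) < 0.1, "calculation error"

    # Check 2: does a field duplicate meet the precision criterion?
    original, duplicate = 42, 40
    rpd = 100.0 * abs(original - duplicate) / ((original + duplicate) / 2)
    assert rpd <= 10.0, f"RPD {rpd:.1f}% exceeds the 10% criterion"

    print("verification checks passed")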