EPA841-F-05-002E
Goals of the WSA Program

•  Produce a report on the condition of wadeable streams of the U.S. by December 2005

•  Promote collaboration across jurisdictional boundaries in the examination and assessment of water quality

•  Build State capacity through use of survey design and comparability of methods or indicators.
Process to Examine Comparability of Biological Methods

Purpose
The purpose of the comparability studies is to help address the question of whether data
collected using different biological methods can be combined to assess condition.  These
studies will ascertain similarities and differences among biological methods and the
data, indicators, and assessment results derived from those methods. Ultimately, the
ability to aggregate assessment information from different methods will enhance
regional or broader scale assessments.

Primary Questions

A rigorous sampling design is used that evaluates the different methods over both
natural and stressor gradients and reference sites. To what extent are the state methods
and WSA method comparable for:
•   Data?
•   Indicators?
•   Assessments?
Background

[Figure: Timeline for Comparability Studies. Design and Sampling: Winter-Summer 2004. Preliminary Analysis and Follow-up Sampling: Winter-Fall 2005. Analysis and Comparability Report: Winter-Fall 2006.]

•   Each state has its own SOPs, which specify the process for sample collection (including reach designations and timing of sampling), sample processing (e.g., sorting), taxonomic methods, and data analysis and reporting (e.g., assessment process). These state methods differ to varying degrees from each other and from those being used in the WSA.

•   A performance-based system (PBS) approach to methods and monitoring is recommended, in part, because it specifies documenting data quality (NWQMC, 2001).

•   It is desirable to document relevant performance characteristics of each method and evaluate the overall variability inherent in the method.

Performance characteristics
•  Method precision and sensitivity should be documented using the measurement endpoints the state employs in its assessments.
•  Precision is a function of the repeatability of a given endpoint, given the sampling and sample processing methods used.
•  Sensitivity is analogous to a chemical detection limit and is a function of both the precision and the responsiveness of the endpoint to perturbation.
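The precision and sensitivity concepts above can be sketched numerically. This is a minimal illustration only, not an EPA procedure: the taxa-richness values are invented, and the use of a coefficient of variation for precision and a reference-based minimum detectable departure for sensitivity are illustrative assumptions.

```python
from statistics import mean, stdev

def precision_cv(replicates):
    """Precision as the coefficient of variation (%) of a metric's
    replicate values at one site: a lower CV means a more repeatable
    endpoint for the given sampling and processing methods."""
    return 100 * stdev(replicates) / mean(replicates)

def sensitivity_mdd(reference_values, z=1.96):
    """Sensitivity, by analogy with a chemical detection limit: the
    smallest departure from the reference-site mean distinguishable
    from reference variability (hypothetical two-sided 95% band)."""
    return z * stdev(reference_values)

# Hypothetical taxa-richness values from replicate samples at one site
replicates = [24, 26, 23, 25]
print(round(precision_cv(replicates), 1))  # repeatability of the endpoint

# Hypothetical richness values across reference sites
reference = [28, 31, 25, 30, 27, 29]
print(round(sensitivity_mdd(reference), 1))  # smallest detectable departure
```

A method with a low CV but a weakly responsive endpoint can still have poor sensitivity, which is why the fact sheet treats the two characteristics as distinct.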
Data and Assessment Comparability Determinations

•   Biological methods comparability should be viewed on both data and assessment (or endpoint) levels.
•   EPA and interstate agencies conduct assessments of ecological conditions at regional and national scales.
•   Comparability of assessments between a state method and WSA has to be carefully defined because of the different scales (i.e., statewide versus regional/national); the purpose of WSA is to extrapolate condition independent of jurisdictional boundaries.
•   Ecological data are collected by different entities within a state, and aggregation of these data can be beneficial in a state's program.

Data vs. Assessment Comparability Determinations, continued...

1.  If methods are comparable at the data level, then...
    a.  Results of the state's monitoring program at reference sites can be used to help define reference condition and thresholds for interpreting the WSA data set.
    b.  WSA results can be used by states to supplement their data. The inverse is also true: if a state uses a probability design, its data can be used in the WSA analyses.
    c.  The states and EPA would have access to a larger pool of data within and among states.
    d.  Comparability between and among methods increases the defensibility of the methods.
    >   If the data aren't comparable, the states will be able to avoid the error of combining dissimilar data.

2.  If the methods are comparable at the assessment or endpoint level, then...
    a.  Defensibility of assessments is enhanced for the states and results are more robust.
    b.  States can use the BCG framework as a means of documenting comparability of different methods and cross-calibration of results.
    >   If endpoints are comparable but assessments are not, the states will have a way to adjust for differences in assessment.
Sampling Design

•  Sampling Array - Minimum of 20 sites distributed across an environmental gradient and a disturbance gradient.

•  Reach Selection - A well-defined stream segment (i.e., between two confluences) with no major water quality changes within the reach (e.g., a large discharge at the midpoint).

•  Sampling Guidelines - Strict adherence to the SOPs and quality control procedures is required to minimize sampling bias.
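One simple way to read the sampling-array requirement is to bin candidate sites on both gradients and draw round-robin from the occupied cells, so the 20 sites are not bunched at one end of either gradient. The sketch below is hypothetical: the 0-1 gradient scores, the grid binning, and the `spread_sites` helper are assumptions for illustration, not the study's actual site-selection procedure.

```python
import random

def spread_sites(sites, n=20, bins=4, seed=1):
    """Spread a sample of n sites across an environmental gradient and a
    disturbance gradient: bin each 0-1 score into a bins x bins grid,
    then draw one site per occupied cell in rotation until n are chosen."""
    grid = {}
    for site_id, env, dist in sites:
        cell = (min(int(env * bins), bins - 1), min(int(dist * bins), bins - 1))
        grid.setdefault(cell, []).append(site_id)
    rng = random.Random(seed)
    for members in grid.values():
        rng.shuffle(members)
    chosen = []
    while len(chosen) < n and any(grid.values()):
        for cell in sorted(grid):  # one pick per occupied cell, then repeat
            if grid[cell] and len(chosen) < n:
                chosen.append(grid[cell].pop())
    return chosen

# Hypothetical candidate pool: (id, environmental score, disturbance score)
pool_rng = random.Random(0)
pool = [(f"S{i:02d}", pool_rng.random(), pool_rng.random()) for i in range(60)]
picked = spread_sites(pool, n=20)
print(len(picked))  # 20
```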

Analysis Design

•  Precision - Replicate sampling is done for each method to evaluate precision along the gradients.

•  Sensitivity - Reference sites are used to establish benchmarks for evaluating how well the various methods detect disturbance.

•  Similarity in Indicators - Assemblage attributes computed from the data are compared to evaluate the similarity of the indicators derived via each method.
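The similarity-in-indicators comparison can be sketched as a correlation between the values of one assemblage metric computed from each method's data at shared sites. The metric (EPT-style taxa counts), the site values, and the choice of Pearson correlation are illustrative assumptions, not the WSA analysis itself.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two methods' values of the same
    indicator at shared sites (1.0 = perfectly concordant)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical sensitive-taxa counts at six shared sites
state_method = [12, 9, 15, 7, 11, 14]
wsa_method = [11, 8, 16, 6, 12, 13]
print(round(pearson(state_method, wsa_method), 2))
```

A high correlation suggests the two methods rank sites similarly on that indicator; it does not by itself establish that their assessment thresholds agree, which is why the fact sheet treats data-level and assessment-level comparability separately.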
        For more information contact:
         Susan Holdsworth, USEPA
              202-566-1187
         holdsworth.susan@epa.gov
