United States Environmental Protection Agency

  Water Security Initiative: Evaluation of the Water
   Quality Monitoring Component of the Cincinnati
        Contamination Warning System Pilot

[Cover graphic depicting the CWS components: Monitoring and Surveillance (Water Quality
Monitoring, Enhanced Security Monitoring, Customer Complaint Surveillance, Public Health
Surveillance); Possible Contamination; Consequence Management (Sampling and Analysis, Response)]

Office of Water (MC-140)
EPA-817-R-14-001B
April 2014

-------
                                      Disclaimer

The Water Security Division of the Office of Ground Water and Drinking Water has reviewed and
approved this document for publication. This document does not impose legally binding requirements on
any party. The findings in this report are intended solely to recommend or suggest and do not imply any
requirements. Neither the U.S. Government nor any of its employees, contractors or their employees
make any warranty, expressed or implied, or assume any legal liability or responsibility for any third
party's use of or the results of such use of any information, apparatus, product or process discussed in this
report, or represents that its use by such party would not infringe on privately owned rights.  Mention of
trade names or commercial products does not constitute endorsement or recommendation for use.

Questions concerning this document should be addressed to:

Katie A. Umberg
U.S. EPA Water Security Division
26 West Martin Luther King Dr.
Mail Code 140
Cincinnati, OH 45268
(513)569-7925
Umberg.Katie@epa.gov

or

Matt M. Umberg
U.S. EPA Water Security Division
26 West Martin Luther King Dr.
Mail Code 140
Cincinnati, OH 45268
(513)569-7357
Umberg.Matt@epa.gov

or

Steve Allgeier
U.S. EPA Water Security Division
26 West Martin Luther King Drive
Mail Code 140
Cincinnati, OH 45268
(513)569-7131
Allgeier.Steve@epa.gov

-------
                              Acknowledgements
The Water Security Division of the Office of Ground Water and Drinking Water would like to recognize
the following individuals and organizations for their assistance, contributions, and review during the
development of this document.

    •   Yeongho Lee, Greater Cincinnati Water Works
    •   Jeff Swertfeger, Greater Cincinnati Water Works
    •   Mike Tyree, Greater Cincinnati Water Works
    •   Jennifer Hagar, Ohio Environmental Protection Agency
    •   Dr. Lindell Ormsbee, University of Kentucky

-------
                                Executive Summary
The goal of the Water Security Initiative (WSI) is to design and demonstrate an effective multi-
component warning system for timely detection and response to drinking water contamination threats and
incidents. A contamination warning system (CWS) integrates information from multiple monitoring and
surveillance components to alert the water utility to possible contamination, and uses a consequence
management plan (CMP) to guide response actions.

The system design objectives for an effective CWS are spatial coverage, contaminant coverage, alert
occurrence, timeliness of detection and response, operational reliability, and sustainability.  Metrics for the
water quality monitoring (WQM) component were defined relative to the system metrics common to all
components in the CWS, but the component metric definitions provide an additional level of detail
relevant to the WQM component. Evaluation techniques used to quantitatively or qualitatively evaluate
each of the metrics include analysis of empirical data from routine operations, drills and exercises,
modeling and simulations,  forums, and an analysis of lifecycle costs. This report describes the evaluation
of data collected from the WQM component from the period of January 2008 - June 2010.

The major outputs from the evaluation of the Cincinnati pilot include:
    1.  Cincinnati Pilot System Status, which describes the post-implementation status of the Cincinnati
       pilot following the installation of all monitoring and surveillance components.
    2.  Component Evaluations, which include analysis of performance metrics for each component of
       the Cincinnati pilot.
    3.  System Evaluation, which integrates the results of the component evaluations, the simulation
       study, and the benefit-cost analysis.

The reports that present the results from the evaluation of the system and each of its six components are
available in an Adobe portfolio, Water Security Initiative: Comprehensive Evaluation of the Cincinnati
Contamination Warning System Pilot (USEPA 2014).

WQM Component  Design

A key monitoring component of a CWS is WQM, which consists of the following four design elements:
sensor stations, a data collection system, an event detection system and component response procedures.
Prior to implementation of the CWS, the Greater Cincinnati Water Works (GCWW) had sensors
measuring basic water quality parameters (primarily free chlorine) at major utility facilities. Operators
received alerts if the water quality values fell outside of an acceptable range defined by the utility, but
there were no formal procedures for timely investigation of and response to these alerts. Thus, all four
design elements were enhanced as part of the WSI pilot, including installation of water quality sensors at
15 new locations throughout the distribution  system (new stations were also installed at two treatment
plants), implementation of a dedicated data collection and management system, installation of an event
detection system, and development of component response procedures. Free chlorine residual,
conductivity, oxidation reduction potential (ORP), temperature, total organic carbon (TOC) and turbidity
instruments were installed.

Each water quality sensor produces a data stream which is analyzed  independently by the event detection
system to identify anomalous patterns in WQM data. The CANARY software was used for event
detection for this pilot.  For the 15 monitoring stations in the distribution system for which alerts were
produced and investigated, a total of 69 water quality data streams were analyzed by CANARY.
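
To make the event detection concept concrete, the sketch below shows a minimal, generic moving-window
outlier detector applied to a single data stream. This is not the CANARY algorithm (CANARY's algorithms
are described in Hart et al., 2007); the window size, threshold, and persistence rule are illustrative
assumptions only.

    # Minimal sketch of anomaly detection on a single sensor stream. This is NOT
    # the CANARY algorithm; it only illustrates the general idea of comparing each
    # new reading against a recent baseline. Window, threshold, and the
    # consecutive-outlier rule are illustrative assumptions.
    from collections import deque
    from statistics import mean, stdev

    def detect_anomalies(readings, window=96, threshold=3.0, consecutive=3):
        """Return indices where an 'alert' would start on a single data stream."""
        history = deque(maxlen=window)
        run, alerts = 0, []
        for i, value in enumerate(readings):
            if len(history) >= 10:                      # need a minimal baseline
                mu, sigma = mean(history), stdev(history)
                outlier = sigma > 0 and abs(value - mu) / sigma > threshold
                run = run + 1 if outlier else 0
                if run == consecutive:                  # require persistence
                    alerts.append(i)
            history.append(value)
        return alerts

    # Example: a steady chlorine residual near 1.0 mg/L followed by a sudden drop.
    stream = [1.0 + 0.01 * (i % 5) for i in range(200)] + [0.2] * 10
    print(detect_anomalies(stream))                     # alert shortly after the drop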

Once an anomaly is identified by the event detection system, a visual and audible alert is generated on the
dedicated workstation in the utility control room. This workstation is monitored 24/7 by utility operators,
who initiate an investigation according to the component response procedures when an alert occurs.

A summary of the evaluation results for each of the design objectives relevant to WQM is provided
below. For more information on this topic, see Section 2.0.

Methodology

Several methods were used to evaluate WQM performance.  Data was tracked over time to illustrate the
change in performance as the component evolved during the evaluation period.  Statistical methods were
also used to summarize large volumes of data collected over either the entire or various segments of the
evaluation period.  Data was also evaluated and summarized for each reporting period over the evaluation
period. In this evaluation, the term reporting period is used to refer to one month of data that spans from
the 16th of the indicated month to the 15th of the following month.  Thus, the January 2008 reporting
period refers to the data collected between January 16th 2008 and February 15th 2008. Additionally, three
drills and one full-scale exercise designed around mock contamination incidents were used to practice and
evaluate the full range of procedures, from initial detection through response.
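
As an illustration of the reporting-period convention described above, the following sketch assigns a
timestamp to its reporting period; the function name and label format are illustrative, not part of the
pilot's data systems.

    # Sketch of the reporting-period convention: readings on or after the 16th belong
    # to that month's reporting period; earlier readings belong to the previous
    # month's period. Function name and label format are illustrative.
    from datetime import date

    def reporting_period(d: date) -> str:
        """Return the reporting-period label (e.g., 'January 2008') for a date."""
        year, month = d.year, d.month
        if d.day < 16:                       # before the 16th -> previous period
            month -= 1
            if month == 0:
                month, year = 12, year - 1
        return date(year, month, 1).strftime("%B %Y")

    print(reporting_period(date(2008, 1, 20)))   # January 2008
    print(reporting_period(date(2008, 2, 10)))   # January 2008 (period runs through February 15)
    print(reporting_period(date(2008, 2, 16)))   # February 2008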

Because there were no contamination incidents during the evaluation period, there is no empirical data to
fully evaluate the detection capabilities of the component. To fill this gap, a computer model of the
Cincinnati CWS was developed and challenged with a large ensemble of simulated contamination
incidents in a simulation study. An ensemble of 2,015 contamination scenarios representing a broad range
of contaminants and injection locations throughout the distribution system was used to evaluate the
effectiveness of the CWS in minimizing public health and utility infrastructure consequences.  The
simulations were also used for a benefit-cost analysis, which compares the monetized value of costs and
benefits and calculates the net present value of the CWS.  Costs include implementation costs and routine
operation and maintenance labor and expenses, which were assumed over  a 20 year lifecycle of the CWS.
Benefits included reduction in consequences (illness, fatalities and infrastructure damage) and dual-use
benefits from routine operations.
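
The net present value calculation at the core of the benefit-cost analysis can be summarized with the short
sketch below; the discount rate and cash-flow values are placeholders, not figures from the Cincinnati
analysis (see Section 9.0 for the actual cost inputs).

    # Minimal net-present-value sketch for a benefit-cost comparison of the kind
    # described above. The discount rate and cash flows are placeholders, not
    # figures from the Cincinnati analysis.
    def npv(rate, cash_flows):
        """cash_flows[t] is the net benefit (benefits minus costs) in year t; t = 0 is today."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    implementation_cost = 4_000_000          # year-0 cost (placeholder)
    annual_net_benefit = 150_000             # annual benefits minus O&M (placeholder)
    flows = [-implementation_cost] + [annual_net_benefit] * 20
    print(f"NPV over a 20-year lifecycle: ${npv(0.03, flows):,.0f}")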

Design Objective: Spatial Coverage

Spatial coverage is the percentage of the distribution system area that is covered by the WQM network.
For WQM, this depends on the location and density of monitoring points in the distribution system and
the hydraulic connectivity of each monitoring location to downstream regions and populations.  Metrics
evaluated under this design objective include area coverage and population coverage and are a
superposition of the areas covered by the individual monitoring stations.

For the Cincinnati pilot, the WQM component has 72% area coverage. This translates to 84% population
coverage, which is higher than area coverage because most of the uncovered portions of the distribution
system have low population density.
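
The coverage calculation can be illustrated with a small sketch that takes the union ("superposition") of
the model nodes covered by each station and weights the result by area and by population; the stations,
nodes, areas, and populations below are hypothetical. It also shows how population coverage can exceed
area coverage when the uncovered nodes are sparsely populated, as was the case for the Cincinnati pilot.

    # Sketch of "superposition" coverage: the union of the areas (here, model nodes)
    # covered by each individual station. Stations, node areas, and populations are
    # hypothetical.
    covered_by_station = {                     # station -> downstream node IDs it covers
        "A": {1, 2, 3},
        "B": {3, 4},
        "C": {5},
    }
    node_area = {1: 2.0, 2: 1.5, 3: 3.0, 4: 0.5, 5: 1.0, 6: 4.0}    # square miles
    node_pop  = {1: 800, 2: 600, 3: 1200, 4: 300, 5: 500, 6: 100}

    covered = set().union(*covered_by_station.values())             # superposition
    area_cov = sum(node_area[n] for n in covered) / sum(node_area.values())
    pop_cov  = sum(node_pop[n]  for n in covered) / sum(node_pop.values())
    print(f"Area coverage: {area_cov:.0%}, population coverage: {pop_cov:.0%}")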

While the area and population coverage were relatively high, results from the simulation study show that
only 737 of 2,015 simulated scenarios (37%) had at least one WQM station that witnessed a potentially
detectable contaminant concentration. The majority of the 1,278 scenarios that were not potentially
detectable occurred in isolated sections of the distribution system and did not spread widely. Thus, while
they were difficult to detect, the consequences of these contamination incidents were also limited. For
more information on this topic, see Section 4.0.

Design Objective: Contaminant Coverage
Contaminant coverage is the ability to detect a wide range of water contaminants and is measured by the
contamination detection potential, contamination scenario coverage, and contaminant detection threshold
metrics. Since there were no contamination incidents during the evaluation, water contamination was
simulated with 17 contaminants to assess contaminant scenario coverage and the contaminant detection
threshold.  In order for a contaminant to be considered theoretically detectable by the WQM component,
at least one of the measured water quality parameters must produce a statistically significant change in the
presence of that contaminant. In order for a scenario to be practically detectable, a contaminant
concentration sufficient to produce such a change must reach at least one WQM location.  Table ES-1
shows that all 17 contaminants simulated are theoretically detectable by the WQM component based on a
bench-scale evaluation of the impact of various contaminants on measured water quality parameters. The
table also presents the percentage of practically detectable scenarios that were actually  detected by the
WQM component. Note that the 17 contaminants being modeled in the simulation study were assigned
generic IDs for security purposes. For more information on this topic, see Section 5.0.

Table ES-1. Contaminant Coverage for the WQM Component

Contaminant          | Theoretically Detectable? | % of Practically Detectable Scenarios Detected
Nuisance Chemical 1  | Yes                       | 94%
Nuisance Chemical 2  | Yes                       | 84%
Toxic Chemical 1     | Yes                       | 91%
Toxic Chemical 2     | Yes                       | 78%
Toxic Chemical 3     | Yes                       | 65%
Toxic Chemical 4     | Yes                       | 89%
Toxic Chemical 5     | Yes                       | 81%
Toxic Chemical 6     | Yes                       | 90%
Toxic Chemical 7     | Yes                       | 57%
Toxic Chemical 8     | Yes                       | 0%
Biological Agent 1   | Yes                       | 94%
Biological Agent 2   | Yes                       | 67%
Biological Agent 3¹  | Yes                       | 93%
Biological Agent 4¹  | Yes                       | 89%
Biological Agent 5¹  | Yes                       | 94%
Biological Agent 6¹  | Yes                       | 64%
Biological Agent 7¹  | Yes                       | 80%
¹ For these contaminants, the co-contaminant was used to determine the practically detectable concentrations.
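
The two detectability criteria used above can be expressed compactly as checks, shown in the sketch below;
the parameter responses, significance thresholds, and concentrations are illustrative values, not results
from the bench-scale studies.

    # Sketch of the two detectability criteria. Parameter responses, thresholds, and
    # concentrations are illustrative, not bench-study data.
    def theoretically_detectable(responses, thresholds):
        """True if any measured parameter changes by more than its significance threshold."""
        return any(abs(responses[p]) > thresholds[p] for p in responses)

    def practically_detectable(conc_at_stations, detection_threshold):
        """True if a detectable concentration reaches at least one WQM location."""
        return any(c >= detection_threshold for c in conc_at_stations)

    responses  = {"chlorine": -0.6, "TOC": 0.8, "conductivity": 5.0}    # change per mg/L (assumed)
    thresholds = {"chlorine": 0.1,  "TOC": 0.5, "conductivity": 50.0}   # significance cutoffs (assumed)
    print(theoretically_detectable(responses, thresholds))              # True
    print(practically_detectable([0.02, 0.0, 1.3], detection_threshold=0.5))  # True
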
Design Objective: Alert Occurrence

An alert is an indication from an event detection system that unusual water quality characteristics have
been detected. In the case of the Cincinnati CWS, the CANARY event detection system was
implemented and both visual and audible notifications were produced for each alert generated. Alert
occurrence tracks the frequency of alerts to determine how well the event detection system can
discriminate between true water quality  anomalies (including contamination) and normal variability in the
underlying data.  Metrics for this design objective include invalid alerts, valid alerts and alert co-
occurrence.  Invalid and valid alert rates were characterized using empirical data gathered during the real-
time monitoring phase. The number of alerts produced dropped significantly over the evaluation period
as sensor and event detection system performance improved:  154 alerts were produced across the 15
distribution system monitoring stations for the first month of the evaluation period, compared with 19 for
the final month of evaluation.

The data was also retrospectively analyzed using the final CANARY software and configurations in order
to better characterize performance.  For the final six months of the evaluation period, using these final
configurations, an average of 15 alerts per month was produced. While this was much higher than
originally expected, the utility found this rate to be sustainable: with training and experience, the average
time to complete an alert investigation was under 15 minutes by the end of the evaluation period.

The utility staff found investigation of alerts to be useful in increasing distribution system knowledge and
identifying water quality changes relevant to system operations or water quality management. During the
evaluation period, reviewers identified 49 real incidents of unusual water quality, and CANARY
detected 69% of them. Alerts associated with these incidents accounted for 5% of all alerts.

The analysis of alert occurrence was supplemented through analysis of alerts generated during the
simulation study, which provides an indication of the ability of the component to detect contamination
incidents.  Of the 737 simulated contamination scenarios that were practically detectable by the WQM
component (see the discussion under the previous design objective), an alert was generated in 643 (87%)
of them.  For more information on this topic, see Section 6.0.

Design Objective: Timeliness of Detection and Response

The timeliness of detection refers to the  time between the presence of unusual water quality in the
distribution system and the start time of the first WQM alert.  Metrics evaluated to characterize this
design objective  include time for initial detection and time to fully investigate an alert. The time for
initial detection includes the hydraulic travel time from the injection point to the WQM location and the
time for the CANARY event detection system to recognize the anomaly and generate an alert. For
WQM, the time to collect and transmit data is negligible (less than 4 minutes).  The time to fully
investigate an alert captures the time to perform all activities necessary to fully investigate the alert and
determine whether it is an indication of possible contamination.

During real-time operation, the time for initial detection could only be calculated for the four incidents
that originated at the treatment plant.  The source of the water quality changes was unknown for the other
incidents, and thus the time that the unusual water quality originated in the distribution system could not
be identified. For the four incidents originating at the treatment plant, it took between 6.3 and 11.3 hours
for unusual water to flow from the treatment plant to  a WQM location with a median travel time of 7.5
hours.  The median time it took for CANARY to produce an alert once unusual water had reached a
monitoring location was 1.6 hours.  Overall,  the time  for initial detection of these four incidents ranged
from 7.6 to 17.4 hours, with a median of 13.1 hours.  Note that an alert was not always generated at the
first WQM location reached.

The time of initial detection can be calculated precisely for simulated contamination incidents because the
injection time is known. For the simulated contamination incidents that were detected by WQM, the
hydraulic travel time ranged from 30 minutes to 35.5 hours with a median of 7 hours.  The time between
contaminated water arriving at a monitoring location and generation of an alert ranged from nine minutes
to 120 hours with a median of 46 minutes. Overall the time for initial detection of simulated
contamination incidents ranged from 26 minutes to 154 hours with a median of 10.8 hours.
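
The decomposition of detection time described above (hydraulic travel time plus the event detection delay
at the first alerting station) can be written as a simple calculation; the scenario values below are
illustrative, not simulation study results.

    # The time for initial detection is the hydraulic travel time to the first
    # alerting station plus the event detection delay at that station. Values are
    # illustrative, not simulation study results.
    def time_for_initial_detection(travel_hours, eds_delay_hours):
        return travel_hours + eds_delay_hours

    scenarios = [(7.0, 0.75), (0.5, 2.0), (35.5, 0.15)]    # (travel, EDS delay) in hours
    times = sorted(time_for_initial_detection(t, d) for t, d in scenarios)
    median = times[len(times) // 2]
    print(times, f"median = {median:.2f} h")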

The time to investigate alerts received during real-time monitoring was between 1 and 23 minutes, with a
median of 4.5 minutes. However, none of these investigations resulted in full implementation of the
component response procedures, so they do not illustrate how long a full WQM investigation would take.
Therefore, the values from drills and exercises were used to quantify the time to fully investigate a WQM
alert. Total investigation times from these drills and exercises ranged from 118 to 191 minutes. Figure ES-1
shows the timeline from the first drill, in which it took 118 minutes to fully investigate the alert. Much
of this time was spent on site inspection and characterization.  For more information on this
topic, see Section 7.0.

[Figure ES-1: timeline graphic. Key milestones shown: 00:00 OWQM alert; 00:02 operator recognizes the
alert; 00:03 operator notifies the WQ&T chemist; 00:26 WQ&T chemist notifies the Water Utility Emergency
Response Manager; 00:28 remote sample collection; 01:11 WQ&T technician prepares for the site
investigation; 01:59 OWQM alert investigation is complete as the WQ&T technician reports the results of
the station inspection; 02:06 the Water Utility Emergency Response Manager determines contamination is
Possible.]

Figure ES-1. Timeline Progression of the WQM Alert Investigation During Drill 1

Design Objective: Operational Reliability

Operational reliability metrics quantify the percentage of time that the four WQM design elements -
WQM stations, data collection system, event detection system and component response procedures -
were operating and producing accurate outputs. Overall, the component had 81.7% availability.

Causes of downtime included malfunctioning sensors producing no data or inaccurate data, loss of power
to monitoring stations, failure of the communication system and the event detection system being turned
off for maintenance or troubleshooting. The WQM station, data collection, and event detection design
elements were unavailable for 28.2, 110, and 3,795 hours, respectively. The event detection system
element was by far the biggest contributor to component unavailability.  For more information on this
topic, see Section 8.0.
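
As a rough consistency check on the availability figure, the arithmetic below assumes an evaluation period
of about 30 months and treats the three element downtimes as non-overlapping component downtime; this
illustrates the calculation only, and the evaluation's exact accounting (described in Section 8.0) may
differ.

    # Rough consistency check of the reported availability, assuming a ~30-month
    # evaluation period and non-overlapping element downtimes. Illustration only;
    # the evaluation's exact accounting may differ (see Section 8.0).
    period_hours = 30 * 730.5                   # ~30 months of potential operation
    downtime_hours = {"WQM stations": 28.2, "data collection": 110.0, "event detection": 3795.0}
    availability = 1 - sum(downtime_hours.values()) / period_hours
    print(f"Estimated availability: {availability:.1%}")    # ~82%, near the reported 81.7%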

Design Objective: Sustainability

Sustainability is a key objective in the design of a CWS and each of its components, which for the
purpose of this evaluation is defined in terms of the cost-benefit trade-off.  Costs are estimated over the
20 year life-cycle of the system to provide an estimate of the total cost of ownership and include the
implementation costs, enhancement costs, operation and maintenance (O&M) costs, renewal and
replacement costs and the salvage value.  The benefits derived from the system are defined in terms of
primary and dual-use benefits. Metrics that were evaluated under this design objective include costs,
benefits, and compliance. The cost elements used in the calculation of the total cost of the WQM component
are presented in Table ES-2. These costs were tracked as empirical data during the design and
implementation phase of project design, and were analyzed through a benefit-cost analysis.  It is
important to note that the Cincinnati CWS was a pilot research project, and as such incurred higher
costs than would be expected for a typical large utility installation.

Table ES-2. Cost Elements Used in the Calculation of Cost

Parameter                     | Value
Implementation Costs          | $4,229,333
Annual O&M Costs              | $178,478
Renewal and Replacement Costs | $1,555,555
Salvage Value                 | ($96,686)
Dual-use Benefits             | ($4,410)

To calculate the total cost of the WQM component, all costs and monetized benefits were adjusted to
2007 dollars using the change in the Consumer Price Index (CPI) between 2007 and the year that the cost
or benefit was realized. Subsequently, the implementation costs, renewal and replacement costs, and
annual O&M costs were combined to determine the total cost over the 20 year life-cycle of the project:
        WQM Total Cost: $8,202,994
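
The constant-dollar adjustment works as shown in the sketch below: a nominal cost incurred in a given year
is scaled by the ratio of the 2007 CPI to that year's CPI. The index values here are placeholders, not the
figures used in the evaluation.

    # Sketch of the constant-dollar adjustment: convert a nominal cost to 2007
    # dollars using the ratio of CPI values. The index values are placeholders.
    cpi = {2007: 100.0, 2008: 103.8, 2009: 103.4, 2010: 105.1}    # illustrative indices

    def to_2007_dollars(nominal_cost, year):
        return nominal_cost * cpi[2007] / cpi[year]

    print(f"${to_2007_dollars(50_000, 2009):,.0f} (2007 dollars)")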

A similar WQM component implementation at another utility should be less expensive when compared to
the Cincinnati pilot as it could benefit from lessons learned and would not incur research-related costs.

Dual-use benefits and compliance were evaluated through documentation of qualitative data during drills
and exercises, and during  forums with the utility including lessons learned workshops.  While many dual-
use benefits were  realized over the course of the evaluation period, only one could be monetized and thus
included in Table ES-2: a savings in the cost of chlorine feed solution was realized by allowing utility
operators to more accurately adjust the amount of chlorine added at the treatment plants while
maintaining the target disinfectant residual in the distribution system.

Compliance was demonstrated through 100% utility participation in drills and exercises. These required
substantially more effort than routine investigations, but personnel reported that they were beneficial
because responding to simulated water contamination incidents helped them better understand the
component response procedures.  Furthermore, compliance was evidenced by a high rate of alert
investigations completed by utility personnel: by the end of the pilot evaluation period, compliance with
the component response procedures reached an average value of 97%. For more
information on this topic,  see Section 9.0.

-------
                                Table of Contents
LIST OF FIGURES	xi

LIST OF TABLES	xii

SECTION 1.0: INTRODUCTION	1

  1.1     CWS DESIGN OBJECTIVES	1
  1.2     ROLE OF WQM IN THE CINCINNATI CWS	2
  1.3     OBJECTIVES	2
  1.4     DOCUMENT ORGANIZATION	2

SECTION 2.0: OVERVIEW OF THE WQM COMPONENT	4

  2.1     WQM STATIONS	5
    2.1.1    WQM Station Design	5
    2.1.2    WQM Station Location Selection	7
  2.2     DATA COLLECTION SYSTEM	8
  2.3     EVENT DETECTION SYSTEM	8
  2.4     COMPONENT RESPONSE PROCEDURES	10
  2.5     SUMMARY OF SIGNIFICANT WQM COMPONENT MODIFICATIONS	11

SECTION 3.0: METHODOLOGY	14

  3.1     ANALYSIS OF EMPIRICAL DATA FROM ROUTINE OPERATIONS	14
  3.2     DRILLS AND EXERCISES	14
    3.2.1    WQM Drill 1 (July 14, 2008)	14
    3.2.2    CWS Full Scale Exercise (October 1, 2008)	15
    3.2.3    WQM Drill 2 (February 25, 2009)	15
    3.2.4    WQM Drill 3 (After Hours) (April 29, 2009)	16
  3.3     BENCH-SCALE CONTAMINANT STUDIES	16
  3.4     SIMULATION STUDY	16
  3.5     FORUMS	20
  3.6     ANALYSIS OF COSTS	20

SECTION 4.0: DESIGN OBJECTIVE: SPATIAL COVERAGE	22

  4.1     AREA COVERAGE	22
  4.2     POPULATION COVERAGE	23
  4.3     EXTENT OF CONTAMINANT SPREAD THROUGH THE WQM NETWORK	24
  4.4     SUMMARY	25

SECTION 5.0: DESIGN OBJECTIVE: CONTAMINANT COVERAGE	26

  5.1     CONTAMINANT DETECTION POTENTIAL	26
  5.2     CONTAMINANT SCENARIO COVERAGE	30
  5.3     CONTAMINANT DETECTION THRESHOLD	32
  5.4     SUMMARY	35

SECTION 6.0: DESIGN OBJECTIVE: ALERT OCCURRENCE	37

  6.1     ALERT OCCURRENCE DURING ROUTINE OPERATIONS	37
  6.2     VALID ALERTS	49
    6.2.1    Valid Alerts from Simulated Contamination Incidents	50
    6.2.2    Valid Alerts from Observed Water Quality Anomalies	51
  6.3     ALERT CO-OCCURRENCE	54
    6.3.1    Co-occurrence of Alerts for Simulated Contamination Incidents	54
    6.3.2    Co-occurrence of Alerts on Utility Data	56
  6.4     SUMMARY	57

SECTION 7.0: DESIGN OBJECTIVE: TIMELINESS OF DETECTION AND RESPONSE	59

  7.1     TIME FOR INITIAL DETECTION	59
     7.1.1    Timeliness of Detection for Valid Alerts from Simulated Contamination Incidents	59
     7.1.2    Timeliness of Detection for Valid Alerts from Observed Water Quality Anomalies	63
  7.2     TIME TO FULLY INVESTIGATE A WQM ALERT	64
  7.3     SUMMARY	68

SECTION 8.0: DESIGN OBJECTIVE: OPERATIONAL RELIABILITY	70

  8.1     DATA COMPLETENESS	70
     8.1.1    Data Completeness for the  WQM Component.	70
     8.1.2    Data Completeness for Individual Water Quality Sensors	73
  8.2     DATA ACCURACY	76
     8.2.1    Data Accuracy for the WQM Component.	77
     8.2.2    Data Accuracy for Individual Water Quality Sensors	79
  8.3     AVAILABILITY	80
  8.4     SUMMARY	83

SECTION 9.0: DESIGN OBJECTIVE: SUSTAINABILITY	85

  9.1     COSTS	85
  9.2     BENEFITS	88
  9.3     COMPLIANCE	92
  9.4     SUMMARY	93

SECTION 10.0: SUMMARY AND CONCLUSIONS	94

  10.1    HIGHLIGHTS OF ANALYSIS	94
  10.2    LIMITATIONS OF THE ANALYSIS	95
  10.3    POTENTIAL APPLICATIONS OF THE WQM COMPONENT	96

SECTION 11.0: REFERENCES	98

SECTION 12.0: ABBREVIATIONS	99

SECTION 13.0: GLOSSARY	100

-------
                                  List of Figures
FIGURE 2-1. SUMMARY TIMELINE OF WQM COMPONENT DEPLOYMENT	13
FIGURE 3-1. SUPERPOSITION OF A CONTAMINATION INCIDENT ON BASELINE WATER QUALITY DATA	19
FIGURE 4-1. NUMBER OF SCENARIOS IMPACTING "X" WQM STATIONS	24
FIGURE 6-1. CAUSE OF REPROCESSED AND REAL-TIME ALERTS BY REPORTING PERIOD	41
FIGURE 6-2. CAUSE AND SUB-CAUSE OF REPROCESSED AND REAL-TIME ALERTS	43
FIGURE 6-3. CAUSE OF REPROCESSED AND REAL-TIME ALERTS BY LOCATION	46
FIGURE 6-4. EXAMPLE OF NORMAL WATER QUALITY VARIABILITY AT MONITORING STATION A	47
FIGURE 6-5. PERCENTAGE OF ALERTS WITH EACH PARAMETER LISTED AS A TRIGGER	49
FIGURE 6-6. CHLORINE DATA FROM AN OBSERVED WATER QUALITY ANOMALY	52
FIGURE 6-7. CHLORINE DATA FROM AN ADDITIONAL SITE DURING THE ANOMALY SHOWN IN FIGURE 6-6	52
FIGURE 6-8. CLUSTER SIZES FOR SIMULATED CONTAMINATION INCIDENTS	55
FIGURE 6-9. NUMBER OF ALERTS GENERATED FOR DETECTED INCIDENTS BY CONTAMINANT	55
FIGURE 6-10. ALERT CLUSTER CAUSES	57
FIGURE 7-1. TIMELINESS OF DETECTION FOR SIMULATION STUDY SCENARIOS	60
FIGURE 7-2. TIMELINESS OF DETECTION BY MONITORING LOCATION	61
FIGURE 7-3. COMPONENTS OF TIME TO DETECT BY MONITORING LOCATION	62
FIGURE 7-4. TIMELINESS OF DETECTION BY CONTAMINANT	63
FIGURE 7-5. TIMELINE PROGRESSION OF THE WQM ALERT INVESTIGATION DURING WQM DRILL 1	65
FIGURE 7-6. TIMELINE PROGRESSION OF WQM ALERT INVESTIGATION DURING FULL SCALE EXERCISE	65
FIGURE 7-7. TIMELINE PROGRESSION OF THE WQM ALERT INVESTIGATION DURING WQM DRILL 2	66
FIGURE 7-8. TIMELINE PROGRESSION OF THE WQM ALERT INVESTIGATION DURING WQM AFTER-HOURS DRILL...67
FIGURE 8-1. DATA COMPLETENESS FOR THE WQM COMPONENT OVER THE EVALUATION PERIOD	72
FIGURE 8-2. CAUSE AND SUB-CAUSE OF INCOMPLETE DATA FOR THE WQM COMPONENT	73
FIGURE 8-3. INCOMPLETE SENSOR DATA BY SUB-CAUSE	75
FIGURE 8-4. PERCENTAGE OF POTENTIAL DATA HOURS THAT WERE UNUSABLE FOR THE WQM COMPONENT	78
FIGURE 8-5. PERCENTAGE OF COMPONENT POTENTIAL DATA HOURS THAT WERE INACCURATE BY CAUSE	78
FIGURE 8-6. WQM COMPONENT UNAVAILABILITY AND UNAVAILABLE HOURS BY DESIGN ELEMENT	81
FIGURE 9-1. PERCENTAGE OF WQM ALERTS INVESTIGATED AND NUMBER OF ALERTS RECEIVED	92

-------
                                  List of Tables

TABLE 2-1. WQM DESIGN OBJECTIVES	4
TABLE 2-2. INSTRUMENTATION INCLUDED IN EACH OF THE WQM STATION PROTOTYPES	6
TABLE 2-3. FINAL CANARY PARAMETER SENSITIVITY VALUES	9
TABLE 2-4. ROLES AND RESPONSIBILITIES UNDER THE WQM COMPONENT RESPONSE PROCEDURES	10
TABLE 2-5. SEQUENTIAL LISTING OF WQM COMPONENT MODIFICATIONS	11
TABLE 4-1. AREA COVERED BY AT LEAST "X" NUMBER OF WQM STATIONS	23
TABLE 4-2. POPULATION COVERED BY AT LEAST "X" NUMBER OF WQM STATIONS	23
TABLE 5-1. NORMALIZED WATER QUALITY RESPONSE FOR WATER FROM THE SURFACE WATER PLANT	27
TABLE 5-2. NORMALIZED WATER QUALITY RESPONSE FOR WATER FROM THE GROUNDWATER PLANT	27
TABLE 5-3. AVERAGE NORMALIZED WATER QUALITY RESPONSE	28
TABLE 5-4. EXPECTED WATER QUALITY RESPONSE FOR CONTAMINANTS EVALUATED IN THE SIMULATION STUDY .28
TABLE 5-5. PRACTICALLY DETECTABLE CONTAMINANT CONCENTRATIONS AND RESULTING WATER QUALITY
CHANGES	29
TABLE 5-6. SCENARIOS DETECTED BY CONTAMINANT	31
TABLE 5-7. RATIO OF CRITICAL CONCENTRATION TO DETECTION THRESHOLD BY CONTAMINANT	33
TABLE 5-8. DETECTION THRESHOLD ACROSS 14 WQM LOCATIONS (STATION! EXCLUDED)	34
TABLE 5-9. CONTAMINANT COVERAGE FOR THE WQM COMPONENT	35
TABLE 6-1. ALERTS BY MONITORING STATION	45
TABLE 6-2. ALERTS BY WATER QUALITY PARAMETER	48
TABLE 6-3. ALERTS BY MONITORING LOCATION	50
TABLE 6-4. OBSERVED WATER QUALITY ANOMALY CAUSES AND DETECTIONS	53
TABLE 6-5. INCIDENT DETECTION PERCENTAGES BY NUMBER OF SITES OF POTENTIAL ALERTS	53
TABLE 7-1. TIME TO IMPLEMENT KEY ACTIVITIES DURING DRILL AND EXERCISE WQM ALERT INVESTIGATIONS .... 67
TABLE 7-2. SUMMARY OF DELAYS IN TIME TO DETECT	69
TABLE 8-1. AVERAGE ANNUAL PERCENTAGE DATA COMPLETENESS FOR WQM SENSORS	74
TABLE 8-2. WATER QUALITY PARAMETER ACCURACY RANGES	76
TABLE 8-3. AVERAGE PERCENTAGE ACCURACY FOR SENSORS	79
TABLE 8-4. CONCURRENT UNAVAILABILITY OF "X" NUMBER OF WQM STATIONS	83
TABLE 9-1. COST ELEMENTS USED IN THE CALCULATION OF TOTAL COST OF THE WQM COMPONENT	85
TABLE 9-2. IMPLEMENTATION COSTS FOR THE WQM COMPONENT	86
TABLE 9-3. ANNUAL O&M COSTS FOR THE WQM COMPONENT	87
TABLE 9-4. EQUIPMENT COSTS FOR THE WQM COMPONENT	87

-------
                            Section 1.0:  Introduction

The purpose of this document is to describe the evaluation of the online water quality monitoring (WQM)
component of the Cincinnati pilot, the first such pilot deployed under the U.S. Environmental Protection
Agency's (EPA) Water Security Initiative (WSI).  This evaluation was implemented by examining the
performance of the WQM component relative to the design objectives established for the contamination
warning system (CWS).

1.1    CWS Design Objectives

The Cincinnati CWS was designed to meet six overarching objectives, which are described in detail in
WaterSentinel System Architecture (USEPA, 2005) and are presented briefly below:

    •   Spatial Coverage.  The objective for spatial coverage is to monitor the entire population served
       by the drinking water utility. It depends on the location and density of monitoring points in the
       distribution system and the hydraulic connectivity of each monitoring location to downstream
       regions and populations. Metrics evaluated under this design objective include area coverage and
       population coverage.

    •   Contaminant Coverage. The objective for contaminant coverage is to provide detection
       capabilities for all priority contaminants. This design objective is further defined by binning the
       priority contaminants into 12 classes according to the means by which they might be detected
       (USEPA, 2005).  Use of these detection classes to inform design provides more comprehensive
       coverage of contaminants of concern than would be achieved by designing the system around a
       handful of specific  contaminants. Contaminant coverage depends on the specific data streams
       analyzed by each monitoring and surveillance component, as well as the  specific attributes of
       each component. Metrics evaluated under this design objective include contaminant detection
       potential, contamination scenario coverage, and contaminant detection threshold.

    •   Alert Occurrence.  The objective of this aspect of system design is to minimize the rate of
       invalid alerts (alerts unrelated to  contamination or other unusual water quality conditions) while
       maintaining the ability of the system to detect real incidents. It depends on the quality of the
       underlying data as well as the event detection systems that continuously analyze that data for
       anomalies. Metrics evaluated under this design objective include invalid alerts, valid alerts and
       alert co-occurrence.
    •   Timeliness of Detection and Response. The objective of this aspect of system design is to
provide detection of a contamination incident in a timeframe that allows for the implementation
of response actions that result in significant consequence reduction. Metrics associated with
       timeliness of detection and response include time for initial detection and time to investigate an
       alert. Timeliness of response is not addressed in  this report: it is covered under the consequence
       management and sampling and analysis components.

    •   Operational Reliability. The objective for operational reliability is to achieve a sufficiently high
       degree of system availability, data completeness, and data accuracy in  order to minimize the
       probability of missing a contamination incident.  Metrics evaluated under this design objective
       include data completeness, data accuracy and availability.
    •   Sustainability. The objective of this aspect of system design is to develop a CWS  that provides
       benefits to the utility and partner organizations while minimizing costs. This can be maximized
       by leveraging existing systems and resources. Furthermore, a design that results in dual-use
       applications that benefit the utility in day-to-day operations, while also providing the capability to

       detect intentional or accidental contamination incidents, will also improve sustainability.  Metrics
       evaluated under this design objective include life cycle costs, benefits and acceptability.

The design objectives provide a basis for evaluation of each component - in this case WQM - as well as
the entire integrated system. Because the deployment of a drinking water CWS is a new concept, design
standards or benchmarks are unavailable.  Thus, it was necessary to evaluate the performance of the pilot
CWS in Cincinnati against the design objectives relative to the baseline state of the utility prior to CWS
deployment.

1.2    Role of WQM in the Cincinnati CWS

Under the WSI, a multi-component design was developed to meet the above design objectives.
Specifically, the WSI CWS architecture utilizes four monitoring and surveillance components common to
the drinking water industry and public health sector: WQM, enhanced security monitoring (ESM),
customer complaint surveillance, and public health surveillance. Information from these four components
is integrated under a consequence management plan, which is supported by sampling and analysis
activities, to establish the credibility of possible contamination incidents and to inform response actions
intended to mitigate consequences.

As one of the  four monitoring and surveillance components, WQM is intended to provide early detection
of possible contamination incidents through monitoring for typical water quality parameters that have
been experimentally shown to change in the presence of harmful contaminants (Hall, et al., 2007). In
order to provide effective coverage throughout the distribution system, monitoring stations were installed
at  strategic locations selected with the aid of the utility's hydraulic model and sensor placement
optimization software. Data from these monitoring stations is collected at a central location and analyzed
in  real-time for anomalies that might be indicative of contamination.  If an  anomaly is detected, an alert is
generated and an investigation ensues to determine whether the alert can be explained by a known, benign
cause.  If it cannot, contamination is considered Possible and the Cincinnati Pilot Consequence
Management Plan is activated to determine the credibility of the incident and respond as appropriate.

1.3    Objectives

The overall objective of this report is to demonstrate how well the WQM component functioned as part of
the CWS deployed  in Cincinnati (i.e., how effectively the  component achieved the design objectives).
This evaluation will describe how the deployed WQM component could reliably detect a possible
drinking water contamination incident based on the operational strategy established for the Cincinnati
pilot. Although no known contamination incidents occurred during the pilot period, data collection
during  routine operation, drills and exercises and computer simulations yielded sufficient data to evaluate
performance of the  WQM component against each of the stated design objectives. In summary, this
document will discuss the approach for analysis of this information and present the results that
characterize the overall operation, performance, and sustainability of the WQM component of the
Cincinnati CWS.

1.4    Document Organization

This document contains the following sections:

    •  Section 2:  Overview of the WQM Component. This section introduces the WQM component
       of the Cincinnati CWS and describes each of the major design elements that make up the
       component. A summary of significant modifications to the component that had a demonstrable
       impact on performance is presented at the end  of this section.

•   Section 3: Methodology.  This section describes the data sources and techniques used to
    evaluate the WQM component.

•   Sections 4 through 9:  Evaluation of WQM Performance against the Design Objectives.
    Each of these sections addresses one of the design objectives listed in Section 1.1. Each section
    introduces the metrics that will be used to evaluate the WQM component against that design
    objective. Each supporting evaluation metric is discussed in a dedicated subsection, including an
    overview of the analysis methodology employed for that metric and discussion of the results.
    Each section concludes with a summary of WQM component performance relative to the design
    objective.

•   Section 10:  Summary and Conclusions. This section provides an overall summary of the
    WQM component evaluation, discusses limitations of the study, and describes potential
    additional applications.

•   Section 11:  References. This section lists all sources and documents cited throughout this
    report.

•   Section 12:  Abbreviations.  This section provides a list of acronyms approved for use in the
    WQM component evaluation.

•   Section 13:  Glossary. This  section defines terms used throughout the WQM component
    evaluation.

-------
           Section 2.0:  Overview of the WQM Component

The WQM component of the CWS deployed at the Greater Cincinnati Water Works (GCWW) was
operational by the end of 2007. A detailed description of the system at this point in the project can be
found in Water Security Initiative: Cincinnati Pilot Post-Implementation System Status (USEPA, 2008a).
During the next phase of the pilot, from January 2008 through June 2010, the system was evaluated and
modified in an effort to optimize performance.

The WQM component of the Cincinnati CWS consists of four design elements:
    1.  WQM Stations:  the sensors and ancillary systems that monitor water quality parameters at
       specific locations throughout the distribution system in real-time.
    2.  Data Collection System: the communication and data management system that captures the data
       from each WQM station and transfers it to the event detection system and a centralized data
       repository for further analysis and archiving.  The  data collection system also includes a user
       interface that displays  event detection system alerts and real-time data from each monitoring
       location.
    3.  Event Detection System: the computer hardware and software that continually analyzes the time-
       series WQM data for anomalies indicative of possible contamination.
    4.  Component Response  Procedures: the procedures involved in routine operation of the WQM
       component, including  the initial investigation of alerts.

The objectives for each of these WQM design elements are shown in Table 2-1 and were derived from
the overarching design objectives for the CWS presented in Section 1.1.

Table 2-1. WQM Design Objectives

Design Element: 1. WQM Stations
Description: Deploy monitoring stations consisting of a suite of water quality sensors that provide
broad contaminant coverage at locations in the distribution system that optimize spatial coverage and
timeliness of detection. The sensors and equipment used in the design of the WQM stations must function
within specifications and consistently produce accurate data. Proper instrument maintenance and routine
calibration are essential to meeting this design objective, and the utility must be able to sustain the
effort required to maintain the equipment.

Design Element: 2. Data Collection System
Description: Deploy a communication system that transfers data from remote monitoring stations to a user
interface for real-time monitoring, the event detection system, and a centralized data repository with
minimal delay (i.e., less than five minutes from the time of measurement) and a high degree of reliability.

Design Element: 3. Event Detection System
Description: Deploy an event detection system to continuously analyze the large amount of water quality
data produced by the water quality sensors to detect anomalies that may be indicative of contamination.
The event detection system should produce a minimal number of invalid alerts without missing significant
water quality anomalies, including possible contamination incidents.

Design Element: 4. Component Response Procedures
Description: Deploy procedures, roles, and responsibilities that support routine operation and the
systematic review of WQM alerts in an effective and efficient manner that is aligned with normal utility
activities to the extent possible.

The WQM design elements have been revised since they were presented in the Water Security Initiative:
Cincinnati Pilot Post-Implementation System Status report (USEPA, 2008a).  The "WQM Stations" and
"WQM Network" design elements from the System Status report were combined into the "WQM

Stations" design element. The "Data Management and Communications" design element from the
System Status report was renamed "Data Collection System".  Lastly, the "Water Quality Event
Detection" design element from the System Status report was divided into two design elements.  The first,
"Event Detection System", includes the hardware and software necessary to analyze time-series data for
water quality anomalies. The second, "Component Response Procedures", includes the procedures that
guide the investigation of WQM alerts. These changes were made in order to better align the design
elements with the performance metrics discussed in this report.

Sections 2.1 through 2.4 provide an overview of each of the four WQM design elements, with an
emphasis on changes to the component during the evaluation period. Section 2.5 summarizes all
significant modifications to the WQM component that are relevant to the interpretation of the evaluation
results presented in this report.

2.1    WQM Stations

The purpose of the WQM stations is to provide broad contaminant coverage by reliably and accurately
monitoring select water quality parameters in real-time. WQM stations are located throughout the
distribution system with the intent of optimizing spatial coverage and timeliness of detection.  The WQM
station designs were developed and locations were selected to satisfy these objectives.
2.1.1  WQM Station Design
The design of the WQM sensor stations includes selection of the specific water quality parameters to be
monitored, the sensors used to monitor each parameter, and the design of the monitoring station that
houses the sensors and ancillary equipment.  Based on empirical data from bench and pilot studies
demonstrating the response of typical water quality parameters to priority contaminants, the following
parameters were selected for the design of the WQM  stations: free chlorine residual; total organic carbon
(TOC); conductivity and pH.  Turbidity and temperature sensors were also incorporated into the
monitoring stations even though they are not expected to be reliable indicators of contamination (Hall, et
al., 2007). Some monitoring stations were also equipped with sensors for oxidation reduction potential
(ORP), which tracked  changes in the disinfectant residual and thus were used to corroborate trends
observed from the chlorine sensors.

In order to evaluate different types of instruments and sensors, three monitoring station prototypes were
installed which incorporated technology from different vendors.  The three prototypes are generically
referred to as Types-A, B, and C monitoring stations. The  Type-A and Type-B stations were installed
initially, and experience with these units was used to inform design of the Type-C stations. Table 2-2
lists the instrumentation included in each of these prototypes. It is important to note that this was a pilot
study and thus multiple instruments were chosen to provide information on a range of equipment. Also,
the performance of each instrument is particular to the water quality and application in the Cincinnati
pilot. Different results may be experienced by other utilities or on different waters.

One each of the Type-A and Type-B systems also included an s::can "carbo::lyser" TOC/turbidity
analyzer with a "con::stat" transmitter. The carbo::lyser is an optical spectral (visible-ultraviolet (UV)
range) instrument, as contrasted with the  standard, chemically-based methodology used by the Hach
Astro and GE-Sievers  TOC instruments.  It was included as a redundant TOC analyzer to allow for side-
by-side comparison of these two technologies.

Table 2-2. Instrumentation Included in Each of the WQM Station Prototypes

Parameter    | Type-A              | Type-B                                              | Type-C
pH           | Hach GLI pHD        | US Filter Depolox 3+; YSI 6500 multiparameter probe | Hach pHD sc
Conductivity | Hach GLI 3422       | YSI 6500 multiparameter probe                       | Hach D3422C3
Turbidity    | Hach 1720D          | YSI 6500 multiparameter probe                       | Hach 1720E
ORP          | Not included        | YSI 6500 multiparameter probe                       | Hach pHD/ORP sc
Temperature  | Hach GLI pHD        | YSI 6500 multiparameter probe                       | Hach pHD sc
Chlorine     | Hach chlorine-17    | US Filter Depolox 3+ (bare-electrode flow cell replaced by membrane-type flow cell in July 2008); YSI 6500 multiparameter probe | Hach chlorine-17
TOC          | Hach Astro 1950Plus | GE-Sievers 900                                      | GE-Sievers 900

All three WQM prototypes were designed as free standing systems mounted on casters for easy setup and
relocation.  They were neither hard-wired to a power source nor hard-piped to a water source or drain.
Each system includes an electric cord compatible with a standard 120 VAC power receptacle. Each
system is powered by an uninterruptable power supply (UPS) which provides approximately 24 hours of
operation of all instruments, a local programmable logic controller (PLC), and communication equipment
in the event the main power supply fails.

Each WQM station is equipped with a Normal/Calibrate switch which is used to indicate when a
monitoring station is being serviced or calibrated. The state of this switch is transmitted back to a
dedicated data collection system where it is used to suppress alerts from the event detection system, as
discussed in Section 2.3.
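
The suppression logic implied by the Normal/Calibrate switch can be reduced to a one-line rule, sketched
below; the function and argument names are illustrative, and the pilot's actual SCADA/CANARY integration
is more involved.

    # Sketch of alert suppression based on the Normal/Calibrate switch state.
    # Names are illustrative; the pilot's SCADA/CANARY integration is more involved.
    def should_raise_alert(eds_flagged_anomaly: bool, calibrate_switch_on: bool) -> bool:
        """Suppress alerts while the station is being serviced or calibrated."""
        return eds_flagged_anomaly and not calibrate_switch_on

    print(should_raise_alert(True, calibrate_switch_on=True))     # False: station in calibration
    print(should_raise_alert(True, calibrate_switch_on=False))    # True: alert passes through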

The Type-B prototype WQM stations were equipped with two types of chlorine and pH sensors; one set
of sensors was manufactured by U. S. Filter Depolox and the other by YSI. The intent of this design was
to provide a preliminary evaluation of the solid-state technology offered by YSI. The YSI chlorine
readings spiked and dipped on numerous occasions when compared to the U. S. Filter Depolox chlorine
readings, which were relatively constant. Technicians also found that the YSI chlorine calibration process
was tedious and overly-sensitive. YSI pH readings were consistently high, and technicians observed
during site visits that the Depolox pH readings on the sensor display were accurate.  In January 2007,
prior to the start of the formal evaluation period, the decision was made to move forward with only the
Depolox sensors and to  decommission the YSI chlorine and pH sensors. This opened a channel that
allowed temperature data to  be transmitted to the supervisory control and data acquisition (SCADA)
system,  consistent with the Type A and C prototypes. Also, an analog signal cable was routed from the
Depolox pH output to a spare analog input on the PLC, allowing Depolox pH to be monitored by the
SCADA and event detection systems.

In addition to the modification listed above, several changes were made to the WQM stations during the
evaluation period. First, the s::can carbo::lyser TOC sensors deployed at one Type A monitoring station
and one Type B monitoring  station were producing erratic TOC measurements that deteriorated with time
after periodic cleanings  of the lamp assembly. The inaccurate  readings were  due to excessive buildup of
aluminum oxide on the lamp assemblies, which was a product of a chlorinated water supply and the
relatively high pH of the water. The problem was first observed in September 2007.  The impact was
reduced with additional  cleanings but the issue persisted until May 2008, when the manufacturer agreed
to provide two stainless steel carbo::lysers in exchange for the aluminum-based units at no cost to the
utility. The new units were installed on May 19, 2008, requiring six hours of effort for installation, setup
and calibration.

The Hach temperature sensors deployed on nine Type C stations produced measurements that were
consistently 4.5 to 5.5 degrees higher than the values generated by temperature sensors on the Type A and
B stations. The inaccurate readings were  a result of the measurements being taken from water flowing
through the pH probe, which does not maintain a constant head pressure. The problem was first observed
in December 2007 and resolved on March 18, 2008 when the temperature inlets for the nine impacted
stations were switched from the pH probe to the conductivity probe. No additional parts were needed for
this modification, which required 6.8 hours of effort.

The U.S. Filter Depolox chlorine and pH  sensors deployed on five Type B stations were producing erratic
and inaccurate measurements that could not be attributed to calibration or other routine maintenance
activities. This problem was first experienced in January 2008. In March 2008, it was identified that the
relatively high pH of the distribution system water was incompatible with the upper pH tolerance of the
unit. The chlorine measurements were  also determined to be inaccurate because chlorine is pH-
compensated.  The issue was resolved on  July 16, 2008 when the US Filter-Depolox bare electrode probes
and flow cell assemblies were replaced with US Filter-Depolox membrane probes and flow cell
assemblies at all of the impacted stations. The manufacturer provided materials for four of the five
upgrades, and the total cost for materials at the remaining monitoring station was $1,700.  The
replacements required 29.5 hours of effort for installation, setup, and calibration of the new probes.

The Hach Astro TOC units at all three Type A stations were eventually taken offline due to a long  history
of erratic and inaccurate measurements. The first was taken offline in June 2009.  At this time, one of the
Type B stations had two TOC units  reading accurately (s::can and Sievers 900), so the s::can carbo::lyser
was moved to the Type  A monitoring station where the Hach TOC was removed.  The Hach Astro TOC
unit was taken offline at the next station in February 2010 and the last was taken offline in April 2010.
2.1.2  WQM Station Location Selection
For the Cincinnati CWS, the available sensor budget allowed for deployment of 17 WQM stations. Two
of these monitoring stations were installed at the two entry points to the distribution system to provide a
baseline for water quality  in the distribution system.  The remaining 15 monitoring stations were located
throughout the distribution system using the drinking water utility's distribution system model, updated in
2005, and the Threat Ensemble Vulnerability Assessment and Sensor Placement Optimization Tool
(TEVA-SPOT).

TEVA-SPOT uses a utility's distribution  system model to simulate tens of thousands of contamination
incidents throughout the distribution system and estimate the consequences associated with each.  The
resulting database of consequences is used to place a pre-defined number of monitoring stations at
locations that maximize public health protection across all of the simulated scenarios (USEPA, 2008b).
Potential monitoring locations were constrained to approximately 200 sites, including utility-owned
facilities  (e.g.,  pump stations), fire department stations, police department stations and a handful of other
government-owned buildings to which GCWW had access. From this pool of potential sites, TEVA-
SPOT produced candidate locations for the 15 remaining stations. These sites were inspected to verify
that the locations met established requirements including site security, 24/7 access for utility staff,
sufficient space for the monitoring equipment, an available 20-amp circuit, adequate water flow and
pressure and hydraulic residence time of less than one hour in the supply line to the facility. Alternate
locations were selected  if the original location was practically infeasible. Finally, a regret analysis was
performed to verify that the final design provided the desired coverage.
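
The placement logic can be illustrated with a simplified greedy selection over a table of simulated
scenario consequences; this is not the TEVA-SPOT algorithm (documented in USEPA, 2008b), and the scenario
data below are hypothetical.

    # Simplified greedy sensor placement in the spirit of the approach described
    # above (simulate scenarios, then pick stations that minimize remaining
    # consequences). NOT the TEVA-SPOT algorithm; data are hypothetical.
    # impact[s][loc] = consequences of scenario s if first detected at candidate loc
    # (None means the scenario never reaches that location).
    impact = {
        "s1": {"locA": 100, "locB": 400, "locC": None},
        "s2": {"locA": None, "locB": 250, "locC": 900},
        "s3": {"locA": 50,  "locB": None, "locC": 300},
    }
    UNDETECTED = 1000    # penalty when no selected station detects a scenario

    def total_consequence(selected):
        total = 0
        for s, by_loc in impact.items():
            detected = [by_loc[l] for l in selected if by_loc[l] is not None]
            total += min(detected) if detected else UNDETECTED
        return total

    candidates, selected = {"locA", "locB", "locC"}, set()
    for _ in range(2):                                    # budget of two stations
        best = min(candidates - selected, key=lambda l: total_consequence(selected | {l}))
        selected.add(best)
    print(selected, total_consequence(selected))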

The selected monitoring locations included utility facilities, fire department facilities, and a police
department facility.  These locations were spread throughout the distribution system and typically served
large downstream populations.

Only one change was made to the WQM locations during the evaluation period. In late 2007, one of the
monitoring stations located at a utility pumping station began to periodically produce erratic sensor
readings. Investigations revealed that the problem was caused by low pressure and intermittent flow to
the monitoring location:  operation of the pump station had been gradually reduced as more flow was
moved through a newer pump station located just a few blocks away.  To  remedy this situation, the WQM
station was moved to the new pump station on March 10, 2009 at a cost of $3,108 and approximately 40
hours of effort.

2.2     Data Collection System

The purpose of the data collection system is to receive and manage the raw data, provide this data to the
event detection system, display the data from each WQM station in real-time, and display alerts generated
by the event detection system when an unusual water quality condition is  detected.

In the Cincinnati pilot, data from the WQM stations is transmitted to a central location using a secure
digital cellular network. Two SCADA systems are used:  one was pre-existing and handles the water
quality and operations data collected outside of the pilot, and the second was implemented specifically for
this pilot project to manage and store the data generated by the WQM sensors.  In addition, this second
SCADA system provides data to and collects results from the CANARY event detection system,
described in Section 2.3, which is installed on a separate dedicated workstation.

Users access the data on SCADA workstations located throughout the utility via Human Machine
Interface (HMI) software. HMI screens, designed as part of this pilot project, allow users to view a
system map detailing the location and status of each monitoring station, as well as real-time values for all
data streams collected, instrument faults, and event detection system alerts. Through the HMI, operators
can also initiate remote sample collection at any monitoring station.

The data communication network includes many network security devices such as routers, switches, and
firewalls to ensure data security and integrity.  In addition, servers and user workstations are placed in
demilitarized zones (DMZs) in order to protect the utility's critical networks. Protected networks can
only communicate out through firewalls to these DMZs. This ensures that outside users do not have
direct access to these systems.

In general, the data collection system performed according to specification and met the performance
objectives for this design element of the system. Thus there were no significant modifications to this
design element.

2.3     Event Detection System

The purpose of a WQM event detection system is to analyze time-series water quality data to search for
anomalies indicative of possible contamination in real-time. If an anomaly is detected, an alert is
generated and sent to the SCADA HMI implemented as part of the pilot project to notify users, and staff
from various divisions in the utility participate in a joint investigation to determine the validity of the alert
based on component response procedures.

In the Cincinnati pilot, the CANARY event detection system, developed by the Sandia National
Laboratories in cooperation with USEPA's National Homeland Security Research Center, was deployed
for real-time monitoring. CANARY contains multiple algorithms (Hart et al., 2007) developed and tested
using empirical data relating water quality response to specific contaminants, as well as historic baseline
data from large water utilities.

Deployment of the event detection system at the Cincinnati pilot involved training CANARY on historic
water quality data from each monitoring station as well as establishing the procedures used to investigate
an alert. Initially, three months of water quality data from each of the seventeen monitoring stations was
used to train CANARY. The CANARY developers used this data to establish baseline water quality and
determine initial algorithm configuration settings for each monitoring station.

The Event Detection Deployment, Integration, and Evaluation System (EDDIES) is an application that
was developed to interface with the SCADA system and CANARY. EDDIES  supports CANARY by
collecting data from the SCADA systems in real-time, providing the data to CANARY and sending the
outputs from CANARY back through the SCADA system to be viewed by utility staff.
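
As an illustration of this kind of file-based hand-off, the sketch below polls an input directory for SCADA
exports, passes any non-empty batches to an event detection routine and writes results back. The directory
names, CSV layout and run_event_detection() placeholder are assumptions for illustration and do not
represent the actual EDDIES interface.

    # Hypothetical sketch of a file-based hand-off loop between SCADA and an
    # event detection system. Directory names, the CSV layout and
    # run_event_detection() are illustrative assumptions, not the EDDIES code.
    import csv, os, time

    INPUT_DIR = "scada_export"     # SCADA posts polled sensor values here
    OUTPUT_DIR = "canary_results"  # alert/no-alert results are written back here

    def run_event_detection(rows):
        """Placeholder for handing a batch of sensor readings to CANARY."""
        return [{"station": r["station"], "alert": False} for r in rows]

    def poll_once():
        for name in sorted(os.listdir(INPUT_DIR)):
            path = os.path.join(INPUT_DIR, name)
            with open(path, newline="") as f:
                rows = list(csv.DictReader(f))
            if rows:  # guard against empty exports
                results = run_event_detection(rows)
                out = os.path.join(OUTPUT_DIR, "result_" + name)
                with open(out, "w", newline="") as f:
                    writer = csv.DictWriter(f, fieldnames=["station", "alert"])
                    writer.writeheader()
                    writer.writerows(results)
            os.remove(path)  # consume the file so it is not reprocessed

    if __name__ == "__main__":
        os.makedirs(INPUT_DIR, exist_ok=True)
        os.makedirs(OUTPUT_DIR, exist_ok=True)
        while True:
            poll_once()
            time.sleep(60)  # assumed SCADA polling interval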

CANARY and EDDIES underwent several modifications over the course of the evaluation period, which
are described in detail in Table 2-5 and Section 6.1. The most effective changes were removing
sensor data streams from analysis that had a long history of sensor issues and updating the parameter
sensitivity variable in CANARY (described below). Forty-two hours of effort  were required for
CANARY configuration. Significant additional effort was required for debugging, implementing and
testing software updates.

An important configuration setting in CANARY is the parameter sensitivity assigned to each data stream,
which should reflect the smallest change that can be discriminated from normal instrument noise.
Practically, this represents the smallest true change in the parameter value that  could generate an alert.
When CANARY was initially implemented in real-time, the CANARY parameter sensitivity settings
were set to the manufacturer-specified sensor sensitivity. However, these did not represent actual
instrument performance, and invalid alerts occurred due to extremely small changes  in water quality
parameter values. Parameter sensitivity settings for all parameters were updated to reflect actual
instrument performance observed by GCWW staff.  For example, the parameter sensitivity setting for
chlorine sensors was changed from 0.01 to 0.1 mg/L, as a chlorine change of 0.01 mg/L was well within
the range of typical instrument noise. The final parameter sensitivity values for all parameter types are
shown in Table 2-3.

Table 2-3. Final CANARY Parameter Sensitivity Values

Parameter Type     Final Parameter Sensitivity Value
Chlorine           0.1
Conductivity       5
pH                 0.1
ORP                10
TOC                0.1
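
To illustrate how a parameter sensitivity value can be used, the minimal sketch below treats a change as
meaningful only if it equals or exceeds the threshold for that parameter. The thresholds mirror Table 2-3,
but the logic is a simplified stand-in and is not the CANARY algorithm.

    # Minimal sketch of how a parameter sensitivity threshold can gate alerting.
    # Thresholds mirror Table 2-3; the comparison logic is a simplified stand-in.
    SENSITIVITY = {"chlorine": 0.1, "conductivity": 5.0, "ph": 0.1,
                   "orp": 10.0, "toc": 0.1}

    def significant_change(parameter, baseline_value, new_value):
        """Return True only if the change exceeds normal instrument noise."""
        return abs(new_value - baseline_value) >= SENSITIVITY[parameter]

    # Example: a 0.01 mg/L chlorine change is treated as noise, 0.15 mg/L is not.
    print(significant_change("chlorine", 1.10, 1.11))  # False
    print(significant_change("chlorine", 1.10, 0.95))  # True
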
EDDIES and CANARY also caused significant downtime of the WQM component. The SCADA system
implemented as part of the pilot would periodically post files to the input source directory that contained
no data, and this caused EDDIES to lock up. Also, there was no prompt verifying that the user indeed
wanted to stop EDDIES, and there were several instances when staff accidentally hit a button while
browsing and EDDIES quit running. Following an EDDIES downtime event, CANARY required two to
three days of data collection before producing a valid output. These issues were first discovered in
October 2007 and were resolved on February 22, 2010, when new versions of EDDIES and CANARY
were installed; the fix required 48 hours of effort.

After the February 2010 installation of the new version of CANARY, the performance of the event
detection system was still not ideal. CANARY often alerted shortly after calibration events and would
not alert for significant water quality (WQ) anomalies. As a result, new CANARY configuration files
were developed that suppressed alerts shortly after calibration events, significantly reducing the rate of
invalid alerts. The updates to the CANARY configuration files required 80 hours of effort.

One significant issue remains at the time of writing. If the clocks on the SCADA system and the
EDDIES workstation are out of sync at all, CANARY output becomes erratic. Despite significant effort,
no robust solution to this issue has been found. GCWW has found that the issue is generally resolved if
the systems are reset and CANARY is restarted.
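
A simple watchdog can at least detect this condition by comparing incoming data timestamps against the
local clock. The sketch below is hypothetical; the 60-second tolerance and the restart_canary() hook are
illustrative assumptions, not part of the deployed system.

    # Hypothetical watchdog sketch: flag clock drift between incoming SCADA data
    # timestamps and the event detection workstation's clock.
    from datetime import datetime, timezone

    MAX_SKEW_SECONDS = 60  # assumed tolerance

    def check_skew(latest_data_timestamp: datetime) -> bool:
        """Return True if the data timestamp and local clock agree within tolerance."""
        now = datetime.now(timezone.utc)
        skew = abs((now - latest_data_timestamp).total_seconds())
        return skew <= MAX_SKEW_SECONDS

    def restart_canary():
        print("Clock skew detected: reset systems and restart CANARY.")

    sample = datetime(2010, 2, 22, 14, 0, tzinfo=timezone.utc)
    if not check_skew(sample):
        restart_canary()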

2.4    Component Response Procedures

When an event detection system alert indicates unusual conditions at a particular monitoring station, the
pilot utility follows procedures that guide the initial investigation into potential causes of the alert
(USEPA, 2008c).  Component response procedures established a process flow, roles and responsibilities,
information flow paths and checklists to provide a systematic process for reviewing relevant information
about the possible cause of the alert. Specifically the following checks were performed:

   •   Treatment plant data is reviewed to determine if the alert was triggered by plant water quality
       changes.
   •   Distribution system operations are reviewed to determine whether recent changes in pumping,
       tank/reservoir operations or other changes in the hydraulic operations of the system caused the
       water quality change that generated the alert.
   •   Distribution system maintenance activities are reviewed to determine  whether ongoing problems
       or repairs in the system  caused the water quality change that generated the alert.
   •   Maintenance logs are reviewed for the alerting monitoring station.
   •   The monitoring station is inspected to determine whether instrument malfunction is responsible
       for the alert.

Several utility divisions and personnel are  involved in the  investigation of a WQM alert, and Table 2-4
describes the role of various  utility users. If the initial investigation does not reveal an obvious cause,
contamination is considered Possible and the investigation is turned over to the Water Utility Emergency
Response Manager, who will take additional steps to determine whether contamination is credible.

While no major changes were made in the component response procedures during the evaluation period,
the process underwent several revisions based on the results of drills and exercises and experience with
routine operation of the WQM component. Most modifications to the component response procedures
involved clarifying roles  and responsibilities and streamlining the investigation process.

Table 2-4. Roles and Responsibilities under the WQM Component Response Procedures

Water Utility Emergency Response Manager
   •  Assume the lead in the credibility determination process, as outlined in the Cincinnati Pilot
      Consequence Management Plan, once a possible contamination incident has been reported

Water Quality & Treatment Chemist¹
   •  Lead investigation of WQM alerts
   •  Coordinate with System Operators and the Distribution Dispatcher to determine if operations or
      maintenance activities caused the alert
   •  Review relevant data maintained by Water Quality & Treatment
   •  Assess all information compiled during the investigation to determine if the alert is valid
   •  Notify the Emergency Response Manager of possible contamination incidents

Water Quality & Treatment Technician
   •  Assist the Water Quality & Treatment Chemist during investigation of WQM alerts
   •  Inspect monitoring stations that have detected unusual water quality, perform field verification of
      water quality sensor readings, and collect samples from the site

System Operator
   •  Receive the initial alert and notify the Water Quality & Treatment Chemist
   •  Support the alert investigation by reviewing operational data and assessing whether system
      operations might be the cause of the alert

Distribution Dispatcher
   •  Support the alert investigation by reviewing maintenance activities in the distribution system and
      assessing whether these activities might be the cause of the alert

Treatment Supervisor/Senior Plant Supervisor
   •  Support the investigation of alerts by reviewing operational data and assessing whether system
      operations might be the cause of the alert

¹ During off-hours or if the Water Quality & Treatment Chemist is unavailable, the Water Quality &
Treatment Shift Chemist assumes these responsibilities. If neither of these is available, the Plant
Supervisor leads an abbreviated investigation.

2.5    Summary of Significant WQM Component Modifications

The modifications discussed in the previous subsections were implemented to improve the performance
of the WQM component. The impact of these component modifications on performance can be observed
in the metrics used to evaluate the degree to which the component met the design objectives described in
Section 1.1.  Table 2-5 summarizes these modifications and will serve as a reference when discussing the
results of the evaluation presented in Sections 4.0 through 9.0.

Table 2-5. Sequential Listing of WQM Component Modifications

ID 1 - Event Detection System (May 16, 2008)
   Modification: The CANARY configuration settings were changed for all stations.
   Cause: The number of alerts was unacceptable.

ID 2 - Monitoring Stations (May 19, 2008)
   Modification: The aluminum housings for the s::can carbo::lysers were switched to stainless steel
   housings at one Type A and one Type B monitoring station.
   Cause: The aluminum housings were producing erratic TOC measurements resulting from a buildup of
   aluminum oxide on the lamp assemblies, which was caused by the relatively high pH of the chlorinated
   distributed water.

ID 3 - Monitoring Stations (July 16, 2008)
   Modification: US Filter-Depolox bare electrode probes and flow assemblies were decommissioned and
   replaced with US Filter-Depolox membrane probes and flow assemblies at five Type B WQM stations.
   Cause: The sensors were producing erratic and inaccurate chlorine and pH measurements because of
   the relatively high pH of the utility's water, which was higher than the upper pH tolerance of the sensor.

ID 4 - Event Detection System (October 29, 2008)
   Modification: The CANARY sensor settings (parameter sensitivity, max value, and min value) were
   changed.
   Cause: The number of alerts was unacceptable.

ID 5 - Event Detection System (November 13, 2008)
   Modification: The TOC data streams were removed from analysis for eight stations.
   Cause: The number of alerts triggered by the inaccurate TOC data was unacceptable.

ID 6 - Event Detection System (February 23, 2009)
   Modification: Removed problematic data streams from two sites and changed configuration settings at
   two other sites with high false alert rates.
   Cause: The number of alerts was unacceptable.

ID 7 - Monitoring Stations (March 10, 2009)
   Modification: One of the monitoring stations at a utility pump station was moved to a newer pump
   station located a few blocks away. Both pump stations serve the same general area of the distribution
   system.
   Cause: Operation of the first pump station had been gradually reduced by the utility as more flow was
   moved through the newer pump station several blocks away. The sensors at the original pump station
   location were increasingly producing erratic readings because of low pressure and intermittent flow.

ID 8 - Event Detection System (May 18, 2009)
   Modification: The CANARY parameter sensitivity values were returned to previous values.
   Cause: The values were accidentally changed during the recent installation of the new version of
   EDDIES.

ID 9 - Monitoring Stations (June 2, 2009 - April 2010)
   Modification: The Hach Astro TOC units were taken offline at three Type A WQM stations.
   Cause: There was a long history of erratic and inaccurate measurements.

ID 10 - Event Detection System (February - March 2010)
   Modification: New versions of EDDIES and CANARY were installed. A series of updates were
   installed as various bugs were encountered.
   Cause: Software updates and an unacceptable number of alerts.

Figure 2-1 presents a summary timeline for deployment of the WQM component, including milestone
dates indicating the occurrence of significant component modifications and drills and exercises. The
timeline also shows the completion date for design and implementation activities, followed by a transition
period during which a few stations at a time were added to active monitoring until June 2009, when the
full system was being monitored in real-time.

Figure 2-1. Summary Timeline of WQM Component Deployment
[Timeline graphic showing component milestones: s::can carbo::lyser housing changes (May-08); USF
Depolox bare electrode probes replaced with membrane probes (Jul-08); TOC data streams removed at 8
stations (Nov-08); monitoring station at pump station moved to new pump station nearby (Mar-09); all 3
Hach TOC units decommissioned (Jun-09 - Apr-10). Deployment phases: Design & Implementation
complete (Jan-08); Optimization (Jan-08 - Dec-08); Transition (Dec-08 - Jun-09); Real-time Monitoring
(Jun-09 - Jun-10); End of Data Collection (Jun-10).]



                           Section 3.0:  Methodology

The following section describes five evaluation techniques and data sources that were used to fully
evaluate the performance of the WQM component against the design objectives described in Section 1.1:
empirical data from routine operations, results from drills and exercises, results from bench-scale
contaminant studies, results from computer simulations of the Cincinnati CWS and findings from forums
such as lessons learned workshops.

3.1    Analysis of Empirical Data from Routine Operations

This evaluation includes data on the performance, operation, and sustainability of the WQM component
from January 16, 2008 to June  15, 2010. Metrics presented in a time-series format include data
summarized on a month-to-month basis, and illustrate fluctuating trends in data over time.  In this
evaluation, the term "reporting period" is used to refer to a month of metrics data which spans from the
16th of one month to the 15th of the next month. Thus, the January 2008 reporting period refers to the data
collected between January 16th, 2008 and February 15th, 2008. This time-series analysis is used to
characterize the effectiveness of refinements made during the evaluation period.
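
The reporting period convention can be expressed compactly in code. The following sketch, with an
illustrative function name, maps a calendar date to its reporting period label.

    # Sketch of the reporting period convention described above: a reporting
    # period runs from the 16th of one month through the 15th of the next and is
    # labeled with the month in which it starts.
    from datetime import date

    def reporting_period(d: date) -> str:
        """Return the label (e.g., 'January 2008') of the period containing d."""
        year, month = d.year, d.month
        if d.day < 16:            # days 1-15 belong to the previous month's period
            month -= 1
            if month == 0:
                month, year = 12, year - 1
        return date(year, month, 1).strftime("%B %Y")

    print(reporting_period(date(2008, 1, 16)))  # January 2008
    print(reporting_period(date(2008, 2, 15)))  # January 2008
    print(reporting_period(date(2008, 2, 16)))  # February 2008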

The following design objectives were evaluated using empirical data collected during the evaluation
period from the utility: operational reliability, alert occurrence and sustainability.  Raw data produced by
the water quality sensors and the output of the CANARY event detection system were evaluated. In
addition, information about alerts and subsequent investigations was gathered from investigation
checklists, which were completed for each WQM alert received by the utility during the evaluation
period.

3.2    Drills  and Exercises

Drills and Full Scale Exercises served a variety of purposes, both for optimization of the  system and for
evaluation. Benefits included:
    •  Providing utility staff with the opportunity to practice alert investigation procedures associated
       with recognition of and response to WQM alerts.
    •  Providing an opportunity to identify which portions of the component response procedures
       required modification to be more representative of preferred investigation and communication
       procedures.
    •  Allowing for evaluation of the timeliness of detection and response design objective.

Four drills and exercises were conducted for the purpose of evaluating the WQM design objectives.
These are discussed below and include:

    •  WQM Drill 1 (July 14, 2008)
    •  CWS Full Scale Exercise (October 1, 2008)
    •  WQM Drill 2 (February 25, 2009)
    •  WQM Drill 3 After-Hours (April 29, 2009)

3.2.1  WQM Drill 1 (July 14,  2008)

Description: The scenario for the first WQM drill was an initial alert caused by changes  in chlorine and
conductivity, followed by an alert triggered by a change in TOC 30 minutes later.

Relevant Participants:
•   GCWW WUERM, GCWW Water Quality & Treatment Division
•   GCWW Water Quality & Treatment Chemist and Shift Chemist, GCWW Water Quality & Treatment
    Division
•   GCWW Water Quality & Treatment Technician, GCWW Water Quality & Treatment Division
•   GCWW System Operator, GCWW Distribution Division
•   GCWW Plant Supervisor, GCWW Distribution Division
•   GCWW Distribution Dispatcher, GCWW Distribution Division

This drill revealed several opportunities  for improvement to the alert investigation process.  The post-drill
discussion also revealed some aspects of the component response procedures and site characterization
sections of the Cincinnati Pilot Consequence Management Plan that should be revised to be consistent
with utility procedures.  Overall the evaluators observed that participants did a good job implementing the
component response procedures given that this was the first drill.
3.2.2   CWS Full Scale Exercise (October 1, 2008)

Description: The purpose of this Full Scale Exercise was to allow GCWW and local response partner
agencies to exercise their protocols for detecting and responding to a possible drinking water
contamination incident. The exercise incorporated all components of the CWS.  Several WQM alerts
were simulated at different monitoring stations at different times. The overall impression of the
evaluators was that utility personnel successfully followed the component response procedures for
investigating and responding to a WQM alert.

Relevant Participants:
•   GCWW WUERM, GCWW Water Quality & Treatment Division
•   GCWW Water Quality & Treatment Chemist, GCWW Water Quality & Treatment Division
•   GCWW Water Quality & Treatment Technician, GCWW Water Quality & Treatment Division
•   GCWW System Operator, GCWW Distribution Division
•   GCWW Plant Supervisor, GCWW Distribution Division
•   GCWW Distribution Dispatcher, GCWW Distribution Division

3.2.3   WQM Drill 2 (February 25, 2009)
Description: The objective of the second WQM drill was to evaluate changes made to the component
response procedures based on results from the first WQM  drill and the Full Scale Exercise.  Multiple
WQM alerts were  simulated at various times from several  utility facilities. While evaluators felt that the
System Operator and GCWW WUERM effectively implemented response procedures, training needs
were identified for the GCWW Water Quality & Treatment Shift Chemist and the GCWW Water Quality
& Treatment Technician. This was expected given that this was the first time that these staff were asked
to perform these activities during a simulated contamination incident. This drill served as a training
activity for staff with limited prior experience in implementation of the WQM component response
procedures.

Relevant Participants:
•   GCWW WUERM, GCWW Water Quality & Treatment Division
•   GCWW Water Quality & Treatment Shift Chemist, GCWW Water Quality & Treatment Division
•   GCWW Water Quality & Treatment Technician, GCWW Water Quality & Treatment Division
•   GCWW System Operator, GCWW Distribution Division
•   GCWW Plant Supervisor, GCWW Distribution Division
•   GCWW Distribution Dispatcher, GCWW Distribution Division

3.2.4   WQM Drill 3 (After Hours) (April 29, 2009)

Description: The WQM After-Hours Drill was conducted to evaluate implementation of the component
response procedures during non-business hours.  An alert at a WQM station was simulated after normal
business hours and actions of the utility personnel responsible for participating in the investigation of the
alert were observed. The overall impression of the  evaluators was that all utility personnel involved did
an excellent job implementing the component response procedures.

Relevant Participants:
•   GCWW WUERM, GCWW Water Quality & Treatment Division
•   GCWW Water Quality & Treatment Shift Chemist, GCWW Water Quality & Treatment Division
•   GCWW Water Quality & Treatment Technician, GCWW Water Quality & Treatment Division
•   GCWW System Operator, GCWW Distribution Division
•   GCWW Distribution Dispatcher, GCWW Distribution Division

3.3     Bench-scale Contaminant Studies

Bench-scale studies were performed to quantify the response of the water quality parameters monitored
by the WQM component to specific contaminants over a range of concentrations.  These experiments
were performed on finished water from the two treatment plants operated by GCWW: the source for the
plant that supplies the majority of the distribution system is surface water, while that for the smaller plant
is groundwater. Chlorine is the residual disinfectant for both treatment plants.

Fresh aliquots of finished water from each treatment plant were collected. The water quality parameters
were measured, and then the water was incrementally dosed with the contaminant under evaluation. In
most experiments, five incremental doses were evaluated,  with concentrations ranging from less than 1
mg/L to more than 50 mg/L. Actual concentration ranges were contaminant dependent. After each dose,
the aliquot of test water was allowed to mix for two minutes after which the water quality parameters
were re-measured.

The change in each water quality parameter was plotted as a function of concentration for each
contaminant, and the following equation forms were applied to the data: linear, binomial, exponential and
logarithmic. The best equation form was selected to model each correlation, considering both the
correlation coefficient and the applicability of the model beyond the range of the empirical data. In
most cases a linear equation was used to model the  correlation between water quality parameter response
and contaminant concentration. The results from the bench-scale contaminant studies were used to
evaluate the contaminant coverage design objective and to parameterize the CWS simulation study,
discussed next.
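
The curve-fitting step can be illustrated with a short sketch that fits several candidate equation forms and
compares their goodness of fit. The dose-response values below are invented placeholders (not bench-scale
results from this study), and the exponential form is omitted for brevity.

    # Illustrative sketch of the curve-fitting step: fit several candidate
    # equation forms to (dose, parameter change) data and compare R^2 values.
    # The dose/response numbers are invented placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    dose = np.array([0.5, 1.0, 5.0, 10.0, 50.0])           # mg/L (placeholder)
    delta_cl = np.array([-0.05, -0.1, -0.5, -0.9, -4.2])    # chlorine change (placeholder)

    forms = {
        "linear":      lambda x, a, b: a * x + b,
        "binomial":    lambda x, a, b, c: a * x**2 + b * x + c,
        "logarithmic": lambda x, a, b: a * np.log(x) + b,
    }

    def r_squared(y, y_fit):
        ss_res = np.sum((y - y_fit) ** 2)
        ss_tot = np.sum((y - np.mean(y)) ** 2)
        return 1.0 - ss_res / ss_tot

    for name, f in forms.items():
        params, _ = curve_fit(f, dose, delta_cl)
        print(name, round(r_squared(delta_cl, f(dose, *params)), 4))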

3.4     Simulation Study

Evaluation of certain design objectives relies on the occurrence of contamination incidents with known
and varied characteristics. Because contamination incidents are extremely rare, there is insufficient
empirical data available to fully evaluate the detection capabilities of the Cincinnati  CWS. To fill this
gap, a computer model of the Cincinnati CWS was  developed and challenged with a large ensemble of
simulated contamination incidents in a simulation study (Allgeier et al., 2009).  These incidents varied
with respect to the contaminant used, the simulated injection location, the injection time, and the
quantity and speed of contaminant injection.

For the WQM component, simulation study data was used to evaluate the following  design objectives:

    •  Contaminant Coverage: Analyses conducted for this design objective quantify the ratio of
       contamination scenarios actually detected by the WQM component versus those that could
       theoretically be detected.

    •  Spatial Coverage:  Spatial coverage was indirectly investigated by considering extent of
       contamination spread for contamination scenarios.

    •  Alert Occurrence:  Analyses conducted for this design objective characterize valid alerts, as well
       as clusters of alerts involving multiple monitoring stations.

    •  Timeliness of Detection: Analyses conducted to evaluate this design objective quantify the time
       between the start of contaminant injection and the first WQM alert.

A broad range of contaminant types, producing a range of symptoms, was utilized in the simulation study
to characterize the detection capabilities of the monitoring and surveillance components of a CWS. For
the purpose of the simulation study, a representative set of 17 contaminants was selected from the
comprehensive contaminant list that formed the basis for CWS design.  These contaminants are grouped
into the broad categories listed below (the number in parentheses indicates the number of contaminants
from that category that were simulated during the study).
    •  Nuisance Chemicals (2): these chemical contaminants have a relatively low toxicity and thus
       generally do not pose an immediate threat to public health. However, contamination with these
       chemicals can make the drinking water supply unusable.
    •  Toxic Chemicals (8): these chemicals are highly toxic and pose an acute risk to public health at
       relatively low  concentrations.
    •  Biological Agents (7): these contaminants of biological origin include pathogens and toxins that
       pose a risk to public health at relatively low concentrations.

Development of a detailed CWS model required extensive data collection and documentation of
assumptions regarding component and system operations. Model decision logic and parameter values
were developed from data generated through operation of the Cincinnati CWS, as well as input from
subject matter experts  and available research.

The  simulation study used several interrelated models, three of which are relevant to the evaluation of
WQM: EPANET, the Health Impacts and Human Behavior (HI/HB)  model and the WQM component
model. The function of each of these models, and their relevance to the evaluation of WQM, is discussed
below.

EPANET
EPANET is a hydraulic and water quality modeling application widely used in the water industry to
simulate contaminant transport through a drinking water distribution system.  In the simulation study, it
was used to produce contaminant concentration profiles at every node in GCWW's distribution system
model (which was calibrated for 2005  summer weeks), based on the characteristics of each contamination
scenario in the ensemble. The concentration profiles were used to determine the number of miles of pipe
contaminated during each scenario, which is one measure of the consequences of that contamination
scenario. EPANET was also used by the WQM component model to generate alerts at the monitoring
stations.

The HI/HB model used the concentration profiles generated by EPANET to simulate exposure of
customers in the utility's service area to contaminated drinking water. Exposures occurred during one
showering event in the morning (for the inhalation exposure route) or during five consumption events
spread throughout the day (for the ingestion exposure route). The dose received during exposure events
was used to predict infections, onset of symptoms, health-seeking behaviors of symptomatic customers
and fatalities.

The primary output from the HI/HB model was a case table of affected customers, which captured the
time at which each transitioned to mild, moderate, and severe symptom categories. Additionally, the
HI/HB model outputted the times at which exposed individuals would pursue various health-seeking
behaviors, such as visiting their doctor or calling the poison control center. The case table was used to
determine the public health consequences  of each scenario, specifically the total number of illnesses and
fatalities. Furthermore, EPANET and the HI/HB model were run twice for each scenario: once without
the CWS in operation and once with the CWS in operation. The paired results from these runs were used
to calculate the reduction in consequences due to CWS operations for each simulated contamination
scenario.

WQM Component Model
The WQM component model is based on the component as deployed and operated in the Cincinnati
CWS.  The WQM model consists of three  modules, which are described below: a Contamination
Simulator, the CANARY event detection system, and an Alert Validation module. The primary inputs to
the WQM component model are the contaminant concentration profiles at each monitoring station for
each contamination scenario, baseline water quality data collected from the sensors deployed at each
monitoring station over the evaluation period, and contaminant properties.

As described in Section 5.1, bench-scale studies demonstrated that all 17 contaminants evaluated in the
simulation study produced a measurable change in a measured water quality parameter at a sufficiently
high concentration. Thus, all scenarios in  the simulation study were considered theoretically detectable
by WQM.

The Contaminant Simulator used the results from these bench-scale studies, along with the contaminant
concentration profiles generated by EPANET, to predict the change in water quality that would
correspond to the concentration profile for the specific contaminant used in the scenario (Allgeier, et al.,
2011). The resulting water quality change profile was superimposed on the baseline water quality data
for that monitoring station, as shown in Figure 3-1.  The simulated dataset shown is identical to the
baseline data shown in blue except for the  short time period on September 1, 2011 where the chlorine dips
down during the simulated contamination  event.
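
Conceptually, the superposition step amounts to adding a predicted water quality change profile to the
baseline sensor record, as in the sketch below. The arrays are small placeholders; in the study, the change
profiles came from EPANET concentrations mapped through the bench-scale response curves.

    # Conceptual sketch of the superposition step: add a simulated water quality
    # change profile to baseline sensor data to build the time series fed to the
    # event detection system. The arrays are small placeholders.
    import numpy as np

    baseline_cl = np.array([1.10, 1.08, 1.11, 1.09, 1.10, 1.12])  # mg/L, successive readings
    # Chlorine change predicted for the contaminant concentration profile at this
    # station (zero before and after the simulated plume passes).
    delta_cl = np.array([0.00, 0.00, -0.45, -0.60, -0.20, 0.00])

    simulated_cl = np.clip(baseline_cl + delta_cl, 0.0, None)  # chlorine cannot go negative
    print(simulated_cl)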

Figure 3-1. Superposition of a Contamination Incident on Baseline Water Quality Data
[Time-series plot (8/30 - 9/1) comparing baseline water quality data with water quality during the
simulated contamination event.]

The following assumptions used in the design of the Contaminant Simulator are important to consider
when evaluating the simulation study results presented in this report:

    •  The period of baseline data used in the simulation study was selected to ensure that no invalid
       alerts would be generated during the study (though there were a few instances where CANARY
       produced an invalid alert, an artifact of using external software).

    •  As described in Section 5.1, for five of the biological agents, it was assumed that a co-
       contaminant was injected in order to maintain the potency of the biological material, and the co-
       contaminant produced the water quality response.

The modified water quality profiles generated by the Contaminant Simulator provided the input for
CANARY. CANARY analyzed this time series data for each WQM station and each scenario.  When
CANARY detected an anomaly, it generated an alert along with a list of the water quality parameters that
contributed to the alert.

Alerts generated by CANARY were inputted into the WQM Alert Validation module, which is modeled
on the procedures used by utility personnel to investigate a WQM alert. Investigators assess the water
quality parameters that changed (e.g., TOC or chlorine), check for hydraulic connectivity if more than one
alert has occurred, and review distribution system operations and work activities. The investigation also
involves a site inspection at the monitoring station that generated the alert in order to evaluate whether the
equipment is functioning properly. Relevant assumptions used in the WQM Alert Validation module are
listed below, and a sketch of this decision logic follows the list. Note that these may not precisely reflect
GCWW's actions.

    •  All WQM alerts are found to be valid (not due to a benign cause such as distribution system work
       activities)  and due to changes in the water quality parameters impacted by the particular
       contaminant used in the scenario.

    •  All alerts triggered by a change in chlorine or TOC are fully investigated. This assumption is
       based on the observation that the Cincinnati utility staff place more weight on these two
       parameters when considering possible contamination because of past experience with these
       parameters. An alert investigation can be terminated before completion if there is only a single
       alert and that alert is not due to a change in either chlorine or TOC.

    •  All sensors are determined to be reading correctly during the site inspection. Shortly after
       completion of the site inspection,  contamination is considered Possible.

    •  If two or more WQM alerts  are generated, the investigator will conclude that they are
       hydraulically connected (they must be connected based on the study design which is based on the
       Cincinnati distribution system model).  Thus, contamination is considered Possible shortly after
       receipt of a second alert.
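
The sketch below expresses this decision logic in code under the assumptions listed above. Field names
and the simple string outcomes are illustrative; this is not the actual simulation study implementation.

    # Sketch of the Alert Validation decision logic implied by the assumptions
    # above. Field names and outcomes are illustrative.
    def validate_alerts(alerts):
        """alerts: list of dicts like {"station": "WQM-07", "parameters": ["chlorine"]}."""
        if not alerts:
            return "no alert"
        if len(alerts) >= 2:
            # Multiple alerts are assumed hydraulically connected in the study,
            # so contamination is considered Possible shortly after the second alert.
            return "Possible contamination"
        only = alerts[0]
        if {"chlorine", "toc"} & set(p.lower() for p in only["parameters"]):
            # A single chlorine or TOC alert is fully investigated; sensors are
            # assumed to read correctly, so the outcome is Possible contamination.
            return "Possible contamination"
        return "investigation terminated"

    print(validate_alerts([{"station": "WQM-07", "parameters": ["conductivity"]}]))
    print(validate_alerts([{"station": "WQM-07", "parameters": ["chlorine"]}]))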

3.5    Forums

Feedback and suggestions from utility personnel on all aspects of the WQM component were captured
during the forums listed below.  Information gathered through these forums provided insight regarding
acceptability of the component to end users, as well as lessons learned from routine operations and
recommendations for other utilities interested in implementing a CWS.  Results from the forums were
used to evaluate the sustainability of the system, particularly the benefits of implementing a CWS.

    •  Quarterly WQM Component Meetings: Quarterly WQM component meetings were held
       throughout the evaluation period.  These meetings were attended by EPA and utility personnel
       and a team of contractors. Component design, functionality, and modifications were discussed
       during these meetings, including the component modifications listed in Table 2-5.

    •  WQM Lessons Learned Workshop: A workshop was held on August 31, 2009 to capture
       lessons learned from the Cincinnati pilot through interactive discussions, and to elicit feedback
       regarding how these lessons learned could be incorporated into guidance and tools. Utility
        personnel provided a detailed assessment of the strengths and weaknesses of the tools, equipment,
        and systems used in the WQM component over the course of the pilot (e.g., sensors, TEVA-SPOT,
        SCADA, CANARY and the component response procedures).

    •  WQM Exit Interview: An exit interview was held on August 18, 2010 to discuss the future of
       the WQM component at the Cincinnati pilot and to capture additional lessons learned since the
        workshop in August 2009. The utility indicated that it would continue to operate and maintain
        the WQM component, discussed the dual-use benefits of the CWS (presented in Section 9.0), and
        provided advice for new utilities considering CWS deployment.

3.6    Analysis of Costs

A systematic process was used to evaluate the overall cost of the WQM component over the 20-year
lifecycle of the Cincinnati CWS. The analysis includes implementation costs, component modification
costs, annual operations and maintenance  (O&M) costs, renewal and replacement costs and the salvage
value of major pieces of equipment at the  end of the lifecycle.

Implementation costs include labor and other expenditures (equipment, supplies and purchased services)
for installing the WQM component.  Implementation costs were summarized in Water Security Initiative:
Cincinnati Pilot Post-Implementation System Status (USEPA, 2008a), which was used as a primary data
source for this analysis. In that report, overarching project management costs incurred during the
implementation process were captured as  a separate line  item. However, in this analysis, the project
management costs were equally distributed among the six components of the CWS and are presented as a
separate line item for each component.

Component modification costs include all labor and expenditures incurred after the completion of major
implementation activities in December 2007 that were not attributable to O&M costs.  These modification
costs were tracked on a monthly basis, summed at the end of the evaluation period and added to the
overall implementation costs.

It should be noted that implementation costs for the Cincinnati CWS may be higher than those for other
utilities given that this project was the first comprehensive, large-scale CWS of its kind and had no
experience base to draw from. Costs that would not likely apply to future implementers (but which
were incurred for the Cincinnati CWS) include overhead for EPA and its contractors, costs associated
with deploying alternative designs, and additional data collection and reporting requirements. Other
utilities planning for a similar large-scale CWS installation would have the benefit of lessons learned
and an experience base developed through implementation of the Cincinnati CWS.

Annual O&M costs include labor and other expenditures (supplies and purchased services) necessary to
operate and maintain the component and investigate alerts. O&M costs were obtained from procurement
records, maintenance logs, investigation checklists and training logs.  Procurement records provided the
cost of supplies, repairs, and replacement parts, while maintenance  logs tracked the staff time spent
maintaining the WQM component. To account for the maintenance of documents, the cost incurred to
update documented procedures following drills and exercises conducted during the evaluation phase of
the pilot was used to estimate the annualized cost. Investigation checklists and training logs tracked the
staff hours spent on investigating alerts and training, respectively. The total O&M costs were annualized
by calculating the sum of labor and other expenditures incurred over the course of a year.

Labor hours for both implementation and O&M were tracked over the entire evaluation period.  Labor
hours were converted to dollars using estimated local labor rates for the different institutions involved in
the implementation or O&M of the WQM component.

The renewal and replacement costs are based on the cost of replacing major pieces of equipment at the
end of their useful life. The useful life of WQM equipment was estimated using field experience,
manufacturer-provided data and input from subject matter experts.  Equipment was assumed to be
replaced at the end of its useful life over the 20-year lifecycle of the Cincinnati CWS.  The salvage value
is based on the estimated value of each major piece of equipment at the end of the lifecycle of the
Cincinnati CWS. The salvage value was estimated for all equipment with an initial value greater than
~$1,000. Straight line depreciation was used to estimate the salvage value for all major pieces of
equipment based on the lifespan of each item.

All of the cost parameters described above (implementation costs, component modification costs, O&M
costs, renewal and replacement costs, and salvage value) were used to calculate the total cost for the
WQM component, as presented in Section 9.1.
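
The cost roll-up described above can be summarized in a short sketch. All dollar figures and lifespans
below are placeholders used only to show the arithmetic, including straight-line depreciation for salvage
value; they are not results from this evaluation.

    # Sketch of the lifecycle cost roll-up described above, with straight-line
    # depreciation for salvage value. All dollar figures and lifespans are
    # placeholders, not results from this evaluation.
    LIFECYCLE_YEARS = 20

    def salvage_value(purchase_cost, lifespan_years, years_in_service):
        """Straight-line depreciation; value floors at zero at end of lifespan."""
        remaining = max(lifespan_years - years_in_service, 0)
        return purchase_cost * remaining / lifespan_years

    def lifecycle_cost(implementation, modifications, annual_om,
                       renewal_and_replacement, salvage):
        return (implementation + modifications
                + annual_om * LIFECYCLE_YEARS
                + renewal_and_replacement
                - salvage)

    # Example: a $15,000 instrument with a 15-year life is replaced at year 15,
    # so the replacement is 5 years old at the end of the 20-year lifecycle.
    panel_salvage = salvage_value(15000, lifespan_years=15, years_in_service=5)
    total = lifecycle_cost(implementation=100000, modifications=8000,
                           annual_om=12000, renewal_and_replacement=15000,
                           salvage=panel_salvage)
    print(round(total, 2))  # 353000.0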


          Section 4.0:   Design  Objective: Spatial  Coverage

It is economically infeasible to install WQM stations at a large percentage of distribution system nodes.
Thus, 15 WQM locations were strategically selected through use of TEVA-SPOT to maximize spatial
coverage of the distribution system. In order to evaluate how well the WQM
component met the design objective for spatial coverage, the following two metrics were evaluated: area
coverage  and population coverage. Sections 4.1 and 4.2 define each metric, describe how it was
evaluated and present the results. Section 4.3 presents an indirect measure of spatial coverage by
considering how many scenarios from the simulation study were monitored by the WQM component.

4.1    Area Coverage

Definition: Area coverage is defined as the percentage of the distribution system area that is covered by
the WQM network, which is a superposition of the areas covered by each individual monitoring station.
The area covered by each monitoring station is made up of the areas monitored by and protected by the
station, as described below:

    •  Area Monitored: A portion of the distribution system is monitored by a WQM station if a
       contaminant injected in that area would flow past the monitoring station and thus potentially be
       detected. To determine the area monitored by each WQM location, contaminant injections were
       simulated using single point source injections at each of the nodes in the distribution system
        model with a non-zero demand, resulting in a total of 5,799 attack nodes. These modeling results
       were used to determine the monitoring stations that would receive contaminated water under the
       model conditions and thus potentially generate an alert, for each attack scenario.

    •  Area Protected: An area is protected by a WQM station if it is downstream of the station.
       Water flowing into a protected area would flow through the monitoring station first, thus
       providing an opportunity for detection and response.  To determine the area protected by each
       monitoring station, contaminant injection was  simulated at each of the 15 monitoring stations to
       determine the downstream area of the distribution system that would be impacted if contaminant
       flowed through that monitoring station.

Analysis  Methodology: EPANET and the distribution system model were used to simulate contaminant
injections throughout the distribution system. The flow path of each injection was captured to determine
the areas  covered by each monitoring station.

A standard contamination scenario, using the same contaminant type and total mass of contaminant, was
used at all nodes for both analyses. The scenario selected for these analyses was one with the potential to
spread widely throughout the distribution system and expose a large number of individuals.

The aggregate area containing nodes monitored by and protected by a WQM station constitutes the area
covered by that station. The combined area covered by all 15 monitoring stations is the area covered by
the entire WQM network. Note that the area covered by the WQM network is not simply the sum of the
areas covered by the individual stations: some areas are covered by more than one monitoring station, as
described below.
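
The roll-up behind Table 4-1 can be illustrated as follows: given the set of nodes covered (monitored or
protected) by each station and an area associated with each node, sum the area of nodes covered by at
least "X" stations. The station names, node identifiers and areas below are placeholders, not the GCWW
model.

    # Sketch of the coverage roll-up used to build Table 4-1. The tiny node/area
    # data here are placeholders, not the GCWW distribution system model.
    from collections import Counter

    covered_nodes = {                     # station -> nodes it monitors or protects
        "WQM-01": {"n1", "n2", "n3"},
        "WQM-02": {"n2", "n3", "n4"},
        "WQM-03": {"n3", "n5"},
    }
    node_area_mi2 = {"n1": 10.0, "n2": 8.0, "n3": 6.0, "n4": 12.0, "n5": 9.0}

    counts = Counter()                    # how many stations cover each node
    for nodes in covered_nodes.values():
        counts.update(nodes)

    total_area = sum(node_area_mi2.values())
    for x in range(1, len(covered_nodes) + 1):
        area = sum(node_area_mi2[n] for n, c in counts.items() if c >= x)
        print(f"at least {x} station(s): {area:.1f} mi2 ({100 * area / total_area:.0f}%)")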

Results: Table 4-1 shows the area and percentage of the distribution system covered by at least "X" monitoring
stations, where "X" ranges from one to five stations. The distribution system area (339.2 square miles)
was defined as the retail service area in Hamilton County included in the 2005 version of the utility
distribution system model.

Table 4-1. Area Covered by At Least "X" Number of WQM Stations

                                               1 Station   2 Stations   3 Stations   4 Stations   5 Stations
Area Covered By At Least "X" Stations (mi2)      243.8        188.8        137.6         89.7         65.8
% of Distribution System Area Covered             72%          56%          41%          26%          19%

The table shows that 72% of the distribution system is covered by at least one monitoring station. Also,
more than half of the system (56%) is covered by at least two monitoring stations, and almost 20% of the
distribution system is covered by five or more monitoring stations. There are areas covered by up to 13
monitoring stations, but these  areas are in a small, densely populated region of the distribution system.

Redundancy in protection can prove valuable in detecting unusual water quality as there are more chances
for potential detection (as subsequent sections show, missed detections are common).  In addition,
redundancy is beneficial when attempting to validate an alert: an alert is much more likely to be
considered Possible if a water quality change can be seen at (or an alert is received from) multiple
hydraulically connected monitoring stations.

4.2    Population Coverage

Definition:  Population coverage is defined as the portion of the retail population within the area served
by the distribution system that is covered by the WQM network.

Analysis Methodology:  The  results from the analysis of Area Coverage, presented in Section 4.1, were
converted to population estimates using population density information from the 2000 census. An
individual is considered to be  covered if they live in an area covered by the WQM network.

Results:  Similar to Table 4-1, Table 4-2 shows the population covered by at least "X" WQM stations,
where "X" ranges from one to five. The total population served by the retail area of the distribution
system, according to census data from 2000, is 759,000 people.

Table 4-2. Population Covered by At Least "X" Number of WQM Stations

                                                   1 Station   2 Stations   3 Stations   4 Stations   5 Stations
Population Covered By At Least "X" Stations         635,000     481,000      351,000      248,000      152,000
% of Population Covered By At Least "X" Stations       84%         63%          46%          33%          20%

Percentages of the population covered by at least one or two monitoring stations (84%, 63%) are
substantially larger than the percentages of the distribution system area covered by at least one or two
monitoring stations (72%, 56%).  This resulted from the network design that was optimized to protect the
population rather than attempting to maximize the area covered. Geospatial analyses (not presented)
show that the regions within the distribution system with the highest population density are generally
covered by at least one monitoring station.

4.3    Extent of Contaminant Spread through the WQM Network

Definition: A WQM location is considered to be impacted if it receives sufficient concentration to cause
a change in at least one water quality parameter greater than or equal to the parameter sensitivity value set
in CANARY in Table 2-3. A scenario is considered practically detectable by WQM if at least one
monitoring station is impacted.

Analysis Methodology: In the simulation study, contaminant injections were simulated throughout the
distribution system and the contaminant concentration was recorded at each timestep for each WQM
station, as described in Section 3.4. In this analysis, the number of stations impacted in each simulated
contamination scenario was determined through analysis of the contaminant concentration profile at each
monitoring location:  if the contaminant concentration at any timestep was sufficient to cause a change in
at least one water quality parameter greater than or equal to the parameter sensitivity value set in
CANARY in Table 2-3, then the monitoring location was considered impacted and added to the count
used to quantify contaminant spread for the given scenario.
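
The impacted-station test can be sketched as follows: convert the concentration at each timestep into
predicted parameter changes and compare them against the Table 2-3 sensitivity values. The response
slopes in predicted_change() are invented placeholders standing in for the bench-scale response curves.

    # Sketch of the "impacted station" test: a station counts as impacted if, at
    # any timestep, the contaminant concentration produces a predicted change in
    # some parameter at or above its Table 2-3 sensitivity value.
    SENSITIVITY = {"chlorine": 0.1, "conductivity": 5.0, "ph": 0.1,
                   "orp": 10.0, "toc": 0.1}

    def predicted_change(parameter, concentration_mg_per_l):
        """Placeholder linear response curves (slopes are invented)."""
        slopes = {"chlorine": -0.08, "conductivity": 1.5, "ph": -0.01,
                  "orp": 2.0, "toc": 0.05}
        return slopes[parameter] * concentration_mg_per_l

    def station_impacted(concentration_profile):
        """concentration_profile: contaminant concentration (mg/L) at each timestep."""
        for conc in concentration_profile:
            for parameter, threshold in SENSITIVITY.items():
                if abs(predicted_change(parameter, conc)) >= threshold:
                    return True
        return False

    profile = [0.0, 0.2, 1.8, 4.0, 0.6, 0.0]  # placeholder plume passing the station
    print(station_impacted(profile))           # True (chlorine change exceeds 0.1 mg/L)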

Note that the results presented in this section cannot be used to make general conclusions about spatial
coverage.  Contaminant spread is highly dependent on the design of the contamination scenarios selected
for this study - particularly the injection locations and times, which determine the flow paths through the
system, and the total mass of contaminant which determines its spread throughout the system.  In
addition, it relies on a specific version and configuration of the distribution system model.

Results:  Of the 2,015 simulation study events, 737 scenarios were practically detectable by WQM. This
is 36.6% of the simulated contamination scenarios. Within these scenarios, there were a total of 1,959
impacted WQM stations with an average of 2.7 stations impacted per scenario.

Figure 4-1 shows the number of scenarios for which at least "X" WQM stations were impacted (in red)
and for which exactly "X" stations were impacted (blue). No scenarios impacted all 15 monitoring
stations, though 8 scenarios impacted 14 stations.

Figure 4-1. Number of Scenarios Impacting "X" WQM Stations
[Bar chart with number of stations (1 through 14) on the x-axis and number of scenarios on the y-axis,
showing the number of scenarios impacting at least "X" stations and the number impacting exactly "X"
stations; total scenarios: 2,015.]

The majority of the 737 practically detectable scenarios (63.9%) impacted one or two monitoring stations,
and the number of stations impacted seems to decrease exponentially from there.  Looking at the second
series, which shows the number of scenarios impacting exactly a given number of stations, it is interesting
to note the unexpectedly low number of scenarios impacting exactly three stations compared to the
number impacting two and four stations, and the relatively high number of scenarios impacting 10 and 14
stations. This is likely related to network hydraulics:  there  are large areas of the system where, if
contaminant enters, it spreads significantly and impacts multiple monitoring stations.

In addition to the 1,959 impacted monitoring stations noted  above, there were 1,664 instances where non-
zero contaminant concentration reached a monitoring  station, though not at a concentration that produced
a sufficient change in a water quality parameter. Per the analysis methodology, these stations were not
considered impacted and were thus not sites of potential alerts. A total of 866 of these instances occurred
in scenarios that were not practically detectable by WQM, as no monitoring station observed a sufficient
concentration during the scenario. There were 278 such scenarios, with a maximum of 14 stations
receiving non-zero, though insignificant, concentrations  during the scenario.  Thus, the contaminant
reached a WQM station in 1,015 scenarios (50.4%).

4.4    Summary

Spatial coverage of the WQM component is entirely dependent on the locations of WQM stations.
Cincinnati's EPANET model was used to estimate the portion of the distribution system covered by the
15 monitoring stations, both in terms of area and population. The distribution area was well covered by
the monitoring network as a whole, with 72% of the area and 84% of the population covered by at least
one monitoring station. Almost 20% of the distribution area and population were covered by five or more
monitoring stations. Portions of the most densely populated area were covered by up to 13 monitoring
stations. The percentage of the population covered by at least one monitoring station was considerably
larger than the percentage of the distribution system area covered by at least one monitoring station
because the WQM network was designed to optimize the population protected rather than the amount of
area covered.

Simulation study results were used to supplement the evaluation of this design objective. The
contaminant concentration profiles generated during each of the simulated scenarios were used to identify
all WQM stations that were impacted by a practically  detectable contaminant concentration. Scenarios in
which at least one WQM station received sufficient concentration to cause a change in at least one water
quality parameter greater than or equal to the parameter sensitivity value set in CANARY found in Table
2-3 were considered to be practically detectable. Of the  2,015 simulation study scenarios, 737 (36.6%)
were practically detectable by WQM, as at least one of the WQM stations was impacted. For detectable
scenarios, between one and 14 stations were impacted. Note that these values are entirely dependent on
the scenarios selected for this study; however, an effort was made to construct an ensemble of diverse and
representative scenarios, as described in Section 3.4.


      Section 5.0:  Design Objective:  Contaminant Coverage

Given the large number of potentially harmful drinking water contaminants and the uncertainty regarding
which contaminant might be involved during a specific incident, the WQM component of the Cincinnati
CWS was designed to detect a broad range of contaminants.  Specifically, sensors were selected to
monitor water quality parameters that respond to a broad range of potential contaminants. In order to
evaluate how well the WQM component met this design objective, the following three metrics were
evaluated: contaminant detection potential, contaminant scenario coverage and contaminant detection
threshold. The following subsections define each metric, describe how it was evaluated and present the
results. Note that the 17 contaminants being modeled in the simulation study were assigned generic IDs
for security purposes.

5.1    Contaminant Detection Potential

Definition: The contaminant detection potential is the capability of the system to detect specific
contaminants or contaminant classes. The detection potential is a function of the monitoring station
design - specifically the water quality parameters that are monitored. In order for the WQM component
to have the potential to detect specific contaminants, at least one of the measured water quality parameters
must produce a statistically  significant change from the baseline in the presence of the contaminant at a
concentration capable of producing significant consequences. These critical concentrations are described
below and are shown in Table 5-7.

Analysis Methodology: The critical concentrations used in this analysis were based on adverse impacts
to the exposed population or utility infrastructure. Each contaminant was grouped into the categories
described in Section 3.4, which determined how the critical concentration was determined:

    •   Nuisance Chemical: The critical concentration for nuisance chemicals was selected at levels that
       would make the water unacceptable to customers, e.g., concentrations that result in objectionable
       aesthetic characteristics.

    •   Toxic Chemical: For chemical contaminants that are lethal to individuals exposed to a high dose,
       the  critical concentration was based on the mass of contaminant that a 70 kg adult would need to
       consume in one liter of water to have a 10% probability of dying (LD10).

    •   Biological Agent: For biological contaminants that are lethal to individuals exposed to a high
       dose, the critical concentration was based on the mass of contaminant that a 70 kg adult would
        need to consume in one liter of water to have a 10% probability of dying (LD10).

    •   Co-contaminant: For contaminants that are sensitive to inactivation by chlorine, it was assumed
        that the contaminant would be injected with a dechlorinating agent to maintain the viability of the
        biological agent. The critical concentration was calculated for this co-contaminant, based on the
        concentration of free chlorine that would need to be quenched by the dechlorinating agent (for
        Cincinnati, the value used was 2 mg/L of free chlorine); a brief numerical sketch of these
        calculations follows this list.
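
The following sketch (Python) illustrates, in simplified form, how the critical concentrations for the
lethal-dose and co-contaminant categories described above could be computed. The LD10 value and the
dechlorinating-agent demand per milligram of free chlorine are illustrative placeholders, not values from
this evaluation; only the 70 kg body mass, one-liter consumption volume and 2 mg/L free chlorine figure
are taken from the text above.

# Hypothetical sketch of the critical-concentration calculations described above.
# The LD10 value and the dechlorinating-agent demand per mg of free chlorine are
# illustrative placeholders; only the 70 kg body mass, 1 L consumption volume and
# 2 mg/L free chlorine value come from the text.
BODY_MASS_KG = 70.0
CONSUMED_VOLUME_L = 1.0
FREE_CHLORINE_MG_L = 2.0

def lethal_dose_critical_concentration(ld10_mg_per_kg):
    """Concentration (mg/L) delivering an LD10 dose to a 70 kg adult in one liter of water."""
    return ld10_mg_per_kg * BODY_MASS_KG / CONSUMED_VOLUME_L

def co_contaminant_critical_concentration(agent_mg_per_mg_chlorine):
    """Concentration (mg/L) of dechlorinating agent needed to quench the free chlorine residual."""
    return agent_mg_per_mg_chlorine * FREE_CHLORINE_MG_L

print(lethal_dose_critical_concentration(0.5))     # hypothetical LD10 of 0.5 mg/kg -> 35.0 mg/L
print(co_contaminant_critical_concentration(1.3))  # hypothetical demand of 1.3 mg/mg -> 2.6 mg/L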

Empirical correlations derived from the bench-scale studies on distributed water, as described in Section
3.3, were used to calculate the change in each water quality parameter for eleven contaminants at the
critical concentration. The study considered finished water from both of GCWW's treatment plants.

The calculated values were normalized by the parameter sensitivity values shown in Table 2-3.  If the
normalized response for a specific parameter was greater than or equal to 1.0, then that parameter has the
potential to detect that contaminant.  Note that the true capability of the WQM component to detect a
contamination incident is strongly dependent upon the performance of the sensor hardware and event
detection system.  The use of a static threshold in this analysis is intended to serve as a theoretical
estimate of the detection capabilities of the WQM component.
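
A minimal sketch of the normalization step is shown below (Python). The parameter sensitivity values are
those listed in Table 5-5; the measured water quality changes are hypothetical bench-scale results and do
not correspond to any contaminant in Table 5-1 or Table 5-2.

# Sketch of the normalization step. Sensitivity values are those listed in Table 5-5;
# the measured changes are hypothetical bench-scale results.
SENSITIVITY = {"TOC": 0.1, "chlorine": 0.1, "ORP": 10.0, "conductivity": 5.0, "pH": 0.1}

def normalized_response(changes_at_critical_concentration):
    """Change in each parameter at the critical concentration divided by its sensitivity value."""
    return {p: abs(delta) / SENSITIVITY[p] for p, delta in changes_at_critical_concentration.items()}

def parameters_with_detection_potential(changes_at_critical_concentration):
    """Parameters whose normalized response is greater than or equal to 1.0."""
    return [p for p, r in normalized_response(changes_at_critical_concentration).items() if r >= 1.0]

measured = {"TOC": 0.48, "chlorine": -0.17, "ORP": -3.0, "conductivity": 0.2, "pH": 0.02}
print(normalized_response(measured))                   # TOC 4.8, chlorine 1.7, ORP 0.3, ...
print(parameters_with_detection_potential(measured))   # ['TOC', 'chlorine']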

Results:  The normalized response for each water quality parameter is presented in Table 5-1 and Table
5-2 for GCWW's two plants. The response value is shown for parameters significantly impacted by the
contaminant, and these cells are green.  Fields in the table with a dash indicate that no significant change
was observed. Note that the reactions are influenced by the measurement technique and instrument used.
Table 5-1. Normalized Water Quality Response for Water from the Surface Water Plant

Contaminant            TOC       UV       Chlorine   ORP      Conductivity   pH
Nuisance Chemical 1    4.8       -        1.7        -        -              2.2
Toxic Chemical 1       23.5      -        228.4      5.3      13.1           6.9
Toxic Chemical 2       -         -        107.9      329.3    -              -
Toxic Chemical 3       -         175.8    8.8        136.4    3.9            178.5
Toxic Chemical 4       No Data   -        -          -        -              -
Toxic Chemical 5       48.3      33.6     54.0       6.4      -              -
Toxic Chemical 6       354.8     791.3    22.2       -        -              -
Toxic Chemical 8       -         -        -          1.1      -              -
Biological Agent 1     No Data   214.9    29.8       12.3     -              2.9
Co-contaminant 1       No Data   -        8.3        22.6     -              -
Co-contaminant 2       123.8     -        4.4        7.9      3.6            -
Table 5-2. Normalized Water Quality Response for Water from the Groundwater Plant

Contaminant            TOC       UV       Chlorine   ORP      Conductivity   pH
Nuisance Chemical 1    4.9       -        -          -        -              -
Toxic Chemical 1       No Data   No Data  No Data    No Data  No Data        No Data
Toxic Chemical 2       No Data   No Data  No Data    No Data  No Data        No Data
Toxic Chemical 3       -         153.7    -          160.6    13.3           128.8
Toxic Chemical 4       102.6     -        -          -        8.5            -
Toxic Chemical 5       51.5      169.8    49.9       34.2     3.1            -
Toxic Chemical 6       334.8     665.1    38.5       -        -              -
Toxic Chemical 8       -         -        -          2.8      -              -
Biological Agent 1     266.3     152.0    18.9       15.7     -              1.0
Co-contaminant 1       -         -        22.2       26.6     -              -
Co-contaminant 2       16.1      2.3      2.4        4.0      -              -
For the surface water plant, at least one parameter changed for each of the eleven contaminants tested,
with the exception of Toxic Chemical 4 for which the TOC sample could not be analyzed. For the
groundwater plant water, at least one parameter changed for each of the nine contaminants tested (Toxic
Chemical 1 and Toxic Chemical 2 were not tested with this water).  In general, the water quality
parameter responses were consistent between the two water matrices. The differences in response were
generally small and partially attributable to experimental error or uncertainty in the curve fit.

The results  from the two waters were averaged to produce the response matrix shown in Table 5-3. In
cases where results were available for only one of the waters, that value is used.
Table 5-3. Average Normalized Water Quality Response

Contaminant            TOC       UV       Chlorine   ORP      Conductivity   pH
Nuisance Chemical 1    4.9       -        -          -        -              1.1
Toxic Chemical 1       23.5      -        228.4      5.3      13.1           6.9
Toxic Chemical 2       -         -        107.9      329.3    -              -
Toxic Chemical 3       -         164.7    4.4        148.5    8.6            153.7
Toxic Chemical 4       102.6     -        -          -        4.3            -
Toxic Chemical 5       49.9      101.7    51.9       20.3     1.6            -
Toxic Chemical 6       344.8     728.2    30.4       -        -              -
Toxic Chemical 8       -         -        -          2.0      -              -
Biological Agent 1     266.3     183.4    24.4       14.0     -              2.0
Co-contaminant 1       -         -        15.2       24.6     -              -
Co-contaminant 2       70.0      1.2      3.4        6.0      2.0            -
The average results show all contaminants have the potential to be detected through WQM. Also, all
contaminants except for Toxic Chemical 8 changed two or more parameters, which would increase the
likelihood of detection.

Chlorine and ORP (which generally tracks chlorine) provided the greatest contaminant coverage,
responding to eight of the eleven test contaminants. TOC increased in the presence of all organic
contaminants to provide detection potential for seven of the test contaminants.  UV changed above the
parameter sensitivity value for five contaminants, four of which also produced a response in TOC. Five
contaminants impacted conductivity, and all five were either salts or compounds that dissociate into ionic
species in water. Four contaminants changed pH above the parameter sensitivity value.

The results of these empirical, bench-scale studies were used to establish the detection potential for each
of the 17 test contaminants evaluated in the simulation study, which was described in Section 3.4. The
results in Table 5-4 show the water quality parameters that are expected to change significantly for each
contaminant present at the critical concentration.  The potential response is indicated only by a YES/NO
designation. Also, the parameter type(s) which are most significantly impacted are specified by bolded
text in Table 5-4. These  are the parameter type(s) which reached the required minimum change at the
lowest concentration.

In cases where the particular contaminant was not tested during the bench-scale contaminant study, the
response was extrapolated from the results of a tested contaminant that exhibits similar chemistry to the
subject contaminant.  For five of the biological agents, the co-contaminant introduced with these
contaminants is responsible for the water quality change detectable by WQM. While the actual biological
contaminants would be undetectable at the critical concentration, the concentrations of co-contaminants
could cause detectable changes in water quality parameter values.

Table 5-4. Expected Water Quality Response for Contaminants Evaluated in the Simulation Study

Contaminant            TOC   UV    Chlorine   ORP   Conductivity   pH
Nuisance Chemical 1    YES   NO    NO         NO    NO             YES
Nuisance Chemical 2    YES   NO    NO         NO    NO             NO
Toxic Chemical 1       YES   NO    YES        YES   YES            YES
Toxic Chemical 2       NO    NO    YES        YES   NO             NO
Toxic Chemical 3       NO    YES   YES        YES   YES            YES
Toxic Chemical 4       YES   NO    NO         NO    YES            NO
Toxic Chemical 5       YES   YES   YES        YES   YES            NO
Toxic Chemical 6       YES   YES   YES        NO    NO             NO
Toxic Chemical 7       YES   NO    YES        NO    NO             NO
Toxic Chemical 8       NO    NO    NO         YES   YES            NO
Biological Agent 1     YES   YES   YES        YES   NO             YES
Biological Agent 2     YES   NO    NO         NO    NO             NO
Biological Agent 3     NO    NO    YES        YES   NO             NO
Biological Agent 4     NO    NO    YES        YES   NO             NO
Biological Agent 5     NO    NO    YES        YES   NO             NO
Biological Agent 6     NO    NO    YES        YES   NO             NO
Biological Agent 7     NO    NO    YES        YES   NO             NO
All simulation study contaminants are theoretically detectable through WQM. For every detected
contaminant except for Toxic Chemical 3, chlorine or TOC was the most significantly impacted
parameter.

Table 5-5 shows the practically detectable concentration for each of the 17 simulation study
contaminants, along with the water quality changes that concentration produces. As discussed in Section
4.3, a contaminant concentration is practically detectable if it causes a change in at least one water quality
parameter greater than or equal to the value of the parameter sensitivity value as configured in CANARY.
Each parameter's sensitivity value (from Table 2-3) is shown in the second row of the table, and the
second column shows the practically detectable concentration for each contaminant. The remaining cells
show the change in each water quality parameter that would be caused by this detectable concentration.
The green shading and asterisk indicate that the water quality change meets the sensitivity value for that
parameter.

Table 5-5. Practically Detectable Contaminant Concentrations and Resulting Water Quality Changes
(Parameter sensitivity values from Table 2-3 are shown in parentheses under each parameter.)

                      Minimum Practically        TOC        Chlorine    ORP       Conductivity   pH
Contaminant           Detectable Concentration   (0.1 ppm)  (0.1 mg/L)  (10 mV)   (5 µS/cm)      (0.1)
Nuisance Chemical 1   2.1 mg/L                   0.1*       -0.03       0         2.91           0.05
Nuisance Chemical 2   0.30 mg/L                  0.1*       0           0         0              0
Toxic Chemical 1      0.13 mg/L                  0.01       -0.1*       -6.42     0.29           0.003
Toxic Chemical 2      0.11 mg/L                  0          -0.03       -10*      0              0
Toxic Chemical 3      2.0 mg/L                   0          -0.01       8.76      0.42           -0.1*
Toxic Chemical 4      0.56 mg/L                  0.1*       0           2.94      0.42           0
Toxic Chemical 5      0.29 mg/L                  0.08       -0.1*       -1.12     0.09           0
Toxic Chemical 6      0.29 mg/L                  0.1*       -0.01       -0.5      0              0
Toxic Chemical 7      0.29 mg/L                  0.1*       -0.01       -0.5      0              0
Toxic Chemical 8      0.45 mg/L                  0          0           10*       0.44           0
Biological Agent 1    0.17 mg/L                  0.1*       -0.01       -0.46     0              0.001
Biological Agent 2    0.15 mg/L                  0.1*       0           0         0              0
Biological Agent 3    0.00005 mg/L¹              0          -0.1*       -10*      0.1            0
Biological Agent 4    1,271 organisms/L¹         0          -0.1*       -10*      0.1            0
Biological Agent 5    127 organisms/L¹           0          -0.1*       -10*      0.1            0
Biological Agent 6    29,863 organisms/L¹        0          -0.1*       -10*      0.1            0
Biological Agent 7    2,818,131 organisms/L¹     0          -0.1*       -10*      0.1            0
¹ For these contaminants, the co-contaminant was used to determine the practically detectable concentrations
* Water quality change meets the sensitivity value for that parameter
5.2    Contaminant Scenario Coverage

Definition:  Contaminant scenario coverage is defined as the number or percentage of contamination
scenarios detected by the WQM component.  A scenario is considered detected if at least one alert is
generated during the scenario.

Analysis Methodology: The results from the simulation study were used to characterize contaminant
scenario coverage by the WQM component.  As discussed in Section 5.1, all contaminants evaluated in
the simulation study are theoretically detectable by WQM, but a scenario is only considered practically
detectable if one or more WQM station is impacted by sufficient contaminant concentration. At lower
concentrations, the contaminant changes monitored parameter(s) but the resulting water quality changes
are below the parameter sensitivity value in CANARY, which reflects normal system water quality
variability.

Note that the results presented in this section cannot be used to make general conclusions about
contaminant scenario coverage outside of the context of the simulation study. Different scenario designs,
using different contaminant masses, target in-pipe concentrations, injection rates, and injection locations,
could produce drastically different results leading to different conclusions about contaminant scenario
coverage.

Results:  The analysis presented in Section 4.3 showed that 1,278 scenarios (63%) were not practically
detectable because no WQM location witnessed a practically detectable contaminant concentration. Of
the 737 practically detectable scenarios, 643 were actually detected, which is 87.2% of practically
detectable scenarios and 31.9% of all scenarios. WQM was the only component to detect 178 of the
scenarios (9% of all scenarios), and it produced the first alert for 257 scenarios (12.8% of all scenarios).
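
The percentages reported above follow directly from these counts; the short calculation below (Python)
illustrates the arithmetic using the totals already given in this section.

# Scenario coverage arithmetic using the totals reported in this section.
total_scenarios = 2015
practically_detectable = 737
detected = 643

print(round(100 * detected / practically_detectable, 1))  # 87.2% of practically detectable scenarios
print(round(100 * detected / total_scenarios, 1))         # 31.9% of all scenarios
print(total_scenarios - practically_detectable)           # 1,278 scenarios not practically detectable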

Table 5-6 summarizes scenario detection by contaminant. The second column lists the number of
scenarios that were simulated for each contaminant and the third column shows the number and
percentage of simulated scenarios that were practically detectable by WQM. The final column shows the
number and percentage of practically detectable scenarios for the contaminant that were detected.

Though the number of events simulated for each contaminant is essentially the same, the number of
practically detectable scenarios varies significantly by contaminant, from 1 (1% of simulated events) for
Toxic Chemical 8 to 85 (71%) for Biological Agent 5.

Table 5-6. Scenarios Detected by Contaminant

Contaminant            # of Scenarios   Practically Detectable   Practically Detectable
                       Simulated        Scenarios                Scenarios Detected
Nuisance Chemical 1    119              84 (71%)                 79 (94%)
Nuisance Chemical 2    119              83 (70%)                 70 (84%)
Toxic Chemical 1       119              45 (38%)                 41 (91%)
Toxic Chemical 2       119              18 (15%)                 14 (78%)
Toxic Chemical 3       119              23 (19%)                 15 (65%)
Toxic Chemical 4       119              45 (38%)                 40 (89%)
Toxic Chemical 5       119              31 (26%)                 25 (81%)
Toxic Chemical 6       119              72 (61%)                 65 (90%)
Toxic Chemical 7       119              7 (6%)                   4 (57%)
Toxic Chemical 8       119              1 (1%)                   0 (0%)
Biological Agent 1     119              31 (26%)                 29 (94%)
Biological Agent 2     119              15 (13%)                 10 (67%)
Biological Agent 3     119              84 (71%)                 78 (93%)
Biological Agent 4     119              80 (67%)                 71 (89%)
Biological Agent 5     119              85 (71%)                 80 (94%)
Biological Agent 6     113              28 (25%)                 18 (64%)
Biological Agent 7     117              5 (4%)                   4 (80%)
OVERALL                2,015            737 (37%)                643 (87%)
This is directly related to the amount of contaminant injected relative to the practically detectable
concentration. The three contaminants with the fewest practically detectable scenarios (Toxic
Chemicals 2 and 8 and Biological Agent 2) were also the contaminants that had the lowest total mass
available for injection into the distribution system. Thus, the contaminant did not spread far from the
injection location at practically detectable concentrations, limiting the potential for detection by WQM.
At the other end of the spectrum, more than 67% of the simulated scenarios were practically detectable
for the two nuisance chemicals and three of the biological agents. The two nuisance chemicals are
available in large quantities and are injected at rates that can spread widely throughout the distribution
system. Biological Agents 3, 4, and 5  are available in much smaller quantities but are injected with a co-
contaminant that is available in large quantities:  it is this co-contaminant that is detectable by WQM.

Biological Agent 6 is also injected with a co-contaminant.  However, the dose of this contaminant
required for infection is much larger than the other biological agents. The scenarios using this
contaminant were designed to achieve a limited spread in order to prevent the contaminant from being
diluted to harmless concentrations. Thus, practically detectable concentrations often did not reach a
monitoring location.

The percentage of practically detectable scenarios that were detected also varied by contaminant, from
0% for Toxic Chemical 8 to 94% for Nuisance Chemical 1, Toxic Chemical 4 and Biological Agent 5.
With the exception of Toxic Chemical 8, at least 57% of practically detectable scenarios were detected.
There was a clear relationship between the number of practically detectable scenarios and the detection
percentage: detection rates were generally below 80% for contaminants with fewer than 30 practically
detectable scenarios, and above 80% for contaminants with more than 30 practically detectable
scenarios.

Comparison of Table 5-4, which shows the water quality parameters impacted by each contaminant, with
the detection percentages shown in Table 5-6 shows no obvious correlation. Two contaminants impact
only TOC (Toxic Chemical 4 and Biological Agent 2); however, scenarios involving the former were
detected at a rate of 94%, while those involving the latter were detected at a rate of only 67%.  Scenarios
involving Toxic Chemical 7 were detected at the lowest rate other than Toxic Chemical 8, yet this toxic
chemical impacts two reliable water quality parameters, TOC and chlorine. What cannot be gleaned from
these tables is the actual change in water quality values caused by the contaminants which is further
investigated in Section 5.3.

5.3     Contaminant Detection Threshold

Definition: The contaminant detection threshold is the lowest concentration of a specific contaminant
that can be detected by the WQM component.

Analysis Methodology:  Two sources of information were used to assess the contaminant detection
threshold: the bench-scale contaminant studies and the simulation study. The results from the bench-
scale contaminant studies, described in Section 3.3, were used to develop empirical relationships between
the contaminant concentration and the change in water quality parameters impacted by the contaminant (see
Table 5-5). The minimum change in each water quality parameter considered practically detectable (i.e.,
the parameter sensitivity values used in CANARY, shown in Table 2-3) was used with these empirical
relationships to estimate the detection threshold for each of the contaminants evaluated in the simulation
study.

Additionally, results from the simulation study were used to  investigate the minimum contaminant
concentrations that triggered WQM alerts during simulated contamination scenarios.  Unlike the analysis
of the empirical results from the bench-scale studies, which rely on only the water quality change in a
beaker, the results from the simulation study incorporate the effects of the sensitivity of the water quality
event detection system and the baseline water quality variability of each monitoring location. For this
analysis, a simplifying assumption was made that the peak contaminant concentration at a monitoring
location that detected the scenario was the concentration that triggered the specific alert (an assumption
necessary due to limited concentration data extracted during simulations). In reality, the alert may have
been triggered before this peak was reached and thus detected at a lower concentration. This simplifying
assumption generally results in an overestimate of the contaminant detection threshold. Again, these
results are specific to the conditions of the simulation study and the configuration of the WQM component of
the Cincinnati CWS and cannot be extrapolated to WQM in general.

Results: Table 5-7 presents estimates of the detection threshold for each contaminant, normalized to the
critical concentration, as described in Section 5.1.  The second column shows the ratio of the critical
concentration to the minimum concentration at which practically detectable water quality changes occur,
from Table 5-5. As described previously, this practically detectable concentration is based on the results
of bench-scale studies and the parameter sensitivity settings in CANARY. The third column shows the ratio
of the critical concentration to the smallest peak concentration that generated a WQM alert during the
simulation study. The ratios presented in this table are intended to show whether the detection limit, as
characterized by these two different techniques, was lower than the critical concentration (i.e., the smallest
concentration that could produce significant consequences).  A ratio greater than one indicates that the
contaminant can be detected at a concentration below the critical concentration. Larger ratios generally
imply superior detection capabilities.
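
The sketch below (Python) illustrates how these ratios relate to the underlying quantities, assuming a
simple linear bench-scale relationship between contaminant concentration and the change in free chlorine.
The slope and critical concentration used are hypothetical placeholders, not values from this evaluation.

# Hypothetical sketch relating a bench-scale relationship to the ratios in Table 5-7.
# The linear slope and critical concentration below are illustrative placeholders.
CHLORINE_SENSITIVITY = 0.1  # mg/L, smallest practically detectable change in free chlorine

def detection_threshold(slope_mg_chlorine_per_mg_contaminant):
    """Lowest contaminant concentration (mg/L) producing a practically detectable chlorine
    change, assuming a linear bench-scale relationship: change = slope * concentration."""
    return CHLORINE_SENSITIVITY / abs(slope_mg_chlorine_per_mg_contaminant)

def ratio_to_critical(critical_concentration, threshold):
    """A ratio greater than one means detection occurs below the harmful concentration."""
    return critical_concentration / threshold

threshold = detection_threshold(-0.77)   # hypothetical slope -> threshold of ~0.13 mg/L
print(round(threshold, 2), round(ratio_to_critical(30.0, threshold), 1))  # 0.13, ~231.0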

Table 5-7. Ratio of Critical Concentration to Detection Threshold by Contaminant

                         Ratio of Critical Concentration    Ratio of Critical Concentration to
                         to Contaminant Concentration       Minimum Peak Concentration Actually Detected
Contaminant              that is Practically Detectable     in Simulated Contamination Scenarios
Nuisance Chemical 1      4.76                               2.7
Nuisance Chemical 2      33.0                               16
Toxic Chemical 1         228                                122
Toxic Chemical 2         463                                227
Toxic Chemical 3         185                                99
Toxic Chemical 4         104                                66
Toxic Chemical 5         57.6                               23
Toxic Chemical 6         352                                182
Toxic Chemical 7         1.97                               1.1
Toxic Chemical 8         0.0333                             -
Biological Agent 1       265                                128
Biological Agent 2       1,310                              736
Biological Agent 3¹      2.40                               1.03
Biological Agent 4¹      3.57                               3.5
Biological Agent 5¹      7.87                               5.3
Biological Agent 6¹      9.70                               5.3
Biological Agent 7¹      0.582                              0.41
¹ For these contaminants, the co-contaminant was used to determine the practically detectable concentrations

Detection limits characterized using the practically detectable concentration were consistently lower than
those determined from the simulation study results, and thus the ratios were larger for the former for all
17 contaminants. This is expected given that the simulation study results incorporated the variability in
the water quality baseline at each monitoring location as well as the performance limitations of
CANARY. However, both methods used to assess the detection limit show that WQM can detect the
contaminants at concentrations lower than the critical concentration, with the exception of Toxic
Chemical 8 and Biological Agent 7. The ratios for Biological Agent 7 were 0.58 and 0.41 according to
the two methods, indicating that the detection limit is about twice the critical concentration.  Toxic
Chemical 8 was not detected during the simulation study and has an extremely low ratio using the
practically detectable concentration due to the extremely low critical concentration established for this
contaminant.

Each monitoring location has different baseline water quality patterns, as discussed in Section 6.1, and
these differences dramatically impact the monitoring location's contaminant detection threshold.
Simulation study results were used to evaluate the impact of water quality variability on the detection
threshold. For each monitoring station, the minimum peak concentration that was detected was captured
for each contaminant.  Location J had detection thresholds that were significantly greater than those
observed at the other WQM locations over all contaminants. This was not due to water quality variability
at location J, but instead was an artifact of the simulated contamination scenarios used in this study which
resulted in this monitoring location witnessing only extremely high  contaminant concentrations and thus
not having the opportunity to detect lower concentrations.  In order to obtain a representative range of
detection thresholds for each contaminant, location J was excluded from the following analysis.
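
The per-location summary that follows was assembled along the lines of the sketch below (Python). The
monitoring location labels and concentrations shown are hypothetical and are used only to illustrate the
exclusion of location J and the normalization by the practically detectable concentration.

# Sketch of the per-location threshold summary: normalize the minimum detected peak
# concentration at each location by the practically detectable concentration, drop
# location J, and report the minimum, median and maximum ratios. Locations and
# concentrations below are hypothetical.
from statistics import median

def threshold_summary(min_detected_by_location, practically_detectable_conc, exclude=("J",)):
    ratios = {loc: conc / practically_detectable_conc
              for loc, conc in min_detected_by_location.items() if loc not in exclude}
    lowest = min(ratios, key=ratios.get)
    highest = max(ratios, key=ratios.get)
    return ((round(ratios[lowest], 1), lowest),
            round(median(ratios.values()), 1),
            (round(ratios[highest], 1), highest))

min_detected = {"C": 0.25, "F": 0.38, "I": 3.1, "J": 12.0, "O": 1.9}  # mg/L
print(threshold_summary(min_detected, practically_detectable_conc=0.13))
# ((1.9, 'C'), 8.8, (23.8, 'I'))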
The range of detection thresholds across the remaining 14 monitoring locations is presented in Table 5-8:
specifically, the minimum, maximum and median detection thresholds are shown for each contaminant,
normalized by each contaminant's practically detectable concentration.  For the minimum and maximum
thresholds, the monitoring location at which that threshold occurred is shown in parentheses.  For
example, the lowest peak concentration of Nuisance Chemical 1 that was detected was 1.8 times (less
than twice) the practically detectable concentration, and that detection occurred at location F.  However,
this same contaminant was not detected at concentrations less than  14.8  times the practically detectable
concentration at location I.

Table 5-8. Detection Threshold Across 14 WQM Locations (Station J Excluded)
(Ratios of the detection threshold to the practically detectable concentration across monitoring locations;
the monitoring location at which the minimum or maximum threshold occurred is shown in parentheses.)

Contaminant              Minimum Ratio (Location)   Median Ratio   Maximum Ratio (Location)
Nuisance Chemical 1      1.8 (F)                    3.9            14.8 (I)
Nuisance Chemical 2      2.1 (F)                    5.4            131.5 (H)
Toxic Chemical 1         1.9 (C)                    14.7           102.5 (H)
Toxic Chemical 2         2.1 (O)                    4.6            12.7 (I)
Toxic Chemical 3         1.9 (C)                    13.3           178.9 (O)
Toxic Chemical 4         1.6 (C)                    14.4           37.2 (H)
Toxic Chemical 5         2.6 (N)                    5.3            18.4 (K)
Toxic Chemical 6         1.9 (L)                    9.5            130.5 (O)
Toxic Chemical 7         1.8 (F)                    10.6           35.9 (O)
Toxic Chemical 8         N/A                        N/A            N/A
Biological Agent 1       2.1 (F)                    15.5           253.5 (I)
Biological Agent 2       1.8 (M)                    4.5            9.7 (K)
Biological Agent 3       2.1 (A)                    12             53.8 (O)
Biological Agent 4       0.3 (F)                    1.2            7.9 (I)
Biological Agent 5       0.2 (F)                    1              2.8 (L)
Biological Agent 6       0.2 (O)                    0.3            2.5 (B)
Biological Agent 7       2.4 (C)                    8.6            128.8 (B)
The impact of the monitoring location on the contaminant detection threshold is apparent from the results
presented in this table. For most contaminants, the highest detection threshold across monitoring
locations is orders of magnitude larger than the lowest. Even the median thresholds shown in the table
are generally much higher than the minimum threshold. On average, the median detection threshold
across the monitoring locations was 4.5 times the minimum.

In general, the trend  in detection capability by monitoring location was fairly consistent across
contaminants. For example, the lowest concentrations were able to be detected at locations C and F for
10 of the 17 contaminants. Conversely, locations H, I and O had the highest detection thresholds for the
most contaminants.  These trends are influenced by the simulation study design, but are also largely
influenced by the water quality variability at each monitoring location, which is discussed in greater detail
in Section 6.
5.4    Summary

For WQM, contaminant coverage is determined by the water quality parameters that are monitored.
Specifically, a contaminant is covered by WQM if the contaminant impacts at least one monitored
parameter. The analyses in this section focused on the 17 contaminants used in the simulation study.

Laboratory testing was performed to determine the impact of each of the  17 contaminants on the water
quality parameters monitored in the Cincinnati CWS: free chlorine, conductivity, pH, ORP and TOC. As
shown in Table 5-5, all contaminants impacted at least one of these parameters and were thus
theoretically detectable by WQM.

Simulation study scenarios using each contaminant were analyzed to estimate the true "detectability"  of
each contaminant: the percent of practically detectable scenarios using the contaminant that were
detected was captured, as well as the minimum concentration at which it was detected. Table 5-9
summarizes detection of the 17 simulation study contaminants.

The second and third columns show the number of practically detectable  scenarios using each
contaminant and the percentage of these that were indeed detected. All contaminants were detected
except for Toxic Chemical 8, though only one scenario with that contaminant was practically detectable.
As Section 5.2 discusses, this percentage is closely related to the number of practically detectable
scenarios for each contaminant, and thus these values are heavily dependent on the  simulation study
scenarios used.

The final column shows the practically detectable  concentration for each  contaminant, which  represents a
theoretical lower bound on the detection threshold for each contaminant for the water quality  parameters
monitored by the Cincinnati CWS.  With the exception of Toxic Chemical 8 and Biological Agent 7, the
practically detectable concentration is below the concentration that would cause infrastructure or public
health consequences.

Table 5-9. Contaminant Coverage for the WQM Component

                         # of Practically         % of Practically
                         Detectable Scenarios     Detectable Scenarios   Practically Detectable
Contaminant              Using this Contaminant   Detected               Concentration
Nuisance Chemical 1      84                       94%                    2.1 mg/L
Nuisance Chemical 2      83                       84%                    0.30 mg/L
Toxic Chemical 1         45                       91%                    0.13 mg/L
Toxic Chemical 2         18                       78%                    0.11 mg/L
Toxic Chemical 3         23                       65%                    2.0 mg/L
Toxic Chemical 4         45                       89%                    0.56 mg/L
Toxic Chemical 5         31                       81%                    0.29 mg/L
Toxic Chemical 6         72                       90%                    0.29 mg/L
Toxic Chemical 7         7                        57%                    0.29 mg/L
Toxic Chemical 8         1                        0%                     0.45 mg/L
Biological Agent 1       31                       94%                    0.17 mg/L
Biological Agent 2       15                       67%                    0.15 mg/L
Biological Agent 3       84                       93%                    0.00005 mg/L
Biological Agent 4       80                       89%                    1,271 organisms/L
Biological Agent 5       85                       94%                    127 organisms/L
Biological Agent 6       28                       64%                    29,863 organisms/L
Biological Agent 7       5                        80%                    2,818,131 organisms/L
Overall                  737                      87%


          Section 6.0:   Design Objective: Alert Occurrence

An important capability of a CWS is its ability to generate an alert when indicators of a contamination
incident have been detected. In the case of WQM, this requires differentiating between normal variations
in water quality and unusual deviations that could be indicative of contamination. For water quality
anomalies to be reliably detected, the monitoring stations, data collection and event detection system
design elements must be functioning properly. Additionally, in an effective system, the majority of alerts
produced are caused by water quality anomalies.

Minimizing the occurrence of invalid alerts is important because an excessive number of invalid alerts
may cause utility staff to lose confidence in the system and stop investigating alerts.  But it is just as
critical to maintain the ability of the system to detect unusual water quality. The effectiveness of WQM
alert generation was evaluated by analyzing the following metrics:  alert occurrence during routine
operations, alert occurrence for simulated contamination incidents and alert co-occurrence.

6.1    Alert Occurrence During Routine Operations

Definition: An alert is an indication from an event detection system that unusual water quality
characteristics have been detected.  In the case of the Cincinnati CWS, the event detection system in use
is CANARY, and both visual and audible notifications are generated for each alert produced. Alerts are
considered either valid or invalid, as defined below.

•   Valid Alert: An alert resulting from a verified water quality anomaly. Several potential sources of
    water quality anomalies that may lead  to valid alerts, including contamination, are discussed below in
    the analysis methodology.

•   Invalid Alert: An alert that is not caused by a verified water quality anomaly. Again, several causes
    of invalid alerts are discussed under analysis methodology.

Analysis Methodology: During the evaluation period, the water quality data generated by the sensors at
each monitoring location was collected.  This data was analyzed by the CANARY event detection system
in real-time to monitor for water quality anomalies. When an anomaly was detected, CANARY
generated an alert for a specific monitoring location and time and outputted the water quality parameters
whose changes triggered the alert (referred to as trigger parameters).  Utility staff investigated these
alerts  as they were generated and documented their conclusion regarding the  alert cause in an
investigation checklist. If an alert was not investigated or a cause not recorded on the investigation
checklist, the water quality and system data were analyzed in an attempt to categorize the cause of the
alert.  Alerts were first  categorized as valid or invalid per the definitions above and then categorized by
cause.

Valid alerts were grouped into the following causes:

    •   Contamination Incident:  Confirmed presence of a contaminant in the distribution system.

    •   Main Break: A confirmed break  in a water distribution system pipe.

    •   Distribution System Work: A planned activity in the distribution system such as flushing
       mains, pipe repair or replacement and opening or closing valves.

    •   Treatment Plant  Change: An  adjustment in chemical feed or a unit process at a drinking water
       treatment facility.

    •   Verified Non-Standard System Operation: Atypical system operations confirmed by the utility
       such as a change in system pumping or valving, resulting in unusual water quality patterns.

    •   Other:  A verified change in water quality that could not be definitively attributed to one of the
       above causes.

Invalid alerts are grouped into five major categories according to the cause of the alert:

    •   Background Variability: Changes in water quality parameter values that fall within the range of
       typical water quality patterns. The most common cause of background variability is normal
       system operations. Changes in pumping and valving can result in a WQM location receiving
       water from different sources within a short span of time, often causing rapid changes in the
       monitored water quality. Background variability also includes shifts in the baseline due to
       seasonal operation and changing source water quality. Figure 3-1 shows an example of highly
       variable baseline data.

    •   Equipment Problem: Monitoring Station Hardware: The following monitoring station
        hardware problems caused incomplete or inaccurate data (as discussed in Section 8.0) to be
        provided to CANARY, which often generated alerts.

       o   Monitoring Station Power Loss:  A loss of power to a monitoring station causes a failure of
           data generation and transmission. CANARY alerts often occurred when the monitoring
           station came back online following restoration of power, as the new values represent a large,
           sustained change from the values provided to CANARY while the power was lost (generally
           a default invalid value of "0" or a flat-line  of the last value received).

       o   Monitoring Station Flow Loss: An interruption in the flow of pressurized water to a WQM
           station can impact all of the monitoring station's data streams.  Sensor faults can occur, as
           well as actual water quality changes such as depletion of chlorine in the stagnant water.
           These changes, though real, are not considered valid alerts because they are due to an
           equipment problem and not representative of water quality in the distribution system.

        o   Sensor Malfunction: Hardware malfunctions can result in inaccurate or erratic data. While
            the water quality is normal, the sensor readings, and thus the values provided to CANARY,
            are not.

    •   Equipment Problem: Communication System: Data collection failure often causes incomplete
       data, as  discussed in Section 8.0. As with monitoring station power loss, CANARY often
       generates an alert when data communications are restored.  Communication system problems are
       further broken down into the following sub-causes:

       o   Monitoring Station Data Collection Failure:  Communication failure at a specific WQM
           station, causing flat-lined or missing data from all data streams from that monitoring station.

       o   System-Wide Outage: A system-wide network malfunction resulting in flatlined or missing
           data from all monitoring stations.

    •   Equipment Problem: Event Detection System:  A number of invalid alerts were caused by
        bugs in CANARY's internal alert generation processes and configuration settings.  For alert
        classification, these were broken into two sub-causes.

       o   Identified CANARY Bug: A defect within the CANARY software that caused invalid alerts.

       o   Incorrect Parameter Sensitivity Settings in CANARY:  Incorrect settings due to non-optimal
           software configuration, not a software problem.

    •   Procedural Error:  Failure to follow procedures during instrument maintenance can produce
       erratic or inaccurate data that generates CANARY alerts. Alerts due to procedural errors were
       grouped into the following sub-causes:

       o   Calibration Selector Switch Not Used Correctly: The selector switch is supposed to be
           placed to the "Calibration" position when a monitoring station is being serviced so that all
           data streams are flagged as unusable and thus  not analyzed by CANARY.  In some cases this
           was not done, and the calibration signal that would cause CANARY to ignore the erratic data
           often produced during maintenance activities was not transmitted.

       o   Sensor Maintenance Error: In some cases, a maintenance activity resulted in degradation in
           sensor performance. For example, there were instances where improper maintenance trapped
           air in the internal plumbing of a sensor, yielding erratic data that generated CANARY alerts.
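
The bookkeeping implied by this categorization can be summarized as in the sketch below (Python); the
example alerts and documented causes are hypothetical, and the cause names follow the categories defined
in the lists above.

# Hypothetical sketch of the alert classification bookkeeping described above; the
# example alerts and documented causes are illustrative.
from collections import Counter

VALID_CAUSES = {"Contamination Incident", "Main Break", "Distribution System Work",
                "Treatment Plant Change", "Verified Non-Standard System Operation", "Other"}

def classify(alerts):
    """Tally alert causes and split the total into valid and invalid alerts."""
    causes = Counter(alert["cause"] for alert in alerts)
    valid = sum(count for cause, count in causes.items() if cause in VALID_CAUSES)
    return valid, len(alerts) - valid, causes

alerts = [
    {"location": "A", "trigger": "chlorine", "cause": "Background Variability"},
    {"location": "C", "trigger": "TOC", "cause": "Main Break"},
    {"location": "A", "trigger": "ORP", "cause": "Sensor Malfunction"},
]
print(classify(alerts))  # (1, 2, Counter({...}))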

Section 2.3 notes that improvements were made to the  CANARY software and configuration settings in
an effort to reduce the rate of invalid alerts and improve its ability to detect true water quality anomalies.
Changes which reduced avoidable, clearly invalid alerts included:

    •   Update parameter sensitivity values: As described in Section 2.3, the parameter sensitivity
       value in CANARY represents the smallest change in the parameter value that could  generate an
       alert. When CANARY was initially implemented in real-time, the sensitivity settings did not
       represent actual instrument performance and invalid alerts occurred due to extremely small
       changes in water quality parameter values.  These were updated based on the experience of
       GCWW staff to reflect the smallest change in water quality that could be reliably detected by the
       deployed sensors.

    •   Suppress alerts after calibration: As described above, CANARY uses the calibration signal
       from each monitoring station and does not analyze the data, or produce alerts, during monitoring
       station maintenance.  However, many invalid alerts were received immediately following
       calibration due to the large change in water quality parameter values that can occur following
       sensor calibration or other maintenance.  CANARY configurations were updated to  suppress
       alerts just after calibration.

    •   Fix software bugs:  A variety of bugs were identified and corrected during and after the
       evaluation period. Most bugs resulted from the fact that the EDDIES software was used to
       deploy CANARY. The Cincinnati CWS is currently the only utility where EDDIES is used with
       CANARY in real-time, and thus testing of the  interface between these software applications was
       limited.  As an example, a substantial bug was encountered when a CANARY update was
       installed that caused CANARY to enter "alert loops." With each initial alert, CANARY would
       begin producing one alert after another until analysis for the monitoring location was restarted.

    •   Remove sensors from analysis: Sensor issues can cause inaccurate and widely varying water
       quality values. There were several instances of significant sensor hardware problems that
       persisted for months which caused CANARY to produce invalid alerts.  By removing these data
       streams from analysis until the sensor issue was resolved, additional alerts were avoided.

    •   Definition of acceptable range: Knowledge about valid values from specific sensors was used
        to screen out inaccurate data that might otherwise generate an invalid alert. For example, for one
        model of TOC instrument, a value of 10 indicated a sensor fault. As a result, CANARY was
        adjusted to ignore any TOC parameter value larger than 9.9, thus eliminating invalid alerts that
        might result from this instrument malfunction; a brief screening sketch follows this list.
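
A minimal sketch of this type of data screening is shown below (Python). The TOC fault value of 10 is
taken from the example above; the configuration structure and function are hypothetical and do not
represent CANARY's actual configuration format.

# Hypothetical data-screening sketch based on the acceptable-range example above;
# the structure shown is illustrative and is not CANARY's actual configuration format.
ACCEPTABLE_RANGE = {"TOC": (0.0, 9.9)}  # TOC readings above 9.9 indicate a sensor fault

def screen(parameter, value):
    """Return the value if it falls within the acceptable range; otherwise return None
    so the reading is excluded from event detection instead of generating an alert."""
    low, high = ACCEPTABLE_RANGE.get(parameter, (float("-inf"), float("inf")))
    return value if low <= value <= high else None

print(screen("TOC", 1.4))   # 1.4 -> analyzed normally
print(screen("TOC", 10.0))  # None -> excluded as a sensor fault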

During the evaluation period, understanding of the CANARY analysis methodology and configuration
variables improved significantly, as did familiarity with local water quality patterns at each WQM
location. This knowledge was used to reconfigure CANARY (separate from the above fixes) and improve
performance, increasing the ability of CANARY to detect unusual water quality while decreasing the
number of invalid alerts.

    •    Periodic changes were made to the configuration settings for individual WQM locations to reflect
        changes in the baseline water quality. At some WQM locations, real-time data from pumps, tanks
        and valves was used to create CANARY "cluster files" which helped suppress invalid alerts
        caused by changes in operations.

    •    In late 2010, a rigorous re-analysis of CANARY configurations was performed in which multiple
        configurations were compared for each monitoring location using  104 days of data and 12
        simulated contamination incidents.  For several stations, significant improvements were seen
        under new configurations. The new configuration at one monitoring location produced half the
        number of invalid alerts while increasing the number of incidents detected from zero to four.

Several of these modifications to CANARY were made during the evaluation period, while others were
implemented later as additional issues were encountered:  modifications were made well into 2011.  To
assess CANARY's performance under this optimized condition, the data collected over the entire
evaluation period was reprocessed through the latest version of CANARY in off-line, batch mode. The
results of this reprocessing were a new set of alerts that better represent performance for the conditions
under which the Cincinnati CWS is currently operating.

Both sets of alert data will be presented in this section:

    •    Real-time monitoring alerts that were collected during real-time operation.

    •    Reprocessed alerts that were generated by the latest version and configurations of CANARY in
        an off-line batch analysis.

Results: This section summarizes the alerts produced during the evaluation period, which were analyzed
by reporting period, alert cause, location and trigger parameter. Both real-time monitoring and
reprocessed alerts are included in all figures. Those showing reprocessed alerts are always shown first
and have a blue background. Real-time monitoring alert summaries follow and have a pink background.

Figure 6-1 shows the number of alerts for each reporting period for the real-time monitoring and
reprocessed results. Alerts are shown by cause, with the three categories related to equipment problems
(Event Detection System, Monitoring Station Hardware and Communication System) combined into one
overall Equipment Problem category.

Overall, the number of observed alerts during real-time monitoring ranged from 7 to 203 per reporting
period with an average of 68 per reporting period.  There  was  a downward trend in alert occurrence
during  real-time monitoring, which was driven by improved sensor performance and CANARY upgrades.
The benefit of the CANARY configuration updates in May 2008  and October 2008, described in Table 2-
5, can clearly be seen, as alert numbers drop significantly after these periods.  The exception to this
downward trend is seen as an increase in alert occurrence from February through March 2010.  This was
largely due to problems when a new, faulty version of CANARY was installed. More information on
specific changes that were implemented during the evaluation period can be found in Table 2-5.
[Figure 6-1 consists of two bar charts (reprocessed alerts and real-time alerts) showing the number of
alerts in each monthly reporting period, broken down by cause: Equipment Problem, Background
Variability, Procedural Error and Valid Alert. The horizontal axis is the start date of the monthly
reporting period and the vertical axis is the number of alerts; the vertical scale differs between the two
charts because of the markedly different number of alerts generated.]
Figure 6-1. Cause of Reprocessed and Real-Time Alerts by Reporting Period

With the reprocessed analysis, between 10 and 65 alerts were generated per reporting period, averaging
28 over the evaluation period, 59% fewer alerts than the average number generated during real-time
monitoring.  There were fewer invalid alerts in the reprocessed results compared to real-time monitoring
for most reporting periods with the following exceptions: March through May 2009, and September
through November 2009.  During these periods, CANARY was unavailable in real-time due to software
updates, as discussed in Section 8.3. For much of this downtime, complete and accurate data was still
being generated and stored, but CANARY was not analyzing it and thus not generating any alerts.  This
resulted in an artificially low number of real-time monitoring alerts - in some cases zero alerts over an
entire reporting period.  However, during reprocessing, the entire dataset was processed by CANARY,
even during these periods in which CANARY was not operating in real-time. Thus, more alerts were
generated during reprocessing than were observed during real-time monitoring.  This issue is seen very
clearly during the noted reporting periods, and is likely present to a smaller degree in other periods as
well.

Figure 6-2 looks more closely at alert causes, showing the causes and sub-causes for the alerts generated
during real-time monitoring and reprocessed analysis. A description of the causes and sub-causes is
provided above, under analysis methodology. Note that the total number of alerts, shown in the upper-left
corner of each figure, is different for the reprocessed and real-time monitoring alerts.
[Figure 6-2 consists of two pie charts showing the distribution of alerts by cause and sub-cause.
Real-Time Alerts (1,977 total): Background Variability 31.4% (620), Equipment Problem:
Communication System 13.2% (261), Valid Alert 3.1%, Procedural Error 1.8% (36), with the remainder
attributed to Equipment Problem: Event Detection System and Equipment Problem: Station Hardware.
Reprocessed Alerts (809 total): Equipment Problem: Station Hardware 40.2% (325), Background
Variability 40.0% (324), Equipment Problem: Communication System 12.0% (97), Valid Alert 4.8%
(39), Procedural Error 3.0% (24). Sub-cause breakouts include Calibration Selector Switch Not Used
Correctly and Sensor Maintenance Error under Procedural Error.]
Figure 6-2. Cause and Sub-cause of Reprocessed and Real-Time Alerts

The following discussion provides additional analysis of, and comparison between, the number of alerts
observed in real-time and reprocessed alert sets for each category of alert cause.

   •   Invalid Alerts: The reprocessed analysis generated fewer invalid alerts for each cause when
       compared to the alerts generated in real-time, as discussed below in order of significance.
       o  Equipment Problem: Event Detection System:  CANARY issues were the most significant
           cause of invalid alerts in real-time (32.4%), mostly due to identified CANARY bugs. The
           version of CANARY that generated the reprocessed alerts, which incorporated all of the
           improvements discussed under the analysis methodology, completely eliminated invalid
           alerts due to this cause.
       o  Background Variability: Excluding the problems with CANARY discussed above,
           background variability produced the most invalid alerts in both real-time monitoring and
           reprocessed analyses. However, refinement of CANARY configuration settings reduced the
           number of invalid alerts caused by background variability in the reprocessed analysis by 47%
           when compared to the real-time results.
       o  Equipment Problem: Monitoring Station Hardware:  Excluding the event detection
           system problems, this category was the second most prevalent cause of invalid alerts in both
           real-time monitoring and the reprocessed analysis. This type of invalid alert was not
           significantly impacted by the CANARY upgrades. The best way to reduce invalid alerts due
           to this cause is to improve data quality through improved maintenance.  For the Cincinnati
           pilot, the number of alerts due to monitoring station hardware was also reduced when
           instruments with chronic problems were removed from analysis, as discussed in Section 8.1.
            This led to fewer alerts during reprocessing because the poor quality data that CANARY
            analyzed in real-time, before the sensors were taken offline, was no longer considered.  However,
            the loss of data for a water quality parameter may impact other aspects of performance, such as
            contaminant coverage.
       o  Equipment Problem: Communication System:  Excluding event detection system
           problems, this was the third most significant alert cause in both real-time monitoring and the
           reprocessed analysis. The system-wide and monitoring  station outage sub-causes were both
           significant contributors to this invalid alert type. CANARY configuration updates reflected
           in the reprocessed analysis reduced the number of invalid alerts due to this cause by 62%
           when compared to the invalid alerts that occurred in real-time.
       o  Procedural Errors: This cause was not a significant source of invalid alerts for either the
           reprocessed or real-time analyses.

   •   Valid Alerts: The CANARY configurations used in the reprocessed analysis  generated 41 valid
       alerts compared to 61 valid alerts generated during real-time monitoring. Thus, while the
       CANARY updates and new configurations were effective at reducing the occurrence of invalid
       alerts, the decrease in detection of valid alerts in the reprocessed analysis demonstrates the
       challenge associated with balancing the detection of true anomalies against minimizing the
       occurrence of invalid alerts.

Table 6-1 and Figure 6-3 show the alerts for each WQM station that were generated in real-time and
during reprocessed analysis. The stations are ordered in decreasing order of reprocessed alerts produced.
Table 6-1. Alerts By Monitoring Station

                    Reprocessed                     Real-Time
Station ID     # Valid      # Invalid          # Valid      # Invalid
     A            0            142                0            220
     B            7             68               11            112
     C            2             58                3            180
     D            5             34                6            160
     E            6             47                8            146
     F            3             34                7            110
     G            3             41                6             81
     H            1             71                1            115
     I            4             38                7            127
     J            2             69                2            141
     K            2             15                2             83
     L            1             22                0            101
     M            1             25                5             99
     N            2             27                3             76
     O            0             79                0            165
  Total:         39            770               61          1,916

Monitoring Station Water Quality Variability
A - This monitoring location is at a pump station co-located with multiple ground storage tanks and receives
    water from both plants. Thus, it can get water from either of the storage tanks, water from either of the
    treatment plants via the mains, or a mixture of these sources. Thus, water quality here is highly variable
    and unpredictable.
B - This monitoring location is in the seasonal interface zone between the two plants and thus experiences
    large variations in water quality.
C - This monitoring location can receive water pumped directly from the plant or nearby reservoir, and can
    thus experience slight water quality variability depending on system operations.
D - Depending on pumping, this monitoring location can receive water through one of two upstream pump
    stations or from a co-located reservoir. Water quality is highly variable at this site.
E - This monitoring location can receive water pumped directly from the plant or nearby reservoir, and thus
    can experience slight water quality variability depending on operations.
F - This monitoring location primarily receives water pumped from the treatment plant, but can also receive
    water from the co-located reservoir. Thus, it experiences water quality variability depending on operations.
G - This monitoring location can receive water pumped directly from the plant or nearby reservoir, and thus
    can experience slight water quality variability depending on operations.
H - This monitoring location primarily receives water directly from the plant, but occasionally receives water
    from the co-located reservoir. Thus, there is some water quality variability depending on operations. As
    noted in Table 2-5, this monitoring location was moved in March 2009 due to low pressure and
    intermittent flow.
I - This monitoring location receives water pumped from the plant through a major pump station. There is
    little variability in water quality at this location.
J - This monitoring location is located just downstream of the groundwater plant; there is little water quality
    variability.
K - This monitoring location is at a pump station and can experience water quality variability due to pump
    operations.
L - This monitoring location receives water pumped from the plant through major pump stations. There is
    little variability in water quality at this monitoring location.
M - This monitoring location receives water pumped from the plant through a major pump station. However,
    it can also receive water from nearby tanks and reservoirs. Thus there is some water quality variability at
    this location.
N - This monitoring location receives water that is pumped from the plant through a major pump station.
    There is little variability in water quality at this monitoring location.
O - This monitoring location receives water that is pumped from the plant through a major pump station.
    There is little variability in water quality at this monitoring location.

[Figure 6-3: paired bar charts, one per monitoring station (A through O), showing the number of reprocessed
and real-time alerts at each station by cause (Background Variability, Equipment Problem, Procedural Error,
Valid Alert). Note: the scale of the vertical axis varies by chart due to the markedly different numbers of
alerts generated.]
Figure 6-3.  Cause of Reprocessed and Real-Time Alerts by Location

The CANARY improvements previously discussed led to fewer total alerts and fewer invalid alerts in the
reprocessed analysis for all stations, with percentage decreases in invalid alerts ranging from 35% at
Station A to 82% at Station K and an average percentage decrease of 61%. The number of valid alerts
also decreased for most stations, illustrating that modifications to an event detection system to decrease
the occurrence of invalid alerts can also decrease the sensitivity of the event detection system. The
percentage reduction in valid alerts ranged from 0% at stations A, H, J and K to 80% at station M with an
average percentage decrease of 33%. Examples of stations where performance was dramatically
improved by CANARY modification include:

    •  Stations L and O: Both of these stations detected one true anomaly (i.e., generated a valid alert)
       during reprocessing that was not detected in real-time. Furthermore, the number of invalid alerts
       decreased by 82% and 65% respectively, resulting in a total of 166 fewer invalid alerts that would
       be received by the utility.

    •  Stations H, J, and K: The new CANARY configuration did not impact the number of valid
       alerts generated at these  stations, but the number of invalid alerts decreased by 51%, 70%, and
       51%, respectively (resulting in a total of 184 fewer invalid alerts received by the  utility).

    •  Stations D and E: While the number of valid alerts generated decreased by 17% and 25%,
       respectively, in the reprocessed analysis, the  occurrence of invalid alerts decreased by 68% and
       78%, respectively (while three valid alerts were lost, the updated CANARY configuration
       produced a total of 224 fewer invalid alerts).

Table 6-1 and Figure 6-3 also clearly show the variations among the monitoring stations in terms of alert
occurrence and cause, especially for equipment problems and  background variability. Notable causes of
differences in alert levels among the stations  are discussed below:

    •  Background Variability at Monitoring Station A:  Table 6-1 describes the water quality
       variability of each monitoring location, and it is evident from this table that stations with a greater
       degree of water quality variability generally have a greater occurrence of invalid  alerts. In both
       the real-time monitoring and reprocessed analysis, Station A had the highest number of invalid
       alerts caused by background variability due to the complex and largely unpredictable water
       quality. Figure 6-4 shows typical water quality at this location. Frequent changes in water
       sources can be clearly seen, and pH may be the most effective way to distinguish between the
       sources.  The surface water source typically has a pH  near 8.5, the groundwater source has a pH
       above 9, and a third source typically has a pH near 8.75 and is most likely from a tank with a
       mixture of the  surface and groundwater sources.
[Figure 6-4: time series illustrating the typical, highly variable water quality at monitoring station A.]

    •   Equipment Problems at Station O:  In both the real-time monitoring and reprocessed alert sets,
       station O had the highest number of invalid alerts due to equipment problems caused by chronic
       issues with the chlorine sensor and occasional problems with the TOC sensor at this station.

    •   Differences in Alert Causes: In many instances, the distribution of alerts among the various
       causes was significantly different for the reprocessed alerts.  These often non-intuitive changes
       were due to updates in the CANARY configuration settings implemented after the evaluation
       period that resulted in very different alerting patterns. Station H is one example of this;
        equipment problems were by far the greatest cause of invalid alerts in the reprocessed
       results, whereas background variability was the dominant cause during real-time monitoring.
       Some of these invalid alerts were missed during real-time monitoring because CANARY was
       unavailable at the time of the equipment problem.

Table 6-2 and Figure 6-5 analyze the trigger parameters output by CANARY for both the reprocessed
and real-time monitoring results.  The percentages are calculated relative to the total number of alerts in
each category, as shown in the final row of the table. For example, during real-time monitoring TOC was
listed as a trigger in 48.6% of all alerts received. In some cases, an alert was triggered by more than one
parameter, so the percentages across all parameters for a given analysis can sum to more than 100%.
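
Because a single alert can list more than one trigger parameter, the per-parameter percentages are computed
within each category and can sum to more than 100%. The short sketch below illustrates the calculation; the
alert records are hypothetical and the sketch is not the tool used in the evaluation.

    from collections import Counter

    # Hypothetical alert records; each alert lists one or more trigger parameters.
    alerts = [
        {"valid": False, "triggers": {"TOC"}},
        {"valid": False, "triggers": {"TOC", "CL2"}},   # counted under both TOC and CL2
        {"valid": True,  "triggers": {"CL2"}},
        {"valid": False, "triggers": {"PH", "COND"}},
    ]

    def trigger_percentages(alert_subset):
        """Percentage of alerts in the subset for which each parameter was listed as a trigger."""
        counts = Counter(p for a in alert_subset for p in a["triggers"])
        total = len(alert_subset)
        return {p: 100.0 * n / total for p, n in counts.items()}

    print(trigger_percentages(alerts))                               # all alerts; may sum to more than 100%
    print(trigger_percentages([a for a in alerts if a["valid"]]))    # valid alerts only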

Table 6-2. Alerts by Water Quality Parameter

                          Reprocessed                                Real-Time
               Valid          Invalid        Total         Valid          Invalid          Total
Parameter      #     %        #     %        #     %       #     %        #       %        #       %
TOC            4     10.3%    113   14.7%    117   14.5%   20    32.8%    940     49.1%    960     48.6%
Chlorine       26    66.7%    298   38.7%    324   40.0%   34    55.7%    876     45.7%    910     46.0%
ORP            13    33.3%    83    10.8%    96    11.9%   9     14.8%    508     26.5%    517     26.2%
pH             11    28.2%    266   34.5%    277   34.2%   18    29.5%    613     32.0%    631     31.9%
Conductivity   10    25.6%    280   36.4%    290   35.8%   18    29.5%    773     40.3%    791     40.0%
TOTAL:         39    N/A      770   N/A      809   N/A     61    N/A      1,916   N/A      1,977   N/A

For each analysis, # is the number of alerts for which the parameter was listed as a trigger and % is that
number as a percentage of the category total (valid, invalid or all alerts).
NOTE: Totals indicate total number of alerts, not total number of trigger parameters

During real-time monitoring, TOC was the parameter most frequently listed as a trigger (48.6% of all alerts
received), and ORP the least frequently listed (26.2%). For the reprocessed analysis, chlorine (40.0%) and
ORP (11.9%) were the most and least frequent, respectively.
[Figure 6-5: bar chart showing, for each trigger parameter (TOC, CL2, ORP, pH, COND), the percentage of
alerts for which that parameter was listed as a trigger, plotted separately for reprocessed valid, reprocessed
invalid, real-time valid and real-time invalid alerts.]
Figure 6-5.  Percentage of Alerts with each Parameter Listed as a Trigger

Below is further discussion regarding the attribution of individual parameters to alert occurrence.

    •   TOC: Recurring TOC sensor issues throughout the evaluation period resulted in TOC being the
        most frequent contributor to invalid alerts.  As discussed previously, CANARY configurations
        were updated to address this  specific issue. These changes resulted in an 88% decrease in the
        number of invalid alerts triggered by TOC, with only 14.4% of reprocessed alerts attributable to
        TOC.

    •   Chlorine:  In both the real-time monitoring and reprocessed alert sets, chlorine was the most
        frequent contributor to valid  alerts.  However, it was also the second highest and highest
        contributor to invalid alerts for the real-time monitoring and reprocessed results, respectively.
        Thus, while chlorine was a relatively sensitive measure of anomalous water quality conditions,
        this highly variable parameter also produced a large number of invalid alerts.

    •   ORP: In both the real-time monitoring and reprocessed  alert sets, ORP was the least frequent
        contributor to invalid alerts.  However, in the reprocessed analysis ORP was the second largest
        contributor to valid alerts.  In fact, 14% of the reprocessed alerts attributed to ORP were valid,
        which is the highest for any parameter type. This seems  to indicate that ORP was a relatively
         sensitive and reliable measure of unusual water quality. While it is unlikely to replace chlorine, it
        could be used to verify or rule out a possible anomaly indicated by a chlorine-triggered alert.

6.2     Valid Alerts

This section evaluates the ability of the WQM component to produce alerts when unusual water quality is
present in the distribution system.  Section 6.2.1 considers valid alerts generated for the simulated
contamination incidents in the simulation study. Section 6.2.2 examines valid alerts produced at the
Cincinnati pilot over the evaluation period.

6.2.1   Valid Alerts from Simulated Contamination Incidents

Definition:  A valid alert is a WQM alert generated in response to a simulated contamination incident.
Each impacted monitoring location is a site of potential alert. There is at most one valid alert per
impacted location, as only the first instance of an alert from each WQM location was recorded during
the simulation study. Because these alerts are generated in response to a known, simulated contamination
incident, they are considered to be valid.

Analysis Methodology: Section 5.2 summarized detection of simulated contamination scenarios, each of
which may contain multiple valid alerts. This section analyzes the simulation study results at a deeper
level, considering potential and valid alerts for each of the 737 simulated contamination scenarios
practically detectable by Cincinnati's WQM component.  Overall alerting rates are presented, as well as
the occurrence of valid alerts by monitoring location and by contaminant.
Section 6.3.1 investigates occurrence of multiple alerts produced within a scenario.

Results: Of the 737 scenarios practically detectable by WQM, there were  1,959 potential alerts (i.e.,
impacted monitoring stations).  A total of 1,373 alerts were produced, yielding an overall alerting
percentage of 70%.
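
The alerting percentages are simple ratios of alerts produced to potential alerts. The sketch below illustrates
the per-location tally reported in the table that follows; the scenario records are hypothetical, not the actual
simulation study output.

    from collections import defaultdict

    # Hypothetical simulation records: (scenario ID, impacted station, alert produced?).
    records = [
        (1, "B", True), (1, "C", True), (1, "A", False),
        (2, "B", False), (2, "E", True),
        (3, "C", True),
    ]

    potential = defaultdict(int)   # scenarios in which each station was impacted
    produced = defaultdict(int)    # scenarios in which that station actually alerted
    for _, station, alerted in records:
        potential[station] += 1
        if alerted:
            produced[station] += 1

    for station in sorted(potential):
        pct = 100.0 * produced[station] / potential[station]
        print(f"{station}: {produced[station]}/{potential[station]} = {pct:.1f}%")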

Table 6-3 summarizes alerting by monitoring location. The number of potential alerts for each
monitoring location is given, which is the number of scenarios in which the monitoring location was
impacted. The third and fourth columns show the number of alerts that were in fact produced, and the
percentage relative to the number of potential alerts.

Table 6-3. Alerts by Monitoring Location

Monitoring        # of Potential      # of Potential       % of Potential
Location ID       Alerts              Alerts Produced      Alerts Produced
A¹,²              250                 39                   15.6%
B                 267                 164                  61.4%
C                 150                 137                  91.3%
D¹                25                  8                    32.0%
E                 51                  50                   98.0%
F                 219                 198                  90.4%
G¹                59                  38                   64.4%
H                 50                  45                   90.0%
I                 68                  45                   66.2%
J                 15                  14                   93.3%
K                 194                 176                  90.7%
L                 255                 197                  77.3%
M                 215                 163                  75.8%
N                 100                 72                   72.0%
O                 41                  34                   82.9%
¹ This monitoring station did not have an ORP instrument
² This monitoring station did not have a TOC instrument

The number of potential alerts by monitoring location ranged from 15 to 267.  This is largely dependent
on the study design, though the stations with very few potential alerts were generally close to a treatment
plant, such that scenarios with an injection location at a downstream node would not impact the
monitoring location. Station J is in the area served by the smaller treatment plant, and contaminant
injections outside this area did not reach this location.

Alerting percentages by monitoring location varied widely, ranging from 16% to 98%.  Stations A and D
had by far the lowest alerting percentages.  As noted in Table 6-1, these stations have complex water
quality, getting water from multiple sources and having frequent, large changes in water quality. Thus, it
may be more difficult to distinguish water quality changes caused by contamination from the normal
variability.  Figure 6-4 shows an example of the highly variable water quality at station A.

However, an even stronger influence on alerting percentage was the set of water quality parameters monitored.
Three stations (A, D, and G) did not have the full suite of water quality parameters, and these had three of
the four lowest alerting percentages. These stations are Type A (described in Section 2.1.1) and do not
have ORP instruments. Thus, other stations had additional information to facilitate detection of
contaminants impacting this parameter (all contaminants except Nuisance Chemical 1, Toxic Chemical 1
and Biological Agent 2).

In addition, station A, which had the lowest alerting percentage overall, does not have an operational
TOC sensor. As shown in Table 5-5, 7 of the 17 contaminants change TOC.  Thus the lack of a TOC
sensor at this station hindered detection of scenarios involving these contaminants. Furthermore,
Nuisance Chemicals 1  and 2, Toxic Chemical 4, and Biological Agent 2 impact only TOC: no scenarios
using these contaminants could be detected at this monitoring location.

These results show that monitoring the full suite of water quality parameters provides better detection
capability, and that baseline water quality variability also influences detection.
6.2.2  Valid Alerts from Observed Water Quality Anomalies
Definition: Observed water quality anomalies  are real water quality anomalies observed in the drinking
water distribution system. These were identified by reviewers as described below. Alerts triggered by
observed water quality anomalies are considered valid. Each water quality anomaly is a discrete incident
that may pass through multiple monitoring stations and thus may trigger multiple alerts. Each monitoring
location where unusual water quality was observed by the sensors is considered a site of potential alert.

Analysis Methodology:  All water quality data from  the evaluation period was analyzed, and knowledge
of routine operations and normal water quality variability at each WQM location was used to identify
significant observed water quality anomalies. Figure 6-6 shows an observed water quality anomaly
caused by a change in chlorine dose at the main treatment plant. Reviewers were able to clearly identify
this unusual water quality at five monitoring stations:  data from two of these sites of potential alert are
shown in this figure.
[Figure 6-6: time series of chlorine from 4/13/09 through 4/16/09 at the treatment plant and at two
downstream monitoring locations.]
Figure 6-6. Chlorine Data from an Observed Water Quality Anomaly

Unlike the simulation study, there is no way to know with certainty when unusual water is at a monitoring
location. Incidents whose water quality changes were not significant when compared to normal
variability were likely missed by both data reviewers and the CANARY software.

For example, Figure 6-7 shows data from another monitoring location during the water quality anomaly
shown in Figure 6-6.  This location is downstream of the plant and thus should have received the water
with elevated chlorine levels, but this change does not show clearly in the data. The black arrow shows a
chlorine increase that could be related to the observed anomaly, but it is not sufficiently different from the
monitoring location's normal water quality to be certain. Thus this location was not considered a site of
potential alert.
[Figure 6-7: time series of chlorine from 4/13/09 through 4/16/09 at an additional monitoring location during
the anomaly shown in Figure 6-6.]
Figure 6-7. Chlorine Data from an Additional Site During the Anomaly shown in Figure 6-6

While the evaluators could have used knowledge of network hydraulics to rigorously attempt to identify
all stations that likely observed each incident of anomalous water quality, this was not done in this
evaluation.  Even if it had been, the hydraulic analysis would have been imprecise: flow paths change
depending on system operations and demand, and there was no way to retrospectively obtain detailed data
on system conditions at that time.  Also, there is no way to know how long the slug of water with unusual
quality remained intact and significant.

Results:  This section summarizes detection of observed water quality anomalies that occurred during the
evaluation period. Forty-nine real incidents were identified by reviewers and CANARY detected 69% of
them.

Table 6-4 summarizes the number of incidents  of each type and the detection percentage for each. A
description of these causes of unusual water quality can be found in Section 6.1.

Table 6-4. Observed Water Quality Anomaly Causes and Detections

Observed Water Quality Anomaly Cause        # of Real Incidents        % of These Incidents
                                            with this Known Cause      Detected
Contamination Incident                      0                          N/A
Main Break                                  2                          100.0%
Distribution System Work                    4                          100.0%
Treatment Plant Change                      7                          71.4%
Verified Non-Standard System Operation      23                         65.2%
Other                                       13                         61.5%
TOTAL:                                      49                         69.4%

Anomalies caused by main breaks and distribution work were all detected: these generally cause quick,
significant changes in multiple water quality parameters. The other incident types had similar detection
rates.

Table 6-5 shows incident detection percentages by the number of sites of potential alert.  Reviewers
identified between one and nine impacted monitoring stations for the incidents.  The number of sites of
potential alert does not seem to have a significant impact on the probability of an observed water quality
anomaly being detected, though these results are certainly skewed because for the majority of incidents,
water quality changes were identified at only one monitoring location. For this table, an incident is
considered detected if an alert was produced for at least one of the sites impacted.

Table 6-5. Incident Detection Percentages by Number of Sites of Potential Alerts

Number of Sites         # of Real Incidents       % of These Incidents
of Potential Alert      with this Number¹         Detected
1                       39                        66.7%
2                       1                         100.0%
3                       1                         100.0%
4                       1                         100.0%
5                       0                         -
6                       3                         33.3%
7                       1                         100.0%
8                       0                         -
9                       3                         100.0%
¹ It is likely that the number of sites of potential alert is underestimated for many incidents.

6.3    Alert Co-occurrence

The use of multiple WQM stations presents the potential for co-occurrence of alerts within a defined time
period, which would constitute an alert cluster. If valid alerts are received from hydraulically connected
monitoring stations, the cluster provides compelling evidence that a true water quality anomaly is
occurring in the distribution system. In fact, according to the Cincinnati Pilot Operational Strategy, a
cluster containing two or more valid, hydraulically connected alerts results in the immediate
determination that contamination is Possible.

However, clusters consisting of unrelated or invalid alerts also occur, and such invalid alert clusters may
take more time to investigate and rule out compared with an isolated invalid alert.

Alert clusters are considered either valid or invalid, as defined below.

   •   Valid Cluster:  A cluster containing co-occurring valid alerts from at least two hydraulically
       related monitoring stations resulting from a single verified water quality anomaly.

   •   Invalid Cluster:  A cluster that does not contain co-occurring valid alerts resulting from a single
       verified water quality anomaly.

Often more than one alert was received from the same monitoring location. For example, it is likely that
CANARY would produce an alert for each monitoring location shown in Figure 6-6 as the chlorine
suddenly jumped up, and then another as it abruptly dropped back down to its original level.  Only the first
alert from each monitoring location is considered in this analysis.

This section presents an analysis of alert co-occurrence using 1) alerts generated during the simulation
study, and 2) reprocessed alerts generated by the optimized version of CANARY, as described in Section
6.1. The same version and configuration of CANARY was  used in both of these analyses.

6.3.1  Co-occurrence of Alerts for Simulated Contamination Incidents
Definition: For the simulation study, a cluster is formed when alerts are received from two or more
monitoring stations for the same simulated contamination incident.  The nature of the simulation study
guarantees that all clusters generated are valid clusters.

Analysis Methodology: For each scenario, the number of impacted stations was captured, as well as the
number of alerts generated.  As presented in Section 4.3, there was a maximum of 14 impacted stations
in a scenario, and thus a maximum of 14 potential alerts in a single scenario.

Results: In total, clusters were formed for 347 scenarios, which is 47% of the 737 scenarios practically
detectable by WQM.  The clusters ranged in size from 2 to 13 alerts, as shown in Figure 6-8. The number of
clusters of each size is shown above each bar. Note that no bar is shown for the 296 detected scenarios in
which only one alert occurred, as by definition those are not clusters.
[Figure 6-8: bar chart of the number of scenarios at each cluster size (number of alerts per cluster), with the
count shown above each bar.]
Figure 6-8. Cluster Sizes for Simulated Contamination Incidents

There were two alerts in 50.1% of the clusters formed, and the number of clusters decreases roughly
exponentially with increasing cluster size. Five or more alerts were generated for 37 scenarios, which is 10.7% of those for which
clusters were formed and 5.8% of all scenarios detected.

The box-and-whisker plots shown in Figure 6-9 show the number of alerts produced per detected
scenario, broken down by contaminant. All detected scenarios are included here, including those
for which only one alert was produced and thus no cluster was formed.
[Figure 6-9: box-and-whisker plots of the number of alerts produced per detected scenario, by contaminant.]

Section 5.2 discussed contaminant spread, which is largely based on the volume of contaminant available
to inject. In general, contaminants that had a larger spread (including the Nuisance Chemicals and
Biological Agents 3-5) tended to produce clusters more often; the median number of alerts for these contaminants
was two, compared with other contaminants in which the median was one. Conversely, contaminants
with limited spread (including Toxic Chemicals 3 and 7 and Biological Agents  6 and 7) produced alert
clusters less frequently. Note that on the plot, these simply have a line at one alert, as all detected
scenarios for these contaminants had one alert produced. As shown in Table 5-6, no scenarios involving
Toxic Chemical 8 were detected.
6.3.2  Co-occurrence of Alerts on Utility Data
Definition: For CANARY output on the utility data, a cluster is formed when alerts are received from
two or more monitoring stations within a 24-hour period.

Analysis Methodology: A 24-hour moving window was applied to the reprocessed CANARY alerts
(described in Section 6.1) to identify alert clusters.  Twenty-four hours was chosen as the  basis for
defining a cluster because it encompasses the longest travel time between two WQM stations that are
hydraulically connected, but is still short enough such that the earliest and latest alerts should still be
active or in the recent alert history.  This is consistent with the timing seen in the simulation study results
discussed in Section 6.3.1. For 95% of the clusters produced, the time between the first and second alerts
was less than 24 hours. Clusters that were entirely subsets of another cluster were removed to avoid
redundancy.
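
The moving-window grouping described above can be illustrated with a short sketch. This is illustrative only
and is not the code used in the evaluation: the alert list, station identifiers and times below are hypothetical
assumptions for the example. The sketch groups alerts from two or more stations whose start times fall within
a 24-hour window and then drops clusters that are entirely subsets of a larger cluster.

    from datetime import datetime, timedelta

    # Hypothetical reprocessed alerts: (station ID, alert start time).
    alerts = [
        ("B", datetime(2009, 4, 13, 2, 0)),
        ("E", datetime(2009, 4, 13, 9, 30)),
        ("K", datetime(2009, 4, 14, 1, 15)),
        ("C", datetime(2009, 4, 20, 16, 0)),   # isolated alert; forms no cluster
    ]

    WINDOW = timedelta(hours=24)

    def find_clusters(alerts, window=WINDOW):
        """Group alerts from two or more stations whose start times fall within a moving window."""
        alerts = sorted(alerts, key=lambda a: a[1])
        clusters = []
        for i, (_, start) in enumerate(alerts):
            members = [a for a in alerts[i:] if a[1] - start <= window]
            if len({station for station, _ in members}) >= 2:
                clusters.append(members)
        # Remove clusters that are entirely subsets of another cluster to avoid redundancy.
        return [c for c in clusters
                if not any(set(c) < set(other) for other in clusters if other is not c)]

    for cluster in find_clusters(alerts):
        print([station for station, _ in cluster])   # prints ['B', 'E', 'K'] for the data above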

Alert clusters were first categorized as valid or invalid per the definitions in the introduction to this
section, and then categorized by cause.

To be a valid cluster, two  or more valid alerts due to the same observed water quality anomaly were
required. The same  categories used to classify valid single alerts were used for valid clusters:
Contamination Incident, Main Break, Distribution System Work, Treatment Plant Change, Verified Non-
Standard System Operation and Other.  See Section 6.1 for details on these categories.
Invalid clusters were grouped into the following categories.

    •  System-wide issue: System-wide communications and power outages  often resulted in invalid
       alert clusters when restored to service. Due to the nature of these outages, the utility was
       typically aware of the issue before the alerts occurred, and thus able to determine that the alert
       cluster was invalid without difficulty.

    •  No hydraulic connectivity: Each cluster was analyzed to see if any of the cluster's alerts were
       hydraulically connected; a simple check of this type is sketched after this list.  Alerts were considered
       hydraulically connected if the monitoring location of the later alert is downstream of an earlier alert's
       monitoring location.  The hydraulic travel time was not considered because the data needed to compute
       the actual travel time were not available.  If no hydraulic connectivity exists, the utility would easily
       discount the cluster.

    •  Coincidental station-specific issues: The remaining clusters were  caused by coincidental,
       unrelated issues at the individual WQM stations. Causes of these unrelated issues include
       monitoring station hardware problems, procedural errors and normal background variability.
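
A minimal sketch of the hydraulic connectivity screen described in the list above is shown below. It is
illustrative only: the downstream-reachability map and the example cluster are hypothetical, and in practice
such a map would be derived from the utility's hydraulic model. A cluster has hydraulic connectivity if any
later alert comes from a station downstream of an earlier alerting station; travel times are ignored, mirroring
the screening described above.

    from datetime import datetime

    # Hypothetical downstream reachability: station -> stations that can receive its water.
    DOWNSTREAM = {
        "B": {"E", "K"},
        "E": {"K"},
        "K": set(),
    }

    def has_hydraulic_connectivity(cluster, downstream=DOWNSTREAM):
        """Return True if any later alert's station is downstream of an earlier alert's station."""
        ordered = sorted(cluster, key=lambda a: a[1])   # sort by alert start time
        for i, (earlier, _) in enumerate(ordered):
            for later, _ in ordered[i + 1:]:
                if later in downstream.get(earlier, set()):
                    return True
        return False

    cluster = [("B", datetime(2009, 4, 13, 2, 0)), ("K", datetime(2009, 4, 14, 1, 15))]
    print(has_hydraulic_connectivity(cluster))   # True: K is downstream of B in this example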

Results:  63.2% of the reprocessed alerts fell into a cluster. A total of 214 clusters were formed.  Figure
6-10 summarizes the number of clusters that fell into each cluster category.
[Figure 6-10: pie chart of the 214 alert clusters by category: Invalid cluster - No hydraulic connectivity;
Invalid cluster - Coincidental station-specific issues; Invalid cluster - System-Wide Issue; and Valid cluster.]
Figure 6-10.  Alert Cluster Causes

The majority of clusters (85%) could be easily discounted by the utility because the alerting stations were
not hydraulically connected or because they were attributable to a system-wide issue of which utility staff
were aware. An additional 3% of clusters, included in the coincidental station-specific issues category,
consisted entirely of alerts due to monitoring station equipment issues (such as a malfunctioning
TOC sensor), and could also have been easily discounted.

Each of the four valid clusters contained two alerts.  The time between the valid alerts within these clusters
ranged from 7.1 to 8.9 hours. Three of these valid clusters were caused by a treatment plant change, and
the fourth was caused by verified non-standard system operation. The  other valid alert causes such as
main breaks and distribution system work generally only impacted one or two stations and thus did not
produce alert clusters.

6.4    Summary

The occurrence of valid and invalid alerts has a significant impact on the benefit and sustainability of a
WQM system. Benefits of WQM are realized through detection of unusual water quality conditions that
are of interest to the utility. On the other hand, too many invalid alerts can divert staff from other duties
and may ultimately cause the system to be perceived as unsustainable.

As described in  Section 6.1, the CANARY software and software configurations were updated during the
evaluation period to improve performance and address bugs. Thus, two analyses were performed on the
utility data from the evaluation period:  the CANARY alerts that were actually produced during real-time
deployment were captured, and then the data was reprocessed using the final CANARY configurations to
evaluate what performance would have been if these settings had been  in place all along. The cause of
each alert was identified.  Some of the alerts were determined to be valid, as they were triggered by actual
unusual conditions in the distribution system.

Using the final CANARY configurations (i.e., reprocessed), 809 alerts were produced, 39 (5%) of which
were valid. Alert occurrence varied significantly across the 15 stations, ranging from 15 to 142 invalid
alerts and 0 to 7 valid alerts. Most invalid alerts were caused by background water quality variability
(40.0% of all reprocessed alerts) and monitoring station hardware problems (40.2%), and the frequency of invalid alerts decreased
significantly over the evaluation period as system issues were resolved  and sensor performance improved.

There were also 49 incidents of unusual water quality in the utility data, and 69.4% of them were
detected. Most were attributed to verified non-standard system operations. These incidents impacted
between one and nine monitoring stations. Clusters were formed for four of the detected incidents, each
containing two alerts.

Two hundred and ten invalid alert clusters were produced on the utility data.  88% were easily discounted
as the alerts were not related hydraulically or did not have similar water quality changes.  Utility staff
determined that the alerts making up the remaining alert clusters were also unrelated.  Thus, while the
occurrence of invalid alert clusters was substantially greater than that for valid clusters, the characteristics
of the valid clusters were distinct from those of invalid clusters, and thus easy to identify.

Results from the simulation study showed there were  1,959 potential alerts (i.e., impacted monitoring
stations) over the 737 practically detectable scenarios. A total of 1,373 alerts were produced, all of which
were considered valid under the conditions of the simulation study, yielding an overall alerting percentage
of 70%. Alert rates were highly dependent on the monitoring location,  with the percentage of potential
alerts generated varying from 16% to 98%.  WQM  stations that lacked ORP had lower detection rates
relative to the other stations, and the one monitoring station that lacked both ORP and TOC had the
lowest detection rate of all.  Baseline water quality  variability also had an impact on alert rates, with
locations with greater background variability generally having lower detection rates. Clusters were
formed for 347 (47%) of the 737 practically detectable scenarios and included between 2 and 13 valid alerts.


  Section 7.0:  Design Objective: Timeliness of Detection and
                                      Response

For a CWS to have the maximum potential to reduce consequences of a contamination incident, it must
detect the incident early enough to allow sufficient time to implement response actions under the
consequence management plan. The timeliness of detection is a function of many aspects of WQM
component design, including sensor network design, data transmission rates, data processing speeds, and
alert investigation procedures. To assess how well the Cincinnati WQM component met this design
objective, the time for initial detection and the time to investigate a WQM alert are evaluated below.

7.1    Time for Initial Detection

Section 6 summarized valid alerts generated for both simulated contamination incidents and real periods
of unusual water quality observed during the Cincinnati pilot. This section discusses timeliness of
detection for both types of alerts.

Definition: The time for initial detection is the time between the presence  of unusual water quality in the
distribution system and the start time of the first alert. The time for initial detection consists of two
elements.  First, since water quality is monitored only at distinct locations in the distribution system, there
is a hydraulic travel time before the unusual water reaches a WQM location.  Second, there is a delay
between the time that unusual water reaches a monitoring location and the time the event detection system
generates an alert. The CANARY event detection system used in Cincinnati is designed such that it must
witness several consecutive timesteps of abnormal data before generating an alert, which eliminates
invalid alerts that would otherwise occur due to a single excursion in the data, as might occur with a brief
interruption in data communications.

The following delays also contribute to the time for detection, though they are negligible.  Together, they
contribute less than eight minutes to the detection timeline.

    •  Time to analyze water by water quality probes: Hach CL-17 analyzes water quality every 2.5
       minutes and GE Sievers TOC analyzes every 4 minutes.
    •  Time to communicate data from the monitoring stations: GCWW uses a 2-minute polling
       interval, so this is the maximum time that lapses between data generation and transmittal.
    •  Time to transmit data to the event detection system:  all observed times were less than 30
       seconds.
    •  Time for event detection system analysis: all observed times were less than 30 seconds.
    •  Time to transmit event detection  system output to control system: all observed times were less
       than 30 seconds.


7.1.1  Timeliness of Detection for  Valid Alerts from Simulated Contamination Incidents

Analysis Methodology:  For simulated contamination incidents, timeliness of detection is calculated
from the scenario's injection time, as this is the time that contaminant is introduced to the distribution
system.  The two main elements of detection time can be precisely calculated for simulated contamination
events. The hydraulic travel delay is the difference between the scenario's  contaminant injection time and
the first time that non-zero concentration is present at a monitoring location. The event detection system
alert delay is the difference between the alert start time and this time of non-zero contaminant
concentration.
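
For a single impacted monitoring location, these two delay elements can be computed directly from the
simulation output, as in the sketch below. The concentration time series and alert start time are hypothetical
values chosen only to illustrate the calculation.

    from datetime import datetime

    injection_time = datetime(2009, 6, 1, 0, 0)

    # Hypothetical simulated contaminant concentration at one monitoring location, by timestep.
    concentration = [
        (datetime(2009, 6, 1, 5, 0), 0.0),
        (datetime(2009, 6, 1, 7, 0), 0.0),
        (datetime(2009, 6, 1, 9, 0), 0.02),    # first non-zero concentration at the station
        (datetime(2009, 6, 1, 11, 0), 0.15),
    ]
    alert_start = datetime(2009, 6, 1, 10, 30)  # hypothetical CANARY alert start time

    first_nonzero = next(t for t, c in concentration if c > 0)

    hydraulic_travel_delay = first_nonzero - injection_time  # injection to first non-zero concentration
    event_detection_delay = alert_start - first_nonzero      # first non-zero concentration to alert start
    total_time_to_alert = alert_start - injection_time

    print(hydraulic_travel_delay, event_detection_delay, total_time_to_alert)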

Results: Across the 737 simulated contamination scenarios in which at least one WQM location was
impacted, there were 1,959 impacted monitoring stations (potential alerts).  Overall, the hydraulic travel time between the contaminant
injection location and impacted monitoring locations ranged from 0.25 hours to 56.8 hours, with a median
of 10.8 hours.  The median time for water to reach a monitoring location for a scenario was 5.8 hours (this
uses the earliest time water arrived across the impacted stations for each scenario).

The remainder of this section considers only stations for which an alert was received.  The overall time to
detect the simulated contamination incidents ranged from 26 minutes to 79.8 hours with an average of 9.2
hours.

Figure 7-1 shows the statistical distribution of times to detect for the simulation study. The first three
plots show the detection timeline of all 1,373 alerts generated during the simulation study. The two main
elements of detection time (hydraulic travel time and event detection alert delay) are shown, followed by
the overall times to alert.  The final plot shows the range of total  detection times for the 643 detected
scenarios.  The times in this final plot are shorter than those in the preceding plots because only the first alert for
each scenario is included.
[Figure 7-1: box-and-whisker plots of hydraulic travel time, event detection system delay, and total time to
alert for all 1,373 alerts generated during the simulation study, plus the total time to first alert across the 643
detected scenarios.]
Figure 7-1.  Timeliness of Detection for Simulation Study Scenarios

The hydraulic travel time ranged from 15 minutes to 41.8 hours, with a median of 8 hours. Event
detection system delays ranged from nine minutes to 120 hours, with a median of 46 minutes. The longer
event detection system delays were generally caused when there was a very small initial contaminant
concentration at the monitoring location; many hours could go by before a detectable concentration was
present.

Overall, times to detect for all alerts were between 26 minutes  and 154 hours, with a median of 10.8
hours. When only considering the first alert from each scenario, the times to detect were between 26
minutes and 80 hours, with a median of 5.8 hours. The timing of the first alert is critical as it initiates the
investigation process. If the alert is found to be valid, activation of the Cincinnati Pilot Consequence
Management Plan and implementation of response actions can begin.

The Cincinnati pilot's threat level is automatically elevated to Possible when two valid alerts that are
hydraulically connected and have the same trigger parameters occur within a time period consistent with
the hydraulic travel time between the alerting stations. For the 347 scenarios discussed in Section 6.3.1 in
which a cluster was formed, the time the second alert was received ranged from 3.1 to 93.6 hours, with a
median of 13.2 hours. The time between the first and second alerts ranged from 1 minute to 74.5 hours,
with a median of 10.2 hours. Note that a second alert requires that contaminated water has flowed to at
least two stations; this metric is highly dependent on hydraulic travel times.


The total time to alert is strongly dependent on the monitoring location at which the alert is generated.
Figure 7-2 investigates the total time to alert for each of the  15 monitoring stations.
[Figure 7-2: box-and-whisker plots of the total time to alert at each monitoring station (A through O).]
Figure 7-2. Timeliness of Detection by Monitoring Location

The median total time to alert ranged from 3.3 hours (station H) to 63.3 hours (station D). The three
stations with the longest alert delays (stations A, B, and D) were also the stations with the lowest
percentage of potential alerts produced, as shown in Table 6-3.  Since these stations experience high water quality
variability that can mask water quality anomalies, CANARY was configured to require a longer period of
unusual water quality before an alert is produced. This is intended to reduce the number of invalid alerts
received, though it also increases the time to detect when a true water quality anomaly is present.

The variability in alert time across monitoring stations is further investigated in Figure 7-3, which shows
the two components of the total alert time: hydraulic travel time and event detection system delay.
[Figure 7-3: stacked bars of the median hydraulic travel time and median event detection system delay at each
monitoring station (A through O).]
Figure 7-3. Components of Time to Detect by Monitoring Location

Across the stations, the median hydraulic travel time ranged from 2.75 hours (station H) to 34 hours (station
D). Hydraulic travel times were fairly consistent across most stations, with an overall median of 5.5 hours (330 minutes).

The median event detection system delays ranged from 26 minutes (stations C and O) to 47.6 hours
(station D). As noted above, stations A and D were configured to delay alerting until more unusual data
was seen.  Excluding these two stations, the median event detection delays ranged from 26 to 75 minutes,
and eight stations had a median delay of less than 35 minutes.

Figure 7-4 presents the range of total alert times by contaminant. Note that no scenarios using Toxic
Chemical 8 were detected, and thus no times to alert are shown on this plot.
[Figure 7-4: box-and-whisker plots of the total time to alert for each contaminant.]
Figure 7-4. Timeliness of Detection by Contaminant

Excluding Toxic Chemical 7, for which only four valid alerts were produced, the median times to alert are
fairly similar across the contaminants, ranging from 3.9 hours (Biological Agent 7) to 14.5 hours (Biological
Agent 5).  This may be due to the fact that injections were simulated throughout the distribution system
for all contaminants. Hydraulic travel time, which is strongly influenced by injection location, is the
dominant element of the delay between the start of contaminant injection and the generation of a WQM
alert.
7.1.2  Timeliness of Detection for Valid Alerts from Observed Water Quality Anomalies
Analysis Methodology:  Unlike the simulation study, there is no definitive "injection time" from which
to calculate the timeliness of detection for observed water quality anomalies. In most cases, the point and
time at which unusual water enters the system is unknown.

However, the "start times" of the treatment plant changes can be reasonably estimated, as there is
monitoring equipment at the effluent of both treatment plants.  The data from these plant monitoring
stations was mined to determine when the treatment change occurred - and thus when the atypical water
quality entered the distribution system.

However, the start time for the other observed water quality anomaly types (described in Section 6.2.2)
cannot be determined.  For example, there is no record of precisely when the main break events occurred,
only when the break was first reported. This uncertainty makes it impossible to calculate a timeliness of
detection with any degree  of confidence.  As a result, only treatment plant changes were considered when
quantifying timeliness of detection for observed water quality anomalies.

Results: During the evaluation period, six observed water quality anomalies originated from the
treatment plant. One more plant event was detected but is not considered here,  as the time anomalous
water quality entered the system could not be determined because of a communication failure.  Across
these, a total of 42 stations were determined to be impacted using the analysis methodology described in
Section 6.2.2.  Overall, the hydraulic travel time between the treatment plant and impacted monitoring
stations ranged from 6.3 hours to 41.4 hours, with a median of 11.3 hours.  The median time for water to
reach any monitoring location across these events was 7.1 hours (this uses the earliest time water arrived
across the impacted stations for the six events).

Four of the treatment plant events were detected - each with one alert received. Considering only the
stations from which an alert was received, the hydraulic travel time was between 7.3 and 13.1 hours, with
a median delay of 10.5 hours. The median time it took CANARY to alert at a monitoring location once
unusual water had reached it was 1.6 hours.  Overall, the time to detect for these treatment plant events
ranged from 7.6 to 17.4 hours, with a median of 13.1 hours. Note that an alert was not always generated
at the first monitoring location reached.

7.2    Time to Fully Investigate a WQM Alert

Definition:  The time to fully investigate a WQM alert is the time necessary to complete all steps in the
alert investigation process and conclude whether contamination is possible.  Generally, this is the time
between the  start of the alert and the time that the Water Quality & Treatment Technician reports results
from the monitoring station inspection to the Water Quality & Treatment Chemist.

Analysis Methodology:  The results from four drills and exercises conducted during the evaluation
period, described in Section 3.2, were used to estimate the time  to fully investigate a WQM alert.  In
addition, the time to  complete major steps of the alert investigation process (e.g., review water quality
trends, review operations and work orders, inspect the monitoring station, etc.) were analyzed.  The time
at which contamination was determined to be Possible is also presented; however, note that this time
depends on the details of the contamination scenario driving each drill or exercise. In several cases, information from other
monitoring and surveillance components was available before the WQM alert was fully investigated,
which resulted in a time to establish possible contamination that is shorter than the time to fully
investigate the WQM alert.

Alert investigation times  from routine operations were not included in this analysis. Section 9.1 presents
the level of effort required for the routine investigation of alerts. Of the WQM alerts generated during
routine operations, 95% were found to be invalid via an abbreviated investigation process, and no real-time
investigation required an on-site inspection of the monitoring station that produced the alert.

Results:  WQM Drill 1 was conducted on July 14, 2008. The alert from the CANARY event detection
software was received at  9:00 am and the GCWW Water Quality & Treatment Chemist began the
investigation within three minutes.  The investigation concluded after 119 minutes (approximately 2
hours) as the GCWW Water Quality & Treatment Technician reported the results of his station inspection
back to the GCWW Water Quality & Treatment Chemist at 10:59 am. Figure 7-5 shows the timeline
progression of the key activities completed during the WQM alert investigation for WQM Drill 1. The
timeline was normalized  so the alert start time occurs at time 0.
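
The timeline arithmetic used for these drills is straightforward; the following minimal Python sketch
illustrates it using the clock times reported above for WQM Drill 1 (the activity names and log structure
are illustrative, not the format of the actual drill records).

from datetime import datetime

# Illustrative drill log: activity -> clock time (times taken from WQM Drill 1 above)
log = {
    "OWQM alert received":                 datetime(2008, 7, 14, 9, 0),
    "Operator recognizes alert":           datetime(2008, 7, 14, 9, 2),
    "Technician reports station results":  datetime(2008, 7, 14, 10, 59),
}

alert_start = log["OWQM alert received"]

# Normalize each activity so that the alert start time occurs at time 0
for activity, clock_time in log.items():
    minutes_after_alert = (clock_time - alert_start).total_seconds() / 60
    print(f"{minutes_after_alert:5.0f} min  {activity}")

# Time to fully investigate the alert: alert start to report of the station inspection
investigation = log["Technician reports station results"] - alert_start
print(f"Time to investigate WQM alert: {investigation.total_seconds() / 60:.0f} minutes")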

[Figure 7-5 is a timeline graphic summarizing the following milestones: 00:00 OWQM alert; 00:02 operator
recognizes alert; 00:03 operator notifies WQ&T Chemist; 00:25 WQ&T Chemist determines alert is valid;
00:26 WQ&T Chemist notifies Water Utility Emergency Response Manager; 00:28 remote sample collection
begins; 01:11 WQ&T Technician prepares for site investigation; 01:59 OWQM alert investigation is complete
as WQ&T Technician reports results of station inspection; 02:06 Water Utility Emergency Response Manager
determines contamination is Possible.]
Figure 7-5. Timeline Progression of the WQM Alert Investigation during WQM Drill 1

For WQM Drill 1, the GCWW WUERM made the determination that contamination was Possible
following completion of the WQM alert investigation, 126 minutes after the WQM alert was received.
This drill was based on a single WQM alert, and no information was available from other components to
accelerate the process of establishing possible contamination.

A Full Scale Exercise was performed on October 1, 2008. For the WQM component, the first alert was
received at 7:30 am. The investigation of this alert was completed at 10:30 am as the GCWW Water
Quality  & Treatment Technician reported results from the WQM station inspection. The time to
investigate the WQM alert was 180 minutes (3 hours). Figure 7-6 shows the timeline progression of the
key activities completed during the initial WQM alert investigation for the Full Scale Exercise.

[Figure 7-6 is a timeline graphic summarizing the following milestones: 00:00 OWQM alert, operator
recognizes alert; operator notifies WQ&T Chemist; 00:41 operator reviews operational data and reports
results to WQ&T Chemist; 00:50 Distribution Dispatcher reviews work orders; 00:52 WQ&T Chemist determines
alert is valid; 00:53 remote sample collection begins; 00:55 WQ&T Chemist notifies Water Utility Emergency
Response Manager; 02:26 Water Utility Emergency Response Manager determines contamination is Possible;
03:00 OWQM alert investigation is complete as WQ&T Technician reports results of station inspection.]
Figure 7-6. Timeline Progression of WQM Alert Investigation During Full Scale Exercise

For this exercise the WUERM determined that contamination was possible at 9:56 am, 146 minutes after
the first WQM alert was received. Note that this determination was made prior to completion of the
initial WQM alert investigation. This was a result of alerts being received from two additional WQM
stations at 9:40 am (02:10 on Figure 7-6).

WQM Drill 2 occurred on February 25, 2009. The initial WQM alert was received at 8:22 am. The
GCWW Water Quality & Treatment Technician inspected the alerting monitoring station and reported the
results back to the Chemist at 11:33 am. For this drill, the time to investigate the alert was 191 minutes
(3.2 hours).  Figure 7-7 shows the timeline of key activities completed during the initial alert
investigation.

[Figure 7-7 is a timeline graphic summarizing the following milestones: 00:00 OWQM alert, operator
recognizes alert; 00:01 operator notifies WQ&T Chemist; 00:24 WQ&T Chemist notifies Water Utility
Emergency Response Manager; 00:30 WQ&T Chemist determines alert is valid; 00:35 remote sample collection
begins; 00:40 Water Utility Emergency Response Manager determines contamination is Possible; 01:31 WQ&T
Technician prepares for site investigation; 03:11 OWQM alert investigation is complete as WQ&T Technician
reports results of station inspection.]
Figure 7-7.  Timeline Progression of the WQM Alert Investigation During WQM Drill 2

During WQM Drill 2 the WUERM made the determination that contamination was possible at 9:02 am,
40 minutes after the first WQM alert was received.  This decision was made prior to completion of the
initial WQM alert investigation because an additional WQM alert had been received with similar
parameters and hydraulic connectivity to the first WQM alert and it was verified that operational activities
had not caused either alert.

The WQM After-Hours Drill, intended to target staff with less experience investigating alerts, began on
April 29, 2009.  The WQM alert was received at 9:30 pm. The GCWW Water Quality & Treatment
Technician inspected the monitoring station and reported results back to the GCWW Water Quality &
Treatment Shift Chemist at 12:12 am (April 30, 2009). For this drill, the time to investigate the alert was
162 minutes (2.7 hours). Figure 7-8 shows the progression of the key activities completed during the
WQM alert investigation.

[Figure 7-8 is a timeline graphic summarizing the following milestones: 00:00 OWQM alert, operator
recognizes alert; 00:04 operator notifies WQ&T Chemist; 00:14 WQ&T Chemist determines alert is valid;
00:22 WQ&T Chemist notifies Water Utility Emergency Response Manager; 00:32 remote sample collection
begins; 01:35 WQ&T Technician prepares for site investigation; 02:04 Water Utility Emergency Response
Manager determines contamination is Possible; 02:42 OWQM alert investigation is complete as WQ&T
Technician reports results of station inspection.]
Figure 7-8. Timeline Progression of the WQM Alert Investigation During WQM After-Hours Drill

During the WQM After-Hours Drill the WUERM made the determination that contamination was
Possible 124 minutes after the initial WQM alert was received.  This decision was made prior to
completion of the WQM alert investigation because an alert was received from a second, nearby WQM
station. After review of the two alerts and verification that the alerts were related and not caused by
operational activities, contamination was deemed Possible.

Table 7-1 provides a summary of the average and range of time spent on each activity. The average time to
investigate a WQM alert was 165 minutes (2.8 hours), with a range of 119 to 191 minutes.

Table 7-1. Time to Implement Key Activities During Drill and Exercise WQM Alert Investigations
Activity | Average (minutes) | MIN to MAX (minutes)
Time to Investigate WQM Alert | 165 | 119 to 191
Time elapsed between start of WQM alert and operator recognition of alert | 1 | 1 to 2
Time for operator to notify Water Quality & Treatment Chemist | 1 | 1 to 1
Time for operator to review operational data and report results to Water Quality & Treatment Chemist | 7 | 7 to 7
Time for Distribution Dispatcher to review work orders | 2 | 2 to 2
Time for Chemist to determine WQM alert is valid | 28 | 10 to 51
Time for Chemist to notify Emergency Response Manager | 4 | 1 to 8
Time to initiate remote sample collection | 8 | 1 to 17
Time for Water Quality & Treatment Technician to prepare for site investigation | 41 | 31 to 52
Time for Water Quality & Treatment Technician to inspect WQM station and report results to Water Quality & Treatment Chemist | 42 | 19 to 58

7.3    Summary

The time for initial detection by the WQM component comprises two main elements: the time
necessary for contaminated water to flow from the contamination site to a WQM station, and the time
necessary for the event detection system to produce an alert.
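
As a simple illustration of how these two elements combine, the sketch below sums a hypothetical
hydraulic travel time and event detection delay for each alerting station and takes the earliest alert as
the time to detect; the station names and hour values are illustrative only, not results from the pilot.

# Hypothetical alerting stations for one incident: hydraulic travel time from the
# source of unusual water to the station, and the event detection system delay
# once the unusual water reaches the station (hours). Illustrative values only.
stations = {
    "Station A": {"travel_hr": 7.3, "eds_delay_hr": 1.6},
    "Station B": {"travel_hr": 10.5, "eds_delay_hr": 0.8},
}

# Total time to alert at each station = hydraulic travel time + event detection delay
time_to_alert = {name: s["travel_hr"] + s["eds_delay_hr"] for name, s in stations.items()}

# The incident's time to detect is set by whichever station alerts first
first_station = min(time_to_alert, key=time_to_alert.get)
print(f"Time to detect: {time_to_alert[first_station]:.1f} hours (first alert at {first_station})")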

Table 7-2 summarizes these  delays for both simulated events and observed water quality anomalies. The
range of values observed is shown, followed by the median value in parentheses. The second column
shows the hydraulic travel times from the source of unusual water to impacted station(s). The third
column captures the same times, but only for stations at which an alert was produced. The fourth column
shows the event detection system delays for all alerts produced, and the final column summarizes the
overall times to detect.

Table 7-2.  Summary of Delays in Time to Detect
Event Type | Hydraulic Travel Time for All Impacted Stations | Hydraulic Travel Time for Alerting Stations | Event Detection System Delay | Total Time to Alert
Simulated Contamination Events | 0.25-56.8 hours (10.8 hours) | 0.25-41.8 hours (8 hours) | 0.15-120 hours (0.8 hours) | 0.4-154 hours (10.8 hours)
Observed Water Quality Anomalies | 6.3-41.4 hours (11.3 hours) | 6.3-11.3 hours (7.6 hours) | 0.3-6.4 hours (1.6 hours) | 7.6-17.4 hours (13.1 hours)

Clearly the hydraulic travel time was responsible for the majority of the delay between the start of an
incident and alert generation. This emphasizes the importance of sensor network design, as the
monitoring locations determine how long it takes for unusual water to reach a location at which it could
potentially be detected.

While the range was much larger for the simulated contamination incidents, which originated from many
different distribution system locations, the median value for the total time to alert was remarkably similar
for the simulated and observed water quality incidents.

Drills and exercises showed that full investigation of a valid alert takes between two and three hours.
This timeline is greatly accelerated when additional alerts are received, either from WQM or another
CWS component. The additional information can curtail the need for a site inspection of a monitoring
station. However, for the 348 simulated scenarios for which multiple alerts were produced, the median
time between the first and second alerts was  10.2 hours, indicating that the  second alert may not always
arrive in sufficient time to avoid the site inspection.


       Section 8.0: Design Objective:  Operational Reliability

For a CWS to consistently detect incidents of unusual water quality, it must achieve a high degree of
operational reliability.  Specifically, the four WQM design elements (monitoring stations, data collection,
event detection system and component response procedures) must all be consistently available and
producing quality data. In order to evaluate how well the WQM component met this design objective, the
following three metrics were evaluated: data completeness, data accuracy and availability. The following
subsections define each metric, describe how it was evaluated and present the results.

8.1    Data Completeness

Definition: Data is considered incomplete if it is missing or unusable. A sensor's data is considered
missing if it is not delivered to the SCADA system, as data is expected from every sensor monitored by
the WQM component every two minutes. If the data is delivered to the SCADA system but has been
flagged to indicate suspect quality, it is considered unusable. Incomplete data is problematic because it
represents an opportunity for a missed event.

Data completeness was evaluated for the WQM component to characterize the amount of data that was
delivered to, and considered usable by, CANARY. Thus, it characterizes performance of the monitoring
station and data collection design elements. Because performance issues with individual sensors were a
primary cause of lost data, completeness was also characterized in detail for each sensor. The component
and sensor analyses are presented in Sections 8.1.1 and 8.1.2, respectively.

8.1.1  Data Completeness for the WQM Component
Analysis Methodology: All data generated during the evaluation period was analyzed to characterize
data completeness for the WQM component. This analysis measured the amount of data delivered to the
SCADA system free of flags and thus considered usable by the event detection system.  The time,
monitoring station and cause of each instance of missing or unusable data were documented.

The following definitions applied to the analysis of component data completeness:

    •  Data  Stream: The output signal for a single instrument (e.g. Hach chlorine sensor at a specific
       monitoring station).  There are 84 WQM data streams for the Cincinnati pilot.

    •  Potential Data Hours for the Component: The total number of hours in the evaluation period
       multiplied by the total number of data streams.

    •  Complete Data Hours for the Component: The potential data hours for the component minus
       the total hours of incomplete data for all data streams.

    •  Percentage Data Completeness:  The complete data hours divided by the potential data hours.
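
Taken together, these definitions reduce to a simple calculation. The minimal Python sketch below applies
them to hypothetical hour counts (the stream names and values are illustrative; the actual analysis was
performed on the full SCADA record).

# Illustrative inputs (not the pilot's actual totals)
hours_in_evaluation_period = 24 * 365        # length of the evaluation period, in hours
number_of_data_streams = 84                  # one output signal per instrument
incomplete_hours_by_stream = {               # missing or flagged hours per data stream
    "Station01_chlorine": 120.0,
    "Station01_TOC": 800.0,
    "Station02_pH": 6.5,
    # ... remaining data streams
}

# Potential data hours for the component = hours in the period x number of data streams
potential_hours = hours_in_evaluation_period * number_of_data_streams

# Complete data hours = potential data hours minus all hours of incomplete data
complete_hours = potential_hours - sum(incomplete_hours_by_stream.values())

# Percentage data completeness = complete data hours / potential data hours
print(f"Data completeness: {100.0 * complete_hours / potential_hours:.1f}%")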

When analyzing data completeness for the WQM component, missing or unusable data was attributed to
one of the following causes:

    •  Sensor Issue:  Data from an individual sensor is incomplete if no data is being produced by the
       sensor or if a sensor fault is indicated. A detailed analysis of sensor issues is presented in Section
       8.1.2.

    •  Data  Collection Failure:  Two patterns indicate a data collection failure: 1) the responses from
       all sensors at a monitoring station are concurrently flat-lined, or 2) the data from all sensors at a
       monitoring station are missing for a period of time.  Data collection failures are grouped into two
       sub-categories:
           o  System-Wide Outage: This type of failure is characterized by flatlined or missing data for
              all WQM data streams, which can result from a system-wide network outage or failure of
              the SCADA system.
            o  Monitoring Station Data Collection Failure: This type of failure is characterized by
               flatlined or missing data for all data streams from a single monitoring station, which may
               result from loss of communication service to the monitoring station or PLC failure.

    •   Monitoring Station Issue: Data may be considered incomplete because of a problem with the
       monitoring station including:
           o  Loss of Flow to Monitoring Station: A hydraulic problem, such as a main break or
              clogged pressure regulator, can interrupt the flow of pressurized water to a monitoring
              station. Some sensors generate a fault in this condition, making those data streams
              unusable for event detection.
           o  Loss of Power to Monitoring Station: All sensors used at the utility require power to
              generate and transmit data. While all stations were equipped with a UPS, there were
              instances where the primary power supply was lost and the  UPS charge expired before
              the power supply was restored.  All sensors stopped producing data when this occurred.

    •   Calibration: Each monitoring station has a "Normal/Calibration" selector switch. A technician
       places the selector switch to the "Calibration" position when servicing the monitoring station. All
       data streams from a monitoring station during calibration periods are flagged as unusable and thus
       not analyzed by the event detection system.

    •   Monitoring Station Maintenance Error: Some data was unusable because of technician error.
       The most common maintenance error occurred when a technician forgot to take a monitoring
       station out of calibration mode, often resulting in days of flagged, and thus incomplete  data.
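
The attribution logic described by these causes can be summarized as a simple decision sequence. The
sketch below is a simplified, hypothetical illustration of that logic (the function, its arguments and the
status labels are assumptions for illustration; the actual analysis also relied on work orders, maintenance
logs and operator knowledge to confirm each cause).

def classify_incomplete_data(station_streams, calibration_switch_on,
                             station_power_ok, station_flow_ok,
                             system_wide_outage):
    """Attribute an interval of incomplete data at one monitoring station to a
    likely cause. station_streams maps each data stream to a simplified status
    of "ok", "missing" or "flat". Illustrative logic only."""
    if system_wide_outage:
        return "Data collection failure: system-wide outage"
    if not station_power_ok:
        return "Monitoring station issue: loss of power to monitoring station"
    if not station_flow_ok:
        return "Monitoring station issue: loss of flow to monitoring station"
    if calibration_switch_on:
        return "Calibration (or maintenance error if left in calibration mode)"
    statuses = set(station_streams.values())
    if statuses <= {"missing", "flat"}:
        # All streams at the station are concurrently missing or flat-lined
        return "Data collection failure: monitoring station data collection failure"
    return "Sensor issue: individual sensor offline, faulted or flat-lined"

# Example: a single chlorine stream is missing while the other sensors report normally
print(classify_incomplete_data(
    {"chlorine": "missing", "pH": "ok", "conductivity": "ok"},
    calibration_switch_on=False, station_power_ok=True,
    station_flow_ok=True, system_wide_outage=False))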

Results:  Figure 8-1 provides the percentage of data completeness for the WQM component for each
reporting period.  Over the course of the evaluation period, monthly data completeness ranged from
81.7% to 98.6%, with an average of 93.1%.

Between the January 2008 and February 2009 reporting periods, data completeness gradually increased
from 89.2% to 98.6%.  This improvement can be attributed to improved O&M of the various sensors over
the evaluation period and to modifications that were made during the evaluation period after observing
and correcting equipment issues, as shown in Table 2-5. The exception to the upward trend during this
period was the September 2008 reporting period during which data completeness was only 83.2% due to
widespread and prolonged power and communications outages during the Hurricane Ike windstorm.
Between the February 2009 and May 2010 reporting periods, data completeness for the WQM component
ranged between 81.7% and 96.8%, with downtime attributed to recurring maintenance issues and sensors
being taken offline due to prolonged equipment malfunction. The decline in data completeness following
the transition to real-time monitoring in June 2009 demonstrates the challenge of keeping the complex
equipment used in the WQM component in proper working order, even after the initial start-up issues
have been worked out.


[Figure 8-1 plots the percentage data completeness for each monthly reporting period (x-axis: Start Date
of Monthly Reporting Period), with the start of real-time monitoring indicated.]
Figure 8-1. Data Completeness for the WQM Component over the Evaluation Period

Figure 8-2 shows the percentage of incomplete data hours by cause and sub-cause for the entire
evaluation period.  The total hours of incomplete data for each cause or sub-cause are shown in
parentheses. The total number of incidents in each category is shown in brackets.

This figure clearly shows that the leading cause of incomplete data hours was sensor issues, which is
covered in greater detail in Section 8.1.2. Though there were fewer incidents of sensor issues (318) than
data collection incidents (666), the sensor issues lasted much longer. Many of the data collection
incidents lasted less than three hours, whereas sensors taken off-line resulted in weeks of incomplete data.

Data collection failure was the second most significant cause of incomplete data hours and had the most
incidents of failure. Most of the data loss was attributable to system-wide communication outages. This sub-
cause was responsible for a significant amount of incomplete data because a system-wide communication
outage results in loss of all 84 WQM data streams. The longest system-wide outage occurred during the
Hurricane Ike windstorm in September 2008. There were also system-wide outages when maintenance or
updates were being performed on the SCADA system.

The next two largest causes of incomplete data were monitoring station maintenance errors and
calibration, which were responsible for 6% and 4% of incomplete data, respectively. Incomplete data due
to calibration is unavoidable as it represents the time that instruments are taken off-line for maintenance
activities.  The most common maintenance error leading to incomplete data occurred when a technician
left a monitoring station in calibration mode.

Monitoring station issues accounted for only 2% of incomplete data, with most of the data losses resulting
from loss of power. The Hurricane Ike windstorm in September 2008 resulted in data loss because many
monitoring stations experienced prolonged power outages that extended beyond the capacity of the UPS.

Beyond that incident, most instances of power loss were associated with two monitoring stations.  One
experienced numerous power outages due to non-CWS equipment on the same circuit overloading and
tripping the circuit breaker.  Another, located at a Cincinnati Fire Department facility, occasionally
experienced power outages when overspray from vehicle wash-down tripped the ground fault circuit
interrupter receptacle that the WQM station was plugged into.

[Figure 8-2 charts the 121,853 hours of incomplete data (1,433 incidents, out of 1,768,763 potential hours
of complete data) by cause and sub-cause, with hours shown in parentheses and incident counts in brackets:
Sensor Issue [318 incidents], Data Collection Failure [666 incidents; 21,381 hours, 17%], Station
Maintenance Error [138 incidents], Station Calibration [246 incidents; 4,929 hours, 4%] and Monitoring
Station Issue [65 incidents; 2,238 hours, 2%]. Data Collection Failure is subdivided into System-Wide
Outage [575 incidents] and Station Data Collection Failure [91 incidents]; Monitoring Station Issues are
subdivided into Loss of Power to Station [63 incidents] and Loss of Flow to Station [2 incidents].]
Figure 8-2. Cause and Sub-cause of Incomplete Data for the WQM Component


8.1.2   Data Completeness for Individual Water Quality Sensors
Definition: Data completeness was calculated for each individual sensor during the evaluation period.
This analysis measured the amount of data delivered by each sensor to the SCADA system free of flags,
and thus considered usable by the event detection system.

Analysis Methodology: When analyzing completeness for an individual sensor, the number of potential
data hours excluded the incomplete data hours attributable to causes other than sensor issues.  Thus, the
number of hours of incomplete data attributable to data collection failure, monitoring station issues,
calibration of other sensors and monitoring station maintenance errors as described above were subtracted
from the total number of potential data hours in the evaluation period for each sensor.

The definitions for data stream and percentage data completeness from Section 8.1.1 also apply to Section
8.1.2. The following definitions apply to the analysis of data completeness for individual sensors:

    •  Potential Data Hours for a Sensor: The total hours that data is expected to be collected from an
       individual sensor during the evaluation period.

    •  Complete Data Hours for a Sensor:  The potential data hours for the sensor minus the total
       hours of incomplete data for the sensor.
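
Under these definitions, the sensor-level calculation differs from the component-level calculation only in
its denominator, as sketched below with hypothetical hour counts (illustrative values only).

# Illustrative hour counts for a single sensor over the evaluation period
hours_in_evaluation_period = 21_000.0   # hours data was expected from the sensor
non_sensor_incomplete_hours = 450.0     # data collection failures, station issues,
                                        # calibration and maintenance errors
sensor_incomplete_hours = 320.0         # offline, faulted, flat-lined or improperly
                                        # maintained sensor hours

# Potential data hours for the sensor exclude incomplete hours caused by non-sensor issues
potential_sensor_hours = hours_in_evaluation_period - non_sensor_incomplete_hours

# Complete data hours for the sensor and its percentage completeness
complete_sensor_hours = potential_sensor_hours - sensor_incomplete_hours
print(f"Sensor data completeness: {100.0 * complete_sensor_hours / potential_sensor_hours:.1f}%")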

When analyzing data completeness for individual sensors, missing or unusable data was attributed to one
of the following causes:

    •  Sensor Offline: A sensor is considered "offline" when it has been turned off by the utility. Many
       instruments were taken offline temporarily for maintenance, and a few were taken offline
       permanently due to persistent issues.

    •  Sensor Fault:  Data is flagged as unusable when the sensor diagnostics detect an internal fault
       which can be due to a hardware malfunction or a software error.

    •  Flat-line Response: Flat-line data from an individual sensor indicates that new values are not
       being produced, which can result from a loose cable connection or an error with hardware,
       firmware or software.  If all sensors from a particular monitoring station were flat-lined, those
       hours of incompleteness were attributed to data collection failure and not included in the potential
       data hours for the station's individual sensors.

    •  Improper Sensor Maintenance:  This cause was used when a sensor fault was triggered by work
       performed by a technician during a sensor maintenance activity.

Results:  The sensor completeness is summarized in Table 8-1 below. See Table 2-5 for detailed
descriptions of the WQM modifications, many of which significantly impacted data completeness.

Table 8-1.  Average Annual Percentage Data Completeness for WQM Sensors
Sensor* | 2008 | 2009 | 2010
Hach Astro TOC | 85.9% | 55.7% | 30.1%
Hach Chlorine | 98.5% | 99.8% | 100%
Hach conductivity | 99.0% | 100% | 100%
Hach ORP | 98.7% | 100% | 100%
Hach pH | 98.9% | 98.1% | 100%
s::can carbo::lyser | 43.2% | 91.4% | 100%
Sievers 900 | 93.0% | 90.8% | 86.3%
US Filter Chlorine - Bare Electrode Flow Cell | 53.7% | - | -
US Filter Chlorine - Membrane Flow Cell | 78.6% | 95.2% | 100%
US Filter pH | 98.8% | 99.5% | 100%
YSI conductivity | 97.2% | 100% | 100%
YSI ORP | 98.1% | 100% | 100%
* See Table 2-2 for a list of the sensor models

As discussed in Section 2.1.1, the utility switched from U. S. Filter bare electrode chlorine probes to
membrane probes and flow assemblies in July 2008 because the relatively high pH of the distributed
water was incompatible with the upper pH tolerance of the sensor.  The difference in data completeness is
dramatic and emphasizes the importance of selecting sensors that are compatible with a utility's specific
water quality.

The percentage completeness for most sensors increased from 2008 to 2009 as modifications and
improvements were performed. The s::can carbo::lyser had the most significant improvement.  The s::can
carbo::lysers experienced relatively low data completeness during 2008; these units produce sensor faults
when there are hardware issues, causing CANARY to ignore the data.  As described in Section 2.5, the
original aluminum housings were replaced with stainless steel housings to remedy the build-up of aluminum
oxide on the lamp assemblies, significantly increasing data completeness in 2009. Technicians continued to
identify and address s::can issues in 2009, such as backwards flow through the flow cell and fouled
windows, resulting in 100% data completeness in 2010.

However, the data completeness for the Hach Astro TOC and Sievers TOC decreased from 2008 to 2009.  The
decrease in data completeness for the Sievers TOC was due to multiple defects and issues, including
faulty inorganic carbon removers, leaky syringes, and clogged or loose tubing.  In the case of the Hach
Astro TOC, the decrease in data completeness resulted from the sensors being decommissioned starting in
the latter half of 2009 due to recurring equipment problems that caused inaccurate and erratic data. The
two types of TOC instruments continued to experience recurring equipment malfunctions into 2010,
resulting in substantial downtime.  The utility began decommissioning the Hach Astro TOC instruments
in 2009, and all three Hach Astro TOC sensors were offline for nearly all of 2010.

In 2010, the data completeness for all sensors, except the Hach Astro and Sievers TOC instruments,
increased to 100%. The 2010 sensor data completeness indicates that a high level of performance can be
expected from most instruments after initial startup issues have been addressed.

Figure 8-3 shows the sub-causes of sensor issues over the evaluation period. The percentage and total
hours (in parentheses) of incomplete data for each sub-cause are shown.

[Figure 8-3 charts the 85,915 hours of incomplete sensor data (out of 1,732,826 potential sensor hours of
complete data) by sub-cause: Sensor Offline, Sensor Fault, Improper Sensor Maintenance and Flat-line
Response, with the percentage and hours of incomplete data shown for each.]
Figure 8-3.  Incomplete Sensor Data by Sub-cause

The most significant sub-cause of incomplete data hours attributable to sensor issues was intentionally
taking the sensors offline. Offline sensors accounted for 62% of all component incomplete data hours, the
most by far of any sensor or component cause or sub-cause.

Sensor faults were the second largest cause of incomplete data. The US Filter bare electrode chlorine sensor
had the most incomplete data hours attributable to sensor faults because of the incompatibility with
GCWW's high pH water. Numerous prolonged faults occurred until the bare electrode probes were
replaced as described previously. The Sievers TOC had a slightly inflated number of sensor faults early
in the evaluation period because sensor outputs that were purely informational were treated as faults.
The sensor fault output was reconfigured to initiate "fault" conditions only for tags that did in fact
indicate that the sensor output was unusable. Note that sensor faults were not received from the YSI
sensors and US Filter pH sensors. The YSI sensors did not have the capability of producing a sensor fault,
and faults that were produced by the US Filter pH sensor were not transmitted to the SCADA system due to
communications bandwidth limitations.

Improper sensor maintenance was the third leading cause of incomplete data, with the Sievers TOC
sensor losing the most data to improper sensor maintenance. The complex, compact, and sensitive nature
of the Sievers TOC sensor led to multiple occasions where data from these instruments was missing or
flagged as unusable soon after a technician had serviced the device.

Finally, flat-line response was the sub-cause that contributed least to incomplete data. The Hach Astro
TOC had the most incomplete data hours caused by flat-line responses due to recurring sensor issues.

8.2     Data Accuracy

Definition:  In addition to being  complete, as defined in Section 8.1, data must also be accurate to ensure
that the event detection system is analyzing information that reflects actual real-time water quality
conditions in the  distribution system. Inaccurate data is problematic because it can generate invalid alerts,
as seen in Section 6.0, and mask  true water quality anomalies.

Data is considered accurate if the measured value is within an acceptable range of the true value obtained
through an independent method.  The acceptable range  for accurate data is defined for each water  quality
parameter in Table 8-2.  The true value for each water quality parameter at any given time  was
approximated using historic  data and variability for the  monitoring location and by considering the quality
of the water leaving the treatment plant.
Table 8-2.  Water Quality Parameter Accuracy Ranges
Parameter | Accuracy Range
TOC | ± 50%
Chlorine | ± 50%
pH | ± 10%
ORP | ± 30%
Conductivity | ± 30%
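
A measured value can be screened against these tolerances with a simple comparison. The sketch below
applies the ranges in Table 8-2 to hypothetical readings (the function name, reference values and example
readings are illustrative; in the pilot, the reference value was approximated from historical data and the
quality of water leaving the treatment plant).

# Acceptable deviation from the reference ("true") value, per Table 8-2
ACCURACY_RANGE = {"TOC": 0.50, "chlorine": 0.50, "pH": 0.10, "ORP": 0.30, "conductivity": 0.30}

def is_accurate(parameter, measured, reference):
    """Return True if the measured value falls within the acceptable range of the
    reference value for the given water quality parameter."""
    tolerance = ACCURACY_RANGE[parameter] * abs(reference)
    return abs(measured - reference) <= tolerance

# Hypothetical examples
print(is_accurate("chlorine", measured=0.6, reference=1.0))   # True: within +/- 50%
print(is_accurate("pH", measured=9.5, reference=8.5))         # False: outside +/- 10%
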
Accuracy was evaluated for the entire WQM component and for each sensor, as presented in Sections
8.2.1 and 8.2.2, respectively.  The component analysis measured the amount of complete and accurate
data that was suitable for event detection. The sensor analysis illustrates how malfunction in each sensor
type contributed to inaccurate data.

8.2.1  Data Accuracy for the WQM Component
Analysis Methodology: Empirical data that were considered complete during the evaluation period were
analyzed to quantify data accuracy at the component level. The time, monitoring station, and cause of
each instance of inaccurate data were documented.

The following definitions applied to the analysis of component data accuracy. The definitions for data
stream and complete data hours for the component from Section 8.1.1 also apply to Section 8.2.1.

    •  Accurate Data Hours for the Component: The number of complete data hours for the
       component minus the total hours of inaccurate data for all data streams.

    •  Percentage Accuracy: The accurate data hours divided by the potential data hours.

When analyzing data accuracy  for the WQM component, inaccurate data was attributed to one of the
following causes:

    •  Sensor malfunction:  Sensor malfunctions that caused inaccurate data include sensor hardware
       or firmware errors, internal flow blockage, defective equipment, and breakage.

    •  Improper maintenance:  Examples of improper maintenance that caused inaccurate data
       included incorrect calibration and failure to replenish reagents before they ran out.

    •  Monitoring station flow loss: In Section 8.1.1, monitoring station flow loss was listed as a
       cause of incomplete data when the monitoring station's sensors produced sensor faults. However,
       for sensors which did not produce a fault, the erratic data resulting from a flow loss was
       considered complete but inaccurate.

Results:  In order for data to be considered usable for event detection or most other applications, it must
be both complete and accurate. Figure 8-4 shows the total percentage of potential data hours for the
component that were unusable, broken down into the hours that were incomplete, as discussed in Section
8.1, and the hours that were complete  but inaccurate. Over the evaluation period, the average percentage
of data hours that were unusable per reporting period was 10.4%, ranging from 4.98% to 21.9%. Over the
same period, the average percentage of complete but inaccurate data hours was 3.46%, with a range from
0.23% to 7.90% per reporting period.  These results demonstrate that incomplete data was more common
than inaccurate data. This is largely a result of the utility taking sensors with persistent issues off-line,
either permanently or until they were fixed.

Figure 8-4 can be evaluated with respect to the two phases of the evaluation period: the optimization
phase from January 2008 through May 2009, and the real-time monitoring phase from June 2009 through
May 2010.  During the optimization phase, data completeness and accuracy for the component fluctuated
as issues with different sensors were encountered, causes diagnosed and modifications
implemented. After the transition to real-time monitoring, data accuracy increased
due to a combination of improved maintenance, resolution of equipment malfunctions and
decommissioning of instruments with chronic performance issues. An exception to this trend occurred
during the December 2009 reporting period, during which a Hach Astro TOC sensor malfunctioned for
multiple days. Another month  with a high percentage of unusable data was April 2010.  In this case, the
server providing data to CANARY went down and thus all data streams were incomplete.

[Figure 8-4 plots, for each monthly reporting period (x-axis: Start Date of Monthly Reporting Period), the
percentage of potential data hours that were unusable for the WQM component, broken down into hours that
were incomplete and hours that were complete but inaccurate.]
Figure 8-4. Percentage of Potential Data Hours That Were Unusable for the WQM Component

Figure 8-5 shows the breakdown of causes of inaccurate data during the evaluation period. The
percentage and total hours (in parentheses) of inaccurate data for each cause are shown.

[Figure 8-5 charts inaccurate data hours by cause: Sensor Malfunction (the largest share), Improper
Maintenance (14.4%; 8,808 hours) and Monitoring Station Flow Loss (2.32%; 1,424 hours).]

Sensor malfunction was by far the largest cause of inaccurate data, mostly due to issues with the Hach
Astro TOC sensors, as discussed in Section 8.2.2. The second largest cause of inaccurate data was
improper maintenance, and a major contributor to this was sensors running out of reagents.  Station flow
loss was the least significant cause of inaccurate data because most sensors generated a fault when there
was a loss of flow and thus those data hours were considered incomplete instead of inaccurate.

8.2.2  Data Accuracy for Individual Water Quality Sensors
Analysis Methodology:  The empirical data collected from the pilot to evaluate the component accuracy
were further characterized to assess the accuracy of the data produced by individual sensors. As
monitoring station flow loss and improper maintenance do not reflect sensor performance, inaccurate data
hours with those causes are not included in the analyses in this section.

The following definitions applied to the analysis of sensor accuracy:

    •   Potential Number of Accurate Data Hours for a Sensor:  The total number of complete data hours
        for the sensor minus the inaccurate data hours caused by monitoring station flow loss and
        improper maintenance.

    •   Accurate Data Hours for a Sensor: The potential number of accurate data hours for a sensor
       minus the total hours of inaccurate data for a sensor.

Results:  Table 8-3 shows the average annual percentage  accuracy for each sensor type over the
evaluation period.  Overall, most issues that reduced sensor accuracy during 2008 and 2009  resulted in
modifications that improved accuracy in 2010. Trends in the data accuracy for a few specific sensors are
discussed below.
Table 8-3.  Average Percentage Accuracy for Sensors
Sensor ID | 2008 | 2009 | 2010
Hach Astro TOC | 79.0% | 75.1% | 58.9%
Hach Chlorine | 97.7% | 99.2% | 99.3%
Hach conductivity | 99.7% | 100.0% | 100.0%
Hach ORP | 99.2% | 100.0% | 100.0%
Hach pH | 98.7% | 94.7% | 98.4%
s::can carbo::lyser | 81.1% | 83.8% | 100.0%
Sievers 900 | 96.4% | 96.7% | 95.7%
US Filter Chlorine - Bare Electrode Flow Cell | 95.4% | - | -
US Filter Chlorine - Membrane Flow Cell | 99.9% | 84.2% | 100.0%
US Filter pH | 97.0% | 86.2% | 99.6%
YSI conductivity | 96.9% | 99.3% | 100.0%
YSI ORP | 96.6% | 95.3% | 100.0%

In 2008 and 2009, the s::can carbo::lyser exhibited the second lowest percentage of data accuracy.
Section 8.1.2 discussed many issues with this instrument that resulted in incomplete data due to sensor
faults.  However, there were occasions when sensor faults were not produced when these issues were
present, and this data was classified as inaccurate. The resolution of these issues resulted in 100%
accuracy in 2010.

Likewise, much of the inaccurate data from the US Filter instruments was also caused by instrument
malfunctions for which a sensor fault was not produced.  In addition, problems with multiple US Filter
signal converters caused data accuracy to decrease in 2009.

The YSI conductivity sensor had a lower accuracy of 96.9% in 2008 due to sensor malfunctions and the
output signal being inadvertently set to the non-temperature compensated reading at one of the stations.
These issues were addressed and accuracy improved to greater than 99% in 2009 and 2010.

In 2009, the Hach pH and YSI ORP sensors had an accuracy of 94.7% and 95.3%, respectively. The
cause was determined to be faulty probes, which were replaced, resulting in improved accuracies of 98.4%
and 100% in 2010.

While efforts were made to optimize the operation of all sensors over the evaluation period, an increasing
trend in accuracy was not observed for all sensors. The Hach Astro TOC and, to a much lesser degree, the
Sievers TOC sensors experienced recurring equipment malfunctions that were never fully resolved,
reducing the accuracy of these sensors throughout the evaluation period.

8.3    Availability

Definition: The WQM component is considered to be available for the detection of possible
contamination incidents if the four design elements (monitoring stations, data collection, event detection
and component response procedures) are functioning properly.

Analysis Methodology:  Empirical data collected from the pilot during the evaluation period were
analyzed to quantify the availability of the WQM component. All instances during which the WQM
component was unavailable for longer than 1 hour were categorized according to the design element that
was down. The total number of hours that the WQM component was unavailable during each reporting
period was tabulated. The criteria for each design element to be considered available are as follows:

    •  Monitoring Stations: The monitoring station design element was considered available if at least
       12 of the 15  stations in the distribution system were producing complete and accurate TOC or
       chlorine data. The threshold of 75% of data stream availability for component availability was
       used for all components with multiple data streams. Chlorine and TOC were chosen because
       these two parameters have been shown to be most effective for contaminant detection.

    •  Data Collection: The data collection design element is considered available if usable data is
       successfully transmitted to the SCADA system for at least 12 of the 15 monitoring stations.

    •  Event Detection: The event detection system design element is considered available if the
       CANARY output is 0 (no alert) or 1 (alert) and that output is transmitted to the SCADA server
       for at least 12 of the 15 monitoring stations.  Though there were rare occasions when utility staff
       removed one or more monitoring stations from CANARY due to a monitoring  station
       maintenance issue, most instances in which CANARY was unavailable impacted all stations.

    •  Component response procedures: The component response procedures design element is
       considered available if the WQM alert investigations procedures are in place and trained staff are
       available to execute those procedures.  The Cincinnati pilot has personnel trained to investigate
       WQM  alerts according to the component response procedures working 24/7. Thus there was no
       unavailability attributable to this design element.

The following definitions applied to the analysis of component availability. The definition for potential
data hours for the component from Section 8.1.1 also applies to Section 8.3.

    •  Available Data Hours for the Component:  The potential number of data hours for the
       component minus the total hours of unavailability. If multiple design elements were
       simultaneously unavailable, the hours associated with concurrent design element unavailability
       were only counted once. For example, if both the data collection and event detection systems
       were concurrently unavailable for 8 hours, the WQM component was considered unavailable for
       8 hours, not 16.

    •  Percentage Availability: The available data hours divided by the potential data hours.
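
Counting hours of concurrent design-element downtime only once is equivalent to taking the union of the
unavailability intervals before summing them. The sketch below illustrates this with hypothetical outage
intervals, expressed in hours from the start of the evaluation period (the interval values and the
total-hour figure are illustrative only).

# Hypothetical unavailability intervals (start_hour, end_hour) for each design element
outages = {
    "event detection":     [(100.0, 130.0), (500.0, 508.0)],
    "data collection":     [(505.0, 513.0)],   # overlaps an event detection outage
    "monitoring stations": [],
}

# Merge overlapping intervals so concurrent unavailability is only counted once
intervals = sorted(interval for element in outages.values() for interval in element)
merged = []
for start, end in intervals:
    if merged and start <= merged[-1][1]:
        merged[-1][1] = max(merged[-1][1], end)   # extend the previous interval
    else:
        merged.append([start, end])

unavailable_hours = sum(end - start for start, end in merged)
potential_hours = 21_168.0   # illustrative total hours in the evaluation period
availability = 100.0 * (potential_hours - unavailable_hours) / potential_hours
print(f"Unavailable hours: {unavailable_hours:.1f}, availability: {availability:.2f}%")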

Results:  The average availability of the WQM component was 81.7% over the evaluation period. The
event detection, data collection and WQM station design elements were not available for 3,795; 110; and
28.2 hours, respectively, including 69.2 hours where multiple design elements were unavailable.  This
amounted to 3,864 hours of component unavailability out of 21,168 potential hours, or 81.7% component
availability.

Figure 8-6 shows the unavailability of the WQM component over the evaluation period. The bars show
the number of hours each design element was unavailable. The overall component availability for each
reporting period is also shown.




[Figure 8-6 is a chart showing, for each monthly reporting period (x-axis: Start Date of Monthly Reporting
Period), the number of hours the Event Detection, Data Collection and Monitoring Stations design elements
were unavailable, along with the overall Component Availability for each period; the start of real-time
monitoring is indicated.]
Figure 8-6. WQM Component Unavailability and Unavailable Hours by Design Element

Clearly, periods of unavailability were primarily due to issues with the event detection system. This
element was unavailable if either the EDDIES or CANARY applications were not running properly. In
addition to minor restarts and maintenance, there were three significant incidents where the event
detection design element was unavailable for an extended duration:

    •  May 2008 through July 2008 reporting periods: Updated versions of both CANARY and
       EDDIES software were loaded onto the workstation to reduce the number of invalid alerts.
       However, there were bugs in the software updates that resulted in 1,243 hours of unavailability
       while these issues were addressed.

    •  March 2009 through April 2009 reporting periods: Software updates were installed during the
       March 2009 reporting period to address minor software bugs.  However, this resulted in frequent
       occurrences of CANARY freezing and failing to produce output until it was restarted.  Several
       software updates and system restarts were needed to resolve this issue resulting in 772 hours of
       event detection unavailability.

    •  February 2010 reporting period: A bug in a new version of CANARY caused the software to
       enter loops where dozens of alerts were generated in rapid succession following an initial alert.
       These nuisance alerts eventually resulted in the decision to shut down the event detection system
       during the March 2010 reporting period while a solution was developed, resulting in 300 hours of
       event detection system unavailability.

Interruption in the power supply was also responsible for some periods of event detection system
unavailability.  Although the CANARY workstation was equipped with a backup battery supply, there
were two instances where the backup supply expired following a prolonged loss of line power. Power
outages occurred during the August 2008 and September 2009 reporting periods, resulting in 23.6 and
43.2 hours of event detection system unavailability, respectively.  The Hurricane Ike windstorm caused
the 2008 outage while the cause for the 2009 outage was unknown.

Finally, some periods of event detection system unavailability were due to CANARY restarts. After each
restart, CANARY required between 1 and 3 days of data for initialization, depending on each monitoring
location's configuration, before analysis of real-time data for anomalies could resume.  CANARY
initialization accounted for 25% of event detection system unavailability.  This design flaw was fixed in
the version of CANARY installed on March 9, 2010, which queried existing data in the database to obtain
data for initializing the software.

The data collection design element was the second largest contributor to data unavailability.  For this
design element to be available, both the communication and SCADA systems must be working. Most
communications outages were brief and localized to a single monitoring station, and thus did not result in
component downtime. There were only two occurrences of prolonged data collection unavailability:

    •  September 2008:  The SCADA  system crashed during this reporting period, resulting in the
       component being unavailable for 40.9 hours. This outage was unrelated to the  Hurricane Ike
       windstorm that occurred during this reporting period.

    •  September 2009:  The communications provider experienced an  internal issue during this
       reporting period that required multiple days to resolve. This caused the component to be
       unavailable for 51.1 hours.

The WQM station design element caused very little component data unavailability as most issues were
localized to a single monitoring station. There were only three reporting periods during which the
monitoring station design element was not available over the course of the evaluation period:

    •  March 2008:  This reporting period had 8.3 hours  of unavailability when four stations were
       concurrently unavailable: there were prolonged TOC and chlorine sensor issues at one
       monitoring station and three other stations were in calibration mode, either due to ongoing
       maintenance work or being inadvertently left in calibration mode  after service was complete.

    •  August 2008: This reporting period had instances where five to six stations experienced
       concurrent power outages caused by the Hurricane Ike windstorm, leading to 7.9 hours of
       unavailability. Note that all of the monitoring station design element unavailability during this
       reporting period coincided with event detection system downtime.

    •  September 2008:  This reporting period had instances where four to five monitoring stations
       experienced concurrent power outages caused by the Hurricane Ike windstorm, leading to 12.0
       hours of unavailability. All of these hours coincided with unavailability of the  event detection
       system.

Table 8-4 shows the amount of concurrent unavailability for one to six monitoring stations, both in
terms of the percentage of the evaluation period and the actual number of hours of unavailability.
Overall, at least one monitoring station was unavailable for 26.6% of the evaluation period.

Table 8-4. Concurrent Unavailability of "X" Number of WQM Stations
Number of Concurrently Unavailable Monitoring Stations | Percent of Time (Hours)
1 | 22.3% (4,710 hours)
2 | 3.7% (777 hours)
3 | 0.5% (115 hours)
4 | 0.1% (12.3 hours)
5 | 0.1% (15.3 hours)
6 | 0.0% (0.6 hours)
Single monitoring station unavailability was fairly evenly distributed over the evaluation period, with
only 54.4% of the hours where a single monitoring station was unavailable occurring within the first
seven out of 29 total reporting periods.  Stations placed in calibration mode for extended periods while
technicians addressed issues with the Sievers TOC and U. S. Filter Chlorine sensors caused most of the
occurrences of individual monitoring station unavailability. Maintenance errors that occurred when a
technician inadvertently left a monitoring station in calibration mode after service were also a significant
cause of single monitoring station unavailability.

Most instances where two or three stations were simultaneously unavailable occurred early in the
evaluation period, with 98.2% of the hours where three stations were unavailable occurring within the
first four reporting periods and 77.6% of the hours where two stations were unavailable occurring within
the first seven reporting periods. Incomplete and inaccurate data was common during this period due to
hardware,  software, or maintenance issues, and thus it was not uncommon to have more than one station
experiencing issues at the same time.

The incidents when four, five, or six monitoring stations were unavailable were analyzed in the previous
discussion regarding unavailability of the monitoring  station design element (i.e., if more than three
monitoring stations are concurrently down, the monitoring station design element is considered unavailable). There
were no instances where more than six monitoring stations were concurrently unavailable.

8.4     Summary

The availability of the WQM component to detect contamination incidents was evaluated by analyzing
the performance of the four design elements: monitoring stations, data collection, event detection and
component response procedures.

The availability of the monitoring station design element was characterized by the completeness and
accuracy of the data generated by the monitoring stations. Data completeness measured the amount of
usable data (i.e., not missing or flagged as unusable), and accuracy measured the amount of data that was
within a predetermined tolerance range.  The monitoring station design element was considered available
when at least 12 of the 15 monitoring stations in the distribution system were producing complete and
accurate TOC or chlorine data, because these two parameters have been shown to be most effective for
contaminant detection.
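
This availability rule can be summarized with a minimal sketch; the function and variable names below are
illustrative only and do not represent the pilot's actual implementation.

    # Minimal sketch of the rule described above: for a given hour, the monitoring
    # station design element is available if at least 12 of the 15 distribution
    # system stations are producing complete and accurate TOC or chlorine data.
    def station_element_available(station_data_ok, threshold=12):
        """station_data_ok: one boolean per station, True if that station's TOC or
        chlorine data is complete and accurate for the hour (assumed input)."""
        return sum(station_data_ok) >= threshold

    print(station_element_available([True] * 13 + [False] * 2))   # True: 13 of 15 stations have usable data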

Out of a total of 1,768,763 data hours, there were 121,853 hours of incomplete data (6.9%) and 61,235
hours of complete but inaccurate data (3.5%).  Incomplete data was mostly attributable to sensors being
taken offline for extended periods due to recurring equipment malfunctions, sensor faults and data
collection outages. Sensor malfunctions, mostly due to issues with TOC sensors, were the leading cause of
inaccurate data.  The vast majority of instances where data were not complete or accurate affected only a
single monitoring station, and rarely more than 3 of the 15 stations. As such, the WQM component had
28.2 hours of unavailability attributable to the monitoring station design element.

The availability of the data collection design element depended on the successful transmission of data from the
monitoring stations to the SCADA system. Similar to the criteria for the monitoring station design
element, data collection was considered available when data was successfully transmitted by at least 12 of
the 15 monitoring stations.  System-wide communications outages and SCADA system downtime led to
110 hours of WQM component unavailability attributable to the data collection design element.

The event detection design element was considered available when both the EDDIES and CANARY
applications were running properly.  Issues with both applications and power outages to the computer that
ran these applications led to 3,795 hours of WQM component unavailability attributable to the event
detection system design element.

The component response procedures design element was considered available when the WQM alert
investigations procedures were in place and trained staff were available to execute those procedures.  The
Cincinnati pilot had personnel trained to investigate WQM alerts according to the component response
procedures working 24/7 since the beginning of the evaluation period. Thus there was no WQM
component unavailability attributable to this design element.

After accounting for the unavailability attributed to each design element, and counting only once the 69.2
hours when multiple design elements were unavailable simultaneously, it was determined that the
component was available for 81.7% of the evaluation period.
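
For illustration, the sketch below recombines the unavailability figures reported for each design element to
show how the 81.7% figure follows. The evaluation-period length used here is an assumption inferred from
the 29 monthly reporting periods (approximately 21,100 hours), and the overlapping hours are counted only
once, as described above.

    # Hedged sketch (illustrative, not taken from the evaluation workbooks).
    EVAL_PERIOD_HOURS = 21_100   # assumed length of the evaluation period

    unavailable_hours = {
        "monitoring stations": 28.2,
        "data collection": 110.0,
        "event detection": 3_795.0,
        "response procedures": 0.0,
    }
    overlap_hours = 69.2   # hours when multiple design elements were down, counted once

    total_unavailable = sum(unavailable_hours.values()) - overlap_hours   # 3,864.0 hours
    availability = 1.0 - total_unavailable / EVAL_PERIOD_HOURS
    print(f"WQM component availability: {availability:.1%}")              # ~81.7%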

In addition to overall WQM component availability, the performance of individual sensors was
characterized, as sensor-related issues were the most common cause of incomplete  and inaccurate data.
The amount of complete and accurate data from individual sensors ranged from 79.0% to 99.7% during
the first year of the evaluation period, but the majority of sensor-related issues were resolved by the end
of the second year of the evaluation period in 2009, resulting in nine of the ten sensor types having at
least 95% complete and accurate data in 2010. The Hach Astro TOC was the only exception, at 58.9% in
2010; several of these units were ultimately decommissioned due to ongoing sensor issues.

It should be noted that many of the issues associated with data unavailability were a result of using a
variety of sensors and equipment in an effort to evaluate different monitoring options. Utilities deploying
WQM should carefully evaluate sensors and other equipment before putting the units into service.  This
will result in reduced downtime and increased data accuracy.


             Section  9.0: Design Objective: Sustainability

Sustainability is a key objective in the design of a CWS and each of its components, which for the
purpose of this evaluation is  defined in terms of the cost-benefit trade-off.  Costs are estimated over the
20-year lifecycle of the CWS and include the capital cost to implement the CWS and the cost to operate
and maintain the CWS.  The benefits derived from the CWS are defined in terms of primary and dual-use
benefits. The primary benefit of a CWS is the potential reduction in consequences in the event of a
contamination incident; however, such a benefit may be rarely, if ever, realized. Thus, dual-use benefits
that provide value to routine utility operations are an important driver for sustainability. Ultimately,
sustainability can be demonstrated through utility and partner compliance with the protocols and
procedures necessary to operate and maintain the CWS. The three metrics that will be evaluated to assess
how well the Cincinnati CWS met the design objective of sustainability are: costs, benefits and
compliance. The following subsections define each metric, describe how it was evaluated and present the
results.

9.1     Costs

Definition: Costs are evaluated over the 20-year lifecycle of the Cincinnati CWS and comprise costs
incurred to design, deploy, operate and maintain the WQM component since its inception.

Analysis Methodology:  Parameters used to quantify the implementation cost of the WQM component
were extracted from the Water Security Initiative: Cincinnati Pilot Post-Implementation System Status
(USEPA, 2008a). The costs of modifications to the WQM component made after the completion of
implementation activities were tracked as they were incurred. O&M costs were tracked on a monthly
basis over the duration of the evaluation period. Renewal and replacement costs, along with the salvage
value at the end of the lifecycle, were estimated using vendor-supplied data, field experience, and expert
judgment. Note that all costs reported in this section are rounded to the nearest dollar. Section 3.6
provides additional details regarding the methodology used to estimate each of these cost  elements.

Results: The methodology described in Section 3.6 was applied to determine the value of the major cost
elements used to calculate the total cost of the WQM component, which are presented in Table 9-1. It is
important to note that the Cincinnati CWS was a research effort, and as such incurred  higher costs
than would be expected for a typical large utility installation. A similar WQM component
implementation at another utility should be less expensive as it could benefit from lessons learned and
would not incur research-related costs.

Table 9-1. Cost Elements used in the Calculation of Total Cost of the WQM Component

Parameter                            Value
Implementation Costs                 $4,229,333
Annual O&M Costs                     $178,478
Renewal and Replacement Costs        $1,555,555
Salvage Value                        ($96,686)
Dual-use Benefits                    ($4,410)

Table 9-2 below presents the implementation cost for each WQM design element, with labor costs
presented separately from the cost of equipment, supplies and purchased services.

Table 9-2. Implementation Costs for the WQM Component

Design Element            Labor         Equipment, Supplies,    Component        Total
                                        Purchased Services      Modifications    Implementation Costs
Project Management        $102,749      -                       -                $102,749
Monitoring Stations       $1,719,703    $1,628,890              $10,000          $3,358,594
Data Collection           $233,578      $133,134                -                $366,712
Event Detection System    $271,245      $50,873                 $2,752           $324,869
Response Procedures       $76,409       -                       -                $76,409
TOTAL:                    $2,403,684    $1,812,897              $12,752          $4,229,333

1 Project management costs incurred during implementation were distributed evenly among the CWS components.

Project management includes overhead activities necessary to design and implement the component. The
monitoring station design element includes the cost of the water quality sensors, the custom panels, and
the modeling necessary to select a location for each monitoring station. The data collection design
element includes the cost of a communications system to transmit data to the utility control center and the
computer hardware and software to display and archive the data collected from the monitoring stations.
Costs associated with the event detection system design element include installation and configuration of
the software to analyze the data generated, as well as the computer hardware required to run the software.
The final design element, response procedures, includes the cost of developing procedures that guide the
routine operation of the component and alert investigations, along with training on those procedures.

Overall, the monitoring stations design element had the highest implementation cost (79% of the total).
The total implementation costs for data collection and the event detection system were substantially lower,
at 9% and 8%, respectively. Implementation costs for developing the procedures for routine operation and
training on those procedures, as well as for project management, were lower still at 2% each. The
monitoring stations, with their water quality sensors, associated  supplies, and mounting panels, accounted
for 90% of the equipment costs. Costs for the other design elements were mostly labor costs associated
with system design and setup; computers and servers to house these systems were the only equipment
needed for those elements.

The component modification costs represent the labor, equipment, supplies and purchased services
associated with enhancements to the WQM component after completion of major implementation
activities at the end of December 2007. The single most costly modification was the relocation of one of
the monitoring stations in order to obtain more representative data from the targeted
area of the distribution system. The costs associated with event detection system modification were all
labor costs, as the CANARY developers and EPA staff worked to make software updates requested by the
utility, fix software bugs, and refine CANARY configurations to reduce the number of invalid alerts.

The annual O&M labor hours and costs for the WQM component, broken out by design element,  are
shown in Table 9-3.

Table 9-3. Annual O&M Costs for the WQM Component

Design Element1             Total Labor      Total Labor      Supplies and Purchased    Total O&M Cost
                            (hours/year)     Cost ($/year)    Services ($/year)         ($/year)
WQM Stations                615              $27,898          $105,480                  $133,378
Data Collection2            96               $3,743           $15,840                   $19,583
Event Detection System      170              $10,003          $0                        $10,003
Procedures                  317              $15,513          $0                        $15,513
TOTAL:                      1,198            $57,158          $121,320                  $178,478

1 Overarching project management costs were only incurred during implementation of the WQM component and are
not applicable for annual O&M costs.
2 Recurring communication costs are split between WQM and ESM.

The most labor-intensive aspect of the component involves routine maintenance, calibration, and repair of
the sensors on the monitoring stations. O&M for the data collection and event detection systems requires
a low level of monitoring and troubleshooting of the IT infrastructure, as well as periodic software
updates. In addition, the database used to store the component data requires updates when sensors are
changed or moved to another location, and the event detection system occasionally needs to be
reconfigured to accommodate shifts in the water quality baseline.

Most of the O&M labor hours reported under procedures were spent on the routine investigation of alerts.
On average, investigation of an alert took 5.1 minutes. As the number of CANARY alerts is reduced, the
number of hours required for procedures should decrease.

The renewal and replacement costs and salvage value were based on costs associated with major pieces of
equipment. The useful life of these items was estimated as either five or seven years based on field
experience, manufacturer-provided data, and input from subject matter experts. For the items with a
useful life of seven years, it was assumed that the equipment would need to be replaced twice during the
20-year lifecycle  of the CWS, and items with a useful life of five years were assumed to be replaced three
times.  These items and their total costs are presented in Table 9-4.

Table 9-4. Equipment Costs for the WQM Component

Equipment Item                                        Useful Life    Unit Capital    Quantity        Total Cost
                                                      (years)        Costs           (# of Units)
GE-Sievers 900, TOC Instrument                        7              $24,950         14              $349,300
Hach Astro 1950, TOC Instrument                       7              $22,450         3               $67,350
S::can Carbo::lyser, Spectral Instrument (TOC)        7              $8,200          2               $16,400
YSI 6920 DW Sonde, Chlorine, pH, Conductivity, ORP    7              $10,700         5               $53,500
Hach WDMPsc, Chlorine, pH, Conductivity, ORP          7              $14,950         9               $134,550
Hach WDMP, Chlorine, pH, Conductivity                 7              $12,400         3               $37,200
Siemens (US Filter) Depolox 3+, Chlorine, pH          7              $3,700          5               $18,500
SCADA License                                         5              $13,200         3               $39,600
SCADA System I: Primary Server                        5              $5,126          1               $5,126
SCADA System II: Secondary and Thin Client Server     5              $5,097          2               $10,193
SCADA Tape Drive                                      5              $2,224          1               $2,224
SCADA UPS                                             5              $4,611          1               $4,611
EDDIES Computer                                       5              $5,565          1               $5,565
                                                                                     TOTAL:          $744,118
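
The renewal and replacement figure reported in Table 9-1 can be reproduced from the unit costs in Table
9-4 under the stated replacement assumptions. The sketch below is illustrative only and is not taken from
the evaluation workbooks.

    # Hedged sketch: 7-year items are assumed to be replaced twice and 5-year items
    # three times over the 20-year lifecycle, as stated above.
    total_7yr_items = 349_300 + 67_350 + 16_400 + 53_500 + 134_550 + 37_200 + 18_500
    total_5yr_items = 39_600 + 5_126 + 10_193 + 2_224 + 4_611 + 5_565

    renewal_and_replacement = 2 * total_7yr_items + 3 * total_5yr_items
    print(renewal_and_replacement)   # 1,555,557 -- matches the $1,555,555 in Table 9-1 to within rounding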

While many dual-use benefits were realized over the course of the evaluation period, as discussed in
Section 9.2, only one could be monetized and used to offset the cost of the WQM component: a savings
of $4,410 per year in the cost of chlorine feed solution. This benefit was realized through the additional
water quality data provided by the WQM component, which allowed utility operators to more accurately
adjust the amount of chlorine added at the treatment plants while maintaining the target disinfectant
residual in the distribution system.

To calculate the total cost of the WQM component, all costs and monetized benefits were adjusted to
2007 dollars using the change in the Consumer Price Index (CPI)  between 2007 and the year that the cost
or benefit was realized.  Subsequently, the implementation costs, renewal and replacement costs, and
annual O&M costs were combined, and the monetized dual-use benefits and salvage value were
subtracted to determine  the total cost:
       WQM Total Cost: $8,202,994

In this calculation, the implementation costs and salvage value were treated as one-time balance
adjustments, the O&M costs and dual-use benefits recurred annually and the renewal and replacement
costs for major equipment items were incurred at regular intervals based on the useful life of each item.

9.2    Benefits

Definition:  The benefits of CWS deployment can be considered in two broad categories: primary and
dual-use. Primary benefits relate to the application of the CWS to detect contamination incidents and can
be quantified in terms of a reduction in consequences.  Primary benefits are evaluated at the system-level
and are thus discussed in the report titled Water Security Initiative: Evaluation of the Cincinnati
Contamination Warning System Pilot (USEPA, 2013). Dual-use benefits are derived through application
of the CWS to any purpose other than detection of intentional and unintentional drinking water
contamination incidents. Dual-use benefits realized by the WQM component are presented in this
section.

Analysis Methodology: Information collected from forums such as data review meetings, lessons
learned workshops and interviews were used to identify dual-use applications of the WQM component of
the Cincinnati CWS.

Results:  Operation of the WQM component of the CWS has resulted in benefits beyond  the detection of
intentional and unintentional contamination incidents.  These key dual-use benefits and examples
identified by the utility include:

    •   Backup monitoring capabilities:
           o  WQM data can be used to support and enhance existing distribution system monitoring.
              Additionally, monitoring station data can be used to confirm water quality trends
              observed in other monitoring programs.

    •   Information to optimize distribution system water quality and operations:
           o  By providing continuous readings, WQM provides a better understanding of water
              quality variability in the  distribution system. This variability can be related to activities
              such as changes in source water quality and treatment chemical dosing rates.  Continuous
              WQM data enables the utility to quickly identify and respond to water quality changes,
              resulting in optimized operations.

       o   GCWW has recently agreed to participate in the Partnership for Safe Water Distribution
           program.  The development of the program included input from representatives of
           utilities, state and federal regulators, consultants and subject matter experts. The program
           is divided into various phases including goal setting, data collection and self-assessment.
           Areas covered by the program include maintaining chlorine residual and pressure and
           minimizing main breaks. In support of these areas, goals including optimizing water
           quality are established by the utility and then evaluated during the self-assessment phase.
           The WQM data will be used in this evaluation. Additionally, the role of the WQM
           component as part of an overall water security system would likely be identified as a
           goal.

•   Information to augment compliance monitoring:
       o   GCWW developed a model for trihalomethane (THM) formation in the distribution
           system. The chlorine residual and pH data from the monitoring stations can be  entered
           into this model to predict THM concentrations. Results from the THM model can inform
           treatment and/or operational changes to maintain compliance  with disinfection and
           disinfection byproduct regulations.

       o   Continuous data from the monitoring stations can be used to ensure that more consistent
           and stable water quality (e.g., pH) is maintained for optimal corrosion control.

•   Improved knowledge of distribution  system hydraulics:
       o   Data from the monitoring stations can be used to follow a change in water quality leaving
           the treatment plant and traveling through the distribution system. This data allows for
           estimation of hydraulic travel times which can be used to verify the accuracy of the
           distribution system model and evaluate the impact of operational actions on water quality
           and hydraulics. This provides the utility with greater confidence in the  model, which is
           important for all model applications.

       o   GCWW uses both ground water and surface water, and the interface zone between these
           two sources in the distribution  system can change depending on operations and water
           demand. GCWW can use pH and conductivity data from the WQM stations to  monitor
           the water type in the interface zone in real-time.  This information also has potential
           regulatory implications.  Specifically, a plan was submitted to and approved by the Ohio
           Environmental Protection Agency to use this data to meet the Groundwater Rule
           requirement to  define the location of ground water in the distribution system.

•   Detection of unusual water quality not resulting from  contamination:
       o   Turbidity readings can be used to identify or confirm distribution system activities such
           as flow reversals, hydrant operation and main breaks.

       o   A slow decrease in chlorine residual in an isolated part of the  distribution  system may be
           an indication of possible microbiological activity (i.e., regrowth).

       o   While not a benefit realized at the Cincinnati pilot, utilities that use chloramines for
           secondary disinfection can use  data from the WQM stations to identify onset  of
           nitrification.

•   Optimize the application of treatment chemicals:
       o   By establishing the relationship between the chlorine residual leaving the  plant and the
           residual at various points in the distribution system, the applied chlorine dose can be
           optimized resulting in a more efficient use of chlorine. For utilities that boost the
              chlorine in the distribution system, the same approach can be used to optimize booster
              disinfectant application. This can minimize the total amount of chlorine applied,
              resulting in chemical savings.

           o  By monitoring the pH in the distribution system, the application of chemicals to adjust
              pH of the water leaving the plant can be optimized.

    •  Optimize pumping and storage:
           o  Chlorine sensors showing a low reading can be used to identify a need for an operational
              change related to tank turnover and pumping. Early identification of the potential for
              unacceptably low chlorine residual at storage facilities allows greater flexibility in
              balancing the objectives of maintaining acceptable water quality and minimizing energy
              costs.

           o  Chlorine readings from locations at or near a storage tank can be used to provide an
              indication of the effectiveness of tank mixing.

The listed dual-use benefits are illustrated in the case studies presented below.  These case studies were
developed from experiences occurring at the Cincinnati pilot during the evaluation period.

Case Study 1a: Backup monitoring capabilities.
On September 14-15, 2008 the utility experienced a windstorm caused by Hurricane Ike, resulting in a
combination of power outages and flooding in a treatment plant building. This disabled the primary
sensors used to monitor the plant effluent water quality. The utility boosted chlorine levels at the
treatment plant to ensure safe drinking water in the distribution system, then used sensors installed as part
of the CWS (relying on UPS backup power) to monitor chlorine levels and water quality in the system.

Case Study 1b: Determining compliance with the Groundwater Rule.
GCWW is evaluating the potential of complying with the Groundwater Rule by achieving four-log virus
inactivation at its ground water plant. This requires online chlorine residual data, which is used to
determine compliance.  In order to ensure that the required data is available, the utility's standard practice
is to install two sensors with one serving as the primary and the other as a backup. Rather than install
another sensor at the ground water plant, the utility is utilizing the WQM chlorine sensors installed as part
of the CWS to provide backup chlorine readings.

Case Study 2: Minimizing main breaks during cold weather.
Data collected by the WQM stations in the distribution system was used to monitor operational changes
in an attempt to minimize the number of water main breaks during the winter months. GCWW's largest
treatment plant uses a surface water source. The second plant uses groundwater as its source, which has a
more consistent temperature that is significantly warmer than the surface water in the winter.  The
groundwater also has higher pH and a different conductivity profile compared with the surface water. In
an effort to minimize main breaks caused by the colder surface water, the utility has been conducting a
study to determine if increasing the area of the distribution system served by the ground water plant
during the winter months would decrease the number of main breaks. The warmer water is moved farther
into the system by changing system valving and increasing pumping from the groundwater plant. Several
WQM stations are located at critical points in the  interface zone between the two water sources. The
utility uses pH, conductivity, and temperature data from these stations to monitor the distribution of
groundwater in the system.

Case Study 3a: Assisting with investigations required by the Groundwater Rule.
If a positive total coliform sample result is found in a groundwater distribution system, the Groundwater
Rule requires Triggered Source Water monitoring of all wells serving the area at the time that the positive
sample was collected. A request for waiver of this sampling requirement may be approved if it can be
shown that the positive sample was due to an issue in the distribution system and not the source. As part
of an investigation, GCWW can use data from the WQM stations to check water quality in the portion of
the  service area supplied by the groundwater plant and evaluate the data to show that the distribution
system was the cause of the positive sample.

Case Study 3b: Assisting with investigations of positive sample results collected for the Total
Coliform Rule.
The Total Coliform Rule requires implementation of the utility sampling plan in response to a positive
total coliform result. While currently not required, the utility performs an assessment of operational and
water quality information in the area of the positive result. This assessment includes evaluating recent
water quality data for any abnormalities.  As such, GCWW has used data collected from the WQM
stations for performing this assessment. A recent positive total coliform result was obtained at a location
downstream from one of the monitoring stations.  The data from the station was evaluated and found to
show no unusual water quality.

The proposed revisions to the Total Coliform Rule have requirements for performing Triggered
Investigations. While the specifics of what constitutes a Triggered Investigation are still being finalized,
they will most likely include an assessment of available water quality data from the area.

Case Study 4: Providing an increased knowledge of distribution system hydraulics, which can be
applied to the distribution system model.
Data collected from the WQM stations was used to verify and improve the accuracy of the existing
GCWW distribution system model. When the quality of water leaving the treatment plant changed, the utility was able
to track the slug through the distribution system, just as the injection of a chemical tracer is tracked during
a tracer study. In one specific instance, there was a temporary failure of a chemical feed system at one of
the  treatment plants that produced a slightly abnormal slug of water entering the distribution system.  The
SCADA system recorded the timing of this event, so the utility knew precisely when this slug of water
entered the system. The time that this slug reached each WQM location was apparent from the sudden
change in water quality at the location, which provided an estimated travel time from the plant to that
location in the system.  These data were then compared with the predictions made by the GCWW
distribution system model.  At many locations, the observed travel times agreed reasonably well with the
model predictions. However, the slug flowed into an area that was not predicted by the hydraulic model.
After an investigation, it was determined that a model parameter was inaccurate and required an update.
GCWW has also decided to perform similar checks in the future.

Additionally, GCWW was in the process of developing an all-pipes  distribution system model.  An
evaluation of the accuracy of this model using all data, including that from the WQM stations, was
conducted. This evaluation enabled verification and appropriate adjustment to the all-pipes distribution
system model resulting in a more accurate model. A more accurate  model will produce results that better
reflect the distribution system performance for all modeling applications.

Case Study 5: Detecting benign water quality anomalies.
Data collected from WQM stations can be used to respond to changes in water quality resulting from
verified water quality anomalies. GCWW has discovered that most WQM alerts not related to sensor
malfunction are associated with operational changes. For example, maintenance on the granular
activated carbon beds has caused alerts due to changes in TOC concentrations in the distribution system.
While these changes in water quality are neither problematic nor unanticipated, the alerts do demonstrate
the ability of the system to detect changes in water quality. Early knowledge of unanticipated changes in
water quality can provide time for intervention through operational changes at the treatment plant or in
the distribution system.

9.3    Compliance

Definition: The degree to which utility staff fulfill their responsibilities to operate and maintain the
WQM component of the CWS. The component response procedures for the Cincinnati CWS required
utility staff to investigate all alerts and document those investigations in checklists.

Analysis Methodology: The percentage of WQM alerts that were investigated by utility staff is used as a
proxy for compliance with the component response procedures. All WQM alerts and
investigations were entered into a database. The database was queried to determine the number and
percentage of alerts that were investigated.
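
A minimal sketch of this calculation is shown below; the record structure and field names are hypothetical
and do not reflect the actual alert database schema used at the pilot.

    # Illustrative only: percentage of WQM alerts investigated in each reporting
    # period, computed from a list of alert records with hypothetical fields.
    from collections import defaultdict

    def compliance_by_period(alert_records):
        totals = defaultdict(int)
        investigated = defaultdict(int)
        for record in alert_records:
            totals[record["period"]] += 1
            if record["investigated"]:
                investigated[record["period"]] += 1
        return {p: 100.0 * investigated[p] / totals[p] for p in totals}

    example = [
        {"period": "2009-06", "investigated": True},
        {"period": "2009-06", "investigated": True},
        {"period": "2009-06", "investigated": False},
    ]
    print(compliance_by_period(example))   # {'2009-06': 66.66...}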

Results:  Figure 9-1 shows the percentage of WQM alerts that were investigated during each reporting
period over the course of the evaluation. The number of alerts received each month is also shown to give
a sense of how many investigations were performed.

[Figure 9-1 plots, for each monthly reporting period (x-axis: start date of the monthly reporting period),
the percentage of WQM alerts investigated and the number of alerts received, and marks the transition to
real-time monitoring.]

Figure 9-1. Percentage of WQM Alerts Investigated and Number of Alerts Received
From the beginning of the evaluation period until the December 2008 reporting period, the WQM
component was undergoing significant modifications to address performance issues, most notably sensor
malfunctions and event detection software bugs. During this period, utility staff were not expected to
investigate all alerts. Instead, investigations were conducted on a representative sample of the alerts
generated by the 15 stations in the distribution system to provide utility staff with an opportunity to
become familiar with the investigation procedures and to learn the typical water quality patterns
associated with each monitoring location.  The resulting compliance rate during this period was
approximately 10%.

During the January 2009 reporting period, five monitoring stations that had achieved acceptable
performance were transitioned to real-time monitoring, and utility staff were instructed to investigate
alerts from these stations as soon as they were received. Beginning in January 2009, the compliance rate
was based only on the stations monitored in real-time. The compliance rate was only 32% during the
January 2009 reporting period because real-time monitoring started in the middle of the reporting period.

Five more monitoring stations transitioned to real-time monitoring during the March 2009 reporting
period, and the final five stations transitioned to real-time monitoring during the May 2009 reporting
period. As each group of stations transitioned to real-time monitoring, the basis for calculating the
compliance rate for alert investigations was increased accordingly.  Data was not available for the March
and April 2009 reporting periods, but by May 2009 the investigation rate increased to 85%. Minor issues
with the sensors, event detection system, and communications system during the subsequent periods
prevented compliance from reaching 100%: utility staff learned to recognize alerts caused by such issues
and thus did not investigate them.

By the June 2009 reporting period, most problems with the sensors and CANARY event detection system
had been resolved and all 15 stations were monitored in real-time. After May 2009, compliance averaged
97%, reaching 100% in  7 out of 12 reporting periods, indicating a high level of utility compliance with
the WQM component response procedures. During the real-time monitoring period, which spanned 12
months, 220 alerts were investigated and a total of 25.9 labor hours were spent on investigations, resulting
in an average of 0.12 labor hours (7.2 minutes) per investigation.
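
The per-investigation figure follows directly from these totals; the worked arithmetic below uses the same
rounding as the text above.

    # 25.9 labor hours spread over 220 investigations.
    hours_per_alert = 25.9 / 220                    # ~0.118, reported as 0.12 labor hours
    print(f"{round(hours_per_alert, 2) * 60:.1f}")  # 7.2 minutes per investigation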

9.4    Summary

The sustainability of the WQM component of a CWS is dependent upon the  relative costs and benefits of
the component. The total cost to implement the WQM component of the Cincinnati CWS was $4,229,333,
and the annual cost to operate and maintain the component was $178,478 (Section 9.1). The monitoring
stations were responsible for the majority of these costs. Note that costs were significantly inflated by the
fact that this was a pilot project, as described in Section 10.2.

While this component is expensive, it greatly enhances the ability of the integrated  CWS to detect
contamination incidents and reduce consequences from such incidents (USEPA, 2013).  WQM also
provides numerous dual-use benefits which can enhance day-to-day water quality management at a utility.
Numerous dual-use benefits  were observed during the evaluation period of the Cincinnati pilot, including:
backup monitoring capabilities, information to optimize distribution system water quality and operations,
information to augment  compliance monitoring, improved knowledge of distribution system hydraulics,
detection of water quality anomalies not related to contamination, optimization of treatment chemical
usage and optimization of pumping and storage.

The sustainability of the WQM component can be verified through continued O&M of the component.
This includes compliance with component response procedures that guide the routine  investigation  of
alerts. By the end of the pilot evaluation period, the rate of alert investigations reached an average of
97%. Furthermore, the component is still in operation  at the time of publication of this report. This
indicates that the Cincinnati  CWS is sustainable and will likely continue to operate  into the foreseeable
future.  In this case, the  benefits derived from the component would appear to justify the costs.


               Section 10.0:   Summary and Conclusions

The evaluation of the WQM component of the Cincinnati CWS involved analysis of empirical data,
observations from drills and exercises, results from the simulation study, qualitative observations gleaned
from participants during forums, and a benefit-cost analysis.
performance metrics was defined for each of six design objectives, and results were presented showing
how well the WQM component performed relative to each metric. Highlights, limitations and
considerations for interpretation of this analysis are presented in this section.

10.1   Highlights of Analysis

Evaluation of the WQM component produced a comprehensive assessment of a robust WQM system
deployed as part of the first CWS pilot implemented under WSI. Notably, it was shown that a variety of
water quality incidents can be detected by monitoring standard water quality parameters in the
distribution system. During real-time operation, valid alerts were produced  for a variety of incidents
including main breaks, treatment process upsets, and unusual system operations. Furthermore, bench-
scale testing showed that standard water quality parameters change in the presence of a variety of
contaminants at concentrations well below those that would cause harm to utility infrastructure or the
public. Results from the simulation study confirmed the broad detection capabilities of the WQM
component, with scenarios involving 16 of the 17 test contaminants  being detected. The simulation study
results also demonstrated the value of monitoring multiple parameters, as three WQM stations that were
missing one or two parameters had lower detection percentages compared with the stations equipped with
the full suite of sensors. Finally, the variability of baseline water quality was observed to impact
detection capabilities.  Monitoring stations with more variable baselines generally had lower detection
percentages compared to those with stable water quality.  However,  detection percentages were still above
64% for all WQM stations with a full set of parameters, demonstrating that monitoring can be performed
effectively even at locations with highly variable water quality.

The WQM network deployed in Cincinnati also achieved a high degree of spatial coverage:  72% of the
area and 84% of the population. While spatial coverage was high, the network was limited with respect to
scenario coverage under the conditions and assumptions of the simulation study. Of the 2,015 simulation
study scenarios, a practically detectable contaminant concentration with the  potential to generate an alert
reached a WQM location in only 737 (36.6%) scenarios. However, most of these potentially detectable
scenarios, 643 (87.3%), were detected. Scenarios that were not potentially detectable because no WQM
stations were impacted tended to be localized, with limited contaminant spread.

Not all alerts generated during real-time operation were valid; the majority of the alerts (95%) were not
due to unusual water quality.  The most common causes of invalid alerts were sensor issues (40%) and
monitoring location water quality variability (40%). The number of invalid  alerts decreased significantly
over the evaluation period.  This was largely due to updated CANARY configuration settings and
improved water quality data as sensor hardware issues were resolved. By the end of the evaluation
period, the utility found the frequency of alerts received to be sustainable, as the average alert
investigation took under 15 minutes.

Overall, the WQM component had 81.7% availability. The CANARY event detection system was the
biggest contributor to unavailability (96%), largely caused by maintenance and troubleshooting.  Taking
equipment offline for repair caused the majority of data incompleteness during the evaluation period,
accounting for 62% of the incomplete data hours.

10.2   Limitations of the Analysis

The fact that the CWS deployed in Cincinnati was the first of its type had several consequences that
impacted the evaluation. A few of the more important considerations included:

    •   This was a pilot project and thus a variety of solutions were implemented. Several sensor types
       were installed, some of which were unreliable  and required an unsustainable amount of
       maintenance. This also required service contracts with multiple vendors.  Significant trial and
        error was necessary to produce a viable, functioning system. A utility implementing its own
        WQM system would likely not need to take this approach.

    •   Improved products are now available. In many cases, the Cincinnati pilot was the first real-time
        deployment of the hardware and software products used. Thus, many issues were encountered
       and resolved, and these improvements are included in the currently available products.  In
       addition, the increased awareness of this application has motivated vendors of hardware and
       software products to make their solutions more effective and reasonable to implement.

    •   The planning and implementation approach, in which EPA took the lead role, was inefficient. If
       all relevant utility experts had been involved in the planning, several pitfalls could have been
       avoided and existing systems could likely have been leveraged more fully.

While an extensive amount of data from a variety of sources was available for evaluation of the
Cincinnati pilot, there were some limitations of the analysis. Data completeness for the evaluation was
relatively high, but there were some gaps in data collection.  Specifically, some water quality data was
lost during periods in which the data communication system was down. Also, there were some  instances
in which alert investigation checklists were incomplete or missing.

As explained in Section 6.2.2, no contamination incidents occurred during the evaluation period of the
Cincinnati pilot.  Thus, it was necessary to use results from computer simulations of contamination
incidents to evaluate certain performance metrics.  While these simulations were very detailed and the
supporting models were parameterized using data from real-world observations, the model is still only an
approximation of reality. Thus the results of the simulation study should only be considered in the
context of the design and assumptions intrinsic to the study.

It is also important to consider that the WQM component of the Cincinnati CWS was being updated and
modified during most of the  evaluation period. As discussed in Section 2.5, major system changes were
made through February 2010, a span covering 86% of the evaluation period. The evolving nature of the
system during the evaluation period skewed the following metrics:

    •   Operational reliability metrics were inconsistent because elements of the system were taken down
       for long periods of time for maintenance and replacement, and data streams were added and
       removed from service. It is expected that operational reliability will be higher and more
       consistent in the post-evaluation period after modifications were  completed and the system was
       operating consistently.

    •   Metrics relating to alert occurrence were inconsistent due to modifications made to CANARY
       over the course of the evaluation period. As discussed in Section 6.1, the data from the evaluation
       period was reprocessed with the final CANARY settings to estimate alert occurrence under the
       optimal CANARY configuration.  However this analysis was still impacted by periods of
       inaccurate and missing data.

    •   Real-time monitoring accounted for only 40% of the evaluation period. This was the only time
       during which alerts were consistently investigated and documented using alert checklists. No
       investigation checklists were completed in real-time during the first year of the evaluation period.
       In addition to causing low alert investigation numbers, this impacted alert classification.
       Researchers had to retrospectively analyze water quality data at the time of the alert to identify
       the possible cause of the alert. These retrospective alert investigations were performed without
       the benefit of ancillary data from system operations or maintenance that would have been
       available during an investigation conducted in real-time.

While the system changes did illustrate the impact of improvements on system performance for some
metrics, they did not provide an extended period of stable operations that could have provided the
evaluators with a basis for estimating long-term performance.

10.3   Potential Applications of the WQM Component

The WQM component of the Cincinnati CWS was tailored to the capabilities and structure of GCWW;
therefore, the evaluation described in this report is specific to Cincinnati and interpretation should be
treated as such. However, the Cincinnati CWS revealed numerous applications and lessons that can be
applicable to other CWSs.

During the pilot, a variety of equipment was evaluated. While some of the water quality sensors
performed below acceptable levels, a set of sensors were identified that could effectively monitor each of
the parameters considered during the pilot. Furthermore, GCWW has identified a suite of sensors and
technologies that they plan to incorporate into a standard monitoring station design. This standard design
includes all of the parameters piloted with the exception of TOC.  Experience during the pilot showed that
the online TOC instrumentation tested was expensive and difficult to maintain.  As a result, TOC units
will only be deployed at more critical locations.  GCWW decided to replace existing TOC
instrumentation at some sites with spectral instruments which they find easier and less expensive to
maintain.  In addition, GCWW has replaced some of the existing  chlorine monitors with reagent-less
models to further reduce costs and upkeep time.

With the  deployment of 17 new WQM stations, each with six or more sensors, a greater demand has been
placed on instrument technicians to keep the system running and producing quality data. As noted above,
some sensor types exerted a greater demand on staff than others.  But after the down-selection process in
which poor performing equipment was decommissioned and after technicians received adequate training,
GCWW was able to keep the sensors performing at acceptable levels with a sustainable level of effort.

At the start of the pilot, there was concern that the WQM component would generate too many alerts and
that eventually these alerts would be largely ignored. In the early stages of the pilot, this was indeed the
case. However, most of these invalid alerts were caused by instrument malfunction which produced noisy
and inaccurate data, and bugs in the developmental CANARY event detection system software. After
these problems were remedied, alert rates fell to acceptable levels and GCWW reports getting eight to ten
alerts in a typical month. Furthermore, through training and practice, GCWW was able to reduce the
average time for completing the investigation of an alert to less than 15 minutes. In addition, staff has
reported that the investigations are interesting and useful for maintaining confidence in water quality.  At
the time of publication, GCWW staff continues to investigate and document WQM alerts in real-time.
This demonstrates that real-time water quality data, monitored by an automated event detection system,
can produce an acceptable rate of alerts and provide valuable information for everyday operation.

While the WQM component is expensive to operate and maintain at over $178,000 per year, GCWW has
realized many day-to-day benefits of the component. Real-time knowledge of distribution system water
quality has provided a deeper understanding of the impact of system operations on distribution system
water quality, which has led to increased confidence in the quality of the water provided to the customer.

In addition, water quality anomalies not caused by contamination have been detected, including treatment
process disruptions and main breaks.

Most importantly, the component has been incorporated into GCWW's broader water quality
management strategy.  It appears that the component will be maintained and potentially expanded in the
future.

The overarching goal of the WQM component is to improve real-time awareness of water quality
throughout the distribution system in order to optimize system operation and allow for detection of
unusual water quality. The overall success of WQM depends not only on reliable data, but also on the
commitment of utility staff to maintain the system and use the data generated.

The evaluation presented here may aid other utilities seeking to improve existing capabilities or add new
functionality as part of an effective CWS. Many utilities have existing capabilities that can be
leveraged to build an effective WQM component at a much smaller cost than was incurred for the
Cincinnati CWS. For example, if a utility has existing chlorine sensors used for compliance monitoring, a
valuable step towards integrating those sensors into a WQM component is to develop procedures for
regularly reviewing the data produced by the sensors and investigating any unusual water quality
conditions.

                           Section 11.0:  References

Allgeier, S.C., et al. 2009. "Modeling the Performance of a Drinking Water Contamination Warning
       System", Proceedings of AWWA Water Quality Technology Conference 2009, AWWA, Seattle,
       WA.

Allgeier, S.C., et al. 2011. "Detection of Distribution System Contamination Incidents Using Online
       Water Quality Monitoring", Proceedings of AWWA Water Quality Technology Conference 2011,
       AWWA, Phoenix, AZ.

Hall, J., et al. 2007. "Online Water Quality Parameters as Indicators of Distribution System
       Contamination." Journal AWWA. Vol. 99, Issue 1: 66-77.

Hart, D.B., McKenna, S.A., Klise, K.A., Cruz, V.A. & Wilson, M.P. 2007. "CANARY: A Water Quality
       Event Detection Algorithm Development and Testing Tool", Proceedings of ASCE World
       Environmental and Water Resources Congress 2007, ASCE, Tampa, FL.

U.S. Environmental Protection Agency. 2005. WaterSentinel System Architecture. EPA 817-D-05-003.

U.S. Environmental Protection Agency. 2008a. Water Security Initiative: Cincinnati Pilot
       Post-Implementation System Status. EPA 817-R-08-004.

U.S. Environmental Protection Agency. 2008b. Threat Ensemble Vulnerability Assessment Research
       Program. http://www.epa.gov/nhsrc/water/teva.html.

U.S. Environmental Protection Agency. 2008c. Water Security Initiative: Interim Guidance on
       Developing an Operational Strategy for Contamination Warning Systems. EPA 817-R-08-002.

U.S. Environmental Protection Agency. 2013. Water Security Initiative: Evaluation of the Cincinnati
       Contamination Warning System Pilot. EPA 817-R-13-003.

U.S. Environmental Protection Agency. 2014. Water Security Initiative: Comprehensive Evaluation of
       the Cincinnati Contamination Warning System Pilot. EPA 817-R-14-001.

                        Section 12.0:  Abbreviations

CWS          Contamination Warning System
DMZ          Demilitarized Zone
EDDIES       Event Detection Deployment, Integration, and Evaluation System
EPA          Environmental Protection Agency
ESM          Enhanced Security Monitoring
GCWW       Greater Cincinnati Water Works
HMI          Human Machine Interface
IT            Information Technology
O&M         Operations and Maintenance
ORP          Oxidation Reduction Potential
PLC          Programmable Logic Controller
SCADA       Supervisory Control and Data Acquisition
TEVA-SPOT   Threat Ensemble Vulnerability Assessment and Sensor Placement Optimization Tool
THM          Trihalomethane
TOC          Total Organic Carbon
UPS          Uninterruptible Power Supply
UV           Ultraviolet
WQM         Water Quality Monitoring
WSI          Water Security Initiative

                             Section 13.0:  Glossary
Accurate data. A measured data value within an acceptable range of the true value obtained through an
independent method.

Alert.  Information from a monitoring and surveillance component indicating an anomaly in the system,
which warrants further investigation to determine if the alert is valid.

Alert investigation. A systematic process, documented in a component response procedure, for
determining whether an alert is valid, and identifying the cause of the alert. If a benign alert cause cannot
be identified, contamination is possible.

Anomaly.  Deviations from an established baseline. For example, a water quality anomaly is a deviation
from typical water quality patterns observed over an extended period.

Baseline. Normal conditions that result from typical system operation. The baseline includes predictable
fluctuations in measured  parameters that result from known changes to the system. For example, a water
quality baseline includes  the effects of draining and filling tanks, pump operation and seasonal changes in
water demand, all of which may alter water quality in a somewhat predictable fashion.

Benefit.  An outcome associated with the implementation and operation of a contamination warning
system that promotes the welfare of the utility and the community it serves.  Benefits are classified as
either primary or dual-use.

Benefit-cost analysis. An evaluation of the benefits and costs of a project or program, such as a
contamination warning system, to assess whether the investment is justifiable considering both financial
and qualitative factors.

Biotoxins.  Toxic chemicals derived from biological materials that pose an acute risk to public health at
relatively low concentrations.

Box-and-whisker plot. A graphical representation of nonparametric statistics for a dataset. The bottom
and top whiskers represent the 10th and 90th percentiles of the ranked data, respectively. The bottom and
top of the box represent the 25th and 75th percentiles of the ranked data, respectively.  The line inside the
box represents the 50th percentile, or median of the ranked data. Note that some data sets have the same
values for the percentiles presented in box-and-whisker plots, in which case not all lines will be visible.
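
For readers who wish to reproduce these plotting conventions, the short Python sketch below computes the
five percentiles used in this report's box-and-whisker plots; the data values shown are hypothetical and
are not taken from the pilot evaluation.

```python
# Illustrative sketch only: computes the five percentiles used in this report's
# box-and-whisker plots for a hypothetical set of sensor readings.
import numpy as np

readings = np.array([0.82, 0.91, 0.88, 0.95, 1.02, 0.79, 0.85, 0.90,
                     0.97, 1.05, 0.87, 0.93, 0.89, 0.84, 0.99])  # hypothetical values

p10, p25, p50, p75, p90 = np.percentile(readings, [10, 25, 50, 75, 90])

print(f"Bottom whisker (10th percentile): {p10:.2f}")
print(f"Bottom of box  (25th percentile): {p25:.2f}")
print(f"Median         (50th percentile): {p50:.2f}")
print(f"Top of box     (75th percentile): {p75:.2f}")
print(f"Top whisker    (90th percentile): {p90:.2f}")
```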

Component response procedures.  Documentation of roles and responsibilities, process flows, and
procedural activities for a specified component of the contamination warning system, including the
investigation of alerts from the component. Standard operating procedures for each monitoring and
surveillance component are integrated into an operational strategy for the contamination warning system.

Confirmed. In the context of the threat level determination process, contamination is confirmed when
the analysis of all available information from the contamination warning system has provided definitive,
or nearly definitive, evidence of the presence of a specific contaminant or class of contaminant in the
distribution system. While positive results from laboratory analysis of a sample collected from the
distribution system can be a basis for confirming contamination, a preponderance of evidence, without the
benefit of laboratory results, can lead to this same determination.

Consequence management.  Actions taken to plan for and respond to possible contamination incidents.
This includes the threat level determination process, which uses information from all monitoring and
surveillance components as well as sampling and analysis to determine if contamination is credible or
confirmed.  Response actions, including operational changes, public notification, and public health
response, are implemented to minimize public health and economic impacts, and ultimately return the
utility to normal operations.

Consequence management plan.  Documentation that provides a decision-making framework to guide
investigative and response activities implemented in response to a possible contamination incident.

Contaminant detection potential. The capability of the contamination warning system to detect specific
contaminants or contaminant classes.  In order for the WQM component to have the potential to detect a
specific contaminant, at least one of the measured water quality parameters must produce a statistically
significant change from the baseline in the presence of the contaminant at a concentration capable of
producing significant consequences.

Contamination incident. The introduction of a contaminant in the distribution system with the potential
to cause harm to the utility or the community served by the utility. A contamination incident may be
intentional or accidental.

Contamination scenario. Within the context of the simulation study, parameters that define a specific
contamination incident, including: injection location, injection  rate, injection duration, time the injection
is initiated and the contaminant that is injected.

Contamination warning system. An integrated system of monitoring and surveillance components
designed to detect contamination in a drinking water distribution system.  The system relies on integration
of information from these monitoring and surveillance activities along with timely investigative and
response actions during consequence management to minimize the consequences of a contamination
incident.

Costs, implementation.  Installed cost of equipment, IT components, and subsystems necessary to
deploy an operational system. Implementation costs include labor and other expenditures (equipment,
supplies and purchased services).

Cost, life cycle. The total cost of a system, component, or equipment over its useful or practical life.
Life cycle cost includes the cost of implementation, operation & maintenance and renewal & replacement.

Costs, operation & maintenance. Expenses incurred to sustain operation of a system at an acceptable
level of performance. Operational and maintenance costs include labor and other expenditures (supplies
and purchased services).

Costs, renewal & replacement.  Costs associated with refurbishing or replacing major pieces of
equipment (e.g., water quality sensors, laboratory instruments,  IT hardware,  etc.) that reach the end of
their useful life before the end of the contamination warning system lifecycle.

Coverage, contaminant. Specific contaminants that can potentially be detected by each monitoring and
surveillance component of a contamination warning system.

Coverage, spatial. The areas within the distribution system that are monitored by or protected by each
monitoring  and surveillance component of a contamination warning system.

Credible. In the context of the threat level determination process, a water contamination threat is
characterized as credible if information collected during the investigation of possible contamination
corroborates information from the validated contamination warning system alert.

Critical concentration.  The concentration of a specific contaminant capable of producing significant
consequences, either with adverse impacts to the  exposed population or utility infrastructure.

Data completeness. The amount of data that can be used to support system or component operations,
expressed as a percentage of all data generated by the system or component. Data may be lost due to QC
failures, data transmission errors, and faulty equipment, among other causes.
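
As a minimal illustration of this definition, the calculation below expresses usable data as a percentage
of all data generated; the counts shown are hypothetical.

```python
# Minimal sketch of a data completeness calculation (hypothetical counts).
def data_completeness(usable_values: int, total_values: int) -> float:
    """Return usable data as a percentage of all data generated."""
    if total_values <= 0:
        raise ValueError("total_values must be positive")
    return 100.0 * usable_values / total_values

# Example: 9,120 of 9,600 values from a data stream were complete and accurate.
print(f"{data_completeness(9120, 9600):.1f}% complete")  # prints 95.0% complete
```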

Data stream. The output signal for a single instrument (e.g., a Hach chlorine sensor at a specific
monitoring station).

Distribution system model.  A mathematical representation of a drinking water distribution system,
including pipes, junctions, valves, pumps, tanks,  reservoirs, etc. The model characterizes flow and
pressure of water through the system. Distribution system models may include a water quality model that
can predict the fate and transport of a material throughout the distribution system.

Dual-use benefit. A beneficial application, within the normal operations of the utility, of equipment,
procedures, or capabilities deployed as part of the contamination warning system.

Ensemble. The comprehensive set of contamination scenarios evaluated during the simulation study.

Event detection system. A system designed  specifically to detect anomalies from the various monitoring
and surveillance components of a contamination warning system. An event detection system may take a
variety of forms, ranging from a complex set of computer algorithms to a simple set of heuristics that are
manually implemented.

Evaluation period. The period from January 16, 2008, to June 15, 2010, during which data was actively collected
for the evaluation of the Cincinnati  contamination warning system pilot.

Flow rate. The volume of water moving past a fixed location per unit time.

Hydraulic connectivity. Locations or areas within a distribution system that are on a common flow path.

Impacted WQM location. A monitoring location that receives a practically detectable concentration of
contaminant in a simulation study scenario.

Incomplete data. Data that is missing or unusable. This occurs when a sensor's data is not delivered to
the SCADA system or when the data is flagged to indicate suspect quality.

Injection location. The specific node in the distribution system model where the bulk contaminant is
injected into the distribution system for a given scenario within the simulation study.

Injection rate. The mass flow rate at which the bulk volume of a contaminant is injected into the
distribution system at a specific location for a given scenario within the simulation study.

Invalid alert. An alert from a monitoring and surveillance component that is not due to an anomaly and
is not associated with an incident or condition of interest to the utility.

Metric. A standard or statistic for measuring or quantifying an attribute of the contamination warning
system or its components.

Model.  A mathematical representation of a physical system.

Model parameters. Fixed values in a model that define important aspects of the physical system.

Module. A sub-component of a model that typically represents a specific function of the real-world
system being modeled.

Monetizable. A cost or benefit whose monetary value can be reliably estimated from the available
information.

Monitoring & surveillance component. Element of a contamination warning system used to detect
unusual water quality conditions, potentially including contamination incidents. The four monitoring &
surveillance components of a contamination warning system are: 1) online water quality monitoring,
2) enhanced  security monitoring, 3) customer complaint surveillance and 4) public health surveillance.

Net present value. The difference between the present value  of benefits and costs, normalized to a
common year.
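
The sketch below illustrates one common way to compute a net present value from annual benefit and cost
streams using a constant discount rate; all dollar amounts and the discount rate are hypothetical and do
not reflect the pilot's benefit-cost analysis.

```python
# Hypothetical net present value sketch: discount annual benefits and costs to a
# common year (year 0) and take the difference.
def net_present_value(benefits, costs, discount_rate):
    return sum((b - c) / (1.0 + discount_rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

benefits = [0, 300_000, 300_000]   # dollars per year (hypothetical)
costs = [400_000, 50_000, 50_000]  # implementation in year 0, then O&M (hypothetical)
print(f"NPV: ${net_present_value(benefits, costs, 0.03):,.0f}")  # roughly $78,000 in this example
```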

Node.  A mathematical representation of a junction between two or more distribution system pipes, or a
terminal location in a pipe in a water distribution system model.  Water may be withdrawn from the
system at nodes, representing a portion  of the system demand.

Nuisance chemicals. Chemical contaminants with relatively low toxicity that generally do not pose an
immediate threat to public health. However, contamination with these chemicals can make the
drinking water supply unusable.

Observed water quality anomaly.  Period of unusual water quality in utility data, where data does not
match expected values or variability.

Optimization phase. Period in the  contamination warning  system deployment timeline between the
completion of system installation and real-time monitoring.  During this phase the system is operational
but alerts are not being acted upon in real-time.  Instead, this phase provides an opportunity to learn the
system and optimize performance (e.g., fix or replace malfunctioning equipment, eliminate software bugs,
test procedures and reduce occurrence of invalid alerts).

Parameter sensitivity value. For a specific water quality parameter, the smallest change that can be
reliably discriminated from normal instrument noise. Practically, this represents the true change in the
parameter value that could potentially generate an alert.

Pathogens.  Microorganisms that cause infections and subsequent illness and mortality in the exposed
population.

Possible. In the context of the threat level determination process, a water contamination threat is
characterized as possible if the cause of a validated contamination warning system alert is unknown.

Potential data hours. For a monitoring and surveillance component, the total number of hours in the
evaluation period multiplied by that component's total number of data streams.

Potential data hours for a sensor. The total number of hours for which data is expected to be collected
from an individual sensor during the evaluation period. This excludes times when external factors limit data
collection, such as during station calibration or a system-wide communication outage.

Potential data hours for the component.  The total number of hours in the evaluation period multiplied
by the total number of data streams.
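
A brief numerical illustration of these two definitions follows; the hour and data-stream counts are
hypothetical placeholders rather than values from the Cincinnati pilot.

```python
# Hypothetical illustration of potential data hours.
hours_in_evaluation_period = 21_000   # assumed length of the evaluation period, in hours
number_of_data_streams = 85           # assumed number of WQM data streams

# Component level: hours in the evaluation period times the number of data streams.
component_potential_hours = hours_in_evaluation_period * number_of_data_streams
print(f"Component potential data hours: {component_potential_hours:,}")  # 1,785,000

# Sensor level: hours lost to external factors (e.g., station calibration or a
# system-wide communication outage) are excluded for that sensor.
external_limit_hours = 300            # assumed hours excluded for one sensor
sensor_potential_hours = hours_in_evaluation_period - external_limit_hours
print(f"Sensor potential data hours: {sensor_potential_hours:,}")  # 20,700
```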

Practically detectable contaminant concentration.  The minimum concentration of a contaminant
which produces a change in at least one water quality parameter greater than or equal to the parameter's
sensitivity value.
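
The detectability test implied by this definition can be sketched as follows; the parameter names,
sensitivity values, and simulated changes are hypothetical.

```python
# Hypothetical sketch: a concentration is practically detectable if the change it
# produces in at least one water quality parameter meets or exceeds that
# parameter's sensitivity value.
parameter_sensitivity = {"chlorine": 0.05, "TOC": 0.15, "conductivity": 5.0}  # assumed values

def is_practically_detectable(parameter_changes):
    """Return True if any parameter change is at least as large as its sensitivity value."""
    return any(abs(change) >= parameter_sensitivity[name]
               for name, change in parameter_changes.items())

# Example: a simulated contaminant depresses chlorine by 0.08 mg/L and raises TOC by 0.02 mg/L.
print(is_practically_detectable({"chlorine": -0.08, "TOC": 0.02}))  # True (chlorine change exceeds 0.05)
```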

Practically detectable scenario. A simulation study scenario in which at least one monitoring location
receives a practically detectable concentration of contaminant.

Primary benefits. Benefits that are derived from the reduction in consequences associated with a
contamination incident due to deployment of a contamination warning system.

Priority contaminant. A contaminant that has been identified by the EPA for monitoring under the
Water Security Initiative.  Priority contaminants may be initially detected through one of the monitoring
and surveillance components and confirmed through laboratory analysis of samples collected during the
investigation of a possible contamination incident.

Process flow. The central element of a component response procedure that guides routine monitoring and
surveillance activities in a contamination warning system. The process flow is represented in a flow
diagram that shows the step-by-step process for investigation of alerts - identifying the potential cause of
the alert and determining whether contamination is possible.

Public health response.  Actions taken by public health agencies and their partners to mitigate the
adverse effects of a public health incident, regardless of the cause of the incident. Potential response
actions include administering prophylaxis, mobilizing additional healthcare resources, providing
treatment guidelines to healthcare providers, and providing information to the public.

Radiochemicals.  Chemicals that emit alpha particles, beta particles and/or gamma radiation at a rate that could pose a
threat to public health.

Real-time monitoring phase. Period in the contamination warning system deployment timeline
following the optimization phase. During this phase, the system is fully operational and producing
actionable alerts. Utility  staff and partners  now respond to alerts in real-time and in full accordance with
component response procedures. Optimization of the system still occurs as part of a continuous
improvement process; however, the system is no longer considered to be developmental.

Routine operation.  The day-to-day monitoring and surveillance activities of the contamination warning
system that are guided by the component response procedures. To the extent possible, routine operation
of the contamination warning system is integrated into the routine operations of the drinking water utility.

Salvage value. Estimated value of assets at the end of the useful life of the system.

Simulation study. A study designed to systematically characterize the detection capabilities of the
Cincinnati drinking water contamination warning system.  In this study, a computer model of the
contamination warning system was challenged with an ensemble of 2,023 simulated contamination
scenarios.  The output from these simulations provides estimates of the consequences resulting from each
contamination scenario including fatalities, illnesses and extent of distribution system contamination.
Consequences are estimated under two cases, with and without the contamination warning system in
operation. The difference provides an estimate of the reduction in consequences.

Site characterization. The process of collecting information from a site of interest to support the
investigation of a possible contamination incident during consequence management.

Target in-pipe concentration. A simulation study scenario variable that defines the target concentration
of a contaminant in the distribution system at the injection location.

Threat level. The results of the threat level determination process, indicating whether contamination is
possible, credible or confirmed.

Timeliness of detection.  A portion of the incident timeline that begins with the start of contamination
injection and ends with the generation and recognition of an alert.  The time for contaminant detection
may be subdivided for specific components to capture important elements of this portion of the incident
timeline (e.g., sample processing time, data transmission time, event detection time, etc.).

Timestep.  In the Cincinnati contamination warning system model, a set interval of time (i.e., every 15
minutes) at which the computational platform performs calculations, reads inputs or generates outputs.

Toxic chemicals.  Highly toxic chemicals that pose an acute risk to public health at relatively low
concentrations.

Trigger parameter.  Event detection system output during alerting timesteps that indicates the water
quality parameters whose changes triggered the alert.

Usable data.  Data that is complete and accurate, and can therefore be used for event detection and most
other applications.

Valid alert. An alert due to water contamination, a system event (i.e., work in the distribution system for
customer complaint surveillance or WQM) or a public health incident (for public health surveillance).

WQM location.  A single monitoring location where sensors measuring multiple water quality
parameters are installed.

Water Utility Emergency Response Manager. A role within the Cincinnati contamination warning
system filled by a mid-level manager from the drinking water utility.  Responsibilities of this position
include receiving notification of validated alerts, verifying that a valid alert indicates possible
contamination, coordinating the threat level  determination process, integrating information across the
different monitoring and  surveillance components and activating the consequence management plan. In
the  early stages of responding to possible contamination, the Water Utility Emergency Response Manager
may serve as Incident Commander.