EPA
     United States
     Environmental Protection
     Agency
  Water Security Initiative: Evaluation of the Public
  Health Surveillance Component of the Cincinnati
        Contamination Warning System Pilot

  [Cover graphic: CWS components - Monitoring and Surveillance (Water Quality Monitoring, Enhanced
  Security Monitoring, Customer Complaint Surveillance, Public Health Surveillance), Possible
  Contamination, Consequence Management, Sampling and Analysis, and Response.]
Office of Water (MC-140)
EPA-817-R-14-001E
April 2014

-------
                                      Disclaimer
The Water Security Division of the Office of Ground Water and Drinking Water has reviewed and
approved this document for publication. This document does not impose legally binding requirements on
any party. The findings in this report are intended solely to recommend or suggest and do not imply any
requirements. Neitherthe U.S. Government nor any of its employees, contractors or their employees
make any warranty, expressed or implied or assumes any legal liability or responsibility for any third
party' s use of or the results of such use of any information, apparatus, product or process discussed in this
report,  or represents that its use by such party would not infringe on privately owned rights.  Mention of
trade names or commercial products does not constitute endorsement or recommendation for use.

Questions concerning this document should be addressed to:

Chrissy Dangel
U.S. EPA Water Security Division
26 West Martin Luther King Drive
Mail Code 140
Cincinnati, OH 45268
(513)569-7821
or

Steve Allgeier
U.S. EPA Water Security Division
26 West Martin Luther King Drive
Mail Code 140
Cincinnati, OH 45268
(513)569-7131

-------
                               Acknowledgments

The Water Security Division of the Office of Ground Water and Drinking Water would like to recognize
the following individuals and organizations for their assistance, contributions, and review during the
development of this document.

    •   Yeongho Lee, Greater Cincinnati Water Works
    •   Jeff Swertfeger, Greater Cincinnati Water Works
    •   Jennifer Hsieh, New York City Department of Health and Mental Hygiene
    •   June Weintraub, San Francisco Department of Public Health
    •   Cynthia Yund, U.S. Environmental Protection Agency

-------
                                Executive Summary

The goal of the Water Security Initiative (WSI) is to design and demonstrate an effective multi-
component warning system for timely detection and response to drinking water contamination threats and
incidents.  A contamination warning system (CWS) integrates information from multiple monitoring and
surveillance components to alert the water utility to possible contamination, and uses a consequence
management plan (CMP) to guide response actions.

System design objectives for an effective CWS are: spatial coverage, contaminant coverage, alert
occurrence, timeliness of detection and response, operational reliability, and sustainability.  Metrics for the
public health surveillance (PHS) component were defined relative to the system metrics common to all
components in the CWS, but the component metric definitions provide an additional level of detail
relevant to the PHS component. Evaluation techniques used to quantitatively or qualitatively evaluate
each of the metrics include analysis of empirical data from routine operations, drills and exercises,
modeling and simulations, forums, and an analysis of lifecycle costs.  This report describes the evaluation
of data collected from the PHS component from January 2008 through June 2010.

The major outputs from the evaluation of the Cincinnati pilot include:
    1.  Cincinnati Pilot System Status, which describes the post-implementation status of the Cincinnati
       pilot following the installation of all monitoring and surveillance components.
    2.  Component Evaluations, which include analysis of performance metrics for each component of
       the Cincinnati pilot.

    3.  System Evaluation, which integrates the results of the component evaluations, the simulation
       study, and the benefit-cost analysis.

The reports that present the results from the evaluation of the system and each of its six components are
available in an Adobe portfolio, Water Security Initiative: Comprehensive Evaluation of the Cincinnati
Contamination Warning System Pilot (USEPA 2014).

Public Health Surveillance  Component Design

The PHS component consists of the following design elements: public health surveillance tools,
communication and coordination, and component response procedures.  As part of the initial pilot of the
WSI, the PHS component was developed for the Greater Cincinnati Water Works (GCWW) based on
many of the city's existing public health monitoring systems. Four data streams were utilized for the PHS
component: 911 surveillance tool, Emergency Medical Services (EMS) surveillance tool, EpiCenter
surveillance tool, and the Cincinnati Drug and Poison Information Center (DPIC) surveillance tool.  As
part of the PHS  component, several new systems were implemented to inform GCWW of a potential
contamination incident related to anomalous data provided by the surveillance tools.  Once anomalies are
identified, automated email alerts are  sent to public health partners and GCWW personnel, who conduct
an investigation according to the Cincinnati Pilot Operational Strategy.  For more information on this
topic, see Section 2.0. A summary of the results used to evaluate whether the PHS component met each
of the design objectives is provided below.

Methodology

Several methods were used to evaluate PHS performance. Data was tracked over time to illustrate the
change in performance as the component evolved during the evaluation period. Statistical methods were
also used to summarize large volumes of data collected over either the entire or various segments of the
evaluation period.  Data was also evaluated and summarized for each reporting period over the evaluation
period.  In this evaluation, the term reporting period is used to refer to one month of data that spans from
the 16th of the indicated month to the 15th of the following month. Thus, the January 2008 reporting
period refers to the data collected between January 16, 2008 and February 15, 2008. Additionally, three
drills and two full-scale exercises designed around mock contamination incidents were used to practice
and evaluate the full range of procedures, from initial detection through response.
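
The reporting period convention described above can also be expressed programmatically. The following
sketch is purely illustrative; the function name and the use of Python are assumptions and are not part of
the pilot documentation:

from datetime import date

def reporting_period(year, month):
    """Return the start and end dates of the reporting period named for the given
    month: the 16th of that month through the 15th of the following month."""
    start = date(year, month, 16)
    end_year, end_month = (year + 1, 1) if month == 12 else (year, month + 1)
    return start, date(end_year, end_month, 15)

# Example: the January 2008 reporting period spans January 16, 2008 - February 15, 2008.
print(reporting_period(2008, 1))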

Because there were no contamination incidents during the evaluation period, there is no empirical data to
fully evaluate the detection capabilities of the component.  To fill this gap, a computer model of the
Cincinnati CWS was developed and challenged with a large ensemble of simulated contamination
incidents in a simulation study. An ensemble of 2,015 contamination scenarios representing a broad range
of contaminants and injection  locations throughout the distribution system was used to evaluate the
effectiveness of the CWS in minimizing public health and utility infrastructure consequences.  The
simulations were also used for a benefit-cost analysis, which compares the monetized value of costs and
benefits and calculates the net present value of the CWS. Costs include implementation costs and routine
operation and maintenance labor and expenses, which were assumed over a 20-year lifecycle of the CWS.
Benefits included reduction in consequences (illness,  fatalities and infrastructure damage) and dual-use
benefits from routine operations.
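
The net present value calculation referenced above can be sketched in a few lines. This is a generic
illustration of discounting annual costs and benefits over a 20-year lifecycle; the discount rate and cash
flows below are hypothetical placeholders, not values from the Cincinnati benefit-cost analysis:

def net_present_value(implementation_cost, annual_costs, annual_benefits, discount_rate):
    """Discount each year's net benefit (benefit minus cost) to year zero and
    subtract the up-front implementation cost."""
    npv = -implementation_cost
    for year, (cost, benefit) in enumerate(zip(annual_costs, annual_benefits), start=1):
        npv += (benefit - cost) / (1 + discount_rate) ** year
    return npv

# Hypothetical 20-year lifecycle with constant O&M costs and monetized benefits.
print(net_present_value(
    implementation_cost=1_300_000,
    annual_costs=[18_000] * 20,
    annual_benefits=[90_000] * 20,
    discount_rate=0.03,
))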

Design Objective: Spatial Coverage

Spatial coverage is the cumulative area of the distribution system where a detectable increase in
symptomatic individuals could be reported via any of the PHS tools.  Spatial coverage is measured by the
metrics of area and population coverage, and the spatial extent of alerts. Collectively, the surveillance
tools used by the PHS component cover GCWW's entire service area (100% area coverage). Figure ES-1
depicts the overlapping coverage of the various surveillance tools. The 911 and EMS surveillance tools
monitor 911 calls and EMS runs that occur within the city of Cincinnati. The  cross-hatch shows the
GCWW retail service area, which is also the geographic area covered  by DPIC surveillance. The black
border depicts the boundary of Hamilton County, which is the area covered by the EpiCenter surveillance
tool. For more information on this topic, see the relevant subsections  regarding spatial coverage for each
PHS surveillance tool in Sections 4.0 through 7.0 and Section 8.2 for the integrated component.

-------
[Figure: map showing population per square mile, treatment plants, Cincinnati zip codes, Hamilton
County, and the GCWW service area.]
Figure ES-1. Spatial Coverage of the 911, EMS, and DPIC Surveillance Tools
Design Objective: Contaminant Coverage

Contaminant coverage is the ability to detect a wide range of water contaminants and is measured by
contamination scenario coverage. Since there were no contamination incidents during the evaluation,
results from the simulation study were used to assess this design objective. Table ES-1 demonstrates the
contaminants that are theoretically detectable by the PHS component based on available data in published
literature regarding health-seeking behavior in response to symptoms of illness. The table presents the
ratio of the critical concentration, which is the concentration that would produce adverse health effects, to
the detection threshold for each contaminant. The table also shows the percent of simulated
contamination incidents detected by the PHS component, as determined through analysis of simulation
results. For more information on this topic, see the relevant subsections regarding contaminant coverage
for each PHS surveillance tool in Sections 4.0 through 7.0 and Section 8.3 for the integrated component.

Table ES-1. Assumed Characteristics of Contaminants Detectable by the PHS Component
Type¹                    Critical Concentration /      % of Simulated Contamination
                         Detection Threshold           Incidents Detected
Toxic Chemical 1         458                           100%
Toxic Chemical 2         3,640                         100%
Toxic Chemical 3         1,640                         100%
Toxic Chemical 4         290                           100%
Toxic Chemical 5         668                           100%
Toxic Chemical 6         850                           100%
Toxic Chemical 7         950                           100%
Toxic Chemical 8         300                           100%
Biological Agent 1       4,500                         100%
Biological Agent 2       3,940                         100%
Biological Agent 3       2.40 x 10⁴                    100%
Biological Agent 4       4.54                          100%
Biological Agent 5       10.0                          100%
Biological Agent 6       1.74                          96.6%
Biological Agent 7       1.64                          96.7%
¹ Note that the contaminants being modeled in the simulation study were assigned generic IDs for security purposes.
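
The ratio tabulated above is a simple quotient of two concentrations expressed in the same units. The
values in the example below are hypothetical and are not taken from the simulation study:

def concentration_ratio(critical_concentration, detection_threshold):
    """Ratio of the critical concentration (the concentration producing adverse
    health effects) to the detection threshold, as reported in Table ES-1.
    Both arguments must be in the same units (e.g., mg/L)."""
    return critical_concentration / detection_threshold

# Hypothetical contaminant: critical concentration 4.58 mg/L, detection threshold 0.01 mg/L.
print(round(concentration_ratio(4.58, 0.01), 1))  # 458.0
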
Design Objective: Alert Occurrence

Alert occurrence tracks the frequency of alerts to determine how well the surveillance tools can
discriminate between public health incidents, including water contamination, and normal variability in the
underlying data. Metrics for this design objective include invalid and valid alerts, which were
characterized using empirical data. Invalid alerts occurred frequently at the beginning of the evaluation
period due to intentionally low threshold levels which provided opportunities to train public health
personnel on alert investigation procedures.  Following threshold adjustments for the 911 and EMS
surveillance tools, invalid alerts were reduced by approximately 90%. A total of 49 valid alerts (5 EMS
and 44 EpiCenter) were observed over the evaluation period, representing approximately 10% of the total
number of alerts across all of the surveillance tools. The PHS system produced valid alerts during various
public health incidents including an influenza outbreak in the city. For more information on this topic,
see the relevant subsections regarding alert occurrence for each PHS surveillance tool in Sections 4.0
through 7.0 and Section 8.4 for the integrated component.

Design Objective: Timeliness of Detection

For PHS, timeliness of detection refers to the time between when PHS data is transmitted and the time
that the investigation of anomalous data is completed. Factors that impact this objective include: time for
data transmission, time for event detection, time to recognize alerts and time to investigate alerts. These
metrics were characterized using empirical data.  Data from PHS  drills was used to evaluate the time to
investigate valid alerts. Across the surveillance tools, most data was transmitted and uploaded in one
hour or less, with EMS as the exception (average of 13.2 hours); event detection typically required less
than one hour, and the median time for alert recognition was between 10 and  13 hours.  For invalid  alerts,
most investigations were completed in 20 minutes or less.  Based on PHS drill data, the alert investigation
time ranged from 1.5 to 2 hours for simulated valid alerts.  Figure ES-2 demonstrates the investigation
timeline during PHS Drill 2 which involved both DPIC and 911 alerts.

-------


[Figure: PHS Drill 2 investigation timeline from 00:00 to 01:33. Events shown: 00:00, DPIC receives
reports of GI symptoms at a day care and begins an investigation; 00:26, DPIC activates the
Communicator; DPIC determines water contamination is likely; 00:30, 911 alert received; 00:39, WQM
station alert received; 00:42, Communicator discussion begins; 01:01, WUERM considers contamination
possible and suspects a chemical contaminant; 01:33, consensus determination of Possible
contamination.]
Figure ES-2. PHS Drill 2 Investigation Timeline (DPIC and 911 Alerts)
Analysis of the simulation study results showed an overall average time of detection for the PHS component of
approximately one day across all of the contamination scenarios that were detected. For most
surveillance tools, the detection timeline was generally more rapid for the toxic chemicals (within hours)
in comparison to the biological agents (within days to weeks), predominantly due to the longer symptom
onset time following exposure for the biological agents.  For more information, see the relevant
subsections regarding timeliness of detection for each PHS surveillance tool in Sections 4.0 through 7.0
and Section 8.5 for the integrated component.

Design Objective: Operational Reliability

Operational reliability metrics quantify the percent of time that the PHS tool is working as designed.
Availability of the PHS component was utilized to measure operational reliability through analysis of
empirical data. The PHS component exhibited excellent operational reliability during the evaluation
period, and at least a portion of the component was available 100% of the time.  The majority of PHS
downtime was due to network instability concurrent with Water Security Data Repository database
unavailability. For more information on this topic, see the relevant subsections regarding operational
reliability for each PHS surveillance tool in Sections 4.0 through 7.0 and Section 8.6 for the integrated
component.
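
Availability, as used here, is the fraction of the evaluation period during which a surveillance tool
operated as designed. A minimal sketch follows, assuming downtime is recorded as a list of outage
durations; the example values are hypothetical:

def availability(total_hours, downtime_hours):
    """Percent of time a surveillance tool was working as designed."""
    uptime = total_hours - sum(downtime_hours)
    return 100.0 * uptime / total_hours

# Hypothetical month: 720 hours of operation with two outages totaling 9 hours.
print(f"{availability(720, [6.0, 3.0]):.1f}%")  # 98.8%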

Design Objective: Sustainability

Sustainability is a key objective in the design of a CWS and each of its components, which for the
purpose of this evaluation is defined in terms of the  cost-benefit trade-off. Empirical data as well as
feedback documented during component forums were used to evaluate costs, benefits, and compliance for
the PHS component. Costs were estimated over the lifecycle of the system to provide an estimate of the
total cost of ownership. Table ES-2 demonstrates the value of the major cost elements used to calculate
the total lifecycle cost of the PHS component. These costs were tracked as empirical data during the
design and implementation phase of the project, and were analyzed through a benefit-cost analysis. It
is important to note that the Cincinnati CWS was  a pilot research project, and as such incurred higher
costs than would be expected for a typical large utility installation.

-------
Table ES-2. Cost Elements used in the Calculation of Lifecycle Cost
Parameter                           Value
Implementation Costs                $1,305,966
Annual O&M Costs                    $17,871
Renewal and Replacement Costs¹      $241,531
Salvage Value¹                      -
¹ Calculated using major pieces of equipment.

To calculate the total lifecycle cost of the PHS component, all costs and monetized benefits were adjusted
to 2007 dollars using the change in the Consumer Price Index (CPI) between 2007 and the year that the
cost or benefit was realized. Subsequently, the implementation costs, renewal and replacement costs, and
annual operation and maintenance costs were combined to determine the total lifecycle cost:
       PHS Total Lifecycle Cost:  $1,788,073
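
The CPI adjustment described above deflates each cost to 2007 dollars before summing. The sketch below
illustrates the arithmetic only; the CPI values, realization years, and 20-year O&M assumption are
placeholders, so the result will not reproduce the $1,788,073 figure reported in the evaluation:

# Placeholder CPI values (index points); substitute the published annual CPI series.
CPI = {2007: 207.3, 2008: 215.3, 2009: 214.5, 2010: 218.1}

def to_2007_dollars(amount, year_realized):
    """Deflate a cost or benefit realized in a later year to 2007 dollars."""
    return amount * CPI[2007] / CPI[year_realized]

# Illustrative combination of the cost elements from Table ES-2, with assumed
# realization years and a 20-year stream of annual O&M costs.
lifecycle_cost = (to_2007_dollars(1_305_966, 2008)
                  + to_2007_dollars(241_531, 2009)
                  + 20 * to_2007_dollars(17_871, 2009))
print(f"Illustrative total lifecycle cost: ${lifecycle_cost:,.0f}")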

A similar PHS component implementation at another utility should be less expensive when compared to
the  Cincinnati pilot as it could benefit from lessons learned and would not incur research-related costs.

The benefits that have been afforded from implementation of the PHS component include:

    •   Relationships formed and knowledge gained that can be applied in other areas of the
        participating agencies,
    •   Improved knowledge of partner agencies' abilities and organizational structure,
    •   Use of 911 and EMS data for other applications, and
    •   Improved coordination between the public health partners and the utility during emergency
       response.
Compliance was demonstrated through 100% participation in drills and exercises, which required
substantially more effort than routine investigations but was beneficial to the public health partners and
GCWW, as demonstrated by more efficient and effective communication during responses to Possible
water contamination. Furthermore, compliance was evidenced by a high rate of alert investigations
completed by the public health partners during the evaluation period (>75% during most months).  For
more information on this topic, see Section 8.7.

-------
                                   Table of Contents
LIST OF FIGURES	xii

LIST OF TABLES	xiv

SECTION 1.0: INTRODUCTION	1

  1.1     CWS DESIGN OBJECTIVES	1
  1.2     ROLE OF PUBLIC HEALTH SURVEILLANCE IN THE CINCINNATI CWS	2
  1.3     OBJECTIVES	2
  1.4     DOCUMENT ORGANIZATION	3

SECTION 2.0: OVERVIEW OF THE PHS COMPONENT	4

  2.1     PUBLIC HEALTH SURVEILLANCE TOOLS	5
  2.2     COMMUNICATION AND COORDINATION	7
  2.3     COMPONENT RESPONSE PROCEDURES	8
  2.4     ROLES AND RESPONSIBILITIES	8
  2.5     SUMMARY OF SIGNIFICANT PUBLIC HEALTH SURVEILLANCE COMPONENT MODIFICATIONS	9
  2.6     TIMELINE OF PHS DEVELOPMENT PHASES AND EVALUATION-RELATED ACTIVITIES	11

SECTION 3.0: METHODOLOGY	13

  3.1     ANALYSIS OF EMPIRICAL DATA FROM ROUTINE OPERATIONS	13
  3.2     DRILLS AND EXERCISES	13
    3.2.1    PHS Drill 1 (August 22, 2008)	13
    3.2.2    CWS Full Scale Exercise 2 (October 1, 2008)	14
    3.2.3    PHS Tabletop Exercise (April 22, 2009)	14
    3.2.4    PHS Drill 2 (July 28, 2009)	14
    3.2.5    CWS Full Scale Exercise 3 (October 21, 2009)	14
  3.3     SIMULATION STUDY	15
  3.4     FORUMS	19
  3.5     ANALYSIS OF LIFECYCLE COSTS	20

SECTION 4.0: PERFORMANCE OF THE 911 SURVEILLANCE TOOL	22

  4.1     DESCRIPTION OF THE 911 SURVEILLANCE TOOL	22
  4.2     DESIGN OBJECTIVE: SPATIAL COVERAGE	23
    4.2.1    Area and Population Coverage	23
    4.2.2    Spatial Extent of an Alert	24
    4.2.3    Summary	27
  4.3     DESIGN OBJECTIVE: CONTAMINANT COVERAGE	27
    4.3.1    Contamination Scenario Coverage	27
    4.3.2    Summary	29
  4.4     DESIGN OBJECTIVE: ALERT OCCURRENCE	29
    4.4.1    Invalid Alerts	29
    4.4.2    Summary	30
  4.5     DESIGN OBJECTIVE: TIMELINESS OF DETECTION	31
    4.5.1    Time for Data Transmission	31
    4.5.2    Time for Event Detection	32
    4.5.3    Time for Alert Recognition	33
    4.5.4    Time to Investigate Alerts	34
    4.5.5    Summary	39
  4.6     DESIGN OBJECTIVE: OPERATIONAL RELIABILITY	40
    4.6.1    Availability	40
    4.6.2    Summary	42

SECTION 5.0: PERFORMANCE OF THE EMS SURVEILLANCE TOOL	43

-------
  5.1    DESCRIPTION OF THE EMS SURVEILLANCE TOOL	43
  5.2    DESIGN OBJECTIVE:  SPATIAL COVERAGE	45
     5.2.1    Area and Population Coverage	45
     5.2.2    Spatial Extent of an Alert	48
     5.2.3    Summary	50
  5.3    DESIGN OBJECTIVE:  CONTAMINANT COVERAGE	50
     5.3.1    Contamination Scenario Coverage	50
     5.3.2    Summary	52
  5.4    DESIGN OBJECTIVE: ALERT OCCURRENCE	52
     5.4.1    Invalid Alerts	52
     5.4.2    Valid Alerts	57
     5.4.3    Summary	57
  5.5    DESIGN OBJECTIVE:  TIMELINESS OF DETECTION	57
      5.5.1    Time for Data Transmission	58
     5.5.2    Time for Event Detection	59
     5.5.3    Time for Alert Recognition	59
     5.5.4    Time to Investigate Alerts	60
     5.5.5     Summary	65
  5.6    DESIGN OBJECTIVE:  OPERATIONAL RELIABILITY	65
     5.6.1    Availability	65
     5.6.2    Summary	66

SECTION 6.0: PERFORMANCE OF THE EPICENTER SURVEILLANCE TOOL	68

  6.1    DESCRIPTION OF THE EPICENTER SURVEILLANCE TOOL	68
  6.2    DESIGN OBJECTIVE: SPATIAL COVERAGE	70
     6.2.1    Area and Population Coverage	70
  6.3    DESIGN OBJECTIVE: CONTAMINANT COVERAGE	70
     6.3.1    Contamination Scenario Coverage	71
     6.3.2    Contaminant Detection Threshold.	72
     6.3.3    Summary	74
  6.4    DESIGN OBJECTIVE: ALERT OCCURRENCE	74
     6.4.1    Invalid Alerts	74
     6.4.2    Valid Alerts	76
     6.4.3    Summary	78
  6.5    DESIGN OBJECTIVE: TIMELINESS OF DETECTION	78
     6.5.1    Time for Data Transmission	78
     6.5.2    Time for Event Detection	79
     6.5.3    Time to Investigate Alerts	79
     6.5.4    Summary	81
  6.6    DESIGN OBJECTIVE: OPERATIONAL RELIABILITY	82
     6.6.1    Availability	82
     6.6.2    Summary	82

SECTION 7.0: PERFORMANCE OF THE DPIC SURVEILLANCE TOOL	83

  7.1    DESCRIPTION OF THE DPIC SURVEILLANCE TOOL	83
  7.2    DESIGN OBJECTIVE:  SPATIAL COVERAGE	85
      7.2.1    Area and Population Coverage	85
     7.2.2    Spatial Extent of an Alert	86
     7.2.3    Summary	87
  7.3    DESIGN OBJECTIVE:  CONTAMINANT COVERAGE	88
     7.3.1    Contamination Scenario Coverage	88
     7.3.2    Summary	90
  7.4    DESIGN OBJECTIVE: ALERT OCCURRENCE	90
     7.4.1    Invalid Alerts	90
     7.4.2    Summary	91
  7.5    DESIGN OBJECTIVE:  TIMELINESS OF DETECTION	91

-------
     7.5.1    Time for Data Transmission	92
     7.5.2    Time for Event Detection	92
     7.5.3    Time for Alert Recognition	93
     7.5.4    Time to Investigate Alerts	94
     7.5.5    Summary	99
  7.6    DESIGN OBJECTIVE: OPERATIONAL RELIABILITY	99
     7.6.1    Availability	99
     7.6.2    Summary	100

SECTION 8.0: PERFORMANCE OF THE INTEGRATED COMPONENT	101

  8.1    DESCRIPTION OF THE INTEGRATED PHS COMPONENT	101
     8.1.1    Surveillance Tools Overview	101
     8.1.2    Analysis Methodology	102
  8.2    INTEGRATED DESIGN OBJECTIVE: SPATIAL COVERAGE	103
  8.3    INTEGRATED DESIGN OBJECTIVE: CONTAMINANT COVERAGE	104
  8.4    INTEGRATED DESIGN OBJECTIVE: ALERT OCCURRENCE	105
  8.5    INTEGRATED DESIGN OBJECTIVE: TIMELINESS OF DETECTION	110
  8.6    INTEGRATED DESIGN OBJECTIVE: OPERATIONAL RELIABILITY	112
  8.7    INTEGRATED DESIGN OBJECTIVE: SUSTAINABILITY	113
     8.7.1    Costs	113
     8.7.2    Benefits	116
     8.7.3    Compliance	117
  8.8    SUMMARY OF THE INTEGRATED COMPONENT	118

SECTION 9.0: SUMMARY AND CONCLUSIONS	120

  9.1    HIGHLIGHTS OF ANALYSIS	120
  9.2    LIMITATIONS OF THE ANALYSIS	121
  9.3    POTENTIAL APPLICATIONS OF THE PHS COMPONENT	121

SECTION 10.0: REFERENCES	123

SECTION 11.0: ABBREVIATIONS	124

SECTION 12.0: GLOSSARY	126

-------
                                   List of Figures
FIGURE 2-1. SPATIAL COVERAGE OF THE 911, EMS, AND DPIC SURVEILLANCE TOOLS	7
FIGURE 2-2. THE COMMUNICATOR PROTOCOL	11
FIGURE 2-3. TIMELINE OF PHS COMPONENT ACTIVITIES	12
FIGURE 4-1. AREA COVERAGE OF 911 ALERTS IN CITY OF CINCINNATI (N=86)	24
FIGURE 4-2. SPATIAL EXTENT OF 911 ALERTS (N=86, EMPIRICAL DATA)	25
FIGURE 4-3. HISTOGRAM OF 911 ALERT AREAS (N=86, EMPIRICAL DATA)	26
FIGURE 4-4. 911 INVALID ALERTS PER REPORTING PERIOD (N=86) AND WITH ADDITIONAL ALERTING CRITERIA (N=7)
	30
FIGURE 4-5. HISTOGRAM OF NUMBER OF CALLS PER ALERT (N=86)	30
FIGURE 4-6. 911 SURVEILLANCE TOOL AVERAGE TIME FOR DATA TRANSMISSION	32
FIGURE 4-7. 911 SURVEILLANCE TOOL AVERAGE TIME FOR EVENT DETECTION	33
FIGURE 4-8. AVERAGE TIME TO RECOGNIZE 911 ALERTS	34
FIGURE 4-9. 911 AVERAGE INVALID ALERT INVESTIGATION TIME (N=39, EMPIRICAL DATA)	36
FIGURE 4-10. PHS DRILL 2 TIMELINE (911  ALERT)	37
FIGURE 4-11. 911 SURVEILLANCE TOOL TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	38
FIGURE 4-12. 911 SURVEILLANCE TOOL TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	39
FIGURE 4-13. 911 SURVEILLANCE TOOL DOWNTIME (EVENTS > 1 HOUR)	41
FIGURE 4-14. 911 SURVEILLANCE TOOL AVAILABILITY	41
FIGURE 5-1. EMS ALERTS PER ZIP CODE (CITY OF CINCINNATI, N=77)	46
FIGURE 5-2. NUMBER OF ZIP CODES IN EMS ALERTS PRIOR TO ALERTING MODIFICATION (N=62)	47
FIGURE 5-3. NUMBER OF ZIP CODES IN EMS ALERTS POST ALERTING MODIFICATIONS (N= 15)	47
FIGURE 5-4. TOTAL EMS RUNS PER ZIP CODE ASSOCIATED WITH ALERTS DURING EVALUATION PERIOD (CITY OF
CINCINNATI, N=77)	49
FIGURE 5-5. TOTAL EMS ALERTS PER ZIP CODE WITH MULTIPLE EMS RUNS (CITY OF CINCINNATI, N=77)	50
FIGURE 5-6. EMS INVALID ALERTS PER REPORTING PERIOD (N=72)	53
FIGURE 5-7. EMS RUNS PER ALERT (N=62)	54
FIGURE 5-8. EMS RUNS PER ALERT (N= 15)	54
FIGURE 5-9. RATIO OF EMS RUNS/AFFECTED ZIP CODES PER ALERT: PRE-UPDATED ALERTING CRITERIA (N=62) AND
POST-UPDATED ALERTING CRITERIA (N= 15)	55
FIGURE 5-10. PERCENTAGES OF SYNDROMES FOR EMS ALERTS (N=77)	55
FIGURE 5-11. SYNDROME CATEGORIES FOR EMS ALERTS (N=77)	56
FIGURE 5-12. EMS SURVEILLANCE TOOL - AVERAGE TIME FOR DATA TRANSMISSION	58
FIGURE 5-13. EMS SURVEILLANCE TOOL - AVERAGE TIME FOR EVENT DETECTION	59
FIGURE 5-14. AVERAGE TIME TO RECOGNIZE EMS ALERT	60
FIGURE 5-15. EMS AVERAGE INVALID ALERT INVESTIGATION TIME (N=43, EMPIRICAL DATA)	62

-------
FIGURE 5-16. PHS DRILL 1 TIMELINE (EMS ALERT)	63
FIGURE 5-17. EMS DATA STREAM TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	63
FIGURE 5-18. EMS SURVEILLANCE TOOL TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	64
FIGURE 5-19. EMS SURVEILLANCE TOOL DOWNTIME (EVENTS > 1 HOUR)	66
FIGURE 5-20. EMS SURVEILLANCE TOOL AVAILABILITY	66
FIGURE 6-1. AVERAGE AND MINIMUM CASE COUNTS PER SYNDROME ALERT	73
FIGURE 6-2. AVERAGE AND MINIMUM CASE COUNTS ABOVE SYNDROME THRESHOLDS PER ALERT	73
FIGURE 6-3. EPICENTER INVALID ALERTS PER REPORTING PERIOD	75
FIGURE 6-4. PERCENT OF EPICENTER INVALID ALERTS BY SYNDROME	75
FIGURE 6-5. CASES PER INVALID ALERT	76
FIGURE 6-6. VALID ALERT COUNT AND DURATION (IN CUMULATIVE DAYS) PER REPORTING PERIOD	77
FIGURE 6-7. AVERAGE DAILY COUNTS BY SYNDROME DURING DIFFERENT TIME PERIODS	78
FIGURE 6-8. EPICENTER TIME FOR EVENT DETECTION	79
FIGURE 6-9. EPICENTER SURVEILLANCE TOOL TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	80
FIGURE 6-10. EPICENTER SURVEILLANCE TOOL TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	81
FIGURE 7-1. DPIC DRINKING WATER SURVEILLANCE PROCESS FLOW	83
FIGURE 7-2. HISTOGRAM OF ALERTS PER ZIP CODE	86
FIGURE 7-3. HISTOGRAM OF ZIP CODES PER ALERT	87
FIGURE 7-4. DPIC INVALID ALERTS PER REPORTING PERIOD	91
FIGURE 7-5. AVERAGE TIME TO RECOGNIZE DPIC ALERT BY MONTH	93
FIGURE 7-6. DPIC AVERAGE INVALID ALERT INVESTIGATION TIME (N=486, EMPIRICAL DATA)	95
FIGURE 7-7. PHS DRILL 2 TIMELINE (DPIC ALERT)	96
FIGURE 7-8. DPIC SURVEILLANCE TOOL TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	96
FIGURE 7-9. DPIC SURVEILLANCE TOOL TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	97
FIGURE 7-10. ASTUTE CLINICIAN DATA STREAM TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	98
FIGURE 7-11. ASTUTE CLINICIAN DATA STREAM TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	99
FIGURE 8-1. ALERTS PER MONTH FOR THE INTEGRATED PHS COMPONENT	107
FIGURE 8-2. PHS COMPONENT TIMELINESS OF DETECTION	111
FIGURE 8-3. PHS COMPONENT TIMELINESS OF DETECTION AND LOW SYMPTOM ONSET	112
FIGURE 8-4. PHS COMPONENT DATA COMPLETENESS (BASED ON 911, EMS AND EPICENTER DATA STREAMS)	113
FIGURE 8-5. PERCENT INVESTIGATION CHECKLISTS COMPLETED PER MONTH	118

-------
                                    List of Tables
TABLE 2-1. PUBLIC HEALTH SURVEILLANCE COMPONENT DESIGN ELEMENTS	5
TABLE 2-2. PHS SURVEILLANCE TOOL OVERVIEW	6
TABLE 2-3. PUBLIC HEALTH USER'S GROUP ROLES AND RESPONSIBILITIES	9
TABLE 2-4. PHS COMPONENT MODIFICATIONS	9
TABLE 3-1. PHS DRILL VARIATIONS	15
TABLE 3-2. ASSUMED CHARACTERISTICS OF CONTAMINANTS DETECTABLE BY THE PHS COMPONENT	17
TABLE 4-1. GENERALIZED 911 INCIDENT CODES	22
TABLE 4-2. STATISTICAL ANALYSIS OF SPATIAL EXTENT OF 911 ALERTS (N=86, EMPIRICAL DATA)	26
TABLE 4-3. AVERAGE RADIUS AND AREA OF FIRST 911 ALERT BY CONTAMINANT (SIMULATION STUDY DATA)	27
TABLE 4-4. 911 DETECTION STATISTICS	28
TABLE 4-5. 911 ALERT RECOGNITION TIME (HOURS)	34
TABLE 4-6. 911 INVALID ALERT INVESTIGATION TIME (MINUTES, EMPIRICAL DATA)	36
TABLE 4-7. 911 SURVEILLANCE TOOL TIMELINESS OF DETECTION (MINUTES, SIMULATION STUDY DATA)	38
TABLE 5-1. CUSUM INTERPRETATION TABLE	43
TABLE 5-2. EARS SYNDROME CATEGORIES AND MEDICAL COMPLAINTS	44
TABLE 5-3. EMS ALERT STATISTICS (JANUARY 16, 2008 - MAY 12, 2009, N=62)	48
TABLE 5-4. EMS ALERT STATISTICS (MAY 13, 2009 - JUNE 15, 2010, N=15)	48
TABLE 5-5. EMS DETECTION STATISTICS	51
TABLE 5-6. EMS ALERT RECOGNITION TIME (HOURS)	60
TABLE 5-7. EMS INVALID ALERT INVESTIGATION TIME (MINUTES, EMPIRICAL DATA)	62
TABLE 5-8. EMS DATA STREAM TIMELINESS OF DETECTION (MINUTES, SIMULATION STUDY DATA)	64
TABLE 6-1. EPICENTER SYNDROMES	69
TABLE 6-2. EPICENTER ALGORITHMS	69
TABLE 6-3. EPICENTER DETECTION STATISTICS	71
TABLE 6-4. EPICENTER SURVEILLANCE TOOL TIMELINESS OF DETECTION (MINUTES, SIMULATION STUDY DATA)	81
TABLE 7-1. STATISTICS OF ALERTS PER ZIP CODE	85
TABLE 7-2. STATISTICS OF ZIP CODES PER ALERT	86
TABLE 7-3. CLUSTER FREQUENCY PER ZIP CODE	87
TABLE 7-4. DPIC DETECTION STATISTICS	88
TABLE 7-5. ASTUTE CLINICIAN DETECTION STATISTICS	89
TABLE 7-6. STATISTICS OF ALERTS PER MONTH	91
TABLE 7-7. TIME FOR EVENT DETECTION	92
TABLE 7-8. TIME TO RECOGNIZE DPIC ALERT (HOURS)	93
TABLE 7-9. DPIC INVALID ALERT INVESTIGATION TIME (MINUTES, EMPIRICAL DATA)	95

-------
TABLE 7-10. DPIC DATA STREAM TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	97
TABLE 7-11. ASTUTE CLINICIAN DATA STREAM TIMELINESS OF DETECTION (SIMULATION STUDY DATA)	98
TABLE 7-12. DPIC AVAILABILITY	100
TABLE 8-1. COMPARISON OF SYNDROMES FROM PHS SURVEILLANCE TOOLS	102
TABLE 8-2. EVALUATION OF SPATIAL COVERAGE METRICS	103
TABLE 8-3. EVALUATION OF CONTAMINANT COVERAGE	105
TABLE 8-4. EVALUATION OF ALERT OCCURRENCE	106
TABLE 8-5. CONCURRENT PHS ALERTS (EMPIRICAL DATA)	107
TABLE 8-7. EVALUATION OF TIMELINESS	110
TABLE 8-8. EVALUATION OF OPERATIONAL RELIABILITY	112
TABLE 8-9. COST ELEMENTS USED IN THE CALCULATION OF LIFECYCLE COST	114
TABLE 8-10. IMPLEMENTATION COSTS	114
TABLE 8-11. ANNUAL O&M COSTS	115
TABLE 8-12. EQUIPMENT COSTS	116

-------
           Water Security Initiative: Evaluation of the Public Health Surveillance Component
                       of the Cincinnati Contamination Warning System Pilot
                            Section 1.0:  Introduction

The purpose of this document is to describe the evaluation of the public health surveillance (PHS)
component of the Cincinnati pilot, the first such pilot deployed under the U.S. Environmental Protection
Agency's (EPA) Water Security Initiative (WSI). The evaluation covers the period from January 2008 to
June 2010 when the PHS component was fully operational. This evaluation was implemented by
examining the performance of the PHS component relative to the design objectives established for the
contamination warning system (CWS).

1.1    CWS Design Objectives

The Cincinnati CWS was designed to meet six overarching objectives, which are described in detail in
WaterSentinel System Architecture (USEPA, 2005) and are presented briefly below:

    •  Spatial Coverage. The objective for spatial coverage is to monitor the entire population served
       by the drinking water utility. Spatial coverage can be considered geographically. PHS spatial
       coverage varies geographically based on population density, population demographics (industrial
       vs. residential), and/or types of surveillance tools used within a public health jurisdiction.
       Metrics applicable to spatial coverage include: area and population coverage, and spatial extent of
       an alert.

    •  Contaminant Coverage. The objective for contaminant coverage is to provide detection
       capabilities for all priority contaminants.  This design objective is further defined by binning the
       priority contaminants into 12 classes according to the means by which they might be detected
       (USEPA, 2005).  Use of these detection classes to inform design provides more comprehensive
       coverage of contaminants of concern than would be achieved by designing the system around a
       handful of specific contaminants. Contaminant coverage depends on the specific data streams
       analyzed by each monitoring and surveillance component, as well as the specific attributes of
       each component. The metric explored in this design objective is contamination scenario
       coverage.

    •  Alert Occurrence. The objective of this aspect of system design is to minimize the rate of
       invalid alerts (alerts unrelated to contamination or other anomalous conditions) while maintaining
       the ability of the system to detect real incidents. Metrics associated with alert occurrence include:
       invalid alerts and valid alerts.
    •  Timeliness of Detection. The objective of this aspect of system design is to provide initial
        detection of a contamination incident in a timeframe that allows for the implementation of
        response actions that result in significant consequence reduction. For monitoring and
       surveillance components, such as PHS, this design objective addresses only detection of an
       anomaly and investigation of the subsequent alert. Timeliness of response is addressed under
       consequence management and sampling and analysis (S&A). Metrics associated with timeliness
       of detection include: time for data transmission, time for event detection, time for alert
       recognition and time to investigate alerts.

    •  Operational Reliability. The objective for operational reliability is to achieve a sufficiently high
       degree of system availability, data completeness and data accuracy such that the probability of
       missing a contamination incident becomes exceedingly low. Operational reliability depends on
       the redundancies built into the CWS and each of its components. The metric used to evaluate
       operational reliability was availability.

-------

    •   Sustainability. The objective of this aspect of system design is to develop a CWS that provides
       benefits to the utility and partner organizations while minimizing the costs.  This can be achieved
       through leveraging of existing systems and resources that can readily be integrated into the design
       of the CWS.  Furthermore, a design that results in dual-use applications that benefit the utility in
       day-to-day operations, while also providing the capability to detect intentional or accidental
        contamination incidents, will also improve sustainability. For PHS, this design objective is
       discussed only within the section which covers the integrated component (Section 8), and
       includes costs, benefits and compliance.

The design objectives provide a basis for evaluation of each component, in this case PHS, as well as the
entire integrated system. Because the deployment of a drinking water CWS is a new concept, design
standards or benchmarks are unavailable. Thus, it is necessary to evaluate the performance of the pilot
CWS  in Cincinnati against the design objectives relative to the baseline state of the utility prior to CWS
deployment.

1.2    Role of Public Health Surveillance in the Cincinnati CWS

Under the WSI, a multi-component design was developed to meet the above design  objectives.
Specifically, the WSI CWS architecture utilizes four monitoring and surveillance components common to
the drinking water industry and public health sector: water quality monitoring (WQM), enhanced security
monitoring (ESM), customer complaint surveillance (CCS) and PHS.  Information from these four
components is integrated under a consequence management plan (CMP), which is supported by S&A
activities, to establish the credibility of possible contamination incidents and to inform response actions
intended to mitigate consequences.

The PHS component  of the Cincinnati CWS includes the surveillance tools that monitor the following
data streams: 911 calls, Emergency Medical Service (EMS) runs, Emergency Department (ED) patient
data from local hospitals (i.e., EpiCenter), and Poison Control Center (PCC) call data from the Cincinnati
Drug and Poison Information Center (DPIC). These surveillance tools were collectively monitored to
identify possible contamination incidents. Surveillance was performed on the data using appropriate
statistical algorithms as well as human surveillance, whereby public health personnel identify data
anomalies using professional judgment (i.e., the astute clinician). System users observe alert data to
identify clustering of cases, or common symptoms among cases.

When PHS generates an alert, appropriate personnel at the Greater Cincinnati Water Works (GCWW) are
notified according to  standard operating procedures as outlined in the Cincinnati Pilot Operational
Strategy. The general process for alert investigations in the Cincinnati CWS is outlined in the document,
Water Security Initiative: Interim Guidance on Developing an Operational Strategy for Contamination
Warning Systems (USEPA, 2008a).

1.3    Objectives

The overall objective of the PHS component evaluation is to demonstrate how well the component
functioned as part of the CWS deployed in  Cincinnati (i.e., how effectively the component achieved the
design objectives). This evaluation will describe how the surveillance tools (which are analyzed
independently and collectively) could reliably detect a possible contamination incident based on the
standard operating procedures established for the Cincinnati CWS. It will also characterize factors that
impact the sustainability of PHS in a CWS. Although no known contamination incidents occurred during
the evaluation period, the PHS component yielded sufficient data for the evaluation through information
collected during routine operation, drills and exercises, and from computer modeling conducted as part of
a simulation study. In summary, this document will discuss the approach for analysis and integration of
this information to assess the overall operation, performance, and sustainability of the PHS component as
part of the Cincinnati CWS.

1.4    Document Organization

This document contains the following sections:

    •  Section 2:  Overview of the PHS Component. This section introduces the PHS component of
       the Cincinnati CWS and describes each of the major design elements that make up the
       component. A summary of significant modifications to the component that had a demonstrable
       impact on performance is presented at the end of this section.

    •  Section 3:  Methodology. This section describes the data sources and techniques used to
       evaluate the PHS component.

    •  Sections 4 through 7:  Evaluation of PHS Surveillance Tools. Each of these sections
       addresses one of the PHS surveillance tools listed in Section 2.1. The design objectives described
       in Section 1.1 are covered for each surveillance tool, and the supporting evaluation metrics are
       discussed in a dedicated subsection under each design objective. For each metric, an overview of
       the analysis methodology is provided followed by presentation and discussion of the results.

    •  Section 8:  Performance of the Integrated PHS Component. This section includes a thorough
       evaluation of the integrated functionality of the PHS surveillance tools used in the Cincinnati
       CWS, including a comparative evaluation regarding how each tool met the stated design
       objectives.

    •  Section 9:  Summary and Conclusions.  This section provides an overall summary of the PHS
       component evaluation, discusses limitations of the study and describes potential additional
       applications.

    •  Section 10: References. This section lists all sources and documents cited throughout this
       report.

    •  Section 11: Abbreviations. This section lists all acronyms approved for use in the PHS
       component evaluation.

    •  Section 12: Glossary. This section defines terms used throughout the PHS component
       evaluation.

-------
            Section 2.0:  Overview of the PHS Component

Per the Centers for Disease Control and Prevention (CDC), public health surveillance is the "ongoing,
systematic collection, analysis, interpretation and dissemination of data about a health-related event for
use in public health action to reduce morbidity and mortality and to improve health" (Thacker and
Berkelman, 1988).  PHS involves the analysis of health-related data to identify disease events that may
stem from various sources, in this case, drinking water contamination. Using PHS successfully requires
the proper acquisition of data and application of analysis techniques, as well as effective communication
practices between essential investigative personnel.

For the Cincinnati CWS, existing PHS data and infrastructure provided a solid foundation to achieve the
goals of the PHS component as part of a CWS.  However, following a gap analysis, a number of
enhancements and modifications were identified to fully develop and/or optimize the surveillance tools
and communication and coordination protocols to meet the design objectives described in Section 1.1.
Specifically, automated event detection tools that could analyze PHS data (e.g., 911 calls and EMS runs)
and potentially provide early indication of drinking water contamination for contaminants with rapid
symptom onset had not been implemented. Therefore, the capability to provide timely detection of
contamination incidents resulting from contaminants with rapid symptom onset (i.e., contaminants that
produce symptoms within minutes to several hours of exposure to an acutely harmful dose) via near real-
time  detection was not available.  In addition, the lack of consistent and reliable mechanisms for
communication and coordination between the water utility and local health departments presented a
challenge in terms of defining roles and responsibilities to investigate alerts produced by the PHS tools.

The PHS component of the Cincinnati CWS leveraged a variety of Health Insurance Portability and
Accountability Act (HIPAA) compliant public health data sources to identify possible contamination
incidents. Two new event detection tools, the 911 surveillance tool and the EMS surveillance tool, were
implemented for the purposes of detecting increases in 911 calls and EMS runs which may indicate
exposure of individuals to contaminants with rapid symptom onset. Existing surveillance tools were also
utilized for identification of possible contamination incidents, including:  1) EpiCenter, which monitors
hospital ED admission reports for a rise in medical syndromes that may indicate disease outbreaks; and 2)
the DPIC surveillance tool, which monitors for chemical poisoning incidents. In addition to enhanced
data acquisition and analysis, protocols were implemented to improve the efficiency of communication
among Cincinnati Health Department (CHD), Hamilton County Public Health (HCPH), DPIC and
GCWW.

The PHS component of the Cincinnati CWS was fully deployed and operational by the end of 2007 and a
detailed description of the system at this  point in the project can be found in Water Security Initiative:
Cincinnati Pilot Post-Implementation System Status (USEPA, 2008b). During the next phase of the pilot,
the evaluation period from January 2008 through June 2010, the system was modified to optimize
performance and then analyzed.

The three main design elements for the PHS component are described in greater detail in Table 2-1.
Sections 2.1 through 2.3 provide an overview of each of the three PHS design elements, with an emphasis
on changes to the component during the evaluation period. Section 2.5 summarizes all significant
modifications to the PHS  system that are relevant to the interpretation of the evaluation results presented
in this report.

-------

Table 2-1. Public Health Surveillance Component Design Elements
Public Health Surveillance Tools: PHS data streams, including 911 calls, EMS runs, ED cases, and PCC
calls, are monitored using automated surveillance systems to identify possible drinking water
contamination.

Communication and Coordination: A mechanism and protocol for communication and coordination
between the appropriate local public health organizations and the drinking water utility, which is utilized
during PHS alert investigations and during other public health crises. A User's Group consisting of
public health and utility representatives meets periodically to discuss matters relevant to the PHS
component of the CWS as well as other current health topics.

Component Response Procedures: Written standard operating procedures exist for every step in
assessing PHS alerts and communicating with partners. These procedures outline effective and timely
communications, including clear guidance on appropriate response actions.
2.1    Public Health Surveillance Tools

The surveillance tools selected for the PHS component in combination with the analysis methods used by
public health personnel during ongoing surveillance of public health data aim to detect a broad spectrum
of contaminants of concern. A brief description of each of the PHS surveillance tools included in this
component evaluation is provided below:

    •   911 Surveillance Tool. 911 call data is collected by the Cincinnati Fire Department (CFD) and
       filtered based on incident code to include calls that are most indicative of possible water
       contamination. This data is analyzed spatially and temporally via SaTScan™ algorithms.
       Results from this analysis are displayed on the Public Health User Interface, an interactive web-
       based tool developed as part of the Cincinnati CWS to display information on 911 and EMS
       alerts.  Automated email alerts are sent whenever analysis results exceed pre-established
       thresholds. Because this data is collected by CFD, the analysis only applies to the portion of the
       GCWW service area within Cincinnati  city limits. Evaluation of the 911 surveillance tool is
       discussed more thoroughly in Section 4.

    •   EMS Surveillance Tool. EMS run data is collected by CFD paramedics and Emergency Medical
       Technicians (EMTs) upon completion of an EMS run. This data is uploaded to a database server
       via wireless routers at CFD fire houses, filtered for syndromes most likely to indicate water
        contamination, and analyzed using CDC's Early Aberration Reporting System (EARS); a simplified
        sketch of this style of temporal analysis follows this list. Like the
       911 analysis, results from this analysis  are displayed on the Public Health User Interface and
       automated email alerts are sent when thresholds are exceeded. This data also only applies to the
       portion of the GCWW service area within Cincinnati city limits.  Evaluation of the EMS
       surveillance tool is discussed more thoroughly in Section 5.

    •   ED Registration Data Surveillance Tool (EpiCenter).  ED registrations are entered at local
       hospitals following a patient visit to the ED.  Pertinent information from these  records is uploaded
       into EpiCenter (formerly the Real-Time Outbreak Detection System [RODS]), housed at the Ohio
       Department of Health (ODH).  Case data is categorized by syndrome, and is analyzed using a
       variety of algorithms.  Local public health personnel are notified when thresholds are exceeded.
       Since all Hamilton County hospitals submit data to EpiCenter, this surveillance tool covers the
       Hamilton County portion of the GCWW service area. Evaluation of EpiCenter is discussed in
       detail in Section 6.

    •   PCC Call Data Surveillance Tool  (DPIC).  Calls into DPIC are handled by trained
       toxicosurveillance specialists; call details are entered into the National Poison Data System
       (NPDS) interface.  Statistical, non-statistical, and human surveillance techniques are applied to
       data within NPDS in order to detect anomalies possibly related to water contamination. Part of
        the human surveillance performed on DPIC data is observation of any calls from primary care
        physicians pertaining to severe or unusual symptoms exhibited by their patients. Because DPIC
       serves the entire Southwest Ohio region, this data source covers the entire GCWW service area.
       Evaluation of the DPIC surveillance tool is discussed more thoroughly in Section 7.
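
To illustrate the style of temporal aberration detection that EARS applies to the EMS syndrome counts
(referenced in the EMS item above), the sketch below implements a simplified C2-like check: the most
recent day's count is flagged when it exceeds the mean of a recent baseline by three standard deviations.
This is a generic illustration only; the actual EARS algorithms, baseline lengths, and thresholds used in
the Cincinnati pilot may differ:

from statistics import mean, stdev

def c2_style_alert(daily_counts, baseline_days=7, lag_days=2, threshold_sigmas=3.0):
    """Flag the most recent day's syndrome count if it exceeds the baseline mean
    by `threshold_sigmas` standard deviations. The baseline is the block of
    `baseline_days` days ending `lag_days` days before the current day, so that
    the start of an outbreak does not inflate the baseline."""
    today = daily_counts[-1]
    baseline = daily_counts[-(baseline_days + lag_days + 1):-(lag_days + 1)]
    mu, sigma = mean(baseline), stdev(baseline)
    return today > mu + threshold_sigmas * max(sigma, 1e-9)

# Hypothetical daily counts of water-syndrome EMS runs; the final day spikes.
counts = [2, 1, 3, 2, 2, 1, 2, 3, 2, 9]
print(c2_style_alert(counts))  # True
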
A fifth surveillance tool, the National Retail Data Monitor (NRDM), was considered for the Cincinnati
CWS.  The NRDM monitors the sales of over-the-counter (OTC) medications as a potential indicator of
disease outbreaks. Unfortunately, data reporting from area pharmacies was inconsistent, and the
unreliability of the underlying data minimized the utility of the NRDM surveillance tool as a means of
early outbreak detection. Furthermore, it was not possible to evaluate NRDM data collected during the
evaluation period, as the data provider prohibited ODH from conducting research using the data or from
providing the data to a third party.

An overview of the data surveillance tools used and evaluated for the Cincinnati CWS can be found in
Table 2-2.

Table 2-2.  PHS Surveillance Tool Overview

911 Surveillance Tool
   Data Source: 911 call data
   Data Owner: CFD
   Data Type: Incident Codes
   Analysis Tool: SaTScan™
   Algorithms/Analysis Methods: Space-time statistical models
   Display: Public Health User Interface
   Spatial Coverage: City of Cincinnati (only locations within the jurisdiction of CFD; 22% of GCWW
   service area)

EMS Surveillance Tool
   Data Source: EMS run data
   Data Owner: CFD
   Data Type: Syndrome
   Analysis Tool: CDC EARS
   Algorithms/Analysis Methods: Temporal statistical models
   Display: Public Health User Interface
   Spatial Coverage: City of Cincinnati (only locations within the jurisdiction of CFD; 22% of GCWW
   service area)

EpiCenter Surveillance Tool
   Data Source: ED registration data
   Data Owner: ODH
   Data Type: Syndrome
   Analysis Tool: EpiCenter
   Algorithms/Analysis Methods: Temporal statistical models
   Display: EpiCenter User Interface
   Spatial Coverage: Hamilton County (includes 95% of GCWW service area)

DPIC Surveillance Tool
   Data Source: PCC call data
   Data Owner: NPDS
   Data Type: Syndrome
   Analysis Tool: NPDS
   Algorithms/Analysis Methods: Statistical, non-statistical, and human
   Display: NPDS User Interface
   Spatial Coverage: 100% of GCWW service area
In addition to the PHS surveillance tools noted in Table 2-2, identification of unusual cases by an astute
clinician at any participating agency may also produce an alert.  This type of alert could occur prior to
detection of any statistical anomalies in the data.  While an important piece of PHS, observations by
astute clinicians were not routinely documented during the evaluation period; however, the role of the
astute clinician is discussed in this report where appropriate.

Figure 2-1 depicts the overlapping coverage of the various surveillance tools.  As previously noted, the
911 and EMS surveillance tools monitor 911 calls and EMS runs that occur within the city of Cincinnati.
The cross-hatch shows the GCWW retail service area, which is also the geographic area covered by DPIC
surveillance.  Zip codes that fall either partially or completely within the city of Cincinnati boundaries are
represented by the dashed outline. It should be noted that because some zip codes extend beyond city
limits, the zip code boundaries do not precisely depict the city of Cincinnati boundaries, but they are a close
approximation. The black border depicts the boundary of Hamilton County, which is the area covered by
the EpiCenter surveillance tool.
[Figure: map showing population per square mile, treatment plants, Cincinnati zip codes, Hamilton
County, and the GCWW service area.]
Figure 2-1. Spatial Coverage of the 911, EMS, and DPIC Surveillance Tools

Figure 2-1 shows the population density of the GCWW service area for reference in subsequent sections
of this document. For example, algorithms that measure data volumes without accounting for the
underlying population density may register more alerts in areas that are more densely populated.
2.2    Communication and Coordination

Prior to implementation of the PHS component, one major gap identified was the lack of a reliable link or
consistent mechanism for data sharing between GCWW and the local public health partners. To
overcome this gap and support PHS component design objectives, the following improvements were
implemented:

    •   User's Group.  A Public Health User's Group (hereafter referred to as the "User's Group") was
       established in order to coordinate efforts required for the PHS component across all stakeholders.
       The User's Group includes representatives from CHD, HCPH, CFD, DPIC, the Federal Bureau of
       Investigation (FBI), and GCWW.  During the early stages of design and deployment, the User's
       Group met on a monthly basis to inform the design, use  and evaluation of tools proposed for the
        PHS component of the Cincinnati CWS. Following the completion of implementation activities,
        the group transitioned to a quarterly meeting schedule. The User's Group provides a forum to
       discuss not only issues related to the Cincinnati CWS, but other issues that impact both the public
       health community and the drinking water utility. Through participation in these meetings, an
       ongoing dialogue has been established that improves communication and coordination between
       GCWW and its local public health partners.
    •  Automated Email Alerts. In order to coordinate the distribution of PHS alerts, automated email
       notifications were set up to be sent to GCWW, DPIC, and local public health any time threshold
       alerting criteria were exceeded for the 911 and EMS data analysis. Initially, these emails
       included basic information on the type, date, and time of alerts (e.g., EARS alert for the water
       syndrome on 10/4/2008 at 8:30 am).  More detail was added to these email alerts through the
       evaluation period based on feedback from the User's Group (see Section 2.5, Major
       Modifications).
    •  Water Safety Hotline.  A 24-hour Water Safety Hotline was also established to improve access
        to the toxicological expertise available at DPIC.  The hotline number was distributed to the relevant utility
        and public health personnel for use whenever consultation is needed on symptoms or other
       details associated with a PHS investigation. As a result, another means of communication
       between GCWW, local public health, and DPIC was established.
    •  Communicator Protocol. This protocol established the use of an auto-dialer system operated by
       CFD to allow immediate notification to all relevant partners when a public health incident,
       including possible water contamination, is suspected (described below in Section 2.5).

The protocols for information sharing and communication implemented between PHS partners as part of
the Cincinnati CWS aimed to achieve the design objectives described in Section  1.1.  The extent to which
communication and coordination efforts accomplished these goals is discussed in Section 8.0, which
covers the integrated component.

2.3    Component Response Procedures

To capture the routine operation of the PHS component leading up to and after issuance of an alert
notification, GCWW developed detailed  operational strategy procedures.  The Cincinnati Pilot
Operational Strategy describes the process and procedures involved in the operation of the PHS
component, including the initial investigation and validation of a PHS alert. The Cincinnati Pilot
Operational Strategy establishes specific  roles and responsibilities, and details procedural and information
flow descriptions. Health partners complete investigation checklists when investigating alerts to record
data such as alert location and patient data including age, gender, location and symptom information.
Development of the Cincinnati Pilot Operational Strategy provided an opportunity to  better define the
protocols and procedures for how GCWW would work with local public health partners to investigate
alerts generated through the CWS. For PHS alerts, if investigators observe similar symptom descriptors
or syndromes, the cases are geographically clustered, and no other explanation can account for the cases, then water
contamination is deemed possible and the Cincinnati Pilot Consequence Management Plan is
implemented. The operational strategy includes a series of checklists that were developed to support the
investigation of PHS alerts.

2.4    Roles and Responsibilities

The PHS component depends on local public health agencies, emergency response personnel (e.g., 911
dispatchers and EMTs), PCC personnel and water utility staff for the purposes of providing pertinent
public health data, investigating subsequent alerts and making a Possible contamination determination
following alert investigations. These personnel are knowledgeable in the interpretation of alerts produced
by the various surveillance tools, as described in Section 2.1. In addition, they are aware of who to
contact during alert investigations, as well as following a Possible contamination determination.  General
responsibilities of representatives in the User's Group are outlined in Table 2-3.

Table 2-3.  Public Health User's Group Roles and Responsibilities
(All members participate in communications)

  Fire Department
    •  Provide HIPAA-compliant 911 and EMS data
    •  Maintain the PHS notification system (i.e., the communicator), including monthly routine test calls
    •  Provide supplemental information regarding 911 and EMS activity during investigation of possible
       drinking water contamination

  Poison Control Center
    •  Provide HIPAA-compliant poison control data
    •  Investigate DPIC surveillance alerts
    •  Provide supplemental toxicological expertise during investigation of possible drinking water
       contamination

  Local Public Health Agencies
    •  Investigate 911, EMS, and EpiCenter alerts
    •  Initiate or participate in communications with water utility and other health partners regarding
       concern of possible water contamination
    •  Follow up with health care providers to obtain specific case data during investigation of possible
       drinking water contamination

  Water Utility
    •  Receive notification of PHS alerts
    •  Review recent water quality/laboratory data for correlation to PHS alerts
    •  Notify other partners if a trigger is determined to be Possible
The roles and responsibilities described above capitalize on expertise available at the corresponding
agencies.  For example, public health personnel in charge of PHS alert investigations should have some
previous knowledge of syndromic surveillance.
2.5     Summary of Significant Public Health Surveillance Component Modifications

Per the implementation approach outlined in the document Interim Guidance on Planning for
Contamination Warning System Deployment, evaluation and refinement of each monitoring and
surveillance component is necessary to ensure proper operation of the system relative to the design
objectives (USEPA, 2007).  For the PHS component, necessary modifications were identified using
feedback received during User's Group meetings and lessons learned from drills and exercises.  An
overview of the significant component modifications implemented during the evaluation period can be
found in Table 2-4; these modifications will serve as a reference when discussing the results of the
evaluation presented in Sections 4.0 through 8.0.

Table 2-4.  PHS Component Modifications

  1.  Surveillance Tools: Adjusted database access components for more robust data acquisition and
      increased stability of data acquisition interface (June 12, 2008)
      Cause: Request to reduce application monitoring labor hours by partners

  2.  Surveillance Tools: Migrated from RODS to EpiCenter interface (Spring - Fall 2008)
      Cause: New emergency room data event detection tool available

  3.  Communication and Coordination: Cincinnati Pilot Operational Strategy modified: Possible water
      contamination determination made jointly between local public health and GCWW
      (September 15, 2008)
      Cause: Actions observed during PHS Drill 1 differed from existing PHS alert investigation procedures

  4.  Communication and Coordination: Created all-hours contact list (October 17, 2008)
      Cause: Local partner and utility personnel contact information not readily available during
      drills/exercises to communicate findings during investigation process

  5.  Surveillance Tools: To provide additional information during an investigation, the query for the
      Water Security Data Repository Data Detail page (EARS summary screen) was updated to include
      patient disposition information in the detailed record list (November 10-13, 2008)
      Cause: Request to include additional information on record list display

  6.  Surveillance Tools: Modified 911 incident codes being filtered for analysis (March 20, 2009)
      Cause: Local public health partners questioned the relevance of some 911 incident codes to water
      contamination

  7.  Surveillance Tools: More detail added to 911 and EMS alert email notifications (May 12, 2009)
      Cause: Request to include location details (latitude/longitude data converted to address location)
      on record list display in alert emails

  8.  Surveillance Tools: Added case data display in Google Earth for 911 and EMS alerts (May 12, 2009)
      Cause: Request to include spatial display of 911 calls and EMS runs associated with 911 and
      EMS alerts

  9.  Surveillance Tools: Alerting threshold adjusted through implementation of new alerting criteria for
      911 and EMS alerts (May 12, 2009)
      Cause: PHS component was generating too many 911 and EMS alerts

  10. Communication and Coordination: Developed "communicator" protocol (May 14, 2009)
      Cause: Actions observed during March 2009 PHS User Interface refresher training/contamination
      scenario tabletop discussion differed from existing PHS alert investigation procedures
In general, the major modifications served to improve data access for system users, improve data analysis,
and/or further refine communication procedures between public health personnel and GCWW. Examples
of improved data access include the additional detail added to email notifications and the display of case data
in Google Earth in the User Interface. Data analysis was improved by refining the 911 incident codes
filtered for analysis and by adjusting alerting thresholds to acceptable levels.  Finally,
communication and coordination was improved by refinement of PHS alert investigation procedures in
the Cincinnati Pilot Operational Strategy and through development of the "communicator" protocol,
described below.

The need for a central communication protocol was realized during a full-scale exercise in October 2008.
According to communication protocols at the time, one agency was acting as a hub between GCWW and
other public health agencies to relay pertinent investigational information.  However, as multiple
conversations ensued between different partners, the data "hub" fragmented into isolated information channels.
To compound the situation, technical issues with conference call scheduling occurred, which resulted in
communication difficulties for some partners who needed to provide pertinent information during the
exercise. This exercise highlighted issues with the existing communication procedures and demonstrated
information flow inefficiencies among partners.

To remediate this issue, the "communicator" protocol was implemented to allow expedient
communication among all members of the User's Group when an alert occurs.  The communicator is an
auto-dialer system operated by CFD, which can be utilized to issue an urgent message to all members of
the User's Group.  It can be used to notify personnel via phone and email of a possible water
contamination incident or other developing public health situation. When the communicator is activated,
the notification issued by the system will contain details of the incident and call-in information so that
partners can begin preliminary investigation and prepare for collaborative investigation via conference
call. An overview of the  communicator protocol is displayed in Figure 2-2.
[Figure 2-2 flow: a PHS alert is detected and an email alert is sent to the User's Group; the alert is analyzed by User's Group entities and determined valid and without known cause (i.e., contaminated water cannot be ruled out); a User's Group participant activates the communicator, which sends messages to the User's Group; the User's Group receives the message and each entity analyzes pertinent data; the User's Group convenes on a conference call to discuss the alert.]
Figure 2-2.  The Communicator Protocol
2.6     Timeline of PHS Development Phases and Evaluation-related Activities

Figure 2-3 presents a summary timeline for deployment of the PHS component, including milestone
dates for when significant component modifications and drill and exercise evaluation activities took place.
The timeline also shows the completion date for design and implementation, along with the subsequent
optimization and real-time monitoring phases of deployment. The information in this figure is
representative only of modifications that were implemented for the 911 and EMS surveillance tools as the
EpiCenter and DPIC tools were maintained separately by external partners.
[Figure 2-3 timeline: Design & Implementation Complete (Jan-08); Optimization (Jan-08 - Jun-08); Real-time Monitoring (Jun-08 - Jun-10); Database Component Access Adjusted (Jun-08); Drill 1 (Aug-08); Wind Storm (Sep-08); FSE 2 (Oct-08); Migration to EpiCenter (Oct-08); 911 Incident Code Modification (Mar-09); Adjusted 911 & EMS Alert Thresholds (May-09); Implemented Communicator Protocol (May-09); End of Data Collection (Jun-10)]

Figure 2-3. Timeline of PHS Component Activities



                           Section  3.0:  Methodology

The following section describes the evaluation techniques that were used to fully evaluate the PHS
component. The analysis of the PHS component was conducted using five evaluation techniques to
assess each surveillance tool and the overall integrated component: empirical data from routine
operations, results from drills and exercises, results from the CWS simulation study, findings from forums
such as lessons learned workshops and results from an analysis of lifecycle costs.

3.1    Analysis of Empirical Data from Routine Operations

This evaluation includes data on the performance, operation, and sustainability of the PHS component
from January 16, 2008 to June 15, 2010. In this evaluation, the term "reporting period" is used to refer to
a month of data which spans from the 16th of one month to the 15th of the next month. Thus, the January
2008 reporting period refers to the data collected between January 16, 2008 and February 15, 2008.
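The reporting period convention amounts to a simple date calculation. The following sketch (in Python) illustrates the convention described above; it is illustrative only and is not part of the evaluation tooling.

    from datetime import date

    def reporting_period(d: date) -> str:
        """Label the reporting period containing date d, assuming a period runs
        from the 16th of one month through the 15th of the next and is named
        for the month in which it starts."""
        if d.day >= 16:
            start = date(d.year, d.month, 16)
        else:
            # Dates on or before the 15th belong to the period that began the previous month.
            year, month = (d.year - 1, 12) if d.month == 1 else (d.year, d.month - 1)
            start = date(year, month, 16)
        return start.strftime("%B %Y")

    # Example: February 3, 2008 falls within the January 2008 reporting period.
    assert reporting_period(date(2008, 2, 3)) == "January 2008"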

Investigation data and timelines were provided through PHS investigation checklists.  To facilitate and
document PHS alert investigations, lead investigators were required to fill out an investigation checklist
indicating completion of procedures, summarizing findings, and detailing the investigation time. The
PHS component (specifically, the 911 and EMS surveillance tools) was modified as needed to optimize
performance from January 2008 - May 2009. While some investigation checklists were completed
during this optimization period, PHS investigators were not required to respond to alerts in real-time nor
complete an investigation checklist during this time. For the DPIC surveillance tool, investigation
checklists were completed in real-time throughout the January 2008 - June 2010 period. Finally, for the
EpiCenter surveillance tool, alert data spanning March 2008 - March 2010 was provided by Hamilton
County for analysis in this evaluation.

3.2    Drills and Exercises

Findings from drills and exercises were used to evaluate the alert investigation process, as implemented
by system users, and to determine whether timely and accurate conclusions resulted from the
investigation.  One main objective of the drills and exercises was to provide the local public health
partners and GCWW the opportunity to practice procedures associated with recognition of and response
to PHS alerts. Drills and exercises also provided an opportunity to identify which procedures required
modification to improve the efficiency of the investigation and communication processes.  All of the drills
and exercises that were designed to test and evaluate the  Cincinnati pilot were compliant with Homeland
Security Exercise and Evaluation Program guidelines.  A brief description of five drills and exercises
conducted for the purpose of component evaluation is provided below. These drills and exercises were:

    •  PHS Drill 1 (August 22, 2008)
    •  CWS Full Scale Exercise 2 (October 1, 2008)
    •  PHS Table Top Exercise (April 22, 2009)
    •  PHS Drill 2 (July 28, 2009)
    •  CWS Full Scale Exercise 3 (October 21, 2009)

3.2.1  PHS Drill 1 (August 22, 2008)

Description: The objectives of the drill were to evaluate the alert investigation procedures associated
with the PHS component of the Cincinnati CWS and the interactions between local public health partners
and the GCWW Water Utility Emergency Response Manager (WUERM) as they investigated the alert to
determine if drinking water contamination was possible.  In addition to evaluating implementation of the
procedures, elapsed time between drill actions was recorded to establish baseline data for future drill
activities.

Relevant Participants: PHS relevant participants are listed in Table 3-1.

3.2.2  CWS Full Scale Exercise 2 (October 1, 2008)

Description:  A Full Scale Exercise was conducted on October 1, 2008 to test all Cincinnati CWS
components. Investigation time associated with the public health alert investigation procedures and the
interactions between local public health partners and the GCWW WUERM were analyzed during this
exercise.  Note: CWS Full Scale Exercise 1 took place prior to the evaluation period and did not involve
the PHS component.

Role of PHS: EMS and DPIC alerts occurred after GCWW had already received a WQM alert and a
CCS alert. The public health partners concluded the PHS alerts were likely related to the previous
GCWW alerts. The local public health partners coordinated with GCWW on public notification and
response. However, CWS Full Scale Exercise 2 demonstrated issues with communication as the public
health partners were contacted by different members of the GCWW consequence management team
concurrently.  A key outcome of this exercise was streamlining of communications among all partners
during the latter stages of incident response.

Relevant Participants: Water Utility: GCWW (WUERM), Local Public Health Agencies: CHD
(Epidemiologist) and HCPH (Epidemiologist), and Poison Control Center: DPIC (Toxicologists)

3.2.3  PHS Tabletop Exercise (April 22,  2009)

Description:  The main objective of the tabletop exercise was to evaluate whether the User's Group
would determine if drinking water contamination was possible based on a simulated contamination
scenario that involved PHS  alerts.  The simulated action driving the tabletop exercise scenario was the
introduction of a toxic chemical into the water supply at a large reservoir. Individuals exposed to the
contaminant experienced unusual symptoms which resulted in 911 calls, EMS runs and ED visits.
Additionally, introduction of the contaminant resulted in WQM alerts and positive rapid field test results.

Relevant Participants: PHS relevant participants are listed in Table 3-1.

3.2.4  PHS Drill 2 (July 28, 2009)

Description:  The primary objective of the drill was to provide the local public health partners (CHD,
HCPH, and DPIC) and the GCWW WUERM the opportunity to practice the recognition of and response
to alerts generated by the PHS component. A secondary objective was to test the communication
procedures between local health partners and utility personnel during the alert investigation process.
Specifically, the newly developed "communicator" protocol was tested to practice implementation of
rapid communication amongst all members of the User's Group during the investigation. Drill objectives
were evaluated based on a simulated call to DPIC from a day-care facility, followed by a SaTScan™ alert
showing an increased number of 911 calls in the same area. In addition  to evaluating implementation of
the procedures, elapsed time between actions was also recorded.

Relevant Participants: PHS relevant participants are listed in Table 3-1.

3.2.5  CWS Full Scale Exercise 3 (October 21, 2009)
Description:  The Full Scale Exercise was based on a simulated contamination incident in the GCWW
drinking water distribution system.  The scenario involved the intentional injection of a large quantity of a
toxic chemical into the GCWW drinking water system through a fire hydrant in an urban neighborhood of
Cincinnati.  The contaminant selected for the scenario was expected to trigger CCS alerts, due to the odor
associated with the toxic chemical, and would produce sufficient public illness to generate delayed PHS
involvement, but not necessarily PHS alerts.  Local health departments were not available to participate in
the exercise due to their responsibilities in preparing for H1N1 pandemic influenza response.

Role of PHS:  DPIC personnel worked closely with GCWW personnel (WUERM, Incident Command
System [ICS]) and the simulated local health departments (CHD and HCPH) throughout the exercise.
Conference calls were conducted as new information became available, and information was shared effectively by all
of the participants. Evaluators noted that public health information was used to support the development
of public notices and to help identify the nature of the contaminant.  Additionally, based on the
professional opinion of a DPIC physician-toxicologist, the Incident Commander began evacuating the
impacted area.

Relevant Participants: Water Utility: GCWW (WUERM), Local Public Health Agencies:  Simulated,
and Poison Control Center: DPIC (Toxicologists)

Table 3-1. PHS Drill Variations

                                                       Drill 1      TTX          Drill 2
                                                       6/26/08      3/11/09      4/30/09
  Time of Drill (N = Normal business hours,
  A = After hours)                                     N            N            N
  Drill Participants
    GCWW WUERM                                         1            1            1
    Law Enforcement: FBI                               0            1            0
    Local Public Health: CHD (Epidemiologist)          1            1            1
    Local Public Health: HCPH (Epidemiologist)         1            1            1
    Local Fire Department: CFD (Fire Chief)            0            1            1
    Poison Control Center: DPIC (Toxicologists)        1            0            0
3.3    Simulation Study

Evaluation of certain design objectives relies on the occurrence of contamination incidents with known
and varied characteristics. Because contamination incidents are extremely rare, there is insufficient
empirical data to fully evaluate the detection capabilities of the Cincinnati CWS. To fill this gap, a
computer model of the Cincinnati CWS was developed and challenged with a large ensemble of
simulated contamination incidents in a simulation study. For the PHS component, simulation study data
was used to evaluate the following design objectives:

    •   Contaminant Coverage: Analyses conducted for this design objective quantify the ratio of
       contamination scenarios actually detected by the PHS component versus those that could
       theoretically be detected.  Simulations can also be used to understand which of the surveillance
       tools within the PHS component are the most valuable for detecting chemical, radiological, or
       biological incidents.

    •   Alert Occurrence: Analyses conducted for this design objective characterize contamination
       scenarios in which multiple PHS alerts occurred from different PHS data streams, and consider
       the order in which the alerts occurred.

    •   Timeliness of Detection: Analyses conducted to evaluate this design objective  quantify the time
       between the start of contaminant injection and the first PHS alert.

A broad range of contaminant types, producing a range of symptoms, was utilized in the simulation study
to characterize the detection capabilities of the monitoring and surveillance components of a CWS.  For
the purpose of the simulation study, a representative set of 17 contaminants was selected from the
comprehensive contaminant list that formed the basis for CWS design. These contaminants are grouped
into the broad categories listed below (the number in parentheses indicates the number of contaminants
from that category that were simulated during the study). For each contaminant category, a description
is also provided of how the critical concentration was derived; the critical concentration is the
concentration that would produce adverse health effects (or, for the nuisance chemicals, aesthetic
problems).

    •   Nuisance Chemicals (2): these chemical contaminants have a relatively low toxicity and thus
       generally do not pose an immediate threat to public health.  However, contamination with these
       chemicals can make the drinking water supply unusable. The critical concentration for nuisance
       chemicals was selected at levels that would make the water unacceptable to customers, e.g.,
       concentrations that result in objectionable aesthetic characteristics.
    •   Toxic Chemicals (8):  these chemicals are highly toxic and pose an acute risk to public health at
       relatively low concentrations. The critical concentration for toxic chemicals was based on the
       mass of contaminant that a 70 kg adult would need to consume in one liter of water to have a 10%
       probability of dying (LD10).
    •   Biological Agents (7):  these contaminants of biological origin include pathogens and toxins that
       pose a risk to public health at relatively low concentrations. The critical concentration for
       biological agents was based on the mass of contaminant that a 70 kg adult would need to
       consume in one liter of water, or inhale during a showering event, to have a 10% probability of
       dying (LD10).

Development of a detailed CWS model required extensive data collection and documentation of
assumptions regarding component and system operations. To the extent possible, model decision logic
and parameter values were developed from data generated through operation of the Cincinnati CWS,
although input from subject matter experts and available research was utilized as well.

The simulation study used several interrelated models, three of which  are relevant to the evaluation of
PHS: EPANET, Health Impacts and Human Behavior (HI/HB), and the PHS component model.  Each
model is further broken down into modules that simulate a particular process or attribute of the model.
The function of each of these models and their relevance to the evaluation of PHS is discussed below.

EPANET
EPANET is a common hydraulic and water quality modeling application widely used in the water
industry to simulate contaminant transport through a drinking water distribution system. In the simulation
study, it was used to produce contaminant concentration profiles at every node in the GCWW distribution
system model, based on the characteristics of each contamination scenario in the ensemble.  The
concentration profiles were used to determine the number of miles of pipe contaminated during each
scenario, which is one measure of the consequences of that contamination scenario.
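As an illustration of this post-processing step, the sketch below (in Python) counts contaminated pipe miles from node concentration profiles such as those produced by an EPANET run. The data structures and the rule that a pipe is counted as contaminated when either end node reaches the critical concentration are simplifying assumptions for illustration, not the documented study method.

    def contaminated_pipe_miles(pipes, node_profiles, critical_concentration):
        """Estimate miles of contaminated pipe from node concentration profiles.

        pipes: list of (start_node, end_node, length_miles) tuples.
        node_profiles: dict mapping node id -> list of concentrations over time.
        A pipe is counted if either end node ever reaches the critical
        concentration (an assumed counting rule)."""
        def node_hit(node):
            return any(c >= critical_concentration for c in node_profiles.get(node, []))

        return sum(length for start, end, length in pipes
                   if node_hit(start) or node_hit(end))

    # Example with two short pipes and one contaminated node:
    pipes = [("J1", "J2", 0.5), ("J2", "J3", 0.8)]
    profiles = {"J1": [0.0, 0.2], "J2": [0.0, 0.0], "J3": [0.0, 0.0]}
    print(contaminated_pipe_miles(pipes, profiles, critical_concentration=0.1))  # 0.5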

Health Impacts and Human Behavior Model
The HI/HB model used the concentration profiles generated by EPANET to simulate exposure of
customers in the GCWW service area to contaminated drinking water.  Depending on the type of
contaminant, exposures occurred during one  showering event in the morning (for the inhalation exposure
route), or during five consumption events spread throughout the day (for the  ingestion exposure route).
The HI/HB model used the dose received during exposure events to predict infections, onset of
symptoms, health-seeking behaviors of symptomatic customers and fatalities.

The primary output from the HI/HB model was a case table of affected customers, which captured the
time at which each transitioned to mild, moderate and severe symptom categories. Additionally, the
HI/HB model outputs the times at which exposed individuals would pursue various health-seeking
behaviors, which generate the input data for the following PHS surveillance tools: 911, EMS, DPIC, and
EpiCenter (ED data).  These case records were processed by the surveillance tools included in the PHS
component model.

The case table was used to determine the public health consequences of each scenario, specifically the
total number of illnesses and fatalities. Furthermore, EPANET and the HI/HB model were run twice for
each scenario; once without the CWS in operation and once with the CWS in operation. The paired
results from these runs were used to calculate the reduction in consequences due to CWS operations for
each simulated contamination scenario.
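The paired-run comparison reduces to a simple difference calculation for each consequence measure, as sketched below; the measure names and example values are illustrative, not results from the study.

    def consequence_reduction(baseline, with_cws):
        """Compare paired runs (without and with the CWS operating) and return
        the absolute and percentage reduction for each consequence measure."""
        reduction = {}
        for measure, base_value in baseline.items():
            saved = base_value - with_cws.get(measure, 0)
            pct = 100.0 * saved / base_value if base_value else 0.0
            reduction[measure] = {"absolute": saved, "percent": pct}
        return reduction

    # Example with made-up numbers for a single scenario:
    print(consequence_reduction({"illnesses": 1200, "fatalities": 40},
                                {"illnesses": 300, "fatalities": 5}))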

Public Health Surveillance Component Model
The PHS component model is based on the component as deployed and currently operating in the
Cincinnati CWS. Inputs from the HI/HB model (health-seeking behaviors from the case table) were
processed by the PHS Event Detection module, which is composed of four automated event detection
tools (911, EMS, EpiCenter and DPIC). The specific statistical surveillance method that was modeled for
DPIC was the volume-based clinical effects algorithm, which requires four calls from the same zip code
in a 24-hour window. Human surveillance detection methods were also integrated into the PHS Event
Detection module, which included active monitoring by DPIC toxicosurveillance specialists of calls
received (alerting threshold was set at 2 calls from the same node within a 4-hour window), and detection
by the simulated Astute Clinician (via number of cases seen by primary care physicians or ED
physicians).  In real-world situations, astute clinician monitoring is performed at virtually any agency
involved in PHS (in addition to the active monitoring being conducted by primary care physicians or ED
physicians).  When the number of cases met the alerting criteria established for any of the surveillance
methods, the module generated alerts which were processed by the PHS Alert Validation module.
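For example, the modeled DPIC volume-based clinical effects criterion (four calls from the same zip code within a 24-hour window) could be checked as in the following sketch; the call data structure is an illustrative assumption rather than the modeled implementation.

    from datetime import datetime, timedelta

    def dpic_volume_alert(call_times_by_zip, zip_code,
                          window=timedelta(hours=24), threshold=4):
        """Return True if at least `threshold` calls from the same zip code
        fall within any rolling window of the given length."""
        times = sorted(call_times_by_zip.get(zip_code, []))
        for i, start in enumerate(times):
            if sum(1 for t in times[i:] if t - start <= window) >= threshold:
                return True
        return False

    # Example: four calls from one zip code within a few hours meet the criterion.
    calls = {"45268": [datetime(2009, 5, 12, 8) + timedelta(hours=h) for h in (0, 1, 3, 5)]}
    print(dpic_volume_alert(calls, "45268"))  # True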

Fifteen of the 17 contaminants evaluated in the simulation study can produce low, moderate, or severe
symptoms in exposed individuals, who would then pursue various health-seeking behaviors. Thus, these
15 contaminants are theoretically detectable by PHS, while the two nuisance chemicals are not because
they were assumed to not produce symptomatic cases under the scenario conditions. Table 3-2 provides
a summary of the assumed delay for onset of low symptoms, and the ratio of the critical concentration to
the detection threshold for each contaminant.  The ratio was calculated to determine whether the detection
threshold was sufficient to detect water contaminated at concentrations equal or greater than the critical
concentration. Large ratios demonstrate the contaminants that can be detected at concentrations
significantly lower than the critical concentration.  The detection threshold values for PHS represent the
concentration of the contaminant that would result in enough illnesses to produce a signal, and were
obtained from a literature review and input from subject matter experts.
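The ratio itself is a direct division of the two concentrations, as sketched below; the example values are illustrative and are not taken from Table 3-2.

    def coverage_ratio(critical_concentration, detection_threshold):
        """Ratio of the critical concentration to the PHS detection threshold.
        Values well above 1 indicate detection is possible at concentrations far
        below those producing adverse health effects; both inputs must share the
        same units (e.g., mg/L)."""
        return critical_concentration / detection_threshold

    # Illustrative only: a contaminant harmful at 50 mg/L but producing a
    # detectable illness signal at 0.1 mg/L has a ratio of 500.
    print(coverage_ratio(50.0, 0.1))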

Table 3-2. Assumed Characteristics of Contaminants Detectable by the PHS Component

  Contaminant            Low Symptom Onset Delay¹    Critical Concentration / Detection Threshold
  Toxic Chemical 1       10 minutes                  458
  Toxic Chemical 2       15 minutes                  3,636
  Toxic Chemical 3       15 minutes                  1,640
  Toxic Chemical 4       1 hour                      290
  Toxic Chemical 5       15 minutes                  668
  Toxic Chemical 6       15 minutes                  850
  Toxic Chemical 7       10 minutes                  950
  Toxic Chemical 8       1 day                       300
  Biological Agent 1     30 minutes                  4,500
  Biological Agent 2     4 hours                     3,940
  Biological Agent 3     2 hours                     2.4 x 10^4
  Biological Agent 4     12 hours                    4.5
  Biological Agent 5     4 days                      10
  Biological Agent 6     1 day                       1.7
  Biological Agent 7     1 day                       1.6

  ¹ For the toxic chemical contaminants, the time from exposure to symptom onset is dose-dependent. Time
  parameter values for earliest onset of symptoms were assigned based on available medical and toxicological
  literature.

Outputs of the PHS Event Detection module provide inputs to the PHS Alert Validation module.  The
primary outputs from the PHS Event Detection module are time of alert, location of alert and type of alert.
This information is used by the PHS Alert Validation module, which included activation of the
communicator protocol, as described in Section 2.5, to determine whether contamination is possible.
During the communicator discussion, the local health partners and GCWW would conclude that water
contamination is possible based on geographical clustering of cases and similarity of symptoms evident
from the alert notifications. The procedures included in this module are representative of the alert
investigation process that public health partners and GCWW utilize in the Cincinnati CWS.
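A highly simplified sketch of this validation logic is shown below; the clustering test (all cases within a fixed radius of the case centroid) and the field names are illustrative assumptions, not the documented investigation procedure.

    import math

    def contamination_possible(cases, max_cluster_radius_miles=5.0):
        """Deem contamination possible when all cases share a symptom category,
        are geographically clustered, and no alternative explanation is recorded.
        Each case is an illustrative dict with 'symptom_category', 'x', 'y', and
        an optional 'alternative_explanation'."""
        if not cases:
            return False
        symptoms = {c["symptom_category"] for c in cases}
        cx = sum(c["x"] for c in cases) / len(cases)
        cy = sum(c["y"] for c in cases) / len(cases)
        clustered = all(math.hypot(c["x"] - cx, c["y"] - cy) <= max_cluster_radius_miles
                        for c in cases)
        no_other_explanation = not any(c.get("alternative_explanation") for c in cases)
        return len(symptoms) == 1 and clustered and no_other_explanation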

The ability of the PHS surveillance tools to detect possible water contamination depends on the health-
seeking behavior of exposed individuals, which in turn depends on the nature and timing of symptoms
produced by the contaminant. The following model assumptions affect the data inputs to the PHS model,
as well as the manner in which the PHS model processes data:

    •   A percentage of the exposed population experience symptoms, and pursue health-seeking
        behavior such as calling 911 (and subsequently being transported to the ED via EMS),
        transporting themselves to the ED, calling their primary care physician or calling DPIC.  There is
        also a percentage of individuals who "do nothing" to seek healthcare, which decreases as
        symptom level increases. For each health-seeking behavior, a specific time delay occurs
        between the time  of symptom onset and the time that individuals pursue healthcare.
    •   All health-seeking behavior actions recorded in the model which become inputs to the PHS
        model are related to water contamination.
            o  Health-seeking behavior was parameterized based on a review of relevant peer-reviewed
               literature, input from subject matter experts, and an exercise conducted in which
               respondents selected their likely actions when experiencing symptoms of different types
               of illnesses.
    •   Individuals seek healthcare  more aggressively when experiencing fast onset, more severe or
        highly unusual symptoms.
    •   During non-business hours, individuals cannot schedule appointments with primary care
        physicians.
    •   The number of available toxicosurveillance specialists at DPIC varies depending on the time
        of day, according to their normal business hours and non-business hours capacity.
    •   The EMS response and transport time was parameterized using ambulance response data
        provided by CFD, as this is a precursor step that occurs prior to the time that EMS data is
        uploaded and transmitted for analysis by the EMS surveillance tool.
    •   There is some time delay (based on knowledge of the functional PHS component) for data to be
        uploaded and available for analysis by each of the surveillance tools.
    •   Contamination scenarios initiated at high demand times were detectable sooner than scenarios
        that were initiated at low demand times. A seven-hour time delay occurred between the
        scenarios initiated at low demand (12:00 am) and the first exposure event (7:00 am), which
        resulted in a time lag before detection was possible, unlike the scenarios initiated at high demand
        (9:00 am), which could have resulted in exposure soon thereafter at the 9:30 am or 12:00 pm
        exposure events.

The simulated PHS investigation reflects the procedures used by the local health partners and GCWW
personnel to investigate a PHS alert. Investigators assess the underlying case data for clustering and
similar symptom categories as well as possible alternative explanations for the alerts, such as a public
health incident unrelated to water contamination.  The PHS investigation portion of the model assumes:

    •   All cases investigated had similar symptom categories as the simulated PHS system analyzed
       only cases which resulted from exposure to contaminated water.
    •   All cases in an alert were clustered because of the hydraulic connectivity of the contamination
       scenarios.
    •   PHS alerts were investigated immediately upon receipt if the alert details (i.e., similar symptom
       categories and geographic clustering of cases) suggested possible water contamination. This
       assumption was based on the process applied by public health personnel responsible for
       investigating alerts during the evaluation period who quickly reviewed PHS alert details within
       minutes of receipt.  If it was readily apparent that the underlying case details did not suggest
       water contamination, investigation checklists were not typically completed until hours later. In a
       few instances where alert details demonstrated possible water contamination, public health
       personnel responded immediately by activating the communicator protocol to involve all relevant
       partners in the alert investigation.
    •   The communicator protocol was always activated when a PHS alert occurred due to the nature of
       the underlying case data in PHS alerts in model runs (i.e., geographic clustering and similar
       symptom categories).
    •   No other explanations (such as a public health incident unrelated to water contamination) could
       be found for alerts during investigations.

The practical implication of these assumptions is that the alert validation process, once activated,
proceeded to completion because alerts were not ascribed to unrelated public health incidents or to
background variability. All PHS alerts therefore resulted in the determination that water contamination
was possible.

3.4    Forums

Feedback and suggestions from the public health partners and utility personnel on all aspects of the PHS
component were captured during User's Group meetings as well as the lessons learned workshop held in
July 2009.  Information gathered through these forums provided insight regarding which elements of the
component were acceptable to the end users and highlighted others that required modification or
enhancement. Forums consisted of:

    •  Public Health User's Group  Meetings:  Bi-monthly meetings were scheduled for the duration
        of the three-year Cooperative Research and Development Agreement between USEPA and the
        city of Cincinnati. Component design, functionality, and modifications/enhancements were
        discussed during these meetings, including the component modifications summarized in Table 2-4.

    •  Lessons Learned Workshop: The purpose of the lessons learned workshop was to allow the
       User's Group the opportunity to provide feedback regarding the performance, operation, and
       sustainability of the  PHS component during the evaluation period. The group expressed specific
        feedback regarding the strengths and weaknesses of each of the PHS surveillance tools in the
       context of their effectiveness in identifying possible contamination incidents.

3.5    Analysis of Lifecycle Costs

A systematic process was used to evaluate the overall cost of the PHS component over the 20-year
lifecycle of the Cincinnati CWS. The  analysis includes implementation costs, component modification
costs, annual operations and maintenance (O&M) costs, renewal and replacement costs, and the salvage
value of major pieces of equipment at the end of the lifecycle.

Implementation costs include labor and other expenditures (equipment, supplies and purchased services)
for deploying the PHS component. Implementation costs were summarized in Water Security Initiative:
Cincinnati Pilot Post-Implementation System Status (USEPA, 2008b), which was used as a primary data
source for this analysis. In that report, overarching project management costs incurred during the
implementation process were captured as a separate line item.  However, in this analysis, the project
management costs were equally distributed among the six components of the CWS, and are presented as a
separate line item for each component. Component modification costs include all labor and  expenditures
incurred after the completion of major implementation activities in December 2007 that were not
attributable to O&M costs. These modification costs were tracked on  a monthly basis, summed at the end
of the evaluation period, and added to the overall implementation costs.

It should be noted that implementation costs for the Cincinnati CWS may be higher than those for other
utilities given that this project was the  first comprehensive, large-scale CWS of its kind and had no
experience base to draw from. Costs that would not likely apply to future  implementers (but which were
incurred for the Cincinnati CWS) include overhead for EPA and its contractors, cost associated with
deploying alternative designs and additional data collection and reporting requirements. Other utilities
planning for a similar large-scale CWS installation would have the benefit of lessons learned and an
experience base developed through implementation of the Cincinnati CWS.

Annual O&M costs include labor and other expenditures (supplies and purchased services) necessary to
operate and maintain the component and investigate alerts. O&M costs were obtained from maintenance
logs, investigation checklists, and training logs. Maintenance logs tracked the staff time spent
maintaining the PHS component. To account for the maintenance of documents, the cost incurred to
update documented procedures following drills and exercises conducted during the evaluation phase of
the pilot was used to estimate the annualized cost.  Investigation checklists and training logs tracked the
staff hours spent on investigating alerts and training, respectively. The total O&M costs were annualized
by calculating the sum of labor and other expenditures incurred over the course of a year.

Labor hours for both implementation and O&M were tracked over the entire evaluation period.  Labor
hours were converted to dollars using estimated local labor rates for the different institutions involved in
the implementation or O&M of the PHS component.

The renewal and replacement costs are based on the cost of replacing major pieces of equipment at the
end of their useful life. The useful life of PHS equipment was estimated using field experience,
manufacturer-provided data and input from subject matter experts. Equipment was assumed to be
replaced at the end of its useful life over the 20-year lifecycle of the Cincinnati CWS.  The salvage value
is based on the estimated value of each major piece of equipment at the end of the lifecycle of the
Cincinnati CWS. The salvage value was estimated for all equipment with an initial value greater than
~$1,000.  Straight line depreciation was used to estimate the salvage value for all major pieces of PHS
equipment based on the lifespan of each item.
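The straight-line salvage calculation can be sketched as follows; the replacement-cycle treatment and the example values are illustrative assumptions rather than figures from the cost analysis.

    def salvage_value(initial_cost, useful_life_years, lifecycle_years=20):
        """Remaining value of an equipment item at the end of the 20-year CWS
        lifecycle, assuming straight-line depreciation and replacement at the
        end of each useful life."""
        if initial_cost <= 1000:
            return 0.0                      # salvage estimated only for items over ~$1,000
        years_into_cycle = lifecycle_years % useful_life_years
        if years_into_cycle == 0:
            return 0.0                      # item reaches the end of its life at year 20
        annual_depreciation = initial_cost / useful_life_years
        return initial_cost - annual_depreciation * years_into_cycle

    # Example: a $5,000 item with a 7-year useful life is 6 years into its third
    # replacement cycle at year 20, leaving roughly $714 of salvage value.
    print(round(salvage_value(5000, 7)))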

All of the cost parameters described above (implementation costs, component modification costs, O&M
costs, renewal and replacement costs, and salvage value) were used to calculate the total lifecycle cost for
the PHS component, as presented in Section 8.7.

      Section 4.0:  Performance of the 911  Surveillance Tool

The following section provides a description of the 911 surveillance tool followed by the results of the
evaluation of this tool. This analysis includes an evaluation of metrics that characterize how the 911
surveillance tool achieves the design objectives described in Section 1.1. Specific metrics are described
for each of the design objectives.

4.1    Description of the 911 Surveillance Tool

Cincinnati Police Department and CFD emergency dispatchers process 911 calls on a regular basis
through Cincinnati's Computer Aided Dispatch system. 911 call detail data is exported from CFD's
source database to the WS application server database to support call  cluster identification by SaTScan™,
an automated surveillance tool implemented as part of the Cincinnati CWS.  Call detail information
information includes the call identifier, the incident type code, the date and time of the  incident (call time
and dispatch time), and the incident location as latitude and longitude coordinates.

New call detail records are queried on a minute-by-minute basis. For call detail records that have incident
type codes matching the subset of selected incident type codes, a corresponding record is stored on the
WS application server database for later analysis by the SaTScan™ algorithm. These call detail records
remain on the server for 28 days, after which they are removed.
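The filtering and retention step can be sketched as follows; the incident codes and record fields shown are illustrative placeholders rather than the codes actually used (the generalized code list appears in Table 4-1).

    from datetime import datetime, timedelta

    # Hypothetical subset of filtered incident type codes, for illustration only.
    FILTERED_CODES = {"SICK_PERSON", "ABDOMINAL_PAIN", "CHEST_PAIN"}

    def update_call_store(store, new_records, now, retention=timedelta(days=28)):
        """Keep call detail records whose incident code matches the filter list
        and drop records older than the 28-day retention window."""
        store.extend(r for r in new_records if r["incident_code"] in FILTERED_CODES)
        store[:] = [r for r in store if now - r["call_time"] <= retention]
        return store

    # Example usage for one polling cycle:
    store = []
    records = [{"incident_code": "SICK_PERSON", "call_time": datetime(2009, 5, 12, 8, 30)},
               {"incident_code": "TRAFFIC", "call_time": datetime(2009, 5, 12, 8, 31)}]
    update_call_store(store, records, now=datetime(2009, 5, 12, 9, 0))
    print(len(store))  # 1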

911 Event Definitions
Local public health partners identified the likely dispatch incident types that may indicate a drinking
water contamination incident; identification of the incident type is based on a caller's complaint(s) as
interpreted by the dispatcher through prompts from Priority Dispatch  System™ integrated with the
Motorola dispatch system. The selected incident type codes assigned for consideration as a possible
water contamination indicator are listed in Table 4-1 below.

Table 4-1. Generalized 911 Incident Codes

  •  Abdominal pain, hemorrhage
  •  Allergies, asthma, breathing problem
  •  Burn/blister (added to filtering March 2009)
  •  Chest pain, heart problem
  •  Choking, seizures, convulsions
  •  Eye problem (removed from filtering March 2009)
  •  Headache
  •  Inhalation (removed from filtering March 2009)
  •  Overdose
  •  Sick person
  •  Possible stroke, fainting, unconscious
  •  Person down (removed from filtering March 2009)
The group of 911 incident codes which are filtered for analysis by the SaTScan™ algorithm was modified
as a result of a coding exercise conducted with 911 operators in February 2009. The exercise included
five unique water contamination scenarios, some with symptoms from exposure via ingestion of
contaminated water, and others with symptoms from dermal or inhalation exposure.  Based on the 911
operator coding results, some incident codes that were not determined to be indicative of possible water
contamination were removed from filtering, and others were added (see Table 4-1).

SaTScan™ Analysis
SaTScan™ is a free software package that analyzes spatial and temporal data using the spatial, temporal
or space-time scan statistics.  The configuration implemented for the PHS component utilizes the space-
time permutation model, which leverages only case data (date of event, location of event, event count).
SaTScan™ analysis is executed hourly on the half-hour; the algorithm executes on a rolling 21-day data
set of 911 call detail records that are extracted from the WS application server database during each
analysis cycle.  The analysis results provide the location and size of likely event clusters across the entire
dataset, sorted by the p-value (statistical probability that a given cluster occurred by chance).
For further information on SaTScan™, the SaTScan™ User Guide (Kulldorff, 2010) and other technical
documentation can be downloaded from the SaTScan™ website.

911 Surveillance Tool Alerting Criteria
A 911 alert is generated only when the alerting criteria established for the PHS component are
met. The initial design included alerting criteria one through three below, to eliminate notifications from
subsequent analyses that duplicated recent results. A fourth alerting criterion was later added in May
2009 to reduce the overall number of alerts. The current alerting criteria for identifying 911 alert
conditions are:
    1.   If the SaTScan™ event detection tool identifies a candidate cluster with p-value less than 0.0250
        for a given day AND
    2.   If PHS has not already generated an alert for the exact cluster center identifier (911 call identifier
        closest to cluster center) for the given day AND
    3.   If PHS does not measure the candidate cluster center point as being within any previously alerted
        cluster(s) for a given day (distance from candidate alert-worthy cluster center to previously-
        alerted cluster center(s) is less than said previously-alerted cluster's radius) AND
    4.   If the event count (number of 911 calls) associated with the candidate cluster is greater than 16.

Lower-level PHS alerts which meet the minimum settings of the SaTScan™ event detection tool (p-value
less than 0.0250), but do not exceed the established event count threshold (16 calls), are displayed on the
Public Health User Interface, but an email alert is not transmitted.  When the alerting criteria are met, an
email notification alert is transmitted to the local public health partners and GCWW. If a 911 alert is
generated, the local health partners work collaboratively with GCWW utility staff to conduct an
investigation to determine whether or not alerts have been generated by other PHS surveillance tools, and
whether the alert is related to a potential drinking water contamination incident or other public health
situation, such as a known disease outbreak.
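The four alerting criteria can be expressed as a single screening check, sketched below in Python; the cluster data structure is an illustrative assumption and the code is not the deployed implementation.

    import math

    def is_alert_worthy(candidate, prior_alerts, p_threshold=0.0250):
        """Apply the four 911 alerting criteria to a candidate SaTScan cluster.
        `candidate` and each entry of `prior_alerts` are hypothetical dicts with
        keys: day, center_id, center (x, y), radius, p_value, call_count."""
        if candidate["p_value"] >= p_threshold:                      # criterion 1
            return False
        for prior in prior_alerts:
            if prior["day"] != candidate["day"]:
                continue
            if prior["center_id"] == candidate["center_id"]:         # criterion 2
                return False
            if math.dist(candidate["center"], prior["center"]) < prior["radius"]:
                return False                                         # criterion 3
        return candidate["call_count"] > 16                          # criterion 4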

4.2     Design Objective:  Spatial Coverage

The spatial coverage is the cumulative area of the distribution system covered by the 911 surveillance tool
with data provided by CFD, which is limited to the city of Cincinnati due to differences in political
jurisdictions (911 calls  originating from outside Cincinnati city limits are processed through the Hamilton
County Communications Center). In order to evaluate how well the 911 surveillance tool met this design
objective, the following metrics were evaluated: area and population coverage, and spatial extent of an
alert. The following subsections define each metric, describe how it was evaluated and present the
results.

4.2.1   Area and Population Coverage

Definition: Area coverage describes how 911 alerts are distributed geographically, while population
coverage characterizes the population residing within the geographic area covered by the 911 surveillance tool.

Analysis Methodology:  911 alerts that occurred during the evaluation period were plotted on a map that
depicts the geographic area covered by the 911  surveillance tool (i.e., city of Cincinnati).

Results: During the evaluation period, a total of 86 alerts were generated by the 911 surveillance tool.
Figure 4-1 illustrates that these alerts were spatially distributed across  the city of Cincinnati, though
clearly more concentrated in areas of higher population density.  Each marker in the figure represents the
geographic center of a single alert. Most alerts  were contained within the spatial area where the
population density is greater than 3,603 individuals per square mile.
[Figure: map of 911 alert locations plotted over population per square mile in the city of Cincinnati; scale in miles]
Figure 4-1.  Area Coverage of 911 Alerts in City of Cincinnati (n=86)


4.2.2   Spatial Extent of an Alert

Definition:  Spatial extent of an alert describes the area covered by a 911 alert. This metric
characterizes the geographic area (size) of each 911 alert.

Analysis Methodology: From the empirical data, the geographic area of an alert was calculated using the
alert radius, which is the distance from the cluster center to the furthest call from the center. The analysis
includes a map representing the spatial area of each 911 alert that occurred during the evaluation period.
Statistical analysis of alert clusters is also presented and includes the alert  area, number of calls and
density of calls per unit area (square mile) per alert. Using relevant contamination scenarios from the
simulation study in which 911 alerts occurred, the average radius and area of the first 911 alert was
calculated for each contaminant.
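The area and density statistics reported below follow directly from the cluster radius and call count, assuming the alert area is the bounding circle defined by the radius; a minimal sketch:

    import math

    def alert_geometry(radius_miles, call_count):
        """Return the bounding-circle area (square miles) and the call density
        (calls per square mile) for a single 911 alert."""
        area = math.pi * radius_miles ** 2
        density = call_count / area if area > 0 else float("inf")
        return area, density

    # Example: a 9-call alert with a 1-mile radius covers about 3.14 square
    # miles at roughly 2.9 calls per square mile.
    area, density = alert_geometry(1.0, 9)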

Results: Figure 4-2 illustrates the spatial extent (i.e., area) of each of the 911 alerts that occurred during
the evaluation period. The alert area represents the bounding circle of all calls contained in each alert.
Figure 4-2.  Spatial Extent of 911 Alerts (n=86, empirical data)

Table 4-2 includes a statistical analysis of the spatial extent of 911 alerts that occurred during the
evaluation period; the average alert area of 3.56 square miles is small relative to the entire 911 service
area of approximately 77 square miles, and the entire GCWW service area of 354 square miles.  The
range in call density for 911 alerts was 0.03 to 384.25 calls per square mile.  This range illustrates the
upper and lower bounds of sensitivity of the 911 surveillance tool based on the default alerting
parameters.  Tight call clustering is generally necessary for an alert to be generated. This is supported by
the fact that 80% of all 911 alerts encompassed an area less than four square miles in size, or
approximately 5% of the area covered by the 911 surveillance tool and 1% of the overall GCWW
service area.  Furthermore, 33% of 911 alerts covered an area less than one square mile.  The 911
surveillance tool generated an alert on September 14, 2008 during a major windstorm that was caused by
Hurricane Ike. This alert contained the highest number of 911 calls of any alert during the evaluation
period - a total of 34 calls.
Table 4-2.  Statistical Analysis of Spatial Extent of 911 Alerts (n=86, empirical data)

             Alert Area (mi2)    Number of Calls    Density (calls/mi2)
  Average          3.56                 9                  11.02
  Minimum        <0.0001¹               3                   0.03
  Maximum         97.76                34                 384.25

  ¹ The minimum alert area was less than SaTScan's™ minimum distance threshold (minimum radius =
  0.006 miles), which translates to a minimum alert area of 0.0001 mi2.

The histogram presented below (Figure 4-3) demonstrates that the majority of 911 alerts that occurred
during the evaluation period covered small geographic areas (less than five square miles). The 911 alert
with the largest alert area (97.76 square miles) was excluded from the histogram for visualization
purposes.
[Histogram: number of alerts by alert area (sq. miles), bins 0-5 through 26-30]
Figure 4-3.  Histogram of 911 Alert Areas (n=86, empirical data)

Table 4-3 below demonstrates the average radius and area of the first 911 alert for all simulation study
contamination scenarios in which a 911 alert occurred, separated by contaminant. Given that the average
area is noticeably larger for Biological Agents 4, 5 and 7, it is assumed that the underlying cases were
noticeably more spread out for the 911 alerts that occurred in scenarios involving these biological agents
compared to the scenarios involving chemicals. Symptom progression for these contaminants is much
slower than the chemical contaminants, allowing for a greater spread of the contaminant throughout the
distribution system prior to detection and changes in distribution and/or consumption patterns to prevent
additional exposures.

When compared to the average area of invalid alerts in the empirical data (3.56 square miles), the average
area of 911 alerts for the toxic chemicals and biological agents in the simulation study is comparable,
though the alert area for three of the biological agents was orders of magnitude larger (likely for the same
reasons as described above). Another reason that the average alert area was larger for some of the
contaminants is that many of the contamination scenarios spread widely throughout GCWW's
distribution system, which resulted in a significant geographic distribution of affected individuals, and
therefore alerts composed of cases spread across broader geographic areas.

Table 4-3. Average Radius and Area of First 911 Alert by Contaminant (simulation study data)

  Contaminant           Scenarios Detected    Average Radius (miles)    Average Area (mi2)
  Toxic Chemical 1              41                     1.41                    6.25
  Toxic Chemical 2              44                     0.72                    1.63
  Toxic Chemical 3              44                     0.79                    1.96
  Toxic Chemical 4              46                     1.57                    7.74
  Toxic Chemical 5              44                     0.95                    2.84
  Toxic Chemical 6              47                     1.67                    8.76
  Toxic Chemical 7              45                     1.24                    4.83
  Toxic Chemical 8              48                     1.33                    5.56
  Biological Agent 1            43                     1.09                    3.73
  Biological Agent 2            15                     0.67                    1.41
  Biological Agent 3            51                     1.73                    9.40
  Biological Agent 4            48                     3.68                   42.54
  Biological Agent 5            36                     3.64                   41.62
  Biological Agent 6             3                     1.37                    5.90
  Biological Agent 7             6                     4.95                   76.98

4.2.3  Summary

911 alerts during the evaluation period were concentrated in areas with greater population densities.  In
addition, alert areas were relatively compact, with 80% of alerts encompassing an area less than four
square miles. Analysis of the 911 alerts in the simulation study demonstrated that the average area was
comparable to the empirical data for the toxic chemicals and biological agents (~3 - 8 square miles),
though the alert area for three of the biological agents was orders of magnitude larger. The simulation
data supports the hypothesis that case clustering will be apparent in alerts that occur during contamination
incidents, and possibly more so in scenarios involving a chemical contaminant that causes rapid symptom
onset.

4.3    Design Objective:  Contaminant Coverage

The 911 surveillance tool monitors 911 calls that could signal a public health incident, including water
contamination.  For 911 call data, contaminant coverage is dependent on the health-seeking behaviors
following symptom presentation, as discussed in Section 3.3. In order to evaluate how well the 911
surveillance tool met this design objective, contamination scenario coverage was evaluated. The
following subsection defines the metric, describes how it was evaluated, and presents the results.

4.3.1  Contamination Scenario Coverage

Definition:  Contamination scenario coverage is defined as the ratio of contamination incidents that are
detected to those that are theoretically detectable based on the design of the 911  surveillance tool.
Detectable contamination scenarios include those in which the contaminant injection occurred within the
city limits and those which originated at distribution system attack nodes rather than facility attack nodes.

Analysis Methodology:  Since no water contamination incidents occurred during the  evaluation period,
simulation study results were utilized to quantify this metric.  The ratio of scenarios actually detected to
those that were theoretically detectable (based on the assumptions regarding health-seeking behavior that
were parameterized in  the model) was calculated for each contaminant. Additionally, the average  and
median number of cases at the time of detection was calculated for each contaminant.  Certain
contamination scenarios that were not theoretically detectable were screened out of the analysis including
those that originated at facility attack nodes (which were detected by the ESM component), those which
involved the nuisance chemicals, and scenarios which originated outside of the city limits.
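
A minimal Python sketch of this screening and coverage calculation is shown below; the record fields
(attack node type, contaminant class, city-limits flag, detection flag) are assumptions about the
simulation study output, not its actual schema.

    from collections import defaultdict

    def coverage_by_contaminant(scenarios):
        # Tally theoretically detectable and actually detected scenarios per contaminant.
        stats = defaultdict(lambda: {"detectable": 0, "detected": 0})
        for s in scenarios:
            # Screen out scenarios that are not theoretically detectable:
            # facility attack nodes, nuisance chemicals, and injections outside city limits.
            if s["attack_node"] == "facility":
                continue
            if s["contaminant_class"] == "nuisance":
                continue
            if not s["in_city_limits"]:
                continue
            entry = stats[s["contaminant"]]
            entry["detectable"] += 1
            if s["detected_by_911"]:
                entry["detected"] += 1
        # Percent detected of theoretically detectable scenarios, per contaminant.
        return {c: round(100 * v["detected"] / v["detectable"])
                for c, v in stats.items() if v["detectable"]}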

Results: The 911 surveillance tool detected 80% (n=561) of the contamination scenarios that were
theoretically detectable (n=702). Table 4-4 below shows the detection statistics for the 911 surveillance
tool for each contaminant.

Table 4-4.  911 Detection Statistics

  Contaminant          Scenarios     Scenarios      Percent    Average # Cases at    Median # Cases at
                        Detected    Not Detected   Detected    Time of Detection     Time of Detection
  Toxic Chemical 1         41             7            85%              430                  354
  Toxic Chemical 2         44             0           100%              268                  148
  Toxic Chemical 3         44             0           100%              304                  137
  Toxic Chemical 4         46             2            96%            1,074                  550
  Toxic Chemical 5         44             0           100%              645                  335
  Toxic Chemical 6         47             3            94%            3,923                2,280
  Toxic Chemical 7         45             1            98%              947                  582
  Toxic Chemical 8         48             3            94%            2,417                1,297
  Biological Agent 1       43             0           100%              366                  230
  Biological Agent 2       15            25            38%              158                  150
  Biological Agent 3       51             0           100%          103,304               73,503
  Biological Agent 4       48             3            94%            5,143                4,934
  Biological Agent 5       36            16            69%           17,677               14,949
  Biological Agent 6        3            40             7%              331                  353
  Biological Agent 7        6            41            13%              476                  505

The 911 surveillance tool generally had a high detection rate across almost all contaminants, with 100%
detection for five of the fifteen contaminants and another five above 90%.  Chemical contaminants have a
high detection rate due to the likelihood of people taking action to receive medical treatment when
symptoms progress rapidly and are quite unusual or life-threatening following ingestion of contaminated
water.

The lowest detection rates are associated with biological agents which generally show a slower symptom
onset and do not always progress to severe symptom levels. The slower symptom onset and less urgent
health-seeking behavior provide less opportunity for the 911 surveillance tool to detect a contamination
incident.  Furthermore, two of the biological agents (Biological Agent 6 and Biological Agent 7) were
modeled as causing illnesses from inhalation exposure rather than ingestion.  Due to the design of these
scenarios, wherein inhalation exposure could only occur once per day in the morning during a shower,
there were fewer instances of illness, fewer calls to 911 for medical assistance, and therefore a lower rate
of detection by the 911 surveillance tool.

As shown in Table 4-4, Biological Agent 3 had a 100% detection rate and also the greatest number of
cases at the time of detection. The high number of cases is due to both the extremely low dose required
for symptom onset (a result of the agent's potent toxicity) and a substantial delay prior to symptom onset.  Therefore, in
scenarios that involved Biological Agent 3, the contaminant continued to spread before the first case
became symptomatic. It is likely that nearly all individuals who are exposed to the contaminant will
exceed the low symptom threshold.

4.3.2   Summary

The contamination scenario coverage results from the simulation study demonstrate that the 911
surveillance tool is able to detect both chemical and biological contaminants, with two-thirds of the
contaminants detected in greater than 90% of scenarios.

4.4     Design Objective: Alert Occurrence

Alert occurrence addresses how well the 911 surveillance tool performs by describing the volume of
alerts that occurred and the number of these alerts that were valid (i.e., public health incident, including
water contamination). It should be noted that no valid alerts occurred during the evaluation period of the
911 surveillance tool.  Analyses conducted and presented for the contamination scenario coverage metric
reflect the occurrence of valid alerts in the simulation study (Section 4.3.1). Thus, to characterize this
design objective, invalid alerts were evaluated. The following subsection defines the metric, describes
how it was evaluated and presents the results.

4.4.1   Invalid Alerts

Definition: Invalid alerts include any alert generated by the 911 surveillance tool that is determined not
related to a public health incident, including water contamination, following the alert investigation.

Analysis Methodology: The total number of invalid alerts was calculated for each reporting period, and
is equal to the total number of alerts minus the number of valid alerts.  The number of calls per invalid
alert was calculated and is presented in a histogram.

Results: During the evaluation period, a total of 86 alerts were generated by the 911 surveillance tool,
which were all determined to be the result of background variability. No apparent temporal trend of
invalid alert frequency was observed (Figure 4-4).

During the majority of the evaluation period, any alerts that met the default alerting parameters of the
SaTScan ™ algorithm constituted a 911 alert for the Cincinnati  CWS.  As a result of input received from
the system users stating that the alerting frequency was too high, new alerting criteria were implemented
on May 12, 2009. The impact of this modification is evident in  that only one alert occurred following the
change.  For the purposes of comparison, if the new alerting threshold had been applied to the evaluation data
from before this date, only seven 911 alerts (8%) would have occurred, as illustrated below by the red
bars in Figure 4-4.
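
The following Python sketch illustrates how such a retrospective comparison can be made by re-applying a
call-count criterion (more than sixteen filtered 911 calls, as described in Section 4.4.2) to historical
alert data; the call counts shown are illustrative, not the actual evaluation data.

    # Illustrative re-application of the revised call-count criterion to
    # historical alert data; the call counts below are made up.
    historical_call_counts = [5, 7, 9, 34, 17, 6, 12, 20, 8]

    CALL_THRESHOLD = 16   # alerts now require more than sixteen filtered 911 calls

    hypothetical_alerts = [n for n in historical_call_counts if n > CALL_THRESHOLD]
    print(f"{len(hypothetical_alerts)} of {len(historical_call_counts)} alerts "
          "would still have been generated under the revised criterion")
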
[Bar chart: number of invalid alerts and number of hypothetical invalid alerts (with new alerting criteria) per monthly reporting period; annotation: New Business Rules for Alerting (5/12/09)]
Figure 4-4.  911 Invalid Alerts per Reporting Period (n=86) and with Additional Alerting Criteria
(n=7)

The histogram presented in Figure 4-5 demonstrates the range in number of calls for all invalid alerts that
occurred during the evaluation period.  The majority of alerts contained between five and ten 911 calls.
Over 90% of alerts contained fifteen or fewer calls, all of which were determined to be the result of
background variability.
[Histogram: number of alerts by number of calls per alert, bins 0-5 through 21-25]
Figure 4-5.  Histogram of Number of Calls per Alert (n=86)


4.4.2   Summary

Initially, the only limiting condition on alert notifications was a statistically derived threshold. This
resulted in detection of many statistically significant anomalies that were of little concern to public health
officials because there were so few cases in most alerts.  In May 2009, an additional condition was
imposed on the alert notifications limiting alerts to anomalies with greater than sixteen filtered 911 calls,
which reduced the annual frequency of alerts by 99% for the 911 surveillance tool.

4.5    Design Objective:  Timeliness of Detection

Timeliness of detection is the time delay for the 911 surveillance tool to detect a potential public health
incident, including water contamination. The timeline begins with initial transmission of 911 call data
and concludes with completion of the alert investigation. Post-exposure factors that would affect the
overall timeliness of detection, such as time to symptom onset and health-seeking behaviors, are
discussed in Section 3.3.  These time delays occur prior to the time for data transmission.

In order to evaluate how well the 911 surveillance tool met this design objective, the following four
metrics were evaluated: time for data transmission, time for event detection, time for alert recognition
and time to investigate alerts.  The following subsections define each metric, describe how it was
evaluated, and present the results.
4.5.1  Time for Data Transmission
Definition: Time for data transmission describes the time it takes for 911 records to be available for
analysis.  It includes the time to transmit and filter data, as recorded by 911 dispatcher personnel in the
911 Computer Aided Dispatch system, to the WS application server.

Analysis Methodology:  Each 911 record contains timestamps that can be used to calculate the time
between the initial 911 call and the time it is available for analysis on the WS application. The time for
data transmission was calculated from empirical data on a monthly basis through creation of a Structured
Query Language (SQL) script to run against all records stored on the WS application database.  Statistical
analysis, including the average and range of time for data to transmit to the WS application server, is
presented per month.
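
In the pilot, this calculation was performed with a SQL script run against the WS application database;
the Python sketch below illustrates the same monthly roll-up under an assumed record layout (the
call_time and upload_time fields are illustrative, not the actual schema).

    from collections import defaultdict
    from datetime import datetime

    def transmission_stats(records):
        # records: dicts holding the time the call was entered ("call_time") and the
        # time the filtered record reached the WS application server ("upload_time").
        by_month = defaultdict(list)
        for rec in records:
            minutes = (rec["upload_time"] - rec["call_time"]).total_seconds() / 60
            by_month[rec["call_time"].strftime("%Y-%m")].append(minutes)
        return {month: {"average": sum(v) / len(v), "minimum": min(v), "maximum": max(v)}
                for month, v in by_month.items()}

    # Example record: a call entered at 2:00 pm and uploaded at 3:05 pm (65 minutes)
    print(transmission_stats([{"call_time": datetime(2008, 11, 25, 14, 0),
                               "upload_time": datetime(2008, 11, 25, 15, 5)}]))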

Results:  The average time for data transmission of 911 call  records from time of call to upload to the WS
application server ranged from 45 to 1,706 minutes during the evaluation period. As depicted in Figure
4-6, the data transmission time was typically between 45 and 100 minutes during most reporting periods.
Occasional long delays in data transmission were caused by network outages which caused downtime of
the interface that transmits call records from the CFD server to the WS application server. Until this
interface is manually restarted, data transmission cannot occur.  Specifically, one notably long period of
interface downtime (~9 days) occurred between November 25, 2008 and December 4, 2008, which was
the result of network instability.  During this time period, transmission of all records from the CFD server
to the WS application server was impeded.  This event noticeably increased the average transmission time
for the November 2008 reporting period.

During two reporting periods later in the evaluation timeline, longer data transmission times also
occurred. In the February 2010 reporting period, the 911 interface experienced a seven day outage which
delayed data transmission. Later, in the April 2010 reporting period, the utility's 911 web-services
subscription expired, which caused a five-day delay in data transmissions between May 1, 2010, and May
6, 2010.
[Chart: average time for data transmission (minutes, log scale) per monthly reporting period]
Figure 4-6.  911 Surveillance Tool Average Time for Data Transmission

This metric illustrates that there is some time delay between the time that 911 calls are placed and
uploaded by call operators into source systems, and the time it takes for the data to be transferred to the
WS application server for filtering.  Functionally, this time delay means that 911 alerts may not be
generated until approximately one hour after call volumes increase if individuals are exposed to
contaminated water.

4.5.2   Time for Event Detection

Definition:  Time for event detection describes the time required for the 911 surveillance tool to generate
an alert using the SaTScan™ algorithm after data has been transmitted to the WS application server. This
is the time for analysis of data and generation of a result by the SaTScan™ algorithm applied to 911 data.

Analysis Methodology: Time for event detection is calculated as the difference between job start and job
finish.  Statistical analysis, including average and range of time for event detection, is presented per
month.

Results: As depicted in Figure 4-7, the average time for event detection for the 911 surveillance tool
ranged from 0 to 1.14 minutes. With the exception of the initial reporting period, the time for event
detection was consistently less than 0.6 minutes. This metric  illustrates that once 911 call data is filtered
and available for analysis, the SaTScan™ algorithm functions with notable efficiency to process data and
generate alert notifications.
[Chart: average time for event detection (minutes) per monthly reporting period]
Figure 4-7. 911 Surveillance Tool Average Time for Event Detection

It should be noted that the scale at which event detection process duration is captured (minutes) is
affected by the Windows operating system and its native tools. While the actual mean duration is
probably in the range of 40 to 45 seconds (based on investigating details within the SaTScan™ log files),
the high-level utilities provided by Windows used to capture start/stop times of the event detection cycle
do not generate time values at the resolution of seconds. As a result, duration appears to be zero minutes
when approximately 0.75 minutes is likely a more accurate value.
4.5.3   Time for Alert Recognition
Definition: Time for alert recognition quantifies the time it takes public health personnel (i.e.,
investigators) to recognize the email alert and begin the alert investigation, as determined from empirical
data.  For the  911 surveillance tool, this portion of the timeline begins when an alert is generated by the
SaTScan™ algorithm and notification is sent via email to public health personnel, and ends when public
health personnel recognize receipt of the alert.

Analysis Methodology: Statistical analysis (average, median and range) of time for alert notification
was performed for each month, as well as the evaluation period as a whole.

Results: Figure 4-8  demonstrates the average time to recognize 911 alerts per month.  In many cases, the
average time was affected by the time  of day that alerts were produced.  When alerts occurred after-hours
(weekdays 5:00 pm to 9:00 am the next morning) or on the weekend, a 10- to 40-hour time lag occurred
before the health partners were able to recognize and investigate the alerts.  One outlier was excluded
from the October 2008 reporting period, as a significant time delay occurred prior to recognition of one
alert due to the public health partners being out of the office.
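
The after-hours effect can be illustrated with a simple classification of alert notification times. The
Python sketch below assumes the business-hours window described above (weekdays 9:00 am to 5:00 pm); it
is not part of the WS application.

    from datetime import datetime

    def is_after_hours(alert_time: datetime) -> bool:
        # Weekends, or weekdays before 9:00 am or at/after 5:00 pm, are treated
        # as after-hours for alert recognition purposes.
        if alert_time.weekday() >= 5:          # Saturday = 5, Sunday = 6
            return True
        return alert_time.hour < 9 or alert_time.hour >= 17

    # An alert generated on a Friday at 6:30 pm would not be recognized
    # until the next business day at the earliest.
    print(is_after_hours(datetime(2008, 10, 3, 18, 30)))   # True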

Following implementation of the additional alerting criteria in May 2009, only one 911 alert occurred
(April 2010).  Time for recognition is not reported for this alert as a formal investigation was not
completed. Note: Asterisks in Figure 4-8 indicate that no data was available either due to an alert
investigation not being conducted, or that no alerts occurred during that reporting period.  See Figure 4-4
for additional data on alert occurrence.
[Bar chart: average time to recognize 911 alerts (hours) per monthly reporting period; asterisks indicate periods with no data]
Figure 4-8. Average Time to Recognize 911 Alerts

During the first six months of the evaluation period, public health personnel were not expected to
complete alert investigations in real-time; therefore, some months do not display an average even though
alerts occurred, as alert investigation information was not available. In some instances, investigations
could have been delayed due to unavailability of the Public Health User Interface as indicated by
comments on investigations completed during the same timeframe.

Statistics for time to recognize alerts over the entire evaluation period are shown in Table 4-5. There is a
notably broad range in times to recognize alerts, from 4 minutes (0.06 hours) to 157.48 hours. As
previously noted, long delays were often due to alerts issued after-hours or inaccessibility of the Public
Health User Interface.

Table 4-5.  911 Alert Recognition Time (hours)

  Parameter    Time (hours)
  Average          21.08
  Median            9.83
  Minimum           0.06
  Maximum         157.48
4.5.4  Time to Investigate Alerts
Definition: Time to investigate alerts includes the portion of the incident timeline that begins with the
recognition of a 911 alert, and ends with a determination regarding whether or not contamination is
possible.  The time to investigate alerts, as captured in the investigation checklists, is based on the nature
of the alert details and the investigation procedures that must be implemented before concluding that the
alert is not indicative of a possible contamination incident. For PHS drills and the simulation study, this
data represents the timeline from the contaminant injection to the time that contamination is deemed
possible when the 911 alert investigation is concluded. As noted in Section 3.3, no time delay for alert
recognition was parameterized in the CWS model as it was assumed that alert investigations occurred
immediately upon receipt of alerts based on the nature of the underlying case data (i.e., similar symptom
categories and case clustering).

Analysis Methodology:  Analysis of empirical data (i.e., invalid alerts) was performed to calculate the
average, median, and range of times as listed in investigation checklists. Information on investigation
time from PHS drills was used to describe time to investigate simulated 911 alerts that were ultimately
determined to be possible contamination incidents.

Timeline data gathered from investigation of invalid alerts and during drills and exercises was used to
parameterize the investigation time for PHS alerts in the  simulation study. Simulation study timeline data
(which, as noted above, started at the time of contaminant injection) was evaluated to illustrate the
timeliness of detection overall for the 911 surveillance tool and for scenarios initiated at periods of high
or low demand. Percentile values were calculated to examine the distribution of data and were examined
in a box-and-whisker plot. Average detection times were calculated for individual contaminants, as well
as for scenarios initiated at periods of high or low demand for individual contaminants.
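
A minimal Python sketch of the percentile summary underlying a box-and-whisker plot is shown below; the
sample detection times are illustrative, not simulation study results.

    import statistics

    def five_number_summary(detection_times):
        # detection_times: minutes from contaminant injection to a Possible determination.
        data = sorted(detection_times)
        q1, q2, q3 = statistics.quantiles(data, n=4)   # 25th, 50th and 75th percentiles
        return {"minimum": data[0], "q1": q1, "median": q2, "q3": q3,
                "maximum": data[-1], "average": statistics.mean(data)}

    print(five_number_summary([91, 150, 271, 560, 1891, 2865, 6000]))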

Results:  The results presented below are arranged in order of empirical data, drill data and simulation
study data.

Empirical Data
During the evaluation period, time to investigate  alerts ranged from 3 to 60 minutes. Figure 4-9 is a
graphical representation of the average time for 911 alert investigations per month. The average
investigation time for 911 invalid alerts decreased over the course of the evaluation period from
approximately 30 to 10 minutes per alert it represents an  overall improvement of time necessary to
investigate alerts that are not due to public health incidents, including water contamination. This
decreased time is likely because public health partners investigating 911 alerts became more familiar with
investigation procedures over time, and therefore required less time to identify invalid alerts. It should be
noted that system users were not required to investigate alerts until the beginning of the June 2008
reporting period as the system was still in a development and testing phase between January and June
2008.
[Chart: average invalid alert investigation time (minutes) per monthly reporting period; asterisks indicate periods with no data]
Figure 4-9. 911 Average Invalid Alert Investigation Time (n=39, empirical data)

Following implementation of the additional alerting criteria, which adjusted the alerting threshold, only
one 911 alert was produced (April 2010).  Because an investigation checklist was not completed by the
public health partners for this alert, the investigation time was not determined.

System users can expect to investigate approximately one to two 911 alerts per year and expend 5 to 10
minutes per investigation.  In addition, it should be noted that the May 2009 component enhancements
facilitate more efficient investigations, as alert data no longer needs to be manually translated from
latitude/longitude to address format, and because more detailed patient data is included in the 911 alert
notifications (i.e., incident code, age, gender of patient).

Statistics for time for alert investigation over the entire evaluation period are shown in Table 4-6.

Table 4-6. 911 Invalid Alert Investigation Time (minutes, empirical data)

  Parameter    Time (minutes)
  Average          12.33
  Median           10
  Minimum           3
  Maximum          60
Drill Data
A simulated 911 alert was used to practice alert investigation procedures during PHS Drill 2; this
investigation involved the examination of 911 case data of individuals who had ingested water
contaminated with a toxic chemical. This exercise provided an estimate of the time to reach a Possible
contamination determination, which occurred after approximately 1.5 hours. While this provided a
reasonable estimate of how long it would take to investigate valid 911 alerts, the actual investigation time
during a "live" incident may vary depending on other factors (e.g., personnel availability). The timeline
for PHS Drill 2 is presented below in Figure 4-10, which displays some of the key points of the
investigation that was undertaken in PHS Drill 2.  The simulated 911 alert was injected 30 minutes after
the start of the drill, which was initiated by a simulated DPIC alert.
  00:00 - DPIC receives reports of GI symptoms at a day care; DPIC begins investigation
  00:20 - DPIC determines water contamination is likely
  00:26 - DPIC activates the alert communicator
  00:30 - 911 alert received
  00:39 - WQM station alert received
  00:42 - Communicator discussion begins
  01:01 - WUERM considers contamination possible and suspects a chemical contaminant
  01:33 - Consensus determination: contamination Possible
Figure 4-10.  PHS Drill 2 Timeline (911 Alert)
Simulation Study Data
Figure 4-11 demonstrates the overall timeliness of detection statistics for the 911 surveillance tool and
for scenarios initiated at periods of low and high demand, using percentile values to illustrate the
distribution of data in a box-and-whisker plot. Scenarios initiated at high demand times were detected
sooner than scenarios initiated at low demand times due to the design of the CWS model. A seven-hour
time delay occurred between the scenarios initiated at low demand (12:00 am) and the first exposure
event (7:00 am), which resulted in a detection time lag, unlike the scenarios initiated at high demand
(9:00 am), which could have resulted in exposure soon thereafter at the 9:30 am or 12:00 pm exposure
events.
[Box-and-whisker plot: detection time (minutes) for the 911 surveillance tool overall and for low demand and high demand scenarios; markers indicate average values]
Figure 4-11. 911 Surveillance Tool Timeliness of Detection (simulation study data)

There were a total of 561 scenarios detected by the 911 surveillance tool with an average detection time
of 1,644 minutes (approximately one day), as shown in Table 4-7. As noted above, scenarios initiated at
high demand were detected sooner, with an average detection time of 1,095 minutes, whereas scenarios
initiated at low demand had an average detection time of 2,865 minutes.

Table 4-7.  911 Surveillance Tool Timeliness of Detection (minutes, simulation study data)

  Scenarios      Count    Average    Median
  Total           561      1,644       271
  Low Demand      174      2,865     1,891
  High Demand     387      1,095        91
Average timeliness of detection for the 911 surveillance tool by contaminant is presented below in Figure
4-12, where contaminants are arranged in increasing order of timeliness of detection. For each
contaminant, the overall average is presented as well as the average value for high and low demand
scenarios. This figure demonstrates that scenarios involving chemical contaminants were detected rapidly
(within hours) whereas a greater time delay occurred before scenarios involving biological agents were
detected (days to weeks). This difference is due to the longer time to symptom onset for the
biological agents.  Furthermore, unlike the toxic chemicals, where exposed individuals always proceeded
from mild to moderate and finally severe symptoms, a certain percentage of individuals exposed to the
biological agents did not proceed beyond mild or moderate symptom levels, and therefore pursued less
urgent health-seeking behavior. Note that the differences in timeliness of detection for high or low
demand scenarios narrows as the overall timeline from contaminant injection to detection increases; for
biological agent scenarios, there is little to no difference in timeliness of detection between high and low
demand scenarios.
[Chart: average timeliness of detection (minutes, log scale) by contaminant, overall and for high and low demand scenarios]
Figure 4-12.  911 Surveillance Tool Timeliness of Detection (simulation study data)

4.5.5  Summary
Timeliness of the 911 surveillance tool was affected primarily by time for alert recognition by public
health personnel. As mentioned in Section 4.5.1, 911 call data is typically available one hour after entry
into the Priority Dispatch System™.  Time for alert recognition varied significantly, and was affected by
the time when 911 alerts were sent (e.g., recognition was substantially delayed for alerts generated after
business hours or over weekends).  In contrast, time for alert generation was extremely short, with
SaTScan™ consistently generating results in less than one minute.

Public health personnel became more efficient in 911 alert investigations following the initial
implementation period; by the end of the evaluation period,  invalid alert investigations were usually
completed within 10 minutes, although this investigation time may be longer for valid alerts based on
performance observed during PHS  drills. Participants in the lessons learned workshop indicated that the
speed of information (alerts containing location data) from SaTScan™ should be valuable for detecting
contaminants with rapid symptom onset, especially compared to existing capabilities prior to the
Cincinnati CWS.

Simulation study data analysis showed that for most chemical contamination scenarios, 911 call counts
became high enough to meet or exceed the alerting criteria within only a few hours following contaminant
injection.  In contrast, detection of the biological agents occurred within a day or in some cases a week or
more after contaminant injection.

4.6    Design Objective:  Operational Reliability

Analysis of the operational reliability of the 911 surveillance tool addresses the physical operation of the
surveillance tool and quantifies the percent of time that the 911 surveillance tool was working as
designed.  In order to evaluate how well the 911 surveillance tool met this design objective, the
availability metric was evaluated.  The following subsection defines the metric, describes how it was
evaluated, and presents the results.

4.6.1  Availability

Definition: Availability is the amount of time the 911 surveillance tool is functional and accessible. For
the 911 surveillance tool to generate available data, 911 data had to be successfully transmitted from
CFD's Computer Aided Dispatch server to the WS application server, filtered, analyzed via the
SaTScan™ event detection tool, and any alerts had to be displayed on the Public Health User Interface.

Analysis Methodology:  Overall downtime hours of the 911 surveillance tool per reporting period, due to
downtime of alert notifications, data collection, or event detection were calculated. The measurement of
availability is related to downtime hours; total downtime was subtracted from possible data hours in each
reporting period to calculate percent availability.
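
The availability calculation reduces to simple arithmetic, sketched below in Python; the example downtime
value is taken from the November 2008 data collection outage described in the results, and the 720-hour
reporting period length is an assumption used only for illustration.

    def percent_availability(downtime_hours, possible_hours):
        # Availability = (possible data hours - total downtime) / possible data hours.
        return 100.0 * (possible_hours - downtime_hours) / possible_hours

    # Example: 208.8 hours of data collection downtime in an assumed 720-hour
    # (30-day) reporting period yields roughly 71% availability.
    print(round(percent_availability(208.8, 720), 1))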

Results: Most downtime events for the 911 surveillance tool were attributed to the inhibition of 911 data
collection  due to periodic network instability (see blue bars in Figure 4-13), which prevented data
transmission from the CFD server to the WS application server. As is apparent in the figure, the
lengthiest period of data collection downtime occurred during the November 2008 reporting period,
which was caused by network instability (208.8 hours of downtime of the Regional Computing Center)
that prevented data collection. Some data collection downtime during the September 2008 reporting
period was the result of power outages and network instability caused by a windstorm that resulted in loss
of electricity to 90% of Cincinnati residents for up to four days. Data collection downtime also occurred
during the March and April 2009 reporting periods due to occasional connection losses with the CFD
source database - the cause of which is unknown. One final period of data collection downtime occurred
in the February 2010 reporting period when the 911 interface  experienced a seven day outage.

Two instances of event detection downtime occurred in the April 2008 and May 2008 reporting periods
due to unavailability of the WS application server database. Alert notification and event detection
downtime  occurred during the April 2010 reporting period when the utility's 911 web-services
subscription expired, causing  a five day delay in data transmissions.
[Stacked bar chart: downtime hours per monthly reporting period by category (Alert Notification, Data Collection, Event Detection)]
Figure 4-13.  911 Surveillance Tool Downtime (Events > 1 hour)

During the course of the evaluation period, availability generally exceeded 90% for the 911 surveillance
tool, as shown in Figure 4-14.  The lowest overall value for availability occurred during the April 2010
reporting period when the utility's 911 web-services subscription expired, causing a five day delay in data
transmissions. When data transmission is inhibited, subsequent event detection processing on the most
current data cannot occur.  The average availability over the entire evaluation period was 93%.
[Chart: percent availability per monthly reporting period]
Figure 4-14.  911 Surveillance Tool Availability

4.6.2  Summary

Functionally, high availability percentages during the evaluation period demonstrate the overall stability
and operational reliability of the 911 surveillance tool. Availability improved during the period,
particularly after utility personnel established an automated monitoring tool which provides notification
when the WS application server needs to be restarted if network instability causes it to shut down.
Implementation of daily checks of the data collection system significantly reduced downtime by
effectively identifying and correcting system issues, which previously could persist unnoticed for days.
     Section 5.0:  Performance of the EMS Surveillance Tool

The following section provides a description of the EMS surveillance tool followed by the results of the
evaluation of this tool. This analysis includes an evaluation of metrics that characterize how the EMS
surveillance tool achieves the design objectives described in Section 1.1. Specific metrics are described
for each of the design objectives.

5.1    Description of the EMS Surveillance Tool

EMS/EARS Data
The EMS surveillance tool relies on the data-sharing partnership between CFD and GCWW. CFD EMTs
and paramedics are considered data providers because they record patient information in an electronic
format using EMS Tablets (i.e., portable tablet computers). Information recorded includes patient age,
gender, chief medical complaint, incident zip code, medical observations made by the provider and
medication and procedures provided. Upon returning to the firehouse, the EMS Tablet automatically
uploads the patient data to a central CFD server via wireless routers installed as part of the Cincinnati
CWS.

CFD provides access to a de-identified copy of patient data to the WS application server, located at
GCWW, where one or more syndromes are assigned to the provider impression.  EMS run records that
indicate  a possible incident based on their syndrome assignment are filtered into the system for further
analysis. The filtered data are stored on the WS database server at GCWW, which will store three years
of EMS  run data.

EARS
EARS is a free software package provided by CDC that executes within either SAS or Microsoft Excel.
EARS analyzes the EMS run data hourly using cumulative sum (CUSUM) algorithms to detect an
increase in reporting activity. An increase above the EARS threshold generates an alert (EARS refers to
any alert as a flag). The alert is based upon one of three CUSUM algorithms. The C1 algorithm has
the lowest sensitivity and is most useful for surveillance systems monitored daily. The C2 algorithm has
a greater sensitivity than the C1 algorithm and can assist in identifying the length of an outbreak's rapid
acceleration period. The C3 algorithm has the greatest sensitivity and can identify aberrations that
gradually increase over short periods. The threshold for the C1 and C2 algorithms is three standard deviations above
the baseline mean. The threshold for the C3 algorithm is two standard deviations above the mean, compared
against the CUSUM values for the previous three days (i.e., the three days prior to an alert).  Table 5-1 explains
the three algorithms; "t" refers to the current day of the analysis run.

Table 5-1. CUSUM Interpretation Table

  Alert   Baseline (days)                  What it Flags                   Event Detection
  C1      t-7 through t-1                  First alert to an acute event   Start of outbreak
  C2      t-9 through t-3                  High consecutive values         Length of outbreak
  C3      t-9 through t-3, with threshold  Gradual increase over           Start of outbreak
          based on 3-day average           short time
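
A simplified Python sketch of the C1-style check (the current daily count compared against the mean of
the previous seven days plus three standard deviations) is shown below. EARS performs the actual CUSUM
calculations within SAS or Microsoft Excel, so this is only an approximation of the logic, and the example
counts are illustrative.

    import statistics

    def c1_flag(daily_counts):
        # daily_counts: chronologically ordered counts of filtered EMS runs,
        # with the current day (t) last. The baseline is t-7 through t-1.
        baseline = daily_counts[-8:-1]
        mean = statistics.mean(baseline)
        sd = statistics.pstdev(baseline)
        threshold = mean + 3 * sd
        return daily_counts[-1] > threshold

    # Example: a sudden jump on day t relative to a quiet prior week
    print(c1_flag([2, 3, 1, 2, 4, 2, 3, 12]))   # True
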
Prior to deploying the EMS surveillance tool during the Cincinnati CWS, EMS run data was collected for
a fourteen month period in 2006 and 2007 in order to validate and adjust syndrome mapping and to
evaluate the various EARS flags and their frequencies. After consultation with the User's Group, C1 was
determined to be the most appropriate algorithm for the CWS objectives. Furthermore, the User's Group
concluded that C1 flags for specific zip codes would result in too many alerts; as a result, alert
notifications are only sent for C1 flags over the entire geographic area (city of Cincinnati). Other C1
flags are displayed on the User's Interface for reference.

EARS Syndrome Categories
Based on review of the 23 EARS syndrome categories, the local public health partners identified eight
categories that may indicate an incident.  These categories are composed of a variety of patient chief
complaints, which are diagnosed and assigned by the responding EMT. EMS run records with provider
impressions categorized into one of these eight syndrome categories are filtered for analysis by the EMS
surveillance tool. One of these syndromes (water) was a new custom category created as a part of the
Cincinnati CWS, and includes chief complaints that would signal exposure to a variety of contaminants
with rapid symptom onset.  The syndromes are not mutually exclusive, allowing a complaint to be
assigned to more than one syndrome.  The provider impressions and syndrome categories are listed below
in Table 5-2.

Table 5-2. EARS Syndrome Categories and Medical Complaints

  Cardiac (cardiaccat): Angina Pectoris, Cardiac Arrest, Chest Pain/Discomfort, Congestive Heart
  Failure, Dysrhythmia, Hypertension, Hypotension, Myocardial Infarction, Unconscious (unknown etiology)

  Gastrointestinal (gicat): Abdominal Pain (minor), Abdominal Pain (severe), Appendicitis, Dehydration,
  Diarrhea, Food Poisoning, Lower Gastrointestinal (GI) Bleeding, Nausea/Vomiting, Upper GI Bleeding

  Neurological (neurons): Altered Level of Consciousness, Cerebrovascular Accident/Stroke,
  Dizziness/Vertigo, Headache, Numbness/Tingling, Paralysis/Loss of Motion, Seizures/Convulsions
  (unknown), Syncope/Fainting, Transient Ischemic Attack

  Poisoning (poison): Abuse/Dependency, Alcohol Related, Drug Induced Emotional, Drug Overdose, Food
  Poisoning, Hematuria, Ingestion, Inhalation, Renal Failure

  Psychological (psychcat): Abuse/Dependency, Alcohol Related, Anxiety, Behavioral Disorder, Depression,
  Drug Induced Emotional, Drug Overdose, Psychiatric Disorder, Suicide Attempt (not DOA)

  Unexplained: Blank, DOA, Other, Respiratory Arrest, Unconscious (unknown etiology)

  Upper Respiratory (upperresp): Airway Obstruction/Choking, Cold/Flu, Croup, Epiglottitis, Respiratory
  Distress, Respiratory Distress (acute), Respiratory Involvement, Smoke Inhalation

  Water: Abdominal Pain (minor), Abdominal Pain (severe), Altered Level of Consciousness, Diarrhea,
  Dizziness/Vertigo, Ingestion, Nausea/Vomiting, Seizure/Convulsions (febrile), Seizures/Convulsions
  (unknown)
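
Because the syndromes are not mutually exclusive, a single provider impression can map to more than one
category. The abbreviated Python sketch below illustrates such a mapping using a few complaints from
Table 5-2; it is not the WS application's actual mapping logic, and the lists are intentionally incomplete.

    # Abbreviated, illustrative mapping of provider impressions to syndrome
    # categories; syndromes are not mutually exclusive, so one complaint can
    # map to several categories.
    SYNDROME_MAP = {
        "gicat":  {"Diarrhea", "Nausea/Vomiting", "Abdominal Pain (minor)"},
        "poison": {"Ingestion", "Inhalation", "Food Poisoning"},
        "water":  {"Diarrhea", "Nausea/Vomiting", "Ingestion", "Dizziness/Vertigo"},
    }

    def assign_syndromes(provider_impression):
        return {cat for cat, complaints in SYNDROME_MAP.items()
                if provider_impression in complaints}

    print(assign_syndromes("Diarrhea"))   # {'gicat', 'water'}
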
EARS analysis uses a three month rolling baseline of the filtered EMS run records and analyzes the data
by syndrome and by syndrome stratified on the zip code level. The analysis process normally executes in
fifteen minutes (Hutwagner L., 2003).

EMS Surveillance Tool Alerting Criteria

Based on the current alerting criteria for the EMS surveillance tool, an EMS alert will only be generated
when all of the following EMS alert conditions are met (a simplified sketch of this logic follows the list).
Condition number four (event count / affected zip count) was applied in May 2009 based on feedback from the User's Group.
    1.  If EARS sets C1 CUSUM flag for entire data set (Location = "_ALL_") for a given syndrome on
       a given day AND
    2.  If PHS has not already generated an alert for specified location and syndrome and day AND
    3.  If EARS sets a C1 CUSUM flag within 48 hours of the event date (configuration based on event
       detection system summary output content) AND
    4.  If the ratio (Event count / Affected zip code count) is greater than  1.5 for a specified syndrome
       and day (this condition would be met when there is at least some spatial clustering of the EMS
       runs).
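
The Python sketch below expresses the four conditions as a single boolean check; the parameter names are
illustrative, not the actual WS application fields.

    def ems_alert_should_fire(c1_flag_all_locations, already_alerted,
                              hours_since_event, event_count, affected_zip_count):
        # Returns True only when all four EMS alerting conditions are satisfied.
        return (c1_flag_all_locations                          # condition 1
                and not already_alerted                        # condition 2
                and hours_since_event <= 48                    # condition 3
                and event_count / affected_zip_count > 1.5)    # condition 4

    # Example: 12 filtered EMS runs across 7 zip codes (ratio ~1.7) flagged by C1
    print(ems_alert_should_fire(True, False, 6, 12, 7))   # True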

When the  alerting criteria are met, an email notification alert is transmitted to the local public health
partners and GCWW. Lower-level EMS alerts that are categorized as Cl alerts by the EARS tool but do
not exceed the alerting criteria are displayed on the Public Health User Interface, but an email alert is not
transmitted. If an EMS alert is generated, the local health partners investigate internally and work
collaboratively with GCWW utility staff to conduct an investigation and determine whether the alert is
related to an actual public health incident, including water contamination.  Investigation procedures for
PHS alerts are fully described in the Cincinnati Pilot Operational Strategy.

5.2    Design  Objective:  Spatial Coverage

The spatial coverage is the cumulative area of the distribution system covered by the EMS surveillance
tool, which is limited to the city of Cincinnati due to jurisdictional limits (CFD only serves the city of
Cincinnati). In order to evaluate how well the EMS surveillance tool  met this design objective, the
following  metrics were evaluated: area and population coverage, and spatial extent of an alert.  The
following  subsections define each metric, describe how it was evaluated, and present the results.

5.2.1  Area and Population Coverage

Definition: Area coverage describes how alerts are distributed geographically, while population
coverage depicts the geographic area covered by the EMS surveillance tool.

Analysis Methodology: Zip code data from EMS alerts that occurred during the evaluation period was
plotted on a map that depicts the geographic area covered by the EMS surveillance tool (i.e., city of
Cincinnati). This involved calculating the number of instances that zip codes in the city of Cincinnati were
included in EMS alerts. Additionally, the total number of zip codes per EMS alert was calculated pre-
and post-implementation of the additional alerting criteria described in Section 5.1.
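
A minimal Python sketch of the zip code tally is shown below; the alert records and zip codes are
placeholders, not actual alert data.

    from collections import Counter

    def zip_code_alert_counts(alerts):
        # alerts: iterable of alerts, each given as the set of zip codes it includes.
        counts = Counter()
        for zip_codes in alerts:
            counts.update(set(zip_codes))   # count each zip code once per alert
        return counts

    # Illustrative alerts (the zip codes are placeholders)
    alerts = [{"45202", "45219"}, {"45202", "45206", "45229"}, {"45219"}]
    print(zip_code_alert_counts(alerts).most_common(2))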

Results:  During the evaluation period, a total of 77 alerts were generated by the EMS surveillance tool.
Figure 5-1 illustrates the number of instances that zip codes in the city of Cincinnati were included in
EMS alerts; this figure demonstrates that centrally-located zip codes in the downtown area of Cincinnati
were included in alerts in more instances than non-central zip code locations.  The underlying population
density is not presented in this map as some zip codes included in the geographic area extended beyond
the city limits but can be compared with the population map in Figure 2-1.
Figure 5-1.  EMS Alerts per Zip Code (City of Cincinnati, n=77)

It should be noted that the number of instances that a zip code was included in an alert is overestimated given
that zip code data for each alert that occurred prior to the component modification on May 12, 2009 (see
Table 2-4) was not captured in real-time and was analyzed retrospectively. During the retrospective
analysis, EMS daily run data were filtered to estimate the number of runs specific to an alert event.  On
days where an EMS alert occurred, all EMS runs for that day and their associated zip codes were included
in this analysis regardless of whether the EMS run occurred before or after the alert notification, as this
information was unavailable. Therefore, EMS runs and zip codes were included in an alert analysis even
though they did not contribute to the alert because they occurred after the alerting criteria were satisfied.
Zip code data for EMS alerts that occurred after May 12, 2009 is accurate as the location data was
captured in real-time.

In Figure 5-2, the histogram shows the number of zip codes involved in EMS alerts prior to the
implementation of new alerting criteria (event count / affected zip codes > 1.5) on May 12, 2009. On
average, a total of seven zip codes were involved in EMS alerts.
[Histogram: number of alerts by number of zip codes per alert]
Figure 5-2. Number of Zip Codes in EMS Alerts Prior to Alerting Modification (n=62)
Figure 5-3 below captures the number of zip codes in EMS alerts post-implementation of the new
alerting criteria (event count / affected zip codes > 1.5), where the average number of zip codes per alert
was 7.6. Given that only fifteen EMS alerts have occurred since the implementation of the additional
alerting logic, there is insufficient data to allow accurate comparison of the number of zip codes involved
in alerts pre- and post-implementation of the new alerting criteria.
[Histogram: number of alerts by number of zip codes per alert]
Figure 5-3. Number of Zip Codes in EMS Alerts Post Alerting Modifications (n=15)

5.2.2   Spatial Extent of an Alert
Definition:  Spatial extent of an alert describes the area covered by an EMS alert. Essentially, it is the
geographic area (size) of each alert as measured by the number of zip codes in each alert. For example,
an alert containing ten different zip codes has a greater spatial extent than an alert containing three zip
codes.

Analysis Methodology: A statistical analysis of the average, minimum and maximum number of EMS
runs and number of zip  codes in EMS alerts was conducted using empirical data and is presented in both
tabular form and geographically.

Results: Table 5-3 includes a statistical analysis of the EMS alert data for the evaluation period pre-
implementation of the new alerting criteria in May 2009. The average ratio of event count (i.e.,  EMS
runs) to affected zip code was 1.29 for 62 EMS alerts which occurred in this time period. This ratio is
slightly less than the cut-off ratio of 1.5 imposed as a new alerting criterion in May 2009. Additionally,
the average number of EMS runs involved in EMS alerts during this evaluation period was 9.06, with an
average of seven zip codes involved in an alert.
Table 5-3. EMS Alert Statistics (January 16, 2008 - May 12, 2009, n=62)

             Number of Events (EMS Runs)    Number of Zip Codes    Event Count / Affected Zip Codes
  Average              9.06                          7                          1.29
  Minimum              3                             2                          1
  Maximum             19                            15                          2.5
Table 5-4 presents EMS alert statistics for alerts that occurred post-implementation of the new alerting
criteria in May 2009. The effect of the new alerting logic is apparent, as the number of EMS runs per
alert increased by nearly 40% with little change in the average number of zip codes. The intended effect
of the new alerting criterion is to require that a higher ratio  of EMS runs occur per zip code in order to
limit alerts to those with some degree of spatial clustering.  For example, local public health partners
would likely be more concerned about an alert signaling a high volume of runs in one  zip code as opposed
to an alert that demonstrated fewer EMS runs in a variety of zip codes.  The change in alerting criteria
reduced the average number of alerts per year from 47 to slightly fewer than 15.

Table 5-4. EMS Alert Statistics (May 13, 2009 - June 15, 2010, n=15)

             Number of Events    Number of    Event Count /
             (EMS Runs)          Zip Codes    Affected Zip Codes
  Average    12.87               7.60         1.72
  Minimum    9                   4            1.55
  Maximum    18                  11           2.25
During the evaluation period, the 77 EMS alerts that occurred included a total of 858 EMS runs. As
mentioned previously, a retrospective analysis was conducted to map the EMS runs that were relevant to
each EMS alert.  Therefore, the total number of runs may be a slight overestimate. For the purpose of
demonstrating the spatial extent of alert data, the retrospective analysis of EMS runs is useful to illustrate
the total sum of EMS runs per zip code for all EMS alerts (see Figure 5-4).  Similar to the previous map
demonstrating a one-year period of EMS  data (Figure 5-1), this map illustrates that higher volumes of
EMS runs are apparent in the central area of the city.
Figure 5-4. Total EMS Runs per Zip Code Associated with Alerts During Evaluation Period (City of
Cincinnati, n=77)

Figure 5-5 is slightly different from the map presented in Figure 5-4; this figure depicts the number of
instances that a zip code affected by an EMS alert contained multiple EMS runs (i.e., > 1 run). The intent
of the map is to demonstrate zip codes where some spatial clustering occurred based on EMS runs in
EMS alerts.  For example, the centrally located zip code denoted by dark blue shading was involved in
greater than sixteen alerts with multiple EMS runs.  In comparison, the zip code on the southeast section
of the city denoted by yellow shading was involved in fewer than three alerts that contained multiple
EMS runs.
Figure 5-5. Total EMS Alerts per Zip Code with Multiple EMS Runs (City of Cincinnati, n=77)


5.2.3  Summary

Zip codes included in EMS alerts were spatially distributed across the city of Cincinnati, though zip
codes in downtown areas of Cincinnati were involved in EMS alerts more frequently than zip codes in
non-central locations. In addition, higher ratios of EMS runs per zip code for alerts were also apparent
in central downtown areas of the city.  Both the total number of EMS alerts and the degree of zip code
clustering were greater in areas with higher population density.

5.3    Design Objective: Contaminant Coverage

The EMS surveillance tool monitors EMS runs that could signal a public health incident, including water
contamination.  For EMS run data, contaminant coverage is dependent on the health-seeking behaviors
following symptom presentation, as discussed in Section 3.3. In order to evaluate how well the EMS
surveillance tool met this design objective, contamination scenario coverage was evaluated. The
following subsection defines the metric, describes how it was evaluated, and presents the results.

5.3.1  Contamination Scenario Coverage
Definition: Contamination scenario coverage is defined as the ratio of contamination  incidents that are
actually detected to those that are theoretically detectable based on the design of the EMS surveillance
tool. Detectable contamination scenarios include those in which the contaminant injection occurred
within the city limits, and those which originated at distribution system attack nodes rather than facility
attack nodes.

Analysis Methodology: Since no water contamination incidents occurred during the evaluation period,
simulation study results were utilized to quantify this metric. The ratio of scenarios actually detected to
those that are theoretically detectable (based on the assumptions regarding health-seeking behavior that
were parameterized in the model) was calculated for each contaminant.  Additionally, the average and
median number of cases at the time of detection was calculated for each contaminant. Certain
contamination scenarios that were not theoretically detectable were screened out of the analysis including
those that originated at facility attack nodes (which were detected by the ESM component), those which
involved the nuisance chemicals, and scenarios which originated outside of the city limits.
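
To illustrate how this metric could be tabulated, the following sketch (in Python) computes per-contaminant
detection ratios and case-count statistics from simulation study output. It is not the study's actual code;
the record structure and field names (for example, "attack_node_type" and "cases_at_detection") are
hypothetical placeholders.

    # Illustrative sketch (not the study's actual code) of contamination scenario coverage.
    from statistics import mean, median

    def scenario_coverage(scenarios):
        """Compute per-contaminant detection statistics for theoretically detectable scenarios."""
        stats = {}
        for s in scenarios:
            # Screen out scenarios that are not theoretically detectable by the EMS tool:
            # facility attack nodes, nuisance chemicals, and injections outside the city limits.
            if s["attack_node_type"] == "facility":
                continue
            if s["contaminant_class"] == "nuisance":
                continue
            if not s["within_city_limits"]:
                continue
            rec = stats.setdefault(s["contaminant"], {"detected": 0, "not_detected": 0, "cases": []})
            if s["detected"]:
                rec["detected"] += 1
                rec["cases"].append(s["cases_at_detection"])
            else:
                rec["not_detected"] += 1
        # Summarize: percent detected plus average and median cases at the time of detection.
        summary = {}
        for contaminant, rec in stats.items():
            total = rec["detected"] + rec["not_detected"]
            summary[contaminant] = {
                "percent_detected": 100.0 * rec["detected"] / total if total else None,
                "avg_cases": mean(rec["cases"]) if rec["cases"] else None,
                "median_cases": median(rec["cases"]) if rec["cases"] else None,
            }
        return summary
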

Results: The EMS surveillance tool detected 69% (n=487) of the contamination scenarios that were
theoretically detectable  (n=702).  Table 5-5 below shows the detection statistics for the EMS surveillance
tool for each contaminant.

Table 5-5. EMS Detection Statistics

                       Scenarios    Scenarios       Percent     Average # Cases at    Median # Cases at
  Contaminant          Detected     Not Detected    Detected    Time of Detection     Time of Detection
  Toxic Chemical 1     44           4               92%         1,365                 1,162
  Toxic Chemical 2     25           19              57%         394                   318
  Toxic Chemical 3     25           19              57%         690                   605
  Toxic Chemical 4     38           10              79%         3,014                 2,749
  Toxic Chemical 5     39           5               89%         2,332                 1,874
  Toxic Chemical 6     45           5               90%         18,981                13,362
  Toxic Chemical 7     46           0               100%        2,238                 1,966
  Toxic Chemical 8     39           12              76%         8,206                 7,286
  Biological Agent 1   33           10              77%         2,465                 1,408
  Biological Agent 2   21           19              53%         205                   119
  Biological Agent 3   51           0               100%        173,838               142,537
  Biological Agent 4   32           19              63%         16,589                18,466
  Biological Agent 5   45           7               87%         22,732                19,176
  Biological Agent 6   4            39              9%          428                   403
  Biological Agent 7   0            47              0%          N/A                   N/A

The EMS surveillance tool had a high detection rate above 75% for nine of the fifteen contaminants and
another four above 50%. The lowest detection rates by EMS are associated with Biological Agent 6 (9%)
and Biological Agent 7 (not detected). These two contaminants were modeled as producing illness
through the inhalation exposure route, and thus there was only one exposure event in the morning (7:00
am showering event) that could have produced cases. Fewer exposed individuals resulted in a lower
number of requests for EMS transport which contributed to lower detection rates.

While the EMS detection percentages were high for many contaminants, they were somewhat lower when
compared to the 911 surveillance tool detection percentages.  This is likely due to the fact that not all
individuals who call 911 will receive EMS transport. In the model, some patients will decide on self-
transport if an EMS unit has not arrived after a certain amount of time.  This results in fewer EMS cases
being logged and available for statistical analysis, whereas a case record is always recorded for all
individuals who call 911.  Secondly, for some  of the toxic chemicals with a rapid symptom onset time,
coupled with a short time delay prior to death following exposure, individuals might have died after
calling 911 and prior to the time that an EMS unit arrived. This pattern likely resulted in fewer EMS
cases being logged in comparison to 911, and therefore lower detection rates.

Biological Agent 3 was detected in 100% of theoretically detectable scenarios, and also had the greatest
number of cases at the time of detection.  Due to the extremely low dose required for symptom onset for
this contaminant, it is likely that nearly all individuals exposed to the contaminant will experience
symptoms.  This contributes to a large number of cases overall, more calls to 911, and consequently a
higher number of EMS runs.  Furthermore, this contaminant has a substantial delay prior to symptom
onset, which allows the contaminant to spread widely throughout the distribution system, producing many
exposures before the first case becomes symptomatic.

5.3.2   Summary

The contamination scenario coverage results from the simulation study demonstrate that the EMS
surveillance tool is able detect contamination scenarios involving a variety of different types of
contaminants.  In comparison to the 911  surveillance tool, detection rates were somewhat lower overall
due to fewer EMS cases being logged and available for statistical analysis.

5.4     Design Objective: Alert Occurrence

Alert occurrence addresses how well the EMS surveillance tool performs by describing the frequency of
invalid and valid alerts, and quantifying how accurate the EMS surveillance tool is at discriminating
between valid alerts and normal variability in the underlying data. In order to evaluate how well the EMS
surveillance tool met this design objective, the following metrics  are evaluated: invalid alerts and valid
alerts. The following subsections define each metric, describe how it was evaluated  and present the
results.

5.4.1   Invalid Alerts

Definition:  Invalid alerts include any alert generated by the EMS surveillance tool that, following an
alert investigation, is determined to be unrelated to a public health incident, including water contamination.

Analysis Methodology:  The total number of invalid alerts is equal to the number of total alerts minus
the number of valid alerts.  These  alerts were quantified per monthly reporting period as well as analyzed
statistically by frequency, syndrome type, and probability of syndrome per zip code.  In addition,
geographic analysis of invalid alerts was performed to discern any possible spatial patterns.

Results: During the evaluation period, a total of 72 invalid alerts were generated by the EMS
surveillance tool.  Prior to implementation of a new alerting criterion in May 2009, an average of
approximately four alerts was generated per month (median = 2), all of which were determined to be the
result of background variability.  The impact of the new alerting criterion is apparent, as far fewer alerts
have occurred post-May 2009 (Figure 5-6). No temporal trend in alert frequency was observed when  the
data was plotted in time-series format according to reporting period.
Figure 5-6. EMS Invalid Alerts per Reporting Period (n=72)

From January 2008 to May 2009, EMS run data that met the alerting criteria assigned for the EARS C1
algorithm constituted an EMS alert. As a result of input received from the system users indicating that the
alerting frequency was too high, new alerting criteria for sending alert notifications were implemented on
May 12, 2009. The impact of this modification is evident, as fewer alerts occurred after the May 16, 2009
reporting period. If the new alerting criteria (indicated by the red bars) were applied to the pre-May 2009
alert data for purposes of comparison, only ten EMS alerts would have occurred, compared to the
actual 62.
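
The two-stage alerting logic described above can be sketched as follows. The 7-day baseline and the
3-standard-deviation threshold reflect the commonly published EARS C1 formulation and are assumptions
here rather than values taken from the Cincinnati CWS configuration; the event count to affected zip code
ratio criterion follows the May 2009 modification discussed in Section 5.1.

    # Illustrative sketch of a C1-style aberration test plus the spatial-clustering criterion.
    from statistics import mean, pstdev

    def ears_c1_flag(daily_counts, threshold=3.0):
        """Return True if today's count is aberrant under a C1-style test.

        daily_counts: daily syndrome counts, oldest first, today last;
        the baseline is the 7 days immediately preceding today (assumes at least 8 days of data).
        """
        today = daily_counts[-1]
        baseline = daily_counts[-8:-1]
        mu = mean(baseline)
        sigma = pstdev(baseline)
        c1 = (today - mu) / max(sigma, 1.0)  # guard against a zero-variance baseline
        return c1 > threshold

    def send_ems_alert(daily_counts, event_count, affected_zip_codes):
        """Apply both the C1 test and the event count / affected zip code ratio criterion."""
        if not ears_c1_flag(daily_counts):
            return False
        # Post-May 2009 criterion: require some degree of spatial clustering of EMS runs.
        return (event_count / affected_zip_codes) > 1.5
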

The histogram presented in Figure 5-7 demonstrates the range in number of EMS runs for all alerts that
occurred between January 16, 2008 and May 12, 2009. Alerts that occurred after May 12, 2009 were
excluded from the analysis given that additional alerting criteria were applied to the data.
Figure 5-7. EMS Runs Per Alert (n=62)

Figure 5-8 shows data from the 15 EMS alerts which occurred between May 13, 2009 and the end of the
evaluation period on June 15, 2010. After the implementation of the new alerting criteria, the number of
EMS runs per alert increased from an average of 9.06 to 12.87.
Figure 5-8. EMS Runs Per Alert (n=15)

The histogram presented in Figure 5-9 compares the ratio of EMS runs to affected zip codes for all alerts
that occurred before and after the new alerting criteria. Most of the alerts prior to May 12, 2009
contained an event count to zip code ratio of less than 1.5. Alerts occurring after May 12, 2009 had an
additional alerting criterion applied, as discussed in Section 5.1; following the implementation of this new
criterion, the event count to zip code ratio had to be greater than 1.5 in order to issue an alert. Fifteen
alerts occurred following the new criterion, as compared to 62 prior to May 12, 2009.
Figure 5-9. Ratio of EMS Runs/Affected Zip Codes per Alert: Pre-updated Alerting Criteria (n=62)
and Post-updated Alerting Criteria (n=15)

Figure 5-10 demonstrates the percentage of EMS alerts per syndrome category for all alerts that occurred
during the evaluation period. The four highest categories were cardiac, neurological, poison and upper
respiratory.
Figure 5-10.  Percentages of Syndromes for EMS Alerts (n=77)

Alerts for the entire evaluation period, categorized by syndrome are depicted in Figure 5-11. In the 16
months preceding the new alerting criteria, all eight syndrome categories were represented by the 62
alerts that occurred.  In the next seven months, eleven alerts occurred, which fell into six of the eight
syndrome categories (i.e., no poison or psychcat syndrome category alerts).
Figure 5-11.  Syndrome Categories for EMS Alerts (n=77)

Background knowledge of trends and occurrence of alerts by zip code may be useful during alert
investigations. EMS syndrome categories from 2009 were compared to the total EMS runs to determine
whether specific syndrome categories would be statistically likely or unlikely to occur in each of 35 zip
codes. The results highlight why cognizance of community demographics and provider behavior is
important when investigating public health alerts.

Most zip codes had statistically high or low probabilities for at least one syndrome category; only eight
zip codes did not have statistically significant probabilities for any syndromes.  Two zip codes, 45224 and
45229, had statistically high probabilities in four syndrome categories. While the reasons for this are not
certain, for the 45224 zip code this could be caused by high EMS utilization due to an aged population.
Zip code 45224 had a median age of 40.2 and 20.2% of its population is older than 65, compared to the
city of Cincinnati with a median age of 35.7 and 12% of its population over the age of 65.  In the case of
45229, poverty status may play a role in increased EMS utilization.  Zip code 45229 has 26.8% of
families living in poverty, compared to the city average  of 20.9%.

How providers code patient records, as well as an understanding of syndrome definitions, is also important to consider
when interpreting results. For example, four of the five  zip codes with lower probabilities of EMS runs
for neurological complaints were on the west side of the city; these same zip codes were more likely to
have EMS runs coded as unexplained, indicating that neurological complaints may have been coded as
unexplained in this area. When considering syndrome definitions, zip codes with high probabilities of
poison runs generally had a high probability of psychcat calls as well; this occurred in 75% of zip codes
with statistically high poison probabilities. This is likely due to the large overlap  in provider impressions
from chief complaints in the two syndrome categories (Table 5-2).
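
The report does not specify the statistical test used to flag zip codes with high or low syndrome
probabilities; the following hypothetical sketch shows one plausible approach, a two-proportion
comparison of a zip code's share of runs in a syndrome against the citywide share.

    # Hypothetical sketch only; the exact test used in the evaluation is not stated in this report.
    from math import sqrt

    def flag_zip_syndrome(zip_syndrome_runs, zip_total_runs,
                          city_syndrome_runs, city_total_runs, z_crit=1.96):
        """Return 'high', 'low', or None for a zip code's share of one syndrome category."""
        p_zip = zip_syndrome_runs / zip_total_runs
        p_city = city_syndrome_runs / city_total_runs
        # Standard error of the zip-code proportion under the citywide rate.
        se = sqrt(p_city * (1.0 - p_city) / zip_total_runs)
        if se == 0:
            return None
        z = (p_zip - p_city) / se
        if z > z_crit:
            return "high"
        if z < -z_crit:
            return "low"
        return None
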

5.4.2   Valid Alerts

Definition: A valid alert is a result generated by the EMS surveillance tool indicating a public health
incident, including possible water contamination, is occurring in the location where the alert is observed.

Analysis Methodology: The total number of valid alerts was characterized qualitatively using empirical
data reports. Because of the low number of valid alerts, no statistical analysis was performed. Analyses
conducted and presented for the contamination scenario coverage metric reflect the occurrence of valid
alerts in the simulation study (Section 5.3.1).

Results: A total of five separate valid alerts attributable to three different public health incidents
occurred during the evaluation period; none of these alerts were due to possible water contamination. The
public health incidents included a heat-related event (July 2009), the H1N1 influenza outbreak
(September 2009), and an allergy-related event (May 2010).  These determinations were made following
standard alert investigation procedures, which in some cases included consultation with other members of
the User's Group.

During the July 2009 reporting period, two EMS alerts occurred; symptoms associated with these alerts
included chest pain,  fainting and weakness/fatigue.  An email alert from ODH received by local public
health personnel during this time indicated a rise in ED cases of "weakness" in older adults that did not
trigger an EpiCenter alert. Because two alerts occurred  at once along with the email from ODH, the
communicator protocol was activated to discuss the matter with public health partners.  No GCWW
system repairs were occurring, and DPIC reported no unusual cases. These alerts coincided with the
occurrence of hot and humid weather; hence, they were determined to be a heat-related public health
incident.

Two EMS alerts occurred in the September 2009 reporting period  consistent with the increase in illness in
the population due to the H1N1 influenza outbreak. This corresponds to several valid alerts observed in
ED patient data via surveillance with the EpiCenter tool, as discussed in Section 6.4.2.  These alerts were
in the upperresp syndrome, and over half of cases indicated "cold/flu" as their chief complaint. One-third
of the cases were college age (i.e., 18 to 25 years old), consistent with recent H1N1 activity at the time.
Hence, these alerts were classified as public health incidents due to an infectious disease outbreak.
Finally, one EMS alert occurred in the March 2010 reporting period which was investigated and
determined to be linked to a rise in allergy-related illness, as it occurred when pollen counts were
extremely high in the Cincinnati area.

5.4.3   Summary

During the evaluation period, a total of 72 invalid alerts occurred, with the highest percentage occurring
in the cardiac and poison syndrome categories. The new alerting criterion imposed on alert notifications,
which limits alerts to anomalies that exceed a ratio of 1.5 for event count to affected zip codes, reduced
the annual frequency of alerts by approximately 70% for the EMS surveillance tool.  A total of five valid
EMS alerts occurred that were linked to public health incidents.

5.5     Design Objective: Timeliness of Detection

Timeliness of detection is the time it takes to detect a potential public health incident, including water
contamination via the EMS surveillance tool, beginning with EMS data transmissions and ending with the
conclusion of the alert investigation.  Post-exposure factors that would affect the overall timeliness of
detection,  such as time to symptom onset and health-seeking behaviors,  are discussed in Section 3.3.
These time delays occur prior to the time for data transmission.

In order to evaluate how well the EMS surveillance tool met this design objective (timeliness of
detection), the following four metrics were evaluated:  time for data transmission, time for event
detection, time to recognize alerts, and time to investigate alerts. The following subsections define each
metric, describe how it was evaluated and present the results.

5.5.1   Time for Data Transmission

Definition: Time for data transmission describes the time it takes for EMS records to be available for
analysis. It includes the time to transmit the data recorded by EMT personnel to the WS application
server and the time to filter it.

Analysis Methodology: Each EMS record contains timestamps that can be used to calculate the time
between the initial EMS incident and the time it is available for analysis on the WS application server.
Statistical analysis, including the average and range of time for data to transmit to the WS application
server, was calculated per month.
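
A minimal sketch of this calculation is shown below. The record field names ("incident_time" and
"available_time") are placeholders, and grouping by calendar month is a simplification of the monthly
reporting periods, which begin on the 16th of each month.

    # Illustrative sketch of the data transmission metric: elapsed time between the EMS
    # incident timestamp and the timestamp at which the record became available on the
    # WS application server, summarized per month.
    from collections import defaultdict

    def transmission_stats(records):
        """Return average/minimum/maximum transmission time (minutes) per (year, month)."""
        by_month = defaultdict(list)
        for rec in records:
            minutes = (rec["available_time"] - rec["incident_time"]).total_seconds() / 60.0
            key = (rec["incident_time"].year, rec["incident_time"].month)
            by_month[key].append(minutes)
        return {
            key: {
                "average": sum(vals) / len(vals),
                "minimum": min(vals),
                "maximum": max(vals),
            }
            for key, vals in by_month.items()
        }
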

Results: The average time for data transmission of EMS run records from time of run until upload to the
CFD server and transfer to the WS application server ranged from ~515 - 1,100 minutes (approximately 9 to 18 hours)
per month during the evaluation period (Figure 5-12).
Figure 5-12. EMS Surveillance Tool - Average Time for Data Transmission
The significant delay associated with this metric is a function of secondary data use.  Under certain
circumstances, CFD personnel hold EMS run records on the wireless tablets until extensive
documentation is completed before allowing transfer to the EMS server (and, in turn, to the WS
application server). As a result, these held records are not available for retrieval by the WS application
server, resulting in extended transmission times from the initial EMS run. This factor is the main cause of
the 12.2 hour average time for data transmission, and is not a function of technological limits or errors. If
a modification were implemented to the EMS System software to allow transmission of a run record's
applicable data subset from the wireless tablet only for WSI use, the data transmission time would likely
be reduced significantly.

5.5.2   Time for Event Detection

Definition: Time for event detection describes the time required for the EMS surveillance tool to
generate an alert using the EARS algorithms after data has been transmitted to the WS application server.
It is based on the time it takes the EARS algorithms  applied to EMS data to compute a result.

Analysis Methodology:  The time for event detection was calculated as the difference between job start
and job finish for the EARS algorithms. Statistical analysis, including average and range of time for
event detection, is presented per month.

Results: As depicted in Figure 5-13, the  average time per month for event detection ranged from 12.6 to
16.5 minutes. This metric illustrates the efficiency and consistency of the EMS surveillance tool in
analyzing data once it has been transmitted from the CFD source system and is available for analysis.
Figure 5-13.  EMS Surveillance Tool - Average Time for Event Detection


5.5.3   Time for Alert Recognition
Definition: Time for alert recognition quantifies the time it takes public health personnel (i.e.,
investigators) to recognize the email alert and begin the alert investigation, as determined from empirical
data. For the  EMS surveillance tool, this portion of the timeline begins when an alert is generated by the
EARS algorithms and notification is sent via email to public health personnel, and ends when public
health personnel recognize receipt of the alert.

Analysis Methodology: The time for alert recognition was calculated as the difference between the start
time of the alert and the start time of the investigation. Statistical analysis of time for alert recognition
was performed for each month,  and over the evaluation period as a whole.

Results: Because GCWW and  the local partners were not required to respond to alerts in real-time prior
to June 2009,  data gathered between January 2008 and June 2009 in Figure 5-14 was not an accurate
representation of a typical alert  recognition timeline. In some cases, alerts were retrospectively
investigated in batches to systematically analyze potential alert causes rather than to detect an event in
real-time.  In other cases, the average time to recognize an alert was affected by the time of day that alerts
were produced.  When alerts occurred after-hours or on the weekend, a 10- to 20-hour time lag occurred
before the health partners were able to recognize and investigate the alerts. Though a total of eight EMS
alerts occurred post-October 2009, a formal investigation was not completed for any of these alerts.
Therefore, data for alert recognition time is not available for analysis. Note:  Asterisks in Figure 5-14
indicate that no data was available either due to an alert investigation not being conducted, or that no
alerts occurred during that reporting period. See Figure 5-6 for additional detail on alert occurrence.
Figure 5-14. Average Time to Recognize EMS Alert

Analysis of the time for EMS alert recognition for the entire evaluation period shows a wide range, from
0.05 to 61.3 hours.  As mentioned earlier, lags in alert recognition are sometimes due to the occurrence of
alerts on weekends  or after-hours. The overall alert recognition statistics are presented in Table 5-6.

Table 5-6. EMS Alert Recognition Time (hours)

  Parameter    Time (hours)
  Average      15.30
  Median       8.79
  Minimum      0.05
  Maximum      61.30

5.5.4  Time to Investigate Alerts
Definition: Time to investigate alerts includes the portion of the incident timeline that begins with the
recognition of an EMS alert, and ends with a determination regarding whether or not contamination is
Possible.  The time to investigate alerts, as captured in the investigation checklists, is based on the nature
of the alert details and the investigation procedures that must be implemented before concluding that the
alert is not indicative of a possible contamination incident. For PHS drills and the simulation study, this
data represents the timeline from the contaminant injection to the time that contamination is deemed
Possible. As noted in Section 3.3, no time delay for alert recognition was parameterized in the CWS
model as it was assumed that alert investigations occurred immediately upon receipt of alerts based on the
nature of the underlying case data (i.e., similar symptom categories and case clustering).

Analysis Methodology:  Analysis of empirical data (invalid alerts) was performed to calculate average,
median, and range of times as listed in investigation checklists. The time to investigate valid alerts was
described using several public health incidents that occurred during the evaluation period as well as
timeline data collected from PHS drills.

Timeline data gathered from investigation of EMS valid alerts and PHS drills was used to parameterize
the investigation time for EMS alerts in the simulation study.  Simulation study timeline data (which, as
noted above, started at the time of contaminant injection) was evaluated to illustrate the timeliness of
detection overall and for scenarios initiated at periods of high or low demand. Percentile values were
calculated to examine the distribution of data, and are presented in a box-and-whisker plot. Average
detection times were calculated for individual contaminants, as well as for scenarios initiated at high or
low demand for individual contaminants.
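
The percentile summary behind the box-and-whisker presentation could be computed as in the following
sketch; the scenario field names and demand-period labels are assumptions for illustration.

    # Illustrative sketch of the percentile summary for simulation study detection times
    # (minutes from contaminant injection to a Possible determination).
    from statistics import quantiles, mean, median

    def detection_summary(detection_minutes):
        """Five-number-style summary for one group of detection times (needs 2+ values)."""
        q1, _, q3 = quantiles(detection_minutes, n=4)
        return {
            "count": len(detection_minutes),
            "minimum": min(detection_minutes),
            "25th": q1,
            "median": median(detection_minutes),
            "75th": q3,
            "maximum": max(detection_minutes),
            "average": mean(detection_minutes),
        }

    def summarize_by_demand(scenarios):
        """Summarize overall and for low- and high-demand scenario groups."""
        groups = {"overall": [s["minutes_to_detection"] for s in scenarios]}
        for label in ("low", "high"):
            groups[label] = [s["minutes_to_detection"] for s in scenarios
                             if s["demand_period"] == label]
        return {label: detection_summary(vals) for label, vals in groups.items() if vals}
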

Results: The results presented below are arranged in order of empirical data, drill data, and simulation
study data.

Empirical Data
Because only a few public health incidents occurred during the evaluation period, the majority of
investigation times represent time to investigate invalid alerts. Figure 5-15 shows the average invalid
alert investigation time for each monthly reporting period.  System users were not required to actively
investigate alerts until the beginning of the June 2008 reporting period as the system was still in  a
development and testing phase between January and June 2008.  Two outliers are included in this dataset
(an EMS alert investigation on March 14, 2008 which took 17.17 hours and the investigation on April 24,
2008 which took 22.25 hours) which are thought to be the result of instances where personnel were
interrupted during the investigation process by regular job duties, and completed the investigation
checklists many hours  after the investigation had been initiated.  Though a total of eight EMS alerts
occurred after October 2009, a formal investigation was not completed for any of these alerts. Therefore,
data for alert investigation time was not available for analysis. Note: Asterisks in Figure 5-15 indicate
that no data was available either due to an alert investigation not being conducted, or that no alerts
occurred during that reporting period.

This bar chart shows the average invalid alert investigation time for each monthly reporting period
beginning on January 16, 2008 and ending on May 16, 2010. Average investigation times during these
reporting periods were approximately 43 minutes; however, there were two outlying investigation times of
17.17 hours and 22.25  hours. Data was unavailable for several reporting periods, and many were clustered
at the end of the evaluation period, from October 16, 2009 through May 16, 2010, due to either an alert
investigation not being conducted, or no alerts occurring during that reporting period.
Figure 5-15.  EMS Average Invalid Alert Investigation Time (n=43, empirical data)

The time to investigate valid alerts (public health incidents) varied during the evaluation period.  It took
100 minutes (1 hour, 40 minutes) to investigate an alert that was due to heat-related illness in July 2009.
During this time, local public health personnel received two EMS alerts; the communicator was then
activated, during which it was determined the alerts were due to heat-related symptoms. Nearly an hour
of this time elapsed between the activation of the communicator and investigation close out. For the two
alerts representing the H1N1 outbreak in September 2009, public health personnel were able to rule out
possible water contamination as a cause of the alerts in less than 20 minutes because knowledge of the
ongoing outbreak helped to determine these alerts were due to H1N1  illness.

Statistics for time for alert investigation over the entire evaluation period are shown in Table 5-7.

Table 5-7.  EMS Invalid Alert Investigation Time (minutes, empirical data)

  Parameter    Time (minutes)
  Average      22
  Median       9
  Minimum      2
  Maximum      153

Drill and Exercise Data
During the evaluation period, simulated EMS alerts were used during the August 22, 2008 PHS drill as
well as the October 2008 full-scale exercise to practice alert investigation. These drills serve as a proxy
for time to investigate alerts caused by possible water contamination. The time to investigate these
simulated alerts was approximately 1.5 hours for the PHS drill, and about one hour during the full-scale
exercise. The differences in these times reflect the variability of scenarios that may occur, as represented
by the different contamination scenarios, as well as other factors that influence alert investigation, such as
personnel availability and clarity of the data.  These factors would also influence the time to complete an
alert investigation during an actual water contamination incident. The timeline for the August 2008 PHS
drill is presented below in Figure 5-16, which displays some of the key points of the investigation
following receipt of a simulated EMS alert.
   00:00  EMS email transmitted
   00:45  LPH verify data coding
   00:48  LPH determine that EMS alert is related to drinking water
   00:54  LPH contact DPIC
   01:14  LPH contact WUERM
   01:18  WUERM determines contamination is Possible
Figure 5-16.  PHS Drill 1 Timeline (EMS Alert)
Simulation Study Data
Figure 5-17 demonstrates the overall timeliness of detection statistics for the EMS surveillance tool and
for scenarios initiated at periods of low and high demand, using percentile values to illustrate the
distribution of data in a box-and-whisker plot.  Scenarios initiated at high demand times were detected
sooner than scenarios initiated at low demand times due to the design of the CWS model.  A seven-hour
time delay occurred between the scenarios initiated at low demand (12:00 am) and the first exposure
event (7:00 am), which resulted in a detection time lag, unlike the scenarios initiated at high demand
(9:00 am), which could have resulted in exposure soon thereafter at the 9:30 am or 12:00 pm exposure
events.
Figure 5-17.  EMS Data Stream Timeliness of Detection (simulation study data)

There were a total of 487 scenarios detected by the EMS surveillance tool with an average detection time
of 4,012 minutes (nearly three days), as shown in Table 5-8. A longer delay was observed for scenarios
detected by the EMS surveillance tool compared to the 911 surveillance tool, likely due to the fact that
EMS cases would be logged after 911 calls were placed, and that fewer EMS cases were logged overall
compared to 911 calls (as discussed in Section 5.3.1). Furthermore, there is a 732 minute time delay for
the EMS data upload before it becomes available for statistical analysis, which contributes to the longer
time delay prior to detection.

Table 5-8. EMS Data Stream Timeliness of Detection (minutes, simulation study data)

  Scenarios      Count    Average    Median
  Total          487      4,012      2,850
  Low Demand     147      4,820      3,390
  High Demand    340      3,663      2,850

Average timeliness of detection for the EMS surveillance tool by contaminant is presented below in
Figure 5-18, where contaminants are arranged in increasing order of timeliness of detection (no data is
presented for Biological Agent 7 as EMS did not detect any scenarios involving this contaminant).  For
each contaminant, the overall average is presented as well as the average value for high and low demand
scenarios.  This figure compares the timeliness of detection of the toxic chemicals and biological agents,
where the chemicals were typically detected within a day or two, while detection of the biological agents
ranged from within a day or two to a week or more after injection of the contaminant. This difference is likely
due to the longer symptom onset time for some biological agents. Unlike the 911 surveillance tool, the
differences in timeliness of detection of various contaminants  by the EMS surveillance tool for high or
low demand scenarios are minor as the overall timeline from contaminant injection to detection is delayed
by the time required for data transmission (~12 hours).  This delay diminishes the impact of differences
between contaminant injection and exposure times in high and low demand scenarios.
Figure 5-18. Average Timeliness of Detection by Contaminant for the EMS Surveillance Tool

5.5.5   Summary

For the EMS surveillance tool, one of the lengthiest processes is the time for data transmission because of
delays that occur in data upload to the EMS server, due to the CFD requirements that must be completed
before closing and uploading an EMS run. Delays can also occur when data is stored in EMS tablets
during multiple runs for many hours before EMTs return to the firehouse and are able to upload the data.
Once EMS run data is filtered and available for analysis, the tool quickly and consistently analyzes the
run data and determines if events meet the algorithm's requirements for producing an alert. Although this
efficiency allows local public health partners and GCWW personnel to quickly obtain alert data and begin
the investigation process in a timely manner, timely recognition of EMS alerts did not always occur.
When alerts occurred after-hours or on the weekend, a 10 to 20 hour time lag occurred before the health
partners started the investigation of the alert. Overall, time to complete EMS invalid alert investigations
stabilized to approximately ten minutes per alert.

Simulation study data analysis showed that for most chemical contamination scenarios, it took one to two
days for EMS run counts to become high enough to exceed the detection thresholds for the relevant
syndrome categories monitored by the EMS surveillance tool. In contrast, weeks elapsed before detection
of some of the biological agents occurred.

5.6    Design Objective: Operational Reliability

Analysis of the operational reliability of the EMS surveillance tool quantifies the percent of time that the
EMS surveillance tool was working as designed. In order to evaluate how well the EMS surveillance tool
met this design objective, the availability metric was evaluated.  The following subsection defines the
metric, describes how it was evaluated and presents the results.
5.6.1  Availability

Definition: Availability is the amount of time the EMS surveillance tool is functional and accessible,
expressed in terms of the percent of usable data hours per reporting period. In order for available data to
be generated for the EMS surveillance tool, data must be successfully loaded from EMS tablets to the WS
application server, filtered, and analyzed using the EARS event detection tool.

Analysis Methodology: Availability is expressed in terms of the percent of usable data hours per
reporting period. The measurement of availability is related to downtime events; the available hours were
calculated by  subtracting the total downtime from possible data hours in each reporting period.  Percent
availability was analyzed per reporting period, as well as for the entire evaluation.
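
A minimal sketch of the availability calculation, assuming downtime events are logged as durations in
hours, is shown below.

    # Minimal sketch: percent of usable data hours per reporting period.
    def percent_availability(period_hours, downtime_events):
        """Availability (%) for one reporting period.

        period_hours: total possible data hours in the reporting period.
        downtime_events: list of downtime durations in hours (data collection,
        event detection, or alert notification outages).
        """
        downtime = sum(downtime_events)
        available = max(period_hours - downtime, 0.0)
        return 100.0 * available / period_hours

    # Example: a 30-day reporting period (720 hours) with 21.6 hours of downtime
    # yields 97% availability, consistent with the average reported below.
    # percent_availability(720.0, [12.0, 9.6])  ->  97.0
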

Results:  Most downtime events (see blue bars in Figure 5-19) for the EMS surveillance tool were
attributed to the inhibition of EMS data collection due to periodic network instability, which prevented
data transmission from the CFD server to the WS application server. The lengthiest period of data
collection downtime occurred during the March  2009 reporting period.  This was the result of a loss of
connectivity with the CFD source database, the cause for which is unknown.  Some data collection
downtime during the September 2008 reporting period was the result of power outages and network
instability caused by a windstorm, which resulted in loss of electricity for 90% of Cincinnati for up to
four days.
Figure 5-19.  EMS Surveillance Tool Downtime (Events > 1 hour)

During the evaluation period, availability generally exceeded 90% for the EMS surveillance tool, with an
average value of 97% availability. Overall, the lowest value for availability occurred during the March
2008 reporting period and was caused by network instability which prevented data collection (Figure 5-
20). When data collection is inhibited, subsequent event detection processing on the most current data
cannot occur.
Figure 5-20.  EMS Surveillance Tool Availability
5.6.2   Summary
The high availability percentages during the evaluation period depict the overall stability and reliability of
the EMS surveillance tool. Availability increased post-July 2009, as utility personnel established an
automated monitoring tool which provides notification when the WS application server needs to be
restarted if network instability causes it to shut down, providing for consistently reliable transfer of data
from the CFD server to the WS application server for filtering and analysis.
 Section 6.0:   Performance of the EpiCenter Surveillance Tool

The following section provides a description of the EpiCenter surveillance tool followed by the results of
the evaluation of this tool. This analysis includes an evaluation of metrics that characterize how the
EpiCenter surveillance tool achieves the design objectives described in Section 1.1. Specific metrics are
described for each of the design objectives.

6.1    Description of the EpiCenter Surveillance Tool

ED hospital admission records were included as a part of the PHS component of the CWS to enhance
situational awareness during events and provide early detection of outbreaks.  These records are managed
and analyzed via the EpiCenter surveillance tool, a syndromic surveillance system operated by the
Situational Monitoring and Event Detection Unit at ODH. Health Monitoring System's (HMS) EpiCenter
replaced the  RODS system starting in March 2008 (HMS, 2009 and ODH, 2009).

Patients arriving in the ED are triaged using reported chief complaint(s); these chief complaints are coded
into an electronic medical  record along with other demographic variables. The electronic records are
uploaded into the EpiCenter system and categorized into syndromes based on chief complaints, and
algorithms built into the program generate alerts any time patient volume (per syndrome or symptom)
exceeds alerting criteria (i.e., statistical thresholds).  Alerts are sent to the local health department(s) at the
appropriate jurisdiction(s). Epidemiologists and disease investigators can access data from their
jurisdiction for purposes of alert investigation,  outbreak management and day-to-day monitoring. The
volume, location and demographics of this data are available to the local health departments at all times
for analysis via an interactive computer module.

It should be noted that EpiCenter (and previously, RODS) differs from other PHS surveillance tools in
that the tool itself was not modified for the Cincinnati CWS because it is a state-wide, and not local,
surveillance tool. Instead, emphasis was placed on how the data was utilized to initiate or augment
investigations into a possible water contamination incident. Evaluation of this surveillance tool will focus
not only on its ability to detect valid alerts due to possible water contamination, but also on its ability to
identify valid alerts due to public health incidents unrelated to drinking water.

For purposes of the EpiCenter data, a valid alert due to a public health incident was defined as any alert
categorized as "seasonal illness health event" or "naturally occurring disease outbreak" by public health
personnel responsible for the anomaly investigation within the EpiCenter system.  This differs slightly
from the other PHS data tools, because these categories are set at the state and not local level, but are
analogous to the alerting criteria for the other PHS tools deployed as part of the Cincinnati CWS. The
911, EMS and DPIC surveillance tools define a valid alert as any alert indicative of a public health
incident,  including water contamination, as explained in their respective sections (Sections 4.0, 5.0, and
7.0).  Examples of these classifications include seasonal influenza outbreaks, respiratory issues related to
allergies and pandemic events such as the 2009 H1N1 outbreak. The data included in this evaluation
period was provided by ODH, and includes EpiCenter alerts produced between March 2008 and March
2010 in Hamilton County, Ohio.

EpiCenter Syndromes
EpiCenter categorizes symptoms into approximately 25 classification groups. Categories are not
mutually exclusive; hence, one patient may be  included in more than one  syndrome.  For purposes of the
CWS, only syndromes pertaining to possible incidents as determined by local public health were included
for investigation. Symptoms contained within  these syndromes are listed in Table 6-1.
Table 6-1.  EpiCenter Syndromes
     Syndrome            Symptoms Included
 Botulinic
Blurry, difficulty speak, diplopia, double vision, eye problem, language problem, loss of
vision, photophobia, slurred speech, visual difficulties
 Constitutional
Aches, body pain, difficulty walking, loss of appetite, chills, does not feel well, fatigue, fever,
flu-like, fussiness, generalized pain, swollen glands, illness, increased sleep, lethargic, low
blood pressure, lump in groin/neck/underarm, malaise, mumps, muscle aches, polycythem,
septic shock, sluggish, sweats, swollen gland, viral  syndrome, sick, ear/head/stomach ache
 Gastrointestinal
Abdominal pain, appendicitis, cramps, gastric pain, quadrant pain, stomach pain, blood in
stool, dark stool, diarrhea, food poisoning, loose stool, nausea, tarry stool, upset stomach,
vomiting
 Hemorrhagic
Abortion, blood in stool/urine/sputum, bloody sneeze/cough, dysent, hematuria, hemoptysis,
passing clots, petechiae, rectal bleeding, vaginal bleed, bleeding, hemorrhoid
 Neurological
Altered mental state, aphasia, ataxia, back pain radiating, Bell's palsy, blacking out, cannot
focus eyes, can't move/remember/see/speak, cephalgia, confused, convulsions, delirium,
disoriented, droopy eyelids, dystonic reaction, ear ringing, epileptic, face droop, numbness,
flaccid, floaters, headache, hearing loss, incoherent, ischemia, light headed, loss of
consciousness, loss of coordination, memory loss, meningitis, muscle stiffness, neck
pain/stiffness, nerve pain, paresthesia, pinched nerve, presenile dementia, sciatic,  shakes,
side weak, skin sensation, slurred speech, stroke, syncope, tingling, tremors, twitching,
unresponsive, seizure, hallucinations
 Rash
Angioderma, blister, blotch, boil, buboes, bumps, burning to skin, candidiasis, chickenpox,
eczema, facial sore, flesh eating, hives, itchiness, lesion, lumps, measles, Methicillin-
resistant Staphylococcus aureus, non-specified skin, open sore, pox, red and
swollen/painful/sore/spots/streak, redness, ring worm, scabies, shingles, skin
burning/eruption/inflammation/irritation/lesions/problems, sores, splotch, spots, staph
infection, thrush, cyst, rash, ulcer
 Respiratory
Apnea, breathing pain, barky, breathing difficulty/problems, breathing fast, bronchitis, cannot
swallow, cannot breathe, chest congestion, chest discomfort, chest tightness, chest
pressure, chest heaviness, cold, croup, decreased oxygen, dyspnea, earache, ear drain,
ear infection, ear swelling, emphysema, flu-like, hoarse, hyperventilation, low oxygen, lung
pain, not breathing, otitis media,  pertussis, pneumonia, pulmonary congestion, respiratory
arrest/failure/distress, runny nose, shallow breath, sinus, sore throat, strep, stuffy, swollen
tonsils, wheezing, nasal, asthma, Chronic Obstructive Pulmonary Disease, bronchospasm,
gasping
EpiCenter Analysis
EpiCenter has various algorithms available for the analysis of ED hospital admissions data, as shown in
Table 6-2.
Table 6-2.  EpiCenter Algorithms

  Algorithm                        Description
  Constant Threshold               Sets a fixed threshold; commonly used to detect immediately reportable
                                   conditions
  CUSUM with Exponential Moving    Threshold set at 4 standard deviations (default) above predicted count,
  Average (EMA)                    based on 14 previous days of data
  EMA                              Computes predicted count as a weighted average of actual counts for 17
                                   days previous
  Simple Moving Average            Predicted count based on average counts for past 14 days
  Recursive Least Squares (RLS)    Computes predicted count from a weighted sum of the actual counts of the
                                   current day and the past p-1 days (7 days default); has an adjustable
                                   training window (default 60 days)

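The following sketch illustrates the EMA-style prediction and the CUSUM with EMA threshold summarized in
Table 6-2. The smoothing weight and update rule are assumptions for illustration; the table specifies only
the 4-standard-deviation default threshold and the 14-day baseline.

    # Illustrative sketch of an EMA prediction and a CUSUM-with-EMA style threshold.
    from statistics import pstdev

    def ema_predicted_count(daily_counts, alpha=0.4):
        """Exponentially weighted average of recent daily counts (oldest first)."""
        predicted = daily_counts[0]
        for count in daily_counts[1:]:
            predicted = alpha * count + (1 - alpha) * predicted
        return predicted

    def cusum_ema_threshold(daily_counts, n_sd=4.0):
        """Predicted count plus n_sd standard deviations of the 14-day baseline."""
        baseline = daily_counts[-14:]
        return ema_predicted_count(baseline) + n_sd * pstdev(baseline)
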
Algorithms are applied to the data using a rolling 24-hour analysis window.  Syndrome counts that exceed
the threshold for any of the above algorithms may generate an alert if all of the following alerting criteria are
met:
    1.  The observed count is greater than or equal to ten AND
    2.  The observed count is greater than the threshold AND
    3.  If other data conditioning algorithms are applied (i.e., normalized or day-of-week), these
       threshold(s) are exceeded AND
    4.  No anomaly using identical parameters has been created in the past 24 hours.
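
Applied together, the four criteria above amount to the following check, sketched here with an assumed
function signature; the handling of optional data-conditioning thresholds and the anomaly-suppression
window are illustrative rather than taken from the EpiCenter product.

    # Illustrative sketch of the four EpiCenter alerting criteria listed above.
    from datetime import timedelta

    def should_generate_alert(observed_count, threshold, conditioned_thresholds,
                              last_anomaly_time, now):
        """Return True only if all four alerting criteria are satisfied."""
        # 1. The observed count is greater than or equal to ten.
        if observed_count < 10:
            return False
        # 2. The observed count is greater than the algorithm threshold.
        if observed_count <= threshold:
            return False
        # 3. Any additional data-conditioning thresholds (e.g., normalized or
        #    day-of-week adjusted) must also be exceeded.
        if any(observed_count <= t for t in conditioned_thresholds):
            return False
        # 4. No anomaly with identical parameters in the past 24 hours.
        if last_anomaly_time is not None and now - last_anomaly_time < timedelta(hours=24):
            return False
        return True
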

Alerts generate automated email notifications that are sent to designated personnel at the appropriate local
health department(s). Upon receipt of these alerts, staff can begin investigating the cause of the alert
using data within the EpiCenter module as well as other information at their disposal (e.g., reportable
disease counts, knowledge of current outbreaks, etc.) to determine whether or not the alert represents a
public health incident. In addition, users have the option of applying data conditioning techniques to
account for certain confounders, such as day-of-week effects, during their investigations.

For the Cincinnati CWS, staff at the local health department(s) review EpiCenter data during  alert
investigations to determine if recent hospital admission data support evidence of a possible water
contamination incident. In addition, staff consider water contamination as a possible cause of any
EpiCenter alerts generated.

6.2    Design Objective: Spatial Coverage

The spatial coverage is the cumulative area of the water distribution system monitored by ED hospital
admissions data in EpiCenter.  In order to evaluate how well the PHS  component met this design
objective, the following metric was evaluated: area and population coverage.  The following subsection
defines the metric, describes how it was evaluated, and presents the results.
6.2.1  Area and Population Coverage

Definition:  Area coverage describes how alerts are distributed geographically, while population
coverage depicts the geographic area covered by the EpiCenter surveillance tool.

Analysis Methodology: EpiCenter alerts, by nature, indicate a county-wide rise in a certain  syndrome
category. Therefore, no geographic analysis of alert location data was conducted.

Results: Although specific hospital location data was not available from the data provider, data was collected from all Hamilton County hospitals. Thus, it can be concluded that area coverage spans the
entire county. This represents 95% population coverage of the total GCWW retail service area (see
Figure 2-1).

6.3     Design Objective: Contaminant Coverage

The EpiCenter tool monitors ED visits that could signal a public health incident, including water
contamination.  For ED patient data, contaminant coverage is dependent on the health-seeking behaviors
following symptom presentation, as discussed in Section 3.3. In order to evaluate how well the EpiCenter
surveillance tool met this design objective, the contamination scenario coverage and contaminant
detection threshold metrics were evaluated.  The following subsections define each metric, describe how
it was evaluated, and present the results.

6.3.1  Contamination Scenario Coverage

Definition: Contamination scenario coverage is defined as the ratio of contamination incidents that are
actually detected to those that are theoretically detectable based on the design of the EpiCenter
surveillance tool. Detectable contamination scenarios include those which originated at distribution
system attack nodes  rather than facility attack nodes.

Analysis Methodology: Since no water contamination incidents occurred during the evaluation period,
simulation study results were utilized to quantify this metric. The ratio of scenarios that were actually
detected to those that were theoretically detectable (based on the assumptions regarding health-seeking
behavior that were parameterized in the model) was calculated for each contaminant. Additionally, the
average and median  number of cases at the time of detection was calculated for each contaminant.
Certain contamination scenarios that were not theoretically detectable were screened out of the analysis
including those that originated at facility attack nodes (which were detected by the ESM component) and
those which involved the nuisance chemicals.
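To make the calculation concrete, the following is a minimal sketch assuming each simulated scenario record indicates its contaminant, whether it originated at a facility attack node, whether it involved a nuisance chemical, and whether it was detected. The field names and example records are illustrative, not the simulation study's actual data structure.

    from collections import defaultdict

    def scenario_coverage(scenarios):
        """Percent of theoretically detectable scenarios that were detected, per contaminant."""
        detected = defaultdict(int)
        detectable = defaultdict(int)
        for s in scenarios:
            # Screen out scenarios that were not theoretically detectable
            if s["facility_attack_node"] or s["nuisance_chemical"]:
                continue
            detectable[s["contaminant"]] += 1
            if s["detected"]:
                detected[s["contaminant"]] += 1
        return {c: 100.0 * detected[c] / detectable[c] for c in detectable}

    # Two hypothetical scenario records for one contaminant
    scenarios = [
        {"contaminant": "Toxic Chemical 1", "facility_attack_node": False,
         "nuisance_chemical": False, "detected": True},
        {"contaminant": "Toxic Chemical 1", "facility_attack_node": False,
         "nuisance_chemical": False, "detected": False},
    ]
    print(scenario_coverage(scenarios))  # {'Toxic Chemical 1': 50.0}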

Results:  The EpiCenter surveillance tool detected 71% (994 scenarios) of the theoretically detectable
scenarios (n= 1,402). Table 6-3 below shows the detection statistics for the EpiCenter surveillance tool
for each contaminant.

Table 6-3.  EpiCenter Detection Statistics

Contaminant           Scenarios   Scenarios       Percent    Average # Cases at   Median # Cases at
                      Detected    Not Detected    Detected   Time of Detection    Time of Detection
Toxic Chemical 1      76          18              81%        1,196                1,087
Toxic Chemical 2      94          0               100%       382                  246
Toxic Chemical 3      67          27              71%        713                  554
Toxic Chemical 4      78          16              83%        2,593                2,509
Toxic Chemical 5      54          40              57%        217                  1,972
Toxic Chemical 6      73          21              78%        16,139               12,444
Toxic Chemical 7      46          48              49%        2,088                1,767
Toxic Chemical 8      94          0               100%       6,756                6,147
Biological Agent 1    92          2               98%        1,608                858
Biological Agent 2    43          51              46%        250                  134
Biological Agent 3    94          0               100%       68,707               43,443
Biological Agent 4    94          0               100%       2,402                2,300
Biological Agent 5    89          5               95%        2,561                1,919
Biological Agent 6    0           88              0%         -                    -
Biological Agent 7    0           92              0%         -                    -
The EpiCenter surveillance tool demonstrated a high detection rate across almost all contaminants, with
100% detection for four of the fifteen contaminants and another five above 71%. No contamination
scenarios were detected for Biological Agent 6 or Biological Agent 7. These two contaminants were
modeled as producing illness through the inhalation exposure route, and thus there was only one exposure
event in the morning (7:00 am showering event) that could have produced cases. Fewer exposed
individuals resulted in a lower number of patients requiring treatment at the ED, which contributed to
lower detection rates. It is also possible that scenarios involving these biological agents were detected early enough by Astute Clinician surveillance that few individuals had yet advanced to the moderate or severe symptom level requiring care at an ED. During these scenarios, if enough collective
information was available to advance the threat level to Confirmed, public notification would have been
issued which would have directed individuals to pursue prophylactic treatment.
6.3.2  Contaminant Detection Threshold
Definition: The contaminant detection threshold is the number of exposed individuals who are
symptomatic necessary to generate an alert through the EpiCenter surveillance tool. This metric is
intended to characterize the size of the smallest contamination incident, expressed in terms of the number
of symptomatic people, which can be detected through this surveillance tool.

Analysis Methodology:  Empirical data provided by Hamilton County was used to characterize this
metric. The two types of historical counts that were used to  quantify the number of cases necessary to
detect contaminants that cause symptoms as described in Section 6.3.1 are total case counts and counts
above threshold.

Total case counts represent the number of cases observed during historical alerts.  This count gives an
indication of the total volume that may be expected during a contamination incident. However, given the
variable nature of ED utilization, total counts may not be the best benchmark for determining detection
limits. Total case counts that trigger an alert one day may not trigger an alert during another time of year
due to seasonality and other natural fluctuations in the data.

Threshold values are determined by the algorithm applied, and generally represent a certain value above
the predicted count.  The default thresholds in EpiCenter are four standard deviations above the calculated
predicted value, although these can be adjusted. In theory, the minimum number of cases necessary to
generate an alert would be one case above the threshold.  The average and minimum counts above the
threshold necessary to generate an alert give an indication of the contamination detection limit for
contaminants causing symptoms typical of the various syndromes.

Average and minimum values for case count and counts above the threshold were calculated per
syndrome for all alerts between January 2008 and September 6, 2009.  Alerts after September 6, 2009
were excluded because these occurred during the H1N1 outbreak in Cincinnati and contained counts
significantly higher than normal. Therefore, they would not be useful for determining detection limits
under typical circumstances. The cut-off date was calculated by a statistical analysis which determined
when natural break points occurred in the data.
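A minimal sketch of this summary is shown below, assuming each historical alert record carries its date, syndrome, observed case count, and the threshold in effect on that day. The cutoff date follows the text above; the record structure and example values are illustrative.

    from collections import defaultdict
    from datetime import date

    CUTOFF = date(2009, 9, 6)  # alerts after this date (H1N1 period) are excluded

    def detection_limits(alerts):
        """Average and minimum case counts, and counts above threshold, per syndrome."""
        by_syndrome = defaultdict(list)
        for a in alerts:
            if a["date"] > CUTOFF:
                continue
            by_syndrome[a["syndrome"]].append((a["count"], a["count"] - a["threshold"]))
        summary = {}
        for syndrome, rows in by_syndrome.items():
            counts = [c for c, _ in rows]
            above = [x for _, x in rows]
            summary[syndrome] = {"avg_count": sum(counts) / len(counts), "min_count": min(counts),
                                 "avg_above": sum(above) / len(above), "min_above": min(above)}
        return summary

    alerts = [{"date": date(2009, 3, 2), "syndrome": "Respiratory", "count": 97, "threshold": 89},
              {"date": date(2008, 11, 7), "syndrome": "Respiratory", "count": 14, "threshold": 13}]
    print(detection_limits(alerts)["Respiratory"]["min_count"])  # 14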

Results:  The average and minimum case count values per syndrome can be seen in Figure 6-1. Average
and minimum counts above the threshold are depicted in Figure 6-2.
Figure 6-1.  Average and Minimum Case Counts per Syndrome Alert

Figure 6-1 shows the average case counts that typically occur during the various syndrome alerts, as well
as the minimum value of cases that have elicited an alert.  Note that in Figures 6-1 and 6-2, "All Alerts"
represents the overall average case counts or the minimum value of cases for all alerts, regardless of
syndrome. For example, the average Respiratory alert consisted of 97 cases, although case counts as few
as 14 respiratory cases (the minimum case count observed per alert) have also triggered an alert.  There is
a wide range of average observed values, from an average of just 16.2 cases per Botulinic alert to 97 for
the Respiratory syndrome. As mentioned previously, case counts may fluctuate depending on current
events.  Therefore, the observed values above the threshold counts must also be taken into account.
Figure 6-2.  Average and Minimum Case Counts above Syndrome Thresholds per Alert

As seen in Figure 6-2, the average number of cases above threshold to trigger an alert ranged from one
case for the Rash syndrome, to eight cases for the Respiratory syndrome. As such, for a contaminant that
causes respiratory symptoms, it could be assumed that the typical detection limit would be eight cases
above the daily threshold. While minimum values also generate an alert, utilizing the average above
threshold provides a more realistic estimate of the number of cases required to register an EpiCenter alert
in the event of water contamination.

It should be noted that the limits described above are for normal public health circumstances.  In the event
of an outbreak, these limits may not be applicable because of the increased case volume associated with the incident. Cases presenting due to water contamination may be masked by this increased volume. Under these circumstances, greater reliance on the expertise of astute public health personnel
will be necessary to help identify cases that may be caused by water contamination.

6.3.3  Summary
The contamination scenario coverage results from the simulation study demonstrate that the EpiCenter
surveillance tool is able to detect a variety of contamination scenarios involving both chemical and biological contaminants.

Historical case counts and counts above threshold are useful for quantifying estimates of
contaminant detection thresholds. Although average counts above threshold may give the best estimate of
cases needed to produce an alert for that syndrome, observation of total case counts and counts above
"normal" should not be discounted as they also provide useful information to the public health personnel
investigating the alert. Public health expertise will be especially valuable during disease outbreaks, when
increased case volumes may mask cases reported due to water contamination.

6.4    Design Objective: Alert Occurrence

Alert occurrence addresses system performance in terms of the frequency of invalid alerts, in order to ascertain the accuracy of the EpiCenter surveillance tool in discriminating between valid alerts
(public health incidents, including water contamination) and normal variability in the underlying data.  In
order to evaluate how well the EpiCenter surveillance tool met this design objective, the following two
metrics were evaluated: invalid alerts and valid alerts. The following subsections define each metric,
describe how it was evaluated, and present the results.

6.4.1  Invalid Alerts
Definition:  Invalid alerts include any alert generated by the EpiCenter surveillance tool that is
determined as not related to a public health incident, including water contamination, following alert
investigation.

Analysis Methods: The total number of invalid alerts is equal to the  number of total alerts minus the
number of valid alerts.  These invalid alerts were analyzed by frequency and syndrome type, both by
monthly reporting period and for the entire evaluation period.

Results:  Figure 6-3 shows the frequency of invalid alerts and their syndrome types per reporting period.
Invalid alerts peaked in July of 2009, when there were a total of fifteen alerts encompassing five different
syndromes (constitutional, gastrointestinal, neurological, rash and respiratory).  On average, each
reporting period experienced an average of 2.7 invalid alerts, with a median of 2. The majority of reporting periods experienced three or fewer alerts. The peak in invalid alerts during July 2009 corresponds to Cincinnati Children's Hospital coming back on-line after an upgrade to its data system that had prevented it from submitting data for one year. This increase in alerts is therefore due to the EpiCenter algorithms readjusting to a sudden influx of ED cases.
Figure 6-3. EpiCenter Invalid Alerts per Reporting Period

Invalid alerts were fairly evenly distributed by syndrome; with the exception of the Hemorrhagic
syndrome (5%), each syndrome contributed between 11% and 19% of invalid alerts (Figure 6-4).
Figure 6-4. Percent of EpiCenter Invalid Alerts by Syndrome

Overall, the number of cases per invalid alert varied (range 10 to 172), although the majority of invalid
alerts were caused by 50 or fewer cases (Figure 6-5).  The average and median number of cases per
invalid alert was 45.3 and 29.5 cases, respectively. In general, this is lower than the average cases per
valid alert (Figure 6-6).
Figure 6-5. Cases per Invalid Alert

6.4.2   Valid Alerts
Definition: Valid alerts are data anomalies generated by the EpiCenter algorithm that are due to public
health incidents, including possible water contamination, in the location where the alert is observed.
Public health incidents in EpiCenter are denoted as "seasonal illness health event" or "naturally occurring
disease outbreak" by investigators at the local health department.

Analysis Methodology: The total number of valid alerts was analyzed by frequency and type, including
the alert duration and count per reporting period.  In addition, a statistical analysis to determine natural
breakpoints in alert count data was performed; these breakpoints were characterized by average daily
counts by syndrome. Analyses conducted and presented for the contamination scenario coverage metric
reflect the occurrence of valid alerts in the simulation study (Section 6.3.1).

Results: The majority of valid alerts (89.5% of all valid alerts) occurred during the fall of 2009,
corresponding with H1N1 influenza activity in the Cincinnati area.  The H1N1 influenza outbreak was
declared a pandemic by the World Health Organization in June 2009 (World Health Organization, 2009).
This outbreak was caused by a new strain of influenza virus, and circulated worldwide; persons most
affected by this  virus were pregnant women and otherwise healthy adults.  It is estimated that 59 million
people in the U.S. were affected by the H1N1 virus (CDC, 2010b).  In Hamilton County, a major uptick
of suspected H1N1 cases was observed around the end of August 2009, corresponding to the beginning of
a new school year.  Symptoms indicative of H1N1 include fever, sore throat, malaise and other general
flu-like symptoms.

Due to the impact of H1N1 on the Cincinnati region, schools experienced higher than normal rates of
absenteeism, EDs and medical providers saw an influx of patients, and HCPH and CHD held vaccination
clinics.  Because of the increased patient volume seen in EDs, EpiCenter issued numerous alerts during
this timeframe.  Nearly all of the alerts during this timeframe were categorized as due to "naturally
occurring disease outbreak" by public health investigators.  The duration and frequency of these alerts can
be seen in Figure 6-6.
Figure 6-6. Valid Alert Count and Duration (in cumulative days) per Reporting Period

Valid alerts were an average of one day longer than invalid alerts. This is mainly attributed to the
duration of alerts during the H1N1 outbreak. Alerts during this time period averaged 2.73 days in
duration versus one day for other valid alerts. The longer alert duration is indicative of the breadth of the
outbreak and the volume of patients affected; unlike other health incidents that resolve fairly quickly, the
H1N1 outbreak is an example of an extended public health  incident.

A statistical analysis of EpiCenter daily syndrome counts was performed to ascertain breakpoints in the
data indicating the start and end of the H1N1 outbreak in Cincinnati.  It was determined that there was a
statistically significant increase in EpiCenter data beginning on September 6, 2009 and continuing through November 9, 2009. Average daily counts for the constitutional and respiratory syndromes were significantly higher during this timeframe, as indicated in Figure 6-7. It should also be noted that the data in the September 6 to November 9, 2009 timeframe demonstrated much greater variation than the other two time periods.
Increases of this nature may present difficulties in detecting possible water contamination during that
timeframe due to increased "noise" in the data.
Figure 6-7. Average Daily Counts by Syndrome during Different Time Periods (1/01/09 - 9/05/09, 9/06/09 - 11/09/09, and 11/09/09 - 2/28/10)

6.4.3  Summary

While three or fewer invalid alerts occurred during most reporting periods, some months had numerous
invalid alerts.  The invalid alerts were distributed fairly evenly between syndrome types. Valid alerts
were detected during the evaluation period, the majority of which corresponded to H1N1 influenza
activity in the Cincinnati area. On average, valid alerts remained above threshold one day longer than
invalid alerts.

6.5    Design Objective: Timeliness of Detection

Timeliness of detection refers to the time it takes for a potential public health incident, including water
contamination, to be detected by the EpiCenter surveillance tool; the timeline begins with initial
transmission of ED patient data and concludes with completion of the alert investigation. Post-exposure
factors that would affect the overall timeliness of detection, such as time to symptom onset and health-
seeking behaviors, are discussed in Section 3.3. Following ED data entry at participating hospitals,
patient data is available for transmission and analysis in EpiCenter. In order to evaluate how well the
EpiCenter surveillance tool met this design objective, the following metrics were evaluated: time for data
transmission, time for event detection and time to investigate alerts.  The following subsections define
each metric, describe how it was evaluated, and present the results.
6.5.1  Time  for Data Transmission

Definition: Time for data transmission describes the time it takes for ED records to be available for
analysis; this includes the time it takes for coded medical record data to be transferred from the hospital
data servers to the EpiCenter surveillance tool.

Analysis Methodology: Estimation of the time necessary to upload ED records into the EpiCenter
surveillance tool, as supplied by the data provider (ODH).

Results: For Hamilton County, patient data is uploaded in batches from HealthBridge every ten minutes. There were no recorded incidents of delayed batch data transmission from HealthBridge to EpiCenter; however, it is important to note that data transmission from the hospital data servers to EpiCenter
effectively occurs once per day, after paper case records from the previous day are entered electronically as a batch in the morning.

6.5.2   Time for Event Detection
Definition: Time for event detection describes the time required for the EpiCenter surveillance tool to
generate an alert using its algorithms after data has been transmitted from HealthBridge to the EpiCenter
surveillance tool.  This is the time for analysis of data and generation of a result by the EpiCenter
algorithm.

Analysis Method: Time for event detection was calculated by subtracting the event timestamp from the detection timestamp. Statistical analysis including the average, median and range of time for event
detection was calculated per month and for the entire evaluation period.
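A minimal sketch of this calculation is shown below, assuming each alert record carries an event timestamp and a detection timestamp and that reporting periods are calendar months. The field names and example timestamps are illustrative.

    import pandas as pd

    # Hypothetical alert records with event and detection timestamps
    alerts = pd.DataFrame({
        "event_time": pd.to_datetime(["2009-03-02 08:00", "2009-03-15 14:30"]),
        "detection_time": pd.to_datetime(["2009-03-02 09:01", "2009-03-15 15:29"]),
    })

    # Time for event detection (minutes) = detection timestamp minus event timestamp
    alerts["detection_minutes"] = (
        (alerts["detection_time"] - alerts["event_time"]).dt.total_seconds() / 60.0
    )

    # Average, median, and range per monthly reporting period
    monthly = alerts.groupby(alerts["event_time"].dt.to_period("M"))["detection_minutes"]
    print(monthly.agg(["mean", "median", "min", "max"]))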

Results: Time for event detection averaged around 60 minutes for most reporting periods (Figure 6-8).
The overall average and median time for event detection was 60.22 and 60 minutes, respectively. The
range is relatively narrow, indicating that there is little variability in the time for event detection. This
indicates that most events were detected by the system in the minimum amount of time possible. See
Figure 6-3 for additional details on alert occurrence.
Figure 6-8.  EpiCenter Time for Event Detection


6.5.3   Time to Investigate Alerts
Definition:  Time to investigate alerts includes the portion of the incident timeline that begins with the
recognition of an EpiCenter alert, and ends with a determination regarding whether or not contamination
is possible. The time to investigate alerts is based on the nature of the alert details and the investigation
procedures that must be implemented before concluding that the alert is not indicative of a potential
public health incident, including water contamination. For the simulation study, this data represents the
timeline from the contaminant injection to the time that contamination is deemed Possible. As noted in
Section 3.3, no time delay for alert recognition was parameterized in the CWS model as it was assumed
that alert investigations occurred immediately upon receipt of alerts based on the nature of the underlying
case data (i.e., similar symptom categories and case clustering).
Analysis Methodology:  Statistical analysis of empirical data was not possible, as investigation time was
not formally recorded by public health investigators. However, personnel responsible for investigating
the alerts provided an approximation of the typical investigation time.

This information was used to parameterize the investigation time for EpiCenter alerts in the simulation
study.  Simulation study timeline data (which, as noted above, started at the time of contaminant
injection) was evaluated to illustrate the timeliness of detection overall and for scenarios initiated at
periods of high or low demand. Percentile values  were calculated to examine the distribution of data, and
are presented in a box-and-whisker plot.  Average detection times were calculated for individual
contaminants, as well as for scenarios initiated at periods of high or low demand for individual
contaminants.

Results: Based on feedback from local public health personnel, it is estimated that EpiCenter alerts require
approximately fifteen minutes of investigation time.  The exact time spent per investigation was not
documented.

The remainder of this section focuses on  simulation study results. Figure 6-9 demonstrates the overall
timeliness of detection statistics for the EpiCenter surveillance tool and for scenarios initiated at periods
of low and high demand, using percentile values to illustrate the distribution of data. Scenarios initiated
at high demand times were detected sooner than scenarios initiated at low demand times due to the design
of the CWS model. A seven-hour time delay occurred between the scenarios initiated at low demand
(12:00 am) and the first exposure event (7:00 am), which resulted in a detection time lag, unlike the
scenarios initiated at high demand (9:00 am), which could have resulted in exposure soon thereafter at the
9:30 am or 12:00 pm exposure events.  The high demand box plot is not displayed in Figure 6-9 due to
the frequency of detections at 1,380 minutes creating no distinction between percentiles in the plot.
Figure 6-9. EpiCenter Surveillance Tool Timeliness of Detection (simulation study data)

There were a total of 994 scenarios detected by the EpiCenter data stream with an average detection time
of 2,668 minutes (approximately two days), as shown in Table 6-4.  As noted above, scenarios initiated at
high demand were detected sooner, with an average detection time of 2,281 minutes, whereas scenarios
initiated at low demand had an average detection time of 3,396 minutes.

Table 6-4.  EpiCenter Surveillance Tool Timeliness of Detection (minutes, simulation study data)

Scenarios      Count   Average   Median
Total          994     2,668     1,920
Low Demand     363     3,396     1,920
High Demand    631     2,281     1,380
The low and high demand scenarios are further compared, along with the overall detection timeliness in
Figure 6-10, where contaminants are arranged in increasing order of timeliness of detection by the total
set of component scenarios.  This figure illustrates that for most chemical and biological agent
contamination scenarios, it took about one day for case counts to become high enough to exceed the
detection thresholds for the relevant syndrome categories monitored by the EpiCenter surveillance tool.
For one toxic chemical and one biological agent, two or more days elapsed before enough cases had occurred to produce EpiCenter alerts. Thus, with a few exceptions, the type of contaminant does not appear to have a substantial impact on the
timeliness of detection by EpiCenter. Biological Agents 6 and 7 are not presented in this figure as they
were not detected by the EpiCenter surveillance tool, as discussed in Section 6.3.1.
Figure 6-10.  EpiCenter Surveillance Tool Timeliness of Detection (simulation study data)

6.5.4  Summary
Data transmission for EpiCenter is dependent on batch uploads of data from HealthBridge, which occur
every ten minutes. In practice, however, new data becomes available for analysis once every 24 hours, because paper records from the previous day are entered electronically in batches every morning.
The time for event detection is extremely consistent in the EpiCenter surveillance system, averaging
around 60 minutes. In general, EpiCenter alerts require about fifteen minutes for investigation.
Simulation study data analysis showed that for most contamination scenarios, it took one day for case
counts to become high enough to exceed the detection thresholds for the relevant syndrome categories
monitored by the EpiCenter surveillance tool.

6.6    Design Objective: Operational Reliability

Analysis of the operational reliability of the EpiCenter surveillance tool addresses aspects of surveillance
tool operation and quantifies the percent of time that the EpiCenter surveillance tool was working as
designed. In order to evaluate how well the EpiCenter surveillance tool met this design objective, the
availability metric was analyzed. The following subsection defines the metric, describes how it was
evaluated, and presents the results.
6.6.1  Availability
Definition: Availability is the amount of time the EpiCenter surveillance tool is functional and
accessible, expressed in terms of the percent of usable data hours per reporting period. In order for
available data to be generated for the EpiCenter surveillance tool, data must be successfully collected
from participating hospitals, analyzed using EpiCenter's algorithms, and any alert information made
available through the EpiCenter user interface.

Analysis Methods:  Information on the number of hospitals submitting data per reporting period was
gathered from ODH.  From this, availability was calculated as a percent of all potential data collected for
that reporting period.
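A minimal sketch of this calculation is shown below, approximating the potential data for a reporting period as the number of participating hospitals; the hospital counts shown are hypothetical, not the actual figures reported by ODH.

    def availability(hospitals_reporting, hospitals_total):
        """Percent of potential data collected for a reporting period."""
        return 100.0 * hospitals_reporting / hospitals_total

    # Hypothetical example: one of the participating hospitals off-line for the period
    print(round(availability(hospitals_reporting=12, hospitals_total=13), 1))  # 92.3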

Results:  At least a portion of data within the EpiCenter surveillance tool was available during the entire
evaluation period. However, some data was unavailable during part of the evaluation period due to a
hospital data system upgrade, during which Cincinnati Children's Hospital was unable to report  data to
EpiCenter. The total data availability during this timeframe (July 2008 to July 2009) was still high at
92%.

During times  when data availability is less than 100%, it is important for the public health investigators to
be aware of these issues. In this instance, Cincinnati Children's Hospital was unable to report during the
July 2008 - July 2009 timeframe; therefore, children may have been underreported in the EpiCenter data
during this time.
6.6.2  Summary
EpiCenter received almost all potential data during the evaluation period, contributing to high overall
reliability of the surveillance tool. Public health investigators should be notified when possible issues
with data availability may occur (e.g., hospitals going off-line), so that these periods of downtime can be
taken into account during analysis.
     Section 7.0:   Performance of the DPIC Surveillance Tool

The following section provides a description of the DPIC surveillance tool followed by the results of the
evaluation of the tool. This analysis includes an evaluation of metrics that characterize how the DPIC
surveillance tool achieves the design objectives described in Section 1.1. Specific metrics are described
for each of the design objectives.

7.1    Description of the DPIC Surveillance Tool

DPIC is a PCC serving southwest Ohio. DPIC offers emergency and technical information 24 hours a day via a telephone service staffed by pharmacists, pharmacologists, nurses, paramedics and students. Any
questions about poisonings, environmental contamination, drugs (including drug abuse), product contents,
substance identification and adverse reactions are handled by the DPIC hotline. Call information is
captured in Toxicall®, a specialized medical database. In addition, under a contract with the Southwest
Ohio Public Health Departments, reportable diseases and other potential public health incidents detected
during evenings, weekends, and holidays are reported to DPIC.  Protocols exist to report potential food or
waterborne outbreaks and unusual disease incidence, as well as to notify public health officials if a
potential biological terrorist incident is detected.

As part of the Cincinnati CWS, DPIC was integrated into the PHS component of the CWS to determine
how local PCCs can contribute to early detection, notification, and rapid response to a possible drinking
water contamination incident.  DPIC implemented a multi-tiered approach to event detection based on
existing surveillance strategies that include statistical, non-statistical, and human surveillance as
illustrated in Figure 7-1.  Throughout this section of the report, the phrase "DPIC surveillance tool"
represents the collective detection strategies applied by DPIC for identification of a possible
contamination.
[Figure 7-1 depicts the DPIC drinking water surveillance process flow: calls received on the hotline are entered into Toxicall® by call center staff, who consider whether drinking water contamination is suspected; data are uploaded continuously to NPDS via an automated upload process, where automated statistical and non-statistical analyses are performed; if an unusual case pattern triggers an alert, an email notification is sent to the Toxicosurveillance Team, which reviews the alert; if the alert identifies a potential risk, local health departments and utility staff are sent an email notification for further investigation; otherwise, operations return to normal.]
Figure 7-1. DPIC Drinking Water Surveillance Process Flow


Data collected by DPIC in Toxicall® is uploaded into the NPDS on a near real-time basis via an
automated process. The American Association of Poison Control Centers operates NPDS to aggregate
PCC data from across the nation for purposes of statistical analysis, alert processing and communication
of findings. NPDS offers center-centric (i.e., all calls handled by a PCC) as well as geocentric (i.e., all
calls occurring within a certain area) surveillance. For example, DPIC covers southwest Ohio as well as
an area in northeast Ohio; geocentric surveillance allows DPIC to analyze these areas separately, if
desired. Four toxicosurveillance categories can be applied for surveillance:

    •  Total call volume: All calls to poison control, including exposure information, substance
       identification, and general education calls.

    •  Human exposure call volume: Calls pertaining to human exposures only.

    •  Clinical effect counts: Based on symptoms exhibited due to human exposure.

    •  Case based:  Case definition specified by poison control using key words and logic.

Because DPIC handles calls from outside the Cincinnati area, a geocentric surveillance approach
including all Ohio zip codes in the GCWW service area was utilized for the  Cincinnati CWS.
Statistical analyses can be performed in NPDS on the total call volume, human exposure call volume, and
clinical effect count toxicosurveillance definitions. Because total call volume includes calls not pertinent
to possible water contamination exposure (e.g., substance identification calls), focus was placed instead
on statistical analysis of human exposure call volume and clinical effect count definitions. Aberrations
that are greater than three times the standard deviation from the baseline and involve at least two cases for
either of these definitions trigger an email, which is sent to the toxicosurveillance team (on call 24/7/365)
for further investigation.
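A minimal sketch of this trigger condition is shown below, assuming a series of historical counts for a surveillance definition (human exposure call volume or a clinical effect count) and the current observed count. The simple baseline and standard deviation calculation is illustrative and does not represent NPDS's actual algorithm.

    import statistics

    def triggers_email(history, observed, n_sd=3.0, min_cases=2):
        """Flag an aberration more than 3 standard deviations above the baseline
        that involves at least two cases."""
        baseline = statistics.mean(history)
        sd = statistics.stdev(history)
        return observed >= min_cases and (observed - baseline) > n_sd * sd

    # Hypothetical human exposure call counts for the GCWW-area zip codes
    history = [3, 5, 4, 6, 5, 4, 3, 5]
    print(triggers_email(history, observed=12))  # True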

The case based definition in NPDS's Syndromic  Definition Module was leveraged for non-statistical
surveillance of possible water contamination cases. The toxicosurveillance team developed a customized
search through this module that incorporates specific substance  and symptom keywords thought to be
most likely related to an incident involving a specific class of contaminants (e.g., metals) and eliminates
records where the reason for exposure to the substance is understood and unrelated to water (e.g.,
intentional suicidal exposures, occupational injuries).
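The following is a minimal sketch of such a case-based definition, assuming each call record includes free-text substance and clinical effect fields plus a coded reason for exposure. The keyword lists and exclusion values are illustrative examples, not the definition actually developed by the toxicosurveillance team.

    # Illustrative keyword lists for a metals-related water contamination definition
    SUBSTANCE_KEYWORDS = {"arsenic", "lead", "mercury", "tap water", "well water"}
    SYMPTOM_KEYWORDS = {"vomiting", "abdominal pain", "metallic taste", "diarrhea"}
    EXCLUDED_REASONS = {"intentional suicidal", "occupational"}  # exposure unrelated to water

    def matches_case_definition(call):
        """Return True if the call matches the illustrative water contamination definition."""
        if call["reason_for_exposure"] in EXCLUDED_REASONS:
            return False
        text = (call["substance"] + " " + call["clinical_effects"]).lower()
        return (any(k in text for k in SUBSTANCE_KEYWORDS)
                and any(k in text for k in SYMPTOM_KEYWORDS))

    call = {"substance": "Tap water, possible arsenic",
            "clinical_effects": "Vomiting, metallic taste",
            "reason_for_exposure": "unintentional general"}
    print(matches_case_definition(call))  # True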

The third surveillance method deployed by DPIC relies on human surveillance. The human surveillance
method for the Cincinnati CWS relies on expertise from certified staff members and physician
toxicologists, along with the open call center environment that facilitates ongoing discussion and
consultation among staff members, in order to identify anything "out of the ordinary" in the observed
calls. In addition, DPIC established a "Water Safety Hotline" that is dedicated to water contamination
queries. Health care and public health providers  as well as utility staff seeking toxicology consultation or
related services can access this number in the event of unusual water testing results, water-related health
effects or other threats.  During the evaluation period, approximately two alerts per month were identified
through human surveillance, comprising 3.7% of all DPIC alerts.

Similar to the EpiCenter surveillance tool, the DPIC surveillance tool was not significantly altered for the Cincinnati CWS because DPIC was a previously established public health entity. One enhancement to the
surveillance of PCC data was the inclusion of a water-based syndrome definition, and increased
awareness of the possibility of water contamination incidents by DPIC staff.  Evaluation of the DPIC
surveillance tool focuses on its ability to identify public health incidents, including water contamination.
In the context of the CWS, a DPIC valid alert is any alert tied to an intentional or unintentional public
health incident, including water contamination. Classification of an alert as valid is at the discretion of
the DPIC investigator.
7.2    Design Objective: Spatial Coverage

The spatial coverage is the cumulative area where DPIC has the ability to detect a public health incident,
including water contamination, in the GCWW distribution system based on spatial data provided in the
alerts.  The zip codes listed in DPIC alerts represent caller location; although location was not regularly
recorded in DPIC  alert investigations by protocol, this information was sometimes provided. Because
DPIC also provides toxicology advice to ED physicians, in some instances this location represents the
place of treatment for a person (i.e., a hospital) rather than the location of exposure. This affects the
interpretation of spatial analysis results. The metrics used to evaluate how well DPIC surveillance achieves this design objective were area and population coverage and spatial extent of an alert. The
following subsections define each metric, describe how it was evaluated, and present the results.

7.2.1  Area and Population Coverage
Definition: Area  coverage describes how alerts are distributed geographically, while population
coverage describes the geographic area covered by the DPIC surveillance tool.

Analysis Methodology:  Analysis of empirical data including a statistical analysis of alerts per zip code,
as well as analysis of zip codes per alert, was performed using alert data from the combined DPIC
surveillance strategies (statistical, non-statistical and human surveillance).

Results:  Since DPIC covers the entire Southwest Ohio area, the DPIC surveillance tool covers 100% of
the GCWW retail  service area. Fifty-two out of 486 (11%) total DPIC alerts contained zip code
information. A low percentage of zip codes were recorded because the standard protocol used by DPIC
does not require the location to be recorded, as described above. However, these 52 alerts encompassed
70% of all zip codes in Hamilton County; only 19 county zip codes were not included in any alerts. Even
with a low percentage of alerts reporting zip codes, DPIC alerts occurred throughout the county,
indicating comprehensive area coverage for this surveillance tool. There does not appear to be any
clear pattern in the geographical distribution of DPIC alerts within Hamilton County.

Descriptive statistics demonstrating the number of alerts per zip code are provided in Table 7-1.  The
histogram in Figure 7-2 depicts the frequency of alerts per zip code.

Table 7-1. Statistics of Alerts per Zip Code

Parameter   Alerts per Zip Code
Average     4.98
Median      3
Minimum     1
Maximum     39
Figure 7-2. Histogram of Alerts per Zip Code
7.2.2  Spatial Extent of an Alert

Definition:  Spatial extent of an alert describes the geographic area (size) of each DPIC alert.

Analysis Methodology: Statistical analyses of the average, minimum, and maximum number of caller
zip codes in DPIC alerts was performed for the entire evaluation period using the combined surveillance
strategies (statistical, non-statistical, and human surveillance).

Results:  As mentioned in Section 7.2.1, zip code information was available for only 11% of DPIC alerts.
The average, median, and range of zip codes per alert are presented in Table 7-2. The statistics in this
table represent the number of distinct zip codes per alert. For example, if two callers from 45219 were
listed in the same alert, that alert contains one zip  code. Most alerts had between three and seven zip
codes implicated, as shown in the  histogram in Figure 7-3. No alert encompassed more than 11 zip
codes.
Table 7-2. Statistics of Zip Codes per Alert

Parameter   Zip Codes per Alert
Average     4.96
Median      5
Minimum     1
Maximum     11
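A minimal sketch of this tabulation is shown below, assuming each alert record lists the caller zip codes associated with it; duplicate zip codes within an alert are counted once, as described above. The field names and example records are illustrative.

    import statistics

    def zip_codes_per_alert(alerts):
        """Number of distinct caller zip codes in each alert that reported zip codes."""
        return [len(set(a["zip_codes"])) for a in alerts if a["zip_codes"]]

    # Two callers from 45219 in the first alert count as one zip code
    alerts = [{"zip_codes": ["45219", "45219", "45202"]},
              {"zip_codes": ["45224"]},
              {"zip_codes": []}]  # alerts without zip code information are excluded
    counts = zip_codes_per_alert(alerts)
    print(statistics.mean(counts), statistics.median(counts), min(counts), max(counts))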

[Figure 7-3. Histogram of Zip Codes per Alert]

7.3    Design Objective: Contaminant Coverage

The DPIC surveillance tool monitors calls from persons that may signal a public health incident, including water contamination, as well as calls from healthcare providers who are treating exposed persons. For
DPIC calls, contaminant coverage is dependent on the health-seeking behaviors following symptom
presentation, as described in Section 3.3.  Simulation study results from the simulated Astute Clinician
monitoring (via patients being treated by primary care physicians or ED physicians) are also presented in
this section. In order to evaluate how well the DPIC surveillance tool and Astute Clinician monitoring
met this design objective, contamination scenario coverage was evaluated.  The following subsection
defines the metric, describes how it was evaluated, and presents the results.

7.3.1  Contamination Scenario Coverage

Definition: Contamination scenario coverage is defined as the ratio of contamination incidents that are
actually detected to those that are theoretically detectable based on the design of the DPIC surveillance
tool. Detectable contamination scenarios included those which originated at distribution system attack
nodes rather than facility attack nodes, and those that were assumed to result in calls to DPIC (i.e., rapid
symptom onset, unusual symptoms). No calls to DPIC were assumed for individuals exposed to Toxic Chemical 8 or Biological Agents 3, 4, 5, 6 and 7. For the Astute Clinician monitoring, all
contamination scenarios that originated at distribution system attack nodes were theoretically detectable.

Analysis Methodology:  Since no water contamination incidents occurred during the evaluation period,
simulation study results were utilized to quantify this metric. The ratio of scenarios actually detected to
those that were theoretically detectable  (based on the assumptions regarding health-seeking behavior that
were parameterized in the model) was calculated for each contaminant. Additionally, the average and
median number of cases at the time of detection was calculated for each contaminant. Certain
contamination scenarios that were not theoretically detectable were screened out of the analysis including
those that originated at facility attack nodes (which were detected by the ESM component) and those
which involved the nuisance chemicals.

Results:  The DPIC surveillance tool (which was modeled based on DPIC's volume-based clinical effects
algorithm, and DPIC's active human surveillance) detected 85% (717 scenarios) of the theoretically
detectable scenarios (846 scenarios). Table 7-4 below shows the detection statistics for the DPIC
surveillance tool for each contaminant.

Table 7-4.  DPIC Detection Statistics

Contaminant           Scenarios        Scenarios       Percent    Average # Cases at   Median # Cases at
                      Detected         Not Detected    Detected   Time of Detection    Time of Detection
Toxic Chemical 1      94               0               100%       247                  136
Toxic Chemical 2      16               78              17%        431                  322
Toxic Chemical 3      58               36              62%        516                  421
Toxic Chemical 4      94               0               100%       728                  480
Toxic Chemical 5      94               0               100%       519                  265
Toxic Chemical 6      90               4               96%        5,332                4,028
Toxic Chemical 7      94               0               100%       848                  478
Toxic Chemical 8      Not detectable   -               -          -                    -
Biological Agent 1    94               0               100%       308                  156
Biological Agent 2    83               11              88%        126                  91
Biological Agent 3    Not detectable   -               -          -                    -
Biological Agent 4    Not detectable   -               -          -                    -
Biological Agent 5    Not detectable   -               -          -                    -
Biological Agent 6    Not detectable   -               -          -                    -
Biological Agent 7    Not detectable   -               -          -                    -
The DPIC surveillance tool generally had a high detection rate across almost all contaminants, with 100%
detection for five of the eleven contaminants and another two contaminants at 88% and 96%. There was
a noticeably lower detection rate for Toxic Chemical 2 (17%) and Toxic Chemical 3 (62%), which is the
result of the rapid symptom progression that occurs following exposure to these contaminants. Exposed
individuals proceed quickly to the severe symptom level, at which time urgent treatment is pursued (i.e.,
call 911 to request EMS transport to the ED, or self-transport to the ED). Accordingly, only a small
percentage of cases call DPIC at the lower symptom level before proceeding to moderate and severe
symptoms, which results in lower detection rates for these contaminants.

Additionally, for most contaminants, only several hundred cases on average had occurred at the time of detection, which demonstrates the limited number of calls to DPIC required to produce an alert (i.e., the
scenarios did not progress for a long time prior to detection).

The Astute Clinician monitoring (which was conducted via monitoring the number of cases seen by
primary care physicians or ED physicians) detected 99.5%  (1,395 scenarios) of the theoretically
detectable scenarios (1,402 scenarios). Table 7-5 below shows the detection statistics for the Astute
Clinician monitoring for each contaminant.

Table 7-5.  Astute Clinician Detection Statistics

Contaminant           Scenarios   Scenarios       Percent    Average # Cases at   Median # Cases at
                      Detected    Not Detected    Detected   Time of Detection    Time of Detection
Toxic Chemical 1      93          1               99%        429                  344
Toxic Chemical 2      94          0               100%       246                  142
Toxic Chemical 3      94          0               100%       276                  119
Toxic Chemical 4      94          0               100%       770                  480
Toxic Chemical 5      94          0               100%       534                  265
Toxic Chemical 6      94          0               100%       3,760                2,322
Toxic Chemical 7      94          0               100%       1,001                625
Toxic Chemical 8      94          0               100%       2,121                1,706
Biological Agent 1    94          0               100%       332                  167
Biological Agent 2    94          0               100%       115                  88
Biological Agent 3    94          0               100%       21,213               1,647
Biological Agent 4    94          0               100%       284                  247
Biological Agent 5    94          0               100%       435                  208
Biological Agent 6    85          3               97%        18                   17
Biological Agent 7    89          3               97%        38                   33
The Astute Clinician surveillance tool had a high detection rate, at or above 97% for all contaminants.
For all contaminants, the CWS model was parameterized such that it does not take many cases for
identification of a contaminant by an astute clinician.  The contaminants included in the model produce
very unusual symptoms (and in the case of the toxic chemicals, rapid symptom onset), which allows for a
more efficient clinical interpretation by an astute clinician familiar with chemical poisonings and waterborne or infectious diseases. For the few scenarios where detection did not occur for Biological Agents 6 and 7, either very few or no individuals were infected.
7.3.2   Summary
The contamination scenario  coverage results from the simulation study demonstrate that the DPIC
surveillance tool and Astute  Clinician monitoring are able to frequently and quickly detect a broad range
of contaminants. While both of these surveillance strategies proved  effective through analysis of
simulation study results, the monitoring conducted by astute clinicians in real-world situations provides
broader and more  reliable (sensitive) contaminant coverage, as DPIC detection is limited to contaminants
with rapid symptom onset which produce unusual symptoms in a short period of time. Furthermore,
detection by DPIC is dependent on calls being placed to the poison control hotline whereas active
monitoring by astute clinicians is conducted continually during treatment of patients at doctor's offices
and at the ED.

7.4     Design Objective: Alert Occurrence

Alert occurrence addresses how well the DPIC surveillance tool performs by describing the volume of
alerts that occurred, and the  number of these alerts that were valid (i.e., public health incident, including
possible water contamination). It should be noted that no valid alerts occurred during the evaluation
period of the DPIC surveillance tool. Analyses conducted and presented for the contamination scenario
coverage metric reflect the occurrence of valid alerts in the  simulation study (Section 7.3.1). Thus, to
characterize this design objective, invalid alerts were evaluated. The following subsection defines the
metric, describes how it was evaluated and presents the results.
7.4.1   Invalid Alerts
Definition: Invalid alerts include any alert generated by the DPIC surveillance tool that is determined as
not related to  a public health incident, including water contamination, following alert investigation.

Analysis Methodology: The total number of invalid alerts is equal to the number of total alerts minus
the number of valid alerts. These invalid alerts were quantified by month and analyzed statistically for
the entire evaluation period.

Results: A total of 486 invalid alerts occurred, with an average of 16.7 alerts per month.  As seen in
Figure 7-4, the  number of invalid alerts fluctuated by month, ranging from 4 to 41 alerts.  The high
number of alerts in the August 2009 reporting period was due to 19 alerts  occurring on August 18, 2009;
although this is  an unusually high number of alerts in one day, they were not related.  There did not
appear to be any seasonal patterns.
[Figure 7-4. DPIC Invalid Alerts per Reporting Period]

7.5    Design Objective: Timeliness of Detection
7.5.1   Time for Data Transmission
Definition:  Time for data transmission measures the amount of time it takes collected call data to be
uploaded into the NPDS system, from which point it is available for analysis via algorithms applied to the
toxicosurveillance categories.

Analysis Methodology: Time for data transmission was summarized, as reported by NPDS.

Results: NPDS collects data in near real-time (< 1 minute) and no interruptions in actual data
transmission were recorded during the evaluation period.

7.5.2   Time for Event Detection
Definition:  Time for event detection describes the time required for the DPIC surveillance tool to
generate an alert using the statistical, non-statistical and human surveillance analysis methods. This is the
time for analysis of data and generation of a result.

Analysis Methodology: Since no documented data on time for event detection was collected, a
qualitative characterization of time for event detection was performed based on feedback from DPIC
personnel for each of the surveillance tools.

Results: For the human exposure call volume and clinical effect count algorithms, the NPDS analysis
module includes a latency period of four hours to allow adequate time for call details to be completely
entered into  the system.  This latency period begins after the defined surveillance window, as set by the
user in NPDS. For example, if the surveillance window was set for 1 to 2 pm, calculations for that period
will not be performed until 6 pm. Once calculations begin, they are completed in near real-time. Since the
latency period applies to all statistical calculations, the time for event detection is consistently four hours
for the human exposure call volume and clinical effect count algorithms. In contrast, NPDS is programmed to
generate alerts immediately (no latency period) under the non-statistical case-based definition for any case
entered into the Toxicall® database that matches the definition criteria. Therefore, the time for event
detection using non-statistical surveillance is near real-time.
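
A minimal sketch of the latency logic described above, assuming the surveillance window is expressed as a
clock time (the function and variable names are illustrative, not NPDS parameters):

    from datetime import datetime, timedelta

    LATENCY = timedelta(hours=4)  # latency period applied to the statistical algorithms

    def earliest_calculation_time(window_end: datetime) -> datetime:
        """Statistical calculations for a surveillance window begin only after the
        four-hour latency period that follows the end of that window."""
        return window_end + LATENCY

    # Example from the text: a 1 to 2 pm surveillance window is not analyzed until 6 pm.
    window_end = datetime(2009, 8, 18, 14, 0)            # 2:00 pm
    print(earliest_calculation_time(window_end).time())  # 18:00:00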

For the human surveillance method, the time for event detection  is approximately 15 minutes for
household calls and 45 minutes for physician calls.  For household calls, DPIC interacts with the caller for
approximately 15 minutes prior to flagging an alert for additional investigation by senior personnel if
water contamination is suspected. For calls received from ED physicians, DPIC interacts with the
physician for an average of 45 minutes during the investigation.  Therefore, the time for event detection
for either household or hospital calls during a possible water contamination incident is expected to be less
than one hour. An overview  of the results is found  in Table 7-7.

Table 7-7. Time for Event Detection
Surveillance Method                                        Latency Period?   Time for Event Detection
Statistical (human exposure call volume and
  clinical effect count algorithms)                        Yes               4 hours
Non-statistical (case-based definition)                    No                Near real-time
Human                                                      No                15 or 45 minutes

7.5.3  Time for Alert Recognition

Definition: Time for alert recognition quantifies the time for DPIC staff to recognize the email alert and
begin the investigation.  This portion of the timeline begins when an alert is generated by the NPDS
algorithms and notification is sent via email to public health personnel, and ends  when public health
personnel recognize receipt of the alert.

Analysis Methodology: Statistical analysis (average, median, and range) of time for alert recognition
was performed for each month for the combined surveillance tools, as collected from the investigation
checklists. Calculations were also performed for the evaluation period as a whole.

Results:  The average time for recognition of DPIC alerts was approximately 10  hours during most
reporting periods (Figure 7-5).
Figure 7-5. Average Time to Recognize DPIC Alert by Month

Overall statistics are presented in Table 7-8. The difference between the overall average (54.2 hours) and
median (11 hours) values reflects the relatively long times for alert recognition at the beginning of the
evaluation period. During this time, participants were not expected to investigate alerts in real-time;
therefore, investigations may have been delayed until personnel had time available to perform them.

Table 7-8.  Time to Recognize DPIC Alert (Hours)
Parameter    Time (hours)
Average      54.2
Median       11
Minimum      <1
Maximum      426

7.5.4   Time to Investigate Alerts
Definition: Time to investigate alerts includes the portion of the incident timeline that begins with the
recognition of a DPIC alert, and ends with a determination regarding whether or not contamination is
possible. The time to investigate alerts, as captured in the investigation checklists, is based on the nature
of the alert details and the investigation procedures that must be implemented before concluding that the
alert is not indicative of a possible contamination incident. For PHS drills and the simulation study, this
data represents the timeline from the contaminant injection to the time that contamination is deemed
possible. As noted in Section 3.3, no time delay for alert recognition was parameterized in the CWS
model as it was assumed  that alert investigations occurred immediately upon receipt of alerts based on the
nature of the underlying case data (i.e., similar symptom categories and case clustering).

Analysis Methodology:  Analysis of invalid alerts recorded during the evaluation period was performed
to calculate the overall time, as well as average, median and range of times as listed in the investigation
checklists for the combined DPIC surveillance tools.  Information on investigation time from PHS drills
was used to describe time to investigate simulated DPIC alerts that were ultimately determined to be
possible contamination incidents.

Timeline data gathered from investigation of DPIC alerts during PHS drills was used to parameterize the
investigation time for DPIC alerts in the simulation study. Simulation study results from the simulated
DPIC case based statistical surveillance and Astute Clinician monitoring (via patients being treated by
primary care physicians or ED physicians) are also presented in this section. Simulation study timeline
data (which, as noted above, started at the time of contaminant injection) was evaluated to illustrate the
timeliness of detection overall and for scenarios initiated at periods of high or low demand.  Percentile
values were calculated to examine the distribution of data, and are presented in a box-and-whisker plot.
Average detection times were calculated for individual contaminants, as well as for scenarios initiated at
high or low demand periods for individual contaminants.
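
As a rough illustration of the percentile analysis, the sketch below (hypothetical detection times, in
minutes from injection) computes the values typically plotted in a box-and-whisker chart for scenarios
grouped by the demand period at injection:

    import numpy as np

    # Hypothetical detection times (minutes from contaminant injection to detection).
    detection_times = {
        "low demand": np.array([310, 420, 479, 530, 610, 700]),
        "high demand": np.array([60, 85, 102, 130, 180, 240]),
    }

    for group, times in detection_times.items():
        # Values shown in a box-and-whisker plot: minimum, quartiles, median, maximum.
        p0, p25, p50, p75, p100 = np.percentile(times, [0, 25, 50, 75, 100])
        print(f"{group}: min={p0:.0f}, q1={p25:.0f}, median={p50:.0f}, "
              f"q3={p75:.0f}, max={p100:.0f}, mean={times.mean():.0f}")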

Results: The results presented below are arranged in order of empirical data, drill data, and simulation
study data.

Empirical Data
Figure 7-6 shows the average invalid alert investigation time for each monthly reporting period. Some
long investigation times were recorded during the development and testing phase  (early in the evaluation
period) when investigators were not expected to respond immediately to alerts.
Figure 7-6. DPIC Average Invalid Alert Investigation Time (n=486, empirical data)

Time for alert investigation decreased considerably after the first six months of alert investigation. The
decrease is most likely due to the transition from the development and testing phase to the "go live" phase
after the first six months. It may also be due to staff becoming more familiar and comfortable with the
investigation process, and hence completing alert investigations more expediently.

Statistics for time for alert investigation over the entire evaluation period are shown in Table 7-9.

Table 7-9. DPIC Invalid Alert Investigation Time (minutes, empirical data)
Parameter    Time (minutes)
Average      14
Median       15
Minimum      2
Maximum      180
Drill Data
The investigation of a simulated DPIC alert was characterized by performing drills and exercises. During
PHS Drill 2 (July 28, 2009), a simulated call to DPIC from a daycare facility reported symptoms caused by
water contamination with a toxic chemical. This investigation also involved a simulated alert
generated from the 911 surveillance tool.  In this instance, a Possible contamination determination was
reached after approximately 1.5 hours. While this is a reasonable estimate for how long it might take to
investigate valid alerts, the actual investigation time during a "live" incident may vary depending on other
factors (e.g., personnel availability). The timeline for the July 2009 PHS drill is presented below in
Figure 7-7, which displays some of the key points of the investigation following receipt of a simulated
DPIC alert.


00:00 - DPIC receives reports of GI symptoms at a day care; DPIC begins investigation
00:26 - WQM station alert received
00:29 - DPIC determines water contamination is likely
00:30 - 911 alert received
00:39 - DPIC activates the communicator
00:42 - Communicator discussion begins
01:01 - WUERM considers contamination possible and suspects a chemical contaminant
01:33 - Consensus determination: contamination Possible

Figure 7-7. PHS Drill 2 Timeline (DPIC Alert)
Simulation Study Data
Figure 7-8 demonstrates the overall timeliness of detection statistics for the DPIC surveillance tool and
for scenarios initiated at low and high demand periods, using percentile values to illustrate the distribution
of data in a box-and-whisker plot. The impact of the time delay between injection and exposure is
noticeable, with an approximately 6-hour difference in average detection times between scenarios initiated
at high and low demand periods. For scenarios started at a low demand period (injection at 12:00 am),
exposures do not occur until seven hours after the injection time, unlike scenarios initiated at high demand
(9:00 am), which could result in exposures soon thereafter at the 9:30 am or 12:00 pm exposure events.
Figure 7-8. DPIC Data Stream Timeliness of Detection (simulation study data)

Table 7-10.  DPIC Data Stream Timeliness of Detection (simulation study data)
Scenarios      Count   Average (minutes)   Median (minutes)
Total          717     263                 137
Low Demand     173     535                 479
High Demand    544     177                 102
The low and high demand scenarios are further compared in Figure 7-9 below, where contaminants are
arranged in increasing order of timeliness of detection across the total set of detected scenarios. No
average detection time is presented for low demand scenarios where the DPIC surveillance tool did not
detect the one theoretically detectable scenario (Toxic Chemical 2) or where no low demand scenarios were
included in the model runs (e.g., Biological Agent 2).
Figure 7-9. DPIC Surveillance Tool Timeliness of Detection (simulation study data)

The differences in timeliness of detection by the DPIC surveillance tool (a range from ~100 minutes to
~500 minutes was observed, as shown in Figure 7-9) are likely due to a variety of factors, including the
dose required to produce symptoms, symptom onset time, the number of cases required to exceed the
DPIC statistical or human surveillance thresholds, and the number of high or low demand scenarios
modeled for each contaminant (which affects time delays to exposure).

Figure 7-10 demonstrates the overall timeliness of detection statistics for Astute Clinician monitoring and
for scenarios initiated at low and high demand periods, using percentile values to illustrate the distribution
of data in a box-and-whisker plot. As with all of the previous surveillance tools, the Astute Clinician had
a longer average detection time for the low demand scenarios (~39 hours) than for the high demand scenarios
(~11 hours), accounted for by the increased amount of time between injection of the contaminant and
exposures.
   10000
    1000
     100
              Astute Clinician
             Surveillance Tool
 Astute Clinician
Surveillance Tool
  (low demand)
 Astute Clinician
Surveillance Tool
 (high demand)
Figure 7-10. Astute Clinician Data Stream Timeliness of Detection (simulation study data)

There were a total of 1,395 scenarios (99.5%) detected by Astute Clinician monitoring with an average
detection time of 1,340 minutes (Table 7-11).  Low demand scenarios were detected on average 2,341
minutes after the time of contaminant injection, whereas high demand scenarios were detected on average
669 minutes following contaminant injection.

Table 7-11. Astute Clinician Data Stream Timeliness of Detection (simulation study data)
Scenarios      Count   Average (minutes)   Median (minutes)
Total          1,395   1,340               405
Low Demand     506     2,341               1,870
High Demand    835     669                 195
The low and high demand scenarios were further compared to observe differences between the toxic
chemicals and biological agents. For most biological agents, detection required a much longer time, as seen
in Figure 7-11 below, where contaminants are arranged in increasing order of timeliness of detection across
the total set of detected scenarios. The averages for low or high demand scenarios are not presented where
no data was available. This figure demonstrates the impact of symptom onset timing on timeliness of
detection. Contaminants with a slower symptom onset result in a longer time delay prior to detection (days
to weeks). Furthermore, compared to the toxic chemicals, a greater number of cases are required for
detection by an astute clinician for some biological agents that produce non-specific symptoms following
exposure. This translates to a longer delay prior to detection. The model was parameterized in
this manner based on input from toxicological subject matter experts from DPIC.
Figure 7-11.  Astute Clinician Data Stream Timeliness of Detection (simulation study data)
7.5.5  Summary

The timeliness of detection for the DPIC surveillance tool was fairly consistent. DPIC data collection and
transmission occurs in near real-time (<1 minute) for statistical, non-statistical, and human surveillance.
Time for event detection was approximately 4 hours for statistical surveillance (due to the latency period
built into NPDS), near real-time for non-statistical surveillance, and 15 or 45 minutes for human
surveillance (depending on the source of the phone call). After the first six months of the evaluation
period, time for alert recognition stabilized at around 11 hours per alert, with most alerts recognized
within 24 hours. The majority of invalid alert investigations took approximately 15 minutes to complete;
valid alerts may take longer to investigate (~1.5 hours) based on observations during drills and exercises.

Simulation study data analysis showed that for chemical contamination scenarios, case counts exceeded
the detection thresholds for the DPIC surveillance tool and Astute Clinician monitoring within hours (for
the high demand scenarios). Days to weeks elapsed before detection of some biological agents and toxic
chemicals.

7.6    Design Objective:  Operational Reliability

Analysis of the operational reliability of the DPIC surveillance tool addresses functional aspects of the
tool and quantifies the percent of time that the DPIC surveillance tool was working as designed.  In order
to evaluate how well the DPIC surveillance tool met this design objective, the availability metric was
evaluated. The following subsection defines the metric, describes how it was evaluated and presents the
results.

7.6.1  Availability
Definition:  Availability is the amount of time the DPIC surveillance tool is functional and accessible,
expressed in terms of the percent of usable data hours per reporting period. In order for usable data to be
generated for the DPIC surveillance tool, data must be successfully entered into the NPDS system;
analyzed using statistical, non-statistical, or human surveillance; and any alert information made available
to DPIC personnel.

Analysis Methodology:  The percent availability was calculated based on total hours available for each
of the various DPIC surveillance strategies. Instances of downtime were reported by DPIC personnel and
categorized by detection methodology (statistical, non-statistical, or human surveillance).  Because
downtime was not reported by month, statistics are presented for the entire evaluation period only.

Results:  A back-up generator at DPIC ensures that data systems are available at all times, and are not
affected by power outages. Therefore, instances of unavailability (for statistical and non-statistical
surveillance) generally occur only when NPDS is unavailable, such as during predictable quarterly
outages when the NPDS system is being updated. These system upgrades caused temporary unavailability
for the statistical and non-statistical surveillance categories.
Upgrades take approximately 12 hours to execute, and thus amount to 48 hours of downtime per year.

Following one system upgrade, the case-based surveillance tool (i.e., non-statistical surveillance) was
inactive for three weeks in addition to the quarterly updates, amounting to 3.2% of the total evaluation
period. Since human surveillance can occur even in the absence of the NPDS, it was always available.
The percent  of availability by surveillance  category can be seen in Table 7-12.

Table 7-12.  DPIC Availability
Surveillance Method            Downtime (weeks)   Total Weeks   Percent Availability   Percent Unavailable
Statistical Surveillance       0.64               115           99.4%                  0.6%
Non-statistical Surveillance   3.64               115           96.8%                  3.2%
Human Surveillance             0                  115           100.0%                 0.0%
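
The percentages in Table 7-12 follow directly from downtime over the 115-week evaluation period; a minimal
sketch of that arithmetic (downtime values as reported above) is shown below:

    EVAL_WEEKS = 115  # length of the evaluation period, in weeks

    # Downtime reported by DPIC personnel, in weeks, by surveillance method.
    downtime_weeks = {
        "Statistical": 0.64,      # quarterly NPDS upgrades (~12 hours each)
        "Non-statistical": 3.64,  # quarterly upgrades plus a three-week outage
        "Human": 0.0,             # human surveillance does not depend on NPDS
    }

    for method, down in downtime_weeks.items():
        availability = 100.0 * (EVAL_WEEKS - down) / EVAL_WEEKS
        print(f"{method}: {availability:.1f}% available, {100 - availability:.1f}% unavailable")

    # Four ~12-hour upgrades per year amount to roughly 48 hours of downtime annually,
    # or about 0.3 weeks per year (about 0.6 weeks over the 115-week evaluation period).
    print(4 * 12 / (7 * 24), "weeks of upgrade downtime per year")
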
7.6.2  Summary
The DPIC surveillance tool achieved a high percentage of availability during the evaluation period.  Only
one significant downtime event resulted in three weeks of data downtime for the non-statistical
surveillance tool.  Although system upgrades of the NPDS caused temporary data incompleteness, their
effects were minimal.


     Section 8.0:  Performance of the Integrated Component


8.1    Description of the Integrated PHS Component

The integrated PHS component consists of five surveillance strategies (automated surveillance of the 911,
EMS, EpiCenter, and DPIC data streams, plus human surveillance conducted by astute clinicians) working together to detect
public health incidents, including water contamination.  The PHS component relies on the integration of
available data sources and professional expertise within stakeholder agencies to accomplish this goal.
Data sources include CFD (911 calls and EMS runs), hospital EDs and calls to DPIC. During a public
health incident, including water contamination, it is expected that persons experiencing symptoms will
exhibit certain health-seeking behaviors that feed into the various data streams (Section  3.3). The time for
symptom onset and health-seeking behaviors following contaminant exposure affects the data type
collected as well as the timeliness of data receipt.

This section focuses on the performance of the integrated PHS component. Since the effectiveness of the
various surveillance tools working together is not simply additive, the discussion focuses on a holistic
view of component performance. In addition, a discussion of the overall component costs and benefits,
along with utility and public health partner compliance with component protocols is included.

8.1.1  Surveillance Tools Overview

The PHS component required numerous surveillance tools to function together as one cohesive unit.
These surveillance tools were selected to provide a timely method of surveillance for symptoms indicative
of possible water contamination, and to provide sufficient spatial coverage of the GCWW service area.
Each surveillance tool contains appropriate algorithms to provide spatial and temporal analysis of data
trends; these analyses and data can be accessed by appropriate personnel through computer interfaces (see
Table 2-1).

Data collection and analysis within the PHS component is designed such that the various PHS tools are
complementary. For example, although 911 call data had the quickest transfer rate to public health
partners (Section 8.4), the alerts do not offer as much case record detail as EMS alerts. Utilizing these
data streams in tandem allows for broader coverage of the design objectives by having one surveillance
tool "cover" in places where another surveillance tool is lacking. Analysis of the PHS surveillance tools
coupled with professional and institutional knowledge from investigative personnel provides for a holistic
view of the population and improved situational awareness to detect public health anomalies.

Each of the surveillance tools within the PHS component deals with symptoms or health-related
complaints from the public. These symptoms are often distilled into categories, or syndromes, for easier
classification and analysis. In the case of the 911 data stream, symptoms as described to the dispatch
operator are matched to incident codes using the Computer Aided Dispatch system. A cross-walk of the
syndromes from the various surveillance tools can be found in Table 8-1. It should be noted that
syndromes are not mutually exclusive, and one symptom can be classified under more than one
syndrome. The DPIC surveillance tool is not included in this table as DPIC's approach to statistical
surveillance does not rely upon syndrome categories.

Table 8-1.  Comparison of Syndromes from PHS Surveillance Tools
911 (Incident
Codes)
Eye problem1
Possible stroke
Headache
Abdominal pain
Sick
Abdominal pain
Hemorrhage
Fainting
Headache
Possible stroke
Burn/blister2
Allergies
Asthma
Breathing problem
Chest pain
Inhalation1
Chest pain
Sick
Unconscious
Person down1
Fainting
Heart problem
Abdominal pain
Seizure
Convulsions
EMS Syndrome
Neurological
N/A
Gastrointestinal
N/A
Neurological
Psychological
N/A
Upper Respiratory
Cardiac
Water
Epicenter
Syndrome
Botulinic
Constitutional
Gastrointestinal
Hemorrhagic
Neurological
Rash
Respiratory
N/A
N/A
1 These incident codes were filtered for analysis during the first portion of the evaluation period. Upon conducting an
exercise with CFD dispatch operators, they were later removed from the analysis as they were determined not to be
relevant to possible water contamination.
2 These incident codes were added to the group of codes being filtered for analysis during the latter portion of the
evaluation period.  The exercise conducted with CFD dispatch operators demonstrated their relevance to possible
water contamination.

The PHS component differs from other components in that the surveillance tools are not all managed
or monitored by a single agency. PCC call data, for example, is collected and investigated solely by
DPIC; results from this surveillance tool are then communicated to other public health partners. Because
of this design, effective communication between personnel involved with each of the surveillance tools is
crucial to a functional component, and communication protocols were continuously improved
throughout the evaluation period. This included development of the "communicator" protocol and regular
User's Group meetings.
8.1.2  Analysis Methodology
The PHS component consists of multiple surveillance tools and personnel in numerous locations. In
some instances, separate agencies are responsible for investigation of different surveillance tools. In
addition, no two data streams share the same data transmission methods or event detection systems.
Therefore, a detailed performance evaluation was conducted for each metric and each surveillance tool, as
presented in Sections 4.0 - 7.0. However, because the PHS surveillance tools were designed to work
synergistically, evaluation of the integrated component in this section allows for discussion and analysis
of overall PHS component performance.

For the integrated PHS component, the analysis methodology considers the collective performance of the
various surveillance tools functioning as a whole. Included in this evaluation are the design objectives
used at the surveillance tool level as they apply to the comprehensive component. Quantitative measures

derived from empirical data in the surveillance tool evaluations are included to demonstrate how the
various parts of the component work together as one cohesive unit. In addition to empirical data and
observations gleaned during the evaluation period, simulation study data was analyzed to understand
component performance by challenging the CWS model with an ensemble containing thousands of
contamination scenarios.

Evaluation of the integrated component utilizes quantitative measurements from the surveillance tool
evaluations to develop a qualitative view of how the PHS component operates to identify public health
incidents, including water contamination. The integrated evaluation uses results from empirical data
analysis, PHS drills, PHS forums and the simulation study to characterize the performance of the
integrated PHS component.

8.2    Integrated Design  Objective: Spatial Coverage

Spatial coverage is the cumulative area of the distribution system covered by the PHS component, as
dictated by the spatial coverage of the surveillance tools. Adequate spatial coverage ensures that the PHS
component is useful for detecting possible contamination affecting any size area in the entire GCWW
service area. Metrics used to evaluate how  well the PHS component met this design objective include
area and population coverage, and the spatial extent of an alert. Available location data was analyzed
statistically and spatially for each of the PHS surveillance tools and these results were compiled to
ascertain how they work as an integrated system. An overview of results is presented in Table 8-2.

Table 8-2. Evaluation of Spatial Coverage Metrics


Theoretical Spatial Coverage
  911: Covers 22% of the GCWW service area
  EMS: Covers 22% of the GCWW service area
  EpiCenter: Covers 95% of the GCWW service area
  DPIC: Covers 100% of the GCWW service area
  Integrated Component: Entire utility covered by the PHS surveillance tools

Metric #1: Area and Population Coverage
  911: Alerts concentrated in areas with higher population densities
  EMS: Alerts concentrated in areas with higher population densities
  EpiCenter: Alerts, by nature, indicate a county-wide rise in a certain syndrome category
  DPIC: Alerts occurred in 70% of Hamilton County zip codes, distributed throughout the county
  Integrated Component: Alert locations covered the majority of the GCWW service area; location and
    frequency affected by population density

Metric #2: Spatial Extent of an Alert
  911: Most alerts encompass 10 kilometers or less
  EMS: Zip codes with >1 EMS run per alert were concentrated in downtown
  EpiCenter: Empirical data not available
  DPIC: Alert "clusters" affected by hospital location
  Integrated Component: Breadth of alerts affected by underlying factors such as population density and
    hospital location
Spatial coverage for PHS is adequate, provided that investigators have some understanding of weaknesses
in the underlying data. The spatial distribution and spatial extent of alerts are both affected by
underlying factors, such as population density and hospital location, for some but not all of the
surveillance tools. Alerts for 911, EMS, and DPIC's statistical surveillance are generated according to
algorithms that do not take population density into account. Therefore, according to business rules, an
alert can be issued for the same number of cases regardless of whether they are located in a densely or
sparsely populated neighborhood. Investigators need to rely on institutional knowledge of the population

to determine if the distribution and extent of alerts suggests possible water contamination, and can also
consider other alert details (i.e.,  syndrome categories) to predict the likelihood of contamination.

Granularity in the available spatial information also affects the interpretation of location and extent of
alerts.  For the statistical surveillance tools, with the exception of 911 alerts (which identify clusters based
on latitude and longitude), the smallest geospatial unit utilized for alerts is the zip code. While zip codes
are a standard location categorization, they vary in size and can encompass large areas. Depending on
where EMS runs occurred, they could be in close proximity but not register an alert because they were in
separate zip codes. For the human surveillance capability provided by DPIC, the smallest data unit for an
alert may be a single call from a household or ED physician.  Again, having numerous surveillance
streams functioning as an integrated component improves spatial coverage. Cross-referencing alerts from
the PHS data streams can provide a more complete spatial picture.

There is evidence that the integrated component provides sufficient spatial coverage for identification of
possible contamination as alerts observed during the evaluation period from the various PHS surveillance
tools occurred throughout the GCWW service area.  Furthermore, one of the enhancements introduced for
the 911 and EMS alerts included a Google Earth mapping feature to pinpoint the location of 911 calls and
EMS runs which contributed to  alerts.  This capability, which allows visualization of the overall spatial
picture, was identified as a useful feature by the public health partners. During drills and exercises, public
health personnel were quick to identify alert clustering and apply knowledge of the area affected during
their discussions. For example,  during a drill conducted in August 2009, it was quickly noted that a DPIC
alert was in close proximity to a 911 alert with similar symptoms. In addition, spatial clustering (or lack
thereof) was consistently used during investigations to rule out possible water contamination. This
suggests that  investigators were  able to utilize the tools effectively to make decisions based on spatial
considerations.

8.3     Integrated Design Objective: Contaminant Coverage

Performance  for this integrated design objective depends on the detection capabilities of each of the
surveillance tools used within the PHS component. The ability of these tools to detect possible water
contaminants depends on a variety of elements including the  type of contaminant, the nature and timing
of symptoms  produced by the contaminant, and the health-seeking behavior of exposed individuals, as
summarized in Section 3.3. Although empirical data was not available to characterize the detection
capabilities of the PHS surveillance tools, simulation study results allowed  for an analysis of
contamination scenario coverage overall and for each surveillance tool.  As a whole, the PHS component
detected 99.5% of theoretically detectable contamination scenarios. Table  8-3 presents the detection
rates for each of the PHS surveillance tools for the respective sets of theoretically detectable
contamination scenarios.

Table 8-3.  Evaluation of Contaminant Coverage







Metric #1: Contamination Scenario Coverage
  911: 80% detection (702 theoretically detectable contamination scenarios)
  EMS: 69% detection (702 theoretically detectable contamination scenarios)
  EpiCenter: 71% detection (1,402 theoretically detectable contamination scenarios)
  DPIC: 85% detection (846 theoretically detectable contamination scenarios)
  Astute Clinician: 99.5% detection (1,402 theoretically detectable contamination scenarios)
  Integrated Component: Detected 99.5% of theoretically detectable contamination scenarios in the
    simulation study, which included toxic chemicals and biological agents





The collective surveillance capabilities of the surveillance tools allowed for successful detection of a
variety of chemical and biological contamination scenarios throughout the GCWW service area, based on
integrated component design. The detection rates across the surveillance tools demonstrate that each tool
was able to detect at least 69% of the scenarios that were theoretically detectable. While the detection
capabilities cannot be compared across all of the tools, as there was a different set of theoretically
detectable scenarios for each tool (based on spatial coverage), some comparisons can be made
between the 911 and EMS surveillance tools (which shared the same set of theoretically detectable
scenarios within the city of Cincinnati limits).

The EMS detection percentage (69%) was somewhat lower than the 911 surveillance tool detection
percentage (80%). This is likely the result of some patients deciding on self-transport after
calling 911 if an EMS unit had not arrived after a certain amount of time. This resulted in fewer EMS
cases being logged and available for statistical analysis, whereas a case record was always recorded for all
individuals who called 911. Secondly, following exposure to some of the contaminants, it is likely that
some individuals called 911 to request medical assistance and then died prior to the time that an EMS unit
arrived. This pattern likely resulted in fewer EMS cases being logged in comparison to 911, and therefore
lower detection rates.

8.4     Integrated Design Objective: Alert Occurrence

Alert occurrence addresses component performance by describing the volume of alerts that occurred, and
the number of these  alerts that were invalid or valid (public health incident, including water
contamination). In this way, the design objective describes how well the PHS  surveillance tools acting as
an integrated component discriminated between valid alerts and normal variability in the data. In order to
evaluate how well the PHS component met this design objective, invalid and valid alerts were evaluated
using empirical data and simulation study data.

Table 8-4.  Evaluation of Alert Occurrence


Metric #1: Invalid Alerts
  911: Application of new business rules in May 2009 dramatically reduced the number of invalid alerts;
    approximately 4 false alerts per year are now expected
  EMS: Application of new business rules in May 2009 dramatically reduced the number of invalid alerts;
    approximately 6 false alerts per year are now expected
  EpiCenter: 3 or fewer invalid alerts during most monthly reporting periods; alerts affected by data
    reporting issues
  DPIC: 10-20 invalid alerts during most monthly reporting periods
  Integrated Component: Adjustments to business rules reduced invalid alerts to manageable levels for the
    911 and EMS surveillance tools

Metric #2: Valid Alerts
  911: Empirical data not available
  EMS: Successfully detected the H1N1 outbreak, a heat-related public health incident, and an increase in
    allergies
  EpiCenter: Capable of detecting public health incidents, in particular the H1N1 outbreak
  DPIC: Empirical data not available
  Integrated Component: Despite the fact that no water contamination incidents occurred, two of the
    surveillance tools demonstrated the ability to detect public health incidents during the evaluation period
The surveillance tools are designed to operate cooperatively for increased overall detection capability by
the PHS component. Potential insufficiencies in one surveillance tool can be offset by the integrated
system's ability to detect possible health incidents and issue an alert. For example, while the DPIC
surveillance tool is well suited for identifying instances where individuals experience highly unusual
symptoms with a rapid onset time, it is unlikely to detect a rise in illnesses whose symptoms develop slowly
over a longer period of time. Therefore, the detection capability offered by the EpiCenter surveillance tool can
compensate for the inability of the DPIC surveillance tool to detect illnesses with a slow onset time.

Historical analysis of surveillance tool alerting trends is useful in observing overall performance of the
integrated system, such as the number of alerts expected per month as well as the co-occurrence of alerts
during real public health incidents. A chart of surveillance tool alerts per month can be seen in Figure 8-
1 for the integrated component.  The vast majority of these alerts were invalid, and the overall number of
911 and EMS alerts dropped noticeably after the modification to alerting thresholds was implemented in
the May 2009 reporting period.  The EpiCenter alerts were categorized and  depicted separately as either
valid or invalid to demonstrate the noticeable uptick in valid alerts during the influenza outbreak between
August and October of 2009.
Figure 8-1.  Alerts per Month for the Integrated PHS Component

Co-occurrence of valid EMS and EpiCenter alerts during the August and September 2009 reporting
periods demonstrates the ability of more than one PHS surveillance tool to indicate a potential public
health incident. The actions taken by investigators responsible for interpreting alert data following receipt
of these alerts demonstrate their ability to identify valid alerts based on the case details. These
concurrent alerts are summarized in Table 8-5.

Table 8-5.  Concurrent PHS Alerts (empirical data)

Example 1: EMS/EMS/EpiCenter
  EMS alert: 8/9/09 2:12 AM
  EpiCenter: Email from ODH 8/9/2009; 8/10/09 11:03 AM alert (constitutional)
  Resolution: Following communicator activation, resolved as a heat-related alert

Example 2: EMS/EpiCenter
  EMS alert: 10/14/09 3:14 AM
  EpiCenter: 10/14/09 5:03 PM and 10/15/09 9:09 AM alerts (both respiratory)
  Resolution: EpiCenter and EMS alerts due to respiratory symptoms; resolved as due to H1N1 activity
Example 1 consisted of two EMS alerts, which prompted activation of the communicator protocol.
Included in the communicator discussion was an e-mail received from ODH indicating a high number of
elderly persons reporting weakness; this was consistent with data in the EMS alerts.  Because of recent
high temperatures, the alerts were resolved as heat related. A few hours later, EpiCenter issued a
constitutional alert, which encompassed the chief complaints within the EMS alerts.  Example 2 depicts
the PHS system identifying a documented health outbreak through concurrent alerts. In this case, EMS
chief complaints and EpiCenter syndromes were consistent with recent H1N1 activity.

Co-occurrence of alerts was also characterized using simulation study data to identify the percentage of
contamination scenarios  that generated valid alert clusters by the automated PHS surveillance tools (i.e.,
911, EMS, and EpiCenter).  Alert clusters were  evaluated to understand the sequence of alerts as well as
the time that elapsed between concurrent alerts.  For this analysis, the contaminants considered to be
theoretically detectable by the PHS component were grouped into four categories: contaminants with

rapid symptom onset (minutes to several hours), contaminants with moderate symptom onset (> 8 hours
to ~1 day), and contaminants with slow symptom onset (~1 day or longer), with the last group separated by
gastrointestinal or respiratory exposure. The results of the analysis are presented below in Table 8-6.

This table shows the order and frequency of detection for the 911, EMS, and EpiCenter surveillance tools.
When multiple PHS surveillance tools produced an alert, the 911 surveillance tool was most often the first
to detect contamination; in only one scenario was EMS the first surveillance tool to detect.
For a majority of these scenarios, all three surveillance tools alerted.

For most scenarios involving chemical contaminants and biological agents, the order of alerts was: (1)
911, (2) EpiCenter and (3) EMS. Were it not for the significant time lag in EMS data uploads, the EMS
alerts would likely have followed soon after the 911 alerts and would have occurred prior to the EpiCenter
alerts. For these scenarios, there was an approximate time lag between
the first two alerts of 15 to 24 hours, and an even greater time lag between the second and third alerts
(greater than 24 hours).

The order of alerts for Biological Agents 4, 5, 6 and 7 was: (1) EpiCenter, (2) 911, and (3) EMS. In these
scenarios, the average time between EpiCenter and 911 alerts was 44.7 hours (with a range of 2.5 to 199.5
hours) and the average time between 911  and EMS alerts was 36.2 hours (with a range of 9.0 to 86.0
hours). The order of alerts for these biological agents is in-line with assumptions integrated into the CWS
model regarding human behavior. The symptom onset progression is more gradual for the biological
agents, and individuals do not pursue extremely urgent health-seeking behavior in large percentages until
symptoms reach the severe level.  Therefore, it is more likely that case counts would gradually increase at
the ED and contribute to EpiCenter alerts prior to the time that thresholds for 911 or EMS would be
exceeded.
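
A minimal sketch of how an alert cluster from a single scenario could be characterized (the tool names and
timestamps below are illustrative; the simulation study output format is not described here): the alerts are
sorted by time to establish the order of alerts, and the minutes between consecutive alerts are computed.

    from datetime import datetime

    # Hypothetical alerts produced for one contamination scenario: (tool, alert time).
    scenario_alerts = [
        ("EMS", datetime(2009, 8, 18, 22, 30)),
        ("911", datetime(2009, 8, 18, 9, 45)),
        ("EpiCenter", datetime(2009, 8, 18, 16, 10)),
    ]

    # Sort by alert time to establish the order of alerts for this scenario.
    ordered = sorted(scenario_alerts, key=lambda alert: alert[1])
    order = "/".join(tool for tool, _ in ordered)

    # Minutes elapsed between consecutive alerts (first-to-second, second-to-third).
    gaps = [
        (later[1] - earlier[1]).total_seconds() / 60
        for earlier, later in zip(ordered, ordered[1:])
    ]

    print(order)                     # 911/EpiCenter/EMS
    print([round(g) for g in gaps])  # [385, 380]
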
Table 8-6. Concurrent PHS Alerts (simulation study data)
Contaminants
Contaminants with rapid
symptom onset
• Toxic Chemicals (1-7)
• Biological Agent 1
Contaminants with
moderate symptom
onset
• Toxic Chemical 8
• Biological Agent 2
• Biological Agent 3
Contaminants with slow
symptom onset
(gastrointestinal
exposure)
• Biological Agent 4
• Biological Agents
Contaminants with slow
symptom onset
(respiratory exposure)
• Biological Agent 6
• Biological Agent 7
Order of Alerts
911/EMS
911/EpiCenter
EMS/911
EpiCenter/EMS
911/EMS/EpiCenter
911/EpiCenter/EMS
911/EMS
911/EpiCenter
EpiCenter/EMS
911/EpiCenter/EMS
EpiCenter/911/EMS
911/EpiCenter
EpiCenter/911
EpiCenter/EMS
911/EpiCenter/EMS
EpiCenter/911/EMS
EpiCenter/EMS/911
911/EMS
Instances
of Alert
Order
59
45
1
3
20
211
1
9
5
64
40
1
17
11
5
60
1
3
Minutes Between First and Second Alert
Average
1,578
1,406
121
30
1,454
1,337
1,319
1,316
1,470
932
117
89
2,276
7,492
497
2,683
5,790
1,079
Minimum
1,199
749
121
30
1,199
989
1,319
1,109
1,470
29
91
89
91
4,350
89
151
5,790
599
Maximum
2,759
4,049
121
30
2,759
2,729
1,319
1,469
1,470
1,469
271
89
11,971
10,110
749
11,971
5,790
1,859
Minutes Between Second and Third Alert
Average




3,426
794



1,560
2,722



2,046
2,171
61

Minimum




1,410
30



30
1,319



1,470
539
61

Maximum




5,730
1,470



4,350
4,259



4,350
5,159
61

8.5    Integrated Design Objective: Timeliness of Detection

Timeliness of detection as it relates to the PHS component encompasses the time from initial data
transmission to completion of an alert investigation. Post-exposure factors that can affect timeliness, such
as time to symptom onset and health-seeking behaviors, are discussed in Section 3.3. These time delays
occur prior to the time for data transmission. Metrics used to evaluate how well the
component met this design objective include time for data transmission, time for event detection, time for
alert recognition and time to investigate alerts. Analysis of invalid alerts recorded during the evaluation
period was performed to present summary-level timeliness data for each metric (Table 8-7).

Table 8-7. Evaluation of Timeliness

Metric #1: Time for Data Transmission
  911: Typically between 45 and 100 minutes; some long delays during network outages
  EMS: Average of 13.2 hours from time of EMS run to data upload
  DPIC: Uploads to NPDS occur in near real-time (<1 minute)
  EpiCenter: Uploaded in batches every 10 minutes, though paper records are often not entered into the
    electronic system until the following morning
  Integrated Component: Most data transmitted in 1 hour or less (EMS is the exception)

Metric #2: Time for Event Detection
  911: Generally less than 10 seconds
  EMS: Usually between 12 and 16 minutes
  DPIC: 4 hours after the specified timeframe (for statistical analyses)
  EpiCenter: Generally about 1 hour
  Integrated Component: The majority of event detection takes less than 1 hour

Metric #3: Time for Alert Recognition
  911: Median time: 13 hours
  EMS: Median time: 10.5 hours
  DPIC: Median time: 10.7 hours
  EpiCenter: No data available
  Integrated Component: Median times indicate a lag in alert recognition (~10 to 13 hours)

Metric #4: Time to Investigate Alerts
  911: Usually between 5 and 15 minutes per alert
  EMS: Between 10 and 100 minutes per alert
  DPIC: 90% of investigations took 20 minutes or less
  EpiCenter: Approximately 15 minutes per alert
  Integrated Component: Most investigations took 20 minutes or less
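
Because the four timeliness metrics describe sequential stages, a rough end-to-end estimate for a given
surveillance tool can be obtained by summing a representative value for each stage. The sketch below uses
illustrative values loosely based on Table 8-7; these are assumptions for demonstration, not reported totals.

    # Illustrative stage durations in minutes, loosely based on Table 8-7 (assumed values).
    stages = {
        "911": {"data transmission": 75, "event detection": 1,
                "alert recognition": 13 * 60, "investigation": 10},
        "DPIC (statistical)": {"data transmission": 1, "event detection": 4 * 60,
                               "alert recognition": 10.7 * 60, "investigation": 20},
    }

    for tool, durations in stages.items():
        total_minutes = sum(durations.values())
        print(f"{tool}: ~{total_minutes / 60:.1f} hours from data transmission "
              f"to completion of the alert investigation")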

Empirical data show overall efficient operation of data transmission and event detection. Although the
component is vulnerable to network outages and other instances of system downtime, these events did not
appear to impede the consistent operation of the integrated system. Some delay was observed in the time it
takes public health investigators to acknowledge alerts and begin investigation; the median time observed
was between 10 and 13 hours. Part of this delay is due to alerts that were produced after normal
observed was between 10 to 13 hours. Part of this delay is due to alerts that were produced after normal
business hours, which were not recognized by investigators until the next workday.  For example,
personnel responsible for investigation of 911 and EMS alerts were unable to access the User Interface
remotely, which contains data pertinent to the alert investigations.  The public health partners recognized
the need for off-site, 24/7 access to the User Interface, and suggested this component modification during
a lessons learned workshop. Once investigations were started, resolution usually occurred within 20
minutes.

A major strength of the PHS component is that it provides a balance of data that is both timely and
informed. While the 911 surveillance tool is the fastest to collect and analyze data, it is "coarse"
compared to more detailed patient data available in EMS, EpiCenter, and DPIC alerts.  For example, the
EpiCenter surveillance tool requires a longer time to produce alerts, but is the richest in detail and medical

validity. During investigations, data that is captured more quickly can be checked against other data
sources with more descriptive data to help determine causation. For example, following an EMS alert for
respiratory symptoms, a review of hospital data may reveal that instances of respiratory complaints in the
ED have been increasing for the past few weeks due to a recently confirmed influenza outbreak, even
though the volume may not have been substantial enough to produce an alert.

Simulation study timeline data (which started at the time of contaminant injection) was evaluated to
illustrate the timeliness of detection overall for the PHS  component and for scenarios initiated at periods
of high or low demand (Figure 8-2). Percentile values were calculated to examine the distribution of data
in a box-and-whisker plot.
Figure 8-2. PHS Component Timeliness of Detection (simulation study data)
Figure 8-3. PHS Component Timeliness of Detection and Low Symptom Onset

8.6    Integrated Design Objective: Operational Reliability

Component reliability considers the physical operation of the integrated PHS component.  Operational
reliability comprises the metrics of availability and data completeness, which quantify the percent of time
that the integrated PHS component is working as designed. A summary of the operational reliability
metrics can be found in Table 8-8.

Table 8-8. Evaluation of Operational Reliability

Metric #1: Availability
  911: 92% availability overall; most downtime due to network instability causing a delay in data collection
  EMS: 95% availability overall; most downtime due to network instability causing a delay in data transmission
  DPIC: 100% availability overall due to human surveillance
  EpiCenter: 100% availability overall; no recorded downtime
  Integrated Component: Excellent availability; at least a portion of the PHS component was available 100%
    of the time

Metric #2: Data Completeness
  911: 92% data completeness overall; data incompleteness caused by network instability
  EMS: >90% data completeness overall for most reporting periods
  DPIC: 98.8% data completeness overall; data incompleteness due to NPDS upgrades
  EpiCenter: >92% data completeness overall; some data incompleteness due to a hospital going offline
  Integrated Component: Excellent data completeness; overall data completeness was 96%
The PHS component experienced excellent operational reliability during the evaluation period. There
was a high percentage of availability and data completeness for the integrated component; in particular, at
least one of the PHS surveillance tools was available 100% of the time (see Figure 8-4). In addition, the
percent of data completeness was also high, at 96%. One potential weakness in this area is that the 911
and EMS data streams rely on the same network to function; instability or outages in this network would
reduce component data completeness by 50% until operation was restored. Despite this potential
weakness, based on historical data there is no reason to expect that this high level of operational reliability
would not continue.
Figure 8-4. PHS Component Data Completeness (based on 911, EMS and Epicenter data streams)
Note: The dates of the DPIC data incompleteness are unknown; therefore, that data was not included in this figure,
but the effect would be negligible.

8.7    Integrated Design Objective: Sustainability

Sustainability is a key objective in the design of a CWS and each of its components, which for the
purpose of this evaluation is defined in terms of the cost-benefit trade-off. Costs are estimated over the
20-year lifecycle of the CWS and include the capital cost to implement the CWS and the cost to operate
and maintain the CWS.  The benefits derived from the CWS are defined in terms of primary and dual-use
benefits. The primary benefit of a CWS is the potential reduction in consequences in the event of a
contamination incident; however, such a benefit may be rarely, if ever, realized.  Thus, dual-use benefits
that provide value to routine utility operations are an important driver for sustainability. Ultimately,
sustainability can be demonstrated through utility and partner compliance with the protocols and
procedures necessary to operate and maintain the CWS. The three metrics that were evaluated to assess
how well the Cincinnati CWS met the design objective of sustainability are: costs, benefits, and
compliance. The following subsections define each metric, describe how it was evaluated, and present
the results.
8.7.1  Costs
Definition: Costs are evaluated over the 20-year lifecycle of the Cincinnati CWS, and comprise costs
incurred to design, deploy, operate, and maintain the PHS component since its inception.

Analysis Methodology:  Parameters used to quantify the implementation cost of the PHS component
were extracted from the Water Security Initiative: Cincinnati Pilot Post-Implementation System Status
(USEPA, 2008b). The costs of modifications to the PHS component made after the completion of
implementation activities were tracked as they were incurred. O&M costs were tracked on a monthly
basis over the duration of the evaluation period. Renewal and replacement costs, along with the salvage
value at the end of the Cincinnati CWS lifecycle, were estimated using vendor-supplied data, field
experience and expert judgment. Note that all costs reported in this section are rounded to the nearest
dollar. Section 3.5 provides additional details regarding the methodology used to estimate each of these
cost elements.

Results: The methodology described in Section 3.5 was applied to determine the value of the major cost
elements used to calculate the total lifecycle cost of the PHS component, which are presented in Table 8-9.
It is important to note that the Cincinnati CWS was a research effort, and as such incurred higher
costs than would be expected for a typical large utility installation. A similar PHS component
implementation at another utility should be less expensive, as it could benefit from lessons learned and
would not incur research-related costs.

Table 8-9. Cost Elements used in the Calculation of Lifecycle Cost

Parameter                           Value
Implementation Costs                $1,305,966
Annual O&M Costs                    $17,871
Renewal and Replacement Costs       $241,351
Salvage Value                       -
Table 8-10 presents the implementation cost for each PHS design element, with labor costs presented
separately from the cost of equipment, supplies and purchased services.

Table 8-10. Implementation Costs

Design Element                      Labor         Equipment, Supplies,   Component        Total Implementation
                                                  Purchased Services     Modifications    Costs
Project Management1                 $102,749      -                      -                $102,749
Public Health Surveillance Tools    $491,312      $71,073                $31,163          $593,548
Communication and Coordination      $172,197      -                      -                $172,197
Procedures                          $76,409       -                      -                $76,409
Shared IT Systems                   $283,923      $77,140                -                $361,063
TOTAL:                              $1,126,590    $148,213               $31,163          $1,305,966
1 Project management costs incurred during implementation were distributed evenly among the CWS components.

Project management includes all overhead activities necessary to design and implement the component.
The cost for PHS tools includes designing and implementing automated event detection systems for
monitoring 911 calls and EMS runs. Communication and coordination includes the cost of establishing a
Public Health User's Group and developing automated alert notification emails. The procedures cost
includes development of the procedures that guide routine operation of the component and alert
investigations, along with associated training.

Finally, shared IT systems includes the procurement, set-up, and configuration of application and
database servers that host the PHS event detection algorithms.  As this system is utilized by both PHS and
CCS, the associated cost was split evenly between these two components.

Overall, the PHS tools design element had the highest implementation costs (45%). The technical
implementation of the new event detection systems for 911 calls and EMS runs involved an analysis to
identify the appropriate statistical tools for each data source, transfer of data from CFD to GCWW
servers, programming and testing to implement the statistical algorithms, construction of a user interface
to allow access to underlying case data and development of an alert notification email.  For DPIC, the
technical implementation of the event detection system for PCC calls involved development of a new
business process to identify possible water contamination when receiving hotline calls, as well as
development of call-volume and case-based definition algorithms to detect possible water contamination.
The total implementation costs for shared IT systems and for communication and coordination were lower, at
28% and 13%, respectively. Implementation costs for project management and for development of the
procedures for routine operation (and training on those procedures) were significantly lower, at 8% and 6%,
respectively.
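
The specific statistical algorithms, baselines, and thresholds used in the pilot are not reproduced in this
report. As a rough, hypothetical illustration of a call-volume detector of the kind described above, the
following Python sketch applies a generic one-sided CUSUM (cumulative sum) test to a series of daily call
counts; the baseline window, parameters, and synthetic data below are assumptions made for illustration
only, not the pilot's implementation.

    # Illustrative sketch only; not the algorithm deployed in the Cincinnati CWS.
    # Screens each day's call count against its recent baseline and accumulates
    # standardized excesses; an alert is raised when the cumulative excess
    # exceeds a decision threshold.
    from statistics import mean, stdev

    def cusum_alerts(daily_counts, baseline_days=28, k=0.5, h=4.0):
        """Return indices of days whose one-sided CUSUM statistic exceeds h."""
        alerts, s = [], 0.0
        for i in range(baseline_days, len(daily_counts)):
            baseline = daily_counts[i - baseline_days:i]
            mu, sigma = mean(baseline), stdev(baseline) or 1.0
            z = (daily_counts[i] - mu) / sigma   # standardized excess over the baseline
            s = max(0.0, s + z - k)              # accumulate only positive deviations
            if s > h:
                alerts.append(i)
                s = 0.0                          # reset after an alert
        return alerts

    # Synthetic example: a stable baseline of roughly 20 calls per day with a 3-day spike.
    counts = [20, 22, 19, 21, 20] * 8 + [35, 38, 40] + [20, 21, 19]
    print(cusum_alerts(counts))                  # flags the spike days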

The component modification costs represent the labor, equipment, supplies and purchased services
associated with enhancements to the PHS component after completion of major implementation activities
in December 2007.  The additional expenses were incurred to modify the 911 and EMS event detection
systems, including the addition of more underlying case data (such as geospatial data and incident
identifier codes), as well as adjustment of the alerting thresholds for both data streams to reduce the
number of alerts. The annual labor hours and costs of operating and maintaining the PHS component,
broken out by design element, are shown in Table 8-11.

Table 8-11. Annual O&M Costs

Design Element1                     Total Labor     Total Labor Cost   Supplies and Purchased   Total O&M Cost
                                    (hours/year)    ($/year)           Services ($/year)        ($/year)
Public Health Surveillance Tools    138             $7,223             -                        $7,223
Communication and Coordination      56              $2,307             -                        $2,307
Procedures                          134             $8,342             -                        $8,342
TOTAL:                              348             $17,871            -                        $17,871
1 Overarching project management costs were only incurred during implementation of the PHS component and are
not applicable for annual O&M costs.

O&M for the PHS tools requires routine monitoring and troubleshooting of the IT infrastructure. The
communication and coordination design element involves regular Public Health User's Group meetings,
which are scheduled four times per year. Most of the O&M labor hours reported under procedures are
attributable to the routine investigation of PHS alerts.

Two of the major cost elements presented in Table 8-9, the renewal and replacement costs and salvage
value, were based on costs associated with two major pieces of equipment installed for the PHS
component. The useful life of these items was estimated at 5 years and 10 years, respectively, based on
manufacturer-provided data and input from subject matter experts. It was assumed that the item with a
useful life of 5 years would need to be replaced three times during the 20-year lifecycle of the CWS, and
the item with a useful life of 10 years was assumed to be replaced once. Because the useful life  of the
final installment of all equipment items will expire at the end of the 20-year lifecycle, there is no salvage
value for this component, as reported in Table 8-9. The cost of these items is presented in Table 8-12.

Table 8-12. Equipment Costs

Equipment Item                                          Useful Life   Unit Capital   Quantity       Total Cost
                                                        (years)       Costs          (# of Units)
Wireless CISCO Devices and Routers                      10            $382           26             $9,932
Shared IT Systems (Application and Database Servers)1   5             $77,140        1              $77,140
TOTAL:                                                                                              $87,072
1 Equipment utilized by CCS and PHS; costs evenly split between two components

To calculate the total lifecycle cost of the PHS component, all costs and monetized benefits were adjusted
to 2007 dollars using the change in the Consumer Price Index (CPI) between 2007 and the year that the
cost or benefit was realized. Subsequently, the implementation costs, renewal and replacement costs, and
annual O&M costs were combined to determine the total lifecycle cost:
       PHS Total Lifecycle Cost: $1,788,073

Note that in this calculation, the implementation costs were treated as a one-time balance adjustment, the
O&M costs recurred annually, and the renewal and replacement costs for major equipment items were
incurred at regular intervals based on the useful life of each item.
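
As an aid to interpreting this calculation, the following minimal Python sketch reconstructs the renewal and
replacement cost in Table 8-9 from the equipment data in Table 8-12 and shows how the major cost elements
combine. It is illustrative only: it works in nominal dollars and assumes that O&M accrues in every year of
the 20-year lifecycle, so it does not reproduce the CPI-adjusted total of $1,788,073 reported above.

    # Illustrative sketch only; nominal dollars, no CPI adjustment.
    import math

    LIFECYCLE_YEARS = 20

    # Equipment items from Table 8-12: (unit capital cost, quantity, useful life in years)
    equipment = {
        "Wireless CISCO devices and routers": (382, 26, 10),
        "Shared IT systems (application and database servers)": (77_140, 1, 5),
    }

    renewal_and_replacement = 0
    for unit_cost, qty, life in equipment.values():
        # Installments needed over the lifecycle, minus the original purchase
        replacements = math.ceil(LIFECYCLE_YEARS / life) - 1
        renewal_and_replacement += replacements * unit_cost * qty

    implementation = 1_305_966              # Table 8-9 / Table 8-10
    annual_om = 17_871                      # Table 8-9 / Table 8-11
    om_total = annual_om * LIFECYCLE_YEARS  # assumed accrual over the full lifecycle
    salvage = 0                             # final installments expire at the end of the lifecycle

    print(f"Renewal and replacement: ${renewal_and_replacement:,}")  # ~$241,352 (Table 8-9: $241,351)
    print(f"Nominal lifecycle total: ${implementation + om_total + renewal_and_replacement - salvage:,}")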
8.7.2  Benefits
Definition: The benefits of CWS deployment can be considered in two broad categories: primary and
dual-use. Primary benefits relate to the application of the CWS to detect contamination incidents, and can
be quantified in terms of a reduction in consequences.  Primary benefits are evaluated at the system-level
and are thus discussed in the report titled Water Security Initiative: Evaluation of the Cincinnati
Contamination Warning System Pilot (USEPA, 2013). Dual-use benefits are derived through application
of the CWS to any purpose other than detection of intentional and unintentional drinking water
contamination incidents. Unintentional contamination incidents may result from various sources, such as
a depressurization event or a backflow event resulting from failure in a cross connection control program.

Analysis Methodology: Information collected from forums, such as data review meetings, lessons
learned workshops, and interviews, was used to identify dual-use applications of the PHS component of
the CWS.

Results: Operation of the PHS component of the CWS has resulted in  benefits beyond the detection of
intentional and unintentional contamination incidents.  These key dual-use benefits and examples
identified by the utility include:
    1.  Relationships formed and knowledge gained as part of the PHS component, which can be
        employed in other areas of the participating agencies:

       •   The communication and team decision-making practiced during drills and exercises for the
           Cincinnati CWS can be applied to consequence management activities for any number of
           public health emergencies (e.g., natural disasters, pandemic influenza, non-water related
           terrorist attacks, etc.). Even if water is not directly related to or impacted by the incident, the
           drinking water supply is almost always a key resource in response and recovery.
    2.  Improved knowledge of partner agencies' capabilities and organizational structures:

       •   This familiarity may manifest itself in ways as simple as knowing the correct person to
           contact and when to contact that individual during an incident. Being more familiar with
           partners' capabilities helps to  improve trust and collective decision-making abilities.
        •   An example from the Cincinnati CWS is the relationship formed between
            DPIC and HCPH. Epidemiologists at HCPH recognized that the expertise of DPIC could be
           employed to augment research on unintentional overdoses as part of their Injury Surveillance
           System. A representative from HCPH was able to utilize roundtable discussions at DPIC as a
           forum to present injury data and gather feedback on the most effective method to display data
           analyses. In return, DPIC was able to utilize Hamilton County injury data summaries and
            epidemiological expertise. Leveraging these relationships for purposes beyond the initial
            CWS goal of identifying and responding to possible water contamination incidents helps to
            justify the time expended on the CWS by improving the function of each participating
            agency as a whole.
    3.  Use of 911 and EMS data for other applications:

       •   An example of this is the utilization of this data for public health issues that do not necessitate
           an immediate response, such as injury surveillance and retrospective analysis of disease
           outbreaks. Essentially, data can be used for research purposes to provide a more complete
           picture of the public health  status of a community. As discussed earlier,  data can also be used
            in a more real-time fashion to improve situational awareness during public health outbreaks
            or events (e.g., EMS alerts during the H1N1 outbreak).
    4.  Improved communication and coordination:

        •   A conscious effort by the User's Group members to attend meetings and make them productive
            resulted not only in an improved means of sharing information with one
            another, but also in greater confidence between member agencies. This benefits the CWS and
            any other applications that require interagency communication.
       •   The "communicator" protocol was implemented to allow expedient communication among
           all members of the User's Group when a PHS alert occurs that requires in-depth investigation
           and analysis. The communicator is an auto-dialer system operated by  CFD, which can be
           utilized to issue an urgent message to all members of the User's Group. It can be used to
           notify personnel via phone  and email of a possible water contamination incident or other
           developing public health situation.
       •   Communications between local public health and DPIC officials during drills and exercises
           yielded a reasonable hypothesis of the causative agent in most drill scenarios. The expertise
           of the DPIC toxicologists, in particular, was invaluable during the process of narrowing down
           possible contaminants.  For example, during a full-scale exercise conducted in October 2008,
           the DPIC participants were  able to surmise that two contaminants were involved based on
           limited information provided during the exercise. Although drills and exercises rely on
           hypothetical contamination scenarios, it can be expected these discussions would occur
           during  investigation of a possible water contamination incident.  The combination of DPIC
           toxicological expertise and  public health partner input proved a valuable asset for
           contaminant identification.
8.7.3  Compliance
Definition: Compliance captures the acceptability of the  PHS component by measuring the willingness
of persons and organizations to monitor, maintain and actively participate in the CWS.  The use of PHS
surveillance tools and communication procedures during routine operations as well as during drills and
exercises is tracked to represent the acceptability of the CWS.

Analysis Methodology: This  metric was measured by determining the percent completeness of PHS
investigation checklists. Another method used to evaluate compliance was an assessment of the
willingness of local public health partners and utility personnel to participate in training, drills and
exercises, which evaluate the actions of participants during the process of investigating simulated PHS
alerts.

Results: The percent of investigation checklists completed by at least one participating health department
can be seen in Figure 8-5. The percent completed was generally good, with more than 75% completeness
for most months. Investigation checklists were not always completed during the investigations due to
personnel utilizing other means of documentation; however, it was indicated during group discussions
that the alerts were still noted and investigated. In total, 83% of investigation checklists were completed
during the evaluation period.
[Figure not reproduced: percent of investigation checklists completed each month, plotted against the start date of each monthly reporting period.]
Figure 8-5. Percent Investigation Checklists Completed per Month

Attendance was recorded at all User's Group meetings, drills, and exercises to ensure that core
participants (CHD, HCPH, DPIC, FBI and GCWW) were present during meeting discussions and
decision-making.  Since the beginning of 2009, 100% attendance and participation was documented at
most drills, meetings, and exercises, as evidenced by attendance sheets and discussions among all
partners during these events. All core members also participated in the communicator call, activated in
August 2009. As previously mentioned, stakeholders intend to continue the User's Group meetings and
found value in drills and exercises, further demonstrating the acceptability of the communication
procedures, meetings, drills, and exercises.

8.8    Summary of the Integrated Component

The PHS component implemented for the Cincinnati CWS has demonstrated the ability to successfully
detect events of public health significance and has achieved acceptability with its users.  Strengths of this
component include the ability to reliably detect true public health incidents through various surveillance
tools and to provide expanded situational awareness during such incidents. In addition, effective
communication practices enhanced the acceptability of the system to its users. The communicator
protocol, in particular, was a sizable improvement, as demonstrated through drills and exercises. Improvements
to the system could be made in the timeliness with which participants recognize alerts, for example through
increased off-site access to the User Interface. Going forward, outside funds for the design of drills and
exercises may be necessary in order for these to remain feasible; one way to mitigate this cost could be to
adopt an "all hazards" approach to surveillance in the PHS component. Overall, the empirical data and
simulation study data demonstrate that the PHS component successfully meets the design
objectives described above and serves a valuable role in the overall CWS.


                Section 9.0:  Summary and Conclusions

The evaluation of the PHS component of the CWS involved analysis of empirical data, data from drills
and exercises, results from the simulation study, qualitative observations gleaned from participants during
forums, and a benefit-cost analysis. System design objectives were
evaluated through metrics analysis for each of the surveillance tools as well as for the integrated
component. Highlights, limitations, and considerations for interpretation of this analysis are presented in
this section.

9.1    Highlights of Analysis

Evaluation of the PHS component revealed numerous areas of special interest. First, the system did
achieve the functionality conceived during the design phase, as observed through concurrent alerts and
the successful detection of public health incidents. In addition, improvements to the communication
strategies were particularly successful at increasing efficiency of alert investigations.

As designed, the PHS component was expected to successfully detect possible water contamination by
observing  changes in the health status of the community as monitored by various surveillance tools.  It
was expected that during an actual public health incident, including water contamination, multiple
surveillance tools would trigger alerts in the same timeframe. This would be supported by some degree of
"patient continuity," or observing patient volume from the start to the end of their health-seeking
behavior. An example of PHS successfully identifying a public health incident occurred during the H1N1
outbreak in the fall of 2009, when both EMS and EpiCenter alerts were attributed to influenza
activity. Together, these observations highlight the ability of PHS to register multiple alerts and fulfill
one design objective. Moreover, simulation study results demonstrated the ability of the PHS
surveillance tools to identify the majority of contamination scenarios involving a variety of chemical and
biological  contaminants.

During the evaluation phase, some observations of data trends led to component modifications to improve
the overall functionality and sustainability of the system. Frequent invalid alerts produced by the 911 and
EMS surveillance tools precipitated an adjustment of thresholds and alerting criteria to reduce the number
of invalid alerts; since this time, far fewer alerts have occurred.

Another example of the PHS component achieving design objectives is its ability to consistently
provide data that is both timely and informative. As discussed in Section 8.6, the integrated component
achieved excellent data availability and completeness throughout the evaluation period. Timeliness and
informativeness were achieved by combining data that could be collected quickly (e.g., 911 calls) with
data that contained medically validated information (e.g., EpiCenter).

Improvements to communication strategies and participation in User's Group meetings were particularly
useful for increasing efficiency of investigations and bolstering acceptability. Development of the
communicator protocol allowed for the first-hand presentation of data to all investigation participants,
resulting in faster analysis and discussions that led to reasonable hypotheses of causative agents.  The
pledge to continue participation in the User's Group meetings confirms their usefulness for stakeholder
agencies. Finally, the identification of numerous benefits (Section 8.7.2) augments member acceptability
and encourages future participation in PHS surveillance activities.

9.2    Limitations of the Analysis

Some limitations identified during the analysis of the PHS component included missing documentation
for some alert investigations, limited data granularity, and a lack of empirical data for certain metrics.
While most alert investigations were documented, some were recorded by other means at
local health departments; this data was not available for analysis. Likewise, the DPIC protocol did not
require that location be noted during alert investigations, which limited the ability to conduct spatial
analysis of alerts produced by this surveillance tool.  Spatial analysis was also inhibited by the level of
data granularity available; for EMS, EpiCenter, and DPIC, the smallest geospatial unit recorded was the
zip code level. Because zip codes can be rather large, this may not be the best spatial representation of
alerts.

The largest limitation to the PHS analysis was a lack of empirical data for various metrics due to the
absence of water contamination incidents during the evaluation period. These gaps were filled through
analysis of simulation study results; however, simulated results may differ from real-life experience.

9.3    Potential Applications of the PHS Component

The PHS component of the Cincinnati CWS was tailored to the agencies and data available within the
GCWW service area; therefore, the evaluation of this component is specific to Cincinnati, and
interpretation should be treated as such.  For example, the GCWW service area encompasses multiple
public health jurisdictions and partners, which presented certain communication challenges. These
challenges were partially addressed through implementation of the communicator protocol.  In addition,
the data volume and quality in Cincinnati may differ from those of other cities. However, the Cincinnati CWS
revealed numerous applications and lessons that are applicable to other CWS installations.

Because the Cincinnati CWS was a pilot project, a certain degree of trial and error was necessary to
produce a viable, functioning system.  As discussed in Section 2.0, NRDM data was originally included
in the PHS design; however, due to unforeseen instability in reporting and unavailability of data for
research purposes, the use of this surveillance tool was not included in the final component. In addition,
the start-up costs for the Cincinnati CWS were mainly due to purchases related to improving the
timeliness of data collection (e.g., wireless routers, servers, etc.). Based on the results presented here and the
existing capabilities in other cities, these start-up costs may be reduced in the design of
other systems. Furthermore, integrating the data surveillance methods described here with an "all hazards"
approach to PHS (i.e., incorporating food safety, pandemic influenza and/or injury
surveillance) creates an even more sustainable CWS.

Improved communication strategies, as developed in the Cincinnati CWS, are widely applicable and can
be implemented anywhere. Regardless of the number of stakeholders involved in expansion projects,
effective communication will be necessary to perform efficient investigations into possible contamination
incidents.  Face-to-face meetings, such as the User's  Group meetings, are important for improving
stakeholder relationships and refining communication strategies. As mentioned previously, the User's
Group meetings were identified as one of the most valuable aspects of the CWS.  Given that improved
communication protocols are relatively inexpensive to implement, the lessons learned through the
Cincinnati CWS should be considered for implementation at all expansion utilities.

The overarching goal of the PHS component is to improve situational awareness such that the possibility
of water contamination is considered while performing surveillance activities. The astute
observations of public health personnel allow for detection of changes in the health status of their
community, given that they understand their populations. Indeed, the overall success of the PHS
component of a CWS depends not only on reliable data, but also on public health professionals who
are aware of their service population and the possible causes of changes in observed health trends.  The
evaluation presented here should aid other PHS projects in improving the existing capabilities of public
health personnel in order to participate in an effective CWS.

                              Section 10.0:  References

The Centers for Disease Control and Prevention (CDC), 2010b. CDC Estimates of 2009 H1N1
        Influenza Cases, Hospitalizations and Deaths in the United States, April 2009 - March 13, 2010.
Health Monitoring Systems. EpiCenter User Manual, Version 2.6.  August 2009.
Hutwagner L., Thompson W., Seeman L., Treadwell T. The bioterrorism preparedness and response Early
       Aberration Reporting System (EARS). Journal of Urban Health.  2003; 80: i89-96.

Kulldorff M. 2010. SaTScan™ User's Guide for Version 9.0.

Thacker S. B., Berkelman R. L. Public Health Surveillance in the United States. Epidemiologic Reviews.
       1988; 10:164-90.

The Ohio Department of Health.  2008 Annual Summary of Infectious Diseases, Ohio:  Profiles of
       Selected Health Events Detected in EpiCenter. 2009.
U.S. Environmental Protection Agency. 2005. WaterSentinel System Architecture, Draft for Science
       Advisory Board Review.

U.S. Environmental Protection Agency. 2007. Interim Guidance on Planning for Contamination
       Warning System Deployment. USEPA 817-R-07-002.

U.S. Environmental Protection Agency. 2008a. Water Security Initiative: Interim Guidance on
       Developing an Operational Strategy for Contamination Warning Systems. USEPA 817-R-08-
       002.

U.S. Environmental Protection Agency. 2008b. Water Security Initiative: Cincinnati Pilot Post-
       Implementation System Status. EPA 817-R-08-004.

U.S. Environmental Protection Agency. 2013. Water Security Initiative: Evaluation of the Cincinnati
        Contamination Warning System Pilot. EPA 817-R-13-003.

U.S. Environmental Protection Agency. 2014. Water Security Initiative: Comprehensive Evaluation of
        the Cincinnati Contamination Warning System Pilot. EPA 817-R-14-001.

World Health Organization (WHO), 2009.
                        Section 11.0:  Abbreviations

The list below includes acronyms approved for use in the PHS component evaluation. Acronyms are
defined at first use in the document.
Cardiaccat        Cardiac Syndromic Surveillance Category
CCS               Customer Complaint Surveillance
CDC               Centers for Disease Control and Prevention
CFD               Cincinnati Fire Department
CHD               Cincinnati Health Department
CMP               Consequence Management Plan
CUSUM             Cumulative Sum
CWS               Contamination Warning System
DPIC              Drug and Poison Information Center
EARS              Early Aberration Reporting System
ED                Emergency Department
EMA               Exponential Moving Average
EMS               Emergency Medical Services
EMT               Emergency Medical Technician
ESM               Enhanced Security Monitoring
FBI               Federal Bureau of Investigation
GCWW              Greater Cincinnati Water Works
Gicat             Gastrointestinal Syndromic Surveillance Category
HCPH              Hamilton County Public Health
HI/HB             Health Impacts and Human Behavior
HIPAA             Health Insurance Portability and Accountability Act
HMS               Health Monitoring Systems
ICS               Incident Command System
Neurons           Neurological Syndromic Surveillance Category
NPDS              National Poison Data System
NRDM              National Retail Data Monitor
O&M               Operation and Maintenance
ODH               Ohio Department of Health
OTC               Over-the-Counter (Sales of Pharmaceuticals)
PCC               Poison Control Center
PHS               Public Health Surveillance
Poison            Poisoning Syndromic Surveillance Category
Psychcat          Psychological Syndromic Surveillance Category
RLS               Recursive Least Squares
RODS              Real-time Outbreak and Disease Surveillance
S&A               Sampling and Analysis
SQL               Structured Query Language
Upperresp         Upper Respiratory Syndromic Surveillance Category
USEPA           United States Environmental Protection Agency
WQM            Water Quality Monitoring
WSI              Water Security Initiative
WUERM          Water Utility Emergency Response Manager
                             Section 12.0:  Glossary
Alert.  Information from a monitoring and surveillance component indicating an anomaly in the system,
which warrants further investigation to determine if the alert is valid.

Alert Investigation. A systematic process, documented in a standard operating procedure, for
determining whether or not an alert is valid, and identifying the cause of the alert.  If an alert cause cannot
be identified, contamination is possible.

Anomaly. A deviation from an established baseline. For example, a water quality anomaly is a deviation
from typical water quality patterns observed over an extended period.

Baseline. Normal conditions that result from typical system operation.  The baseline includes predictable
fluctuations in measured parameters that result from known changes to the system. For example, a water
quality baseline includes the effects of draining and filling tanks, pump operation and seasonal changes in
water demand, all of which may alter water quality in a somewhat predictable fashion.

Benefit. An outcome associated with the implementation and operation of a contamination warning
system that promotes the welfare of the utility and the community it serves. Benefits are classified as
either primary or dual-use.

Benefit-cost analysis.  An evaluation of the benefits and costs of a project or program, such as a
contamination warning system, to assess whether the investment is justifiable considering both financial
and qualitative factors.

Biological Agents. These contaminants of biological origin include pathogens and toxins that pose a risk
to public health at relatively low concentrations.

Box-and-whisker plot. A graphical representation of nonparametric statistics for a dataset. The bottom
and top whiskers represent the minimum and maximum values, respectively. The  bottom and top of the
box represent the 25th and 75th percentiles of the ranked data, respectively. The line inside the box
represents the 50th percentile, or median, of the ranked data. Note that some data sets may have the same
values for the percentiles presented in box-and-whisker plots, in which case not all lines will be visible.

Component response procedures. Documentation of roles and responsibilities, process flows, and
procedural activities for a specified component of the contamination warning system, including the
investigation of alerts from the component. Standard operating procedures for each monitoring and
surveillance component are integrated into an operational strategy for the contamination warning system.

Confirmed. In the context of the threat level determination process, contamination is Confirmed when
the analysis of all available information from the contamination warning system has provided definitive,
or nearly definitive, evidence of the presence of a specific contaminant or class of contaminant in the
distribution system. While positive results from laboratory analysis of a sample collected from the
distribution system can be a basis for confirming contamination,  a preponderance of evidence, without the
benefit of laboratory results, can lead to this same determination.

Consequence management.  Actions taken to plan for and respond to possible contamination incidents.
This includes the threat level determination process, which uses information from  all monitoring and
surveillance components as well as sampling and analysis to determine if contamination is Credible or
Confirmed. Response actions, including operational changes, public notification, and public health
response, are implemented to minimize public health and economic impacts, and ultimately return the
utility to normal operations.

Consequence management plan.  Documentation that provides a decision-making framework to guide
investigative and response activities implemented in response to a possible contamination incident.

Contamination incident. The introduction of a contaminant in the distribution system with the potential
to cause harm to the utility or the community served by the utility.  A contamination incident may be
intentional or accidental.

Contamination scenario. Within the context of the simulation study, parameters that define a specific
contamination incident, including: injection location, injection rate, injection duration, time the injection
is initiated, and the contaminant that is injected.

Contamination warning system. An integrated system of monitoring and surveillance components
designed to detect contamination in a drinking water distribution system.  The system relies on integration
of information from these monitoring and surveillance activities along with timely investigative and
response actions during consequence management to minimize the consequences of a contamination
incident.

Costs, implementation. Installed cost of equipment, IT components, and subsystems necessary to
deploy an operational system.  Implementation costs include labor and other expenditures (equipment,
supplies and purchased services).

Cost, life cycle. The total cost of a system, component, or equipment over its useful or practical life.
Life cycle cost includes the cost of implementation, operation & maintenance, and renewal &
replacement.

Costs, operation & maintenance. Expenses incurred to sustain operation of a system at an acceptable
level of performance. Operational and maintenance costs are reported on an annual basis, and include
labor and other expenditures (supplies and purchased services).

Costs, renewal & replacement. Costs associated with refurbishing or replacing major pieces of
equipment (e.g., water quality  sensors, laboratory instruments, IT hardware, etc.) that reach the end of
their useful life before the end  of the contamination warning system lifecycle.

Coverage, contaminant. Specific contaminants that can potentially be detected by each monitoring and
surveillance component, including sampling & analysis of a contamination warning system.

Coverage, spatial. The areas within the distribution system that are monitored by, or protected by, each
monitoring and surveillance component of a contamination warning system.

Credible. In the context of the threat level determination process, a water contamination threat is
characterized as Credible if information collected during the investigation of possible contamination
corroborates information from  the validated contamination warning system alert.

Data completeness. The amount of data that can be used to support system or component operations,
expressed as a percentage of all data generated by the system or component. Data may be lost due to QC
failures, data transmission errors, and faulty equipment among other causes.

Distribution system model. A mathematical representation of a drinking water distribution system,
including pipes, junctions, valves, pumps, tanks, reservoirs, etc. The model characterizes flow and
pressure of water through the system. Distribution system models may include a water quality model that
can predict the fate and transport of a material throughout the distribution system.

Dual-use benefit. A positive application of a piece of equipment, procedure, or capability that was
deployed as part of the contamination warning system, in the normal operations of the utility.

Ensemble. The comprehensive set of contamination scenarios evaluated during the simulation study.

Event detection system. A system designed specifically to detect anomalies from the various monitoring
and surveillance components of a contamination warning system. An event detection system may take a
variety of forms, ranging from a complex set of computer algorithms to a simple set of heuristics that are
manually implemented.

Evaluation period. The period from January 16, 2008 to June 15, 2010 when data was actively collected
for the evaluation of the Cincinnati contamination warning system pilot. For the PHS component, the
evaluation period was from January 2008 to June 2010 for the 911, EMS, and DPIC surveillance tools.
For the EpiCenter surveillance tool, the evaluation period was from March 2008 to March 2010.

Exposure. In the simulation model, any person who ingests, inhales or detects contaminated water.

Hydraulic connectivity. Points or areas within a distribution system that are on a common flow path.

Incident Commander.  In the Incident Command System, the individual responsible for all aspects of an
emergency response; including quickly developing incident objectives, managing incident operations and
allocating resources.

Incident timeline. The cumulative time from the beginning of a contamination incident until response
actions are effectively implemented.  Elements of the incident timeline include: time for detection, time
for alert validation, time for threat level determination and time to implement response actions.

Injection location. The specific node in the distribution system model where the bulk contaminant is
injected into the distribution system for a given scenario within the simulation study.

Invalid alert. An alert from a monitoring and surveillance component that is not due to an anomaly and
is not associated with an incident or condition of interest to the utility.

Job function. A description of the duties and responsibilities of a specific job within an organization.

Metric. A standard or statistic for measuring or quantifying an attribute of the contamination warning
system or its components.

Model.  A mathematical representation of a physical system.

Model parameters.  Fixed values in a model that define important aspects of the physical system.

Module. A sub-component of a model that typically represents a specific function of the real-world
system being modeled.

Monetizable. A cost or benefit whose monetary value can be reliably estimated from the available
information.

Monitoring & surveillance component. Element of a contamination warning system used to detect
unusual water quality conditions, potentially including contamination incidents. The four monitoring &
surveillance components of a contamination warning system include: 1) online water quality monitoring,
2) enhanced security monitoring, 3) customer complaint surveillance and 4) public health surveillance.

Node.  A mathematical representation of a junction between two or more distribution system pipes, or a
terminal point in a pipe in a water distribution system model. Water may be withdrawn from the system
at nodes, representing a portion of the system demand.

Nuisance chemicals. Chemical contaminants with a relatively low toxicity, which thus generally do not
pose an immediate threat to public health. However, contamination with these chemicals can make the
drinking water supply unusable.

Operational strategy.  Documentation that integrates the standard operating procedures that guide
routine operation of the monitoring and surveillance components of a drinking water contamination
warning system.  The operational strategy establishes specific roles and responsibilities for the component
and procedures for investigating alerts.

Optimization phase. Period in the contamination warning system deployment timeline between the
completion of system installation and real-time monitoring.  During this phase the system is operational,
but not expected to produce actionable alerts. Instead, this phase provides an opportunity to learn the
system and optimize performance (e.g., fix or replace malfunctioning equipment, eliminate software bugs,
test procedures, and reduce occurrence of invalid alerts).

Possible. In the context of the threat level determination process, a water contamination threat is
characterized as Possible if the cause of a validated contamination warning system alert is unknown.

Primary benefits. Benefits that are derived from the reduction in consequences associated with a
contamination incident due to deployment of a contamination warning system.

Priority contaminant.  A contaminant that has been identified by the EPA for monitoring under the
Water Security Initiative. Priority contaminants may be initially detected through one of the monitoring
and surveillance components and confirmed through laboratory analysis of samples collected during the
investigation of a possible contamination incident.

Process flow. The central element of a standard operating procedure that guides routine monitoring and
surveillance activities in a contamination warning system. The process flow is represented in a flow
diagram that shows the step-by-step process for investigating alerts, identifying the potential cause of the
alert and determining whether contamination is possible.

Public health incident. An occurrence of disease, illness or injury within a population that is a deviation
from the disease baseline in the population.

Public health response. Actions taken by public health agencies and their partners to mitigate the
adverse effects of a public health incident, regardless of the cause of the incident. Potential response
actions include: administering prophylaxis, mobilizing additional healthcare resources, providing
treatment guidelines to healthcare providers and providing information to the public.

Real-time monitoring phase. Period in the contamination warning system deployment timeline
following the optimization phase.  During this phase, the system is fully operational and is producing
actionable alerts. Utility staff and partners now respond to alerts in real-time and in full accordance with
standard operating procedures documented in the operational strategy. Optimization of the system still
occurs as part of a continuous improvement process; however, the system is no longer considered to be
developmental.

Routine operation.  The day-to-day monitoring and surveillance activities of the contamination warning
system that are guided by the  operational strategy. To the extent possible, routine operation of the
contamination warning system is integrated into the routine operations of the drinking water utility.

Salvage value.  Estimated value of assets at the end of the useful life of the system.

Simulation study. A study designed to systematically characterize the detection capabilities of the
Cincinnati drinking water contamination warning system. In this study, a computer model of the
contamination warning system is challenged with an ensemble of 2,023 simulated contamination
scenarios. The output from these simulations provides estimates of the consequences resulting from each
contamination scenario, including fatalities, illnesses, and extent of distribution system contamination.
Consequences are estimated under two cases, with and without the contamination warning system in
operation. The difference provides an estimate of the reduction in consequences.

Threat level. The results of the threat level determination process, indicating whether contamination is
Possible, Credible or Confirmed.

Threat level determination process. A systematic process in which all available and relevant
information available from a contamination warning system is evaluated to determine whether the threat
level is Possible, Credible, or  Confirmed. This is an iterative process in which the threat level is revised
as additional information becomes available.  The conclusions from the threat evaluation process are
considered during consequence management when making response decisions.

Time for Confirmed determination. A portion of the incident timeline that begins with the
determination that contamination is Credible and ends with contamination either being Confirmed or
ruled out.  This includes the time required to perform lab analyses, collect additional information, and
analyze the collective information to determine if the preponderance of evidence confirms the incident.

Time for contaminant detection. A portion of the incident timeline that begins with the start of
contamination injection and ends with the generation and recognition of an alert. The time for
contaminant detection may be subdivided for specific components to capture important elements of this
portion of the incident timeline (e.g., sample processing time, data transmission time, event detection
time, etc.).

Time for Credible determination. A portion of the incident timeline that begins with the recognition of
a possible contamination incident and ends with a determination regarding whether contamination is
Credible. This includes the time required to perform multi-component investigation and data integration,
implement field investigations (such as site characterization and sampling)  and collect additional
information to support the investigation.

Time for initial alert validation.  A portion of the incident timeline that begins with the recognition of
an alert and ends with a determination regarding whether or not contamination is possible.

Toxic chemicals. Highly toxic chemicals that pose an acute risk to public health at relatively low
concentrations.

Valid alert. An alert due to a public health incident, including water contamination.

Water Utility Emergency Response Manager. A role within the Cincinnati contamination warning
system filled by a mid-level  manager from the drinking water utility. Responsibilities of this position
include: receiving notification of validated alerts, verifying that a valid alert indicates Possible
contamination, coordinating the threat level determination process, integrating information across the
different monitoring and surveillance components, and activating the consequence management plan.  In
the early stages of responding to Possible contamination, the Water Utility Emergency Response Manager
may serve as Incident Commander.