EPA/600/B-16/080
                               May 2016
Model  Performance Evaluation and
Scenario Analysis (MPESA) Tutorial
[Cover figures: a time series plot of observed and simulated baseflow, and a flow
duration curve spanning high-flow, moist, mid-range, dry, and low-flow conditions]
            Model Performance and Scenario Analysis Tool Tutorial

                      By
          Yusuf Mohamoud, Ph.D., P.E.
      National Exposure Research Laboratory
       Office of Research and Development
  United States Environmental Protection Agency
              Research Triangle Park, North Carolina
      U.S. Environmental Protection Agency
       Office of Research and Development
             Washington, DC 20460
                               DISCLAIMER
This tutorial document has been reviewed by the National Exposure Research
Laboratory (NERL), U.S. Environmental Protection Agency (USEPA) and has been
approved for publication. The model performance evaluation and scenario analysis tool
has not been tested extensively with diverse data sets. The author and the U.S.
Environmental Protection Agency are not responsible and assume no liability
whatsoever for any results or any use made of the results obtained from this program,
nor for any damages or litigation that result from the use of this tool for any purpose.
Mention of trade names or commercial products does not constitute endorsement or
recommendation for use.

Citation:

Mohamoud, Y.M. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial.
U.S. EPA Office of Research and Development, Washington, DC, EPA/600/B-16/080, 2016.
                             ACKNOWLEDGEMENTS


Acknowledgments are due to Xueyao Yang and Matthew Panunto for their participation
in the development of the model performance evaluation and scenario analysis tool.

                                  ABSTRACT

This tool consists of two parts: model performance evaluation and scenario analysis
(MPESA). The model performance evaluation part has two components: model
performance evaluation metrics and model diagnostics. The metrics provide modelers
with statistical goodness-of-fit measures that capture magnitude-only, sequence-only,
and combined magnitude and sequence errors. The performance measures include error
analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new
weighted rank method. These performance metrics provide useful information only about
overall model performance. MPESA is based on the separation of observed and simulated
time series into magnitude and sequence components. Separating a time series into
magnitude and sequence components, and reconstructing it back into a time series,
provides diagnostic insights to modelers. For example, traditional approaches cannot
identify whether the source of uncertainty in the simulated data is the quality of
the input data or the way the analyst adjusted the model parameters. This report
presents a suite of model diagnostics that identify whether mismatches between
observed and simulated data result from magnitude- or sequence-related errors. MPESA
offers graphical and statistical options that allow HSPF users to compare observed
and simulated time series and identify the parameter values to adjust or the input
data to modify. The scenario analysis part of the tool provides quantitative metrics
on how model-simulated scenario results differ from the baseline results and what
impact these differences may have on aquatic organisms and their habitats (e.g.,
increased flood and drought frequency).
                             TABLE OF CONTENTS
ACKNOWLEDGEMENTS
1.0 Introduction
2.0 Model Performance Evaluation and Scenario Analysis Tool
  2.1 Data Requirement
  2.2 Importing Data
  2.3 Model Performance and Diagnostics Tool
    2.3.1 Basic Statistical Analysis - Magnitude Only
    2.3.2 Visualization and Zooming
    2.3.3 Diagnostics and Feedback - Magnitude Only
    2.3.4 Diagnostics and Feedback - Magnitude Only Comparisons
  2.4 Baseline and Scenario Comparison Tool
    2.4.1 Low Flow Analysis - 7Q10
                                LIST OF FIGURES

Figure 1: Select Tool panel
Figure 2: Paired Input Time Series Data
Figure 3: Enter Data Information
Figure 4: Paste Data Window
Figure 5: Paired Input Time Series Data Panel with Imported Data
Figure 6: Model Performance and Diagnostics Tool Panel
Figure 7: Descriptive Statistics Panel
Figure 8: Error Analysis Panel
Figure 9: Time Series Plot of Observed and Simulated Data
Figure 10: Duration Curve
Figure 11: Annual and Monthly Panel
Figure 12: Plotted Time Series of Average/Aggregated Data
Figure 13: Daily Statistics Panel
Figure 14: Summary Statistics for Years of Interest
Figure 15: Quantile RSR Panel
Figure 16: NRMSE Plot
Figure 17: Baseflow Separation
Figure 18: Time-series and Duration Curve Plots from Baseflow Separation
Figure 19: Annual Duration Curves
Figure 20: Calculated Statistics, Flow Duration Curve, and Percentile Values
Figure 21: Magnitude Comparison Panel
Figure 22: Frequency Comparison Panel with Optional Plot
Figure 23: Quantile Comparison Panel
Figure 24: Serial and Cross Correlation Analysis Panel
Figure 25: Correlation Analysis Calculated
Figure 26: Correlation Analysis Plotted
Figure 27: Weighted Rank Panel
Figure 28: Weighted Rank Plot
Figure 29: Combined Magnitude and Sequence Performance Panel
Figure 30: Baseline and Scenario Comparison Tool
Figure 31: Low Flow Analysis - 7Q10 Panel

Figure 32: Hydrologic Indicators of Change - Flashiness Panel
Figure 33: Environmental Flow Calculations Panel
Figure 34: Indices of Flow Variability - Flow Pulses Panel with Optional Plot
1.0 Introduction

The Model Performance Evaluation and Scenario Analysis Tool is based on the time
series separation and reconstruction (TSSR) paradigm, in which observed and simulated
time series are separated into magnitude and sequence components. The magnitude
component is represented by the duration curve, also known as the exceedance
probability curve; the sequence component is separated from the time series and stored
for reconstruction applications (Mohamoud, 2014). The tool allows users to paste
simulated output data along with observed data for the same corresponding time
period, and to calculate a suite of model performance measures that compare the two
datasets. It also plots a variety of graphs to visually compare the simulated results
with the observed data. MPESA consists of two components: Model Performance
Evaluation and Diagnostics, and Baseline and Scenario Comparison. Model Performance
Evaluation and Diagnostics provides insights that facilitate the model calibration
process. Although this component provides important insights about model calibration,
it is not a model calibration tool: it does not automatically adjust model parameter
values or explicitly inform modelers which parameter to adjust manually. The
component provides statistical metrics that include error analysis, weighted ranks,
and model efficiency. It can also aggregate data values, assess serial correlations
between simulated and observed results, and generate time series and duration curve
plots.
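The TSSR idea can be sketched in a few lines of Python (an illustration of the concept, not the tool's own code): the magnitude component is the descending-sorted values, the sequence component is the ordering, and together they reconstruct the original series exactly.

```python
def separate(series):
    """Split a time series into its magnitude and sequence components."""
    # magnitude: the values sorted in descending order (the duration-curve component)
    order = sorted(range(len(series)), key=lambda i: -series[i])
    magnitude = [series[i] for i in order]
    # sequence: the rank each time step's value occupies in `magnitude`
    sequence = [0] * len(series)
    for rank, i in enumerate(order):
        sequence[i] = rank
    return magnitude, sequence

def reconstruct(magnitude, sequence):
    """Rebuild the original time series from the two components."""
    return [magnitude[r] for r in sequence]

flow = [18.0, 13.0, 17.0, 24.0, 51.0]
mag, seq = separate(flow)
assert reconstruct(mag, seq) == flow
```

Two series with the same magnitude component but different sequence components (for example, a series and a shifted copy of it) are indistinguishable to magnitude-only statistics, which is the behavior the diagnostics below exploit.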

The Baseline and Scenario Comparison component calculates low flow indices (e.g.,
7Q10), environmental flow indices, indicators of flow flashiness, and indices of flow
variability. These comparisons allow tool users to assess how much a particular index
changed from the baseline condition. In addition, it enables environmental managers
and engineers to determine the consequences that these changes have on environmental
protection or on the design of hydraulic structures.
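As one example of such an index, 7Q10 is built from the annual minimum 7-day average flow. The rolling-average step can be sketched as follows (an illustrative fragment only; the frequency analysis that attaches the 10-year recurrence interval is omitted):

```python
def min_7day_average(daily_flow):
    """Minimum of the 7-day moving-average flow (the '7Q' part of 7Q10)."""
    if len(daily_flow) < 7:
        raise ValueError("need at least 7 daily values")
    averages = [
        sum(daily_flow[i:i + 7]) / 7.0
        for i in range(len(daily_flow) - 6)
    ]
    return min(averages)

# the first window (days 1-7) happens to be the driest in this toy record
flow = [9.0, 8.0, 7.0, 7.0, 8.0, 9.0, 10.0, 12.0, 11.0, 9.0]
low = min_7day_average(flow)
```

Applied to each year of record, this yields the annual series to which a low-flow frequency distribution is then fitted.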

Note that this document is a tutorial that guides the user through estimating model
performance metrics and using visualization tools to facilitate the model calibration
effort. It is not a manual that presents detailed information about the methods.

2.0 Model Performance Evaluation and Scenario Analysis Tool

2.1. Data requirement
   •  Tool users must remove all headers/text from the data before pasting it into the
      "Import Data" window. This can be done in a text editor such as Notepad or with
      the desktop HDFT tool. The tool uses date, observed time series, and simulated
      time series columns; it likewise uses baseline and scenario columns.
   •  The tool works only with tab-separated date and value formats. Please refer to
      the Appendix for a sample data format.

Note: The tool removes rows with missing data and informs the user about rows with
missing values if the values are designated as blank.
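As a sketch of the required layout, the following Python fragment drops header rows and emits the tab-separated date/observed/simulated columns the tool expects. The date-checking rule is an assumption about typical exports, not the tool's own validation:

```python
def strip_headers(rows):
    """Keep only data rows, formatted as tab-separated date, observed, simulated.

    `rows` is an iterable of already-split fields; header or comment rows
    (anything whose first field is not MM/DD/YYYY-like) are dropped.
    """
    out = []
    for fields in rows:
        if len(fields) < 3:
            continue
        # crude date check: three numeric pieces separated by "/" (an assumption
        # about the input layout, not the tool's own validation)
        parts = fields[0].split("/")
        if len(parts) == 3 and all(p.isdigit() for p in parts):
            out.append("\t".join(fields[:3]))
    return out

raw = [
    ["Date", "Observed", "Simulated"],   # header row to drop
    ["01/01/2000", "18.0", "18.0"],
    ["01/02/2000", "13.0", "17.0"],
]
lines = strip_headers(raw)
```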

Example data used:

Data source: USGS NWIS
Source site name: St. Jones River at Dover, DE
Site number: 01483700
Dates used: 01/01/2000 to 12/31/2009


2.2 Importing Data

Before running the tool, users are required to paste their data into the tool. This section
presents the steps necessary for users to import their data.

Step 1: Open/run the tool.

Step 2: The program should display a main window with four tabs (Select Tool, Paired
Input Time Series Data, Model Performance and Diagnostics Tool, and Baseline and
Scenario Comparison Tool). To access the Model Performance and Diagnostics
components of the tool, select the "Model Performance and Diagnostics Tool" radio
button from the Select Tool panel [Figure 1].
[Screenshot: main window with four tabs (Select Tool, Paired Input Time Series Data,
Model Performance and Diagnostics Tool, Baseline and Scenario Comparison Tool) and
radio buttons for the Model Performance Evaluation and Diagnostics Tool and the
Baseline and Scenario Comparison Tool]
                                Figure 1: Select Tool panel


A new window will appear that directs users to the Paired Input Time Series Data tab
[Figure 2].
[Screenshot: Paired Input Time Series Data panel with a Paste Data button and empty
Date, Observed/Baseline, and Simulated/Scenario columns]
                       Figure 2: Paired Input Time Series Data
Step 3: Click the "Paste Data" button on the Paired Input Time Series Data Panel to
open the "Enter Data Information" window [Figure 3].

Note: Tool users must arrange the columns so that the date is in the first column, the
observed data in the second, and the simulated data in the third.
[Screenshot: Enter Data Information window with date and time format options (e.g.,
YYYYMMDD, DDMMYYYY), date and time separator fields, and checkboxes for Years, Month,
Day, Hour, and Minutes]
                            Figure 3: Enter Data Information
To avoid formatting errors, tool users must select the exact date and time format
including the date and time separators from the "Enter Data Information" Window.
Step 4: On the "enter data information" window, select the format of the date
according to your input data. In this example, MM/DD/YYYY is selected [Figure 3].

Note:    In the current example, daily data are used, thus the hour and minute
         checkboxes were not selected. If the data are hourly or sub-hourly, check the
         corresponding hour and minute checkboxes.


Step 5: Enter the correct date and time separators of the imported data. Note that a
forward slash ("/") is used as the date separator in this example.

Note:    The tool does not work properly if the date separator entered and the date
         separator of the pasted data do not match.
Step 6: After the proper date and time formats are selected, click the confirmation
button to open the "Paste Data" window [Figure 4].
[Screenshot: Paste Data window. The instructions read: "Get the data in USGS format.
Copy and paste (Ctrl+V) it into the text area below. Click 'Import Data' to place the
data into the Input Table." Below the text area are column selectors (Date Column,
Value 1, Value 2) and an Import Data button]
                             Figure 4: Paste Data Window

Step 7: Paste the raw data (using Ctrl + V) into the "Paste Data" window [Figure 4].

Note:    The data may not import properly if fewer than three columns are pasted into
         the Paste Data window.
Step 8: Click the "Import Data" button at the bottom of the window [Figure 4] to bring
the data into the Paired Input Time Series Data table [Figure 5].
[Screenshot: Paired Input Time Series Data panel populated with daily data beginning
01/01/2000; the Simulated/Scenario column repeats the Observed/Baseline values offset
by one day]
                  Figure 5: Paired Input Time Series Data Panel with Imported Data
Note: For this example data, the observed and simulated data are similar in magnitude
but not in sequence. The simulated column is shifted backward by one day to
deliberately introduce a sequence error.
2.3 Model Performance and Diagnostics Tool

Once the data have been properly imported, users can run the Model Performance and
Diagnostics Tool [Figure 6], which evaluates how well the model simulated the
observed data. The Model Performance and Diagnostics Tool panel is organized into
groups by functionality: Basic Statistical Analysis, Visualization and Zooming,
Diagnostics and Feedback, Sequence Only, and Combined Magnitude and Sequence.

[Screenshot: Model Performance and Diagnostics Tool panel. Button groups: Basic
Statistical Analysis - Magnitude Only (Descriptive Statistics, Error Analysis, Annual
and Monthly); Visualization and Zooming (Plot Time Series, Plot Duration Curve);
Diagnostics and Feedback - Magnitude Only (Daily Statistics, Baseflow Separation,
Quantile RSR, Annual Duration Curves); Magnitude Only / Sequence Only (Frequency
Comparison, Quantile Comparison, Weighted Rank); Combined Magnitude and Sequence
Measures]
                   Figure 6: Model Performance and Diagnostics Tool Panel
2.3.1. Basic Statistical Analysis - Magnitude Only
Step 1: Click on the Model Performance and Diagnostics Tool tab [Figure 6].

Step 2: A statistical summary for the observed and simulated datasets is calculated by
clicking the "Descriptive Statistics" button [Figure 6].
Parameter    Observed Estimates    Simulated Estimates
Count        3653                  3653
Sum          152970.89             152970.89
Mean         41.88                 41.88
Minimum      0.23                  0.23
Maximum      827.0                 827.0
Range        826.72                826.72
Variance     4028.13               4028.13
CV           1.52                  1.52
STD          63.47                 63.47
KURTOSIS     33.52                 33.52
SKEW         4.77                  4.77
                        Figure 7: Descriptive Statistics Panel

Figure 7 shows comparisons of observed and simulated descriptive statistics that
include the mean, maximum, minimum, standard deviation, variance, coefficient of
variation, and skew coefficient. Even though the simulated column was shifted backward
by one day, all the descriptive statistics for the observed and simulated datasets are
equal. This demonstrates the importance of separate magnitude and sequence
evaluations: descriptive statistics capture only magnitude differences, not sequence
differences.
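This behavior follows directly from the fact that descriptive statistics depend only on the set of values, not on their order. A quick check (illustrative only, using Python's statistics module rather than the tool):

```python
import statistics

obs = [18.0, 13.0, 17.0, 24.0, 51.0, 48.0, 35.0]
sim = obs[1:] + obs[:1]   # one-step shift: same values, different sequence

# order-insensitive statistics are identical for any reordering
assert statistics.mean(obs) == statistics.mean(sim)
assert statistics.pstdev(obs) == statistics.pstdev(sim)
assert (min(obs), max(obs)) == (min(sim), max(sim))
```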
Step 3: For additional model evaluation statistics, click the "Error Analysis" button
[Figure 6] to display the Error Analysis Panel [Figure 8].
Model Evaluation Statistics      Value
Mean Error (ME)                  0.0
Mean Absolute Error (MAE)        15.33
Percent BIAS (PBIAS)             0.0
                              Figure 8: Error Analysis Panel
Note: The results of the error analysis show that the mean error and percent bias
(PBIAS) have zero values, suggesting that these two statistics capture only magnitude
errors, not sequence errors. Conversely, the mean absolute error appears to capture
both magnitude and sequence errors.
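These statistics can be written out directly. Assuming the conventional definitions (sign conventions for PBIAS vary between references), a pure one-step shift drives ME and PBIAS to zero while MAE remains positive:

```python
def mean_error(obs, sim):
    # signed errors cancel when the two series hold the same values
    return sum(s - o for o, s in zip(obs, sim)) / len(obs)

def mean_abs_error(obs, sim):
    # absolute errors do not cancel, so sequence errors show up here
    return sum(abs(s - o) for o, s in zip(obs, sim)) / len(obs)

def pbias(obs, sim):
    # percent bias; sign convention varies between references
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

obs = [18.0, 13.0, 17.0, 24.0, 51.0, 48.0]
sim = obs[1:] + obs[:1]          # one-step shift: magnitudes identical

assert mean_error(obs, sim) == 0.0
assert pbias(obs, sim) == 0.0
assert mean_abs_error(obs, sim) > 0.0
```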
2.3.2 Visualization and Zooming

Step 1: The observed and simulated datasets can be viewed by clicking the "Plot Time
Series" button [Figure 6], which displays comparisons of observed and simulated
time-series data in a plotting environment [Figure 9].

Note:    To view the observed and simulated datasets individually, users can click the
         radio buttons at the bottom of the window.
[Screenshot: zoomed time series plot of observed and simulated flows, 15-Feb through
30-May]
               Figure 9: Time Series Plot of Observed and Simulated Data
Note: The zoomed time series plot shows the backward shift of the simulated time series
data.
Step 2: A duration curve [Figure 10] for the observed and simulated data can be
viewed by clicking the "Plot Duration Curve" button shown in [Figure 6].
[Screenshot: duration curve plot of observed and simulated flows on a logarithmic
scale against percentage of time flow exceeded]
                            Figure 10: Duration Curve
Figure 10 shows that the simulated and observed data have the same duration curves
(magnitude curves) even though the two datasets have time sequence differences (a
one-day shift). As such, duration curves represent only the magnitude component of a
time series and are useful only in capturing magnitude-related errors. Note that
[Figure 10] does not show the effect of the one-day shift, i.e., the sequence error.
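Because a duration curve is built from sorted values only, any reordering of the series leaves it unchanged. A minimal sketch using the Weibull plotting position (one common choice among several):

```python
def duration_curve(series):
    """Return (exceedance_percent, value) pairs for a flow series."""
    ranked = sorted(series, reverse=True)
    n = len(ranked)
    # Weibull plotting position: P = 100 * m / (n + 1), m = 1..n
    return [(100.0 * (m + 1) / (n + 1), q) for m, q in enumerate(ranked)]

obs = [18.0, 13.0, 17.0, 24.0, 51.0]
sim = obs[1:] + obs[:1]           # sequence error only
assert duration_curve(obs) == duration_curve(sim)
```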
Step 3: Users can aggregate their observed and simulated data to obtain yearly or
monthly sums and averages by clicking the "Annual and Monthly" button [Figure 6],
which opens the Annual and Monthly Panel [Figure 11]. Users must select their time
step and operation of choice from the drop-down options before clicking the
"Calculate" button. The averaged/aggregated data can then be plotted as a time series
by clicking "Plot" [Figure 12].
[Screenshot: Annual and Monthly Panel with Time Step and Operation drop-downs (e.g.,
Monthly, Sum/Aggregate) and a Calculate button]
                       Figure 11: Annual and Monthly Panel
Note: The Annual and Monthly panel offers only visualization; it does not offer
diagnostic guidance to modelers.
[Screenshot: Average/Aggregate Panel plotting annual observed and simulated time
series, 2000-2010, with a Plot button]
            Figure 12: Plotted Time Series of Average/Aggregated Data
2.3.3 Diagnostics and Feedback - Magnitude Only

Step 1: Daily statistics can be calculated for both the observed and simulated data by
clicking the "Daily Statistics" button shown in [Figure 6], which opens the Daily
Statistics Panel [Figure 13]. Users can then select years of interest and click the
Plot Data button to view time series plots, or click the Calculate Statistics button
to calculate sample statistics for the selected observed and simulated data
[Figure 14].
[Screenshot: Daily Statistics Panel with two scrollable tables of daily values by year
(Daily Data - Observed, Daily Data - Simulated), Plot Data and Calculate Statistics
buttons, and the note "Select rows (Years) in the table/s before clicking the 'Plot
Data' button"]
                         Figure 13: Daily Statistics Panel
Plotting the entire observed and simulated time series on a year-by-year basis and
comparing the observed and simulated hydrographs provides important diagnostic
insights, including the identification of outliers in the observed time series data.
It also shows the years with hydrograph over-prediction and under-prediction,
alerting the modeler to search for possible causes of the hydrograph differences.
[Screenshot: Daily Statistics Panel with summary tables below the daily data,
reporting the mean, median, and standard deviation of the observed and simulated data
for each month of the water year]
                      Figure 14: Summary Statistics for Years of Interest

Note: The lower part of Figure 14 shows the mean, median, standard deviation, and
coefficient of variation of the observed and simulated data for an average water year.
Step 2: To view the normalized root mean square error for the datasets, click the
"Quantile RSR" button shown in [Figure 6] to bring up the Quantile RSR Panel
[Figure 15]. When the "Plot" button is selected in the Quantile RSR Panel [Figure 15],
the NRMSE results can be viewed for various quantiles in a graph [Figure 16].
Quantiles    Norm. RMSE
Q0.1         0.0
Q0.5         0.0
Q1           0.0
Q5           0.0
Q10          0.0
Q20          0.0
Q30          0.0
Q40          0.0
Q50          0.0
Q60          0.0
Q70          0.0
Q80          0.0
Q90          0.0
Q95          0.0
Q99          0.0
                             Figure 15: Quantile RSR Panel
Note: The normalized root mean square error values that correspond to the 15
exceedances are all zero. Error values of zero signify that the data do not have
magnitude errors and that this statistic does not capture sequence errors.
[Screenshot: plot of normalized root mean square error against percentile; the curve
is zero throughout]
                                 Figure 16: NRMSE Plot
Note: NRMSE values of zero for all exceedances in Figure 16 signify the absence of
magnitude errors in the data.
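The quantile-by-quantile comparison operates on the sorted series, so a pure sequence error yields zeros at every quantile. A sketch, assuming an RMSE computed between observed and simulated quantiles (the tool's exact normalization is not documented in this tutorial):

```python
def quantile(sorted_vals, p):
    """Linear-interpolated quantile of an ascending-sorted list, p in [0, 1]."""
    idx = p * (len(sorted_vals) - 1)
    lo, hi = int(idx), min(int(idx) + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def quantile_rmse(obs, sim, probs):
    """RMSE between observed and simulated quantiles (magnitude-only error)."""
    o, s = sorted(obs), sorted(sim)
    diffs = [(quantile(o, p) - quantile(s, p)) ** 2 for p in probs]
    return (sum(diffs) / len(diffs)) ** 0.5

obs = [18.0, 13.0, 17.0, 24.0, 51.0, 48.0]
sim = obs[1:] + obs[:1]   # shifted: same magnitudes, different sequence
assert quantile_rmse(obs, sim, [0.1, 0.5, 0.9]) == 0.0
```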
Step 3: Click the "Baseflow Separation" button shown in [Figure 6]. A baseflow
separation window will appear. Upon entering a user-specified baseflow index and
moving average interval, the baseflow and direct flow statistics are calculated for
the observed and simulated data [Figure 17].
[Screenshot: Baseflow Separation Panel with fields for Maximum Baseflow Index (0.73)
and Smoothing Factor-Recession, a Calculate Baseflow and Direct Flow Statistics
button, a table of total, baseflow, and direct runoff values for the observed and
simulated data, and a time series plot of observed total flow and observed baseflow]
                              Figure 17: Baseflow Separation
Note: The first step is to plot the observed total flow with the observed baseflow
and examine whether the upper baseflow line touches the lower part of the total flow
line (Figure 17). If the baseflow line is either too low or too high compared to the
total flow line, tool users need to adjust the baseflow index and the moving average
interval until the two lines touch each other at both the rising and the recession
limbs of the hydrograph. Once the maximum baseflow index and the smoothing factor are
set, tool users need to calibrate the model until the simulated baseflow matches the
observed baseflow and the simulated direct runoff matches the observed direct runoff.
We assume that observed and simulated direct runoff will automatically match when the
observed and simulated baseflows match.
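The tutorial does not document the separation algorithm itself; the following is only a crude illustrative sketch in which a centered moving average smooths the hydrograph and the maximum baseflow index caps baseflow as a fraction of total flow. The parameter names and logic are assumptions, not the tool's method:

```python
def baseflow_separation(flow, bfi_max=0.73, window=5):
    """Crude baseflow estimate: moving average of flow, capped by the BFI."""
    n = len(flow)
    baseflow = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        smoothed = sum(flow[lo:hi]) / (hi - lo)
        # baseflow can never exceed bfi_max * total flow, nor the flow itself
        baseflow.append(min(smoothed, bfi_max * flow[i], flow[i]))
    direct = [q - b for q, b in zip(flow, baseflow)]
    return baseflow, direct

flow = [18.0, 18.0, 17.0, 24.0, 51.0, 48.0, 35.0]
base, direct = baseflow_separation(flow)
```

Raising `bfi_max` or widening `window` lifts or flattens the baseflow line, which mirrors the manual adjustment described above.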
Step 4: The calculated baseflow and direct flow statistics can be viewed in a plot
[Figure 18] by selecting one or more columns of discharge values and clicking the
"Plot Time Series" button. Alternatively, duration curves of the flow values can be
generated by clicking the "Plot Duration Curve" button.
Note:    Users do NOT need to select a data column to plot the time-series graph.
[Screenshot: time series plot and flow duration curve produced from the baseflow
separation; the duration curve spans high-flow, moist, mid-range, dry, and low-flow
condition zones]
              Figure 18: Time-series and Duration Curve Plots from Baseflow Separation

Figure 18 shows the simulated total streamflow and direct runoff duration curves.
Step 5: Annual duration curves can be viewed for both the observed and simulated data
by clicking the "Annual Duration Curves" button [Figure 6].

Step 6: With the Annual Duration Curves Panel now open [Figure 19], users may
calculate statistics, create a duration curve, or calculate exceedance values by
selecting more than one column of data (holding the CTRL key) and clicking the
"Calculate Statistics", "Duration Curve", or "Exceedances" button [Figure 19].
The summary statistics for the selected columns will appear as new columns on the left
end of the scrollable Annual Duration Curves Panel [Figure 20]. New windows will
appear for the duration curve and exceedances values.
[Figure 19 shows the Annual Duration Curves Panel: sorted annual flow columns (Obs.2001,
Sim.2001, Obs.2002, Sim.2002, and so on), the Calculate Statistics, Duration Curve, and
Exceedances buttons, and a row of year-by-year Nash-Sutcliffe efficiency values:

Year:  2001    2002    2003    2004    2005    2006
NSE:   0.5201  0.5057  0.6174  0.4883  0.4695  0.5128]

Figure 19: Annual Duration Curves
Note: Year-by-year magnitude or duration curve comparisons identify the years in
which model predictions are relatively poor. These comparisons provide diagnostic
guidelines to modelers.
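The year-by-year Nash-Sutcliffe efficiency (NSE) values shown in the panel can be
reproduced conceptually as follows. This is an illustrative sketch, and `nse_by_year` is a
hypothetical helper, not part of MPESA:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect match; values below
    0 mean the model predicts worse than the observed mean would."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nse_by_year(years, obs, sim):
    """Year-by-year NSE, flagging the years the model predicts poorly."""
    years = np.asarray(years)
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return {y: nse(obs[years == y], sim[years == y]) for y in np.unique(years)}
```

Years with markedly lower NSE than the rest are the ones worth diagnosing first.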
[Figure 20 shows the Annual Duration Curves Panel after calculation: summary statistic
columns (median, mean, and standard deviation for the observed and simulated series of
each year), a flow duration curve plot spanning the high flow, moist, mid-range, dry, and
low flow zones, and annual quantile columns:

Quantile   Obs.2001  Sim.2001  Obs.2002  Sim.2002
Q0.1       464.0     464.0     124.0     124.0
Q0.5       387.0     387.0     114.0     114.0
Q1         269.0     269.0     99.0      99.0
Q5         125.0     125.0     46.0      46.0
Q10        73.0      73.0      28.0      28.0
Q20        56.0      56.0      15.0      15.0
Q30        42.0      42.0      11.0      11.0
Q40        27.0      27.0      8.4       8.3
Q50        20.0      20.0      6.7       6.6
Q60        16.0      16.0      5.0       5.0
Q70        12.0      12.0      2.9       2.9
Q80        9.6       9.6       1.6       1.6
Q90        5.5       5.5       0.62      0.62
Q95        4.2       4.2       0.44      0.44]

Figure 20: Calculated Statistics, Flow Duration Curve, and Percentile Values

Note: Calculated statistics, duration curve comparisons, and percentile values provide
useful diagnostic guidelines to modelers.
2.3.4 Diagnostics and Feedback: Magnitude-Only Comparisons

Step 1: A magnitude comparison of the observed and simulated data can be viewed by
clicking the Magnitude Comparison button shown in [Figure 6], which will display
the Magnitude Comparison Panel with an option to plot the observed and simulated
magnitude values [Figure 21]. This panel also displays the Nash-Sutcliffe Efficiency
and R-square values calculated between the two datasets. Note that these values are
estimated from the entire dataset and offer numerical performance metrics for the
magnitude component only; magnitude comparisons alone do not offer diagnostic
guidelines to modelers.
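The idea behind a magnitude-only comparison is to sort both series before computing the
efficiency, so that timing (sequence) errors drop out. A minimal sketch of that idea, not
necessarily the tool's exact formulation:

```python
import numpy as np

def magnitude_nse(obs, sim):
    """NSE computed on the *sorted* series, so timing errors are
    ignored and only the flow magnitude distribution is compared.
    A value of 1.0 means the two distributions match exactly."""
    o = np.sort(np.asarray(obs, float))
    s = np.sort(np.asarray(sim, float))
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

obs = [10.0, 50.0, 30.0, 20.0, 40.0]
sim = [50.0, 10.0, 20.0, 40.0, 30.0]   # same values, shuffled in time
# magnitude_nse(obs, sim) -> 1.0 despite the sequence mismatch
```

This explains why Figure 21 reports an efficiency of 1.0 even for a time-shifted
simulation: the magnitudes are identical.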
[Figure 21 shows the Magnitude Comparison Panel: sorted Observed and Simulated flow
columns (identical in this example, from 827.0 down to 397.0), a Plot button, and a
Nash-Sutcliffe Efficiency and R-square of 1.0 for the magnitude component.]

Figure 21: Magnitude Comparison Panel

Note: A Nash-Sutcliffe Efficiency of one in Figure 21 means no magnitude errors.

Step 2: A frequency comparison plot can be viewed by clicking the Frequency
Comparison button shown in [Figure 6]. The Frequency Comparison Panel is shown in
[Figure 22], with an option to plot the flood frequency at various return periods.
[Figure 22 shows the Frequency Comparison Panel: a table of observed (Qobs) and
simulated (Qsim) peak flows at return periods from 2 to 100 years, together with a
log-scale frequency analysis plot of the observed and simulated series. In this example
the two columns match (e.g., 425.266 at the shortest return period and 841.004 at 100
years).]

Figure 22: Frequency Comparison Panel with Optional Plot
Note: Frequency comparisons of observed and simulated datasets offer a diagnostic
guideline of how well the model simulates the annual peak flows. For this example, the
magnitudes of the observed and simulated time series are equal, and their peak flow
values are equal.
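Empirical return periods of the kind tabulated in Figure 22 can be illustrated with the
Weibull plotting position. Note that MPESA may instead fit a frequency distribution (for
example, log-Pearson Type III) to the annual peaks, so this is only a sketch of the concept:

```python
import numpy as np

def return_periods(annual_peaks):
    """Empirical return periods for annual peak flows using the
    Weibull plotting position T = (n + 1) / m, where m is the rank
    of the peak in descending order (m = 1 is the largest flood)."""
    q = np.sort(np.asarray(annual_peaks, float))[::-1]
    m = np.arange(1, q.size + 1)
    T = (q.size + 1) / m
    return T, q

T, q = return_periods([841, 300, 520, 410, 673, 250, 480, 390, 560])
# Largest peak (841) gets the longest empirical return period: (9 + 1) / 1 = 10 yr
```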
Step 3: Quantile values can be viewed by clicking the Quantile Comparison button
[Figure 6], which will produce a Quantile Comparison Panel [Figure 23] displaying the
observed and simulated flow values for each quantile. By clicking the Plot button, the
quantile values can be viewed in a graph [Figure 23].
[Figure 23 shows the Quantile Comparison Panel with identical observed and simulated
values at each quantile:

Quantile   Observed = Simulated
Q0.1       639.0
Q0.5       420.0
Q1         325.0
Q5         145.0
Q10        87.0
Q20        56.0
Q30        41.0
Q40        31.0
Q50        24.0
Q60        17.0
Q70        12.0
Q80        7.8
Q90        3.8
Q95        2.3
Q99        0.62]

Figure 23: Quantile Comparison Panel
Note: Quantile comparisons provide diagnostic guidelines that allow modelers to
identify the quantiles whose simulated values are over-predicted or under-predicted
with respect to the observed values. Modelers may tie specific processes to specific
quantiles so that they can identify and adjust the relevant parameter values as part of
the model calibration process.
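A quantile comparison of this kind can be sketched as a percent difference at each
exceedance percentile, where positive values flag over-predicted quantiles. Illustrative
code, not MPESA's implementation:

```python
import numpy as np

def quantile_bias(obs, sim, pcts=(0.1, 1, 5, 10, 25, 50, 75, 90, 95, 99)):
    """Percent difference between simulated and observed flow at each
    exceedance percentile; positive values mean that quantile of the
    flow regime is over-predicted by the model."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    bias = {}
    for p in pcts:
        # p% exceedance corresponds to the (100 - p)th percentile of flow
        qo = np.percentile(obs, 100 - p)
        qs = np.percentile(sim, 100 - p)
        bias[p] = 100.0 * (qs - qo) / qo
    return bias
```

For example, a large positive bias at Q90-Q99 would point at low-flow (baseflow)
parameters rather than storm-response parameters.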
2.3.5 Sequence Comparisons

Step 1: Correlation analysis of observed and simulated data can be run by clicking the
Serial and Cross Correlation Analysis button shown in [Figure 6]. The correlation
analysis consists of serial and cross correlation analysis of the observed and simulated
datasets [Figure 24].
[Figure 24 shows the Serial and Cross Correlation Analysis Panel: a Maximum Number of
Lags field (default 20), three Nash-Sutcliffe coefficient fields (serial correlation
observed vs. simulated, serial simulated vs. cross correlation, and serial observed vs.
cross correlation), and empty Serial Correlation and Cross Correlation tables.]

Figure 24: Serial and Cross Correlation Analysis Panel
                    Model Performance and Scenario Analysis Tool Tutorial
                                                           33

-------
Figure 24 compares the cross correlation between observed and simulated data with the
serial correlations of the observed and simulated data. Further, this panel calculates
Nash-Sutcliffe values between the two serial correlations and between each serial
correlation and the cross correlation. The correlation analysis provides diagnostic
insights about sequence errors.
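The serial and cross correlations behind this panel can be sketched with ordinary Pearson
correlations at each lag. This is an illustration; the tool's exact lag conventions may
differ:

```python
import numpy as np

def corr_at_lag(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag >= 0:
        a, b = x[: x.size - lag], y[lag:]
    else:
        a, b = x[-lag:], y[: y.size + lag]
    return np.corrcoef(a, b)[0, 1]

def correlogram(obs, sim, max_lag=20):
    """Serial correlations of each series plus their cross correlation
    for lags -max_lag..+max_lag, as plotted in the correlation panel."""
    lags = range(-max_lag, max_lag + 1)
    return {
        "obs_serial": [corr_at_lag(obs, obs, k) for k in lags],
        "sim_serial": [corr_at_lag(sim, sim, k) for k in lags],
        "cross": [corr_at_lag(obs, sim, k) for k in lags],
    }
```

A cross correlation that peaks at a nonzero lag (rather than at lag 0) is the signature of
a timing shift between the two series.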


Step 2: Users may accept the default value for the maximum number of lags (20) or enter
a user-specified value. Once acceptable values are entered, click the Calculate button
to obtain the Nash-Sutcliffe coefficient values and correlation results [Figure 25].
[Figure 25 shows the panel after calculation: serial correlation coefficients for the
observed and simulated series and cross correlation coefficients at lags from -20 to
+20. Both serial correlations equal 1.0 at lag 0, while the cross correlation peaks at
0.999773 at a lag of -1, reflecting the one-day shift. The Nash-Sutcliffe coefficients
are 1.0 for serial observed vs. simulated and 0.949 for each serial correlation vs. the
cross correlation.]

Figure 25: Correlation Analysis Calculated
Figure 25 shows the calculated correlation values and their corresponding Nash-Sutcliffe
values. Note that a backward shift of one day in the simulated time series has resulted
in a slight decrease in the Nash-Sutcliffe values. A decrease in Nash-Sutcliffe between
the serial and cross correlations indicates the presence of sequence errors in the
pairwise comparisons.
Step 3: To plot the results of the correlation analysis, click the Plot button
[Figure 26]. A plot window will appear displaying the observed and simulated serial
correlations as well as the cross correlation [Figure 26].
[Figure 26 plots the observed serial, simulated serial, and cross correlation
coefficients against lags from -20 to +20.]

Figure 26: Correlation Analysis Plotted
Note that these correlation comparisons capture only sequence errors, such as shifts
between observed and simulated data. Shifts in the precipitation input data occur when
precipitation data are not available at-site and are transferred from distant weather
stations. In general, when the precipitation input is not aligned with the flow
hydrograph, observed and simulated peak flows are not aligned, and these misalignments
introduce a shift, which in turn results in both sequence and magnitude errors. A shift
between observed and simulated hydrographs can also be introduced by inaccurate
representation of channel characteristics and flow routing through the tributary and
mainstem channels of a watershed. For example, if Manning's n, channel slope, and the
storage capacities of the channel are not accurately represented, a sequence-related
shift can occur even when precipitation data are available at-site.
Step 4: Users can also view a rank comparison table of all dataset values by clicking
the Weighted Rank button shown in [Figure 6] to open the Weighted Rank Panel
[Figure 27]. By clicking the Plot button, users can view the weighted rank values in a
plot environment [Figure 28].
[Figure 27 shows the Weighted Rank Panel: paired observed and simulated rank columns
with a per-day weighted rank (WtR) value, and a "Percentage of ranks matched" table for
each segment of the duration curve:

Segment    Weighted Rank
Q0.1       0.971
Q0.5       0.941
Q1         0.398
Q2.5       0.874
Q5         0.841
Q10        0.761
Q20        0.7
Q30        0.681
Q40        0.687
Q50        0.682
Q60        0.706
Q70        0.728
Q80        0.729
Q90        0.811
Q95        0.883
Q99        0.917
Q99.99     0.972
AVERAGE    0.811]

Figure 27: Weighted Rank Panel
Figure 27 shows the observed and simulated rank comparisons resulting from a one-day
backward shift of the simulated column. The percentage of ranks matched in each
segment of the duration curve is also shown at the right of Figure 27. Note that the
number of data points in each segment influences the segment-specific calculated
weighted rank values.
[Figure 28 plots the weighted rank at each segment and the total weighted rank against
exceedance probability (P0.1 through P99).]

Figure 28: Weighted Rank Plot
Figure 28 shows the percentage of observed and simulated ranks that matched for each
segment of the duration curve. An average weighted rank of 0.81 (Figure 27) reflects the
sequence error caused by a one-day backward shift of the simulated time series. Note
that the weighted rank method is the method of choice for sequence-related error
evaluations: it was developed to capture sequence errors only and does not capture
magnitude-related errors.
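The weighted rank idea can be illustrated as a rank-agreement score: each day is ranked
within the observed and within the simulated series, and days are scored by how closely
the two ranks agree. This is a sketch of the concept only and does not reproduce MPESA's
exact weighting formula:

```python
import numpy as np

def rank_agreement(obs, sim):
    """Score each day by how close its rank in the observed series is
    to its rank in the simulated series: 1.0 means identical ranks,
    0.0 means maximally displaced. Returns the average over all days.
    Illustrative only; not MPESA's exact weighted-rank formula."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    n = obs.size
    r_obs = np.argsort(np.argsort(obs)) + 1   # 1-based ranks
    r_sim = np.argsort(np.argsort(sim)) + 1
    weights = 1.0 - np.abs(r_obs - r_sim) / (n - 1)
    return weights.mean()

# Identical sequences score 1.0; shifting one series by a day lowers
# the score because every day's rank is displaced slightly.
```

Because only ranks enter the score, a pure magnitude error (e.g., a uniform bias) that
preserves the ordering of the flows leaves the score unchanged, which is exactly why this
family of measures isolates sequence errors.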
Step 5: Users can calculate the Nash-Sutcliffe model efficiency and other model
performance metrics by clicking the Combined Magnitude and Sequence Measures button
shown in [Figure 6].
[Figure 29 shows the Combined Magnitude and Sequence Performance panel:

Model Evaluation Statistics                Value
Root Mean Square Error (RMSE)              42.03
Normalized RMSE                            0.66
Nash-Sutcliffe Efficiency (NSE) - Obs.     0.56
Nash-Sutcliffe Efficiency (NSE) - Sim.     0.56
Correlation Coefficient (R)                0.78
Coefficient of Determination (R^2)         0.61]

Figure 29: Combined Magnitude and Sequence Performance Panel
Figure 29 shows the RMSE, NRMSE, Nash-Sutcliffe efficiency, correlation coefficient,
and coefficient of determination. The Nash-Sutcliffe efficiency is the measure of choice
for combined magnitude and sequence evaluations. It is noteworthy that a one-day shift
of the simulated data, for otherwise identical datasets, has resulted in an NSE value of
0.56. The correlation coefficient and coefficient of determination capture only sequence
errors. The RMSE and the NRMSE are closely related to the Nash-Sutcliffe efficiency, and
both can capture magnitude and sequence errors. We recommend the use of NSE in place of
RMSE and NRMSE.
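The statistics in Figure 29 can be sketched as follows. When the NRMSE is normalized by
the standard deviation of the observed data, NSE = 1 - NRMSE², which is consistent with
the values in Figure 29 (1 - 0.66² ≈ 0.56). Other normalizations (by the mean or the
range) are also in common use, so treat this as one plausible convention:

```python
import numpy as np

def performance_metrics(obs, sim):
    """RMSE, normalized RMSE, NSE, and correlation-based measures:
    the combined magnitude-and-sequence statistics of Figure 29.
    NRMSE is taken here as RMSE / std(obs)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = obs - sim
    rmse = np.sqrt(np.mean(err ** 2))
    nrmse = rmse / obs.std()
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]
    return {"RMSE": rmse, "NRMSE": nrmse, "NSE": nse, "R": r, "R2": r ** 2}
```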
2.4 Baseline and Scenario Comparison Tool

The baseline and scenario comparison component does not support model calibration
because it does not calculate model performance measures. The Baseline and Scenario
Comparison Tool has four components for analyzing input datasets: Low Flow Analysis -
7Q10, Hydrological Indicators of Change - Flashiness, Environmental Flow Calculations,
and Indices of Flow Variability - Flow Pulses. Users must first upload their data into
the tool (see Section 2.2: Importing Data) before proceeding to this portion of the
tutorial.

2.4.1 Low Flow Analysis - 7Q10

Step 1: Once data have been properly imported into the tool (see Section 2.2 of the
tutorial), click on the BASELINE AND SCENARIO COMPARISON TOOL tab to display all
Baseline and Scenario Comparison options [Figure 30]. The goal of estimating low flow
indices is to quantify the degree to which flow values changed from baseline to scenario
and whether such a change violates environmental flow requirements.
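A 7Q10 estimate combines a 7-day moving average with a 10-year recurrence: the lowest
7-day average flow expected to occur, on average, once every 10 years. A purely empirical
sketch follows; MPESA may instead fit a distribution (such as log-Pearson Type III) to the
annual minima:

```python
import numpy as np

def seven_day_min(flows_by_year):
    """Minimum 7-day moving-average flow for each water year."""
    mins = []
    for q in flows_by_year:               # one array of daily flows per year
        q = np.asarray(q, float)
        roll = np.convolve(q, np.ones(7) / 7, mode="valid")
        mins.append(roll.min())
    return np.array(mins)

def empirical_7q10(flows_by_year):
    """Empirical 7Q10 sketch: the annual 7-day minimum flow that is
    exceeded in roughly 9 of 10 years, i.e. the 10th percentile of
    the annual 7-day minima."""
    return np.percentile(seven_day_min(flows_by_year), 10)
```

Comparing `empirical_7q10` for the baseline and the scenario series quantifies how much
the low-flow regime has changed.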
[Figure 30 shows the Baseline and Scenario Comparison Tool tab with four buttons: Low
Flow Analysis - 7Q10, Hydrological Indicators of Change - Flashiness, Environmental
Flow Calculations, and Indices of Flow Variability - Flow Pulses.]

Figure 30: Baseline and Scenario Comparison Tool

Step 2: Click the Low Flow Analysis - 7Q10 button [Figure 30] to bring up the Low Flow
Analysis - 7Q10 panel [Figure 31]. Users can then select any of the three radio buttons
within the panel and click the Calculate button to generate results according to a
specified flow-averaging period and return interval, an explicit flow value, or a flow
percentile.
[Figure 31 shows the Low Flow Analysis - 7Q10 panel: radio buttons for "7Q10 - Use only
Water Years" (with fields for the flow averaging period in days and the return period,
in years, on years with excursion), "Explicit flow value", and "Flow percentile", plus a
results table of flow (cfs), percentile (%), and number of exceedances for the baseline
and scenario.]

Figure 31: Low Flow Analysis - 7Q10 Panel
Step 3: Click on the Hydrological Indicators of Change - Flashiness button [Figure 30]
to view a table of metrics and index values that describe the relative flashiness of a
location based on the provided input [Figure 32].
[Figure 32 shows the Hydrologic Indicators of Change - Flashiness panel:

Hydrological Indicators of Change   Baseline   Scenario
Richards-Baker Index                0.366      0.367
TQmean                              29.691     29.722
Coefficient of variability          0.66       0.66]

Figure 32: Hydrologic Indicators of Change - Flashiness Panel
The hydrologic indicators of change assess the effect of urbanization on water quantity
and quality. They can also be used to assess the effect of climate change on water
quantity and quality.
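The flashiness metrics in Figure 32 can be sketched using the standard definitions of the
Richards-Baker index (total day-to-day flow change divided by total flow) and TQmean
(percentage of time flow exceeds its mean); the tool's exact implementations may differ:

```python
import numpy as np

def richards_baker_index(q):
    """Richards-Baker flashiness index: the sum of absolute day-to-day
    changes in flow divided by the total flow. Flashier (for example,
    urbanized) watersheds score higher."""
    q = np.asarray(q, float)
    return np.sum(np.abs(np.diff(q))) / np.sum(q)

def tqmean(q):
    """TQmean: percentage of time that flow exceeds its own mean.
    Lower values indicate a flashier flow regime."""
    q = np.asarray(q, float)
    return 100.0 * np.mean(q > q.mean())
```

Comparing the two columns of Figure 32, the scenario is only marginally flashier than the
baseline (0.367 vs. 0.366).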
Step 4: Hydrological indicators such as flow magnitude, rise and fall rates, and minimum
and maximum conditions, as well as their relative timing within the year, can be viewed
by clicking the Environmental Flow Calculations button shown in [Figure 30] and
navigating the group tabs (Monthly Means, Mean Annual Extreme Values, Average Julian Day
for Min. and Max., and Rise and Fall Rates) [Figure 33].

Note: Environmental flow calculations assess how flow alterations affect aquatic
organisms and their habitat.
[Figure 33 shows the Environmental Flow Calculations panel, Monthly Means tab:

Month      Baseline  Baseline CV  Scenario  Scenario CV
October    23.089    0.626        23.426    0.649
November   42.077    0.829        42.451    0.815
December   62.122    0.977        62.169    0.97
January    40.5      0.482        40.374    0.468
February   51.295    0.538        51.713    0.546
March      65.194    0.676        65.626    0.674
April      67.972    0.486        67.056    0.495
May        37.356    0.522        37.077    0.537
June       43.864    0.773        43.698    0.772
July       27.443    0.966        27.339    0.968
August     20.012    1.061        20.493    1.029
September  22.689    0.908        22.194    0.932]

Figure 33: Environmental Flow Calculations Panel
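The Monthly Means tab can be reproduced conceptually with a group-by on calendar month.
Illustrative code assuming daily data with a date index, not the tool's internal
implementation:

```python
import numpy as np
import pandas as pd

def monthly_stats(dates, flows):
    """Monthly mean flow and coefficient of variation (CV = std/mean),
    as tabulated in the Environmental Flow Calculations panel."""
    s = pd.Series(np.asarray(flows, float), index=pd.to_datetime(dates))
    g = s.groupby(s.index.month)
    # Population standard deviation (ddof=0); the tool's convention
    # for the CV denominator is an assumption here.
    return pd.DataFrame({"mean": g.mean(), "cv": g.std(ddof=0) / g.mean()})
```

Large baseline-to-scenario changes in a month's mean or CV point at the season where the
flow regime has been altered most.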


Step 5: Another option available to users is the hydrological change detection
component, which calculates the flow variability of the baseline and scenario datasets
and ranks them according to varying magnitude thresholds. Click the Indices of Flow
Variability - Flow Pulses button shown in [Figure 30] to generate the Indices of Flow
Variability - Flow Pulses panel [Figure 34]. Users also have the option to create a plot
of the hydrological change values by clicking the PLOT button [Figure 34].
[Figure 34 shows the Indices of Flow Variability - Flow Pulses panel: for the baseline
and the scenario, a table of thresholds (0.5 to 8 times the median flow) with the number
of pulses, total duration (hrs), and average duration of excursions above each
threshold, plus an optional plot of total duration against threshold.]

Figure 34: Indices of Flow Variability - Flow Pulses Panel with Optional Plot

Note: The indices of flow variability use median-based threshold values to estimate flow
alterations caused by urbanization and other anthropogenic disturbances.
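The flow-pulse indices can be sketched as counting excursions above thresholds set at
multiples of the median flow. This is an illustration of the concept, not MPESA's exact
algorithm:

```python
import numpy as np

def flow_pulses(q, multiple):
    """Count excursions ("pulses") above a threshold set as a multiple
    of the median flow, with their total and average duration in the
    series' time step."""
    q = np.asarray(q, float)
    above = q > multiple * np.median(q)
    # A pulse starts wherever `above` flips from False to True.
    starts = int(np.sum(above[1:] & ~above[:-1])) + int(above[0])
    total = int(above.sum())
    avg = total / starts if starts else 0.0
    return {"pulses": starts, "total_duration": total, "avg_duration": avg}
```

Tabulating `flow_pulses` for a range of multiples (0.5, 1, 2, ... times the median) for
both the baseline and the scenario series reproduces the structure of the Figure 34 table.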