EPA
United States
Environmental Protection
Agency
Environmental Research
Laboratory
Athens GA 30605
EPA-600/9-80-016
April 1980
Research and Development
Workshop on
Verification of
Water Quality
Models
-------
RESEARCH REPORTING SERIES
Research reports of the Office of Research and Development, U.S. Environmental
Protection Agency, have been grouped into nine series. These nine broad cate-
gories were established to facilitate further development and application of en-
vironmental technology. Elimination of traditional grouping was consciously
planned to foster technology transfer and a maximum interface in related fields.
The nine series are:
1. Environmental Health Effects Research
2. Environmental Protection Technology
3. Ecological Research
4. Environmental Monitoring
5. Socioeconomic Environmental Studies
6. Scientific and Technical Assessment Reports (STAR)
7. Interagency Energy-Environment Research and Development
8. "Special" Reports
9. Miscellaneous Reports
This document is available to the public through the National Technical Informa-
tion Service, Springfield, Virginia 22161.
-------
EPA-600/9-80-016
April 1980
WORKSHOP ON VERIFICATION
OF
WATER QUALITY MODELS
Co-Chairmen
Robert V. Thomann
Manhattan College
Bronx, New York 10477
and
Thomas O. Barnwell, Jr.
Technology Development and Applications Branch
Environmental Research Laboratory
Athens, Georgia 30605
Contract No. 68-01-3872
Workshop Coordinators
John A. Mueller and Elin Vinci
Hydroscience, Inc.
363 Old Hook Road
Westwood, New Jersey 07675
US EPA
ENVIRONMENTAL RESEARCH LABORATORY
OFFICE OF RESEARCH AND DEVELOPMENT
U.S. ENVIRONMENTAL PROTECTION AGENCY
ATHENS, GEORGIA 30605
-------
DISCLAIMER
This report has been reviewed by the Office of Research
and Development, Environmental Research Laboratory, U.S. Environ-
mental Protection Agency, Athens, Georgia, and approved for
publication. Mention of trade names or commercial products does
not constitute endorsement or recommendation for use.
ii
-------
FOREWORD
As environmental controls become more costly to implement
and the penalties of judgment errors become more severe, envi-
ronmental quality management requires more efficient analytical
tools based on greater knowledge of the environmental phenomena
to be managed. As part of this Laboratory's research on the
occurrence, movement, transformation, impact, and control of
environmental contaminants, the Technology Development and Ap-
plications Branch develops management or engineering tools to
help pollution control officials achieve water quality goals
through watershed management.
Mathematical models are increasingly used in providing a
technical basis for water quality management decisions and the
formulation of environmental policies at all levels of govern-
ment. Because of this increasing use and the increased interest
on the part of scientists and engineers in modeling techniques,
the U.S. EPA sponsored a workshop in which the latest informa-
tion on the development and application of models to environ-
mental decision-making was presented. This report presents the
results of the workshop, which brought together a representative
cross-section of water quality modeling experts from government,
private organizations, and academia.
David W. Duttweiler
Director
Environmental Research Laboratory
Athens, Georgia
iii
-------
ABSTRACT
The U.S. Environmental Protection Agency sponsored a
"National Workshop on the Verification of Water Quality Models"
to evaluate the state-of-the-art of water quality modeling and
make specific recommendations for the direction of future model-
ing efforts. Participants represented a broad cross-section of
practitioners of water quality modeling in sections of govern-
ment, academia, industry and private practice. The issues
discussed during this workshop, which was held in West Point,
N.Y., on 7-9 March 1979, were models in decision making, model
data bases, modeling framework and software validation, model
parameter estimation, model verification and models as projection
tools. These issues were discussed by workshop participants who
were organized into small groups, each of which discussed the
state of the art of a specific branch of water quality modeling.
Groups were divided into areas of wasteload generation, transport,
salinity-TDS, dissolved oxygen-temperature, bacteria-virus,
eutrophication and hazardous substances.
Workshop findings were summarized by committee reporters
and are presented in state-of-the-art reports. Workshop parti-
cipants also presented basic issue reports and technical support
papers, all of which are included in this document.
This report was submitted in partial fulfillment of Contract
No. 68-01-3872 by Hydroscience, Inc., under the sponsorship of
the U.S. Environmental Protection Agency. This report covers
the period from September 1979 to December 1979, and work was
completed as of December 1979.
IV
-------
CONTENTS

                                                             Page

FOREWORD                                                      iii
ABSTRACT                                                       iv
ACKNOWLEDGEMENTS                                               ix
PARTICIPANTS                                                    x
SUMMARY                                                         1
RECOMMENDATIONS                                                 7

BASIC ISSUES:

Role of Models in Decision Making -                             8
     J.T. Marlar and J.S. Kutzman

STORET: A Data Base for Models -                               14
     P.L. Taylor

Modeling Framework and Software Validation -                   27
     J.P. Lawler

Philosophy Underlying Parameter Estimation for                 39
Water Quality Models -
     C.W. Chen and S.A. Gherini

Measures of Verification -
     R.V. Thomann

Use of Models as Projection Tools -
     R.D. Shubinski
-------
CONTENTS
(Continued)

                                                             Page

COMMITTEE REPORTS:                                             68

State-of-the-Art Report and Recommendations of the

     Wasteload Generation Committee                            71

     Transport Systems Committee                               78

     Salinity/TDS Committee

     Dissolved Oxygen/Temperature Committee

     Bacteria/Virus Committee

     Eutrophication Committee

     Hazardous Substances Committee

TECHNICAL SUPPORT PAPERS:

How a Program Manager Uses Water Quality Models -
     L.A. Beck

Water Quality Models for Bacteria and Viruses -               126
     R.P. Canale

Verifying a Water Quality Model -                             134
     T. Chi and H.A. Thomas

Transport Models -                                            142
     J.D. Ditmars

Recommendations to Improve the Use of Models in
Decision Making -
     A.S. Donigian

Salinity Models Applied in the Arid West -                    151
     W.J. Grenney

Transport on the Continental Shelf in the New
York Bight -                                                  158
     G. Han

vi
-------
CONTENTS
(Continued)

                                                             Page

TECHNICAL SUPPORT PAPERS (Continued)

The Role of Waste Inflows and Landsat Imagery
in Managing Lake Quality -
     J.M. Higgins

Urban Wasteload Generation by Multiple Regression
Analysis of Nationwide Urban Runoff Data -
     W.C. Huber

USGS Data Collection Programs Related to Water
Quality Modeling Needs -                                      175
     M.E. Jennings

Separation of Time-Varying Parameters in Stream
Water Quality Modeling -
     C.C.K. Liu

Evaluation of Hazardous Substances Transport
Modeling in Surface Waters -                                  193
     Y. Onishi

The Use and Verification of Hydrodynamic Models in
Water Quality Models -                                        200
     J.F. Paul

Toxic Substance Modeling Research at the Large
Lakes Research Station -                                      202
     W.L. Richardson

The Need for Innovative Verification of Eutrophi-
cation Models -                                               214
     D. Scavia

Some Thoughts for Committee Briefing, Water Quality
and Hazardous Substances -                                    226
     T.J. Tofflemire

vii
-------
CONTENTS
(Continued)

                                                             Page

TECHNICAL SUPPORT PAPERS (Continued)

Workshop on Verification of Water Quality Models,
Discussion Paper, Wasteload Generation -
     B. Sayers

D.O./Temp. Modeling -
     C.J. Velz

Salinity/TDS: An Appraisal of Present Practices
and Capabilities in Modeling -
     G.H. Ward

viii
-------
ACKNOWLEDGEMENTS
Acknowledgement is made of the valuable assistance given
by William M. Leo and John P. St. John of Hydroscience, Inc. in
the preparation of this report; Irene Hurley and the drafting
department; and Kathleen F. Whartenby for her typing of the
report.
We also thank Edward N. Rehkopf and Maureen O'Dowd of the
Hotel Thayer and Colonel Peter F. Lagasse and Captain John K.
Robertson of West Point for their excellent cooperation in the
organization of the workshop.
We appreciate the presence of Dr. Donald J. O'Connor of
Manhattan College and his talk on "Past, Present and Future of
Water Quality Modeling" and Mr. Robert Horn, Chief, Monitoring
Branch of the USEPA in Washington and his talk on the "National
View of Future Water Quality Issues."
IX
-------
LIST OF PARTICIPANTS
Mr. David Alexander
Environmental Activities Staff
General Motors Corporation
General Motors Tech. Center
Warren, Michigan 48090
(313) 575-8605
Mr. Robert Ambrose
U.S. Environmental Protection
Agency
Environmental Research Lab.
College Station Road
Athens, Georgia 30605
(404) 546-3546
Mr. Douglas Amman
Storm and Combined Sewer
Section
Office of R & D
U.S. Environmental Protection
Agency
Edison, New Jersey 08817
(201) 321-6680
Mr. Thomas O. Barnwell, Jr.
U.S. Environmental Protection
Agency
Environmental Research Lab.
College Station Road
Athens, Georgia 30605
(404) 546-3175
Mr. Louis A. Beck
Director
San Joaquin Valley Interagency
Drainage Program
1490 West Shaw Avenue
Suite F
Fresno, California 93711
(209) 488-5681
Mr. Michael A. Bellanca
Deputy Executive Secretary
Virginia State Water Control
Board
P.O. Box 11143
Richmond, Virginia 23230
(804) 257-6385
Dr. Richard J. Callaway
Res. Physical Oceanographer
U.S. Environmental Protection
Agency
Corvallis Environmental
Research Lab.
200 Southwest 35th Street
Corvallis, Oregon 97330
(503) 757-4703
Dr. Raymond Canale
Civil Engineering Department
University of Michigan
Ann Arbor, Michigan 48104
(313) 763-1463
Dr. Carl Chen
Tetra-Tech, Inc.
3700 Mount Diablo Boulevard
Lafayette, California 94549
(415) 283-3771
Dr. Tze-Wen Chi
Analyst
Meta Systems, Inc.
10 Holworthy St.
Cambridge, Mass. 02138
(617) 868-8660
x
-------
Dr. Tudor T. Davies
U.S. Environmental Protection
Agency
Environmental Research Lab.
Sabine Island
Gulf Breeze, Florida 32561
(904) 932-5311
Major Joe M. Dietzel, Jr.
Science Research Laboratory
U.S. Military Academy
West Point, New York 10996
(914) 938-3429
Dr. John D. Ditmars
Manager, Water Resources
Section
Energy and Environmental
Systems Division
Argonne National Laboratory
9700 South Cass Avenue
Argonne, Illinois 60439
(312) 972-3784
Dr. Dominic M. DiToro
Hydroscience, Inc.
411 Old Hook Road
Emerson, New Jersey 07630
(201) 261-3300
Mr. Anthony S. Donigian, Jr.
Anderson-Nichols
2741 Bayshore Road
Suite 610
Palo Alto, California 94304
(415) 493-1864
Mr. Eugene Driscoll
101 Manito Avenue
Oakland, New Jersey 07436
(201) 337-2217
Dr. C. S. Fang, Head
Dept. Physical Hydraulics and
Oceanography
Virginia Institute of Marine
Science
Gloucester Point, Va. 23062
(804) 642-2111
Mr. Dennis Ford
U.S. Army Corps of Engineers
Waterways Experiment Station
P.O. Box 631
Vicksburg, Mississippi 39180
(601) 636-3111 Ex. 3887
Dr. G. Wolfgang Fuhs
N.Y. State Dept. of Health
Division of Laboratories &
Research
Empire State Plaza
Albany, New York 12201
(518) 474-4150
Mr. Thomas W. Gallagher
Hydroscience, Inc.
363 Old Hook Road
Westwood, New Jersey 07675
(201) 666-2600
Capt. Richard C. Graham
Science Research Laboratory
U.S. Military Academy
West Point, New York 10996
(914) 938-2624
Mr. James M. Greenfield
Environmental Engineer
Georgia Environmental
Protection Division
148 International Boulevard
Suite 800
Atlanta, Georgia 30303
(404) 656-4988
Dr. William J. Grenney
U.S. Dept. of the Interior
Fish & Wildlife Service
Office of Biological Services
2625 Redwing Road
Ft. Collins, Colorado 80526
(303) 223-4275
XI
-------
Dr. Gregory Han
NOAA
Atlantic Oceanographic and
Meteorologic Laboratories
15 Rickenbacker Causeway
Virginia Key
Miami, Florida 33149
(305) 361-3361 Ex. 326
Mr. John Harris
Chief, Hydrodynamic/Systems
Analysis Unit
California Water Resources
Control Board
2125 19th Street
Sacramento, California 95801
(916) 322-9868
Capt. John E. Hesson
Science Research Laboratory
U.S. Military Academy
West Point, New York 10996
(914) 938-3429 Ex. 2624
Mr. John M. Higgins
Tennessee Valley Authority
Room 248, 401 Building
Chattanooga, Tennessee 37401
(615) 755-3167
Mr. Robert Horn
Chief, Monitoring Branch
U.S. Environmental Protection
Agency (WH-553)
401 M Street, Southwest
Washington, D.C. 20461
(202) 426-7774
Dr. Wayne Huber
Prof, of Environmental
Engineering Sciences
Department of Environmental
Engineering Sciences
A.P. Black Hall
University of Florida
Gainesville, Florida 32611
(904) 392-0846
Mr. Marshall E. Jennings
U.S. Geological Survey
National Space Technology
Facility
NSTL Station
Route 3, Box 40A
Picayune, Mississippi 39466
(601) 688-2211
Lt. Col. Peter F. Lagasse
Asst. Dean for Academic
Research
U.S. Military Academy
West Point, New York 10996
(914) 938-2624/3429
Dr. John P. Lawler
Lawler, Matusky & Skelly
Engineers
One Blue Hill Plaza
12th Floor
Pearl River, New York 10965
(914) 735-8300
Dr. Clark C. K. Liu
Senior Sanitary Engineer
Survey & Analysis Section
N.Y.S. Dept. of Env. Cons.
Room 312
50 Wolf Road
Albany, New York 12233
(518) 457-7363
Mr. John L. Mancini
320 Brookmere Court
Ridgewood, New Jersey 07450
(201) 652-8461
Mr. John T. Marlar
Chief, Technical Support
Branch
U.S. Environmental Protection
Agency
345 Courtland Street, N.E.
Atlanta, Georgia 30308
(404) 881-3012
XII
-------
Dr. John A. Mueller
Hydroscience, Inc.
363 Old Hook Road
Westwood, New Jersey 07675
(201) 666-2600
Dr. Alan I. Mytelka
Director and Assistant Chief
Engineer
Interstate Sanitation Comm.
10 Columbus Circle
New York, New York 10019
(212) 582-0380
Dr. Tavit Najarian
Chesapeake Bay Institute
Johns Hopkins University
Charles & 34 Street
Baltimore, Maryland 21218
(301) 338-8236
Mr. W. Brock Neely
Dow Chemical USA
Environmental Science Research
1702 Building
Midland, Michigan 48640
(517) 636-0405
Mr. Austin Nelson
Hydroscience, Inc.
2855 Mitchell Drive
Walnut Creek, California 94598
(415) 938-0233
Dr. Donald J. O'Connor
Professor, Environmental
Engineering and Science
Program
Manhattan College
Bronx, New York 10463
(212) 548-1400
Dr. Yasuo Onishi
Staff Engineer
Hydrologic Systems Section
Water & Land Resources Dept.
Battelle-Pacific Northwest
Laboratories
P.O. Box 999
Richland, Washington 99352
(509) 946-2425
Dr. John F. Paul
U.S. Environmental Protection
Agency
Large Lakes Research Station
9311 Groh Road
Grosse Ile, Michigan 48138
(313) 226-7811
Dr. Ronald E. Rathbun
U.S. Geological Survey
NASA-NSTL, Building 2101
NSTL Station, Miss. 39529
(601) 688-3350
Mr. William L. Richardson
Env. Scientist
U.S. Environmental Protection
Agency
Large Lakes Research Station
9311 Groh Road
Grosse Ile, Michigan 48138
(313) 226-7811
Capt. John K. Robertson
Associate Professor
Science Research Laboratory
U.S. Military Academy
West Point, New York 10996
(914) 938-3429
Dr. Peter Robertson
Field Operations Division
Maryland Water Resource
Administration
416 Chinquapin Round Road
Annapolis, Maryland 21403
(301) 269-3677
Mr. John P. St. John
Hydroscience, Inc.
363 Old Hook Road
Westwood, New Jersey 07675
(201) 666-2600
Mr. Donald Scavia
Great Lakes Environmental
Research Laboratory
2300 Washtenaw Avenue
Ann Arbor, Michigan 48104
(313) 668-2287 Ex. 2280
xiii
-------
Dr. Robert P. Shubinski, V.P.
Water Resources Engineers,
Inc.
8001 Forbes Place, Suite 312
Springfield, Virginia 22151
(703) 321-9393
Mr. Daniel S. Szumski
Managing Engineer
Hydroscience, Inc.
2855 Mitchell Drive
Walnut Creek, Ca. 94598
(415) 938-0233
Mr. Phillip Taylor
U.S. Environmental Protection
Agency
Monitoring and Data Support
Division
Office of Water Planning and
Standards (WH 553)
401 M Street, S.W.
Washington, D.C. 20460
(202) 426-7760
Mr. Michael L. Terstriep, Eng.
Illinois State Water Survey
Hydrology Section
P.O. Box 232
Urbana, Illinois 61801
(217) 333-4959
Dr. M. Llewellyn Thatcher
Assoc. Research Professor
Polytechnic Institute of N.Y.
333 Jay Street
Brooklyn, New York 11201
(212) 643-8958
Dr. Robert V. Thomann
Professor, Environmental
Engineering and Science
Program
Manhattan College
Bronx, New York 10463
(212) 548-1400
Dr. Kent W. Thornton
Ecologist
U.S. Army Corps of Engineers
Waterways Experiment Station
P.O. Box 631
Vicksburg, Mississippi 39180
(601) 636-3111 Ex. 3713
Mr. Arthur C. Tingle
Meteorologist, Atmospheric
Science
Brookhaven National
Laboratory
Upton, New York 11973
(516) 345-2271
Dr. James Tofflemire
N.Y. State Dept. of Env.
Conservation
Room 519
50 Wolf Road
Albany, New York 12233
(518) 457-7575
Mr. Richard Tortoriello
Delaware River Basin Comm.
P.O. Box 7360
West Trenton, N.J. 08628
(609) 883-9500
Mr. Clarence Velz
Professor Emeritus Univ. of
Michigan
P.O. Box 495
Longboat Key, Florida 33548
(813) 383-1797
Ms. Elin Vinci
Hydroscience, Inc.
411 Old Hook Road
Emerson, New Jersey 07630
(201) 261-3300
Dr. George E. Ward
Espey, Huston & Assoc., Inc.
3010 South Lamar Blvd.
Austin, Texas 78704
(512) 444-3151
xiv
-------
Mr. R. G. Willey
The Hydrologic Engineering
Center
U.S. Army Corps of Engineers
609 Second Street
Davis, California 95616
(916) 440-2329
Mr. John Yearsley
U.S. Environmental Protection
Agency
Region X
1200 Sixth Avenue
Seattle, Washington 98101
(206) 442-1296
Dr. P. Jonathan Young
Hydroscience, Inc.
611 Ryan Plaza Drive, Suite 230
Arlington, Texas 76011
(817) 461-8851
Mr. G. Kenneth Young
GKY & Associates, Inc.
4900 Leesburg Pike
Suite 350
Alexandria, Virginia 22302
(713) 578-1625
xv
-------
SUMMARY*
Background
In the 50 years since the classical work of Streeter and
Phelps, the use of mathematical models of water quality has
grown extensively. In recent years, both advanced computer
technology and increased USEPA support have combined to greatly
increase the numbers of scientists and engineers using modeling
techniques. At the present time, such techniques contribute to
wastewater management decisions and the formulation of policy at
local, state, regional and national levels.
Since their introduction, modeling techniques have grown in
sophistication, complexity and in general use. A vast array of
software is available for calculating input wastewater loads as
well as water quality impacts. Mathematical models vary in
their ability to simulate water quality variables, and in their
levels of spatial, temporal and kinetic detail.
Because of the current use of mathematical modeling in pro-
viding a technical basis for many important decisions, the
potential contributions in new decision areas, and because in-
creasing numbers of people are using or applying models, the
U.S. Environmental Protection Agency, Environmental Research
Laboratory, Athens, Ga., sponsored a "National Workshop on the
Verification of Water Quality Models."
Purposes
The workshop was organized to:
(1) examine current general capabilities and limitations
of mathematical models
(2) identify methods of verifying model accuracy in
specific situations
*Prepared by Hydroscience, Inc.
-------
(3) assess the reliability of decisions made on the basis
of modeling results
(4) determine the needs and future directions that model-
ing efforts can most productively follow.
In addressing these concerns, the workshop brought together
a representative cross section of participants from government,
private and academic sectors who are experienced in the develop-
ment and application of water quality models. The overall
purpose of the workshop was to elicit from this cross section of
model practitioners expressions of the present state of the art
of model capabilities, credibility and utility and to suggest
areas for improving model performance, verification and ability
to respond to cogent water quality problems.
Format
A Workshop coordinating committee composed of the Workshop
Co-chairmen (Thomas Barnwell, USEPA Environmental Research
Laboratory, Athens, Georgia and Robert V. Thomann, Manhattan
College), Workshop Coordinators and John P. St. John of
Hydroscience, Inc., prepared invitations to selected individuals,
formulated the Workshop agenda and program and prepared the
Summary and Recommendations. The invited participants met on
March 7-9, 1979 at the Hotel Thayer on the grounds of the U.S.
Military Academy, West Point, N.Y. The workshop was coordinated
by Hydroscience, Inc., Westwood, New Jersey.
On the first day of the workshop, all participants heard
speakers address the basic issues of:
(1) Role of Models in Decision Making
(2) Data Base
(3) Time and Space Scales; Kinetic Detail; Cost
Effectiveness
(4) Parameter Estimation
(5) Measures of Verification
(6) Use of Models as Projection Tools
On the second and third days of the workshop, invited
participants were assigned to committees to discuss the above
issues as they relate to the use of models in the topical areas
of:
(1) Wasteload Generation
(2) Transport
(3) Salinity/Total Dissolved Solids
(4) Dissolved Oxygen/Temperature
(5) Bacteria/Virus
(6) Eutrophication
(7) Hazardous Substances
-------
All Committees then reported on the results of this dis-
cussion in two plenary sessions: one on the state-of-the-art and
a second on recommendations.
In general, the draft Committee reports submitted by the
Committee Chairmen at the close of the Workshop or shortly
thereafter were compiled by Hydroscience in a standard format and
sent out for review by Committee members. Subsequent comments
were incorporated by Hydroscience and a final report resubmitted
to the Committee Chairmen for approval.
This publication therefore includes the papers presented by
the authors on the six Basic Issues; summaries of Topical Com-
mittees discussions; and nineteen Technical Support Papers.
Present State of the Art
The following represents a summary of the principal conclu-
sions reached by the Committees on Topical Areas in addressing
the six Basic Issues of water quality verification.
Role of Models in Decision Making
A general consensus of the workshop participants was that
mathematical modeling results of physical/chemical/biological
processes, along with other factors such as legal requirements,
public opinion and economic considerations, are used by decision
makers in developing water quality plans. Although most admini-
strators are usually well informed about decision making
factors, it was recognized that modeling results can also be
misused.
For example, some decision makers do not accept formal
models as tools to be used in the decision making process but
rather accept modeling results without question, especially when
the results agree with the administrators' pre-conceived notions.
In this light, the workshop noted that one of the modelers'
functions is to keep the administrators informed on the
strengths, weaknesses and limitations of the modeling results so
that they can better understand the usefulness and reliability
of model results. A strong responsibility therefore rests on
the modeler to carefully explain and document the inherent
assumptions so as not to "oversell" a model that promises more
than it delivers.
In addition, workshop participants believed that transport,
salinity/total dissolved solids (TDS), dissolved oxygen (D.O.)/
temperature (Temp.), bacteria and eutrophication models are
technically sound and when properly applied and verified, are
capable of supporting water quality management decisions. At
this time, members of the Wasteload Generation Committee
concurred that the Non-Point Source (NPS) wasteload generation
models are capable of supporting planning level and guidance decisions but
-------
members questioned the accuracy of the models in their ability
to test best management practices (BMPs) or alternative control
effectiveness.
Finally, the members of the Hazardous Substances Committee
indicated that hazardous substance models are not widely used
in developing management plans.
In summary, committee members agreed that models are useful
and necessary tools to be used as part of the decision making
process by administrators. Members also encouraged the ongoing
development of all model types especially the new technology
wasteload generation and hazardous substance models.
Data Base
In order to perform a defensible water quality modeling
study, Committee members acknowledged the need for extensive
data bases. EPA presently sponsors and controls the nationwide
computerized data handling and storage system known as STORET.
All topical committees agreed that as it presently stands
STORET is often inadequate for modeling purposes. Workshop
members believe that STORET and other generalized data bases
contain large quantities of water quality monitoring data.
These data are not always useful to modelers because monitoring
data are generally not collected synoptically nor are the data
specific enough for the individual modeling studies. In
addition, there is a significant lack of spatial coverage of
samples for individual water systems.
The general consensus was that good water quality data
bases containing synoptically collected water quality data,
input data, parameter rate data and detailed spatial coverage
are necessary. Because these data are expensive to collect and
specific to individual modeling studies, generalized data bases
do not contain these data and are, therefore, not widely util-
ized for modeling studies.
Modeling Framework and Software Validation
Committee members noted that models for each of the
topical areas can vary in complexity from simple spatial and
temporal scales with simple mathematical solution techniques
and basic kinetics to large complex models with long solution
times and detailed kinetics. Members also concluded that model
software is not always checked for accuracy and/or conservation
of mass, resulting in models that may not be numerically or
scientifically accurate.
There was general concurrence that the complexity of models
and detail of model subroutines are best analyzed at the start
-------
of an investigation. The resulting model detail also depends
on project budget, complexity of the physical system, problems
and questions to be answered, the timing of the project and the
available technology. Members also agreed that, in general, the
simplest models and kinetic subroutines consistent with the
problem context are the best approaches both in understanding
model output and in conveying model results. Complexity in
models should only be introduced where necessary.
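
As a simple illustration of the kind of automated software check
the committees had in mind, the sketch below (a minimal Python
example with entirely hypothetical numbers and illustrative
function names) audits a single completely mixed model segment
for conservation of mass over one time step. It is offered only
as an example of the idea, not as part of any particular model
code.

    # Minimal sketch of a mass-conservation audit for one completely
    # mixed model segment over a time step (all values hypothetical).

    def mass_balance_error(mass_start, mass_end, load_in, load_out, decay):
        """Return the relative closure error of a segment mass balance.

        mass_start, mass_end : stored mass at start/end of the step (kg)
        load_in, load_out    : mass entering/leaving during the step (kg)
        decay                : mass lost to kinetic reactions (kg)
        """
        expected_change = load_in - load_out - decay
        actual_change = mass_end - mass_start
        return (abs(actual_change - expected_change)
                / max(abs(expected_change), 1e-12))

    # Example with made-up numbers; a closure error above a small
    # tolerance (say one percent) would flag a coding or numerical
    # problem.
    error = mass_balance_error(mass_start=100.0, mass_end=118.0,
                               load_in=40.0, load_out=15.0, decay=7.0)
    print("relative closure error = %.3f" % error)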
Parameter Estimation
At present kinetic model parameters are estimated from
special data collection programs, laboratory studies, literature
reviews and calibration procedures. Strict reliance on
literature kinetic rates was recognized as a poor modeling
practice unless the system is fairly insensitive to changes in
the kinetic rates. Better modeling practices rely on special
site specific field studies and model calibrations to estimate
parameter kinetic rates, using the literature values as guide-
lines. Sensitivity Analysis is recommended as a valuable
adjunct to the parameter estimation process.
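
The sketch below illustrates the kind of sensitivity analysis the
committees recommended, using a simple first-order decay expression
and hypothetical rates chosen only for illustration; it is not
drawn from any of the models discussed at the workshop, and the
names used are illustrative.

    # Minimal sketch of a one-parameter sensitivity analysis: perturb a
    # kinetic rate and report the change in a predicted concentration.
    # The "model" here is simple first-order decay, used only to
    # illustrate the procedure.

    import math

    def downstream_conc(c0, k, travel_time):
        """Concentration after first-order decay at rate k (1/day)
        over travel_time (days)."""
        return c0 * math.exp(-k * travel_time)

    base_k = 0.30                    # assumed base decay rate, 1/day
    base = downstream_conc(10.0, base_k, 2.0)

    for change in (-0.20, 0.20):     # +/- 20 percent rate perturbation
        perturbed = downstream_conc(10.0, base_k * (1.0 + change), 2.0)
        print("k %+3.0f%%: concentration changes by %+.1f%%"
              % (100 * change, 100 * (perturbed - base) / base))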
Measures of Verification
Workshop members generally agreed that models are first
calibrated to define system kinetic parameters and then verified
to provide a measure of confidence in the model. For adequate
verification, the computed model results are compared to a set
of water quality data other than the calibration data set. In
the second comparison, system kinetics remain constant except
for changes which are functions of temperature, salinity, flow
or other system parameters.
Members also agreed that statistical measures of verifica-
tion are available but are not widely used. Present verifica-
tions are based on graphical comparisons between computed model
results and observed data, with the engineer's judgment serving
as the qualitative measure of verification. This method of
verification, although qualitative in nature, remains a solid
engineering practice. However, quantitative verification tech-
niques were recognized by committee members as useful tools for
future studies. There was general agreement that no one
statistical technique for verification should be promulgated.
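
By way of illustration, the sketch below computes two of the
simpler quantitative measures that could supplement a graphical
comparison, a root mean square error and a median relative error,
for a hypothetical pair of computed and observed data sets. It is
an example of the approach only, not a recommended or promulgated
technique.

    # Minimal sketch of two quantitative verification measures comparing
    # computed model results with an independent set of observed data.
    # All values are hypothetical.

    import math
    import statistics

    computed = [8.1, 7.4, 6.2, 5.8, 6.5]   # model output, e.g. D.O., mg/l
    observed = [7.8, 7.0, 6.6, 5.5, 6.9]   # data not used in calibration

    rmse = math.sqrt(sum((c - o) ** 2 for c, o in zip(computed, observed))
                     / len(observed))
    median_rel_err = statistics.median(abs(c - o) / o
                                       for c, o in zip(computed, observed))

    print("RMSE                  = %.2f mg/l" % rmse)
    print("median relative error = %.1f%%" % (100 * median_rel_err))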
Use of Models as Projection Tools
The purpose of water quality models is to aid in under-
standing the cause and effect relationships between wastewater
inputs and water quality impacts, so that treatment alternatives
can be evaluated as they affect future water quality. Workshop
members concurred that except for wasteload generation models,
all other models, when properly verified, adequately predict
incremental changes in water quality in the evaluation of
-------
alternatives. Members also agreed that almost no post audit
surveys have been conducted to date to check the results of
water quality projections. Therefore, the accuracy of model
projections can only be related to the quality of the model
verification.
-------
RECOMMENDATIONS*
The following are summaries of recommendations made by the
workshop members. Detailed recommendations
for future development efforts and modeling practices follow
each of the committee reports.
1. The workshop members encourage better coordination and
communication between the modeler and the decision
maker, in order to increase the understanding and
credibility of models.
2. The workshop members recommend that modelers should be
involved in data collection efforts and planning
efforts to improve the data available for modeling
studies. A comprehensive high quality data base for
purposes of model testing, verification, and improve-
ment should be established. This should include high
quality synoptic data bases obtained from selected
water bodies including "impoundments", selected river
and estuarine systems and controlled field experiments.
3. The workshop members encourage continued and expanded
software code review and internal automated checks for
the purpose of proper computer program validation.
4. The workshop members encourage expansion of laboratory
studies and special field studies to further develop
parameter kinetics. This is recommended for all areas
of modeling especially for hazardous substance model-
ing.
5. The members encourage the use of statistical verifica-
tion techniques. However, no single technique is rec-
ommended nor should statistical techniques be used to
supersede engineering judgment. Sensitivity analysis
is recommended as a key to verification and parameter
estimation.
6. The Eutrophication and DO/Temp Committees recommend
that resources be allocated for post audit data collec-
tion programs and subsequent model studies to verify
previous model projections.
*Prepared by Hydroscience, Inc.
-------
ROLE OF MODELS IN DECISION MAKING
by
John T. Marlar(1) and James S. Kutzman(2)
There is an increasing awareness of the role of mathemati-
cal modeling of water quality impacts in arriving at informed
decisions regarding wastewater management. This increasing
awareness covers a wide spectrum which ranges from complete
skepticism to total faith. The purpose of this paper is to
examine the use of outputs from such models and how such outputs
are used in conjunction with other factors to arrive at defensi-
ble decisions. In order to perform this examination, a recent
case involving mathematical modeling and how the results were
used will be presented. This case illustrates how model results
can be used in arriving at decisions and also offers several
germane points that are essential if model outputs are to be
useful in reaching decisions.
The South River is a small river whose headwaters originate
near the City of Atlanta. The river is tributary to Lake
Jackson and is approximately 60 miles in length. The river re-
ceives treated wastewater from several major municipal treat-
ment facilities. In 1972, the following effluent limitations
were established for wastewater dischargers to the South River:
BOD5 = 10 mg/l
NH3-N = 2 mg/l
D.O. = 6 mg/l
Phosphorus = 1 mg/l
(1) Chief, Technical Support Branch, Water Division, USEPA, Region
    IV; presented paper.
(2) Chief, Applied Technology Section, Technical Support Branch,
    USEPA, Region IV.
-------
The oxygen demanding constituent limitations were based on
a mathematical model of the river and the phosphorus limit was
based on a water quality analysis of the nutrient components in
Lake Jackson. Using these effluent limitations as a basis,
facility plans were prepared for the areas and the actual design
work was completed. The City of Atlanta determined it would be
cost-effective to remove its discharges from the river and
transfer them to an adjacent basin. DeKalb County decided to
upgrade existing facilities to meet the effluent limitations.
The planning and design work took approximately 5 years to
complete with the resulting total construction cost estimated at
$150 million. In March 1978, after completion of the design of
advanced waste treatment facilities, DeKalb County requested
that the effluent limitations be relaxed back to secondary
treatment. The request was made through their congressional
representative. As a result of this request, EPA Headquarters
in Washington conducted an independent investigation into the
technical basis supporting the effluent limitations. In ad-
dition to this, DeKalb County hired its own consultant to like-
wise investigate and examine the basis. The review by EPA Head-
quarters indicated there was a firm technical basis for the
effluent limitations and that the limitations were supportable.
The consultant hired by DeKalb County concluded there was ab-
solutely no basis for the effluent limitations. This conclusion
was refuted" by the regional office of EPA in Atlanta. Faced
with an apparent total disagreement between "experts", the
county hired yet another consultant to review the first consul-
tant's work and the EPA review of the project. This last con-
sultant concluded that there probably was a sufficient basis for
the effluent limitations or at least that the county would not
be able to easily prove their case. As a result of these
reviews, DeKalb County is presently proceeding with the con-
struction of most of the facilities designed to produce the
original effluent limitations established in 1972. Much of the
controversy appeared in the media and generated a great deal of
local interest. This example of the development and use of the
effluent limitations on the South River is abbreviated with
many of the details omitted from this paper. Copies of the
review reports mentioned above are available for your further
consideration, i.e., references (1), (2), (3) and (4).
The example used here illustrates several points germane
to the use of models in making wastewater management decisions.
These points should be kept in mind when future decisions are
made based on model outputs.
1. The basis used in establishing the effluent limitations was
technically strong and defensible. The effluent limita-
tions for BOD5, NH3-N and dissolved oxygen were based on a
dissolved oxygen model of the river. The input parameters
to the model were based on intensive stream surveys, supp-
lemented by other data collection efforts. The reaeration
rate of the river was measured in the field using the gas
-------
tracer technique developed by Dr. E. Tsivoglou. The
phosphorus limitation was based on a study of the nutrient
budget or input-output approach of nitrogen and phosphorus
loadings to Lake Jackson. This approach has been compared
to more sophisticated "eco modeling" approaches and was
found to yield similar results, reference (5). Further
studies done in 1974 and 1977 reexamined the phosphorus
limitation in light of new techniques and a changing
situation regarding the wastewater discharges to the lake.
The conclusion of these studies was that the lake would be
phosphorus limited and that by implementing the 1 mg/l
limitation on phosphorus, the eutrophication problems in
the lake would be arrested.
The effluent limitations were not based on an arbitrary
policy or decision but rather were based on a firm technical
foundation. The model outputs were based on modeling
efforts that were logical, reasonably well documented, and
defensible. In many cases model outputs do not have strong
foundations. If a decision maker is going to use model out-
puts in arriving at wastewater management decisions, the
technical personnel should provide as good a technical
basis as can be prepared. Such personnel should also be
able to advise the decision maker as to whether or not the
modeling work is strong or weak. Frequently, technical
personnel are apprehensive about indicating the weaknesses or
strengths of modeling work. It is important that any
decision maker have an objective appraisal regarding how
good or bad the supporting modeling work appears to be. In
the South River case, the decision makers were advised that
the modeling work was good enough to support the limita-
tions. In other cases, the advice to the decision maker
has been that the modeling work was not adequate to support
the conclusions and that the outputs should not be the sole
basis of the decision. It is essential that the technical
personnel provide not only the model output but an objec-
tive evaluation as to the credibility of the work support-
ing those outputs.
2. It is appropriate for technical personnel to advise the
designated decision maker as to the model outputs and the
merits of the outputs. These outputs are one contributing
factor but not often the sole factor. Both the technical
personnel and the decision maker should realize this. In
many cases, either too much or too little reliance is
placed on model results. A proper balance of all signifi-
cant factors must be maintained.
3. There are many factors which can be used in conjunction
with model results in arriving at a wastewater management
decision. In the South River case, two separate major
decisions were made. One was made in 1972 and the other
10
-------
was made in 1978. The decision in 1972 was to require
municipal wastewater dischargers to meet the prescribed
level of treatment. Aside from the model outputs, other
factors were considered in the decision. Local environ-
mental groups expressed considerable interest in improving
the quality of the river. Residents of the Lake Jackson
area were extremely concerned about the deteriorating water
quality of the lake. There was vocal public support for
any efforts to improve the river and lake. Another factor
was that local governmental entities were interested in ex-
panding their waste treatment facilities and wanted to be
eligible to receive federal funding for the construction of
the facilities.
In 1978, the decision was whether to continue to require
DeKalb County to meet the established effluent limitations.
There were different factors to consider in making this
decision than there were in making the 1972 decision. Al-
though the technical basis for the model results was im-
proved, the political factors had radically changed. The
major political entity involved no longer accepted the
effluent limitations. DeKalb County attempted to utilize
means other than direct conversation with the State and EPA
to delay or eliminate the limits. The tone and direction of
EPA concerning advanced waste treatment projects had
changed. Questions were being raised nationally concerning
the need for higher levels of treatment. The third ad-
ditional factor was the precedent factor. There were many
other communities observing what decision would be reached
and how they could use it. It should be remembered that
the City of Atlanta had chosen to remove their discharges
from the basin. The decision was based in part on the
original effluent limitations. Any change in the effluent
limitations for DeKalb County would create uncertainties
over the decision made by the city. Another factor con-
sidered was the federal funding issue. Grants had been
awarded to DeKalb County for the construction costs of
facilities designed to meet the original effluent limita-
tions. If the effluent limitations were changed, the
possibility existed that the funds would be withdrawn from
DeKalb County because the facilities would have to be re-
designed. The last factor considered was that the plants
could be given permits that would contain seasonal limita-
tions. This would afford the county some operational cost
savings and still attain water quality goals.
The above factors are the type that decision makers must
consider in addition to the technical model outputs. Each
situation will have its own set of additional factors.
4. The level of technical evaluation to support any modeling
outputs should reflect the magnitude of the decision being
made and the complexity of the water system being affected.
11
-------
In the case of the South River, the evaluation was reason-
ably extensive which was important because the resulting
cost of the final construction of the needed facilities was
approximately $150 million. Another aspect of this point
is that the approach selected should fit the situation. For
example, the model selected should apply to the type of
situation that is being examined. It is often not necessary
to develop a new model but rather to apply an existing model
with adequate data. In the case of South River a steady
state, one dimensional dissolved oxygen model was utilized.
It was not necessary to utilize a more sophisticated model.
In determining the phosphorus limitation, it was necessary
to modify the Vollenweider approach to approximate actual
conditions observed in Georgia lakes. This modification is
the type of change that personnel doing water quality
analyses should be aware of. It is not enough to merely
put numbers into a program and obtain results. It is
necessary that the personnel be able to analyze, interpret
and understand what a model's output means.
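
For readers unfamiliar with the form of model referred to above,
the sketch below evaluates the classical Streeter-Phelps dissolved
oxygen sag, the simplest steady state, one dimensional D.O.
formulation. The rates, loads and saturation value are hypothetical
and are not those used in the South River analysis.

    # Minimal sketch of the classical Streeter-Phelps dissolved oxygen
    # sag; all input values below are hypothetical.

    import math

    def do_deficit(t, L0, D0, kd, ka):
        """Oxygen deficit (mg/l) at travel time t (days) below a
        discharge.

        L0 : initial ultimate BOD (mg/l)    D0 : initial deficit (mg/l)
        kd : deoxygenation rate (1/day)     ka : reaeration rate (1/day)
        """
        return ((kd * L0 / (ka - kd))
                * (math.exp(-kd * t) - math.exp(-ka * t))
                + D0 * math.exp(-ka * t))

    do_sat = 8.5                            # assumed saturation D.O., mg/l
    for t in (0.0, 0.5, 1.0, 2.0, 3.0):
        deficit = do_deficit(t, L0=15.0, D0=1.0, kd=0.30, ka=0.60)
        print("t = %.1f d   D.O. = %.2f mg/l" % (t, do_sat - deficit))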
The four points mentioned above are by no means an all en-
compassing list. They do represent points that all individuals
who are involved in modeling should give consideration to. The
most important point, however, in the administrator's use of model
output is the credibility of the technical personnel providing
these outputs. Technical personnel must be able to give a
decision maker a clear, objective appraisal of the merits of
the outputs.
It is also necessary that the technical personnel under-
stand that they are part of a decision making process and should
not be expected to make the decisions. They should be an
integral part of the process and should contribute to the
decision, but it is equally important that decisions be made at
the proper level.
12
-------
REFERENCES

1.  Review of Advanced Wastewater Treatment Works Proposed for
    DeKalb County, Georgia, Office of Water Programs Operation,
    U.S.E.P.A., November 7, 1978.

2.  Analysis of AWT Effluent Limits for DeKalb County, Georgia,
    Jerome Horowitz and Larry Bazel, October 5, 1978.

3.  Review of Report Entitled "Analysis of AWT Effluent Limits
    for DeKalb County, Georgia" prepared by Jerome Horowitz and
    Associates; Technical Support Branch, Water Division, EPA
    Region IV, November 7, 1978.

4.  Letter from Dr. Ernest C. Tsivoglou to Walter B. Russell,
    Jr., Chairman of DeKalb County Commission, December 1, 1978.

5.  Comparison of Eutrophication Models, John S. Tapp,
    U.S.E.P.A., Region IV, May 1976.

13
-------
STORET: A DATA BASE FOR MODELS
by
Phillip L. Taylor
One of the important topics before this workshop is the
data base used by modelers. The data base is not all in one
place and in many instances modelers are familiar with the
situation of going to a number of sources to obtain data needed
in the development of a model. Obtaining data such as time-of-
travel, geometry, streamflow, or rainfall may involve contacting
EPA, U.S. Geological Survey (USGS), Corps of Engineers (COE),
or National Oceanic and Atmospheric Administration (NOAA). My
topic today is EPA's data system - STORET - and I hope to shed
some light on how it can be used by modelers.
For the sake of discussion, modeling is divided into three
categories related to the geographic size of the area modeled
and the type of data needed for the model. At one end of the
spectrum are wasteload allocation models and at the other end
are macro models of whole river systems. Between these are
models for relatively large systems such as the Great Lakes or
multi-county non-point source areas. There are many situations
where STORET has data for a model category. A quick review of
the data system covers types of data available, data sources,
some of the analysis routines, and examples in which the data
base was used in modeling.
The STORET system in its operation over the past 15 years
has been used by many Federal, State and local monitoring pro-
grams which have collected and placed in the data base 50 million
monitoring observations for 200,000 surface water sampling
locations. These data are entered into the system on a daily
basis by more than 225 computer terminals in 120 cities with
master system updates each weekend.
Monitoring and Data Support Division, USEPA, Washington, D.C.
14
-------
One of the major STORET data sources is the US Geological
Survey. All of the water quality data collected by USGS is
stored in WATSTORE for USGS purposes and it is transferred
monthly to STORET for purposes of dissemination to other agen-
cies, and the public. Additionally, EPA periodically receives
streamflow data tapes from USGS and STORET maintains the sup-
porting software for its use. EPA and USGS have an interagency
agreement for sharing these data, and another agreement with the
Survey includes EPA's participation in the National Water Data
Exchange (NAWDEX). As part of this latter agreement, STORET is
indexed annually to assist NAWDEX referral services.
The Corps of Engineers, Bureau of Reclamation, TVA and U.S.
Forest Service are important sources of STORET monitoring data
for many streams, lakes and reservoirs associated with their
areas of responsibilities. Data throughout the Great Lakes is
provided by the International Joint Commission (IJC) through
cooperative Canadian and U.S. programs.
States provide data to STORET from their ambient monitoring
program. The principal guidance document, entitled the Basic
Water Monitoring Program (BWMP) emphasizes the need for inten-
sive surveys to be done in priority basins at least once every
five years for developing wasteload allocations, setting water
quality standards and assessing conditions in those basins.
This activity, while serving State purposes, will make more
data available to modelers for a wide variety of purposes.
EPA has several monitoring programs which generate various
types of water quality data. The Regions and Headquarters
provide priority pollutant data from ambient, fish and sediment
samples. Effluent data are available from an increasing number
of facility (both industrial and POTW) surveys. Regional data
from on-going fate studies on toxic pollutants will be used for
large scale modeling, but may also be of value to WLA modeling.
Data in STORET from other studies, such as lake eutrophication
surveys and Section 208 non-point source studies, should be use-
ful in developing models of intermediate scale.
All of these many sources of data have collectively pro-
vided a very large data base, only part of which is of interest
to this workshop. Table 1 shows the overall abundance of data
in STORET in various topical areas. Shaded maps, such as Figure
1, are available for each topical area to show the geographic
distribution of these data and to help investigators determine
if needed data are available in the general location under
study.
Data can be retrieved by station, basin, county, or ir-
regular area (polygon). Examples for some of the STORET out-
puts are data inventories for each monitoring station, raw data
listings, statistical summaries, computed daily stream loadings,
15
-------
TABLE 1

STORET DATA AVAILABILITY FOR TOPICAL PARAMETERS

                         All Years            1975             1975
Topical                  Number of         Number of        Number of
Parameter              Observations      Observations        Stations
                        (millions)        (millions)       (thousands)

Temperature                3.06              .295              29.9
Turbidity                   .90              .096              12.2
Color                       .54              .054               8.1
Conductivity               1.66              .143              22.1
Dissolved Oxy.             1.87              .199              20.4
TDS                         .74              .057               3.9
Chlorides                  1.85              .094              17.2
Sulfate                     .89              .061              13.2
Nitrogen-Total              .11              .021               4.4
        -Organ.             .23              .028               5.7
        -Ammonia            .77              .099              16.8
        -Kjeldahl           .42              .086              11.9
Phosphorus-Tot.             .82              .117              19.6
Chlorophyll-A               .04              .006               0.8
Algae-Total                 .06              .003               0.9
Coliforms-Total             .78              .046               7.2
         -Fecal             .61              .107              14.9
DDT                         .03              .005               1.7
Dieldrin                    .03              .005               1.9
PCB-Total                   .01              .003                .9
Chromium-Total              .14              .024               7.1
Mercury-Total               .13              .001               6.6

16
-------
[Figure 1 is a shaded map of the United States produced by the STORET
system showing chloride data availability: 85th percentile chloride
concentrations (mg/l) for 1973-75, in classes of >7 to 20, >20 to 66,
and >66 mg/l.]

FIGURE 1. DATA AVAILABILITY, CHLORIDES
-------
river profiles or water quality mapping. The STORET files can
be interfaced with EPA software, with system software such as
SAS or HMD, or with user-supplied software. The STORET software was
developed to meet specific user needs, including comparing water
quality with standards, analyzing water quality trends, deter-
mining water quality improvement related to enforcement actions,
or assisting in the general distribution of water quality data.
If modelers identify additional requirements, STORET management
would be able to include such requirements in future system
development plans, both in terms of data acquisition and software
for retrievals and displays most meaningful to modelers.
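
As an illustration of one of the computed outputs mentioned above,
the sketch below calculates a daily stream loading as the product
of a daily mean concentration and a daily mean streamflow with a
unit conversion. The station values are hypothetical and the
calculation indicates the idea only; it does not reproduce a STORET
retrieval or its output format.

    # Minimal sketch of a computed daily stream loading.  Since mg/l is
    # equivalent to g/m3, (g/m3)*(m3/s)*86400 s/day / 1000 g/kg reduces
    # to a conversion factor of 86.4.

    def daily_load_kg(conc_mg_per_l, flow_m3_per_s):
        """Daily constituent load in kg/day."""
        return conc_mg_per_l * flow_m3_per_s * 86.4

    # Three hypothetical daily values of concentration and flow.
    samples = [(22.0, 4.1), (18.5, 5.6), (30.2, 3.3)]   # (mg/l, m3/s)
    for conc, flow in samples:
        print("conc %5.1f mg/l  flow %4.1f m3/s  load %7.1f kg/day"
              % (conc, flow, daily_load_kg(conc, flow)))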
The STORET data base is just starting to receive a signifi-
cant increase in intensive survey data from States, and EPA has
proceeded to provide some special system capabilities. It is
expected that this first step will be important in providing
stream and effluent data for modeling purposes. The Basic Water
Monitoring Program (BWMP) defines State water monitoring re-
sponsibilities, and calls for a shift toward more intensive
surveys and away from excessive long term fixed station activi-
ties. The BWMP emphasizes the need to acquire excellent data
in the intensive surveys for determining effluent limits for
water quality based permits. These surveys will be important in
developing cost effective plans for advanced waste treatment
(AWT), advanced secondary treatment (AST) and treatment in excess
of best available treatment (BAT). It is important that the
decisions, locally and nationally, for higher degrees of treat-
ment be based on sound data, because of the large economic and
environmental implications associated with possible mistakes
in this area.
Intensive survey data will be stored in STORET by the
States and they can use the system as an analytical tool to help
summarize and interpret survey data. The completion of a survey
abstract (Figure 2) is a first step for storing intensive survey
data. These abstracts will be used to assist in coding, storing
and retrieving the survey data. Suggestions to help make the
abstract better meet modelers' needs, both now and in the future,
can lead to a STORET capability which will truly be of signifi-
cant value in modeling activities.
The STORET analytical and display capabilities for open
water data have potential value for linking data from separate
cruises and near shore sampling together in a manner which can
be compared with model outputs. These techniques can handle
large amounts of data for many spatial and temporal situations.
Data from Saginaw Bay in Lake Huron will help to indicate some
conceptual uses of the data base.
The EPA Large Lakes Research Station at Grosse Ile, Michi-
gan used STORET retrievals of raw data and contour plots
(Figure 3) for Saginaw Bay to correlate with a LANDSAT imagery
18
-------
Responsible Office
Segment ID
FIGURE 2
INTENSIVE SURVEY ABSTRACT
Person
Phone
(name of principal stream or water body)
Brief Description of Survey
Location of Survey
(L/L polygon)
Parameters Measured
P =
P
P
etc.
Station Storage Data
or
Station Pointer Data
Station
Agency/Station Order Lat/Long State Co Type Name RMI (yes/no)
A= S=
A= S=
19
-------
FIGURE 2
(Continued)
CHECK-OFF LIST
(1) Survey Purposes
Reconnaissance
Problem assessment
Model calibration/verification
Wasteload allocation
Municipal permits
Industrial permits
Other
(2) Sources/Problems
Storm or combined sewer
Non-point source agriculture
Non-point source silviculture
Non-point source urban
Lake eutrophication
Thermal
Stratification
Land disposal
Municipal
Industrial
Power
Irrigation
Other
(3) Land Uses - Non-point Sources
Residential
Office/shopping
Industrial
Agricultural
Homogeneous
Other
(4) Water Uses
Drinking water
Aquatic life
Sports fish
Commercial fish/shellfish
Recreation
Irrigation
Industrial supply
(5) Water Body Types
Stream
Lake
Impoundment
Estuary
Bay
Ocean
Swamp
Groundwater
Other
(6) Sample Types
Ambient Water
Sediment
Fish
Rainfall
Precipitation
Streamflow
Point source
Raw water supply
Washoff
Other
(7) Parameter Groups
Physical
Biological
Bacteriological
Nutrients
Solids
Metals
Pesticides
Trace Organics
Other
20
-------
[Figure 3 consists of two machine-contoured maps of Saginaw Bay from
STORET data: machine contoured chloride data, 33 stations, July 29-31,
1975, and machine contoured Secchi depth data, 33 stations, July 29-31,
1975.]

FIGURE 3. SAGINAW BAY CONTOUR PLOTS (STORET)
(Source: Production of a Water Quality Map of Saginaw Bay by Computer
Processing of LANDSAT Data, Bendix Aerospace Systems Division)

21
-------
project. The project objective was to investigate new surveil-
lance and analysis techniques which were helpful in determining
locations and relative magnitudes of water quality gradients in
large bodies of water.
Another example of using STORET for open water analysis
involved IFYGL data for Lake Ontario. One model output (Figure
4) was compared with two and three-dimensional plots (Figures
5 and 6) generated by STORET using IFYGL data in the system to
help interpret the model results. One point of interest to
the modelers was that these plots show irregular chlorophyll
levels occurring along the shoreline particularly near
Rochester and Oswego with some noticeable spikes which probably
were attributable to algal blooms. These secondary factors
should be taken into consideration when comparing the outputs
of a model with actual ambient data.
An area of interest for all data users has been the quality
of data. In January 1978, QA/QC began receiving high level
attention when the Administrator established a select group,
The Blue Ribbon Monitoring Committee, to review EPA's monitor-
ing and data programs. The Committee was concerned with im-
proving data quality associated with sampling, instrumentation,
analytical methods, data processing, and other methodologies.
Because of new priorities on toxics, several protocols for field
sampling and lab analysis have been improved to provide greater
assurance that the toxics monitoring data are reliable. Paral-
leling the field and lab work, STORET is keeping pace with new
requirements through adjustments to accommodate QA and QC codes,
multi-media data and parameters to cover the many organics being
found with GC/MS instrumentation.
EPA Headquarters is currently conducting a program to
evaluate exposure and subsequent risk from the presence of toxic
pollutants in the water environment. This program is using the
EXAMS (Exposure Analyses Modeling System) model developed by
EPA's Environmental Laboratory in Athens, Georgia to model 114
organic compounds on the list of 129 priority pollutants. EXAMS
is a multicomponent kinetic model that synthesizes environmental
fate and transport processes for a specific set of environmental
conditions. The Headquarters program is in the process of
developing various rate coefficients needed to run the model.
The model's use of physical, chemical and biological fate
processes is based on the results of literature reviews and
laboratory studies. Data from Regional surveillance and analy-
sis field studies on stream segments will help to better deter-
mine rate coefficients. The concept of stream and river reaches
will be used later in the model so that fate evaluations can be
made for actual waterways. This approach depends on the use of
STORET to provide data which is available for many miles of
waters and computerized stream reaches. In many instances these
22
-------
[Figure 4 is a three dimensional plot of model output for Lake Ontario
(Lake 3 model), layer 1 (0-4 meters), June, with the Niagara River,
Rochester, Oswego and the St. Lawrence River marked; phytoplankton
chlorophyll a maximum, 8.2 ug/l.]

FIGURE 4. THREE DIMENSIONAL PLOT OF PHYTOPLANKTON CHLOROPHYLL
CALCULATED FROM LAKE 3 MODEL - JUNE, 0-4 METERS.
(Source: Thomann et al., Mathematical Modeling of Phytoplankton in
Lake Ontario: 1. Model Development and Verification, 1978)

23
-------
[Figure 5 is a three dimensional plot of Lake Ontario chlorophyll a
(ug/l), with Rochester and the St. Lawrence River marked.]

FIGURE 5. LAKE ONTARIO - CHLOROPHYLL a (ALL DATA),
1972 MONITORING DATA FROM STORET
-------
FIGURE 6. LAKE ONTARIO CHLOROPHYLL 'a' (0-4 METERS),
JUNE 1972 MONITORING DATA FROM STORET
data will be used to enhance or replace simulated data from
models.
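The kind of first-order fate calculation such an evaluation rests on can be
pictured with a minimal sketch. The code below is not the EXAMS program
itself, and the process rate coefficients, segment volume, outflow, and
loading are hypothetical placeholders.

    # Minimal sketch of a first-order fate screening calculation for a single
    # well-mixed water segment.  This is not the EXAMS program; the rate
    # coefficients, volume, outflow, and loading below are hypothetical.

    def steady_state_concentration(load_g_per_day, volume_m3, outflow_m3_per_day,
                                   process_rates_per_day):
        """Steady-state concentration (g/m3) when every loss process is first order."""
        k_processes = sum(process_rates_per_day.values())    # summed process rates, 1/day
        k_flushing = outflow_m3_per_day / volume_m3          # hydraulic washout, 1/day
        return load_g_per_day / (volume_m3 * (k_processes + k_flushing))

    if __name__ == "__main__":
        rates = {                      # hypothetical first-order rate coefficients, 1/day
            "hydrolysis": 0.01,
            "photolysis": 0.05,
            "volatilization": 0.02,
            "biodegradation": 0.03,
        }
        c = steady_state_concentration(load_g_per_day=500.0, volume_m3=2.0e6,
                                       outflow_m3_per_day=1.0e5,
                                       process_rates_per_day=rates)
        print(f"steady-state concentration = {c:.4f} g/m3")

Field-derived rate coefficients for actual stream reaches would simply replace
the placeholder values in such a calculation.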
Rivers and streams of the United States are being digitized
by EPA for a data base called the Reach File. It will include
approximately 50,000 stream reaches, representing one-seventh
of the country's streams. The file will provide a framework for
simulated routing of streamflow and pollutants through the
Nation's river systems. One of the first uses of the file will
be to better describe the stream conditions used in the EXAMS
model so that fate evaluations can be made for an actual river or
stream reach using existing data rather than simulating all
conditions.
In conclusion, it is clear that limited use has been made
of STORET as a data source for modelers. Some modelers do use
it to store and sift through data they have gathered for wasteload
allocation efforts. This need is increasing, and the means for
handling the associated data should be improved. On a larger
scale, STORET toxics data and the Reach File will play an im-
portant role in modeling whole river systems. These efforts
will be accompanied by strengthened activities for improving
data quality.
MODELING FRAMEWORK & SOFTWARE VALIDATION
by
John P. Lawler
I. MODEL FRAMEWORK - ELEMENTS
- Time Variable or Steady State
- One, Two or Three Dimensional
- Completely Mixed, Plug Flow, Partially Mixed
- Time Averaged, Space Averaged
- Deterministic, Stochastic
- Empirical, Conceptual (Mechanism)
- Linear, Non-linear
- First-order, Zero-order, Michaelis-Menten
- Single Species, Multi-species, Community
- Population vs Ecological Level
- Hydrodynamic, Mass Transport, Energy Flow
- Physical, Chemical, Biological Kinetics & Transport
- Combinations of many of the above
II. GUIDELINES FOR ELEMENT SELECTION - BROADER THOUGHTS
SOMETIMES LOST
Delineation of the Problem vs Mechanics of Solution: Too
often, the modeler takes a given outline of the problem as
the given, i.e., a cast-in-concrete perception which only
requires a solution into which he is only too glad to
plunge. Immediately there follows a careful, or sometimes
not so careful, sorting of model elements, such as those
described above, and an eventual choice of a model frame-
work and solution technique.
Some thoughts are offered toward broader views on which the
modeler might well reflect prior to taking the plunge.
It has been my experience that these and like considera-
tions eventually surface in most modeling efforts. The
trick, I believe, is to start here - not find out after six
months or two years or ten years of study effort - and
Partner, Lawler, Matusky & Skelly Engineers, Pearl River, New
York.
associated dollars, that a recasting must take place, which
could have been avoided with a little more foresight. Of
course, hindsight is supposed to be better than foresight,
but as modelers, our predictions are supposed to eventually
verify. I submit that the better the early broad-based thinking,
the better will be our simulation of "the problem", regard-
less of the eventual choice of model framework and solution
nitty-gritty.
Narrow vs Broad Scope: Where should emphasis be placed -
on modeling the water quality changes and improvements to
be expected in the presence of upgraded point source
treatment, or on the total picture, including non-point
source pollution? Are our priorities properly placed if
we ever refine our ability to simulate water quality
changes accruing from water supply development and point
source treatment, and forget that introduction of these
utilities changes the growth pattern and growth rate of a
community? Positive (or negative, depending on your point
of view) feedback, to be sure!
Single vs Multiple Effects: Where should emphasis be
placed - on concerning ourselves with the hypothetical
impact of a single aspect of a single industry on a single
species of fish, or on the multiplicity of interrelated
phenomena that together make up the real world condition,
good or bad, of our nation's waterways?
No Action vs Alternative Actions: Shall we stifle needed
projects because, by comparison to the "no action" alterna-
tive, they are viewed as producing unacceptable impacts or
levels of risk? Here we are not addressing the valid "no
action" alternative, but rather the comparison to the
absence of any project in the past, in situations where
some project must proceed, or already exists.
Zero Discharge vs Acceptable Levels of Water Quality
Change: Toxic substances are now a fact of life. Shall we
wring our hands in despair, as the "carcinogen of the week"
is announced, or shall we work toward an ever better under-
standing of the kinetics, transport and distribution of
these substances in our air, land and water, at the same
time others are working toward a similar ever better under-
standing of their impact on man? This is an area, I be-
lieve, where sophistication and refinement of today's
modeling technology, often overused in many applications,
will be tested. Substantial advances in many of the
elements listed above will be necessary.
The Planner vs the Modeler: The theme of each of the fore-
going thoughts revolves around the responsibility of all
engaged in delineating a problem and framing its solution
to ensure the problem is properly defined and fully
described before proceeding to simulate the particular
real-world phenomena. Too often, the modeler is satisfied
to take his direction from the planner, the administrator,
the regulator, without recognizing the role he can play in
defining and guiding that direction. After all, if he is
charged with a real world simulation, then he ought have
insights that others may not have. At the very least, he
should train himself to think broadly, so that he contrib-
utes at the time when broad thinking is required.
III. MODEL VALIDATION
Model validation or "acceptance testing" entails verifying
that the working model does indeed represent the system it
has been constructed to represent. Although this step in-
cludes "program debugging," it is by no means limited to
that procedure. During this step, the "working" model is
compared with "known" analytical or numerical solutions for
systems similar to the current one. This is achieved by "de-
generating" the "working" model to simpler systems with
accepted solutions, usually by the judicious selection of
test data.
For example, a dynamic model must eventually reach a steady
state, given repetitive input and boundary conditions. The
results of the "working" model can be compared with a
steady-state solution with the same test data; the compari-
son is made after sufficient time has elapsed for the
"working" model to damp out the transient behavior result-
ing from the initial conditions.
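A minimal sketch of such an acceptance test is given below, assuming a
hypothetical one-dimensional reach with first-order decay: the time-variable
calculation is run with a constant boundary input until the transient damps
out and is then compared with the known analytical steady-state profile. The
geometry and coefficients are illustrative only.

    import math

    # Acceptance-test sketch: a time-variable, one-dimensional advection-decay
    # model is run with constant input until the initial transient damps out,
    # then compared with the analytical steady state C(x) = C0 * exp(-K x / U).
    # All geometry and coefficients are hypothetical.
    U = 0.2                      # stream velocity, m/s
    K = 0.3 / 86400.0            # first-order decay rate, 1/s (0.3 per day)
    L = 20000.0                  # reach length, m
    C0 = 10.0                    # constant boundary concentration, mg/L
    N = 200                      # number of computational elements
    dx = L / N
    dt = 0.8 * dx / U            # time step satisfying the Courant condition

    c = [0.0] * (N + 1)          # initial condition: clean reach
    c[0] = C0
    t, t_end = 0.0, 5.0 * L / U  # several travel times, enough to damp the transient
    while t < t_end:
        new = c[:]
        for i in range(1, N + 1):
            advection = -U * (c[i] - c[i - 1]) / dx      # upwind advection
            new[i] = c[i] + dt * (advection - K * c[i])  # explicit time step
        c = new
        t += dt

    worst = max(abs(c[i] - C0 * math.exp(-K * i * dx / U)) for i in range(N + 1))
    print(f"largest deviation from the analytical steady state: {worst:.3e} mg/L")

A small residual difference attributable to the numerical scheme is expected;
a large or growing difference would point to a "bug" in the program or the
solution technique.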
A model developed with space variable parameters, e.g.
variable cross-sectional areas, can be compared to an
analytical or numerical solution developed for constant
cross-sectional areas by the use of constant cross-
sectional areas as input data to the "working" model.
Validation is a necessary but not sufficient condition for
"working" model acceptance. Subtle "bugs" in the program,
the program logic, or the numerical solution itself may appear
well into the application of the model, when unforeseen
cases are run. No amount of validation testing is ever enough;
usually a wise choice of several validation tests is made,
depending on the particular type of "working" model. The
final result of validation is a functional model, i.e.,
a tested, "working" model.
PHILOSOPHY UNDERLYING
PARAMETER ESTIMATION FOR
WATER QUALITY MODELS
by
Carl W. Chen 2
Steven A. Gherini
Introduction
Water quality models possess several properties useful to
decision makers. They predict the consequences of various en-
gineering alternatives proposed for environmental improvement
or for mitigation of deleterious effects. They also provide
technical bases for explaining why and how environmental bene-
fits are to be accrued by the proposed actions.
For decision makers to accept the conclusions based upon a
quality model, the model must be credible. Credibility can be
enhanced by using a model that is based upon sound scientific
principles, that has had several previous successful applica-
tions for different prototypes, that is readily calibrated to
the system being modeled, and that has been verified for the
aquatic system in question using data sets independent of those
used for calibration.
The purpose of parameter estimation is to improve model
calibration and therefore credibility. Parameter values are
selected that result in adequate comparisons between the ob-
served and the predicted water quality data.
A discussion of parameter estimation cannot be provided
without making reference to other related issues, e.g., model
complexity, the rates and expressions included, and the boundary
conditions for the system being modeled. All these factors
influence the number of parameter values to be estimated and
the degree of empiricism involved.
Vice-President, Environmental Systems Engineering, Tetra-Tech,
Inc., Lafayette, California; presented paper.
2 Tetra-Tech, Inc., Lafayette, California.
The ultimate test of a model rests in its ability to repro-
duce the observed data. Since the model, regardless of its com-
plexity, is a simplified approximation of the prototype, it can
never simulate exactly all perturbations experienced by the real
system. Discrepancies invariably exist due to the errors intro-
duced by idealization of the prototype and/or due to errors in
the measurement of the field data. The problem of parameter
estimation is this: How can parameter values for a model be
selected and adjusted to minimize the errors in prediction?
Model Complexities
It is generally agreed that one should use the simplest
possible model appropriate for the analyses. The term "simple",
however, is relative, and is difficult to define. A model that
is "simple" to some may be complex to others.
Simple models do have special appeal to decision makers
looking for simple answers. They are attractive in that they
are sometimes inexpensive to use, in terms of both time and cost,
and they can be explained in a straightforward manner. These
advantages may be illusory, however.
Many aquatic systems are too complex to be handled by
simple models. Simple models often require unrealistic ideali-
zation of the prototype and negate many benefits of modeling,
e.g., keeping track of all the complex interactions and feedback
known to exist in the real system.
Nonetheless, it is up to the analyst to devise the simplest
possible model that will answer the questions of interest. Some-
times the analyst must acknowledge the need for a reasonably
detailed model. To be accepted, the analyst must also learn how to ex-
plain the model simply, regardless of its complexity, preferably
in layman's language, so that it can be understood.
To help decision makers understand issues, it is important
to discuss what water quality models are and how simple water
quality models are derived. Water quality models usually operate
on a grid system designed to provide spatial representation of
the water body. Each grid point can accept input described by
such boundary conditions as river inflows, waste inputs, river
outflows, and climatology.
Mathematical equations with appropriate coefficients are
formulated to describe the physical processes that transport
materials from one grid point to others; the chemical transfor-
mations that take place within the water volume represented by
the grid point; and biological processes such as nutrient uptake,
growth, respiration, mortality, and grazing.
Up to this point the various physical and chemical charac-
teristics of the prototype are considered in great detail. That
is, the total physical system is divided into smaller units and
the prototype behavior is dissected into a series of simultan-
eous reactions, represented by differential equations. An inte-
gration step must follow to produce answers.
The equations are solved by a computer to produce a time
history of water quality at each grid point. The rates of
various physical, chemical and biological transformations can
be printed for a detailed analysis of the relative importance
of various factors contributing to observed water quality (e.g.,
physical pollutant transport, chemical interactions, and organ-
ism growth).
Model simplification is accomplished by aggregating the
grid points, by taking a larger computational time step, and/or
by excluding certain formulations (and therefore processes) from
the model. As the model is simplified, however, more empiricism
and more external parameter estimation is required. For example,
if the model does not include hydrodynamic calculations, the
flow field must be estimated. When zooplankton, for example,
are excluded from the calculations, one must provide an estimate
of the phytoplankton loss to account for grazing by zooplankton.
Developing estimates for the added parameters is not easy.
They are intrinsic system properties that cannot be estimated
a priori. Their values in fact adjust themselves to the envi-
ronmental stimuli and therefore are not constants. To provide
good estimates for those parameter values, one needs extensive
field data. The data required to support a simplified
model can be several orders of magnitude more extensive than
those required for occasional checks against the output of
hydrodynamic calculations.
Further, the simplified models may calculate so-called
"average concentrations" for large areas over a relatively long
time. Such "average concentrations" cannot be compared directly
to observed data. Field observations are made at discrete
points in time. The observed data must be averaged before they
can be compared to the model output. How this "averaging" should
be done is not always apparent.
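The averaging question can be illustrated with a minimal sketch in which
discrete station observations are pooled to the model's segment and averaging
period before any comparison is attempted. The station names, segment
assignments, dates, and values below are hypothetical.

    # Sketch of averaging discrete observations to the space-time scale of a
    # simplified model before comparison.  All stations, dates, and values
    # are hypothetical.
    from statistics import mean

    # (station, day-of-year, chlorophyll-a in ug/L)
    observations = [
        ("ST-1", 152, 6.2), ("ST-1", 166, 8.9), ("ST-2", 153, 5.1),
        ("ST-2", 167, 7.4), ("ST-3", 158, 11.0), ("ST-3", 172, 9.3),
    ]
    segment_of = {"ST-1": "SEG-A", "ST-2": "SEG-A", "ST-3": "SEG-B"}  # model segments
    period = (150, 180)            # model averaging window, day-of-year

    def segment_averages(obs, seg_map, window):
        """Average all samples falling in `window`, grouped by model segment."""
        lo, hi = window
        pooled = {}
        for station, day, value in obs:
            if lo <= day <= hi:
                pooled.setdefault(seg_map[station], []).append(value)
        return {seg: mean(vals) for seg, vals in pooled.items()}

    print(segment_averages(observations, segment_of, period))
    # {'SEG-A': 6.9, 'SEG-B': 10.15} -- compared against the model's
    # "average concentration" for each segment over the same period

Even in this simple form, choices about the window length and the grouping of
stations change the numbers being compared, which is the point being made above.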
The point being made here is that oversimplification of a
model can be as bad as overcomplication. It is not axiomatic
that a simple model requires simple data and hence simple para-
meter estimates.
Nondesignated 208 Approach
Under the sponsorship of the Environmental Research Labora-
tory (Athens, Georgia), Tetra Tech has developed a three-part
program for water quality analysis in nondesignated 208 areas in
the United States.
The first part is screening where very simple models are
used. The calculations are such that they can be performed with
desktop calculators. A manual has been prepared detailing the
methodology, data needs, calculation procedures and limitations.
The manual has four sections describing the methods for analyzing
wasteloads (point and nonpoint sources), and for predicting pol-
lutant distributions in rivers, lakes, and estuaries.
The purpose of the screening methodology is to help identify
problem areas for more detailed analyses. While the methodology
is an estimation tool, and is not rigorous from a modeling stand-
point, it can serve as an educational tool for the non-modeler.
An appreciation of the basic principles underlying more complex
models can be gained by examining the procedures.
For the problem areas identified, a more sophisticated
analysis may be called for. The second part of the program is a
series of mathematical models that can be used for that purpose.
The third part of the program is the "rates manual", which
serves as a companion to the first two parts. In this manual,
the rates, constants, and kinetics formulations used in surface
water quality modeling have been compiled and analyzed. The
definition of each parameter, how it is used in the model, and
ranges of parameter values reported in the literature are pre-
sented. The subject areas include various physical, chemical,
and biological processes. Detailed information is provided for
reaeration, dissolved oxygen saturation, photosynthesis, carbon-
aceous deoxygenation, nitrogenous deoxygenation, benthic oxygen
demand, coliform bacteria die-off, and algal and zooplankton
dynamics.
Calibration Procedure
While autocalibration programs are available, they
must be used with care. The manual procedure of adjusting
coefficients for best fit is favored, because one can learn a
great deal about the system behavior by trying to resolve dif-
ferences between model predictions and observed data. By leav-
ing calibration to the computer, the model may well be calibra-
ted improperly.
For manual calibration, it is important for investigators
to select parameter values within the ranges reported in the
literature. In that respect, the "rates manual" described pre-
viously will be useful.
In comparing calculated and observed water quality data, one
should use several approaches. The temporal variations at
selected locations and the spatial variations at selected times
may be compared for the overall fit. After that, the point
differences between the observed and the calculated values can
be determined for a statistical evaluation of precision.
Model output can be shown to have varying sensitivity to
different classes of parameters. Parameter classes, ranked in
typical order of decreasing influence on model output, are pre-
sented below:
Boundary conditions
Coefficients for physical processes
Coefficients for chemical processes, and
Coefficients for biological processes
To improve calibration, one should adjust first the most
sensitive parameters (i.e., boundary conditions), followed by
the less sensitive parameters. The procedure, sketched below, is
to change the most sensitive parameter in one direction (e.g.,
from a large to a small value) until further improvement in the
calibration cannot be made, and then to do the same for the next
most sensitive parameter.
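A minimal sketch of this one-parameter-at-a-time adjustment follows. The toy
model, observed data, parameter names, and literature ranges are hypothetical,
and the sketch illustrates the manual strategy described above rather than an
autocalibration program.

    # Sketch of the manual calibration strategy: adjust the most sensitive
    # parameter first, moving in one direction until the fit stops improving,
    # then move to the next parameter.  Model, data, and ranges are hypothetical.

    def rmse(model, params, observed):
        pred = model(params)
        return (sum((o - p) ** 2 for o, p in zip(observed, pred)) / len(observed)) ** 0.5

    def calibrate(model, params, ranges, order, observed, step=0.05):
        """Adjust parameters in `order` (most sensitive first) within literature ranges."""
        for name in order:
            lo, hi = ranges[name]
            for direction in (+1, -1):              # try one direction, then the other
                while True:
                    trial = dict(params)
                    trial[name] = min(hi, max(lo, params[name] + direction * step))
                    if trial[name] != params[name] and \
                            rmse(model, trial, observed) < rmse(model, params, observed):
                        params = trial              # keep the improvement
                    else:
                        break
        return params

    # Hypothetical two-parameter DO profile evaluated at four stations.
    stations = [0.0, 1.0, 2.0, 3.0]
    def toy_model(p):
        return [p["boundary_do"] - p["deoxygenation"] * x for x in stations]

    observed_do = [8.0, 7.4, 6.9, 6.3]
    literature_ranges = {"boundary_do": (6.0, 9.0), "deoxygenation": (0.1, 1.0)}
    start = {"boundary_do": 7.0, "deoxygenation": 0.3}

    best = calibrate(toy_model, start, literature_ranges,
                     ["boundary_do", "deoxygenation"], observed_do)
    print(best, "RMSE =", round(rmse(toy_model, best, observed_do), 3))

The clamping to the literature ranges reflects the point made above: the fit
is improved only with parameter values that remain theoretically defensible.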
Error Analysis
It is important to understand that there will always be
discrepancies between the observed and the predicted water
quality. Errors are most commonly introduced by the following
factors:
- The boundary conditions are more variable in the prototype
  than those assumed for the model.
- Prototype processes are more complex than the simplified
  formulations used.
- Rate coefficients have random variations not accounted
  for in the model.
- There are errors in the field measurements.
All model studies should include sensitivity analysis to
quantify the relative magnitude of error associated with each of
the above factor groups.
New Field Programs
Many historical data sets available for water quality model-
ing studies are incomplete. Most data sets contain the water
quality measurements within the aquatic system without simultan-
eous measurement of boundary conditions, i.e., hydrologic inputs,
pollution loads and meteorology.
Modelers are often hard-pressed to come up with the esti-
mates needed for their modeling work. Given inadequate data,
constant boundary conditions are often assumed and imposed on
the model. Nonetheless, the model is still expected to perform
the impossible task of simulating variable water quality condi-
tions observed in the field.
Recently, we have been involved in a very exciting program
where modeling studies and field research activities are being
performed concurrently and in a coordinated manner. The purpose
of the program is to gain understanding of the processes invol-
ved in the neutralization of acidic precipitation. As a focal
point, the following task has been delineated: Determine why
three lakes in the Adirondack Mountains of New York exhibit dif-
fering water quality (pH, alkalinity, etc.) in response to
seemingly similar inputs of acid rain. One lake has a pH of
about 7, one is acidic with a pH of 4-5, and one has a variable
pH ranging from 4 to 7. What must be discovered is how various
physical, chemical and biological processes within the basin
interact to produce these effects.
The program is being sponsored by the Electric Power
Research Institute (Palo Alto, California). Six universities
and one private company are working together to research im-
portant processes and to measure various state variables
throughout the system, which includes atmospheric, terrestrial,
and aquatic components. Two national laboratories and two other
federal agencies are also participating in the field research
program. Field measurements are synchronized so that basin in-
put, basin output, and system state variables are monitored
simultaneously.
The field program has been designed based in part upon a
conceptual model formulated to follow the quantity and quality
of precipitation (rain or snow) from the tree top, through the
canopy, soil horizons, bogs, stream reaches and lakes. The
model was conceptualized based on an extensive review of litera-
ture.
The hydrologic section of the model has already been devel-
oped. At the preliminary calibration stage, the model has been
used to bracket the depths of flow in the saturated zone of
soils, the likely permeabilities of the soil horizons and other
geohydrological characteristics. This information is used to
help refine the field sampling program design.
The field program has been underway for 18 months and will
continue for another two and one-half years. The first two
years' data will be used to calibrate the model; the last two
years' data will be used for verification. Based on the results
obtained to date, there are reasons to believe that this coordi-
nated modeling/field research program will prove successful.
It is true that not many studies can be conducted in such
depth. On the other hand, Federal monies have too often been
spread so thinly on so many projects that very little return per
dollar spent has been witnessed. Decision makers often fail to
get the facts upon which to base their decisions, and modelers
continue to be blamed for not having been able to properly test
and calibrate their models. If modelers are expected to con-
tribute, they must be supported to do what is right, not simply
what is inexpensive.
REFERENCES
1. Zison, S.W., et al., "Water Quality Assessment: A Screening
Method for Nondesignated 208 Areas," EPA-600/9-77-023, EPA
Environmental Research Laboratory, Athens, Georgia, August
1977.
2. Zison, S.W., et al., "Rates, Constants, and Kinetics Form-
ulations in Surface Water Quality Modeling," EPA-600/3-78-
105, EPA Environmental Research Laboratory, Athens, Georgia,
December 1978.
MEASURES OF VERIFICATION
by
Robert V. Thomann
Introduction
There are two basic reasons for constructing representa-
tions of natural water systems through mathematical modeling.
The first is the need to increase the level of understanding of the
cause-effect relationships operative in water quality; the
second is to apply that increased understanding to aid the
decision-making process. Water quality models are largely
syntheses of a number of phenomena: water transport, complica-
ted reaction kinetics, and externally generated residuals in-
puts. The builder of water quality models acts as one part of
a three-part interaction which includes the specialist who
generates process details (e.g., uptake of nutrients by phyto-
plankton) and the manager who is concerned with the problem
specification and ultimately its resolution in some sense.
Figure 1 shows this interaction.
For more than fifty years now, this relationship has con-
tinued in a great variety of increasingly complex water quality
modeling situations. But one of the common threads throughout
this period has been the constantly recurring question of the
validity, credibility, and utility of the water quality models.
Indeed, even in the historical roots of water quality modeling,
as embodied in the famous Ohio River dissolved oxygen studies
of the 1920's, this question of model validity was present.
Resurveys of the Ohio River between 1914 and 1930 were con-
sidered critical to a justification of the basic theory
of deoxygenation and reaeration (Crohurst, 1933). Indeed, the
works of Streeter and Phelps (1925) addressed the question of
model validity quite directly by numerous qualitative compari-
sons to observed data and through quantitative comparisons by
computing, for example, the root mean square error between
Prof., Env. Eng. & Science Program, Manhattan College, Bronx,
New York.
dissolved oxygen theoretical calculations and observed values.
One easily gains the impression from a reading of these early
works that analysis of the relationships between observed and
computed values, both qualitatively and statistically, was a
normally acceptable and expected procedure.
As the issues of water quality become more complex, re-
quiring the interaction of numerous variables in space and time,
the questions of model credibility increase. The responses to
these questions often tend to be somewhat qualitative; e.g.,
"The results appear reasonable" or "The major features of the
observed behavior have been captured" or "The comparison between
observed and computed values is marginal but sufficient for
most purposes." Increasingly, most assessments of model valid-
ity do not seem to directly answer the basic questions of the
manager, the specialist or the general public. Common questions
are: "How good is the model?", "What is the level of confidence
that we can place on your results?", "How do two models purport-
ing to represent the same water quality phenomena compare to
each other?"
In the light of the questions raised on model credibility,
it is appropriate to address the issue of what measures of
verification, if any, might be useful in today's water quality
modeling setting. However, a brief review of the principal com-
ponents of a water quality model is necessary to clarify and
propose some language that might be applicable to this issue of
model verification. Within the context of water quality prob-
lems, the basic issues discussed apply also to models of input
generation and water transport.
Figure 2 shows the principal components of a mathematical
modeling framework. The upper two steps enclosed with the
dashed lines, namely "Theoretical Construct" and "Numerical
Specification," constitute what is considered a mathematical
model. This is to distinguish the simple writing of equations
for a model from the equally difficult task of assigning a set
of representative numbers to inputs and parameters. Following
this initial model specification are the steps of a) model
calibration, i.e., the first "tuning" of model output to observ-
ed data, and b) model verification, i.e., the use of
the calibrated model on a different set of water quality data.
This verification data set should presumably represent a
sufficiently perturbed condition (i.e., high flows, decreased
temperature, changed waste input) to provide an adequate test
for the model. Upon the completion of this verification or
auditing step, the model would be considered verified.
Stages of Model Credibility
The following definitions are therefore offered:
FIGURE 1. RELATIONSHIP BETWEEN MODELER, SPECIALIST AND MANAGER IN WATER
QUALITY PROBLEM SPECIFICATION

FIGURE 2. PRINCIPAL COMPONENTS OF MATHEMATICAL MODELING FRAMEWORK
(GENERAL THEORY, THEORETICAL CONSTRUCT, FIELD AND LABORATORY DATA,
NUMERICAL SPECIFICATION, COMPUTED OUTPUT, MODEL CALIBRATION, MODEL
VERIFICATION)
1. Model: A theoretical construct, together with assignment
of numerical values to model parameters, incorporating
some prior observations drawn from field and laboratory
data, and relating external inputs or forcing functions to
system variable responses.
2. Model Calibration: The first stage testing or tuning of a
model to a set of field data, preferably a set of field
data not used in the original model construction; such
tuning to include a consistent and rational set of
theoretically defensible parameters and inputs.
3. Model Verification: Subsequent testing of a calibrated
model to additional field data preferably under different
external conditions to further examine model validity.
The calibrated model, it should be noted, is not simply a
curve-fitting exercise, but should reflect wherever possible
more fundamental theoretical constructs and parameters. Thus,
models that have widely varying coefficients (i.e., deoxygena-
tion coefficients) to merely "fit" the observed data are not
considered calibrated models.
The verified model is then often used for forecasts of
expected water quality under a variety of potential scenarios.
However, it is apparently rare that, following a forecast and a
subsequent implementation of an environmental control program,
an analysis is made of the actual ability of the model to
predict water quality responses. This can be termed a "post-
audit" of the model, as shown in Figure 3. Somehow it seems
that once a facility has been constructed, the federal and state
agencies, municipalities, and industries are somewhat reluctant
to return to the scene of a water quality problem to monitor the
response of the water body. A fourth step in deter-
mining model credibility is therefore suggested as follows:
4. Model Post-Audit: A subsequent examination and verifica-
tion of model predictive performance following implementa-
tion of an environmental control program.
Need for Measures of Verification
Increase in Model Complexity. The most obvious need for
some measures of model verification is the fact that water
quality models have increased greatly in complexity. Figures
4-7 illustrate this progression. From the relatively simple
two-system linear models of biochemical oxygen demand and dissolved
oxygen of the first forty years of model development to the new
complex non-linear interactive eutrophication and toxic sub-
stances models, the ability to describe model performance has
become increasingly difficult. The number of state variables in
some models has increased dramatically. It is not unusual today
FIGURE 3. AUDITING AND POST-AUDITING OF WATER QUALITY MODELS
(VERIFIED MODEL AND PROJECTED ENVIRONMENTAL CONTROL PROGRAM YIELD
FORECASTED WATER QUALITY; ACTUAL ENVIRONMENTAL CONTROL PROGRAM AND
ACTUAL WATER QUALITY ARE COMPARED TO JUDGE MODEL ADEQUACY IN
FORECASTING. AUDIT: TO EXAMINE WITH INTENT TO VERIFY. POST-AUDIT:
SUBSEQUENT EXAMINATION AND VERIFICATION.)
to construct models with up to 20 or more state variables.
Furthermore, as illustrated in Figures 4-7, the physical dimen-
sionality now encompasses the range from the more traditional
one-dimensional streams to fully three-dimensional estuaries,
bays and lakes.
As the number of state variables and the physical dimensional-
ity have increased, the overall ability of the analyst to compre-
hend model output has decreased. This is simply due to the
overall size of the model. For example, if a "compartment" is
considered as a state variable, i = 1,...,m, positioned at some
spatial location j = 1,...,n, then the total number of compart-
ments to be solved for a fully interactive model is m times n.
Figure 8 shows the growth of the number of model compartments
since the earliest work on the two state variable problem of
BOD and DO. The almost explosive growth in the number of model
compartments, coincident with the passage of major water quality
legislation, is evident.
Increase in Complexity of Questions. The second major
reason for some quantitative measures of verification is the
fact that the level of questions in water quality has increased
in complexity. Many of the water quality issues today extend
well beyond the traditional problem of raw or inadequately
treated sewage. In that traditional framework, it generally was
clear that some treatment of municipal sewage would probably
improve water quality, specifically dissolved oxygen. However,
some of the water quality questions today may involve such
complex interactions that it is not clear that certain environ-
mental controls will in fact produce the classical result. The
Potomac estuary eutrophication problem is a case in point. It
is not clear that nitrogen removal at the Washington, D.C. Blue
Plains plant actually will result in any reduction in the phyto-
plankton population that could not be achieved solely by phos-
phorus control. Similarly, it is not entirely clear that ex-
tensive dredging of PCB deposits in the Upper Hudson will result
in a reduction of the PCB body burden of the striped bass in the
Lower Hudson estuary to levels below the FDA requirement.
The complexity of the problem then leads to the very real
possibility that environmental control measures may be called
for by model predictive analyses when in fact the implementation
of such controls may produce little or no response in water
quality. The economic, political, and social consequences of
"wrong" answers therefore become more acute in today's problem
setting. Some quantifiable measure of model performance in im-
proving understanding or predictive performance would seem,
therefore, to be of considerable importance.
FIGURE 4. WATER QUALITY MODELS, 1925-1965: TWO LINEAR SYSTEMS (BOD AND DO,
WITH WASTE INPUT, BENTHAL DEMAND, PHOTOSYNTHESIS AND RESPIRATION);
ONE-DIMENSIONAL RIVERS AND ESTUARIES

FIGURE 5. WATER QUALITY MODELS, 1965-1970: SIX LINEAR SYSTEMS (BOD, DO,
ORGANIC NITROGEN, AMMONIA NITROGEN, NITRITE NITROGEN, NITRATE NITROGEN);
ONE- AND TWO-DIMENSIONAL WATER BODIES
FIGURE 6. WATER QUALITY MODELS, 1970-1975: NON-LINEAR INTERACTIVE SYSTEMS
(PHYTOPLANKTON CHLOROPHYLL, ZOOPLANKTON CARBON, PHOSPHORUS CYCLE,
NITROGEN CYCLE, TEMPERATURE); ONE- AND TWO-DIMENSIONAL WATER BODIES
FIGURE 7. WATER QUALITY MODELS, 1975-?: MANY INTERACTIVE SYSTEMS (BIOMASS
AND TOXICANT IN TOP CARNIVORE, ZOOPLANKTON, AND PHYTOPLANKTON; NUTRIENT
SYSTEMS; DISSOLVED AND PARTICULATE TOXICANT IN WATER AND SEDIMENT);
ONE-, TWO-, AND THREE-DIMENSIONAL WATER BODIES
Some Verification Measures
Qualitative Measures
Probably the most direct and easily understood measure of
model performance is to present qualitative comparisons of ob-
served data and computed values. This is most often done in
the form of overplotting data and theory or tabulating the
comparison between the two and then drawing qualitative judg-
mental conclusions about the adequacy of the model and its
suitability for projection purposes. A plot of data versus
theory can be a most graphical measure of model credibility -
easily understood and clearly visual. But for some problems,
such a simple qualitative measure is not possible or simply not
adequate. This is particularly so for time variable models of
several state variables and multi-dimensional systems. For
models of this type, as well as the simpler model framework,
some statistical comparisons may provide further understanding
of model credibility.
Statistical Comparisons
A variety of simple statistical comparisons may be appro-
priate to quantify model verification status. Such measures
would be intended to supplement the qualitative comparisons.
Examples of statistical analyses between observed and computed
values are:
1) Regression analyses
2) Relative error
3) Comparison of means
4) Root mean square error
1) Regression Analyses
A perspective on the adequacy of a model can be obtained
by regressing the calculated values against the observed values.
Therefore, let the testing equation be

    x = α + βc + ε     (1)

where α and β are the true intercept and slope, respectively,
between the calculated values, c, and the observed values, x,
and ε is the error in x. The regression model of equation (1)
assumes, of course, that c, the calculated value from the water
quality model, is known with certainty, which is not the actual
case. With equation (1), standard linear regression statistics
can be computed, including
a) The square of the correlation coefficient, r², (the
percent of variance accounted for) between calculated and
observed values
b) The standard error of estimate, representing the residual
error between model and data
c) The slope estimate, b, of β and the intercept estimate, a, of α
d) Tests of significance on the slope and intercept. The
null hypotheses on the slope and intercept are given
by β = 1 and α = 0. Therefore, the test statistics

    (b - 1)/s_b  and  a/s_a

are distributed as Student's t with n-2 degrees of
freedom. The variances of the slope and intercept,
s_b² and s_a², are computed according to standard formulae.
A two-tailed "t" test is conducted on b and a,
separately, with a 5% probability in each tail, i.e., a
critical value of t of about 2 provides the rejection
limit of the null hypothesis.
Regressing the calculated and observed values can result in
several situations. Figures 9(b) and (c) show that very good
correlation may be obtained but a constant fractional bias may
exist (b ≠ 1); also, Figure 9(a) indicates that poor correla-
tion may be obtained even with slope = 1 and intercept = 0.
Finally, Figure 9(d) indicates the case of good correlation but,
for an a > 0, a constant bias may exist. Evaluation of r², b and
a, together with the residual standard error of estimate, can
provide an additional level of insight into the comparison
between model and data.
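A minimal sketch of these regression measures is given below for a hypothetical
set of paired observed and calculated values; it computes r², the slope and
intercept estimates, and the t statistics against the null hypotheses β = 1
and α = 0.

    # Sketch of the regression measures described above: observed values are
    # regressed on calculated values and the slope and intercept are tested
    # against beta = 1 and alpha = 0.  The paired values below are hypothetical.
    import math

    calc = [2.1, 3.4, 4.8, 6.0, 7.2, 8.5, 9.1]     # model output, e.g. DO in mg/L
    obs  = [2.5, 3.1, 5.2, 6.4, 6.9, 8.9, 9.6]     # observed data at the same points

    n = len(calc)
    mc, mo = sum(calc) / n, sum(obs) / n
    sxx = sum((c - mc) ** 2 for c in calc)
    sxy = sum((c - mc) * (o - mo) for c, o in zip(calc, obs))
    syy = sum((o - mo) ** 2 for o in obs)

    b = sxy / sxx                                   # slope estimate of beta
    a = mo - b * mc                                 # intercept estimate of alpha
    r2 = sxy ** 2 / (sxx * syy)                     # fraction of variance accounted for

    resid_ss = sum((o - (a + b * c)) ** 2 for c, o in zip(calc, obs))
    se = math.sqrt(resid_ss / (n - 2))              # standard error of estimate
    sb = se / math.sqrt(sxx)                        # std. error of the slope
    sa = se * math.sqrt(sum(c * c for c in calc) / (n * sxx))   # std. error of the intercept

    t_slope = (b - 1.0) / sb                        # test of beta = 1
    t_intercept = a / sa                            # test of alpha = 0

    print(f"r^2 = {r2:.3f}, slope b = {b:.3f}, intercept a = {a:.3f}")
    print(f"t(slope vs 1) = {t_slope:.2f}, t(intercept vs 0) = {t_intercept:.2f}, "
          f"compare with the two-tailed 5% critical t for {n - 2} d.f. (about 2.6)")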
2) Relative Error
Another simple statistical comparison is given by the rela-
tive error, defined as

    e = |x̄ - c̄| / x̄     (2)

where x̄ is the mean of the observed values and c̄ is the mean of
the calculated values. Various aggregations of this error across
regions of the water body or over time can also be calculated,
and the cumulative frequency of error over space or time can be
computed. Estimates can then be made of the median relative error
as well as the 10% and 90% exceedance frequencies of error. The
difficulties with this statistic are its relatively poor behavior
at low values of x̄ and the fact that it does not recognize the
variability in the data. In addition, the statistic is poor when
x̄ > c̄ since under that condition the maximum relative error is
100%. As a result, the distribution of this error statistic is
most poorly behaved at the upper tail. Nevertheless, if the
median error is considered, this statistic is a readily under-
stood comparison and provides a gross measure of model adequacy.
It can also be especially useful in comparisons between models.
3) Comparison of Means
A third measure is to conduct a simple test of the differ-
ence between the observed mean and the computed mean. Letting
d̄ = x̄ - c̄, the test statistic, distributed as a Student's "t"
probability density function, is given by

    t = (d̄ - δ) / s_d     (3)

where δ is the true difference between model and data and s_d is
the standard deviation of the difference, given by a pooled
variance of observed and model variability. If these latter
quantities are assumed equal, then

    s_d = √2 · s_x̄     (4)

where s_x̄ is the standard error of estimate of the observed data
and is given by

    s_x̄ = s_x / √n     (5)

with s_x the standard deviation of the observed data and n the
number of observations.
4) Root Mean Square Error
Finally, a measure of the error between the model and the
observed data is also given by the root mean square (rms) error:

    rms error = [ (1/n) Σ (x_i - c_i)² ]^(1/2)     (6)

As before, the rms error can be computed across a spatial pro-
file or over time at a single location. The rms error is sta-
tistically well-behaved and provides a direct measure of model
error. If expressed as a ratio to the mean value (across a profile
or over time), it represents a second type of relative error.
The disadvantage of the rms error is that it does not readily
lend itself to pooling across variables to assess overall model
credibility.
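The remaining measures lend themselves to an equally compact calculation. The
sketch below, for the same kind of hypothetical paired values, computes the
relative error between means, the difference-of-means t statistic under the
equal-variability assumption noted above, and the rms error and its ratio to
the observed mean.

    # Sketch of the remaining verification measures for one profile of paired
    # observed and calculated values (hypothetical numbers).
    import math

    obs  = [2.5, 3.1, 5.2, 6.4, 6.9, 8.9, 9.6]
    calc = [2.1, 3.4, 4.8, 6.0, 7.2, 8.5, 9.1]
    n = len(obs)

    xbar = sum(obs) / n
    cbar = sum(calc) / n

    # (2) relative error between observed and calculated means
    rel_error = abs(xbar - cbar) / xbar

    # (3)-(5) difference of means tested as Student's t; observed and model
    # variability are assumed equal, as in the text
    s_obs = math.sqrt(sum((x - xbar) ** 2 for x in obs) / (n - 1))
    s_xbar = s_obs / math.sqrt(n)                  # standard error of the observed mean
    s_d = math.sqrt(2.0) * s_xbar                  # pooled std. deviation of the difference
    t_stat = (xbar - cbar) / s_d                   # null hypothesis: true difference = 0

    # (6) root mean square error across the profile
    rms = math.sqrt(sum((x - c) ** 2 for x, c in zip(obs, calc)) / n)

    print(f"relative error = {100 * rel_error:.1f}%")
    print(f"t statistic for difference of means = {t_stat:.2f}")
    print(f"rms error = {rms:.2f} (as a ratio to the observed mean: {rms / xbar:.2f})")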
Each of the above measures displays model credibility from
different statistical viewpoints. Some are apparently useful
FIGURE 8. INCREASE OF NUMBER OF MODEL COMPARTMENTS WITH TIME (FROM
ADVECTION-LINEAR TO NON-LINEAR INTERACTION MODELS, APPROXIMATELY
1920-1980; THE FIRST MAJOR FEDERAL WATER POLLUTION ACT, THE 1972 ACT
AND 1977 AMENDMENTS, AND THE TOXIC SUBSTANCES ACT ARE MARKED)
FIGURE 9. POSSIBLE CASES IN REGRESSION BETWEEN CALCULATED AND OBSERVED
VALUES (FOUR PANELS, (a)-(d), OF OBSERVED VERSUS CALCULATED VALUES
ILLUSTRATING COMBINATIONS OF GOOD AND POOR CORRELATION WITH AND WITHOUT
BIAS IN SLOPE OR INTERCEPT)
for diagnostic purposes while others appear to be directly of
value in succinctly describing model verification status.
Some Examples of Quantitative Verification Analyses
Dissolved Oxygen Models
In order to illustrate the present state of the art of model
calibration/verification, a brief review was made of nineteen
models of dissolved oxygen. This water quality variable was
chosen since DO models have been the most extensively used and
over the longest time period. The engineering reports for the
fifteen water bodies were examined and relative errors, rms
errors, and regression analyses were evaluated. All of the water
bodies were analyzed by Hydroscience, Inc. It was assumed that
the personnel engaged in the modeling analyses carried out the
calibration/verification steps consistent with the definitions
given above and not just to "curve fit" the model to the data.
The models included several small streams in New Jersey
(less than 10 cfs), larger river systems such as the Ohio River
and the Upper Mississippi River, bays and estuaries and a large
model of the entire New York Harbor complex. A listing is shown
in Table 1.
The distribution of the median relative error for these
models is shown in Figure 10. For each water body, the error
represents the median relative error where 50% of the stations
(or times) had errors less than the values shown. Across all
models, one-half of the models had median relative errors great-
er than 10% and one half of the models had median relative
errors less than 10%.
As a crude measure, therefore, of the present state of the
art of DO models calibration/verification, one might suggest an
overall median relative error of 10%. It should, of course, be
noted that this is not the error of actual prediction but merely
the error representative of a present level of understanding of
observed behavior of dissolved oxygen. The degree to which the
results shown in Figure 10 are representative of all DO models is
not known. More detailed analyses of a larger sample would be
necessary.
Lake Ontario Eutrophication Models
A variety of models have been constructed of the eutro-
phication of Lake Ontario at several time and space scales and
with different levels of kinetic detail (Thomann et al., 1979a;
Thomann and Segna, 1979b). Extensive application of the
above quantitative measures of calibration/verification was made
for a three-dimensional model of the Lake for one year and for a
two-segment vertical model over a ten-year period.
FIGURE 10. SOME RELATIVE ERRORS OF DISSOLVED OXYGEN MODELS (MEDIAN
RELATIVE ERROR, PERCENT, BY WATER BODY; FIFTY PERCENT OF MODELS HAD
MEDIAN RELATIVE ERROR IN D.O. OF ≤ 10%)
TABLE 1
WATER BODIES EXAMINED FOR DISSOLVED OXYGEN VERIFICATION STATISTICS

Location                                Remarks
1.  New York Harbor (Hudson,            425-segment, 3-dimensional model
    Raritan, Passaic, and
    Hackensack Rivers)
2.  San Joaquin Delta, Calif.           DO model part of eutrophication model
3.  Wicomico Estuary, Md.               A tidal tributary of Chesapeake Bay
4.  Black River, N.Y.                   A tributary of Lake Ontario
5.  Jackson R., Va.
6.  Small N.J. Streams                  Tributaries of Delaware River and Bay
    6a. Pennsauken Cr.
    6b. Big Timber Cr.
    6c. Grt. Egg Harbor R.
7.  Mohawk River, N.Y.                  In vicinity of Utica, N.Y.
8.  Manhasset Bay, N.Y.                 Bay of Long Island Sound
9.  Delaware R. West Br., N.Y.
10. Savannah Estuary, Ga., S.C.
11. Wallkill R., N.J.
12. Hackensack R., N.J.
13. Ohio R., Ohio                       In vicinity of Cinn., Ohio
14. Lake Erie Hypolimnion               DO model part of eutrophication model - time variable
15. Upper Miss. R., Minn.               DO model part of eutrophication model - time variable
Figure 11 shows the median relative error across all
variables at three levels of spatial aggregation - 67 segments,
eight regions, and whole lake two layer scale. Five state
variables were included in the pooled error (chlorophyll, total
phosphorus, dissolved orthophosphorus, ammonia, and nitrate).
The median relative error over the year was the highest at the
smaller spatial scale (44%) and lowest at the whole lake scale
(17%). This indicated that the model did not capture more local
spatial phenomena as well as it reproduced overall lake behavior.
Figures 12-14 show the verification statistics from the
analysis of 10 years of data on Lake Ontario using the two layer
model. Two kinetic regimes are shown. Lake 1 kinetics are
fairly standard and Lake 1-A kinetics included a more complex
phytoplankton compartment (diatoms and non-diatoms), silica
limitation and other kinetic changes in nutrient recycle. Figure
12 shows an example of the chlorophyll verification results for
both regression analyses and relative error distribution. The
regression results indicated some improvement in slope with the
increased kinetic complexity but no improvement in the correla-
tion or intercept. The median relative error for chlorophyll
decreased from 42% to 30% with the inclusion of the more complex
kinetics. Figure 13 shows the results of the Student's t-test
comparing observed and computed monthly means over the six state
variables. For the indicated standard errors, the Lake 1-A
kinetics gave a verification score of 70%, i.e., 70% of the
variable-months where a comparison could be made showed no
statistically significant difference between observed and com-
puted means. If the standard errors are taken at one-half the
values indicated, the score drops to 40%. Figure 14 shows the
median relative errors for each variable and for all variables.
The latter test indicated an overall relative error for the ten
years of analysis and all variables of 22-32%.
A Suggestion
On the basis of the above concepts and illustrations to-
gether with the apparent growing need to be more definitive in
assessing model credibility, it is suggested that quantitative
measures of calibration and verification of models be an integral
part of modeling whenever possible. This includes a pressing
need to conduct post-auditing studies of model projections and
resulting water quality.
The suggestion for quantitative measures of water quality
model verification is aimed at responding to the many questions
often raised at various stages in the decision making process.
There are, however, advantages and disadvantages to quantitative
measures of model verification. Some of the disadvantages are:
FIGURE 11. LAKE ONTARIO MEDIAN RELATIVE ERROR AT THREE SPATIAL SCALES:
(a) LAKE 3 SCALE, 67 SEGMENTS, 1972 AVERAGE FOR ALL VARIABLES = 44%;
(b) EIGHT REGIONS SCALE, 1972 AVERAGE FOR ALL VARIABLES = 35%;
(c) TWO LAYER SCALE, 1972 AVERAGE FOR ALL VARIABLES = 17%
(THOMANN ET AL., 1979a)
FIGURE 12. LAKE ONTARIO CHLOROPHYLL VERIFICATION, TEN YEAR ANALYSIS:
OBSERVED VS. CALCULATED CHLOROPHYLL 'a' IN SEGMENT 1 AT 0-17 METERS FOR
LAKE 1A KINETICS (a = 2.23, b = 0.47, r² = 0.28) AND LAKE 1 KINETICS
(a = 2.6, b = 0.279, r² = 0.281), WITH CUMULATIVE RELATIVE ERROR
DISTRIBUTIONS (THOMANN AND SEGNA, 1979b)
FIGURE 13. LAKE ONTARIO VERIFICATION SCORES, ALL VARIABLES, 1967-1976,
FOR LAKE 1 AND LAKE 1-A KINETICS, EVALUATED AT 1/2, 1, AND 1 1/2 TIMES
THE ESTIMATED STANDARD ERRORS OF THE MONTHLY MEANS
(THOMANN AND SEGNA, 1979b)
FIGURE 14. LAKE ONTARIO MEDIAN RELATIVE ERRORS FOR EACH VARIABLE AND FOR
ALL VARIABLES, TEN YEAR ANALYSIS
1) An urge would be created to "curve fit" the model to data
to improve verification statistics
2) Not all of the credibility of a model is subsumed in
verification statistics
3) Good verification statistics do not necessarily imply
the ability to accurately predict future water quality
4) Single measures of verification may be grasped at too
readily, and engineering judgment as a measure of model
credibility may degenerate into "What's your median
relative error?"
The advantages are:
1) Some measures, albeit imperfect ones, would be avail-
able for decision makers to assess model credibility
and status
2) A basis would be provided for comparison of models
3) Some estimate could be made of changes in the state of
the art of model performance
4) Modelers would be stimulated to question their model
output with quantitative measures
5) A diagnostic tool would be available to determine
relative improvement of a given model under more
complex frameworks.
A quantitative measure of model performance, therefore,
may be a mixed blessing. At the very least, the time appears
appropriate to address the issue of the need and value of such
measures to assess model status. It is through such discussions
that perhaps some consensus can be reached on the advisability
of such measures or on possible alternative means of describing
the validity of water quality models.
Acknowledgments
Part of the work reported on herein was supported by a grant
from the Environmental Protection Agency, Large Lakes Research
Station, Grosse Ile, Michigan, to Manhattan College. Additional
support was also received from Hydroscience, Inc. through a con-
sulting agreement. Grateful appreciation is expressed to both
parties for this support. Special thanks are due also to Mary P.
Thomann and Robert J. Thomann for their diligent computations of
the DO error statistics.
REFERENCES
1. Crohurst, H.R., 1933. A study of the pollution and natural
purification of the Ohio River, IV. A resurvey of the Ohio
River between Cinn., Ohio and Louisville, Ky., including
a discussion of the effects of canalization and changes in
sanitary conditions since 1914-1916. U.S. Pub. Health
Service, Pub. Health Bull. No. 204, 111 pp.
2. Hydroscience, Inc., 1968. Water quality analysis for the
Markland Pool of the Ohio River. Malcolm Pirnie Engrs.
White Plains, N.Y. 121 pp. and Figs.
3. Hydroscience, Inc., 1970. Water quality analysis of the
Savannah River Estuary. American Cyanamid Co., Wayne, N.J.,
75 pp. and Table and Figs.
4. Hydroscience, Inc., 1973. Water quality analysis of Man-
hasset Bay. W.F. Cosulich Assoc., Plainview, N.J., 53 pp.
and Figs, and Append.
5. Hydroscience, Inc., 1973. Water quality analysis for the
Wallkill River, Sussex County, New Jersey. Sussex Co. Mun.
Utilities Auth., Newton, N.J. 81 pp. and Tables and Figs.
6. Hydroscience, Inc., 1974. Water quality analysis of the
Hackensack River. Clinton Bogert Assoc., Fort Lee, N.J.,
77 pp. and Figs, and Append.
7. Hydroscience, Inc., 1974. Water pollution investigations:
Black River of New York. EPA-905/9-74-009 U.S. EPA, Regs.
II and V, N.Y. and Chicago, 95 pp.
8. Hydroscience, Inc., 1974. Development and application of a
steady-state eutrophication model of the Sacramento - San
Joaquin Delta. Bay Valley Consultants, San Fran., Calif.
228 pp. and Append.
9. Hydroscience, Inc., 1975. Water quality analysis of the
West Branch of the Delaware River, N.Y., DEC, Albany, N.Y.,
102 pp. and Append.
10. Hydroscience, Inc., 1975. Water quality analysis of
Pennsauken Creek, Cooper River, Big Timber Creek and Upper
Great Egg Harbor River, N.J. DEP, Trenton, N.J., 163 pp.
and Append.
11. Hydroscience, Inc., 1976. Water quality analysis of the
Jackson River, Vol. I, Westvaco Corp., Covington, Va., 87
pp. and Append.
12. Hydroscience, Inc., 1978. NYC 208 Task Report, Seasonal
steady-state modeling (PCP Task 314). Hazen and Sawyer
Engrs. N.Y., N.Y., 667 pp.
13. Hydroscience, Inc., 1978. Upper Mississippi River, 208
Grant, water quality modeling study, preliminary report.
Met. Waste Control Comm., St. Paul, Minn.
14. Streeter, H.W., and Phelps, E.B., 1925. A study of the
pollution and natural purification of the Ohio River, III.
Factors concerned in the phenomena of oxidation and
reaeration. U.S. Pub. Health Serv., Pub. Health Bull.
No. 146, 75 pp.
15. Thomann, R.V., R.W. Winfield and J.J. Segna, 1979a.
Verification analysis of the Lake Ontario and Rochester embay-
ment three-dimensional eutrophication model. EPA report
in final manuscript.
16. Thomann, R.V., and J.J. Segna, 1979 b. Dynamic phyto-
plankton-phosphorus model of Lake Ontario. To be presented
at Phosphorus Management Conf., Rochester, N.Y.
USE OF MODELS AS PROJECTION TOOLS
by
Robert P. Shubinski
When all is said and done, the raison d'etre for mathemati-
cal models of water quality is their use as projection tools.
Our concern with model verification, i.e., measuring a model's
ability to simulate circumstances measured in the past, is to
develop confidence in the model's ability to simulate future
conditions.
This paper is an attempt to provide some definitions and
guidelines that are helpful in determining the usefulness of a
model for a specific goal. Analysts are often surprised when
a model fails to meet the needs of a project. Frequently this
could have been avoided by a careful preview to select the
right model for the task at hand.
Conclusions
Personal involvement with model development and application
studies over a number of years has led me to three general
conclusions:
Simplest Model. For any study the simplest feasible
model should be used. The key word here is
"feasible". Some problems require a complex, sophis-
ticated model, but others do not. A good example is
in the field of urban stormwater where the models
STORM and SWMM are both popular. If the objective of
the study is a planning level analysis of the general
runoff characteristics of the system, the model STORM
is a feasible model. If preliminary design of a pipe
drainage system is involved, SWMM is a feasible model.
Vice President, Water Resources Engineers, an operating unit
of Camp Dresser & McKee, Inc., Springfield, Virginia.
Unfortunately, we frequently see this order reversed,
with the SWMM model being used with a degree of detail
that is unnecessary. The result is frustration and
unnecessary expense, and the analyst can come to
believe that he has more information about his problem
than he possesses in reality. Use of a simple model
where it is feasible often keeps the analyst aware of
the degree to which his judgment is crucial to the
solution. It diverts him away from the trap of believing
that the model, because it is complex and wonderful, will
solve his problems for him.
Basic Relationships Model. To the extent possible, a
model should be calibrated on basic physical, chemical
and biological parameters. Too many models are little
more than elaborate curve-fitting techniques. They
permit us to characterize the past behavior of a
system with a good deal of accuracy, but they do not
inspire confidence in our ability to use the model as
a projection tool. Often it is difficult to distin-
guish between basic parameters and curve-fitting
parameters, since many of our most widely used rela-
tionships have an empirical basis. However, a model
that has been calibrated by adjusting Manning's
coefficient or dispersion coefficients or some other
appropriate physical parameter is always superior for pro-
jection purposes to one in which polynomial coeffic-
ients or other nonphysical descriptors have been
fitted.
Importance of Flow Modeling. Good water quality
modeling requires a good solution of the flow problem.
Most water quality models are driven by flow, which is
the principal force in moving and mixing water quality
constituents. It would seem that adequately modeling
the flow field would be a prerequisite to any water
quality modeling, yet this is often not the case. A
number of complex water quality models that describe
the mixing of constituents and the kinetics of their
interaction with each other and the environment have
been developed on the basis of very weak inferences of
the flow field. The results of such modeling are un-
certain and difficult to support.
Classes of Models
It is useful to categorize models for projection purposes
into two classes. We shall call these classes "closed end
models" and "open end models". From a mathematical standpoint,
the former would include those problems described by elliptic
63
-------
partial differential equations, while the latter would include
problems described by parabolic or hyperbolic partial differen-
tial equations. For our purposes here a more physical definition
is useful:
Closed End Models. These are models of systems whose
response is primarily a function of the driving
variables. They are essentially boundary value prob-
lems in which flow and mass continuity generally are
satisfied by identity. The model is used to show how
the variables within the region respond to the im-
position of certain boundary values.
An example of a closed end model is the dynamic flow
and water quality problem in the bay or estuary.
Water movement is controlled by the tidal boundary
condition that is a primary driver and by the upstream
inputs to the system. Water quality constituents are
input to the system, transported about by the flow
field and its corollary dispersion, and leave the
system at appropriate points. If this type of model
is run for a very long period of time, it reaches a
state of dynamic equilibrium in which the flow and
mass inputs exactly match the flow and mass output of
the system. The model is generally stable provided
certain numerical inaccuracies can be disposed of.
Errors in the boundary conditions show up as stable
errors in the solution.
Open End Models. This class of models is represented
by the initial value problem, particularly those in
which flow and mass continuity are not satisfied by
identity. The end result is a solution highly
dependent upon parameters in the model and certain
errors are cumulative, not self-correcting.
Some ground water models fit the open ended category.
A set of initial conditions is required to start the
model, and inputs and outputs must be specified by
the user. Lengthy operation of such a model may
result in depleting the aquifer or creating a flood.
Similarly, some ecosystem models are open ended and
will produce an infinite amount of biomass if operated
for a long enough period of time. In other words,
models of this type are not self equilibrating. They
are useful for projecting future conditions to a
point, but must be used with some care.
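A minimal sketch of this open end behavior, using hypothetical numbers rather than any system discussed here, is an aquifer storage account in which a small imbalance between assumed recharge and specified pumping simply accumulates because nothing in the model corrects it:

    # Hypothetical open end (initial value) model: storage drifts steadily.
    storage = 5.0e8        # initial storage, m^3 (an initial condition the user must supply)
    recharge = 1.00e6      # assumed recharge per time step, m^3
    pumping = 1.02e6       # specified withdrawal per time step, m^3

    for step in range(1, 501):
        storage += recharge - pumping   # continuity is not enforced by the model itself
        if storage <= 0.0:
            print(f"Aquifer depleted at step {step}")
            break
    else:
        print(f"Storage after 500 steps: {storage:.3e} m^3 (steady drift, no equilibrium)")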
64
-------
Purpose of Models
There are several rather distinct reasons why we want to
use models and because of these different purposes, different
model systems are required.
Select or Screen Alternatives. Models are used in
many planning programs as a tool in screening alterna-
tives. In this type of use, we are more interested in
the relative impact of alternatives than in their ab-
solute impact, although this is not always strictly the case.
By and large, we use the models to decide whether or
not to build a facility, the type of facility to be
built, and the size it must be. The same process is
carried out when dealing with nonstructural alterna-
tives, but it is most clearly understood when discus-
sing capital improvement projects. This is the type
of problem for which long range planning models,
followed by more detailed design models are often
appropriate.
System Operation. Sometimes we use a model to deter-
mine what will happen as a given system is operated.
Here we are more concerned with quantitative results,
particularly if the model is to be used to provide
direct guidance to system operators. Long term simu-
lation is the rule rather than the exception, and the
models used may be updated from time to time as the
operation of the system is observed.
Learning Tool. One of the unexpected benefits of
modeling is its ability to teach its users how the
system operates. A model cannot be used to develop
"new science". Self teaching models are still in a
very primitive stage at this point. However, most
modelers find that repeated usage of the tools
develops in them a better understanding of the system,
within the constraints imposed by the assumptions in
the model.
Classes of Error
Most modelers strive to determine how good their models are.
That is, how well do they simulate the prototype system, and
what are the sources of error that must be taken into account in
evaluating model results. Generally there are four classes of
error:
Lack of Knowledge. A good model requires that its
originator understand the cause and effect relation-
ships operative in his system. Unfortunately, there
are physical, chemical, and biological processes at
65
-------
work in water resource systems which are not so clearly
understood. If they are understood in a qualitative
sense, they may not be quantifiable. Hence, a major
source of error in models is caused by incomplete re-
lationships or inaccurate coefficients.
System Changes. The idea of using a model as a pro-
jection tool implies its applicability to situations
which do not yet exist. This may include changes in
population, land use, waste load characteristics, and
a host of other parameters. Naturally, the errors in-
herent in our inability to project population ade-
quately will show up in our inability to project
wasteloads and other consequences of the increase in
population. Thus we must be able to project not only
the results inside the model, but the initial con-
ditions, boundary conditions, and other driving forces
which determine the predictions of the model.
Uncertainties of Nature. Many of our problems deal
with natural forces in hydrology, chemistry, and
biology that are uncertain as to the magnitude, timing
or sequence of their occurrence. Many aspects of the
hydrology, for example, are best understood in statis-
tical terms, and the degree to which our hydrologic
inputs do not reflect the "correct" sequence of events
will be reflected in the model output.
Numerical Errors. Most of our models rely on numeri-
cal analysis techniques of one type or another to pro-
duce their solution. All numerical analysis tech-
niques by their very nature contain error terms and
many have in them inherent instabilities. The model
user must be cognizant of these errors and it is use-
ful to be able to distinguish them from errors of the
types cited above.
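One familiar illustration, offered as a hypothetical sketch rather than a workshop example, is the stability limit of an explicit upwind advection scheme: when the Courant number u*dt/dx exceeds one, the growth of error is purely numerical and has nothing to do with the physics being modeled.

    # Explicit upwind advection of a concentration pulse (hypothetical values).
    def upwind_step(c, u, dt, dx):
        """One explicit upwind update of concentrations c for velocity u > 0."""
        courant = u * dt / dx
        return [c[0]] + [c[i] - courant * (c[i] - c[i - 1]) for i in range(1, len(c))]

    c0 = [0.0] * 50
    c0[10] = 1.0                      # a unit concentration pulse
    u, dx = 0.5, 100.0                # m/s and m

    for dt, label in [(150.0, "stable, Courant = 0.75"), (250.0, "unstable, Courant = 1.25")]:
        state = list(c0)
        for _ in range(40):
            state = upwind_step(state, u, dt, dx)
        print(f"{label}: max |c| after 40 steps = {max(abs(x) for x in state):.3f}")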
Examples and Comments
Perhaps it will be useful to take the comments so far—
classes of models, purposes of models, classes of error—and
discuss how these might be applicable to certain types of
models. The list here is not intended to be inclusive but re-
flects the author's experience.
Groundwater Models. This type of model is a good
example of the open ended model. The physical system
is fairly well understood although in some regions the
characteristics of the aquifer itself are hard to de-
fine because they cannot be seen. Groundwater responds
slowly both to pumping and pollutant inputs and model-
ing is oriented toward long term responses. Ground-
66
-------
water is modeled in an effort to solve problems of
water supply and pollution control, especially those
associated with TDS or salinity.
Streamflow Models. Streamflow models have received
a great deal of attention from analysts possibly be-
cause they are more tractable than many other problems.
Our interest is usually short term since there is
little effect from a long term sequence in many stream
systems. Streamflow models tend to fit the category
of closed end models. One danger in streamflow model-
ing has been that water quality standards have been
over-emphasized in such work. The models themselves
have been simple, and an undue reliance has been placed
upon simple application of the model to achieve a pre-
set standard.
River Basin Operation Models. These models are another
example of the closed end model although they are
normally operated on a long term basis. They are used
at both ends of the hydrologic spectrum for water
supply problems, both quantity and quality, and for
flood control. The most serious errors with these
models normally are associated with uncertainties in
the inputs.
Lake Quality Modeling. These models clearly fall in
the open ended category. Generally they do not
satisfy mass continuity per se, and if operated for a
long period of time, can predict increases or decreases
in mass in the system which the analyst will recognize
as inappropriate.
Stormwater Models. This model class cuts across many
of the lines between categories which we have drawn
because stormwater models tend to be both open ended
and closed ended, depending on the circumstances.
Stormwater models are used to determine corrective
measures for combined sewer overflows and to determine
changes in the system which might occur due to land use
changes, development of the watershed, or other
policies of urbanization.
67
-------
COMMITTEE REPORTS
The response to the basic issues presented in the preceding
section was formulated by assigning workshop participants to
seven Topical Area committees:
1. Wasteload Generation
2. Transport
3. Salinity/Total Dissolved Solids
4. Dissolved Oxygen/Temperature
5. Bacteria/Virus
6. Eutrophication
7. Hazardous Substances
In order to focus the committee discussions on the basic
issues, the following questions were suggested:
Issue 1: Role of Models in Decision Making
What areas of agreement and/or conflict do you see in the
relationship between decision maker and the use of water
quality models in your topical area?
Do you think that the models have been too readily em-
braced by the management community or, conversely, that
model results in your area have been generally ignored?
What have been your experiences with the issue of model
credibility and the role of models in decision making?
Issue 2: Data Base
What is your assessment of the adequacy and reliability of
the data base for water quality models in your topical
area?
Can you identify any gaps or deficiencies in the data base
that you presently work with? Specifically, the status of
input load data, water quality and model parameters should
be addressed.
68
-------
Issue 3: Time and Space Scales; Kinetic Detail; Cost Effective-
ness
How do you go about choosing a modeling framework in your
topical area? What criteria do you use to determine time
and space scales and the level of kinetic detail?
What is your assessment of the present state-of-the-art of
validation of computer software in your topical area?
Do you think a "standard numerical solution" for several
computer based problems should be available so that
computer programs could be validated?
Issue 4: Parameter Estimation
How do you estimate the parameters in your models? What
procedure do you use? What criteria do you apply to
determine the credibility of the parameters?
What statistical approaches would you suggest be used in
parameter estimation?
Issue 5: Measures of Verification
What procedures do you follow to calibrate and verify
your models? What techniques do you use to judge the
credibility of your model?
Do you think that a set of statistical techniques should
be promulgated to quantitatively describe model credibility?
If so, what techniques would you recommend?
Issue 6: Use of Models as Projection Tools
What criteria do you use to select the projection conditions
for model simulations?
What is your assessment of the ability of your models to
describe the incremental water quality changes under
future design conditions?
How would you describe model credibility for systems where
data do not exist (i.e. new reservoirs)?
Summary Question
Overall, how would you describe the present verification
and credibility status of the models in your topical area?
69
-------
Following discussions of these questions, a brief summary
of the state-of-the-art of water quality modeling for each
topical area was presented at a plenary session. The committees
then continued deliberations individually to address the question
of recommendations. The following questions were suggested to
the committees to assist in the discussions or recommendations.
What recommendations would you make to improve the useful-
ness of models to decision makers?
What data gathering efforts are recommended to address
noted deficiencies?
Can? Should? guidelines be established for various
problem settings for cost effective modeling studies? Are
"standard solutions" required for model "validation?"
Do you have any recommendations on methods for verifying
models? For selecting appropriate parameters? For using
models in a projection mode?
In general, the draft committee reports submitted by the
committee chairmen at the close of the workshop or shortly there-
after were compiled by Hydroscience in a standard format and sent
out for review by committee members. Subsequent comments were
incorporated by Hydroscience and a final report resubmitted to
the committee chairmen for approval. The committee state-of-the-
art reports and recommendations are compiled in the following
pages.
70
-------
STATE-OF-THE-ART REPORT
of the
Wasteload Generation Committee
Anthony S. Donigian, Jr. - Chairman
Members
Douglas Ammon Wayne Huber
Eugene D. Driscoll Marshall E. Jennings
John E. Hesson Michael L. Terstriep
Issue 1: Role of Models in Decision Making
In terms of modeling, the committee redefined wasteload
generation to include the generation and delivery of nonpoint
source (NPS) pollutant loads from:
- urban (stormwater, combined sewer overflows (CSO))
- agriculture
- construction
- mining
- silviculture
- other areas (rural, forest, natural background).
In general, NPS models have been adequate for planning level
decisions to provide overall direction and guidance to future
analyses. They have been used primarily for assessment type
analyses, as opposed to evaluation of alternatives. Emphasis
has been on the water quantity portions of NPS models and the
results have been reasonable; the water quality portions are
often questionable (generally due to lack of appropriate data)
but adequate to determine whether or not a problem exists and
its general magnitude.
Just recently the need for models to evaluate the effects
of control measures, management decisions, and best management
practices (BMP's) has been recognized but adequacy of current
models is uncertain. Money will likely be spent in the next
few years on implementing BMP's although the ability of current
models to project BMP effects is uncertain, especially for the
water quality portion of NPS models.
71
-------
There have been mixed reactions related to model credibili-
ty. Models have been accepted in many cases where they have
been properly applied (to problems for which they are appropri-
ate) and some attempt at verification has been made. Models have
not been accepted in some cases but this is often where the model
has been mis-used or improperly applied.
There remain basic limitations in the ability of current NPS models
to accurately and reliably predict NPS loads in many situations.
Issue 2: Data Base
The available data base for modeling is generally inadequate
and often of questionable reliability. No general data base
exists that's appropriate for nonpoint sources. (This includes
STORET and the University of Florida (UF) Urban Data Base.) The
USGS fixed sampling stations are generally on relatively large
rivers that include effects of point sources, nonpoint sources
and instream effects. Much of the available information on
small watersheds (e.g. in STORET and the UF data base) is end-of-
pipe data that can't characterize the cause/effect relationships
important to NPS pollution. However, the data can often be used
in general assessments and overall planning decisions, especial-
ly when no other data exists.
Gaps/Deficiencies:
Extensive data surveys (collection programs) are needed on
relatively small watersheds with preferably uniform land use to
include:
- concurrent rainfall, runoff, and water quality data
- assessment of sources: rainfall quality, build-up/
accumulation of pollutants
- characterization of sediments, e.g. pollutant
strength, particle size, partition coefficients
- inventory/log of activities, e.g. tillage/fertilizer
practices in agricultural areas, construction or
silviculture practices, mining activities, etc.
The data surveys should be distributed across the U.S. to
represent soils, geographic .and climatic conditions, and should
last for an extended period (e.g. 3-5 years) depending on the
specific use of the data - calibration, verification, post-audit
analyses, long-term cause/effect definition. Additional docu-
mentation should exist to include maps, photos, drainage plans,
sampling/analysis procedures, quality control methods, watershed
characteristics, etc.
72
-------
Extensive data has been collected by the USDA (ARS and SCS)
on many agricultural areas across the country. An agricultural
data base, similar to the UF Urban Data Base, should be estab-
lished to make the data available and accessible to the people
who could use it, especially for modeling. (A new data collec-
tion program by USGS-BLM is underway on western watersheds being
strip-mined.)
Issue 3: Time and Space Scales; Kinetic Detail; Cost Effective-
ness
Different time and space scales are needed depending on the
problems being analyzed, the watershed behavior/response, and
the model being used. Generally, NPS models should be calibra-
ted on relatively small areas (and on short time intervals) so
that the measured output of the basin is due primarily to non-
point sources. Hourly rainfall data is commonly used, but
shorter time scales may be needed especially for calibration.
For long-term simulation runs hourly rainfall from the National
Weather Service (NWS) stations is often the only extended
record available.
Kinetics are generally ignored in many NPS models, since
the pollutants are assumed not to change or transform during the
short residence or transport time on the surface. However,
detailed agricultural runoff models that simulate soil processes
may include transformations of pesticides and nutrients in the
soil. The transformation interval does not usually need to be
as short as the simulation interval.
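A minimal sketch of why the two intervals can differ, assuming a constant first-order rate (the rate and mass below are hypothetical), is that the exact exponential update gives the same result whether it is applied hourly or once per day:

    import math

    k = 0.05          # first-order decay rate of a soil pesticide, 1/day (hypothetical)
    mass = 100.0      # kg on the field (hypothetical)

    hourly = mass
    for _ in range(24):
        hourly *= math.exp(-k / 24.0)   # 24 hourly updates
    daily = mass * math.exp(-k)         # one daily update

    print(f"After one day: hourly updates {hourly:.4f} kg, single daily update {daily:.4f} kg")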
The committee feels that validation of the software being
used, i.e., correctness of calculational procedures, conserva-
tion of mass, adequacy of kinematic routing solutions, etc., is
not often done but should be considered. For NPS process
models, standard solutions may not be available, so a high
quality data base should be developed as a comparison for solu-
tions from model output.
Standard model input and corresponding output - together
with program documentation - should be distributed with source
decks so that the user can verify that the model is calculating
correctly on his or her computer facility.
With respect to time/space scales, average annual loading
models may be used for gross preliminary assessments but detail-
ed analysis will usually require a process-oriented simulation
model.
73
-------
Issue 4: Parameter Estimation
Default parameters should not be used without a user being
aware of the assumptions inherent in the default values. Gen-
erally, the policy should be to justify the selection of all para-
meter values. The committee encourages the use of models with
physically-based parameters that can be measured from watershed
characteristics, hydrologic response and meteorologic conditions
as opposed to literature values. Parameter values from nearby
similar watersheds, including calibrated values, can be used.
However, many parameters may be based simply on calibration.
Credibility is judged by how well observed
and simulated values agree using a set of parameter values that
are physically reasonable and within accepted limits.
Statistical approaches are not generally used for NPS
models, except to estimate sediment-pollutant relationships, and
sometimes as guidance for parameter adjustments. Such statisti-
cal approaches are not widely used or accepted but might be
examined for possible use.
Issue 5: Measures of Verification
The calibration procedure involves developing the best first
estimate of parameters which are then adjusted as a result of
comparing simulated and observed values. A reasonable water
quantity (runoff) calibration is needed before attempting water
quality calibrations. For models that simulate pollutants re-
lated to sediment or solids, both runoff and sediment/solids
should be calibrated before proceeding to pollutant calibrations.
Split-sample calibration/verification is highly recommended.
However, in data-poor situations there is a real question as to
whether to calibrate on half the data and verify on the other
half, or obtain the best calibration on all the observed data.
In any case, credibility is based on the ability of a single set
of parameters to represent the entire range of observed data.
Overall model credibility can be enhanced if the model is applied
by independent users, in a variety of watersheds, and for a range
of events with different magnitudes. If a single parameter set
can reasonably represent a wide range of events, then this is a
form of verification.
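A minimal sketch of the split-sample idea, using an invented set of events and a deliberately simple one-parameter runoff coefficient model, is given below; the withheld events provide the verification test of the calibrated parameter:

    # Hypothetical (rainfall mm, runoff mm) event pairs.
    observed = [(25.0, 9.5), (40.0, 16.0), (12.0, 4.1), (55.0, 23.0),
                (30.0, 11.8), (18.0, 6.5), (47.0, 19.5), (8.0, 2.9)]

    def simulate(rain, coeff):
        return coeff * rain             # runoff coefficient model

    def rmse(events, coeff):
        errs = [simulate(p, coeff) - q for p, q in events]
        return (sum(e * e for e in errs) / len(errs)) ** 0.5

    calib, verif = observed[:4], observed[4:]

    # Calibrate by a coarse search over the runoff coefficient.
    best_rmse, best_coeff = min((rmse(calib, c / 100.0), c / 100.0) for c in range(10, 91))
    print(f"Calibrated coefficient: {best_coeff:.2f} (calibration RMSE {best_rmse:.2f} mm)")
    print(f"Verification RMSE on withheld events: {rmse(verif, best_coeff):.2f} mm")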
If calibration is performed on a subcatchment, and the model
parameters are extrapolated to the entire catchment for receiving
water simulation, some assessment of the reasonableness of the
74
-------
entire NPS load should be made. This may need to be done in the
receiving water portion.
Quantitative measures of verification are needed and model
reports should always include comparison of simulated and ob-
served data. This should be done for runoff volumes, pollutant
loads, hydrographs and pollutographs. Correlations of point-to-
point comparisons may not be valid due to time shifts. For NPS
pollution, mass loads are usually more appropriate for comparison
than concentrations. More work is needed to specify what
specific statistical measures should be used - and are relevant -
for NPS model verification.
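One such comparison might be computed as sketched below, where the event mass load is the sum of paired hydrograph and pollutograph ordinates multiplied by the time step (all ordinates are hypothetical):

    dt = 3600.0                                    # time step, s (hourly ordinates)
    q_obs = [0.5, 2.0, 4.5, 3.0, 1.2, 0.6]         # observed flow, m^3/s
    c_obs = [20.0, 85.0, 140.0, 90.0, 40.0, 25.0]  # observed concentration, mg/L
    q_sim = [0.4, 2.4, 4.0, 3.2, 1.5, 0.7]         # simulated flow, m^3/s
    c_sim = [25.0, 95.0, 120.0, 85.0, 45.0, 20.0]  # simulated concentration, mg/L

    def event_load_kg(q, c):
        """Event load: Q (m^3/s) * C (mg/L = g/m^3) * dt (s) summed, then g -> kg."""
        return sum(qi * ci * dt for qi, ci in zip(q, c)) / 1000.0

    load_obs, load_sim = event_load_kg(q_obs, c_obs), event_load_kg(q_sim, c_sim)
    print(f"Observed load {load_obs:.0f} kg, simulated {load_sim:.0f} kg, "
          f"error {100.0 * (load_sim - load_obs) / load_obs:+.1f}%")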
Issue 6: Use of Models as Projection Tools
Projection conditions are generally a function of the prob-
lems to be analyzed and the questions to be answered. The model
needs to be capable of representing the future conditions/alter-
natives to be evaluated, most likely through adjustment of
parameters.
In a planning context, continuous simulation may be used to
choose design conditions, and may involve the joint simulation
of NPS loads and receiving water impacts. In general, detailed
analyses of a runoff-quality problem should include planning
level simulations using long-term rainfall records. From the
resulting simulated runoff quality information, critical events
can be selected that provide a desired magnitude and/or frequency
of pollutant mass, concentration, duration, etc.
Current models are limited in their ability to represent
incremental loads resulting from a wide range of future condit-
ions (control measures, land use changes, management practices)
due to both limitations in the data for calibration and in the
model formulations. As additional data becomes available, form-
ulations can be improved. However, in many cases, current
models are the only feasible way of analyzing potential effects
of future conditions, i.e., models are often the only "game in
town."
In situations where no data exists, reasonable values for
loads can be obtained from extrapolation of parameters from other
areas. There is a real question as to whether simple or complex
models are most appropriate for this.
75
-------
RECOMMENDATIONS
Of the
Wasteload Generation Committee
Recommendations to improve use of modeling:
1. Identify the problem and the specific information to be
provided by the model.
2. Select the model most appropriate to the problem and
one that addresses the questions and concerns of the
decision maker; use existing models to the extent
possible.
3. Models are tools for aiding in decision making and
should be applied by personnel with appropriate back-
ground and skills.
4. During the course of model studies, close coordination
and communication should be maintained between the
modeler/analyst and the decision maker.
5. Model assumptions and limitations should be clearly
specified especially in relation to the conditions/
alternatives to be evaluated.
6. Emphasis should be placed on analysis and presentation
of model results, especially in terms easily under-
stood by the decision maker and relevant to his infor-
mation needs.
Data Needs and Recommendations:
1. A comprehensive, high-quality data base should be
established for the purposes of model testing and
improvement. This should include the extensive data
surveys (described in the second Issue paper) that
will meet and likely exceed the needs of current models
to help advance modeling state-of-the-art. Existing
data does not serve this purpose.
76
-------
2. The interagency agreements, such as the EPA-USGS
efforts in the National Urban Runoff Program should be
encouraged and expanded as a mechanism for establish-
ing such a data base. The data should be in a form
and framework where it will be readily available and
accessible to the profession.
3. Current practice in urban areas does not emphasize an
understanding of fundamental processes; thus, data
programs and associated research should be established
to correct this deficiency. Source control mechanisms
require this fundamental understanding.
Specific Areas for Further Investigation:
1. Erosion and sediment transport (delivery) processes.
2. Sediment-pollutant interactions.
3. Accumulation and washoff processes.
4. Subsurface movement and transport of soluble pollut-
ants, especially in rural/agricultural areas.
5. Snowmelt quality especially in urban areas.
6. Precipitation quality.
7. Continuous monitoring of in-stream quality in conjunc-
tion with washoff studies.
77
-------
STATE-OF-THE-ART REPORT
of the
Transport Systems Committee
Richard J. Callaway - Chairman
Members
John D. Ditmars John K. Robertson
Dennis Ford M. Llewellyn Thatcher
Gregory Han Arthur C. Tingle
Peter F. Lagasse R. G. Willey
John F. Paul P. Jonathan Young
Introduction
The Transport Systems Committee (TSC) was composed of in-
dividuals with interests in several fields—watersheds, lakes,
reservoirs, estuaries, rivers, coastal and, where applicable,
the interface between these systems. Recommendations of the
committee reported below are, for the most part, common to all
areas in order to make the report as general and compact as
possible. Readers will, of course, be aware that it is not
possible to address all these systems in any great detail in so
short a space and time frame.
The underlying philosophy of the TSC was that without a
realistic simulation of the physical processes acting to advect
and diffuse dissolved and particulate matter in whatever water
body, the water quality aspects of a modeling effort would be
suspect at best. This philosophy was also expressed during the
meeting by the other Committees and can best be expressed by
acknowledging that while the transport people (primarily hydro-
dynamists and/or physicists) can work on their own problems in-
dependent of water quality considerations, the reverse is not
always true. It is also acknowledged that many water-quality
modelers are quite competent transport modelers. Realities of
the present day funding situation suggest, however, that hydro-
dynamic modelers would do well to join efforts with biologists,
chemists, and engineers in focusing efforts toward evolution of
water quality models. Some would argue that multiple use water
78
-------
quality modeling has arrived at the consortium stage (others
will deeply regret this).
The TSC found useful the concept that transport model results
are simply input data to water quality models. This concept
provides a means of dealing with the two extreme generalizations
that "transport modelers are not satisfied unless all details of
the flow field are included in three-dimensions, with all of the
proper "bells and whistles" and that "only crude transport
models are necessary for water quality models, since large un-
certainties exist in the chemical and biological kinetics any-
way." Thus, the level of sophistication of a transport model
should be determined by the sensitivity of the water quality
model to transport input data and the degree of confidence re-
quired for those data.
In responding to the Issues, the Chairman has compiled the
written responses of the TSC members, his own notes, and the
more complete notes of Dr. P. J. Young. The TSC members re-
viewed and commented on the compilation. The final version in-
corporates their views, although in such a diverse group it
would be unrealistic to assume that this represents a consensus
report.
Issue 1: Role of Models in Decision Making
What areas of agreement and/or conflict do you see in the
relationship between decision maker and the use of water quality
models in your topical area?
It was agreed that the main obstacle in the relationship
between decision makers and modelers was lack of communication.
The decision maker is usually subject to time constraints, political
pressure, environmentalist lobbies, etc. It is doubtful that a
decision will be made purely on the outcome of a modeling effort,
nor is it clear that it should be, considering the forces acting.
Communication gaps occur from both "sides." The modeler some-
times oversells his product through an enthusiasm for his own
work; if the decision maker goes along with the modeling effort
on that basis and is burned, it may be a long time before he
accepts once again a modeling output without considerable skep-
ticism. The modeler, too, is vulnerable in that the decision
maker may misinterpret or misapply his results.
The solution to the communication problem lies in educating
the decision maker with regard to the constraints, assumptions,
reliability, and realism of modeling as only one tool available
to him. On the other hand, the modeler should be aware of the
policy involved and, where necessary, direct the model to a
specific application that the decision maker requires. The dual
79
-------
role implied above is not going to be worked out at a workshop be-
tween modelers and decision makers, but will require continuous
interaction between two very dissimilar groups.
Finally, the decision maker often seeks answers to ques-
tions for which no adequate model exists. Too often modelers
have applied inappropriate models (of transport mechanisms, at
least) in an effort to obtain quantitative results. Examples
abound.
Do you think that the models have been too readily
embraced by the management community or, conversely, that model
results in your area have been generally ignored?
The TSC members were unanimous in the view that inexperienced
management usually accepts modeling results. Indeed, such results
are too often accepted uncritically, with little appreciation for
the limitations of the analysis (discussed above). A human
tendency was noted to accept results more readily when they
support what may be a preconceived judgment. The
experienced decision maker, on the other hand, while accepting
modeling efforts, usually does so with a healthy skepticism; it
is not infrequently the case that he knows what questions to
ask of the modeler. The best overall results occur when there
is an open search for the real meaning of the results and a non-
defensive response by the modeler.
What have been your experiences with the issue of model
credibility and the role of models in decision making?
Modeling has undergone a series of hill-and-valley traumas,
the valleys caused by oversell of the product followed by
trivial, misleading, or downright wrong answers. Credibility
has fluctuated similarly from blind acceptance to a state of not
even wanting to hear the word "model." At this stage, we seem
to be mounting another hill and we should make use of past ex-
perience and mistakes to re-engender credibility. The enormous
complexity of today's environmental issues makes model use an
absolute requirement if Federal agencies responsible for water
resources are to reach objectives imposed by the law makers.
Issue 2: Data Base
What is your assessment of the adequacy and reliability of
the data base for water quality models in your topical area?
Can you identify any gaps or deficiencies in the data base
that you presently work with? Specifically, the status of input
load data, water quality, and model parameters should be
addressed.
80
-------
The TSC response to this question was mixed. Those who
found the data base adequate and reliable were in a position of
specifying, collecting, processing, and analyzing their own
data. This leads to the conjecture that they found the existing
data base inadequate and unreliable, hence the spawning of their
own data base. Those who found the data base inadequate at the
outset either made do with what was available or conducted
interpolation/extrapolation field surveys.
Data from such time-honored collectors as the
USGS are generally reliable; problems arise, however, in the
coverage available, the time lag in obtaining the data, and the
costs involved. The National Ocean Survey tidal elevation data
are not consistently reliable, and datum elevations are not al-
ways available. A serious time lag is involved in obtaining NOS
data in a suitable format as a result of understaffing and be-
cause NOS is not primarily a data retrieval organization.
Synoptic data sets for model calibration and verification
are virtually absent for hydrological systems. Climatological
data sets are more intensive in a time-series sense but also are
sparse in areal coverage. (Climatological calibration raises a
whole new set of related questions.) The problem of open bound-
ary data, particularly in coastal areas, is one of expense and
difficulty in measurements. Nearshore data are often lacking
in coastal areas and in large lakes simply because vessel
masters are sensitive to keel to bottom depths and because
oceanographers have traditionally had strong offshore interests.
The dissatisfaction with the existing data base stems
largely from the fact that data sets (e.g., USGS stream gauge
measurements) were not set up with a modeling effort in mind;
rather, commitments to some other mandate were made years before
modeling became de rigueur. Said another way, funding agencies
are more willing to support monitoring rather than research.
Modeling of sediment transport (suspended and settleable)
suffers from a lack of quantification which results from in-
ability to specify correctly settling velocities, critical
erosion velocities, and a general lack of understanding of the
physics and mechanisms involved in sediment transport processes.
Where toxic materials are sorbed into particulates, this lack is
critical to an understanding of the hazardous substance problem.
Basic research on hydrodynamic models with field investigation
feedback is necessary if we are to solve these fundamental
research problems.
81
-------
Issue 3: Time and Space Scales, Kinetic Detail, Cost Effective-
ness
How do you go about choosing a modeling framework in your
topical area? What criteria do you use to determine time and
space scales and the level of kinetic detail?
Response to this question was rather vague. For some cases,
a steady-state, one-box model may be perfectly justified; e.g.,
geochemists usually are concerned with time scales of centuries,
kinetic modelers with seconds. Space scales vary with the type
of problem, computer facility, computer budget, and the physics
of the system— i.e., are we dealing with a well-mixed system,
or do we need to simulate a multi-level or vertically continuous
system? If a variable grid is employed (e.g., finite element
methods), then spatial detail near a source would be desirable,
with a larger grid spacing at distance from the source.
The Water Quality (WQ) Committees will address in more
detail the questions of kinetics; the TSC concern is to suffic-
iently detail the transport processes on which the WQ topics
will piggy-back. Examination of the transport equation (advec-
tive-diffusion equation, dispersion equation, etc.) reveals a
galaxy of approaches as to the terms employed, the bulk coeffic-
ients used to gloss over ignorance and/or lack of measurements,
etc. It is worth noting that integrations in time and space of
the transport equations yield simple models with coefficients
that reflect all the details ignored and that are not universal,
while less highly integrated forms yield complex models with
more physically meaningful coefficients and require small time
and space steps. The dichotomy is that simple models may be in
terms of long time scales and averaged space scales that de-
cision makers like, but have no predictive capability, and that
complex models may have better predictive capabilities, but on
scales of meters and seconds.
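For reference, one common one-dimensional form of the transport equation referred to here (a representative form only, not necessarily the one any particular committee member had in mind) is

$$ \frac{\partial (A C)}{\partial t} + \frac{\partial (Q C)}{\partial x} = \frac{\partial}{\partial x}\left( E A \frac{\partial C}{\partial x} \right) + A S $$

where C is concentration, Q flow, A cross-sectional area, E the longitudinal dispersion coefficient, and S the net source or sink per unit volume. Averaging over longer times and larger volumes folds the unresolved motions into E, which is the trade-off described above.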
At any rate, the modeling framework must proceed hand-in-
hand with the objectives of the study—perhaps as set forth
quite sketchily by a decision maker—budget and time constraints,
and the projected field effort.
The Chairman's experience with a variety of commercial con-
tracts and university grants has been that hydrodynamical models,
per se, don't sell at the headquarters level unless strongly
coupled with a water quality objective. Given the priorities
involved and the dollars available, this is probably realistic,
but does not respond to the question of which federal agency
will support pure hydrodynamics-related research.
82
-------
What is your assessment of the present state-of-the-art of
validation of computer software in your topical area?
Probably some workshop attendees have been guilty, at one
time or another, of taking a model off the shelf and running it.
If a canned model is to be used in a major decision making
effort (which may end up in court), one must be completely
satisfied with the documentation and coding of the model, its
assumptions, limitations, areas of applicability, etc. The
"bug"-free model is a rarity if the program listing consists of
several thousand lines. The recent lesson provided by the NRC's
shutdown of five nuclear power plants is a case in point.
Do you think a "standard numerical solution" for several
computer based problems should be available so that computer
programs could be validated?
The TSC was quite happy with this question, since they
quickly and unanimously converged on the opinion that a "stand-
ard numerical solution" had a peculiar dreamlike quality. If
the exact same differential equations were employed, and if the
same boundary conditions were employed, etc., then differences
between solutions would be a function of the numerical tech-
niques employed--which was the point of the question, presumably.
Numerical techniques are sometimes a personal choice—i.e.,
finite differences vs. finite element methods—and do not
necessarily converge at the same rate.
It was felt that more important would be a "standard" river
or a lake or estuary sufficiently instrumented and data process-
ed to permit verification of one's software. (It is duly noted
that not all commenters on this last sentence were entirely
happy with it.)
Issue 4: Parameter Estimation
How do you estimate the parameters in your models? What
procedure do you use? What criteria do you apply to determine
the credibility of the parameters?
The general procedure employed was based on knowing (from
experience) what range of values to anticipate and then simulating
over that range. Where only a few parameters are in-
volved, this is not a difficult undertaking but may involve
many computer runs. Where many parameters are used, sensitivity
analysis should be performed as, e.g., employed by Tomovic
(1963)*. The idea is to relate the change in a system component
as a result of change in some other variable, flux or parameter.
*Tomovic, R. 1963. Sensitivity analysis of dynamic systems.
McGraw-Hill, New York. 142 pp.
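A minimal sketch of this idea, using a hypothetical Streeter-Phelps deficit calculation rather than any model discussed at the workshop, computes normalized sensitivity coefficients by simple finite differences:

    import math

    def do_deficit(kd, ka, L0, t):
        """Streeter-Phelps dissolved oxygen deficit (mg/L) at travel time t (days)."""
        return kd * L0 / (ka - kd) * (math.exp(-kd * t) - math.exp(-ka * t))

    base = {"kd": 0.3, "ka": 0.6, "L0": 12.0}   # hypothetical deoxygenation and reaeration
    t = 2.0                                     # rates (1/day) and ultimate BOD (mg/L)
    d0 = do_deficit(base["kd"], base["ka"], base["L0"], t)

    for name in base:
        p = dict(base)
        p[name] *= 1.05                         # perturb one parameter by 5 percent
        d1 = do_deficit(p["kd"], p["ka"], p["L0"], t)
        print(f"Normalized sensitivity to {name}: {((d1 - d0) / d0) / 0.05:+.2f}")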
83
-------
What statistical approaches would you suggest be used in
parameter estimation?
Typical responses to this were somewhat unsophisticated,
relying primarily on root mean square computations. Residual
plots and sums of errors criteria are used, but others felt that
statistical verification either did not exist or was an art form.
During the general discussion period of all Committees, it
was brought out that the meteorologists have been using statisti-
cal methods routinely for a number of years and have developed
a subculture devoted to statistical forecasting, skill scores,
data smoothing, verification, etc. Water quality modelers, on
the other hand, have avoided this approach successfully to date,
but will have to become involved as more and more court actions
are carried out.
Issue 5: Measures of Verification
What procedures do you follow to calibrate and verify your
models? What techniques do you use to judge the credibility of
your model?
For some simple systems, numerical transport models are
amenable to comparison with classical analytical solutions. As
the complexity of the system simulated increases (non-linearity,
multi-dimensions), analytical solutions are not usually avail-
able. Residual values (observed minus predicted) need to be
examined at several locations within a closed system. Assuming
specified boundary conditions, the interior solutions will
differ from observed values as a function of the solution
method, interior environmental conditions, degree or presence
of discontinuities, etc. Plotted comparisons of residuals,
while not necessarily quantitative, give the experienced analyst
a good idea as to the validity of the simulation. The goodness
of fit may not be obvious to the decision maker, so it behooves
the analyst to discuss the verification procedures employed.
Of course, true verification is obtained after fine-tuning
the model to a given situation and then running it and comparing
it against another set of input and boundary conditions. Dif-
ficulties arise when a given parameter is not expressed as a
function of a measurable physical quantity such as velocity or
hydraulic radius.
In some situations Monte-Carlo techniques can be used (e.g.,
for reservoir modeling) to give confidence intervals.
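A minimal sketch of the Monte-Carlo idea, with hypothetical input distributions and a deliberately simple first-order decay over a plug-flow travel time, is:

    import math
    import random

    random.seed(1)

    def outlet_conc(c_in, k, tau):
        """First-order decay over a plug-flow travel time tau (days)."""
        return c_in * math.exp(-k * tau)

    results = []
    for _ in range(2000):
        c_in = random.gauss(10.0, 1.5)     # inflow concentration, mg/L (hypothetical)
        k = random.uniform(0.05, 0.15)     # decay rate, 1/day (hypothetical)
        results.append(outlet_conc(c_in, k, tau=20.0))

    results.sort()
    lo, med, hi = (results[int(f * len(results))] for f in (0.05, 0.50, 0.95))
    print(f"Median outlet concentration {med:.2f} mg/L, 90% interval [{lo:.2f}, {hi:.2f}] mg/L")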
84
-------
Do you think that a set of statistical techniques should
be promulgated to quantitatively describe model credibility? If
so, what techniques would you recommend?
"Promulgation" turned off the TSC audience. Depending on
the model and the parameter to be verified, a certain standard
statistic may show results that are biased - either to the good
or bad. Choice of an appropriate set of statistics is best left
to the analyst who knows the data set and its reliability, the
system, the model, and the constraints involved. For some,
statistical analysis was best relegated to self checking or
guidelines, not necessarily for publication.
Since the verification data itself may contain considerable
noise, imposition of statistics is not as simple as it may
appear. Statistical techniques may mask serious errors in some
areas.
The variety of statistics available is too great to recom-
mend a list of techniques. Many WQ modelers are not necessarily
versed in statistics beyond root mean squares and the t-test.
Meteorologists have much to teach us (as they have in the past
in the hydrodynamics scene).
There was, in general, a rather uncomfortable feeling about
the use of statistics as part and parcel of the transport anal-
ysis; it was felt that there was a natural reluctance to bite the
statistics bullet, but an acknowledgment on the part of poten-
tial courtroom drama participants that the time of defending a
given analysis with statistics has, for better or worse, arrived.
Issue 6: Use of Models as Projection Tools
What criteria do you use to select the projection conditions
for model simulations?
When working with a water quality modeler, the transport
specialist must have a thorough knowledge of just what are the
WQ objectives in terms of detail required and what degree of
uncertainty in the transport input the modeler is willing to
accept. Put in terms of a commercial application—what does
the client want? This is not to be confused with getting what
the client wants but with determining the simplest method of
evaluating a given situation in terms of client need. Projection
conditions, then, are defined by the client's objectives and
desired alternatives.
What is your assessment of the ability of your models to
describe incremental water quality changes under future design
conditions?
85
-------
Responses to this question varied. For instance, it was
suggested that errors in the receiving stream model may be far
less than those of the input data set, and the errors in trans-
port may be far less than those in the constituent kinetics. If the
physics are well represented, then one-dimensional model simula-
tions would be adequate; however, few post-audit cases have been
documented.
In reservoir models, there still is an inability to repre-
sent the onset of stratification, indicating a need for basic
research on this phenomenon.
How would you describe model credibility for systems where
data do not exist (i.e., new reservoirs)?
See Part 1 of this Issue regarding input data.
Committee members were skeptical for the most part, largely
because there have been few cases where there has been a post-
audit attempt. In the reservoir situation, similarly sized
reservoirs and watersheds are studied with a view toward pre-
dicting outcomes for new reservoirs. Unfortunately, reservoirs
take a long time to be constructed and put into operation, and
the impetus to go back and check calculations is lost as new
generations of engineers come on line.
River and/or aqueduct simulations are generally amenable to
open-channel hydraulic solutions; problems still will exist with
regard to water quality, weed growth, etc.
86
-------
RECOMMENDATIONS
of the
Transport Systems Committee
What recommendations would you make to improve the useful-
ness of models to decision makers?
Based on past experiences and as discussed in Issue 1, the
credibility of modeling in the eyes of the decision makers has
suffered because of a lack of understanding on their part of the
limitations of models, and also because the modeler has not al-
ways represented the model well. The decision makers' expecta-
tions having been dashed, they sought other-than-model solutions
to political/environmental problems of mind-boggling complexity.
The inherent non-linearity and many-bodied nature of the prob-
lems point directly to a mathematical analysis of some sort.
Mathematical analysis suggests a model, be it linear program-
ming or hydrodynamical in nature. That being the case, the
decision maker is bound to use models if he is to be effective.
Therefore, he must either be experienced in modeling himself
(not necessarily a sufficient condition) or be capable of
appreciating model use. Eventually, he will have to know
what questions to ask of the model.
The modeler is not absolved of responsibility in the educa-
tion-technology-transfer process between computer output and the
decision maker. He must anticipate the questions likely to be
asked and point out areas that the model indicates should be ex-
amined (but were not identified initially).
The discussion above is rather obvious and may be platitude
prone, but for optimum utilization of the skills of modelers and
the benefits of modeling, an interaction is required be-
tween two very different levels of achievement—management and
science.
What data gathering efforts are recommended to address
noted deficiencies?
With some notable exceptions, most modelers are condemned
to use "other people's data," data not usually gathered for a
modeling effort. The TSC was unanimous in the opinion that
87
-------
modelers should be involved at the initiation of any data
gathering event, i.e., if a model is likely to evolve, modelers
should get involved. The modelers' advice should be sought_when
choosing the location of measurement sites based, when feasible,
on preliminary model runs. The objectives of this effort are to
minimize sampling time and scope and the number of stations
required. Because time-series data will be required and because
of the logistic problems involved, care must be taken to select
these stations with a minimum of hassle to the budget and the
body.
Can? Should? guidelines be established for various
problem settings for cost effective modeling studies? Are
"standard solutions" required for model "validation"?
No. No.
Do you have any recommendations on methods for verifying
models? For selecting appropriate parameters? For using models
in a projection mode?
Since this is so model dependent, the question degenerates
into a list of references. Leendertse's work in Jamaica Bay,
for instance, may be cited.
Overall, how would you describe the present verification
and credibility status of the models in your topical area?
With regard to estuarine transport processes, the TSC
recommended adoption of the report by Kinsman, et al.* The
following sentence from the preface responds to the above
question: "It was the very strong consensus of the group that
recent data show many of our previous ideas of estuarine trans-
port processes to be overly simplistic and that a greater level
of sophistication of our understanding of these processes is
required, not only for a significant scientific advancement, but
also for effective environmental protection and management."
The report of the Kinsman study makes many pertinent recommenda-
tions on verification, flux term investigation, small and large
scale experiments, etc. Their arguments will not be repeated
here; suffice it to say that the majority of their recommenda-
tions also apply in part to reservoir and large lake modeling.
We close this section with another quote regarding estuary
models from Kinsman, et al.: "No model which has not been both
verified and tested can be considered anything but 'work in
progress'. It is something of a scandal that none of the
*Kinsman, B., J.R. Schubel, M.J. Bowman, H.H. Carter, A. Okubo,
D.W. Pritchard, and R.E. Wilson. 1977. Transport processes in
estuaries: Recommendations for research. Marine Sci. Center,
SUNY Spec. Rept. No. 6, Ref. 77-2, 21 pp.
88
-------
'models' we now have has been either verified or tested in its
complete form. The data with which to do so have never been
taken."
What overall recommendations would you make regarding water
quality model verification? What needs in this area of verifi-
cation do you perceive to be most critical in the future? What
specific programs would you suggest?
No one definition of "verification" is possible when so
many variables are at play in the transport-WQ modeling business.
Rather, measures of model sensitivity and/or "verification" need to
be reported with reference to specific parameters and to specif-
ic water bodies. Therefore, we recommend that existing state-of-
the-art and evolving water quality models be examined to deter-
mine the sensitivity of the model results to transport input
parameters. This is because, while WQ modelers do a lot of
sensitivity checking on most parameters, little has been done
on transport parameters and, for certain model applications, the
entire model may be transport-driven.
Verification for different systems will differ. For in-
stance, in a very broad sense, stage and flow rates may suffice
for rivers; tracer distributions for reservoirs; salinity dis-
tribution (vertical and horizontal), tidal heights, velocity
profiles (vertical and cross-stream) and tracer distribution
for estuaries; salinity distribution, fixed-point velocities,
total transport, and storm surge elevations for coastal areas.
As part of these generalities, it is noted that verification of
the same transport system may differ with the purpose of the WQ
model using the transport input.
In reservoir and lake modeling, further research on strati-
fied flow, turbulence, and vertical transfer coefficients is
required. Formulation of water quality models will require
compatibility of hydrodynamic time and length scales and water
quality scales.
Satellite and remote sensing data should be used in con-
junction with ground truth data. Physical hydraulic models
should be used where possible; moveable-bed models should be of
considerable interest in bed-load transport problems in estuary
mouths. Features are observable in small-scale models that can-
not be extrapolated from field data and may not be replicated in
mathematical models, indicating a need for grid adjustment or an in-
complete mathematical/physical analysis.
Of prime concern in many applied problems is our present
inability to measure and model sediment transport satisfactorily.
Assuming that many toxic substances will be transported in
settleable or suspended particulate phases, it behooves the analyst
to address this very difficult problem. The nature of the attack
89
-------
indicated is one of basic research; field measurements are not-
able by their absence; the pure physics of the apparently rather
simple processes is also lacking.
The TSC was unanimous in the opinion that extensive data
sets are needed in a variety of settings. Using estuaries as an
example, intensive data gathering programs should be funded
on all types of estuaries, including fiords. Logistically, it
was pointed out that the Yaquina Estuary in Oregon is one that
can be sampled from a small boat from freshwater to the ocean in
a day. Admittedly a somewhat self-serving example on the part of
the Chairman, it is in fact a model of the larger Chesapeake and
Delaware systems. Since it varies seasonally from well-mixed to
stratified (Hansen and Rattray 1-3 classification), it could
provide a wealth of information on a small scale that could be
extrapolated to larger, more important systems. San Diego Bay
was also indicated as a tractable system for testing certain
aspects of transport computational schemes in embayments.
The problem of funding was raised throughout TSC delibera-
tions. Pure research items, such as turbulence-diffusion
mechanisms, were indicated as an NSF-type area. Intensive data
gathering efforts in estuaries are indicated as, at least, joint
NOAA-EPA efforts. These efforts must not be designed by trans-
port specialists only, however, since WQ problems are, in the
final analysis, of prime concern.
We submit that agencies supporting modeling studies should
require "verification" of transport models in terms of the
particular system studied. This can be accomplished by breaking
out a portion of a budget for field programs after careful con-
sideration of the constitution of the final product. Where
appropriate, model verification activities by parties independent
of a particular model development might be undertaken.
Finally, the subject of statistical verification should be
pursued intensively. Although there was an indication of a
general audience reaction against the unfamiliar, it should
prove to be a satisfying endeavor in the future since one will
be able to quantify just how good (or bad) one's effort has been.
If the direction is away from a battle of coefficients toward an
understanding of the system, then the statistical approach cannot
help but be beneficial. As far as funding agencies are concern-
ed, they should begin thinking about how best to exploit the
possibilities (not at all obvious at this stage) since they can
expect to have to pay for it.
90
-------
STATE-OF-THE-ART REPORT
of the
Salinity/TDS Committee
Louis A. Beck - Chairman
Members
Tze-Wen Chi Robert P. Shubinski
William J. Grenney Richard Tortoriello
Austin Nelson George H. Ward
Issue 1: Role of Models in Decision Making
The committee felt that too often models have been employed
for decision-specific problems; once the decision is made,
further "model" development is terminated. In many instances,
model development and verification should be a long-term program.
Management tends to blindly accept modeling results as
scientific and quantitative when the results agree with their
preconceived views. However, management questions results when
they disagree. A more tempered, less extreme view is needed.
Occasionally political decision makers feel constrained by
modeling. Perhaps because of this, some decision makers are
resistant to the use of models. At the same time, modelers
should recognize there are other political or socio-economic
factors that bear on management decisions.
Good judgment is essential in selecting a model appropriate
to the problem at hand. Many factors should be considered and
the mere availability of a computer code is not an appropriate
criterion of selection by itself.
Issue 2: Data Base
There are probably more data available for salinity from
surface waters than for most other parameters. Nonetheless,
more data, particularly over long periods of time, are required.
91
-------
In estuaries, programs of long-term routine monitoring as well
as intensive short-term space-time studies are needed. For wet-
lands, available data is practically nonexistent. Fairly good
salinity measurements are available for inland surface waters
over large geographical areas. There is a need, however, for
data on specific ions, localized conditions, and salinity
loading functions for agricultural and natural processes,
especially in the arid west.
There is a lack of data on the intrusion of salinity into
groundwater, whether of oceanic or geological origin. Although
some data on groundwater salinity is available, it is not col-
lected for developing a predictive tool, and therefore is non-
specific or lacking important ancillary measurements.
Issue 3: Time and Space Scales; Kinetic Detail; Cost Effective-
ness
Salinity responses typically exhibit long space and time
scales in comparison with other parameters, both in surface
waters and groundwater. This should be recognized in implemen-
ting data programs as well as in verifying models. This has im-
portant consequences for system characteristics, requisite data
resolution, and computational demands, as well as for the dimensions of
the problem at hand.
In inland, particularly arid regions, the state-of-the-art
of model verification is behind the needs and questions being
posed by management.
It is the responsibility of the agency or engineer applying
a model to evaluate the model he is using for correctness of
code, numerical accuracy, and appropriateness of application.
Issue 4: Parameter Estimation
Salinity may be considered a conservative substance for
dilute concentrations in inland waters and also in estuaries
where it is an excellent variable for estimating dispersion.
For inland systems, both surface and groundwater, the com-
ponents of TDS may be subject to kinetics as well as effects of
other constituents (such as pH, TSS, etc.). This is particular-
ly true where high concentrations occur in the arid west.
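As an illustration of salinity's value for estimating
dispersion, the sketch below (in Python, with assumed flow,
area, and salinity profile) applies the classical
one-dimensional steady-state salt balance, in which the seaward
advective flux Q*s is balanced by the landward dispersive flux
E*A*ds/dx, so that E = Q*s/(A*ds/dx).

import numpy as np

Q = 30.0       # freshwater flow, m3/s (assumed)
A = 1500.0     # cross-sectional area, m2 (assumed)
x = np.array([0.0, 2000.0, 4000.0, 6000.0])   # distance seaward, m
s = np.array([2.0, 5.5, 11.0, 18.0])          # observed salinity, ppt (hypothetical)

ds_dx = np.gradient(s, x)        # local longitudinal salinity gradient
E = Q * s / (A * ds_dx)          # dispersion coefficient, m2/s, at each station
print(np.round(E, 1))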
Issue 5: Measures of Verification
The committee felt that verification is achieved by sub-
jective judgement and visual comparison of model results with
observed data.
Statistical techniques should be used but not promulgated.
One should be free to adopt statistics appropriate to his need.
Statistical techniques should also be applied to analysis of
the data, especially confidence bounds and reliability, prior
to its application in model validation. Because salinity
occasionally has a good data base (especially for inland waters
of the arid west) and simple kinetics, salinity could be a good
parameter for the development and testing of statistical veri-
fication techniques.
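As a minimal illustration of applying statistics to the data
themselves, the sketch below (in Python, with hypothetical
replicate salinity observations at one station) places a 95
percent confidence interval on the observed mean before that
mean is used in model validation.

import numpy as np
from scipy import stats

salinity = np.array([12.1, 11.4, 13.0, 12.6, 11.9, 12.3])   # ppt, replicates

mean = salinity.mean()
sem = salinity.std(ddof=1) / np.sqrt(len(salinity))   # standard error of the mean
t = stats.t.ppf(0.975, df=len(salinity) - 1)          # two-sided 95% multiplier
print(f"{mean:.1f} +/- {t * sem:.1f} ppt (95% confidence)")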
Issue 6: Use of Models as Projection Tools
The committee felt that selection of projection conditions
is dictated by the specifics of the problem and the range of
conditions important to the system. It was also felt that
salinity models are generally reliable in predicting incremen-
tal water quality changes under future design conditions due to
the conservative nature of salinity, but this is highly con-
tingent upon the specific situation.
RECOMMENDATIONS
of the
Salinity/TDS Committee
1. In order to improve the usefulness of models to decision
makers, there should be a close working relationship
between the modeler and the decision-maker. Presentation
of model results to a decision-maker should include the
reliability of the prediction and, where possible, confi-
dence limits. Demonstration of model response (i.e., sen-
sitivity analyses) can be very useful to decision-makers.
2. A greater effort in model development for groundwater is
needed.
3. The hydrodynamic/transport mechanisms need better formula-
tion for many systems, e.g. groundwater flow, or the
density circulation of estuaries. Related to this, better
data on hydrodynamics/transport are needed.
4. It is important that modelers work more closely with data
collectors, perhaps participating in the program where
possible. One example of data inadequacy is the definition
of the boundary condition at the mouth of an estuary, which
is frequently assumed to be at a constant oceanic value
but may be variable and require adequate monitoring.
5. The committee strongly urges that no standard guidelines
for cost-effective modeling studies be promulgated. Judg-
ment of the modeler is a very important aspect of the
modeling effort and "standards" tend to reduce or eliminate
this judgment. At the same time, management should not be
expected to re-develop a model for each new application.
6. Verification in the past has been largely qualitative. The
committee recommends a greater application of quantitative
measures, though it does not think any specific statistical
test should be recommended.
7. Overall, the committee feels that development is needed in
four areas of application, with varying requirements and
methods:
(1) Coastal Surface Waters
The most common and best developed area is the estuary, in
which salinity is a parameter of central importance. The
significance of coastal wetlands is now better recognized;
these systems will require models, some of which are pre-
sently under development.
(2) Coastal Groundwater
Modeling is often done for salinity intrusion, and the basic
relationships are probably generally understood.
(3) Inland Surface Waters
Modeling is done often but there are some serious unknowns,
such as salinity losses in reservoirs and kinetics of
specific ions in streams.
(4) Inland Groundwater
This area is least commonly modeled and much more research
needs to be done in the unsaturated zone.
STATE-OF-THE-ART REPORT
of the
Dissolved Oxygen/Temperature Committee
Clarence Velz - Chairman
Members
Joe M. Dietzel John T. Marlar
C. S. Fang Ronald E. Rathbun
James M. Greenfield Peter G. Robertson
Clark C. K. Liu Daniel S. Szumski
Introduction
Our committee was composed of modelers working in govern-
ment, academia, and private practice, and even though there was
considerable discussion during the deliberation of the issues,
there was a surprising consensus of opinion. This gives us
cause to believe this report reflects a relatively accurate
assessment of the state-of-the-art in DO/TEMP modeling.
The committee wishes to acknowledge the fact that because
of the limited time available for deliberation, our discussion
necessarily focused on dissolved oxygen modeling as applied to
specific watercourses. However, since the committee believes
that much of the content of that discussion applies equally to
temperature modeling, the interchangeability of the statements
is reflected throughout this report by the use of the term
"DO/TEMP."
The general findings by the committee on the state-of-the-
art of DO/TEMP modeling are summarized as follows:
1. The committee agrees that the conceptual framework for
DO/TEMP modeling is well founded scientifically, and that with
proper application it should provide a reasonably accurate ve-
hicle for predictive analysis. The conceptual framework as cur-
rently utilized is structured in either one, two, or three di-
mensions using either steady state or time variable assumptions.
Application of the framework to DO/TEMP problems is relatively
easily accomplished using readily available computer programs.
2. The committee agrees that our understanding of what
model parameters are important and our technical ability to
obtain valid field measurements of those parameters is reasonably
well advanced; however, these principles are not universally
applied.
3. The committee agrees that some shortcomings exist in
our current practice of DO/TEMP modeling. First, current model-
ing frameworks do not facilitate quantification of the random
component of DO/TEMP data. In certain instances where a large
random component exists, the ability to forecast reliably may be
impaired. Second, questions of model limitation and sensitivity
may not be addressed in sufficient detail by modelers to allow
the decision maker a chance to evaluate the uncertainties
inherent in the projection analysis. This failure may be due in
part to the inability of modelers to make these assessments
themselves. Third, the accuracy of current mathematical equa-
tions to predict the reaeration rate in a watercourse is some-
what unsatisfactory and use of in-situ gas tracer techniques
developed by Tsivoglou is encouraged. Fourth, there is a ten-
dency within the modeling community to use any available data,
including monitoring type data, for specific model development.
Development of reliable predictive assessments of decision
alternatives necessitates intensive synoptic-type data collected
in the specific watercourse being modeled.
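To illustrate the point on reaeration, the sketch below (in
Python) compares one commonly quoted predictive relation, the
O'Connor-Dobbins form Ka = 12.9 * U**0.5 / H**1.5 (U in ft/s, H
in ft, Ka in 1/day at 20 C), against a hypothetical gas-tracer
result; the stream properties and the measured rate are assumed
for illustration only, and the formula is cited here only as an
example of such predictive equations.

def oconnor_dobbins(velocity_fps, depth_ft):
    """Predicted reaeration coefficient, 1/day (base e, 20 C)."""
    return 12.9 * velocity_fps ** 0.5 / depth_ft ** 1.5

predicted = oconnor_dobbins(velocity_fps=1.2, depth_ft=4.0)  # assumed stream
measured = 2.1                     # hypothetical in-situ gas-tracer value, 1/day
print(f"predicted Ka = {predicted:.2f}/day, measured Ka = {measured:.2f}/day")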
The committee's assessment of the specific issues compris-
ing this state-of-the-art report is presented below.
Issue 1: Role of Models in Decision Making
The attitude of decision makers toward using modeling
results in the decision making process varies widely from en-
thusiastic acceptance to outright mistrust and rejection. The
range in attitudes is explained in part by the prior track
record of modelers to provide an analysis sufficiently defensible
to withstand legal challenge, in part by how the decision maker
perceives his own technical understanding of the issues, in part
by how clearly and understandably the analyst presents his find-
ings to the decision maker, in part by the analysts' ability to
present demonstrable proof the model results are correct, and in
part by the amount of time available to do the required analysis
before a decision must be made. It is the committee's observa-
tion that administrative conflicts normally result more from
interpretation of model results or comparisons of model results
with standards than from issues related to model
selection. In short, modeling credibility lies more with the
analyst than with the computer program which was used.
The committee agrees that the level of sophistication of
specific case DO/TEMP models should be the minimum required to
include not only the principal phenomena but the principal
factors encompassing alternative decision analysis. The level of
sophistication required should be determined on a case by case
basis.
The modeler serves four principal functions in the
decision making process. First, he translates the environmental
issues into a program of investigation. Development of the pro-
gram includes analysis of existing data and literature and con-
sultation with people knowledgeable about the specific environ-
mental issues. Second, he selects the appropriate conceptual
framework for modeling the system such that the issues can be
addressed forthrightly at the minimum level of sophistication
necessary to make technically sound evaluations. Third, he
oversees and participates in data collection and analysis.
Fourth, he provides technical guidance in interpreting the
results, in defining the confidence limits of the results, and
in insuring the administrator is cognizant, within the limits of
scientific understanding, of the environmental consequence of
his decision.
Issue 2: Data Base
The principal issue related to the data base for DO/TEMP
models is the need for intensive synoptic-type data collection
surveys. The members of the committee concur that the founda-
tion of acceptable model calibration and verification lies in a
well planned data collection program, and that compromises in
data collection can have a significantly adverse effect on the
ultimate achievement of a reliable analytical tool. Intensive
synoptic-type surveys require a substantial, well organized and
trained staff of field and laboratory personnel and require a
great deal of preliminary work and planning.
Since the objective of a water quality modeling study is to
identify cause effect relationships, acquisition of an appro-
priate data base assumes great importance. Three factors should
govern the design of the data acquisition program. First,
sampling should be conducted over a short time interval such
that a comprehensive "snap-shot" of the water quality and load-
ing functions of the watercourse is obtained. In tidal systems,
this requires slack water sampling supplemented where appropri-
ate with selected time series data. The sampling stations
should be located at intervals sufficient to elucidate the sal-
ient features of the DO/TEMP profile as well as at locations
upstream of all significant point sources and tributaries. Sec-
ond, the sampling frequency should not only be consistent with
the time scale of the DO/TEMP model but also be consistent with
the natural variability of the system. Diurnal measurements
should be made at those locations where hourly variations in
DO/TEMP are expected. Third, the sampling program should in-
clude direct measurement of state variables, rate constants and
other physical factors that influence the DO/TEMP balance in
specific areas. The state variables include, but are not nec-
essarily limited to, long term BOD, DO, TEMP, nutrient concen-
trations, salinity, pH, chlorophyll, etc. The rate constants
include, but are not necessarily limited to, deoxygenation,
reaeration, ammonia oxidation (where appropriate), photosyn-
thesis/respiration (where appropriate), benthic oxygen demand
(where appropriate), etc. The physical factors include, but
are not necessarily limited to, streamflow, time of travel,
tidal currents, channel morphology, water depth and its change
as a function of flow, meteorological parameters, etc. Waste-
water flows must also be monitored for pertinent characteristics.
As noted above, good synoptic data collection of an inten-
sive nature is an essential requirement for the application of
micro-scale models. For the purposes of macroscale models, and
for determining recurrence intervals, the data base
should include a well designed long-term component.
Issue 3: Time and Space Scales; Kinetic Detail; Cost Effective-
ness
DO/TEMP models must be consistent in their time and space
scales with the phenomena being simulated. For example, simula-
tion of the diurnal variation in DO resulting from carbon fixa-
tion and metabolism in phytoplankton requires the use of a time
scale equivalent to fractions of an hour. Modeling frameworks
set in different time scales may be required to address effic-
iently water quality issues related to mean daily or seasonal
limitations for the specific watercourse under investigation.
There is a misconception, however, that because seasonal changes
in water quality occur, it is essential to formulate a time var-
iable model to simulate instantaneous responses throughout the
year. To do so requires exceedingly complex mathematical solu-
tion techniques and voluminous data that is not realistically
attained from laboratory and field measurements. In the end,
simplifying assumptions are necessary for solving the complex
mathematics and for reducing the data needs. The net result is
a model of questionable reliability and utility. The trade off,
on the other hand, is to formulate a steady-state model to simu-
late the critical seasonal responses such that the mathematical
complexity and input data needs are reduced dramatically. In
the end, fewer simplifying assumptions are needed with the net
result being a model of greater reliability and utility. The cost
effectiveness of such an approach is obvious. Regardless of the
timeframe selected, however, the state-of-the-art of DO/TEMP
modeling is well advanced.
Spatially, one, two, and three-dimensional DO/TEMP models
are currently available. Here again, selection of the space
scale must be consistent with the watercourse and phenomena being
simulated. For example, while one-dimensional models are the
most widely used in free flowing systems, their use is usually
not appropriate for simulating phenomena in lakes, embayments,
estuaries, or coastal waters. Numerical solution techniques
developed over the last twenty years have provided the necessary
tools for dealing with these more complex systems.
Undue kinetic detail is seen by this committee as a stumbl-
ing block to the effective use of DO/TEMP models as decision
making tools. If a phenomenon is expected to impart a signifi-
cant change in the DO/TEMP budget, then kinetic detail appropri-
ate to simulating that phenomenon is usually warranted, as for
example in simulating photosynthesis in eutrophic waterbodies.
Otherwise, kinetic detail should be maintained at the minimum
required to adequately reproduce the primary state variable
interactions.
Existing computer programs for DO/TEMP models are generally
efficient and cost effective. A standard numerical solution for
different classes of problems appears to be a good idea. Such
solutions would provide a benchmark by which analysts could
evaluate the accuracy and cost effectiveness of a particular
model. A standard numerical solution would also have the advan-
tage of providing a useful tool for evaluating new programs and
solution techniques. However, the committee feels the state-of-
the-art has not advanced sufficiently to allow this approach to
be implemented at this time.
Issue 4: Parameter Estimation - Calibration
Model calibration serves two important functions. First,
in many instances it is the only procedure available to estimate
the in-situ reaction rates for the individual components of the
DO/TEMP budget. Second, it provides the modeler the first
opportunity to perform a sensitivity analysis on the components
of the DO/TEMP budget. Performing a preliminary sensitivity
analysis at this stage permits the data acquisition program to be
modified if necessary. Needless to say, selection of the
significant components for inclusion in the specific case DO/TEMP
model as well as estimation of the corresponding in-situ reaction
rates must be determined for each watercourse using data genera-
ted from intensive synoptic surveys as well as from other
specialized studies conducted on that watercourse. The practice
of using reaction rates or reaeration rates taken from the
literature must be discouraged unless it is demonstrated first
that either the DO/TEMP budget is insensitive to the rate used
or the predicted response tracks the observed response reasonably
well.
As noted above, the committee is in agreement that parameter
estimation for dissolved oxygen models should be accomplished
through independent measurements wherever possible. Calibration
should provide the vehicle whereby parameters are tuned to more
accurately represent the variability of the parameters under
different hydrologic and meteorologic conditions. The accuracy
and credibility of the model is generally enhanced by keeping
the number of parameters to the minimum required.
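As a minimal illustration of calibration in this sense, the
sketch below (in Python, with all inputs assumed) tunes the
deoxygenation rate kd in the classical Streeter-Phelps deficit
equation so that computed deficits track deficits observed in a
hypothetical synoptic survey.

import numpy as np

ka, L0, D0 = 1.8, 12.0, 1.0   # reaeration (1/d), ultimate BOD (mg/L), initial deficit (mg/L)

def deficit(t, kd):
    """Streeter-Phelps DO deficit (mg/L) at travel time t (days)."""
    return kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)

t_obs = np.array([0.1, 0.5, 1.0, 1.5, 2.0])   # travel times to stations, days
d_obs = np.array([1.4, 2.6, 3.0, 2.7, 2.2])   # observed deficits, mg/L (hypothetical)

kd_grid = np.arange(0.10, 1.50, 0.01)          # plausible range of kd, 1/day
sse = [((deficit(t_obs, kd) - d_obs) ** 2).sum() for kd in kd_grid]
print(f"best-fit kd = {kd_grid[int(np.argmin(sse))]:.2f}/day")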
Selected comments and conclusions of the committee with
regard to parameter estimation issues are:
1. Good time-of-travel estimates and channel geometry
are essential in river systems, particularly in small
streams.
2. Direct reaeration measurements using methods such as
those described by Tsivoglou or Rathbun are particular-
ly valuable and use of them should be encouraged.
3. Benthal oxygen demand is best measured in-situ.
4. In developing a steady state DO model for a biologic-
ally active water body, the model parameter for photo-
synthesis/respiration requires careful evaluation. Or,
alternatively, this term may be separated from analysis
by simultaneous modification of both the mathematical
formulation of the model and the data used in its
verification.
5. In cases where parameter estimates depart significantly
from literature values, the source of the parameter
estimate should be documented. Measurements or deriva-
tions from first principles are the preferred methods
of documenting these cases.
6. In preparing reports the modeling community should
present tabular summaries of model parameters and,
where appropriate, time histories of model parameters.
Issue 5: Measures of Verification
The committee agrees the most rigorous test of verification
of a model is to demonstrate that phenomena predicted by the
model actually occur under the conditions and at the locations
observed in the field.
The committee also agrees another way to demonstrate
verification of a DO/TEMP model is to have a calibrated model
track an independent set of observed data such that good agree-
ment is achieved throughout the profile. The data used for
verification in this case must be derived from intensive synop-
tic surveys conducted under stable hydrologic and meteorologic
conditions different from those used for calibration.
The committee agrees furthermore that statistical analysis
can be a valuable tool for quantifying the "goodness of fit"
between the observed and predicted profiles, but the state-of-
the-art of DO/TEMP modeling is not yet advanced enough to allow
promulgation of "goodness of fit" standards.
Sensitivity analyses are valuable adjuncts to verification.
Since not all state variables, rate constants, and physical para-
meters have the same order-of-magnitude impact on the
modeling projections, the sensitivity analysis provides the
modeler with a tool to assess not only which components of the
model affect the projections the most but also the confidence
that should be placed on those projections. In addition, if the
sensitivity analysis is extended to include an evaluation of
changes in advective flow and pollutant loading, then the effec-
tiveness of alternative management proposals can be evaluated
critically.
The committee also agrees that because of the large random
component inherent in environmental data, application of
stochastic modeling techniques to the sensitivity analysis of
model projections should be encouraged. At present, Monte Carlo
simulations appear to be the most straightforward technique for
stochastic modeling.
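A minimal sketch of such a Monte Carlo analysis appears below
(in Python): assumed uncertainty in two rate constants is
propagated through a simple Streeter-Phelps deficit calculation
to show the resulting spread in a projected DO deficit. The
distributions and values are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
n = 5000
kd = rng.normal(0.4, 0.08, n)    # deoxygenation rate, 1/day (assumed distribution)
ka = rng.normal(1.8, 0.30, n)    # reaeration rate, 1/day (assumed distribution)
L0, D0, t = 12.0, 1.0, 1.5       # ultimate BOD, initial deficit, travel time (assumed)

D = kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)
lo, med, hi = np.percentile(D, [10, 50, 90])
print(f"projected deficit: median {med:.1f} mg/L, 10-90% range {lo:.1f}-{hi:.1f} mg/L")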
Issue 6: Models as Projection Tools
Since the model to be used for projection analysis ideally
should be developed on the basis of intimate knowledge of the
drainage basin, its problems, and potential management strate-
gies, the modeler should be able to anticipate the needs of the
decision maker and thereby avoid having to push the model beyond
its intended limits. Hence, the modeler should be able to make
projections within these limits with considerable confidence
and reliability given sufficient understanding of the following:
1. Parameter adjustments to projection conditions - flow,
temperature, boundary conditions, sources and sinks of
oxygen, etc. - with as many parameters as possible explicitly
related to causative mechanisms (e.g., reaeration rate to
hydraulic properties) in order to eliminate subjectivity in
their selection.
2. Variability factors as they influence the allowances
that must be made in comparing model results to
standards.
3. Risk involved in the decisions which are to be made
and the required sensitivity analysis on the model
projections.
4. Frequency of compliance with standards and related
questions.
The degree to which each of these four issues has to be in-
corporated into design of the projection analysis program varies
of course from site to site. However, the analyst must always
be mindful that the goodness of the verification is a key
measure in determining how conservative the projection analysis
should be.
In conclusion, the committee agrees that a DO/TEMP model
based on good synoptic type data and on rigorous calibration and
verification procedures is a valuable tool in the evaluation of
alternatives for the decision maker.
RECOMMENDATIONS
of the
Dissolved Oxygen/Temperature Committee
A number of recommendations emerge naturally from this
assessment of the state-of-the-art of DO/TEMP modeling.
1. The committee unanimously agrees that development, calibra-
tion, and verification of a reliable specific case model
depends on an intensive synoptic type data acquisition
program. The design of such a program in its essential
elements is presented in the discussion of Issue 2: Data
Base. It is strongly urged that such programs be encouraged
and supported at all levels of decision making.
2. It was further agreed that while water quality monitoring-
type data may serve a useful purpose in regulatory and
compliance practice, it should not be used in modeling
practice because this type of data only measures the in-
stream response to unknown source loadings.
3. It was further agreed that long-term hydrological and meteor-
ological data acquisition programs should be not only
continued but expanded in coverage since all DO/TEMP
projection analyses are inescapably tied to probability
of occurrence of these natural phenomena.
4. It was further agreed that kinetic detail in the formula-
tion of specific case DO/TEMP models should be limited to
that required to adequately reproduce the primary state
variable interactions.
5. It was further agreed that the practice of using inappro-
priate data, of using reaction rates gleaned from the
literature without proper in-situ validation, and of using
the same data for "verification" as was used for calibra-
tion is not acceptable and is to be strongly discouraged.
6. It was further agreed that a better phenomenological
understanding of reaeration leading to a better predictive
equation is urgently needed.
7. It was further agreed that follow-up studies should be
conducted on the watercourse after a course of action based
on model projections has been implemented.
8. The committee agrees furthermore that statistical analysis
can be a valuable tool for quantifying the "goodness of fit"
between the observed and predicted profiles and that use of
verification "scores" should be encouraged but not promul-
gated.
9. Studies should be instituted to incorporate verification
testing techniques into existing computer programs includ-
ing appropriate mass balance checks.
10. Although it is impossible to develop a universal water
quality model to suit all needs, efforts should be made to
make the format of modeling output as uniform as possible.
This would enhance the transfer of modeling results and
also make modeling more acceptable to decision makers. In
addition, computer graphics techniques should be encouraged
in relation to the presentation of modeling results.
11. Finally, it was agreed that modeling should not be marketed
as a quick, easy, or magical way of providing wholesale
answers to water quality problems.
STATE-OF-THE-ART REPORT
of the
Bacteria/Virus Committee
John L. Mancini - Chairman
Members
Raymond Canale John A. Harris
G. Wolfgang Fuhs Alan I. Mytelka
Issue 1: Role of Models in Decision Making
The state of the art for modeling virus distributions can-
not support decision making.
Decision makers have a healthy skepticism of models. On
the other hand, models have provided a rational basis for
decision making when properly applied and these proper applica-
tions should be supported. In addition, a more open communica-
tion of management needs and model capabilities and limitations
should be encouraged.
Models which calculate the distribution of coliform bacter-
ia can be used for planning and design decisions in streams,
estuaries, oceans and lakes. Some reservations exist, as ex-
pressed by both managers and engineers, but these can be overcome
in many instances by inclusion of appropriate safety factors and/
or staged construction of facilities.
Issue 2: Data Base
Data available from data banks (STORET, surveillance net-
works) are generally inadequate as input for models, and special
data gathering efforts are needed for each study. Intermittent
sources (storm sewers, combined sewer overflows) are to be sam-
pled in the study area, but data gathered in similar studies on
a comparable area can be used with confirmation. The emphasis
in sampling should be on the study of storm events of different
types and durations (as characterized by hydrographs) and periods
of dry weather between storms (a minimum of 3 to 6 storms per
outfall are needed). As a minimum, both the mean and variance of
the concentration of indicator bacteria are to be determined for
each event. Some committee members believe that land use pat-
terns are useful for the selection of representative sampling
sites.
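As a small illustration of the per-event statistics suggested
above, the sketch below (in Python, with invented counts)
summarizes indicator-bacteria samples from a single storm event
by their geometric mean and the variance of their logarithms.

import numpy as np

fecal_coliform = np.array([2400., 9200., 15000., 4300., 1100., 6800.])  # org/100 mL

logs = np.log10(fecal_coliform)
geo_mean = 10 ** logs.mean()          # geometric mean of the event samples
log_var = logs.var(ddof=1)            # variance of the log-transformed counts
print(f"geometric mean = {geo_mean:.0f} org/100 mL, log10 variance = {log_var:.2f}")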
Data collection to define rates of die-off of indicator
bacteria, in the opinion of most committee members, should be
continued with special attention to variations in die-off rates
in different types of receiving waters.
For pathogens, loss of viability is insufficiently known,
and factors such as adsorption and sedimentation may have to be
considered.
Coliform and pathogen densities cannot be determined in the
presence of solid raw sewage matter.
Issue 3: Time and Space Scales; Kinetic Detail; Cost Effective-
ness
Typical temporal and spatial scales are relatively short,
but are specific to the questions asked of the model and the
complexity of the system being analyzed. Evaluation of the
transport phenomena and the rates of reactions will define these
scales. The bacterial kinetics can be extrapolated from previous
studies, but require substantiation by short-term field tests
to ensure proper orders of reaction.
Basic planning decisions and the majority of detailed
design and operational actions can be accommodated by a steady
state model. However, as the system under evaluation becomes
more complex, especially through influences of transport,
dynamic simulation procedures need to be used.
Although the software for bacterial modeling is relatively
simple, incorporation of internal checks, e.g. mass balance
summaries, should be used. Emphasis needs to be placed on the
integration of the physical influences on the bacterial kinetics.
A "standard numerical solution" for simple situations
would be helpful as would standard examples for specific soft-
ware.
Issue 4: Parameter Estimation
The kinetic coefficients for models of indicator bacteria
are best obtained on a site specific basis using a three step
procedure. First, a good field sampling program is conducted
which is compatible with the model framework and has simultan-
eous measurements of all the important variables and processes
such as the water quality response, loadings, flow patterns and
dispersion. These results are used in conjunction with the
model and a trial-and-error procedure to estimate the bacterial
die-away rate. Next, this rate should be compared with esti-
mates obtained from in situ bottle or bag studies where bac-
terial concentration changes are followed over time in a uniform
mixture of the wastewater and the receiving water. Finally,
estimates of die-away rate should be compared to literature
values. For the case of indicator bacteria, die-away rates are
relatively well-known functions of temperature and salinity.
Thus, because the kinetic framework for indicator bacteria is
normally quite simple, relatively straight-forward and standard
procedures can be used to estimate the kinetic coefficients.
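The sketch below (in Python, with hypothetical bottle-study
data) illustrates the second step: for first-order die-away,
N = N0*exp(-k*t), the logarithm of concentration is linear in
time, and the slope of that line gives the die-away rate k.

import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 3.0])                # days
conc = np.array([9.0e4, 5.1e4, 2.8e4, 9.5e3, 3.1e3])   # organisms/100 mL (hypothetical)

slope, _intercept = np.polyfit(t, np.log(conc), 1)     # fit ln(N) = ln(N0) - k*t
k = -slope
print(f"die-away rate k = {k:.2f}/day (base e)")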
Issue 5: Measures of Verification
Present procedures in verifying models are to use caution,
review similar situations, and to apply engineering and other
practical experience in judging model predictions.
The overall sentiment is that a set of statistical tech-
niques should be promulgated. The range of opinion varied from
strong support to strong opposition. In support of promulgation
is the proposition that models and their use are a fact of life,
even though models are sometimes being misused and the model
predictions are often being given more weight in decision making
than warranted. Being able to quantitatively describe their
credibility should improve understanding of applicability. The
strong dissent to promulgating quantitative statistical tech-
niques centers on the concern that statistical justification is
too easily fabricated so as to impress decision makers and re-
inforce their acceptance of a level of credibility of models that
is not justified.
Issue 6: Use of Models as Projection Tools
The committee generally felt that in their present state
models of indicator bacteria could be used in the projection
mode, although one committee member was skeptical of their
credibility. Proper application of basic conservation of mass
equations, coupled with a knowledge of the literature of bacter-
ial kinetics, would be appropriate for many planning problems.
The committee had differing views on the method of providing for
recognized uncertainties in the accuracy of results. Some felt
that monitoring of control sites would best minimize uncertain-
ties while others felt that staging of construction and/or pro-
viding sufficient factors of safety would be more appropriate.
Summary
Bacterial modeling is being effectively applied to provide
input for broad management decisions. Increases in difficulty
are experienced considering detailed design and operational
decisions when bacterial models are applied to complex receiving
water systems. These difficulties result primarily from lack of
information on the transport phenomena and lack of proper veri-
fication techniques to validate the model structure.
Models for indicator bacteria can be used with a relatively
high degree of confidence for many planning and design problems
when the results are employed in close conjunction with profess-
ional engineering judgment and intensive calibration and verifi-
cation field sampling programs.
RECOMMENDATIONS
Of the
Bacteria/Virus Committee
1. Committee agreement was obtained, in principle, on the
importance of evaluating health risks associated with
bacterial bathing and shellfishing standards. The Com-
mittee had divergent views as to when efforts should be
devoted to this area. One approach suggested steady
investments in this area over time whereas an alternate
view was that funding should be contingent upon identifi-
cation of some benefits, i.e., cost reduction, health
improvements, et al.
2. Work on relative die-away rates for pathogens and viruses
as compared to the indicator bacteria should be supported.
3. Methods of estimating wastewater inputs which efficiently
meet requirements are needed. For intermittent inputs,
these requirements are:
- Accurate measures of the number of organisms dis-
charged on an event basis.
- Variance in concentration of organisms in the dis-
charge in individual events and over a large number of
events.
4. Virus evaluations would be useful particularly since
current thinking appears to be that health effects at
beaches and in shellfish are primarily associated with
viral infections. In this regard, improved measurement
techniques are required and quantification of virus counts
in wastewater discharges are mandatory. Work on determin-
ing viral die-away rates is recommended as well as develop-
ment of dose-response relations. Finally, methods should
be employed to track viruses ("hot" particles) to insure
that significant processes are being accounted for (affin-
ity for solids, etc.).
5. Other phenomena which may be present and significant should
be more fully defined such as:
- Aftergrowth
- Reduced rates of mortality at low bacterial concentra-
tions.
- Effects of salinity on die-off rates in subsurface
waste fields in marine environments.
6. The level of complexity needed for the definition of trans-
port fields is, to some extent, problem and site specific.
Therefore a fruitful area of research would be associated
with development of transport field assessment methods of
varying levels of complexity which could be calibrated
and/or verified independently of the bacterial analysis.
7. There is agreement that a need exists to evaluate modeling
results. There are two aspects which should be considered:
- How good is the model?
- What input can the model make to the decision making
process?
The committee encourages further thoughtful R & D efforts
to develop technology in these areas.
STATE-OF-THE-ART REPORT
of the
Eutrophication Committee
Dominic M. DiToro - Chairman
Members
Robert Ambrose Tavit Najarian
Michael A. Bellanca Donald Scavia
Carl W. Chen Kent W. Thornton
John M. Higgins G. Kenneth Young
Issue 1: Role of Models in Decision Making
The committee feels that decision makers often expect too
much from models and that qualified results given to the manager
are more useful than no result at all. Given all other factors
that enter into a water resource management decision, it may not
be worthwhile to delay decisions for a finally verified model.
Issue 2: Data Base
The data base available is generally inadequate with infre-
quent sampling of too few variables. Data gathering is sporadic
rather than synoptic and is usually not coordinated with a
modeling study. It is to be noted that appropriate sampling
intervals on an annual cycle are not uniform and are strongly
dependent on the particular water body being studied.
There is a paucity of rate measurements as well as benthic-
sediment information. STORET should be continued with an
emphasis on quality control of the input data. A higher priority
should be given to developing a biological data management
system which includes a taxonomic hierarchy. Special data bases
in the system, such as those for the Great Lakes, are quite good. The
data base needs additional nutrient information especially at
low concentrations as well as better chlorophyll and biomass
data. At present there is a functional problem with soluble
reactive phosphate and procedures for filtered nutrients should
be defined, or refined, to incorporate nutrient measurements
that indicate functionally available forms for uptake.
There are generally deficiencies in the point source data
with a lack of sample replication. Non-point source data are
also required for both storm event and base flow conditions.
Issue 3: Time and Space Scales; Kinetic Detail; Cost Effective-
ness
There are no universally accepted criteria for selection of
an appropriate modeling framework. Factors considered important
by committee members include: financial resources available,
complexity of the system, the problem and questions to be
addressed, the projected time horizon, available technology and
the ability of the selected framework to analyze alternatives.
The committee feels that there is a clear and present dan-
ger in the validation (or lack thereof) of computer software in
the eutrophication area. Due to the complexity of the inter-
acting systems, code must be thoroughly checked for programming
errors and numerical errors. The analyst should utilize known
mass conservation laws, check results against available analyti-
cal solutions and consult with other users to elicit any known
deficiencies. Test cases for code and standard solutions for
judging the adequacy of the software should be provided.
Issue 4: Parameter Estimation
Parameters are generally selected by choosing values within
reasonable ranges, by a best fit procedure and by exercising
judgment. Values should be selected for the appropriate type of
water body and care must be employed not to mix marine, brackish
and freshwater parameters. Some assistance is available in
recently published rate manuals but this is no substitute for
experience.
Credibility of selected parameters is judged by comparison
to known ranges of the values with careful analysis of any
extreme values. In addition, sensitivity analysis is used to
determine if the model results are hypersensitive to slight per-
turbations in a parameter. Monte Carlo simulations of kinetic
constants are helpful in assessing system sensitivity to the
parameters. Finally parameters are judged credible if kinetic
fluxes are reasonable, i.e., if the fluxes calculated with the
parameters compare well to independent measurements such as
primary productivity.
Automated methods presently exist which minimize the dif-
ferences between observations and calculated results by varying
parameters. The committee urges that these methods be used
within specified bounds and that judgment be used to avoid
simple curve fitting. Any statistical methods developed to aid
in parameter estimation ought to estimate both total system and
parameter errors and be able to identify any systematic errors
or biases.
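As a minimal illustration of automated estimation kept within
specified bounds, the sketch below (in Python) searches a single
kinetic constant only over an assumed literature range and
reports both the best fit and the residual error; the toy growth
model and the data are hypothetical.

import numpy as np

obs_t = np.array([0., 5., 10., 15., 20.])          # days
obs_chl = np.array([2.0, 4.1, 8.3, 15.5, 28.0])    # chlorophyll-a, ug/L (hypothetical)

def model(t, growth):
    # Toy exponential growth model, for illustration only.
    return obs_chl[0] * np.exp(growth * t)

bounds = (0.05, 0.30)                              # 1/day, assumed literature range
grid = np.linspace(bounds[0], bounds[1], 251)
sse = np.array([((model(obs_t, g) - obs_chl) ** 2).sum() for g in grid])
best = grid[sse.argmin()]
print(f"best-fit growth rate = {best:.3f}/day, residual SSE = {sse.min():.1f}")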
Issue 5: Measures of Verification
In calibrating a model reasonable coefficients are selected
such that measured data compare well with model output and cal-
culated fluxes are reasonable when compared with independently
determined fluxes. In this calibration phase also, care is
taken to search the output for any non-explainable computational
results. During the verification phase, data withheld during
calibration (split data set) is used commonly to test the
validity of the model. In addition, data sets with statistically
different characteristics are used together with their associated
different input conditions. It is suggested that verification -
in the sense of achieving truth - is too strong a goal. On the
other hand, verification - in the sense of "acceptance testing"
- is strongly encouraged by the committee.
Credibility of the model is established through: internal
consistency; minimizing empirical formulations without observa-
tional support; applicability of the model to similar situations;
demonstrated predictive ability; absence of any counter-intuitive
non-explainable results and use of reasonable assumptions for the
inclusion or exclusion of normally significant mechanisms.
In establishing model credibility, the use of statistical
techniques should be recommended (not promulgated). Goodness of
fit tests should be employed, with the number of degrees of free-
dom minimized (avoiding, for example, 100 parameters for 80 data points). Analysis
of residuals should be employed to test for lack of bias, random-
ness and normality - the latter if assumed in the statistical
method. Least squares techniques should be used on single
decision variables and weighted least squares on multiple
decision variables. In assessing goodness of fit, penalties
should be assigned more strongly to either underestimations or
overestimations depending on the variable being tested (e.g. one-
sided loss function for dissolved oxygen). It may be noted that
some data sets are not of high enough quality or are of too small
a sample for meaningful statistics.
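As an illustration of a one-sided loss function of the kind
mentioned above, the sketch below (in Python, with assumed data
and an assumed 2:1 weighting) penalizes over-prediction of
dissolved oxygen more heavily than under-prediction, since an
optimistic DO forecast is usually the riskier error.

import numpy as np

observed  = np.array([6.0, 5.2, 4.8, 5.5, 6.3])   # mg/L (hypothetical)
predicted = np.array([6.4, 5.0, 5.6, 5.4, 6.1])   # mg/L (hypothetical)

resid = predicted - observed
weights = np.where(resid > 0, 2.0, 1.0)   # heavier penalty on over-prediction (assumed ratio)
loss = (weights * resid ** 2).mean()
print(f"asymmetric mean squared loss = {loss:.3f}")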
Issue 6: Use of Models as Projection Tools
In selecting projection conditions, there are no well de-
fined criteria to assist the analyst. It is clear that as many
potentially critical conditions should be examined, as is con-
sistent with the resources available.
The committee was divided in their assessment of the abili-
ty of eutrophication models to predict incremental water quality
changes in the future, with the nine members characterizing it
as follows: good (3), fair (5), poor (1). Likewise, the credi-
bility of model results for presently non-existent systems
(e.g. new reservoirs) was adjudged to be: good (1), fair (5),
poor (1), no opinion - case by case evaluation (2).
Summary Question
Overall, the committee judged the present state of the art
of verification of eutrophication models to be either fair (2
members) or poor (6 members) and the credibility of these models
to be generally good to fair (good-3, fair-4, poor-1).
RECOMMENDATIONS
of the
Eutrophication Committee
1. Credibility of models can be improved by:
a) early communication with decision makers to clarify
water uses, problem areas and project requirements
b) more emphasis on data collection and less on analysis
c) quantification of "how good" and "how bad" the model
predictions are
d) improved methods for visually displaying modeling
results (e.g., automated computer graphics)
e) complete model documentation at all levels (computa-
tions can be reproduced from the report alone)
f) complete candidness on modeling with no "overselling"
of its capabilities
g) implementation of post-auditing procedures
2. Traveling "road shows" are recommended for explanation of
model capabilities and applications to decision makers.
3. In data gathering, new technology should be employed, such
as remote sensing of chlorophyll and turbidity (LANDSAT -
Nimbus G).
4. For wastewater inputs, it is recommended that continuous
sources be monitored for nutrients using flow composited
daily samples. After a preliminary study to determine its
importance, non-point source information should be gathered
with at least two data sets, one for calibration and one
for verification. The non-point data should be coordinated
with appropriate land uses and the total non-point dis-
charge to a water body estimated from its total drainage
area.
5. Field scale experiments should be conducted with major per-
turbations to the system (e.g. addition of large quantities
of nitrogen or phosphorus), which are closely coupled with
calibration and prediction computations. During these
field studies a sufficient number of measurements of fluxes
and state variables would be made.
6. It is recommended that a number of "National Benchmark
Lakes" be established with full data sets, against which
all models might be tested ("standard solution").
7. Water quality standards which relate eutrophication varia-
bles to water use and impact should be developed. Dissol-
ved oxygen is an important secondary variable that should
be considered here.
8. It is recommended that a simple model calculation (e.g.
mass balance) always be used whether or not a complex
calculation is attempted. From a consultant's point of
view, an answer must be obtained within the allocated time
and money and complex model calculations often are not
guaranteed. It is also not clear whether the uncertain-
ties in the results are smaller or larger when one
compares results from simple and more complex models. (A
minimal sketch of such a simple calculation appears after
this list.)
9. Resources should be allocated for post-auditing of water
bodies after changes in the system are implemented. Staged
construction is recommended where uncertainties exist,
with a reanalysis of new data prior to continued con-
struction.
10. Increased use of computer graphics should be made for dis-
play of model output.
11. Computer based eutrophication models should be used only by
professionals in the field.
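As an illustration of the simple mass-balance calculation
recommended in item 8, the sketch below (in Python, with all
inputs assumed) computes the steady-state total phosphorus
concentration of a completely mixed lake from the balance
P = W / (Q + vs*A).

W = 5000.0     # phosphorus load, kg/yr (assumed)
Q = 50.0e6     # outflow, m3/yr (assumed)
A = 8.0e6      # lake surface area, m2 (assumed)
vs = 10.0      # apparent settling velocity, m/yr (assumed)

P = W / (Q + vs * A)                      # steady-state concentration, kg/m3
print(f"steady-state total P = {P * 1e6:.0f} ug/L")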
STATE-OF-THE-ART REPORT
of the
Hazardous Substances Committee
Tudor T. Davies - Chairman
Members
David Alexander
Thomas W. Gallagher
Richard C. Graham
John P. Lawler
W. Brock Neely
Yasuo Onishi
William L. Richardson
Phillip L. Taylor
James Tofflemire
Issue 1: Role of Models in Decision Making
The committee felt that modeling in the hazardous substance
area often is not a factor that comes into play in decision
making. On the other hand, it was also noted that managers
often readily embrace modeling results that can be best charac-
terized as "non defensible number generation." Concern was
expressed with respect to a proper balance between the role of
health effects and the role of good modeling. The committee
viewed modeling as one of the many tools that should be used
carefully in the decision making process, examples of which
process are illustrated in the two figures that follow.
Issue 2: Data Base
The committee felt that data available in the hazardous
substance area was totally inadequate and, in many ways, unre-
liable (e.g., the method of analysis is sometimes in error). In the
absence of good predictions of cohesive sediment migration,
adequate sediment procedures are needed. It was felt that input
load data generally provided highly variable estimates and that
the receiving water quality data base was meager and often in
error. There is a paucity of information on model parameters of
hazardous substances which depend on adsorption/desorption, par-
tition coefficients and cohesion coefficients of fine sediment.
[Figure 1. Decision-making in the regulation of toxic substances
(e.g., U.S. Government). The original flow diagram links
evaluations of toxicological data (animal pollutant
experimentation, fate in the human body), exposure data
(pollutant occurrence, fate in the environment), and
epidemiological data (medical data on exposed populations) by
NIEHS, EPA, and CDC; a social decision analysis; agency analysis
and presentation of options along a low-to-high risk and
control-cost scale, with economic impact analysis and regulatory
feasibility considerations; public opinion (hearings); and
legislative and cabinet liaison to the agency head's selection
of an option.]
[Figure 2. Environmental modeling management process. The
original flow diagram links input assessment (point and diffuse
sources, magnitude and location); modeling (diagnostic:
synthesis of information, emulation of existing conditions;
prognostic: simulation of expected responses); impact assessment
(routes of transport, biomass, compartment concentrations,
sinks; surveillance); experimentation (rates of transport,
transformation, and bioconcentration); health-ecological effects
assessment (dose-effect); socio-economic assessment
(cost-benefit); and decision making (regulation: reduce existing
discharges, prevent manufacture, allow some manufacture, alter
environmental goals), with comparison of expected and allowable
dose and alteration of inputs.]
It was suggested that workers in this area need to take advan-
tage of the available radionuclide data base.
Issue 3: Time and Space Scales; Kinetic Detail; Cost Effect-
iveness
Selection of a modeling framework is quite dependent on the
problem being addressed and must include considerations of the
impacts on both man and the biota. The spatial scale of the
model would extend over that distance within which the toxicant
would have an impact on man and the biota and this would be a
function of the operative mechanisms. Temporal scales would
vary from days to several years, depending on the specific
kinetics and transport, the seasonal effects (wet vs. dry
weather) and the type of release (instantaneous vs. continuous
release). Major kinetic mechanisms that are to be examined in
the modeling of hazardous substances include sediment inter-
actions with the toxicants, chemical changes and bioaccumulation.
The committee felt that a standard validation procedure
should be used to check computer software and that, to the
extent possible, analytical solutions should be made available.
It was noted that computations should be performed in consistent
units such that mass balances could be performed. Second party
review was also recommended for validation as well as use of
another computer code which had been applied to similar problems.
Issue 4: Parameter Estimation
In estimating model parameters, the committee felt that
measurements for chemical and physical mechanisms should be
performed first. After field measurements for transport para-
meters are accomplished, laboratory measurements of the physical
and chemical properties of the hazardous substance are made,
related to the various variables upon which a specific parameter
depends, and translated to field conditions. Laboratory rates
of hydrolysis, oxidation, evaporation and photolysis are
related directly to field conditions, whereas sorption/desorption
kinetics are dependent on the nature, size distribution and type
of solids in the natural system.
Translation of laboratory data on biological parameters to
field conditions is an order of magnitude more difficult than
for the physical/chemical rates and workers should expect to
perform field measurements. Laboratory measurements, however,
are useful in partitioning uptake pathways.
Credibility of the selected parameters is established
through sensitivity analysis, by comparison with scientifically
defensible ranges of values of the parameters and by selection
of parameters using means independent of the model.
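As a minimal illustration of how such laboratory-derived
quantities might be combined, the sketch below (in Python, with
all rates and coefficients assumed) sums first-order loss rates
into an overall decay rate and uses a partition coefficient to
split a toxicant between dissolved and solids-sorbed phases.

k_hydrolysis = 0.002       # 1/day (assumed)
k_photolysis = 0.010       # 1/day, near-surface value (assumed)
k_volatilization = 0.030   # 1/day (assumed)
k_biodegradation = 0.005   # 1/day (assumed)
k_total = k_hydrolysis + k_photolysis + k_volatilization + k_biodegradation

Kd = 2.0e4       # partition coefficient, L/kg (assumed)
ss = 20.0e-6     # suspended solids, kg/L (20 mg/L, assumed)
f_dissolved = 1.0 / (1.0 + Kd * ss)   # fraction of total toxicant in dissolved form

print(f"overall k = {k_total:.3f}/day, dissolved fraction = {f_dissolved:.2f}")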
Issue 5: Measures of Verification
In the calibration and verification of models, the commit-
tee reported that comparison of model output to observed data is
a basic mandatory procedure together with checks on the conser-
vation of the total mass. Credibility of the model is judged by
examining whether recognized mechanisms and parameters are used,
the degree of reliance on observed data to establish parameter
estimates and the history of success of the model or similar
models. In addition, the relative confidence in each of the
model elements, vis-a-vis both mechanisms and parameters, is
used to judge overall model credibility.
The committee felt that a set of statistical techniques
should be used to quantify model credibility with neither blind
rejection of the model, if the statistical criteria are not met,
nor blind acceptance of the model if the criteria are met.
Issue 6: Use of Models as Projection Tools
The selection of the projection conditions for modeling of
hazardous substances incorporates the following considerations:
problem definition, acceptable level and frequency of risk, the
level of "insurance" a client is willing to pay for, chronic vs.
short-term effects and existing vs. proposed chemicals. It was
noted that these substances, unlike others such as dissolved
oxygen, temperature, salt, etc., have been introduced by man, and
a different type of criteria than that used for other substances may
be appropriate and should be developed. It was also noted that
tools for comparable evaluations of the impacts of various haz-
ardous substances or of the effectiveness of various control
strategies should be available.
The committee felt that models of hazardous substances
could describe incremental water quality changes under future
design conditions, at least to the level of an overall macro-
scopic mass balance.
With respect to model credibility for systems where data do
not exist (physically non-existent system), the committee felt
that adequate results were possible if models included recognized
mechanisms and parameter values, whose relative confidence was
known, and the models had a history of success being applied to
the specific, or similar, problems.
RECOMMENDATIONS
of the
Hazardous Substances Committee
The Committee recommends that another workshop be reconven-
ed with plenary lectures given by USEPA personnel on the state
of their mass balance models for hazardous substances. The
workshop should be directed toward recommendations on how to
improve the macroscopic scale models, i.e. make them more
microscopic. This may entail three or four pilot studies where
sufficient data would be obtained on a few chemicals - including
radionuclide tracers - so that micro-scale models can be cali-
brated, verified and post-audited.
HOW A PROGRAM MANAGER USES WATER QUALITY MODELS
BY
Louis A. Beck, Director
San Joaquin Valley Interagency Drainage Program
1490 West Shaw Avenue, Suite F
Fresno, California 93711
The program manager is one link in the decision-making
process. He may only make recommendations to a funding entity,
but his evaluation of alternatives and selection of a recommended
action have a strong influence in the decision-making process.
He can use water quality models for two purposes: 1) as a
planning tool to compare alternatives; and 2) as evidence in
support of recommendations. The same modeling information may
be used for both purposes. The program manager's understanding
of the modeling information will determine how well he is able
to support his recommendation when questioned.
In planning, models are only one source of information when
comparing alternatives. There may be projections of population
or of water resource development. There will be calculations of
costs and benefits, and an economic analysis which may include
predicted rates of inflation. All projections are uncertain to
some degree. Population projections may err numerically or in
distribution patterns. Cost estimates often have a contingency
factor of 25 or 30 percent. Benefits may err with respect to
both production and prices. But estimates must be made, then
detailed calculations are performed on these estimates.
Seldom is a water quality model the one key element in the
planning process. The relative worth of all the inputs must be
considered. If the water quality model results are considered
75 percent reliable, and cost estimates and other environmental
effects have a 90 percent reliability, the latter would be given
more weight.
When modeling results are used to support the program
manager's recommendations, he would like very positive and un-
equivocal information, but he knows there will be qualifications.
There must be communication between the program manager and the
modelers. The modelers should not hesitate to inform the pro-
gram manager of any shortcomings they perceive in the model or
concerns that they may have. He does not want to learn of these
shortcomings from someone else during a public hearing.
The program manager will feel more confident with his rec-
ommendations if a sensitivity analysis of the model results can
be done for him. When he knows that the variation of a para-
meter over a wide range results in a small variation in model
results, he will feel more comfortable.
In summary, the program manager wants as much information
as possible that is relevant to a recommendation. The modeler
should communicate extensively with the program manager and not
assume that the program manager is well versed on modeling. It
is the modeler's responsibility to inform him as thoroughly as
possible.
WATER QUALITY MODELS FOR BACTERIA AND VIRUSES
By
Raymond P. Canale
Professor of Civil Engineering
University of Michigan
Introduction
This discussion concerns the modeling of bacteria and
viruses in the context of the six "issues" raised by the
organizers. The emphasis will be on modeling of indicator
bacteria because we have had limited experience (especially of a
practical nature in actual field situations) with virus problems.
Issue #1 - Role of Models in Decision Making
Unfortunately, far too often administrators do not use model
output to arrive at rational wastewater management decisions.
Even more tragic is the fact that frequently these same admini-
strators financially support the development of models and then
fail to use them. These kinds of experiences erode the credi-
bility of the only quantitative technology which can respond to
environmental impact problems in a rational fashion. To avoid
these difficulties it is important that modelers and administra-
tors cooperate during the initial phases of a project and jointly
define the uses of the model in terms of water quality issues.
This should lead to proper model design with regard to time
scales, space scales, and kinetic and hydraulic complexity. Un-
fortunately, in many cases the water quality issues are forced
to fit the framework of readily available "shelf" models rather
than the other way around as suggested above. As a result the
models are either too simple to address the real problems, or
worse, too complex to interpret.
A second major deterrent to effective use of models by
administrators concerns the degree of confidence both modeler
and administrator have in the ability of the model to represent
the real system. Model design should respond to the individual
characteristics of each lake or river in the simplest possible
manner and avoid unnecessary mathematical complexity. The use
of general, "off the shelf" models should be avoided. The result-
ant models should then be carefully calibrated and verified prior
to application.
Issue #2 - Data Base
A critical factor related to model use by administrators
concerns the availability of adequate data for model calibration
and verification. Station locations and sampling frequency
should be compatible with the time and space scales of the
model, which is in turn related to model use. It is necessary
that the water quality response of the system be measured during
periods when all pollutant sources are known through simultaneous
measurement. Sampling stations should be located to simplify
estimation of model coefficients during calibration. It is
also important to coordinate measurement of other variables
during sampling such as advective flow patterns, dispersion
coefficients, temperature and boundary concentrations.
Normally routine monitoring data are of little value for
calibration or verification of models for indicator bacteria.
This is the case because monitoring programs do not normally
include measurement of all of the important variables and have
deficiencies with regard to both sampling frequency and location.
Thus, the value of monitoring data is normally limited to sur-
veillance purposes. Modeling programs which must rely on exist-
ing monitoring data are normally doomed to failure because of
these inherent limitations. When these restrictions are imposed
it is difficult to properly design, calibrate, or verify models.
These are the reasons models are not used by administrators.
Issue #3 - Model Detail
Probably the most important principle of model design is
that the framework of the model should be the simplest possible
that permits description and quantification of the important
phenomena operating in the lake or river. Thus linear die-away
kinetics are used unless nonlinear processes can be shown to be
important to the problem solution, despite the fact that real
systems involving indicator bacteria are nonlinear. It is
advisable to use simple hydraulic models when complex ones can
be avoided. For example, it is poor engineering practice to
construct models for instantaneous tidal velocity if only long-
term or time-averaged spatial concentration profiles are required
for problem solution in terms of the water quality issues.
Interactions with administrators are necessary to insure proper
model design. Complex questions for some problems such as
temporary pollution levels associated with combined sewer over-
flows during wet weather events may require construction of
complex time variable models, but simple models should be
designed if possible.
Issue #4 - Parameter Estimation
The kinetic coefficients for models of indicator bacteria
are best obtained on a site specific basis using a three step
procedure. First, a good intensive field sampling program is
conducted which is compatible with the model framework and has
simultaneous measurements of all the important variables and
processes such as the water quality response, loadings, flow
patterns, and dispersions. These results can be used in con-
junction with the model and a trial-and-error procedure to
estimate the bacterial die-away rate. Next this rate should be
compared with estimates obtained from bottle test studies. In
this test, bacterial concentration changes are followed over time
in a uniform mixture of the wastewater and the receiving water.
Finally, estimates of die-away rate should be compared to
literature values. For the case of indicator bacteria, kinetic
die-away rates are relatively well-known functions of temperature
and salinity. Figures 1 and 2 show the results of two indepen-
dent data summaries which define first-order bacterial die-away
rates. The results for freshwater are relatively similar despite
the different sources of data. Thus, because the kinetic frame-
work for indicator bacteria is normally quite simple, relatively
straight-forward and standard procedures can be used to estimate
the kinetic coefficients.
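As a concrete illustration of this three-step procedure, the short
Python sketch below fits a first-order die-away coefficient to a set
of downstream coliform observations and compares it with a
temperature-adjusted literature value. The travel times,
concentrations, and assumed temperature are hypothetical; the
comparison relation Kf = 1.40 x 1.07^(T-20) is the sea water curve
shown in Figure 2 (Mancini, 1978).

# Minimal sketch: estimate a first-order bacterial die-away rate (per day)
# from downstream coliform observations, then compare with a literature-
# based temperature adjustment.  All numbers are hypothetical.
import numpy as np

# Travel time (days) from the outfall to each sampling station, and the
# observed coliform concentrations (organisms per 100 mL) -- illustrative.
travel_time_days = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
observed_conc = np.array([20000.0, 11500.0, 6800.0, 4100.0, 2300.0])

# First-order die-away: N(t) = N0 * exp(-K * t), so ln N is linear in t.
# A least-squares fit of ln(N) versus t gives the slope -K.
slope, intercept = np.polyfit(travel_time_days, np.log(observed_conc), 1)
K_field = -slope                      # estimated die-away rate, 1/day (base e)
N0_fit = np.exp(intercept)            # fitted concentration at t = 0

# Literature comparison (sea water relation shown in Figure 2, Mancini 1978):
# K = 1.40 * 1.07**(T - 20), with T in degrees C.
T = 15.0                              # assumed water temperature, deg C
K_literature = 1.40 * 1.07 ** (T - 20.0)

print(f"Field-estimated die-away rate: {K_field:.2f} per day")
print(f"Fitted initial concentration:  {N0_fit:.0f} per 100 mL")
print(f"Literature rate at {T:.0f} C:     {K_literature:.2f} per day")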
Issue #5 - Verification
It is necessary to verify models for indicator bacteria
because during calibration laboratory die-away coefficients are
adjusted to accommodate settling and other phenomena which may
not occur in the bottle test. Thus, it is essential that the
model output be checked against field data for at least one in-
dependent set of conditions. It is advisable that more than one
independent survey be used if possible because measurement
techniques for indicator bacteria are not very precise. The
goodness of fit of the model compared to field data may be
evaluated using recent statistical methodologies proposed by
Thomann (1979). In addition it is advisable to perform sensitiv-
ity calculations to determine how the model output varies as a
function of changes in model coefficients and forcing functions.
It is important that the modeler have accurate estimates for the
most sensitive variables.
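The sensitivity calculations recommended above can be illustrated
with a minimal sketch; the steady-state plug-flow model, the
coefficient values, and the 10 percent perturbation used below are
assumptions for illustration only.

# Minimal sketch: one-at-a-time sensitivity of a simple steady-state
# plug-flow bacteria model, N(x) = (W/Q) * exp(-K * x / U).
# All parameter values are hypothetical.
import math

def concentration(W, Q, K, U, x):
    """Coliform concentration at distance x for load W, flow Q,
    die-away rate K (1/day), and velocity U (miles/day)."""
    return (W / Q) * math.exp(-K * x / U)

base = {"W": 5.0e11, "Q": 500.0, "K": 1.0, "U": 10.0}   # hypothetical values
x_critical = 5.0                                        # miles to the beach

N_base = concentration(x=x_critical, **base)
print(f"Base concentration at x = {x_critical} mi: {N_base:.3g}")

# Perturb each coefficient by +10% and report the normalized sensitivity
# (percent change in output per percent change in the parameter).
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10
    N_pert = concentration(x=x_critical, **perturbed)
    sensitivity = ((N_pert - N_base) / N_base) / 0.10
    print(f"Normalized sensitivity to {name}: {sensitivity:+.2f}")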
Issue #6 - Use of Models as Projection Tools
Mathematical models for indicator bacteria should ideally
be used to evaluate the marginal costs and benefits associated
with incremental changes in water quality. As an example Figure
3 shows a plot of fecal coliform loading vs. the resultant fecal
coliform concentration as calculated by a model in critical
regions of Onondaga Lake. Cost of treatment can be related to
loading for various degrees of control of combined sewer overflow.
[Figure 1. Bacteria die-away rate as a function of temperature
(degrees C). Data sources: Meier and Gannon (1971), lab study;
Meier and Gannon (1970), field study; Michigan Water Resources
Commission fecal coliform and total coliform (1968); Scarce et
al. (1964), with chlorination, and with intermittent
illumination.]
[Figure 2. Bacteria die-away rate as a function of temperature
(Mancini, 1978). Sea water mortality rates follow
Kf = 1.40 x 1.07^(T-20); fresh water mortality rates are shown
from lab and field data.]
[Figure 3. Fecal coliform loading versus resultant fecal
coliform concentration in critical regions of Onondaga Lake.]
Although similar relationships have been developed between
loading and response for many other projects, an important
question concerns the accuracy of such calculations. Errors are
introduced into the projections because the model coefficients,
forcing functions, and initial conditions are not known per-
fectly. These types of errors occur even if the model structure
and mechanisms are assumed to be perfect (which is never the
case). Normally, these questions are addressed by sensitivity
analyses of the results to design conditions. A better approach
to this question would be to perform Monte-Carlo simulations
where all the uncertain values are varied in a random fashion
over the full range of uncertainty. Unfortunately this is a
costly procedure which is rarely used in engineering practice.
Research now being conducted at the University of Michigan is
examining the applicability of Kalman filtering techniques for
calculating directly the probable error associated with model
projections as a function of the error in the model coefficients
and forcing functions. This capability will in turn help deter-
mine in a quantitative manner the amount of data necessary to
verify model frameworks, coefficients and forcing functions. It
is hoped that this type of systematic approach to a problem which
has been formerly addressed mainly on an intuitive or judgment
basis will ultimately lead to models more useful to administra-
tors.
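A Monte-Carlo calculation of the kind described can be sketched as
follows. The simple die-away model and the assumed distributions for
the coefficients and forcing functions are illustrative and are not
drawn from any of the studies discussed here.

# Minimal sketch: Monte-Carlo propagation of coefficient and loading
# uncertainty through a simple steady-state die-away model.
# Distributions and parameter values are assumed for illustration.
import numpy as np

rng = np.random.default_rng(seed=1)
n_trials = 10_000

# Uncertain inputs: die-away rate K (1/day), loading W, flow Q, travel time t.
K = rng.normal(loc=1.0, scale=0.3, size=n_trials).clip(min=0.1)
W = rng.lognormal(mean=np.log(5.0e11), sigma=0.4, size=n_trials)
Q = rng.normal(loc=500.0, scale=50.0, size=n_trials).clip(min=100.0)
t = rng.uniform(low=0.4, high=0.6, size=n_trials)        # days

# Response at the critical location for each random draw.
N = (W / Q) * np.exp(-K * t)

lo, median, hi = np.percentile(N, [5, 50, 95])
print(f"Median projected concentration: {median:.3g}")
print(f"90-percent uncertainty band:    {lo:.3g} to {hi:.3g}")
print(f"Probability of exceeding an assumed target of 1e9: "
      f"{np.mean(N > 1.0e9):.2f}")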
REFERENCES
1. Mancini, John L., 1978, "Numerical Estimates of Coliform
Mortality Rates Under Various Conditions," Journal WPCF,
50, No. 11, p. 2477-2484.
2. Thomann, Robert V., 1979, "Measures of Verification,"
National Workshop on Verification of Water Quality Models,
EPA, in press.
VERIFYING A WATER QUALITY MODEL
By
Tze-wen Chi and Harold A. Thomas, Jr.
Meta Systems, Inc.
Cambridge, Massachusetts
This paper presents an example of verification of a
water quality model. To facilitate a consistent analysis, the
officials of EPA Region III implemented the Qual-II model to per-
form a waste allocation analysis of the Lower Kanawha River,
which had experienced low dissolved oxygen in summer and fall.
Oxygen consumption was thought to be caused by carbonaceous
BOD and nitrogenous oxidation; resident industries were the
major contributors of the organic nitrogen. The model was
calibrated with historical data, including 1973 conditions, which
had a stream flow slightly higher than 7Q10 and a nitrogenous
oxidation rate estimated at 0.04/day. An intensive survey was
conducted in 1974 to obtain data for model verification. In
this survey some interested parties suggested that the normal
second stage of biochemical oxidation of organic nitrogen did
not occur in these reaches and thus did not need to be removed
in treatment plants prior to discharging in this locality.
After analyzing the data and carrying through a variety of
statistical tests, we came to the conclusion that the suggestion
mentioned above was without merit; it was not supported by the
available data for the Kanawha River. We find that parameter
values predicted for the Qual-II model were justified. The
salient points of this conclusion are summarized below.
1. The size, shape and flow characteristics of the
Kanawha River in the reaches near Charleston, West Virginia are
much the same as reaches in many other navigable rivers
originating in the Appalachian Mountains in which nitrogenous
wastes are discharged and are subsequently oxidized in the
stream, with the production of nitrate nitrogen. In fact,
some of these streams such as the Ohio and Potomac Rivers, were
among those in which the normal course of the nitrogen cycle in
streams was first demonstrated. Data pertaining to reaction
velocity constants for nitrification compiled in standard
sanitary texts and handbooks, which have been used for many years
are based in substantial part upon extensive stream surveys of
this group of rivers.
2. It is known from many laboratory experiments — bottle
tests and model streams - that biochemical oxidation of ammonia
and common organic nitrogen compounds to nitrite and nitrate
nitrogen will occur if favorable conditions are present. These
include: (1) presence of appropriate seeding of autotrophic
microorganisms, including Nitrosomonas and Nitrobacter and
similar bacteria; (2) adequate time for the reaction to occur;
(3) a benign ambient aquatic environment — pH, dissolved
oxygen, temperature, ionic strength, etc. Moreover, it is known
that nitrification may be inhibited or precluded by unfavorable
conditions, such as the presence in active form of some heavy
metals (e.g., mercury, chromium and copper) and by the simul-
taneous occurrence of certain other biochemical processes at
high levels of activity that make the aquatic environment un-
suitable for nitrification. The latter form of inhibition may
occur in heavily polluted streams.
In the first phase of our analysis we examined hydrologi-
cal, hydrographic, and water quality data from past years per-
taining to the Kanawha River and also chemical and physical
data pertaining to wastes discharged to the river between
stations at River Miles 67.7 to 69.7. Our conclusion is that in
general stream conditions are favorable for nitrification at low
to moderate rates. No facts or factors were identified that
would lead us to conclude that nitrification would likely be
inhibited in these reaches.
In the second stage of our analysis we analyzed data
recently obtained regarding the fate of nitrogen compounds in
the stream with the view of delimiting as closely as possible
the amount of nitrogenous oxidation actually occurring in the
lower reaches of the Kanawha River under present conditions.
3. One relevant condition for believing that stream
nitrification occurs is laboratory demonstration that samples
of the nitrogenous wastes do in fact undergo the second
stage of biochemical oxygen demand in bottle tests under
controlled conditions. Data obtained by EPA Region III analysts
amply demonstrate that wastewater samples do oxidize with the
production of nitrates. Further, analysis of the laboratory
data indicates that the schedule and degree of nitrification is
typical of that found with many nitrogenous wastes. Least
squares fitting of the data yielded reaction velocity parameters
(0.10 to 0.25 per day; Napierian base) that fall within the
range found in numerous bottle and model stream tests. The
fact that the nitrogenous compounds of the waste being
discharged into the Kanawha River are oxidized under laboratory
conditions, of course, does not prove that such oxidation occurs
in the river, but it does support a presumption of stream
nitrification unless specific factors in the stream that inhibit
second stage oxidation can be identified.
4. The next phase of our investigation pertained to
analyses of sets of water quality data from river samples
collected twice daily from September 24th to October 3rd, 1974.
These 20 sets of data relating to dissolved oxygen and various
forms of nitrogen, together with information regarding river
runoff and water temperature, were analyzed in several ways to
assess the rate and extent of nitrification in the stream.
The reaches of the Kanawha River under consideration
extend from River Mile 73.7 at Chelyan Bridge to Mile 58.7 at
South Side Bridge. In this 15-mile stretch there are no
tributaries of consequence. The Marmet Lock and Dam is located
midway at Mile 67.7; this structure, which is just downstream
from major wastewater outfalls, speeds the vertical and horizon-
tal dispersion of wastes over the entire cross-section. A large
industrial plant and three small communities, Chesapeake, Belle
and Marmet, are located on the banks and discharge wastewater
effluents into the river. Wastewaters from Chesapeake and
Marmet receive primary treatment; Belle's wastewater is pro-
cessed in an extended aeration plant. All three municipal
treatment plants have design capacities of less than 300,000
gallons per day; their contribution to the total flux of
pollutants in the river is minor, and the amount of nitrate
nitrogen contributed by them is negligible. Wastes from the
industrial plant are released through several outfalls between
stations at River Miles 68.5 to 69.2. These wastes constitute
a major addition to the total flux of nitrogen compounds in
the Kanawha River. During the sampling period in late
September and early October industrial waste inputs from these
outfalls increased the total flux of nitrogenous wastes in the
stream by more than 50 percent. Data available to us indicate
that during the period of investigation the amount of nitrate
nitrogen discharged from the outfalls was very small in relation
to the amount of ammonia and organic nitrogen discharged.
During the sampling period water temperature averaged about
20°C with only minor variations. Flow rates in the Kanawha
River at this time averaged about 5600 cubic feet per second -
almost twice the design flow of 2890 cfs (7-day; 90 percent dry
year flow). The mean flow-through time of the 15-mile section
was about two days. Runoff was not uniform during the period;
higher discharges occurred toward the end of the sampling
period (September 30, October 2 and 3, 1974). Because of
irregularity of runoff rate, with concomitant fluctuations in
velocities and rates of longitudinal dispersion of the wastes,
it was not possible to assess accurately the rate of oxidation
of ammonia and organic nitrogen for each of the 20 sample sets.
Instead it was expedient to amalgamate the data and to compute
the average flux rate of the various forms of nitrogen over the
entire sampling period. In Table 1 flux rates are shown for
TKN, NH3, NO3, Org N, and total N. Input to the reach above the
industrial wastewater outfalls is calculated as the average
flux rates obtained from the water quality data, and stream
flow rates at stations at River Miles 73.7 and 69.7. Output of
the reach is calculated by similar information from stations at
Mile 61.0 and 58.5 below the waste outfalls. Average flux rates
calculated in this way together with the standard deviations in
flux rate from sample to sample are summarized as follows:
TABLE 1

                      Input           Output
                      10^3 lbs/day    10^3 lbs/day    Percent
                      (as N)          (as N)          Increase

Nitrate nitrogen      28.8 ± 2.3      32.1 ± 2.3        11
Ammonia nitrogen       3.5 ± 0.6      16.0 ± 0.7       357
Organic nitrogen       9.5 ± 1.2      18.5 ± 2.0        95
Total nitrogen*       41.8 ± 3.2      66.5 ± 4.3        59

*Organic and inorganic, but not including nitrite nitrogen
 for which no data were available.
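The flux rates in Table 1 combine measured concentrations with
stream flow; the arithmetic is sketched below using the standard
conversion 1 cfs x 1 mg/l = about 5.39 lb/day. The concentrations
shown are illustrative values chosen only to be roughly consistent
with the nitrate entries in Table 1, not the actual station data.

# Minimal sketch: mass flux (lb/day as N) from concentration and flow.
# Conversion: 1 cfs * 1 mg/l = 0.0283 m3/s * 1 g/m3, about 5.39 lb/day.
CFS_MGL_TO_LB_PER_DAY = 5.39

def flux_lb_per_day(flow_cfs, conc_mg_per_l):
    """Constituent flux in pounds per day."""
    return CFS_MGL_TO_LB_PER_DAY * flow_cfs * conc_mg_per_l

flow = 5600.0                 # average flow during the survey, cfs
no3_upstream = 0.95           # illustrative nitrate-N concentration, mg/l
no3_downstream = 1.06         # illustrative nitrate-N concentration, mg/l

flux_in = flux_lb_per_day(flow, no3_upstream)
flux_out = flux_lb_per_day(flow, no3_downstream)

print(f"Nitrate-N flux into reach:   {flux_in:,.0f} lb/day")
print(f"Nitrate-N flux out of reach: {flux_out:,.0f} lb/day")
print(f"Apparent gain in the reach:  {flux_out - flux_in:,.0f} lb/day")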
The observed increase in the flux rate of nitrate nitrogen,
based on 39(1) input and 39 output samples, is 3.3 x 10^3
pounds per day. Since input of nitrate nitrogen from the waste
outfalls in the reach during the sampling period was negligible,
the observed increase can only be due to (i) nitrification in
the stream, (ii) sampling error, or (iii) a combination of (i)
and (ii). On the hypothesis that the Kanawha River has nitrify-
ing characteristics similar to those of rivers of the same size
with similar pollution loading, one would expect a small but
significant amount of nitrate production in the 15-mile reach in
a two-day residence time. Such an increase could be measured
without difficulty in many situations. In the Kanawha River,
however, a large nonuniform flux of nitrogenous material from
antecedent pollution occurs as an input to the reach of concern
and obscures interpretation of the test results. This variable
flux from waste effluents of upstream municipalities and in-
dustries makes it difficult to calculate with precision the rate
and extent of oxidation of ammonia and organic nitrogen in the
(1) Two stations upstream and two downstream, giving 40 measurements,
with one deleted for incompleteness of data.
sections.(2)
is a stochastic variate distributed in accordance with a
Student's t frequency distribution with ν = 78(4) degrees of freedom.
E(u) denotes the population mean of the u-values. According to
the statistical theory, the variate, t, is a dimensionless
measure of the deviation of the observed mean from population
mean. It has an expected value of zero and a standard deviation
of 1.013. If many replicate series of 20 sample sets were
obtained for this reach of the Kanawha River under similar
conditions of flow, temperatures, and pollution loading; and if
a t-value were computed for each series of twenty samples, it
would be expected that the overall average value of t would be
close to zero. But owing to random sampling fluctuation, some
large values and some small (negative) values would occur.
Using tables of the student t-distribution it is possible to
calculate the probabilities (frequencies) associated with t-
values of different magnitudes. For example, the t-value of
0.47 calculated above under the hypothesis that E(u) = 1.8 x 10^3
pounds per day - a level based on a nitrification velocity
constant typically found in streams of this size and hydrologi-
cal class - would, according to the theory, be exceeded in
about one third of the hypothetical replicate sampling series
[Pr{t > 0.47} = 0.319]. Thus, a value of t of 0.47 (and ū =
3,300 lbs/day) is not at all incompatible with a true mean
u-value of 1,800 lbs/day.
If we now test as an alternative hypothesis the suggestion
that no biochemical oxidation of ammonia or organic nitrogen
occurs in the reach, a t value of 1.03 would be obtained:
t = (3.3 - 0) / [(14.1)^2/39 + (14.3)^2/39]^0.5 = 1.03
From tables of the t-distribution with 78 degrees of freedom,
it is found that a t value as large as 1.03 would occur on
the average only once in about six of the hypothetical repli-
cate sampling series [Pr{t > 1.03} = 0.153]. While this is not a
rare event, it is significant to note that the probability is
only about one-half the probability of the result obtained in
the first computation based on the hypothesis that E(u) = 1800
(4) ν = [(14.1)^2/39 + (14.3)^2/39]^2 /
        {[(14.1)^2/39]^2/(39+1) + [(14.3)^2/39]^2/(39+1)} - 2 = 78
pounds per day. Thus, while the data do not disprove the hypo-
thesis that nitrification does not occur, they are in fact more
nearly in accord with the hypothesis that it does occur and at
rates typical of impounded streams of this size.
It will be useful perhaps to restate the above inferences
in terms of the Neyman-Pearson statistical test and the concept
of Type I and Type II errors.(5) Under the hypothesis (call it
H0) that nitrification does not occur, the critical region for
rejection of H0 is

|t| ≥ t(α/2, ν) = t(.025, 78) = 1.99
when the Type I error is fixed at 5% (α = .05). Since the ob-
served t-value of 1.03 is less than 1.99, the null hypothesis
cannot be rejected. But if now we calculate the Type II error,
it is found to be large. Under the alternative hypothesis (H1,
say) that stream nitrification does occur, it may be shown that
with the rejection criterion computed for α = .05, a
very large Type II error of 0.92 (β = 0.92) is obtained. These
computations are summarized in the following compilation for two
different criteria for rejection (α = 0.05 and 0.20).

Type I Error (α)   Critical t-value   Power of Test (1-β)   Type II Error (β)
      0.05               1.99                 0.08                 0.92
      0.20               1.30                 0.27                 0.73
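These quantities can be reproduced from the flux statistics of
Table 1 (a mean gain of 3.3 and sample standard deviations of about
14.1 and 14.3, all in 10^3 lb/day, with 39 samples at each end of
the reach), as in the sketch below. The use of scipy's central and
noncentral t distributions is a convenience assumed here and is not
part of the original analysis.

# Minimal sketch: two-sample t statistics and power for the nitrate flux gain.
# Units are 10^3 lb/day as N; values follow Table 1 and the text.
import math
from scipy import stats

n = 39                       # samples at each end of the reach (one deleted)
gain = 3.3                   # observed mean increase in nitrate-N flux
s_in, s_out = 14.1, 14.3     # sample standard deviations of the flux series

se = math.sqrt(s_in**2 / n + s_out**2 / n)     # standard error of the gain

# Approximate degrees of freedom (footnote 4): about 78.
num = (s_in**2 / n + s_out**2 / n) ** 2
den = (s_in**2 / n) ** 2 / (n + 1) + (s_out**2 / n) ** 2 / (n + 1)
df = num / den - 2

for hypothesized_mean in (1.8, 0.0):   # "typical" nitrification, and none at all
    t = (gain - hypothesized_mean) / se
    print(f"E(u) = {hypothesized_mean}: t = {t:.2f}, "
          f"Pr(t exceeds) = {stats.t.sf(t, df):.3f}")

# Type I / Type II errors under the alternative E(u) = 1.8 (delta below).
delta = 1.8 / se                               # noncentrality parameter
for alpha in (0.05, 0.20):
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df)
    power = stats.nct.sf(t_crit, df, delta) + stats.nct.cdf(-t_crit, df, delta)
    print(f"alpha = {alpha:.2f}: critical t = {t_crit:.2f}, "
          f"power = {power:.2f}, Type II error = {1.0 - power:.2f}")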
At both levels of α the power of the test (1-β) is seen to be
low, and large Type II errors inhere. Evidently the additional
data obtained in September and October 1974 are not sufficiently
precise to be decisive on this issue.
(5) In the Neyman-Pearson test a critical region is selected in
advance of the test for rejection of the null hypothesis.
If the t-value, based on test results, falls in the criti-
cal region, then the null hypothesis is rejected. The Type
I error (α error) is the probability of rejecting
the null hypothesis in cases where the null hypothesis is
correct. The Type II error (β error) may occur when the
null hypothesis is accepted; it is the probability that the
test result will not fall in the critical region for re-
jection, as it should in cases where the alternative hy-
pothesis is true. Thus the β error is a measure of the
hazard of making an incorrect inference when the null
hypothesis is accepted.
But in view of the large errors it would be most unwise to
conclude that since the null
hypothesis cannot be rejected nitrification does not occur in
the stream. More data are needed for proof or disproof. In
further analysis it would be desirable to include consideration
of the entire nitrogen balance on the stream in the reaches of
interest. Needed data include more detailed information on
industrial wastewater releases.
Summary and Conclusions
Our study leads us to believe that nitrification occurs
in the Kanawha River at rates in the range typical of other
impounded streams of this size. Available data are not
sufficiently detailed to delimit precisely reaction velocities
under the varying ambient conditions in the river. Rates of
biochemical oxidation of ammonia and other nitrogenous wastes
are inherently unstable, and large sampling errors are inevita-
ble. Many factors determine the rate and extent of nitrogenous
oxidation. For example, on sunny days most of the ammonia
present in the stream may be absorbed directly in the metabolic
processes of algae, slowing or halting the production of
nitrites and nitrates. In Charleston, West Virginia, in a
typical year there are 56 clear days, 192 cloudy days and 118
partly cloudy days. Variations in rates of nitrogenous oxida-
tion from this source and from many other causes make it
difficult to define norms and to select appropriate input para-
meters for computer models that attempt to simulate the natural
processes of stream self-purification. These difficulties are
severe but they do not obviate the need to attempt to establish
reasonable estimates of average nitrification rates under stream
conditions expected in the future. It would be a serious
mistake to conclude from examination of a limited body of infor-
mation, from which the occurrence of nitrification cannot be
conclusively demonstrated, that nitrification is an insignificant
factor. We believe that it is a significant factor affecting
the oxygen balance of the stream.
TRANSPORT MODELS
BY
J. D. Ditmars
Energy and Environmental Systems Division
Argonne National Laboratory
The water resources group at Argonne has been involved in
the evaluation of specific types of transport models (both
mathematical and physical) in terms of prototype-scale, field
data. The environment has been primarily the Great Lakes and
the transport processes modeled include the near-source behavior
of cooling water discharges, whole-lake circulation of Lake
Michigan, and extreme nearshore circulation driven by wind and
waves. The following summary comments are drawn from the con-
text of these limited experiences and address, in part, the
issues of the data base, model detail, and methods of verifica-
tion.
The most useful data are those acquired specifically for
the evaluation of a given model(s). This would appear to be a
truism, but it is not clear that it is always appreciated. It
is our experience that familiarity with and planning with
regard to the specific nature of model inputs, processes, and
outputs are essential to the acquisition of data useful for
evaluation purposes. Data gathered in monitoring activities,
for regulatory or operational purposes, are usually inadequate
substitutes. While monitoring data provide time-series records
at a few points in a flow system, they often lack the synoptic
scope required for model evaluation.
Measurements for evaluation purposes should include
measurements of variables that may affect the transport process
even though those variables may not be accounted for explicitly
in the model. Our studies of thermal plume models and measure-
ments of plumes in the Great Lakes have indicated this to be
particularly important with regard to ambient conditions.
Models of thermal plumes rarely are able to account for spatial
and temporal variations in the ambient environment, and model
inputs often reflect uniform and steady conditions. Measurements
by Argonne of a power plant thermal plume on two different
occasions, but under identical discharge, ambient stratification,
wind, and depth-averaged ambient current conditions, indicated
near-surface isotherm areas were substantially different. In-
vestigation of measurements of the vertical structure of the
current showed that the current was nearly uniform in one case
and sheared in the other. As model input parameters could not
account for this variability, calibration of the model against
one set of data would surely lead to poor performance against
the other set. In this particular case, had the ambient current
measurements been limited to a single current meter at middepth,
the poor performance might have been attributed to some other
model parameter.
Despite our desire for an objective standard upon which to
determine "verification," we have found no simple quantitative
measure. In fact, we often find ourselves discussing model-
data comparisons in such subjective terms as circulation
patterns or plume shapes. The output simulations of four
numerical hydrodynamic models for Lake Michigan were compared,
at Argonne by Allender, with time-series current observations
at fixed points in the lake. Quantitative comparisons included
real-time graphs for fixed locations; power spectra; lake-wide
plots of average motion; progressive vector diagrams; cumulative
scalar- and vector-averaged currents at fixed locations; hori-
zontal scalar averages at various depths; and Fourier norms.
The major failing of the models was their inability to simulate
time-series currents at a fixed location, yet the simulation of
circulation patterns appeared to agree "not badly" with patterns
inferred from fixed and satellite observations.
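A few of the quantitative comparisons listed above can be sketched
for a single current-meter record as follows. The "observed" and
"simulated" series are synthetic, and the measures computed
(vector-averaged currents, component root-mean-square errors, and a
crude Fourier norm) are illustrative choices rather than Allender's
actual procedures.

# Minimal sketch: simple skill measures for simulated vs. observed currents
# at a fixed mooring.  The time series here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(seed=7)
hours = np.arange(0, 30 * 24)                       # 30 days, hourly

# Synthetic east (u) and north (v) components, cm/s.
u_obs = 5.0 + 15.0 * np.sin(2 * np.pi * hours / 90.0) + rng.normal(0, 3, hours.size)
v_obs = 2.0 + 10.0 * np.sin(2 * np.pi * hours / 90.0 + 0.5) + rng.normal(0, 3, hours.size)
u_sim = 4.0 + 12.0 * np.sin(2 * np.pi * hours / 110.0)
v_sim = 1.0 + 11.0 * np.sin(2 * np.pi * hours / 110.0 + 0.4)

def vector_average(u, v):
    return u.mean(), v.mean()

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def spectral_norm(a, b):
    """L2 norm of the difference of amplitude spectra (a crude Fourier norm)."""
    return np.linalg.norm(np.abs(np.fft.rfft(a)) - np.abs(np.fft.rfft(b))) / a.size

print("Vector-averaged current, observed:  (%.1f, %.1f) cm/s" % vector_average(u_obs, v_obs))
print("Vector-averaged current, simulated: (%.1f, %.1f) cm/s" % vector_average(u_sim, v_sim))
print("RMSE of u, v components: %.1f, %.1f cm/s" % (rmse(u_obs, u_sim), rmse(v_obs, v_sim)))
print("Fourier-norm style measure (u): %.2f" % spectral_norm(u_obs, u_sim))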
RECOMMENDATIONS TO IMPROVE THE USE OF
MODELS IN DECISION-MAKING(1)
By
Anthony S. Donigian, Jr.
Anderson-Nichols
Palo Alto, California
Modeling techniques have been applied in a variety of
subject areas in addition to urban water planning. Recent
surveys have discussed the use of modeling in planning (Lee
1973), environmental decision-making (Holcomb Research Institute
1976), and social-human decision-making (Fromm, et al. 1975).
Interestingly, the major obstacles to greater use and impact of
modeling on decision-making are the same for these topics and
for urban water planning. They are:
data availability
modeler-decision maker interaction
model documentation
understanding model assumptions and limitations
For urban water systems, modeling is an area where
technology is far ahead of our ability to apply it. Although
areas such as ecologic modeling, nonpoint pollution, sediment/
solids transport, and water-related economic impacts require
further research, current models are limited largely by the data
available to calibrate, verify, and apply them. In a survey of
project directors and monitors of federally supported modeling
activity in social-human decision making, Fromm, et al. (1975)
note that data availability was the limitation most often
mentioned. Sonnen, et al. (1976) provide a similar conclusion
for urban runoff modeling. From our questionnaire survey
results, 48 percent of the respondents recommended additional
data collection to improve urban water planning while an even
larger percentage (57 percent) of the model-using agencies so
responded. In each of our case studies data collection efforts
accompanied the modeling work.
(1) Excerpted from "Planning and Modeling in Urban Water
Management" by A.S. Donigian Jr. and R.K. Linsley. Hydro-
comp, Inc., Palo Alto, CA. Prepared for OWRT. Contract
No. 14-34-0001-6222. October 1978. 158 pg.
Although data requirements for modeling depend on the
specific water problem and the model being used, Table 1
summarizes the general categories of needed data and provides
examples of each. Obviously all models do not require all
types of data shown in Table 1, and the examples listed are
most directly applicable for simulation models which are- the
most frequently used types. However, the amount of data re-
quired will vary considerably, from information on a single
storm event to continuous data for many years depending on the
type of analysis performed and the specific information needed
for informed planning.
Observed quantity/quality data for streams and receiving
waters are most often lacking, and water quality data collection
is especially expensive. Data from nearby or similar watersheds
may be used in some cases assuming that conditions are
similar in both areas. Since the data are used to evaluate
model parameters and calibrate the model to local conditions,
some local site-specific data are usually needed to insure
model accuracy.
The Holcomb Research Institute (1976), in studies for the
United Nations Scientific Committee on Problems of the Environ-
ment, found that successful model applications in terms of im-
pact on decision-making usually involved extensive interaction
between modelers and decision-makers. This communication is
difficult to establish due to differing backgrounds, objectives,
and reward systems. It is a necessary link if the model is to
provide the information that the decision-maker needs for the
specific problem he is facing. Although our case studies always
included users (modelers) within the agency, this did not always
insure that the modeling impacted the decision-making process.
The Holcomb report recommends (1) regular meetings between
modelers and decision-makers to insure agreement on problem
definition and needed information, (2) graphics displays to
communicate modeling results in semi-technical or non-technical terms,
and (3) "policy analysts" with knowledge of both modeling and
decision-making processes to further insure effective communi-
cation. Our analysis confirms these recommendations.
Many more models are developed than are actually applied
for their intended purposes. Although no reliable data
exists, Fromm, et al. indicates that "...at least one-third and
perhaps as many as two-thirds of the models failed to achieve
their avowed purposes in the form of direct application to
policy problems." (p. 4, 1975). We would expect similar
results for urban water models. Lack of adequate documentation
is a major obstacle, especially when models are developed for
outside (non-developer) users. Often funds are not allocated or
are depleted before documentation is developed. Models are
often released without sufficient testing and incomplete or
insufficient documentation to decipher program "bugs" that
TABLE 1
DATA NEEDS FOR MODELING URBAN WATER SYSTEMS

Category: Watershed/System Data
Examples: Topographic maps, land use, soils characteristics, pipe
network description (length, slope, roughness), reservoir operation,
pumping schedules, drainage description, channel dimensions, storage
capacities.

Category: Meteorologic Data
Examples: Precipitation (storm events and/or many years), pan
evaporation, maximum and minimum air temperature, other (wind, solar
radiation, etc.) as needed.

Category: Observed Water Quantity/Quality Data
Examples: Streamflow, lake levels, tide levels and cycles, bay/estuary
circulation patterns, concurrent water quality data (for all
constituents of interest), urban runoff data (quantity/quality).

Category: Economic Data
Examples: Water use (residential, commercial, industrial), flood
depth-damage information, construction cost, treatment cost, O & M
costs, recreational use, interest rates.
invariably occur. University researchers in modeling usually
publish theses, articles, and reports without recognizing the
need for user manuals. Government developed models may or may
not include user manuals, but continuing support, user assis-
tance, and program maintenance is almost nonexistent. Models
developed on one computer system may not run on another system
(even with the same model of computer) without program modifica-
tions. Thus, adequate documentation should include discussion
of theory, assumptions, limitations, and extent of testing/
verifying the model, in addition to basic information on program
structure, operating instructions, and compatibility with other
computer systems. Supplementary information would include data
requirements, data sources, and guidelines for evaluating model
parameters and analyzing model results.
Lack of understanding of model assumptions and limitations
is a major obstacle to greater use and impact of modeling
especially for decision-makers and non-developer model users.
Obviously this is a part of the modeler/decision-maker inter-
action discussed above, but focuses directly on the model and
what it represents. A model is a representation of reality, not
a one to one map; it is based on a series of assumptions within
which a problem is analyzed. Sonnen, et. al. states that a
modeler attempts "...to approximate a solution to a theoretical
problem with both an approximation of the theory and an approxi-
mation of the prototype water body. The model user or the user
of the model's results views his problem, and the theoretical
statement, as precise and infinitesimal." (p. 59, 1976).
These conflicting views between the modeler and the user
must be resolved if modeling is to be used effectively. The
reports by Sonnen, et al., Holcomb Research Institute, and Fromm,
et. al. agree that the most successful model applications often
include the model developer as the user. In seven of our eight
case studies the agencies either developed model components or
were assisted by the model developers. Involvement in the
model development process requires the user to be acutely aware
of model assumptions and limitations, and thus allows him to
most effectively interpret and analyze the model output. With-
out this understanding, the model may be analyzing a problem
different from, or considerably simpler than, the one the
decision-maker faces.
A related obstacle results from the impression that in a
modeling study the model is the sole guide to decision-making.
It must be kept in mind that "...a model is a means to an end,
and not an end in itself." (Lee 1973, p. 19). The real
analysis begins when the model run is finished because the
interpretation of the modeling results is the critical step in
the modeling study. Even with optimization models which pro-
duce a so-called "optimal" plan, project design, or policy, the
final recommended plan will likely be different from the
optimal plan because of the many other inputs to the decision-
making process.
From the questionnaire survey, the case study investiga-
tions, and our analysis of the urban water modeling field, the
following recommendations are extended to potential model
users and decision-makers to help improve the use and effective-
ness of modeling in decision-making:
(1) Require a detailed definition of the problem and the
specific information to be provided by the model. The
problem definition may be part of the modeling study.
Otherwise, the problem should be clearly defined and
the type of information provided by the model and the
analyses to be performed should be specified.
(2) Develop close modeler-decision-maker interaction.
Throughout the modeling and analysis phases of a
modeling study, regular meetings between the modelers
and decision-makers should help to develop effective
two-way communication. If modeling consultants are
employed, a staff person should be assigned as liaison
between consultant and decision-maker and be
thoroughly familiar with the model and the modeling
application.
(3) Use existing models to the extent possible. A review
of available models should be conducted to evaluate
appropriate models. Consultants may assist in this
process but the final choice should depend on the
user's understanding of the problem, the model
assumptions and limitations, and the analyses to be
performed. Model development may be required if no
models appropriate to the problem are available. How-
ever, modifications to an existing model are usually
more cost-effective than developing a completely new
model.
(4) Require adequate documentation. Adequate documenta-
tion includes discussion of theory, assumptions,
limitations, and extent of testing/verification in
addition to basic information on program structure,
operating instructions, and compatibility with various
computer systems. Supplementary information includes
data requirements, data sources, and guidelines for
evaluating model parameters and analyzing model
results. If the model is being developed or modified,
installation on the user's computer facility should
be required.
(5) Require complete delineation of model assumptions and
limitations. The way the model represents the system,
including simplifying assumptions and resulting
limitations, must be clearly explained so that the
decision-maker understands what the model can and
cannot do.
(6) Integrate modeling and data collection efforts. Most
modeling applications will require some data collect-
ion often involving monitoring, sampling, and
analysis. Such programs should be integrated with the
modeling so that the specific data required by the
model is supplied.
If data requirements and/or collection are extensive,
a data management (computer-based) system may be
needed to efficiently store, retrieve, verify, and
prepare the data in a form suitable for modeling.
(7) Emphasize analysis and preparation of model results.
Sufficient resources must be allocated to the
analysis of the model results to insure that the
information produced is usable by decision makers.
This is the key step that determines whether or not
the modeling results will impact the final plan,
project, or policy recommended.
REFERENCES
(1) Fromm, G., W.L. Hamilton, and D.E. Hamilton. 1975.
Federally supported mathematical models: survey and
analysis. Research Applied to National Needs. National
Science Foundation. Washington, D.C. 293 p.
(2) Holcomb Research Institute. 1976. Environmental modeling
and decision making: the United States experience,
Prepared by the United Nations Scientific Committee on
Problems of the Environment. Praeger Publishers. New
York, New York. 152 p.
(3) Lee, C. 1973. Models in planning: an introduction to
the use of quantitative models in planning. Urban and
Regional Planning Series Volume 4. Pergamon Press Ltd.
Oxford, England. 142 p.
(4) Sonnen, M.B., L.A. Rosener, and R.P. Shubinski. 1976.
Future direction of urban water models. EPA/600/2-76/058,
Office of Research and Development, U.S. Environmental
Protection Agency. Cincinnati, Ohio. 93 pp.
SALINITY MODELS APPLIED IN THE ARID WEST
By
William J. Grenney
Utah State University
Logan, Utah
Over 80 percent of the flow of the Colorado River originates
from snowmelt and rain in the high mountain watersheds. This
high quality water accumulates total dissolved solids (TDS,
salinity) rapidly as it flows into and through the lower eleva-
tion arid regions of the basin. For example, TDS in the Price
River, Utah at a flow of 50 cfs has been observed to increase
from 300 mg/l to over 3000 mg/l in a distance of 50 miles
(Dixon, 1978).
Traditionally, the extensive deposits of Mancos shales have
been considered the prime source of TDS in the Upper Colorado
River Basin. They are marine deposited shales intermixed with
lenses of sandstone and limestone. Soils derived from them are
typically saline. Several processes are thought to contribute
to the increase in TDS concentrations as the water flows down-
stream: 1) a portion of the stream water moves from the channel
into the salty alluvium and back into the channel again; 2)
groundwater percolating through salty formations; 3) irrigation
return flow (surface, interflow, deep percolation); 4) efflor-
escence in intermittent stream channels which is periodically
flushed out by thunderstorms; 5) salt pick-up by overland flow
(sheet flow and flow in rills); 6) salt associated with eroded
soil; 7) reservoir evaporation; 8) evapotranspiration (agricul-
tural and natural); 9) export of high quality water out of the
basin; 10) industrial, energy, and municipal development. The
precipitation of salt has been observed in reservoirs and some
stream channels.
The characteristics of a variety of models applied to
salinity problems in the arid West are summarized in the Table.
The purposes of these models have been: 1) to evaluate impacts
of energy and agricultural development on downstream salinity
and; 2) as research tools to identify and quantify TDS contri-
butions from the various processes.
MODEL CHARACTERISTICS

Reference: Hyatt, et al., 1970
Model type: Watershed, deterministic
Time increment: Monthly
Constituents: Total dissolved solids (TDS) (conservative)
Model processes: Agricultural runoff and interflow (crop consumptive
use, irrigation efficiency), natural system surface runoff, ground-
water flows, interchange between surface and groundwater.
Application: 38 subbasins of the Upper Colorado River Basin.

Reference: Thomas, et al., 1971
Model type: Watershed with soil column under agricultural land;
deterministic processes
Time increment: Monthly
Constituents: Ca, Mg, Na, SO4, Cl, HCO3 (nonconservative)
Model processes: Agricultural runoff and interflow (crop consumptive
use, irrigation efficiency), natural system surface runoff, interflow,
groundwater flows, specific ion precipitation and dissolution in the
agricultural soil profile.
Application: Little Bear River Basin, Utah.

Reference: Ribbens and Wilson, 1973; Colorado River Salinity Control
Forum, 1975
Model type: River system network, deterministic
Time increment: Monthly
Constituents: TDS (conservative)
Model processes: Each subbasin is represented as a node in a river
system network. The time-varying (monthly) effects of agriculture
(irrigation practices), energy development, natural systems, point
loads and diversions are input at each node and water and salinity
balances are conducted on the network. Reservoir operations.
Application: Lower Colorado River Basin.

Reference: Utah State University, 1974a, b, c
Model type: In-channel, deterministic water quality model
Time increment: Steady-state during critical flow periods
Constituents: TDS (conservative)
Model processes: Point loads and diffuse sources along the river
system.
Application: Bear, Virgin and Sevier River Basins in Utah.

Reference: Hill, et al., 1973; Hill, et al., 1975; Utah State
University, 197?; Israelsen, 1979
Model type: Watershed, deterministic
Time increment: Monthly
Constituents: TDS (conservative)
Model processes: Agricultural runoff and interflow (crop consumptive
use, irrigation efficiency), energy development, natural system
surface runoff, groundwater flows, interchange between surface and
groundwater.
Application: Bear River Basin; San Juan River Basin; Palo Verde,
California; Sevier River, Utah.
Reference: Utah State University, 1975
Model type: River system network, deterministic
Time increment: Steady-state at specified flows and boundary conditions
Constituents: TDS (conservative)
Model processes: Each subbasin is represented as a node in a river
system network. The effects of agriculture (irrigation practices),
energy development, natural system, point loads and diversions are
input at each node and water and salinity balances are conducted on
the network.
Application: Colorado River Basin (Wyoming to Imperial Dam,
California).

Reference: Narasimhan and Israelsen, 1975
Model type: Watershed, deterministic
Time increment: Monthly
Constituents: Ca, Mg, Na, SO4, Cl, HCO3, NO3 (nonconservative)
Model processes: Agricultural runoff and interflow (crop consumptive
use, irrigation efficiency), natural system surface runoff, ground-
water, and interchange between surface and groundwater. Specific ion
precipitation and dissolution in the agricultural soil profile.
Application: Twin Falls Tract, Idaho.

Reference: White, 1977
Model type: Empirical nonpoint in-channel loading functions
Time increment: Annual
Constituents: TDS (conservative); suspended sediment
Model processes: Salt release to in-channel flows for intermittent
flows. Salt-sediment interaction.
Application: Price River Basin, Utah.

Reference: Jurinak, et al., 1977
Model type: Watershed, stochastic rainfall linked to deterministic
processes
Time increment: Daily
Constituents: Ca, Mg, Na, K, Cl, SO4, P, carbonates, Zn, Cd, Pb, Cr, Hg
Model processes: Stochastic rainfall. Soil chemistry equilibrium
reactions for specific ions. Erosion of soil with associated specific
ions. Water transport of specific ions by overland flow, infiltration,
and soil transport.
Application: Fremont River Basin; Dirty Devil River Basin; Colorado
River Basin; Escalante River Basin; San Juan River Basin.

Reference: Ponce and Hawkins, 1978
Model type: Empirical nonpoint overland flow loading functions
Time increment: Annual
Constituents: TDS (conservative)
Model processes: Salt release to overland flow over Mancos shale
soils.
Application: Price River Basin, Utah.

Reference: Dixon, 1978
Model type: In-channel, deterministic
Time increment: Hourly
Constituents: TDS (conservative)
Model processes: Unsteady flow routing in the channel. Natural system
salinity pickup from in-channel processes.
Application: Coal Creek, Price River Basin, Utah.
Reference: Chadwick, 1978
Model type: Watershed; stochastic rainfall linked to deterministic
process model
Time increment: Variable; about hourly during rain events
Constituents: TDS (conservative)
Model processes: Stochastic rainfall, interception, infiltration,
evapotranspiration, salt release from natural sources to overland
flow and to in-channel flow.
Application: Coal Creek Basin and the Price River Basin, Utah.

Reference: Malone, et al., 1979
Model type: River system network, probabilistic
Time increment: Steady-state at specified flow conditions
Constituents: TDS (conservative)
Model processes: Each subbasin is represented as a node in a river
system network. The effects of agriculture (irrigation practices),
energy development, natural system, point loads and diversions are
input at each node and water and salinity balances are conducted on
the network. Uncertainties (variances) associated with salinity
sources are also input at the nodes and propagated through the
system.
Application: Colorado River Basin (Wyoming to Imperial Dam,
California).
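The node-by-node water and salinity balance that recurs in the
river-network models above can be sketched as follows; the nodes,
flows, diversions, and TDS values are invented for illustration and
do not represent any basin listed in the table.

# Minimal sketch: steady water and salt (TDS) balance through a chain of
# river-network nodes.  Inflows, diversions, and concentrations are invented.
# Salt load (tons/day) is roughly 0.0027 * Q (cfs) * TDS (mg/l).
LOAD_FACTOR = 0.0027

# Each node: local inflows [(flow_cfs, tds_mg_per_l), ...] and a diversion (cfs).
nodes = [
    {"name": "Headwater subbasin", "inflows": [(800.0, 300.0)], "diversion": 0.0},
    {"name": "Irrigated valley",   "inflows": [(60.0, 2500.0), (40.0, 1800.0)],
     "diversion": 150.0},
    {"name": "Energy development", "inflows": [(10.0, 4000.0)], "diversion": 30.0},
]

q_us, load_us = 0.0, 0.0            # flow (cfs) and salt load (tons/day) entering
for node in nodes:
    q = q_us
    load = load_us
    for flow, tds in node["inflows"]:
        q += flow
        load += LOAD_FACTOR * flow * tds
    # A diversion removes water and salt at the fully mixed concentration.
    if q > 0.0 and node["diversion"] > 0.0:
        conc = load / (LOAD_FACTOR * q)
        q -= node["diversion"]
        load -= LOAD_FACTOR * node["diversion"] * conc
    conc_out = load / (LOAD_FACTOR * q) if q > 0.0 else 0.0
    print(f"{node['name']:>20s}: Q = {q:6.0f} cfs, TDS = {conc_out:6.0f} mg/l, "
          f"load = {load:7.0f} tons/day")
    q_us, load_us = q, load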
REFERENCES
1. Chadwick, D. George, 1978. Hydrosalinity Modeling in the
Price River Basin. Master's thesis, Utah State University,
Logan, Utah. p. 103.
2. Colorado River Salinity Control Forum. 1975. Proposed
Water Quality Standards for Salinity, Colorado River
Systems.
3. Clyde, Calvin G., Donna H. Falkenborg, and J. Paul Riley.
1975. Colorado River Basin Modeling Studies. Utah Water
Research Laboratory, Utah State University, Logan, Utah
p. 616.
4. Dixon, Lester S. 1978. A Mathematical Model of Salinity
Uptake in Natural Channels Traversing Mancos Shale Bad-
lands. Ph.D. Dissertation, Utah State University, Logan,
Utah. p. 213.
5. Grenney, William J. and R.J. Wagenet. 1979. An Atmos-
pheric-Terrestrial Heavy Metal Transport Model. Ecological
Modeling (in publication).
6. Hill, Robert, W., Eugene K. Israelsen, and J. Paul Riley.
1973. Computer Simulation of the Hydrologic and Salinity
Flow Systems Within the Bear River Basin. Report PRWG
104-1, Utah Water Research Laboratory, Utah State Universi-
ty, Logan, Utah. p. 122.
7. Hyatt, M.L., J.P. Riley, M.L. McKee, and E.K. Israelsen.
1970. Computer Simulation of the Hydrologic-Salinity Flow
System Within the Upper Colorado River Basin. Report PWRG
54-1, Utah Water Research Laboratory, Utah State University,
Logan, Utah. p. 121.
8. Israelsen, E.K. (in preparation). Hydrologic and Salinity
Model of the Sevier River System. Utah Water Research
Laboratory, Utah State University, Logan, Utah.
9. Jurinak, J.J., W.J. Grenney, G.L. Wooldridge, J.P. Riley,
and R.J. Wagenet. 1977. A Model of Environmental Transport
of Heavy Metals Originating from Stack Derived Particle
Emission in Semi-Arid Regions. Final Report to Southern
California Edison Company. Utah State University, Logan,
Utah 84322. p. 143.
10. Malone, R.M., D.S. Bowles, and Grenney, 1979. Stochastic
Analysis of Water Quality. Report Q/78-002, Utah Water
Research Laboratory, Utah State University, Logan, Utah.
p. 120.
11. Narasimhan, V.A. and Eugene K. Israelsen. 1975. A Water-
Land Use Management Model of the Sevier River Basin. Report
PRWG 150-1, Utah Water Research Laboratory, Utah State
University, Logan, Utah. p. 44.
12. Ponce, Stanley L. and Richard H. Hawkins. 1978. Salt
Pickup by Overland Flow in the Price River Basin, Utah.
Water Resources Bulletin, Vol. 14, No. 5, 1187-1200.
13. Ribbens, Richard W. and Robert F. Wilson. 1973. Applica-
tion of a River Network Model to Water Quality Investiga-
tions for the Colorado River. U.S. Bureau of Reclamation,
Engineering and Research Center, Denver, Colorado, p. 38.
14. Thomas, L.J., J.P. Riley, and E.K. Israelsen. 1971. A
Computer Model of the Quantity and Chemical Quality of
Return Flow. Report PRWG 11-1, Utah Water Research Labora-
tory, Utah State University, Logan, Utah. p. 187.
15. U.S. Bureau of Land Management. 1977. The Effects of
Surface Disturbance on the Salinity of Public Lands in the
Upper Colorado River Basin. 1977 Status Report, Denver
Service Center, p. 180.
16. U.S. Bureau of Reclamation. 1976. Environmental Statement,
Colorado River Water Quality Improvement Program. Draft
Report in support of PL 93-320, Title II.
17. U.S. Environmental Protection Agency. 1971. The Mineral
Quality Problem in the Colorado River Basin. Summary
Report and Appendix A, Natural and Man-Made Conditions
Affecting Mineral Quality; Appendix B, Physical and Economic
Impacts; Appendix C, Salinity Control and Management
Aspects; and Appendix D, Comments on Draft Report. U.S.
Government Printing Office, Washington, D.C.
18. Utah State University. 1975. Colorado River Regional
Assessment Study. Report to the National Commission on
Water Quality. Utah Water Research Laboratory. Utah State
University, Logan, Utah 4 Vol.
19. Utah State University, 1974a. Planning for Water Quality
in the Sevier River System in the State of Utah. Report
PRWG-142-3. Utah Water Research Laboratory, Utah State
University, Logan, Utah. p. 143.
20. Utah State University 1974b. Planning for Water Quality
in the Bear River System in the State of Utah. Report PRWG-
142-2, Utah Water Research Laboratory, Utah State Universi-
ty, Logan, Utah. p. 150.
21. Utah State University 1974c. Planning for Water Quality
in the Virgin River System in the State of Utah. Report
PRWG-142-1, Utah Water Research Laboratory, Utah State
University, Logan, Utah. p. 147.
22. White, R.B. 1977. Salt Production from Micro-Channels
in the Price River Basin, Utah. Master's thesis, Utah
State University, Logan, Utah. p. 121.
TRANSPORT ON THE CONTINENTAL SHELF
IN THE NEW YORK BIGHT
By
Dr. Gregory Han
NOAA/Atlantic Oceanographic and
Meteorological Laboratories
Miami, Florida 33149
The New York Bight is a section of the continental shelf
extending from Montauk Point at the tip of Long Island to Cape
May at Delaware Bay, and out to the 200 m isobath. A multi-
disciplinary investigation of the Bight has been going on since
1973 under the NOAA/Marine Ecosystems Analysis Program. The
physical oceanography part of the program which is directed by
Dr. Donald V. Hansen and myself at AOML, includes measurement
of temperature and salinity at over 80 stations, 4 to 5 times
per year, as well as concurrent dissolved oxygen and nutrient
measurements in cooperation with Dr. Atwood of AOML. Arrays of
recording current meters have been deployed throughout the Bight
to look at nearshore flow (depths less than 30 m), dispersion
from the Hudson River plume, and shelfwide general circulation.
Over 65 current-meter years of data have been gathered since 1973.
One of the major efforts of the program is a multi-para-
meter model of the carbon/oxygen/nitrogen cycling in the Bight
being developed by Dr. John Walsh of Brookhaven National Labora-
tory. The transport field for this model is provided by a
diagnostic model developed by Dr. Jerry Galt and applied by me
to certain specific flows which are observed in the Bight.
Transport on the Continental Shelf has certain character-
istics which require different treatment than in estuarine and
riverine systems. The major flows on the shelf, other than
the tidal oscillations, are alongshore barotropic responses to
the alongshore wind stress. The flow is geostrophically
balanced with the across-shelf sea surface setup induced by
Ekman surface layer transport. It is not directly forced by
the friction at the sea surface except very near the coast
at depths less than about 30 m. The flows are oscillatory at
the frequency of storm occurrence (3-10 day period) with
magnitudes of up to 50 cm/s.
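The geostrophic balance described above relates the alongshore
velocity to the cross-shelf sea-surface slope, v = (g/f) dη/dx; the
short calculation below illustrates the magnitudes involved, with
the latitude and the assumed setup values chosen only as examples.

# Minimal sketch: alongshore geostrophic velocity from a cross-shelf
# sea-surface slope, v = (g / f) * d(eta)/dx.  Values are illustrative.
import math

g = 9.81                                   # m/s^2
latitude_deg = 40.0                        # roughly the New York Bight
f = 2.0 * 7.292e-5 * math.sin(math.radians(latitude_deg))   # Coriolis, 1/s

for setup_cm, width_km in [(5.0, 100.0), (50.0, 100.0)]:
    slope = (setup_cm / 100.0) / (width_km * 1000.0)          # dimensionless
    v = g * slope / f                                         # m/s
    print(f"Setup of {setup_cm:4.0f} cm over {width_km:.0f} km "
          f"-> alongshore velocity {v * 100.0:5.1f} cm/s")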
The mean flow is southwestward with a mean velocity of
1-5 cm/s, which is very small compared to the oscillations.
Even this small mean velocity results in an alongshore
transport of 2.0 x 10^5 m^3/s, which is 100 times the flow of
the Hudson River.
Water quality modeling is hampered by several factors
which are:
1) An open boundary along the shelf break across which
sharp gradients of nutrients and other constituents
are present but about which we have very little
knowledge of the mixing.
2) Cross-shelf boundaries with large transport which
is highly variable both in time and across the
section.
3) Weak along-shelf gradients which make calculation of
the divergence of a constituent flux very inaccurate.
4) Changes in stratification from highly stratified to
homogeneous which change the dynamics through the
year.
5) Mixing effects of breaking internal waves on the
outer shelf.
The cross-shore mixing of constituents that are input to
the shelf from the shore and from nearshore dumping activities
is the most important process and the most difficult to specify.
The long time scale for cross-shore mixing (years) and the input
of freshwater from many sources along the shore preclude the use
of freshwater as a tracer for calculating mixing coefficients
except very near the river mouth. The Hudson Shelf Valley
appears to act as a conduit for cross-shelf transport of clean
outer shelf bottom waters into the area at the mouth of the
Hudson River (commonly called the Bight Apex).
Because of all these problems we have kept our initial
goals for the water quality model modest. Nature was kind in
providing an event in 1976 which has enabled us to test the
model's capabilities. The so-called "anoxia" event in
May-September, 1976 produced large gradients in space and time
and a large signal in many parameters. A complete description
of the event is contained in a forthcoming NOAA Professional
Paper. Application of the diagnostic model to the computation
of transport across sections of a box model of the layer below
the pycnocline, enabled us to calculate the divergence of
oxygen flux over 40 days during the development of the event.
Thus, we could infer the net utilization of oxygen which was
required to produce the observed oxygen decrease. It appears
that the respiration of a large concentration of dinoflagellates
alone was of the correct order to produce the required utiliza-
tion of oxygen. Upcoming work with Dr. Walsh will attempt to
compare the circulation patterns and biochemical conditions
between the 1976 event and the preceding "normal" year. Further
studies will focus on events extending from March through the
summers of 1975 and 1976, since it is hypothesized that the bloom
of dinoflagellates was initiated by conditions present early
in the year. The presence of a deep pycnocline formed by a
warm, early spring favored the growth of the heterotrophic
species over the autotrophic nanoplankton which typically
succeeds it each spring.
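For readers who wish to see the arithmetic of this box-budget
approach, a short Python sketch follows. It illustrates only the
bookkeeping (net utilization inferred as the residual between the
advective oxygen supply and the observed change in the oxygen
inventory of the sub-pycnocline box); the volume, concentrations,
and face transports below are hypothetical placeholders, not values
from the 1976 event.

    # Illustrative sketch of the sub-pycnocline oxygen budget; all numbers
    # below are hypothetical placeholders, not observed Bight data.
    def net_o2_utilization(volume_m3, c_start, c_end, days,
                           face_transports, face_concs):
        """Net O2 utilization rate (g O2 per m3 per day, positive = consumption).

        face_transports : volume transport across each box face (m3/s, + into box)
        face_concs      : O2 concentration carried across that face (g/m3)
        """
        seconds = days * 86400.0
        # oxygen supplied (or removed) by advection over the period, grams
        advective_gain = sum(q * c for q, c in zip(face_transports, face_concs)) * seconds
        # observed change in the oxygen inventory of the box, grams
        inventory_change = (c_end - c_start) * volume_m3
        # the residual is attributed to net biological utilization
        return (advective_gain - inventory_change) / (volume_m3 * days)

    # hypothetical 40-day period for a single box
    print(net_o2_utilization(volume_m3=5.0e11, c_start=7.0, c_end=2.0, days=40.0,
                             face_transports=[2.0e4, -1.5e4], face_concs=[6.0, 4.0]))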
The diagnostic model is a vorticity balance model with
linear bottom friction which requires as input the observed
density field, bottom topography, and the barotropic velocities
perpendicular to the boundaries. Output is the barotropic
velocity field over the entire region. The solution is obtained
with a finite element technique. Solid boundaries are defined by
a no-flux condition on the boundary solution. Baroclinic velocity
shear in the vertical is assumed geostrophic except for the top
and bottom Ekman layers. The complete velocity profile is
calculated using the turbulence closure scheme of Mellor
and Durbin. Transports are easier to specify. Top and bottom
flow, as separated by the pycnocline, is found by integrating
the geostrophic transport over each layer and then assigning the
Ekman transport to the proper layer.
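As a rough illustration of the layer bookkeeping just described
(and not of the diagnostic model code itself), the following Python
sketch integrates an assumed layer-mean geostrophic velocity over
each layer and assigns the wind-driven Ekman transport to the
surface layer; the density, Coriolis parameter, and all input
values are illustrative assumptions.

    # Schematic layer-transport calculation; parameter values are illustrative.
    RHO = 1025.0     # seawater density, kg/m^3 (assumed)
    F = 9.3e-5       # Coriolis parameter near 40 deg N, 1/s (assumed)

    def layer_transports(v_upper, v_lower, h_upper, h_lower, wind_stress):
        """Return (upper, lower) transports per unit width of section, m^2/s."""
        ekman = wind_stress / (RHO * F)       # surface Ekman transport, m^2/s
        upper = v_upper * h_upper + ekman     # Ekman layer lies above the pycnocline
        lower = v_lower * h_lower
        return upper, lower

    # e.g., 5 cm/s and 1 cm/s layer-mean geostrophic velocities, 20 m and 40 m
    # layer thicknesses, and a 0.1 N/m^2 alongshore wind stress
    print(layer_transports(0.05, 0.01, 20.0, 40.0, 0.1))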
The model resolves storm events by averaging the observed
velocity at the boundaries over a time period appropriate to
the storm-driven flows. Thus, successive time periods have
dramatically different flows which are often oppositely directed.
Resolving the structure of these advective flows in both time
and space minimizes the transport forced into the diffusive or
unresolved part of the transport.
Some parameterization of diffusive processes is still
necessary. We have calculated the cross-shelf horizontal (Kx)
and cross-pycnocline vertical (Kz) eddy diffusivities by
tracing the decay of the well-known "cold pool" of winter water.
In May of each year cold, salty water with temperatures down to
2.0°C moves through the Bight at a rate which is observed by
current meter measurements. The change in T-S properties is
consistent with Kx = 4 x 10⁶ cm²/s and Kz = 0.02 to 0.1 cm²/s,
as found by solving the salt and heat balance equations
simultaneously. Mixing in the apex was also studied using an
estuarine-type salt balance model. This shows a flushing time
of 6.8 days, which confirms Ketchum's earlier estimate.
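Because the salt and heat budgets of the decaying cold pool are
each linear in the two unknown diffusivities, they can be solved
together as a small linear system. The Python sketch below shows
that step only; the gradient and tendency values are fabricated
placeholders chosen so the solution comes out near the
diffusivities quoted above, and the budget form (tendency equals
Kx times the horizontal curvature plus Kz times the vertical
curvature) is an assumed simplification.

    import numpy as np

    # Placeholder curvatures and tendencies; the budget form assumed here is
    #   d(property)/dt = Kx * d2(property)/dx2 + Kz * d2(property)/dz2
    A = np.array([
        [1.0e-13, 2.0e-6],   # salt balance:  d2S/dx2, d2S/dz2
        [5.0e-13, 4.0e-6],   # heat balance:  d2T/dx2, d2T/dz2
    ])
    b = np.array([
        5.0e-7,              # observed salt tendency
        2.2e-6,              # observed temperature tendency
    ])
    Kx, Kz = np.linalg.solve(A, b)   # eddy diffusivities, cm^2/s
    print(Kx, Kz)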
We are attempting to model the transport of dissolved
and suspended constituents on the continental shelf using a
combination of old and new techniques to approach a difficult
problem. Verification of the model is an important step in
making the model useful for planning studies. The great
quantity and diversity of data which has been collected over
the past 5 years will allow us to effectively address the
difficult questions of transport and water quality in the New
York Bight.
THE ROLE OF WASTE INFLOWS AND LANDSAT IMAGERY
IN MANAGING LAKE QUALITY
By
John M. Higgins
Water Quality and Ecology Branch
Tennessee Valley Authority
Introduction
During the last 15 years, substantial progress has been
made in cleaning up point sources of pollution and their
immediate downstream impact. As a result, secondary impacts to
water quality are receiving more attention. This includes the
impact of both point and nonpoint sources on lakes and
reservoirs (e.g., the EPA Clean Lakes Program). Unfortunately,
the dynamic nature of lakes and reservoirs makes it difficult
and costly to monitor quality variations, define desirable
conditions, and establish cause and effect relationships. In
order to formulate effective management plans, improved
techniques are needed for relating waste inflows to relevant
characteristics of lake quality and for economically obtaining
the data required for decision-making. This paper briefly
discusses the use of inflow/outflow models and Landsat imagery
in meeting these needs.
Waste inflows represent control variables which, hopefully,
can be managed in a manner consistent with water quality and
other objectives. Comparative data, such as that provided
by remote sensing, is useful in isolating problem areas,
developing priorities, monitoring improvements, and in
separating natural phenomena from man-induced impacts. Mathe-
matical models aid the interpretation of this information for
management purposes. Two particular types of models are dis-
cussed here: the first relates waste inflow to lake quality;
and the second relates satellite imagery to lake quality.
Waste Inflows and Lake Quality Impacts
In recent years substantial research has been devoted to
defining lake quality in terms of trophic state indices and
relative classifications (e.g., Shannon and Brezonik, Carlson,
Ott). Numerous models have been developed to relate quality
factors and trophic status to waste inflow, hydraulic conditions
and other relevant variables. These range from simple empirical
models (e.g., Vollenweider, Dillon, and Rigler, Larsen and
Mercier) to complex mechanistic models of the interactions
between a variety of ecosystem components (e.g., Chen and Orlob,
Leidy and Jenkins, Park). Although these indices, classifica-
tion systems, and models are useful in many types of analyses,
a complete methodology is generally not available for defining
acceptable waste loads, water quality standards, and cause-effect
relationships.
Simple trophic state indices and empirical response models
have proved useful for some decision-making purposes, but fail
to include the impact of many important variables (e.g., other
waste inflows, meteorology, flow patterns, and spatial and
seasonal variations). Complex ecosystem models which include
many of these factors are difficult to use and require extensive
data. Their cost and sophistication often makes them inappro-
priate for routine use.
This situation suggests the need for a new generation of
predictive empirical models. These new models would have a
level of sophistication somewhere between simple empirical models
and complex ecosystem models. They would not replace existing
models, but rather, complement them. They would be specifically
designed to facilitate management decisions by relating control-
lable waste inflows to those lake conditions which directly
impact beneficial uses. They should include the important
physical dimensions and causal factors, but should not be ex-
cessively complex or costly to use. They should relate inflows
of potential pollutants, such as nutrients, organic wastes,
toxic substances, and suspended solids, to relevant quality
characteristics, such as trophic status, dissolved oxygen
concentration, algal growth, clarity, pathogenic organisms, and
toxic impacts. They should also include the relevant effects of
factors, such as hydrodynamics, geometry, meteorology, and
spatial and seasonal variations.
These criteria might seem to imply the need for integrated
ecosystem models similar to those already developed. This is
not the intent, however. The objective is, rather, a set of
specialized models for individual measures of lake quality.
Each model would include only those factors most important to
the process or condition being considered. Obviously, this is
an ambitious objective which has been partially addressed by
many researchers. Future needs, however, suggest increased
emphasis.
Landsat Imagery and Lake Quality Monitoring
Managing lake quality requires a variety of data, as
suggested by the waste inflows, quality characteristics, and
causal factors mentioned above. Much of this data must be
obtained through detailed field surveys. The cost of these
surveys, however, makes it impractical to routinely monitor
spatial and seasonal variations in a large number of reservoirs.
Fortunately, there are alternative methods for collecting some of
the comparative data needed for managing lake quality. One
promising alternative is Landsat imagery. There are presently
two functioning Landsat satellites. They provide coverage of a
given area every nine days. Each satellite is equipped with a
multispectral scanner which records reflected radiation from the
earth's surface for four spectral bands. Each image value rep-
resents a 57- by 79-meter picture element of the earth's surface.
Using this satellite data for water quality management
requires regression models relating the image values (or derived
statistics) to relevant water quality parameters. For example,
ground truth data for chlorophyll concentrations, secchi disc
depth, turbidity, or nutrient concentrations, can be related
directly to the satellite data, or combined to form indices
which can be related to the satellite data. If a set of reli-
able models or relationships can be developed, future satellite
data can be used to routinely monitor these measures of lake
quality.
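A minimal sketch of such a regression model is given below in
Python; the band values and chlorophyll-a concentrations are
invented for illustration, and ordinary least squares is used
simply as the most familiar fitting method.

    import numpy as np

    # Hypothetical ground-truth stations: mean MSS band values and measured
    # chlorophyll-a (ug/l).  Real applications would use many more stations.
    bands = np.array([
        [22., 14., 11., 5.],
        [30., 20., 15., 7.],
        [27., 18., 14., 6.],
        [35., 25., 19., 9.],
        [24., 15., 12., 5.],
        [28., 19., 13., 6.],
    ])
    chl = np.array([4.2, 11.5, 8.9, 16.0, 5.1, 9.4])

    X = np.column_stack([np.ones(len(bands)), bands])    # add intercept column
    coef, *_ = np.linalg.lstsq(X, chl, rcond=None)        # least-squares fit
    print(coef)                   # regression coefficients
    print(X @ coef)               # chlorophyll predicted from the imagery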
Boland, Scarpace, and others have demonstrated the feasi-
bility of this technique for trophic classification of lakes and
for specialized types of water quality monitoring. Although the
quality parameters and water depths which can be examined are
limited, the low cost and the spatial and temporal range of
satellite data allow a variety of comparative analyses (e.g.,
spatial and seasonal variations within one lake; relative com-
parisons of separate lakes; and routine monitoring of long-term
trends). The results will be useful in identifying problem
lakes, in setting priorities, and in establishing regulatory
standards.
Specific areas for further work include: (1) acquiring
better ground truth data; (2) correcting Landsat data for
atmospheric absorption, scattering, and sun angle changes; (3)
applying these techniques to a broader set of lakes, reservoirs,
and management problems; and (4) developing a more standardized
set of models and model variables.
Summary
It appears that greater emphasis will be placed on managing
lake quality in the future. This will require greater quantifi-
cation of the impact of waste inflows on lake quality and more
economical methods of obtaining routine data. In order to meet
these needs, greater emphasis should be placed on developing
and applying inflow/outflow models and Landsat monitoring
techniques.
REFERENCES
(1) Boland, D.H.P. 1976. Trophic Classifications of Lakes
Using Landsat-1 (ERTS-1) Multispectral Scanner Data. U.S.
Environmental Protection Agency. Office of Research and
Development, Corvallis Environmental Research Laboratory.
245 pp.
(2) Brezonik, P.L., and E.E. Shannon. 1971. Trophic States of
Lakes in North Central Florida. Publication-13. Water
Resources Research Center.
(3) Chen, C.W., and C.T. Orlob. 1972. Ecologic Simulation for
Aquatic Environments, Final Report. Water Resources Engin-
eers, Inc., Walnut Creek, California.
(4) Dillon, P.J. and F.H. Rigler. 1974. The Phosphorus-
Chlorophyll Relationship in Lakes. Limnology and Ocean-
ography. V. 19(5): 767-773.
(5) Larsen, D.P., and H.T. Mercier. 1976. Phosphorus
Retention Capacity of Lakes. Journal Fisheries Research
Board Canada. V.33:17:1742-1750.
(6) Leidy, G.R. and R.M. Jenkins. 1977. The Development
of Fishery Compartments and Population Rate Coefficients
for use in Reservoir Ecosystem Modeling. U.S. Army
Engineer Waterways Experimental Station. Contract
Report Y-77-1. 72 pp.
(7) Ott, W.R. 1978. Water Quality Indices: A Survey of
Indices Used in the United States. U.S. Environmental
Protection Agency. EPA-600/4-78-005. 128 pp.
(8) Park, R.A. 1978. A Model for Simulating Lake Ecosystems.
Center for Ecological Modeling. Rensselaer Polytechnic
Institute. Troy, New York. 19 pp.
(9) Scarpace, F.L., et al. 1978. Landsat Analysis of Lake
Quality for Statewide Lake Classification. Proceedings
ASP-ACSM Spring Convention, February, 1978.
(10) Vollenweider, R.A. 1976. Advances in Defining Critical
Loading Levels for Phosphorus in Lake Eutrophication.
Mem. Ist. Ital. Idrobiol. 33:43-83.
URBAN WASTELOAD GENERATION BY
MULTIPLE REGRESSION ANALYSIS OF
NATIONWIDE URBAN RUNOFF DATA
By
Wayne C. Huber
Dept. of Environmental Engineering Sciences
University of Florida
Gainesville, Florida 32611
Mechanisms of Wasteload Generation
Nonpoint source loads of pollutants to receiving waters are
by definition generated by mechanisms in which the land surface
is both the source and initial conveyance of the water quality
parameters. As stormwater is routed through the drainage
system, further material may be generated or lost through pro-
cesses of scour and deposition, particularly in combined sewer
systems.
Generation of pollutants (e.g., suspended solids) from both
the land surface and drainage channels is a problem of sediment
transport that is poorly understood even after decades of re-
search. For instance, it is not clear what mechanisms apply on
the land surface since erosion may be achieved both through the
impact of raindrops (e.g., the Universal Soil Loss Equation
Approach) and through the boundary shear of overland flow (e.g.,
the sediment transport approach). In the former case, it is
difficult to separate "buildup" and "washoff" relationships,
while in the latter case it is popular to assume for modeling
purposes that pollutants are "generated" by some mechanisms
during dry periods and are available to be washed off by
stormwater runoff.
There is considerable current research underway to deter-
mine the underlying physical and chemical mechanisms at work in
generation of nonpoint source loads. This task is made all the
more difficult by the fact that it is seldom possible to
separate "buildup" and "washoff" mechanisms strictly from
measurements of flow and concentration at a catchment outlet.
Regression Approaches
One alternative to conceptual or physically based models is
to perform a regression of measured loads at catchment outlets
versus hydrologic, demographic and other relevant factors.
Either an arbitrary (e.g., multiplicative) model may be used or
a fit of an assumed (e.g., sediment rating curve) model may be
made. This approach has been used in numerous urban runoff
studies; thirteen examples are summarized by Smolenyak (1).
Correlation is often questionable due to the low number of data
points available and the frequent presence of spurious correla-
tion when loads (concentration x runoff volume) are regressed
against runoff volume.
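The spurious-correlation caveat is easy to demonstrate
numerically. In the Python sketch below, concentration and runoff
volume are generated independently, yet the load (their product)
is strongly correlated with volume; the distributions and sample
size are arbitrary choices made only for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    volume = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # runoff volume
    conc = rng.lognormal(mean=3.0, sigma=0.5, size=200)     # concentration
    load = conc * volume                                    # "measured" load

    print(np.corrcoef(conc, volume)[0, 1])   # near zero by construction
    print(np.corrcoef(load, volume)[0, 1])   # large, but adds no information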
Regression of Nationwide Data
This synopsis describes briefly the results of such a study
performed on extensive data contained in the EPA Urban Rainfall-
Runoff-Quality Data Base (2,3). Data from 22 catchments in 12
cities were analyzed by stepwise multiple linear regression (1).
Dependent variables were loads (e.g., lb/ac) on a per storm
basis for several parameters.
Independent variables included the following:
AVRAIN - Average rainfall intensity (in/hr),
FLOW - Flow volume (in),
PDD - Preceding dry days,
PFLOW - Peak flow rate (in/hr),
PRAIN - Peak rainfall intensity (in/hr),
RADUR - Rainfall duration (hr), and
RAVOL - Rainfall volume (in).
It was not possible to include demographic parameters among the
independent variables due to the low number of differing land
uses among the catchments.
All rainfall-runoff-quality data were first analyzed to
determine flow and time weighted concentrations and standard
deviations as well as total loads for each catchment for each
storm (3). A composite weighted average concentration and
standard deviation was also computed for each catchment from all
storms. These are shown in Table 1 for parameters BOD5 and
suspended solids. Although combined sewered areas tend to have
higher average concentrations than storm sewered areas, no clear
distinction among land uses may be made.
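For clarity, the per-storm statistics referred to above can be
written out as follows; the Python sketch uses invented sample
values and omits the unit conversions needed to express the total
load in lb/ac.

    import numpy as np

    # Hypothetical within-storm samples for one catchment and one parameter
    flow = np.array([0.8, 2.4, 3.1, 1.6, 0.5])        # discharge at sample times
    conc = np.array([120., 310., 260., 180., 90.])    # concentration, mg/l

    fw_mean = np.sum(flow * conc) / np.sum(flow)      # flow-weighted mean, mg/l
    fw_std = np.sqrt(np.sum(flow * (conc - fw_mean) ** 2) / np.sum(flow))
    load = np.sum(flow * conc)    # proportional to total storm load; multiply
                                  # by the sampling interval and unit factors
    print(fw_mean, fw_std, load)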
When storm event loads were subjected to the stepwise mul-
tiple regression analysis described earlier, the most significant
independent variables to enter the regression (and the para-
meters providing the best bi-variate relationship) were as
follows:
Table 1. Comparison of flow-weighted BOD5 and suspended solids means and
standard deviations by land use and type of sewerage (3).

                                     Storm or  00310-BOD5a                00530-Susp. Solidsa
                                     Combined  Mean    Std.Dev.  No. of   Mean     Std.Dev.  No. of
City-Catchment                       Sewerage  (mg/l)  (mg/l)    Events   (mg/l)   (mg/l)    Events

Single-Family Residential
San Francisco, CA-Selby St.             C       38.1     30.0       8      215.4    146.0       8
Racine, WI-Site I                       C       89.6     17.7       7      178.9    137.3       7
Lancaster, PA-Stevens Ave.              C       56.2     49.3       5      271.3    171.1       5
Broward County, FL-Residential          S        6.7      4.3      20       28.4b    16.7b     28
San Francisco, CA-Vicente St. N.        S        9.8     11.4c      1       48.3     29.2c      1
San Francisco, CA-Vicente St. S.        S        4.5      3.4c      1       45.8     29.3c      1
Lincoln, NB-39 & Holdrege               S       37.6     51.8      13      735.9    302.8      18
Lincoln, NB-63 & Holdrege               S       22.1     32.5      11      827.7    228.4      12
Lincoln, NB-78 & A                      S        8.7      5.1       9     1532.0    780.4      10
Windsor, ON-Labadie Rd.                 S       16.9      8.2      20      389.8    254.2      20
Seattle, WA-View Ridge 1                S       18.4     11.2       7       55.6    102.5      28
Seattle, WA-View Ridge 2                S       12.9      9.4       5      107.7     33.1       4
Seattle, WA-Lake Hills                  S        6.3      2.9       5       61.3      9.9       5
Seattle, WA-Highlands                   S        4.2      3.8       4      109.3     71.5       4
West Lafayette, IN-Ross-Ade             S       59.6     89.7       8      104.7     52.0       8
Greenfield, MA-Maple Brook              S       11.6      7.2       4      147.4    112.1       5

Multiple-Family Residential
San Francisco, CA-Baker St.             C       22.9      6.0       3       90.7     14.5       3
San Francisco, CA-Brotherhood Way       C       45.6     24.7       3      654.8    524.6       3
San Francisco, CA-Laguna St.            C       46.3      8.8       2      210.7    101.0       2

Commercial
Seattle, WA-Central Bus. Dist.          C       64.3     37.7       5      161.8     21.7       5
Seattle, WA-Southcenter                 S       12.5      7.7       7       93.5    237.0      27

Industrial
Seattle, WA-South Seattle               S       11.9      8.2       7      114.2    176.3      29

Mixture-Res., Com., Other
San Francisco, CA-Mariposa St.          C       43.2     42.5       3      172.4     86.4       3
Durham, NC-Third Fork                   S      127.3     13.6       2     1498.3    171.2       4
Northampton, MA-Market St. Brook        S       30.1     19.4       3      149.2     55.0       6

a STORET code for parameter. Refer to Table VI-3 in first Data Base Report (2).
b Parameter 70299 reported instead of 00530, i.e., suspended solids determined
  by evaporation instead of filtration.
c Standard deviation based on within-storm variation, 8 samples for BOD5 and
  10 samples for SS.
Most Significant Parameter          No. of Catchments
FLOW                                        9
PFLOW                                       8
PRAIN                                       2
RAVOL                                       2
AVRAIN                                      1
The predominance of flow volume and peak flow rate is further
enhanced by their own significant correlation (FLOW = 0.262
PFLOW, R² = 0.62, n = 241) which may be justified in part on
the basis of unit hydrograph theory. Thus, one conclusion of
the study is that flow volume is the most significant individual
hydrologic parameter in the generation of urban runoff loads.
This may be modified somewhat in light of the presence of an
unknown degree of spurious correlation, since loads are computed
as a summation (over a storm event) of flows x concentrations
and are regressed against a summation of flows (to obtain FLOW).
It was found that, using combined data from all catchments,
relationships that are significant at the 99 percent level (F
test) could be developed for all dependent variables as a
function of FLOW. These are shown in Table 2. In addition,
most parameters were significantly correlated with suspended
solids (TSS) in a power (log-log) relationship (0.18 ≤ R² ≤
0.83). This suggests that solids may be used as the basis for
prediction of other parameters, a common assumption in surface
runoff quality modeling.
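The power relationships of Table 2 are ordinarily obtained by
linear regression on the log-transformed variables. The Python
sketch below shows that fit for synthetic storm data; the flow and
load values are placeholders and the resulting coefficients are
not those of the study.

    import numpy as np

    flow = np.array([0.05, 0.12, 0.30, 0.55, 0.90, 1.40])   # runoff volume, in
    load = np.array([2.1, 5.8, 13.0, 26.0, 47.0, 70.0])     # pollutant load, lb/ac

    # fit Load = a * FLOW**b  by least squares in log-log space
    slope, intercept = np.polyfit(np.log(flow), np.log(load), 1)
    a, b = np.exp(intercept), slope
    r2 = np.corrcoef(np.log(flow), np.log(load))[0, 1] ** 2
    print(a, b, r2)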
As seen in Table 2, the exponent in the equation for TSS
is 1.1; individual catchments among the 22 studied produced ex-
ponents ranging from 0.72 to 1.3. These values agree well with
other urban studies, as shown by Smolenyak (1), and serve to
justify the sediment rating curve approach. Exponents in non-
urban areas tend to be on the order of 2.0, higher because of
the absence of an upper limit on sediment availability.
Of interest is the fact that preceding dry days (PDD) was
significant at the 0.9 level for at least one pollutant in the
multiple regression analysis for five catchments and was not
significant for three others. (It was not available for the
remaining 14 catchments.) Thus the significance of antecedent
conditions in wasteload generation remains unanswered.
Conclusions
Ideally, stormwater runoff loads to receiving waters should
be determined by direct measurements. Since this is often im-
possible due to monetary and other constraints (e.g., the
absence of wet weather during the sampling program) generalized
results from other studies may be utilized for preliminary
estimates. However, all of the customary caveats and
TABLE 2
RELATIONSHIPS BETWEEN POLLUTANT LOADS (lb/ac) AND FLOW VOLUME
(in.) DEVELOPED USING COMBINED DATA FROM 22 CATCHMENTS (1)

Model: Load = a(FLOW)^b

Dependent           Sig. Level   No. of     Reg. Coefs.
Variable      R²      F-Test     Events      a        b
BOD          .28       .99          80      34.0     1.12
COD          .76       .99         157      29.8     1.08
NH3N         .44       .99          20       .215     .72
NITN         .80       .99          21       .119     .80
NTOT         .57       .99         103       .0400    .71
ORGN         .88       .99          40       .856    1.04
TOTN         .74       .99          37       .304    1.07
DOP          .83       .99          34       .0648    .98
TOP          .46       .99         119       .0104    .78
TOTP         .66       .99          53       .426    1.5
TPHOS        .91       .99           8       .105    1.05
TOTS         .69       .99          41     279       1.41
TSS          .56       .99         260      44.2     1.10

Dependent Variable Definitions
BOD-Biochemical Oxygen Demand, 5-day
COD-Chemical Oxygen Demand
NH3N-Nitrogen, Total Ammonia
NITN-Nitrate Nitrogen, Total
NTOT-Nitrite + Nitrate, Total
ORGN-Nitrogen, Total Organic
TOTN-Nitrogen, Total
DOP-Orthophosphate, Dissolved (as P)
TOP-Orthophosphate, Total (as PO4)
TOTP-Phosphorus, Total (as P)
TPHOS-Phosphate, Total (as PO4)
TOTS-Solids, Total
TSS-Suspended Solids, Total
precautions pertaining to regression analyses and their use
and interpretation are of even more importance here due to the
wide variation of measured results and relatively weak data base
used to produce the statistical models.
Acknowledgments
This synopsis is taken directly from the master's degree
research of Kevin Smolenyak, co-directed at the University of
Florida by the writer and James P. Heaney. The research has
been supported by EPA Data Base contracts 68-03-0446 and
68-03-2663.
REFERENCES
(1) Smolenyak, K.J., "Urban Wet Weather Pollutant Loadings,"
Master of Engineering Thesis, Department of Environmental
Engineering Sciences, University of Florida, Gainesville,
1979.
(2) Huber, W.C. and J.P. Heaney, "Urban Rainfall-Runoff-
Quality Data Base," EPA Report EPA-600/8-77-009, (NTIS
PB 270 065), Cincinnati, Ohio, July 1977.
(3) Huber, W.C. and Heaney, J.P., Smolenyak, K.J. and D.A.
Aggidis, "Urban Rainfall-Runoff-Quality Data Base,
Addendum 1: Statistical Analysis and Additional Data,"
EPA Report EPA-600/8-79-004, Cincinnati, Ohio, 1979.
USGS DATA COLLECTION PROGRAMS
RELATED TO WATER QUALITY MODELING NEEDS
By
Marshall E. Jennings
U.S. Department of the Interior
Geological Survey
Gulf Coast Hydroscience Center
National Space Technology Laboratories
NSTL Station, Mississippi 39529
Several ongoing U.S. Geological Survey data collection
programs have potential benefit to water-quality modelers.
Most modelers find the USGS streamflow gaging network useful
and adequate for many of their modeling needs; however, many
modelers are not aware of the substantial water-quality data
available for water-quality investigations. Some of the major
USGS programs that relate to water-quality modeling needs are
discussed below. All data is available on the USGS National
Water Data Storage and Retrieval system (WATSTORE) or on the
National Water Data Exchange (NAWDEX) system.
Fixed Point Monitoring
Fixed point monitoring emphasizes long-term effects. Two
networks are operated by USGS: the National Stream Quality
Accounting Network (NASQAN) and the Hydrologic Bench Mark
program.
NASQAN
The National Stream Quality Accounting Network (NASQAN)
is a data-collecting facility for obtaining regional and
nationwide overviews of the quality of our streams. Water-
quality data from NASQAN stations provide the information
needed to:
Account for the quantity and quality of water
moving within the United States;
Develop a large-scale picture of how stream quality
varies from place to place; and,
Detect changes in stream quality with time.
NASQAN is different from other water-quality monitoring
studies in several important ways:
The network is designed around a system of subdivided
river basins, so the collected water data can be related
to conditions within a known area upstream, and compared
with that from adjacent or nearby areas.
Stations are operated uniformly; therefore, results
obtained can be compared directly because the same
methods are used to collect and analyze the data at all
stations in the network.
Stations are committed to long-term objectives, so the
length of record at all stations, the frequency of
sampling, and sampling locations will remain uniform for a
long time. The uniformity allows for valid comparisons
between stations and provides an opportunity to look for
long-term changes.
Obviously, it is impossible to measure the characteristics
of every drop of water flowing in a stream. Samples, therefore,
must be collected from enough places and at the proper
frequencies to give a reasonable representation of overall
conditions.
The spacing of NASQAN stations is based on a system of
hydrologic subdivisions developed by the U.S. Water Resources
Council (WRC). In this system, drainage basins in the
United States are divided into 21 regions, 222 subregions, and
349 accounting units, the latter two being progressively smaller
parts of a region. NASQAN monitoring sites (stations) are
located at points chosen to provide a good sample of the water
leaving an accounting unit.
To date (1978) 445 NASQAN stations have been established,
with a station representing every accounting unit. Ultimately,
NASQAN will include about 525 stations in order to adequately
cover coastal areas.
A visitor to a typical NASQAN station will first see an
instrument shelter about the size of a telephone booth (at some
stations, the shelter is several times as large). Inside the
shelter is an instrument that obtains a continuous record of
stream stage (water elevation), from which streamflow is
computed. Many stations also are equipped with a recorder
for obtaining continuous data on water temperature and specific
electrical conductance; otherwise, conductance and temperature
are measured by an observer who visits the site once every day.
In addition to the daily sampling and continuous records
that are kept, the following data are collected at each station
by a field party:
Twelve times per year (approximately monthly)
Temperature
Specific conductance
pH (balance between acids and alkalies)
Bacteria indicators
Inorganic compounds
Biological nutrients
Suspended sediment
Floating algae
Organic carbon
Four times per year
Trace elements
Attached organisms
Samples for analyses of about 20 pesticide compounds are
collected at 153 NASQAN stations. Samples for radiochemical
analyses are collected at 51 NASQAN stations at frequencies
ranging from monthly to semiannually.
Some of the above measurements are made directly on the
stream, but many others require bottling and preservation of
samples by icing or addition of chemicals so that consistent
analytical results can be obtained after shipment to a
laboratory.
The U.S. Geological Survey designed NASQAN, and Survey
personnel operate most of the stations and analyze the samples
collected. Other agencies, however, such as the Environmental
Protection Agency, the U.S. Army Corps of Engineers, and State
and local organizations either take part in the operation of
some stations or provide financial support. Because of local
interests and needs, several agencies usually cooperate in
supporting the different parts of a single station.
Information from NASQAN represents one of the important
building blocks required for good management of the Nation's
waters. It tells us how things are (How much water? What
quality?) on a national and regional scale. It represents a
baseline against which we can measure the importance of future
changes in water quality.
NASQAN data already are being published in annual reports
and are being used in making important decisions about the
future management of our water resources.
For those who would like to learn more about NASQAN, the
Geological Survey has published a more detailed description.
Copies may be obtained by requesting USGS Circular 719, "The
National Stream Quality Accounting Network (NASQAN) - Some
Questions and Answers." by J.F. Ficke and R.O. Hawkinson,
from:
Branch of Distribution
U.S. Geological Survey
1200 South Eads Street
Arlington, Virginia 22202
reference: NASQAN: Measuring the Quality of America's
Streams by Benjamin L. Jones, U.S. Government
Printing Office, 1978.
Hydrologic Bench Mark Program
Water-quality data, collected at 57 hydrologic bench-mark
stations in 37 States, allow the definition of water quality in
the "natural" environment and the comparison of "natural" water
quality with water quality of major streams draining similar
water-resources regions. Results indicate that water quality in
the "natural" environment is generally very good. Streams
draining hydrologic bench-mark basins generally contain low con-
centrations of dissolved constituents. Water collected at the
hydrologic bench-mark stations was analyzed for the following
minor metals: arsenic, barium, cadmium, hexavalent chromium,
cobalt, copper, lead, mercury, selenium, silver, and zinc. Of
642 analyses, about 65 percent of the observed concentrations
were zero. Only three samples contained metals in excess of
U.S. Public Health Service recommended drinking water standards:
two selenium concentrations and one cadmium concentration. A
total of 213 samples were analyzed for 11 pesticidal compounds.
Widespread but very low-level occurrence of pesticide residues
in the "natural" environment was found; about 30 percent of all
samples contained low-level concentrations of pesticidal com-
pounds. The DDT family of pesticides occurred most commonly,
accounting for 75 percent of the detected occurrences. The
highest observed concentration of DDT was 0.06 microgram per
litre, well below the recommended maximum permissible in drink-
ing water. Nitrate concentrations in the "natural" environment
generally varied from 0.2 to 0.5 milligram per litre. The
average concentration of nitrate in many major streams is as
much as 10 times greater.
The relationship between dissolved-solids concentration
and discharge per unit area in the "natural" environment for
the various physical divisions in the United States has been
shown to be an applicable tool for approximating "natural"
water quality. The relationship between dissolved-solids con-
centration and discharge per unit area is applicable in all the
physical divisions of the United States, except the Central Low-
land province of the Interior Plains, the Great Plains province
of the Interior Plains, and the Basin and Range province of the
Intermontane Plateaus. The relationship between dissolved solids
concentration and discharge per unit area is least variable in
the New England province and Blue Ridge province of the Appala-
chian Highlands. The dissolved-solids concentration versus
discharge per unit area in the Central Lowland province of the
Interior Plains is highly variable.
A sample collected from the hydrologic bench-mark station
at Bear Den Creek near Mandaree, N. Dak., contained 3,420
milligrams per litre dissolved solids. This high concentration
in the "natural" environment indicates that natural processes
can be principal agents in modifying the environment and can
cause degradation. Average annual runoff and rock type can be
used as predictive tools to determine the maximum dissolved-
solids concentration expected in the "natural" environment.
reference: Water Quality of Hydrologic Bench Marks-An
Indicator of Water Quality in the Natural
Environment, U.S. Geological Survey Circular
460-E by J.E. Biesecker and D.K. Leifeste.
Cooperative Programs
The cooperative programs of USGS with a variety of federal,
state and local agencies generates a significant amount of water
quality data of use to water-quality modelers. Many of these
data result from short-term or synoptic data collection efforts.
The data is published in annual State publications such as
Water Resources Data for Texas Water Year 1977, Volume 1.
Arkansas, Red, Sabine, Neches, Trinity River basins, and inter-
vening Coastal basins, U.S. Geological Survey Water-Data Report
TX-77-1, 585 p. The data are also available on computer data
bases such as WATSTORE.
River Quality Assessments
In addition to the need to assess the effectiveness of
pollution control efforts, there is a need to predict the
effects of proposed management alternatives so that the best
development plans can be selected. In its capacity as the
appraiser of the Nation's mineral resources, the U.S. Geological
Survey is developing and demonstrating techniques for making
such predictions.
In this River-Quality Assessment research, teams of Survey
scientists study all aspects of river quality in a drainage
basin and determine the relative importance of river-quality
problems, their causes, and the relative effectiveness of pro-
posed solutions. The most efficient management actions can be
determined from these studies to provide the best water quality
at least cost. A series of these studies will be conducted by
the Survey to demonstrate their usefulness and techniques for
making them.
In a pilot assessment completed on the Willamette River in
Oregon, the scientists analyzed several river-quality problems.
Maintenance of high levels of dissolved oxygen in the
river;
Growth of algae as a potential nuisance;
Occurrence and distribution of toxic trace metals; and,
Potential for excessive soil erosion with increasing
basin development.
The team found that waste-treatment plans already in effect
had greatly reduced the input of oxygen-using wastes to the
river and that dissolved-oxygen levels had improved greatly
during recent years. Summer releases of freshwater from upstream
reservoirs to aid river navigation had resulted in further im-
provements, which is an example of one management action serving
two beneficial purposes.
Nevertheless, some additional improvement in the dissolved
oxygen content was desirable to provide a greater margin of
safety for the future. A very costly plan for advanced
treatment of municipal wastes had been proposed as a possible
solution. The results of the river-quality assessment showed,
however, that elimination of a very few industrial discharges of
ammonia wastes would result in greater improvement in the river
at much less cost.
The releases of fresh reservoir water during low flows also
have helped control the growth of undesirable algae in several
ways:
The freshwater provides a continuous low-level source of
nutrients favorable to the growth of desirable algae.
The increased flow quickly moves algae out of the river
system.
The released water is lower in temperature and in certain
trace elements, which results in slower algal growth.
A study of trace metals in bottom sediments indicated no
areas with concentrations high enough to cause immediate
concern.
Population and industry are expected to increase greatly
over the next 50 years in the Willamette Valley. To help
evaluate the probable effects of such growth on land and water
quality, a photomosaic map and an erosion potential index are
used to estimate the way various land uses can affect soil
erosion and sediment deposition in different types of terrain.
These maps can be used by planners to make decisions on future
land and water management within the river basin.
The major benefit of the Willamette River assessment is
that it indicates that some proposed high cost pollution-control
measures may be unnecessary; as a result the potential savings
over the next 10 to 20 years could amount to tens of millions of
dollars.
Additional assessments are now being carried out in
other river basins, e.g., Apalachicola River, Florida, Chatta-
hoochee River, Georgia, Schuylkill River, Pennsylvania and the
Carson-Truckee Rivers, Nevada. Because the combination of
problems addressed in each basin is different, a variety of
examples will be available to demonstrate the benefits of the
river-quality assessment approach to those who must make river-
quality management decisions.
reference: River Quality Assessment by Benjamin L. Jones,
U.S. Government Printing Office, 1977.
National Water Use Program
The National Water Use Program is a cooperative federal-
state program designed to collect, store, and disseminate
water use data to complement data on availability and quality
of the nation's water resources. Design of the program
specifies measurement of a broad range of water use elements
which were selected to meet many of the information requests of
groups involved in planning, management, and operation on
national, regional, and local levels. The primary objectives
are (1) to account for the water used throughout the United
States; (2) to organize the data collected so that it may be
retrieved and used at the national, regional, and local levels;
(3) to manage the program so that the data will be uniform in
quality; and (4) to provide the necessary information to be able
to update and make projections of future water requirements.
The nation's fresh and saline waters are under stress from
increasing demands for water for domestic, industrial, agricul-
tural, and other uses and from demands for greater protection of
water quality. Competition for the available resources for
diverse uses dictates that available supplies be matched with
uses most beneficial to the common good. Relatively detailed
information is being collected describing quantity and quality
of water that is available, but relatively little information is
available or is being collected describing quantities that are
being used, where used, for what uses, and water quality changes
that result from uses. Without adequate information on uses of
water, decision-makers cannot, and have not, been able to
resolve many critical water problems involving water-quality
residuals, environmental impact, energy development, and resource
allocations.
The National Water Use Program will provide for the
storage of both detailed inventory data at the state level and
aggregated estimates of water use at the national level (extra-
polated from the state-level data). This data will be readily
available to the local and Federal user communities to meet the
following requirements:
A. Local
1. Determine present and projected water uses.
2. Quantify environmental pressure placed on water
resources.
3. Support comprehensive water resource planning.
4. Minimize the impact that water resource
availability has on the environment.
5. Minimize the economic expense of water resource
planning.
6. Support conservation and preservation of water
resources.
7. Permit intelligent participation in Federal
planning.
8. Provide communication linkage among the agencies
within the water resource community.
B. Federal
1. Provide for the optimum utilization of the
nation's water resources.
2. Collect, store, and disseminate water-use data
to complement data on availability and quality
of the nation's water resources.
3. Provide an efficient, economical tool to support
interbasin planning.
4. Support and enhance the data available to produce
the "National Assessment" of the nation's water
resources.
5. Provide a means of making more timely and
accurate forecasts of water use throughout the
nation.
Direction, management, and standards developed to provide
for a national, consistent, and comprehensive program are the
responsibility of the Survey; manpower-intensive field activi-
ties for acquisition of the data will be the responsibility of
the local agency, where direct communication with the water-using
community can be readily established. How these responsibilities
are implemented will ultimately reside with the State USGS
District Chief and the cooperator.
The National Water Use Program will take a phased approach
to implementation. The software to support and maintain the data
at the national level will be implemented during the first half
of calendar year 1979. Additional national-level reporting,
forecasting, and support software will be developed later as
required by the U.S. Geological Survey.
The cooperative programs between the states and the Survey
for the collection, storage, and dissemination of detailed
state-level data were begun in fiscal year 1978 and will be
fully implemented by 1982.
The responsibility for disseminating raw data collected at
the state level rests with each state. In each state, an organi-
zation must be selected to provide a user liaison for this
purpose.
At the national level, the USGS will provide a user liaison
service for disseminating the aggregated data stored in the
national data base.
Although both state and USGS personnel may be able to aid
users in understanding the data stored by the Water Use Program,
the final interpretation of water use data and projections made
from that data are solely the responsibility of the individual
user. The data will be indexed in the NAWDEX system.
Information about the water use program in each state may
be obtained through the USGS District Chief in each state.
Additionally, information about the program may be obtained
from:
Fred Ruggles
U.S. Geological Survey
National Center
12201 Sunrise Valley Drive
Reston, Virginia 22091
Phone: (703) 860-6877
reference: The National Water Use Program Information Sheet,
U.S. Geological Survey, 1978.
SEPARATION OF TIME-VARYING
PARAMETERS IN STREAM WATER QUALITY MODELING
By
Clark C. K. Liu
New York State Dept. of
Environmental Conservation
Albany, New York 12233
Summary
In this study a set of equations was developed which
can be used to separate the time-varying effects from observed
DO data. This allows a reasonable stream simulation in which
both the model and the data used in its formulation are free of
DO contributions from biological activities. Biological DO produc-
tion and consumption are complex phenomena. By excluding these
highly variable processes, this method simplifies stream DO
modeling considerably. The net oxygen input due to this process
exists only part of the day, but, in the stream waste assimila-
tive capacity analysis and waste load allocation, one would
focus his attention on critical conditions. Hence, unless the
change of stream ecology is the main concern, it is desirable
to formulate a stream water quality model without this time-
varying term.
Discussion
In a stream containing phytoplankton biomass and benthic
plants, the processes of photosynthesis and respiration consti-
tute the major oxygen source and sink. Photosynthesis is the
biological synthesis of organic compounds by chlorophyll-bearing
plants in the presence of solar energy. A by-product of this
process is oxygen. On the other hand, oxygen is consumed by
living organisms as a process of their respiration. Therefore,
for a biologically productive stream, observed dissolved oxygen
(DO) content is the result of several dynamic processes includ-
ing photosynthesis and respiration. In the formulation of a
stream water quality model, an independent term has to be includ-
ed to accommodate these in-stream DO sources and sinks, other-
wise the model calibration and verification must be conducted
in such a way that photosynthesis and respiration action is
deliberately separated from the observed DO data.
The processes of photosynthesis and respiration are time
dependent. In a steady state stream with constant waste load-
ings, these processes are the only causes of diurnal DO fluctua-
tion. Ideally, a natural water body with significant biological
oxygen production and consumption should be analyzed in terms of
a time-varying model which makes stream DO a function of both
time and space (O'Connor and DiToro, 1970). However, the
application of a time-varying model in many water quality appli-
cations is constrained by the lack of field data and by the
numerical complexities of its solution. Therefore, a steady
state model is still the most popular tool in water quality
analysis.
Photosynthesis and respiration actions were ignored in the
traditional approach to steady state stream modeling (Streeter
and Phelps, 1925). Obviously, this is inadequate for a
biologically productive stream. In order to calibrate a model
by this approach, it would be necessary to incorporate photo-
synthesis and respiration actions with other model parameters.
As a result, the satisfactory hydraulic and bio-chemical simula-
tion for a stream can hardly be achieved.
More recently, an average net daily photosynthesis and
respiration rate has been added in steady state stream modeling.
The model calibration and verification are conducted based on
average DO (e.g. Zitta, et al, 1977). This approach also has
its limitations. First, the rate of net biological oxygen
production is often difficult to evaluate due to the lack of a
thorough understanding of the biological activities in a stream
(Rutherford, 1977). Thus, estimates of net stream photosynthesis
and respiration rate based on data from a particular survey and
its use in model projection is unreliable. Secondly, because of
the supersaturated DO condition during daylight hours, average
DO in many streams is high while DO content may be significantly
below the stream standard for the rest of the day. From a water
quality management standpoint, therefore, the average DO is not
a good index.
In the study now reported, a new method was developed
which separates the time-varying effect from observed DO data
before using them in model calibration. As a result, both the
stream water quality model and the DO data used in its calibra-
tion and verification are completely in the steady state mode.
This leads to a reasonable model formulation in the sense that
the evaluated model parameters represent the true hydraulic and
biochemical behavior in a stream. This concept was initially
proposed by Lawler, Matusky & Skelly Engineers in their Sus-
quehanna River Study (1975), and was called a "Scrubbing"
method. However, the "Scrubbing" technique consists of the
determination of net photosynthesis rates before the modifica-
tion of observed DO data. It thus requires a series of tedious
computations. Furthermore, in the mathematical formulation of the
"Scrubbing" technique, the convective change of DO was dropped as
insignificant to the diurnal variation, which in many instances
may not be wholly justified.
With this in mind, a new mathematical formulation for
diurnal stream DO variation was conducted in this study, and a
set of new formulas was derived which allows the separation of photo-
synthesis and respiration actions from observed DO, based only
on observed DO data and stream hydraulic characteristics.
Mathematical Formulation
In fluid dynamics, there are two methods of describing the
motion of a group of particles in a continuum, i.e. the
Lagrangian method or the Eulerian method. In the Lagrangian
representation a particle in the flow field is chosen arbitrari-
ly at some time and the motion of the fluid is given by the
subsequent motion of this particle. Alternatively, the changes
in velocity at an arbitrarily chosen position as time elapses
are studied in the Eulerian method (Daily and Harleman, 1966).
Eulerian representation was found to be more convenient in the
investigation of diurnal variation at any stream point, and is
adopted in this study.
Using the Eulerian method, flow properties at any particu-
lar point in the flow field can be expressed in terms of a
total or substantial derivative. In a one dimensional stream,
dissolved oxygen content in the vicinity of a stream point is:
    dD/dt = ∂D/∂t + U ∂D/∂s                              (1)
The first term on the right-hand side of equation (1) rep-
resents the "local" change as a function of time and the second
term is the "convective" change dependent on the motion of the
fluid particle in a stream. Here D is the DO deficit, in mg/l, or
the difference between the saturation concentration of DO and the
actual concentration. U is the mean velocity at that stream point, in
mile/hr. Of the two independent variables, t refers to the
time of diurnal variation, in hrs., and s is the distance along
the path of flow particles, in miles.
Water quality models are often used in the study of
critical water quality conditions when low flow and high temp-
erature are prevalent. Therefore, the stream flow can be
reasonably assumed to be steady and uniform, and "local" changes
consist of only the action of photosynthesis and respiration.
In a polluted stream, "convective" change of DO is the result of
organic waste decay and atmospheric reaeration. Hence, the
substantial derivative of DO at a stream point becomes:
    dD/dt = R - P(t) + K1L - K2D                         (2)
Here P and R are the rates of photosynthesis and respiration, in
mg/l/hr. Note that P is a function of time, while the rate of
plant respiration can be assumed a constant (Odum, 1956). K1
and K2 are the rate of biochemical oxidation of organic wastes,
or deoxygenation coefficient, and the rate of stream reaeration,
in 1/hr, respectively. For the sake of simplicity, a single
term L is used in equation (2) for both carbonaceous and nitro-
genous biochemical oxygen demanding materials (BOD).
In a typical diurnal DO fluctuation, two equilibrium points
exist (Figure 1). At nighttime equilibrium, when the DO deficit
is at its maximum Dr, equation (2) becomes:

    dD/dt = R + K1L - K2Dr = 0                           (3)

Similarly, at daytime equilibrium, when the DO deficit is at its
minimum Dp, equation (2) becomes:

    dD/dt = R - Popt + K1L - K2Dp = 0                    (4)

Here Popt is the optimal rate of photosynthesis. Early studies
indicated that for the phytoplankton biomass and benthic plants
in a natural water body the rate of plant respiration is rela-
tively constant and amounts to about 10% of the optimal rate of
photosynthesis (Ruttner, 1963 and Odum, 1956). Or,

    R = 0.1 Popt                                         (5)

Dw is the stream DO deficit when photosynthesis and respiration
actions are not present. Therefore, in the vicinity of Dw,
equation (2) takes the form of:

    U dDw/ds = K1L - K2Dw                                (6)

Here U dDw/ds replaces the partial derivative U ∂D/∂s of
equation (1), since without photosynthesis and respiration
action, stream DO is a function of distance alone. Solving
equations (3), (4), (5) and (6) simultaneously, one would have:

    Dw = Dr - 0.1 (Dr - Dp) - Ss/K2                      (7)

Here Dr and Dp can be read directly from the diurnal curve.
Ss = U dDw/ds can be derived by multiplying the slope of the
observed stream DO profile by the average stream velocity. The DO
profile of minimum observed DO may be used, because the
difference between minimum DO and Dw is the respiration R, which
is relatively constant.
K2 is the stream reaeration coefficient, which can be determined
from simple hydraulic data (Rathbun, 1977).
During the daylight hours, supersaturated DO may exist in a
stream, and the DO deficit Dp becomes negative. Following the
above procedure, Dw can be readily derived to be the following:

    Dw = Dr - 0.1 (Dr + Dp) - Ss/K2                      (8)
Equations (7) and (8) establish the required relationship
whereby a stream's theoretical DO in the absence of photo-
synthesis and respiration can be determined based on observed
DO data and stream hydraulic characteristics.
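A compact way to apply equations (7) and (8) is sketched below in
Python. The function simply evaluates the two formulas; the input
values in the example are illustrative and are not taken from the
Eighteen Mile Creek survey.

    def scrubbed_deficit(Dr, Dp, slope_do, U, K2, supersaturated=False):
        """DO deficit Dw with photosynthesis and respiration removed, mg/l.

        Dr       : maximum (nighttime) DO deficit from the diurnal curve, mg/l
        Dp       : minimum (daytime) DO deficit, mg/l (magnitude if supersaturated)
        slope_do : slope of the observed DO profile, mg/l per mile
        U        : mean stream velocity, mile/hr
        K2       : reaeration coefficient, 1/hr
        """
        Ss = U * slope_do                            # convective term, mg/l/hr
        if supersaturated:                           # equation (8)
            return Dr - 0.1 * (Dr + Dp) - Ss / K2
        return Dr - 0.1 * (Dr - Dp) - Ss / K2        # equation (7)

    # hypothetical example
    print(scrubbed_deficit(Dr=4.5, Dp=1.0, slope_do=-0.3, U=0.25, K2=0.08))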
Application
An intensive stream water quality survey was conducted
during the week of June 13-17, 1977 by the New York State
Department of Environmental Conservation on Eighteen Mile Creek,
which is located in Niagara County in northwestern New York.
Survey data shows significant diurnal DO fluctuations in the
Creek, especially downstream from the Rt. 104 Bridge (Figure 1).
Circles in Figure 1 are stream DO at each sampling station
after the separation of photosynthesis and respiration actions.
They were derived based on equation (7).
Eighteen Mile Creek was simulated with the steady
state, one-dimensional SNSIM model developed by EPA Region II
(Braster et al., 1975). Model parameters such as deoxygenation
rate and stream reaeration were estimated on the basis of field
data, and then modified during the model calibration. Details
have been included in a New York State Department of Environmen-
tal Conservation Survey and Analysis Report (Liu, 1978). Figure
2 gives a computer plot of the model output. Comparisons of
predicted and observed DO show good agreement and suggest a
reasonable simulation of the Creek's hydraulic and biochemical
behavior.
Acknowledgment
The author wishes to express his sincere thanks to his
colleagues in the Survey and Analysis Section, New York
State Department of Environmental Conservation for many useful
discussions. Special acknowledgement must be given to William
Berner, Section Chief, for his careful reading of and comments on
the draft manuscript. Thanks are also due to Mrs. Sally Scott
for her superb typing of several drafts in the preparation of
this paper.
FIGURE 1. OBSERVED DIURNAL DISSOLVED OXYGEN CURVE IN EIGHTEEN MILE CREEK
[Plot of dissolved oxygen versus time of day, 0000 hours June 15 through
0800 hours June 16, 1977; not reproduced here.]
FIGURE 2. [Computer plot of the SNSIM model output for Eighteen Mile Creek;
symbols show observed D.O. after exclusion of photosynthesis and respiration
actions. Not reproduced here.]
REFERENCES
(1) Braster, R.E., S.C. Chapra, and G.A. Nossa, 1975, "A
Computer Program for the Steady State Water Quality
Simulation of a Stream Network", U.S. Environmental
Protection Agency, Region II.
(2) Daily, J.W., and D.R.E. Harleman, 1966, "Fluid Dynamics",
Addison-Wesley, Reading, Mass., pp. 42-43.
(3) Lawler, Matusky & Skelly Engineers, 1975, "Wastewater
Assimilative Capacity Study, Susquehanna River Basin",
Tappan, N.Y.
(4) Liu, Clark C.K., 1978, "Survey and Analysis Report -
Eighteen Mile Creek in Niagara County, New York", New
York State Department of Environmental Conservation
(unpublished).
(5) O'Connor, Donald J., and Dominic M. DiToro, 1970,
"Photosynthesis and Oxygen Balance in Streams", Jour.
of Sanitary Engr. Div., ASCE, Vol. 96, No. SA2,
pp. 547-571.
(6) Odum, Howard T., 1956, "Primary Production in Flowing
Waters", Limnology and Oceanography, Vol. 1,
No. 2, pp. 102-117.
(7) Rathbun, R.E., 1977, "Reaeration Coefficients of Streams -
State-of-the-Art", Journal of the Hydraulics Division,
ASCE, Vol. 103, No. HY4, pp. 409-424.
(8) Rutherford, J.C., 1977, "Modeling Effects of Aquatic
Plants in Rivers", Jour, of the Environmental Engr. Div.,
ASCE, Vol. 103, No. EE4 , pp. 575-591.
(9) Ruttner, Franz, 1963, "Fundamentals of Limnology",
University of Toronto Press, pp. 159-175.
(10) Streeter, H.W.; and Phelps, E.B., 1925, "A Study of the
Pollution and Natural Purification of the Ohio River, III
Factors Concerned in the Phenomena of Oxidation and
Reaeration", US Public Health Service, Health Bulletin No,
146.
191
-------
(11) Zitta, V.L., A. Shindala and M.W. Corey, 1977, "A
Two-Dimensional Mathematical Model for Water Quality
Planning in Estuaries", Water Resources Research
Vol. 13, No. 1, pp. 55-61.
192
-------
EVALUATION OF HAZARDOUS SUBSTANCES
TRANSPORT MODELING IN SURFACE WATERS
By
Yasuo Onishi
Battelle, Pacific Northwest Laboratories
Richland, Washington
Introduction
The environmental impact of various hazardous substances
(e.g., pesticides, heavy metals, radionuclides, etc.) is an
increasingly important issue (1,2,3). Although considerable
effort is being made to minimize the release of these hazardous
substances to the environment, decision makers must have a
sound basis for impact assessment.
Mathematical models supported by well-planned data collec-
tion programs can be useful tools in assessing migration and
ultimate fate of hazardous substances in surface waters. In
order to obtain accurate predictions of contaminant transport,
mathematical models must include major transport mechanisms.
These mechanisms include:
1. advection and diffusion/dispersion of hazardous substances
2. chemical and biological degradation and decay due to
hydrolysis, oxidation, photolysis, volatilization, and
biological activities
3. parent-daughter products of radioactivity decay
4. contribution of hazardous substances from outside sources
into the system
5. interaction between sediment and hazardous substances,
such as contaminant adsorption by sediment; contaminants
desorption from sediment to water; transport of particulate
contaminants (those associated with sediment); deposition
of particulate contaminants to the bed; and resuspension
of particulate contaminants from the bed.
Until recently sediment-contaminant interaction was not
included in models because of the complex nature of sediment
transport and the contaminant adsorption/desorption mechanisms
(4,5). However, significant effects of sediment-contaminant
interaction on the transport of hazardous substances are well
193
-------
documented. For example, field measurements conducted in the
Clinch River, Tennessee, indicated that approximately 90% of the
radionuclide, cesium-137, released from the Oak Ridge National
Laboratory was adsorbed by the suspended sediments in the river
within 35 km downstream of the effluent discharge (6). In
another study the majority of Kepone, a pesticide released to
the James River Estuary, Virginia, was also judged to be
associated with sediment (1).
Transport Models
Most of the mathematical models for the transport of
hazardous substances are based on the general advection-diffus-
ion equation. These models range from simple analytical solu-
tions to complex numerical models. Because of severe limitations
imposed on analytical solutions, applicability of these solutions
is very limited. Instead, numerical models must be used to
accommodate wide variations of channel geometry, flow charac-
teristics, and characteristics of sediment and hazardous sub-
stances in most study areas.
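As a rough sketch of what such a numerical model does at its core, the
following hypothetical explicit finite-difference solution of the
one-dimensional advection-dispersion equation with first-order decay
uses an invented grid, invented coefficients, and no sediment
interaction; it is not an excerpt from any model discussed here.

    import numpy as np

    # Explicit finite-difference solution of
    #   dC/dt = -U dC/dx + E d2C/dx2 - k C
    # on a uniform 1-D grid.  All values are hypothetical; a real
    # application must check stability and use field-based coefficients.

    nx, dx, dt = 100, 100.0, 50.0     # cells, cell length (m), time step (s)
    U, E, k = 0.3, 5.0, 1.0e-6        # velocity (m/s), dispersion (m2/s), decay (1/s)

    C = np.zeros(nx)
    C[0] = 10.0                       # constant upstream boundary concentration

    for _ in range(2000):             # march forward in time
        adv = -U * (C[1:-1] - C[:-2]) / dx                 # upwind advection
        dif = E * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2   # dispersion
        C[1:-1] += dt * (adv + dif - k * C[1:-1])
        C[0] = 10.0                   # hold the boundary value
        C[-1] = C[-2]                 # zero-gradient downstream boundary

    print(C[::10])                    # concentration profile every 1 km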
Most water quality models that can be used to simulate the
transport of hazardous substances in surface waters include only
the mechanisms of advection and diffusion/dispersion, degradation
and decay, parent-daughter product of radioactivity decay, and
contributions from outside sources (4,7). These models are
applicable to short-term migration cases where: 1) the contami-
nant has a very small distribution coefficient, Kd, and 2)
sediment concentration is very low.
However, in cases where: 1) the Kd is very large; 2)
sediment concentration is high; or 3) long-term migration is
concerned, mathematical models must include sediment-contaminant
interaction by coupling sediment and contaminant transport model-
ing (2,8-12). Sediment-contaminant interaction becomes very
important because under one or more of these conditions, a
significant amount of hazardous substances in surface waters is
adsorbed from solution onto sediment. Thus, otherwise dilute
contaminants are concentrated. This process may create a
significant pathway to man. Contaminated sediments may be
deposited into river and ocean beds, becoming a long-term source
of pollution through desorption and resuspension. In contrast,
sorption by sediment can be an important mechanism for reducing
the area of influence of these hazardous substances by reducing
dissolved concentration of hazardous materials. Moreover, since
the movements and adsorption capacities of sediments vary
significantly with sediment size fractions, transport of sediment
and particulate contaminants must be simulated for each sediment
size fraction. Mathematical models which do not include these
sediment-contaminant interactions may produce errors in predict-
ing the migration of hazardous substances.
194
-------
As revealed by a recent study (13), very few models are
capable of solving the transport of hazardous substances by
including sediment-contaminant interactions (2,8-12). In order
to fully accommodate sediment-contaminant transport, mathemati-
cal models must couple:
- sediment transport for each sediment size fraction
- dissolved contaminant transport
- particulate contaminant transport for each sediment
  size fraction
- bed history of sediment and contaminant.
Currently the following three models include these four
components:
1. CHNSED, developed by Field et al., (12) is an unsteady,
one-dimensional model applicable to rivers. It was
applied to the Rio Grande River (New Mexico).
2. SERATRA, developed by Onishi et al., (9) is an un-
steady, two-dimensional (longitudinal and vertical)
model applicable to rivers and lakes. SERATRA has
been applied to the Columbia River (Washington),
Clinch River (Tennessee), Four Mile and Wolf Creeks
(Iowa) and Cattaraugus and Buttermilk Creeks (New
York).
3. FETRA, also developed by Onishi et al., (2,14) is an
unsteady, two-dimensional (longitudinal and lateral)
model applicable to rivers, estuaries and oceans.
This model was applied to the James River estuary
(Virginia).
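A greatly simplified sketch of the coupling idea behind these models
follows. It is a hypothetical three-box bookkeeping exercise for one
sediment size fraction, with invented rate constants and initial
values; it is not an excerpt from CHNSED, SERATRA, or FETRA.

    # Coupled dissolved contaminant, particulate contaminant (one size
    # fraction), and bed compartments.  For simplicity everything is
    # expressed as mass per unit volume of overlying water; all rate
    # constants and starting values are assumed.

    dt = 3600.0                  # time step, s
    k_ads, k_des = 2e-5, 1e-6    # adsorption / desorption rates, 1/s
    k_dep, k_res = 5e-6, 1e-7    # deposition / resuspension rates, 1/s

    C_dis, C_par, C_bed = 1.0, 0.5, 20.0   # assumed initial conditions

    for _ in range(24 * 30):     # one month of hourly steps
        ads = k_ads * C_dis      # dissolved -> particulate
        des = k_des * C_par      # particulate -> dissolved
        dep = k_dep * C_par      # particulate -> bed
        res = k_res * C_bed      # bed -> particulate (resuspension)
        C_dis += dt * (des - ads)
        C_par += dt * (ads - des - dep + res)
        C_bed += dt * (dep - res)

    print(round(C_dis, 3), round(C_par, 3), round(C_bed, 3))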
Data Requirement
One of the most important aspects of mathematical modeling
is the required field data. For transport modeling the required
data are:
1. channel characteristics
   - cross-sectional shapes (or bathymetry)
   - plan geometry
2. fluid characteristics
   - viscosity
3. flow characteristics
   - distribution of depth
   - distribution of velocity
195
-------
   - wave characteristics in oceans and large lakes
   - salinity
   - temperature
   - diffusion/dispersion coefficient
4. sediment characteristics
   - diameter, density, and mineralogy of sediment
   - critical shear stresses of sediment (or other
     sediment transport parameters)
5. characteristics of hazardous substances
   - adsorption/desorption rates (or distribution
     coefficients)
   - chemical and biological degradation or decay
     rates
Accuracy of model prediction is significantly affected by the
integrity of data used for the modeling. However, because of
the cost and time involved, field sampling activities have been
rather limited. Furthermore, for most instances, field sampling
programs and computer simulation programs have not been coordi-
nated. To make the best use of cost and time, as well as pro-
vide accuracy of the prediction, field sampling planners and
mathematical modelers must coordinate their investigations very
closely.
The amount of data required is basically proportional to
the sophistication of the models. However, simpler models
require more judgmental data than more sophisticated models.
For example, a steady, one-dimensional model requires very care-
ful selection of the longitudinal dispersion coefficient when
it is applied to an estuary. However, dispersion coefficients
are less important when an unsteady, two- or three-dimensional
model is applied.
Currently available data are insufficient to accurately express
the mechanisms of degradation and decay and of sediment-contaminant
interaction. (Among these data, those describing the sediment-
contaminant interaction, especially adsorption/desorption
mechanisms and migration of cohesive sediment, are most urgently
needed because of their significant effects on the movement of
hazardous substances.) For example, a functional relationship
of the distribution coefficient with various sizes and types of
sediments, organic content, and other water quality parameters
must be established to more accurately describe adsorption/
desorption mechanisms. The time variation of Kd must also be
determined. Because of these incomplete data and parameters
to express these important mechanisms, extensive efforts must
be exercised to conduct comprehensive field and experimental
data collection.
196
-------
Verification
Mathematical models must be calibrated and verified under a
wide range of conditions, prior to the application of the model,
to produce accurate and more defensible prediction of transport
phenomena. Currently only a few models are at least partially
verified against field data (2,5,7,9). There is a definite need
to obtain detailed field data and to verify models with these
data. Verification of existing models is probably more im-
portant than creating a new unverified model.
Recommendations
When evaluating the transport of hazardous substances in
surface water, the following steps are recommended:
1. Examine available models to identify general
applicability and limitations of models.
2. Select potential simulation models for transport of
hazardous substances in rivers, estuaries, oceans and
impoundments for further detailed verification tests.
3. Perform literature search for available field data
and/or perform coordinated field data collection,
laboratory physical modeling and laboratory experi-
ments to obtain data needed for model verification.
4. Perform model verification tests with these measured
data to examine selected models.
5. Select most suitable models identified for rivers,
estuaries, oceans and impoundments.
6. If none of the models are appropriate, modify the
models most suitable for the specific application,
and then verify those models.
197
-------
REFERENCES
(1) U.S. Environmental Protection Agency, "Kepone Mitigation
Project Report", Standard and Criteria Office of Water
and Hazardous Materials, Washington, D.C., 1978.
(2) Onishi, Y., and S.E. Wise, "Mathematical Modeling of
Sediment and Contaminant Transport in the James River
Estuary," Proceedings of the 26th Annual ASCE Hydraulics
Division Specialty Conference on Verification of Mathemati-
cal and Physical Models in Hydraulic Engineering, College
Park, MD, August 9-11, 1978, pp. 303-310.
(3) U.S. Nuclear Regulatory Commission, "Liquid Pathway
Generic Study—Impacts of Accidental Radioactive Releases
to the Hydrosphere for Floating and Land-Based Nuclear
Power Plants," NUREG-0400, Washington, D.C., 1978.
(4) Norton, W.R., L.A. Roesner, D.E. Evenson, and J.R. Monser,
"Computer Program Documentation for the Stream Quality
Model, Qual-II," Water Resources Engineers, Inc., Walnut
Creek, CA, 1974.
(5) Leendertse, J.J., "A Water Quality Simulation Model for
Well Mixed Estuaries and Coastal Seas, Principles of
Computation, Volume I," RM-6230-RC, The Rand Corp.,
Santa Monica, CA, 1970.
(6) Churchill, M.A., J.A. Cragwall, R.W. Andrews and S.L.
Jones, "Concentrations of Total Sediment Loads and Mass
Transport of Radionuclides in the Clinch and Tennessee
Rivers," ORNL-3721, Suppl. 1, Oak Ridge National Labora-
tory, Oak Ridge, TN, 1965.
(7) Baca, R.G., M.W. Lorenzen, R.D. Mudd, and L.V. Kimmel,
"A Generalized Water Quality Model for Eutrophic Lakes
and Reservoirs," Pacific Northwest Laboratory, Richland,
WA, prepared for the Office of Research and Monitoring,
U.S. Environmental Protection Agency, 1974.
(8) U.S. Environmental Protection Agency, "Kepone in the Marine
Environment, Publications and Prepublication," Gulf Breeze
Environmental Research Laboratory, FL, 1978.
198
-------
(9) Onishi, Y., "Mathematical Simulation of Sediment and
Radionuclide Transport in the Columbia River," BNWL-2228,
Pacific Northwest Laboratory, Richland, WA, 1977.
(10) Shin, C.S., and E.F. Gloyna, "Radioactivity Transport in
Water — Mathematical Model for the Transport of Radio-
nuclides," EHG-04-6702, Technical Report No. 12 to U.S.
Atomic Energy Commission, University of Texas, Austin, TX,
1967.
(11) Chapman, R.S., "A Model to Investigate the Influence of
Suspended Sediment on the Mass Transport of Pollutants
in Open Channel Plow," NASA-TM-X-94601, 1977.
(12) Field, D.E., "CHNSED: Simulation of Sediment and Trace
Contaminant Transport with Sediment/Contaminant Inter-
action," ORNL/NSF/FATC-19, Oak Ridge National Laboratory,
Oak Ridge, TN, 1976.
(13) Oak Ridge National Laboratory, "Proceedings of a Workshop
on Evaluation of Models Used for the Environmental Assess-
ment of Radionuclide Releases," CONF-770901, September 6-9,
1977, Gatlinburg, TN.
(14) Onishi, Y., E.M. Arnold, R.J. Serne, C.E. Cowan, F.L.
Thompson, D.W. Mayer, and R.M. Ecker, "Annual Report—
October 1977 to September 1978—Mathematical Simulation of
Sediment and Contaminant Transport in Surface Waters,"
NUREG/CR-0658, PNL-2902, Pacific Northwest Laboratory,
Richland, WA, 1979.
199
-------
THE USE AND VERIFICATION OF HYDRODYNAMIC MODELS
IN WATER QUALITY MODELS
By
John F. Paul
Large Lakes Research Station
U.S. Environmental Protection Agency
Grosse Ile, Michigan 48138
The real usefulness of hydrodynamic models is in their
application in water quality models. It is the hydrodynamic
transport which transports material through the nearshore region
into the main part of the lake and which mixes material that is
already present in the lake. The significance of the hydro-
dynamic transport in a particular water quality application
depends on the actual problem investigated: the hydrodynamic
transport is unimportant for a lake treated as a completely
mixed reactor while it can be extremely important for a lake
divided into many segments. This comparison points out that
the real distinction for many water quality problems is in the
scales that are involved, i.e., the time and length scales that
the important mechanisms are assumed to occur over and that have
to be included in the model. Difficulties arise because hydro-
dynamic models have been traditionally calculated over relatively
small time and length scales (on the order of minutes and
kilometers), and water quality models have been traditionally
calculated on relatively large time and length scales (on the
order of seasons of the year and hundreds of kilometers). It
has not been unusual for water quality modelers to state that
it is impossible to use hydrodynamic models because of the
exorbitant computer costs and because of the difficulty in ex-
tracting the meaningful information that the water quality
models need. Similarly, the hydrodynamic modelers have stated
that it would be a meaningless exercise to try to use their
models in the water quality models because these models have
such crude time and length scales. The real problem comes down
to how the models with fine time and length scales can be used
in conjunction with models that have much larger time and length
scales. The solution to this is not to just average the small
scale calculations to arrive at some larger scale motion. This
approach completely eliminates the smaller scale motion that
contributes to what is called the mixing in the larger scale
models. What has to be done is to properly account for this
200
-------
small mixing in the larger scale model. The method that is
being employed at the Large Lakes Research Station (LLRS) is
similar to the Reynolds partitioning idea used to derive the
turbulence equations. (Quantities are written as some mean
value and a fluctuating component about that mean. These ex-
pressions are used to derive equations for the mean quantities).
In our method we consider the small scale quantities as a mean
value over the larger scales and as a fluctuating component
about that mean. Equations can then be derived for the large
scale quantities which account for the small scale mixing. This
appears to be the proper way to deal with the change in scales.
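A toy numerical sketch of this partitioning idea, using synthetic
signals rather than any LLRS data: a fine-scale record of velocity and
concentration is split into a large-scale mean and fluctuations, and
the mean product of the fluctuations is the extra mixing flux that the
large-scale model must carry.

    import numpy as np

    # Reynolds-type partitioning: u = U + u', c = C + c'.  The correlation
    # of the fluctuations, mean(u'c'), is the flux lost if the fine-scale
    # motion is simply averaged away.  The signals below are synthetic.

    t = np.linspace(0.0, 30.0, 3000)                       # 30 days of fine-scale record
    u = 0.05 + 0.20 * np.sin(2 * np.pi * t / 0.5)          # m/s: weak mean, strong oscillation
    c = 2.00 + 0.50 * np.sin(2 * np.pi * t / 0.5 + 0.3)    # mg/L, partly correlated with u

    u_mean, c_mean = u.mean(), c.mean()                    # large-scale quantities
    u_fluc, c_fluc = u - u_mean, c - c_mean                # small-scale fluctuations

    advective_flux = u_mean * c_mean                       # kept by naive averaging
    mixing_flux = np.mean(u_fluc * c_fluc)                 # discarded by naive averaging

    print(advective_flux, mixing_flux)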
If we say that the major use of the hydrodynamic models will
be in their application in water quality models, then we should
require that the hydrodynamic models be verified on the same time and
length scales as the water quality models. What kind of confi-
dence can one have in the use of a model if it is verified
against data for a couple of hours or days and then used for an
application that extends for a year or more? The only way you
can verify a hydrodynamic model this way is to calculate the
transport of some material in the water for the whole period of
time you are interested in. This does require a lot of data to
compare with. One way to satisfy this requirement for data is
to employ remotely sensed data. For example, the Nimbus 7
satellite is presently in orbit and provides coverage 5 out of
every 6 days with a resolution of 800 meters. The hydrodynamic
model could be used in conjunction with a transport model for
suspended solids in the water and compared with the satellite
data. Ship cruises would also have to be employed to provide
ground truth and give some vertical distribution in the water.
A lot more data would be available this way than if just ship
cruises were employed. Some of the models at LLRS are presently
being verified with data obtained by remote sensing and by ship
cruises.
201
-------
TOXIC SUBSTANCE MODELING RESEARCH AT THE
LARGE LAKES RESEARCH STATION
By
William L. Richardson
Large Lakes Research Station
U.S. Environmental Protection Agency
Environmental Research Laboratory-Duluth
Grosse Ile, Michigan 48138
Introduction
The U.S. Environmental Protection Agency (EPA) is confron-
ted with an enigmatic responsibility of managing environmental
quality. The key question that ultimately arises during the
course of its job is: What quantity of a substance, if any, can
be allowed to be discharged and yet maintain the quality and
associated uses of the system? In fact, this is the primary
task confronting EPA in the implementation of the Toxic Sub-
stances Control Act (TSCA) - to regulate or control the discharge
of substances or mixtures which "...present an unreasonable risk
of injury to health or to the environment..."
Regulation of toxic substances will not be done, however,
without consideration of industrial and market concerns. Thus
one of the principles recognized by Congress in preparing this
legislation is that a risk free society is not obtainable (1).
Also, as stated by William Butler, General Counsel, Environmental
Defense Fund:
"Environmentalists wish to see established some rational
method of toxic chemical control which will maximize
benefits of chemicals while at the same time minimize
their unintended hazards to human health and the environ-
ment."
Although, as Mr. Butler continues,
"Let's not kid ourselves, we have a long way to go before
achieving that goal."
202
-------
As a result of these viewpoints which reflect technologic,
economic and social realities, we will always be confronted
with certain amounts of marginally hazardous substances in the
ecosystem.
The modeler's goal is and has been to provide at least
some rational input to the regulatory process whatever the
environmental issue. The modeling-management process can be
depicted by the flow chart shown in Figure 1. The process
consists of four primary areas: 1) information gathering,
2) modeling, 3) health effect assessment, and 4) decision
making. Information gathering includes:
1. Quantification of existing or expected discharges
or emissions.
2. Measurements of ambient concentrations in various
compartments of the ecosystem.
3. Experimentation to obtain process routes and
rates.
Modeling can involve two separate types: 1) diagnostic
and 2) prognostic. Diagnostic modeling includes the synthesis
of surveillance and experimental data into a calculation that
is able to emulate the real world. If this is done satisfactor-
ily then the model might be used for prognostic analyses, i.e.,
simulations of biochemical concentrations (dose) in space and
time. These prognostics may be used for regulatory purposes
along with health effects data (allowable doses) to decide what
amount, if any, can be allowed into the ecosystem.
A strategy for Toxic Substance Modeling research has been
formulated for the Great Lakes by Wayland Swain (3), as shown
in Figure 2. This might be referred to as an ecosystem approach.
The unique aspect is that it proposes to quantify the sources,
reservoirs, exposure routes, dose levels and health effects of
chosen chemicals for selected geographical areas. It would
provide the necessary data for a complete diagnostic evaluation.
This approach has been initiated in part for Saginaw Bay,
Lake Huron. Samples have been collected starting in 1977 and
are being analyzed for PCB. Initially the data will be used to
perform a materials balance to see if PCB can be accounted for
in all compartments. Then this will be expanded to include
adsorption-desorption and biological processes. Because of the
complexity and expense of laboratory start-up, only some of the
water compartment data are available for 1977.
The results of this preliminary materials balance are
shown here for exemplary purpose only. The results are prelim-
inary and should not be used for citation since the values may
change as more data become available. The purpose is to show
the procedures followed in performing a mass balance. These
include:
203
-------
FIGURE 1. ENVIRONMENTAL MODELING-MANAGEMENT PROCESS
(flow chart linking: input assessment (point sources, diffuse sources;
magnitude and location); impact assessment (surveillance: routes of
transport, biomass, compartment concentration, sinks); experimentation
(rates of transport, transformation, and bioconcentration); modeling
(diagnostic: synthesis of information, emulation of existing conditions;
prognostic: simulation of expected responses); health effects assessment
(dose-effect); and decision making (regulation: reduce existing
discharges, prevent manufacture, allow some manufacture, alter
environmental goals), with expected dose compared against allowable
dose and alteration of inputs closing the loop)
204
-------
FIGURE 2. A STRATEGY FOR TOXIC SUBSTANCE MODELING (W. SWAIN, LLRS, FEB. 1979)
(diagram tracing source inputs (atmosphere, tributary, direct discharge)
through the Great Lakes, sediment interactions (sink, source), direct
uptake (bioconcentration) and uptake via food sources (bioaccumulation:
phytoplankton, zooplankton, all other food sources), adult exposure level
via drinking water and food, internal pool mechanisms (uptake
differentials, elimination rates, maintenance of titer), effective adult
dose and health effects in adult populations, exposure via milk,
effective maternal dose, internal pool mechanisms (circulating blood
titer, auto-re-exposure, selective elimination), in utero exposure from
the mother's circulation, and health effects in infant/child populations)
-------
1. Quantifying the transport structures of the Bay.
a. estimate loadings for a trace substance, chloride
b. trace this substance through the system in space
and time, adjusting the transport coefficients
until the calculated concentrations are equal to
the measured (see Figures 5-8)
2. Estimate PCB loads (Figure 9).
3. Calculate expected PCB concentrations in each spatial
compartment (Figures 10 and 11).
4. Evaluate results.
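A greatly simplified sketch of step 1 for a single completely mixed
segment is given below; the volume, flow, and chloride load are
invented, and the actual Saginaw Bay calculation uses several segments
with exchange coefficients.

    # One-segment chloride mass balance in the spirit of step 1: the
    # conservative tracer constrains the transport (flushing) term before
    # PCB is simulated.  All numbers are invented.

    V = 8.0e9        # segment volume, m3
    Q = 190.0        # through-flow, m3/s
    W_cl = 2.85e3    # chloride load, g/s

    dt = 86400.0     # one-day time step, s
    C = 5.0          # initial chloride, g/m3 (mg/L)

    for _ in range(4 * 365):
        C += dt * (W_cl - Q * C) / V     # dC/dt = (load - outflow) / volume

    print(C)         # approaches the steady value W_cl / Q = 15 mg/L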
Figure 3 shows the sampling scheme and the priority
stations at which these initial data are available. Figure 4
shows the Saginaw River sampling locations where
samples are obtained to estimate loadings. As can be seen in
Figure 7, the chloride data are well reproduced by the mass
balance model, thus confirming the transport structure used for
the PCB calculations.
The estimated monthly PCB loads are shown in Figure 9 for
the mixtures Aroclor 1254 and 1016. Although the 1016 loading is 2 to
3 times greater than that of 1254, 1016 was found in the bay above
detectable levels too infrequently to do a mass balance; there-
fore, the mass balance was carried out for 1254 only. These
results are shown in Figure 10, where calculated Aroclor 1254
concentrations are superimposed on the measured values for each segment
and over time. Although it is readily apparent that the simple
mass balance model cannot adequately describe the data, the
analysis does provide an initial insight into PCB transport in
Saginaw Bay.
For example, when loadings are set equal to zero, the cal-
culated concentrations seem to follow the lowest data points.
This suggests that transient processes are occurring in the bay
like rapid settling and resuspension of solids to which PCB are
adsorbed.
The calculated mass of Aroclor 1254 in the entire bay is
plotted in Figure 11. Over the data period of April through
November, an average of 554 kg resided in the bay compared to
300 kg accounted for by the model. This suggests either an un-
known source or a lack of sufficient data to describe the
variability adequately. Obviously, more research is required.
However, several general conclusions can be drawn from
this initial investigation.
1. If toxic substances are to be modeled in complex
systems such as Saginaw Bay, much expensive data are
necessary for model calibration and verification.
206
-------
FIGURE 3. SAGINAW BAY 1977 SAMPLING NETWORK AND SEGMENTATION
(priority stations marked)
FIGURE 4. SAGINAW RIVER 1977 SAMPLING NETWORK AND SEGMENTATION
PRELIMINARY INFORMATION, NOT FOR CITATION
207
-------
FIGURE 5. SAGINAW RIVER DAILY HYDROGRAPH, 1977
(annual mean 3180 cfs (90 m3/sec); abscissa: Julian day, 1977)
FIGURE 6. SAGINAW RIVER CHLORIDE LOAD, 1977
(abscissa: Julian day, 1977)
PRELIMINARY INFORMATION, NOT FOR CITATION
208
-------
FIGURE 7. SAGINAW RIVER CHLORIDE TOTAL SYSTEM, 1977
(chloride by segment (segments 2, 4, and 5 shown); legend: cruise average
± 1 S.D. (mg/l), computed (mg/l); abscissa: Julian day)
FIGURE 8. SAGINAW BAY CHLORIDE TOTAL SYSTEM MASS, 1977
(model and data over 1977; data mean 2.34, model mean 2.33)
PRELIMINARY INFORMATION, NOT FOR CITATION
209
-------
FIGURE 9. SAGINAW RIVER 1977 PCB DATA
(Aroclor 1254 monthly load and Aroclor 1016 monthly load; preliminary,
June 1978)
FIGURE 10. SAGINAW BAY AROCLOR 1254, 1977
(by segment; legend: 1977 loading, zero loading, volume weighted average
± 1 S.D.; abscissa: Julian day, 1977)
PRELIMINARY INFORMATION, NOT FOR CITATION
210
-------
FIGURE 11. SAGINAW BAY AROCLOR 1254 TOTAL SYSTEM MASS
(monthly, 1977)
PRELIMINARY INFORMATION, NOT FOR CITATION
211
-------
2. Toxic substance models are an extension of existing
models which provide the foundation. These include
physical transport models and nutrient-biomass models.
3. Therefore, surveillance for model development must
continue to include traditional parameters and add on
necessary toxic data. For example, we will always need
data for input flows, conservative tracers, biomass
at several levels of the food chain, nutrients for
biomass growth, sediment interaction, temperature, and
other physical parameters.
4. Models may provide the expected dose concentrations in
space and time but will not provide effects.
5. Adequate computer resources and data bases must be
available to the modeler if models are to be used
in the decision making process.
6. Modelers must understand how the data were collected
and analyzed and must help design the field programs
and data bases.
7. Because of the expense of this holistic approach, a
few well planned and executed integrated studies for
a few selected substances are preferred over a diluted
collection of data.
212
-------
REFERENCES
(1) Dominguez, G.S. Guidebook: Toxic Substances Control
Act. CRC Press, Inc. 1977.
(2) Butler, William A., Toxic Chemicals: Environmentalists'
Concerns for the Future. In Proceedings of the Second
Annual Toxic Substance Control Conference, December 8-
9, 1977, Government Institutes, Inc.
(3) Swain, Wayland, Director, EPA, Large Lakes Research
Station. Personal Communication. February 1979.
213
-------
THE NEED FOR INNOVATIVE VERIFICATION
OF EUTROPHICATION MODELS*
By
Donald Scavia
Research Scientist
Great Lakes Environmental Research Laboratory
National Oceanic and Atmospheric Administration
2300 Washtenaw Avenue
Ann Arbor, Michigan 48104
Introduction
In recent years there has been a trend toward using more
mechanistic models of the eutrophication process. By mechan-
istic, I mean models that account for, or simulate, certain
actual processes within the aquatic environment. This excludes
models that are only statistical relations between dependent
variables, blackbox models that ignore internal dynamics, and
models that simulate internal dynamics by unrealistic formula-
tions that are not, or cannot be, measured. These more
mechanistic models must follow the same standard procedures of
model development, calibration, and verification as have the
simpler models; however, as will be discussed below, additional
tests may also be necessary to build confidence in application
of these models.
The Need for Additional Tests
Often, complete verification of a more mechanistic model is
not possible by usual techniques because one does not have a
complete and independent data set. This is because sampling
all of the properties simulated in more mechanistic models is
difficult and expensive (e.g., zooplankton biomass).
Even when a complete verification data set is available
and the more mechanistic model has been "verified" by usual
techniques, one is left with serious questions concerning
reliability for two reasons: 1) calibration and verification
tests are subjective and 2) there are increased degrees of
*GLERL Contribution No. 185
214
-------
freedom in these generally nonlinear models. The first reason
will not be discussed here because it is considered elsewhere in
these proceedings.
The term increased degrees of freedom, in this context,
means that more than one set of coefficient values will satisfy
the usual tests for calibration and verification. The basis for
increased degrees of freedom is the cyclic nature of mechanistic
models. Since these models generally simulate ecosystem cycles,
one would not expect material to accumulate excessively in one
particular component but rather to flow among all of the com-
ponents. Then, because of the principles of mass conservation,
one could expect that, if the rate of flow were increased or
decreased proportionately, the state variable concentrations would
not be affected significantly (at least not within the variabil-
ity usually inherent in the verification data set).
It is for this reason and because of the lack of long-
term data that I am suggesting that additional verification tests
be included in the standard procedures for testing mechanistic
models.
Two Additional Tests
The first type of test is related to gross dynamics and
empirical relationships developed for lakes and is particularly
useful when long-term verification data are lacking. The
second type of test is related to the verification of internal
model dynamics and is useful for reducing the degrees of
freedom.
Gross properties--If it is impossible or at least very
difficult to verify directly the long-term dynamics of the
mechanistic model, one can test it indirectly by comparing model
output with output from simpler verified models. An example of
this approach can be found in Scavia and Chapra (1977). In this
study, the results of a mechanistic model were compared with
predictions of annual average total phosphorus made by a simple
mass-balance model. The mechanistic model (Figure 1) was run to
steady-state under a number of nutrient load conditions. At
steady-state, annual average total phosphorus was calculated by
aggregating the model components and averaging over a year. For
the comparison, a simple steady-state mass-balance model for
annual average total phosphorus (Dillon and Rigler, 1974a) was
solved with the same nutrient load conditions:
P = L(1 - R)/qs                                        (1)
215
-------
Omnivorous
Zooplankton
Fish Predation
Small
Large
Herbivorous
Cladocerans Cladocerans Copepods
Rotifers
Mysids
u
r
Oxygen
N-Fixing
Blue-
Greens
Small Large
Others Others
Small Large
Diatoms Diatoms
Nitrate
Nitrite
Reactive
Silica
Inorganic
Carbon
Available
Phosphorus
Dissolved i ^^— • • •-—*
Organic ZzX Ammonia I
Nitrogen rn^^ ^
Detrital
Silica
I979o)
FIGURE I. CONCEPTUAL FRAMEWORK OF MECHANISTIC MODEL
-------
where the retention coefficient R is from Chapra (1975):

     R = 16/(16 + qs)                                  (2)

Combining equations (1) and (2) yields:

     P = L/(16 + qs)                                   (3)

where P  = annual average total phosphorus,
      L  = phosphorus loading rate, and
      qs = areal water load.
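As a worked example of these relations (with hypothetical inputs), a
lake loaded at 1000 mg P/m2 per year with an areal water load of 10
m/yr would be expected to average about 38 mg/m3 of total phosphorus:

    # Worked example of equations (1)-(3); the load and areal water load
    # are hypothetical.

    L_load = 1000.0                     # phosphorus loading rate, mg/m2/yr
    q_s = 10.0                          # areal water load, m/yr

    R = 16.0 / (16.0 + q_s)             # retention coefficient, equation (2)
    P = L_load * (1.0 - R) / q_s        # equation (1); equals L_load / (16 + q_s)

    print(R, P)                         # R = 0.615, P = 38.5 mg/m3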
The comparison of the results of the two models (Figure 2)
indicates that both produce similar estimates of total phos-
phorus. Therefore, if the mass balance model was a verified
model or was proven to be general in most respects, then the
mechanistic model could be considered verified to some degree
(at least in terms of long-term mass balance considerations).
Scavia and Chapra (1977) also demonstrated another way to
test a mechanistic model in terms of gross properties. In this
test, the model output was treated like lake data to see if it
conforms to an empirical correlation known to be applicable for
a wide variety of lakes. In other words, the model was tested
to see if it was behaving like the lake. The correlation
(Dillon and Rigler, 1974b) relates (r = 0.95) summer average
chlorophyll a (chl a) to spring total phosphorus (Pv) for a
data set of 46 lakes, each with a nitrogen to phosphorus ratio
greater than 12:
log10[chl a] = 1.449 log10[Pv] - 1.136                 (4)
It is reasonable to assume that equation (4) represents
well a large cross section of lakes. For model comparison,
the mechanistic model (Figure 1) was run under a number of
conditions, and for each year that N:P>12, spring total P
concentrations and summer average chlorophyll a concentrations
were calculated. These results were then plotted (Figure 3),
along with equation (4). The agreement between model output
and the empirical curve was good up to a point. Beyond about
75 µg P/l, the model output diverged consistently from the line.
Thus, in this case, confidence in the model was inspired because
it reproduced the relationship between spring phosphorus and
summer chlorophyll a; however, other important information was
also obtained. The model failed to function consistently under
extremely eutrophic conditions. Scavia and Chapra (1977) suggest
causes for the failure, but the important point here is that
217
-------
FIGURE 2. COMPARISON OF TOTAL PHOSPHORUS CONCENTRATION (mg/m3) AS
CALCULATED BY THE MECHANISTIC MODEL AND BY EQUATION (3)
(SCAVIA AND CHAPRA, 1977)
(ordinate: mechanistic model result; abscissa: equation 3; scale 0-80)
218
-------
FIGURE 3. COMPARISON OF MECHANISTIC MODEL RESULTS AND CORRELATION LINE
BETWEEN AVERAGE SUMMER CHLOROPHYLL a AND SPRING TOTAL PHOSPHORUS
(SCAVIA AND CHAPRA, 1977)
(ordinate: summer epilimnion chlorophyll a, mg/m3; abscissa: spring total
phosphorus; legend: model results, Dillon and Rigler's correlation (1974))
219
-------
this verification procedure provided a test of confidence as
well as set a possible limit to the model's applicability.
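A short sketch of this kind of check is given below; the regression
coefficients are those of equation (4), but the "model output" pairs
are invented for illustration.

    import numpy as np

    # Treat model output like lake data: compare simulated summer
    # chlorophyll a with the Dillon and Rigler (1974b) regression of
    # equation (4).  The spring P / chlorophyll pairs below are invented.

    spring_P = np.array([10.0, 20.0, 40.0, 60.0, 80.0])     # ug P/l
    model_chla = np.array([2.1, 5.4, 15.0, 25.0, 30.0])     # ug/l, hypothetical

    regression_chla = 10 ** (1.449 * np.log10(spring_P) - 1.136)

    for P, m, r in zip(spring_P, model_chla, regression_chla):
        print(f"P = {P:5.1f}   model = {m:6.1f}   regression = {r:6.1f}")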
Internal dynamics—The second type of verification test
proposed here is verification of the internal dynamics of the
mechanistic model. One of the most important reasons for using
mechanistic models is to examine the controls of the system.
For example, a mechanistic model can be used to examine the
controls of phytoplankton production (Figure 4) and phosphorus
cycling (Figure 5). In this context, model output is used to
estimate the timing and relative magnitude of the influence of
specific processes on state-variable dynamics. One important
question concerning this use of the model is whether the simula-
ted process rates are accurate representations of real processes.
As mentioned above, compensating errors at the process level
might lead to a successful calibration at the state-variable
level. Thus, if models are to be used at the process level and
we are to have faith in the dynamics that produce the state-
variables, we must look closely at the modeled processes.
The following example demonstrates one method of verifying
processes and the way in which compensating errors at the
process level can lead to erroneous conclusions regarding
system controls.
After initial calibration of the state variables in a
mechanistic model of Lake Ontario (Figure 1), simulated process
rates were compared to actual measurements. For this comparison,
a summer averaged (July-Sept.) phosphorus flow diagram was
constructed (Figure 6a) from aggregated model output. The flow
(or transfer) rates were then compared to measurements and cal-
culations from Lake Ontario and to other, more theoretical
estimates. Many of the simulated process rates were very low
(as much as 3-7 times lower) compared to actual rates, with the
most serious discrepancies in transfers among available
phosphorus, phytoplankton, and zooplankton, yet the state
variables compared successfully! Therefore, I calibrated the
model again, keeping the process rates in mind and most
coefficient values still within acceptable ranges. The new
calibration is shown in Figure 6b. The interesting point here
is that the state variables are close to the originally calibra-
ted values and can still be considered calibrated; however, the
process rates are much higher and, in fact, much closer to
observed values (Scavia, 1979b).
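A minimal sketch of such a process-level check follows; the simulated
rates and the observed ranges used here are hypothetical placeholders,
not the Lake Ontario values.

    # Compare summer-averaged simulated process rates against observed or
    # theoretical ranges.  All numbers below are hypothetical.

    observed_range = {                        # ug P / L / day: (low, high)
        "phytoplankton uptake":  (0.30, 0.60),
        "zooplankton grazing":   (0.10, 0.30),
        "detritus regeneration": (0.05, 0.15),
    }

    simulated = {                             # summer average from aggregated output
        "phytoplankton uptake":  0.10,
        "zooplankton grazing":   0.05,
        "detritus regeneration": 0.08,
    }

    for process, (low, high) in observed_range.items():
        sim = simulated[process]
        flag = "OK" if low <= sim <= high else ("LOW" if sim < low else "HIGH")
        print(f"{process:25s} simulated = {sim:.3f}  observed {low}-{high}  {flag}")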
This example demonstrates that if the model were calibrated
only in terms of state variables and then used to examine
control of the phosphorus cycle, then the relative importance
of certain processes would be overestimated by almost an
order of magnitude. For example, bacterial regeneration of
available phosphorus (detritus available P) is relatively
more important in Figure 6a than in Figure 6b and the relative
220
-------
FIGURE 4. RATE PLOT INDICATING SIMULATED CONTROLS OF EPILIMNION
PHYTOPLANKTON DYNAMICS IN LAKE ONTARIO (SCAVIA 1979b)
(monthly rates, January through December: gross production, net
production, respiration, grazing, sinking, diffusion; periods marked
light limited, P limited, and light limited; stratification and
overturn indicated)
221
-------
FIGURE 5. RATE PLOT INDICATING CONTROL OF PHOSPHORUS DYNAMICS
(SCAVIA 1979b)
(available phosphorus rates, January through December: phytoplankton
uptake, excretion, detritus decay, diffusion, and net rate; overturn
indicated)
222
-------
FIGURE 6. PHOSPHORUS FLOW DIAGRAM
(A) STATE VARIABLE CALIBRATION; (B) STATE VARIABLE AND PROCESS CALIBRATION
(summer-averaged (July-September) phosphorus flows among external sources,
phytoplankton, herbivores, detritus, and the hypolimnion; concentrations
in boxes are in µg-P/L; in panel (A) phytoplankton = 4.76 and detritus =
5.24, in panel (B) phytoplankton = 3.98, herbivores = 1.51, and detritus
= 6.36)
-------
importance of external loads and of transport into and out
of the epilimnion is exaggerated in Figure 6a.
Summary
Because of increased degrees of freedom and the usual lack
of long-term verification data, mechanistic models need verifica-
tion tests beyond the standard tests used for state variable
simulation. Two general types of verification can be useful
additions to the usual tests: 1) a comparison of aggregated
output from the mechanistic model with output from simpler
models and empirical correlations that have been verified or
proven to be general and 2) a comparison of simulated process
rates with rates measured in the field or in the laboratory to
determine if the model's internal dynamics are consistent with
measured and theoretical dynamics.
224
-------
REFERENCES
(1) Chapra, S.C. 1975. Comment on "An empirical method of
estimating the retention of phosphorus in lakes" by
W.B. Kirchner and P.J. Dillon. Water Res. Research
11:1033-1036.
(2) Dillon, P.J., and F.H. Rigler. 1974a. A test of a
simple model predicting the phosphorus concentration in
lake water. J. Fish. Res. Bd. Canada 31:1771-1778.
(3) Dillon, P.J., and F.H. Rigler. 1974b. The phosphorus-
chlorophyll relationship in lakes. Limnol. Oceanogr.
19:767-773.
(4) Scavia, D. 1979a. An ecological model of Lake Ontario.
Ecol. Modeling, in press.
(5) Scavia, D. 1979b. Examination of the control of
phytoplankton dynamics and phosphorus cycling in Lake
Ontario with an ecological model. J. Fish. Res. Bd.
Canada (in press).
(6) Scavia, D., and S.C. Chapra. 1977. Comparison of an
ecological model of Lake Ontario and phosphorus loading
models. J. Fish. Res. Bd. Canada 34:286-290.
225
-------
SOME THOUGHTS FOR COMMITTEE BRIEFING -
WATER QUALITY & HAZARDOUS SUBSTANCES
MARCH 8, 1979
By
T. J. Tofflemire
New York State Department of
Environmental Conservation
Bureau of Water Research
As I indicated to the program organizers, I am not an
expert in either mathematical modeling or in toxic substances in
general. I was requested to attend because of my experience
with the Hudson River PCB problem.
I have done a considerable amount of work in monitoring
dredging activities and have several reports relating to this
(1-6). I have also done several years work in collection of
sediment samples and in analysis of sediment data for PCB and
heavy metals (7,8). In the study of the Hudson River PCB prob-
lem, many consultants were retained and studies performed, as
noted in Table 1. My work also involved coordinating some of
the consultants' efforts, applying their results to develop a
solution to a practical problem (PCB in the Hudson River). A
summary report on the consultants' results and DEC studies is
available (9). Recently, I have become involved in calculations
of PCB volatilization from sediments and from water. The vola-
tilization rates of PCB from sediment are high, but can be
greatly reduced by capping the sediments with organic topsoil or
clay (10,11). There is more information in the above referenced
studies than I can possibly describe here. Many of you may al-
ready be familiar with the Hudson River PCB problem. If there
are any questions on these topics, I will try to answer them.
In the following paragraphs I will mention findings,
figures and tables that may be related to modeling of toxic
substances, especially PCB, in water bodies in general.
One often learns of toxic substances in water bodies from
fish analyses. Fish do bioconcentrate certain toxics so that their
detection is easier. The concentration of toxics in water is
often so low that accurate detection and tracing of the source
of toxics are difficult. Fish are quite mobile and one cannot
226
-------
TABLE I. HUDSON RIVER PCB STUDY CONSULTANTS
CONSULTANT OR GROUP
NORMANDEAU ASSOCIATES
BEDFORD, NH
LAMONT-DOHERTY
PALISADES, NY
USGS - ALBANY
RENSSELAER POLYTECHNIC INST,
TROY, NY (DR. T. ZIMMIE)
HYDROSCIENCE
WESTWOOD, NJ
LAWLER, MATUSKY & SKELLY ENG.
PEARL RIVER, NY
MALCOLM PIRNIE
WHITE PLAINS, NY
WESTON INC.
WEST CHESTER, PA
O'BRIEN AND GERE ENG.
SYRACUSE, NY
RALTECH (WARF)
MADISON, WI
STUDY
BED SAMPLING AND MAPPING (UPPER HUDSON)
BED SAMPLING, PCB ANALYSIS, Cs 137
DATING (LOWER HUDSON)
STATUS
REPORT AND MAPS IN
REPORT IN
RIVER HYDROLOGY WATER PCB AND
SUSPENDED LOAD
BEDLOAD SAMPLING (UPPER HUDSON)
PCB MATH MODELING OF UPPER AND
LOWER HUDSON
(1976 & 77) DATA AND PAPERS IN
(1978) SOME DATA IN
(1979)
(1977)
(1977)
(1978 - 79)
MODELING SEDIMENT AND PCB TRANSPORT
AND HYDROLOGY FOR UPPER HUDSON (NO ACTION)
(DREDGING)
DREDGING TECHNOLOGY, PLANS, AND
ASSOCIATED IMPACTS
FINAL PLANS AND SPECIFICATIONS FOR
HOT SPOT DREDGING
PCB LANDFILLS AND DREDGE SPOIL SITES,
SAMPLING AND COMPLETE ENVIRONMENTAL
ANALYSIS
1. PCB ANALYSES SEDIMENT AND
WATER MEDIA
2. EFFECT OF PCB ON WATER SUPPLIES
(TREATMENT METHODS)
PCB ANALYSES - BIOLOGICAL MEDIA
(1977 - 78)
DATA IN
REPORT IN
BEGINNING
REPORT IN
BEGINNING
7 REPORTS IN
TO BEGIN WHEN FUNDED
REPORT IN
DATA AND REPORT IN
IN PROGRESS
IN PROGRESS
-------
CONSULTANT OR GROUP
SYRACUSE RESEARCH CORP.
NYS DEC BUREAU OF
AIR RESOURCES
NYS DEC BUREAU OF
FISH AND WILDLIFE
NYS DEPT. OF HEALTH
GENERAL ELECTRIC
CORP., RES. & DEV.
SCHENECTADY, NY
BOYCE THOMPSON
ITHACA, NY
FORDHAM UNIVERSITY
NYU MEDICAL CENTER
SUNY, STONY BROOK, NY
*MT. SINAI SCHOOL OF MEDICINE
NEW YORK, NY
CORNELL UNIVERSITY AND
AGRICULTURE AND MARKETS
ITHACA, NY
STUDY
TABLE 1 (continued)
STATUS
PCB ANALYSES - SEDIMENT AND WATER MEDIA BEGINNING
PCB AIR SAMPLING (1977 - 78) DATA IN
FISH DATA ANALYSIS
(1977 - 78) IN PROGRESS
MACROINVERTEBRATE ANALYSIS, HEAVY METALS
ANALYSIS, LAB QUALITY CONTROL, SOME
PCB ANALYSIS, Cs 137 DATING
SEDIMENT INCINERATION, PCB VOLATILIZATION,
PCB BIOLOGICAL DEGRADATION, NON-DREDGING
PCB RENOVATION OPTIONS
ENVIRONMENTAL EFFECTS OF PCB SUBSTITUTE-
DIELECTROL I, II
TOXICITY OF PETROLEUM HYDROCARBONS
SAMPLING VEGETATION AND LAND IN
FT. EDWARD TO ALBANY AREA
BIOLOGICAL SAMPLING, LABORATORY MODELING
AND PROJECTIONS FOR LOWER HUDSON
HEALTH EFFECTS OF GENERAL ELECTRIC
FACTORY WORKERS
ANALYSIS OF CROP AND FOOD DATA-
IN PROGRESS
IN PROGRESS
WAITING FINAL REPORT
REPORT IN
BEGINNING
BEGINNING
BEGINNING
WAITING FINAL .REPORT
PLANNED
* Not funded by DEC.
-------
accurately locate the sources and sinks of the problems by fish
either. Sediments are much less mobile and are one of the best
media to analyze for toxics. Toxic compounds such as PCB are
stored in the sediments at concentrations of 10^3-10^5 times the
concentrations found in water. It would seem that a vital input
needed by most water models of toxics would be good sediment
data.
For the upper Hudson it was concluded that the sediment PCB
values were log-normally distributed. In some low velocity
areas near the bank, PCB concentrations of 900 µg/g were found,
as noted in Figure 1. In the main channel of the river, the
sediment was often sandy and much lower in PCB (15-20 µg/g).
The downriver variations (Figure 2) in PCB concentration were
much more gradual than the across-river variations. In the
Hudson, two or three samples in a mucky, near-bank area 20 miles
downstream of the source of PCB might average 300 µg/g, while in
an area 2 miles downstream of the source, in sandy sediment in
the center of the channel, the PCB may average only 15 ppm. If
one proceeded on the philosophy of simply averaging all the PCB
samples at a given river mile to obtain the average river bed
concentration, one could wrongly conclude that the PCB was 20
times as high 20 miles downriver as it was 2 miles downriver
from the source of PCB. The solution is to divide the river into
different types of areas on the basis of velocity, depth,
sediment texture, and presence of emergent vegetation, and then
average the PCB concentrations in those respective areas.
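A small numerical illustration of the difference this makes (all
concentrations and areal fractions are hypothetical): a naive average
over every sample at a cross section is compared with an average
weighted by the area each type of deposit actually occupies.

    import numpy as np

    # Naive averaging vs. stratified (area-weighted) averaging of sediment
    # PCB at one river mile.  All values are invented.

    near_bank = np.array([250.0, 320.0, 300.0])    # ug/g, mucky low-velocity zone
    mid_channel = np.array([12.0, 18.0, 15.0])     # ug/g, sandy main channel

    naive_mean = np.mean(np.concatenate([near_bank, mid_channel]))

    area_frac = {"near_bank": 0.15, "mid_channel": 0.85}   # assumed areal shares
    stratified_mean = (area_frac["near_bank"] * near_bank.mean() +
                       area_frac["mid_channel"] * mid_channel.mean())

    print(round(naive_mean, 1), round(stratified_mean, 1))  # about 152 vs. 56 ug/g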
Another factor to consider is the partitioning coefficient
between the PCB in sediment and the PCB soluble in water. Table
2 gives some of the experimental values. The method of defining
solubility is difficult. In an elutriate test, a lot of colloi-
dal solids are suspended that are difficult to separate from the
water and the exposure of the sediment is increased over its
exposure in a river situation. The strength of binding of the
PCB to different sediment types is also a factor as noted in
Figure 3. As the organic carbon content of the sediment in-
creases, PCB is more tightly bound (12). Other factors affect-
ing partitioning coefficients are described in several referen-
ces (13-17).
1. The specific PCB isomer and position of chlorine
attachment.
2. The ionic strength of the water (PCB is less
soluble in salt water than in distilled water).
3. The method of defining solubility.
4. The conditions of mixing in the test.
5. The concentration levels of PCB used.
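As an arithmetic illustration of why these definitions and test
conditions matter (numbers hypothetical), the fraction of PCB that
remains dissolved depends strongly on both the partition coefficient
and the solids concentration present in the test or in the river:

    # Dissolved fraction of PCB as a function of the sediment-water
    # partition coefficient Kp and the suspended-solids concentration.
    # The Kp and solids values below are hypothetical.

    def dissolved_fraction(Kp_L_per_kg, solids_mg_per_L):
        S = solids_mg_per_L * 1.0e-6            # convert mg/L to kg/L
        return 1.0 / (1.0 + Kp_L_per_kg * S)

    for Kp in (1.0e3, 1.0e4, 1.0e5):
        for solids in (20.0, 200.0):            # river-like vs. elutriate-like solids
            print(Kp, solids, round(dissolved_fraction(Kp, solids), 3))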
Another topic of concern is the transfer of PCB from
sediment to water by erosion (bedload and suspended load).
229
-------
TABLE 2. SEDIMENT TO WATER PCB PARTITIONING COEFFICIENTS
(Coefficient = dry sediment conc./soluble water conc., .45µ filter or
centrifuge)

TYPE STUDY / AROCLOR / COEFF. / AUTHOR
Sediment mixed and settled / 1242, 1016 / 1.3x10^3 / Paris et al. (1978)
Mixed and settled / 1254 / 6x10^4 / Halter and Johnson (1977)
Water flows over sediment in tank / 1254 / 9x10^5 / Halter and Johnson (1977)
Mixed and settled (elutriate) / 70% (1016) / 10^3-10^4 / LMS-GE, Hudson River (1976)
Flow over in tank / 70% (1016) / 10^4* / LMS-GE, Hudson River (1976)
Mixed and settled / 70% (1016) / 10^3-10^4 / Tofflemire (1976)
Mixed and settled / 70% (1221) / 10^3 / Tofflemire (1976)
Flow over tank / 80% (1016) / 2x10^5 / Veith, Hudson River (1976)
Elutriate tests, Hudson River / 70% (1016) / 10^3-10^4 / NYS Dept. of Trans. (1976-7)
Mixed and settled for 5 cities in U.S. / 70% (1254) / 10^3-10^4 / Fulk et al. (1975)
Mixed and settled / 70% (1016) / 10^3-10^4 / General Elec., McFarland (1977-8)
Elutriate tests / 70% (1016) / 10^3-3x10^4 / Malcolm Pirnie (1977)

* Here there was 200 mg/l of suspended solids in the water tested.

Summary values by simulation type (elutriate = dredging simulation; flow
over in tank = river simulation) are also given for Aroclor 1221,
Aroclors 1016 & 1242, and Aroclor 1254.
-------
FIGURE 1. PCB CONCENTRATIONS IN BED MATERIAL
(cross section at river mile 189.2; horizontal scale 0 to 600 ft; depths
to about 20 ft)
-------
(Figure 2: downriver variation of bed PCB concentration, log scale (0.1
to 10^4 µg/g); log mean and 95% confidence interval shown, with number
of samples noted)
-------
FIGURE 3. ADSORPTION OF AROCLOR 1242 ON DIFFERENT ADSORBENTS
(legend: activated sludge, Hudson River sediment, top soil, silt, saw
dust, microcrystalline cellulose, Celite, Ottawa sand; abscissa: grams
of adsorbent)
233
-------
Models are available to predict erosion of non-cohesive sediments
(sands) that need little experimental calibration. Models to
predict erosion of cohesive sediment (silt and clay deposits,
marshes, etc.) require considerable experimental studies for
calibration. In the Hudson it is the cohesive organic sediments
that are highest in PCB. A practical problem, such as suspen-
sion of PCB-laden sediment by barge traffic, may also have to be
considered. Figure 4 shows that the PCB concentration in the
river water is high at low flow and at very high flows but low
at intermediate flows.
PCB also volatilizes quite rapidly from the water to the
air and from the sediment to the air. Figures 5 and 6 give
plots of experimental data from GE (18), while Figure 7 gives
approximate estimates of transfer rates and mass storage
figures. Table 3 compares some of the literature values for
volatilization. Not all references given in the table are
listed in the attached bibliography.
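A rough, hypothetical illustration of how a volatilization flux
translates into a half-life for the PCB stored in the top 10
centimeters of sediment is sketched below; the flux, sediment
concentration, and bulk density are assumed values, not measurements
from the studies cited.

    # Crude linear-loss estimate of a sediment PCB half-life due to
    # volatilization.  All inputs are assumed.

    flux = 50.0                  # volatilization flux to air, ug/m2/hr (assumed)
    sediment_conc = 10.0         # PCB in sediment, ug/g (assumed)
    dry_mass_top10cm = 1.0e5     # dry sediment in top 10 cm, g/m2 (bulk density ~1 g/cm3)

    inventory = sediment_conc * dry_mass_top10cm     # ug of PCB per m2 of bed
    loss_per_year = flux * 24 * 365                  # ug/m2/yr at a constant flux

    half_life_years = 0.5 * inventory / loss_per_year
    print(round(half_life_years, 2))                 # roughly 1.1 yr for these numbers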
234
-------
TABLE 3. COMPARISON OF K VALUES FOR VOLATILIZATION OF PCB AT 25°C

For PCB Saturated Water
(Aroclor / Mol. Wt. / Solubility, µg/l / Vapor Pressure, mm Hg /
 KCs, mg/m2/hr / KCs pure PCB, µg/m2/hr)
1221 / 192 / (1000)a, (1000-2000)b / ~(10^-2)e / (.055-.090)a / (3.3x10^3)a
1242 / 261 / 240, (80)c, (340)d / 4.1x10^-4 / 13.7 / --
1016 / -- / (175+)a, (420)d / (9x10^-4)c, (3x10^-4)a, (2x10^-4)d / (1.8-5.0)c, (.009-.016)a / --
1254 / 321 / 12, (56)c / 7.7x10^-5 / .8 / (8.35x10^3)c

Reference: Mackay and Leinonen was used unless otherwise noted.
(a) GE Corp. Res., Niskayuna, NY - Brooks et al. (18)
(b) GE Corp. Res., Niskayuna, NY - McFarland et al. (18)
(c) Haque et al. (17)
(d) Paris et al. (16)
(e) Hutzinger et al.

For Sediment or Soil to Air
(Aroclor / Soil PCB, µg/g / KC, µg/m2/hr / Air Conc., ng/m3 /
 Half life in top 10 cm, yrs / Transfer area or soils / K, 1/hr (K/K2) / Reference)
1254 / .01 / 1.25 / 1-10 / 6 / La Jolla, Calif. / (.002-.004)a / McClure
1254 / 10 / -- / -- / .1 / Ottawa sand / .057, (.002-.006)a / Haque et al. (17)
80% (1016) / 64 / 126 / 1200 / <.1 / Sand & wood chips / (.2-.9)d, (.0096-.027)b (.22)d / GE, McFarland (18)
80% (1016) / 20 / -- / 100 / -- / Sand & wood chips / (.3-1.2)d (.25)d / DEC-Air Resources
1248 / .04 / -- / 10 / -- / Lake Michigan area / -- / Versar Inc.
1242, 1254 / .05 / -- / -- / <.05 / Sand / .067 / GE, Brooks (18)
1242, 1254 / .05 / -- / -- / -- / Silt or top soil / -- / GE, Brooks (18)
-------
FIGURE 4. HUDSON RIVER AT SCHUYLERVILLE, PCB CONCENTRATION VS. RIVER
DISCHARGE
(March 30 - September 27, 1977; PCB concentration plotted against
instantaneous discharge (l/sec); a regression of PCB on discharge is
noted for the lower flows; source: Turk, USGS data)
236
-------
FIGURE 5. PCB LOSS FOR 1242 SATURATED WATER
(loss rate vs. temperature, 10-50 °C; air velocity 0.34 m/s; ref. 18)
237
-------
FIGURE 6. SEDIMENT TO AIR PCB LOSS
(notes: sediment PCB = 64 µg/g, 80% Aroclor 1016; air velocity = 0.34
m/s; loss plotted against temperature, 10-40 °C; ref. 18)
238
-------
FIGURE 7. DISTRIBUTION AND TRANSFER OF PCB IN THE HUDSON RIVER BASIN AS
OF JANUARY, 1979
(box diagram of mass storage (lb) and transfers (lb/yr) among dredge
spoil sites, landfills and dumps, remnant deposits, the Upper Hudson
River, the Lower Hudson River, and lower-river dredge spoil areas;
upward arrows are transfers to the air, while horizontal and downward
arrows are transfers to water systems; mass storage includes 160,000 lb
in upper-river dredge spoil sites, 700,000 lb in landfills and dumps,
80,000 lb in lower-river dredge spoil areas, and 175,000 lb in the
Lower Hudson River)
-------
REFERENCES
(1) Tofflemire, T.J., "Preliminary Report on Sediment
Characteristics and Water Column Interactions Relative
to Dredging the Upper Hudson River for PCB Removal." N.Y.
State Department of Environmental Conservation, Albany,
N.Y. (April 1976).
(2) Tofflemire, T.J., "Summary of Data Collected Relative to
Hudson River Dredging." N.Y. State Department of
Environmental Conservation Albany, N.Y. (Dec. 1976).
(3) Miner, W.J. and Tofflemire, T.J., "Port Edward Maintenance
Dredging Project Monitoring Report and Supplement."
N.Y. State Department of Environmental Conservation,
Albany, N.Y. (Jan. and July 1978).
(4) Tofflemire, T.J. and Zimmie, T.F., "Hudson River Sediment
Distributions and Water Interactions Relative to PCB:
Preliminary Indications." Kepone Seminar II, US EPA
Region III, Philadelphia, Pa. (Sept. 1977).
(5) Tofflemire, T.J., "Results of the Lock 4 Dredge Monitoring
Program." In preparation, N.Y. State Department of
Environmental Conservation, Albany, N.Y. (Feb. 1979).
(6) Tofflemire, T.J. et al. , "PCB in the Hudson River:
Sediment Distributions, Water Interactions and Dredging."
Technical Paper No. 55, N.Y. State Department of Environ-
mental Conservation, Albany, N.Y. (Jan. 1979).
(7) Tofflemire, T.J. and Quinn, S.O., "PCB in the Upper Hudson
River: Mapping and Sediment Relationships." Technical
Paper No. 56, N.Y. State Department of Environmental
Conservation, Albany, N.Y. (Jan. 1979).
(8) Tofflemire, T.J. et ail. , "PCB in the Upper Hudson River:
Mapping, Sediment Sampling and Data Analysis." Technical
Paper No. 57, N.Y. State Department of Environmental
Conservation, Albany, N.Y. (Jan. 1979).
240
-------
(9) Hetling, L.J. 'et al., "Summary of Hudson River PCS Study
Results." N.Y. State Department of Environmental Conserva-
tion, Albany, N.Y. (July 1978) .
(10) Shen, T.s. and Tofflemire, T.J., "Air Pollution Aspects of
Land Disposal of Toxic Wastes." Technical Paper No. 59,
N.Y. State Department of Environmental Conservation, Albany,
N.Y. (Mar. 1979).
(11) Farmer, w.J- et al., "Problems Associated with the Land
Disposal of- an Organic Industrial Hazardous Waste Contain-
ing HCB." Department of Soil Science, University of
California/ Riverside, Ca. (July 1976).
(12) Weston Environmental Consultants, "Migration of PCBs from
Landfills and Dredge Spoil Sites in the Hudson River
Valley, New York - Final Report." West Chester, Pa. (Nov.
1978).
(13) Tulp, M.T. and Hutzinger, O., "Some Thoughts on Aqueous
Solubilities and Partition Coefficients of PCB, and the
Mathematical Correlation Between Bioaccumulation and
Physico-Chemical Properties." Chemosphere No. 10, pg. 849
(Oct. 1978).
(14) Dexter, R.N., "An Application of Equilibrium Adsorption
Theory to the Chemical Dynamics of Organic Compounds in
Marine Ecosystems." Ph.D. Dissertation, University of
Washington, Seattle, Wa. (1976).
(15) Weise, C.S. and Griffin, D.A., "The Solubility of
Aroclor 1254 in Seawater." Bulletin of Environmental
Contamination and Toxicology, pg. 403 (1978).
(16) Paris, D.F. et al., "Role of Physico-Chemical Properties
of Aroclors 1016 and 1242 in Determining their Fate and
Transport in Aquatic Environments." Chemosphere 4, 319
(1978).
(17) Haque, R. et al., "Aqueous Solubility, Adsorption and
Vapor Behavior of Polychlorinated Biphenyl Aroclor 1254."
Environ. Science & Technology 8, 139 (Feb. 1974).
(18) McFarland, C.M. and Brooks, R., "Unpublished Data on PCB
Volatilization." General Electric Corp., Research &
Development, Niskayuna, N.Y. (1978).
(19) Lawler, Matusky and Skelly Engineers, "Upper Hudson River
PCB No Action Alternative Study: Final Report." Pearl
River, N.Y. (Mar. 1978).
241
-------
(20) Hydroscience, Inc., "Estimate of PCB Reduction by
Remedial Action on the Hudson River Ecosystem." Westwood,
N.J. (April 1968).
242
-------
WORKSHOP ON VERIFICATION OF WATER QUALITY MODELS,
DISCUSSION PAPER, WASTELOAD GENERATION
By
William T. Sayres
U.S. Environmental Protection Agency
Washington, D.C. 20460
The generally poor ability of deterministic models to
adequately predict the rates of pollutant accumulation, trans-
port and transformation prior to arrival in a receiving water
body is, in my estimation, our most serious modeling problem.
There are those who would argue that receiving water quality
models are in even worse shape and should hence receive priority
attention. I firmly believe, however, that until we are
reasonably able to predict loadings, or are able to be more
specific about how unable we are to make these predictions, our
efforts must be concentrated in this area.
The most commonly cited reason for the sad state of these
models is a lack of suitable data for their calibration and
verification. It is certainly true that the data base is
woefully small, and that those in possession of such data are
frequently reluctant to share their knowledge. This has forced
the users of these models into having to collect large amounts
of data on each individual project in order to assure themselves
that model results are "accurate" (whatever that means).
It is my opinion, however, that attention should be con-
centrated on the model formulations themselves. Loading models
are, for the most part, predicated on fitting an equation or
equations to a limited amount of data and then using other data to
calibrate such a model to fit local conditions. Seldom have
enough data been used, or have these data been subjected to a
rigorous enough analysis to assure that the loading function
postulated is indeed the "correct" one. Little work has been
done in examining the formulations themselves, in order to
determine whether or not they actually describe physical
processes taking place.
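To make this concrete, the following is a minimal sketch of one
commonly used form of accumulation/washoff loading function:
exponential buildup of pollutant mass during dry weather and
first-order washoff during runoff. The coefficient names and values
are illustrative assumptions, not parameters of any model discussed
in this paper.

    # Illustrative accumulation/washoff loading function (assumed
    # coefficients, not parameters of any particular model).
    import math

    def buildup(days_dry, b_max=50.0, k_b=0.4):
        """Exponential buildup of pollutant mass on the catchment
        surface (kg/ha) as a function of antecedent dry days."""
        return b_max * (1.0 - math.exp(-k_b * days_dry))

    def washoff(mass, runoff_rate, k_w=0.18, n=1.2, dt_hr=1.0):
        """First-order washoff over one time step: the load removed is
        proportional to the mass present and to a power of the runoff
        rate (mm/hr).  Returns (load_removed, mass_remaining)."""
        rate = k_w * runoff_rate ** n          # fraction removed per hour
        removed = mass * (1.0 - math.exp(-rate * dt_hr))
        return removed, mass - removed

    # Example: seven dry days of accumulation, then a 3-hour storm.
    mass = buildup(7.0)
    for q in [2.0, 8.0, 3.0]:                  # hourly runoff rates, mm/hr
        load, mass = washoff(mass, q)
        print(f"runoff {q:4.1f} mm/hr: load {load:6.2f} kg/ha, "
              f"remaining {mass:6.2f} kg/ha")

The question raised above is precisely whether coefficients such as
these, usually fitted to sparse data, in fact describe the physical
processes of accumulation and washoff.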
Such examination would seem to be the purview of the
research community, yet those folks seem extremely reluctant
to tackle the problem. I suspect that one reason for this is
243
-------
that such research is not very satisfying, and is quite
difficult. Another reason, certainly, is that agencies who
sponsor such research have, by their failure to budget for
much work of this sort, in effect said that the problem has a
relatively low priority.
The problem, however, will not go away. I am sure that
those of you who are consultants (and your clients) have, at
times, been uneasy about the "accuracy" of modeling results. We
are, moreover, increasingly moving toward making environmental
decisions based on receiving water quality rather than on
effluent quality. A good example is the rigorous evaluation of
receiving water impacts now required for AWT projects. We are
also moving, I believe, toward the establishment of water
quality standards which reflect the stochastic nature of such
bodies. Given this emphasis, it is not enough to concentrate
on better water quality models. However elegant they may be,
they are driven by imperfect loading models which limit the
validity of our modeling results. It seems incumbent on all of
us, therefore, to try to straighten the mess out.
I believe that there are several things we can do. First,
and most important, we can all be more generous about sharing
catchment data that we have. One way of doing this is to put
data into a commonly-held data base, such as the urban data base
at the University of Florida, or STORET. Second, we can take
upon ourselves a more critical examination of the quality formu-
lations in our loading models. We have, in the past, based
model comparisons on overall evaluation of model performance, and
not on examinations of various model components, such as pol-
lutant accumulation/washoff functions. Third, we can publish
results of such examinations so that others may benefit from
our experience. I devote a good portion of my time to technology
transfer activities, and I am convinced that no one suffers as
a result of telling others exactly what he is doing. It is
manifestly clear that there is plenty of work for everyone!
244
-------
DISSOLVED OXYGEN/TEMPERATURE MODELING
By
C. J. Velz
Professor Emeritus
The University of Michigan
Longboat Key, Florida
Our assignment is to discuss DO/Temp. modeling in the
context of the six "Issues" raised. Although my emphasis shall
be on DO, much of what I have to say applies equally to tempera-
ture modeling. I suggest we consider each of these issues under
current status and shortcomings, and suggested recommendations.
Issue #1 - The Role of Models in Decision Making
At the outset, let us face it, modeling as it has more
recently developed is not generally accepted by administrators
as a usable tool. In fact, modeling in general today has not
only gained a bad reputation, but is regarded with considerable
mistrust. As EPA no doubt can verify, most current models are
gathering dust on the shelves of "computer software libraries".
We might ask what is wrong with current practice? There is too
much "sophistication"; too much complex math; too much talking
of modelers with each other, rather than with administrators;
failure to present modeling results for ease of user understand-
ing; and most important, the frequency of failure of models to
hook up in application to a real-river case.
The proliferation and promotion of the theoretical-general-
case model as a tool for practical application should be dis-
couraged; such models should be confined to research and
academic training. Concentration should be on development of
practical-applied models specific for each particular river,
oriented to its unique problems and basin conditions. Mathemat-
ical complexity should be avoided, with the accent on simplifi-
cation and ease of understanding of the administrator.
Issue #2 - Data Base
"Data base" - for what purpose? If by data base the
purpose is to accumulate, on a national scale, vast amounts of
"monitoring" data upon which it is expected that reliable water-
245
-------
quality models can be developed, we are deluding ourselves and
wasting taxpayers' money on a costly, inefficient, unreliable
endeavor. Unfortunately, the current trend is strongly toward
collecting more and more "monitoring-type data". Monitoring is
not designed to obtain adequate data under hydrologic or
biologic stability, conditions so essential to reliable measure-
ment. Rather, monitoring obtains data under all kinds of
changing conditions, and hence represents a heterogeneous mess,
much of which is useless. Furthermore, monitoring is seldom
correlated with waste loadings and hydrologic conditions. Hence,
monitoring data reflect effect without simultaneous measure of
cause, and therefore give little or no insight into cause-effect
relationships.
However, there are two types of long-term records available,
namely, hydrologic and meteorologic data, that are important but
are seldom adequately analyzed. These data identify the
season in which critical water quality occurs, and afford the
means of defining recurrence probability to which model predic-
tions must be related. One would expect that these findings
would be used in the design of the stream sampling program and
in the selection of model configuration, but they usually are
not.
Water-quality monitoring data should be restricted and
limited to their primary role as an administrative tool in
surveillance in routine regulatory practice, not as a tool in
planning, design, and decision making, and certainly not in
development of reliable predictive models. The concept of
accumulation of a "data base" for water-quality modeling should
be discouraged. Successful river-quality assessment and model-
ing require fresh, new, concurrent data. Such data can be ob-
tained only through carefully planned intensive, synoptic-type
river-quality investigation. This necessity has long been
recognized, and has been more recently demonstrated, with
dramatic results, in the USGS reports of the applications in the
Willamette and Chattahoochee Rivers. It is urged that, instead
of worrying over "data base" details, a fundamental change in
approach should center on establishing, on a continuing basis, a
National Intensive River-Quality Assessment Program for all
major river basins.
Issue #3 - Model Configuration
There is a misconception that because seasonal changes in
water quality occur, it is essential to formulate a dynamic
model to simulate instantaneous responses throughout the year.
This necessitates a Eulerian configuration of exceedingly
complex mathematics and requires extensive data, not realisti-
cally attainable from stream and laboratory measurements. In
the end, "assumptions" are necessary both in parameters and in
solutions of the complex math. The net result is much confusion,
loss of model reliability and false economy in time and cost.
246
-------
The Eulerian frame of the general-case-dynamic model
configuration should be supplanted by the Lagrangian configura-
tion of the applied-steady-state model. This eliminates the
complex mathematics without loss of scientific validity. The
river, based on channel geometry, is segmented into short
reaches, and only simple computations are necessary on
a segment-by-segment basis.
It is well established that the critical water quality
period almost invariably occurs annually during the drought
season of low streamflow and high temperature. A steady-state
period of 2-3 weeks usually occurs each year uninterrupted by
freshets, in which hydrologic and biologic equilibrium are
approached, ideal for intensive stream sampling and concomitant
measurement. It is noteworthy that, unlike the Eulerian, in the
Lagrangian configuration parameters are all relatively easily
and accurately derived from stream measurements, without any
"assumptions". And since intensive parameter measurements need
be made only for the steady-state condition, the economy in
time and cost is obvious.
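A minimal sketch of such a segment-by-segment computation, assuming
classical Streeter-Phelps BOD/dissolved oxygen kinetics and following
a parcel of water downstream reach by reach, is given below. The
reach geometry, reaction rates, and loads are assumed values chosen
only for illustration and are not drawn from any river discussed here.

    # Lagrangian, steady-state DO sketch with Streeter-Phelps kinetics.
    # All geometry, rates, and loads below are illustrative assumptions.
    import math

    DO_SAT = 8.5          # saturation DO, mg/L (assumed)
    KD, KA = 0.35, 0.60   # deoxygenation and reaeration rates, 1/day (assumed)

    # Each reach: (length km, velocity km/day, BOD added at reach head mg/L)
    reaches = [(8.0, 16.0, 12.0), (10.0, 12.0, 0.0),
               (12.0, 10.0, 5.0), (15.0, 10.0, 0.0)]

    L, D = 2.0, 1.0       # ultimate BOD and DO deficit entering reach 1, mg/L
    for i, (length, velocity, added_bod) in enumerate(reaches, start=1):
        L += added_bod              # point load mixed in at the reach head
        t = length / velocity       # travel time through the reach, days
        # Streeter-Phelps solutions over one reach travel time:
        D = (KD * L / (KA - KD)) * (math.exp(-KD * t) - math.exp(-KA * t)) \
            + D * math.exp(-KA * t)
        L = L * math.exp(-KD * t)
        print(f"end of reach {i}: BOD {L:5.2f} mg/L, DO {DO_SAT - D:5.2f} mg/L")

Each reach requires only the two exponential relations above; no
matrix solution or time-stepping is involved, which is the economy
claimed for the Lagrangian, steady-state configuration.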
Issue #4 - Calibration
In calibration one observes three phases of dangerous
degeneration taking place:
Phase I—failure to obtain adequate current data.
Modeling is not a function independent of intensive analysis of
the river system, yet fewer and fewer modelers get into the
field to become intimately familiar with the river and to par-
ticipate in the gathering of current concomitant data necessary
for analyzing cause-effect relationships.
Phase II—overuse of existing data. Most modelers are
content to use existing old, monitoring-type data. Some recog-
nize the deficiencies and try to augment by averaging composites
of seasonal data over the years of record; this is like "averag-
ing early peas with late pumpkins".
Phase III—overuse of mathematical and computer techniques.
Other modelers, particularly mathematicians and computer
specialists with little real river experience, resort to
"optimization" to quantify model parameters (usually obtained
from scanning the literature, not from field investigation).
Then by multiple regression techniques a "best fit" is obtained,
which is taken as the "optimum" value. Seldom is the "optimum"
value checked for validity within the river system being
modeled. This type of calibration must be regarded as little
more than sophisticated "curve-fitting", and the reaction rates
thus generated are highly suspect. In contrast, there is a
tendency of some modelers to concentrate exclusively on refine-
ments of theory in calibration, most of which prove to be
insignificant relative to the recurrence probability frame of
247
-------
the immediate hydrologic and meteorologic variations of the
specific river basin.
The use of monitoring data for calibration for river-
quality modeling should be discouraged. Calibration should be
based on current concomitant data obtained from intensive field
sampling and investigation (under hydrologic and biologic
equilibrium) on the specific river being modeled. Obviously,
each reaction rate (BOD, nitrification, reaeration, etc.)
should be independently calibrated on its own relevant data.
Issue #5 - Verification and Sensitivity Analysis
The current trend to attempt application of models without
verification is shocking! It is no wonder that such models are
increasingly distrusted. In other cases verification and
calibration are based on the same set of data, usually poor data
at that. In some instances where reasonably good calibration
and independent verification have been made, seldom are limits
specified within which application of the model is feasible.
There is also a misconception that once a good applied model has
been calibrated and verified, it can be used indefinitely, even
though radical changes in river conditions have taken place
over the years.
Models which have never been adequately calibrated and
verified should not be promoted for application in evaluation of
alternatives in water-quality management. It is a disservice to
modeling and to society to do so. Good verification implies
comparison between two independent sets of data, one for cali-
bration of reaction rates, and a new second set for verification.
If there is reasonably good agreement between the computed and
the observed river quality, the verification is accepted, and
application of the model for predictive evaluation of still
other conditions (within limits prescribed) is then, and only
then, warranted. Models cannot be used indefinitely, regardless
of how carefully calibrated and verified initially. Intensive
reassessments should be made at intervals of 5-10 years or so
for re-calibration and re-validation. Such reassessments also
afford the only reliable measure of achievement attained by
remedial programs instituted. Hence, the most rigorous test of
reliability is demonstration of agreement between actual achieve-
ment and what the projection predicted would occur. Furthermore,
the consistency with which this can be demonstrated, when the
method of analysis (or modeling) is applied to other river
systems, is the best way to build confidence in use and accep-
tance of the method.
As a supplement to verification, sensitivity analyses should
be made for each element calibrated. Sensitivity analyses are
good indicators of where the calibration should be reviewed for
refinement.
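One minimal form such a check can take is sketched below: each
calibrated reaction rate is perturbed by a fixed percentage and the
change in a predicted quantity of interest, here the critical deficit
of the classical Streeter-Phelps sag equation, is recorded. The rates
and loads are assumed values, not data from any study cited here.

    # Illustrative sensitivity check: perturb each calibrated rate by
    # +/- 20 percent and note the change in the critical DO deficit.
    import math

    def critical_deficit(kd, ka, L0=15.0, D0=1.0):
        """Maximum DO deficit (mg/L) from the Streeter-Phelps sag equation."""
        tc = (1.0 / (ka - kd)) * math.log(
            (ka / kd) * (1.0 - D0 * (ka - kd) / (kd * L0)))
        return (kd * L0 / ka) * math.exp(-kd * tc)

    base = {"kd": 0.35, "ka": 0.60}      # calibrated rates, 1/day (assumed)
    d_base = critical_deficit(**base)
    for name in base:
        for factor in (0.8, 1.2):
            trial = dict(base, **{name: base[name] * factor})
            change = critical_deficit(**trial) - d_base
            print(f"{name} x {factor:.1f}: change in critical deficit "
                  f"{change:+5.2f} mg/L")

The rates to which the prediction is most sensitive are the ones whose
calibration most deserves review.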
248
-------
Issue #6 - Use of Models as Projection Tools
Again, the single greatest stumbling block to the use of
models as projection tools is failure to develop practical-
applied models, carefully calibrated and verified from good data,
for each specific river for which projections are desired.
Since all projections of river quality are inescapably tied to
probability of occurrence of hydrologic and meteorologic varia-
bility, much too little attention is given to establishment of a
frame of reference in which practical decisions must be made
concerning recurrence expected, such as once in 5, 10 or 20
years.
Quite apart from development of the model, practical
projection implies that an intensive investigation in depth be
made of the river basin as a whole, its problems, plans, and
proposals. Useful projections cannot be made in a vacuum. In
addition, there is increasing need for sharp scrutiny and
evaluation of the consequences of imposition wholesale of
arbitrary water quality standards.
Unfortunately, there are groups in and out of Government
laboring under the delusion that there must be some easy, quick
or magic way for computers and modeling to project wholesale
answers to all water-quality problems. There is no easy short
cut, and modeling should not be oversold. There is no substi-
tute for careful thought and intensive investigation in the
search for cause-effect relationships river-by-river, tempered
by intimate experience and professional judgment—the art and
science of river analysis.
249
-------
SALINITY/TDS:
APPRAISAL OF PRESENT PRACTICES AND CAPABILITIES IN MODELING
By
George H. Ward, Jr.
Senior Project Manager
Espey, Huston and Associates, Inc.
3010 South Lamar Blvd.
Austin, Texas 78704
1. Salinity and total dissolved solids are virtually conserva-
tive parameters within the interior of natural waterbodies.
1.1 Waste discharges involving high TDS, e.g. brine
disposal from oil wells, represent point sources and
are generally modeled as such, not as an internal
generation rate. (An exception is an oil field dis-
tributed over a section of a large waterbody.)
1.2 In those models involving a depth mean, i.e. two-
dimensional horizontal models or section-mean models,
the net evaporation (evaporation minus precipitation)
at the surface represents an effective source (sink)
when positive (negative). This source function is
mathematically first-order, a rate constant multiply-
ing the salinity.
1.2.1 Inclusion of the evaporative source of
salinity is rarely important, except in
shallow systems located where the evapora-
tive deficit is significantly different from
zero, i.e., in arid or in wet-humid climates.
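The first-order form noted in item 1.2 follows from a simple water
and salt balance. Assuming a depth-averaged segment of depth h,
evaporation at rate E (which removes water but not salt),
precipitation at rate P (which adds water of zero salinity), and
ignoring all other fluxes,

    \frac{d(hS)}{dt} = 0, \qquad \frac{dh}{dt} = P - E
    \quad\Longrightarrow\quad \frac{dS}{dt} = \frac{E - P}{h}\, S,

i.e., a source term whose rate constant (E - P)/h multiplies the
salinity: positive (a source) when evaporation exceeds precipitation
and negative (a sink) otherwise.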
2. Accordingly, the distribution of salinity or TDS within a
waterbody is determined primarily by the boundary fluxes
and the internal transport processes within the system.
2.1 Transport processes are traditionally subdivided into
advective versus diffusive (turbulent) transports.
However, various spatial and temporal averages applied
to the basic equations of mass transport produce
cross-product terms which themselves must be para-
meterized. This parameterization most frequently has
250
-------
the form of a diffusive flux, whereupon the transport
is termed "dispersion". As transport processes are
discussed elsewhere, their further consideration in
this context is not warranted.
2.1.1 Because of their virtually conservative
character and their natural occurrence,
especially in estuaries, salinity and TDS
enjoy an importance in the calibration
of various water quality transport models.
Thus the need for a capability to model
salinity extends beyond the parameter per se.
2.2 One important boundary source for salinity/TDS is the
flux from the bed and sides of a waterbody, derived
ultimately from geological sources. This is common in
watersheds that drain areas of extensive salt domes or
areas in which the principal aquifers are in contact
with salt domes.
2.2.1 An interesting example is Lake Texoma on the
Red River on the border between Oklahoma and
Texas. TDS values in this lake are on the
order of 1000 ppm and range up to values
double this below the halocline.
2.3 Longitudinal fluxes of salinity/TDS are important to
waterbodies in hydrodynamic contact with a more
saline system. The most common and important example
of this is, of course, the estuary.
2.4 Because salinity distribution within a waterbody is
determined by transport, with practically no internal
kinetics, the time scales upon which salinity reacts
to an alteration in the boundary fluxes is generally
much longer than time scales for, say, dissolved
oxygen or nitrogen species.
2.4.1 In many cases, the dissolved oxygen profile
within a stream, say, is determined
principally by the rates of oxygen supplied
by reaeration versus the rate of oxygen
utilization deriving from the introduction
of organics into the watercourse. The effect
of transport is to modify the net rate
slightly, producing a displacement and,
perhaps, spreading of the dissolved oxygen
profile. In contrast, salinity distribution
is determined solely by the various trans-
port processes, to which the isohaline
pattern must readjust.
251
-------
3. A practically unique feature of salinity/TDS, vis-a-vis
other common water quality parameters, is its interaction
with density, frequently dominating the effects of
temperature and dictating the density structure of the
waterbody.
3.1 Vertical gradients in salinity substantively suppress
vertical turbulent fluxes of mass and momentum.
(Further discussion of this factor should be under-
taken as a part of transport mechanisms.)
3.2 Longitudinal gradients in salinity produce a horizon-
tal pressure-gradient acceleration that can drive
density currents, which can be a prominent element of
the overall circulation of the waterbody.
3.3 The single most important and common example of
salinity-induced density currents is in estuarine
circulation.
3.3.1 Estuarine density currents manifest them-
selves as systematic vertical and/or
horizontal shears in the current. These
currents are persistent in time, independent
of tidal phase, and are quintessential in
the long-term mean circulation and consti-
tuent distribution in the estuary.
3.3.2 For a given longitudinal salinity gradient,
the intensity and spatial structure of the
density current are a strong function of
bathymetry. In estuaries with a prominent
longitudinal geometry and little cross-
channel relief, the density current produces
two-layered mean flow, up the estuary toward
the head in the lower layer compensated by
flow down the estuary toward the mouth in
the upper layer. In estuaries that are
broad with a central talweg, the mean flow
is directed up the talweg channel through-
out the depth, compensated by a seaward
return flow in the shallow lateral areas
to either side.
3.4 Rigorous modeling of salinity and transport when
density effects are important requires a coupling of
the momentum and salinity calculations.
3.4.1 The common practice for estuaries, however,
is to avoid this difficulty by parameteriz-
ing the density current transport by in-
flated dispersion coefficients in the
252
-------
salinity model. This converts the momentum-
salinity model from a feedback (i.e., inter-
active) problem to a feedforward problem.
3.4.2 The dispersion coefficients are determined by
forcing a match of model output to observations, i.e., model
calibration (a sketch follows item 3 below). The resultant
coefficients are employed in other salinity
computations (e.g. different inflows) as
well as in the transport of other consti-
tuents such as DO. This has become accepted
through practice, not theory.
3.4.3 Physical models include density transport
implicitly, but only qualitatively, not quan-
titatively, in that (so long as water is
the model fluid) the dynamic scaling (Froude)
is not sufficient to ensure similitude
(which would also require, at least, Reynolds
similitude). There is, therefore, a cali-
bration procedure for these models as well,
resulting in the establishment of a set of
bent strips, cobbles, overhead fans and
even forced-air diffusers.
3.4.4 Whatever justification there may be for
extending model application beyond its
range of calibration, it is manifest from
3.3.2 above that this calibration is in-
validated when bathymetry is altered.
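A minimal sketch of the calibration practice described in items 3.4.1
and 3.4.2 follows. It assumes the familiar one-dimensional, tidally
averaged, steady-state salinity balance for an estuary of constant
cross-section, in which all density-current transport is lumped into
a single longitudinal dispersion coefficient that is adjusted until
computed salinities match observations. The inflow, geometry, and
"observed" profile below are illustrative assumptions.

    # One-dimensional, tidally averaged, steady-state salinity model with
    # a calibrated ("inflated") longitudinal dispersion coefficient E.
    # All numbers below are illustrative assumptions.
    import math

    Q = 150.0        # freshwater inflow, m3/s (assumed)
    A = 4000.0       # cross-sectional area, m2 (assumed)
    S_MOUTH = 30.0   # salinity at the mouth, ppt (assumed boundary value)

    def salinity(x_km, E):
        """Salinity (ppt) x_km upstream of the mouth for a trial dispersion
        coefficient E (m2/s): S = S_mouth * exp(-Q x / (E A))."""
        return S_MOUTH * math.exp(-Q * (x_km * 1000.0) / (E * A))

    # "Observed" tidally averaged salinities (km upstream, ppt) - assumed.
    obs = [(5.0, 25.0), (15.0, 17.0), (25.0, 11.0), (40.0, 5.0)]

    # Crude calibration: scan trial coefficients, keep the smallest RMSE.
    best_E, best_rmse = None, float("inf")
    for E in range(50, 1001, 10):
        rmse = math.sqrt(sum((salinity(x, E) - s) ** 2 for x, s in obs) / len(obs))
        if rmse < best_rmse:
            best_E, best_rmse = E, rmse
    print(f"calibrated E = {best_E} m2/s, RMSE = {best_rmse:.2f} ppt")

The single calibrated coefficient then carries, in lumped form,
whatever density-current transport the data contain, which is why
(item 3.4.4) the calibration is not transferable once the bathymetry
is altered.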
4. Verification of models of salinity or TDS is complicated
by the slow response times of these parameters.
4.1 The problem of response time is particularly acute in
estuaries, which tend to be large waterbodies with
substantial salinity gradients impressed over their
entirety.
4.1.1 Tests with the San Francisco Bay physical
model and simulations with the EPA Dynamic
Estuary Model indicate the response time-
constant of salinity to a 30,000 cfs freshet
was 8 tidal cycles at Carquinez Strait and
15 tidal cycles at Point San Pablo. Termina-
tion of the 30,000 cfs pulse and immediate
(step function) reduction back to the 2,000-
3,000 cfs low flow indicated a decay time-
constant of 8 tidal cycles at Point San
Pablo and 10 tidal cycles at Carquinez
Strait.
253
-------
4.1.2 Within the Galveston Bay System, salinity
variations in East Bay following a reduction
in inflow from 20,000 cfs to 1,000 cfs in-
dicated a time-constant of approximately
two months (or 60 tidal cycles). East Bay
is generally the last segment of the system
to equilibrate, and it is speculated that
a more realistic time-constant for the
entire system is on the order of 30 tidal
cycles.
4.2 The long response time for the salinity/TDS distribu-
tion within the system affects the verification pro-
cedures as well as the accuracy of verification for
both steady-state and time-dependent models.
4.2.1 For steady-state model verification, the
basic assumption is that the isohaline dis-
tribution has equilibrated to the freshwater
inflows. This requires two conditions:
stabilization of the freshwater inflows at a
specific value (of course, within a certain
tolerance, since small fluctuations are
integrated out); elapsing of enough time-
constants for salinity response to ensure
the isohaline structure has equilibrated.
4.2.2 As a rule of thumb, two to three time-
constants are necessary to ensure equilibra-
tion. This requires, in turn, that the
freshwater inflows be stabilized for this
period. From the results indicated in 4.1
above, a period of one to several months of
steady inflows is necessary. It is
important to recognize that time equilibra-
tion of the inflows is necessary but not
sufficient for time equilibration of the
isohalines.
4.2.3 A time-varying model is the solution to an
initial-boundary-value problem, in which the
initial salinity distribution is an
important input. This is generally deter-
mined from observations. Transient
variations in freshwater inflow result in
transient departures of salinity from this
initial value, but significantly filtered
by the long response time for salinity.
This means that short periods of time
integration produce minor departures
from the initial conditions, and hence
apparently good verification against
measured data. A long period of integration,
254
-------
rather, is required to determine whether
the basic model is seriously in error in
its ability to predict salinity.
4.2.4 An illustration of the time increase in
error of a dynamic salinity model is the
operation of the Galveston Bay physical
model for salinity with 1965 hydrological
inputs. For the first five months of model
simulation, the model results were still
dominated by the initial conditions and
compared very well with measurements. For
the sixth month and thereafter, the model
predictions departed markedly from
measurements, being systematically low on
the order of 5-10 ppt.
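The rule of thumb in item 4.2.2 can be read as a statement about
first-order (exponential) adjustment. Assuming the isohaline field
relaxes toward its new equilibrium with time-constant \tau, the
fraction of the adjustment completed after time t is

    1 - e^{-t/\tau} \approx 0.63 \;(t = \tau), \quad 0.86 \;(t = 2\tau),
    \quad 0.95 \;(t = 3\tau),

so that two to three time-constants correspond to roughly 86 to 95
percent equilibration, which, given the response time-constants of 8
to 60 tidal cycles cited in item 4.1, is consistent with the period of
one to several months of steady inflow called for in item 4.2.2.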
5. Perhaps because of the difficulty of securing satisfactory
data sets for either time-dependent or steady-state
verification, verification of salinity/TDS models is often
pursued through a statistical approach utilizing a steady-
state model.
5.1 This is based upon the postulate that a steady-state
(i.e., time-equilibrium) solution with long-term mean
inflows and boundary conditions will somehow approxi-
mate the long-term mean salinity.
5.1.1 There is absolutely no theoretical justifi-
cation for this postulate. Indeed, the
intrinsic nonlinearity of the transport
equation for salinity would render such a
postulate unlikely.
5.2 Some sort of empirical verification of this postulate
is required. Clearly verification of an equilibrium
model in this respect constitutes a de facto valida-
tion of the postulate. It is, however, a site-
specific validation only.
5.2.1 To this writer's knowledge, a sufficiently
detailed and statistically rigorous
validation of the "climatological" capabili-
ty of steady-state models has never been
performed. Again, though such a valida-
tion would be useful, it must nonetheless
be site-specific.
5.2.2 One of the difficulties in this regard is
the necessity for a long term data base on
salinity, sufficiently refined in time to
permit calculation of means and their
255
-------
statistics, from which confidence bounds
can be established.
6. Recommendations (of this writer) for improvement of the
state-of-the-art in modeling of salinity are improved
hydrodynamic formulation of the transport terms and an
introduction of methodological rigor in model verification.
6.1 Better hydrodynamic foundation for the density effects
of salinity upon transport is needed. For estuaries,
in particular, the longitudinal transport ("disper-
sion") is a significant weakness of present models.
This will require theoretical studies coupled with
intensive, carefully specified data collection.
6.2 More attention is needed to quantify the quality of
model prediction with respect to confidence limits of
the data. Proper data stratification taking into
account salinity response time is necessary.
6.3 For the broader questions of model verification in
general, this writer suggests that our discipline has
much to learn from the experiences of the meteor-
ologists.
6.3.1 There are several analogues between our
problems and those of the meteorologists.
We both deal with large-scale fluid systems
that exhibit variability on a range of space
and time scales. Numerical models of hydro-
dynamics and transport are an essential
element in the analytical procedure for both
fields. The adequacy of predictions is
tested by comparison with quantitative
fluid parameters, sparsely distributed in
time and space.
6.3.2 Practicing meteorologists employ several
models, and combine the results of these
models with judgment (founded upon their
experience with the processes involved as
well as with the behavior of the models)
to arrive at a prediction. Much of this
judgment, it should be noted, is site-
specific. This role of judgment is
frequently denigrated in our discipline
and sought to be replaced by, rather than
supported by, the operation of models,
whereas it should be cultivated.
6.3.3 Meteorologists are faced with the problem
of verification on a daily basis. Some of
the statistics they have devised to evaluate
256
-------
performance are, of course, of interest to
the water-quality disciplines. More to the
point, though, is the verification of model
behavior not so readily parameterized,
such as movement or intensity of features
of the hydrodynamic field.
6.3.4 Verification, in the meteorological field,
is a convergent judgment, based upon the
cumulation of many individual predictions.
In the water quality discipline, in con-
trast, verification frequently consists of
a single data set compared with the corres-
ponding model prediction, a practice which
ignores the stochastic element in the
measured data as well as the departure of
the physical configuration from that
assumed in the model formulation. Perhaps
the most important aspect of the introduc-
tion of statistical measures in water
quality model verification, whatever these
measures might be, is the implication of
testing the model against an array of data
sets.
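As a minimal illustration of this last point, the sketch below
computes simple error statistics, bias and root-mean-square error,
for a model compared against an array of independent surveys rather
than a single data set; the surveys and values are assumed for
illustration only.

    # Verification statistics over an array of data sets (assumed values).
    import math

    # Each survey: list of (observed, predicted) salinities in ppt.
    surveys = {
        "survey A": [(24.8, 25.9), (17.3, 16.1), (10.2, 11.0)],
        "survey B": [(26.0, 24.2), (18.9, 17.5), (12.4, 12.9)],
        "survey C": [(22.1, 24.0), (15.0, 16.8), (8.7, 10.1)],
    }

    for name, pairs in surveys.items():
        errors = [pred - obs for obs, pred in pairs]
        bias = sum(errors) / len(errors)
        rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
        print(f"{name}: bias {bias:+5.2f} ppt, RMSE {rmse:5.2f} ppt")

A verification judgment then rests on the behavior of the whole
collection, for example whether the bias is consistently of one sign,
rather than on the goodness of any single fit.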
257
-------