-------
                           EPA 600/9-76-016
                           JULY 1976
         PROCEEDINGS OF THE
             CONFERENCE
                 ON

       ENVIRONMENTAL

MODELING  AND SIMULATION
          APRIL 19-22, 1976
          CINCINNATI, OHIO
              Sponsored by:
         Office of Research and Development
                 and
         Office of Planning and Management
        U.S. ENVIRONMENTAL PROTECTION AGENCY

-------
                             EPA REVIEW NOTICE

     These Proceedings have been reviewed by the U.S. Environmental Protection Agency
and approved for publication. Approval does not signify that the contents necessarily
reflect the views and policies of the Environmental Protection Agency, nor does mention
of trade names or commercial products constitute endorsement or recommendation for
use.

     This document is available to the public for sale  through the National Technical
Information Service, 5285 Port Royal Road, Springfield, Virginia 22161.
                         Monitoring Technology Division
                Office of Monitoring and Technical Support (RD-680)
                        Office of Research and Development
                      U.S. Environmental Protection Agency
                               401 M Street, S.W.
                             Washington, D.C. 20460

-------
                                Proceedings of
                            the EPA Conference on

               ENVIRONMENTAL MODELING AND SIMULATION
                                    Editor

                                 Wayne R. Ott


                               Editorial Board

                          Oscar Albrecht, economics
                          Robert Clark, water supply
                          Robert Kinnison, statistics
                            Albert Klee, solid waste
                            Elijah Poole, simulation
                         Harry Torno, water pollution
                          Bruce Turner, air pollution
                           Ronald Venezia, planning


                             Production Manager

                               Vernon J. Laurie
                                 Acknowledgement

     Appreciation is expressed to Booz-Allen Applied Research, a division of Booz, Allen &
Hamilton Inc., for assistance in preparing the Proceedings.

-------
                                  FOREWORD
     Although many mathematical models have existed for some time in air pollution,
water pollution, ecology, and other environmental areas, there previously have been few
attempts to bring these models together to create one scientific field unto itself. This
conference,  the EPA Conference on  Environmental Modeling and Simulation, is a first
attempt to bring together the many  diverse environmental modeling efforts in order to
form a unified discipline—environmental modeling.

     The Conference Proceedings  are believed to  be the most complete single resource
document currently available covering the state-of-the-art of environmental modeling in a
variety of environmental fields.

-------
                                       TABLE OF  CONTENTS
                                                                                                      Page
                                                                                                     Number
CONFERENCE BACKGROUND
    Elijah L. Poole

CONFERENCE GOALS
    Albert C. Trakowski

KEYNOTE ADDRESS:
    TOWARD A COMMON LANGUAGE, Andrew W. Breidenbach

LUNCHEON ADDRESS:
    A RESOURCE AND ENVIRONMENTAL MANAGEMENT EXCHANGE, Ira L. Whitman

1*  MANAGEMENT I
    Session Leader: Ed Schuck

    FUTURE ENVIRONMENTAL QUALITY MANAGEMENT USING MODELS, D.W. Duttweiler and
    W.M. Sanders, III                                                                                        10

    A SYSTEMATIC APPROACH TO REGIONAL WATER QUALITY PLANNING, G.P. Grimsrud and
    E.J. Finnemore                                                                                           14

    A REVIEW OF EPA's GREAT LAKES MODELING PROGRAM, W.L. Richardson and N.A. Thomas                          20

2   AIR QUALITY MANAGEMENT
    Session Leader: Francis S. Binkowski

    THE DEVELOPMENT AND IMPLEMENTATION OF USER ORIENTED AIR QUALITY MODELS,
    J.J. Walton                                                                                              26

    A GENERIC SURVEY OF AIR QUALITY SIMULATION MODELS, G.D. Sauter                                           30

    AIR QUALITY MODELING-A USER'S VIEWPOINT, R.H. Thuillier                                                  35

3   MANAGEMENT II
    Session Leader: David Duttweiler

    SPACE: A CLOSER LOOK AT THE IMPACT OF ENVIRONMENTAL POLICY, E. Heilberg                                  40

    RIBAM, A GENERALIZED MODEL FOR RIVER BASIN WATER QUALITY MANAGEMENT PLANNING,
    R.N. Marshall, S.G. Chamberlain and C.V. Beckers, Jr.                                                    45

    COMPARISON OF EUTROPHICATION MODELS, J.S. Tapp                                                           50

    MANAGEMENT IN COMPETITIVE ECOLOGICAL SYSTEMS, D.R. Falkenburg                                            57

    PLANNING IMPLICATIONS OF DISSOLVED OXYGEN DEPLETION IN THE WILLAMETTE RIVER,
    OREGON, D.A. Rickert, W.G. Hines and S.W. McKenzie                                                       62

    URBANIZATION AND FLOODING-AN EXAMPLE, R.P. Shubinski and W.N. Fitch                                      69

    PLANNING MODELS FOR NON-POINT RUNOFF ASSESSMENT, H.A. True                                               74

*Sessions are numbered in the same order as in the conference, and the papers are grouped according to the sessions.

-------
                                                                                                Page
                                                                                              Number

4   AIR QUALITY SIMULATION & APPLICATIONS
    Session Leaders: Robert Jurgens and Roger S. Thompson

    DESIGN AND APPLICATION OF THE TEXAS EPISODIC MODEL, J.H. Christiansen                                77

    AIR MODELING IN OHIO EPA, J.C. Burr and A.B. Clymer                                                   82

    DESIGNING A REGIONAL AIR POLLUTION MONITORING NETWORK: AN APPRAISAL OF
    A REGRESSION EXPERIMENTAL DESIGN APPROACH, P.R. Gribik, K.O. Kortanek and J.R. Sweigart                  86

    SAMPLED CHRONOLOGICAL INPUT MODEL (SCIM) APPLIED TO AIR QUALITY PLANNING IN
    TWO LARGE METROPOLITAN AREAS, R.C.  Koch, D.J. Pelton and P.H. Hwang                                   92

    MODELING OF PARTICULATE AND SULFUR DIOXIDE IN SUPPORT OF TEN-YEAR PLANNING,
    R.A. Porter and J.H. Christiansen                                                                      97

5   WATER QUALITY
    Session Leaders: Roger Shull and William P. Sommers

    A MATHEMATICAL MODEL OF DISSOLVED OXYGEN IN THE LOWER CUYAHOGA RIVER,
    A.E. Ramm                                                                                     101

    A WATER RESIDUALS INVENTORY FOR NATIONAL POLICY ANALYSIS, E.H. Pechan and R.A. Luken              106

    A MULTI-PARAMETER ESTUARY MODEL, P.A. Johanson, M.W. Lorenzen and W.W. Waddel                        111

    MATHEMATICAL MODEL OF A GREAT LAKES ESTUARY, C.G. Delos                                       115

    COST-EFFECTIVE ANALYSIS OF WASTE LOAD ALLOCATIONS, J. Kingscott                                  120

    WASTE ALLOCATIONS IN THE BUFFALO (NEW YORK) RIVER BASIN, D.H. Sargent                            126

    STREAM MODELING AND WASTE LOAD ALLOCATION, J.Y. Hung, A. Hossain and T.P. Chang                     129

    PATUXENT RIVER BASIN MODEL RATES STUDY, T.H. Pheiffer, L.J. Clark and N.L. Lovelace                      133

6   WATER RUNOFF I (TRANSPORT)
    Session Leaders: Richard Field and Lee Mulkey

    EFFICIENT STORAGE OF URBAN STORM WATER RUNOFF, J.R. Doyle, J.P. Heaney, W.C. Huber and
    S.M. Hasan                                                                                      139

    JOINT USE OF SWMM AND STORM MODELS FOR PLANNING URBAN SEWER SYSTEMS, H.L. Kaufman and
    Fu-Hsiung Lai                                                                                   144

    SIMULATION OF AGRICULTURAL RUNOFF, A. Donigian, Jr. and N.H. Crawford                                151

    MODELING THE EFFECT OF PESTICIDE LOADING ON RIVERINE ECOSYSTEMS, J.W. Falco and L.A. Mulkey         156

    RADIONUCLIDE TRANSPORT IN THE GREAT LAKES, R.E. Sullivan and W.H. Ellett                             161

    FEDBAK 03-A COMPUTER PROGRAM FOR THE MODELING OF FIRST ORDER CONSECUTIVE REACTIONS WITH
    FEEDBACK UNDER A STEADY STATE MULTIDIMENSIONAL NATURAL AQUATIC SYSTEM, G.A. Nossa            166

    MODELING THE HYDRODYNAMIC EFFECTS OF LARGE MAN-MADE MODIFICATIONS TO LAKES, J.F. Paul         171

    AN EMPIRICAL MODEL FOR NUTRIENT ACCUMULATION RATES IN LAKE ONTARIO, P.A.A. Clark,
    D.J. Casey, A. Solpietro and J.P. Sandwick                                                              176

-------
                                                                                               Number

7   RADIATION AND HEALTH
    Session Leader: Mary J. White

    MODELS FOR EXTRAPOLATION OF HEALTH RISK, W.M. Upholt                                          182

    USE OF MATHEMATICAL MODELS IN NONIONIZING RADIATION RESEARCH, C.M. Weil                       186

    AIR POLLUTANT HEALTH EFFECTS ESTIMATION MODEL, W.C. Nelson, J.H. Knelson and V. Hasselblad             191

    MORTALITY MODELS: A POLICY TOOL, W.B. Riggan, J.B. Van Bruggen, L. Truppi and M. Hertz                    196

    A RADIOACTIVE WASTE MANAGEMENT ASSESSMENT MODEL, S.E. Logan and S.M. Goldberg                    199

    FOOD-AN INTERACTIVE CODE TO CALCULATE INTERNAL RADIATION DOSES FROM
    CONTAMINATED FOOD PRODUCTS, D.A. Baker, G.R. Hoenes and J.K. Soldat                                  204

    AIR QUALITY AND INTRA-URBAN MORTALITY, J.J. Gregor                                             209

    EVALUATION OF HEALTH DATA  IN TERMS OF ENVIRONMENTAL FACTORS, M. Katzper and N.P. Ross            214

8   ENERGY
    Session Leader: Oscar Albrecht

    INTEGRATED ASSESSMENT: CONCEPT AND LIMITATIONS, L. Smith, R.H. Ball, P.M. Cukor,
    S. Plotkin and F. Princiotta                                                                         218

    AN INTEGRATED TECHNOLOGY ASSESSMENT OF ELECTRIC UTILITY ENERGY SYSTEMS,
    P. Cukor, S. Cohen, G. Kendall, T. Johnston, S. Gage and L. Smith                                             223

    ENVIRONMENTAL IMPACT MODELING FOR PROJECT INDEPENDENCE, R.A. Livingston, G.R. Kendall,
    R.W. Menchen and H.P. Santiago                                                                     230

    AN ENVIRONMENTAL RESIDUAL ALLOCATION MODEL, F.Lambie and M. Allen                              236

    HITTMAN REGIONAL ENVIRONMENTAL COEFFICIENTS FOR THE PROJECT INDEPENDENCE
    EVALUATION SYSTEMS (PIES) MODEL, W.R. Menchen, M.S. Mendis, Jr. and H.L. Schultz, III                       241

    INTEGRATED ECONOMIC-HYDROSALINITY-AIR QUALITY ANALYSIS FOR OIL SHALE AND COAL
    DEVELOPMENT IN COLORADO, C.W. Howe, J.F. Kreider, B. Udis, R.C. Hess, D.V. Orr and J.T. Young                 247

9   SIMULATION I
    Session Leader: Theodore R. Harris

    CSMP CONCEPT AND APPLICATIONS TO ENVIRONMENTAL MODELING AND SIMULATION, C.L. Wang and
    G. Chang                                                                                      252

    GASP IV CONCEPTS APPLICABLE TO ENVIRONMENTAL MODELING AND SIMULATION, A.A.B. Pritsker            259

    RADIONUCLIDE REMOVAL BY THE pH ADJUSTMENT OF PHOSPHATE MILL EFFLUENT WATER,
    D.L. Norwood and J.A.  Broadway                                                                    264

    AN APPLICATION OF  BIASED ESTIMATION THEORY TO  ITERATIVE MAXIMUM LIKELIHOOD
    SEARCH STRATEGIES, D.J. Svendsgaard                                                               269

10  ECONOMICS I
    Session Leader: Oscar Albrecht

    ECONOMIC AND DEMOGRAPHIC MODELING RELATED TO ENVIRONMENTAL MANAGEMENT,
    A.V. Kneese                                                                                     274

    ECONOMIC IMPLICATIONS OF POLLUTION-INTENSIVE EXPORTS BY DEVELOPING
    COUNTRIES,  P.A. Petri                                                                            282


-------
                                                                                               Page
                                                                                              Number

    A TAXONOMY OF ENVIRONMENTAL MODELS, R.U. Ayres                                               288

    A STOCHASTIC MODEL FOR SUBREGIONAL POPULATION PROJECTION, P.M. Meier                          293

11   AIR QUALITY (LONG TERM MODELS AND SENSITIVITY TESTING)
    Session Leaders: John H. Christiansen and Richard H. Thuillier

    USE OF THE CLIMATOLOGICAL DISPERSION MODEL FOR AIR QUALITY MAINTENANCE
    PLANNING IN THE STATE OF RHODE ISLAND, P.H. Guldberg, T.E. Wright and A.R. McAllister                    298

    IMPROVEMENTS IN AIR QUALITY DISPLAY MODEL, C. Prasad                                            303

    AIR POLLUTION MODELING IN THE DETROIT METROPOLITAN AREA, A. Greenberg, B. Cho and
    J.A. Anderson                                                                                  308

    SENSITIVITY TESTS WITH A PARAMETERIZED MIXED-LAYER MODEL SUITABLE FOR
    AIR QUALITY SIMULATIONS, D. Keyser and R.A. Anthes                                                 313

    PREDICTION OF CONCENTRATION PATTERNS IN THE ATMOSPHERIC SURFACE LAYER,
    S. Hameed and S.A. Lebedeff                                                                       318

    A TIME-DEPENDENT, FINITE GAUSSIAN LINE SOURCE MODEL, J.C. Burr and R.G. Duffy                       322

12   WATER QUALITY II
    Session Leader: Don Lewis

    WATER QUALITY MODELING IN TEXAS, J.J. Beal, A.P. Covar and D.W. White                                 326

    A DYNAMIC WATER QUALITY SIMULATION MODEL FOR THE THAMES RIVER, D.G. Weatherbe                 330

    DISPERSION MODEL FOR AN INSTANTANEOUS SOURCE OF POLLUTION IN NATURAL STREAMS AND
    ITS APPLICABILITY TO THE BIG BLUE RIVER (NEBRASKA), M.K. Bansal                                   335

    SELECTING THE PROPER REAERATION COEFFICIENT FOR USE IN WATER QUALITY
    MODELS, A.P. Covar                                                                             340

    RECEIV-II, A GENERALIZED DYNAMIC PLANNING MODEL FOR WATER QUALITY MANAGEMENT,
    C.V. Beckers, P.E. Parker, R.N. Marshall and S.G. Chamberlain                                                344

    MODIFICATIONS TO QUAL-II TO EVALUATE WASTEWATER STORAGE, J.S. Tapp                             350

13   WATER RUNOFF II
    Session Leader: Paul Wisner

    WATER POLLUTION MODELING IN THE DETROIT METROPOLITAN AREA, M. Selak, C. Harlow,
    J. Anderson and R. Skrentner                                                                       353

    GENERALIZED METHOD FOR EVALUATING URBAN STORM WATER QUALITY MANAGEMENT
    STORAGE/TREATMENT ALTERNATIVES, J.P. Heaney, W.C. Huber, S.M. Hasan and P. Murphy                     358

    MODELING HYDROLOGIC - LAND USE INTERACTIONS IN FLORIDA, P.B. Bedient, W.C. Huber and
    P. Heaney                                                                                     362

    MODELING URBAN RUNOFF FROM A PLANNED COMMUNITY, E.V. Diniz, D.E. Holloway and
    W.G. Characklis                                                                                 367

14   SOLID WASTE
    Session Leader: Albert Klee

    MODELING IN SOLID WASTE MANAGEMENT: A STATE-OF-THE-ART REVIEW, D.H. Marks                      372

    WRAP: A MODEL FOR REGIONAL SOLID WASTE MANAGEMENT PLANNING, E.B. Berman                      377

    ST. LOUIS: AN APPLICATION OF WRAP, D.M. Krabbe                                                   381

-------
                                                                                                Page
                                                                                               Number

    DEVELOPMENT OF A MODEL FOR AN ORGANIC SOLID WASTE STABILIZATION PROCESS ON A
    PILOT PLANT, D.S. Whang and G.F. Meenaghan                                                          386

15   PLANNING I
    Session Leader: Ronald Venezia

    EVALUATION AND SELECTION OF WATER QUALITY MODELS: A PLANNER'S GUIDE,
    E.J. Finnemore and G.P. Grimsrud                                                                    391

    A LANDSCAPE PLANNING MODEL AS AN AID TO DECISION-MAKING FOR COMMUNITY GROWTH
    AND MANAGEMENT, J. Gy. Fabos and S.A. Joyner, Jr.                                                    396

    A RESOURCE ALLOCATION MODEL FOR THE EVALUATION OF ALTERNATIVES IN SECTION 208
    PLANNING CONSIDERING ENVIRONMENTAL, SOCIAL AND ECONOMIC EFFECTS, D. Hill                      401

    REGIONAL  RESIDUALS-ENVIRONMENTAL QUALITY MANAGEMENT MODELS: APPLICATIONS
    TO EPA's REGIONAL MANAGEMENT PROGRAMS, W.O. Spofford, Jr. and C.N. Ehler                             407

    A COMPUTER MODELING STUDY TO ASSESS THE EFFECTS OF A PROPOSED MARINA ON A
    COASTAL LAGOON, Kuang-Mei Lo, T.G. King, A.S. Cooper                                                 414

16  SIMULATION II
    Session Leader: Gary Liberson

    DIGITAL COMPUTER SIMULATION OF SECONDARY EFFLUENT DISPOSAL ON LAND,
    Kuang-Mei Lo and D.D. Adrian                                                                       419

    COMPUTER SIMULATION OF LONG-TERM SECONDARY IMPACTS OF WATER AND WASTEWATER
    PROJECTS, G.A. Outer, T.C. Ryan and J.F. Westermeier                                                    424

    A CRITICAL APPRAISAL OF MATHEMATICAL MODELS FOR LAND SUBSIDENCE SIMULATION,
    E.J. Finnemore and R.W. Atherton                                                                    429

    UNSTEADY-STATE, MULTI-DIMENSIONAL ANALYTICAL MODELING OF WATER QUALITY
    IN RIVERS, R.W. Cleary                                                                           434

    SIMULATION MODELING OF ENVIRONMENTAL INTERACTION EFFECTS, E.T. Smith                          439

17  ECONOMICS II
    Session Leader: Oscar Albrecht

    TOWARD A DYNAMIC ECONOMIC MODEL FOR REGULATING FLUOROCARBON EMISSIONS,
    R.C. D'Arge, J. Harrington and L. Eubanks                                                               446

    ENVIRONMENTAL, FISCAL, AND SOCIO-ECONOMIC IMPACT OF  LAND USE POLICIES: TOWARD AN
    INTERACTIVE ANALYSIS, J. Kuhner, M. Shapiro,  R.J. deLucia and W.C. Lienesch                                453

    A TOTAL CONCEPT SYSTEM FOR MUNICIPAL WASTE DISPOSAL, L.L. Nagel                                 458

    ECONOMIC FORECASTING FOR VIRGINIA'S WATER RESOURCE PROGRAMS, C.P. Becker,
    A.M. Griffin, Jr. and C.S. Lown                                                                       466

18   AIR QUALITY (NEW TECHNIQUES AND PHYSICAL MODELING)
    Session Leaders: Richard A. Porter and Terry Clark

    MODELING  RADIATIVE TRANSFER IN THE PLANETARY BOUNDARY LAYER: PRELIMINARY
    RESULTS, F.S. Binkowski                                                                          473

    ADAPTIVE FORECASTING OF BACKGROUND CONCENTRATIONS USING FEEDBACK CONTROL
    AND PATTERN RECOGNITION TECHNIQUES, R. Carbone and W.L. Gorr                                      478

    SOURCE-ORIENTED EMPIRICAL AIR QUALITY MODELS, K.L.  Calder and W.S. Meisel                            483

    EPA FLUID MODELING FACILITY, R.S. Thompson and W.H. Snyder                                         488

-------
                                                                                              Number

    PLUME BEHAVIOR IN THE LEE OF A MOUNTAIN RIDGE-A WIND TUNNEL STUDY, A.H. Huber,
    W.H. Snyder, R.S. Thompson and R.E. Lawson, Jr.                                                        493

    A NOTE ON THE SEA BREEZE REGIME, S.T. Rao and P.J. Samson                                          499

    A NUMERICAL MODEL FOR STABLY STRATIFIED FLOW AROUND COMPLEX TERRAIN,
    J.J. Riley, E.W. Geller and Hsien-Ta Liu                                                                503

19  WATER QUALITY III (TRANSPORT)
    Session Leaders: Donald Dean Adrian and Larry Roesner

    HYDRODYNAMIC AND WATER QUALITY MODELING IN THE OPEN OCEAN USING MULTIPLE
    GRID SIZES, P.J. Wickramaratne, J.W. Demenkow, S.G. Chamberlain and J.D. Callahan                             508

    BLACK RIVER THERMAL ANALYSIS, D.R. Schregardus and G.A. Amendola                                   512

    WATER MODELING IN OHIO EPA, R.G. Duffy and A.B. Clymer                                            517

    THREE-DIMENSIONAL MODEL DEVELOPMENT FOR THERMAL POLLUTION STUDIES,
    R.A. Bland, S. Sengupta and S. Lee                                                                   522

    SOME OBSERVATIONS ON MODELING DISPERSION OF POLLUTANTS IN NEAR-SHORE WATERS
    OF LAKE MICHIGAN, R.H. Snow                                                                   527

    A RIVER BASIN PLANNING METHODOLOGY FOR STREAMS WITH DISSOLVED OXYGEN AND
    EUTROPHICATION CONSTRAINTS, T.M. Walski and R.G. Curran                                           532

    MODELING POLLUTANT MIGRATION IN SUBSURFACE ENVIRONMENTS, A.A. Metry                         537

    A MODEL OF TIDAL FLUSHING FOR SMALL COASTAL BASINS, A.Y. Kuo                                   543

20  WATER RUNOFF III AND DATA AND VERIFICATION
    Session Leaders: Wayne Huber and Harry Torno

    EVALUATION OF MATHEMATICAL MODELS FOR THE SIMULATION OF TIME-VARYING RUNOFF AND
    WATER QUALITY IN STORM AND COMBINED SEWERAGE SYSTEMS, A. Brandstetter, R. Field and H.C. Torno        548

    USE OF MATHEMATICAL MODELS FOR HYDROLOGIC FORECASTING IN THE NATIONAL WEATHER
    SERVICE, J.C. Schaake, Jr.                                                                         553

    TESTING OF THE STORM WATER MANAGEMENT MODEL OF U.S. EPA, J. Marsalek                            558

    APPLICATION OF STORM & SWMM FOR ASSESSMENT OF URBAN DRAINAGE ALTERNATIVES
    IN CANADA, P.E. Wisner, A.F. Roake and A.F. Ashamalla                                                  563

    ON THE VERIFICATION OF A THREE-DIMENSIONAL PHYTOPLANKTON MODEL OF LAKE
    ONTARIO, R.V. Thomann and R.P. Winfield                                                            568

    MATHEMATICAL MODEL FOR THE EXCRETION OF 14CO2 DURING RADIO RESPIROMETRIC
    STUDIES, R. Dtis                                                                               573

    ESTIMATION OF THE OPTIMAL SAMPLING INTERVAL IN ASSESSING WATER QUALITY
    OF STREAMS, L.J. Hetling, G.A. Carlson and J.A. Bloomfield                                               579

    FIELD DATA FOR ENVIRONMENTAL MODELING-ADJUNCT OR  INTEGRAL?, P.E. Shelley                     583

    DATA DEFICIENCIES IN ACID MINE DRAINAGE MODELING, V.T. Ricca                                    586

21  SOLID WASTE
    Session Leader: Albert Klee

    A MODELING TECHNIQUE FOR OPEN DUMP BURNING, M. Rosenstein and V.J. Descamps                        591

-------
                                                                                              Number

   DEVELOPMENT AND USE OF A FIXED CHARGE PROGRAMMING MODEL FOR REGIONAL SOLID
   WASTE PLANNING, W. Walker, M. Aquilina and D. Schur                                                  595

   INCENTIVES FOR WASTE COLLECTION BASED ON WORK CONTENT MODELING, R.L. Shell and D.S. Shupe         600

   PLANNING FOR VARIATIONS IN SOLID WASTE GENERATION, D. Grossman                                 605

   MODEL OF THE MOVEMENT OF HAZARDOUS WASTE CHEMICALS FOR SANITARY LANDFILL SITES,
   E. Elzy and F.T. Lindstrom                                                                         609

22 ECOLOGY
   Session Leader: John A. Burckle

   PHYTOPLANKTON BIOMASS MODEL OF LAKE HURON AND SAGINAW BAY, D.M. Di Toro and
   W.F. Matystik, Jr.                                                                                614

    COMPARISON OF PROCESSES DETERMINING THE FATE OF MERCURY IN AQUATIC SYSTEMS,
   R.R. Lassiter, J.L. Malanchuk and G.L. Baughman                                                        619

   ASPECTS OF MATHEMATICAL MODELS AND MICROCOSM RESEARCH, J.W. Haefner and J.W. Gillett               624

   AN ECOLOGICAL MODEL FOR THE GREAT LAKES, D. Scavia, B.J. Eadie and A. Robertson                       629

23 WATER SUPPLY I
   Session Leader: Robert Clark

   SIMULATION AND MATHEMATICAL MODELING OF WATER SUPPLY SYSTEMS-STATE-OF-THE-ART
   R.A. Deininger                                                                                 634

   CAPACITY EXPANSION FOR MUNICIPAL WATER AND WASTEWATER SERVICES: INCORPORATION
   OF UNCERTAINTY, R.G. Curran, D.H. Marks and D.S. Grossman                                            639

   ADAPTIVE SHORT-TERM WATER DEMAND FORECASTING, D.H. Budenaers                                  646

   HYDROLOGIC IMPACT STUDIES OF ALTERNATIVES TO MEET WATER NEEDS IN SOUTH-
   CENTRAL PENNSYLVANIA, J.C. Schaake, Jr., D.H. Marks,  B.M. Harley and G.J. Vicens                            651

   THE OPERATIONAL WATER QUALITY MODEL, A.N. Shahane, P. Berger and R.L. Hamrick                        657

24 STATISTICS I
   Session Leader: Robert Kinnison

   HOW TO MAKE SIMULATIONS MORE EFFECTIVE, G.S. Fishman                                          664

   THE FACTUAL BACKGROUND OF ECOLOGICAL MODELS: TAPPING SOME UNUSED RESOURCES,
   E.C. Pielou                                                                                    668

   TIME SERIES ANALYSIS AND FORECASTING FOR AIR POLLUTION CONCENTRATIONS WITH
   SEASONAL VARIATIONS, Der-Ann Hsu and J.S. Hunter                                                   673

   METEOROLOGICAL ADJUSTMENT OF YEARLY MEAN VALUES FOR AIR POLLUTANT
   CONCENTRATION COMPARISONS, S.M. Sidik and H.E. Neustader                                          678

   THE APPLICATION OF CLUSTER ANALYSIS TO STREAM WATER QUALITY DATA, J.A. Bloomfield               683

   APPLICATION OF PATH ANALYSIS TO DELINEATE THE SECONDARY GROWTH EFFECTS OF
   MAJOR LAND USE PROJECTS, T. McCurdy, F. Benesh, P. Guldberg and R. D'Agostino                            691

   PREDICTION OF PHYTOPLANKTON PRODUCTIVITY IN  LAKES, V.W. Lambou, R.W. Thomas,
   L.R. Williams, S.C. Hern and J.D. Bliss                                                                 696

-------
                                                                                                Page
                                                                                              Number

25  AIR (POINT SOURCE)
    Session Leader: Bruce Turner

    APPLICATIONS OF THE SINGLE SOURCE (CRSTER) MODEL TO POWER PLANTS: A SUMMARY,
    J.A. Tikvart and C.E. Mears                                                                         701

    MODIFIED DISPERSION MODELING PROCEDURES FOR INDIANA POWER PLANTS, H.D. Williams,
    S.K. Mukherji, D.R. Maxwell, M.W. Bobb and C.R. Hansen                                                  706

    SEVERITY OF STATIONARY AIR POLLUTION SOURCES-A SIMULATION APPROACH, B.C. Eimutis,
    B.J. Holmes and L.B. Mote                                                                         710

26  AIR QUALITY (NEW TECHNIQUES)
    Session Leaders: Joseph A. Tikvart and George D. Sauter

    ATMOSPHERIC POLLUTANT DISPERSION USING SECOND-ORDER CLOSURE MODELING OF THE
    TURBULENCE, W.S. Lewellen and M. Teske                                                            714

    POINT SOURCE TRANSPORT MODEL WITH A SIMPLE DIFFUSION CALCULATION FOR ST. LOUIS,
    T.L. Clark and R.E. Eskridge                                                                        719

    THE CHANGE IN OZONE LEVELS CAUSED BY PRECURSOR POLLUTANTS: AN EMPIRICAL
    ANALYSIS, L. Breiman and W.S. Meisel                                                               725

    QUALITY ASSURANCE AND DATA VALIDATION FOR THE REGIONAL AIR MONITORING SYSTEM
    OF THE ST. LOUIS REGIONAL AIR POLLUTION STUDY, R. Jurgens and R.C. Rhodes                            730

    QUANTITATIVE RISK ASSESSMENT FOR COMMUNITY EXPOSURE TO VINYL CHLORIDE,
    A.M. Kuzmack and R.E. McGaughy                                                                   736

27  WATER (WASTEWATER AND CONVEYANCE)
    Session Leader: Robert Smith

    NEW MODELS FOR OPTIMAL SEWER SYSTEM DESIGN, B.C. Yen, H.G. Wenzel, Jr., W.H. Tang and
    L.W. Mays                                                                                     740

    THE USE OF LITHIUM CHLORIDE FOR AERATION TANK PERFORMANCE ANALYSIS, T.J. Olenik,
    R.C. Ahlert and R. Gesumaria                                                                       745

    SWAN-A SEWER ANALYSIS MODELING SYSTEM, B.C. Tonias and P.C. King                                  750

    ON-LINE MODELS FOR COMPUTERIZED CONTROL OF COMBINED SEWER SYSTEMS, J.W. Labadie,
    N.S. Grigg and P.O. Trotta                                                                          755

    MATHEMATICAL MODELS FOR CALCULATING PERFORMANCE AND COST OF WASTEWATER
    TREATMENT SYSTEMS, R.G. Eilers                                                                 760

28  WATER (ECOLOGY)
    Session Leader: Mostafa Shirazi

    THE ECOLOGICAL MODEL AS APPLIED TO LAKE WASHINGTON, C.W. Chen and D.J. Smith                     764

    A LIMNOLOGICAL MODEL FOR EUTROPHIC LAKES AND IMPOUNDMENTS, R.G. Baca and R.C. Arnett             768

    MATHEMATICAL MODELING OF PHYTOPLANKTON DYNAMICS IN SAGINAW BAY, LAKE HURON,
    V.J. Bierman, Jr. and D.M. Dolan                                                                    773

    THE APPLICATION OF A STEADY-STATE WATER QUALITY MODEL TO THE PERMIT WRITING
    PROCESS, LAKE MILNER, IDAHO, J.R. Yearsley                                                       780

    BUOYANT SURFACE JET, M.A. Shirazi and L.R. Davis                                                   784

-------
                                                                                                Page
                                                                                               Number

29  ECOLOGY AND NOISE
    Session Leader: John O. Burckle

    AGROECOSYSTEM-A  LABORATORY MODEL ECOSYSTEM TO SIMULATE AGRICULTURAL
    FIELD CONDITIONS FOR MONITORING PESTICIDES, M.L. Beall, Jr., R.G. Nash and P.C. Kearney                   790

    A CONCEPTUAL MODEL FOR ECOLOGICAL EVALUATION OF POWER PLANT COOLING SYSTEM
    OPERATION, M.W. Lorenzen and C.W. Chen                                                            794

    REVIEW OF THE STATUS OF MODELING ENVIRONMENTAL NOISE, W.J. Galloway                            799

    COMMUNITY NOISE MODELING, B. Manns                                                            803

30  WATER SUPPLY II
    Session Leader: Robert Clark

    THE COST OF WATER SUPPLY UTILITY MANAGEMENT, R.M. Clark and J.I. Gillian                             808

    MATHEMATICAL MODELING OF DUAL WATER SUPPLY SYSTEMS, A.K. Deb and K.J. Ives                       814

    DATA COLLECTION FOR WATER QUALITY MODELING IN THE OCCOQUAN WATERSHED OF
    VIRGINIA, C.W. Randall, T.J. Grizzard and R.C. Hoehn                                                    819

    WATER SUPPLY SYSTEMS PLANNING, MANAGEMENT AND COMMUNICATION THROUGH AN
    INTERACTIVE RIVER BASIN SIMULATION MODEL, R.A. Hahn                                           824

    FUTURE DIRECTIONS IN URBAN WATER MODELING, M.B. Sonne, L.A. Roesner and R.P. Shubinski                829

31  STATISTICS II
    Session Leader: Robert Kinnison

    TRANSPORT MODELING IN THE ENVIRONMENT USING THE DISCRETE-PARCEL-RANDOM-WALK
    APPROACH, S.W. Ahlstrom and H.P. Foote                                                                  833

    AN INTERACTIVE SYSTEM FOR TIME SERIES ANALYSIS AND DISPLAY OF WATER QUALITY
    DATA, S. Buda, R.L. Phillips, G.N. Cederquist and D.E. Geister                                               838

    CONFERENCE COMMITTEES                                                                      844

    AUTHOR INDEX                                                                                845

-------
                                              CONFERENCE BACKGROUND
                                                 Elijah L. Poole
                                        Office of Planning and Management
                                      U.S. Environmental Protection Agency


     It is my unique pleasure to welcome you to the "EPA Conference on Environmental Modeling and Simulation."
This is the first EPA conference where modeling and simulation will be discussed in so many diverse topic
areas:  air, water, pesticides, solid waste, noise, radiation, health, energy, ecology, planning, management,
economics, and others.

     The Environmental Protection Agency was established in 1970 to permit coordinated and effective governmental
action in order to protect the environment.  Two of its roles are to perform research and to transmit research
results to the users.  In general, mathematical modeling and simulation are widely used for performing research,
and this appears to be particularly true for environmental  research.

     We were overwhelmingly pleased to receive 220 abstracts as a result of our call for papers, and 164
papers are scheduled for presentation at this conference.   These papers indicate considerable and extensive
environmental modeling efforts in EPA, State and local governments, universities, and private industry.  Some
papers also were contributed by modelers from Canada.  We feel that we are indeed fortunate at this conference
to have so many distinguished speakers and attendees - many of whom are well  known experts in their fields.

     Some major objectives of the Conference are:  to perform a state-of-the-art review of predictive modeling
and simulation in the environmental decisionmaking process, to share modeling expertise within and across
various media, and to better understand computer requirements and other resources needed in the development
and use of models.

     Considerable efforts are expended in formulating and developing mathematical models, and a large share
of computer time is spent running modeling programs.  This conference should serve to enhance communication
among modelers and users of models, thereby decreasing development and operating costs and also eliminating
some redundancies.  With the expertise represented here, I feel that many of the Conference objectives will
be accomplished.

     The Conference has been in the planning stages for more than a year.  It was some 18 months ago that
Dr. Wayne Ott of the Office of Research and Development and I discussed the feasibility of structuring a
conference on environmental modeling and simulation.  Prior conversations with many of the Agency modelers
and users of models supported the usefulness and desirability of such a conference, and so now the concept
has come to fruition.

     Many people have been involved working to make the Conference a success.  The list is rather long so I
will not take the time to try and name everyone.  In the back of your program guide you will see an extensive
list of primary contributors beginning with Vern Laurie, the Conference Coordinator, and Delores Platt for
logistics.  And, of course, there are many others whose efforts are appreciated.

     In general, the program is organized according to subject matter.  The Program Committee felt that some
papers were of considerable interest and should be presented even though no suitable topic category existed for
them.  Because it was decided not to have a miscellaneous session, you will occasionally find a paper in a session
where it may not seem to belong.  This happens rarely, however.

     It is our hope that all attendees find the Conference stimulating and discover new techniques and
contacts for future reference in developing, operating, and using models.

-------
                                                CONFERENCE GOALS

                                              Albert C.  Trakowski
                                      Deputy Assistant Administrator for
                                       Monitoring and Technical  Support
                                     U.S.  Environmental  Protection Agency
                                             Conference Moderator


   I believe we have opened a new chapter  in the environmental  sciences by holding this conference.   For the
first time we are bringing together, under one roof, all  the many  varied and diverse environmental  topics where
mathematicians, statisticians, operations  research specialists,  systems analysts,  engineers, and others with
quantitative backgrounds share a common interest.

   It is unfortunate, I believe, that modelers working in air pollution seldom have a chance to become fully
acquainted with water pollution modeling approaches.  Similarly, computer models in solid wastes seldom are
brought to the attention of persons developing models for economic studies of air  pollution.  Models developed
for noise applications probably are not widely known among the air or water pollution modeling communities, and
models concerned with ecological processes do not often appear in  the literature alongside papers on environmen-
tal statistics.

   There is, however, a commonality of approach among modelers.  This commonality  should enable them to communi-
cate freely and effectively once they are  brought together in a  forum such as the  present conference.  Although
mathematics and statistics form the foundation for this commonality, the universality does not end there; there
is also a commonality of purpose among the modeling community.

   Most models are based on, or make use of, environmental data  in some form—particularly monitoring data.  Also
most modelers, by developing abstractions  of reality, attempt to simulate reality.  By examining the behavior of
their model under a variety of situations  and with different inputs, they often make predictions about reality.
The "rightness" or ''wrongness" of these predictions is, of course, model validation.  Models also share many
similarities in terms of applications and  uses.  Some of the more  common uses are  to (1) project future environ-
mental phenomena and variables; (2) develop more optimal  control systems and technology; (3) gain insights into
underlying physical, chemical, or biological processes; (4) evaluate the consequences of various environmental
management decisions and regulatory strategies; (5) assist in the  planning process; (6) aid in the interpretation
and analysis of monitoring data intended to depict the state of the environment; (7) estimate the risks of
adverse effects of environmental pollution on human health, plants, and animals; (8) assess the economic and
social costs arising from environmental pollution.

   Our goal in planning this conference was not, however, merely to bring modelers from different environmental
media together for them to share thoughts  about the nature of models.  Rather, our goal was to seek  a confronta-
tion between the model users and the model developers.  Thus, you  will  see that the Conference Program contains
some very interesting papers discussing the practical experience of environmental  managers with models.  In
several instances, we have included papers from state and local  environmental control officials who  will tell us
the problems and successes they have had with particular kinds of  environmental  modeling efforts.  We hope that
the question and answer periods in the technical sessions will  stir some lively debate on these matters.  Hope-
fully, by bringing model users and model developers together we  can accomplish a better understanding of the
potential uses and limitations of models at the same time that the developers receive some constructive feedback
from the users.

   This brings me to some comments about the role of the Environmental  Protection  Agency in the modeling areas.
EPA's Office of Research and Development serves as the major scientific and technical arm of the Agency, carry-
ing forward a broad and varied research program covering air, water, energy, ecology, and many other facets of
the environmental sciences.  This research program includes both in-house and contractual work directed toward
the development, testing, evaluation, and  refinement of models for all  environmental media.  Within  the Office
of Research and Development, the Office of Monitoring and Technical Support is the primary organizational entity
responsible for transferring the technology produced by the research community into the hands of the user com-
munity.  We view this conference, which is cooperatively supported by EPA's Office of Planning and Management,
as one of the more important means by which the results of scientific research (in this case, environmental
modeling approaches) can be effectively transferred to the user community.  Thus,  we hope the conference will
help facilitate a productive dialogue between the developers of models and the environmental managers faced with
the need to make decisions using the results of these models.

-------
                                             TOWARD A COMMON LANGUAGE

                                                  KEYNOTE ADDRESS


                                             Dr. Andrew W. Breidenbach
                                         Assistant Administrator, Office of
                                           Water and Hazardous Material
                                       U.S. Environmental Protection Agency


     As a scientist and a decisionmaker within the Environmental Protection Agency, I am struck by the fact that
this conference is long overdue.   It  is the first conference on modeling and simulation to bring all the various
media of the environmental community  together.  In looking over the schedule for the next two and a half days, I
note that at least 15 distinct areas  are represented, including air, water, energy, solid waste, ecology, noise,
health, and radiation.  Five years ago, before EPA existed, this conference would not have been possible, but
today it seems quite natural to consider multi-media approaches.

     This conference is important  to  me because scientists and modelers are playing a very active role in decision-
making today.  As our decisions become more complicated, scientists are working more closely with the decision-
maker.  By visiting with you, I hope  to learn more about modeling.  At the same time, I hope I can convey to you
some of my thoughts, so we can establish a basis for communication between modelers and decisionmakers.


GETTING DEFINITIONS STRAIGHT

     Good communication begins with a common language.  I ask you to recall the situation not too many years ago
when Congressmen  and Administrators shuddered at such terms as "Biochemical Oxygen Demand," "Total Suspended
Particulates,"  "polychlorinated biphenyls," and "oxides of nitrogen."  These words probably sounded more like
Greek than English.  But  let us look  at how far we have come in just a few years.  Most Congressmen know that
"BOD" has something to do with sewage treatment plants.  Even such terms as "ozone layer" and "catalytic con-
verter" are household terms.  I think we have come a long way toward bringing the language of the scientist
within the province of the decisionmaker.

     But the language of models is another story.  Take the simple word "model."  How often has someone come into
my  office and mentioned that he has a model, and I wait expectantly for her to come into the room.  Or for a
large display case to be carried  in with a miniature replica of a waste treatment plant.  These are not unusual
reactions when  hearing the word model.  We have become accustomed from childhood to such concepts as "model  T,"
"toy models," or  "you should model yourself after that person."  All of these examples imply that a model is a
physical representation.

     The dictionary's definition  seems to focus on this physical aspect.  Webster defines a model as "a standard
for imitation or  comparison; a pattern.  A representation, generally in miniature, to show the construction or
serve as a copy of something."  Actually this is not a quote from Webster, but from the Random House Dictionary,
which I thought would be more appropriate for this conference.

     Nowhere does the dictionary,  or  for that matter, the decisionmaker's general experience, really depict the
idea of a mathematical model.  Yet from what I can gather, almost all of the models you will hear about this
week are mathematical models of one sort or another.

     Even if the decisionmaker understands the concept of mathematical model, he is likely to be confused and
overwhelmed by the variety and complexity of available models.  This confusion could lead to a serious breakdown
in  communication.  For instance,  the  decisionmaker may have in mind a simple, deterministic model, but the
scientist may actually be using a  statistical model, or a stochastic model, or a simulation, or an analog computer
model.

     It is important to get our definitions straight and agree on a common language.  Since the language of
modeling is unknown to the public  and to many managers, the subject seems to cause some fear, or at least a
feeling of distrust.  It is only  human nature to be a little afraid of something new or something you do not
understand.


A COMMON LANGUAGE

     A simplified and clarified language of modeling will go a long way in advancing the cause of modeling.  Mana-
gers and the public will better appreciate modeling, and as appreciation and understanding of models grow, so will
the use of models.

     A careful and consistent language of modeling can also help communication between modelers.  As you listen
over the next few days to the many talks on models, think of how many different ways you hear the word "model"
used.  In the titles of the papers to be presented, the words "modeling" or "model" occur 107 times.  One would
expect that for such a common word, each of us would be well acquainted with its meaning.  Yet, when you listen
to papers being presented, ask yourself whether you really know what the authors mean by the word "model."  Better
yet, consider whether you think your neighbors on either side of you have the same concept of "model" that you

-------
have, and that the authors have.   I suspect there may be a good deal  of fuzziness  in the definitions  commonly
used.

     Another major reason for my concern about definitions is that we are frequently being  sued  over  our deci-
sions, both by industry and by environmentalists.   Courts must listen to both  plaintiff and defendant discuss
modeling.  Since neither the plaintiffs, the defendants, nor the courts can  define their terms exactly,  I can
almost guarantee that opposing sides will  disagree on what they mean  by "model."

     Thus, development of a consistent and understandable modeling language  is becoming extremely important to
the manager, the scientist, and the public.

     We will not, however, be able to develop a common language overnight, and even if we attack the  problem over
the next two and a half days, we may not get very far.  But there are some reasonable first steps we  can take.
As the keynoter for this conference, let me propose some questions from my perspective as a decisionmaker to help
define what I mean by a common language.  As you hear about models in your area of concern, whether it be water
quality, ecology, or economics, keep in mind what decisionmakers like me need  to know.


CURIOSITY OR DECISION TOOL?

     One question might be, "Is this model a mathematical  curiosity or is it a tool for making decisions?"  I
know that we have advanced well beyond the mathematical  curiosity stage but  I  am not sure whether we  have arrived
at the decisionmaking end of the spectrum as yet.

     To help find out where we stand in the evolution of models, let  me describe  the results of  my own model
which I commissioned solely for this conference.   The model  is called A Statistical System  to Evaluate Symposium
Success, also known as ASSESS.  My staff assures  me that it is a state-of-the-art  model,  with nothing but the
best data inputs, and the most rigorous of validation techniques.

     We performed a keyword analysis on the titles of all  164 papers  to be presented here this week.   Then we
classified the papers into three categories:  Policy-Oriented, State-of-the-Art Review, and Technical.

     We called a paper Policy-Oriented if the title of the paper had  the slightest hint of  policy orientation,
such as by using the words "planning," or "management,"  or "policy."   Of course I  hoped that 95  percent  would
be Policy-Oriented papers.  In reality, we found  that only 17 percent of the papers fell  in this category.   So
I doubt that we have reached the stage of model  evolution where all models are being directed at policy  questions.

     The next category is State-of-the-Art Review.   These papers perform a valuable service to modelers  and
decisionmakers by comparing and analyzing models.   Twelve percent of  the papers fell  in this category.   I  hope
we will see more of this kind of paper in the future.

     The third category is Technical papers.  If the title of a paper did not  mention policy, and did not appear
to be a review, we called it a technical paper.   By now  you will  not  be surprised  to learn  that  the majority of
papers—71 percent—appear to be Technical papers.   These figures suggest we may  have too many models which  are
still in the mathematical curiosity phase.  I hope not.
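
     As an aside for the reader, the keyword screen described above can be written out in a few lines of code.
The sketch below is purely illustrative; the actual ASSESS procedure is not reproduced in these Proceedings, and
the keyword lists here are assumptions.  A title containing a policy word is tagged Policy-Oriented, a title that
reads like a review is tagged State-of-the-Art Review, and everything else is tagged Technical.

# Hypothetical sketch of a keyword screen in the spirit of ASSESS (Python).
# The keyword lists are assumptions, not those actually used by the EPA staff.
POLICY_KEYWORDS = ("PLANNING", "MANAGEMENT", "POLICY")
REVIEW_KEYWORDS = ("STATE-OF-THE-ART", "REVIEW", "SURVEY", "COMPARISON")

def classify(title):
    """Assign a paper title to one of the three ASSESS categories."""
    t = title.upper()
    if any(keyword in t for keyword in POLICY_KEYWORDS):
        return "Policy-Oriented"
    if any(keyword in t for keyword in REVIEW_KEYWORDS):
        return "State-of-the-Art Review"
    return "Technical"

def summarize(titles):
    """Return the percentage of titles falling in each category."""
    counts = {}
    for title in titles:
        category = classify(title)
        counts[category] = counts.get(category, 0) + 1
    return {c: round(100.0 * n / len(titles)) for c, n in counts.items()}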

     I do not wish to imply that all technical  papers are irrelevant  to the  decisionmaker.   I know that  many are
very relevant.  My point is that the modeler must become more sensitive to decisionmakers' needs, and must know
how they intend to use the model.

     So much for the ASSESS model.   I know the conference  will  be a success  in spite of it.


ASSUMPTIONS

     Another question I ask as a decisionmaker concerns  the model's assumptions.   Assumptions do for  a model  what
gasoline does for an automobile—they make it run,  and they determine how far  it  can go.  The person  who developed
the model probably knows precisely what assumptions are  important.  It is the  user—the decisionmaker—who will
feel the effects of these assumptions.

     The decisionmaker must be told if there are  assumptions which are simply  not  true and  which completely in-
validate  the model.  Unfortunately, modelers can become so involved  in making the model  work that they  forget
to distinguish between things that are true and things that are convenient simplifying assumptions.   The user
must know, for instance, if a model is only for lakes,  or if it assumes complete  mixing, or if  it is valid  only
for summer months.  Stating the assumptions is a  key part of the common language  of modeling.


SENSITIVITY

     I also like to ask about the sensitivity of the model to input variables.  The decisionmaker must be told
which variables are critical to the results, and  which variables do not impact the results.   One of the  most
useful outputs of models is this ability to distinguish between what is important, and what is unimportant.  So
sensitivity analysis should also be a part of the language of modeling.
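
     The kind of screening being asked for here can be illustrated with a simple one-at-a-time perturbation: vary
each input in turn, hold the others at their baseline values, and report how strongly the output responds.  The
sketch below is hypothetical and is not drawn from any model in these Proceedings; "model" and "baseline" are
placeholders for whatever simulation and nominal inputs are at hand.

# Hypothetical one-at-a-time sensitivity screen (Python).  Each input is
# increased by 10 percent while the others stay fixed, and the fractional
# change in the model output is reported.
def sensitivity(model, baseline, relative_step=0.10):
    """Fractional change in output for a 10 percent change in each input."""
    base_output = model(**baseline)
    results = {}
    for name, value in baseline.items():
        perturbed = dict(baseline)
        perturbed[name] = value * (1.0 + relative_step)
        results[name] = (model(**perturbed) - base_output) / base_output
    return results

# Toy example: a dilution relationship in which concentration = load / flow.
if __name__ == "__main__":
    toy_model = lambda load, flow: load / flow
    print(sensitivity(toy_model, {"load": 100.0, "flow": 20.0}))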

-------
VALIDATION

     One of my favorite questions is, "Has this model been validated?"  Sometimes it may be impossible to validate
a model completely.  For instance, a global economic growth model may be difficult to validate.  But to use a
model in making a decision, I must have some proof that the model works, or at least have something which convinces
me I can rely on it.

     The only sure way to validate a model is to test it using new data—that is, test it using data which were
not used to develop the model.  In the water program, we have often found models which people claimed were valid,
but were really just tested with the same data used to develop and calibrate the model.   This is like using a
pocket calculator to verify that x + y = y + x, for all the possible values of x and y.  When you test a model with
new data, you are also helping to determine how robust the model is—that is, how applicable the model is under
varying situations.
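
     The test being described, calibrating on one set of observations and then judging the model only on data it
has never seen, can be sketched as follows.  The one-parameter decay curve below is a hypothetical stand-in for a
real water quality model, offered only to make the calibrate-then-validate split concrete.

# Hypothetical illustration of validation with new data (Python): fit a
# first-order decay rate k to calibration data, then score the fitted model
# against observations that were NOT used in the fit.
import math

def predict(c0, k, t):
    """First-order decay: concentration at time t from an initial value c0."""
    return c0 * math.exp(-k * t)

def calibrate(data):
    """Fit the decay rate k to (time, concentration) pairs by a coarse grid search."""
    c0 = data[0][1]
    return min((k / 1000.0 for k in range(1, 1001)),
               key=lambda k: sum((c - predict(c0, k, t)) ** 2 for t, c in data))

def validate(k, c0, new_data):
    """Root-mean-square error of the calibrated model against independent data."""
    errors = [(c - predict(c0, k, t)) ** 2 for t, c in new_data]
    return (sum(errors) / len(errors)) ** 0.5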

     Validation is not something to be afraid of.  Validation techniques can improve your models in the long run,
even if they show that your present model is incorrect for the task at hand.  And I  guarantee that decisionmakers
will use your model more willingly if you have solid evidence to validate it.


DATA INPUTS

     One final question I often ask is, "What data did you use?  Did you collect the data yourself, or did someone
collect it for you?  Are the data valid today, or are they obsolete?  What variables were measured?"  I can assure
you these are not empty questions.  We have found treatment models which use temperatures as a key independent
variable, but which were based on input data collected at a constant temperature. We have also found a water
quality model for a particular river, where all the so-called ambient monitoring data were collected just down-
stream from industrial outfalls.  A good corollary to the "garbage in-garbage out" rule  is that a model is only
as good as its data.

     The quality of data used is probably one of the restraining forces for the modeling  community today.   The
high costs of collecting data are enough to make some managers shudder at the word "model."   Yet, it is the
modeler's responsibility to insist on good data, even if it means higher cost.   If good  data are not available,
the modeler should point out how this limits the validity of the results.


DECISIONMAKING AND RISK

     Why ask these questions?  Why worry about who uses the model?  About assumptions and sensitivity?   About
validation?  About data?  The reason is that as a decisionmaker, I cannot afford to  use  a model  which is  wrong.
For once a manager uses a model which turns out to be wrong, he will probably never  want  to  use a model again.

     Consider the county executive or State governor who makes a decision based on an urban  transportation model
for air pollution, or a stream loading model for water pollution.  What will be his  reaction after the  new parking
plan is implemented, and the air is still dirty?  Or after the new treatment plant is built, and pollution does
not  improve?  Not only will the modeler's reputation suffer, but the momentum of the entire  environmental  movement
could  suffer seriously.

     The public manager wants to minimize his risk of making wrong decisions.  And that,  in  a  sense, is one of the
primary justifications for developing models in the first place.  It is your job and responsibility as  modelers
to reduce the manager's risk.


THE BIG PROBLEMS

     In the water  program, decisions are made routinely using models, or so I am told.  Models are used to help
determine effluent guidelines, determine water quality standards, and evaluate the impact of effluent reductions
on water.  We could not set standards for  "best available technology," for  instance, without the use of models,
since  in many cases the technology does not exist  in widespread  use.  Water quality  criteria could not be
established for many materials without using mathematical models.  Mathematical models are well known  in toxico-
logy and help the  scientist determine the  relationships between  doses for animals, aquatic life, and human
beings.  Models help us determine what amount of pollutant in the water will yield a cummulative toxic dose
in aquatic organisms.  Effluent loading models can help determine the impact of our multi-bill ion dollar clean-
up programs.

     It was my intention when preparing this paper to give examples of modeling success  stories.  Yet with all of
these models in my program, we could not find one which was universally acclaimed for solving what we call a "Big
Problem."  A Big Problem is a problem that makes the press, that causes Congress to  come  screaming and yelling at
our doors.   To be sure, we have many examples where a model was used to help make a  local decision.  Yet many of
these models were unreliable or controversial and, in the final analysis, decisions  were  sometimes made on the
basis of common sense as much as they were on the model.

     I  interpret this to mean that modeling has been in an embryonic state.  I  hope  that you, the modelers, through
this conference and your interaction, will help modeling come of age for the Big Problem.

     I  believe that models have a lot to offer the manager.  That is why I would like to see both managers and
scientists start working with more urgency on the problem of developing a common language.  If a dialog is

                                                        5.

-------
initiated, the modeler will  begin to get a better idea of what the decisionmaker faces.   Models  to  solve the Big
Problems could begin to be developed.  And the decisionmaker will  have better Information to  confirm  or  deny his
common-sense beliefs.   As environmental problems become more complex,  the decisionmaker  depends  more  and more on
technical knowledge.  Models can help focus this knowledge,  and help us solve some of  these problems.


CONCLUSION

     The organizers of this conference have done an excellent job  of bringing together many diverse entities.
The program is so diverse, and the time is so short, that you will  only be able to hear  a small  portion  of the
papers to be presented, even if you spend all of your time at conference sessions, and not at the bar downstairs
or watching the World  Champion Cincinnati Reds.   All of these papers,  and the stimulating dialog with your counter-
parts should ensure that we meet the major objectives of this conference.   These objectives are, and  I quote:

     "To perform a state-of-the-art review of predictive modeling  and  simulation in the  environmental
     decisionmaking process; to share modeling expertise within and across various media; and to examine
     the adequacy of computer and other resources in the development and use  of models."

     As we embark on this conference, I give you the task of creating  a common language  that  both modelers and
decisionmakers can readily understand.   This language will also help convey our decisions to  the general  public.
I also give you the task of creating a  consistent methodology for  verifying models and describing their  capabilities,
so that managers will  have a standard with which they can measure  the  usefulness and accuracy of models.   Armed
with these, the modeler will be able to convince more and more managers to use models  in  environmental decisions.

-------
                                  A RESOURCE  AND  ENVIRONMENTAL  MANAGEMENT  EXCHANGE

                                                LUNCHEON ADDRESS
                                              Dr.  Ira  L.  Whitman
                                    Environmental  Engineering and  Management
                                                 Columbus,  Ohio

     The conference coordinators advise me that this meeting is dedicated to the users of modeling efforts, to
individuals in government and industry whose management and planning efforts will determine the future course of
resource and environmental management.

     What I have to say should be of interest to anyone concerned about the future of environmental  management —
and should particularly concern you modelers and analysts who care about the results of your efforts and the
impact that you have on the environment.  I will  plead my case  for an awakening, a humanization of technical
persons working in the environmental professions.  Also, I will offer a plan to create leadership in the environ-
mental field, leadership capable of dealing with real-world problems founded upon a solid scientific and techni-
cal base.

     I can personally recall that day, almost 15 years ago, when I first interviewed with my graduate advisor-to-
be, and learned that in exchange for tuition and stipend I would be transforming unit operations of sanitary en-
gineering into analog computer simulations.  However,  it wasn't until several  months later that I learned what
an analog computer was.  It took even longer to recognize the fantastic potential for discovery that existed
with the use of models, simulation and with computers.  My efforts were modest by comparison with yours, yet they
were sufficient to crank out that all-important Master's thesis, and send me off to the cruel  hard world where
there is no equation for reality, and no model that tells us when and how to make the right decisions.

     I decided to leave the models and computers to others, and attempted to build upon their expertise in envi-
ronmental modeling to bring about more rational environmental  policies and better managed environmental  programs.
In 1971, in Ohio, we introduced the concept of effluent charges to the Citizen's Task Force on Environmental Pro-
tection, a concept that was endorsed by a coalition of environmentally oriented citizens and businessmen over
the protests of the industrial community.  In 1972, we quietly  commissioned a study of an industrial cost sharing
approach based on computer modeling of major air pollution sources in one of our industrial cities.   Also in that
year, we hired some of the State's first environmental modelers, who have distinguished themselves by presenting
no less than three papers here at this conference.  Yet, in 1974, a major enforcement case we were pursuing was
hopelessly lost, due in large part to our failure to validate critical air quality monitoring data.

     The lesson learned from these experiences is this:  There  is an undeniable linkage between the  major ele-
ments of a national program of resource and environmental management.  I see four such elements, each being a
link in the process of achieving our national environmental goals.  These elements are:

     1.  Scientific monitoring and understanding of the components of the environment.

     2.  Integration and analysis of these components  into understanding of environmental systems, through model-
         ing and other systematic tools.

     3.  Formulation of environmental policies, and administrative and legislative actions in order  to  manage our
         environmental systems.

     4.  Implementation of environmental programs including design and construction of facilities.

     Scientific monitoring and research allows us to identify and describe the components of our environment.  We
must know what these are before such systems can be modeled and analyzed.   Much of your work has been restricted
by limitations in our basic data and our knowledge of  natural  processes.  Ecological modeling, for example, is
only as good as our knowledge of the basic ecological  processes themselves, many of which we understand in only  a
primitive fashion.  Millions of dollars now go into basic environmental data collection — from ships, balloons,
space satellites, and from complex and very simple monitoring equipment here on earth.  The outputs  from this
first phase of environmental inquiry are the raw inputs needed  by you, the modelers, mathematicians, and analysts.

     Environmental modeling and simulation are integrating processes, by which relationships are tested and ex-
plored.  Through modeling, we can integrate the biological and  chemical factors that describe a polluted waterway
with the cost and benefit factors that describe the results of  improving that waterway, thus gaining economic in-
sights into the impacts of physical phenomena.  The horizons of your models have become elasticized by new gen-
erations of computers and new generations of modelers  who have  ingeniously learned how to represent  a physical
world by an electronic world.

     But what of the results of your modeling efforts?  Where do they lead and where have they gone?  How many
"optimal" solutions have become "acceptable" solutions?  How many "least cost" alternatives have become "most
used" alternatives?  Where have your models taken us?

     You have led us, I believe, to the doorstep of that next element in our environmental programs, the formula-
tion of policies, and administrative and legislative actions.   But have you taken us across the threshold?
Rarely!  It is this realm of policy and administration that concerns me the most, for this is the real  world —
the world of people and their problems; it is the place where push comes to shove —not just in our models, but
in our city halls and board rooms and anywhere that real power  is being bartered and brokered.

-------
     Ah--but you say that it is your job to provide the data  and  the  alternatives,  and  that  it  is  the  decision-
makers who must make the choices.   But who are the decisionmakers — and  why  is  it  they who  are playing  chess  with
our resources and our lives while you, who have the tools of understanding at your command, are  playing second
fiddle?

     So the question I ask is this — how do we integrate the rationality which  you,  as modelers  and analysts, can
provide into the real world of the political pro and the board room  Machiavelli?

     Can we bring the decisionmaker to you and convince him of your  wisdom?  Can we  wrap you  up  in  a nice neat
shiny package and bring you to his world?  Will  he listen?  Will  he  understand  you?

     NO!!

     How do your models, and what they have to offer,  get across  that  doorstep  into  the  real  world  where the
action is?  My friends, you take them across.   Hand-carry them—special  delivery.   And  how do you  guarantee  that
they are actually delivered, and that they find their  way into the thought processes that lead to action?  You
must be there on the other side waiting to receive them, and willing to put them to use.

     Become the decisionmakers!  Not through your models, but  through  yourselves — your  talents, your words,  your
actions, your interests.  Prepare yourselves to put behind you the world of  computers and to  professionally in-
habit the world of people.  You will find a vacuum there waiting  for your leadership, and the vacuum created  in
your wake will soon be filled by new generations of modelers waiting and following behind you.

     If you expect the leaders, the decisionmakers, to understand  and  embody the work which you  are performing,
then you must prepare to provide the leadership.   I am suggesting  that you,  the mathematicians,  modelers,  and en-
gineers, become humanized to the point where your concerns are with  the  use, the impacts, and the practicality of
your plans and the results of your modeling efforts.   In short, no environmental policies and programs  in  this
country, nor any public policy, can grow and evolve towards the achievement  of  their goals  unless the persons
best trained and equipped to understand and implement those policies grow with them.  Grow!

     I believe that many of you must set out to reorient yourselves  and  grow, in two directions  which will  enable
you to become a part of the decisionmaking process.  Grow in the  direction of your personal actions and behavior,
and grow in the direction of your professional  interests and career.

     Grow by communicating!  How often have you heard  the word communicate thrown  up to  you — at professional
meetings, in your office, and anywhere where people are concerned  about  the  results  of your labors?

     Communicate!  But do you really?  Can you describe your work  and  its applications to your wives?  How  many
of you are understood by them well enough so that they would be able to  describe to  others what you do?

     What about your kids?  Do they understand what you do? Have  you  ever spoken  to their classes  in school? If
you have, did you get your point across well enough so that your  kids  were glad you  came?  Do you think you could
explain to a 6th grade class what you do, and why it will  help fight pollution  in  some way?   What about an  8th
grade class?  10th grade?  12th grade?  Could you, in  fact, successfully speak  before a  class of college sopho-
mores?  Could you do it without writing equations on the board?

     Several years ago, my kids, very young at the time, were  confused by the fact that  certain  people  called me
Dr. Whitman, yet whenever they were ill their mother would have to take  them to visit the pediatrician  (and usu-
ally complain about the cost as well).  What kind of a doctor  was  I  that I couldn't  treat sick kids?  After much
attempted explanation, it all crystallized when I explained that I was a "doctor of sick rivers," and not a doctor
of sick children.  Children's perceptions of pollution are very vivid, and very real.  Cleanliness  is usually the
first concept driven home to kids by their parents — and violation of  public cleanliness, pollution,  is not an
abstraction to them by any means.   It is what you and  I are doing  to clean up the  pollution that becomes  the  ab-
straction.  In their simplified world, mother always has an immediate  solution  for the pollution problems which
concern them.  Yet we, whose business and profession it is to  clean  up on a  larger scale, seldom produce results,
or even take actions which the kids can comprehend.  Well, if  they in  their  enthusiasm cannot know  what we  do,
what makes you think that the public at large will?

     If your kids, or the ones down the block, can't understand what you're  doing, what  about your  parents?   Have
you ever tried to explain your professional activities to your parents and to members of their generation?  They
want to know that the country which they fought for 35 years ago,  and  which  they helped  rebuild  after a disastrous
depression, is being left in good shape for their grandchildren.   They don't care  too much  about our generation —
if things are a mess they figure it's our own fault.   But they surely  care about our children!   What have you done
lately to assure our senior citizens that you are helping to produce a healthful and safe environment for future
generations?  Do they understand you?  Do you talk to  them at  all?

     Communication is just the beginning!  How about involvement  in  community problems — environmental  or  other-
wise?  I know that many of us "working stiffs" in this field are  getting involved, be we modelers,  or lawyers, or
technicians.  Much of the acceptance of public involvement in  environmental  policies and programs has come  about
because some of us have been involved, and are beginning to understand what  citizen  action  really means.

     How about involvement in the political process?  How many of  you  (federal  employees excluded!)  have ever
seriously participated in the election of a candidate?  Have you  passed  petitions  from door to door and tried to
explain why your man was better than the other guy? Have you  ever cornered  and interrogated  a local  candidate to
find out what he really stood for?  Have you sat in on legislative hearings — only to wonder  what was really  de-
cided behind closed doors before the hearing ever began?

-------
     Why am  I dwelling upon the theme of communication and involvement?  Because they are part of personal growth.
Unless you share experiences with other people, experience their interests, their feelings, their views, their
preferences, their philosophy of government, it will be impossible for you as an economist or mathematician or
scientist to take your efforts into the real world where it all happens.  I'm talking about the world beyond the
doorstep where so many of our academic exercises come to rest.

     But self-help is not always enough!  There are those of you out there, and colleagues of yours around the
country, who would like nothing better than to have the opportunity to take your talents to the board room, and
to city hall, and to apply yourselves not just to the description and modeling of the environment, but to partic-
ipate in the decisions and the actions which lead to the management and protection of the environment.

     To enable this professional growth to happen, I am proposing a Resource and Environmental Management Ex-
change, for  the purpose of developing responsible, well rounded, experienced men and women capable of exercising
leadership in the private and public sectors in dealing with environmental and resource issues.   This is the
first public announcement of this concept which has been brewing for months, and which will be put before leading
governmental and private organizations concerned with our ability to manage and protect our natural  and environ-
mental resources.  The objective of the Exchange is not to take technocrats and let them manage our resources,
but rather to build upon persons who have strong technical experience in the environmental field and help them
become fully rounded, sensitive, alert participants in the management and policy-making processes.

     The Resource and Environmental Management Exchange would, with the commitment of the individuals and organi-
zations involved, establish over a five-year span the capability we need to manage our environment and our re-
sources intelligently and democratically.  Just what commitment do we need?

     At the  start, I would envision an Exchange which has the backing of some 75 to 140 organizations, involving
200 individuals.  These individuals would be presently employed, as many of you are, by large and small  corpora-
tions, states, universities, units of our federal government, think tank and research centers, local  and regional
agencies, and professional societies.  Each organization participating would be committed to the following:

      1.  Selecting and sponsoring one or more of its staff for involvement in the Exchange over a five-year
         period.

      2.  Underwriting part of the cost of its employees' participation in the Exchange, including seminar and
         involvement programs held regularly over the five-year period.

      3.  A willingness to exchange personnel with other participating organizations for periods  of 6 to 18 months.

      4.  A commitment to build management opportunities for Exchange graduates.

      I estimate the cost of this program to be from $3,000 to $4,000 per year per person, which  accounts for all
costs above  the normal salary and other costs of employment for Exchange participants over a five-year period.
For a complete Exchange group of 200 individuals, this results in a total  cost of $4 million over the five-year
period.

      But what are the potential benefits by which this cost can be measured?  What is the price  of leadership?
Corporations now spend thousands of dollars in training and upgrading their management team members,  and in  relo-
cating them  throughout their organizations.  The cost of hiring new management talent into their organizations  is
even higher.  The current value of environmental programs, public and private, is in the billions of dollars a
year, and still growing!

     We have spent millions on environmental monitoring, and data collection, and more millions  on models and
computer analyses to try to generate some sense of these data.  Yet, by and large, we deliver the results of our
efforts to persons in organizations, private and public, untrained in management and unaware of  the  benefits
which national resource management can really produce.  And,  even more alarming, policy and management actions
lead to programs costing billions of dollars in the construction of waste treatment facilities,  in the prevention
of adverse environmental impacts, and in the cancellation of facilities that might otherwise be  built. As various
need surveys show, the total cost of meeting our environmental goals lies somewhere between our  annual federal
budget and our gross national product, both of which are very large amounts,  to say the least.   Can you  imagine
what the value of improved management and alert leadership in these programs will  be?  That is why this  program
is being proposed — to bring us the professional growth and leadership that is needed.

     Through modeling and simulation we have been able to integrate many factors within our environmental and
economic systems.  We can make trade-offs leading toward solutions that offer us the most for our money.   Through
your efforts we can study the response of air sheds to new development, the change in river quality as we pro-
gress with our pollution cleanup efforts, and we can electronically track garbage trucks as they find their  way
around a large city over assorted hypothetical  routes.

     But now we must do more than model  the environment and simulate our management systems.   We must harness the
talent which has been used to develop these magnificent tools and give it the opportunity to grow and to prepare
for the responsibilities of leadership.   We must steer ourselves toward people, for it is they,  and not our  ma-
chines,  who will  determine whether, and to what degree, our environmental  goals will be met.   And, in building
this leadership, we should consider the institution of efforts to expose our talent to new situations, new expe-
riences, new pressures, and new responsibilities.  In short, we need to consider something like a Resource and
Environmental Management Exchange, which will allow us to build upon our scientific and technical skills and develop this lead-
ership for the future.   With it, we all  grow!

     I hope you  agree!   Please  let  me  have  your ideas, and  your response to these  ideas.

                                                        9

-------
                                    FUTURE ENVIRONMENTAL QUALITY MANAGEMENT

                                                 USING MODELS
                  David W. Duttweiler
                        Director
           US Environmental Protection Agency
           Environmental Research Laboratory
                    Athens, Georgia
                 Walter M. Sanders,  III
             Acting Associate Director for
                 Water Quality Research
           US Environmental Protection Agency
           Environmental Research Laboratory
                    Athens, Georgia
                     Introduction

     The purpose of this paper is threefold: 1) to show
environmental pollution control officials and  managers
the  potential  of  models for enhancing the efficiency
and effectiveness of their efforts, 2)  to  suggest  to
modelers  that  their  products  can  be more useful in
environmental protection, and 3) to  outline  a  future
mode  of action for environmental protection that could
be superior, in its accomplishments and costs,  to  the
present approach.

     The concept of the environment as a system will be
outlined   and   "pollution   control"   compared  with
"environmental quality management."  Steps  to  realize
environmental  quality management through use of models
will  be  described  and  the  potential  benefits   of
instituting  environmental  quality  management will be
discussed.
Environment

     A useful concept of the environment  is  a  system
composed  of  sources  of materials linked by transport
and reaction processes to biological receptors or sinks
where  the  materials  are  sequestered  from   further
significant  environmental activity.  The materials are
usually   residuals   (wastes)   of   human   activity,
occasionally  materials intentionally injected into the
environment like pesticides, and often the products  of
geochemical  processes.   They  become pollutants when,
for reasons such  as  health,  ecology,  economics,  or
aesthetics, they are undesirable constituents of one or
more  abiotic environmental components (media),  such as
air, water, or  land.   The  atmosphere,  for  example,
transports  a pollutant from its source to receptors or
to sinks, which may be organisms, another environmental
component, or man-made objects.  During  transport  the
pollutant  may  participate  in  chemical  or  physical
reactions that may transform it  into  other  materials
that may also be pollutants or that may give rise to
further pollutants.  Pollution occurs when adverse
effects  attributable  directly  or  indirectly  to the
pollutants are discerned  in  receptors,  or  when  the
"quality"  of  the  medium  is degraded to a level that
impairs its utility.  Fishkills and saline agricultural
irrigation  water,  respectively,   are   two   obvious
examples.
Pollution Control

     The   contemporary   approach  to  eliminating  or
preventing  the  undesirable  effects   of   pollutants
focuses,  logically,  on  their  sources, but virtually
ignores the rest of the environmental system or  treats
it   piecemeal.    Pollution   control   is   based  on
technological or managerial modification  of  pollutant
sources  that either emit greater amounts of pollutants
than society deems reasonable for such sources, or  that
cause pollutants to appear in air or water  in  amounts
that  society finds unacceptable.  Pollution control is
generally confined to  sources  that  affect  a  single
medium; rarely are effects in other media considered.
Environmental System Management

     A  superior approach to solving society's environ-
mental pollution problems, employing a holistic view of
both the  environment  and  society,  would  allow  the
environment  to  be  managed  to achieve the objectives
society  chooses.   Advocated  by  both   environmental
technologists   and  social  scientists  (for  example,
McGauhey, 1968; Freeman et al., 1973), a system
management  approach  could  devise  means of attaining
society's objectives most effectively and  efficiently.
Management of the environmental system, rather than its
individual components, should prevent the unanticipated
adverse  effects  that  result when the solution of one
environmental  problem  creates  several  more  serious
ones.

     The  systems  approach  forces  recognition of the
interconnectedness  of  environment  and  society,  and
provides  a means of evaluating the impact of projected
social changes on the environment, and of environmental
changes on society.  Unfortunately, social systems  are
understood as poorly as environmental systems.
Environmental Management Functions

     We  recognize  six  major  steps  in  the  systems
approach  to  environmental  management.   First,   the
community's  objectives must be clearly identified, and
criteria must be developed to judge their  satisfactory
attainment.

     Second, the system to be managed must be defined.
Its components must be identified at a useful level of
resolution.  The boundaries of the system must be
delineated and all significant inputs and outputs at
the boundaries must be quantified.  The processes
linking the components must be identified and
quantified.
     Third,  the  functional  relationships between the
objectives and the environmental  (and  social)  system
must be adequately quantified.

     Fourth, strategies must be formulated that permit
a sufficient variety of alternative decisions to be
examined for their environmental and social
consequences and for their ability to achieve the
objectives.
     Fifth, an effective means of implementing the
decisions must be established.
                                                       10

-------
     Finally, a  means  of  measuring  the  results  of
implementing  the  decisions must be provided, and data
on  environmental  and  social  impact  must  be   made
available for use in any of the preceding steps.

     Each   of  the  above  steps,  so  easily  stated,
represents  technical,  as  well  as   managerial   and
political,  challenges  that  are  not likely to be met
satisfactorily    without     considerable     rational
simplification.   For  example,  one could not possibly
define every input and output  of  a  watershed,  since
measuring  every component of mass transport across the
shed  boundaries  in  the  real  world  would   be   an
insurmountable  task.   However, the dominant processes
of   the   environmental   system   can   be   measured
sufficiently  well  to account for much of the system's
behavior.  Although such incomplete  information  about
the system will prevent full explanation of many of its
phenomena,  nevertheless it can usually allow rational,
system-wide  decisions.    Simultaneously,   areas   of
ignorance  that  must  be  addressed  to improve system
management will be identified.

     Even the simplest of  environmental  processes  is
affected  by  a  complex  of  interacting environmental
factors  whose  net  effect  is  difficult  to  analyze
mentally.   Systematic study of these factors and their
relations to the process can yield insight that can  be
experienced    by   the   knowledgeable   environmental
scientist as a mental picture  of  the  process.   Even
better,   these   insights   can   be  expressed  as  a
quantitative mathematical model.  Models of  individual
processes   combined   into   larger   models   of  the
environmental system can be studied  more  economically
and more comprehensively than can the real environment.
Outputs  of  model  studies  can be analyzed for system
response to various input strategies and  for  insights
that  would  be  impossible  to  gain from study of the
prototype.
Modeling

     Systems science offers some concepts (see, for
example, McFarlane, 1964) that help us understand the
potential and limitations of environmental models.  An
environmental  system  is  a  real-world physical system
which can  be  studied  only  through a  measuring  system.
The   output   of  the   measuring  system  is  a  set  of
observations  that is  used  to  construct  a  mathematical
model of  the  physical   system.    Observations of the
physical   system   can  be  compared  to   the   model's
description,    giving  a  set   of   errors  that  guide
refinement of  the   model.   This   iterative   process
continues  until   the  errors  become acceptably small.
The model  is  then accepted as adequate to represent  or
simulate   the  prototype  physical  system   for   the
intended purpose.
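
     To make the iterative cycle concrete, the sketch below runs the measure-compare-refine loop
against a one-parameter first-order decay model.  The model form, the synthetic observations, the
stopping tolerance, and the refinement rule are illustrative assumptions, not taken from this paper.

    # A minimal sketch, under assumed data, of the iterative refinement cycle
    # described above: compare model output with observations, compute the
    # errors, refine the parameter, and stop when the errors become acceptably small.
    import math

    observations = [(1.0, 7.5), (2.0, 5.5), (4.0, 3.1)]   # (travel time in days, measured concentration)
    c0 = 10.0                                              # known initial concentration, mg/L
    k = 0.10                                               # trial decay rate to be refined, per day

    for iteration in range(200):
        errors = [c0 * math.exp(-k * t) - c for t, c in observations]
        sse = sum(e * e for e in errors)
        if sse < 0.05:               # errors acceptably small: accept the model as adequate
            break
        k += 0.005 * sum(errors)     # crude refinement: raise the rate when the model runs high

    print(f"accepted decay rate k = {k:.3f} per day (residual sum of squares {sse:.3f})")

In a real application the measuring system, the error criterion, and the refinement rule would all
be far more elaborate, but the cycle is the same.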

      Some  models  can  be formulated in a way that allows
them   to be used  analytically for detecting interesting
features of the system and its  behavior.   Especially
for management purposes, they can find optima, that is,
find   a  set  of conditions that  will cause an objective
function   (an objective   stated mathematically  as  a
function   of  one  or more system  variables) to take on a
desired maximum or minimum.  For example,  the  minimum
value  of  the  cost   function   for  a set of pollution
control procedures can  be  found through  mathematical
manipulation.   Some  models can  only be formulated in a
way that requires the  objective  function to be searched
for the optima.   Even with this  restriction,  however,
searches   using   models are usually much more efficient
for finding the optima  than are  searches  in  the  real
world.
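
     As an illustration of searching an objective function for its optimum, the sketch below
enumerates removal levels at two hypothetical dischargers and keeps the least-cost combination that
meets an assumed allowable residual load; the cost curves and loads are invented for the example
and are not drawn from this paper.

    # A minimal sketch of an optimum-seeking search: minimize total treatment
    # cost subject to an assumed limit on the residual load reaching the stream.
    raw_load = {"plant_A": 800.0, "plant_B": 500.0}    # kg/day discharged untreated (assumed)
    allowable_total = 300.0                            # kg/day the stream may receive (assumed)

    def cost(source, removal):
        """Hypothetical treatment cost, rising steeply at high removal fractions."""
        return raw_load[source] * removal ** 3 * 100.0

    best = None
    for a in range(100):                               # removal fractions 0.00 through 0.99
        for b in range(100):
            ra, rb = a / 100.0, b / 100.0
            residual = raw_load["plant_A"] * (1 - ra) + raw_load["plant_B"] * (1 - rb)
            if residual > allowable_total:
                continue                               # this strategy misses the objective
            total = cost("plant_A", ra) + cost("plant_B", rb)
            if best is None or total < best[0]:
                best = (total, ra, rb)

    print("least-cost removal fractions (cost, plant A, plant B):", best)

Where the model permits, the same optimum can of course be located analytically rather than by
enumeration, as the preceding paragraph notes.
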
     Models  must not be confused with their real-world
prototypes, and conclusions drawn  from  them  must  be
applied with caution.  A model built with one objective
in  mind  will  likely be inappropriate to simulate the
system for a different objective.  Models built without
clearly defined objectives may have no practical use in
solving environmental problems.  Models  that  are  not
periodically  compared with their prototypes can become
treacherously   inaccurate   if   the   prototypes   or
constraints change without detection.

     Mathematical  models are perhaps the only feasible
means of accomplishing the six major steps outlined for
system management.  The setting of social objectives is
an  especially  complicated  process  in  a  democratic
society.   Models  of this process and the interactions
between  society's  various  objectives   can   suggest
efficient  means of optimizing its results.   Objectives
set  by  any  process,  however,  may  be   unanimously
desirable,   but  not  feasible.   Models  that  relate
objectives to  the  systems  to  be  managed  can  give
insight  into  their  feasibility  that might otherwise
await actual failure of attainment.   An  imprecise  or
non-feasible  objective  can often be recognized and an
equivalent feasible objective substituted as  a  result
of such modeling.

     The  quantitative  descriptive  power of models is
essential to define a system  to  be  managed,   and  to
focus  attention  on  its  significant features.   For a
given  objective,  these  features  can  be   described
quantitatively   so   that  the  variables  having  the
greatest relevance to the objectives can be modeled.

     Functional relationships  between  objectives  and
the  system  to  be managed obviously can be quantified
only  through  mathematical   representations.     These
functions,   known   with   sufficient  resolution  and
precision, are crucial to the rational  application  of
system management.

     Development  of  alternative management strategies
for  environmental  systems  can   use   the   powerful
techniques  of  operations  research,  mostly  based on
models, that have been applied so successfully in
most other modern industrial and commercial activ-
ities.  (See, for example, Churchman, et al. , 1957.)

     If  the  means  of   implementing   decisions   is
considered  separately,  models  can  be used to design
efficient decision-implementing systems and guide their
operation.   Again,  operations  research  and   modern
management science offer powerful techniques.

     Finally,  the  design  of  systems  to monitor the
results of environmental management decisions should be
based on  the  models  used  to  reach  the  decisions.
Models  of  the monitoring system itself are useful for
optimizing its operation.

     The six steps involve  a  variety  of  disciplines
that  traditionally  are  not  accustomed  to  the team
effort  that  is  needed  to  make  system   management
effective.   Semantic and philosophical differences can
find a common ground in models and their symbolism.
Environmental System Models

     Construction of predictive environmental models is
a complex scientific challenge.  Recent successes  have
been  confined  mainly  to models for managing specific
materials or controlling the quality of a single medium
(see, for example, Loucks, 1972; Thomann, 1972;
Deininger, 1973; Hill et al., 1976; Lassiter, 1975;
Bloomfield et al., 1973).  Environmental modelers are
adopting  a modular approach that permits any number of
                                                        11

-------
subsystem models to  be  assembled  into  larger,  more
comprehensive environmental system models.  Such models
will  never  be able to predict reliably every variable
and parameter that  might  be  of  interest,  but  they
should provide sufficient basic information to meet the
environmental   manager's  needs.   Models  that  offer
insight into ecosystem functions that impact
environmental quality (Chen et al., 1975), and into the
behavior of pollutants in ecosystems (Gillett, 1974;
Sanders,  1975)  are  available  in  various  stages of
utility.   Models  that   describe   an   operationally
significant  water quality parameter,  dissolved oxygen,
have been available in  useful  form  for  fifty  years
(Streeter  and Phelps, 1925) and are in general use for
making water pollution control  decisions.   Models  of
the   concentration  and  distribution  of  atmospheric
pollutants are being used for  planning  and  operating
air pollution control programs (Singpurwalla, 1974).
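
     The dissolved oxygen model referred to above can be written compactly.  The sketch below
evaluates the classical Streeter-Phelps oxygen-sag relation for the deficit downstream of a single
discharge; the rate constants and loads are illustrative values, not results from any of the works
cited.

    # The classical Streeter-Phelps deficit downstream of an outfall:
    #   D(t) = kd*L0/(ka - kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t)
    # evaluated with illustrative (assumed) rate constants and loads.
    import math

    kd, ka = 0.35, 0.70     # deoxygenation and reaeration rates, per day (assumed)
    L0, D0 = 20.0, 1.0      # ultimate BOD and initial deficit at the outfall, mg/L (assumed)

    def deficit(t):
        """Dissolved-oxygen deficit (mg/L) after t days of travel below the outfall."""
        return kd * L0 / (ka - kd) * (math.exp(-kd * t) - math.exp(-ka * t)) \
               + D0 * math.exp(-ka * t)

    for t in (0.5, 1.0, 2.0, 3.0, 5.0):
        print(f"t = {t:3.1f} days   deficit = {deficit(t):4.2f} mg/L")

With these assumed values the curve shows the familiar sag, the maximum deficit falling roughly two
days of travel below the outfall.
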
Social System Models

     Modeling   society   is   as  great  a  scientific
challenge as is modeling the environment.   The  simpler
social subsystems, e.g.  community activities for solid
waste  disposal (Liebman, 1974), have been successfully
modeled.  The social subsystem that  has  received  the
greatest attention of modelers is, of  course,  the  US
economic   system,   which   continues   to   challenge
econometricians.  Models are available for such  social
functions  as community health services (Palmer, 1974),
law enforcement (Gass, 1974), and educational systems
(Weiss, 1974).
Management Models

     Logical decision making has benefited greatly from
the  quantitative  analytical  techniques  developed by
operations research.  Both processes and  organizations
for  decision making have been modeled for a variety of
purposes and in numerous settings.   A  large  body  of
knowledge  on  decision  making under uncertainty (see,
for example, Raiffa, 1968) is available for application
by  social  institutions   responsible   for   managing
environmental  quality.   Regional models that can help
identify  decisions  which   minimize   the   cost   of
"managing"  residuals  are available (see, for example,
Kneese, et al.,  1970; Spofford, 1973).
Implementation

     The  development  and  application  of   realistic
environmental  management  models have been hindered by
the difficulty in defining or isolating specific
systems  to  be managed and by the discontinuity of the
social  institutions  having  the  responsibility   for
planning   and   implementing   management  strategies.
Logical geographic subdivisions for one  medium  seldom
coincide  with  those  for  another medium; none of the
subdivisions  match  the  various   state   and   local
governmental  boundaries.   EPA  has  had difficulty in
focusing  on  true  intermedia   programs   since   its
authority  and  funding  are provided by eight separate
laws  which  are  either  media,  material, or   source
specific.

     The  future  of  environmental  system  management
looks much brighter, however, with the creation of  the
150  "Designated  Area"  planning agencies, established
under the authority of Section 208, Public Law  92-500,
for   water  quality  planning  and  management.   More
recently, they have been given increased responsibility
for  planning  air,  solid  waste,  thermal,  and  noise
pollution  control  strategies (Mellencamp, 1976).   The
designation   of   these   subregional   or   sub-basin
management  areas  has  greatly increased the degree  of
resolution  that  can  be  achieved  compared  to   the
regional  or  national  scale.   Given  clearly  stated
objectives and the proper  criteria  and  tools,  these
planning   institutions   can  devise  true  multimedia
environmental management alternatives that will  permit
local   officials   to   explore   and  strike  optimum
strategies for achieving both environmental  protection
and  other  social benefits simultaneously.  Using well
developed systems models,  such  agencies  can  quickly
explore  many alternative decisions that could not only
reduce the cost of managing the environment for  social
good  but also achieve a level of environmental quality
not otherwise attainable.

     Based  on  such  model  strategies  and  continual
feedback  from  the  local  area,  these agencies could
influence the allocation of costs so that they would  be
shared equitably and could  evaluate  progress  towards
goals   and   provide  course-correcting  stimuli.    We
believe that these same benefits can also  be  attained
on  a  wider scale when the 208 program shifts from the
designated  area  to   the   state-wide   emphasis    as
stipulated in the law.

     In  order  to  achieve these desired environmental
and social  benefits,  the  scientific  community  must
greatly  increase  its efforts to move systems modeling
concepts from  the  realm  of  scientific  research   to
practical   applications.    Social   and  governmental
institutions  are  developing,   but   well   evaluated
comprehensive  modeling tools, especially those linking
environmental and socio-economic systems, are currently
not available for their use.

     One obstacle has been the system  approach's  huge
appetite  for  data.   Obviously  the  effectiveness  of
system management depends on sufficient pertinent  data
to  construct, test, and use the models needed for each
step.  The formidable complexity of social and environ-
mental systems might suggest that enough data of the
right kind would be so costly as to be unattainable.  We
believe, however, that data now available, or data that
could feasibly be obtained, are sufficient to meet the
present needs of models if a relatively gross level of
resolution is accepted.  Environmental management
decisions  based  on meager comprehension of the entire
system will be, we believe, superior to decisions based
on near-perfect understanding of one  system  component
and ignorance of the others.   Acceptance, initially,  of
a  grosser  level  of  resolution  will  facilitate the
application  and  subsequent  development   of   system
management  techniques.    The  allocation  of society's
resources to environmental system management  can  then
grow, if necessary,  toward an optimum for achieving the
stated objectives.

     Obviously, we believe that the systems approach  to
environmental  quality management is a desirable option
for improving American social action  in  environmental
protection  and  one  that deserves more consideration,
development, application and evaluation.
                      References

Bloomfield et  al.   1973.   Aquatic  Modeling  in  the
     Eastern  Deciduous  Forest  Biome,  US I.E.P.  In:
     Modeling  the  Eutrophication  Process,   Workshop
     Proceedings.  Utah Water Research Laboratory, Utah
     State University,  Logan, Utah.

Chen,  C.  C.  et  al.    1975.  Ecologic Simulation for
     Aquatic Environments.  In:  Systems  Analysis  and
     Simulation  in  Ecology, Volume III, Patten, B. C.
                                                        12

-------
     (ed.).  Academic Press, Inc., New York.  p. 476-588.
Churchman,  C.  W.,  R. L. Ackoff, E. L. Arnoff.  1957.
     Introduction to Operations Research.   John  Wiley
     and Sons, Inc., New York.  645 p.

Deininger,  R.  A.  (ed.).   1973.  Models for Environ-
     mental  Pollution  Control.   Ann  Arbor   Science
     Publishers, Inc., Ann Arbor, Michigan.  448 p.

Freeman,  A.  M., III, R. H. Haveman, and A. V. Kneese.
     1973.   The  Economics  of  Environmental  Policy.
     John Wiley and Sons, Inc., New York.  184 p.

Gass,  S.  I.   1974.   Models  in  Law Enforcement and
     Criminal Justice.  Chapter 8 in: A Guide to Models
     in  Governmental  Planning  and  Operations.   EPA
     Report Number 600/5-74-008.  p. 233-275.

Gillett et al.  1974.  A Conceptual Model for the
     Movement of Pesticides  Through  the  Environment.
     US  EPA, ORD, NERC, Corvallis, Oregon.  EPA Report
     Number EPA-660/3-74-024, December.  79 p.

Hill, J., IV, H. P. Kollig, D. F. Paris, N.  L.  Wolfe,
     and  R. G. Zepp.  1976.  Dynamic Behavior of Vinyl
     Chloride in  Aquatic  Ecosystems.   US  EPA,  ORD,
     Environmental     Research    Laboratory,   Athens,
     Georgia.  EPA Report Number EPA-660/3-76-001.  63 p.

Kneese,  A.  V.,  R. U. Ayres, and R. C. d'Arge.  1970.
     Economics  and  the  Environment:    A   Materials
     Balance  Approach.  Resources for the Future, Inc.
     Washington, DC.

Lassiter, R. R.  1975.  Modeling Dynamics of Biological
     and Chemical Components of Aquatic Ecosystems.  US
     EPA, ORD, NERC,   Corvallis,  Oregon.   EPA  Report
     Number EPA-660/3-75-012.  54 p.

Liebman,  J.  C.   1974.  Models in Solid Waste Manage-
     ment.  Chapter 5  in: A Guide to Models in  Govern-
     mental Planning and Operations.  EPA Report Number
     600/5-74-008.  p. 141-164.

Loucks, O. L.  1972.  Systems Methods in Environmental
     Court Actions.  Systems Analysis and Simulation in
     Ecology, Volume II, Patten, B. C.  (ed.).  Academic
     Press, New York.  592 p.

McFarlane,  A.  G.  J.   1964.    Engineering   Systems
     Analysis.    Addison-Wesley   Publishing  Company,
     Inc., Reading, Massachusetts.  272 p.

McGauhey, P. C.  1968.  Engineering Management of Water
     Quality.  McGraw-Hill Book Company, New York.  295 p.

Mellencamp,  G.  L.    1976.   Personal   Communication,
     Director,  Waste  Management  Program, Chattanooga
     Area Regional Council of Governments, Chattanooga,
     Tennessee.

Palmer, B. Z.  1974.  Models in Planning and  Operating
     Health Services.  Chapter 11 in: A Guide to Models
     in  Governmental  Planning  and  Operations.   EPA
     Report Number 600/5-74-008.  p. 349-374.

Raiffa, H.  1968.   Decision  Analysis.   Addison-Wesley
     Publishing  Company, Inc., Reading, Massachusetts.
     309 p.

Sanders, W.  M., III.  1975.   A  Strategy  for  Aquatic
     Pollutants,   Fate  and  Transport  Determination.
     Presented  at  the  International  Conference   on
     Environmental  Sensing  and Assessment, Las Vegas,
     Nevada, September.  (Proceedings in press).

Singpurwalla, N. D.  1974.   Models  in  Air  Pollution.
     Chapter  3  in:  A Guide to Models in Governmental
     Planning and Operations.  EPA Report Number 600/5-
     74-008.  p. 63-102.

Spofford, W. O., Jr.  1973.  Total Environmental
     Quality  Management  Models.  Chapter 19 in: Models
     for Environmental Pollution Control, Deininger, R.
     A. (ed.).   Ann Arbor Science Publishers, Inc., Ann
     Arbor, Michigan,   p. 403-436.

Streeter, H. W. and E. B. Phelps. 1925. Public Health
     Bulletin Number 146.  US PHS, Washington, DC.  75 p.

Thomann,  R.  V.   1972.    Systems  Analysis  and Water
     Quality Management.    Environmental  Research  and
     Applications, Inc.  New York.  286 p.

Weiss,  E. H.  1974.  Models in Educational  Planning and
     Operations.   Chapter   9  in: A Guide  to Models in
     Governmental Planning and Operations.  EPA Report
     Number 600/5-74-008.  p. 219-316.
                                                        13

-------
                            A SYSTEMATIC APPROACH TO REGIONAL WATER QUALITY PLANNING

                                      G. Paul Grimsrud and E. John Finnemore
                                               Systems Control, Inc.
                                               Palo Alto, California
     This paper describes the methodologies developed
for regional water quality management planning on the
Snohomish and Stillaguamish River Basins in the State
of Washington.  These methodologies were specially
designed to be responsive to future changes in state
and federal legislation, land use, economics, popula-
tion, employment, geographical and political boundaries,
technological development, and changes in the natural
or man-made conditions of the water bodies.  Computer
models were developed and utilized to project future
sewage and  runoff  flows, determine the assimilative
character of water bodies, plan and cost various alter-
native wastewater management plans and provide other
information on the cost-effectiveness of alternatives
necessary for the completion of the Water Quality
Management Plans.  The modeling and programming elements
are the principal factors allowing the development of a
dynamic and easily updated Water Quality Plan.

     The approach is applied to river basin planning on
the Snohomish and Stillaguamish Basins, demonstrating
its application and results.  The methodologies are
presently used by planners in Snohomish County for on-
going water quality planning.

                       Introduction

     Planners and managers responsible for the perfor-
mance of water quality planning programs must usually
deal with the complex engineering, economic, financial,
legal, institutional and environmental aspects of water
quality.  They must face the need to develop or acquire
a systematic approach to planning which can take all of
these factors into account.   This paper presents a
water quality planning methodology which incorporates
the most up-to-date technology for computer-based modeling,
yet retains great flexibility for practical use.  The
methodology is designed to select the most cost-effective
wastewater management schemes in a region, and the
time-phased design of such schemes over many years in
the future.  The methodology was developed under con-
tract with EPA, and used on the Snohomish and Stilla-
guamish River Basins in Washington State.

                   Planning Procedure

     The planning steps followed in the quest for a
cost-effective water quality management plan for river
basins include the selection of alternative configura-
tions for treatment facilities, carrying out assimi-
lation analyses for receiving waters, costing the
alternative configurations,  and making cost-effective-
ness comparisons.  The methodologies employed in these
steps are a compromise of computer-oriented quantita-
tive procedures and planner-oriented qualitative
activities.

     Figure 1 summarizes and clarifies the interdepend-
encies of the tasks in this planning methodology and
illustrates the flow of events - prerequisites, bottle-
necks, critical paths, progressions.  The starting-
point data or inputs to the planning process are listed
in Column 1 (left hand side).  The flow paths to the
right from these starting points show the impacts of
the input data on subsequent tasks.
     Throughout the formulation of planning method-
ologies, weaknesses in presently available information
were found.  It was evident that certain  plan  inputs,
such as the water quality standards, were likely  to
change in the near future, thus causing a need for plan
updating.  These problems were given foremost  consider-
ation in the development of plan methodologies.   Future
updates to the inputs (data or assumptions) should only
be made directly to items in Column 1 of  Figure 1.
Thus, whenever any input to the plan development  process
is significantly changed at some future date,  the flow
paths of Figure 1 will identify the downstream tasks in
need of review and possible re-execution.
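
     The update-tracing role of Figure 1 can be pictured as a walk through a directed graph of
tasks.  The sketch below uses a small hypothetical task network (the names are illustrative, not
those of Figure 1) to list every downstream task needing review when one Column 1 input changes.

    # A minimal sketch of tracing the downstream effects of a changed plan input
    # through a directed graph of planning tasks (task names are hypothetical).
    from collections import deque

    downstream = {                                  # task -> tasks that depend on it
        "water quality standards": ["assimilation analysis"],
        "population forecast": ["waste load forecast"],
        "waste load forecast": ["assimilation analysis"],
        "assimilation analysis": ["alternative costing"],
        "alternative costing": ["cost-effectiveness comparison"],
        "cost-effectiveness comparison": [],
    }

    def tasks_to_review(changed_input):
        """Breadth-first walk of all tasks downstream of a changed input."""
        affected, queue = [], deque(downstream.get(changed_input, []))
        while queue:
            task = queue.popleft()
            if task not in affected:
                affected.append(task)
                queue.extend(downstream.get(task, []))
        return affected

    print(tasks_to_review("water quality standards"))
    # -> ['assimilation analysis', 'alternative costing', 'cost-effectiveness comparison']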

     Of prime concern in the development  of an effective
water quality plan are the concentrations  of pollutants
in the receiving waters of the basins.  Since  the
natural movements of these waters, and hence their
transportive and diluting effects, are ever-changing
and variable, some specific flow regime must be
selected for planning purposes to make possible the
comparison of alternatives.

                      Flow Regimes

     A specific flow regime,  for analysis purposes, is
required to provide the basis for the computation of
receiving water qualities.  Although the regime should
approximate "worst-case" conditions to provide the
fullest protection against water quality violations,
any conditions, however conservative,  can be exceeded
with some (possibly very small) frequency or proba-
bility.  To plan to control events which occur on the
average only once in a thousand years, say, would be
to incur inordinate expenses.   Thus,  the selected flow
regime must be in a sense an arbitrary design  condition,
corresponding to flow magnitudes which are adequate for
design purposes a large majority of the time.

     Waste load dilutions are,  in Western Washington
areas,  lowest during summer low-flow conditions.  The
seven-day/ten-year low-flow conditions have been
specified for the State of Washington.  These were
determined for many points on streams around the basin
using available U.S.G.S. data,  and a set of streamflow
analysis and synthesis programs (see Reference 1 for
details).
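
     As a rough illustration of how such a low-flow statistic can be derived from daily records,
the sketch below takes the lowest seven-day mean flow in each year and reads off the ten-year value
by a Weibull plotting position.  The actual streamflow analysis and synthesis programs are those
described in Reference 1; this simplified calculation is only an assumption about the general form
of the computation.

    # A minimal sketch of a seven-day/ten-year (7Q10) low-flow estimate from
    # daily records; the programs actually used are described in Reference 1.
    def seven_day_ten_year_low_flow(daily_flows_by_year):
        """daily_flows_by_year: {year: [daily flows, cfs]} -> estimated 7Q10 in cfs.
        Assumes each year contains at least seven daily values."""
        annual_minima = []
        for flows in daily_flows_by_year.values():
            seven_day_means = [sum(flows[i:i + 7]) / 7.0 for i in range(len(flows) - 6)]
            annual_minima.append(min(seven_day_means))   # driest 7-day mean of the year
        annual_minima.sort()                             # ascending: driest years first
        n = len(annual_minima)
        rank = max(1, round((n + 1) / 10.0))             # Weibull recurrence (n+1)/rank = 10 years
        return annual_minima[rank - 1]

Given thirty or more years of record, the function returns roughly the third-lowest annual
seven-day minimum.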

      Pollutant concentrations may be even greater,
however, when materials which have accumulated on
urban and agricultural areas  over a dry period are
washed off by a summer storm.   Storms during the
summer months are very common in the Pacific Northwest.
On average, at least three measurable storms occur
every summer.  Due to this relatively high
frequency of storms,  runoff of non-point pollutants can
be expected to occur during the worst-case design
conditions.

     The design storm selected for the purposes of
including non-point runoff was a typical summer storm.
It was assumed to have occurred after twelve dry days,
which,  based on historical data, was found to  be  the
average period between summer storms.   Runoff  resulting
from the average summer storm was input as a uniform
flow over the 24 hour modeling period in the upper
basins.  A runoff pollutograph resulting from  the storm
was used in the Snohomish Estuary.
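
     The arithmetic behind that uniform loading can be sketched as a simple buildup-and-washoff
calculation; the buildup rate, washoff fraction, and drainage area below are invented for
illustration and are not values from the study.

    # A minimal sketch of the buildup-and-washoff arithmetic implied above:
    # pollutant accumulates over the dry period and the washed-off portion is
    # spread uniformly over the 24-hour modeling period.  All values are assumed.
    dry_days = 12                       # average period between summer storms
    area_acres = 2500.0                 # hypothetical drainage area
    buildup_rate = 3.0                  # lb of pollutant per acre per dry day (assumed)
    washoff_fraction = 0.6              # portion removed by the average summer storm (assumed)

    accumulated = area_acres * buildup_rate * dry_days      # lb on the surface before the storm
    washed_off = accumulated * washoff_fraction             # lb delivered by the storm
    uniform_load = washed_off / 24.0                        # lb per hour over the model day

    print(f"accumulated {accumulated:.0f} lb, washed off {washed_off:.0f} lb,"
          f" input as {uniform_load:.0f} lb/hr over the 24-hour modeling period")
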
                                                       14

-------
     In addition to the above conditions, for the
estuary a tidal condition corresponding to the typical
monthly low swing was incorporated into the flow
regime.  This provided a conservative estimate of tidal
flushing.

     Because of the somewhat arbitrary nature of this
flow regime, the above-mentioned flow conditions were
presented for consideration to the Snohomish County
Advisory Committee, and they were accepted.
              Determination of Alternatives

     Considerations included in  the  selection of
alternative wastewater treatment  configurations for
the basins were:

     •    Locations of existing  treatment plants;

     •    Suitability of  existing facilities;

     •    Probable locations of  future  treatment
          plants;

     •    Optional locations for  outfalls;

     •    Types  of treatment plants,  including
          storage facilities;

     •    Spatial distribution,  and  relationship
          to  river and estuary system;

     •    Topography; and

     •    Soil  conditions.

     In developing alternatives, at all times only
systems  that  would be hydraulically  sound and that
showed a potential to be  simple,  least  cost, and
reliable, were  considered.  To accomplish this,
gravity  systems were used whenever possible.

     The list of alternative treatment  plant and  out-
fall locations,  and types of treatment,  for possible
inclusion in  the alternative plans was  derived  from
previous studies, from discussions with local treat-
ment plant operators, engineers  and  elected officials,
and from the  personal familiarity of Snohomish  County
Staff members.   The alternatives  chosen represent
those most likely to be effective, based on the best
engineering judgment of these sources and current EPA
cost-effectiveness guidelines.3

     Alternative configurations  involving the possible
combination of  industrial and municipal flows were not
considered acceptable when the industrial flows would
be greater than about one-third  of the  municipal  flows.
This was because the biological processes
involved  in treatment plants need a  reasonably  steady
wastewater quality; this  characteristic of municipal
wastewater is easily upset by flow and  quality varia-
tions in  effluents from industrial sources beyond the
control of the municipal  treatment plant.  The  alter-
native of industry responsibility for the operation of
the "combination" treatment plant was not considered
at this time  to  be desirable.

     The  general objective of the various regionalized
configurations was to achieve overall cost reductions
through  the economies of  scale obtained  from combining
several local wastewater  treatment plants.  The addi-
tional costs of  required  interceptor  sewers and/or
force mains were included.
                 Wasteload Allocation

     The execution of a cost-effective waste load
allocation in the river basins required consideration
of different types of sources (point, drainage district,
and non-point), different time horizons with their
varying water quality standards required by law,
different receiving waters (rivers, estuary) and alter-
native regionalization schemes for wastewater treatment
facilities.  Many of these were interdependent.

     In order to approach the identification of a most
cost-effective basin configuration in an organized and
effective manner, a working procedure was developed in
advance of undertaking the task.  This procedure is
summarized in Figure 2 (supplemented by Table 1), which
outlines the alternative "routes" which were considered
to warrant investigation, in some cases conditional
upon the findings from earlier phases.  The following
assumptions were used in this procedure:

1.   Straightforward structural solutions were to be
     attempted first.  Only if they were found un-
     successful would non-structural solutions be
     considered.

2.   Where regionalized treatment plants were expanded
     existing plants, the same outfall locations would
     be used.

                  Assimilation Analysis

     A very large number of computations is needed to
determine the water quality levels that result from a
variety of wasteloads with complex hydrodynamic and
natural constituent processes.  Therefore, computer
modeling of the receiving waters was selected for the
waste assimilation analyses.  Computer modeling also
allows rapid recomputation for various alternative cases,
once the basic model has been established, thus facili-
tating comparisons.

     The water quality computer models may also be used
to investigate which estuary and river segments are
"water quality limited" and which are "effluent limited."
Given conditions where all point sources just meet
effluent standards, the former designation applies where
receiving water standards are violated and the latter
applies where they are met.
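
     As a minimal illustration of this classification (the segment
names and values are hypothetical; the actual determination was made
with runs of the receiving water models):

# Minimal sketch of the segment classification described above, assuming the
# receiving-water model has already been run with every point source set
# exactly at its effluent standard.  Segment names and values are hypothetical.

def classify_segment(simulated_conc, receiving_water_standard):
    """'Water quality limited' if the receiving-water standard is violated even
    though all point sources just meet effluent standards; else 'effluent limited'."""
    if simulated_conc > receiving_water_standard:
        return "water quality limited"
    return "effluent limited"

# BOD-like constituent, mg/L: (simulated concentration, receiving water standard)
segments = {"Estuary segment 3": (6.5, 5.0), "River mile 28": (3.1, 5.0)}
for name, (simulated, standard) in segments.items():
    print(name, "->", classify_segment(simulated, standard))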

                Waste Load Forecasts

     In addition to specifying the receiving water
geometry and the design flow regime, the water quality
models require, as input, data on waste loads.   Waste
loads may, now or in the future, be directly subject to
effluent standards, and may also be limited, indir-
ectly, by receiving water quality standards.  The ob-
jective of an assimilation analysis is to determine
what may be discharged where, and yet meet the various
pertinent standards.

     Federal guidelines for effluent standards for
municipal and industrial wastewaters have been, or are
in the process of being, established.  The standards
become more stringent with time, varying in requirement
from the "best practicable control technology currently
available" or secondary treatment, to the "best avail-
able technology economically achievable" or "zero
discharge."

     Proposed EPA standards for many industrial efflu-
ents are published in the Federal Register.5,6  For
each quality constituent, two effluent limitations are
provided:  (1) a maximum for any one day, and (2) a
maximum average of daily values for any period of
thirty consecutive days.  The latter, lower value was
selected as the basis for computing industrial waste
loads, since using the former would compound worst-case
events having a very small joint probability of
occurrence.
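
     A hedged example of the load computation implied here, with the
limitations expressed as concentrations for simplicity (actual
industrial guidelines are often stated per unit of production); the
limitation values and flow are hypothetical, and 8.34 is the standard
conversion from mg/L times mgd to lb/day.

# Hypothetical numbers; 8.34 converts (mg/L x mgd) to lb/day.

def waste_load_lb_per_day(conc_mg_per_l, flow_mgd):
    return 8.34 * conc_mg_per_l * flow_mgd

bod_daily_max = 12.0   # mg/L, maximum for any one day (assumed)
bod_30day_avg = 6.0    # mg/L, maximum 30-day average (assumed)
flow_mgd      = 4.0    # industrial discharge flow (assumed)

print("daily-maximum basis:  %.0f lb/day" % waste_load_lb_per_day(bod_daily_max, flow_mgd))
print("30-day-average basis: %.0f lb/day" % waste_load_lb_per_day(bod_30day_avg, flow_mgd))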

     The pollutant concentrations assumed for effluent
from secondary treatment at municipal wastewater treat-
ment plants, as employed in the models, are given in
Table 2.  Where these values were uncertain, they were
chosen to err on the high side, so as to provide a
"worst-case" safety margin.  Chlorine in secondary
municipal effluent was modeled as a conservative (non-
decaying) constituent.  The various representative
types and levels of secondary treatment employed in
this study included:

     •    Oxidation Process (aerated ponds or ditches
          with 3 to 4 days retention capacity and
          effluent chlorination) meeting secondary
          standards (assumed maximum flow capacity
          of 2 mgd).

     •    High Rate Trickling Filters (primary
          sedimentation, H.R. filters with re-
          circulation, secondary clarification,
          effluent chlorination).

     •    Conventional Activated Sludge (primary
          sedimentation, aeration, secondary
          clarification, effluent chlorination).

     The municipal wastewater flows were predicted
using a computer forecasting model, SNOQUAL, which
determines flows on the basis of population dynamics.
The details of this program are given in Reference 1.
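
     The sketch below is not SNOQUAL itself (which is documented in
Reference 1); it only illustrates a population-driven flow forecast
under an assumed per-capita flow and growth rate.

# Not the SNOQUAL program (see Reference 1); a minimal population-driven
# municipal flow forecast with assumed per-capita flow and growth rate.

PER_CAPITA_GPD = 100.0                  # gallons per person per day (assumed)

def forecast_flow_mgd(base_population, annual_growth_rate, years_ahead):
    population = base_population * (1.0 + annual_growth_rate) ** years_ahead
    return population * PER_CAPITA_GPD / 1.0e6     # gallons/day -> mgd

for years in (0, 4, 24):                # present, 1980, and 2000 relative to 1976
    print(1976 + years, "->", round(forecast_flow_mgd(50_000, 0.02, years), 2), "mgd")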

     One other important potential source of pollution
is storm runoff from urban areas (in storm or combined
sewers) or from agricultural lands (non-point sources).
The manner in which these were computed and input to
the models is discussed in Reference 1.

     Rivers.  A steady-state river water quality model
(SNOSCI) as described in Reference 1, a  specially
modified version of DOSAG,  was used for the river
assimilation analysis.  Numerous runs were made to
test various boundary conditions.  These consisted of
point sources (municipal and industrial),  non-point
sources (agricultural runoff), and tributary streams.

     The scheme of Figure 2 was used as a guide for
running the alternative test conditions.   First, runs
were made with (a) zero point loads (representing 100%
treatment) and full non-point loads, and (b) vice-versa.
Point source treatment would be achieved by traditional
structural means (sewers and treatment plants); non-
point "treatment" would be effected (at least partially)
by "non-structural" measures such as restricted land
use and agricultural practices.

     Next, with secondary treatment of municipal and
industrial point sources, a number of runs were required
to identify the locations and sizes required for reduc-
tions in non-point sources in order to meet receiving
water quality standards.

     The effects of various regionalized wastewater
treatment schemes were then investigated,  combining
municipal and industrial effluents at certain appro-
priate outfall locations.

     These investigations were repeated  for present,
1980 and year 2000 waste flows.

     Estuary.  The transient estuary water quality
model (SRMSCI) as described in Reference 1, a specially
modified version of RECEIV7 for this study, and pre-
viously calibrated to the extent possible, was used for
the estuary assimilation analysis.  The design  flow
conditions included river inputs as per the output from
the river model.

     Numerous runs were made to  test alternative muni-
cipal  and industrial treatment schemes, again using
the scheme of Figure 2 as a guide.  An additional
category of source, the Drainage District (treatable
"non-point"), was included to represent two such
extensive areas.

     First, runs were made with no storm  and non-point
sources, and with point sources at secondary treatment.
Next,  the storm and non-point sources were  included  in
the estuary model, and a number  of runs were required
to determine the locations and magnitudes of coliform
reductions required to meet standards.

     The "most stringent" regionalization scheme for
the lower basin was run, with year 2000 municipal and
industrial loads.  This included municipal  flows from
Seven  Lakes, Tulalip, Marysville, Lake Stevens,
Mukilteo and Everett, all given  secondary treatment
at the present Everett Site.

                  Cost of Alternatives

     The objective here was to develop appropriate proce-
dures for estimating costs for the large number of plan
alternatives considered, each with its numerous facility
components and with time phasing.

     A computer model, SUSCI2, was developed by Systems
Control, Inc., to design, time-phase, and cost sewers,
force mains, pumping stations, and treatment plants.
More details of this model are provided in Reference 1.

     Given the alternative sewer networks, and locations
and types of treatment plants,  together with the  fore-
cast flows, SUSCI2 designs the facilities needed  and
computes the capital and M&O expenditures required,
indicates the timing of costs over the planning period
(1976-2000),  and also determines a discounted total
cost (present worth) for each alternative scheme.  All
the alternatives were costed with this model, taking
advantage of common components (typically local  sewer
systems) which need not be repeated.   Care was  taken
to ensure that the same total area was serviced under
each alternative, regardless of the manner of servicing.
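
     The discounted-total-cost figure that SUSCI2 reports can be
sketched as below; the discount rate, capital schedule, and M&O stream
are hypothetical, since the actual model generates these streams from
the facility designs it produces (Reference 1).

# Hedged sketch of the present-worth calculation over the 1976-2000 planning
# period.  The discount rate and cost streams are hypothetical.

def present_worth(costs_by_year, base_year=1976, discount_rate=0.0625):
    """Discount a {year: cost} stream back to the base year."""
    return sum(cost / (1.0 + discount_rate) ** (year - base_year)
               for year, cost in costs_by_year.items())

capital = {1976: 2.0e6, 1980: 0.5e6, 1988: 0.5e6}         # staged construction ($)
m_and_o = {year: 0.15e6 for year in range(1976, 2001)}     # annual M&O ($)
total = present_worth(capital) + present_worth(m_and_o)
print(f"discounted total cost (present worth): ${total:,.0f}")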

     After making preliminary investigations into  com-
parative costs of minor local alternatives,  it was
determined to be most efficient to analyze and  select
from the major alternatives first, then proceed to the
intermediate and finally to the outermost and most
minor and localized alternatives.  The lesser-cost
alternatives were thus considered as variations of the
preferred major (central) alternative.  This approach
was acceptable, since the effects of the minor alterna-
tives were small enough not to affect the selection
among the major alternatives; this results principally
from the fact that a small change in treatment capacity
has less effect on the unit cost of treatment in larger
plants than it does in smaller plants.  As a result of
this procedure,
large numbers of cost evaluations of unattractive
alternative combinations were avoided.

     Costs for the chlorination of drainage waters in
controllable drainage districts were computed manually.
The capital cost of a contact tank and chemical feed
equipment was based on a formula by Smith8, assuming
dosage at 8 mg/L with a 15-minute contact time, and
updating this to a 1976 dollar cost with an ENR
Construction Cost Index curve for Seattle9.  The M&O
cost for chlorine, etc., was derived from a cost curve
of operating costs of sewage tanks, after Engineering
Science Inc., with allowances for the recent large cost
increases experienced for chlorine.
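
     A hedged sketch of this manual computation follows; the 8 mg/L
dose comes from the text, while the 1969 base cost, the ENR index
values, the chlorine price, and the flow are hypothetical placeholders.

# The 8 mg/L dose follows the text; the base cost, ENR index values, chlorine
# price, and flow are hypothetical.  8.34 converts (mg/L x mgd) to lb/day.

def escalate_with_enr(base_cost, enr_at_base, enr_1976):
    """Update a construction cost using the ENR Construction Cost Index ratio."""
    return base_cost * enr_1976 / enr_at_base

def chlorine_lb_per_day(flow_mgd, dose_mg_per_l=8.0):
    return 8.34 * dose_mg_per_l * flow_mgd

capital_1976 = escalate_with_enr(base_cost=80_000.0,      # 1969 estimate (assumed)
                                 enr_at_base=1300.0,      # assumed index values
                                 enr_1976=2400.0)
chlorine_cost_per_yr = chlorine_lb_per_day(flow_mgd=3.0) * 365 * 0.15   # $/lb assumed
print(f"capital (1976 dollars): ${capital_1976:,.0f}")
print(f"chlorine M&O:           ${chlorine_cost_per_yr:,.0f}/yr")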

             Cost-Effectiveness Comparisons

     Alternative  plans which met receiving water stan-
dards, as determined by the assimilation analysis, were
compared for cost and  effectiveness.  Besides comparing
the total discounted costs, their components  (e.g., M&O
vs. capital) were compared, since the M&O costs con-
tained more uncertainty than the capital costs.  Under
effectiveness,  such  factors as reliability were con-
sidered.  Thus, among  schemes of similar costs, those
with more force mains  were less attractive than those
with fewer (more reliable gravity sewers).  If the
differences in cost and effectiveness between two
schemes were not sufficiently great, no preference
between them was expressed on those grounds alone, and
more detailed engineering would be necessary to deter-
mine the less expensive alternative.  Differences
between  environmental  impacts of alternatives were also
considered.

               Time-Phased Facility Schedule

     A  time-phased  facility construction schedule,
required to satisfy  the demands of the basin management
plan selected  for the  two basins, had to be determined.

     As  part  of the  assimilation analysis the receiving
water quality  models SNOSCI and SRMSCI, for the upper
and lower portions  of  the basins respectively, were
applied  to  investigate a number of alternative cases
(see Figure 2).  Computer runs for the projected year
2000 point loads treated at the secondary level deter-
mined that generally no problems occurred in the
receiving waters other than those due to non-point sources.
Therefore,  a  schedule for the additional facilities
required by  the plan was simply, and practically auto-
matically,  governed  by  (1) the extra facilities needed
by the  year  2000, and   (2) the dates when demands would
exceed  the  capacities  of existing facilities, or when
the useful  lives  of  existing facilities would expire.

     The scheduling  of facilities expansions was
computed by the sewerage system planning and  costing
model named SUSCI2.   Given the alternative sewer net-
works and locations  and types of treatment plants,
together with  the forecast sewage flows, SUSCI2
designs  the facilities needed and computes the capital
and M&O  expenditures required, indicates the  timing of
costs over the  planning period (1976-2000, in four-
year incremental  planning periods), and also determines
a  discounted  total cost for each scheme.  Besides
determining the needs  for new facilities, SUSCI2 also
computes extensions  and updates needed to augment
existing facilities; based on present capabilities, it
estimates from  input demand forecasts the dates when
capacities will become inadequate.  Since the designs
are optimized with respect to cost within SUSCI2, the
results are automatically the most cost-effective.
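
     A minimal sketch (not SUSCI2 itself) of estimating when forecast
demand first exceeds existing capacity, using the four-year planning
increments mentioned above; the flows, growth rate, and capacity are
hypothetical.

# Not SUSCI2; a minimal capacity-exceedance check over four-year planning
# increments.  Forecast flows, growth rate, and capacity are hypothetical.

def first_period_exceeding_capacity(forecast_mgd_by_year, capacity_mgd):
    """First planning year in which forecast flow exceeds existing capacity,
    or None if capacity is adequate through the planning horizon."""
    for year in sorted(forecast_mgd_by_year):
        if forecast_mgd_by_year[year] > capacity_mgd:
            return year
    return None

forecast = {year: 2.0 * 1.03 ** (year - 1976)      # assumed demand growth, mgd
            for year in range(1976, 2001, 4)}       # four-year increments
print("expansion needed by:", first_period_exceeding_capacity(forecast, capacity_mgd=3.0))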

     The resulting time-phased development schedule
was presented at  local public meetings and at the
technical advisory meetings.  These meetings were set
up specifically to obtain practical input on plan
scheduling.
     The allocation of  cost  data  over the implementation
schedule was done by preparing  the schedule in table
form, and indicating therein the  distribution of costs
over the various incremental planning periods.

                      References

1.  Systems Control, Inc., and Snohomish County Planning
    Dept.  Water Quality Management  Plan for the Snoho-
    mish and Stillaguamish River  Basins;  Volumes I-VI.
    Everett, Washington, November 1974.

2.  Washington State Dept. of Ecology.   Guidelines  for
    the Development of Water Pollution Control and  Ab-
    atement Plans for Sewage Drainage Basins,  Second
    Edition.  Olympia, Washington, September 1,  1970.

3.  Environmental Protection Agency.   EPA Cost-Effect-
    iveness Proposed Analysis Guidelines.   Federal  Regi-
    ster, Volume 38, No. 127.  Washington,  D.C.,  July  3,
    1973.

4.  92nd Congress. Water Pollution Control  Act Amend-
    ments of 1972:  PL92-500.  October 1972.

5.  Environmental Protection Agency.   Proposed Guide-
    lines and Standards:  Pulp, Paper,  and  Paperboard
    Manufacturing Point Source Category.  Federal Regi-
    ster, Volume 39, No. 10, Part  II,  p.  1912,  Subpart
    A.  Washington, D.C., January 15, 1974.

6.  Environmental Protection Agency.   Proposed Effluent
    Guidelines and Performance and Pretreatment  Stand-
    ards for New Sources:  Timber  Products.  Federal
    Register, Volume 39, No. 2, Part  III, pp.  944-5,
    Subparts A,B.  Washington, D.C.,  January 3,  1974.

7.  Metcalf & Eddy, Inc., University  of Florida,  and
    Water Resources Engineers, Inc.   Storm  Water  Manage-
    ment Model, Volumes 1-4  (EPA Report No.  11024DOC).
    Superintendent of Documents, Washington, D.C.,  July-
    October 1971.

8.  Smith, Robert.  Preliminary Design of Wastewater
    Treatment Systems.   J.S.E.D.,  Proc. A.S.C.E., pp.
    117-145.  February 1969.

9.  George S. Nolte and Associates and Snohomish  County
    Planning Dept.  Snohomish County,  Washington, Rural
    Water Sewer Facilities Study.  Everett,  Washington,
    December 1973.
          Table 1.  Types of Sources Included in the
                    Four Cases of Figure 2

                            Types of Sources Present
                                     Treatable       Elusive
                                     Non-Point       (Non-Treatable)
          Segment           Point    (Drainage       Non-Point
   Case   Location          (PTS)    Districts)
    1     Above DD's          Y          N                N
    2     Below DD's          Y          Y                N
    3     Below DD's          Y          Y                Y
    4     Above DD's          Y          N                Y

        Partially treatable by non-structural means.

       Definitions (also for Figure 2):
       AWT - Advanced Wastewater Treatment
       DD  - Drainage District.  These are agricultural
             areas where rain runoff is routed through
             systems of drainage ditches to adjacent rivers.
       HPT - Highest Practicable Treatment
       NP  - Non-Point Sources
       PTS - Point Sources
       2°  - Secondary (Treatment)
       Y   - Yes
       N   - No

-------
      Table 2.     Concentrations Resulting from
                   Secondary Treatment of Municipal
                   Wastewater, as Employed in
                   Receiving Water Quality Models

      Constituent          Concentration
      BOD                    30.0 mg/L
      DO                      5.0 mg/L
      NH3-N                   9.8 mg/L
      NO2-N                   0.0 mg/L
      O-PO4-P                10.0 mg/L
      Cl2                     1.0 mg/L
      Cu                      1.0 mg/L
      Pb                      0.05 mg/L
      Fecal Coli              200 MPN/100 mL
      Total Coli             2000 MPN/100 mL
      Temperature              25°C
                         Figure 1.  Water Quality Management Planning.

-------
                            FIGURE 2

        SCHEME FOR DEVELOPMENT OF ALTERNATIVES

        (Flow chart; legible elements include "describe suitable
        2° treatment" and "run all sources for year 2000, assuming
        2° treatment.")

-------
                                A REVIEW OF EPA'S GREAT LAKES MODELING PROGRAM
                  W.L. Richardson
        U.S. Environmental Protection Agency
            Large Lakes Research Station
                Grosse Ile, Michigan
                     N.A. Thomas
        U.S. Environmental Protection Agency
            Large Lakes Research Station
                Grosse Ile, Michigan
    The Large Lakes Research Station at Grosse Ile,
Michigan,  is  responsible  for implementing the EPA,
Office of Research and Development's research program
for the Great Lakes.  The objective is to be able  to
describe   the  transport  and  fate  of  pollutants.
Mathematical models provide the researcher  with  the
necessary tools for accomplishing this task and, once
calibrated  and  verified,  they can be used by water
quality  managers  confronted  with   making   policy
decisions.   Several levels of modeling research have
been initiated which  address  water  quality  issues
ranging from lake-wide to nearshore effects, and from
eutrophication  to  hazardous  materials.  Concurrent
surveillance and experimentation programs  are  being
conducted for model calibration and verification.  An
overview  of  the EPA Great Lakes modeling program is
presented  including  results  from   some   specific
models.

                    Introduction

    The  Environmental  Protection  Agency, Office of
Research and Development, is  conducting  a  research
program  to address many of the complex water quality
issues  on  the  Great  Lakes.   The  Federal   Water
Pollution Control Act and the 1972 Amendments specify
that  the  agency  ...  "shall  conduct  research and
technical development .   .  .  with  respect  to  the
quality  of  waters  of  the Great Lakes, including an
analysis of the present  and  projected  future  water
quality  of  the Great Lakes under varying conditions
of waste treatment and disposal."  The U.S.-Canada
Agreement  on  Great  Lakes  Water  Quality   further
provides impetus for this research effort.
    In   response   to    these    directives,    the
Environmental Research Laboratory—Duluth, Large
Lakes Research Station (LLRS) is implementing a
modeling    research    program    to   improve   the
understanding of complex  limnological  processes  in
the  Great  Lakes.   The program has been implemented
primarily through grants to academic institutions and
by a small in-house effort.  This research  dovetails
with   a   concurrent   water   quality   survey  and
experimentation program  which  provides  information
necessary  for  model  calibration  and verification.
The models are providing decision makers in  EPA  and
other  water  management  agencies  with quantitative
tools for evaluating alternative  courses  of  action
concerning water quality.  Because the limnological
processes are so complex and interrelated, because the
Great Lakes cover such a large geographical area, and
because of the long detention times, simple empirical
and intuitive approaches are not adequate.  This is
critical for the Great Lakes, where billion-dollar
decisions can affect the entire system.  Though the
cost of modeling one Great Lake may be on the order of
a million dollars, the billions spent for remedial
actions justify the effort.  As a model for each of
the lakes is developed, less data and experimentation
are required, which reduces the cost of each subsequent
lake modeling program.  In
addition, the model structure, kinetics, and software
can  be  used  for  smaller  lakes  which  would  not
necessarily  be  modeled  without  having   the Great
Lakes modeling experience.
    Models are not expected to answer every question,
however, and the researchers would be  the  first   to
agree  that  there  are  major deficiencies which are
difficult to overcome.  These include  the  imprecise
scientific   knowledge   of  specific  processes  and
interactions and further  computational  restrictions
imposed  by available computer technology and cost  of
computer operation.  Models are intended  to  enhance
the  managers' experience and judgement and
improve their insight  into  cause  and  effect.    In
addition, the modeling research provides secondary
benefits by (1) systematizing and quantifying complex
interrelationships between the physical, chemical and
biological elements in limnology and (2) identifying
the weakest areas in our knowledge and, in fact,
defining research needs.
    This paper presents an overview  of  the  general
modeling  process  along  with  a summary of specific
model results.

        Great Lakes Model-Management Process

    The general modeling-management process  for  the
Great Lakes is shown in Figure 1.  Modeling
is the focus for understanding limnological processes
and  for  translating  them  into terms necessary for
management's use.  The process as shown includes:

    1.  Monitoring material inputs.
    2.  Surveillance of material pools.
    3.  Experimentation to define biochemical
        processes.
    4.  Establishment and management of
        water quality standards.
    5.  Establishment of abatement programs.
    6.  Action to reduce material loads.
    Development of a model entails both calibration
and verification.  Calibrating a model involves
comparing computed results to measured data and
adjusting model parameters until the computed
variables match the measured values.  Verification is
obtained when computed results match data from an
independent data set without parameter adjustment.
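
    The calibration step can be pictured with the toy example below:
a single decay parameter is adjusted until the computed curve best
matches a calibration data set; verification would then require
matching an independent data set with no further adjustment.  The
"model" and the data are purely illustrative.

import math

# Toy calibration: adjust one parameter (a decay rate) until computed values
# best match measured data.  Verification would repeat the comparison against
# an independent data set without further adjustment.  All numbers illustrative.

def run_model(decay_rate, times):
    return [100.0 * math.exp(-decay_rate * t) for t in times]

def rms_error(computed, measured):
    return (sum((c - m) ** 2 for c, m in zip(computed, measured)) / len(measured)) ** 0.5

times = [0, 1, 2, 3, 4]
measured = [100.0, 74.0, 55.0, 41.0, 30.0]          # calibration data set

best_rate, best_err = None, float("inf")
for rate in (r / 100.0 for r in range(10, 51)):      # trial parameter values
    err = rms_error(run_model(rate, times), measured)
    if err < best_err:
        best_rate, best_err = rate, err
print(f"calibrated decay rate: {best_rate:.2f} per day (RMS error {best_err:.1f})")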
    Once calibrated and verified, the model  is  used
to  simulate  the effect of possible modifications  to
the system (e.g., reductions in phosphorus loads)   on
the  concentration  of  materials  in the water body.
The simulated concentrations are  compared  to  those
desired  (water  quality standards).  The results can
be used as a basis to establish  long-range  planning
goals,  to  determine  effluent  allocations,  or   to
reestablish water quality standards.

                  Model Development

    Water quality models are  structured  to  predict
the effect of material discharges (loads) on material
concentrations  in  the  receiving  water  body.  Two
modeling  approaches are in general use.  They are  1)
empirical and 2)  deterministic.   The  empirical   or
statistical approach involves correlations of cause
factors to effect factors.  For example, Vollenweider3
has compiled total phosphorus loading data on several
lake systems and correlated these to the measured
peak chlorophyll a levels.  Since a range of
conditions has been incorporated, predictions of the
effect of phosphorus reductions are made by
interpolation.
    The disadvantage of this approach is  that  there
are  several  assumptions  which  may  limit  its use
particularly for the Great Lakes: 1) it assumes the
phosphorus loadings are known and correct, 2) it
assumes the chlorophyll a is in equilibrium with the
loads (i.e., there is no time lag in response to the
loads), and 3) it assumes average conditions are
sufficiently precise.
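
    The empirical approach can be sketched as below.  This is not
Vollenweider's actual relationship; it only illustrates fitting a
regression of observed peak chlorophyll a against total phosphorus
loading across several lakes and then predicting the effect of a load
reduction by interpolation.  The loading/chlorophyll pairs are
hypothetical.

import math

# Hypothetical loading (g P/m^2/yr) and peak chlorophyll a (ug/L) pairs.
loads = [0.2, 0.5, 1.0, 2.0, 4.0]
chl_a = [1.5, 3.0, 5.5, 10.0, 19.0]

# Least-squares fit of log(chl) = a + b * log(load).
x = [math.log(v) for v in loads]
y = [math.log(v) for v in chl_a]
n = len(x)
b = ((n * sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y))
     / (n * sum(xi * xi for xi in x) - sum(x) ** 2))
a = (sum(y) - b * sum(x)) / n

def predict_chl(load):
    return math.exp(a + b * math.log(load))

print("chl a at current load (1.5):     %.1f ug/L" % predict_chl(1.5))
print("chl a after 50%% load reduction: %.1f ug/L" % predict_chl(0.75))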
    The  deterministic  approach  is  based  on basic
principles and  incorporates  equations  representing
the  actual  limnological  processes.   These  models
account for  and  trace  each  variable  through  the
system and  conserve  mass,  energy, and momentum  in
space  and time.
    For the deterministic models  the  calibration   or
data   "fitting"  process is based on knowledge of the
system parameters  in  contrast  to  the   empirical
approach  which  forces  a  least squares fit to the
data.  If the deterministic model output from initial
simulations does not match the data,  a  limnological
rationale  for parameter adjustment is necessary.   If
a   rationale   is   not   available   then   further
experimentation is required.  Any interim results are
qualified to reflect the range of possible solutions.
The    disadvantage  of  the  deterministic  approach,
however, is that it requires much more research  time
and computer resources.
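
    As a minimal illustration of the deterministic, mass-conserving
idea described above (and not any of the models cited here), the
sketch below writes a mass balance for one well-mixed lake segment
and steps it forward in time; the volume, outflow, load, settling
rate, and initial concentration are hypothetical.

# A mass balance for one well-mixed lake segment stepped forward in time.
# Volume, outflow, load, settling rate, and initial concentration are hypothetical.

V  = 5.0e9      # segment volume, m^3
Q  = 1.0e7      # outflow, m^3/day
W  = 2.0e5      # total phosphorus load, g/day
k  = 0.002      # net settling loss rate, 1/day
dt = 1.0        # time step, days

conc = 0.005    # initial concentration, g/m^3 (5 ug/L)
for day in range(20 * 365):                       # simulate 20 years
    dC = (W - Q * conc - k * V * conc) / V * dt   # conservation of mass
    conc += dC
print(f"concentration approaching steady state: {conc * 1000:.1f} ug/L")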
    The   modeling   process  includes,  but  is  not
necessarily limited to, the  following  steps:

A. Assessment Phase
    1. Define issues.
    2. Define objectives.
    3. Conceptualize model.
    4. Assess general data availability.
    5. Determine capabilities and requirements
       of various model approaches.
       a. Describe information the model can provide.
       b. Assess model accuracy.
       c. Determine time required to develop and
          implement.
       d. Determine computer resources required.
B.  Decision Phase
    1. Determine resources available.
       a. Computer time.
       b. Research resources.
    2. Determine priorities.
    3. Determine accuracy required.
    4. Determine deadline.
    5. Choose a course of action  based on the above
       assessments.
C.  Implementation Phase
    1. Develop and implement model.
    2. Compile existing data.
    3. Design and implement surveillance
       and experimental programs.
    4. Calibrate model.
    5. Determine success of approach and modify
       accordingly.
    6. Verify model.
    7. Evaluate  the success of the model.
    8. Present results to the scientific community.
    9. Document models.
D.  Management Phase
    1. Conduct management simulations.
    2. Make model available for management use.
    3. Modify and refine model as required.

                 Great Lakes Issues

    The  issues  concerning  the  Great  Lakes can be
categorized by water quality parameters and the  time
and  space scales involved.  The most urgent issue is
eutrophication.   The  problem  is  the   effect   of
phosphorus  and  nitrogen  on levels of algal biomass
and the degree of control required to restore  and/or
maintain adequate water quality.  This issue requires
knowledge  of  lake-wide phenomena and even phenomena
involving the interaction between  lakes.   The  time
scale is on the order of years to decades.
    A second level issue also involves eutrophication
but in the nearshore regions and embayments.  Because
of  the  shorter  response  time  in  these localized
areas, time and space scales are smaller (seasons and
kilometers).
    A third level issue involves immediate effects on
even smaller scales (hours and meters)  of  materials
or  heat  in  the  discharge plume.  For example, the
material distribution in an  effluent  plume  may  be
critical  to  nearby  water  intakes  or recreational
sites.  Specific questions have been asked, such as:
Where should the discharge structure be located?  What
size mixing zone is allowed so as not to interfere with
water uses?  What local and short-lived biological
responses are expected?
                 Great Lakes Models
General
    The  first  phase  in  modeling each of the Great
Lakes   (except   Superior)   involves   development,
calibration  and  verification  of the phytoplankton-
zooplankton-nutrient model first structured by
O'Connor4 for the San Joaquin Delta.
The   general  system  scheme  of  this  approach  is
depicted in Figure 2.
    Phytoplankton biomass is represented by
chlorophyll a, which is used primarily because of the
ease of measurement and availability of data.  Phy-
toplankton carbon is obtained by specifying a carbon-
chlorophyll stoichiometry and is the element
zooplankton consume, along with the nutrients
contained in the phytoplankton.  The nutrients
phosphorus and nitrogen are also accounted for and
traced through the phytoplankton and zooplankton by
specifying stoichiometric relationships with carbon.
Phytoplankton growth rate is a function of
temperature, light, and nutrients and follows
Michaelis-Menten product kinetics.  The model
includes nutrient recycling, phytoplankton settling,
nutrient sedimentation, and material loadings.
Because of the large time and space scales involved,
the lake is represented by a few segments, each
assumed to be homogeneous.  The model output
represents the average concentration in each seg-
ment.
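
    A hedged sketch of the Michaelis-Menten "product" formulation
mentioned above: the growth rate is the maximum rate modified by
multiplicative temperature, light, and nutrient factors.  All of the
coefficients below are illustrative, not those used in the Lake-1
model.

# Illustrative Michaelis-Menten product kinetics; coefficients are assumed.

def growth_rate(temp_c, light_factor, po4, din,
                mu_max=2.0,        # maximum growth rate, 1/day at 20 C (assumed)
                theta=1.066,       # temperature coefficient (assumed)
                k_p=0.001,         # half-saturation, phosphorus, mg P/L (assumed)
                k_n=0.025):        # half-saturation, nitrogen, mg N/L (assumed)
    temperature_term = theta ** (temp_c - 20.0)
    phosphorus_term = po4 / (k_p + po4)      # Michaelis-Menten term
    nitrogen_term = din / (k_n + din)        # Michaelis-Menten term
    return mu_max * temperature_term * light_factor * phosphorus_term * nitrogen_term

print("growth rate: %.2f per day" % growth_rate(temp_c=15.0, light_factor=0.4,
                                                po4=0.005, din=0.10))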

Lake Ontario

    Lake Ontario was  the  first  Great  Lake  to  be
modeled  using this approach.  This was initiated for
the International  Field  Year  on  the  Great  Lakes
(IFYGL)  by  a grant to Manhattan College.  This work
culminated in a two-volume EPA Ecological Research
Series Report.5,6  In summary, the Lake Ontario effort
involved the calibration of the
phytoplankton/nutrient structure for the three-layer
segmentation scheme (Lake-1) shown in Figure 3.  The
results have been reported to the International Joint
Commission (IJC) for management considerations.
    The results, depicted in Figure 4, indicate that
the chlorophyll a levels are not in equilibrium with
the nutrient loads.  This is evident because the
simulated peak chlorophyll a continues to increase
for about 6 to 8 years before reaching an equilibrium
level.  Further, even with the phosphorus loads
reduced to levels set by the Water Quality Agreement,
chlorophyll a levels are predicted to increase to a
new equilibrium.

    With Lake-1 having been substantially calibrated,
work continues at Manhattan College to refine the
Lake Ontario model by 1) addition of spatial detail
(Figure 5) and 2) addition of biological detail
(multi-species).  The limiting factors in this work
are not the expansion of the model structure itself,
but data availability and the overwhelming task
of data reduction for model verification.  In
addition, there is the matter of  interpreting  model
output.   The analyst soon becomes overwhelmed by the
reams of computer output  even  if  these  have  been
reduced   to   graphical  form.    To  overcome  these
limitations, the model researchers have been  relying
to  a  great extent on the EPA STORET system for data
archiving, manipulation,  and  statistical  analysis.
Also, more  work is  underway to output both data and
model computations in graphical form.  The Manhattan
College staff has even produced movies of model output
to reduce the effort in interpreting their results.
Research is now proceeding with  the  development  of
statistical  techniques  to  determine how well these
complex models represent the data.

Lake Huron

    The Lake Huron  modeling  effort  is  also  being
conducted by Manhattan College under the direction of
the  EPA  Large Lakes Research program.  The modeling
research is being conducted in conjunction  with  the
IJC  Upper Lakes Reference Study and the results will
have direct input to this management level report.
    Essentially, the same model  structure  is  being
applied  with  a  5-segment scheme shown in Figure 6.
The unique aspect of this system is the high material
gradients evident in and extending from Saginaw  Bay.
This  is  an  excellent  case  to  test  the  general
applicability of this model structure.
Lake Erie

    The  Lake  Erie  eutrophication  model   includes
similar  nutrient-biological  systems applied to a 5-
segment scheme (Figure 7) except that the process  of
dissolved  oxygen  depletion  and  resulting nutrient
regeneration must be incorporated.  Manhattan College
is progressing with this task while concurrent  field
effort  is  being  conducted by Ohio State University
and State University College of New York at Buffalo.
    Lake Erie is also the site of an issue  involving
the  effect  of power plants on fish populations.  To
address  this  issue  a  research  program  has  been
initiated  involving  both  data collection and model
development for fish in the western basin.  The model
will incorporate data collected on the number of fish
larvae passing through a power plant and  the  number
observed  in  the  western  basin.  The effect on the
adult populations in time will be computed.

Saginaw Bay

    Modeling research on Saginaw Bay, Lake Huron,  is
being conducted by the LLRS at Grosse Ile.
Eutrophication is the primary issue in Saginaw Bay,
evidenced by the highest chlorophyll a levels
recorded in the Great Lakes system.  A  sub-issue  is
the  effect  of blue-green algae on taste and odor in
municipal water supplies.
    Two parallel modeling   efforts   are  progressing.
First, the Manhattan model  structure is being applied
to  a five segment  scheme7. This  application research
has benefitted  the  program  by   not  only  providing
insight into cause  and effect but also by familiariz-
ing  EPA  personnel with the  details of the Manhattan
models and  computer  programs.   This  has  enhanced
relationships   with the  grantee and has resulted in
better program  management.  In this  way,  the LLRS has
or will have the capability of operating any  of  the
models developed by Manhattan College.
    The  second in-house   modeling  effort on Saginaw
Bay involves development of the   next  generation  of
verified ecosystem models.  Bierman8 has structured a
four-class phytoplankton model which includes more
detailed interaction kinetics in place of Michaelis-
Menten kinetics.  Model results are being used in the
IJC Upper Lakes Reference Study report.
    The  Saginaw  Bay modeling research is also being
expanded  to  include  the  fate  and  transport   of
hazardous materials. A preliminary model  structure is
being  formulated   and  surveillance and  experimental
research is being implemented.

Transport Models

    Another modeling effort being conducted with sup-
port  of  the   LLRS  is  devoted  to  describing  the
detailed transport  processes  in the  Great Lakes.   The
primary objective is to develop a general approach to
describing  pollutant  transport  on    relatively fine
time and space scale in the near-shore region.  The
work is being done at Case Western Reserve
University9.
    The models use first-principle conservation
equations for mass, momentum, and energy.  Winds and
solar radiation drive these equations, which yield
computed current velocities, current directions, and
thermal structure in three dimensions.  Verification
of this type of model is difficult and requires synoptic
data at many points.  The application is being
demonstrated along the Lake Erie shore near Cleveland
and will describe the course of the Cuyahoga River
discharge into the lake.
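
    The sketch below is not the Case Western models, which solve the
full three-dimensional equations; it is a one-dimensional advection-
diffusion example showing how a conservative tracer (e.g., a river
discharge) spreads in a prescribed alongshore current.  The grid,
current, diffusivity, and source strength are hypothetical.

# 1-D advection-diffusion of a conservative tracer in a prescribed current.
# All numbers are hypothetical; upwind advection, explicit time stepping.

N  = 100           # grid cells
dx = 500.0         # m
dt = 50.0          # s
u  = 0.10          # alongshore current, m/s
K  = 5.0           # horizontal diffusivity, m^2/s

conc = [0.0] * N
for step in range(2000):
    conc[10] += 1.0e-3                      # continuous source at cell 10
    new = conc[:]
    for i in range(1, N - 1):
        advection = -u * (conc[i] - conc[i - 1]) / dx            # upwind scheme
        diffusion = K * (conc[i + 1] - 2 * conc[i] + conc[i - 1]) / dx ** 2
        new[i] = conc[i] + dt * (advection + diffusion)
    conc = new

front = max(i for i in range(N) if conc[i] > 0.1 * conc[10])
print(f"plume extends about {(front - 10) * dx / 1000:.1f} km downstream "
      f"after {2000 * dt / 3600:.0f} hours")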
    The transport  models   are  also being  used   to
evaluate   the   potential  effect   of  the  proposed
Cleveland Jetport on water  quality   and   temperature
structure.    By   varying    the  model   geometry   to
represent the proposed configuration of the  jetport,
it  is  possible  to predict  the  circulation  patterns
and   the   subsequent   distribution  of    material
concentrations  (Figure 8).
    Using   the   same  general   approach,   Paul10 has
developed  thermal  plume  models  which   have   been
substantially verified at the Point  Beach  Power  Plant
on Lake Michigan.
                     Conclusions

    The  experience  gained   thus far in  implementing
the Great Lakes modeling effort has  led to  a  number
of   conclusions  which  may  be  helpful   to  others
initiating similar modeling programs.
    First, the structuring  and calibrating  of a model
derives benefits long before a final  verified  model
is   obtained.     The  modeling  process  requires  a
systematic approach to data collection and  analysis.
Interim results reveal gaps in knowledge for a
particular system or process and are useful in
defining and directing new study and research.
    Second, most of the effort in the modeling
process is involved with data reduction.  Relatively
little effort is required to generate reams of
theoretical computer output.  The primary concern is
to structure a model with sufficient biochemical
detail to be realistic, but simple enough so as not
to be data limited.
    Third,  considerable  effort   is   required   to
calibrate  and  verify  large  scale models even with
existing model structure and computer programs.   For
this  reason, this process appears to be in the realm
of   applied   research   rather   than   engineering
application.   A modeling program of necessity should
not be separated from a total research program.  This
program  involves   the   entire   range   of   model
structuring,     experimentation,     and    research
surveillance.
    Finally, because of the resource commitments made
to the total research effort,  once a model  has been
implemented,  i.e.,  calibrated   and   verified,   a
continuing  effort  should  be  made  to document the
model, and to keep it operational.  In  this  manner,
models  could be used to assist in answering the many
short-term, day-to-day questions facing the  EPA  and
other  environmental regulatory agencies.  Only after
calibration and verification for  a  specific  system
can  a  model  be turned over to an engineering staff
for general management application.
    One continuing  question  remains,  however,  and
that  concerns the confidence placed on model results
by  managers.    Progress   in   gaining   managerial
confidence  is being made as more rigorous techniques
are developed to  evaluate  how  well  model  results
match  the  data.   In  the final analysis confidence
will come as model  predictions  accurately  forecast
actual  water  quality.   For  Saginaw  Bay, with its
short response time, verification data will  soon  be
available   to  test predictions of the eutrophication
models.  A  good  fit there could go a long way  toward
convincing managers of the utility of the other Great
Lakes models.

                     References

1.  Public Law 92-500, 92nd Congress, S2770,
An Act To Amend  the Federal Water Pollution
Control Act.  October 18, 1972.

2.  Great Lakes  Water Quality Agreement
Between the United States of America and
Canada, Signed at Ottawa, April  15,  1972.

3.  Vollenweider, R.A. and P.J. Dillon.  The
Application of the Phosphorus Loading Concept to
Eutrophic Research.  National Research Council
Canada, Associate Committee on Scientific Criteria
for Environmental Quality.  June 1974.

4.  O'Connor, D.J., R.V. Thomann, and D.M.
DiToro, 1973.  Dynamic Water Quality Fore-
casting and Management.  Environmental
Protection Agency.  Ecological Research
Series EPA-660/3-73-009.

5.  Thomann, R.V., D.M. DiToro, R.P. Winfield
and D.J. O'Connor.  Mathematical Modeling
of Phytoplankton in Lake Ontario, 1. Model
Development and Verification.  U.S. Envir-
onmental Protection Agency,Corvallis,
Oregon.  660/3-75-005.
March 1975.  177p.

6.  Thomann, R.V., R.P. Winfield, D.M. DiToro
and D.J. O'Connor.  Mathematical Modeling
of Phytoplankton in Lake Ontario, 2. Sim-
ulations Using Lake-1 Model.  U.S. Environmen-
tal Protection Agency, Duluth, Minnesota. In
Press.

7.  Richardson, W.L. and V.J. Bierman, Jr.
A Mathematical Model of Pollutant Cause and
Effect in Saginaw Bay, Lake Huron.  U.S.
Environmental Protection Agency, Environmental
Research Laboratory—Duluth, Duluth, Minn.
In press.

8.  Bierman, V.J., Jr. and W.L. Richardson.
Mathematical Model of Phytoplankton Growth
and Class Succession in Saginaw Bay, Lake
Huron.  U.S. Environmental Protection Agency
Environmental Research Laboratory—Duluth,
Duluth, Minn.  In press.

9.  Lick, Wilbert.  Numerical Models of Lake
Currents.  Case Western Reserve University,
Dept. of Earth Sciences, for U.S.E.P.A.,
Environmental Research Laboratory—Duluth.
In press.

10. Paul, J.F. and W. Lick.  A Numerical Model
for Thermal Plumes and River Discharges.
Proceedings, 17th Conference on Great Lakes
Research, 1974, I.A.G.L.R.

-------
Figure 1.  Great Lakes Modeling-Management Process.  (Flow diagram; input
           monitoring covers waste loads, tributary loads, atmospheric loads,
           and meteorological conditions.)
Figure 3.  Lake Ontario (Lake-1) Model Segmentation.  (Nutrient inputs:
           Niagara River, tributaries, municipal and industrial wastes.
           Environmental inputs: solar radiation, water temperature, light
           extinction, system parameters.  Vertical exchange and transport
           between the epilimnion and lower layers.)
Figure 2.  General Eutrophication Model Structure.  (Biological sub-model:
           phytoplankton chlorophyll, herbivorous and carnivorous zooplankton
           carbon, upper trophic levels 1 and 2.  Nitrogen cycle: organic,
           ammonia, and nitrate nitrogen.  Phosphorus cycle: organic and
           available phosphorus.)
Figure 4.  Projected Peak Chlorophyll a Concentrations for Lake Ontario.
           (Curves for the present condition, the WQA condition, and the
           natural condition over a 24-year simulation.)
Figure 5.  Lake Ontario (Lake-3) Model Segmentation.

Figure 6.  Lake Huron Model Segmentation.

Figure 7.  Lake Erie Model Segmentation.

Figure 8.  Simulated Currents and Conservative Material Concentrations for
           Alternative Jetport Configurations.  (Panels show horizontal
           surface velocities and constant-concentration contours near the
           Cuyahoga River.)
-------
                     THE DEVELOPMENT AND IMPLEMENTATION OF USER ORIENTED AIR QUALITY MODELS
                                                 John J. Walton
                                          Lawrence Livermore Laboratory
                                            University of California
                                               Livermore, CA 94550
                        Abstract

      Implementation of the Clean Air Act and its
amendments requires that decisions be made, at various
levels of government, on many complex questions.  In
order to facilitate the work of decision- and policy-
makers, readily useable air quality models must be
developed.  Such models can have quite different require-
ments and constraints than those developed primarily for
research purposes.  These needs will be identified and
past experience will be drawn upon to illustrate how
some of them have or have not been met.

                      Introduction
     The National Science Foundation, through its
Research Applied to National Needs (RANN) program, has
sponsored an interagency program at the University of
California Lawrence Livermore Laboratory (LLL), the
NASA-Ames Research Center (ARC) and the Bay Area Air
Pollution Control District (BAAPCD) to develop and vali-
date a regional air pollution model which can be applied
to the air quality problems of the San Francisco Bay
Area.  This work resulted in the Livermore Regional Air
Quality (LIRAQ) model.1,2  The model calculates in the
two horizontal  dimensions the transport, dispersion and
chemical changes undergone by the most significant
photochemically reactive and  non-reactive  air pollutants
in the San Francisco Bay Area.  Through the approxi-
mately three year duration of this project a wide
variety of problems were encountered  and handled with
varying degrees of success.   This experience has given
us a feeling for the various, often interrelated,
factors which must be considered by both user and
modeler in order that an appropriate and useable air
quality model be developed.  It is our purpose here to
share our thoughts on this subject with the attendees
of the EPA Conference on Modeling and  Simulation.

     Just as the atmosphere itself exhibits many complex
and interrelated features, so does the process  of user/
modeler communication and compromise needed to  cope with
the diverse requirements and  constraints governing  model
development.  We have chosen  to  break  these into three
broad categories:

  I.  The needs of the user group or  agency.

 II.  The inherent physical  complexities and constraints
of the specific problem.

III.  The various resources which the user has at his
disposal.

It should be emphasized that  these are not independent
factors and frequently compromises need be made between
them.  In the balance of this paper we will describe in
more detail the various aspects  of each of these factors.
I.  User Needs

     We see the user as any group or agency which may
come to the modeler seeking to address questions related
to air quality.  The user's needs will provide the
framework required for model development.  Here we can
identify five basic considerations, each one of which
is  related to some degree to the others.

     Model Application.  Unlike the modeler, for whom
the model  is a tool with which to explore the nature of
pollution, its origins and evolution, the user will in
general be charged with a specific responsibility and a
model  directed to this end will be required.  Some
examples of specific applications are:  the study of
pollution episodes, support in legal action, assessments
required in land use planning, study of the efficacy of
of  various pollution abatement strategies and regulation
needed to implement mandated standards.  It should be
pointed out, that while there may be overlap between
several of these areas, an agency charged with more
than one responsibility may, in fact, require more than
one model.

     Pollutant Characteristics.  Depending upon the
time and space scales of concern, the pollutants of
interest may be anything from  inert to highly reactive,
or they may have properties which make them subject to
specific scavenging processes.  For example, nuclear
releases are considered inert but may consist of par-
ticulates which will settle out at some known rate.
Hydrogen sulfide, stored at a geothermal site, can,
depending upon concentration, be anything from lethal
gas to an unpleasant odor.  Further, if the time scale
of interest is of sufficient length (approximately one
day)  the gas will oxidize to produce sulfates which may
precipitate out as an "acid rain".  The pollutants
associated with combustion range from effectively inert
carbon monoxide to hydrocarbons and the oxides of
nitrogen and sulfur which lead to the secondary pollu-
tants ozone/oxidant and particulate sulfates.  The
effects may not be direct, for example, the Department
of Transportation sponsored Climatic Impact Assessment
Program (CIAP),3 which, among other things, explored
the effect of ozone reduction in the strato-
sphere, due to effluent from supersonic aircraft, upon
ultraviolet radiation at the earth's surface.

     Spatial Domain.  The area of interest can be any-
thing from the region  immediately downwind from a power
plant stack to the entire globe.  Local models would,
for example, be employed to treat processes going on  in
the vicinity of industrial facilities.  The LIRAQ model
mentioned above  is concerned with regions of character-
istic dimension of one-hundred to two-hundred kilometers.
The question of  the sulfate budget  in the eastern United
States would be of subcontinental proportions.  Finally,
the CIAP project employed global models of one, two and
three dimensions.  It should be clear that the size of
the domain will  influence the degree of spatial detail
which can be achieved.  Spatial domain relates also to
vertical extent.  Although it will depend strongly upon
the application, we can say roughly that local and
regional problems may be addressed with models focusing
on a well mixed layer below an inversion, if  it exists
(less than a kilometre), subcontinental models may go
through the convective cloud layer (three to four kilo-
metres) and global models may be expected to extend into
the stratosphere  (thirty to thirty-five kilometres).

     Temporal Scale.  The time periods of interest may
be anything from hours to years.   As mentioned above,
the time period treated by the model may influence those
physical characteristics of the pollutants which must be
emphasized in the model calculation.   In turn, the time
scales of interest are roughly related to the spatial
domain under consideration extending from local problems
simulating hours to global scales which may be concerned
with processes evolving over periods of years.

     Confidence Level.  Any model, however complex, is
only an approximation of physical reality.  As will be
discussed in the following sections of this paper, the
modeler  is limited  in the number of processes which he
can describe and even these are sometimes not fully
understood or must be treated with approximations.  The
question then is, to what degree can the model be
expected to reproduce the physical picture which  is
observed?  Further, because of the demands put upon him,
what degree of accuracy does the user  need in order to
make meaningful judgements?  This criterion is fre-
quently related to domain of interest.  For local pro-
blems one might expect to achieve results with accuracy
of ten percent while regional and subcontinental  pro-
blems might be considered reliable when giving results
within tens of percent.  For global calculations, some-
times order of magnitude or even the sign of a change
may be sufficient.

     The user will, in general, have a well defined
application, including the pollutants  of interest,
while temporal and spatial scales may  be less specific.
The modeler will be able to provide guidance in deter-
mining what confidence level can be realistically
expected from model results.  Communication between the
user and the modeler is vital during this foundation
stage of model development, in order that both parties
be aware of and appreciate the various requirements and
constraints and the compromises which  must be made
between them.  The definition of possible constraints
will lie primarily  in the purview of the modeler  and
rests upon the second factor governing the model.

II.  Physical Processes and Constraints

     Under this heading the modeler must consider any
process or condition, either natural or man-made which
might contribute to the state of the physical  system
which he is trying to describe.  He must always make
compromises between what he would wish to treat and
what lies within the capacity of the state-of-the-art
or of his resources to handle.  Discussed here are
those factors which most commonly will play a part in
the problem.

     Sources.  Man and nature both provide the atmo-
sphere with effluents through a variety of sources.
Some of the natural sources which contribute to our
"ambient" atmosphere are; forests (hydrocarbons), soil
micro-organisms (methane, nitrous oxide, hydrogen
sulfide), fires (carbon monoxide and particulate),
volcanoes (sulfur, fluorine, particulate, hydrochloric
acid) and oceans  (chlorides, calcium and sulphates).
The pollutants we consider from man's  activity are
particulates and species such as carbon monoxide,
hydrocarbons and oxides of nitrogen and sulfur result-
ing from combustion processes in transportation and
power generation.  In parts of the United States SO2
from space heating is significant.  By-products such as
hydrogen sulfide, fluorine, trace metals and volatile
hydrocarbons from various industries may present pro-
blems in some areas.   Pesticides form another class of
pollutant which we may wish to follow, while industrial
accidents and leaking storage vessels provide a whole
spectrum of problems  of local concern.

     Chemistry.  The above are considered primary
pollutants and their  reactions with other species pre-
sent and their response to the diurnal solar cycle help
to define the level  of chemical complexity required in
the model.  If these  pollutants alone are of interest
it may be possible to treat their reactions with simple
chemistry or decay terms.  However, as mentioned
earlier, depending upon time scale these may lead,
through a chain of more complex reactions to secondary
pollutants.  Among these secondary pollutants are ozone
or oxidant resulting  from the presence of hydrocarbons
and the oxides of nitrogen in the presence of sunlight.
Similarly, oxides of  sulfur lead to sulfate thence to
sulfuric acid in the presence of water.
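
     A hedged illustration of the "simple chemistry or decay term"
option mentioned above: a primary pollutant treated with first-order
decay rather than a full photochemical mechanism.  The rate constant
and concentrations are hypothetical.

import math

# First-order decay as a stand-in for detailed chemistry; numbers are assumed.

def decayed_concentration(c0, k_per_hour, hours):
    """First-order decay: c(t) = c0 * exp(-k t)."""
    return c0 * math.exp(-k_per_hour * hours)

print("SO2 after 12 h: %.1f ppb" % decayed_concentration(c0=40.0, k_per_hour=0.05, hours=12))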

     Meteorology.  Here we mean all of the natural pro-
cesses in the atmosphere which can affect the pollu-
tants being considered.  Turbulent winds will transport
and disperse the pollutants.   The presence of water
vapor will promote heterogeneous transformation; water
vapor in the form of  clouds will affect  photodissocia-
tion processes, while rainfall provides  a mechanism for
the scavenging of certain pollutants (e.g., sulfates
and nitrates).

     Radiation.  As  mentioned in the above sections,
photochemical  reactions may play a role  in the  problem
and hence the significance of solar radiation trans-
port.  Generally, for models on scales less than global,
radiation is a, possibly time-dependent, input  quantity.
It is however clear  that feedback mechanisms may exist
which can play a significant part in the evolution of
the chemically active species and the radiation  balance.

     Topography.  The character of the terrain  in the
region can affect, directly or indirectly, source input,
chemistry and transport.  For example, a source  elevated
due to the terrain may emit above an inversion,  which
will inhibit its transport to the surface.  Surface
characteristics can  affect heating, modifying the
chemistry and possible convective activity.  Complex
terrain such as that  seen in the San Francisco  Bay
Area has strong channeling effects, giving rise  to
characteristic flow and pollution patterns.  Finally,
the character of the  terrain will affect the level of
turbulence and hence  dispersion rate.

     Boundary Conditions.  Since there will usually be
conditions outside the model  domain which can affect
what occurs within,  a set of boundary conditions must
be supplied to the model to account for  these.   These
consist of pollutant  background values,  possible up-
wind sources and transport across the domain boundaries.
As the model becomes more complex, more information
will be required.  Pollutant background is usually
available from observations in unperturbed regions.
Information on upwind sources may be more difficult to
acquire, frequently being outside the region of influ-
ence of the user.  Recourse must then be had to some,
hopefully realistic, artifice.  Transport at the domain
boundaries is of course the mechanism by which condi-
tions outside the region affect processes within.  This
information is generally available through the same
process that provides wind fields for transport within
the region.
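     As a purely illustrative aside (not drawn from the LIRAQ work), the
sketch below shows, in present-day Python, one simple way such boundary
information might be supplied to a gridded model: an observed background
value on inflow boundaries, an open (zero-gradient) condition on outflow
boundaries, and an elevated value standing in for an upwind source.  All
names and numbers are hypothetical.

    # Hedged sketch: supplying boundary conditions to a simple 2-D grid model.
    def apply_boundaries(c, u, v, background, upwind_plume=None):
        """Set edge cells of a concentration field c[i][j] in place.

        u, v : domain-mean wind components (u > 0 means flow in the +i direction)
        upwind_plume : optional (j_start, j_end, value) on the inflow boundary
        """
        nx, ny = len(c), len(c[0])
        for j in range(ny):                       # west/east boundaries
            if u >= 0.0:
                c[0][j] = background              # inflow: background value
                c[nx - 1][j] = c[nx - 2][j]       # outflow: zero gradient
            else:
                c[nx - 1][j] = background
                c[0][j] = c[1][j]
        for i in range(nx):                       # south/north boundaries
            if v >= 0.0:
                c[i][0] = background
                c[i][ny - 1] = c[i][ny - 2]
            else:
                c[i][ny - 1] = background
                c[i][0] = c[i][1]
        if upwind_plume and u >= 0.0:             # crude stand-in for an upwind source
            j0, j1, value = upwind_plume
            for j in range(j0, j1):
                c[0][j] = value
        return c

    # Hypothetical use: 0.04 ppm ozone background, an upwind source along
    # part of the western boundary.
    grid = [[0.0] * 10 for _ in range(10)]
    apply_boundaries(grid, u=2.0, v=0.5, background=0.04, upwind_plume=(3, 6, 0.10))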

-------
     Through this second stage the modeler has had to
determine those physical characteristics and processes
which are required to provide a model which will meet
the user's needs.  This is a phase during which actual
numerical experiments will be carried out, in order to
test new ideas and also to see if already existing pro-
grams or subprograms can be adapted to address the
user's problem.  There are, in fact, a number of "off-
the-shelf" models of varying complexity which are avail-
able to the potential user and it is to be hoped that as
the process of developing user oriented air quality
models continues, readily adaptable programs will become
increasingly available.

     One might expect that once the model requirements
have been defined and the pertinent physical  processes
identified it would simply be a matter of writing a
computer program, handing it over to the user and going
on to other things.  However, anyone who has had to
function within a budget will be well aware that further
compromises must be made, because there never seems to
be sufficient money to do all those things desired.
This brings us to the final  factor.

III.  User Resources

     Here we are concerned with any factor which may
tend to facilitate or impede the process of addressing
the user's problem.  Here the user and modeler must
work in concert to apply available funds in the most
efficient way possible.

     Computer Availability.   User and modeler demands
must be compatible with the available computational
facilities.  A computer offers, at a price, some degree
of computational speed, a certain amount of storage for
coding and data, and some form of input/output facili-
ties.  The modeler, in developing a large, complex pro-
gram will be constrained by the size of the computer
available although some tradeoffs between speed and
storage are possible.  The user,  on the other hand, is
more concerned about the speed with which the computer
can run his problem and provide him with useable output.
This question of "turn around" time depends to a certain
extent upon the user's mission.  A land use planner can
afford to wait several days for his answers, while an
agency which must respond to sudden emergencies will
require immediate answers, albeit in less detail than
those of the former.  With a given computer system, the
actual  time (cost) involved in computation rests upon
the model complexity.  Compromise is almost invariably
required between the modeler who wishes to include as
much physics and chemistry as possible and the user who
finally pays the bill.

     Data Availability.  User and modeler both are
plagued by the lack of data and as more and more complex
models are developed the need for data is correspond-
ingly increased.  We wish to have data for all  physic-
ally pertinent aspects of our problem, at least over
the domain of interest.  Source data for the model are
often not available at all  from direct measurement and
must be estimated from indirect measures such as popu-
lation, industrial and traffic flow patterns.   Meteoro-
logical data are generally sparse, entailing  the use of
various types of interpolative schemes to provide infor-
mation over the entire domain.  Pollutant data, required
for model validation, are often incomplete or difficult
to measure with accuracy.

     Response Time.  There are two kinds of response
time which concern the user and constrain the modeler.
The first is the time available for model development.
This is imposed upon the modeler by the user  in order
that the user may meet his commitments.  As mentioned
in Section II of this paper, there is an increasing
number of existing air quality models which help  to
alleviate this problem.  On the other hand, we tend to
make increasing demands upon our models, which involve
the inclusion of more physical mechanisms and the use of
more sophisticated numerical techniques.  The  second
time constraint was alluded to in the first part  of  this
section, that is the model/computer response time, which
is imposed by the user's mission upon the computer choice
and model design.

     Personnel.   Finally, in order that an air  quality
model   be appropriate for a given problem and  provide
useable results, communication links must exist at all
levels of development and application.  In general,
the responsibilities and background of user personnel
will  be quite different from those of the modeler.  The
modeler is expected to be aware of the relative impor-
tance of the many physical characteristics of  the pro-
blem and how they may be addressed, while a local air
quality agency,  for example, may be expected to supply
decision and policy makers with information pertaining
to the impact of changing population patterns  and pollu-
tion abatement strategies.  As models become more com-
plex,  the user may find that a third party or model
operator is required to provide direct operation  of the
model  and to interpret its raw results.

      During model development direct user/modeler links
are necessary for the reasons mentioned earlier in this
paper.  They are:  to reconcile the user requirements
with the modeler's ability to produce a functional
model  within the constraints imposed by the physics and
chemistry, data availability and computational  facili-
ties,  and to make the user aware of the model's features
and limitations.  Once the problem has been defined,
continued communication will help to keep the  project on
track to its desired goal.  To this end there  are
several options  available.  First, formal and  informal
reports always have a place, particularly in providing
documentation of the work in progress and of the model
upon completion.  They do not, however, provide very
direct user/modeler interaction.  In the LLL project a
"user advocate"  was employed to meet this need.  This
person is a member of the developer agency who acts as
the user's representative with the modeling group.
The role is demanding in that it requires not only a
full  knowledge of the user's needs but also a knowledge
of the limitations under which the modeler must function.
Perhaps the best approach, if it is economically  feas-
ible,  is to have direct user participation in model
development.  Though the BAAPCD did not participate
directly in the LIRAQ model development, they did play
a major role in data acquisition.

     The most important communication link is  the  last,
that between the finished model and the user.  The user
should be able to formulate a problem in his own  terms
and have it executed.  Computed results must then be
provided to the user, again in his own terms.  As an
example of how this was achieved in one case we can use
the LLL LIRAQ model.  The name LIRAQ refers specifically
to the air quality program itself.  In order to facili-
tate the use of this model by the BAAPCD several  other
programs have been provided.  The "Problem Formulator"
is a program which provides an interface between  the
actual running code (LIRAQ) and the user.  Through a
series of questions to the user the Problem Formulator
determines the type of problem to be run, physical con-
ditions, spatial domain, etc.  This in turn provides a
series of instructions to an "Executive Routine" which
selects, and modifies if necessary, the appropriate
input data, runs the LIRAQ model  and directs the  com-
puted results to the desired output medium.  Although
this does isolate the user from the workings of the
model, it was felt necessary because of the model's
complexity and the large amounts of data which must be
manipulated.
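     To make the division of labor concrete, the following is a hedged
sketch, in present-day Python, of how such a question-and-answer front end
might drive an executive routine.  The questions, keywords, and run steps
are invented for illustration; they are not the actual LLL programs.

    # Illustrative sketch of a "Problem Formulator" / "Executive Routine" pair.
    def problem_formulator():
        """Ask the user a short series of questions and emit run instructions."""
        questions = [
            ("run_type",  "Inert tracer or photochemical run? [inert/photo] "),
            ("domain",    "Region to be modeled (e.g., full Bay Area, subregion): "),
            ("start",     "Start time (hour, 0-23): "),
            ("duration",  "Simulation length in hours: "),
            ("output",    "Output medium [printer/plots/tape]: "),
        ]
        return {key: input(prompt) for key, prompt in questions}

    def executive_routine(instructions):
        """Translate the formulator's answers into concrete model-run steps."""
        steps = [
            f"select emission inventory for domain '{instructions['domain']}'",
            f"select meteorological fields starting at hour {instructions['start']}",
            "run LIRAQ-1" if instructions["run_type"] == "inert" else "run LIRAQ-2",
            f"integrate for {instructions['duration']} hours",
            f"route computed fields to {instructions['output']}",
        ]
        for step in steps:
            print("EXECUTIVE:", step)

    if __name__ == "__main__":
        executive_routine(problem_formulator())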

-------
                       Conclusion

     In conclusion,  we might say that development and
implementation of a  complex air quality model  can be
reduced to a process of Communication and Compromise.
The user and modeler are hopefully knowledgeable in their
diverse fields.  There must be a mutual understanding of
the existing requirements and constraints.  The modeler
must make many compromises in his effort to describe
the real world with  the various facilities available to
him.  He must remain aware of the possibility that no
viable compromise exists, since it would be most unfor-
tunate to learn this only after the expense of model
development.  The user, for his part, must recognize the
limitations which do exist and be able to operate within
them.  Finally, there must be interfaces which permit
the user to communicate with the model in his own terms.

                    Acknowledgements

     The author is indebted to a number of his fellow
workers in the Atmospheric and Geophysical Sciences
Division of Lawrence Livermore Laboratory, for their
many helpful comments.  He wishes to express special
thanks to Drs. H. W. Ellsaesser, M. C. MacCracken and
G.  L. Potter for their help in preparation of this paper.
This work was performed under the auspices of the U. S.
Energy Research and Development Administration under
contract No. W-7405-Eng-48.
                      References

1.  M. C. MacCracken and G. D. Sauter, editors:
"Development of an Air Pollution Model for the San
Francisco Bay Area," Final report to the National
Science Foundation, UCRL-51920, 1975.

2.  M. C. MacCracken et al.:  "The Livermore Regional
Air Quality Model:  I.  Concept and Development,"
UCRL-77475, submitted to the Journal of Applied
Meteorology.

3.  CIAP Monographs I-VI,  Department of Transportation,
1975.  Available through the National Technical
Information Service, Springfield, VA 22151.

4.  D. V. Lamb, F. I. Badgley and A. J. Rossano, Jr.:
"A Critical Review of Mathematical Diffusion Modeling
Techniques for Predicting Air Quality with Relation to
Motor Vehicle Transportation," Washington State
Department of Highways, PB 224656, June 1973.  Available
through NTIS.

-------
                              A GENERIC SURVEY OF AIR QUALITY SIMULATION MODELS*
                                                 G. D.  Sauter
                            University of California Lawrence Livermore Laboratory
                                          Livermore, California 94550
                      Abstract

     This survey of the generic types of models which
have "been developed for numerical simulation of air
quality compares and contrasts them on the "basis of
such criteria as the simplifying assumptions made in
the solution of the general continuity equation, the
problems to which each model type is applicable and
not applicable, the requirements for input data, and
computational speed.

                    Introduction

     A variety of generic types of numerical models
for simulation of air quality have been or are being
developed.  The various types can be described in
terms such as the assumptions on which each is based,
the applications to which each is suitable and not
suitable, the amount and quality of input data each
requires, and the demands which each places on users
(e.g., operational costs, computer capacity, expertise
of operational personnel).  Successful use of any air
quality simulation model requires a satisfactory
matching of the user's needs and resources with the
model's capabilities and requirements.  Therefore, in
deciding whether to use air quality simulation models,
and if so which ones are most suited to his needs, a
potential user must not only realistically assess the
goals he wishes to attain via modeling and his resources
for accomplishing them, but also understand the
capabilities and requirements of the various candidate
models.

     This paper is devoted to a brief survey of the
capabilities and requirements of the generic types of
numerical air quality simulation models presently
available.  As a general rule, the more that one
requires of a model (e.g., good spatial and temporal
resolution, the ability to treat simultaneously a
number of pollutants, accurate description of relevant
physical and chemical processes), the greater the
resources (e.g., computer capacity and cost, quality
and quantity of input data) he must make available.

  The Basis for Numerical Simulation of Air Quality

     Numerical air quality simulation models are
designed to simulate, with varying degrees of sophis-
tication, the physical and chemical processes which
govern the mixing, modifying, and transporting of
atmospheric pollutants from their sources to other
points of interest, often designated as receptors.
The important processes which determine the variation
with time of the average concentration over an
arbitrary volume element of a pollutant species of
interest are:

     1.  The net transport of the pollutant into the
         element by advection; i.e., due to divergence
         of the pollutant flux, the product of concen-
         tration and the local average wind vector.
     2.  The net transport out of the element due to
         dispersion and diffusion.  These transport
         mechanisms result from turbulent fluctuations
         in the mean winds.
     3.  The emission of the pollutant into the
         element by sources within or on the lower
         boundary of the element.
     4.  Creation or destruction of the species within
         the element by chemical reactions between
         species and photochemical reactions triggered
         by incident solar radiation.  The strength of
         any particular reaction is generally propor-
         tional to the product of the concentrations
         of the species involved.  This means that for
         chemically reactive species, the total concen-
         tration cannot be determined by adding
         together the contributions from all signifi-
         cant sources.
     5.  Physical loss or destruction mechanisms such
         as surface deposition, decay, or rainout.

     In principle, one can write a mathematical de-
scription of each of these processes and combine the
various terms in a continuity (conservation of mass)
equation, for each species of interest, of the form:

     net rate of change of the average concentration
     in an arbitrary, well-mixed volume element
        = net rate of advection into element
        + net rate of diffusion into element
        + rate of source emission into element
        - rate of physical loss out of element
        + net rate of chemical production within element

The series of equations can then be solved to deter-
mine the spatially and temporally varying concentra-
tions of all pollutants of interest.  In practice,
it is not possible to write down and solve the
continuity equation in all generality for even a
single non-reactive species; among other things,
diffusion and loss mechanisms are not fully understood.
The situation for reactive species is even more
ambiguous; not only do the chemical reaction terms
couple the continuity equations for many species and
make the equations non-linear, but also many of the
significant reactions are poorly understood.  Even if
all the physical and chemical processes were well
known, available meteorological and source emissions
data are invariably of insufficient quality and
quantity to allow an accurate simulation.

     Thus in any practical numerical air quality
simulation, a simplified form of the continuity
equation is solved.  The simplifications are generally
of three types:

     1.  Approximation or neglect of spatial
         dependences
     2.  Approximation or neglect of temporal
         variations
     3.  Approximation or neglect of one or more of
         the terms in the continuity equation.

In practice, most models incorporate simplifications
of all three types.  For example, one can assume that
all meteorological parameters and source emission
rates remain constant over a time interval (say 3 hr)
and then change to new but constant values over the
next interval, etc.  Or one might use a highly
simplified set of chemical reactions in formulating

*Work performed under auspices of U. S. Energy Research
 and Development Administration contract W-7405-Eng-48.

-------
the chemical production term.  Depending on the
simplifications involved, different air quality
simulation models are applicable to different prob-
lems or situations.  No single model is applicable to
all problems.  Although no model represents any
practical problem with total accuracy, well-
formulated and applied air quality simulation models
can be used to gain, at reasonable levels of effort
and cost, insights into air quality problems which it
would not be feasible to obtain by any other means.
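     As an illustration of these simplifications (offered only as a sketch,
not as any particular model), the Python fragment below integrates a
continuity equation reduced to a single well-mixed box with nothing but a
source term and ventilation; spatial dependence and chemistry are neglected
entirely, and every parameter value is hypothetical.

    # Minimal sketch: dC/dt = Q/H - (u/L)*(C - C_bg), where Q is the area
    # emission rate, H the mixing height, u the mean wind speed, L the
    # along-wind length of the box, and C_bg the background concentration.
    def box_model(Q, H, u, L, C_bg=0.0, C0=0.0, dt=60.0, t_end=6 * 3600.0):
        """Integrate the simplified continuity equation with forward Euler."""
        C, t, history = C0, 0.0, []
        while t <= t_end:
            history.append((t, C))
            dCdt = Q / H - (u / L) * (C - C_bg)   # source term minus ventilation
            C += dCdt * dt
            t += dt
        return history

    # Hypothetical example: 5e-8 g/(m^2 s) of CO, 500 m mixing height,
    # 2 m/s wind across a 10 km urban area.
    if __name__ == "__main__":
        run = box_model(Q=5e-8, H=500.0, u=2.0, L=1.0e4)
        print("steady-state estimate (g/m^3):", 5e-8 * 1.0e4 / (2.0 * 500.0))
        print("final simulated value (g/m^3):", run[-1][1])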

              Highly Simplified Models

Rollback Models

     Rollback models are the simplest air quality
simulation models.  Their real objective is the
determination of the degree to which source emissions
must be reduced if some desired air quality is to be
obtained.  In their simplest form, rollback models
are based on the assumption that the local concentra-
tion of a pollutant above its background level is
directly proportional to the strength of all
neighboring source emissions of that pollutant.  Such
an assumption only applies to stable pollutants which
do not undergo significant production or removal via
chemical reactions.  The proportional assumption
applies only if the same fractional degree of control
is applied to all sources.  The fact that nearby
sources make a larger contribution than those further
away is not recognized.  In non-linear rollback
models, only selected emission sources are reduced,
often with different fractional reductions for
different sources.  Some attempts have been made to
use the rollback technique for ozone by applying it
to oxides of nitrogen and hydrocarbons, the precursors
of ozone, but the results have been inconsistent and
not particularly encouraging.

     In addition to their not being useful for
reactive pollutants, the applicability of rollback
models is limited by other factors.  According to
these models the spatial and temporal variations in
air quality will be the same in the future as observed
now.  Changing spatial and temporal distributions of
source emissions and changing meteorological patterns
are assumed to have no effect on air quality; source
strengths are all-important.  The models do have two important
advantages.  First, from an operational point of view,
they are very easy to apply.  Second, and more
important, rollback models are one of the two types
(Gaussian plume models are the other) which are
officially sanctioned by the EPA for use in develop-
ing implementation plans for the satisfaction of
ambient air quality standards.
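     The proportional relation can be stated in a single line.  The
following Python fragment is a minimal sketch of it, with hypothetical
concentrations; it is offered only as an illustration, not as an official
EPA procedure.

    # Proportional rollback: concentration above background is taken as
    # directly proportional to total regional emissions, so the fractional
    # emission reduction needed to meet a standard is
    #     R = (C_now - C_std) / (C_now - C_bg).
    def rollback_reduction(c_now, c_std, c_bg=0.0):
        """Fractional emission reduction required to reach c_std."""
        if c_now <= c_bg:
            raise ValueError("present concentration must exceed background")
        return (c_now - c_std) / (c_now - c_bg)

    # Hypothetical example: present CO of 15 ppm, a 9 ppm standard, 1 ppm background.
    print(round(rollback_reduction(15.0, 9.0, 1.0), 3))   # -> 0.429, i.e. about 43%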

Gaussian Plume Models

     The most frequently used air quality simulation
model is the semi-empirical Gaussian plume formula-
tion.  In its basic form the model assumes a time and
spatially independent horizontal wind field, a time
independent point source, and no chemical reactions
or loss mechanisms.  Turbulent diffusion in the
direction of the wind is assumed negligible, and
diffusion in the cross-wind and vertical directions
is assumed to produce a Gaussian (bell-shaped) con-
centration profile about the plume centerline.  The
resulting downwind concentration of the plume can
then be expressed in closed form as a function of the
source strength, the average wind velocity, and two
diffusion parameters whose values have been determined
empirically for the various classes of atmospheric
stability.  The plume is assumed to expand indef-
initely in the upward vertical direction and to be
totally reflected at the surface of a flat topography.
The basic plume equation is sometimes multiplied by a
decay factor to simulate a simple loss mechanism.
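     For readers who prefer an explicit statement, the closed form just
described can be sketched as follows in Python.  The dispersion parameters
and all numerical values are supplied by hand here, rather than read from
the empirical stability-class curves, and the example is purely
illustrative.

    # Basic Gaussian plume relation for a continuous point source with total
    # reflection at flat ground.
    import math

    def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
        """Concentration (mass/volume) at crosswind distance y and height z.

        q        source strength (mass/time)
        u        mean wind speed
        h        effective source height
        sigma_y, sigma_z   horizontal and vertical dispersion parameters
        """
        lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
        vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2)) +
                    math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))   # ground reflection
        return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Ground-level centerline value for a 100 g/s source at 50 m under a
    # 3 m/s wind, with sigma_y = 80 m and sigma_z = 40 m (hypothetical).
    print(plume_concentration(q=100.0, u=3.0, y=0.0, z=0.0, h=50.0,
                              sigma_y=80.0, sigma_z=40.0))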

     The basic plume equation can be used to treat
the case of multiple point sources by summing the
pollutant concentration contributions of the individual
sources.  In the same way, continuous linear or area
sources can be decomposed into an appropriate set of
source elements and handled as multiple point sources.
If the emissions are uniform over the linear or area
source, a closed form, analogous to the basic point
source form, can readily be developed.  In any event,
the size of the complex source region should be
limited to one over which the meteorology can be
assumed reasonably uniform.  Also, the summation
technique restricts the applicability of the Gaussian
plume model to non-reactive species.

     The major advantages of the Gaussian plume models
are their simplicity and ease of application; the
closed-form solutions can readily be converted to
graphical, tabular, or nomogram form.  This advantage
is lost, however, if the model is applied to a many
source, many receptor situation.  Other advantages are
that these models require very little input data and
that there is considerable experience in their use.

     The major limitation in the applicability of
plume models is the assumption that the wind field is
constant and uniform.  In practice this limits their
use to time periods on the order of an hour and spa-
tial distances on the order of 10 km.  Quasi-steady-
state applications can be made by periodically
updating the source emissions and meteorology, but
this does not alleviate the spatial limitation.
Further, since the predicted pollutant concentration
is inversely proportional to the wind speed, the model
is not applicable on calm or nearly calm days.
Finally, the model is not applicable in situations
with complex topography or low altitude inversions.

Gaussian Puff Models

     The Gaussian puff model was developed to overcome
some of the limitations of the plume equation, par-
ticularly the time independence.  In its basic form,
the model tracks a puff of pollutant emitted from a
point source as it is blown downwind and diffused in a
Gaussian manner.  The puff is allowed to expand  in
volume so that all of the original pollutant mass is
retained within it.  In the quasi-time dependent case,
the wind field and source emission rate are periodi-
cally updated and assumed constant over each time
interval.  In the steady-state limit, the puff model
is equivalent to the plume model.  As with the plume
model, multiple source situations are treated by
summation, with due consideration taken of the time
for puffs from the various source elements to reach
the receptor.  The cross-axis distribution within the
puff need not be assumed to be Gaussian; in one  refine-
ment, cross-axis diffusion is determined from turbulent
eddy diffusion (k) theory.
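     A hedged Python sketch of this bookkeeping follows; the puff growth
rates, release schedule, and receptor location are all assumptions made for
illustration and are not taken from any particular operational code.

    # Quasi-time-dependent puff summation: puffs are released at intervals,
    # advected with the wind, spread in a Gaussian manner, and summed at the
    # receptor with travel time entering through each puff's age.
    import math

    def puff_sum(receptor, puffs, t):
        """Sum Gaussian puff contributions at time t.

        Each puff is (x0, y0, release_time, mass, u, v); sigma grows linearly
        with puff age as a crude stand-in for empirical growth curves.
        """
        xr, yr, zr = receptor
        total = 0.0
        for (x0, y0, t0, mass, u, v) in puffs:
            age = t - t0
            if age <= 0.0:
                continue                          # not yet released
            xc, yc = x0 + u * age, y0 + v * age   # advected puff center
            sig_h = 0.5 * age + 10.0              # horizontal spread (m), assumed
            sig_z = 0.2 * age + 5.0               # vertical spread (m), assumed
            r2 = (xr - xc) ** 2 + (yr - yc) ** 2
            norm = (2.0 * math.pi) ** 1.5 * sig_h ** 2 * sig_z
            total += mass / norm * math.exp(-r2 / (2.0 * sig_h ** 2)
                                            - zr ** 2 / (2.0 * sig_z ** 2))
        return total

    # One 1 kg puff released every 10 min from the origin into a 2 m/s wind.
    puffs = [(0.0, 0.0, 600.0 * k, 1000.0, 2.0, 0.0) for k in range(6)]
    print(puff_sum((3000.0, 0.0, 0.0), puffs, t=3600.0))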

     Although it does permit some degree of time
dependence, the puff model suffers from some of the
other limitations of the plume model; i.e., only non-
reactive pollutants can be treated for cases of
relatively flat topography (some rather unsuccessful
attempts have been made to incorporate a simplified
treatment of reactive species).  It can be used in
light wind situations.  The time dependent puff model
is not as easy to apply as the plume model.  Time
dependent source data are required, and time dependent
trajectories must be determined.  These more difficult
operational problems tend to outweigh the added

-------
advantages of the puff model, and it is not widely
used.  It is best applied to cases of a few sources
and a few nearby receptors.  Even here a computer is
generally required.  For widely distributed sources
and multiple receptors, the large number of trajec-
tories required makes the use of the puff model
prohibitive.

One Box Models

     These models are based on the assumption that
pollutants are uniformly mixed throughout a fixed
volume (box) of air.  For air quality simulation (as
opposed to simulation of smog chambers), the box is
usually taken to extend vertically from the terrain
surface to the inversion base; horizontally the box
should cover an entire region of distributed sources.
In the simplest applications no transport of pollutants
into or out of the box is allowed.  The resulting
concentrations are then proportional to the total
rates of source emissions of pollutants into the box
and inversely proportional to the average residence
time and the inversion base height.   Since pollutant
clouds must generally travel distances on the order
of 5-10 km before uniform mixing can occur, simple
box models should only be used to estimate average
concentrations over large area sources (e.g., whole
cities) or to simulate background concentrations at
points with no large local sources nearby.

     Models which are essentially a special type of
box model have been developed by Gifford and Hanna at
NOAA's Atmospheric Turbulence and Diffusion Laboratory.
For non-reactive pollutants, the simple ATDL dispersion
model can be expressed as c = A Q/u, where c is the
average pollutant concentration in the box, Q is the
average source emission rate per unit area, and u is
the mean wind speed.  The dimensionless parameter A is
assumed to be a constant for a given atmospheric
stability; analyses of air quality data for a number
of urban areas suggest that, over long averaging times
(month, season, year), A = 225 is a reasonable value.
The ATDL dispersion model is a reasonable one to use
in determining average concentrations of non-reacting
pollutants over large areas and long averaging times.
If the area considered is too small, the effects of
sources outside the region under study may be signif-
icant; if the averaging time is too short, the model
will not effectively account for significant short-
term deviations from average source emission rates and
meteorology.   Operationally, the ATDL model is very
easy to apply, and it requires a minimum of input data.
Its major drawback is the lack of spatial resolution
over large areas.
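     The ATDL relation is simple enough to state directly.  The following
Python lines sketch it using the A value quoted above; the emission rate
and wind speed are hypothetical.

    # The simple ATDL box relation, c = A Q / u, as a one-line sketch.
    def atdl_concentration(area_emission_rate, wind_speed, a=225.0):
        """Average box concentration for a non-reactive pollutant."""
        return a * area_emission_rate / wind_speed

    # e.g., 3e-8 g/(m^2 s) of CO over a city with a 4 m/s mean wind:
    print(atdl_concentration(3e-8, 4.0))   # -> about 1.7e-6 g/m^3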

     Hanna has also applied the ATDL model to several
reactive pollutants by incorporating a very simplified
set of chemical reactions.  For example, all reactive
hydrocarbons are lumped together as a single species.
The resulting model has five parameters (instead of
just one for the non-reactive model) which can be
adjusted to give the best results for a particular
situation.  The chemical mechanism is not general; a
different mix of sources and/or significantly different
meteorology would necessitate a retuning of the model.
As with the non-reactive version, this model is
designed to determine average concentrations over a
large area under typical meteorological conditions.

                 More Complex Models

     The models described above stress ease of
application at the expense of physical and chemical
fidelity.   They are either analytic or require very
limited numerical computer capabilities, and the input
data requirements are modest.  (An exception is the
 use  of the  puff  model with a large number of trajec-
 tories.)  However  their applicability is limited;
 e.g.,  simulation of  a steady-state or quasi-steady-
 state  point source (Gaussian plume model) or air
 quality averaged over a large area (ATDL).  The models
 described below  are  designed to yield air quality
 simulations which  include  both spatial and temporal
 dependences.   As a result, their operation is con-
 siderably more demanding,  requiring extensive computer
 capability  and quite large amounts of spatially and
 temporally  resolved  meteorological and source emissions
 data.

 Eulerian Grid  Multibox  Models

     An Eulerian multibox  model consists of a number of
 constant size  volume elements in a fixed spatial grid
 which  covers the entire region of interest.   Pollutants
 are  allowed to flow  through the boundaries of each
 element as  a result  of  advection,  diffusion,  sources,
 and  sinks.  Within each element,  the pollutants are
 assumed to  be  mixed;  they  may be chemically reactive.
 Time dependence  is introduced by periodically
 (typically  every hour)  updating the source emission
 rates  and the  prevailing meteorology (wind speed and
 direction,  inversion base  height,  and for reactive
 pollutants,  incident sunlight)  for each  grid  element.
 Thus the multibox  simulations produce time histories
 of (hourly) average  pollutant concentrations  with
 spatial resolutions  determined by the size of the grid
 elements.   The minimum  size of the elements  is limited
 by one  or both of  two factors:   the quality of the
 available source emission  and meteorological  input
 data and the memory  capacity of the computer  to be
 used.   For  a given computer,  the  minimum grid element
 size is determined by a combination of factors:   the
 size of the region to be simulated,  the  number of
 pollutant species  considered,  the  number of vertical
 layers  of elements,  the degree  of  sophistication of
 the  numerical  technique used,  and  the  complexity of
 the  chemical reaction set  (if any).   Increasing  one
 or more of  these factors increases the minimum grid
 size.   Typically,  the horizontal dimensions of a grid
 element are a  few  kilometers.   A  significant  data
 gathering and  preprocessing effort is  necessary  to
 produce realistic  source emission  and meteorological
 input data with  this  resolution, particularly in
 regions with complex  topography, which tends  to  make
 both the meteorology and the  source  distribution less
 uniform.

     The usual approach in  calculating the flow  of
 pollutants across  the boundary  of  an Eulerian grid
 element is to use  a technique based  on the finite
 difference between the pollutant  concentrations  on
 either  side of the boundary.  The  assumption  that  each
 element is well-mixed gives rise to  errors, referred
 to as numerical diffusion,  in the  simulation  of
 pollutant transport.  These errors  can be  controlled
 reasonably well,  at the expense of  additional compu-
 tational complexity, with higher order differencing
 schemes, especially for distributed  rather than
 localized sources.   Two of  the most  prominent examples
 of such models are the Systems Applications,  Inc.
 (SAI) and Livermore Regional Air Quality  (LIRAQ)
models.
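     To illustrate the numerical diffusion just mentioned (and not as a
rendering of the SAI or LIRAQ algorithms), the following Python sketch
advances a one-dimensional, first-order upwind advection step and lets an
initially sharp pulse smear as it is carried across the grid.  The grid
spacing, wind, and time step are assumptions.

    # First-order upwind finite-difference step of the advection term on an
    # Eulerian grid.  The scheme is stable for u*dt/dx <= 1 but smears sharp
    # concentration gradients (the "numerical diffusion" noted above), which
    # higher order differencing schemes are used to control.
    def upwind_advection_step(c, u, dx, dt):
        """Advance cell-average concentrations c one time step for wind u > 0."""
        courant = u * dt / dx
        new_c = c[:]                       # inflow boundary cell kept fixed
        for i in range(1, len(c)):
            new_c[i] = c[i] - courant * (c[i] - c[i - 1])
        return new_c

    # A sharp pulse occupying one 2 km cell is advected by a 5 m/s wind; after
    # an hour it has visibly spread over neighboring cells.
    c = [0.0] * 20
    c[2] = 100.0
    for _ in range(60):                    # 60 one-minute steps
        c = upwind_advection_step(c, u=5.0, dx=2000.0, dt=60.0)
    print([round(v, 1) for v in c])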

     The SAI model, originally developed for  the  Los
Angeles basin,  uses a grid with several vertical
 layers, thus simulating the solution of the continuity
 equation in three dimensions, and  in turn requiring
 sufficient meteorological data to adequately  represent
a three-dimensional wind field.  The model contains  a
reaction set of 16 reactions  covering 13 species  (all
hydrocarbons lumped together) which  probably  should be
 tuned for each area of application.  A more extensive

-------
chemical reaction set is now being incorporated into
the model.

     The LIRAQ model, originally developed for
application to the San Francisco Bay Area, uses a
single layer of grid elements between the terrain
surface and the "base of the inversion layer.  Only
horizontal pollutant transport is simulated; within
each element, the vertical pollutant distribution is
represented in terms of a simplified profile.  Two
versions of the model exist.  The non-reactive version
(LIRAQ-1), when used on a CDC-7600 computer, can treat
up to four species simultaneously on a 45 by 50
element grid, more than adequate to treat the entire
Bay Area with 5 km resolution or subregions with finer
resolution.  About 15 min. of computer time are required
for a 24 hr simulation.  LIRAQ-2, for reactive species,
incorporates a set of k& reactions to treat 19 species,
including three hydrocarbon classes.  This limits a
CDC-7600 simulation to a maximum of 20 by 20 elements,
still enough to cover most of the Bay Area with 5 km
resolution.  A 2k hr simulation requires about 60 min.
of computer time.

     These Eulerian grid multibox models represent the
most comprehensive approach to simulating air quality
on a regional basis; they are applicable to such air
quality problems as regional compliance with air
quality standards and evaluation of the impact on
regional air quality of various land use alternatives.
However, because all emissions within a grid element
are lumped together, these models should not be used
to simulate the effect of a strong local source on
nearby (within a few grid elements) receptors.  Sub-
stantial input data and computer capability are
required to operate these complex models, and appli-
cations are limited to simulations of a few days for
"typical" or "worst case" conditions.

Particle-in-Cell (PIC) Models

     These time dependent models combine the use of an
Eulerian grid and marker particles, each marker
representing a fixed mass of pollutant.  The markers
are introduced into the grid where emissions occur and
are tracked as they are transported and dispersed
throughout the three-dimensional grid by the specified
wind field.  At the end of any time interval, the
concentration of a pollutant in any grid element can
be determined by summing the masses represented by the
particles then in the element.  This technique virtu-
ally eliminates transport errors due to numerical
diffusion.

     The limitation on the PIC technique is the large
number of particles which must be tracked to adequately
represent pollutant concentrations and concentration
gradients over a large grid.  Available computer memory
sizes limit the number of particles and grid elements
that can be carried in a simulation (about 30 min of
CDC-7600 computer time would be required for an 8 hr
simulation).  This makes it extremely difficult to
represent large gradients and small changes in concen-
trations of several species with sufficient accuracy
to adequately characterize chemical reactions, so the
PIC technique is best applied to non-reactive species.
PIC models are perhaps the best choice for accurate
three-dimensional simulations of the pollutant concen-
trations produced by localized sources of non-reactive
species.  The major effort in applying the PIC
technique to urban air quality simulation has been the
application of the NEXUS model by Sklarew to CO and
ozone in Los Angeles.
Lagrangian Box Models

     In contrast to Eulerian box models, which
simulate the time history of pollutant concentrations
within a volume element fixed in space, Lagrangian box
models (sometimes called trajectory models) simulate
the time history of pollutant concentrations within
boxlike elements of constant volume as they flow along
wind streamlines.  A box may have one or several layers
(cells) between the terrain surface and the base of the
inversion height.  As the box flows across a source,
pollutants may flow in through the bottom of the box
and diffuse vertically into other cells (if any), but
usually no horizontal transport of pollutants across
the boundary of the box is allowed, so numerical dif-
fusion problems are eliminated.  Within each cell the
air is assumed to be well-mixed, and chemical reactions
can occur.  The horizontal dimensions of the box,
typically on the order of a few kilometers, determine
the spatial resolution of the calculated concentrations
and of the required input data.  Both source emissions
and meteorology can be time dependent.  As with
Eulerian multibox models, they are usually updated on
an hourly or few hourly basis.
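     The following Python sketch illustrates the idea for a single-layer box
with no chemistry; the emission field, winds, and mixing height are
hypothetical.

    # A single well-mixed column of constant volume follows the wind, picking
    # up emissions from whatever grid square it is over and losing nothing
    # through its sides.
    def trajectory_box(start, winds, emission_rate, mixing_height, dt=3600.0):
        """Follow a box along hourly winds; return (position, concentration) history."""
        x, y = start
        conc, history = 0.0, []
        for (u, v) in winds:                      # one (u, v) pair per hour
            q = emission_rate(x, y)               # g/(m^2 s) under the box
            conc += q * dt / mixing_height        # well-mixed column update
            x, y = x + u * dt, y + v * dt         # advect the box
            history.append(((x, y), conc))
        return history

    # Hypothetical example: heavy emissions inside a 10 km urban core, light
    # emissions outside, and a wind that carries the box across the core.
    def emission_rate(x, y):
        return 5e-8 if (0.0 <= x <= 1.0e4 and abs(y) <= 1.0e4) else 2e-9

    winds = [(2.0, 0.0)] * 3 + [(3.0, 1.0)] * 3     # six hourly wind vectors
    for (pos, c) in trajectory_box((-5.0e3, 0.0), winds, emission_rate, 500.0):
        print(pos, round(c * 1e6, 3), "ug/m^3")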

     Lagrangian box models are primarily source-
receptor oriented; that is, they relate the pollutant
concentration at a receptor area to specific upwind
emission sources via one or more wind trajectories.
Thus the user can identify the source areas respon-
sible for observed pollutant concentrations at selected
points along specific trajectories.  Air quality simu-
lations on a regional basis can be treated by tracking
boxes along enough different trajectories to adequately
represent the sources and meteorology of the region and
interpolating between calculated trajectories.   This
procedure works best in areas with relatively uniform
source distributions and meteorology, where the number
of trajectories needed to characterize the region is
reasonably small.  For regions with complex meteorology
and/or source distributions, the number of trajectories
needed may make the computer requirements prohibitive.
The ratio of time simulated to computer time required
is on the order of 500 to 1 for each trajectory con-
sidered (e.g., about 30 min of computer time would be
required to follow 25 trajectories over a 10 hr period).
Thus, as is the case for Eulerian multibox models,
simulations are limited to a day or so for typical or
worst-case conditions.

     Representative of Lagrangian box models is the
DIFKIN model, developed at General Research Corporation,
which has been applied to both the Los Angeles  and
San Francisco Bay regions.  The model uses five vertical
layers and vertical diffusivity rates that depend on
atmospheric stability.  The simplified chemical reaction
set employs 16 reactions encompassing 13 species, with
all hydrocarbons lumped together.   Other models may
combine more sophisticated chemistry and simpler
meteorological treatments.

                      Conclusion

     Although it has of necessity been very brief, this
generic survey of numerical air quality simulation
models should be sufficient to show that a variety of
these models is now available.  These models differ
widely in the level with which they treat the important
physical and chemical processes involved, in the
applications for which they are designed, and in the
input data and computational capacity they require.
No single model is adequate for all types of air quality
simulations.  In selecting an air quality simulation
model, a potential user should seek a good match of
his resources and modeling needs with the capabilities
and resource requirements of the model.

-------
                 Selected References

     The references listed here "by no means comprise
an exhaustive list.  Rather, they have "been selected
to be representative of the air quality simulation
models currently in use.  The references generally
contain further references to generically similar
models.
Rollback Models:
     N. de Nevers and J. R. Morris, "Rollback Modeling:
     Basic and Modified," J. Air Poll. Control Assoc. 25,
     943 (1975).

Gaussian Models:
     D. B. Turner, Workbook of Atmospheric Dispersion
     Estimates, EPA Office of Air Programs Publication
     AP-26 (1970).

ATDL Models:
     S. R. Hanna, "A Simple Dispersion Model for the
     Analysis of Chemically Reactive Pollutants," Atm.
     Envir. 7, 803 (1973).

Eulerian Multibox Models:
     M. C. MacCracken and G. D. Sauter, "Development of
     an Air Pollution Model for the San Francisco Bay
     Area:  Final Report to the NSF, Vol 1," Lawrence
     Livermore Laboratory Report UCRL-51920 (1975).

     S. D. Reynolds, et al., "Mathematical Modeling of
     Photochemical Air Pollution.  III:  Evaluation of
     the Model," Atm. Envir. 8, 563 (1974).

PIC Models:
     R. C. Sklarew, et al., "Mathematical Modeling of
     Photochemical Smog Using the PICK Method," J. Air
     Poll. Control Assoc. 22, 865 (1972).

Lagrangian Box Models:
     J. R. Martinez, et al., "User's Guide to Diffusion/
     Kinetics (DIFKIN) Code:  Final Report EPA Contract
     No. 68-02-0336," General Research Corporation (1973).

-------
                                    AIR QUALITY MODELING:  A USER'S VIEWPOINT

                                              Richard H. Thuillier

                                         Chief of Research and Planning
                       Bay Area Air Pollution Control District, San Francisco, California
     A requirement for modeling has developed out of
the Clean Air Act amendments of 1970.  In spite of
this requirement and the existence of a variety of
modeling techniques, there is a prevailing reluctance
toward the use of modeling as a decisionmaking tool.
Based on experience with regional application of a
variety of models, the Bay Area Air Pollution Control
District encourages the use of appropriate techniques
in a coordinated regional context.  The District feels
that much can be gained from simplified approaches at
minimal cost and recommends that regional resources be
pooled for effective, efficient, and standardized
application.

                    Introduction

     The Bay Area Air Pollution Control District
(BAAPCD) has regulated air pollutant emissions in the
nine-county San Francisco Bay Area over a period of
20 years.  Traditionally, such regulation has been
accomplished, by and large, through best effort tech-
nological control of point sources, with air quality
improvement as a general goal.  Control program effec-
tiveness has been measured against the yardstick of
air monitoring data from community-representative
sites.

     With the advent of the Clean Air Act amendments
of 1970 and ensuing Federal regulations, there devel-
oped a requirement for a more structured approach to
air quality control.  The promulgation of ambient air
quality standards and associated compliance schedules
has given rise to a concept of air quality control
and analysis based on more precise relationships be-
tween source emissions and their resulting air quality
impact.  The activities involved in establishing such
relationships and using them effectively in an air
quality control context comprise the challenging field
of air quality modeling.

     Notwithstanding the existing requirement for
modeling, relatively little has been done in exploit-
ing the potential of this emerging technology.  The
evident reluctance to use modeling is engendered by
such factors as esoteric techniques, required re-
sources for model use, and a general confusion re-
garding appropriate application.

     Immediately after the promulgation of the Clean
Air Act amendments, the BAAPCD made an extensive com-
mitment to modeling in all its facets.  In a region as
extensive as the Bay Area, air quality standards can
be achieved only if planning decisions properly con-
sider air quality.  We feel that modeling can provide
appropriate air quality input to decisionmaking and
is, therefore, a very useful tool for planning and
regulating air quality.  We are grateful for this
opportunity to discuss our philosophy and experience
in this regard, with a view toward stimulating greater
interest within the user community.
  Institutional Framework for Modeling Application

     In attempting to fulfill the requirements of the
Clean Air Act, control measures must be conceived and
applied on a coordinated, regionwide basis with con-
sideration of all sources of pollution in terms of
their combined impact upon receptors.  A control pro-
gram of such scope cannot proceed effectively toward
desired levels of air quality without the unifying
guidance of a regional air quality model.  Throughout
this presentation, the term "model" should be con-
strued to refer not to a single algorithm or computer
code but rather to an integrated and compatible set
of analytical tools which, together, supply the nec-
essary quantitative relationship between regionwide
sources and receptors in the context of defined air
quality standards.

     One of the principal problems associated with
modeling in a regional context arises from a broad
spectrum of source categories and a variety of juris-
dictional responsibilities.  Incompatible data bases,
divergent institutional  resources, and special in-
terest bias can serve to place air quality control
more in the context of an adversary proceeding than
in the context of a coordinated technical effort.
Since such controversy has a tendency to divert ener-
gies and obscure goals,  the interest of air quality
attainment and maintenance can best be served by
resolution of such conflicts.  In this regard, two
possible approaches suggest themselves:  One approach
would be the vesting of all responsibility for air
quality analysis in a single agency with regionwide
jurisdiction.  An alternative approach would be to
continue with a decentralized analysis responsibility
under a unified code of procedures involving, for
effectiveness, an integrated, complementary use of
diversified resources and compatible techniques.   The
latter appeals to the author from the standpoint of
political acceptability as well as technical feasi-
bility.  Ideally, such an approach would proceed
under the guidance of a highly qualified multidisci-
plinary technical committee, group, or team, with
sole responsibility for the development, dissemi-
nation, and coordination of standardized procedures
for the region.  Such a team could also be responsible
for interpretation of modeling results.

     In the Bay Area, a combination of the two ap-
proaches prevails.  In 1972, the BAAPCD created a
multidisciplinary Research and Planning Section with-
in the District's Technical Services Division with
responsibility for air quality model development and
application.  Over a 3-year period, the group has
developed a sizeable inventory of techniques.  As
an agency with regional  jurisdiction and in view of
its experience with air quality models, the District
has been able to exercise regional leadership in air
quality modeling activity.  We have, however, been
greatly assisted in our efforts by other regional
agencies possessing specialized resources and ex-
pertise not available in-house.  The District, in

-------
turn, provides guidance and assistance to a variety of
agencies and  individuals who wish to do their own anal-
ysis  in a way which is mutually acceptable to the ana-
lyst  and the  District.  Current efforts in air quality
maintenance planning should serve to further the inter-
est of a coordinated institutional effort in the re-
gional solution of air quality problems.

    The Relationship of Modeling to Decisionmaking

      In addition to technical and institutional prob-
lems  as discussed above, another barrier to the effec-
tive  use of modeling technology is a confusion or mis-
understanding of the relationship of modeling to the
decisionmaking process.  It is important to realize
that  decisionmaking is inherently subjective.  The con-
cept  of a decision implies a choice among alternatives
involving an element of uncertainty.  Decisionmakers
deal  with the reality of uncertainty and their deci-
sions are conditioned by but not necessarily dependent
upon  the amount or quality of available information.

      These factors are frequently overlooked when mod-
eling is proposed as an air quality analysis tool.   One
of two alternative, contradictory, and equally unwar-
ranted arguments will  frequently be lodged against the
adoption of a modeling program.  On the one hand, it is
argued that the uncertainties in the models will  render
them  useless as input to the decisionmaking process.
On the other hand, it is argued that the valued judg-
ment  of the decisionmaker will  be replaced by the model
itself as an objective but utterly inscrutable and pos-
sibly flawed arbiter.   With regard to these arguments,
we note that mandated decisions relating to air quality
must  be made on the basis of available information.
Modeling results may or may not influence a decision
but as added information cannot conceivably detract
from the quality of that decision.  The alternative to
 consideration  of modeling  information,  whatever the de-
 gree  of  uncertainty,  is  all too often a complete disre-
 gard  of  the  air quality  issue.  In our  experience,  the
 contribution of modeling is a positive  one,  serving to
 clarify  the  decisionmaking process and  to make  deci-
 sions  less arbitrary  in  nature.  With regard  to the in-
 stallation of  a model as an objective arbiter,  we feel
 this  is  rather unlikely  in view of the  very  imperfec-
 tions  used as  a basis for the first argument.   In real-
 ity, modeling  output must invariably be subject to  in-
 terpretation before a decision can be based  in  any  way
 upon  it.  In the interest of efficiency, however, stan-
 dardization  of routine procedures and the development
 of criteria  based on accepted modeling  techniques may
 be desirable.

      In  summary, modeling results should be  viewed  as
 nothing more than information input to  the primarily
 judgmental process of decisionmaking.   Such  information
 may be weighed with other factors in arriving at the
 decision.  Modeling uncertainties, whatever  their na-
 ture or extent, should be considered simply as  part  of
 the general store of uncertainty inherent in  the deci-
 sionmaking process.  If this viewpoint  is taken,  then
 there  is nothing to fear from modeling  except,  of
 course, fear itself.  Within a carefully structured,
 institutionally integrated and professionally adminis-
 tered  program, we feel that modeling can be a very ef-
 fective decisionmaking tool.

            Model  Selection and Application

     To be most effective, models should be selected to
 complement available resources and flexibly address  the
 problem area in which they will be applied.  Applica-
 tions  in the Bay Area run the gamut from regional plan-
 ning to project review.  To meet our needs, we have
 adopted the modular system outlined schematically in
Figure 1.

Figure 1a.  Flow Chart of Regional Air Quality Modeling
Activity in the San Francisco Bay Area (components
include land use/transportation and emissions models,
emissions inventories, meteorological and/or climato-
logical data, air quality data, regional and local
dispersion modeling, project/element modeling and
analysis, statistical distribution analysis, system
verification and calibration studies, estimated regional
air quality background (coarse resolution), estimated
local air quality variability (medium resolution),
estimated air quality at sensitive receptor sites (fine
resolution), comprehensive regional air quality
analysis, interpretation and summarization, and reports
and presentations).

Figure 1b.  Setting for Regional Air Quality Modeling
Application in the San Francisco Bay Area, showing the
BAYMOD local scale domain, a project/element scale
domain, a BAYMOD county source area (one of nine), and
a regional wind pattern (one of forty).

Modeling is done on three spatial scales of

-------
resolution using techniques of  the  simplest  type  con-
sistent with physical setting and application  require-
ments of the scale  in question.  A  statistical  model
supplements and links the temporal  resolutions  of the
various modeling techniques.  Meteorological,  clima-
tological, and source emissions data  bases are  avail-
able in various formats as input to the modeling.   The
system is designed  to provide air quality estimates at
three spatial scales, independently,  in a manner  that
enables successively coarser scales to be treated as
background.  The statistical model  enables us  to  ad-
dress the air quality problem directly in terms of the
ambient air quality standards over  appropriate  averag-
ing times.

     The modular, multifaceted  nature of our system
allows us to deal effectively with  a  variety of appli-
cations at appropriate levels of time and cost  with
our own in-house resources or in collaboration  with
outside agencies or individuals.  We  feel that  exist-
ing techniques, judiciously employed, enable us to
provide useful input to decisionmaking in virtually
all of our air quality problem  areas.

     Figure 2 illustrates photochemical modeling  out-
put within the regional scale domain  in Figure  1.   The
model LIRAQ-2¹ estimates concentrations of ozone, ni-
trogen oxides, hydrocarbons, and carbon monoxide at a
regional resolution of 25 km².  Finer resolutions of
4 km² and 1 km² are available for subregional analysis
and a less complex  version of the model, LIRAQ-1,  is
available for nonreactive analysis  alone.  The  model
accounts for the perturbed airflow  through complex
terrain providing a field of concentrations  hour  by
hour or a time history at selected  points based on
emissions and meteorological data input chronologi-
cally.  The principal use of this model will be in the
evaluation of regionwide planning or  regulatory alter-
natives.  The frequency of model application,  using a
CDC 7600 computer,  is somewhat  limited by cost.
      For the purpose of local scale analysis, the Dis-
trict has developed a Gaussian model, BAYMOD.²  This
model provides annual average concentration estimates
for nonreactive pollutants over a 690 km² local area
at a resolution of 1 km².  Annual average emissions
 and  wind  speed  data are input as a 690 element (30x23)
 grid of 1  km  squares.   A local  wind rose is utilized
 for  annual weighted average transport from upwind  grid
 squares treated as point sources.  Concentrations  from
 sources within  the same grid square are  calculated as
 an integrated  line source  average.  The  model  may  be
 made to treat  local  sources alone or to  include
 county-scale  regional  background through use of  a  box
 model  in  conjunction  with  transport by regional  wind
 patterns.  Larsen's statistical  model  is used  in con-
 junction  with  historical monitoring data to relate an-
 nual  averages  to averaging times associated with air
quality standards.³  BAYMOD is run routinely on the
 District's in-house Hewlett-Packard 3000 minicomputer.
 Typical applications  are air quality analysis  of local
 plan alternatives and the  estimation of  pollutant
 background concentration for use with  more localized
 analyses.  Individual  local  analyses may be assembled
 in mosaic  form  to provide  larger regional  coverage at
 fine (1 km2)  resolution.   Figure 3 is  sample output
 from BAYMOD in  the vicinity of  the city  of Santa
 Rosa,  within  the local  scale domain in Figure  1.
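     As an illustration of the kind of calculation involved (not the
District's actual BAYMOD code), the Python sketch below forms an annual
average at a receptor by weighting sector-averaged Gaussian contributions
from upwind grid squares with a wind rose.  The sector-average formula,
the rose, the sigma-z growth, and all inputs are assumptions.

    # Annual-average, wind-rose-weighted Gaussian estimate for one receptor.
    import math

    SECTORS = 16                                   # 22.5 degree wind-rose sectors

    def sector_average(q, u, x, sigma_z):
        """Long-term sector-averaged ground-level concentration from one source."""
        return 2.0 * q / (math.sqrt(2.0 * math.pi) * sigma_z * u
                          * x * (2.0 * math.pi / SECTORS))

    def annual_average(receptor, sources, rose, u_mean, sigma_z_of_x):
        """Sum wind-rose-weighted contributions from all upwind grid squares."""
        xr, yr = receptor
        total = 0.0
        for (xs, ys, q) in sources:                # (x, y, annual mean g/s)
            dx, dy = xr - xs, yr - ys
            x = math.hypot(dx, dy)
            if x < 500.0:
                continue                           # same-square sources handled separately
            sector = int(((math.atan2(dy, dx) % (2.0 * math.pi))
                          / (2.0 * math.pi)) * SECTORS)
            total += rose[sector] * sector_average(q, u_mean, x, sigma_z_of_x(x))
        return total

    # Hypothetical inputs: a uniform rose, sigma_z growing as 0.1 x, three sources.
    rose = [1.0 / SECTORS] * SECTORS
    sources = [(0.0, 0.0, 2.0), (1000.0, 0.0, 1.0), (0.0, 2000.0, 0.5)]
    print(annual_average((3000.0, 1000.0), sources, rose, u_mean=4.0,
                         sigma_z_of_x=lambda x: 0.1 * x))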
Figure 2.  Estimated Distribution of Ozone Concentra-
tions at 1400 PST, 26 July 1973, by the LIRAQ-2 Regional
Photochemical Model.  Concentration units are parts per
million with an isopleth spacing of 0.02 ppm.

Figure 3.  Estimated Distribution of Carbon Monoxide
Concentration (Annual Average, Parts Per Million, 1 km
Resolution, 1973) in the Vicinity of Santa Rosa,
California, by the BAYMOD Regional Gaussian Model.
      The final module in the District system is at the
project/element scale.  Standard gaussian, single and
multiple point, and line source techniques are em-
ployed to estimate the air quality impact of stacks,
roads, housing developments, shopping centers, air-
ports, and a variety of other projects and project
                                                       37

-------
source elements, as illustrated by Figure 4.  Princi-
pal applications are for permit review, variance hear-
ings, and the review of environmental impact reports.
Modeling is done using either computerized codes de-
veloped in-house or by manual or nomographic methods,
depending on the scope and time frame of the analysis.
To fill a large number of requests for assistance in
project-level air quality analyses by non-District in-
dividuals and agencies, the District has prepared a
comprehensive set of guidelines involving manual tech-
niques in "cookbook" format.4  The popularity of this
publication is indicative of an existing need for user
assistance.
Figure 4.  Illustration of a Typical  Setting for Mod-
eling Application on the Project/Element Scale.

 Simplifications, Assumptions, and Parameterizations

     Perhaps one of the greatest drawbacks  to effec-
tive model application is the esoteric nature of many
techniques.  There is an allure associated  with  so-
phistication perpetuated by the aesthetic appeal  of
complex technology and by an intuitive feeling that
complex problems can be solved only by complex
methods.   As a general  rule, we feel  that sophistica-
tion should be sought only when there is true com-
plexity in the nature of the problem  solution and when
the quality of the input data is commensurate with the
requirements of the model.   The photochemical  process
will normally require modeling complexity while  non-
reactive modeling is amenable to considerable simpli-
fication.  The principal benefit to simplicity,  aside
from cost, is the increased breadth of application
through frequent and multiple use.   If the  outputs of
various models are compared in the context of the  in-
formation required, the merits of simplification may
be readily assessed.

     Frequently, the modeling process can be simpli-
fied through proper definition of the problem and  ap-
propriate parameterization of the modeling scheme.
For example, in assessing carbon monoxide levels in a
local area, we might initially consider a model which
would provide point values of concentration at multi-
ple locations and at discrete time intervals.  With
such a model we could then evaluate the highest point
concentration in the area during the period of peak
traffic flow under the most adverse meteorological
conditions.  If, however, we determined that point
values were not of interest in our study area due  to
spatial mobility of receptors (people), more appro-
priate spatial averages might be obtained by a far
simpler and less costly approach.

     An argument against the application of simplified
techniques in air quality regulatory situations is
based on the premise that the social and economic con-
sequences of such decisions are too important to be
based upon analyses exhibiting less than state-of-the-
art accuracy and precision.  While the premise is un-
doubtedly correct, care must be taken to avoid a
never-ending search for the perfect model.  Simplifi-
cation is consistent with the premise if "state-of-
the-art" is defined to include considerations of data
base condition, problem definition, and required in-
formation, in addition to the conceptual and algorith-
mic structure of the analysis scheme itself.  When the
choice of model includes such considerations, useful
estimates can frequently be made with available tech-
niques at minimal  cost.  The District makes liberal
use of such techniques and recommends them to others
for a wide range of applications.  Our efforts in this
regard have resulted, we feel, in a greatly increased
willingness to include air quality considerations
among the many factors normally involved in land use
and other decisions.

 Monitoring, Modeling, and Regulatory Relationships

     Historically, disparate motivations have in-
fluenced the formulation of air quality regulations,
the setting of air quality standards, the establish-
ment of air monitoring programs, and the development
of air quality models.  Regulations have normally been
source-oriented, focusing on equipment or performance
characteristics with a view toward ease of enforce-
ment.  Air quality standards are receptor-oriented,
focusing on time-averaged ambient concentrations re-
latable to effects on health or welfare.  Air monitor-
ing has been site-oriented with a view toward repre-
sentative sampling under economic and facility con-
straints, and finally, model development has been
guided by computational technology under data con-
straints.

     In complying with the comprehensive requirements
of Clean Air Act legislation, efforts in modeling,
monitoring, and regulation should ideally be inte-
grated in a compatible and complementary systems ap-
proach to air quality analysis.   While complete com-
patibility may never be achieved, many improvements
are possible over the present conditions.  In the in-
terest of such improvements, we offer the following
comments:

     1.  Spatial  averaging should be incorporated
wherever possible in the definition and interpretation
of air quality standards as well  as monitoring
                                                      38

-------
programs.  Thus, for example, the standard for pollut-
ant X might be defined as Y parts per million as a
spatial average over Z square kilometers.  Similarly,
air monitoring using statistical/mobile techniques
might provide estimates of existing air quality as a
spatial average on the same scale.  An appropriate
spatial definition would accommodate the resolution
limitations of modeling input as well as the spatial
mobility of human receptors over the averaging times
inherent in the dosage-oriented air quality standards.
In addition, modeling output could be compared, for
validation, with air monitoring data on a compatible
spatial scale.

     2.  Modeling should be incorporated in source
performance regulations to achieve consistency between
emission limitation and desired air quality.  Thus, a
regulation might limit source emissions to a rate
which, on the basis of a given dispersion algorithm,
would maintain ground-level concentrations at a speci-
fied level.

     3.  Air monitoring should be performed at places
other than the traditional downtown urban locations to
better define regional gradients.  Data from nonurban
sites would facilitate the validation of modeling
techniques and would provide needed information on
background levels of pollutants from natural sources.

           Conclusions and Recommendations

     Our experience has convinced us that air quality
modeling is a very useful tool for air quality regula-
tion and planning.  Specifically, modeling has given
us a consistent rationale for decisionmaking and en-
abled  us to provide technically-supportable solutions
to a great variety of problems.  We feel that under
the guidance of appropriate expertise and with suffi-
cient  ingenuity, very simple techniques can be applied
effectively at minimal cost.  We realize that air
quality problems and appropriate modeling techniques
often  substantially differ from region to region, and
for that reason, no single technique or set of tech-
niques can be applied universally with success.  We
feel,  however, that the store of existing techniques
is flexible enough to provide useful solutions to air
quality problems in almost any set of circumstances.
Finally, we feel that, notwithstanding the demon-
strable limitations of modeling technology, decisions
based on modeling results, professionally interpreted,
will better serve the interests of air quality than
those  based on intuition alone.

      We  highly  recommend  that the  framework for a  com-
 prehensive  and  standardized  program  of model applica-
tion be established in each region with existing or
potential air quality problems.  A small interdisci-
plinary staff dedicated to the task of regional model-
ing can provide appropriate guidance for the solution
of air quality problems and the efficient utilization
of resources.  In the interest of establishing such
regional capabilities, we would suggest that considera-
tion be given to state and Federal encouragement as
well as funding.  Our own experience in this regard
has been very positive, and we have welcomed the op-
portunity of sharing our feelings and ideas at this
conference.
                     References

1.  MacCracken, M.C. and G.D. Sauter, Eds., Develop-
    ment of an Air Pollution Model for the San
    Francisco Bay Area, Final Report to the National
    Science Foundation, Lawrence Livermore Labora-
    tory, Livermore, California, Vol. 1, 221  p., 1975.

2.  Thuiller, R.H., A Regional  Air Pollution  Modeling
    System for Practical Application in Land  Use Plan-
    ning Studies, Bay Area Air Pollution Control Dis-
    trict, Information Bulletin 5-17-76, 22 pp., 1973.

3.  	, Air Quality Statistics in Land Use
    Planning Applications, Preprints, Third Conference
    on Probability and Statistics in Atmospheric
    Sciences, Boulder, Colorado, June 1973, pp.  139-
    144, 1973.

4.  Bay Area Air Pollution Control District,  Guide-
    lines for Air Quality Impact Analysis  of  Projects.
    BAAPCD Information Bulletin 6-01-75, 1975.
                                                       39

-------
                SPACE:   A CLOSER LOOK AT THE IMPACT OF ENVIRONMENTAL POLICY

                                        Ernest Heilberg
Chase, Rosen & Wallace, Inc.,
                                      Alexandria,  Virginia
                 Abstract

     The Spatial Pollution Analysis &
Comparative Evaluation (SPACE)  System is a
computer-based model system designed to
indicate the impact of local policy deci-
sions on the environmental quality within
metropolitan areas.  This system was deve-
loped as an adjunct to the Strategic
Environmental Analysis System (SEAS) and
relies heavily on data produced by SEAS.

     Major features of SPACE include:

     1.  The determination of net emissions
and ambient levels of pollution resolved to
a grid system covering the analyzed region.

     2.  The ability to introduce a broad
variety of local environment-related policy
changes.

                Introduction

     Contrary to what the acronym may at
first suggest, the SPACE System deals with
matters very close at hand: pollution within
metropolitan areas.  The actual name, The
Spatial Pollution Analysis and Comparative
Evaluation System, was intended to point out
that the system is concerned with the spatial
distribution of pollution, rather than just
cumulative or average values for the region
considered.  This is the concept of "space"
you should associate with this system.

     There are several important aspects of
the SPACE System which are not indicated by
the name.  First, as suggested before, the
"space" in question is that within a metro-
politan area, specifically an SMSA (Standard
Metropolitan Statistical Area).  Second, the
system has been designed to be sensitive to a
broad range of local environment-related poli-
cies.  Finally, the system has been developed
as an extension of SEAS (the Strategic En-
vironmental Assessment System).1   All of
these features will be discussed in more
detail later.

     Putting these features together, the  in-
tended applications of SPACE should begin  to
become apparent.  The SPACE model system is
essentially a planning tool, for use by both
national and local planners, in screening  en-
vironment-related policy options.  Three spe-
cific categories of application were con-
sidered during the system design efforts:

1)  Use by local planners to compare the
    relative effectiveness and efficiency
    of local environmental quality im-
    provement plans.

2)  Use by EPA planners (in conjunction
    with SEAS) to determine the impact
    of national policies on metropolitan
    areas.

3)  Use by EPA planners, in their de-
    velopment of guidelines for local
    planners, to compare the effects of
    specific local policies on various
    types of metropolitan areas.

     To provide for these applications, SPACE
was designed to consider a broad range of po-
licies as well as a variety of impacts of
such policies.  As a result, SPACE can be
used as a policy screening tool with appli-
cations far beyond those initially intended,
e.g. inclusion of an energy submodel would
permit studies involving both pollution and
energy policies.  Some of the other possible
applications will be alluded to later.

                Background

     Before proceeding with the discussion of
SPACE, it will be useful to review SEAS,
especially those elements of SEAS which are
most pertinent to SPACE. SEAS is a large,
complex collection of forecasting models
which relate the national economy to the
generation of pollution residuals and the
associated costs of pollution control.  It
provides a framework within which a
decision-maker can assess the impact of
alternative national policies related to the
economy and the environment.

     SEAS is driven by a Leontief-type econo-
mic (input-output) model which forecasts the
expected levels of the various economic sec-
tors in the U.S. over a 15 year period. These
projections are converted to pollution fore-
casts, by considering the technologies in-
volved for operation, production and abate-
ment.   The economic and pollution forecasts
are then disaggregated to various sub-regions
of the nation, e.g. states, SMSA's, etc.,
by considering the relative characteristics
of the subregions.  To insure completeness,
pollution sources not directly related to the
economic sectors  (e.g. related to households
and transportation) are introduced at the
disaggregated levels and aggregated upwards.
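
As a minimal illustration of this chain from final demand to sector
output to disaggregated residuals, the Python sketch below uses
invented coefficient matrices rather than actual SEAS data.

import numpy as np

A = np.array([[0.10, 0.04],                     # interindustry (technical) coefficients
              [0.20, 0.15]])
final_demand = np.array([120.0, 80.0])          # forecast final demand by sector

# Sector output levels required to satisfy final demand: x = (I - A)^-1 d
x = np.linalg.solve(np.eye(2) - A, final_demand)

# Pollution residuals: coefficients give tons of pollutant generated
# per unit of sector output (illustrative values only).
residual_coeff = np.array([[0.50, 0.10],        # rows: pollutants, cols: sectors
                           [0.02, 0.30]])
national_residuals = residual_coeff @ x

# Disaggregation to a sub-region (e.g. an SMSA) by its share of each sector.
region_share = np.array([0.03, 0.05])
regional_residuals = residual_coeff @ (x * region_share)
print(x, national_residuals, regional_residuals)
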

     Using SEAS it is thus possible to esti-
mate the amount of the various pollution re-
siduals that can be expected to be generated
nationwide or within specified sub-regions of
the nation, and to note the possible changes
in these estimates that result from imple-
menting various national policies.  It should
be obvious that the results obtained are most
meaningful when considering the nation as a
whole or the larger subregions.  As consider-
ation turns to the smaller sub-regions, e.g.
SMSA's, local policy decisions can be ex-
pected to have a significant impact not
reflected in the SEAS analysis.
                                             40

-------
     The desire to provide for the more
meaningful use of the disaggregated pro-
jections led to the conception of the SPACE
System.  To demonstrate the feasibility of
this concept, the Washington Environmental
Research Center of EPA let a contract to
Chase, Rosen & Wallace, Inc. (CRW) and Alan
M. Voorhees & Assoc., Inc. (AMV) to develop
the SPACE Test System.  It is this test sys-
tem, completed in mid-1975, that is the sub-
ject of this presentation.

          SPACE System Applications

     SPACE is a working system which can be
of value in problem solving.  Its main
utility is as a tool in comparing the rela-
tive impact of various local policy options
on pollution in a metropolitan area.  Thus,
for a  locality attempting to meet specific
environmental goals, SPACE could be useful
in screening policy alternatives.

     The policy alternatives that can be
treated are essentially unconstrained by
the SPACE System.  These include: (1) di-
rect pollution control programs, such as
improving the capacity and/or quality of
solid  waste management systems or water
treatment systems, setting stricter emis-
sion standards for factories or motor
vehicles;  (2) land use controls, such as
zoning, limiting emissions in specific
locations, designating open land, select-
ing specific sites for major facilities
(e.g.  airport, sports arena, etc.);
(3) auto use deterrents, such as establish-
ing auto free zones, designating special
bus and car pool lanes, improving mass
transit, increasing fuel and parking costs;
and (4) other indirect means, such as pro-
viding economic incentives for private
emission control actions, establishing con-
straints on use of specific fuel types.
This listing is by no means exhaustive; but
rather, illustrates the broad range of pos-
sible  considerations.

     The metropolitan areas that can be
considered are limited only by the availa-
bility of data.  As will be discussed later,
EPA has taken steps that should ultimately
eliminate this limitation.   With data avail-
able for sufficient SMSA's, it will be pos-
sible  for EPA to determine the differing
impact of specific policies on different
types  of cities.   Guidance provided by EPA
to local planners could thus be tailored to
the specific locality.

     By design, SPACE measures  the impact
of policy options in terms  of pollution.
In the process of making such determina-
tions,  the system becomes involved in ana-
lyses related to  local  economic activity,
land use,  transportation,  and energy.   With
minor modifications,  mainly related to the
massaging  and display of intermediate
results,  the  possible applications  of
SPACE can  be  extended to include  the  mea-
surement of  policy impact in  a  variety of
forms.   For  example,  SPACE  could  indicate
the trade-off between pollution generated
and fuel consumed resulting from policies
that  encourage or discourage  use  of  specific
fuels;  or it could indicate the inability
of the region to meet the SEAS economic pro-
jections as a result of policies constrain-
ing factory emissions.   The combination of
the wide range of policies that can be
treated and the variety of impact measures
possible, results in a highly flexible model
for local policy screening.

           SPACE System Overview

     The metropolitan area under study is
initially a flat, relatively empty rectangu-
lar grid system.  It contains rivers, high-
ways, railroad tracks,  but little else.  For
each analysis year the SEAS projections in-
dicate the amount of residential, industrial,
commercial, and other developments that are
expected to exist in the region.  These
are distributed over the grid system by
considering the attractions and constraints
associated with individual grid squares,
the features of neighboring grid squares,
and the historical land use patterns.  This
distribution is intended to be represen-
tative, not predictive.  Each activity
thus located becomes a single stationary
pollution source.

     In addition to any pollution generated
directly by these activities, each activity
has the potential of attracting pollution
through the motor vehicles that come to or
leave its facilities.  These mobile pollution
sources are given location in the grid system
by associating trip ends directly with the
located activities and by distributing the
trip routes over the implied transportation
network.

     With both stationary and mobile pollu-
tion sources located, gross pollution gener-
ated in each grid square is determined, based
on size and type of each source.  The actual
pollution emitted to the environment is next
determined by considering two types of pollu-
tion modification: transformation and trans-
portation.

     Pollution transformation generally in-
volves some technological process for con-
verting the polluting substances into other
substances.  These other substances may also
be considered pollution, as in the case of
burning solid waste to produce a smaller
volume of ash plus air pollution.  Some of
the new substances may, however, be useful
materials, as in the conversion of some
solid waste to fertilizer.

     Pollution transportation refers to the
physical movement of the residuals from one
location to another, generally with no sig-
nificant change in the substances transport-
ed.  This includes the piping of sewage
waste to treatment plants and the hauling of
solid waste to land fills and incinerators.
At their destination this transported pollu-
tion may be transformed.
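
A minimal sketch of this residual bookkeeping follows; the residual
names, transformation factors, and grid contents are illustrative and
are not drawn from the SPACE data base.

# Transformation: one residual becomes other residuals (e.g. incineration
# turns solid waste into a smaller mass of ash plus air emissions).
TRANSFORM = {
    "solid_waste": {"ash": 0.15, "particulates": 0.02},   # tons out per ton in
}

def transform(residual, amount):
    """Return the residuals produced when 'amount' of 'residual' is processed."""
    return {out: factor * amount for out, factor in TRANSFORM.get(residual, {}).items()}

# Transportation: residuals move between grid squares with no change in
# substance (e.g. sewage piped to a treatment plant, waste hauled to a fill).
def transport(grid, src, dst, residual, amount):
    grid[src][residual] -= amount
    grid[dst][residual] = grid[dst].get(residual, 0.0) + amount

grid = {(3, 4): {"solid_waste": 100.0}, (7, 2): {}}
transport(grid, (3, 4), (7, 2), "solid_waste", 100.0)   # haul to an incinerator square
print(transform("solid_waste", 100.0))                  # {'ash': 15.0, 'particulates': 2.0}
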

     The transportation of pollution resid-
uals to activities which may in turn trans-
form them introduces a special concept.
Each category of activity is designated as
being either "exogenous" or "endogenous".
                                             41

-------
Exogenous activities are those whose
operating levels are determined external
to the region, e.g.  steel mills operate at
levels determined by national and inter-
national demands.  Endogenous activities,
on the other hand, are those whose operat-
ing levels are determined within the
region, e.g. water treatment plants.  This
distinction is important, as the pollution
generated is more a function of the operat-
ing level than the actual size of the
activities.

     The operating levels of endogenous
activities are taken to be proportional to
the demands placed on them by other acti-
vities in the region, both exogenous and
endogenous.   In SPACE these demands are
measured in terms of the pollution trans-
ported to the endogenous activities.  Not
all endogenous activities can be character-
ized as receiving and processing pollution
residuals, however.   Thus, for electric
power generating stations, and other such
endogenous activities, dummy pollutants must
be defined.   These dummy pollutants are
simply the demands,  e.g. electric power de-
mand.  Like actual transported pollutants,
these dummy pollutants are used primarily  to
determine the operating level of the receiv-
ing activities.
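
The sketch below restates the distinction in code form; the activities,
capacities, and coefficients are assumed for illustration only.

# Exogenous activities: operating level set outside the region,
# e.g. a fraction of capacity fixed by national demand.
steel_mill_level = 1.0

# Endogenous activities: operating level proportional to the demand
# placed on them, measured as pollution (or a dummy pollutant)
# transported to them.
def operating_level(received, capacity):
    """Operating level of an endogenous activity, capped at its capacity."""
    return min(received / capacity, 1.0)

sewage_received = 8.0e6           # gallons/day piped to the treatment plant
plant_capacity  = 1.0e7
treatment_level = operating_level(sewage_received, plant_capacity)

# For activities that receive no real residuals (e.g. power stations),
# a "dummy pollutant" carries the demand instead.
power_demand  = 450.0             # MW of electric demand routed as a dummy pollutant
station_level = operating_level(power_demand, 600.0)

# Emissions then scale with operating level rather than physical size.
so2_per_full_output = 40.0        # tons/day at full output (illustrative)
so2_emitted = so2_per_full_output * station_level
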

     Through these various considerations,
net pollution emitted in each grid square  is
determined.   The final consideration in SPACE
is the possible dispersion of the emitted
residuals in air or water.  This yields
ambient levels of the residuals in each grid
square.

     As suggested by the foregoing discus-
sion, SPACE, starting with an empty region
grid for each analysis year, essentially
creates a snapshot of the region for an
average day during the year.  This does not,
however, mean that each year is analyzed in-
dependently.  The carryover from one ana-
lysis year to the next is provided for in
three ways.   First,  the data obtained from
SEAS already reflects the impact of time on
such items as economic productivity and
growth, technological advances, population
growth, changes in demand for industrial
output, etc.  Second, user designated policy
changes remain in effect unless again
changed.  In particular, auto emission stan-
dards specified for one year influence auto
emissions for cars of that model year when
analyzing subsequent years.  Finally, land
use patterns determined for one analysis
year are a major consideration in specific
activity location in subsequent analysis
years.  This approach in considering changes
over time is most meaningful if the interval
between analysis years is at least 3 years,
and preferably 5 years or more.

           SPACE System Structure

     SPACE was originally conceived as a
patchwork of existing models and data bases.
It was not possible to strictly adhere to
this concept, however, since existing ele-
ments were not always sufficiently com-
patible.  As an alternative, the system was
designed in modular form with usable exist-
ing elements adapted as modules and missing
links developed specifically for the test
version of SPACE.  This design provides for
relatively easy replacement of components
as better models or data files become avail-
able.

     The resulting modular system can be con-
sidered to consist of 13 components.  Each of
these will be discussed briefly.

SEAS Files

     Reference has already been made to the
use of SEAS data as the basis for SPACE anal-
yses.  Although much of the SEAS data is used
in some form, two SEAS files are read direct-
ly by SPACE.  These are the disaggregated
economic projections and the pollution resi-
dual coefficients for the various economic
sectors.  The residual coefficients are esti-
mates of pollution produced per unit of the
activity, reflecting operating and/or pro-
duction processes used, and, optionally,
abatement processes used.

Modal City Files

     The Modal City Files refer to the col-
lection of data required as input to SPACE to
describe the specific SMSA's being analyzed.
Lest the idea of developing such files for
each application scare off potential users,
the original design of SPACE included the
concept of Modal Cities  (or Modal Regions).
This concept envisions the development of a
typology for the SMSA's, such that a small
number of (actual or composite) SMSA's would
be used to represent all SMSA's.  These
representative SMSA's would constitute the
Modal Cities.  A Modal City File would then
be developed for each, and made part of the
SPACE System.  Efforts towards this end have
been initiated recently, through an EPA con-
tract with Urban Systems Research & Engi-
neering, Inc.  Currently SPACE contains only
a single Modal City File, developed speci-
fically for the test system.

Permanent Data Files

     The major portion of the remaining in-
puts for SPACE will be referred to as the
Permanent Data Files.  These include data
reflecting generalized behavior patterns,
national averages, etc.  Perhaps the most
important items in these files are the des-
criptions of the economic sectors, including
employment information, area requirements,
etc.  Other important data included in these
files are pollution transformation factors
and auto emissions standards.

MSPACE

     The main program for the SPACE model  is
referred to as MSPACE.   Its functions are  to
read the input data files, create a working
data base, sequence the analysis through the
designated analysis years, sequence the exe-
cution of assessment modules for each analy-
sis year and create history files for pos-
sible future use.  The design of this program
is such as to permit relatively independent
                                              42

-------
design and operation of the other component
modules.

OVRIDE

     A major design feature of SPACE is the
means for introducing environment related
policy.  Subroutine OVRIDE serves this func-
tion.  At the start of each analysis year,
following the automatic updating of the
working data base, SPACE allows the user
to specify a broad spectrum of modifica-
tions reflecting new policies.  These range
from simple speed limit changes to more com-
plex land use constraints.

Activity Allocation

     The first of the major submodels exe-
cuted each analysis year is the activity
allocation module.  This submodel distributes
units of the various activities (population,
specific industries, commercial activities,
etc.) among the grid squares of the region.
In dispersing a given activity an attrac-
tiveness index, relative to that type of
activity, is calculated for each grid square.
Within the constraints of minimum activity
size, available land, and local policy, each
activity is distributed in proportion to the
size of the index.  The total number of units
distributed is essentially that implied by
the SEAS projections.  Where the minimum
size is relatively small in comparison to
total sector size, as in the case of housing,
the resulting allocation will be highly dis-
persed.  Where the minimum size is large,
such as with heavy industry, the allocation
will be more concentrated.

     The attractiveness indices, which are
the major bases for the allocation process,
consider a wide range of grid square cha-
racteristics.  These include accessibility,
historical land use, distance from the
central business district, the proximity
of related activities and, if pertinent,
the proximity of an employment base.  The
importance assigned each such attraction
factor is varied with the type of activity
being allocated.
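
A highly simplified version of such an attractiveness-weighted
allocation might look like the sketch below; the attraction factors,
their weights, and the constraint handling are assumptions rather than
the SPACE algorithm itself.

import numpy as np

def attractiveness(accessibility, historical_use, cbd_distance_km, weights):
    """Composite attractiveness index for one grid square; the factor list
    and weighting scheme are assumptions, not the SPACE formulation."""
    return (weights[0] * accessibility
            + weights[1] * historical_use
            + weights[2] / (1.0 + cbd_distance_km))

def allocate(total_units, min_unit, index, available_land, land_per_unit):
    """Distribute 'total_units' of an activity in proportion to the index,
    honoring minimum activity size and available land in each square."""
    share = index / index.sum()
    units = np.floor(total_units * share / min_unit) * min_unit
    return np.minimum(units, np.floor(available_land / land_per_unit))

index = np.array([attractiveness(a, h, d, (0.5, 0.3, 0.2))
                  for a, h, d in [(0.9, 0.4, 2.0), (0.6, 0.8, 5.0), (0.2, 0.1, 12.0)]])

# A small minimum size (housing) disperses the allocation; a large
# minimum size (heavy industry) would concentrate it.
housing = allocate(total_units=10_000, min_unit=1, index=index,
                   available_land=np.array([3.0, 4.0, 6.0]),
                   land_per_unit=0.0005)
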

Transportation

     The transportation module operates di-
rectly on the results of the activity allo-
cation module.  Associated with each located
activity are a number of trip ends, based on
the activity type and size.   These trip ends
include those for work trips, business trips,
shopping and recreation trips, freight pick-
up and delivery, etc.  The trips associated
with each pair of trip ends are then catego-
rized by type (auto, bus, rail transit, etc.)
through consideration of the local modal
split.   This modal split is determined from
existing facilities, established patterns
and local policies which encourage or dis-
courage use of specific modes.

     The  trips themselves, specifically the
vehicle miles traveled (VMT), are then dis-
tributed  over the grid squares in a manner
similar to the distribution of activities.
Grid square attractiveness for trip miles is
based on  facilities available in the grid
square (e.g. highway lane-miles, bus seat-
miles, etc.), plus the number of trip ends
located in or adjacent to the grid square.
Finally the transportation module determines
the average vehicle speeds in each grid
square by considering legal speed limits and
congestion implied by the ratio of the VMT
distribution to highway capacity.
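
The following sketch traces these steps for a three-square example; the
trip rates, modal split, and capacity figures are illustrative
assumptions.

import numpy as np

trip_ends = np.array([12_000.0, 8_000.0, 3_000.0])     # trip ends per grid square
modal_split = {"auto": 0.80, "bus": 0.15, "rail": 0.05}

# Auto trips become VMT and are spread over squares in proportion to a
# facility-based attractiveness (highway lane-miles plus nearby trip ends).
lane_miles = np.array([10.0, 6.0, 2.0])
vmt_attract = lane_miles + 0.001 * trip_ends
auto_vmt_total = modal_split["auto"] * trip_ends.sum() * 4.5   # assumed 4.5 miles per trip
vmt = auto_vmt_total * vmt_attract / vmt_attract.sum()

# Average speed falls off with congestion (VMT relative to capacity),
# bounded above by the legal speed limit.
speed_limit = np.array([55.0, 45.0, 30.0])
capacity = lane_miles * 8_000.0                                # assumed daily VMT capacity
speed = speed_limit / (1.0 + (vmt / capacity) ** 2)
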

Pollution

     The SPACE pollution module has been
built around the Georgetown University IMMP
Model.   Its main function is to convert data
in the SPACE data base so as to be compatible
with IMMP input requirements.  This module
controls the execution of IMMP, but the IMMP
model, as modified for SPACE, controls the
execution of other submodels within the
pollution module.

IMMP

     The IMMP Model primarily takes care of
the pollution analysis and accounting,  in-
cluding the considerations of pollution
transformation, transportation and dis-
persion.  In addition it determines the
operating levels of the endogenous activi-
ties.  Some modifications have been made to
the IMMP program for its use in SPACE.   These
primarily involve the use of IMMP to control
the execution of other components, specific-
ally MOBILE, STORM and MAPS.

     The only significant modification made
to the analysis in IMMP relates to the de-
termination of net pollution emissions by
endogenous activities.  The basic IMMP Model
treats such emissions as being directly pro-
portional to the operating level of the acti-
vity.  As modified for SPACE, the net emis-
sions from endogenous activities also reflect
the mix of pollutant residuals received.

MOBILE

     Subroutine MOBILE was developed to
complement IMMP, since the basic IMMP Model
is limited in its ability to treat mobile
pollution source emissions.  This routine
determines net emissions from motor vehicles
in each grid square, and then passes the
results to IMMP for inclusion in grid square
totals.  MOBILE considers both trip end and
running emissions.  It considers the mix of
vehicle types, fuels used and average speeds.
It further reflects emission standards and
age distributions of the vehicles, which
affect emission control quality.
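
A minimal sketch of this kind of grid-square calculation is given
below; every emission factor and rate in it is an illustrative
assumption rather than a MOBILE coefficient.

def running_emission_factor(speed_mph, model_year):
    """Grams of CO per vehicle-mile; a crude stand-in for speed- and
    standards-dependent emission factors."""
    base = 60.0 if model_year < 1975 else 25.0      # pre/post emission standards
    return base * (20.0 / max(speed_mph, 5.0))      # emissions rise at low speeds

def grid_square_co(vmt_by_year, trip_ends, speed_mph):
    """CO emitted in one grid square: running emissions over the fleet age
    mix plus a fixed per-trip-end (start/soak) increment."""
    running = sum(vmt * running_emission_factor(speed_mph, yr)
                  for yr, vmt in vmt_by_year.items())
    trip_end = 30.0 * trip_ends                     # grams per trip end (assumed)
    return (running + trip_end) / 1.0e6             # metric tons of CO

co_tons = grid_square_co({1972: 40_000.0, 1976: 60_000.0},
                         trip_ends=9_000.0, speed_mph=22.0)
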

STORM

     One source of water pollution, not
reflected in the other components dis-
cussed is the runoff from rain or melting
snow.  Since other analyses in SPACE re-
flect an average day during each analysis
year, direct integration of this type of pol-
lution would not be meaningful.  To handle
this consideration, the Corps of Engineers'
STORM Model3 was adapted.  This model analy-
zes the precipitation history in a region
for a full year.  It isolates precipitation/
runoff events and determines the pollution
content of the runoff for each event.  It
                                              43

-------
further considers treatment of the runoff
(in a quantitative sense only) and possible
storage and overflow of runoff awaiting
treatment.

     For purposes of SPACE analyses, STORM
has been modified to isolate the results for
an average rain day and a worst rain day.
The levels of overflow and treatment thus
determined are passed back to IMMP as amounts
of pollution residuals dumped directly into
streams and amounts transported to water
treatment plants, respectively.

     To accommodate this use of STORM, IMMP
was further modified.  IMMP now determines
the remaining capacity of the water treat-
ment plants, which it passes to STORM.  Upon
receiving the results of the STORM analysis,
IMMP produces three sets of water pollution
results: one for an average dry day, one
for an average rain day and one for a worst
rain day.
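
The sketch below illustrates the average and worst rain-day bookkeeping
in simplified form; the event threshold, runoff coefficient, pollutant
loading, and treatment rate are assumed values, not STORM parameters.

def runoff_events(daily_precip_in, threshold=0.05):
    """Indices of days treated as precipitation/runoff events."""
    return [i for i, p in enumerate(daily_precip_in) if p > threshold]

def event_loads(daily_precip_in, runoff_coeff=0.4, lb_per_inch=500.0,
                treatment_rate=150.0):
    """For each event, split the runoff pollutant load into the part sent
    to treatment and the part that overflows directly to streams."""
    loads = []
    for i in runoff_events(daily_precip_in):
        load = runoff_coeff * daily_precip_in[i] * lb_per_inch
        treated = min(load, treatment_rate)
        loads.append((treated, load - treated))     # (to treatment, overflow)
    return loads

precip = [0.0, 0.3, 0.0, 1.7, 0.1, 0.0, 0.6]        # one illustrative week
loads = event_loads(precip)
avg_rain_day = tuple(sum(v) / len(loads) for v in zip(*loads))
worst_rain_day = max(loads, key=lambda t: t[0] + t[1])
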

MAPS

     The output formats incorporated in the
basic IMMP and STORM models were not felt
to adequately meet the needs of SPACE System
users.  As a result a general purpose rou-
tine was developed to produce rectangular
grid displays, showing appropriate data for
each grid square in the region.  This rou-
tine, referred to as MAPS, is used to dis-
play a wide variety of SPACE output, includ-
ing the net emissions and ambient levels of
individual pollution residuals in each grid
square.

History Files

     The last of the major SPACE System com-
ponents are the History Files.  Following
the analysis for each year, the working data
base is copied to a semi-permanent History
File.  This feature allows the user to effi-
ciently analyze variations of previous runs.
Thus following a run involving 5 analysis
years, the user may desire to rerun the
situation with specific policy modifications
introduced at the start of the third analysis
year.  This would be accomplished by using
the History File created at the end of the
second year to recreate the data base as it
existed.  The policy modifications would
then be entered, and SPACE would be executed
for only the last three analysis years.

          SPACE Run Preparation

     The mechanics of actually using the
SPACE System can be very simple or relatively
complex, depending on the degree of complex-
ity associated with the policy options to be
analyzed.  Because of the variety of perma-
nent files incorporated in the system, the
use of SEAS Files, and the anticipated exis-
tence of the Modal City Files, the required user
input, other than policy descriptions, is
minimal.  These inputs include designation of ana-
lysis years, the Modal City File to be used
and, if pertinent, the History File to be
used for restart runs.

     User specified policies are entered by
modifying data in the working data base.
 The  complexity  involved  in  entering such
 policies  is  somewhat  reduced  by  using a
 variable  format  approach for  designating
 modifications.   Thus,  the user need only be
 concerned with  the  specific items  to be
 changed.  It  is  in  the determination of
 items  to  be  changed and,  to a lesser degree,
 the  new values  to be  introduced  that the
 complexities  arise.   The  test system has
 been designed to give  the user maximum free-
 dom  in selecting policies to  be  analyzed.
 Thus there is no shopping list of  options
 for  the user  to  select from.  Rather there
 is a list of  factors  that can be changed,
 e.g. speed limits,  auto  emission stan-
 dards, abatement process  effectiveness,  etc.
 The  user  must translate  his policy choice
 into value changes  for one  or more of the
 factors.

     The  various computer files  that con-
 stitute the SPACE Test System all  exist  on
 a single  EPA  disk volume  located at  the
 Optimum Systems, Inc.  (OSI) facility in
 Rockville, Maryland.  To  further aid users
 in run preparation, all pertinent  files, in-
 cluding sample JCL  instructions, have been
 stored so as  to permit data input  and job
 execution from remote terminals  using the
 WYLBUR language.  A draft users  guide4 for
 the test  system is  available  for those de-
 siring more details on this aspect of the
 system.

               Summary

     This, then, is the SPACE System: a
 collection of computer models and  data
 bases capable of further  disaggregating the
 determinations from SEAS, in  such  a way as
 to allow analysis of the  impact  on SMSA's of
various local policies.   It is a highly
 flexible system which, although  intended
primarily for the study of environmental
pollution, can be used to study  a variety
 of environment-related phenomena.

              References
1.  U.S. Environmental Protection Agency,
    "Strategic Environmental Assessment
    System," Draft Report, Washington, D.C.,
    December 16, 1975.

2.  Paik, Inja K., et al., "The Integrated
    Multi-Media Pollution Model,"
    EPA-600/5-74-020, Office of Research
    and Development, U.S. Environmental
    Protection Agency, Washington, D.C.,
    February 1974.

3.  The Hydrologic Engineering Center,
    "Urban Storm Water Runoff - STORM,"
    Draft 723-S8-L2520, U.S. Army Corps of
    Engineers, Davis, California,
    October 1974.

4.  Chase, Rosen & Wallace, Inc., "Spatial
    Pollution Analysis and Comparative
    Evaluation (SPACE) System - Users' and
    Operators' Guide," Draft Report,
    Alexandria, Virginia, July 1975.
                                              44

-------
                               RIBAM, A GENERALIZED MODEL FOR

                       RIVER BASIN WATER QUALITY MANAGEMENT PLANNING
            Richard N. Marshall, Stanley G. Chamberlain, Charles V. Beckers, Jr.
                               Environmental Systems Analysis
                           Oceanographic & Environmental Services
                                      Raytheon Company
                                  Portsmouth, Rhode Island
ABSTRACT

To meet water quality objectives in streams
and rivers, a need arises for systematic
analysis of alternative pollution abatement
strategies.  The computerized mathematical
model, RIBAM (River BAsin Model), predicts
water quality for 17 constituents, including
DO, carbonaceous BOD, and parameters that
represent nitrification and photosynthetic
processes.  Predicted water quality profiles
throughout the basin for varying sets of
waste  loads and flow regimes can be compared
with each other and with desired water
quality goals.   RIBAM is suited for determin-
ing the waste load allocations necessary
for achieving water quality standards in
rivers.  A unique calibration method, based
on open-channel hydraulic equations, for an
exponential relationship between stream
velocity and flow is presented.

The basic assumptions of RIBAM are that
steady-state conditions exist and that the
concentrations of water quality parameters
are well mixed, varying only in the longitud-
inal direction of the stream.  The applica-
tion of RIBAM to the Beaver River Basin,
including the Mahoning River, in Ohio and
Pennsylvania is discussed.

BACKGROUND

RIBAM was developed by Raytheon Company
under a project sponsored by the US Environ-
mental Protection Agency to provide a veri-
fied, computerized mathematical model of the
water quality in selected portions of the
Beaver River Basin.  RIBAM is a major modifi-
cation of the DOSAG model.1  In most cases,
predicted values of water quality parameters
at several Basin locations agreed with
previously measured values during three simu-
lated time periods.

RIBAM can be used by EPA, state and local
agencies, and consulting firms for basin-
wide water quality planning,  in accordance
with PL 92-500, the Federal Water Pollution
Control Act Amendments of 1972.  Raytheon
held a model training seminar for the rele-
vant agencies in Ohio.  RIBAM is presently
being used by the EPA Region V Michigan-Ohio
District Field Office in Cleveland in an on-
going project to determine waste load con-
ditions that most favorably meet water
quality objectives in the Mahoning River,
Ohio.

MODEL ASSUMPTIONS

In RIBAM, it is assumed that steady-state
conditions exist in which the basin condit-
ions are invariant with time.  The basin
conditions include the various effluent
waste loads, stream flow, velocity, depth,
and the model parameters, such as reaction
rates and coefficients.  Basin conditions
can vary spatially, but only along the longi-
tudinal direction of the stream.  The water
quality constituents modeled in RIBAM are
assumed to have uniform values throughout
any cross section of the stream at any given
basin location.

BASIN NETWORK

RIBAM analyzes a river basin as a network
consisting of the following four basic
components:

     Junction - confluence between two
     streams within the river basin.
     Stretches - length of river between two
     junctions.
     Headwater Stretches - length of river
     from a headwater to its first junction
     with another stretch (either headwater
     or normal).

     Segments or Reaches - subunits of
     length that comprise a stretch
     (either headwater or normal).

In RIBAM, segments are defined such that the
model parameters are assumed to be invariant
throughout the length of the entire segment.
At the head of each segment, new values of
model parameters can be defined and addition-
al flows and waste loads may enter the
stream.  Figure 1 demonstrates the modeling
network for the RIBAM application to the
Beaver River Basin.

SOLUTION TYPES IN RIBAM

The in-stream reactions that affect the con-
centrations of the 17 water quality parame-
ters in RIBAM are represented by differen-
tial equations.

All of the differential equations have
analytical solutions, which are computed in
a piecewise continuous manner along the en-
tire length of the river basin network.  More
specifically, a mass balance is computed as
additional flows and waste loads enter the
stream at the head of a segment.  The solu-
tion for concentration of each water quality
parameter is then computed for the length of
the segment.  The concentration at the down-
stream end of the segment is then an input
to the mass balance at the head of the next
down-stream segment.
                                             45

-------
Figure 1.  RIBAM Segmentation Network for
           Beaver River Basin

The differential equations and analytical
solutions of the water quality parameters
can be categorized into three types:

     conservative
     non-conservative, non-coupled

     non-conservative, coupled

The conservative solution defines the concen-
tration of the water quality parameter as
being constant throughout the segment.   The
conservative equation is:
     dC/dt = 0                                    (1)

     where C = concentration of water quality
               parameter (usually mg/l)
           t = time (days)

The conservative solution is:

     C(t) = C0                                    (2)

     where C0 is concentration at the head of the
     segment after the mass balance is computed
     (i.e., at time equal to zero) and t is time
     of travel through the segment.

The conservative parameters,  or those param-
eters whose concentrations are defined by
the conservative solution, are:
     Sulfates
     Manganese
     Iron
     Total Nitrogen
     Dissolved Solids
     Lead
     Chlorides
In RIBAM, the mass exchange at the head of a
segment is categorized according to three
source  types; 1) tributary sources,  2)
municipal sources (discharge of treated
municipal sewage), and 3)  industrial sources.
For each source type and each segment,  the
RIBAM user may select one flow value and one
concentration value for each water quality
parameter.  If multiple sources of a single
type are located at a segment head, flow and
                                                 concentration must be combined externally
                                                 for use in RIBAM.   Tributary and municipal
                                                 source types are similar because they repre-
                                                 sent flow and mass additions to the system.
                                                 They are distinguished mainly to facilitate
                                                 easier model use and interpretation of
                                                 results.  The tributary source type can be
                                                 used to represent  a withdrawal, by specifying
                                                 a negative flow value.  The industrial source
                                                 type represents the mass added to water that
                                                 is circulated through a facility for use in
                                                 its industrial processing.  The mass balance
                                                 equation is:
     C0 = (QsCs + Q1C1 + Q2C2 + Q3C3)/(Qs + Q1 + Q2)      (3)

     where:

     Qs = stream flow entering from the upstream
          reach (cfs)

     Q1 = flow added by tributary sources (cfs)

     Q2 = flow added by municipal wastewater
          sources (cfs)

     Q3 = flow passing through industrial
          sources (cfs)

     Cs = concentration of parameter at downstream
          end of the upstream segment (usually mg/l)

     C1 = concentration of parameter in tributary
          streams (usually mg/l)

     C2 = concentration of parameter in municipal
          wastewater sources (usually mg/l)

     C3 = net change in concentration between intake
          and discharge of industrial process water
          (usually mg/l)

The equation for the non-conservative, non-
coupled solution is:

     dC/dt = -KC                                  (4)

     where K is the reaction rate of the
     constituent.

The solution to equation (4) is:

     C(t) = C0e^(-Kt)                             (5)
                         In RIBAM,  the following constituents are
                         defined by the non-conservative, non-coupled
                         solution:
     Phosphorus
     Ammonia Nitrogen
     Cyanides
     Phenols
     Carbonaceous BOD
     Coliforms
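
To make the piecewise computation concrete, the sketch below marches a
non-conservative, non-coupled parameter through a short chain of
segments using the mass balance of equation (3) and the solution of
equation (5); the flows, concentrations, and rate constants are
illustrative, not Beaver Basin data.

import math

def mass_balance(Qs, Cs, Q1, C1, Q2, C2, Q3, C3):
    """Equation (3): concentration at the head of a segment."""
    return (Qs * Cs + Q1 * C1 + Q2 * C2 + Q3 * C3) / (Qs + Q1 + Q2)

def decay(C0, K_per_day, travel_days):
    """Equation (5): first-order loss over the time of travel."""
    return C0 * math.exp(-K_per_day * travel_days)

segments = [   # (Q1, C1, Q2, C2, Q3, C3, K, travel time in days)
    (10.0, 0.2, 5.0, 8.0, 2.0, 1.0, 0.3, 0.4),
    ( 0.0, 0.0, 3.0, 6.0, 0.0, 0.0, 0.3, 0.6),
]

Qs, Cs = 200.0, 0.5        # flow (cfs) and concentration entering the first segment
for Q1, C1, Q2, C2, Q3, C3, K, t in segments:
    C_head = mass_balance(Qs, Cs, Q1, C1, Q2, C2, Q3, C3)
    Cs = decay(C_head, K, t)               # becomes input to the next mass balance
    Qs = Qs + Q1 + Q2                      # industrial flow Q3 passes through
    print(f"segment end: Q = {Qs:.1f} cfs, C = {Cs:.3f} mg/l")
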
                         The non-conservative, coupled parameter
                         equation links the constituent in concern
                         with one or more other constituents.  A
                         unique equation and analytical solution
                         exists for each of the following coupled
                         parameters:

                              Nitrite Nitrogen
                              Nitrate Nitrogen
                              Chlorophyll a
                              Dissolved Oxygen (DO)
                                              46

-------
The relationships among constituents for the
coupled parameters are shown in Table 1.

TABLE 1.  RELATIONSHIPS AMONG COUPLED
          PARAMETERS
     COUPLED PARAMETER        COUPLED TO

     Nitrite Nitrogen         Ammonia Nitrogen

     Nitrate Nitrogen         Nitrite Nitrogen
                              Ammonia Nitrogen

     Chlorophyll a            Phosphorus
                              Nitrate Nitrogen
                              Nitrite Nitrogen
                              Ammonia Nitrogen

     Dissolved Oxygen         Iron
                              Nitrite Nitrogen
                              Ammonia Nitrogen
                              Carbonaceous BOD
                              Chlorophyll a
The mathematical relationships among coupled
parameters represent the natural processes
of nitrification, bacterial oxidation of
organic material, and photosynthesis.  The
equation for DO also includes terms for
reaeration and benthic demand.  Reference 2
presents the mathematical form for each
coupled parameter.

The reaeration coefficient, K17, may be
specified for each reach, or it may be
computed by:

     K17 = AV^B/D^C                               (6)

     where V = stream velocity (fps)
           D = stream depth (feet)

and A, B, and C are coefficient values, which
have been determined in previous field
studies.3  RIBAM also predicts reaeration at
dams.2

CALIBRATION OF THE MODEL

In simulating water quality  in a river basin
for a previously observed time period, the
predicted values of the model are compared
with measured values.  The model is cali-
brated to the river basin when agreement is
attained between predicted and measured
values.

Each reach of the stream has a unique set of
model parameters, reaction rates and co-
efficient values, that affect the predicted
values.   The model is calibrated by adjusting
the model parameter values.  The sensitivity
of model predictions to the model parameters
describes the relative change in predicted
values due to variations in  the model
parameter values.  Figure 2  demonstrates the
comparison between predicted and measured
values for the calibration of dissolved
oxygen in the Mahoning River, Ohio for a time
period in July-August 1971.  The calibrated
model is usually verified with a favorable
comparison of predicted and measured values
for simulations using the calibrated model
parameter values for one or more previously
observed time periods.
Figure 2.  Comparison of Predicted and
           Measured Values of Dissolved
           Oxygen, Mahoning River

The stream velocity is an important term in
RIBAM, because it is inversely related to t,
the time of travel through a reach.  For
non-conservative, non-coupled parameters,
the amount of reactant removed by natural
processes is exponentially related to t
(see equation (5)).  Similarly, the mass lost
or gained by coupled parameters is sensitive
to the value of t.

The velocity of a reach is estimated by:
     V = aQ^b                                     (7)
                                           where Q = stream flow  (cfs)
                                      and a, b are coefficient values.  The
                                      values may be determined from a statistical
                                      analysis of several flow-velocity observa-
                                      tions within the reach.  Frequently, obser-
                                      vations are limited to a time of travel
                                      measurement over a length of stream for one
                                      flow condition.

                                      Consequently, a method is developed to deter-
                                      mine the coefficients from  limited data.
                                      This method applies the basic hydraulic
                                      equations for open channel  flow to obtain
                                      several pairs of velocity and flow values
                                      for each reach.  A statistical regression is
                                      then applied to these values to determine
                                      the coefficients of equation (7).  The first
hydraulic equation4 is:

     Q = (1.49/n)AR^(2/3)S^(1/2)                  (8)
                                              47

-------
     where:
     A = cross sectional area (ft²)
     R = cross sectional area divided by
         wetted perimeter (hydraulic radius, ft)
     S = slope or energy gradient
     n = Manning coefficient

In this method, rectangular stream cross-
sections are assumed, yielding the following
definitions:

     R = DW/(W + 2D)                              (9)

     where W = stream width (ft)

and

     A = DW                                       (10)

Upon substitution, equation (8) becomes

     -K^3W^5D^5 + 4Q^3D^2 + 4WQ^3D + Q^3W^2 = 0   (11)

     where K = 1.49S^(1/2)/n
Depth is the only unknown quantity in
equation (11) if Q is defined as a measured
flow value or treated as an independent
variable.  Equation (11) is a polynomial in
D, which can be solved numerically using
Newton's method.

When depths have been determined by solving
equation (11),  the velocities are computed
using the hydraulic equation.4

     V = (1.49/n)R^(2/3)S^(1/2)                   (12)
To determine flow-velocity pairs, equations
(11) and (12) must be solved a sufficient
number of times for accuracy in the statisti-
cal regression.  The independent variable,
flow, should be varied over the expected
range of values.  Widths, which are assumed
to be invariant over the range of flows,  and
the slopes must be estimated from detailed
maps or other sources.   If a measurement of
average velocity (or time of travel) has
been made for the reach, the value of n can
be found by iteratively computing equations
(11) and (12) and determining which value is
in the best agreement with the measured
velocity.  In the absence of velocity
measurements, an engineering estimate of n
must be made.

The flow-velocity pairs generated from
equations (11) and (12) are fitted to the
curve defined by equation (7) by statistical
regression techniques.   The resulting co-
efficients can be used for velocity predic-
tions in RIBAM simulations.  The method
offers the advantage of a simple velocity
prediction equation that is based on the
physical characteristics of the reach and
that requires limited or no observational
data on velocity.
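
The procedure can be sketched compactly in code; the channel width,
slope, and Manning n below are illustrative, and the wide-channel
starting value for Newton's method is a convenience added here rather
than part of the method as published.

import numpy as np

W, S, n = 80.0, 0.0004, 0.035                 # width (ft), slope, Manning coefficient
K = 1.49 * np.sqrt(S) / n                     # the constant K of equation (11)

def depth_for_flow(Q, iters=30):
    """Solve equation (11) for depth D by Newton's method, starting from
    the wide-channel estimate D = (Q/(K*W))**0.6."""
    D = (Q / (K * W)) ** 0.6
    for _ in range(iters):
        f  = -K**3 * W**5 * D**5 + 4*Q**3 * D**2 + 4*W*Q**3 * D + Q**3 * W**2
        df = -5*K**3 * W**5 * D**4 + 8*Q**3 * D + 4*W*Q**3
        D -= f / df
    return D

def velocity(D):
    """Equation (12), with the rectangular-channel hydraulic radius of (9)."""
    R = D * W / (W + 2.0 * D)
    return (1.49 / n) * R ** (2.0 / 3.0) * np.sqrt(S)

# Generate flow-velocity pairs over the expected range of flows ...
flows = np.array([100.0, 300.0, 600.0, 1000.0, 2000.0])      # cfs
vels = np.array([velocity(depth_for_flow(Q)) for Q in flows])

# ... and fit equation (7), V = a*Q^b, by regression in log space.
b, log_a = np.polyfit(np.log(flows), np.log(vels), 1)
a = np.exp(log_a)
print(f"V = {a:.3f} * Q^{b:.3f}")
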

SENSITIVITY ANALYSIS

Results of a sensitivity analysis demonstrate
the numerical significance of model param-
eters to the RIBAM predictions.  For example,
the percent change of a predicted value may
be compared with similar percent  changes in
model parameter values that affect  the pre-
diction.  RIBAM sensitivity analysis  results
for the Beaver River Basin are  reported in
reference 2.

MODEL APPLICATIONS

Upon calibration of RIBAM to  a  particular
basin, the model can be used  to predict
water quality for projected conditions.
RIBAM is a simple, effective  tool for deter-
mining waste load allocations for point
sources in a river basin.  The  model  can
estimate water quality profiles for varying
effluent loadings, which may  be due to
changing sewered population,  increased treat-
ment, or addition of a new discharge  facility.
The water quality can be simulated for dif-
ferent environmental conditions,  such as
stream flow and water temperature.  RIBAM
has been used by USEPA personnel  to predict
water quality for projected conditions in
the Mahoning River.

USER-COMPUTER INTERFACE

RIBAM, like other computer models,  requires
the user to learn data deck input formats
and model outputs.  The RIBAM input/output
is straightforward and fully documented.2
Figure 3 presents a typical printout  for
RIBAM water quality predictions.   The compu-
tation time and costs for RIBAM are rela-
tively low compared to water  quality  models
that require numerical solutions,  such as
finite difference techniques.
[Figure 3 printout: final summary for ammonia during the month of March,
with flows in cfs, listing the concentration at the head and end of each
segment.]
                                  Figure 3.  Typical RIBAM Final Summary
                                             Table for a Non-Conservative,
                                             Non-Coupled Parameter.

                                  CONCLUSIONS

                                  RIBAM is a useful tool for basin-wide water
                                  quality planning.  Predicted water quality
                                  profiles for different basin conditions can
                                  be analyzed to aid in determining the waste
                                  load conditions that are most suitable to
                                  the water quality objectives of the
                                              48

-------
basin.  Coefficients for the velocity
prediction equation can be determined by a
method that considers the physical charact-
eristics of the stream and requires minimal
observational data.
REFERENCES

 [1]  Texas Water Development Board.  DOSAG-1,
     Simulation of Water Quality in Streams
     and Canals, Program Documentation and
     User's Manual.  Report No. PB 202974.
     September 1970.

 [2]  Raytheon Company, Oceanographic &
      Environmental Services.  BEBAM - A
      Mathematical Model of Water Quality for
      the Beaver River Basin.  Four volumes.
      US Environmental Protection Agency.
      December 1973 - February 1974.

 [3]  Churchill, M.A., H.K. Elmore and
     R.A. Buckingham.  Prediction of Stream
     Reaeration Rates.  Proc. ASCE, Jour.
     San. Eng. Div.  SA4.  July 1962.

 [4]  Streeter, V.L., Fluid Mechanics.
     McGraw-Hill Book Company, Inc. 1962.
                                              49

-------
                                         COMPARISON OF EUTROPHICATION MODELS
                                                   By:  John  S. Tapp
                                           Environmental Protection Agency
                                             Technical Support Branch
                                                 Water Division
                                                    Region IV
                                                Atlanta, Georgia
      A complex mathematical model  for  simulating
 an aquatic  ecosystem was  compared  with less complex
 models of the type developed by Vollenweider to see
 if utilizing the  sophisticated mathematical approach
 adds  to the decision making ability  in comparison
 to the less complex models.  The reservoir used for
 comparison  was Lake Harding on the Chattahoochee
 River in Georgia  and Alabama.  Data  collected by
 the EPA National  Eutrophication Survey on 66 South-
 eastern water bodies were used to  test the Vollen-
 weider type models.  Results indicate  that for Lake
 Harding either approach would give comparable results
 in terms of the decision  to limit  point source
 phosphorus  to the reservoir.

                  Introduction

      The problem  of eutrophication has traditionally
 been  a difficult  one for  regulatory  agencies.  The
 complex interactions that occur in a reservoir or
 lake generally cannot be easily defined to the point
 where the impact of a point or nonpoint wastewater
 source discharge on a lake can be assessed with
 detailed accuracy.  Over the years, three basic
 approaches  have evolved for use by agencies in order
 to make decisions concerning limitations of nutrients,
 namely nitrogen and phosphorus, into aquatic systems.
 The three approaches are  (1) complex reservoir
 models which try  to simulate the complex inter-
 actions that occur within a water body; (2) a more
 simplistic  approach which relates  the  input of phos-
 phorus to a water body or the concentration of phos-
 phorus  in a water body with its physical properties;
 and (3)  a very simplistic mass balance approach.
 Realizing the limitations of the very  simplistic
 mass  balance, the two approaches in general use
 today are the complex reservoir model and the
 approach relating an input or in-lake concentration
 of phosphorus to  some physical characteristics of
 the water body (commonly called the Vollenweider
 approach).

      From a scientific standpoint, the best approach
 would  be the complex modeling approach which,  if
 carried  to  an extreme,   would attempt to represent
 accurately  the complex interactions that occur
 within  a lake or reservoir.   However, in practical
 terms,  the  ability to represent these complex
 interactions is limited because some interactions
 have not yet been identified and some that are
 known  cannot readily be measured.   Very extensive
 and expensive research and data collection programs
 could  attempt to accurately  represent all identified
 and measurable constituents  and interactions
 occurring in the complex ecologic  system.   However,
 the collection of this massive amount of data is
usually  infeasible within budgetary restrictions;
 therefore, a common approach used  is to define the
major  interactions and base  the model upon these
 interactions.   A minimum data collection program to
calibrate one of these complex models representing
only the major  interactions  is  still very expensive.
The question is  whether going  to a relatively
sophisticated mathematical approach really adds to
 the decision making  ability as  compared with the
 less complex Vollenweider  approach.   This paper
 attempts  to address  this question in relation to
 one reservoir  in  a Southeastern United States
 setting.

          Complex  Reservoir Model-EPAECO

     An example of a reservoir model currently in
 use today is one developed for EPA by Water Re-
 sources Engineers [1] and known by the acronym
 EPAECO.  The model was originally developed for the
 Office of Water Resources  Research (USDI)  and
 simulates  the  temporal  variation of  vertical water
 quality and biologic profiles over an annual cycle
 in response to meteorologic conditions, tributary
 conditions, and reservoir  releases.   In reality,  an
 aquatic ecosystem has a delicate and stable  balance
 of many different aquatic  organisms  and water
 quality constituents.   The reservoir ecologic model
 solves a  set of equations  which represent  only the
more significant  interactions of  the reservoir  biota
with water quality.  The reservoir ecologic model
EPAECO simulates the hydrodynamic, water quality,
and biological responses of reservoirs to tributary
inputs, environmental energy exchanges, and
reservoir releases.

              Vollenweider Approach

     The other basic type  of model in use  today is
a nutrient budget model for phosphorus  derived  by
Vollenweider [2].  As indicated in Figure 1, Vollen-
weider plotted phosphorus  loading  in grams per
 square meter per year versus the mean depth  divided
by the retention  time of a lake.   Vollenweider  then
empirically defined a basic loading  tolerance for
 the case where the mean depth divided by the reten-
tion time was much less than one.  Using the solu-
tion to the equation, the  loading  tolerance  lines
were projected throughout  the commonly  encountered
ranges of mean depth divided by retention  time.
The lower line was called  the permissible  limit and
the upper line was called  the dangerous  limit and
was defined as twice the permissible limit.   The
permissible limit was said  to separate  oligotrophic
 and mesotrophic lakes and  the dangerous limit was
 to separate mesotrophic and eutrophic lakes.
Vollenweider and Dillon [3] furthered the Vollenweider
 approach using the steady  state solution to  the
model.  If dangerous and permissible lines are
 drawn as  shown in Figure 2, the trends  represent
 equal predictive  phosphorus concentrations.   This
model indicates that the prediction  of  the trophic
 state of  the lake is based  on a measure of the
predictive phosphorus concentration  in  the lake
 rather than on the phosphorus loading and  is called
 the Dillon model.

      Larsen and Mercier [4] expressed Vollenweider's
mass balance model in terms of  concentration.   The
Larsen-Mercier curves relate the steady state lake
and mean input phosphorus  concentrations.  Larsen
and Mercier selected values of  10  and 20 micrograms
                                                      50

-------
 [Figures 1, 2, and 3 plot the National Eutrophication Survey data for the
 Southeastern lakes and reservoirs against the three screening models, with
 mesotrophic and eutrophic water bodies distinguished by symbol.]

  FIGURE 1.  The Vollenweider Model and Data from
             Southeastern Lakes and Reservoirs

  FIGURE 2.  The Dillon Model and Data from Southeastern
             Lakes and Reservoirs

  FIGURE 3.  The Larsen-Mercier Model and Data from
             Southeastern Lakes and Reservoirs
 of phosphorus per liter to delineate the oligotrophic,
 mesotrophic, and eutrophic states.  These values were
 selected based on studies in the literature suggesting
 that springtime concentrations of total phosphorus
 in excess of 20 µg/l were likely to produce average
 summer chlorophyll a concentrations of 10 µg/l or
 greater.  Larsen and Mercier's curves are shown in
 Figure 3.
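
     For readers who wish to apply these screening models, the steady-state
 relationships implied by the List of Symbols at the end of this paper can
 be sketched as follows; the exact formulations should be checked against
 references 2 through 4, and the function names are hypothetical.

```python
# Hedged sketch of the steady-state relationships behind the Dillon and
# Larsen-Mercier plots, using the symbols defined at the end of this paper.

def dillon_lake_p(L, R, z, tw):
    """Predicted in-lake P (µg/l) from areal loading L (g/m2/yr), retention
    coefficient R, mean depth z (m), and retention time tw (yr).
    L*(1-R)*tw/z has units g/m3 = mg/l, so multiply by 1000 for µg/l."""
    return 1000.0 * L * (1.0 - R) * tw / z

def larsen_mercier_lake_p(P_in, R):
    """Predicted in-lake P (µg/l) from mean influent P (µg/l) and R."""
    return P_in * (1.0 - R)

def trophic_state(lake_p_ugl):
    """10 and 20 µg/l delineate oligotrophic / mesotrophic / eutrophic."""
    if lake_p_ugl < 10.0:
        return "oligotrophic"
    return "mesotrophic" if lake_p_ugl <= 20.0 else "eutrophic"
```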

              Physical Setting-Lake Harding

       The reservoir used to compare the models
 described above was Lake Harding, also known as
 Bartlett's Ferry Reservoir, which is located on the
 Chattahoochee River about 120 river miles downstream
 from Atlanta, Georgia, and approximately 286 river
 miles upstream from the confluence of the Chatta-
 hoochee and Flint Rivers.  The general location is
                                                            shown in Figure 4.  This  reservoir was chosen because
                                                             Water Resources Engineers [5], under contract to EPA, had
                                                            calibrated the reservoir  ecologic model EPAECO on the
                                                            reservoir.

                                                                  From a eutrophication standpoint, Lake Harding
                                                            is very advanced, i.e., anaerobic conditions prevail
                                                            in the lower depths of  the  reservoir during the sum-
                                                            mer months, productivity  is high, and the water is
                                                            turbid.  Since the time EPAECO was calibrated on Lake
                                                            Harding, a new reservoir  immediately upstream has
                                                            been impounded.  Therefore, the projections utilized
                                                            in this paper are merely  to compare the results from
                                                            the various models and  essentially have no application
                                                            in terms of real effluent limitations to be established,
                                                            except where parallels  can  be drawn between Lake Harding
                                                            and the new upstream  reservoir.
                                                             [Figure 4 is a location map ("General Location of Lake Harding")
                                                             marking Lake Sidney Lanier, Franklin, West Point, and Lake Harding.]

                                                                     FIGURE 4.  The Location of Lake Harding
                                                          51

-------
             Results and Discussion

Vollenweider Type Models

     Gakstatter et al. [6] summarized the results of
the 3 Vollenweider type models in relation to data
collected by the National Eutrophication Survey on
23 water bodies most of which were located  in the
Northeastern and North Central United States.  The
study concluded that based on the trophic state
classification developed by the National Eutrophi-
cation Survey, the models developed by Dillon and
Larsen-Mercier fit the data much better than
Vollenweider's model.  The Vollenweider model was
probably less precise because unlike the Dillon and
Larsen-Mercier models it only considers total
phosphorus loading without regard to in-lake
processes which reduce the effective phosphorus
concentration.

Gakstatter and Allum [7] also compared the Vollenweider,
Dillon and Larsen-Mercier models with data collected
by the National Eutrophication Survey from 53 water
bodies in the states of Georgia, North Carolina,
South Carolina and Alabama.  The data of Gakstatter
and Allum for the 53 water bodies and additional
data from 13 other Southeastern water bodies also
collected by the National Eutrophication Survey in
the states of Kentucky, Florida and Mississippi are
shown in Figures 1, 2 and 3.  The trophic state
index used for these data was that developed by the
National Eutrophication Survey.   Figures  1,  2  and  3
indicate that for the 66 water bodies all three of
the Vollenweider type models generally fit the data.
The names of the water bodies and the hydraulic
retention time of each are shown in Table 1.
[TABLE 1.  Key to the lakes shown in the figures, giving each water body's
name, plotting number, and hydraulic retention time (HRT, years).]
To investigate the effect of hydraulic retention
time on the fit of the data for the 66 Southeastern
water bodies, those with hydraulic retention times
of less than or equal to 0.08 years (30 days) were
compared with the Vollenweider, Dillon, and Larsen-
Mercier models as shown in Figures 5, 6 and 7.  Those
with hydraulic retention times of greater than 0.08
years were compared with the 3 models as shown in
Figures 8, 9 and 10.  The results indicate that the
three models are generally applicable for water
bodies with both long and short mean hydraulic
retention times.

     Lake Harding was one of the water bodies sampled
by the EPA National Eutrophication Survey [8].  Charac-
teristics of the reservoir determined by the National
Eutrophication Survey are shown in Table 2.  Annual
phosphorus loads are shown in Table 3.  The survey
found that for the most part Lake Harding was phos-
phorus limited, although the most upstream station
in the lake which was nearer the relatively small
point  source  wastewater discharges tended to be
nitrogen limited.  Because the National Eutrophica-
tion Survey considered only point sources within
a 25-mile radius, a majority of the nonpoint phos-
phorus load in the Chattahoochee River as shown in
Table 3 is from wastewater treatment plants in the
Atlanta metropolitan area.

       Marlar and Herndon [9] have estimated the
Atlanta area  point source input of phosphorus to
the Chattahoochee  River as 1,006,992 kilograms
per year,  which indicates that Atlanta point
sources account for approximately 76 percent of
the total phosphorus load in the Chattahoochee
River  tributary and 72  percent of the total phos-
phorus input  to Lake Harding.   Nonpoint source
inputs to the Chattahoochee River tributary rep-
resent 22 percent  of the total phosphorus in the
river.  The yearly average phosphorus loading
to Lake Harding from the National
  FIGURE 5.  The Vollenweider Model and Data from
             Southeastern Lakes and Reservoirs with
             Hydraulic Retention Times Less than or
             Equal to 0.08 Years

  FIGURE 6.  The Dillon Model and Data from
             Southeastern Lakes and Reservoirs with
             Hydraulic Retention Times Less than or
             Equal to 0.08 Years
                                                      52

-------
  FIGURE 7.  The Larsen-Mercier Model and Data from
             Southeastern Lakes and Reservoirs with
             Hydraulic Retention Times Less than or
             Equal to 0.08 Years

  FIGURE 8.  The Vollenweider Model and Data from
             Southeastern Lakes and Reservoirs with
             Hydraulic Retention Times Greater than
             0.08 Years

  FIGURE 9.  The Dillon Model and Data from
             Southeastern Lakes and Reservoirs with
             Hydraulic Retention Times Greater than
             0.08 Years

  FIGURE 10. The Larsen-Mercier Model and Data from
             Southeastern Lakes and Reservoirs with
             Hydraulic Retention Times Greater than
             0.08 Years
 TABLE 2.  CHARACTERISTICS OF LAKE HARDING

 MORPHOMETRY

   Surface Area:  23.67 square kilometers
   Mean Depth:  9.4 meters
   Maximum Depth:  33.8 meters
   Volume:  222.98 x 10^6 cubic meters
   Mean Hydraulic Retention Time:  14 days

   TRIBUTARY              DRAINAGE AREA (km2)    MEAN FLOW (m3/sec)
   Chattahoochee River          9,479.1               165.2
   All Others                   1,478.9                20.6
   TOTAL                       10,958.1               185.8

   MEAN OUTLET FLOW                                   185.8
Eutrophication Survey data was 58.74 grams/m2/year
based on total input.  The calculated allowable
loadings in grams/m2/year to maintain dangerous and
permissible in-lake concentrations based on the
models of Vollenweider, Dillon, and Larsen-Mercier
are shown in Table 4.

      If 90 percent  of the Atlanta  point source con-
tribution of phosphorus to Lake Harding were removed,
the  loading rate would be reduced  to  20.45 grams/
m2/year, which is about twice  the  dangerous loading
indicated by the Larsen-Mercier and Dillon models
and  about 3 times that indicated by the Vollenweider
model.   If 99  percent  of  all of the Atlanta point
source phosphorus to Lake Harding were removed, the
loading rate would be  15.36 grams/m2/year which is
                                                        53

-------
beginning to approach the dangerous level of the
Dillon and Larsen-Mercier models, but is still
about 2.5 times that of the Vollenweider model.  A
cursory attempt was made to analyze other South-
eastern water bodies sampled by the National
Eutrophication Survey with phosphorus loadings in
the 15 grams/m2/year range and with physical charac-
teristics similar to Lake Harding to see if this
loading indicated an impairment of use of the water
body.  However, the analysis failed to reveal
sufficient data upon which to base a conclusion.
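
The loading rates quoted above can be reproduced from the surface area in
Table 2 and the annual loads in Table 3 and reference 9; the short check
below is illustrative only, and the variable names are hypothetical.

```python
# Illustrative check of the areal loading figures quoted above.

AREA_M2 = 23.67e6             # Lake Harding surface area, m^2 (Table 2)
TOTAL_LOAD_KG = 1_390_390     # total annual P input, kg/yr (Table 3)
ATLANTA_POINT_KG = 1_006_992  # Atlanta point-source P, kg/yr (reference 9)

def areal_loading(load_kg_per_yr):
    """Convert an annual load in kg/yr to an areal loading in g/m2/yr."""
    return load_kg_per_yr * 1000.0 / AREA_M2

print(areal_loading(TOTAL_LOAD_KG))                            # ~58.7 g/m2/yr
print(areal_loading(TOTAL_LOAD_KG - 0.90 * ATLANTA_POINT_KG))  # ~20.5 g/m2/yr
```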
 TABLE 3.  AVERAGE ANNUAL TOTAL PHOSPHORUS LOADING TO LAKE HARDING

   INPUTS                                     Kg P/yr     % of Total
   Chattahoochee River (nonpoint)             1,318,550      94.8
   Other major tributaries (nonpoint)            36,430       2.6
   Minor tributaries and immediate
     drainage (nonpoint)                          3,480       0.3
   Municipal STP's (point)                       31,190       2.3
   Industrial                                   Unknown
   Septic tanks                                     325        0.1
   Direct precipitation                             415        0.1
   TOTAL                                      1,390,390

   OUTPUTS
   Lake Outlet                                  618,575
   NET ACCUMULATION                             741,815
 TABLE 4.  CALCULATED LAKE HARDING PHOSPHORUS LOADINGS (gr/m2/year)
           Present Loading = 58.74 gr/m2/year

                      VOLLENWEIDER     DILLON     LARSEN-MERCIER
      Permissible          3.1           5.5            5.2
      Dangerous            6.2          11.0           10.4
EPAECO

     The reservoir model EPAECO utilizes concentra-
tions of water quality constituents in the tribu-
taries as input to the model.  For the Lake Harding
calibration, the only tributary considered was the
Chattahoochee River because this was the only tribu-
tary on which any water quality data were available
during the simulation period (July through Decem-
ber, 1973).  Also, as shown in Table 3, the
Chattahoochee River provides the majority of the
phosphorus input to Lake Harding.  The tributary
water quality data available on the Chattahoochee
River consisted of one sample per month for the six
month study period.  Daily tributary input concen-
trations to EPAECO were obtained by linear inter-
polation of the monthly data.  Tributary concen-
trations of water quality constituents that were
not measured during the monthly sampling were
estimated based on the past experience of Water
Resources Engineers.

     In order that EPAECO simulate conditions with
various phosphorus removal rates from point sources
in Atlanta, a routine was written into the model to
reduce the daily tributary concentrations to reflect
point source removals.  This was accomplished by
 taking the daily tributary concentration of phosphorus
 and the daily flow and converting them to pounds of
 phosphorus.   The pounds of phosphorus assumed to
 be removed by 90% removal from the Atlanta point
 sources (based on yearly average loadings converted
 to daily average loadings) were subtracted from the
 instream pounds and  the resulting number was
 converted back to concentration for input to the
 reservoir.  On several days this calculation left
 no phosphorus for input to the reservoir.  However,
 this  was not believed significant due to the nature
 of the  various estimates such as estimated yearly
 average point source loadings and interpolated
 daily instream concentrations.
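
 A minimal sketch of the kind of adjustment described above is given below;
 it is not the routine that was added to EPAECO, and the flow units, unit
 conversion constants, and names are illustrative assumptions.

```python
# Illustrative sketch of the tributary-concentration adjustment described
# above; this is not the actual EPAECO routine, and names are hypothetical.

LB_PER_MG = 2.2046e-6                    # pounds per milligram

def adjusted_concentration(conc_mg_per_l, flow_m3_per_s, removed_lb_per_day):
    """Convert the instream load to pounds/day, subtract the point-source
    pounds assumed removed, and convert back to mg/l (floored at zero)."""
    litres_per_day = flow_m3_per_s * 86400.0 * 1000.0
    instream_lb = conc_mg_per_l * litres_per_day * LB_PER_MG
    remaining_lb = max(instream_lb - removed_lb_per_day, 0.0)
    return remaining_lb / (litres_per_day * LB_PER_MG)
```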

      The model is designed to simulate the reservoir
 for a one year period.   The Lake Harding simulation
 included only day 202 (July 21)  through day 365
 (December 31).   Accordingly,  the same period was
 used  in this study.   As noted earlier,  the average
 hydraulic retention  time was  approximately 20 days
 during  the simulation period.   Therefore, for the
 period  of study the  lake water theoretically
 exchanged about eight times,  which should be enough
 to wash out  initial  conditions and allow the impact
 of the  90 percent point source phosphorus removal
 to be assessed.

      The temperature stratification in the  reservoir
 as calculated  by EPAECO is shown in Figure 11 and the
 corresponding  dissolved oxygen stratification is
 shown in Figure 12.   Rather pronounced  temperature
 stratification existed  throughout  much  of the
 simulation period.  Supersaturated surface dissolved
 oxygen  concentrations  and  very low bottom dissolved
 oxygen  concentrations were also  exhibited throughout
 the majority of  the  simulation period.   The  simula-
 tion  denoted by "1973  conditions"  represents  the
 best  calibration of  the model  for  the study  period
 and is  used  as  the base for evaluating  conditions
 with  phosphorus  removal.
  [Figure 11 plots surface and bottom water temperature versus time in days.]

   FIGURE 11.  Temperature Stratification in Lake
               Harding as Calculated by the Reservoir
               Ecologic Model EPAECO
     The changes in surface dissolved  oxygen  con-
centrations between 1973 conditions  and  the same
conditions with 90 percent point source  phosphorus
removal are shown in Figure 13.  Throughout the
simulation period the dissolved oxygen concentration
at the surface remained essentially the same after
removal of point source phosphorus.  The same was
true for dissolved oxygen concentrations at the
bottom of the reservoir.  The orthophosphorus (as P)
                                                       54

-------
      [Figure 12 plots surface and bottom dissolved oxygen versus time in days.]

       FIGURE 12.  Dissolved Oxygen Stratification in
                   Lake Harding as Calculated by the
                   Reservoir Ecologic Model EPAECO


concentrations at the surface are shown in Figure 14
and the corresponding concentrations  of green  and
blue-green algae at the surface are shown in
Figures 15 and 16, respectively.  As  would be
expected, lowering the orthophosphorus concentration
entering the reservoir resulted in an eventual lowering of
surface orthophosphorus concentrations.  The same
trend was  noted for orthophosphorus concentrations
at the reservoir bottom.  Phosphorus  removal reduced
the peaks  in surface green algae concentration which
would indicate lowering of concentrations during
periods of algal blooms.  The area  differential
under the  two curves in Figure 15 indicates a  20
percent reduction in green algal biomass after point
source phosphorus removal.  Peaks in  surface blue-
green algae concentrations are also reduced, indicating
lower bloom concentrations and a 35 percent
reduction in biomass after point source phos-
phorus removal.   The zooplankton concentrations at
the surface were the same before and  after point
source phosphorus removal.
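
The "area differential" comparison mentioned above amounts to integrating
the two concentration-time curves and taking the relative difference of the
areas; a sketch of that calculation, with hypothetical array names, follows.

```python
import numpy as np

def percent_biomass_reduction(t_days, conc_base, conc_removal):
    """t_days, conc_base, conc_removal are equal-length arrays read from a
    plot such as Figure 15; returns the percent reduction in area."""
    area_base = np.trapz(conc_base, t_days)
    area_removal = np.trapz(conc_removal, t_days)
    return 100.0 * (area_base - area_removal) / area_base
```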

     Figure 17 displays the total weight of the 3
types of fish on an areal basis throughout the
simulation period and indicates that  phosphorus
removal would reduce the total weight of fish in
the reservoir.  The decline in total  fish was  due
to a decline in the warmwater fish  which cannot be
readily explained based on model output.  Also shown
is a large growth of fish from day  250 to day  310
which is probably of the correct magnitude but not
at the proper time.
   [Figure 13 plots surface dissolved oxygen versus time in days for the
   1973 conditions and for 90 percent point source phosphorus removal.]

   FIGURE 13.  The Dissolved Oxygen Concentrations
   Before and After Point Source Phosphorus
   Removal as Calculated by EPAECO for Lake Harding
                                                       55
                                          FIGURE 17.  The Combined Weight of the Three
                                          Classes of Fish Before and After Point Source
                                          Phosphorus Removal as Calculated by EPAECO
                                          for Lake Harding

-------
                   Conclusions
     Based on the data collected by the EPA National
Eutrophication Survey on 66 water bodies in the
Southeastern United States, the Vollenweider model,
the Dillon model, and the Larsen-Mercier model all
have some merit when examining eutrophication
problems.  Since the Vollenweider model considers
total phosphorus input to a water body and does not
account for phosphorus in the outflow from the water
body, it is the most conservative of the three for
establishing load restrictions to a water body.  The
Vollenweider model should therefore be used as a
first cut analysis in the absence of data.  Where
data exist to establish a phosphorus retention
coefficient, the Larsen-Mercier and Dillon models
should be used as a first cut to establish load
restrictions to a water body.

     The EPAECO model simulations indicate that with
90 percent point source phosphorus removal, yearly
green and blue-green algal biomass would decrease
by 20 percent and 35 percent, respectively.  This
should result in some improvement in the water
quality in the reservoir even though the simulations
showed no differences in dissolved oxygen before
and after phosphorus removal.  The results from
using the Vollenweider, Dillon, and Larsen-Mercier
models indicate that even with point source phos-
phorus removal the reservoir would still remain
eutrophic.  However, a closer examination indicates
that 99 percent point source removal would reduce
the loading rate to the same range as the dangerous
rate calculated by the Dillon and Larsen-Mercier
models.  Therefore, intuitively, some improvement
should result over a long term.

     To make the Vollenweider type approach truly
applicable to Southeastern water bodies, further
work needs to be done to relate the trophic state
of a given water body to an actual present or future
impairment of water use in the water body.   The
trophic state classification presently used by the
National Eutrophication Survey weighs chemical
parameters and turbidity heavily.  In Southeastern
water bodies, these parameters are greatly influenced
by the interactions with the clay based soils
typical in the Southeast and may not give a true
indication of the actual trophic state.

     In terms of Lake Harding, either approach,
EPAECO or the Dillon and Larsen-Mercier models,
supports the conclusion that control of upstream
point sources of phosphorus would be of benefit
to the overall water quality in the lake.   The
conclusion is valid for the specific case examined,
i.e.,  a relatively small lake with an extremely high
phosphorus loading and a relatively small hydraulic
retention time.   Other lakes where sophisticated
lake models have been constructed should be
similarly  tested to see if this conclusion is
valid  for the majority of impounded reservoir
situations encountered in the United States.
                    References

1.  Water Resources Engineers.  "Computer Program
    Documentation for the Reservoir Model EPAECO,"
    U. S. Environmental Protection Agency, Wash-
    ington, D. C., February, 1975.

2.  Vollenweider, R.A., Input-Output Models.
    Schweiz. Z. Hydrol. (In Press).

3.  Vollenweider, R.A., and Dillon,  P.J., "The
    Application of the Phosphorus Loading Concept
    to Eutrophication Research." Prepared  for the
    Associate Committee on Scientific Criteria
    for Environmental Quality.   Burlington,
    Ontario, June, 1974.

4.  Larsen, D.P., and Mercier, H.T.,  "Lake
    Phosphorus Loading Graphs:   An Alternative."
    U. S. Environmental Protection Agency National
    Eutrophication Survey Working Paper No. 174,
    July, 1975.

5.  Water Resources Engineers.   "Simulation of
    Measured Water Quality and Ecologic Responses  of
    Bartletts Ferry Reservoir Using  the Reservoir
    Ecologic Model EPAECO."  U.  S. Environmental
    Protection Agency, Washington, D.C.,  March, 1975.

6.  Gakstatter, J.H.,  et al.,"Lake Eutrophication:
    Results from the National Eutrophication Survey."
    Presented at the 26th Annual AIBS Meeting, Oregon
    State University,  Corvallis, Oregon, August
    17-22, 1975.

7.  Gakstatter, J.H.,  and Allum, M.O., Data Pre-
    sented at the EPA-Region IV Seminar on
    Eutrophication,  Atlanta,  Georgia, December 3,
    1975.

8.  EPA National Eutrophication Survey.  "Revised
    Preliminary Report on Lake Harding, Georgia."
    National Environmental Research Center, Las
    Vegas, Nevada,  July,  1975.

9.  Marlar,  J.T.  and Herndon,  A.B., "Evaluation
    of Available Data:  Lake Jackson and West Point
    Reservoir."  Internal EPA-Region IV Working
    Paper, August 20,  1974.
                    List of Symbols

 z = mean depth (meters)

 T_w = hydraulic retention time (years)

 L = phosphorus loading (grams per square meter
     per year)

 R or R_exp = phosphorus retention coefficient
     (fraction retained)

 ρ = hydraulic washout coefficient (1/T_w, years^-1)

 [P] = mean influent phosphorus concentration
       (micrograms per liter)
                                                       56

-------
                                 MANAGEMENT IN COMPETITIVE ECOLOGICAL SYSTEMS
                                             Donald R. Falkenburg
                                            School of Engineering
                                              Oakland University
                                          Rochester, Michigan  48063
                     1.  Abstract
A competitive ecology is an essentially unbalanced
system in which a weaker species is driven out of ex-
istence by a stronger competitor.  Management policies
which include selective harvest and replenishment al-
ter the equilibrium states of such a system so that
both the stronger and weaker members of the competi-
tive system can coexist.

             2.  The Competition Equation
     The Volterra competition model describes the
growth dynamics of several species competing for the
same environmental resource.  Volterra  reasoned that
if the depletion of the resource increases linearly
with population size, which in turn reduces the growth
rate, and if each species has a different efficiency
for utilizing the resource, then the growth equations
have the following form:

     dx_i/dt = x_i ( ε_i - γ_i Σ_j α_j x_j ),    i = 1, 2, ..., N        (1)
 Although this quadratic interaction model may be sim-
 plistic, it does possess sufficient "richness" to
 predict the replacement of a "weaker" species by a
 "stronger" competitor, the so-called competitive ex-
 clusion principle of Gause [2].  Volterra's competition
 model is one of a class of ecosystem models which
 describe the interaction among several species.
 Scudo [3] presents an excellent summary of these models
 along with a well selected list of references to Vol-
 terra's work.
     In order to understand the assumptions which un-
 derlie the Volterra and related competition models
 let us examine a situation in which two species are in
 a competitive environment.  Let N_1 and N_2 be the sizes
 of the two populations which compete for a food supply
 F.  The assumption that the growth of N_1 and N_2 is
 independent of the age structure of those populations
 greatly simplifies the model.  It
 is important to realize that the preceding statement
 does not imply age structure is absent from the popu-
 lation, but rather that the population has attained a
 stable age configuration [4]; under such conditions the
 population can be modeled by an ordinary differential
 equation.  The second important assumption is that the
 model is fundamentally deterministic.   We know that
 the dynamic model of a population system reflects an
 aggregate  interaction among individual members of the
 population, an interaction which arises from random
 encounters.  If there are very few individuals then we
 must model such a system from a probabilistic perspec-
 tive.   If, on the other hand,  there are many indivi-
 duals  within the population,  it is often reasonable to
 construct  a macroscopic model - a model which we use
 to project the size  of a population and not the prob-
 ability density function for  the population size.   The
 assumption that the  model is  macroscopic does not pre-
 clude  that such a model may itself be  subject to ran-
 dom disturbances.
    The third and perhaps the most restrictive as-
 sumption is that the competing species which are being
modeled are isolated.  Simply stated, it is not likely
that one can ever observe a pure two-species interac-
tion in nature;  the  complex ecological webs which ex-
 ist  attest to  this.   In  fact,  the  growth of a single
 species may be affected  by hundreds  of other species.
 One  way of dealing with  this situation is to add dis-
 turbance  inputs to the describing  equations in an at-
 tempt to  include the  influence of  species which are  not
 explicitly represented in  the  model.
     The competition model involves three species:
 the two competitors and the resource (in this case,
 a food base) for which they compete.  A dynamic model
 for such a three-species system is given in equation
 (2).

     dN_1/dt = -k_1 N_1 + q_1 F N_1

     dN_2/dt = -k_2 N_2 + q_2 F N_2                            (2)

     dF/dt  =  k_F F - p F^2 - α_1 F N_1 - α_2 F N_2
The last of these equations describes the dynamics of
the food base.  The coefficient of growth k_ for the

food base is assumed to be positive.  The second term
on the right hand side of this equation accounts for
the reduction in the population growth due to the pres-
sure of increasing population size.  The remaining two
terms account for the consumption of the resource by
the two species.  Here it is assumed that the per capi-
ta consumption increases linearly with the available
food.  In the first two equations, consumption of the
food base leads to a linear increase in the per capita
growth rate (this may occur through the reduction of
the mortality rate, an elevation in the reproductive
capacity, or through a combination of both).  Now, it is
not necessary to assume that the relationships des-
cribed above are linear; in fact within the last sec-
tion of this paper an analysis is presented which is
applicable to a generalized population interaction mo-
del.  Finally, if one assumes that the characteristic
response time of the food base is much smaller than the
response times for the competing species, i.e.,
k_F >> k_1, k_2, then the dynamic equation representing the
growth of the food base can be replaced by the quasi-
static equation

     k_F - p F - α_1 N_1 - α_2 N_2 = 0                         (3)
Using equation (3) to eliminate F from the first two
equations in (2) yields the Volterra competition model
     dN_1/dt = N_1 [ ε_1 - γ_1 (α_1 N_1 + α_2 N_2) ]
                                                               (4)
     dN_2/dt = N_2 [ ε_2 - γ_2 (α_1 N_1 + α_2 N_2) ]

        where  γ_i = q_i/p  and  ε_i = γ_i k_F - k_i > 0.
      3.  Management with Proportionate Harvest
     The Volterra competition model presented in equa-
tion (4) contains six parameters.  It is easy to show,
                                                       57

-------
however, that the essential character of the response
depends only upon the single non-dimensional parameter
r = λ_1/λ_2, where λ_i = ε_i/γ_i.  There are three singular
points or equilibrium states that the system can at-
tain.  If the equilibrium levels of population are de-
fined as Q_1 and Q_2, these states are given by:

     (a)  Q_1 = 0,                 Q_2 = 0

     (b)  Q_1 = 0,                 Q_2 = ε_2/(α_2 γ_2)         (5)

     (c)  Q_1 = ε_1/(α_1 γ_1),     Q_2 = 0

By normalizing population size, X = N_1/Q_1 and Y = N_2/Q_2,
and scaling time, τ = ε_1 t, we obtain the following equ-
ations.

     dX/dτ = X [1 - X - Y/r]
                                                               (6)
     q dY/dτ = Y [1 - rX - Y]

      where  q = ε_1/ε_2.
The equilibrium points associated with (6) are of
course (X° = 0, Y° = 0), (X° = 0, Y° = 1), and (X° = 1, Y° = 0).
In order to examine the nature of these equilibria, we
assume that X and Y experience small excursions x and
y from their respective equilibrium levels.  The line-
arized differential equations which arise from this
excursion analysis are presented in equation (7) below.

     dz/dτ = A z,     z = [x, y]^T                             (7)

where

     A =  |  1 - 2X° - Y°/r             -X°/r          |
          |     -rY°/q           (1 - 2Y° - rX°)/q     |

Now, the roots of the characteristic equation
det[sI - A] = 0 determine the nature of the singular
points.  These roots are computed for each equilibrium
state (X°, Y°) and are summarized in Table I.

     Equilibrium        Roots of the                   Nature of the
     point              characteristic equation        equilibrium point

     X° = 0, Y° = 0     s_1 = 1,    s_2 = 1/q          unstable node

     X° = 0, Y° = 1     s_1 = -1/q, s_2 = (r-1)/r      stable node (r < 1)
                                                       saddle point (r > 1)

     X° = 1, Y° = 0     s_1 = -1,   s_2 = (1-r)/q      stable node (r > 1)
                                                       saddle point (r < 1)

              Table I.  Characterization of
                        Equilibrium Points

These results demonstrate that the species with the
largest value of λ_i = ε_i/γ_i will attain a stable equi-
librium configuration at a finite, non-zero population
level, while the weaker competitor is driven to extinc-
tion at the stable equilibrium configuration.  This ex-
cursion variable analysis is borne out globally in the
phase portrait for the non-linear competition equations
(see figure 1).  Here it becomes evident that the
stronger member drives out or replaces the weaker com-
petitor.
      In the interest of preserving a balance between
the two species, allowing each to coexist with the oth-
er, one might choose a management policy in which har-
vest of the stronger species and/or replenishment of
the weaker competitor is instituted; the intent here
being to remove the competitive advantage of the
stronger member of the system.  Suppose that spe-
cies Y is the stronger competitor and that the rates of
harvest and replenishment are given by H and R respec-
tively.  Defining normalized, time-scaled replenishment
and harvest rates U and V (where the α_i and γ_i are the
coefficients appearing in (4)), the competition equa-
tions become:

     dX/dτ = X [1 - X - Y/r] + U
                                                               (8)
     q dY/dτ = Y [1 - rX - Y] - V
                                                         Let us consider first the harvest  strategy.   Our  first
                                                         thought  is to  institute a program  of proportionate har-
                                                         vesting, that  is, a program in which a specified  frac-
                                                         tion of Y is removed.  This is, perhaps, the  simplest
                                                         approach to take, since if a constant effort  is made
                                                         to harvest Y,  then the yield will  increase with larger
                                                         numbers of species Y producing an  approximately pro-
                                                         portionate harvest.  This is contrasted with  the  con-
                                                         cept of an absolute harvest in which M members must be
                                                         selected.  In  the absolute harvest the effort is  ad-
                                                         justed to target the desired yield.  In the propor-
tionate harvest, then, V = fY and U = 0.  The effect
                                                         of such a strategy is to shift one of the singular
                                                         points of the  differential equation producing the fol-
                                                         lowing equilibrium configuration.
     Equilibrium          Roots of the                 Nature of the
     point                characteristic equation      equilibrium point

     X° = 0, Y° = 0       s_1 = 1,  s_2 = (1-f)/q      unstable node

     X° = 0, Y° = 1-f     s_1 = -(1-f)/q               stable node (r+f < 1)
                          s_2 = (r+f-1)/r              saddle point (r+f > 1)

     X° = 1, Y° = 0       s_1 = -1                     saddle point (r+f < 1)
                          s_2 = (1-r-f)/q              stable node (r+f > 1)

              Table II.  The Effect of Proportionate
                         Harvesting on Equilibria (f < 1)


                                                          If  the harvest fraction is  sufficiently  large,  the orig-
                                                          inally weaker species will  dominate the  ecology.   If
                                                          this management  policy is continued indefinitely, the
                                                          harvested species, once the dominant competitor,  will
                                                          be  driven toward extinction.   If one desires a  scheme
                                                          whereby both species can co-exist,  it is possible to
                                                          institute a program in which a period of harvest  is
                                                       58

-------
followed by a period of "natural growth".  This strat-
egy would render alternating advantages to the two
species producing (if properly implemented) a limit cy-
cle or closed periodic solution in the phase plane.
Since neither species is permitted to dominate the
ecology for a sufficiently long period of time, both
can coexist.
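
The behavior described in this section and summarized in Tables I and II
can be checked numerically; the sketch below integrates the normalized
equations (6) with an optional proportionate harvest of the stronger
competitor Y.  It is illustrative only, with arbitrary parameter values and
hypothetical function names.

```python
# Illustrative sketch (not from the paper): forward-Euler integration of the
# normalized equations (6), with the proportionate harvest V = f*Y of
# section 3 as an option.  X is the weaker competitor (r < 1), as in figure 1.

def simulate(r=0.8, q=1.0, f=0.0, x0=0.5, y0=0.5, dt=0.01, t_end=200.0):
    """Return the final (X, Y) of dX/dt = X(1 - X - Y/r),
    q dY/dt = Y(1 - rX - Y) - f*Y."""
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        dx = x * (1.0 - x - y / r)
        dy = (y * (1.0 - r * x - y) - f * y) / q
        x = max(x + dt * dx, 0.0)
        y = max(y + dt * dy, 0.0)
    return x, y

print(simulate(f=0.0))   # no harvest: Y (stronger) excludes X, ends near (0, 1)
print(simulate(f=0.5))   # r + f > 1: sufficient harvest of Y, ends near (1, 0)
```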

              4.  Management with Absolute
                  Harvest and Replacement

     An alternate procedure for controlling the growth
in this ecological system is to impose an absolute
harvest.  Contrasted to the proportionate policy, the
absolute harvest involves the setting of a target har-
vest level; the harvesting of the stronger species is
continued until this target is achieved.  In a similar
fashion we could consider a constant replenishment pro-
gram for the weaker species.  If U and V are the con-
stant levels of replacement and harvest, the equations
which describe the growth within the competitive sys-
tem are
     dX/dτ = f(X, Y, U)
                                                               (9)
     q dY/dτ = g(X, Y, V)

 where

     f(X, Y, U) = X [1 - X - Y/r] + U

     g(X, Y, V) = Y [1 - rX - Y] - V

 Setting f and g to zero yields a pair of simultaneous
 equations in X and Y; for any level of harvest V and
 replacement U these equations can be solved to obtain
 the  equilibrium points.  Graphically, one can inter-
 pret f=0 and g=0 as families of curves parameterized
 in U and V respectively.  These curves are illustrated
 in figure 2.  The intersection of any f=0 curve with
 another g=0 curve yields these equilibria.  A few equi-
 librium points are illustrated in the figure.
      Once the equilibrium point is established, we must
 determine the character of the equilibrium point - does
 it represent a stable or an unstable configuration?
 We proceed to obtain the characteristic equation from
 the linearized form of the describing equations

     dx/dτ = (∂f/∂X) x + (∂f/∂Y) y
                                                               (10)
     q dy/dτ = (∂g/∂X) x + (∂g/∂Y) y

 obtaining

     det[sI - A] = s^2 + a s + b = 0                           (11)
 where

     a = -[ ∂f/∂X + (1/q) ∂g/∂Y ]

     b = (1/q) [ (∂f/∂X)(∂g/∂Y) - (∂f/∂Y)(∂g/∂X) ]

 Now S_g and S_f are the slopes of the curves g=0 and f=0 at
 the equilibrium point.  Consider the illustration in
 figure 3.  At equilibrium point I the geometry of the
 curves is such that the partial derivatives of both f
 and g with respect to each coordinate are negative,
 thus a > 0, and since S_g > S_f the coefficient b > 0.
 The nature of the singular point depends upon the co-
 efficients a and b; this dependence is illustrated in
 figure 4.  We know immediately that the equilibrium
 point must be stable; if in addition it is possible
 to demonstrate that a^2 > 4b then we would know that
 the equilibrium configuration would be a stable node.
 Using the fact that S_f = -(∂f/∂X)/(∂f/∂Y) and
 S_g = -(∂g/∂X)/(∂g/∂Y), we can write this inequality as

     [ ∂f/∂X - (1/q) ∂g/∂Y ]^2  >  -(4/q) (∂f/∂Y)(∂g/∂X)       (12)

 Since ∂f/∂Y and ∂g/∂X have the same sign, a^2 > 4b, and
 from figure 4 we see that the singular point is a
 stable node.  Similar reasoning leads to the conclu-
 sion that for equilibrium point II, b < 0.  Again fig-
 ure 4 can be used to establish this to be a saddle
 point.  Both species can coexist at the stable equili-
 brium point; the motion of this system toward the
 equilibrium point is illustrated in the phase portrait
 of figure 5.
      It is interesting to consider the special case in
 which the replacement rate U = 0 while the harvest
 rate V > 0.  Under these conditions no stable
 equilibrium point exists such that both X > 0 and
 Y > 0.  Thus an absolute harvest policy with no re-
 placement will not achieve our stated objectives, fail-
 ing to produce a balance within the system (see figure
 6).  The complement of this strategy, in which replace-
 ment is instituted without harvest, does produce a de-
 sired stable equilibrium point; this conclusion can
 be reached using the geometric arguments presented
 above.
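
 The equilibrium calculation and classification described in this section
 can also be sketched numerically, as below.  This is an illustration only
 and not part of the original analysis; the damped Newton scheme, parameter
 values, and names are assumptions.  It solves f = 0 and g = 0 for given U
 and V and then evaluates the coefficients a and b of equation (11).

```python
import numpy as np

def equilibrium_and_nature(r, q, U, V, guess=(0.4, 0.6)):
    """Find an equilibrium of equations (9) and classify it via a and b."""
    f = lambda X, Y: X * (1.0 - X - Y / r) + U
    g = lambda X, Y: Y * (1.0 - r * X - Y) - V

    X, Y = guess
    for _ in range(100):                      # damped Newton on (f, g) = (0, 0)
        J = np.array([[1.0 - 2.0 * X - Y / r, -X / r],
                      [-r * Y, 1.0 - 2.0 * Y - r * X]])
        step = np.linalg.solve(J, -np.array([f(X, Y), g(X, Y)]))
        X, Y = X + 0.5 * step[0], Y + 0.5 * step[1]

    # a and b from equation (11), with the q scaling of the second equation
    fX, fY = 1.0 - 2.0 * X - Y / r, -X / r
    gX, gY = -r * Y, 1.0 - 2.0 * Y - r * X
    a = -(fX + gY / q)
    b = (fX * gY - fY * gX) / q
    if b < 0.0:
        nature = "saddle point"
    else:
        nature = "stable node" if (a > 0.0 and a * a > 4.0 * b) else "other"
    return (X, Y), nature

print(equilibrium_and_nature(r=0.8, q=1.0, U=0.05, V=0.05))
# -> a coexistence equilibrium, roughly (0.32, 0.67), classified as a stable node
```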

                    5.  Summary

     In the unmanaged Volterra competition system, the
stronger species will replace the weaker competitor.
Proportionate harvest alters the balance within the
ecology, and can effect a competitive advantage to the
originally weaker species.  A continuous application
of such a management strategy merely shifts the com-
petitive advantage; again the system is driven to-
ward dominance by a single member.  Absolute harvest
alone does not produce a stable equilibrium configura-
tion in which both species can coexist.  A combined
policy which implements both an absolute harvest of
the stronger member and the replacement of the weaker
competitor can induce a balance within the system.
     The design of such a strategy is based upon the
control curves presented in figure 2.  Although these
curves were developed for the Volterra competition
equations, they can be modified if any of the as-
sumptions of linearity presented in the model develop-
ment are deemed inappropriate.  Levels of harvest and
replacement are then selected such that X > X_min and
Y > Y_min at the equilibrium.
                                                                   6.  References

1.  Volterra, V., "Variazioni e fluttuazioni del numero d'individui in specie animali
    conviventi," R. Comit. Talass. Italiano, Memoria 131 (Venezia, 1927).
2.  Gause, G. F., The Struggle for Existence, Williams and Wilkins (Baltimore, Md., 1934).
3.  Scudo, F. H., "Vito Volterra and Theoretical Ecology," Theoretical Population Biology,
    Vol. 2 (1971).
4.  Lotka, A. J., "The Stability of the Normal Age Distribution," Proc. Nat. Acad. Sciences,
    Vol. 8 (1922).
5.  Minorsky, N., Nonlinear Oscillations, Krieger (Huntington, N.Y., 1974).
                                                        59

-------
            Figure 1.  Unmanaged Competition

            Figure 2.  Control Curves for Selecting Harvest and Replacement Rates

            Figure 3.  Representative Control Curves

            Figure 4.  Nature of Roots of Characteristic Equation
                       (regions labeled: stable node, stable focus, center, unstable focus)

            [Figures 1-3 are plotted against X (weaker competitor).]
                                                          60

-------
            Figure 5.  Competition with both Harvest and Replenishment

            Figure 6.  Competition with Harvest Management

            Figure 7.  Competition with Replacement Management

            [Figures 5-7 are plotted against X (weaker competitor).]
                                                    61

-------
                 PLANNING  IMPLICATIONS OF DISSOLVED OXYGEN DEPLETION  IN THE WILLAMETTE RIVER,  OREGON

                           David A. Rickert, Walter G. Hines, and  Stuart W. McKenzie
                                         U.S.  Geological Survey
                                            Portland, Oregon
                    ABSTRACT

Basinwide secondary treatment of municipal and indus-
trial wastewaters has resulted in a dramatic increase
of summertime dissolved-oxygen (DO) concentrations in
the Willamette River.  Rates of carbonaceous decay
(k1) are very low (0.03 to 0.06/day), and point-source
BOD loading now accounts for less than one-third of the
satisfied oxygen demand.  Nitrification is now the
dominant DO sink.  DO concentrations met the state
standards in all reaches of the Willamette during the
low-flow period of 1974.  Mathematical modeling shows
that low-flow augmentation from storage reservoirs was
largely responsible for the standards being met.
Future achievement of DO standards will require con-
tinued low-flow augmentation in addition to pollution
control.  Summertime flows above 6000 ft3/s will be
needed even with increased treatment removals of oxygen
depleting materials.  The greatest immediate incremen-
tal improvement in DO can be made through reduction in
point-source ammonia loading.  The pros and cons of
upgrading treatment efficiencies for BOD removal would
best be determined after ammonia loadings have been
reduced to reasonable levels and the possibility of
controlling a benthal-oxygen demand in Portland Harbor
has been fully assessed.
(KEY TERMS: river-quality planning; dissolved-oxygen
standards; dissolved-oxygen modeling; biochemical-
oxygen demand; nitrification; low-flow augmentation.)

                INTRODUCTION
Historically, dissolved-oxygen (DO) depletion has been
the critical water-quality problem in the Willamette
River.  During summer low-flow periods, DO concentra-
tions of zero were sometimes observed in Portland
Harbor (see figure 1), and for years, low DO levels
inhibited the fall migration of salmon from the
Columbia River.1

In recent years, summer DO levels have increased
dramatically.  The improvement has resulted primarily
from the basinwide advent of secondary wastewater
treatment, coupled with streamflow augmentation from
storage reservoirs.  With an average annual flow of
35,000 ft3/s, the Willamette is now the largest river
in the United States on which all known point sources
of wastewaters receive secondary treatment.  The
Willamette thus offers a unique opportunity to docu-
ment the impacts of secondary treatment on a large
river and also to predict the amount of further
improvement likely to result from different alter-
natives of river-basin management.

               PHYSICAL SETTING

Willamette River Basin

The Willamette River  basin, a watershed of almost
11,500 sq mi  (figure  1), is located  in northwestern
Oregon between the Cascade and Coast ranges.  Within
the basin are the  state's three largest cities,
Portland, Salem, and  Eugene, and approximately 1.4
million people, representing 70 percent of the
state's population (1970 census).  The Willamette
River basin supports  an important timber, agricultural,
industrial, and recreational economy and also
extensive fish and wildlife habitats.

The basin is roughly  rectangular, with a north-south
dimension of about 150 mi and an east-west width of
75 mi.  Elevations range from less than 10  ft  near  the
mouth of the Willamette River to 450 ft on  the valley
floor near Eugene and to more than 10,000 ft in the
Cascade Range.  Average annual precipitation in the
basin is 63 inches.

Hydrology

Channel Morphology.—The Willamette River main stem
forms at the confluence of its Coast and Middle forks
south of Eugene and flows northward for 187 mi through
the Willamette Valley floor.  The river is  composed of
three morphological reaches  (figure 1 and table 1).
Each reach has a unique hydraulic regime and,  there-
fore, different velocities,  sediment-transport
characteristics, and patterns of biological activity.

The Upstream Reach, stretching from above Eugene to
near Newberg, is shallow and fast-moving.  The river-
bed is composed largely of cobbles and gravel  which
provide ample opportunity for attachment of periphytic
biological growths.  During  the summer low-flow period,
mean velocity in this reach  is about seven times that
observed in the Newberg Pool and 18 times greater than
in the Tidal Reach.  Morphologically, this section of
the river is an "eroding" reach.

Between a point just above Newberg and the Willamette
Falls is a deep, slow-moving reach known as the
Newberg Pool.  Hydraulically, the Pool can be  charac-
terized as a large stilling  basin behind a weir
(Willamette Falls).  Travel  time in this 25.5-mi reach
is relatively long during low-flow conditions.
Morphologically, the Pool is a depositional reach.

The lower 26.5 mi of the river is affected by  tides
(nonsaline water) transmitted from the Pacific  Ocean
via the Columbia River and,  during April to July, by
backwater from the Columbia.  The Tidal Reach  is
dredged to maintain a 40-ft-deep navigation channel up
to river mile (RM) 14.  During low flows, net  down-
stream movement in the Tidal Reach is slow, but tidal
flow reversals can cause large instantaneous changes
in velocity.  Low-flow hydraulics are most complex  in
the lower 10 mi where, depending on hourly changes  in
tidal conditions, Willamette River water may move
downstream or Columbia River water upstream.   Owing
to morphological characteristics and the hydraulic
conditions, the subreach below RM 10 is the primary
depositional area of the Willamette River.

Flow.—Most of the flow in the Willamette occurs in
the November to March period as a result of persistent
winter rainstorms and spring snowmelt.  Each summer
there is a naturally occurring low-flow period, the
timing, duration, and magnitude of which are now
largely controlled by reservoir releases.  Since 1954,
when large-scale reservoir regulation began, discharge
during the low-flow period of July-August has  been
maintained at a minimum of about 6000 ft3/s (Salem
gage) by reservoir augmentation.  In comparison, for
the unusually dry year of 1973, the calculated (from
a deterministic model) naturally occurring low flow
for this period would have been 3260 ft3/s.  The
summertime flow releases are made for purposes other
than river-quality enhancement, but, as subsequently
described, the augmentation  has a profound  impact on
the DO regime.

Temperature.—Water temperatures in the Willamette
River and in all tributaries reach a maximum during
                                                        62

-------
the annual July-August low-flow period.  Temperatures
during July average about 20° C in the Newberg Pool
and about 22° C in the Tidal Reach; these are
controlled primarily by ambient air temperatures.

                 DATA PROGRAM

Review of existing data indicated that an appreciable
DO deficit occurs in the Willamette only below RM 86
and during the yearly low-flow period of July through
August.  The DO-data-collection program was developed
to formulate a mathematical model for simulating
conditions below RM 86 for the critical summer period.
Emphasis was placed on direct intensive measurement
of waste loads and model coefficients to avoid
reliance on published values, engineering estimates,
and the development of model coefficients through
computerized curve fitting (optimization).   Details
of sampling approaches and analytical techniques are
reported in other papers.

            DISSOLVED-OXYGEN REGIME

Dissolved-Oxygen Profiles

Basinwide secondary treatment has had a profound
impact on the major deoxygenation processes and
consequently on the DO regime of the Willamette
River.  During the summer low-flow period of 1974,
average daily DO concentrations met state standards
for all reaches of the river at flow conditions
between 6500 and 7000 ft3/s and water temperatures
between 22° and 25° C.

Figure 2 compares the 1973 DO profile of the river
below RM 86 to historic conditions.  In 1956, there
was a DO-concentration "plateau" between Salem and
Newberg, followed by a sharp decrease in DO through
the Newberg Pool.  These conditions were consistent
with large loadings of carbonaceous biochemical
oxygen demand (BOD) in the vicinity of Salem, a rapid
travel time between Salem and Newberg, and a large
amount of carbonaceous deoxygenation in the slow-
flowing Newberg Pool.

The 1959 data show an increase in DO concentration
below Willamette Falls and a sharp decrease in DO
through the Tidal Reach.  These observations were
consistent with known reaeration at the falls, inflow
of cool high-DO water from the Clackamas River, and
the decay of carbonaceous wastes which entered the
river just below the falls and throughout Portland
Harbor.

The 1973 profile shows a rapid decrease of DO from
RM 86 to Newberg, a DO "plateau" in the Newberg Pool,
a DO increase over Willamette Falls, a gradual
decline in DO between RM's 24 and 13, a sharp decrease
in DO between RM's 13 and 5, and recovery of DO
below RM 5.  The DO decrease between RM 86 and
Newberg contrasts with a "plateau" in 1956 and
results from nitrification that did not occur at the
earlier date.  The 1973 DO "plateau" in the Newberg
Pool indicates that carbonaceous deoxygenation is
now occurring at a rate slow enough to be balanced
by DO inputs from atmospheric reaeration.  The DO
decrease between RM's 24 and 13 is consistent with
measured river loads of ultimate BOD (BODult), but
the sharp decrease between RM's 13 and 5 cannot be
accounted for by known sources of BOD. (See section
entitled "Benthal-Oxygen Demand.")  The DO profile
of the Willamette during July-August 1974 was
essentially the same as that presented in figure 2
for 1973.
 Nitrification

 During  the  summers  of  1973 and 1974,  nitrification
 was  the dominant  control on DO in the shallow,  swift-
 flowing subreach  between RM's  85 and  55.   Examination
 of historical data  indicates that this reach began to
 receive appreciable ammonia loading from  a pulpmill
 in 1956.  Dissolved-oxygen data from  the  1950's and
 1960's  are  sketchy,  but  suggest that  nitrification did
 not  become  a  significant oxygen sink  until the  advent
 of secondary  treatment at pulp and paper  mills.
 Secondary treatment involved the use of ammonium
 hydroxide for neutralizing wastewaters prior to
 treatment, and resulted in continuous discharge of
 effluents to the Willamette rather than the previously
 used program of summer lagooning and winter discharging.

 Figure  3 shows the  average instream concentrations of
 ammonia, nitrite, and  nitrate  nitrogen from RM  120 to
 7 for August  12-14,  1974.   The curves reflect a
 prominent ammonia source near  RM 116  and  rapid  in-
 stream  oxidation  of  ammonia to nitrite and nitrate
 downstream  to RM  86.   During the study period,  the
 subreach below RM 86 received about 5800 lb/d
 ammonia nitrogen from upstream sources, 16,200 lb/d
 from an ammonia-base pulp and paper mill at RM 85,
 and about 1700 lb/d from a municipal sewage plant at
 RM 78.  The instream data show a rapid conversion  of
 the  ammonia to nitrite and nitrate between RM's 85
 and  55.  The  deep, relatively  slow-moving Newberg
 Pool begins at RM 52 and,  although residual ammonia
 entered this  reach, no further nitrification could
 be detected from  nitrogen-species  analysis.

 The  occurrence of nitrification in a  shallow, surface-
 active  reach  and  the contrasting absence  in a deep,
 slow-moving reach is consistent with  a recent hypo-
 thesis proposed by Tuffey, Hunter, and Matulewich.5
 According to  the hypothesis, nitrification in shallow,
 swift-flowing  reaches  would  occur  by  virtue of  an
 attached, rather  than  a  suspended  population of
 nitrifying organisms.  To  test  the hypothesis,
 enumerations  of nitrifying bacteria were  made on
 water samples  and on biological slimes scraped  from
 rocks.  Nitrosomonas concentrations were  <1  most
 probable number (MPN)/ml in all water samples from
 throughout the  river.  In  slimes,  Nitrosomonas
 concentrations were <1 MPN/mg above RM 86  and 1-4 MPN/
 mg in samples  collected between RM's  85 and  55  (the
 active  zone of nitrification).    The Newberg  Pool is a
 deep depositional reach and  few rocks  are  available
 for  attachment.   In comparison,  Nitrobacter concen-
 trations in the zone of nitrification  ranged  from
 <1 to 4 MPN/ml  in water samples  and from  6-50 MPN/mg
 in slimes.  The bacteriological  data  thus  support  the
 hypothesis that nitrification occurred in  slimes
 attached to rocks rather  than  in flowing  water.

 Based on observed river concentrations  of  nitrate
 (figure 3),  an  in-river rate of  nitrification was
 calculated for the affected  subreach.   Assuming first-
 order decay, the rate, kn (log10), was about 0.7/d.
Applying this rate to  the measured  loadings  of
 ammonia indicates that, for the  August  12-14  period,
 nitrification removed about 55,000 lb/d DO from the
 30-mi subreach.  This  satisfied  demand was  responsible
 for most of the decrease observed  in DO concentration
 (figure 2).
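 The form of such an estimate can be sketched briefly.  The fragment below assumes first-order
 (base-10) nitrification, the conventional stoichiometric factor of roughly 4.57 lb of oxygen
 consumed per lb of ammonia nitrogen oxidized, and a hypothetical travel time through the
 RM 85-55 subreach; the inputs and the printed result are illustrative only and are not the
 values computed in this study.

    # Sketch of a first-order nitrogenous oxygen-demand estimate (illustrative only).
    ammonia_n_load = 5800 + 16200 + 1700   # lb/d NH3-N entering below RM 86 (loads quoted above)
    kn = 0.7                               # first-order nitrification rate, per day (log10 basis)
    o2_per_lb_n = 4.57                     # approximate lb O2 per lb NH3-N nitrified (assumed)
    travel_time = 0.5                      # assumed travel time through the RM 85-55 subreach, days

    fraction_exerted = 1.0 - 10.0 ** (-kn * travel_time)
    do_removed = ammonia_n_load * o2_per_lb_n * fraction_exerted
    print(f"approximate nitrogenous DO demand exerted: {do_removed:,.0f} lb/d")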

 Carbonaceous Deoxygenation

Present-day (1974) BOD-loading patterns and  rates of
 exertion contrast sharply with  those observed during
 the mid-1950's.
                                                       63

-------
 BOD Loading.—During the dry-weather period of 1954,
 the estimated point-source loading of BODult to the
 Willamette River was approximately 350,000 lb/d.
 This total included chemical demands resulting from
 sulfite wastes, soluble and suspended carbonaceous
 demands from pulp and paper mills, and the carbona-
 ceous demands of raw sewage and primary effluents.

 In contrast, the point-source BODult loading during
 August 1974 was about 92,000 lb/d (table 2).  The
 decrease resulted from secondary treatment of all
 carbonaceous  wastes,  chemical recovery of sulfite
 wastes, and the  routing of sewage effluents from
 metropolitan  Portland into the Columbia River instead
 of the Willamette.

 During the 1974  low-flow period,  nonpoint sources
 contributed about 77,000 lb/d BODult to the
 Willamette, or about 46 percent of the total basin-
 wide loading (table 2).  Because of the design of
 the nonpoint-sampling program, it appears that
 almost all of the estimated loading from diffuse
 sources represents natural background demand from
 essentially pristine streams.  Thus, only about
 one-half the observed total BODult loading to the
 Willamette River is potentially  amenable to removal
 by future pollution-control programs.

 Concentrations and  Rates.—During the  low-flow period
 of 1954,  five-day BOD's  (BOD5) in the  Willamette
 River varied from about 1.0 mg/l at sites far removed
 from waste inputs to about 2.5 mg/l below large waste
 outfalls.  During 1974, measured BOD5 concentrations
 were about 1.0 mg/l throughout the river.  The
 apparent  anomaly of the  comparative  concentrations
 arises from marked  differences between 1954 and 1974
 in the river rates of deoxygenation (k1).  The value
 throughout the river in 1954 was probably 0.1/d
 (log10) or greater.  The measured k1's in 1974 were
 clustered around 0.04/d.  Thus in 1954, a minimum of
 about 68 percent of BODult was exerted in five days,
 whereas in 1974 the comparative value was 39 percent
 (see Velz for discussion of BOD exertion).  This
 comparison underscores the need for determining k1
 values and BODult (rather than BOD5 alone) as a basis
 for  accurately modeling  DO under  conditions of
 secondary  treatment.
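 The five-day percentages quoted above follow directly from the usual first-order (base-10)
 exertion relationship, BOD5 = BODult [1 - 10^(-5 k1)]; the two-line check below illustrates
 it (the 1974 figure depends on exactly which k1 within the measured 0.03-0.06/d range is used).

    # Fraction of ultimate BOD exerted in five days for a first-order, base-10 rate k1 (per day).
    exerted_in_5_days = lambda k1: 1.0 - 10.0 ** (-5.0 * k1)
    print(f"k1 = 0.10/d: {exerted_in_5_days(0.10):.0%}   k1 = 0.04/d: {exerted_in_5_days(0.04):.0%}")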

 The in-river concentrations of BODult during 1973
 and 1974 averaged about 2.5 mg/l.

 Benthal-Oxygen Demand

 During  1970, benthal respirometer studies  (written
 communication, John Sainsbury, 1970) of  the
 Willamette  documented a  benthal-oxygen demand of
 27,000 to 54,000 lb/d in the subreach between RM's 13
 and  7.  The river at that  time still received some
 raw  sewage  in  the form of  combined sewer overflows
 and  a moderate loading of  settleable solids from
 pulp  and paper mills.  Because these sources of solids
 are now largely controlled, it was anticipated that
 the benthal demand would be greatly reduced by 1973.
 However, during preliminary calibration of our model,
 predicted DO concentrations between RM's 13 and 5
 were higher than  those measured in the river, whereas
 predicted BODult concentrations were considerably
 lower.  Refined modeling tests suggested an
 unaccounted-for oxygen demand of about 27,000 lb/d
 in the subreach.

DO Modeling

The model chosen for the study was the one developed
and used for more than 30 years by C. J. Velz.  The
basic model, described in detail  in Applied Stream
 Sanitation,7 is applicable to conditions of steady
 (invariable),  nonuniform (changing cross-sectional
 geometry),  plug (nondispersive) flow.   The computer
 program as  formulated for the present study is
 called the  WIRQAS (Willamette Intensive River Quality
 Assessment  Study) model.

 The model was  calibrated against 1974 streamflow and
 temperature conditions and verified against the
 slightly lower flow and higher temperature conditions
 of 1973.  For  conditions representing 1974 summer low,
 the model indicates that an oxygen demand of 164,000
 lb/d was satisfied between RM's 86 and 5.  Of the
 total,  about 22 percent resulted from background
 carbonaceous-oxygen demand,  28 percent from point-
 source carbonaceous demand,  34 percent from point-
 source ammonia, and 16 percent from the unaccounted-
 for demand  in  Portland Harbor.
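 For readers unfamiliar with models of this class, the fragment below sketches a generic
 steady-state, plug-flow oxygen-deficit balance of the Streeter-Phelps form, with separate
 carbonaceous and nitrogenous sinks and atmospheric reaeration.  It is offered only as an
 illustration of the kind of accounting involved; it is not the Velz rational method as
 implemented in WIRQAS, and all of the coefficients and inputs shown are hypothetical.

    # Generic steady-state, plug-flow DO-deficit sketch (Streeter-Phelps form); not WIRQAS.
    import math

    def deficit(D0, L0, N0, kd, kn, ka, t):
        # DO deficit (mg/l) after travel time t (days), given initial deficit D0,
        # carbonaceous ultimate BOD L0, nitrogenous demand N0, and base-e rates
        # kd (deoxygenation), kn (nitrification), ka (reaeration).
        sink = lambda k, L: k * L * (math.exp(-k * t) - math.exp(-ka * t)) / (ka - k)
        return sink(kd, L0) + sink(kn, N0) + D0 * math.exp(-ka * t)

    # Hypothetical inputs; base-10 rates from the text are converted to base e (x 2.303).
    print(round(deficit(D0=1.0, L0=2.5, N0=1.5,
                        kd=0.04 * 2.303, kn=0.7 * 2.303, ka=0.5, t=2.0), 2))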

                 PLANNING IMPLICATIONS

 As an aid to river-quality planning,  the WIRQAS model
 has been used  to test management alternatives
 concerning  (1)  BOD loading,  (2)  ammonia loading,  (3)
 low-flow augmentation,  and (4) the effects of possible
 removal or  reduction of the  benthal-oxygen demand.
 This section describes the major implications of
 tested  alternatives by comparing measured DO profiles
 with profiles  generated by the verified model.  A
 fact the  reader should bear  in mind in examining  the
 following illustrations (figures 4-8)  is that,  for
 ease of presentation,  the  DO profiles  are plotted as
 a  function  of  river location.   Thus the slopes  of
 curves  in specific subreaches do not  represent  the
 actual  rates at which oxygen is  added  to or lost  from
 the river.   For example, the profiles  in figures 4-8
 suggest a rapid rate of oxygen depletion below
 Willamette  Falls.   Actually,  the steepness  of  the
 curves  is caused by the slow time-of-travel in  the
 Tidal Reach (see table 1)  rather than  by an
 accelerated rate of oxygen depletion.

 BOD Loading

 The effect  of  BOD loading  on summertime DO  is
 reflected in figure  4.   The  curve labeled 100 percent
 represents  the  average  DO  profile of the river at
 the  flow, water  temperatures,  ammonia  loading, and
 BOD  loading actually measured  during the low-flow,
 steady-state period  of  mid-August 1974.   The upper
 and  lower curves  represent the predicted DO profiles
 at  50 percent and  200 percent  of  the measured point-
 source  BOD  loading with all  other variables held
 constant  at  observed levels.   These curves  are
 calculated  on  the  basis that  all  point-source BOD
 receives  secondary treatment and  decays  at  rates of
 0.06/d  above RM 55 and  0.03/d  below this point.

 The  upper curve  in  figure 4  indicates  that  only a
 slight  improvement  in DO can be  obtained by a 50
 percent decrease of  BOD loading  from each point
 source  in the basin.  The predicted increase in DO
 would be  <5 percent  of  saturation at the bottom of
 the Newberg Pool  (RM 28) and  5 percent at RM 5, the
 low DO  point in  the  river.

 In contrast, a doubling of BOD loading from each
 point source would depress DO  by  5 percent  of
 saturation at RM 28  and by 10  percent at RM 5.  This
 decrease  in DO would cause violation of  the state DO
 standard  in  the subreach between  RM's 62 and 50.

Figure  5 compares  the DO profile  observed in mid-
August 1974  with the predicted profile assuming a
BOD5 standard of 10 mg/l for all municipal wastewater
 effluents.  Such a standard, attainable  by high-level
                                                       64

-------
secondary treatment, is presently being considered by
Oregon and several other states throughout the
country.  Application of the 10 mg/l standard in the
Willamette River basin would decrease municipal
summertime loading of BODult from 37,600 lb/d to
17,400 lb/d.  The largest individual decrease would
be about 7000 lb/d at Salem (RM 78).  The modeling
results (figure 5) indicate that, over the investigated
reaches, the reduced loading would have no effect on
river DO.  The lack of an effect stems from (1) the
small reduction attainable in total loading of BODult
(about 12 percent; see table 2), (2) the low rate at
which BODult is exerted in the river, and (3) the
locations within the basin of the largest municipal
wastewater treatment plants.

Ammonia Loading

Figure 6 illustrates the effect of ammonia loading on
summertime DO in the Willamette.  Compared to the
observed DO profile (the 100 percent curve), a 50
percent reduction in ammonia-nitrogen loading from
each  point source would increase the DO by 7 percent
of saturation near the bottom of the Upstream Reach
 (RM 60), by 5 percent at RM 28, and by <5 percent at
RM 5.  In contrast, a doubling of ammonia-nitrogen
loading in each point source would decrease  the DO
by 13 percent at RM 60, by 11 percent at RM  28, and
by 7  percent at RM 5.  These results illustrate two
important points.  First, ammonia loading has its
greatest effect on DO in the active zone of  nitri-
fication between RM's 85 and 55.  Thereafter,
measurable nitrification ceases to occur and the
upstream effects of the process are gradually
diminished by atmospheric reaeration.  Second,
comparison of figures 4 and 6 indicates that point-
 source ammonia loading has a greater influence on
Willamette River DO than point-source BOD loading.
At  the observed relative point-source loadings
 (43,000 lb/d ammonia-nitrogen; 92,000 lb/d BODult),
 this occurs primarily as a result of (1) the greater
 oxygen demand per unit weight of ammonia as compared
 to the organic matter in secondary effluents, and
 (2) the much greater rate at which ammonia is
 oxidized (k1 = 0.03 to 0.06/d; kn = 0.7/d).
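 The combined effect of these two factors can be illustrated with rough numbers.  In the
 sketch below, the 4.57 lb of oxygen per lb of ammonia nitrogen and the two-day travel time
 are illustrative assumptions; the loadings are those quoted above.

    # Rough comparison of point-source nitrogenous vs. carbonaceous demand (illustrative).
    o2_per_lb_n = 4.57                               # assumed lb O2 per lb NH3-N nitrified
    nitrogenous_ultimate = 43_000 * o2_per_lb_n      # lb/d, from the measured NH3-N loading
    carbonaceous_ultimate = 92_000                   # lb/d BODult, measured point-source loading
    exerted = lambda k, t: 1.0 - 10.0 ** (-k * t)    # first-order, base-10 exertion fraction
    t = 2.0                                          # assumed travel time, days
    print(f"nitrogenous demand exerted:  {nitrogenous_ultimate * exerted(0.7, t):,.0f} lb/d")
    print(f"carbonaceous demand exerted: {carbonaceous_ultimate * exerted(0.04, t):,.0f} lb/d")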

 The  upper curve in  figure 6 shows the impact of
 applying a standard of 10 mg/l ammonia-nitrogen to
 all  municipal and industrial effluents.

 Most  of  the ammonia that enters  the Willamette below
 RM  86 is discharged from a large pulp and paper mill
 at  RM 85  (see "Nitrification").  Control of  ammonia
 from this one source would greatly reduce the  impact
 of  nitrification on the DO regime of the river.

 Low-Flow Augmentation

 The effect  of flow  augmentation  on DO is illustrated
 in  figure 7.  The observed flow  at  Salem during mid-
 August  1974 was 6760 ft3/s.  For comparison,
 computed DO profiles are presented for Salem flows
 of  9000, 5000, and  3260 ft3/s.   The latter value  is
 near the lowest minimum monthly  average flow ever
 observed  for July under natural  (nonaugmented)
 conditions.   As previously noted, predictions from
 a deterministic model indicate this flow would have
 occurred during July of the unusually dry year of 1973.

 The  impact  of flow  augmentation  is marked.   At a  flow
 of  3260  ft3/s, the  1974 BOD and  ammonia-nitrogen
 loadings would cause violation of state DO standards
 by a  wide margin at most locations.  The predicted
 DO  saturation levels at 3260 ft3/s are nearly  30
percent less than the observed values (6760 ft3/s) at
 RM's  60, 28, and 5.
At a flow of 5000 ft3/s, the state DO standard would
have been violated between RM's 67 and 50 and just
have been met in the Newberg Pool.

In contrast to the marked decrease predicted in DO
at decreased flows, an increase to 9000 ft3/s would
cause a relatively small improvement in saturation
percentages.  The effects of augmentation on
Willamette River DO result from a complex interaction
of flow with (1) the loading and rate of BOD exertion,
(2) ammonia loading and the rate of nitrification,
(3) time of travel, and (4) atmospheric reaeration.
The most significant interactions at different levels
of flow have not been delineated, but it appears
under the combined conditions observed in 1974 that
flow augmentation to discharges above 7000 ft3/s
would provide little incremental increase in DO
concentrations.  However, under future conditions,
flows in excess of 7000 ft3/s might be a desirable
alternative to expensive, energy-consuming advanced-
waste treatment processes.

Figure 8 illustrates the combined effects of
nitrification and flow augmentation.  Curve B is the
average DO profile observed during the steady, low-
flow period of July-August 1973.  Curve D is the
predicted DO profile at a flow of 3260 ft3/s and the
observed ammonia loading.  In comparing curves B and
D, note that without augmentation, the state DO
standards would have been violated at most points in
the river.  Curves A and C represent predicted DO
profiles at the same two flows, but with ammonia-
nitrogen loading reduced to 10 mg/l in all point-source
discharges.  With such an effluent limitation, the DO
profile predicted at 6000 ft3/s is considerably above
the observed profile (curve B) throughout most of the
river.  However, curve C portrays the most important
finding for 1973 conditions.  Even with basinwide
secondary treatment and a reasonable limitation on
ammonia loading, low-flow augmentation would be
necessary to achieve DO standards in the Newberg Pool
and the Tidal Reach.

Benthal-Oxygen Demand

As previously noted, the WIRQAS model indicates the
presence of an unaccounted-for oxygen demand of
27,000 lb/d between RM's 13 and 7.  Field investigations
suggest that most of the demand is benthal in origin,
but the exact causes are unknown.  Possible factors
that may either cause or affect the oxygen demand
include:

  1.  Unknown sources of raw sewage.
  2.  Combined-sewer overflows.
  3.  Urban storm runoff.
  4.  Bilge water and refuse from  ships.
  5.  A net oxygen loss caused by algal respiration
      exceeding algal production  owing to limited
      light penetration of the water column.
  6.  A turbid, high-oxygen demanding, estuarine-like
      "null zone" resulting from  tidally influenced
      hydraulic conditions.
  7.  Sedimentation and decomposition of natural
      organics such as leaves and algae.

These possibilities are presently the focus of further
study.  Hopefully, all or at least part of the demand
can be related to controllable sources.  If so,
management control of these sources will provide a
means for improving summertime DO by up to 8 percent
of saturation at RM 5.
                                                        65

-------
             SUMMARY AND CONCLUSIONS
Future achievement of DO standards in the Willamette
River will require continued low-flow augmentation in
addition to pollution control.  Minimum flows of
6000 ft3/s (Salem gage) are presently (1974) needed to
meet the standards at existing BOD and ammonia
loadings and with the occurrence of an unidentified
(probably benthal) oxygen demand in Portland Harbor.
As basin development continues, it is likely that
summertime flows above 6000 ft3/s will be needed even
with increased treatment removal of oxygen-depleting
materials.

Point-source loading of ammonia is presently the major
cause of oxygen depletion below RM 86.  Because most
of the ammonia comes from one source, reduction of
ammonia loading offers a relatively simple alternative
for achieving a large improvement in summertime DO.

Removal or partial reduction of the oxygen demand in
Portland Harbor would improve the summer DO
concentrations between RM's 10 and 5.  However, the
feasibility of reducing the demand is yet to be
determined.

BOD loading from municipal wastewater treatment plants
presently exerts a relatively small impact on DO.
Increased efficiency of BOD removal at the largest
municipal plants and at selected industrial plants
might be desirable in the future.  The benefits to be
gained from this alternative would best be determined
after ammonia loadings have been reduced to reasonable
levels and the possibility of controlling the
suspected benthal demand has been fully assessed.
                              REFERENCES

          1.  Gleeson, G. W., 1972.  The return of a river, the
               Willamette River, Oregon.  Advisory Committee  on
               Environmental Science and Technology and  the
               Water Resources Research Institute, Oregon  State
               University, Corvallis, 103 pp.
         2.  Rickert, D. A., Hines, W. G., and McKenzie, S. W.,
               1975.  Methods and data requirements for  river-
               quality assessment.  Water Resources Bulletin,
               Vol. 11, No. 5, pp. 1013-1039.
         3.  Hines, W. G., Rickert, D. A., McKenzie,  S.  W.,
               and Bennett, J. P., 1975.  Formulation and use
               of practical models for river-quality  assessment.
               USGS Survey Circular 715-B, 16 pp.

         4.  Hines, W. G., McKenzie, S. W., and Rickert, D. A.
               (in press).  Dissolved oxygen regime of the
               Willamette River, Oregon, under conditions of
               basinwide secondary treatment.  USGS Survey
               Circular 715-1.
          5.  Tuffey, T. J., Hunter, J. V., and Matulewich, V. A.,
               1974.  Zones of nitrification.  Water Resources
               Bulletin, Vol. 10, No. 3, pp. 555-564.

         6.  Velz, C. J., 1951.   Report on natural purification
               capacities, Willamette River.  National Council
               for Stream Improvement of the Pulp, Paper, and
               Paperboard Industries, Inc., School of Public
               Health, Michigan University, Ann Arbor, 80 pp.

         7.  Velz, C. J., 1970.   Applied stream sanitation.
               John Wiley & Sons, Inc., New York, 619 pp.
         This paper was originally published by the American
         Water Resources Association in the Proceedings of the
         Urbanization and Water Quality Control Symposium,  1975,
         pp. 70-84.
                     TABLE 1. Physical Characteristics of the Main Stem Willamette River
                                    (for discharge at Salem = 6000 ft3/s).

     Reach            Length,   Approximate    Bed material     Representative   Average     Approximate
                      miles     bed slope,                      midchannel       velocity,   travel time
                                ft/mile                         water depth,     ft/s        in reach,
                                                                ft                           days

     1  Tidal          26.5        0.1         Clay, sand,           40            0.16          10
                                               and gravel
     2  Newberg Pool   25.5         .12        do                    25             .40           3.9
     3  Upstream      135          2.8         Cobbles and            7            2.9            2.8
                                               gravel
            TABLE 2. Dry-Weather 1974 Ultimate BOD Loading, Willamette River, Oregon

               Sources               Loading, lbs/day      Percent

               Nonpoint                   77,100              46
               Point
                 Municipal                37,600              22
                 Industrial               54,400              32

               Total:                    169,100             100

-------
      Figure 1. — Map and profile representing the Willamette River, Oreg.: (A) distinctive
         morphologic reaches, (B) elevation profile.

      Figure 2. — Comparison of 1973 and historical DO profiles in the Willamette River for
         steady, low-flow conditions.

      Figure 3. — Inorganic nitrogen concentrations in the Willamette River during mid-August 1974.

      Figure 4. — DO profiles for selected percentages of the measured point-source BOD loading
         during mid-August 1974.  Flow and ammonia loading held constant at observed levels.
                                                                          67

-------
      Figure 5. — DO profiles:  observed during mid-August 1974 and predicted for a BOD5
         municipal effluent standard of 10 mg/l.  Flow and ammonia loading held constant at
         observed levels.

      Figure 6. — DO profiles for an effluent standard of 10 mg/l NH4-N and for selected
         percentages of the measured point-source ammonia loading during mid-August 1974.
         Flow and BOD loading held constant at observed levels.

      Figure 7. — DO profiles for selected flows with BOD and ammonia loadings held constant
         at levels measured during mid-August 1974.  Observed flow was about 6,760 ft3/s.

      Figure 8. — DO profiles for selected conditions of flow and ammonia loading.  BOD
         loading held constant at levels observed during July-August 1973.  Curves A, C,
         and D are predicted; curve B is observed.
                                                                   68

-------
                                     URBANIZATION AND FLOODING:  AN EXAMPLE
                 Robert P.  Shubinski, Vice President and William N. Fitch, Principal Engineer
                                        Water Resources Engineers, Inc.
                                            Springfield, Virginia
     The Four Mile Run watershed in Northern Virginia
is a classical  example of the development of flood
problems with urbanization.   The Corps of Engineers has
planned $29,000,000 in channel  improvements to
alleviate the problem, but the Congress, concerned that
future development in the basin will  create the problem
again, required that a land  management program be
developed.  The selected approach to land management
is designed to determine effective structural and
nonstructural methods of flood abatement.  Emphasis is
on the nonstructural.  The technical  portions of the
program rely on two stormwater models.  The models are
STORM, a Corps model which is useful  as a screening
tool, and SWMM, a more detailed model  sponsored by
EPA.  The paper describes the application of these
models, using historic and design hydrology, to
determine the plans and policies for further basin
development.
                     Introduction

     This paper describes the application of two
complementary stormwater models to an urbanizing
watershed.  The first model, STORM (Storage, Treatment,
Overflow, Runoff Model),1 is a simple model  based on
the rational  method.   It was used to develop a
statistical  analysis  of the basin's hydrology, thereby
defining the design storm.   The sophisticated model,
WREM (Water  Resources Engineers Model),2 was then
applied, using the design hydrology developed in STORM,
to determine the response of the watershed to various
control alternatives.  Later, in work now underway,
these results will be used  to assign design shares for
future development to each  of the political  subdivisions
in the basin.
              The Overview Model, STORM

     The Four Mile Run Basin, located in Northern
Virginia, has a total  watershed area of 19.5 square
miles.  Figure 1  is a  map of the basin.  There are
five major tributaries, all  of which have steep slopes
and in general have rapid and very peaked runoff
characteristics.   The  base flow of Four Mile Run, for
the purposes of this study defined as the dry weather
flow, varies from 3 to 7 cfs.  This discharge is
insignificant compared to peak discharge rates for
flooding events and was therefore not included in the
flood flow analysis.

     Three types  of hydrologic data were prepared as
input to STORM:  areal  and temporal  distribution of
rainfall, stream  flood stage and evaporation rate.
Rainfall distribution  data were analyzed to identify
significant long  term  trends in the major storm events
and the type of storm  which creates major floods, and
to define reliable isohyetal  patterns for the major
storm events.   Data for this study were taken from
recording and nonrecording rainfall  gages operated
in 60 different locations in and around the watershed
by the United States Geological Survey (USGS), the
National Weather  Service (NWS), and Arlington and
Fairfax Counties.

     Temporal data indicated that higher intensity
rainfalls occurred during thunderstorms rather than
during hurricanes or slow-moving storms and the time
at which peak intensities occur for different stations
was found to be less variable for thunderstorms than
for slow-moving storms.  Analysis of the areal
distribution showed a large variability of total rain-
fall between stations, particularly for thunderstorms.
Average rainfall for the basin was determined by
weighting the average rainfall between successive
isohyetals by the area between isohyetals, totalling
these products and dividing by the total area.  Figure
2 presents the isohyetals for the largest peak dis-
charge (July 23, 1969).
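     The area-weighting step is straightforward; the fragment below sketches it with
hypothetical isohyetal bands (the band rainfalls and areas are illustrative, not the Four
Mile Run data).

    # Area-weighted basin-average rainfall from isohyetal bands (hypothetical inputs).
    # Each pair is (mean rainfall between successive isohyetals, inches; band area, sq mi).
    bands = [(5.5, 2.1), (4.5, 6.4), (3.5, 7.8), (2.5, 3.2)]
    total_area = sum(area for _, area in bands)
    basin_average = sum(rain * area for rain, area in bands) / total_area
    print(f"basin-average rainfall: {basin_average:.2f} in over {total_area:.1f} sq mi")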

     Stream flow stage data are available from nine
gages in the watershed operated by the USGS.  However,
records are sketchy for the purposes of this study due
to the destruction of the gages during extreme flooding
and delays in replacing the gages.  Discharge rates for
all of seven primary locations are available for only
one of the seven flood events selected for the study.

     Evaporation directly affects the available
depression storage and therefore affects the proportion
of rainfall which occurs as runoff.  The evaporation
rate used as input to STORM is the pan coefficient for
the Washington area (0.76) published by the NWS
extrapolated for winter months.
    Definition of the Rainfall/Runoff Relationship

     The hydrologic data described above for six
storm events were used with data for present land use
conditions to adjust the runoff coefficients used in
STORM until the model reproduced field conditions
within a preset margin.  This procedure resulted in
the definition of the rainfall/runoff relationship for
the Four Mile Run Watershed and a calibrated simulation
model that can be used to generate extended runoff
records from the long term rainfall records at
National Airport.  The six storm events were selected
for the calibration procedure on the basis of flood
magnitude, recentness, and availability of rainfall and
runoff data.  All six storms had a recurrence interval
greater than five years.

     Land use data were transformed into percent
surface imperviousness for each of five land use
classifications:  single family residential and schools,
multi-family residential, commercial/office/institu-
tional, industrial and open space.  While the residen-
tial and open space classifications occupy 15.5 square
miles of the total 19.5 square miles of the watershed,
the percent impervious for residential usage ranges
from 13 to 71 percent.

     Determination of the runoff coefficient for
pervious and impervious areas is the essence of the
calibration process in STORM.  The runoff coefficient
is the fraction of the total rainfall which becomes
surface runoff, and is thus directly related to the
infiltration capacity of the surface.  The runoff
                                                        69

-------
coefficients determined during the STORM calibration
were, for pervious areas, 0.39 and, for impervious
areas, 0.90.  These values were then used in simulating
the flood which would result from the design storm
event.  It should be noted that all six calibration
storms caused major flooding; therefore the runoff
coefficients are defined, in the strict sense,  for
flooding events only.
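     In STORM's rational-method framework those coefficients enter the runoff computation
roughly as sketched below.  The 0.39 and 0.90 values are the calibrated coefficients quoted
above; the land-use areas, imperviousness fractions, and storm rainfall are hypothetical.

    # Composite runoff coefficient and runoff depth, rational-method style (a sketch).
    C_PERVIOUS, C_IMPERVIOUS = 0.39, 0.90            # calibrated STORM coefficients (from text)

    # Hypothetical land-use breakdown: (area in sq mi, fraction impervious).
    land_use = [(8.3, 0.13), (2.3, 0.45), (1.3, 0.71), (3.9, 0.02)]

    total_area = sum(a for a, _ in land_use)
    impervious = sum(a * f for a, f in land_use)
    composite_c = (C_IMPERVIOUS * impervious + C_PERVIOUS * (total_area - impervious)) / total_area

    rainfall = 3.0                                   # hypothetical storm rainfall, inches
    print(f"composite C = {composite_c:.2f}; runoff = {composite_c * rainfall:.2f} in of {rainfall} in")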

     The model was calibrated using the complete
hydrographs of two of the six major storm events and
peak discharge measurements for the other four
available for a downstream gage.  Complete hydrographs
for these four storms were not available due to the
loss of the key downstream gage during the storms.  As
shown in Figure 3, the calibration simulations  for
discharges of five of the six storms showed agreement
within ± 12 percent of measured flows.  The discrep-
ancy can be caused by several factors, most importantly
that the model assumes that infiltration capacity of
the surface is constant through the duration of a storm
and that depression storage is constant for all storm
magnitudes.

     A sensitivity analysis was made for the model in
order to determine the relative effects of varying
key parameters, i.e., ratio of basin rainfall  to
National Airport rainfall, pervious and impervious
runoff coefficients, depression storage, percent
impervious for low density land use category and
percent impervious for the vacant land use category.
The sensitivity analysis showed that:

     1.  The predicted discharge rates are highly
sensitive to inaccuracies in the average rainfall
patterns for the basin,
     2.  Large changes in land use produce a signifi-
cant change in peak discharge for major storm events,
     3.  The effects of land use changes over the
calibration period (1963-1973) are overshadowed by
inaccuracies in stream flow measurement, and
     4.  The sensitivity of the model to changes in
land use is relatively independent of the runoff
coefficients determined in the calibration process.
         Analysis of Design Flood Frequencies

     Flood frequency analysis was used to determine
the probable extreme flow which the flood control
project must accommodate.  The specific flood  frequency
which serves as the basis of the project design must
be selected through economic analyses and policy
decisions.  The expected flood frequency for flood
events is determined by analyzing the statistical
variation of historical flow records.  Since the
project design is based on present land use conditions,
these historical  flow records must be adjusted to
account for the effects of urbanization.  Figure 4
presents a comparative flood frequency curve developed
by two methods—the unit hydrograph method with external
flow adjustment (USACE method)3 and rainfall runoff
analysis for present land use (STORM method).

     The USACE method for adjusting flow records was
to increase historical  flows by fixed annual
percentages.  The method used in the present study is
based on discharges predicted by STORM, calibrated for
present land use, and historical rainfall records.  A
comparison of the two methods shows a significant
difference, but flow frequency curves for the  methods
cross near the 100-year recurrence interval which  was
the point selected as the design frequency. For
intervals of less than 90 years STORM predicts higher
annual flows and for intervals greater  than  90 years
it predicts lower peak annual flows.  Since  the
predicted peak annual flows for these high recurrence
intervals are projected from available  records,  the
difference in the two methods for recurrence intervals
greater than 60 years is a direct result of  differences
of predicted flows in the less frequent events.   At a
design recurrence interval of 100 years the  peak annual
flow based on the model STORM analysis is only 2.4
percent lower than the USACE design flow.  That differ-
ence was judged to be insignificant for project  design.

     It should be noted that the STORM  method was based
on calibration to only flood events.  Therefore,  STORM
overestimates minor annual floods.
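     The frequency computation itself is standard.  The sketch below fits a log-normal
distribution to a series of annual peak flows and reads off the discharge for a chosen
recurrence interval; it is a generic illustration, not the USACE or STORM procedure used in
the study, and the peak-flow series is hypothetical.

    # Generic flood-frequency sketch: log-normal fit to annual peak flows (hypothetical data).
    import math
    from statistics import NormalDist, mean, stdev

    annual_peaks_cfs = [4200, 6100, 3800, 9500, 5100, 12800, 7400, 4900, 15600, 6800,
                        8800, 5600, 11200, 7900, 6300]
    logs = [math.log10(q) for q in annual_peaks_cfs]
    m, s = mean(logs), stdev(logs)

    def peak_for_return_period(T_years):
        # Discharge whose annual non-exceedance probability is 1 - 1/T under the fitted distribution.
        z = NormalDist().inv_cdf(1.0 - 1.0 / T_years)
        return 10.0 ** (m + z * s)

    for T in (2, 10, 50, 100):
        print(f"{T:>3}-year peak: {peak_for_return_period(T):,.0f} cfs")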
           Development of the Design Storm

     A design storm event was developed in order to
evaluate the effects of urbanization and runoff control
on the USACE project.  The channel design portion of
the project was selected as the focal point of the
study because it has a lower design capacity than
any of the other flood control structures in the flood
control project.  Since the basis of the channel
design is the 100-year flood, the design storm must
generate such an event when it is applied to the
existing watershed land use pattern.

     The design storm event was developed by the method
of Keifer and Chu.4  This method uses the rainfall
intensity-duration curve to yield the fraction of the
rainfall  before the time of peak intensity which is
equal to the ratio observed for the area.  It gives
a design storm consistent with the measured rainfall
patterns and the results of several previous studies.
The actual design storm used in STORM is a stepped
hyetograph developed from the continuous function
produced by the Keifer and Chu method.  Figure 5
presents the design storm that was adopted.
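     For reference, the continuous Keifer and Chu intensity function and its discretization
can be sketched as follows.  The intensity-duration parameters A, B, C and the advancement
ratio R below are hypothetical placeholders, not the values fitted for Four Mile Run, so the
printed hyetograph is illustrative only.

    # Sketch of a Keifer-Chu (Chicago) design hyetograph discretized into steps.
    # IDF assumed as: average intensity (in/hr) = A / (duration_min + B) ** C  (hypothetical A, B, C).
    A, B, C = 95.0, 12.0, 0.88
    R = 0.4                         # assumed fraction of the storm duration before the peak
    STORM_MIN, STEP_MIN = 180, 15   # total duration and hyetograph step, minutes

    def intensity(t_min):
        # Instantaneous intensity (in/hr) at t_min minutes from the start of the storm.
        peak = R * STORM_MIN
        td = (peak - t_min) / R if t_min <= peak else (t_min - peak) / (1.0 - R)
        return A * ((1.0 - C) * td + B) / (td + B) ** (1.0 + C)

    for t in range(0, STORM_MIN, STEP_MIN):
        mid = t + STEP_MIN / 2.0
        print(f"{t:3d}-{t + STEP_MIN:3d} min: {intensity(mid):5.2f} in/hr")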

     The design capacity of the channel proposed by
the USACE is 22,500 cfs.  The model STORM predicted
a 100-year flow of 21,950 cfs using historical data.
Using the Keifer and Chu 100-year design storm, only
19,500 cfs was predicted.  Therefore, the design
storm rainfall was increased by a constant percentage
to generate a peak runoff of 22,500 cfs.  The frequency
of the design storm thus produced therefore exceeds
the 100-year rainfall return frequency by some small
amount.  Justification for this adjustment can be
inferred from the distinction between rainfall
frequency and flood frequency.  The analyses in STORM,
although they are based ostensibly on runoff, are
really based on rainfall because of the limitations
of the rational method.  It is logical, therefore, to
modify the design storm to reflect flow records
instead of rainfall.
               The Detailed Model, WREM

     Unlike STORM, which does not include routing and
considers the collection system only indirectly, the
model WREM routes flow using the Navier-Stokes
equation and requires a detailed description of the
watershed, the sewer system and the stream network.
A drainage area receives rainfall which is reduced by
infiltration losses and by intercepted runoff
(depression storage).  Infiltration is estimated by
equations relating the infiltration rate to the
antecedent moisture conditions and to soil type.
Standard SCS soil classifications are used and the
infiltration rate modified according to rainfall
intensity and timing.  Infiltration occurs both on
                                                       70

-------
pervious and impervious areas, but at different rates.

     Residential  runoff from both pervious and
impervious areas  is further modified by retaining a
small  portion of  the runoff on the surface.  Retention
depths ranging from 0.05 inch to 0.20 inch are used.

     Runoff is then routed from each subcatchment
using  the Manning equation to portray overland flow.
The average flow  length of the pervious and impervious
areas  is estimated and the flow velocity estimated.
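     The overland-flow step can be illustrated with Manning's equation directly; the
roughness, depth, slope, and flow length below are hypothetical, and this shows only the
form of the computation rather than WREM's actual routine.

    # Overland-flow velocity from Manning's equation, V = (1.49/n) R^(2/3) S^(1/2) (U.S. units).
    def manning_velocity(n, depth_ft, slope):
        # For wide, shallow sheet flow the hydraulic radius R is taken equal to the depth.
        return (1.49 / n) * depth_ft ** (2.0 / 3.0) * slope ** 0.5

    n, depth, slope, length_ft = 0.25, 0.05, 0.02, 300.0      # hypothetical overland-flow plane
    v = manning_velocity(n, depth, slope)
    print(f"velocity = {v:.2f} ft/s; travel time over {length_ft:.0f} ft = {length_ft / v / 60:.1f} min")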

     Runoff is collected at an inlet and conveyed by
storm sewer or gutter downstream through the collection
system.  Manning's equation for open channel flow is
solved by finite  difference techniques to maintain
continuity using  average flows over a time interval
usually ranging from 5 to 30 minutes.

     Hydrographs  are routed through the stream system
by  shortening  the time step to 40 seconds and solving
the transient flow equations at that time step by
finite difference approximations.  Tidal effects on
the lower reaches are included.
                   The WREM Network

     The watershed was subdivided into 177 drainage
areas connected by 97 minor pipes.  The main channel
and minor tributaries are conveyed through an addi-
tional 90 channels and major pipes.  The rainfall
interval used is 15 minutes with records developed
from 4 to 8 continuous gages depending on the storm
event simulated.  Nine major land use categories are
used for the impervious/pervious estimation.  The
impervious/pervious data by land use category are
developed for 49 zones within the watershed to
incorporate geographic locational differences in lot
coverage.  Soils data revealed that 3 of the 4 SCS
classifications occur in the watershed.  These soil
classifications are assigned to each of the 177
drainage areas.  Intensive data collection was
undertaken to define:

     Rainfall variations (temporal and areal)
     Soil type

     Land use (past, current and future)
     Soil cover (impervious/pervious data)
     Storm sewers
     Natural channels.
                   Calibration Storms

     The watershed experienced major flooding events
in 1963, 1966, twice in 1969, 1970, 1972 and 1975.
The rainfall hyetographs were prepared for each of
these storms from the rain gage network data.
Streamflow data are unfortunately lacking as the 1969
storm destroyed the stream gage.  Therefore, only
peak flow measurements are available for the 1969
storms and all storms thereafter.

     Surveying the available hydrologic data and
comparing it to the available land use data resulted
in the selection of the 1963, (July) 1969 and 1972
floods as calibration storms.  The land use breakdowns
for each calibration year shown in Table 1 were
prepared, as were details of channel/storm sewer
modifications.  Antecedent moisture conditions were
determined for each storm.  The model was then used
to simulate these three events.  The model results
are presented in Table 2 and compared to measured
flow at defined points.  The USGS gaging station at
Shirlington is Transport Model Conduit #401, which
drains 14.3 square miles of the watershed's total of
19.5 square miles.  Agreement was good there for the
two most recent storms (1969 and 1972).  The other
conduits are in tributary streams, usually draining
smaller tributary areas.  The exceptions are 470 and
408, which are the box culverts under the RF&P
Railroad and the Mt. Vernon Avenue Bridge, i.e.,
downstream from the mouth.  Peak flow estimates were
available there.
                       Conclusion

      This project has demonstrated the conjoint use
of two models to utilize the best features of each.
They have been used individually by many investigators
throughout the country; this is the first time, so far
as the authors are aware, that the models have been
used together.  WREM has provided the capability to
examine the response of the system to a single design
event in great detail.  STORM has provided the
hydrologic background to insure that the single event
used is significant.  Taken together, the models give
the best of both continuous simulation and single
event simulation.
                       References

1.  "Urban Runoff:  Storage, Treatment and Overflow
    Model 'STORM'," Hydrologic Engineering Center,
    U.S. Army Corps of Engineers, Davis, California,
    September 1973.

2.  Kibler, D.F., J.R. Monser and L.A. Roesner, San
    Francisco Stormaatev Model:  User's Manual and
    Program Documentation, Water Resources Engineers,
    prepared for the City and County of San Francisco
    (undated).

3.  Baltimore District, Corps of Engineers, Fourmile
    Run:  Local Flood-Protection Project,  Design
    Memorandum No. 1, Hydrology and Hydraulic Analyses,
    June 1972.

4.  Keifer, C.J., and H.H. Chu, "Synthetic Storm
    Pattern for Drainage Design," J. Hydraulics Div.,
    Proa. ASCE, HY4, Vol. 83, August 1957, p. 1332.
                  Calibration Storms

     The watershed experienced major flooding events
in 1963, 1966, twice in 1969, 1970, 1972 and 1975.
The rainfall hyetographs were prepared for each of
these storms from the rain gage network data.
Streamflow data are unfortunately lacking as the 1969
storm destroyed the stream gage.  Therefore, only
peak flow measurements are available for the 1969
storms and all storms thereafter.

     Surveying the available hydrologic data and
comparing it to the available land use data resulted
in the selection of 1963, (July) 1969 and 1972 floods
as calibration storms.  The land use breakdowns for
each calibration year shown in Table 1 were prepared
as were details of channel/storm sewer modifications.
Antecedent moisture conditions were determined for
each storm.  The model was then used to simulate these
three events.  The model results are presented in
Table 2 and compared to measured flow at defined
                                  TABLE 1

                          FOUR MILE RUN LAND USE

  TYPE DESC.          1963      1969      1972      1975     FUTURE LAND
  OPEN SPACE         1.0901    1.0395    3.9125    3.3318      3.3119
  LOW DENSITY        8.3191    8.2631    8.2681    8.2652      8.3789
  MEDIUM DENSITY     0.9751    0.9751    0.9751    0.9751      1.0212
  HIGH DENSITY       2.3318    2.1133    2.1661    2.5171      2.3377
  SCHOOLS            0.7207    0.7311    0.7996    0.7996      0.7996
  INSTITUTIONAL      0.8319    0.8109    0.3166    0.3195      0.8568
  COMMERCIAL         1.3053    1.3053    1.3053    1.3058      1.3356
  INDUSTRIAL         0.5593    0.5693    0.5693    0.5693      0.5693
  SHIRLEY HIGHWAY    0.2019    0.2019    0.2019    0.2019      0.2019
                                                        71

-------
                                    TABLE 2

               FLOW CALIBRATION RESULTS AT USGS GAGING STATIONS
                             (peak flows in cfs)

 TRANSPORT          1972                     1969                     1963
 MODEL       USGS     MODEL    %      USGS     MODEL     %      USGS     MODEL     %
 CONDUIT #   FLOW     FLOW     DIFF.  FLOW     FLOW      DIFF.  FLOW     FLOW      DIFF.
 218         --       1,107    --     1,330    1,611    +23.1   --       1,290     --
 217         980      795     -18.9   1,280    1,287     +0.5   830      911      +13.7
 219         --       5,550    --     4,800    6,317    +31.6   --       5,393     --
 310         700      651      -7.0   1,600    1,279    -20.0   --       772       --
 308         1,250    813     -32.5   1,900    1,775     -6.5   --       779       --
 401         10,000   9,372    -6.2   11,600   11,213    -2.6   11,700   9,290    -20.6
 306         --       8,853    --     --       13,183    --     --       8,802     --
 116         --       1,697    --     2,200    3,010    +38.2   --       2,196     --
 115         --       781      --     1,700    1,308    -23.1   --       1,018     --
 408         --       10,861   --     17,000   17,263    +1.5   --       11,780    --
 470         --       6,121    --     5,500    6,250    +13.1   --       5,752     --

 --  INDICATES VALUE NOT DETERMINED
                                   FIGURE 1

                       Four Mile Run Watershed Map
                          [map; scale in feet]


                                   FIGURE 2

              Areal Distribution of Total Rainfall for
             700 EST July 22 to 1800 EST July 23, 1969
           [isohyetal map; basin boundary; scale in miles]


                                   FIGURE 3

              Variation Between Observed and Model Peak
               Discharge Rates for Model Calibration
     [plot of Q model vs. Q measured, in cfs, with the line
      Q model = Q measured and +/-10 percent of Q measured
      bands; numbers in parenthesis indicate year of
      occurrence of calibration storm]

   NOTE -
      DEPRESSION STORAGE = .23"
      RUNOFF COEFFICIENT (PERVIOUS)   = .39
      RUNOFF COEFFICIENT (IMPERVIOUS) = .90
                                                                                    72

-------
              Flow Frequency Curves at USGS Gaging Station
                 Using USACE Method and Model STORM
     [curves shown:  maximum annual flow frequency curve from
      STORM (1922-1973 rainfall), omitting flows with recurrence
      interval less than 5 years; maximum annual flow frequency
      curve using USACE method (adjusted for urbanization),
      1951-1972; abscissa:  recurrence interval, years]


                  ADOPTED DESIGN STORM AT NATIONAL AIRPORT
         [hyetograph; abscissa:  time from peak intensity, minutes]
                                                                                                   73

-------
                                PLANNING MODELS FOR NON-POINT RUNOFF ASSESSMENT
                                                Howard A.  True
                                           Computer Systems Analyst
                                          Ambient Monitoring Section
                                      Surveillance and Analysis Division
                                     E.  P. A.,  Region IV,  Athens,  Georgia
ABSTRACT

Several computer based processes were developed for
assessing the potential magnitudes of constituents
from non-point sources.  These processes have evolved
from application of diverse procedures and literature
data to solving specific problems in Environmental Im-
pact Statement preparation and review, National Pollu-
tant Discharge Elimination System*permit load alloca-
tions, and preparation of water quality field survey
reports.  The availability of workable processes used
in industrial operations research and anticipated
needs in the Section 208 areawide studies have led to
the development of some very useful calculating pro-
cedures.  The coverage of much of the vast spectrum
of gross assessments amply justifies the efforts ex-
pended to date.  Major benefits can be derived by re-
petitive use of these processes to calculate relative
numerical measures of effects resulting from changes
in treatment level percentages, land use allocation
percentages, population densities, loading rates, and
rainfall event intensities.

These grouped processes are referred to as planning
models and their formulation and data requirements are
given in separate documentation.  They are not excess-
ively complex or costly to run and simple problems can
be handled with a minimum of effort.  Flexible inputs
and external user controls allow maximum exercise of
judgment by the user and present the opportunity for
imaginative and innovative problem formulation.  Re-
port exhibits and resource requirements are contained
in the handout material.

BACKGROUND
An early review of P.L. 92-500, FWPCA** amendments, in-
dicated that major emphasis would be placed on assess-
ment of areawide pollutant sources.  Historically, wa-
ter quality planners have given primary attention to
point sources and have developed few procedures for
assessing impacts from diffused sources.  Environmental
Impact Statement (EIS) requirements have imposed a man-
date on those responsible for preparation and review of
Environmental Impact Statements to insure adequate cov-
erage for all significant environmental impacts.  EPA,
Region IV, which covers eight southeastern states, has
calculated the magnitude of non-point constituents for
a number of federally prepared Environmental Impact
Statements.  These operational needs forced the devel-
opment of assessment procedures for producing quick
answers and have resulted in a fair degree of success.
Analytical requirements have varied enough to require
the development of several discrete calculating pro-
cesses.  In response to the requirements of the Section
208 areawide planning program, these procedures have
been collected and formalized into a few processes cap-
able of serving as planning models.  No attempt has
been made to duplicate existing stream, reservoir, es-
tuary, and storm water management models.
 *   NPDES

**   Federal Water Pollution Control Act
GENERAL CHARACTERISTICS

The individual planning models discussed here are:   (A)
"Urban, Commercial, and Industrial Runoff,"   (B)  "Ero-
sion, Sedimentation, and Rural Runoff" and  (C)   "Total
Loadings from Point and Non-Point Sources to Waterbod-
ies."  Models (A) and (B) produce independent reports
and can also provide loading factors for model  (C).
Model (C) provides a composite report for multiple
point and non-point sources for a single parameter.
Urban areas can be split into twenty sub-areas  and up
to forty parameters can be calculated for single storm
events within a metropolitan area using model (A).
Large or small areas, such as entire river  basins or
single-acre plots, can be analyzed, using model (B) ,
for soil loss, sediment delivered, and the  usual rural
parameters (Nitrogen, Phosphorous, Potassium, BOD, TOC,
and Acid drainage).  Best combinations of treatment
requirements and land use alternatives can  be deter-
mined by multiple runs with model  (C).  The hydrologic,
transport, and calculating mechanisms differ  in all
processes, and each process is designed to  best ful-
fill its intended purpose.

The "Urban, Commercial, and Industrial Runoff"  Model
(A)

This planning model is designed for a single  rainfall
event and will calculate the "first flush"  slug load
in pounds or coliform counts along with runoff  concen-
trations for up  to forty parameters at a time.   These
runoff  slugs from up to twenty sub-areas are  routed  to
a water body mixing zone, and the  resulting stream or
stillwater constituent concentrations are calculated
after each slug  arrives.  Runoff water quantity is cal-
culated in a manner similar to the "Rational Method"
and all parameter calculations use  deterministic meth-
ods.

Key parameter inputs to  the model  are:  parameter name,
units,  waterbody background concentration,  curb mile
and per acre loading factors.  Waterbody  input  infor-
mation includes acre-feet for stillwater or ft3/sec flow
and velocity for moving  streams.   Other  inputs  include
routing distances, runoff velocities, area, rainfall
intensity, area  type runoff factors,  and  either popu-
lation  for suburban areas or percent  imperviousness
for  industrial and  commercial areas.  Multiple  rain-
fall event intensities are  allowed to  give  multiple
reports for  all  areas  in a  single  run.

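     The exact formulation is given in the separate model
documentation; the fragment below is only a hedged
illustration of the general approach just described,
pairing a Rational-Method-style runoff estimate with a
curb-mile first-flush load.  Every coefficient and
loading factor shown is hypothetical, and the final
mixing step is simplified.

   # Hedged illustration of the kind of calculation Model (A) performs:
   # Rational-Method-style runoff plus a "first flush" slug load from a
   # curb-mile loading factor.  All numerical values are hypothetical.

   def runoff_cfs(c, intensity_in_hr, area_acres):
       """Rational-Method-style peak runoff:  Q = C i A  (cfs)."""
       return c * intensity_in_hr * area_acres

   area_acres     = 640.0   # one square mile of suburban area (hypothetical)
   runoff_factor  = 0.45    # area-type runoff factor (hypothetical)
   intensity      = 1.2     # rainfall intensity, in/hr (hypothetical)
   curb_miles     = 55.0    # could instead be estimated from population density
   lb_per_curb_mi = 3.5     # BOD loading factor, lb per curb mile (hypothetical)

   q_cfs    = runoff_cfs(runoff_factor, intensity, area_acres)   # ~346 cfs
   slug_lbs = curb_miles * lb_per_curb_mi                        # ~193 lb BOD

   # concentration if the slug is mixed into the runoff from the first
   # fifteen minutes of rainfall (the interval the model works with)
   runoff_liters = q_cfs * 15.0 * 60.0 * 28.3168
   slug_mg       = slug_lbs * 453592.0
   print(f"Q = {q_cfs:.0f} cfs   slug = {slug_lbs:.0f} lb   ~{slug_mg/runoff_liters:.1f} mg/L")
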
Model  features are:

      (1)  Utilization  of  the  first fifteen  minutes  of
          rainfall  only.

      (2)  Calculated curb miles  and percent impervious-
          ness,  utilizing  regression equations  and  pop-
          ulation  density,  for  suburban areas only.

      (3)  Summarization  of  curb  mile loaded suburban
          areas  with areal  loaded  commercial  and in-
          dustrial  areas.
                                                       74

-------
    (4)   Still water or moving stream mixing depend-
         ing on receiving water input data.

Productive uses are:

    (1)   Gross assessment of current and projected
         area non-point pollution potential.

    (2)   Reasonably accurate calculation of urban
         type non-point loads after load factor ad-
         justment consistent with sampling and local
         characterization of pollutants.

    (3)   Supplies refined, non-point loadings to
         Model (C) for consolidating point and non-
         point single parameter loads to waterbodies.

The "Erosion, Sedimentation, and Rural Runoff" Model
(B)

This planning model is primarily of the periodic type
and can be run for a single month or any group of con-
secutive months not exceeding one year.  It is essen-
tially nonhydrologic since the calculating mechanism
is the "Universal Soil Loss Equation" developed by
USDA.  It will handle a single storm for erosion and
sedimentation only, but special input requirements ap-
ply in this case.  The model calculates tons of soil
loss, sediment delivery to waterbodies and sediment
downstream migration.  Forest litter, nitrogen, phos-
phorous, potassium, BOD, TOC, and acid drainage are
calculated and reported in pounds.  Excluding acid
drainage, the remaining common parameters are calcu-
lated from sediment, litter (leaves, twigs, etc.),
and from animal and fowl droppings.

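     As a hedged sketch of the central calculation, the
Universal Soil Loss Equation multiplies a rainfall
erosivity factor by soil, slope, cover, and practice
factors; the factor values and the sediment delivery
ratio below are hypothetical illustrations, not defaults
carried by the model.

   # Hedged sketch of the Universal Soil Loss Equation used as the
   # calculating mechanism in Model (B):  A = R * K * LS * C * P gives
   # soil loss in tons per acre for the period represented by R.
   # All factor values here are hypothetical.

   def usle_tons_per_acre(R, K, LS, C, P):
       return R * K * LS * C * P

   acres          = 120.0   # sub-area size
   R, K, LS, C, P = 250.0, 0.28, 1.4, 0.15, 0.6
   delivery_ratio = 0.35    # fraction of eroded soil delivered to the waterbody

   soil_loss = usle_tons_per_acre(R, K, LS, C, P) * acres
   sediment  = soil_loss * delivery_ratio
   print(f"soil loss = {soil_loss:.0f} tons   delivered sediment = {sediment:.0f} tons")
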
This is a probabilistic process using a random number
generator to obtain a better representation of highly
variable conditions.  The process can easily be made
into a pure deterministic process by using mean values
and zero deviations in the input data.  Some users
have operated in  this manner successfully.  The inter-
nal design of this process is quite complex and chan-
ges other than report line formats are  likely to lead
to disaster and are not recommended.

No provisions exist for handling pesticides; however,
model (A) will calculate  these parameters when oper-
ated with areal loading factors for pesticides.  Model
 (B) is designed to give the user almost total control
of results through localized input data and its flex-
 ibility allows a  minimum  of input with  a default to
national distributions and loading factors  contained
 internally or in  the master deck preceding  local data.

Key inputs to this  system include:  report  headings,
 time period, multiplier starting value  for  the random
number generator, standard state FIPS numbers, and
number of units.  Each sub-area requires:  acres,
blowup factor  (plot size), one to five  soil types,
percent slope and slope length range, one  to five crop
management practices, one to five erosion  control
practices, load factors for sediment and litter, ani-
mal and fowl counts, and  loading factors for acid
drainage.

Model features are:

    (1)  Multiple sub-areas within each state for de-
         tailed definition.

    (2)  Multiple number  of states for handling entire
         river basins.

    (3)  Gross assessments for large areas with mini-
         mum local input  data needed.
     (4)  Very detailed assessments for areas of in-
          terest by using many small units and compre-
          hensive localized data.

Productive uses include:

     (1)  Projections of effects of land use changes
          and erosion control practices with minor
          changes to input data.

     (2)  Refining load factors for model (C) for use
          in consolidating point and non-point single
          parameter loads to waterbodies.

The "Total Loadings from Point and Non-Point Sources
to Waterbodies" Model (C)

This planning model uses some concepts from R. A. Vol-
lenweider's "Export Process" and has no hydrologic
characteristics.  The time period can be from one day
to any number of days, such as a one-hundred-and-fifty
day growing season.  All loading factors are on an an-
nual basis and are modified by the factor (period in
days/365).  The model is designed to handle a single
parameter for three point sources and five non-point
sources for each of an unlimited number of sub-areas.
The composite report gives three columns of loading
information composed of minimum expected, most prob-
able, and maximum expected.  The minimum and maximum
quantities are calculated from loading factor limits.
Probabilistic methods, utilizing a random number gene-
rator,  are used to calculate the most probable quanti-
ty from each source.

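     In outline, the three-column report can be thought
of as in the sketch below:  annual loading factors are
scaled by (period in days/365), the minimum and maximum
come from the loading factor limits, and the most
probable value is drawn by the random number generator.
The limits shown and the use of a uniform draw between
them are assumptions made only for illustration.

   # Hedged outline of the Model (C) composite calculation for a single
   # parameter.  The loading factor limits and the uniform random draw
   # are illustrative assumptions.

   import random

   period_days = 150                 # e.g., a growing season
   scale = period_days / 365.0
   random.seed(12345)                # analogue of the multiplier starting value

   # (source, annual loading factor limits in lb/yr): point and non-point
   sources = [
       ("sewage treatment plant", ( 8000.0, 12000.0)),
       ("industrial outfall",     ( 1500.0,  4000.0)),
       ("minor point source",     (  200.0,   600.0)),
       ("urban runoff",           ( 3000.0,  9000.0)),
       ("cropland runoff",        ( 5000.0, 20000.0)),
   ]

   total_min = total_prob = total_max = 0.0
   for name, (low, high) in sources:
       total_min  += low * scale
       total_prob += random.uniform(low, high) * scale
       total_max  += high * scale

   print(f"minimum expected {total_min:9.0f} lb")
   print(f"most probable    {total_prob:9.0f} lb")
   print(f"maximum expected {total_max:9.0f} lb")
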
This model is a relatively simple process and the com-
puter program can be modified by users if desired.
Any parameter that is quantifiable on a weight basis
can be  handled.  Simple attenuation processes are
built into the system.

Key inputs to this system include:  report headings,
time period, modification to multiplier starting value
for the random number generator, population, acres,
treatment level percentages for point sources, land
use distribution percentages for non-point sources,
loading factor limits (national and localized) and at-
tenuation factors.

Model features are:

      (1)  Unlimited number of  sub-areas.

      (2)  A  set  of national loading  factor limits  for
          default use in  the absence  of  local  loading
          factor limits.

      (3)  Capabilities  for producing  a  composite re-
          port  from multiple point and  non-point sour-
          ces  for a  single parameter.

Productive uses  include:

      (1)  Determining load allocations  for  issuing
          NPDES  permits by making multiple  runs  using
          modified point  source treatment levels and
          setting  each  sewage  treatment plant up in a
          separate  sub-area.

      (2)  Making long  range projections by changing
          population,  treatment level percentages and
          land use distribution percentages  for  cer-
          tain sub-areas.

      (3)  Gross assessments  to determine if  a more de-
          tailed study  is needed requiring use of mod-
          els  (A)  and  (B) .
                                                       75

-------
      (4)  Producing progress reports.

SUMMARY

The main objective in assessing non-point runoff is to
estimate constituent loads for some representative
time period for a defined drainage area.  No absolute-
ly accurate answers appear economically feasible now
or in the near future, and getting a handle on the
many facets of the problem is very difficult.  These
planning models are generalized tools designed for in-
itial gross assessments with refinement capabilities
to provide ball park numbers for decision making.  Nu-
merical values such as these, systematically arrived
at, provide a basis for estimating the relative ef-
fects of changes to physical features.  Control of run-
off constituents through process and/or structural
changes would settle, filter, or otherwise reduce con-
centrations rather than eliminate runoff.  This is of-
ten in direct conflict with some forms of flood con-
trol where speeding up the runoff from land is the
paramount objective.

The accuracy of output from these planning models is
directly related to the quality of input data supplied
by the user.  These models are tools for planners and
are being made available to anyone willing to learn
how to use them.

All computer programming is in the universally used
FORTRAN-4 language.  Source programs and test case run
decks for all three planning models are in library
files on the EPA-OSI computer system and may be repro-
duced on 80-column cards if a request is made through
normal EPA channels.  Run modules for each planning
model can be accessed by EPA-OSI users through Job
Control Language procedures.  Concise documentation
for problem definition and data coding is contained
in the exhibit handout.

All libraries are located on EPA-OSI disk RIV004.
Source programs and exhibit run decks (approximately
1,450 cards) are in a library named CNMD01.HAT.NPPP.OG
and the exhibit run decks only (256 cards) are in a
library named CNMD01.HAT.NPRUN.  Compiled modules are
in a library named CNMD01.HAT.ASSESS and program names
are EPAURA, EPARRB, and EPATLC for planning models A,
B, and C.
                                                        76

-------
                               DESIGN AND APPLICATION OF THE TEXAS EPISODIC MODEL

                                               John H. Christiansen
                                             Data Processing  Section
                                             Texas  Air Control Board
                                                  Austin,  Texas
    The Texas Episodic Model (TEM) is a new short-term
air pollution dispersion computer model being used ex-
tensively by the Texas Air Control Board.

    Design innovations make the TEM several times
faster than models of comparable sophistication and
accuracy.  The TEM uses steady-state bivariate Gaussian
plume point source logic, but solves the dispersion
equation by interpolating in a table of precalculated
coefficients, rather than time-consuming explicit cal-
culations of the exponentials involved.  Area sources
are handled by a very fast algorithm based on work of
S.R. Hanna and F.A. Gifford.

    The model calculates plume rise via one of six
equations  (all due to G.A. Briggs), choosing the appro-
priate equation on the basis of 1) downwind distance,
2) atmospheric stability, and 3) whether the rise of
the plume is dominated by thermal buoyancy or momentum.
It also takes into account pollutant decay, variation
of wind speed with height, and atmospheric inversion
layers.  Plumes may be trapped below an inversion or
may penetrate it and escape.  The degree of penetrabil-
ity is variable, and is specified by the user, allowing
simulation of "weak" or "strong" inversions.

    Concentrations are calculated for one or two pollu-
tants at up to 2500 locations in a uniform grid of
arbitrary dimensions and spacing, for a wide range of
sample times from 10 minutes to 24 hours.

    The versatility provided by a wide variety of input
options and graphic output options allows the TEM to
serve the needs of several different user groups within
the Texas Air Control Board.  Four examples of current
TEM applications are discussed.

                     Introduction
    The Texas Episodic Model, or TEM, is a FORTRAN com-
puter program which may be used to predict air pollu-
tion concentrations for short time periods.  An emis-
sions inventory and a set of meteorological conditions
are used to create scenarios simulating the dispersion
of airborne pollutants in the lower atmosphere.  The
TEM was developed to fulfill the requirements of the
Texas Air Control Board (TACB) for a model of high
enough efficiency, sophistication, and versatility to
make it a worthwhile analytical tool for a wide range
of applications.

    This paper will first describe several design fea-
tures of the TEM.  The input data required and the
types of output available will also be discussed.  The
remainder of the paper will deal with some of the TEM's
current areas of application within the TACB.

                     Model Design
Point Source Algorithm

    The TEM employs the steady-state Gaussian plume
hypothesis for calculation of concentrations due to
point sources.   This hypothesis makes use of the
following assumptions:
 1. The emission rate of the source is constant, and no
    dispersion occurs in the downwind direction.  The
    pollutant is simply transported downwind at the
    appropriate wind speed.  The TEM uses the wind
    speed at the physical source height.
 2. In both the crosswind  and vertical  directions, the
    pollutant is dispersed by turbulent eddy  diffusion.
    The concentration patterns in these directions take
    the form of Gaussian distributions about  the center
    line of the plume.  The  standard deviations of the
    two Gaussian distributions increase with  downwind
    distance or time elapsed since release.   In the TEM,
    the standard deviations  are power law functions of
    downwind distance.

 3. The plume is reflected at the earth's surface. This
    means that none of the pollutant is lost  to reac-
    tion or deposition at  the surface.

Pollutants are assumed to  be essentially non-reactive.
For concentrations at ground level, the Gaussian plume
equation may be written:

    \chi = \frac{10^6\,Q}{\pi\,\sigma_y\,\sigma_z\,U}\,\exp\!\left(\frac{-y^2}{2\sigma_y^2}\right)\exp\!\left(\frac{-H^2}{2\sigma_z^2}\right)          (1)

where χ is the concentration, in micrograms per cubic
meter;
Q is the source emission rate, in grams per second;
U is the wind speed at physical source height, in
meters per second;
H is the effective source height, equal to the physical
source height plus the plume rise, in meters;
x,y,z are the downwind, crosswind, and vertical direc-
tions respectively, in meters.

    The standard deviations σy and σz vary with down-
wind distance x and atmospheric stability class S
according to the following formulae:

    \sigma_z = a(S)\,x^{b(S)}                              (2)

    \sigma_y = c(S)\,x^{d(S)}                              (3)

Values of the stability-dependent coefficients a, b, c,
and d are derived from Turner1 and Busse and Zimmerman2.

    Vertical Wind Profile.  The mean wind speed in the
lower atmosphere typically increases with height in a
way that can be approximated by a power law.  The quan-
tity U in equation (1) represents the wind speed at the
physical height of the source.  The TEM derives this
wind speed for each source from the input "ground level"
wind speed by a formula featuring exponential increase
with height, with the exponent dependent on atmospheric
stability.

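     A hedged numerical illustration of equations (1)-(3)
and of the power-law wind profile follows.  The disper-
sion coefficients and the profile exponent are typical
values assumed for a single stability class; the TEM's
own coefficient tables (derived from Turner and from
Busse and Zimmerman) are not reproduced here, and the
plume rise calculation is not included (an effective
source height is simply assumed).

   # Illustrative evaluation of the ground-level Gaussian plume equation
   # with power-law sigmas and a power-law wind profile.  The coefficients
   # and exponent below are assumed typical values, not the TEM's tables.

   import math

   A_COEF, B_COEF = 0.113, 0.911    # assumed sigma_z = a * x**b, x in meters
   C_COEF, D_COEF = 0.195, 0.903    # assumed sigma_y = c * x**d
   WIND_EXPONENT  = 0.10            # assumed profile exponent for this stability

   def wind_at_height(u_ref, z, z_ref=10.0, p=WIND_EXPONENT):
       """Wind speed increasing with height as a power law."""
       return u_ref * (z / z_ref) ** p

   def ground_level_chi(q_gs, u_stack, x_m, y_m, h_eff):
       """Ground-level concentration (ug/m3) per equation (1)."""
       sig_z = A_COEF * x_m ** B_COEF
       sig_y = C_COEF * x_m ** D_COEF
       return (1.0e6 * q_gs / (math.pi * sig_y * sig_z * u_stack)
               * math.exp(-y_m ** 2 / (2.0 * sig_y ** 2))
               * math.exp(-h_eff ** 2 / (2.0 * sig_z ** 2)))

   u_stack = wind_at_height(u_ref=4.0, z=60.0)           # m/s at source height
   chi = ground_level_chi(q_gs=100.0, u_stack=u_stack,   # 100 g/s source
                          x_m=3000.0, y_m=0.0, h_eff=120.0)
   print(f"U(60 m) = {u_stack:.2f} m/s   chi = {chi:.1f} ug/m3")
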
    Plume Rise.  The effective source height, H, in
equation (1) is the sum of the physical source height,
h, and the plume rise, Ah.  Calculation of the plume
rise is handled quite rigorously by the TEM.  The plume
may emerge into "stable" (stability classes E and F) or
"unstable" (classes A through D) air.  The dimensions,
exit velocity, and exit temperature of the source and
the ambient temperature will indicate whether the up-
ward motion of the plume is dominated by momentum or
thermal buoyancy.  The TEM employs a separate set of
plume rise equations for each of the four possible
situations: stable/buoyant, stable/momentum, unstable/
buoyant, and unstable/momentum.  The atmospheric stabil-
ity is an input parameter for each TEM weather scenario,
so for each source the program has only to decide
whether the plume rise is momentum- or buoyancy-domin-
ated.  Peak plume rise is calculated using both the
momentum and buoyancy plume rise equations for the
                                                        77

-------
atmospheric stability in question.    If the momentum
equation yields a higher plume rise than the buoyancy
equation yields a higher plume rise than the buoyancy
equation, the plume is assumed to be momentum-dominated,
and the momentum plume rise is used.  If the buoyancy
plume rise is higher, it is used instead.

    An additional equation is used to calculate plume
rise as a function of the downwind distance out to the
distance at which the plume reaches maximum height.
The six plume rise equations used by the TEM are all
due to Briggs3,5.

    TEM Solution of the Dispersion Equation.  The TEM
is able to solve equation (1) very quickly for each
source-receptor combination due to a numerical trick
first introduced in the Texas Climatological Model4.
Let Ky and Kz be defined by

    K_y = \frac{1000}{\sqrt{\pi}\,\sigma_y}\,\exp\!\left(\frac{-y^2}{2\sigma_y^2}\right)               (4)

and

    K_z = \frac{1000}{\sqrt{\pi}\,\sigma_z}\,\exp\!\left(\frac{-H^2}{2\sigma_z^2}\right)               (5)

Then, from equation (1),

    \chi = \frac{Q\,K_y\,K_z}{U}                           (6)
Note that Ky and Kz are independent of emission rate Q
and wind speed U.

    In a separate program, Ky values were generated for
twenty downwind distances, x, eight angular distances
from the plume centerline, θ (y = x tan θ), and seven
stability classes, S.  The Ky values for each of these
1120 combinations (20x8x7) are stored as data tables
in the TEM.  Similarly, Kz values were generated for
the same twenty downwind distances, fourteen effective
source heights, H, and seven stability classes, giving
1960 values.  The downwind distance values chosen were
x = 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 17, 20, 23, 27, 31,
36, 41, 47, 53 and 60 km.  The effective source heights
were H = 10, 20, 30, 50, 70, 100, 150, 200, 300, 450,
700, 1000, 1400 and 2000 meters.
The seven stability categories are commonly referred to,
in increasing order of stability, as classes A, B, C, D
(day), D (night), E, and F.  Finally, the angular dis-
tances from the centerline were θ = 0, δ, 2δ, ..., 7δ,
where the increment δ is a function of stability class,
ranging from 5° for A stability to just 1° for F stability.
This means that the Ky data table is good to an angle
of 35° from the plume centerline in A stability, but
only 7° in F stability.  The δ values were chosen so
that the concentration at an angle of 7δ from the
centerline would be 1.0 percent or less of the center-
line concentration at all downwind distances.  As
stability decreases, the effectiveness of turbulent
diffusion increases, so that the plume spreads out more
and a larger δ is required.

    For downwind distances greater than 2.0 kilometers,
the TEM calculates point source concentrations from
equation (6) instead of equation (1).  Ky and Kz are
found by linear interpolation in the Ky and Kz tables.
This procedure is much faster than the explicit calcu-
lation of the exponentials in equation (1), and is
chiefly responsible for the high speed of the model.
For downwind distances of less than 2.0 kilometers, the
TEM uses equation (1) instead of equation (6).  This is
because the accuracy of the linear interpolation is
inadequate at such short distances.

    When interpolating for a Kz value, it is assumed
that the plume has completed its rise, so that the
effective source height is constant in the distance
range under consideration.  The rise of the plume is
considered to be complete before the plume gets 2.0
kilometers downwind of its source, a valid assumption
 in nearly every case.

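     The sketch below illustrates the precalculated-table
idea for a single stability class:  Kz values are tabu-
lated over the distance and height grids quoted above and
then linearly interpolated, so the exponential need not
be evaluated for each source-receptor pair.  The sigma
coefficients, the interpolation details, and the omission
of the Ky table are simplifying assumptions, not the
TEM's actual implementation.

   # Illustration of the table look-up behind equations (4)-(6), with the
   # optional decay term of equation (8).  Only a Kz table for one
   # stability class is built; Ky is evaluated directly.  Sigma
   # coefficients are assumed values.

   import bisect, math

   A_COEF, B_COEF = 0.113, 0.911        # assumed sigma_z = a * x**b (m)
   C_COEF, D_COEF = 0.195, 0.903        # assumed sigma_y = c * x**d (m)

   X_KM   = [2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 17, 20, 23, 27, 31, 36, 41, 47, 53, 60]
   H_GRID = [10, 20, 30, 50, 70, 100, 150, 200, 300, 450, 700, 1000, 1400, 2000]

   def kz_exact(x_m, h):
       sig_z = A_COEF * x_m ** B_COEF
       return 1000.0 / (math.sqrt(math.pi) * sig_z) * math.exp(-h * h / (2.0 * sig_z ** 2))

   # built once, in a separate step (as the TEM's tables were)
   KZ_TABLE = [[kz_exact(x * 1000.0, h) for h in H_GRID] for x in X_KM]

   def interp(grid, values, v):
       """Linear interpolation of tabulated values along one axis."""
       j = min(max(bisect.bisect_left(grid, v), 1), len(grid) - 1)
       frac = (v - grid[j - 1]) / (grid[j] - grid[j - 1])
       return values[j - 1] + frac * (values[j] - values[j - 1])

   def kz_from_table(x_km, h):
       column = [interp(H_GRID, row, h) for row in KZ_TABLE]   # first in H
       return interp(X_KM, column, x_km)                       # then in x

   def chi_fast(q_gs, u, x_km, y_m, h_eff, half_life_s=None):
       """Equation (6), with the decay factor of equation (8) if requested."""
       x_m   = x_km * 1000.0
       sig_y = C_COEF * x_m ** D_COEF
       ky    = 1000.0 / (math.sqrt(math.pi) * sig_y) * math.exp(-y_m ** 2 / (2.0 * sig_y ** 2))
       chi   = q_gs * ky * kz_from_table(x_km, h_eff) / u
       if half_life_s:
           chi *= math.exp(-0.693 * x_m / (u * half_life_s))
       return chi

   ky_direct  = 1000.0 / (math.sqrt(math.pi) * (C_COEF * 3000.0 ** D_COEF))
   chi_direct = 100.0 * ky_direct * kz_exact(3000.0, 120.0) / 4.8
   print(f"table {chi_fast(100.0, 4.8, 3.0, 0.0, 120.0):.1f}  vs  direct {chi_direct:.1f}  ug/m3")
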
     Mixing Height.   The "mixing layer" of relatively
 turbulent air near  the ground is very frequently
 bounded by a layer  of stable air aloft.  The distance
 from the ground to  the bottom of the stable layer is
 the "mixing height".  The effect of the stable layer is
virtually to prevent vertical dispersion above the
 mixing height.   Pollutants emitted into the mixing
 layer will be trapped there, eventually becoming
 totally mixed in the vertical direction.  On the other
 hand, pollutants emitted directly into the stable layer
 will remain there,  and not disperse downward to any
 extent.   The TEM can simulate either of these possibil-
 ities.   If the  physical source height exceeds the mix-
 ing height, the plume will obviously emerge into the
 stable layer, and the  source is neglected.  If the
 maximum effective source height is less than the mixing
 height,  the plume is trapped in the mixing layer, and
 the expression  for  vertical dispersion (Kz) will have
to be modified to account for it.

     The TEM treats  restricted vertical mixing as sug-
 gested by Turner1.   Uniform vertical mixing impends  at
some downwind distance x_m (a function of σz), and is
considered complete at 2x_m.  For x ≥ 2x_m, equation (1)
may be written

    \chi = \frac{10^6\,Q}{\sqrt{2\pi}\,\sigma_y\,L\,U}\,\exp\!\left(\frac{-y^2}{2\sigma_y^2}\right)               (7)

with mixing height = L, and the Gaussian distribution in
z replaced by a uniform distribution in z, 0 < z ≤ LI.
For the weakest possible inversion, I = 1.  For a strong
inversion, I > 2.  A recent paper by Briggs5 addresses
the inversion penetration problem in considerable
detail.  If LI ≥ H > L, then H is set equal to L, and the
plume does not escape.

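     One reading of these rules, expressed as a decision
sketch, is given below; the actual TEM bookkeeping (in
particular the Kz adjustment for trapped plumes) is only
indicated, and the treatment of the penetrability factor
I is an assumption based on the text above.

   # Decision sketch of the mixing-height handling described above.
   # L is the mixing height and I (>= 1) the user-specified penetrability.

   def classify_plume(h_physical, h_effective, L, I):
       if h_physical > L:
           return "emitted into the stable layer: source neglected"
       if h_effective <= L:
           return "trapped: uniform vertical mixing beyond 2*x_m (eq. 7)"
       if h_effective <= L * I:
           return "does not penetrate: effective height reset to L"
       return "penetrates the inversion and escapes"

   for h_phys, h_eff in [(40.0, 180.0), (40.0, 500.0), (40.0, 650.0), (450.0, 700.0)]:
       print(f"h = {h_phys:4.0f}  H = {h_eff:4.0f}:  "
             f"{classify_plume(h_phys, h_eff, L=400.0, I=1.5)}")
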
     Pollutant Decay.  Removal  of  pollutants from a
 plume by various processes such as adsorption and chem-
 ical reaction may be  simulated (albeit  simplistically)
 by assigning a decay  half-life to  each  pollutant. The
TEM adds a decay term to the dispersion equation (6):

    \chi = \frac{Q\,K_y\,K_z}{U}\,\exp\!\left(\frac{-0.693\,x}{U\,T_{1/2}}\right)                     (8)

where T_{1/2} is the half-life, in seconds.  Since half-
lives  of many pollutants  may be dependent  on meteoro-
logical  conditions,  separate half-life values are input
to  the model for each pollutant in each weather
scenario.

Area Source  Algorithm

     The TEM's area source logic is based on an algo-
rithm of Gifford and Hanna6.  It uses the standard
formalism of a grid of square area sources of the same
size, but varying emission rates.  The concentration
due to area sources in a given square is due to the
emissions in that square and in the N squares upwind.
                                                        78

-------
The concentration is given by
x=

where Δx is the area source grid spacing in meters,
Q_A is the area emission rate of the square containing
the receptor, in gm/km2/sec,
Q_i are the area emission rates of the N upwind sources,
U is the surface wind speed, in meters/sec,
and a(S) and b(S) are as defined in equation (2).
The stability class index S is decreased by one (index
for stability class A=1, B=2,...,F=7) to simulate urban
surface roughness.  In the TEM, the value of N is four,
so each area source square affects itself and the four
squares downwind of it.
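
     The sketch below follows the standard Gifford-Hanna
narrow-plume integration with σz = a(S) x**b(S), which is
consistent with the quantities defined above (Δx, Q_A,
the N upwind rates Q_i, U, a(S), b(S)); whether the TEM
used exactly this form, or carried additional constants,
is not confirmed here, and the emission rates and
coefficients shown are hypothetical.

   # Hedged sketch of a narrow-plume area-source calculation in the
   # spirit of Gifford and Hanna.  Not confirmed to be the TEM's exact
   # equation; all numerical inputs are hypothetical.

   import math

   def area_source_chi(q_receptor, q_upwind, u, dx, a, b):
       """Ground-level concentration; ug/m3 when Q is in g/km2/s, x in m."""
       coeff = math.sqrt(2.0 / math.pi) / (a * u * (1.0 - b))
       # the receptor's own square, integrated from 0 to dx/2 upwind
       total = q_receptor * (dx / 2.0) ** (1.0 - b)
       # the N squares upwind, square i spanning (2i-1)dx/2 to (2i+1)dx/2
       for i, q_i in enumerate(q_upwind, start=1):
           x_far, x_near = (2 * i + 1) * dx / 2.0, (2 * i - 1) * dx / 2.0
           total += q_i * (x_far ** (1.0 - b) - x_near ** (1.0 - b))
       return coeff * total

   chi = area_source_chi(q_receptor=10.0, q_upwind=[8.0, 6.0, 4.0, 2.0],
                         u=4.0, dx=2000.0, a=0.25, b=0.80)
   print(f"area-source contribution ~ {chi:.0f} ug/m3")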

                TEM Inputs and Outputs

Input Structure
     Input to the TEM can be divided into four sections,
as listed below:
  1.  Control Parameters (4 cards).  Control parameters
     remain constant throughout the run regardless of
     changes  in weather  in different scenarios.  They
     specify  such  items  as input  and output options  and
     dimensions and spacing of the grid of receptors at
     which concentrations are  calculated.
  2.  Scenario Parameters (1 to 8 cards).  From one to
     eight weather scenarios may  be created  in each  run.
     Each scenario uses  the  same  sources  and  receptors,
     but different weather conditions.  Each  card con-
     tains  the necessary weather  parameters  for  one
     scenario.
  3.  Area Sources (0 to  200  cards).   Each area  source
     card contains the location,  dimensions  and
     emission rates of one  area source.
  4.  Point Sources (0 to 300 cards).  Each point source
     card contains the location,  height,  diameter,  exit
     temperature, exit velocity,  emission rates  and
     identification for one  point source.

     The control parameters give the user considerable
flexibility in deciding what the model should calculate
and in what form the results should be presented.  A
few of the more important ones will be mentioned here.

     The TEM will calculate pollutant concentrations
for one or two pollutants at up to 2500 locations
("receptors") in a rectangular grid of arbitrary dimen-
sions and arbitrary but uniform spacing between rows
and columns.  The "receptor grid" is completely speci-
fied by five parameters: the coordinates of the south-
west corner of the grid, the number of rows and columns
(maximum of 50 each), and the spacing between rows and
columns.  An option allows the user to let the TEM cal-
culate the grid parameters itself, choosing them so as
to ensure that the point of maximum concentration for
the entire source distribution, as well as the individ-
ual maximum for each source, will fall within the
boundaries of a receptor grid of maximum allowable
resolution.

     The values of σy and σz in the dispersion equation
were derived for a sample time of 10 minutes.  The
concentrations calculated by the TEM are thus 10-min-
ute values, but they may be converted to 30-minute,
1-hour or 3-hour readings by a statistical formula
dependent on atmospheric stability7,8.  In addition,
a 24-hour time period can be simulated using eight
scenarios representing 3 hours of weather each.

      The user has  a choice of formats and units  for
 source  and weather data.
     The concentrations of each pollutant at each
point in the receptor grid can be displayed in any of
the following forms, or in virtually any combination
of them:
 1. List of the coordinates and concentrations at each
    receptor.  This is the standard form for most air
    quality models.

 2. Map of the receptor grid.  The concentrations
    across the receptor grid are displayed in two di-
    mensions with coordinates along the edges of the
    page.  Spatial concentration distributions are
    immediately apparent with this option.

 3. Control list.  As an aid in formulating control
    strategies, a list is printed of the identifica-
    tions and contributions of the five point sources
    contributing the most to the total concentration
    at each receptor, for a maximum of 625 receptors
    (a 25x25 grid).
 4. Finally, the coordinates and concentrations at
    each receptor can be output on punched cards for
    input to a contour plotting routine.

               Notes on TEM Performance

    To assess the relative speed of the TEM, several
timing tests were conducted against the other two
short-term models available to the TACB, with each
model operating on identical sources and receptors.
The two models, both of which are substantially less
sophisticated than the TEM, are the Argonne Steady-
State Model (ASSM), and the Small Area Model MK IV,
developed some years ago by the TACB.  The TEM proved
three to five times faster than both models.  The
larger the area covered by the simulation,  the
faster the TEM will be, since it uses a faster
algorithm for downwind distances over 2.0 kilo-
meters.  The error introduced by Ky and Kz inter-
polation is typically on the order of 1.0 to 5.0
percent.
                      TEM Applications
    The TEM has found several areas of application
within the Texas Air Control Board.  Although the bulk
of all TEM run requests come from the Meteorology and
Permits Sections, each modeling study cited here orig-
inated in a different section of the TACB.

    In the discussion of each application,  the empha-
sis will be on the subject of the analysis, the TEM's
role in the analysis, and the impact of the model
results.

Permits Section

    The TACB's Permits Section employs the  TEM routine-
ly as part of its analysis of new construction permit
applications from Texas industries.  Action taken on a
permit application involving any potentially signifi-
cant air pollution sources is governed by four basic
criteria:
 1. State and Federal allowable in-stack pollutant
    concentrations.

 2. State allowable ground-level concentrations based
    on emissions from a single company or companies
    with contiguous properties.
 3. Federal allowable ground-level concentrations based
    on all emissions in the area and background con-
    centrations, if any.

 4. Ground-level concentration of one percent of the
    threshold limiting value (TLV) for any compound
    having a TLV.  These include about 500 compounds
    considered to have harmful health effects.
                                                        79

-------
      The  TEM is  used to  evaluate the  impact  of the
 proposed  new facility against the  second  and third
criteria, and may in the future be used for the fourth
 as well.

      The  pollutants  usually modeled are sulfur oxides
(SOX),  and less frequently, total suspended particulate
 matter (TSP).  Area  sources are not used, and point
 source and receptor  grid parameters are input in
 English units.   The  number of sources and the dimen-
 sions  of  the receptor grid vary greatly.  For studies
 of a single plant (criterion two),  there are  generally
 from one  to thirty sources, and a  spacing between ad-
 jacent receptors in  the  receptor grid of  100 to  500
 feet.   For studies of the impact of the plant on the
 air quality of an entire region  (criterion three),
 there  may be from 30 to  300 sources, with a  grid spac-
 ing of 500 to 2000 feet.  An example of the  latter is
 the basic inventory  of 268 sources for the Houston
Ship Channel, Texas' most heavily industrialized area.
 A typical receptor grid  for this region would consist
 of 50  rows and 25 columns spaced 2000 feet apart, giv-
 ing an area of roughly 10 by 20 miles, uniformly
 covered by 1250  receptors.

      For  each permit study, the Meteorology  Section
 provides  an ensemble of  weather scenarios representing
 reasonable worst-case conditions for the  area in ques-
 tion.   From k  to 100 scenarios (run up to eight  at
 a. time by the  TEM) are usually necessary to  complete a
 study.  The number of scenarios run is largely depen-
 dent on the complexity of the source distribution.
 Also,  if  the TEM predicts a violation of  an  air  quality
 standard,  more runs  will be made in order to assess the
 magnitude  of the problem, and still more  runs may be
 made using source inventories altered to  reflect alter-
 nate approaches  and  control strategies for preventing
 the violation.   If a. significant violation of state or
 Federal standards is predicted, the permit application
 will be denied unless the necessary abatement proce-
 dures  are  instituted.  If a minor  violation  is pre-
 dicted, the applicant will be required to take precau-
 tions  against  its occurrence.  The granting  or denial
 of a permit on the basis  of criteria two  or  three is
 thus very  strongly dependent on the results  of TEM
 predictions.

 Air Quality Evaluation Division

     The TEM is also being applied in a study under-
 taken  by the TACB's  Air  Quality Evaluation Division
 (AQE).  The study is attempting to establish culpabil-
 ity for violations of the 24-hour  ambient air quality
 standard  for total particulate by  making use of the
 TEM's  control  list output option.  The methodology is
 as follows:

  1.  A  computer program searches the AQE master file of
     measured pollutant concentrations for receptors
     reporting  a  violation of the 24-hour  standard.  In
     general, when a  region shows a violation, roughly
     five to eight receptors exceed the standard.  The
     program then retrieves the weather data  for  the
     same  day at  a reliable weather station in the
     region.  The weather  data is in the form of  eight
     3-hour readings.

  2.  A 24-hour simulation is run with the TEM, using
     eight  3-hour scenarios drawn from the weather
     data and the best  available inventory of point and
     area  sources of  TSP for the region in which the
     violation  occurred.   The control list option is
     used,  giving the  five sources  contributing the
     most to  the  total  concentration at each  of up to
     625 locations.   Grid  spacing is typically 0.5 to
     2.0 kilometers.   The  source inventories  for the
     Dallas  and Houston areas contain roughly 100 area
and 250 point  sources  each.
  3. The predicted and  observed  concentrations are com-
pared.  If the predictions are within ± 30 percent
    of the measurements, the  control  list is consulted
    to see if  any sources  stand out as major polluters.
    If any such  sources are found, consideration will be
    given to rewriting the local emissions regulations
    to prevent future  violations.  If the TEM grossly
    underpredicts the  concentration,  a search is made
    for poorly sited receptors  (wind  flow obstructions,
    etc.) and  for unreported  sources.  Most of these
    searches have been successful.

This study is thus providing a check on the validity of
receptor data and the completeness of  the emissions
inventory as well as determining culpability for viola-
tions of air quality standards.

Meteorology and  Planning Sections

    Texas' control strategy work for  EPA's 10-year
Air Quality Maintenance Plan  (AQMP) is being undertaken
by the Meteorology and Planning  Sections  of the  TACB.
The necessary dispersion modeling of  sulfur oxides and
particulates in  designated Air Quality Maintenance
Areas is performed by  the TEM and the Texas Climatolog-
ical Model (TCM), a long-term companion to the TEM with
compatible inputs and  outputs.   The basic  goal of the
AQMP is to assure acceptable  air quality  through 1985,
despite industrial growth, population growth, and
changes expected in land use  and availability of dif-
ferent fuels.  The emissions  inventories  for each Air
Quality Maintenance Area are  extrapolated for future
years, and the TEM is  run with the control list  option
and an ensemble  of possible short-term, worst-case
weather scenarios to predict  future air quality, to
identify potential trouble spots and to aid in formu-
lating control strategies.

Laboratory Division

    The TACB's Laboratory Division is currently
involved in a project which uses an extensive field
study and TEM modeling to investigate the  relationships
between gaseous  and particulate  pollutants.   In  June
and September of 1975, over sixty high-volume particu-
late samplers in the Houston  area took readings  simul-
taneously on nine different days.  X-ray fluorescence
analysis of the  particulate matter collected yielded
concentrations of chlorine, ammonium, nitrates,  sul-
fates, total sulfur, benzene-soluble hydrocarbons, and
several metals.

    A substantial amount of sulfur in the  form of sul-
fates and sulfites appears in the particulate samples.
It is suspected that much of the sulfur was  actually
emitted in the form of sulfur dioxide.  Since S02 is
acidic, the sulfur could be tied up in the  form  of
sulfites on contact with any  alkaline particulate.
TEM modeling is being used to help test this hypothesis.
The TEM is making 24-hour predictions of  S02  and TSP
concentrations across the Houston area for each  of the
nine days of the field study.  If the hypothesis  is
valid, one would expect to find  receptors where  pre-
dicted TSP is less than measured TSP, predicted  S02 is
high, and the TSP collected contains large  amounts of
sulfur.  The extent of conversion of S02 to particu-
lates is probably dependent on travel time  from  source
to receptor.   Travel times can be approximated,  since
the wind speed is supposedly known and the  TEM control
list can identify the major contributing  sources  at
each receptor.

    Results of this project could have great  impact in
at least two areas.  First, control of sulfur oxide
                                                        80

-------
emissions might be necessary to meet particulate
standards.  This should be reflected in the local
regulations.  Second, if the phenomenon mentioned
above exists, it should be taken into account in dis-
persion models, at least in terms of adjusted decay
rates and/or "calibration factors" if no better
method can be found.
                      References

1. D.B. Turner, Workbook of Atmospheric Dispersion
   Estimates, Public Health Service Publ. No. 999-
   AP-26, 1970.

2. A.D. Busse and J.R. Zimmerman, User's Guide for the
   Climatological Dispersion Model, U.S. Environmental
   Protection Agency, Research Triangle Park, N.C.
   (EPA-R4-73-024), 1973.

3. G.A. Briggs, Plume Rise, U.S. Atomic Energy
   Commission, Division of Technical Information,
   Oak Ridge, Tenn., 1969.

4. J.H. Christiansen and R.A. Porter, "Ambient Air
   Quality Predictions with the Fast Air Quality
   Model," Proceedings of the Conference of Ambient
   Air Quality Measurements, APCA Southwest Section,
   Austin, Texas, 1975.

5. G.A. Briggs, Plume Rise Predictions, AMS Workshop
   on Meteorology and Environment Assessment, Boston,
   Mass., 1975.

6. F.A. Gifford, Jr. and S.R. Hanna, "Urban Air
   Pollution Modelling," paper No. ME-320, Second
   International Clean Air Congress, Washington, D.C.,
   1970.

7. I.A. Singer, Journal of the APCA, 11, 1961.

8. J.C. Caraway, private communication, 1975.
                                                        81

-------
                                            AIR MODELING IN OHIO EPA
                     John C. Burr
        Environmental Assessment Section Chief
                       Ohio EPA
                    Columbus, Ohio
                     A.  Ben  Clymer
                  Consulting Engineer
                     Columbus,  Ohio
       Annual  Mean  S02  and TSP  Over Flat  Terrain

Modeling for Ambient Network Design

     The term ambient network is intended to mean that
network which is used for assessing long-term concen-
trations as, e.g., annual mean concentrations for
comparison with annual  air quality standards.  Urban
background carbon monoxide concentrations are another
example.

     In this case one wants to know the mean concen-
tration distribution of a pollutant over some area of
concern.  In other words we want to determine a three-
dimensional concentration surface.  The design problem
then becomes one of determining the number and location
(geographic) of sensors adequate to define such a
surface with a specified degree of confidence.  The
design theory of such a network has been developed by
the authors.^'^  It utilizes a parametric representa-
tion of concentration as a function of distance.   For
pollutants originating from point sources a polar
coordinate system is used, and  the concentration repre-
sentation is a gaussian function of the coordinate. , 3
The theory has been applied to three cities in Ohio.
The number and distribution of sensors, so determined,
is reasonable.
Existing Models for Design of Regulations

     Annual models currently being used by us are the
Modified Climatological Dispersion Model (MCDM),4 and
the Ohio County Annual Maximum (OCAM) model.5  Both
MCDM and OCAM are steady-state, uniform wind, gaussian
dispersion models.  MCDM is a revision of COM6 which
has been modified to generate a source contribution
table.

     We use MCDM because it has been shown, when prop-
erly applied, to produce reasonable correlation
coefficients (range of 0.75 to 0.85).  It is  being
modified to produce "coupling coefficients" which are
merely row vectors of relative source contribution at
each receptor.  Thus a simple matrix multiplication of
emission rate and "coupling coefficient" will predict
the concentration at the receptor.  In this manner we
can easily examine the effect of altering various
emission rates.

     The OCAM model was developed to permit efficient
and realistic modeling of maximum annual concentrations
in smaller metropolitan areas.  It treats both area and
point sources.  Area sources are modeled by the method
of Miller and Holzworth.7  It has been found8 for point
sources, that maximum annual concentration, normalized
for emission rate, is related to mean plume height by
a power law.  The law is deduced from hypothetical
source CDM modeling.  OCAM includes a quantitative
means of how a given source is to be modeled, i.e., as
a point source or as part of the area sources.

     Early application of OCAM for modeling sulfur
dioxide yielded a correlation coefficient of 0.78 when
based upon modeling  in  36 Ohio  Counties.5  When  extended
by a Larson Transform^  the OCAM model  predicted second-
highest 24-hour concentrations  in  these  same 36
counties with a correlation  coefficient  of 0.66.

Matrix Model of Concentration by Source  Category

     This is a model with which we  are experimenting.
The emission rates are categorized according to a
column vector, E (m x 1), by Standard Industrial Classi-
fication (SIC) codes.  A matrix of constants, D (n x m),
relates SIC classified emission rates to an observed
concentration column vector, C (n x 1):

    \{C\}_{n\times 1} = [D]_{n\times m}\,\{E\}_{m\times 1}
The n x m matrix D can, in principle, be determined
from a least squares fit of historical data.  Alterna-
tively it could be synthesized from the "coupling
coefficients" between sources (by SIC code) and recep-
tors.

     If the emissions are projected by SIC code, then
the above matrix equation gives a forecast for the
concentrations.  It is also possible to invert the
equation, in order to solve for a unique set of
emissions, {E} which will yield a given set of concen-
trations {C}.

     This is a highly condensed city pollution model.
Its simplicity and potential ability for short-
circuiting dispersion modeling commend it for consider-
ation.
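
     A hedged numerical sketch of this formulation:  D is
estimated from a historical record of emission and con-
centration vectors by least squares, the fitted matrix is
used to forecast concentrations from projected emissions,
and a pseudo-inverse provides the inverted relation.  The
dimensions, the synthetic data, and the choice of an
ordinary least-squares fit are assumptions made only for
illustration.

   # Hedged sketch of the matrix model {C} = [D]{E}.  Synthetic data and
   # an ordinary least-squares fit stand in for the historical record and
   # for whatever estimation procedure is finally adopted.

   import numpy as np

   n, m, years = 6, 3, 10     # receptors, SIC categories, historical years
   rng = np.random.default_rng(0)

   D_true = rng.uniform(0.05, 0.5, size=(n, m))         # "unknown" coupling matrix
   E_hist = rng.uniform(10.0, 100.0, size=(m, years))   # emissions by SIC code
   C_hist = D_true @ E_hist + rng.normal(0.0, 0.5, size=(n, years))

   # least-squares estimate of D:  solve  E_hist.T D.T ~= C_hist.T
   D_fit = np.linalg.lstsq(E_hist.T, C_hist.T, rcond=None)[0].T

   # forecast concentrations for a projected emission vector
   E_projected = np.array([80.0, 45.0, 120.0])
   C_forecast  = D_fit @ E_projected

   # invert the relation: emissions consistent with a target concentration set
   E_required = np.linalg.pinv(D_fit) @ (0.8 * C_forecast)

   print("forecast concentrations:", np.round(C_forecast, 1))
   print("emissions for a 20 percent reduction:", np.round(E_required, 1))
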
     Annual Mean S02 and TSP Over Nonflat Terrain

Phenomena of Concern

     Nonflat terrain gives rise to a remarkable number
of micro-and mesoscale meteorological phenomena
affecting the dispersion of pollutants.10'12  It
would be valuable to be able to deal with the majority
of these effects upon concentration at an arbitrary
surface point.  Among these effects are the following:

     1.   Channeling of wind by a valley, causing a
         strongly bimodal wind rose;

     2.   Boundary layer instability effects, such
         as vortex (eddy) and downwash generation
         on the lee side of a ridge;

     3.   Plume impingement on a ridge or plateau;

     4.   Increase of turbulence aloft due to surface
         roughness over a wide range of scales;

     5.   Downflow of colder air in a valley;
                                                        82

-------
    6.   Irregular  distribution  of updrafts  due  to
         insolation of  rough  terrain;

    7.   Several  possible  effects  of  the  thermal
         structure  T(z)  of the atmosphere,
         including  an elevated or  ground-level
         inversion;

    8.   Extra  turbulence  due to wind  shear;

    9.   Bending  of plume  by  variation of wind
         direction  with  altitude;

    10.   Nonuniform wind speed and direction
         due  to potential  flow over rough terrain;

    11.   Possibility of a  Coriolis effect in  a  large
         valley;

    12.   Reduction  of wind speed in a  valley  relative
         to geostrophic wind;

    13.   Elevation  of a "cloud"  of pollution  which
         is caused  by inflow  of  cooler cleaner  air
         below.
Analytical  Approaches

     The simplest assumption  is  to ignore terrain
unevenness, as  in all  flat-earth models.   The resulting
errors are  not  well  understood in cause or magnitude.
If the terrain  is very rough,  the annual  average con-
centrations predicted  by a flat-earth model cannot be
"calibrated" satisfactorily to observed data at a set
of monitoring sites, because  the scatter is so bad.

     The next simplest assumption useful  in the case
of a ground level receptor which is elevated relative
to the base of  a source stack, is that the effect of
nonflat terrain may  be taken  into account by deducting
from stack  height the  elevation  of the receptor above
the stack base.  In  effect, this assumption is that
the elevated ground  is permeable to the wind and that
the wind field  is not  distorted  by the terrain.  A
somewhat more rational correction to plume height above
rough terrain is made  in the  PSDM program.1'

     Since  roughness of terrain  greatly increases the
diffusion coefficients ay and az, an appealing model
assumption  would be  that there is perfect mixing within
some specified  box,  with flowthrough being determined
by the wind. The authors have found that such a model,
even with substantial  dilution by exchange of air at
the top, gives  much  too high  a concentration at the
ground, when applied to the valley area of Steubenville,
Ohio.

     Part of the modeling problem is to describe the
wind vector field for  each of a  set of wind speed and
direction classes.  There are many approaches for
finding the wind field, such as the shallow fluid
model,13 closed-form approximations in simple
geometries, and various numerical methods based on
the Navier-Stokes equations14,15 or modified potential
flow.16

     Once the wind field is available, one can apply one
of the existing models, such as CDM,6 as a subroutine in
a program which deals with pollution transport,
diffusion, and decay in a nonuniform wind.17  The
authors are currently investigating this approach.
               Diurnal Scale Problems

Modeling for Episode Network Design

    An episode network has the purpose of detecting an
incipient episode of high pollutant concentration over
a period of days.  The detection is only possible with
some particular degree of confidence when performed by
a given number of sensors, because the "signal" is
"noisy".  The statistical theory involved has been
worked out by Clarenburg.18  One of us (JCB) has
extended the theory,2 and we have applied it to two
cities in Ohio.3,19  The numbers of monitors thus
determined are reasonable.
Diurnal Phenomena

    The most disastrous air pollution episodes are
those which last only a few days and which critically
involve short-term phenomena.  These phenomena must be
modeled, in order to understand how an area's pollu-
tion compares with short-term standards under various
conditions.  The modeling problems associated with
these diurnal  phenomena may be summarized as follows:

    1.   Inversions.   The creation, persistence,
        and the diurnal rise and fall of an
        inversion are basic in pollution episodes,
        since they limit the mixing volume.  More-
        over, the production and maintenance of an
        inversion are correlated with low wind speed,
        which further increases pollutant concentra-
        tions.  The modeling problem is to represent
        the vertical transient thermal structure of
        the atmosphere T(z,t) over a period of days,
        usually in a valley.

    2.   Valley Winds.  An episode can be compounded
        by the intensification of an inversion by
        cold drainage winds sliding underneath at
        night.

    3.   Lake Breezes.  A large dammed river, such as
        the Ohio River, is essentially a lake.  It
        is conceivable that a two-cell cylindrical
        circulation could develop on a clear day,
        with downflow in the middle and rising air
        on both shores, trapping pollutants in the
        circulation field.

    4.   Urban Thermal Circulation.  This toroidal
        circulation, which is driven by the heat
        release of a city, can also trap and
        recirculate pollutants.

    The authors have not yet brought these phenomena
into a short-term model of such a city as Steubenville,
Ohio.
          "Urban Plume" Modeling for Ozone

The State of the Art of Ozone Modeling

    Regression models for ozone as a function of some
environmental variables, such as temperature and wind
speed,20,21 are available.
    The diurnal curve of ozone concentration versus
time is approximately a sinusoid plus a constant.
This curve is the solution of the following differen-
tial equation:
                                                       83

-------
                    d[O3]/dt = KI - [O3]/T
where I is illumination intensity (a sinusoidal func-
tion of time) and where K and T are constants.  The
authors have derived this equation from published sets
of kinetic equations by making some reasonable simpli-
fications.
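
     A minimal numerical sketch of this equation, with
illustrative values of K and T and a half-sinusoid daytime
illumination, is given below.

    # Illustrative sketch: integrate d[O3]/dt = K*I(t) - [O3]/T by forward Euler.
    # K, T, the time step, and the illumination curve are all assumed values.
    import numpy as np

    K, T = 0.05, 3.0 * 3600.0                 # assumed rate constant and time constant (s)
    dt = 60.0                                 # time step (s)
    t = np.arange(0.0, 48.0 * 3600.0, dt)     # two days of simulation
    I = np.maximum(0.0, np.sin(2.0 * np.pi * (t / 86400.0 - 0.25)))  # zero at night

    o3 = np.zeros_like(t)
    for n in range(len(t) - 1):
        o3[n + 1] = o3[n] + dt * (K * I[n] - o3[n] / T)
    # o3 settles into a diurnal curve resembling a sinusoid plus a constant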

     The next most complicated model would take account
of vertical movement (by diffusion or by transport in
a lake breeze) and the vertical distribution of ozone
over the diurnal cycle.22  Then it would be desirable
to model the growth and decay of ozone concentration in
an urban plume extending into a rural area.23  Until
such models are available, further complication of the
kinetics model seems unwarranted.

Considerations for Statewide Ozone Network

     As a necessary preliminary to modeling ozone in
Ohio, the monitoring network is being expanded.  At
present virtually all 28 monitors are urban.  Hence
it is desirable that all monitors added in the near
future have rural sites.  Each rural site should be
within about 50 miles of at least one city and prefer-
ably at the centroid of several cities, in order that
urban plumes can be studied.22
            The Automotive Source Problem

Metropolitan Carbon Monoxide Modeling

     All of this work has been done in cooperation with
the Ohio Department of Transportation (ODOT).   We
initially thought to use the APRAC24 model  for this
purpose.  We approached ODOT to obtain the  traffic-
grid and vehicle load factors for the Columbus, Ohio
area.  It was quickly found that ODOT's number of
traffic grid links exceeded the capacity of APRAC by
about a factor of ten.   Although an APRAC compatible
grid for Columbus was eventually produced,  it was
concluded that such an approach was impractical for
all Ohio cities.

     A greatly simplified model called COPOLLUT25 was
developed for survey purposes.  In the emission rate
algorithm, only traffic links are assumed to produce
the pollution pattern.   Turning movements are not
included.  In the dispersion algorithm, wind direction
is assumed to be at the same relative angle for each
traffic link, and, therefore, actual meteorological
conditions are not used.  Emission factors  are taken
from reference 28.

     Thus the model emphasizes the emission character-
istics of carbon monoxide pollution.   Sophisticated
dispersion relationships are not used.  Such assump-
tions are compatible with measured concentration
patterns27 of carbon monoxide.  The principal short-
coming of the model is its inability to handle the
influence of street canyons upon dispersion.  Such
effects would be important, principally in  the Central
Business District.  The model produces realistic
patterns of pollution in the sense that they are
neither substantially better nor worse than the air
quality standards for Columbus.

Finite Line Source Model

     A model described elsewhere in this conference26
has been developed for modeling finite line sources.
A closed-form,  time-dependent solution has  been
derived.  The dispersion function explicitly  incorpor-
ates ground roughness and vertical heat flux.   The
incorporation of ground roughness and the  functional
form of the time dependence are thought to be quite
characteristic of carbon monoxide dispersion from
roadways in a variety of areas of varying  degrees of
urban development.  It is intended to be applied to
analyses of proposed roadway development.  It can also
be readily adapted to Indirect Source complexes.
                         Conclusions

     Our use of air quality models emphasizes the
assessment of environmental situations and the
development of regulations.  We require that models be
theoretically realistic and, ideally, simple.  We
further require them to be representative of the real
world as attested by measurement comparisons.

     Existing models which we have acquired deal with
only an extremely limited number of real situations.
Such models are restricted theoretically to flat
terrain, steady-state, uniform wind, non-reactive
pollutant situations.  We are seeking and developing
additional models which incorporate topography,
kinetics, thermal structure (both vertical and
horizontal), and diurnal phenomena such as valley
winds, heat islands, and lake breezes.

     The major air pollution problems in Ohio generally
occur where some or all of these variables are present.
We will not feel confident in defining the situation
or devising control remedies until we can confidently
model where and when these variables occur.
                         References

 1.  Burr, J. C., Jr., "Air Quality Monitoring Require-
     ments for Cleveland with Extensions to Cuyahoga
     County:  A Quantitative Evaluation Based on
     Existing Data and Sources", Research Report,
     City of Cleveland, Division of Air Pollution
     Control, Cleveland, Ohio, 1972.

 2.  Burr, John, and Clymer, Ben, "Geographical
     Distribution of Sensors in Urban Air Monitoring
     Networks", Third Conference on Energy and the
     Environment, Hueston Woods, Ohio, 1975.

 3.  Clymer, A. Ben, "Design of Episode and Ambient
     Networks for Monitoring Particulates and Sulfur
     Dioxide in Metropolitan Columbus", Ohio EPA,
     November 1974.

 4.  PEDCo-Environmental Specialists, Inc., "Modifica-
     tion of the Climatological Dispersion Model",
     Contract No. 68-02-1375, Task Order No. 21,
     Prepared for U.S. EPA, Region V.

 5.  Ohio EPA, "Air Quality Summary Report", submitted
     to the Ohio Energy Emergency Commission,
     November 1975.

 6.  Busse, Adrian D., and Zimmerman, John R., "User's
     Guide for the Climatological Dispersion Model",
     Report EPA-R4-73-024, U.S. EPA, Research Triangle
     Park, N.C. 27711, December 1973.

 7.  Miller, M. E., and G. C. Holzworth, "An Atmos-
     pheric Diffusion Model for Metropolitan Areas",
     APCA Journal, 17, pp 46-50 (1967).
                                                       84

-------
 8.  Blaszak, T., private  communication,  Region  V,
    U. S. EPA,  230 South  Dearborn  Street,  Chicago,
    Illinois.

 9.  Larsen, R.  I., "A  Mathematical  Model for Relating
    Air  Quality Measurements  to  Air Quality  Standards",
    U. S. EPA,  No. AP-89(November  1971).

10.  Munn, "Descriptive Micrometeorology",  Academic
    Press, New  York, 1966.

11.  Scorer, "Air Pollution",  Pergamon  Press, London,
    1968.

12.  Egan, Bruce A.,  "Turbulent Diffusion in  Complex
    Terrain", Lecture  Notes,  Workshop  on Meteorology
    and  Environmental  Assessment,  Environmental
    Research and Technology,  Inc.,  Concord,  Mass.

13.  Cramer, H. E., Geary, H. V., and Bowers, J. F.,
     "Diffusion Model Calculations of Long-Term and
     Short-Term Ground-Level SO2 Concentrations in
     Allegheny County, Pennsylvania", H. E. Cramer
     Co., Inc., Salt Lake City, Utah 84108, Report
     PB-245262, March 1975.

14.  Hotchkiss,  R. S.,  and Harlow,  F. H., "Air
    Pollution Transport in Street  Canyons",
    Report EPA-R4-73-029,  U.  S.  EPA Office of
    Research and Monitoring,  June  1973.

15.  Anderson, Gerald E.,  "Mesoscale Influences  on
    Wind Fields", J. of Applied  Meteorology, Vol.  10,
    June 1971,  pp 377-386.

16.   Settari, A., and Lantz, R. B.,  "A Turbulent Flow
     Model for Use in Numerical Evaluation  of Air
     Quality", J. of  Canadian  Petroleum Industry,
     October-December 1974, Montreal.

17.   Rosenblum,  Harvey  S.,  Egan,  Bruce A.,  Ingersoll,
     Claire S.,  and  Keefe,  Michael  J.,  "Adaptation
     of Gaussian Plume  Model to  Incorporate Multiple
     Station  Data  Input",  Environmental Research and
     Technology, Inc.,  Concord, Mass. 01742,  ERT
     Document No. P-1121,  vol.  1, June 1975.

18.   Clarenburg, L.A.,  "Air Pollution Control: A
     System to Predict  Unfavorable  Weather  Conditions",
     APCA Paper  68-55,  presented  at St. Paul, Minn.,
     1968.

19.   Burr, John  C., and Clymer, A.  Ben, "Design  of
     Episode and Ambient Networks for Monitoring
     Particulates and Sulfur Dioxide in Metropolitan
     Cincinnati", Ohio  EPA, Columbus, Ohio, October
     1975.

20.   Chock and Terrell, "Time  Series Analysis of
     Riverside,  California, Air Quality Data", General
     Motors Corporation Research  Laboratories, Warren,
    Michigan, Publication GMR-1591, presented at
     APCA meeting, Denver,  Colorado, June 1974.

21.  Bruntz, S. M., et al., "Ozone Concentrations in
     New Jersey and New York:  Statistical Association
     with Related Variables", Science, vol. 186,
     October 18, 1974, pp 257-259.

22.   Research Triangle  Institute, "Investigation of
    Rural Oxidant Levels  as Related to Urban Hydro-
    carbon Control Strategies",  U.  S.  EPA  report
    EPA-450/3-75-036,  March 1975.
23.  Cleveland, W. S., et al. "Photochemical Air
     Pollution:  Transport from the New York City
     Area into Connecticut and Massachusetts",
     Science, vol. 191, January 16, 1976, pp 179-181.

24.  Johnson, W. B., F. L. Ludwig, W. F. Dabberdt, and
     R. J. Allen, "An Urban Diffusion Simulation Model
     for Carbon Monoxide", APCA Journal 23, No 6,
     pp 490-498 (June 1973).

25.  Gebhardt, C. R. "COPOLLUT", Ohio Department
     of Transportation, June 1975.

26.  Burr, J. C., and R. G. Duffy, "A Time-Dependent
     Gaussian Line Source Model", Paper #11-6,
     Conference on Environmental Modeling and
     Simulation, April 20-22, 1976.

27.  Ott, W., and R. Eliassen, "A Survey Technique
     for Determining the Representativeness of Urban
     Air Monitoring Stations with Respect to Carbon
     Monoxide", APCA Journal, 23, No 8, 685-690,
     August 1973.

28.  U. S. Environmental Protection Agency, "Compila-
     tion of Air Pollutant Emission Factors", AP-42,
     Supplement No. 2, September 1973.
                                                       85

-------
                  DESIGNING A REGIONAL AIR POLLUTION MONITORING  NETWORK:
                AN APPRAISAL OF A REGRESSION EXPERIMENTAL DESIGN APPROACH
               P. R. Gribik
Graduate School of Industrial Administration
         Carnegie-Mellon University
      Pittsburgh, Pennsylvania  15213
                J. R. Sweigart
      School of Urban and Public Affairs
          Carnegie-Mellon University
       Pittsburgh, Pennsylvania  15213
               K. O. Kortanek
          Department of Mathematics
          Carnegie-Mellon University
      Pittsburgh, Pennsylvania  15213
ABSTRACT

The problem of allocating measuring resources
to aid in accurately estimating ground level
pollution concentrations throughout a region
is examined.  Application of optimal regres-
sion experimental design makes the uncertain-
ty in the estimates small for a given
measurement effort and suggests where the
measurements should be taken.  Design cri-
teria are surveyed for this problem as well
as the assumptions that underlie the applica-
tion of this technique.  The allocation and
location problem is illustrated with a hypo-
thetical example.
1.  INTRODUCTION

Generally, the effectiveness of an air qual-
ity management program depends greatly upon
the ability to estimate accurately the ambi-
ent air pollution levels throughout a given
region.  In turn, the ability to make ac-
curate estimates depends upon the design of
the monitoring network, specifically, upon
the locations of the measuring equipment.
In this paper we discuss the problem of al-
locating pollution measuring resources to
satisfy the need for accurate estimation of
the ground level concentration of a pollutant
throughout a region.  The techniques can be
viewed as being source-oriented in that they
give estimates of the pollution contribution
of each source, which is important for use in
an air quality management program.  Results
from a diffusion model are used to determine
the form of a response surface with which one
estimates the pollutant concentration at each
point in the region, including those points
where measurements are not made.  Multivari-
ate regression analysis can be used to fit
the response surface to the measurements ob-
tained from a monitoring network by computing
numerical values of unknown parameters, such
as the emission distribution of point sour-
ces.   Before actually taking measurements and
solving for these parameters, one first seeks
to allocate the measurement resources to
points throughout the region and thus deter-
mine sampling sites.

In this paper mathematical methods are sur-
veyed which treat the problem of allocating
these resources in some optimal way.  The ba-
sic problem is one of regression experimental
design, where the goal  is  to obtain  good  esti-
mates of unknown parameters.   Beginning with
Section 2, the underlying  experimental regres-
sion design model is presented.   Basic as-
sumptions are stated.  In Section 3, design
criteria are examined for  the  model  which
tend to make the uncertainty in  the  estimates
of the parameters as small as  possible in an
economically efficient  manner.   The  problem
of resource allocation  is  treated under the
assumption of a fixed weather  state.

The basic assumptions introduced in
Sections 2 and 3 are examined  in Section  4.
Finally, in Section 5 our  conclusions are
presented with a view towards  implementation.
2.  THE ESTIMATION OF UNKNOWN  PARAMETERS:
    EMISSION RATES

Given a control region  R,  consider  the prob-
lem of estimating the ground level concentra-
tion of a single pollutant  throughout  R  by
using measurements collected at  a finite num-
ber of points in  R.  In  R  there are  n
known sources of pollutant  and this  pollutant
is assumed not to react with any others.  It
is clear that pollution concentrations are
highly dependent upon weather  state  as speci-
fied by wind direction, wind speed,  mixing
height, stability class and so forth.  Con-
sider a single weather state during  which the
need for accurate estimation of  air  pollution
concentrations is acute.

For a particular weather state a diffusion
model may be used to approximate the pollu-
tion concentration at any point  x  in  R
due to source  i.  This approximation is of
the form  θ_i u_i(x),  where  θ_i  may be
interpreted as the emission rate of source  i
and  u_i(x)  is the pollution transfer
function of the i-th source as determined by
the diffusion model (with a unitary emission
rate).  Assuming an unknown "background"
pollution,  θ_0,  the pollution concentration
at any point  x  in  R  can be written as

     θ_0 + Σ_{i=1}^{n} θ_i u_i(x)          (1)
In fitting  (1) to actual measurements,  linear
regression  analysis can be used  to  estimate
                                             86

-------
θ_0, θ_1, ..., θ_n.  Call these computed
estimates  θ̂_0, θ̂_1, ..., θ̂_n.  The resulting
function

     θ̂_0 + Σ_{i=1}^{n} θ̂_i u_i(x)

is then used to estimate the concentration of
pollutant at any point in  R.

Assume that measurements of the pollutant are
made at the points  x_1, ..., x_m  in  R  with
k_i  measurements taken at  x_i  and that any
observation can be written in the form,

     θ_0 + Σ_{i=1}^{n} θ_i u_i(x) + e(x)       (2)

Here  e(x)  is a random error term.  Denote
the result of the j-th measurement at  x_i  by
g_ij,  j = 1, ..., k_i,  and set

     N = Σ_{i=1}^{m} k_i
         (the total number of measurements
          taken in  R),

     M = Σ_{i=1}^{m} (k_i/N) u(x_i) u(x_i)^T
         (the information matrix for the
          given measurement scheme),

and

     b = Σ_{i=1}^{m} k_i u(x_i) ḡ_i ,

where  ḡ_i  is the average pollution reading
at  x_i  for the  k_i  measurements.

u(x_i)  is the column vector
(1, u_1(x_i), ..., u_n(x_i))^T.  If the
information matrix  M  is non-singular, the
least squares estimator of
θ = (θ_0, θ_1, ..., θ_n)^T  is
θ̂ = (1/N) M^{-1} b.  Since the total number
of measurements that may be made in a period
is fixed, the measurements should be
allocated to points in  R  so that  θ̂  is a
good estimate of  θ.

We consider the allocation problem under two
standard assumptions that will be used in
defining what is meant by  θ̂  being a "good"
estimate.

  A1  The random errors in the observations
      are independent among all observations.

  A2  The mean of  e(x)  is  0  and the
      variance of  e(x)  is  λc  where  c  is
      known while  λ  may be known or unknown.

Under these two assumptions, the covariance
matrix of  θ̂  is  (λc/N) M^{-1}.  Since the
covariance matrix of  θ̂  gives an indication
of the uncertainty in our estimates and we
wish to make this uncertainty small in some
sense, we consider the problem of allocating
the measurement resources so as to "make the
covariance matrix small" with respect to a
particular criterion function.  In the next
section we discuss the allocation problem in
more detail and the problem of choosing a
criterion function.

3.  THE PROBLEM OF ALLOCATING MEASUREMENT
    RESOURCES

The problem here is to determine the points
in  R  where measurements are to be taken and
the proportion of the total measurement
effort to be expended at each location.  Let
x_1, ..., x_m  denote the points where
measurements are to be taken and
p_1, ..., p_m  denote the respective
proportions.  The problem here is not only to
determine  x_i  and  p_i  but also the number
of monitors, m.  We define a design denoted
by  ε  as follows:

     ε = {(p_1,x_1), ..., (p_m,x_m)},

where  ε  is actually a function such that

     ε(x) = 0    if  x ≠ x_i,  i = 1, ..., m,
     ε(x) = p_i  if  x = x_i .

The problem can then be restated as one of
finding a design which provides "good"
estimates of pollution concentrations
throughout  R.  As mentioned previously this
can be done by making some function of the
covariance matrix of  θ̂,  call it  Φ(M),
small.  We thus propose the following
non-linear optimization task, governed by a
design criterion as specified by  Φ(M).

Program P

     Compute  min Φ(M)

     for all  M ∈ R^{(n+1)×(n+1)}

     subject to the constraints

          M = Σ_{i=1}^{m} p_i u(x_i) u(x_i)^T

     and
          Σ_{i=1}^{m} p_i = 1,

          p_i ≥ 0,  i = 1, ..., m,

          x_i ≠ x_j  for  i ≠ j,

          m  is an arbitrary positive integer.

The solution to Program P,
{(p*_1,x*_1), ..., (p*_m,x*_m)},  will be
called an optimal design.

Another assumption is required.

  A3  There is at least one design whose
      information matrix is non-singular for
      the given weather state.

Under assumptions A1-A3 the properties of
Program P and its optimal designs will be
examined for two design criteria.  The func-
tions that will be considered are
                                              87

-------
-log det(M) and tr(GM^{-1}) for some specified
positive definite matrix  G  (det = determi-
nant, tr = trace).  The designs optimal for
Program P under these two criteria are termed
D-optimal and L-optimal designs, respectively.
Fundamental contributions  to the study of
these problems have been made by Kiefer  and
Wolfowitz  [8] .   Mathematical properties  of
these problems and also some numerical algo-
rithms for their solution  are discussed  in
Fedorov  [4] .
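
As a minimal sketch, with hypothetical
transfer-function values u(x_i), the
information matrix of a candidate design and
the two criteria just named could be
evaluated as follows.

    # Illustrative sketch: M(e) = sum_i p_i u(x_i)u(x_i)^T for a
    # candidate design; the rows of U are hypothetical u(x_i).
    import numpy as np

    U = np.array([[1.0, 0.9, 0.1],
                  [1.0, 0.2, 0.8],
                  [1.0, 0.5, 0.3]])
    p = np.array([0.4, 0.4, 0.2])        # measurement-effort proportions

    M = sum(pi * np.outer(u, u) for pi, u in zip(p, U))

    d_criterion = -np.log(np.linalg.det(M))        # D-optimality criterion
    G = np.eye(3)
    l_criterion = np.trace(G @ np.linalg.inv(M))   # L-optimality with G = I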

Relying on a design  ε  to estimate  θ,  the
variance of  θ̂_0 + Σ_{i=1}^{n} θ̂_i u_i(x)  for
any  x ∈ R  is given by
(λc/N) u(x)^T M^{-1}(ε) u(x).  M^{-1}(ε)
denotes the inverse of the information matrix
M  which depends upon the design  ε.  Kiefer
and Wolfowitz [8] have shown that a design
ε*  is D-optimal if and only if its associ-
ated information matrix  M(ε*)  solves

     min   max  u(x)^T M^{-1} u(x)
     M∈Ω   x∈R

where

     Ω = {M ∈ R^{(n+1)×(n+1)}:  there is a
          design  ε  for Program P such that
          M = Σ_{x∈R} ε(x) u(x) u(x)^T  and
          det(M) ≠ 0}.

Thus the D-optimal design minimizes the maxi-
mum variance of the best linear unbiased
estimates (BLUE) of ground level pollution
concentrations in the region   R.

The L-optimal design is related to the expec-
ted error in  θ̂.  Let  θ_true  be the vector
of the true source strengths in equation (1).
Since  θ̂  is unbiased,  E(θ̂) = θ_true  (E =
expected value operator).  If the design  ε
is used to collect data during a given weather
state, then

     E[(θ̂ - θ_true)^T G (θ̂ - θ_true)]
          = (λc/N) tr[G M^{-1}(ε)].        (3)

Thus the L-optimal design problem is equiva-
lent to finding the design for which the LHS
of (3) is minimized.  An important special
case occurs when  G  is chosen to be the
identity matrix.  Then the L-optimal design
seeks to minimize the expected sum of squared
errors in the BLUE of  θ.

Another important choice of  G  is

     G = ∫_R u(x) u(x)^T δ(dx)

where  δ  is a probability measure on  R  for
which  G  is non-singular.  It can be shown
in this case that the L-optimal design prob-
lem seeks to minimize the weighted average of
the variance of the BLUE of the ground level
pollutant concentrations in the region  R
where  δ(·)  is the weighting term.  One pos-
sible choice of  δ(·)  is to set
δ(A) = pop(A)/pop(R),  where  pop(A) = the
population of region
A, for  A  some subset of  R.  In this case,
Program P would determine an allocation that
                                                 yields better estimates of the pollution in
                                                 the more densely populated sections  of  R.
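
A minimal sketch of this population-weighted
choice of G, with hypothetical grid
populations and transfer-function values,
follows.

    # Illustrative sketch: G = sum_x u(x)u(x)^T * pop(x)/pop(R) over a
    # small hypothetical grid, for use in the criterion tr(G M^-1).
    import numpy as np

    U_grid = np.array([[1.0, 0.7, 0.2],
                       [1.0, 0.3, 0.6],
                       [1.0, 0.1, 0.1],
                       [1.0, 0.4, 0.4]])
    pop = np.array([25000.0, 60000.0, 5000.0, 10000.0])   # assumed populations
    delta = pop / pop.sum()                               # probability measure on R

    G = sum(w * np.outer(u, u) for w, u in zip(delta, U_grid))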

                                                 The final choice of the design criterion will
                                                 depend upon the intended use of  the  estimated

coefficients  θ̂_i.  If the estimated coeffici-
                                                 ents are to be used to estimate  total  concen-
                                                 tration at points throughout  R,  any of the
                                                 design criteria suggested is appropriate.
                                                 However, if the purpose is to use the  esti-
                                                 mates of  6  as estimated emission rates of
                                                 sources to be used as a basis for regulatory
                                                 policy, then the L-optimal design with  G = I
                                                 should be taken.
4.  DISCUSSION OF ASSUMPTIONS A1 THROUGH A3

The previous results depend upon  the  assump-
tions that were made.  Assumption A1 is like-
ly to be difficult to satisfy and deserves
further discussion.  This  assumption  depends
upon the accuracy of the diffusion
model and the interpretation and
implementation of the optimal design.  Sup-
pose that we use one of the previous  models
and obtain {(p*_1,x*_1), ..., (p*_m,x*_m)} as an opti-
mal design.  Also suppose that we are to take
N  measurements and that we interpret the op-
timal design to mean that we are to take
p*_i N  observations at  x*_i  with all measure-
ments taken at the same time.  Under  this in-
terpretation, the assumption that the devi-
ations of the measurements from the model are
independent (A1) may well be violated.  For
assumption A1 to be valid under this inter-
pretation, we would need a diffusion  model
that was accurate down to  very  small  scale
effects.  Since the diffusion models  avai-
lable at present are not this accurate, this
interpretation is not adequate.

Assume that the diffusion model used can ac-
curately model effects as small as  d  meters
in diameter and  t  minutes in duration.  Let

     T = t · (max_i  p*_i N)

and assume that the weather state remains un-
changed and that the emission rates of the
sources and other parameters of the point
sources remain constant for a period of  T
minutes.  If we take a measurement at point
x*_i  every  T/(p*_i N)  minutes during this
period of  T  minutes and if
min_{i≠j} ||x*_i - x*_j|| ≥ d,
where  || · ||  is the Euclidean norm on  R²,
then the deviations of the measurements from
the model will be independent.
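
A small sketch of these spacing and timing
checks, with a hypothetical design and
hypothetical resolutions d and t, is given
below.

    # Illustrative sketch: check the A1 spacing condition and compute the
    # per-site sampling interval T/(p_i*N); all inputs are hypothetical.
    import numpy as np

    x = np.array([[2.0, 3.0], [7.0, 1.0], [5.0, 8.0]])   # design points (km)
    p = np.array([0.5, 0.3, 0.2])                        # optimal proportions
    N, t_res, d_res = 60, 10.0, 1.0                      # measurements; t (min); d (km)

    dists = [np.linalg.norm(x[i] - x[j])
             for i in range(len(x)) for j in range(i)]
    spacing_ok = min(dists) >= d_res                     # no two points closer than d

    T = t_res * np.max(p * N)                            # period length (minutes)
    intervals = T / (p * N)                              # sampling interval at each point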
If  min_{i≠j} ||x*_i - x*_j|| < d,  we can
alter the previous programs to insure that
the points in a design are far enough apart.
One way would be to add the constraints

     ||x_i - x_j|| ≥ d   for  i ≠ j

to a program.  However these constraints are
nonconvex and so increase the difficulty of
solving the program.  There is another way
to insure that  min_{i≠j} ||x*_i - x*_j|| ≥ d  which is
                                              88

-------
consistent with the practicalities of the
problem.

In a practical problem, the optimal design
would probably not be exactly implemented but
only used as a guide since factors not pre-
cisely modeled would also be considered.  For
example,  a monitoring station would not be
placed in the lee of a large building or next
to a minor pollution source that was only
considered in the aggregate background pol-
lution  θ_0.  Hence, if a sufficiently fine
grid  H  on  R  is chosen, the resulting op-
timal design would be adequate for our
purposes.
If the grid  H  satisfies

     min    ||x - y|| ≥ d                  (4)
  x,y∈H, x≠y
then no two points in an optimal design can
be closer than  d  and so assumption A1 will
be satisfied.  In U.S. regional air pollution
studies, grid squares are typically one to
ten kilometers on a side.  If the region  R
is such that   max ||x - y|| >> 10 kilometers,
              x,y∈R
which is the case in most problems of in-
terest, and if  d  is on the order of one to
ten kilometers or smaller,  H  can be chosen
to be a grid sufficiently fine to yield use-
ful results while satisfying (4).  Also, a
grid  H  will insure that no further assump-
tions on the transfer functions and the
region  R  are needed.
5.  CONCLUSIONS:  INTERPRETATIONS AND
    IMPLEMENTATION

The previous models consider the problem of
allocating measurement resources to collect
pollution concentration data during the time
that a specified weather state holds.  Since
the optimal allocation depends upon the
transfer functions obtained from a diffusion
model and these functions depend upon the
weather state, the models developed may give
different optimal allocations for different
weather states.  We do not suggest propor-
tionally dividing the equipment among the
allocations optimal for the different weather
states since the result may not be a good al-
location to the overall problem.

One way to approach this problem is to extend
the methods of this paper to more than one
weather state.  In this case, it is necessary
to choose a set of weather states
S = {s_1, ..., s_k}  which is of interest.  We

then wish to design a monitoring network for
the region  R  so that good estimates of the
parameters  θ_i  can be made, on the average,

when the weather state is in the set  S.   Let
w  be a random variable which denotes the
weather state at some time in the future
where frequency data can be used to estimate
the conditional probability that  w  is in
S.  The details of this extension have been
completed in P. R.  Gribik's Thesis [6].

The methods of this paper are directly appli-
cable if one is willing  to  choose  a  prevail-
ing or typical weather condition  for the
region and design on it.  Another  direct  ap-
proach would require the utilization of a
long-term diffusion model and  a prespecified
frequency distribution on wind direction  and
wind speed.  Then, the measurements  would be
used to develop long-term averages at the
given set of design points and the  p_i  would

be interpreted as the proportion of measure-
ment effort to be expended at point  x_i  to

develop this average.  The  main use  of the
estimated parameters computed  in this situ-
ation would be for the calibration of the
diffusion model results.

It is highly desirable that a  regional moni-
toring network provide information on air
quality both in the short-run  and  the long-
run.  The above alternatives suggest a vari-
ety of ways in which this goal may be accom-
plished.  For example, a monitoring network
which is, for all practical purposes, station-
ary can be established to satisfy the needs
for long-term estimation.  If this network is
not satisfactory in the short run, it would be
necessary to determine a set of "critical"
weather states where the probability of ex-
ceeding ambient air quality standards  is
high.   For critical states  designs could  be
computed and for short periods of time  dur-
ing these critical states mobile monitoring
equipment could be placed at the previously
designated locations.

The methods described in the paper are  per-
haps more attractive if mobile monitoring
equipment is available, but the application
of the methods is still reasonable if  all
equipment is stationary.  The  actual imple-
mentation plan will of course  depend upon the
resources available as well as individual
characteristics of the region under consider-
ation.
6.  AN ILLUSTRATION OF THE ALLOCATION
    PROCEDURE

In this section we consider a small example
which illustrates the allocation procedure
and indicates a sample allocation of resour-
ces.  Suppose we have a region with four
major polluters and an unknown background
source.  The four sources are described as
follows:
     Source    HP      TS      VS      D      VF       R      S
              (m)   (deg K) (m/sec)   (m)  (m3/sec)   (mi)   (mi)

       1      61.0    600.     6.1    2.6    32.4     5.7    3.9
       2      34.7    727.     1.6    1.5     2.8     6.5    4.7
       3     113.0    546.     9.3    5.2   197.5     2.9    5.7
       4      50.0    460.     7.0    2.5    34.4     7.0    2.9

and
     HP:  physical stack height
     TS:  stack gas temperature
     VS:  stack gas exit velocity
      D:  inside diameter of stack
     VF:  stack gas volumetric flow rate
      R:  x-coordinate of stack
      S:  y-coordinate of stack.
                                              89

-------
The EPA computer program DBT51, which calcu-
lates concentrations for multiple point sour-
ces, was used to calculate concentrations at
729 grid points (1 mile grid) for a single
weather state.  A typical weather state was
defined by Pasquill's stability class D,
wind speed = 5 m/sec, mixing lid = 1219 m,
ambient air temperature = 284 deg K, and
wind from the southwest.  For purposes of il-
lustration we choose a non-uniform grid of 80
points which reflects the changes in individu-
al contributions of the sources involved.
Figure 1 indicates the diffusion coefficients
× 100 at the 80 points considered.  The lo-
cations of the sources are also indicated in
Figure 1.  For each of those locations con-
tributions from each of these sources are
computed and stored.
Figure 1:  Total concentrations at 80 locations in  R

Let us choose our design criterion as the one
which minimizes the maximum variance of the
best linear unbiased estimate (BLUE) of
ground level pollution over the chosen grid,
i.e., we propose to compute a D-optimal
design.  Using an algorithm proposed by
Fedorov [4] (p. 102) we specify an initial
design and proceed to obtain a solution to
Program P.  The initial design as well as an
"approximate optimal design" are given below.

Initial Design

  Location (grid point #)   223   226   251   305   368
  Mass (p_i)                 .2    .2    .2    .2    .2

Approximate Optimal Design

  Location (grid point #)   223   226   251   305   368   307   249
  Mass (p_i)               .195  .198  .009  .009  .199  .196  .194

Even without termination it appears that a
design is emerging with monitors at grid
points 223, 226, 368, 307, and 249 and with
approximately the same measurement effort to
be expended at each of these points.  It is
informative to note the individual contribu-
tions of the sources at each of these loca-
tions.

                      Contribution from source
  Monitor   Location     #1       #2       #3      #4

     1        223       46.62   181.64     0       0
     2        226        0        0        0       0
     3        249        0        0        0      82.3
     4        307       39.4     74.46     0       0
     5        368        0        0        7.07    0

In this problem it can be projected that 1
and 4 monitor sources 1 and 2, 2 monitors the
background pollution, 3 monitors source 4,
and 5 monitors source 3.  We also note that
an initial design with points 196, 202, 335,
424, and 648 led to a comparable outcome.
The implication for the estimation problem
goes as follows.  Locate monitors at 223,
226, 249, 307, and 368 and take approximately
the same number of measurements at each lo-
cation during the weather state specified.
Use the averages of these measurements,
ḡ_1, ..., ḡ_5,  to fit the measurements to the
diffusion model estimates and then use

     θ̂_0 + Σ_{i=1}^{4} θ̂_i u_i(x)

to estimate concentrations at the remaining
75 grid points.  We can then guarantee that
these estimates are "best" in the sense
described earlier in this section.
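
The illustration above cites an algorithm
from Fedorov [4]; the sketch below is only a
simplified sequential scheme in the same
spirit, not the procedure of [4] itself, and
its transfer-function values are
hypothetical: at each pass the grid point
with the largest variance function
u(x)^T M^{-1} u(x) receives a small
additional share of the measurement effort.

    # Simplified sequential sketch in the spirit of D-optimal design
    # algorithms (not the specific algorithm of Fedorov [4]); the
    # transfer-function values are randomly generated placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    U = np.column_stack([np.ones(80), rng.uniform(0.0, 2.0, (80, 4))])
    p = np.full(80, 1.0 / 80.0)                 # start from a uniform design

    for k in range(2000):
        M = U.T @ (p[:, None] * U)              # information matrix of current design
        Minv = np.linalg.inv(M)
        var = np.einsum('ij,jk,ik->i', U, Minv, U)   # u(x)^T M^-1 u(x) at each point
        j = int(np.argmax(var))                 # most poorly estimated grid point
        alpha = 1.0 / (k + 2)                   # shrinking step size
        p = (1.0 - alpha) * p
        p[j] += alpha                           # shift a little effort toward that point

    # p tends to concentrate most of the effort on a small set of grid points.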
REFERENCES

[1]  Atwood, C. L., "Sequences Converging to
     D-optimal Designs of Experiments", Ann.
     Math. Stat. 1 (1973), 342-352.

[2]  Escudero, L. F., "The Air Pollution
     Abatement MASC-AP Model", Proceedings of
     the International Conference on Mathe-
     matical Models for Environmental Prob-
     lems, September 1975, University of
     Southampton, United Kingdom.

[3]  Federer, W. T. and L. N. Balaam, Bibli-
     ography on Experiment and Treatment
     Design, Oliver and Boyd, Edinburgh, 1973.

[4]  Fedorov, V. V., Theory of Optimal Experi-
     ments (translated by W. J. Studden and
     E. M. Klimko), Academic Press, New York,
     1972.

[5]  Fortak, H. G., "Potential Applications
     of Mathematical Meteorological Diffusion
     Models to the Solution of Problems of
     Air Quality Maintenance", Proceedings of
     the Fifth NATO/CCMS Expert Panel on Air
     Pollution Modeling, Chapter 1, Research
     Triangle Park, North Carolina, 1974.

[6]  Gribik, P. R., "Semi-infinite Program-
     ming Equivalents and Solution Techniques
     for Optimal Experimental Design and
     Geometric Programming Problems with An
     Application to Environmental Protection",
     Ph.D. Thesis, Graduate School of Indus-
     trial Administration, Carnegie-Mellon
     University, December 1975.

[7]  Gustafson, S.-A. and K. O. Kortanek,
     "Determining Sampling Equipment Loca-
     tions by Optimal Experimental Design
     with Applications to Environmental Pro-
     tection and Acoustics", Proc. Comp. Sci.
     and Stat. Seventh Annual Symposium on the
     Interface, W. J. Kennedy, ed., Iowa
     State University, Ames, Iowa, 332-338, 1973.

[8]  Kiefer, J. and J. Wolfowitz, "The Equiva-
     lence of Two Extremum Problems", Canadi-
     an J. Math. 12 (1960), 363-366.

[9]  Seinfeld, John, "Optimal Location of Pol-
     lutant Monitoring Stations in an Air-
     shed", Atmospheric Environment 6 (1972),
     847-858.

[10] Sommer, G. and M. A. Pollatchek, "A Fuz-
     zy Programming Approach to an Air Pol-
     lution Regulation Problem", Technical
     Report 76/01, Lehrstuhl für Unternehmens-
     forschung, Aachen, West Germany.
                                               91

-------
                         SAMPLED CHRONOLOGICAL INPUT MODEL (SCIM) APPLIED TO AIR QUALITY
                                    PLANNING IN TWO LARGE METROPOLITAN AREAS

                                      R.C.  Koch,  D.J.  Pelton  and P.H.  Hwang

                                               GEOMET,  Incorporated
                                               Gaithersburg,  Maryland
     SCIM is a multiple-source urban diffusion model
based on the Gaussian plume equation and has been used
to analyze SO2 control regulations in Boston and San
Francisco.  Maximum 3-hour, 24-hour, and annual mean con-
centrations are calculated from NEDS emission or fuel
use data and standard National Climatic Center data.
Model validation results for Boston show a model to
measurement correlation of 0.97 for annual means and
0.81 for maximum 24-hour concentrations of SO2 based on
comparisons at 14 stations.  The analysis showed that
fuel regulations which permit increasing fuel-sulfur to
1 percent in the Boston core area will not meet National
Ambient Air Quality Standards (NAAQS).  In San Francisco
the analysis showed that limiting SOp emissions from
large sources is more critical for meeting NAAQS than is
limiting the sulfur content of fuels.

                    Introduction

     Projected shortages of low-sulfur fuels have caused
many states to reexamine their regulations for control-
ling sulfur emissions.  Areas where present regulations
are more stringent than necessary to meet air quality
standards may be suitable for more lenient SO2 emission
regulations.  Proper evaluation of this question in
large metropolitan areas, which contain large power
plants and chemical processing plants and where sub-
stantial quantities of fuel are used for space heating,
requires the use of a computer simulation model to
account for the effects of many sources and to determine
both the long-term and maximum short-term air quality
levels resulting from these sources.   The recent require-
ment to develop air quality maintenance plans for many
urban areas also requires the use of a dispersion model
capable of evaluating air quality levels from many
simultaneous sources.  The multiple-source Gaussian
plume dispersion model can be used to make these evalu-
ations.  This paper describes the application of such a
model, the Sampled Chronological Input Model (SCIM), to
evaluate alternative SO2 control strategies in Boston
and San Francisco.

                  Model Description

     SCIM is based on the Gaussian plume equation with
the origin at a receptor point of interest and the x-
axis pointed upwind into the mean wind direction.
Assuming an impervious ground surface and an exponential
depletion constant (k) to account for physical and
chemical removal processes, the ground-level concentra-
tion from a point source is given by:
     χ = [Q / (π σ_y σ_z u)] exp(-y²/2σ_y²) exp(-H²/2σ_z²) exp(-kx/u)

     Let q(dx)(dy) be the total amount of pollutant
emitted per unit time in a horizontal element of area
(dx)(dy).  Assuming that the total concentration at
a receptor is the sum of concentration contributions
from all individual area source elements with an ef-
fective area source height h_a, the concentration χ_A
at the receptor location due to the area source is:

     χ_A = ∫∫ [q / (π σ_y σ_z u)] exp(-y²/2σ_y²) exp(-h_a²/2σ_z²) exp(-kx/u) dy dx

     The integration operation is simplified by the
"narrow plume" assumption, applied to the crosswind
integral

     (1/σ_y) ∫ q exp(-y²/2σ_y²) dy .

As long as the spatial distances between variations in
area-source emission rate are large compared to the
horizontal diffusion parameter, it may be assumed that

     q̄(x) ≈ q(x,0) ,

where q̄(x) denotes the crosswind average above.  As a
result,

     χ_A ≈ (2/π)^½ ∫ [q(x,0) / (σ_z u)] exp(-h_a²/2σ_z²) exp(-kx/u) dx .

This equation is evaluated using a variation of the
trapezoid rule in which small increments in x are
gradually increased with increasing x to a uniform
increment.  Area sources may be defined for up to five
area source heights.

     The effects of the two types of sources (area and
point) are analyzed separately and added together to
give a resultant concentration.

Limited Mixing

     The marked reduction in vertical diffusion which is
caused by a stable layer aloft is approximated using a
suggestion by Pasquill (1962) that a uniform vertical
distribution will be approximately achieved at a down-
wind distance from the source at which σ_z is equal to
the height of the mixing layer.  At half this height no
effect due to limited mixing needs to be considered.
Using linear interpolation between these distances and
assuming σ_z is represented as a simple power law of x,
σ_z = b x^q, the concentration is adjusted between the
distance at which σ_z equals half the mixing height and
the distance at which σ_z equals the full mixing height.
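
     A minimal sketch of the ground-level point-source
expression as reconstructed above is shown below; the
dispersion parameters, depletion constant, and source data
are assumed for illustration and are not taken from SCIM.

    # Illustrative sketch: ground-level concentration from one point source,
    # using the reconstructed Gaussian form with exponential depletion.
    import math

    def ground_level_chi(Q, u, x, y, H, sigma_y, sigma_z, k=0.0):
        """Concentration at a receptor x upwind, y crosswind of the source."""
        return (Q / (math.pi * sigma_y * sigma_z * u)
                * math.exp(-y**2 / (2.0 * sigma_y**2))
                * math.exp(-H**2 / (2.0 * sigma_z**2))
                * math.exp(-k * x / u))

    # Example: 100 g/s source, 4 m/s wind, receptor 2 km directly downwind.
    chi = ground_level_chi(Q=100.0, u=4.0, x=2000.0, y=0.0, H=80.0,
                           sigma_y=150.0, sigma_z=60.0, k=1.0e-5)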
                                     Plume Rise
                                          The effective height of point sources  is  repre-
                                     sented as the stack height plus a plume  rise calculated
                                     using Briggs (1969) equations for leveled off  plume
                                     heights.  For stable conditions (Pasquill stability
                                     class E) assuming a potential temperature gradient of
0.02°C/m:
                                                       92

-------
                 ΔH = 2.9 [F T / (0.03034 u)]^0.33
For neutral and unstable conditions (Pasquill stability
classes A through D):

                 ΔH = 3.75 F^0.33 x^0.67 / u ,    HS ≤ 305

                 ΔH = 67.3 F^0.4 / u ,            HS > 305
Wind Speed
     The wind speed is estimated for each effective
point or area source height using the following  power
law (e.g. , Munn 1966):
                 u(h) = u_1 (h / h_1)^a
     Three values of "a" are input to SCIM correspond-
ing to unstable (classes A, B, and C), neutral  (class D)
and stable (classes E and F) conditions.
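
     A one-function sketch of this power law, with assumed
exponents and anemometer data, follows.

    # Illustrative sketch: power-law extrapolation of wind speed to source
    # height; the exponents "a" and all inputs are assumed values.
    EXPONENTS = {"unstable": 0.15, "neutral": 0.25, "stable": 0.40}

    def wind_at_height(u1, h1, h, stability):
        """Wind speed at height h from an observation u1 at height h1."""
        return u1 * (h / h1) ** EXPONENTS[stability]

    u_stack = wind_at_height(u1=4.0, h1=10.0, h=120.0, stability="neutral")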

Diffusion Parameters

     Diffusion parameter values either for rural con-
ditions (Pasquill 1962) or for urban conditions (McElroy
and Pooler 1968) are used to characterize σ_y and σ_z
by power law functions:

     σ_y = a x^p
     σ_z = b x^q

     The rural parameters are given in Table 1, and the
urban parameters are given in Table 2.
       Table 1.  Fitted Constants for the Pasquill
                   Diffusion Parameters

           Crosswind        Constants for Vertical Diffusion Parameter, σ_z (2)
           Constant (1)   x ≤ x1                      x1 < x ≤ x2                    x2 < x
Stability
  Class        a          b        q     x1 (meters)   b        q     x2 (meters)    b          q

    A        0.40       0.115    1.03        250      0.00883  1.51        500      0.000226   2.10
    B        0.295      0.119    0.986      1000      0.0579   1.09     10,000      0.0579     1.09
    C        0.20       0.111    0.911      1000      0.111    0.911    10,000      0.111      0.911
    D        0.13       0.105    0.827      1000      0.392    0.636    10,000      0.948      0.540
    E        0.098      0.100    0.778      1000      0.373    0.587    10,000      2.85       0.366

    (1) σ_y = a x^0.9, where x is downwind distance from the source; σ_y and x are in meters.
    (2) σ_z = b x^q; σ_z and x are in meters.
   Table 2.  Fitted Constants for Urban Parameters
      Based on Turner Stability Classifications

                  Crosswind Constants (1)    Constants for Vertical Diffusion Parameter (2)
                                             x ≤ 600 m               x > 600 m
Stability Index      a          p            b         q             b         q
A (3)                -          -            -         -             -         -
B                   1.42       0.745        0.0926    1.18          0.0720    1.22
C                   1.26       0.730        0.0891    1.11          0.169     1.01
D                   1.13       0.710        0.0835    1.08          1.07      0.682
E                   0.992      0.650        0.0777    0.955         1.01      0.554

    (1) σ_y = a x^p, where x is the downwind distance from the source; σ_y and x are in meters.
    (2) σ_z = b x^q; σ_z and x are in meters.
    (3) Not available from the McElroy and Pooler data; use Class B values.
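
     For computation, the fitted power laws of Tables 1 and 2 reduce
to a simple lookup.  The sketch below encodes the rural constants of
Table 1; the urban constants of Table 2 are handled analogously.  The
dictionary and function names are illustrative only, and the crosswind
exponent of 0.9 is an assumption (the exponent is not tabulated above).

    # Rural (Pasquill) fitted constants from Table 1 (x in meters).
    RURAL_A = {"A": 0.40, "B": 0.295, "C": 0.20, "D": 0.13, "E": 0.098}
    RURAL_Z = {
        #        (x_upper_m,  b,        q) for each distance interval
        "A": [(250.0, 0.125, 1.03), (500.0, 0.00883, 1.51), (float("inf"), 0.000226, 2.10)],
        "B": [(1000.0, 0.119, 0.986), (10000.0, 0.0579, 1.09), (float("inf"), 0.0579, 1.09)],
        "C": [(1000.0, 0.111, 0.911), (10000.0, 0.111, 0.911), (float("inf"), 0.111, 0.911)],
        "D": [(1000.0, 0.105, 0.827), (10000.0, 0.392, 0.636), (float("inf"), 0.948, 0.540)],
        "E": [(1000.0, 0.100, 0.778), (10000.0, 0.373, 0.587), (float("inf"), 2.85, 0.366)],
    }

    def sigma_y_rural(x, stability):
        """Crosswind diffusion parameter (m); exponent 0.9 assumed."""
        return RURAL_A[stability] * x ** 0.9

    def sigma_z_rural(x, stability):
        """Vertical diffusion parameter (m), piecewise power law in x (m)."""
        for x_upper, b, q in RURAL_Z[stability]:
            if x <= x_upper:
                return b * x ** q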
     The stability classification  used  in  SCIM is  based
on the Pasquill classes of atmospheric stability using
a system suggested by Turner  (1964).  The  Turner sta-
bility categories are determined from routine  airport
weather observations.

Mixing Height

     The procedure which  is used to  define the height
of the mixing  layer  is the following:   Determine the
vertical temperature profile  from  the nearest  appropri-
ate  (same air  mass)  radiosonde, or by interpolation of
two  or more nearby radiosondes.  Estimate  minimum
morning and maximum  afternoon air  temperatures which
are  representative of the urban area.   The afternoon
temperature may be obtained directly from  airport
observations or other available data.   In  most cases
the  morning urban temperature will exceed  the  rural
temperature.   Construct adiabatic  temperature  profiles
from the urban temperatures which  intersect the rural
temperature profile.  The heights  of these intersect-
ions are assumed to  be the minimum and  maximum mixing
heights.  The method of interpolating between these
values to give hourly estimates (sketched in code after
the list) is:

     1.  Use the morning  minimum from midnight to
6 a.m.

     2.  Linearly interpolate between the  minimum  and
maximum between 6 a.m. and 2  p.m.

     3.  Use the afternoon maximum between 2 p.m.  and
midnight.
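
     A minimal sketch of this interpolation rule, assuming the day's
minimum and maximum mixing heights have already been derived as
described above (the function name is hypothetical):

    def hourly_mixing_height(hour, h_min, h_max):
        """Hourly mixing height (m) from the day's minimum and maximum values.

        hour   hour of day, 0-23 (0 = midnight)
        h_min  morning minimum mixing height (m)
        h_max  afternoon maximum mixing height (m)
        """
        if hour < 6:                 # midnight to 6 a.m.: morning minimum
            return h_min
        if hour <= 14:               # 6 a.m. to 2 p.m.: linear interpolation
            return h_min + (h_max - h_min) * (hour - 6) / 8.0
        return h_max                 # 2 p.m. to midnight: afternoon maximum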

                      Validation

 Validation  in  San  Francisco

     The average ratio of predicted  to  observed 20-day
mean concentration for 20 stations is 1.0, which is
excellent agreement.  The ratios obtained for each of
three regions  are more instructive.

     The Martinez (northeast)  and  Richmond  (north)  re-
gions are the  areas of most interest since  most of  the
large sources  are located in  those regions.  In the Rich-
mond region the model tends to overpredict  the concen-
tration and the regional  average ratio  is  1.4.   In  the
Martinez region the  station to station  ratios  are more
uniform than in the Richmond region; however, the regional
average predicted-to-observed ratio is 0.4, which is not
quite as good as in the Richmond region.  There is a
consistent and general underprediction  in  the  Martinez
region but the ratios are fairly uniform.   Validation
results were not as  favorable  in the San Francisco  region
where the predicted  to observed ratio is 3.9.   Because
the  observed concentrations in this  area'were  consis-
tently below the sensing  threshold of the  monitors, no
particular significance is attached  to  this result.
Overall, the performance  of the model on the 24-hour
average S02 concentrations from 20 stations for a  20-day
sample was judged to be acceptable.

 Validation  in  Boston

      Annual  Means.   S02 concentrations measured using
 gas  bubblers  were  compared  to model calculations  based
 on  8 hourly calculations, one for every third  hour of
 the  measurement day (Koch,  1975).  The  calculated values
 generally exceeded  the measured values by a small
 amount varying from about 5 µg/m3 when the measured
 value is 10 µg/m3 to about 7 µg/m3 when the measured
 value is 50 µg/m3.  The correlation coefficient for the
 15  pairs  of values  is 0.97.

-------
        Maximum 24-Hour Concentration.  SO2 concentrations
   were calculated for all days for which a measurement
   was available for 1972.  The maximum measured and cal-
   culated values do not necessarily correspond to the
   same day.  The calculated values deviate from the
   measured value by a maximum of 50 µg/m3 for a measured
   value of 150 µg/m3 and by 35 µg/m3 for a measured value
   of 75 µg/m3.  The correlation coefficient is 0.81 for
   the 14 values for which comparisons were made.  The
   mean difference of measured minus calculated concentra-
   tion is 2.4 µg/m3, and the standard deviation of the
   differences is 26 µg/m3.

        Maximum 3-Hour Concentration.   Hourly concentra-
   tions of S02 were measured at four monitoring sites.
   Calculations were made for every third hour for days on
   which concentrations were measured.   Three of the four
   calculated maximums are within 10 percent of the maxi-
   mum measured values.  At one site the calculated value
   exceeded the measured value by 50 percent.   However,
   since the calculations for two of the sites are sus-
   pected  to be subject to errors in the emission  inventory,
   these results  are not conclusive regarding  the  model
   validity in estimating maximum 3-hour concentrations.

        Distribution of 24-Hour Concentrations.  The char-
   acteristics of the frequency distribution of paired,
   measured, and calculated 24-hour concentrations at 14
   stations were determined (see Table 3).

  Table 3.  Summary of Correlations Between Parameters of
Paired, Measured, and Calculated 24-Hour SO2 Concentrations

Distribution Characteristic          Correlation Coefficient
                                     (14 Monitoring Sites)
Maximum Value                                0.81
95th Percentile                              0.89
Mean                                         0.97
Geometric Mean                               0.97
Standard Deviation                           0.89
Geometric Standard Deviation                 0.21

   The  measured  and  calculated  values  were  sorted  and
   ranked  from  high  to  low value,  independently, and the
   percentiles were  determined  by  linear  interpolation  of
   the  ranked arrays.   The paired  percentile  values do  not
   necessarily correspond  to  any specific day.

        The measured distribution  is well represented  by
   the  model calculations,  although the individual day-to-
   day  comparisons are  not as well correlated as the ranked
   percentiles might lead  one to expect.  The correlation
   coefficients  for  calculated  and measured values at a
   single  station vary  from 0 to 0.6 and average about  0.3.

        The chief reasons  why day-to-day variations in  S02
   concentrations are not  simulated in chronological se-
   quence  are:

        .  Only annual fuel consumption by point sources
          is accurately estimated.  Seasonal, weekly,
          and daily variations  are not represented.

        .  The allocation  of  residual and distillate oil
          to area sources  is only an approximation.

        .  While temperature  is  the best known basis for
          estimating variations in fuel consumption for
          space heating,  the sensitivity of  different
          fuel   users is not well known.

        .  Meteorological data obtained from  a single
          site may  not be  representative of  a metro-
          politan-area-wide average on some  days.
     The uncertainties cited above have plagued all
urban modeling studies.  The result is that, although
the concentration on a given day may differ significantly
from the model estimate, averages over a significant
period of time (a year or more) agree much more closely,
because randomly distributed errors balance out over a
long period of time.

          Evaluation of Alternative Fuel-Sulfur
             Regulations in the Boston AQCR

     The SCIM model was used to analyze the impact of
eight fuel scenarios (described in Table 4) on the
ambient concentrations of SO2 in the Metropolitan
Boston area.  Sulfur dioxide concentrations were cal-
culated for every third hour of every sixth day of 1972
at 135 receptor locations (Koch, 1975).
  Table 4.  Fuel Sulfur Content for Boston AQCR Fuel
                Regulation Strategies

                      Maximum Fuel Sulfur Content (Percent)
                                     Residual Oil and Coal
Strategy     Distillate      Boston Core          Outside
Number       Oil             Area*                Core Area
1            0.3             0.5                  1.0
2            0.3             1.0                  1.0
3            0.3             0.5                  2.0
4            0.3             1.0                  2.0
5            0.5             0.5                  1.0
6            0.5             1.0                  1.0
7            0.5             0.5                  2.0
8            0.5             1.0                  2.0

     *Core area of 13 towns, including Arlington, Belmont, Boston, Brookline, Cambridge,
     Chelsea, Everett, Malden, Medford, Newton, Somerville, Waltham, and Watertown.

Emission  Inventory

     Sulfur dioxide  emissions  for  point and area  sources
were estimated from  the  1972 state emission inventory
which included sulfur  dioxide  emissions,  amount and
type of  fuel  used, and data related to  the effective
height of emissions  for  356 point  sources and 1718 area
sources.

     The point source  data were used without attempting
to account for seasonal  or diurnal  variations in  the
rate of sulfur dioxide emissions.  Area source emissions,
which have a greater impact on ground-level concentra-
tions than point sources because they are released at
lower heights and have less buoyancy, are mostly due to
space heating.  It is reasonable to generalize on the
seasonal and  diurnal variations in their  emissions
using the relationship:

    \frac{Q_i}{Q} \;=\; \frac{1-F}{N}
    \;+\; F\,\frac{D_i\,(H_i - T_i)}{\sum_j D_j\,(H_j - T_j)}

where Q_i is the emission in hour i, Q is the annual emission,
N is the number of hours in the year, (H_i − T_i) is taken as
zero whenever the air temperature T_i exceeds the heating
threshold H_i, and the sum in the denominator is the annual sum.

     The parameters D_i and H_i (Table 5) were previously
determined by the best fit  between  model  calculations
and sulfur dioxide measurements  in  New York City.   A
value for F  (the fraction of emissions which  are sen-
sitive to temperature) of 0.8  was adopted for the
Boston area  based on correlations between model  cal-
culations using experimental values of F and  sulfur
dioxide measurements at  some 20  sites.*
*It was later reported to M. Rosenstein of  EPA  Region I
 by the Better Home Heating Council that 85 percent of
 fuel  usage by a typical Boston home is for space  heat-
 ing.

-------
  Table 5.  Hourly Values of Fuel Demand Parameters
            for Estimating Space Heating

Hour of Day    Heating Threshold, H_i (°F)    Heat Demand, D_i
  1                  55
  2                  55
  3                  55                             .343
  4                  56                             .434
  5                  58                             .340
  6                  59                             .416
  7                  61                             .685
  8                  63                             .123
  9                  64                             .115
 10                  65                             .046
 11                  65                             .936
 12                  65                             .936
 13                  65                             .998
 14                  65                             .003
 15                  65                             .039
 16                  65                             .152
 17                  65                             .243
 18                  65                             .313
 19                  65                             .339
 20                  65                             .306
 21                  64                             .200
 22                  62                            0.316
 23                  60                            0.387
 24                  56                            0.386
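
     The hour-by-hour apportionment can be sketched as follows.  This
follows the relationship as reconstructed above and is not the GEOMET
code; the uniform treatment of the non-heating fraction (1 − F), the
truncation of negative temperature deficits, and all names are
assumptions made here for illustration.

    def hourly_area_emission(Q_annual, F, D_i, H_i, T_i, S_annual, n_hours=8760):
        """Area-source emission for one hour.

        Q_annual  total annual emission for the source
        F         fraction of emissions sensitive to temperature (0.8 for Boston)
        D_i, H_i  heat-demand factor and heating threshold for this hour (Table 5)
        T_i       air temperature for this hour (same units as H_i)
        S_annual  annual sum of D_j * max(H_j - T_j, 0) over all hours of the year
        n_hours   hours in the year, over which the non-heating part is spread
        """
        heating = F * D_i * max(H_i - T_i, 0.0) / S_annual   # temperature-sensitive part
        base = (1.0 - F) / n_hours                            # assumed uniform base load
        return Q_annual * (heating + base)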
     The five  meteorological  parameters  required as  in-
put to SCIM are:   (1)  wind direction,  (2)  wind speed,
(3) temperature,  (4)  atmospheric stability,  and (5)  mix-
ing height.  Measurements  of  wind speed  and  direction,
cloud cover, and  air  temperature observed  at Logan Air-
port in Boston every  third hour of the day during 1972
were obtained on magnetic tape from the National Climat-
ic Center  (NCC) in Asheville, North Carolina.  The wind
speed, total amount of cloud  cover in  tenths, and the
height of the cloud ceiling were used  in the SCIM model
to determine the  atmospheric  stability class.  A special
program was used  to calculate the mixing height for  each
12-hour radiosonde observation time using  the radiosonde
data for Portland, Maine,  and the surface  temperature  ob-
served at Logan Airport.   The mixing height is defined
as the greatest height to which a parcel of air at
the surface can be lifted  before it becomes 1°C or more
colder than atmospheric temperatures as indicated by the
radiosonde temperature profile.  Temperature changes in
the displaced parcel  are computed assuming adiabatic
expansion of the air and absorption of any latent heat
due to condensation as the parcel is lifted.  The 12-
hour mixing heights are interpolated to hourly values.

Annual Mean Concentrations of SO2

     The annual mean concentrations computed by the
model  show that the highest concentrations for each
scenario occur in three locations which form a belt
stretching from South Boston through the Boston Hub to
Everett.  This belt runs through the center of the prin-
cipal sources of SO2 as shown in Figure 1.
     The  primary  National  Ambient Air Quality Standard
 for annual mean concentrations of SO2 (80 µg/m3) is
 exceeded  in the belt  of high  concentrations  in scenarios
 2, 4,  6,  and 8.   These  are the  scenarios  in  which fuels
 other  than  distillate oil  are allowed to  contain  1.0
 percent sulfur  in the Boston  core area.   Furthermore,
 the  gradient around the high  belt zone is intensified
 as compared to  the present situation (scenario 1).   In
 scenarios 3, 5, and 7 the  increases are much more uni-
 form and  dispersed.   In scenario 7, in which both the
 sulfur content  of all distillate oil and  the sulfur
 content of other  fuels  outside  the core area are  raised,
 the  maximum concentration  in  the belt zone is very
 close  to  the NAAQS.

 Maximum Short-Term Concentrations of SO2

     The calculations for each receptor were used, assum-
 ing a log-normal distribution, to estimate the 99.73
 percentile (i.e., 364/365 of the distribution) and the
 99.97 percentile (i.e., 2919/2920 of the distribution),
 which correspond, respectively, to the 24-hour and 3-hour
 values not expected to be exceeded more than once per
 year.  The geometric mean and standard deviation
 for  24-hour values were determined from the  average  of
 eight  3-hour values for every sixth day.   All values
 were used to determine  a geometric mean and  standard
 deviation of 1-hour values.  The 1-hour and  24-hour
 values were converted to 3-hour values using the  rela-
 tionships suggested by Larsen (1971).  The minimum of
 the  two  estimates was selected.

     The percentile values were determined as follows:

        \chi_p = m_g\, s_g^{\,Z_p}

where m_g and s_g are the geometric mean and geometric standard
deviation and Z_p is the number of standard deviations
corresponding to percentile p.
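     A brief sketch of this percentile estimate, assuming a set of
calculated concentrations at one receptor (the Larsen averaging-time
conversion is not reproduced here, and the names are illustrative):

    import math

    def lognormal_percentile(values, z_p):
        """Concentration at the percentile whose standard normal deviate is
        z_p, assuming a log-normal distribution: chi_p = m_g * s_g ** z_p,
        where m_g and s_g are the geometric mean and geometric standard
        deviation of the sample."""
        logs = [math.log(v) for v in values if v > 0.0]
        mean_log = sum(logs) / len(logs)
        sd_log = math.sqrt(sum((u - mean_log) ** 2 for u in logs) / (len(logs) - 1))
        return math.exp(mean_log) * math.exp(sd_log) ** z_p

    # Example: the 99.73 percentile (364/365 of the distribution) corresponds
    # to a standard normal deviate of about 2.78; the 99.97 percentile
    # (2919/2920) corresponds to about 3.40.
    # chi_9973 = lognormal_percentile(calculated_24hr_values, 2.78)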
                                                               Using this procedure, the 24-hour NAAQS was found
                                                          to be exceeded at several locations for every scenario.
                                                          The distribution of calculated 24-hour concentrations
                                                          was plotted on log-probability scales for each location
                                                          which exceeded the NAAQS and for sufficient additional
                                                          locations to  identify the maximum concentration associ-
                                                          ated with each scenario.  At most of the locations
                                                          examined, it  was found that the distribution leveled off
                                                          near the high values and the calculated log-normal dis-
                                                          tribution was a poor fit.  However, the high end of the
                                                          distribution  could be fitted by eye to a straight line
                                                          which was consistent with the data.  An extrapolation
                                                          of this visually fitted line was used to derive a new,
                                                          more reasonable estimate of the concentrations not
                                                          exceeded more than once per year.

                                                               When the top part of the distribution  is extrapo-
                                                          lated graphically to determine the annual maximum con-
                                                          centration, it is estimated that for scenarios 1, 3, 5,
                                                          and 7 none of the sites will exceed the 24-hour standard.
                                                          When this procedure is repeated for 3-hour  concentra-
                                                          tions,  it is  estimated that for scenarios 1, 3, 5, and
                                                          7 none  of the sites will exceed the 3-hour  standard.
                                                          With regard to the scenarios 2, 4, 6, and 8, it is
                                                          estimated that both the  3-hour and 24-hour  standards
                                                          will be exceeded at several locations due to the
                                                          potentially large increase of S02 emissions from
                                                          point sources in the core area.

Figure 1.  Locations of Principal Point and Area Sources
    of SO2, Belt of High Computed SO2 Concentrations,
        and Receptors Used for Model Calculations

    [Map of the Metropolitan Boston area showing the point
    sources, area sources, belt of high computed concentrations,
    and model receptors.]

   Table 6.  Maximum Concentrations For Each Scenario

            Annual                      24-Hour                     3-Hour
Scenario    Receptor  Conc. (µg/m3)     Receptor  Conc. (µg/m3)     Receptor  Conc. (µg/m3)
1           69            60            69            260           40            800
2           69           110            69            500           40           1500
3           1, 69         60            10, 93        290           40            800
4           69           110            69            500           40           1500
5           69            65            74            290           40            800
6           69           115            69            500           40           1500
7           1             70            10            330           10            860
8           69           115            76            510           40           1500

-------
         Evaluation of Alternative SO2 Emission
              Limitations in San Francisco

      The impact of control  strategies which limit pro-
 cess source emissions to no greater than 300, 500, 1000,
 or 2000 ppm or which limit the sulfur content of the
 fuel in combustion sources to 0.3, 0.5, 0.7, and 0.9
 percent sulfur was evaluated for the San Francisco
 Bay Area.  As a further consideration, an alternative
 to limit power-generating plants to consumption of 0.5
 percent sulfur fuel oil and prohibit their use of nat-
 ural gas was evaluated.  These nine alternatives, plus
 the present emission situation, form the ten strategies
 evaluated in this study.

 Source Emission Inventory

      Data were obtained for point and area sources from
 the Bay Area Air Pollution Control District (BAAPCD).
 The data for 41 points were updated by reviewing sev-
 eral  sets of supplementary data to select those which
 are complete and are most representative.   No sea-
 sonal  or diurnal variations were applied to the point
 source emissions.   The area source emissions were
 represented by uniform 5 km squares over the whole
 area.   Seasonal and diurnal variations in emissions
 were furnished by BAAPCD for each grid square.

 Meteorological Data

      Meteorological data used in this study included
 surface observations from San Francisco and Oakland
 International Airports, upper air observations from
 Oakland International Airport, and surface wind speed
 and direction from seven sites operated by private
 companies.  A vector average of wind speed and direction
 was computed for each of three regions, which pro-
 vides  a reasonable spatial  variation of the wind
 throughout the area.

 Analysis

      Monthly variations in  the power plant emissions
 and hourly variations of the area source emissions were
 represented  in the model.   Due to  the  complex wind  pat-
 terns  in the Bay Area three separate  regions  represent-
 ing meteorological  and  geographical  groupings of the
 S02 sources  were used.   Winds  for  each source region
 are used to advect SO2 into neighboring regions.  24-
 hour average concentrations were computed for every other
 day in 1973 by averaging eight 1-hour concentrations.

      The annual  mean  and the highest predicted  24-hour
 average concentration for each of the  ten  strategies
 were determined for each of 120 locations  based  on
 hourly evaluations for  about 1300  hours.   The annual
 maximum 24-hour concentration  was  estimated by  sta-
 tistical  extrapolation  of the  geometric  mean  and the
 geometric standard deviation,  assuming a log-normal
 distribution.

      It was  found  that  the  national  standard  for annual
 means  is not exceeded at any point under all  ten of the
 strategies.   However, the 24-hour  standard will  be
 exceeded if a strategy is developed which allows non-
 combustion emissions of SO2 which exceed 2500 ppm.
 Based on this finding, it is recommended that both a
 concentration-emission limit for process industries
 (e.g., 2000 ppm) and a fuel-sulfur limit (e.g., 1.0
 percent) be adopted for the Bay Area.
                       Symbols

a, b, p, q        Empirical parameters for the diffusion functions
                  (σ_y and σ_z)

χ_p, χ, χ_A       Concentration

q                 Point source emission rate

Q_A, Q_i          Area source emission rate per unit area

D_i, H_i          Space heat demand factor and temperature threshold

F                 Fraction of emissions due to space heating

D_s, H_s, T_s, V_s   Stack diameter, height, temperature, and velocity

h, h_A            Effective stack height

ΔH                Plume rise

v (or v_h)        Wind speed (v_h denotes the wind speed at height h)

k                 Exponential pollutant decay constant

L                 Mixing height

m_g, s_g          Geometric mean and standard deviation

x, y              Alongwind and crosswind rectangular coordinates

y_1, y_2          Distances to upwind and crosswind edges of the
                  area source

T                 Air temperature

σ_y, σ_z          Horizontal and vertical diffusion parameters

Z_p               Number of standard deviations corresponding to
                  percentile p
                      References

Briggs, G.A., Plume  Rise,  U.S. Atomic  Energy  Commission,
  Oak  Ridge, Tennessee,  1969.

Koch,  R.C., and S.D.  Thayer, Validation  and Sensitivity
  Analysis of the Gaussian Plume Multiple-Source  Urban
  Diffusion Model. Contract No. CPA  70-94, 1971.
  Prepared for Environmental Protection  Agency, Research
  Triangle Park, N.C., by  GEOMET,  Incorporated, Gaithers-
  burg, Md.  Available in  PB 20691 from  NTIS,  Spring-
  field, Va.

Koch,  R.C., and G.E.  Fisher, Evaluation  of the Multiple-
  Source Gaussian Plume  Diffusion  Model,  Phase I  Report,
  Contract No. 68-02-0281, 1973.   Prepared for Environ-
  mental Protection  Agency, Research Triangle Park,
  N.C., by GEOMET, Incorporated, Gaithersburg, Md.

Koch, R.C., Impact of Proposed Revisions in Fuel-Sulfur
  Regulations on SO2 Concentrations in the Metropolitan
  Boston Area, Task 3, Final Report, Contract No. 68-02-
  1442, 1975.  Prepared for Environmental Protection
  Agency, Research Triangle Park, N.C., by GEOMET,
  Incorporated, Gaithersburg, Md.

McElroy, J.L., and F. Pooler, St.  Louis  Dispersion
  Study, Vol. II   Analysis.  Publication No.  AP-53,
  1968, Environmental Protection Agency,  Research
  Triangle Park, N.C.

Munn,  R.E., Descriptive  Meteorology, Advances in  Geo-
  physics, Supplement No.  1, Academic  Press,  New
  York, 1966.

Pasquill, F., Atmospheric  Diffusion, D.  Van Nostrand
  Co., Ltd., London, 1962.

Turner, D.B., "A Diffusion Model for an  Urban Area."
  Journal of Applied Meteorology,  3, 83-91, 1964.

-------
                                     MODELING OF PARTICULATE AND SULFUR DIOXIDE IN SUPPORT OF TEN-YEAR PLANNING
             Richard A.  Porter, P.E.
                   Meteorology
             Texas  Air Control Board
                 Austin,  Texas
                 John H.  Christiansen
                    Data Processing
                Texas Air Control Board
                     Austin, Texas
Urban air pollution modeling is a vital part of the
planning for attainment and maintenance of ambient air
quality standards.   A Gaussian plume model based on
annual climatology and an accurate emissions inventory
can be made to represent adequately the ambient condi-
tions in an urban area through calibration with am-
bient air quality data.  A model such as the Texas
Climatological Model that incorporates options for use
in control strategy development allows the analyst to
identify sources of current and projected violations
of ambient air quality standards.
                    Introduction

Mathematical modeling is an important tool for re-
lating emitted pollutants to ambient air quality for
air quality maintenance planning and analysis.  As
such, there are three important elements in the model-
ing process: emissions inventory, the computer model
algorithm, and air quality data from ambient air moni-
tors.  This paper discusses the modeling process as
it applies to support for Air Quality Maintenance
Planning and Analysis (AQMPA) for the pollutants sul-
fur dioxide (SO2) and total suspended particulate
(TSP).  A method of obtaining and maintaining an emis-
sions inventory for modeling is discussed.  The Texas
Climatological Model (TCM) computer algorithm and its
use is outlined.  The use of ambient air quality data
for model calibration is illustrated.  Finally, model
projections of ambient air quality for future years
are discussed.
                Emissions Inventory

The emissions inventory for Air Quality Maintenance
Planning and Analysis (AQMPA) modeling is divided into
emissions from point sources and emissions from area
sources.  For a typical Gaussian plume model, the in-
formation required for point sources is location,
emission rate, and stack parameters (stack height,
stack diameter, exit gas flow rate, and exit gas
temperature).  Area sources are formed as squares of
various sizes.  The information required for area
sources is location of the southwest corner, length of
a side of the square, and the emission rate.  Point
sources are usually the major sources of pollution;
therefore, a major effort should be made to be as
accurate and detailed in the point source inventory
as possible.  Area sources are appropriate for aggre-
gating the relatively small, numerous sources such as
residential space heating and vehicle traffic.  Guid-
ance for establishing emission inventories is pro-
vided by the Environmental Protection Agency (EPA).9

In a major urban area the cost of gathering a special
one-time emissions inventory is prohibitive.  For-
tunately the information gathered by the states for
the National Emissions Data System (NEDS) can be pro-
cessed to provide a point source emissions inventory
for modeling; however, such data are not available as
one of the regular reports from NEDS.  Once an in-
ventory base has been established, it is desirable to
establish a system for updating the inventory so that
in successive years a current inventory will be avail-
able for modeling.  The Texas Air Control Board has
devised a method of inventory maintenance that com-
bines the use of annual inventory of specified indus-
trial sources with onsite inspection and permit moni-
toring to continually update the emission inventory.
A base year emissions inventory for 1973 was estab-
lished by mailing questionnaires to all known major
sources of pollution.  In future years questionnaires
will be mailed to all accounts that have more than 500
tons/year emissions of any pollutant.  In addition,
inventories will be updated any time an operating per-
mit application is approved for new or expanded facili-
ties; and all visits to facilities by investigators
from the State or Regional offices of the Texas Air
Control Board will include a check for changes to the
emissions inventory for that account.

            Computer Dispersion Algorithm

The primary computer algorithm used for AQMPA plan-
ning by the Texas Air Control Board is the Texas
Climatological Model (TCM).5  The TCM combines the
familiar Gaussian dispersion algorithm for point
sources with a simple area source algorithm suggested
by Hanna and Gifford11 to compute pollution concen-
trations in an urban environment.  The TCM is similar
in concept to the Climatological Dispersion Model
(CDM)4 in that both models are based on the same point
source and plume rise equations; however, the TCM
differs significantly in execution from the CDM be-
cause the point source equation is solved by inter-
polating in a table of precalculated coefficients and
a simple equation is used to calculate concentrations
due to area sources.  As a result of these changes,
the TCM is much faster than the CDM (roughly two
orders of magnitude); but both models predict essen-
tially the same concentrations given the same input
data.
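
The simple area-source treatment referred to above (Hanna's narrow-
plume approach) reduces, in its most basic form, to a concentration
proportional to the local area emission rate divided by wind speed.
The sketch below shows only that basic form; it is not the TCM code,
and the stability-dependent coefficient is a placeholder rather than
a value taken from the model.

    def area_source_concentration(q_area, wind_speed, C):
        """Simplest form of the Hanna-type area-source estimate:
            chi = C * q_area / u
        q_area      area emission rate per unit area (e.g., g m-2 s-1)
        wind_speed  mean wind speed (m/s)
        C           dimensionless coefficient depending on stability and
                    city (placeholder; the TCM's own coefficients are not
                    reproduced here)
        """
        return C * q_area / wind_speed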

The TCM is suitable for non-reactive pollutants such
as S02, TSP, and carbon monoxide (CO).  Input to the
model consists of:  (1) a stability wind rose for a
year or a season; (2) point source and area source
parameters for two pollutants; (3) air quality monitor
data for calibration (optional).

Model output from the version of the TCM used in
AQMPA planning differs significantly from the pub-
lished version of the model (known originally as the
Fast Air Quality Model).5  These changes are tailored
to the needs of the analyst charged with control
strategy development.  In addition to the listing of
expected concentrations and a punched card output op-
tion for isopleth mapping, the control strategy ver-
sion of the TCM provides a print plot grid suitable
for hand isoplething and a culpability list of the
five high contributors to the concentration at each
grid point.

-------
                  Model Calibration
 A mathematical model can,  at  best,  only account  for
 those physical phenomena which  are  described by  the
 mathematical algorithm.  Gaussian urban models such
 as the TCM, CDM, and Air Quality Display Model (AQDM)14
 contain algorithms  that  account for steady-state emis-
 sions from discrete sources defined in  the  inventory
 with well-defined meteorological conditions that change
 in discrete increments.  The TCM and the CDM do account
 for some pollutant reactivity with a decay half-life
 term, but in practice neither model seems adequately to
 account for the transformation of SO2 to sulfates.
 There are many important transformations which affect
 the concentration of pollutants in an urban environ-
 ment that these urban models do not attempt to address.
 These transformations include meteorological condi-
 tions that vary continuously and often cannot be char-
 acterized by a single value at all altitudes of con-
 cern; reentrainment and background levels of pollution
 (especially important in TSP studies); pollutant re-
 activity (especially important in the SO2 to sulfate
 conversion); and absorption by sinks (important in CO
 removal).  The ideal solution is to modify  existing
 models or create  new models that include all important
 transformations of  pollution  in an  urban environment.
 This is a difficult ideal  to  fulfill.

 Reactivity,  reentrainment, background,  and  meteorology
 all change drastically from urban area  to urban  area,
 and there is no simple algorithm known  at this time
 that adequately accounts for  all these  important fac-
 tors.  There is a statistical technique, regression
 analysis,  which can be used to  relate the results of
 urban models to observed pollution  levels.  The  use
 of regression analysis for such a purpose is generally
 termed model calibration.  It would be  much better to
 build a model that  needs no calibration, since model
 calibration  is easily misused.   However, regression
 analysis is  the best  available method at this time
 to account for important transformations of the pol-
 lutant that  are not adequately  covered  by the model
 algorithm.

 Linear Regression

 Model calibration by linear regression involves creat-
 ing a scatter diagram of points (see Figure 1) that
 represent the observed pollutant concentration
 (vertical axis) versus the predicted concentration
 (horizontal axis).  A best-fit straight line is es-
 tablished for the data points by the method of least
 squares.  The equations of interest are:2

    X = a_0 + a_1 x                                                      (1)

    a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}
               {n \sum x_i^2 - \left(\sum x_i\right)^2}                  (2)

    a_0 = \frac{\sum y_i - a_1 \sum x_i}{n}                              (3)

where:

X    = the calibrated concentration
x_i  = the predicted concentration at the i-th point
y_i  = the observed concentration at the i-th point
a_0  = the intercept of the line of regression
a_1  = the slope of the line of regression
n    = the number of data points
A measure of how well changes in the observed data are
accounted for by the model is given by the correlation
coefficient:

    r = \frac{n \sum x_i y_i - \sum x_i \sum y_i}
             {\sqrt{\left[n \sum x_i^2 - (\sum x_i)^2\right]
                    \left[n \sum y_i^2 - (\sum y_i)^2\right]}}           (4)

The number of data points and the magnitude of r
(ranging from 0 = no relation to 1 = perfect correla-
tion) combine to give an estimate of the confidence
we can have in the calibration of the model:13

    Z = \frac{\sqrt{n-3}}{2}\,\ln\!\left(\frac{1+r}{1-r}\right)          (5)

where Z = the abscissa value of the normal probability curve.

       TABLE 1:  Confidence Level for Z Values

Z                     1.96      2.575     2.81
Confidence Level      95%       99%       99.5%
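
Equations (1) through (5) can be applied directly.  The following is
only a sketch with illustrative names, using the standard least-squares
formulas; it computes the calibration line, the correlation coefficient,
and the Z statistic of equation (5), to be judged against Table 1.

    import math

    def calibrate(predicted, observed):
        """Least-squares calibration of predicted versus observed concentrations.

        Returns (a0, a1, r, Z): intercept, slope, correlation coefficient,
        and the Z statistic of equation (5)."""
        n = len(predicted)
        sx = sum(predicted)
        sy = sum(observed)
        sxy = sum(x * y for x, y in zip(predicted, observed))
        sxx = sum(x * x for x in predicted)
        syy = sum(y * y for y in observed)

        a1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)                       # equation (2)
        a0 = (sy - a1 * sx) / n                                              # equation (3)
        r = (n * sxy - sx * sy) / math.sqrt(
            (n * sxx - sx ** 2) * (n * syy - sy ** 2))                       # equation (4)
        Z = 0.5 * math.sqrt(n - 3) * math.log((1 + r) / (1 - r))             # equation (5)
        return a0, a1, r, Z

    # The calibrated concentration at any grid point is then
    # X = a0 + a1 * x, as in equation (1).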
Ambient Air Data

Ambient air quality data from monitors in the urban
area being modeled are important elements in the cali-
bration procedure.  The monitors used should represent
ambient conditions at the location being modeled.  It
is important that the ambient air monitor be sited
properly (no wind flow obstructions), that the method
used be sensitive enough to measure ambient levels,
and that a large enough sample be taken to charac-
terize the annual mean value adequately.  Recommenda-
tions for monitor siting6 and data evaluation10 are
detailed in the EPA guideline series.  Because pol-
lutant distributions at urban ambient air monitors
have been found to be log-normal12, the annual geo-
metric mean should be used in model calibration.  The
problem of zero values in the computation can be
avoided by assigning a value equal to one-half the
minimum detectable level to measurements that fall be-
low the minimum detectable level.  However, any moni-
tor that has recorded more than 25 percent of its values
for the year below the minimum detectable level should
not be used for model calibration.
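
The handling of below-detection values described above can be sketched
as follows (the function and variable names are illustrative only):

    import math

    def annual_geometric_mean(samples, mdl):
        """Annual geometric mean for model calibration.

        Values below the minimum detectable level (mdl) are replaced by
        mdl/2; if more than 25 percent of the year's values are below the
        mdl, the monitor should not be used and None is returned."""
        below = sum(1 for v in samples if v < mdl)
        if below > 0.25 * len(samples):
            return None
        adjusted = [v if v >= mdl else 0.5 * mdl for v in samples]
        return math.exp(sum(math.log(v) for v in adjusted) / len(adjusted))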

Examples of Model Calibration
Figure 1 is a scatter diagram of observed and pre-
dicted values for TSP in the Dallas-Fort Worth metro-
politan area for 1972.  The model used is the TCM.
All TSP ambient air monitors in the area were surveyed
and only those monitors that were not wind flow ob-
structed were used to construct the calibration curve
of Figure 1.  Correlation and confidence level are
very high for these data.  The intercept of the line of
regression is about 24 µg/m3, which is a reasonable
(perhaps low) number for a background TSP level.
The slope of the regression line is 1.9.  A possible
interpretation of the slope is that TSP in the busy

-------
industrial areas is being reentrained because of the
high level of human activity.  This interpretation is
strictly conjecture and requires support by independent
tests before acceptance.
    FIGURE 1.  Dallas-Fort Worth, 1972, Observed versus
               Predicted TSP (µg/m3)

    [Scatter diagram of observed versus calculated TSP (µg/m3)
    with the fitted regression line; confidence level 99.99%.]

Figure 2 is a scatter diagram of the same urban area
with all monitors in the area used for calibration
without regard to wind flow obstructions.  The data
points referring to the sites with wind flow obstructed
monitors are indicated by x's, and the rest of the
monitors are shown with dots.  The solid line is the
same regression line as Figure 1.  The dashed line is
a result of a least-squares fit to all the monitor
data.  Although there is a dramatic drop in the cor-
relation coefficient when all sites are considered,
there is very little difference in the confidence level
because as r becomes smaller n increases.  Since includ-
ing data points whose physical relation to the model is
questionable results in almost no change in the confi-
dence level, the concept of confidence level is called
into question.  The critical assumption in the confi-
dence level equation is that the n observations are
independent.  The independence of the observations is
questionable because the means were generated from
samples taken on the same days of the year (thereby
experiencing the same meteorology and background) and
because some of the monitors are located close enough
together to be dominated by the same sources.  There-
fore, because of temporal and (in some cases) spatial
                          correlation between the observations a high confidence
                          level generated by equation (5) should be suspect.
                          Conversely, a low confidence level generated by equa-
                          tion (5) can be believed.  If the data will not support
                          a Z value of at least 1.96 (95% confidence), the validity
                          of the emissions inventory, the computer algorithm, and
                          the air quality data should be carefully examined.
                           FIGURE 2.  Comparison of Calibration Curves

                           [Scatter diagram of observed versus calculated
                           concentrations (µg/m3); the solid line is the
                           regression line of Figure 1 and the dashed line
                           is the least-squares fit to all data points.]
                          In terms  of designating an area as being in violation
                          of the annual standard, it makes very little differ-
                          ence which calibration curve in Figure 2 is used.
                           Based on the first calibration curve, all points for
                           which the uncalibrated model predictions exceed
                           27 µg/m3 would exceed the annual standard (75 µg/m3)
                           when calibrated.  Using the second calibration curve,
                           only those points whose uncalibrated values exceed
                           32 µg/m3 would be above the annual standard.  This
                          shows a degree of robustness in the calibrated model
                          predictions with respect to the quality of ambient  air
                          monitor data and illustrates the point that the model
                          predictions are not precision estimates.  At best,  the
                          calibrated urban air pollution model indicates "ball-
                          park" figures for projected ambient air quality.

-------
         Predicting Future Ambient Air Quality

 Mathematical modeling  allows the air quality planner
 to  consider the  impact of urban growth on future am-
 bient  air  quality.  It is necessary to project the
 future emissions inventory for the area being
 modeled.6,7  A joint frequency distribution of meteo-
 rological  elements  (stability wind rose) for a period
 of  several years should be used for climatological in-
 put to the model.  The model algorithm can then be
 exercised  using  projected emissions and average meteo-
 rology.  Linear  regression for model calibration can-
 not be performed because ambient air quality data are
 not available for future years.  The calibration
 equation that was established for a year of known
 emissions  inventory and sampled air quality data must
 be used.  Future year model projections can be accu-
 rate only to the degree that the projected emissions
 inventory is accurate, the future-year meteorology
 conforms to the average climatology, and the local
 conditions that influenced the model calibration
 equation remain the same.
                       References

 1.  "Air Quality Monitoring Site Description Guide-
     lines," USEPA, RTP, NC, OAQPS, Number 1.2-019.

 2.  Bevington, P.R., Data Reduction and Error Analysis
     for the Physical Sciences, McGraw-Hill, New York,
     1969.

 3.  Brier, G.W., Validity of the Air Quality Display
     Model Calibration Procedure, USEPA, RTP, NC
     (EPA-R4-73-017), January 1973.

 4.  Busse, A.D., and Zimmerman, J.R., User's Guide
     for the Climatological Dispersion Model, USEPA,
     RTP, NC (EPA-R4-73-024), 1973.

 5.  Christiansen, J.H., and Porter, R.A., "Ambient
     Air Quality Predictions with the Fast Air Quality
     Model," Proceedings of the Conference on Ambient
     Air Quality Measurements, Southwest Section,
     APCA, March 1975.

 6.  Guidelines for Air Quality Maintenance Planning
     and Analysis, Vol. 13, "Allocating Projected
     Emissions to Subcounty Areas," EPA-450/4-74-014,
     USEPA, RTP, NC, November 1974.

 7.  Guidelines for Air Quality Maintenance Planning
     and Analysis, Vol. 12, "Applying Atmospheric
     Simulation Models to Air Quality Maintenance
     Areas," EPA-450/4-74-013, USEPA, RTP, NC,
     September 1974.

 8.  Guidance for Air Quality Monitoring Network Design
     and Instrument Siting, USEPA, RTP, NC, OAQPS,
     Number 1.2-012, January 1974.

 9.  Guidelines for Compiling an Emission Inventory,
     Report No. APTD-1135, USEPA, RTP, NC 27711,
     March 1973.

10.  Guidelines for the Evaluation of Air Quality Data,
     USEPA, RTP, NC, OAQPS, Number 1.2-015, February.

11.  Hanna, S.R., "A Simple Method of Calculating
     Dispersion from Urban Area Sources," Journal of
     the Air Pollution Control Association, 21, 774,
     December 1971.

12.  Larsen, R.I., "A Mathematical Model for Relating
     Air Quality Measurements to Air Quality Standards,"
     Report AP-89, USEPA, RTP, NC, November 1971.

13.  Miller, I., and Freund, J.E., Probability and
     Statistics for Engineers, Prentice-Hall Inc.,
     Englewood Cliffs, N.J., 1965.

14.  TRW Systems Group, Air Quality Display Model,
     National Air Pollution Control Administration,
     Washington, D.C., 1969.

-------
                                        A MATHEMATICAL MODEL OF  DISSOLVED
                                       OXYGEN IN THE LOWER CUYAHOGA  RIVER
                                                  Alan E. Ranun
                                     Cleveland Environmental  Research  Group
                                           Cleveland State University
                                                 Cleveland, Ohio
                      ABSTRACT

     A computer model was developed to rapidly simulate
dissolved oxygen content in the Cuyahoga River under
varying conditions of flow and biochemical oxygen de-
mand.  The model, which has been used to simulate pres-
ent and projected dissolved oxygen levels for the navi-
gation channel of the Cuyahoga River, shows that even
if industrial and municipal discharges were completely
eliminated, other factors would remain significant
enough to cause a severe oxygen sag in the navigation
channel.
                     BACKGROUND

     Because of its recreational potential, and because
 the vast industrial complexes which span its banks depend
 upon it as a route for transporting raw and finished
 goods, the Cuyahoga River is an important river.  Its
 importance, however, is being overshadowed by its pol-
 lution.

     The current pollution problem in the Cuyahoga
 River is twofold:

 1)  The natural contour of the mouth and delta have
 been altered by man in an effort to make this section
 navigable to large vessels.  These alterations have
 decreased the velocity of water, which has in turn
 decreased the river's capacity for natural aeration of
 water in this section; and

 2)  Industries and municipalities have become dependent
 upon the river as a receptacle for their discharged
 waste.  This waste, which had generally been improperly
 treated or untreated, has created a condition of anoxia
 and physical degradation in certain sections of the
 river.

 Both of the above conditions have resulted in decreased
 dissolved oxygen in sections of the river.

     Because dissolved oxygen is vital to maintaining
 a homeostatic environment in stream ecosystems, one is
 justifiably concerned about the low dissolved oxygen
 content in sections of the Cuyahoga River.  This con-
 cern is not only for the effect that low dissolved ox-
 ygen may have upon the plant and animal life in the
 river, but also for the effect that it may have upon
 the near shore water quality in Lake Erie.

     In order to determine the effect of discharged
waste upon dissolved oxygen in the river and the effect
of river dissolved oxygen upon dissolved oxygen at the
 confluence of Lake Erie,  a mathematical simulation com-
puter model was developed.   A model is advantageous for
resolution of problems of this nature because param-
eters can be manipulated and hypothetical situations
can be tested.

     This model addresses itself to the problems of
dissolved oxygen,  and is  designed specifically for use
in the Cuyahoga River;  however,  minor alterations could
make it adaptable to any  stream possessing similar
physical-hydraulic conditions.
     The navigation channel is the dredged portion of
the lower Cuyahoga River which extends from its mouth
to mile point 6.  Dredging maintains the navigation
channel at a depth of approximately 25 feet.  While
lake water intrusion is generally restricted to the
lower one mile of the navigation channel, the hydraulic
effect of lake level fluctuations is suspected to exist
throughout much of the channel.  This hydraulic effect
tends to increase longitudinal mixing within the chan-
nel much as tidal flux increases longitudinal mixing in
estuaries.  In the case of estuaries the dispersive ef-
fects of tidal fluxing are generally experienced well
above that point where there is a measurable salinity
change.  Within the navigation channel, then, one might
expect dispersion to influence water quality to varying
degrees.  The most significant influence is observed
 during periods of low flow.  Because the magnitude of
 mixing and its significance to water quality were not
previously determined, a model of the navigation chan-
nel was developed to incorporate dispersion.
                        METHODS
MODEL FORM
Many forms of models have been developed for estuaries
in which dispersion is important and must be incorpor-
ated.  Of the many forms available, the finite differ-
ence approach was selected because of its logical par-
allelism to the Cuyahoga River and its amenability to
computerization.

Conceptually, the navigation channel was divided into
twenty sections,  each having a length of 0.3 miles.
The choice of the number of sections was dictated by
the hydrology and geometry of the channel and by the
amount of computer time required to obtain a solution.
Since the solution methodology requires inversion of
a matrix of order N (where N equals the number of sec-
tions in the river), the time to obtain a solution
increases significantly as N increases.  Each section is
considered completely mixed, and hence it is assumed
that no vertical  or horizontal variations within a
section of the river exist.

Mass balances were developed for each section with re-
spect to DO deficit and CBOD.   The balances incorporate
flow from section to section and dispersion and advec-
tion between adjacent sections.   Any input to or output
from a given section was included in the mass balance
equations for that section, as were source and sink
terms for processes occurring within a section.  The
approach has been described in detail by Thomann, 1972
(1). The model simulates steady state conditions.
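
A compact sketch of the kind of steady-state finite-difference balance
described above is given below.  It is not the author's program: the
geometry, coefficients, boundary treatment, and names are simplified
placeholders, and only the CBOD balance is shown (the DO-deficit
balance is assembled the same way with its own source and sink terms).

    import numpy as np

    def steady_state_cbod(N, Q, Eprime, V, K, W, c_upstream):
        """Steady-state CBOD in N completely mixed sections in series.

        Q          advective flow (volume/time), assumed constant along the channel
        Eprime     N+1 bulk dispersion terms E*A/dx at the section interfaces
                   (Eprime[0] is the upstream face, Eprime[N] the downstream face;
                   set Eprime[N] = 0 for a no-dispersion downstream boundary)
        V          N section volumes
        K          first-order CBOD decay rate (1/time)
        W          N waste loads (mass/time) discharged to each section
        c_upstream CBOD concentration of the water entering section 1
        Returns the vector of section concentrations."""
        A = np.zeros((N, N))
        b = np.array(W, dtype=float)

        for i in range(N):
            # mass leaving section i by outflow, interface dispersion, and decay
            A[i, i] = Q + Eprime[i] + Eprime[i + 1] + K * V[i]
            if i > 0:
                A[i, i - 1] = -(Q + Eprime[i])        # advection and dispersion from upstream
            else:
                b[i] += (Q + Eprime[0]) * c_upstream  # upstream boundary condition
            if i < N - 1:
                A[i, i + 1] = -Eprime[i + 1]          # dispersion from the downstream section
        return np.linalg.solve(A, b)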

DATA REQUIREMENTS

The data required as inputs to the model may be classi-
fied under three  headings:  (1)  coefficient determina-
tion data, (2) field data and (3) simulation run data.
Coefficient determination data and field data are nec-
essary to adapt the model's parameters to those of the
Cuyahoga River system.   Simulation run data is neces-
sary to exercise  the model utilizing various sets of
system conditions.

-------
 The coefficients considered in the model include longi-
 tudinal dispersion, flow, benthal uptake, deoxygenation
 and reaeration.

      Longitudinal Dispersion (D ) within the channel

 was estimated from chloride distributions.   Within the
 lower one mile,  where lake intrusion is dominant, re-
 gression techniques produced estimates of longitudinal
 mixing coefficients on the order of 1.0-2.5 mi2/day.
 It was observed that mixing effects were most intense
 within this region but became less intense as one pro-
 ceeded upstream.   Since longitudinal dispersion had
 never been measured upstream, the rate of decrease in
 magnitude of dispersion was not known.  However, rea-
 sonable estimates were obtained from historical data
 on upstream chloride distributions.   As will be noted
 in the following discussion,  such errors as those in-
 volved in 'educated guessing' were found to be rela-
 tively unimportant to the system's general  behavior.

      Benthal Uptake (Sb) has  never been measured within
 the navigation channel and consequently no  data was
 available regarding the magnitude of this sink in the
 river.   A decision not to design a study to measure
 benthal uptake was based upon current investigations
 being conducted  at Cleveland  State University.   These
 investigations are attempting to evaluate the design
 of benthal respirometers of the bell jar variety.

 Preliminary results of the above mentioned  investiga-
 tions indicate numerous problems resulting  from the
 use of this type  respirometer and tend to cast doubt
 upon measurements obtained from its  use.  Additionally,
 the model did not appear to be  very  sensitive to
 changes in benthal uptake (see  section on Sensitivity
 Analysis).   Since it was felt that the cost and time
 required to conduct such a study were not justifiable,
 a  study of benthal uptake was not undertaken.   Litera-
 ture estimates of benthal uptake in rivers such as the
 Cuyahoga indicate a range of values from 2-10 gm/m2/day.

 An estimated uptake from the  channel  of 5 gm/m2/day
 was used.

      Reaeration was estimated from the  empirical re-
 lationship formulated by O'Connor (3).
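
 For reference, the most widely used O'Connor relationship (the
 O'Connor-Dobbins formula) has the form sketched below.  Whether this
 is the exact variant used in the study is not stated, so treat the
 constant as an assumption.

    def reaeration_rate(velocity_fps, depth_ft):
        """O'Connor-Dobbins reaeration coefficient (per day, base e):
            k_a = 12.9 * U**0.5 / H**1.5
        with mean velocity U in ft/s and mean depth H in ft."""
        return 12.9 * velocity_fps ** 0.5 / depth_ft ** 1.5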

      Flow data were taken from USGS records.
                 RESULTS AND DISCUSSION

 SENSITIVITY ANALYSES

 One of the more useful applications of water quality
 models is  to test the response of the water quality
 parameters under observation to changes in system pa-
 rameters.   By holding all but one parameter constant,
 it  is  possible to determine the relative effects of
 each parameter on DO.  Loadings used in the sensitivity
 analyses were taken from Table 1, with the exception of
 flow which was 850 cfs in the channel.
 Table 1.  [Section-by-section system parameters used in the
 navigation channel simulations: depth, cross-sectional area,
 flow, dispersion, waste load, benthal uptake, and temperature
 for the 20 channel sections.]

    D    - depth (ft)
    A    - area (ft2)
    F    - flow (cfs)
    D_s  - dispersion (mi2/day)
    W    - waste load (lbs/day)
    Sb   - benthal uptake (gm/m2/day)
    TEMP - temperature (°C)

      Deoxygenation coefficients (K_1) in the lower Cuya-
 hoga River were estimated from previous Cuyahoga River
 studies.  Values utilized by Dalton, Dalton and Little
 (2) ranged from 0.2 to 0.07 per day (base e).
 These estimates were derived from an empirical equation
 developed by O'Connor (3) which utilized a combination
 of parameters, including river depth, to estimate K_1.
 Small variations in the value of K_1 were found to have
 fairly large effects upon dissolved oxygen steady-state
 concentrations in the navigation channel (see section
 on Sensitivity Analysis).

     Nitrogenous Demand  (Nitrification)  was  assumed  to
be negligible.  Some investigators  have  assumed  the
process to be important, while others  (2)  considered
it unlikely that nitrification occurs.    O'Connor (4)
suggests that nitrification is typically observed when
dissolved oxygen exceeds 1-2 mg/1.   This is  generally
true for rivers which do not receive a high  concentra-
tion of various industrial wastes which  inhibit  bacter-
ial growth; however, the navigation channel, because
of its high industrial waste load,   does  not necessarily
meet the conditions for this assumption.    The basic  ar-
guments against nitrification are based  upon the  as-
sumption that river and water quality  conditions  exist-
ing at critical low flow periods are not suitable for
growth of nitrifying bacteria.  No reliable  experi-
mental study of the nitrification process  within  the
lower Cuyahoga exists despite the fact that  loadings of
ammonia are significant enough,  through potential  ni-
     Dispersion.  The  effect  of variations in disper-
sion coefficients is illustrated in  Figure (1A).   Dou-
bling the dispersion coefficients while holding  flow
and temperature constant had  very little effect  upon
the results.  This suggests that a 2-  or 4-  fold error
in dispersion estimates would not appreciably affect
the simulation output.

     Benthal Uptake.   Figure  (IB)  indicates  that the
maximum difference in  DO which  results from  a 4-  fold
change in benthal uptake is only about 1 mg/1.   Al-
though benthal uptake  has not been measured  in the
 river, it is doubtful that it is greater than
 10 gm/m2/day.  Hence an error in estimating benthal up-
take by 2- to 4- fold  was also  not critical  to the sim-
ulation of the DO sag  in the  channel.

      Deoxygenation.  Figure (1C) illustrates the re-
 sults of varying the deoxygenation coefficient (K_1) in
 the channel.  It is immediately apparent that the mag-
 nitude of the sag is quite sensitive to relatively
 small changes in K_1.  For example, decreasing K_1 from
 0.15 to 0.07 resulted in an increase of nearly 1.5 mg/l
 in the minimum DO.  Literature values of K_1 in the Cuy-
 ahoga River ranged from 0.25 to 0.07.  For critical
tuning of the model a  study of  deoxygenation  coeffici-
ents in the channel during critical low flow  conditions
is necessary.

-------
      Upstream Conditions.  Figure (ID) illustrates the
 effect upon DO concentration of improving the quality
 of the water entering the channel.  The effect of im-
 proving water quality by 1 mg/1 at the head of the
 channel increases the minimum DO near mile point 2.0
 by approximately 0.5 mg/1.  To obtain water having 1
 mg/1 of DO at mile point 2.0 would require upstream
 water of better than 5 mg/1 DO.
                 Figure 1.  Sensitivity Analyses.

    [Four panels (A-D) of dissolved oxygen (mg/l) versus mile
    point, showing the effect of varying dispersion, benthal
    uptake, deoxygenation, and upstream conditions.]
 MODEL VERIFICATION

 Because there was no data available for simultaneous
 DO at several locations within the channel, a sampling
 run was conducted in the channel on August 28, 1974 to
 supply this information.  On this date the flow within
 the channel was 715 cfs.  By slightly adjusting dis-
 persion coefficients for the upper reach of the chan-
 nel, it was possible to obtain a stable simulation for
 the river conditions on August 28, 1974.  This minor
 adjustment of dispersion coefficients can be justified
 since the sensitivity analysis indicated the system to
 be relatively insensitive to this parameter.

 The major trend in dissolved oxygen fluctuations was
 simulated by the model (Figure 2).  From upstream to
 downstream the general shape of the observed data was
 successfully modeled.  It is assumed that biological
 and random influences, which were not incorporated in
 the model, resulted in the slight variations at each
 sample point.

 Figure (2) indicates that the model is valid and, if
 properly utilized, can provide significant insight and
 understanding into DO behavior in the lower Cuyahoga
 River.

 SIMULATION RUNS

      General.   A variety of simulation runs was con-
 ducted.   These runs incorporated variations in waste
 load allocations,  and input values were altered to re-
 flect changes  in waste load conditions (BOD and flow).
 The simulation runs were used to assess the influence
 of alternate waste quality control measures on the
 overall  dissolved oxygen quality in the system.
                   Figure 2. Verification Run (dissolved oxygen vs. mile point).


The program was developed  for  input of values for
cross-sectional area, flow and BOD.  Cross-sectional
areas at the  interface of  adjacent sections, where
dispersion is considered,  were obtained from U. S. Army
Corps of Engineers' dredging maps.  Where necessary,
water levels  were adjusted to  late-summer, early-fall
depths.

Flow within the navigation channel is relatively con-
stant with respect to distance.  Small increases in
flow occur near the upper  end  of the channel due to
the Ohio Canal return, and to  a much lesser degree,
Morgan Run and Burke Brook.  Flow data utilized in the
simulations conducted within the navigation channel
were averages obtained from Havens and Emerson (5) and
from the United States Geological Survey Water Re-
sources Data  for Ohio (6) (7).  A low flow of 345 cfs
and an average flow of 850 cfs were used.

Photosynthesis, a major biological source of DO, was
considered to be insignificant within the navigation
channel.  Water is turbid  and  it is doubtful that any
significant photosynthesis occurs except at the sur-
face.  Chlorophyll analyses of both surface and bottom
water within  the lower channel indicated no measurable
chlorophyll.

BOD loadings  were determined from Ohio EPA records.
Records indicated that the majority of industries
within the navigation channel which discharge signifi-
cant amounts of waste are  located above section 10
(m.p. 3.15).  The results of simulation runs utilizing
these data are presented and compared in the following.

     Simulation 1.  This baseline simulation illus-
trates the effect of present municipal and industrial
discharges on water quality during low flow conditions.
It was assumed that if all other water quality param-
eters remained constant or improved, this simulation
would represent the poorest expected water quality pro-
file for the navigation channel.  System parameters
for this simulation are presented in Table 1.

The results (Figure 3A)  of this simulation show that
discharges into Sections 2, 4  and 5 degrade water
quality until the DO reaches zero in Section 5 (m.p.
4.65).   More waste is discharged into Section 8 (m.p.
3.75),  but its effect is not observed since DO has al-
ready reached zero.  Based upon this simulation run,
one would expect the river to be anoxic from Section 5
to Section 19 (m.p. .45).  At Section 19 water quality
improves slightly due to lake water intrusion.

The following simulation runs manipulate flow, BOD and
DO to illustrate how the model can be used as a manage-
ment tool.   A summary of simulation runs and the vari-
                                                       103

-------
ables manipulated is given in Table 2.
Table 2.  Summary of Parameters used in Simulations

                                         Boundary Conditions
 Simulation   Flow     Loading        Upstream        Downstream
     No.     (cfs)     Source         BOD    DO       BOD    DO

      1       345    1973-OEPA         8      3        6      6
      2       850    1973-OEPA         8      3        6      6
      3       345    1978-OEPA         6      3.5      6      6
      4       345    50% 1973          4      4        6      6
      5       850    1978-OEPA         6      5        6      6
     Simulation 2.  The effect of flow upon DO was
tested in Simulation (2).  An average flow of 850 cfs
was used as the flow in the navigation channel.  Fig-
ure (3B) shows that DO begins to drop slowly until zero
DO is reached in Section 10 (m.p. 3.15).

When comparing Simulations (1) and (2), it is apparent
that for identical conditions, river water quality dur-
ing low flow is greatly reduced.  This is primarily due
to the low velocity and high holding time in each sec-
tion during low flow.  In general, it could then be
assumed that water quality in the Cuyahoga River could
be improved if the concentration of waste being dis-
charged during low flow periods is reduced.  This could
be accomplished by temporarily storing the waste and
releasing it when river flow is high or by storing
water in large reservoirs and releasing it as dilution
water when river flow is low.

     Simulation 3.  If the best practicable treatment
guidelines are met by 1978, it is expected that the
DO in the navigation channel will improve.  Projected
1978 waste load reductions were obtained from the Ohio
EPA in Columbus.  These values were input to illustrate
the degree of improvement which could be anticipated.
                    Figure 3. Simulation Runs (dissolved oxygen vs. mile point, panels A-F).
Results are shown in Figure  (3C).   Since all other con-
ditions are identical to Run  #1,  the  trend in DO is ex-
pected to be somewhat similar.   As  expected, DO drops
to zero in Section 5.  While  water  quality improves
slightly as the waste load (lbs/day) decreases, the im-
provement does not appear to be very significant.

     Simulation 4.  Simulation  (4)  was  conducted to ob-
serve how dissolved oxygen is affected  when  all waste
loads are decreased to 50% of 1973  values.   The results
of this simulation are compared  in  Figure (3D)  with
those of Simulations (1) and  (3).   It is apparent  from
the figure that water quality is only slightly improved
by waste load reductions.  Despite  the  reduced load-
ings, benthal uptake, upstream loadings, low rates of
reaeration and long channel residence times combine to
produce anoxia within much of the channel.   Since  the
model does not reflect reduced benthal  uptake rates,
which might in time result from  reduced loadings,  these
results may be somewhat pessimistic.

     Simulation 5.  Simulation (5) was conducted to
test the combined effects of improved upstream water
quality (entering DO = 5 mg/1, BOD = 6 mg/1), reduced
loadings (1978 projections) and augmented flow (850
cfs).  Under these combined conditions DO dropped slow-
ly, reaching a low of 0.35 mg/1 at mile point 1.35
(Section 16) (see Figure 3E).  Thus a combination of
improved upstream water quality, reduced waste loading
and increased flow produced a significant improvement
in DO concentrations within the channel.

UTILIZING THE TRANSFER MATRIX

As the model calculates the DO deficit  response for
each section, the DO drop for each  section is  computed
and listed in tabular format.  The  changes in DO from
one section to another resulting from variations in
waste load allocations can thus  be  directly  and quickly
determined from the matrix shown  in Table 3.  (Only half
of the complete matrix is shown.)

As an example of the use of this  matrix,  consider  the
DO profile for the channel shown  in Figure  (3F)  as
"1973 channel loadings".  This profile  results  from a
flow of 900 cfs in the channel,  a DO  of 4.4  mg/1 and
a BOD of 8.0 mg/1 for water entering  the  channel,  and
the waste loadings shown in Table 1.

Suppose that Republic Steel and U. S. Steel were to re-
duce their waste loadings to zero.  This would result
in a removal of approximately 10,000 lbs/day of waste
from Section 5 (Republic Steel) and a removal of ap-
proximately 1,600 lbs/day from Section 8 (U. S. Steel).

Table 3 indicates the decrease in DO (Sections 1-20)
resulting from waste inputs to Sections 1-10.  It also
can be interpreted to read the increase in DO in Sec-
tions 1-20 resulting from waste reductions in Sections
1-10.  Thus a 10,000 lbs/day waste removal from Section
5 would result in the increases in DO shown under 'Sec-
tion 5' (Table 3).  A removal of 1,600 lbs/day of waste
from Section 8 would produce the response obtained by
taking the values from Table 3 (under 'Section 8') and
multiplying each by 1600/10000 (= 0.16).

The total response is the sum of the two responses and
is indicated by the line labeled 'improved conditions' in
Figure (3F).
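
Because the tabulated responses scale linearly with load, the
bookkeeping just described can be carried out with a few lines of
code.  The sketch below merely illustrates that superposition,
using the Section 5 and Section 8 columns of Table 3 (zeros where
the table shows no entry) and the 10,000 lbs/day unit load; it is
not part of the model itself.

    # DO change (mg/l) in Sections 1-20 per 10,000 lbs/day discharged to the
    # keyed section, taken from the Table 3 columns (zeros where blank).
    UNIT_LOAD = 10000.0                     # lbs/day represented by each column

    transfer = {
        5: [0, 0, 0, 0, 0.08, 0.12, 0.18, 0.24, 0.29, 0.35,
            0.39, 0.45, 0.49, 0.53, 0.57, 0.59, 0.57, 0.44, 0.30, 0.15],
        8: [0, 0, 0, 0, 0, 0, 0, 0.05, 0.09, 0.14,
            0.18, 0.23, 0.26, 0.30, 0.34, 0.36, 0.35, 0.27, 0.19, 0.09],
    }

    def do_response(removals):
        """DO increase by section for removals given as {section: lbs/day}."""
        total = [0.0] * 20
        for section, lbs in removals.items():
            scale = lbs / UNIT_LOAD
            total = [t + scale * c for t, c in zip(total, transfer[section])]
        return total

    # Republic Steel (Section 5) and U. S. Steel (Section 8) removals:
    response = do_response({5: 10000, 8: 1600})
    print("DO improvement in Section 16:", round(response[15], 2), "mg/l")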

These operations allow a decision-maker to assess  im-
mediately the results of hypothetical waste  load allo-
cations without running the model.  In  addition, the
matrix indicates that Section 16  is the most sensitive
region of the channel and will receive  its maximum ef-
fect (a drop in DO of 0.59 mg/1) when 10,000 lbs/day of
waste is discharged into Section 5.
                                                       104

-------
The transfer matrix must be recalculated  (i.e., the
model must be run) for different river conditions.
Once the matrix is available, however, any set of
waste load allocations may be applied (without rerun-
ning the model) to observe the corresponding  DO re-
sponse.

Table 3.  Transfer Matrix.  Decrease in DO (mg/1) in Sections 1-20
per 10,000 lbs/day of waste input to Sections 1-10.

                              Waste Input Section
Section    1     2     3     4     5     6     7     8     9    10

   1       -     -     -     -     -     -     -     -     -     -
   2       -     -     -     -     -     -     -     -     -     -
   3       -     -     -     -     -     -     -     -     -     -
   4     0.12  0.11    -   0.04    -     -     -     -     -     -
   5     0.15  0.15  0.13  0.09  0.08  0.03    -     -     -     -
   6     0.17  0.17  0.15  0.12  0.12  0.08    -     -     -     -
   7     0.19  0.20  0.19  0.16  0.18  0.14  0.05    -     -     -
   8     0.22  0.23  0.22  0.19  0.24  0.20  0.09  0.05    -     -
   9     0.24  0.26  0.26  0.23  0.29  0.26  0.14  0.09  0.05    -
  10     0.27  0.29  0.29  0.26  0.35  0.32  0.18  0.14  0.10  0.07
  11     0.28  0.31  0.32  0.29  0.39  0.37  0.22  0.18  0.14  0.12
  12     0.31  0.34  0.35  0.32  0.45  0.43  0.26  0.23  0.19  0.19
  13     0.32  0.36  0.37  0.35  0.49  0.46  0.29  0.26  0.23  0.23
  14     0.34  0.38  0.40  0.38  0.53  0.51  0.32  0.30  0.27  0.28
  15     0.36  0.40  0.42  0.40  0.57  0.56  0.36  0.34  0.31  0.34
  16     0.36  0.41  0.43  0.41  0.59  0.58  0.38  0.36  0.33  0.37
  17     0.34  0.39  0.41  0.40  0.57  0.56  0.37  0.35  0.33  0.37
  18     0.26  0.29  0.31  0.30  0.44  0.43  0.28  0.27  0.26  0.29
  19     0.18  0.20  0.22  0.21  0.30  0.30  0.20  0.19  0.18  0.20
  20       -   0.10  0.11  0.10  0.15  0.15  0.10  0.09  0.09  0.10
                         SUMMARY
 By utilizing the model it is possible to answer sever-
 al types of questions which must be addressed by man-
 agement:

 Question 1:  How can the model determine the upstream
              water quality required to achieve the
              water quality standards set for the Cuya-
              hoga River's navigation channel?

 Answer 1:    In order to maintain the standards set
              for the river, water quality in sections
              14-16 must be controlled.  Therefore, up-
              stream flow, BOD, DO and waste inputs
              must be manipulated until an acceptable
              DO is obtained in Sections 14-17.  Simu-
              lations 1-5 demonstrate the expected
              changes which would occur when manipulat-
              ing each of these parameters.  Additional
              manipulations require only changing the
              input data.

 Question 2:  How can the model be utilized to deter-
              mine the best physical system for achiev-
              ing that water quality?

 Answer 2:    Once the desired DO level is obtained in
              Sections 14-16, one must then determine
              the most economic or most efficient means
              for effecting the required changes.  For
              example, if flow is doubled and BOD is
              decreased by half, then one must decide
              how to double the flow and decrease the
              BOD.  Such alternatives as storing dilu-
              tion water to augment flow, eliminating
              all discharges, etc., must be approached
              from an economical point of view; how-
              ever, the response to using combinations
              of the different alternatives can be ob-
              served from the model.

 Question 3:  How can the model assist in determining
              the optimal system for administering and
              managing water quality?

 Answer 3:    The Transfer Matrix (Table 3) provides an
              excellent tool for determining the optimal
              locations for outfalls and the optimal
              waste load inputs because this matrix
              points out the sections which can least
              tolerate and most tolerate a waste load.

      Through an understanding of the complex physical,
 chemical and biological events occurring simultaneous-
 ly within the system, the model has demonstrated its
 ability to simulate the dissolved oxygen profile in
 the river.  The oxygen profiles resulting from use of
 the model, when compared with field measurements, pro-
 vided a reasonable fit.  The model, therefore, allows
 a water planner to assess the impact of alternate
 water quality control measures on the river system by
 varying the treatment levels at each discharge point
 and the water quality conditions in Lake Erie at its
 mouth.  By increasing flow, while holding discharge
 constant, the model can also estimate the volume of
 dilution water required to meet dissolved oxygen stand-
 ards in the river.

                        REFERENCES

 1.  Thomann, R. V.  Systems Analysis and Water Quality
     Management.  New York, Environmental Science
     Services Division, 1972.  286 p.

 2.  Dalton, Dalton & Little.  Industrial Waste Survey
     Program for the Lower Cuyahoga River.  Cleveland,
     Ohio.  January 1971.

 3.  O'Connor, D. J.  Estuarine Distribution of Non-
     Conservative Substances.  Jour. San. Eng. Div.,
     ASCE.  Vol. 91, No. SA1.  February 1965.  p. 23.

 4.  O'Connor, D. J., et al.  Dynamic Water Quality
     Forecasting and Management.  Environmental Pro-
     tection Agency.  Publication Number 600/3-73-009.
     August 1973.

 5.  Havens & Emerson.  Master Plan for Pollution
     Abatement.  City of Cleveland, Ohio.  July 1968.

 6.  U.S. Department of the Interior.  Water Resources
     Data for Ohio.  1973.

 7.  U.S. Department of the Interior.  Water Resources
     Data for Ohio.  Part 1.  Surface Water Records.
     1974.
                                                        105

-------
                           A WATER RESIDUALS  INVENTORY  FOR NATIONAL  POLICY  ANALYSIS
                Edward H. Pechan*                                   Ralph A. Luken**
                  Consultant                                          Consultant
         National Academy of Sciences                        National Academy of Sciences
                Washington, D.C.                                    Washington, D.C.

 ABSTRACT

 A computer based  water residuals  generation  and  dis-
 charge  inventory  was developed to assist  in  the
 evaluation of  regional and national implications of
 the uniform effluent requirements of  the  Federal
 Water Pollution Control Act  Amendments  of 1972
 (PL 92-500) and in  the evaluation of  alternative
 residuals  control policies.

 The completed  system,  termed the  National Residuals
 Discharge  Inventory (NRDI),  has been  used in a number
 of applications including the  investigation of  costs,
 residuals  discharge, and  residuals dilution  effects
 of three alternative policies to  national uniform
 effluent standards.

 BACKGROUND

 The Federal Water Pollution  Control Act Amendments of
 1972 (hereafter referred  to  as P.L. 92-500 or the
 1972 Act)  marked  a  decisive  shift in  the  nation's
 approach to restoring  and maintaining the physical,
 chemical,  and  biological  integrity of its waters.
 That shift is  best  reflected in the major change in
 enforcement mechanisms.   Under prior  legislation,
 ambient water  quality  standards were  set  as  the  control
 mechanism.  The use of the waters for such activities
 as drinking, recreation,  and manufacturing determined
 the kinds  and  amounts  of  residuals to be  discharged,
 the degree of  residual abatement  required, and the
 rapidity with  which dischargers were  to install  the
 necessary  abatement technology.   Under  the 1972  Act,
 effluent limitations were set as  the  control mechanism.
 The existence  and availability of water pollution
 control technology  determined the kinds and  amounts
 of residuals to be  discharged, and legislatively man-
 dated compliance  dates determined the rapidity with
 which dischargers must install the necessary abate-
 ment technology.

 The overwhelming  Congressional support  for the 1972
 Law resulted from disillusionment with  the lack  of
 progress under more than  two decades  of Federal
 legislation.   Contributing problems to  the failure
 of previous efforts were  the tardiness  of the States
 in setting water  quality  standards, the complex
 procedures which  delayed  enforcement  actions  against
 polluters, and the  failure to fully implement the
 Federal construction grant program for  municipal
 sewage  treatment  facilities.

 The 1972 Act attempted to respond to these limitations
 with three essential elements: uniformity, finality,
 and enforceability.  Uniformity is mandated by the
 requirement that  each  residual discharger within a
 category or class of industrial sources and  all
 municipal  sources must meet  stipulated  effluent
 limitations regardless of geographic  location.
 Categories or  classes  of  industrial sources  will be
 required to meet  unique,  nationally uniform,  effluent
 limitations based on "best practicable  control
 technology currently available" (BPT) by  1977, and
 to meet even more stringent  nationally  uniform
 effluent  limitations  based  on "best  available
 technology economically achievable" (BAT)  by  1983.
 All  municipal  sources  (publicly owned treatment  works)
 will be required  to meet  effluent  limitations based
on secondary treatment (ST) by 1977 and based
on "best practicable wastewater treatment technology"
(BPWTT) by 1983.

Finality is mandated by the requirement  to meet  more
stringent effluent limitations by point  sources
(municipal and industrial activities) at specific
dates in the future.  While prior legislation  did
not set specific dates for meeting water quality
goals, the 1972 Act requires dischargers to meet one
set of effluent limitations in 1977, a more stringent
set of effluent limitations in 1983, and looks toward
achieving a final goal of zero discharge of pollutants
into navigable waters by 1985.   In addition, an
interim goal of achieving waters fit for fishing and
swimming by 1983 is established.  The concept  of
finality is intended to remove the uncertainty on the
part of industrial and municipal dischargers about
the nation's (or at least Congress') commitment  to
maintaining and restoring the quality of the nation's
waters.

Enforceability is assured through the provisions of
the permit program and the new enforcement  authorities
given to the EPA.  The 1972 Act  is based on the
assumption that violations of permit conditions
would be easier to determine than violations of
water quality standards, assuming the ability to
design an adequate compliance monitoring program and
to inspect the operations of residual dischargers.
EPA not only has the authority, but is required to
issue an abatement order whenever there  is  a violation
of the conditions of a permit and a state fails to
move against the violator in a timely fashion.
Furthermore, a citizen may bring suit against EPA if
it fails to issue a necessary order.

Although the 1972 Act received Congressional support
in its final form sufficient to override a  Presiden-
tial veto, there were many compromises in the develop-
ment of the final version as it moved through the
procedures of the Congress.  Thus, while the three
major innovative provisions of the Senate version
survived in the legislation agreed to by the
Conference Committee, a provision was inserted to
establish a National Study Commission to "make a
full and complete investigation and study of all of
the technological aspects of achieving,  and all
aspects of the total economic, social and environmen-
tal effects of achieving or not achieving the
effluent limitations and goals set forth for 1983
..." (P.L. 92-500) looking toward recommendations not
later than October 18, 1975 as to any needed "... mid-
course corrections that may be necessary ..."
(H.R. 92-1465).

In implementing its study program, using the efforts
of almost 100 contractors, the National  Commission
on Water Quality (NCWQ) appeared to accept  the
uniformity provisions of the Act and defined its
contract studies to concentrate primarily on the
finality provisions and secondarily on the
enforceability provisions.  None of the  studies

* Mr. Pechan is currently with the U.S.  Energy Research
and Development Administration in Washington, D.C.
** Dr. Luken is currently with the United Nations in
Bangkok, Thailand.
                                                      106

-------
questioned the benefits or costs of requiring uniform
treatment of similar classes of residual discharges
regardless of geographic location.

Early  in the course of its study program, the NCWQ
contracted with the National Academy of Sciences/
National Academy of Engineering/National Research
Council under the provisions of Section 315 for
assistance in particular areas of concern.

In order to provide the assistance needed, the
Environmental Studies Board of the National Research
Council created the Study Committee on Water Quality
Policy (CWQP).  In connection with its accomplish-
ments of the tasks assigned by NCWQ, CWOP determined
that an independent assessment of residual reduction
technologies was essential to provide perspective on
its assignment.  In the absence of a breakdown by
NCWQ of national totals by geographic regions,
primarily due to inability of the Strategic Environ-
mental Assessment System (SEAS) to accurately compute
and display  (by region) data fron the contractor
studies, CWQP engaged consultants and directed them
to devise a system which could provide it with a
basis for an independent analysis of the effects of
'achieving or not achieving the goals of the Act.

NATIONAL RESIDUALS DISCHARGE INVENTORY

To provide a basis for handling the immense amount
of data available from the NCWQ contractor's reports,
from the U.S. Environmental Protection Agency (EPA)
and from other available sources, the consultants
devised a system for computerized analysis called
the National Residuals Discharge Inventory (NRDI).

The NRDI analysis was to provide a basis for the
CWQP's comments on the NCWQ's contractor and staff
 draft reports.  It was also hoped that the analysis
would be of value to the NCWQ as it moved toward
preparation of its own final report.

To carry out this assignment, the CWQP consultants
were requested to:

• document the distribution of residual generation
and discharges by region;

• document the distribution of residual generation
by activity;

• describe the relative importance of activity
categories by region;

• document the distribution of residual reduction
technology cost by region;

• indicate the sensitivity of estimates of residuals
generation, discharge, and reduction technology
costs to various assumptions;

• evaluate the quality of basic data on residual
generation, discharge, and technology used by
NCWQ contractors and describe other possible data
sources.

NRDI is a quantitative assessment of residual genera-
tion and discharges and of residual reduction techno-
logy costs in each of the 3,111 counties or county
approximations in the contiguous U.S.  Data for
industrial, municipal, urban runoff, and non-irriga-
ted agriculture sources are available for each county.
However, the data are not displayed at the county
level, but rather are aggregated for purposes of
analysis by the Water Resources Council's 99 aggrega-
ted sub-areas (ASAs), the 18 Water Resource Regions
(WRRs) and by the nation.  The ASAs and WRRs are
often generically  referred to in this report as
river basins.

The purposes of NRDI are: (a) to provide a compre-
hensive measure of biological oxygen demand (BOD),
total suspended solids (TSS), nitrogen (N), and
phosphorus (P) residual generation and discharge
aggregated for the nation and for each of the 18
WRRs and 99 ASAs, (b) to indicate the relative
importance of various activities as sources of
residuals after the effluent limitations for 1977 and
1983 are met; (c) to provide a comprehensive measure
of the costs of residual reduction technologies
required to meet the 1977 and 1983 technological
objectives of the 1972 Law aggregated for the nation,
the 18 WRRs, and the 99 ASAs, and (d) to estimate the
cost savings to the nation of pursuing alternative
policies.  The NRDI analyses include point sources,
which are defined as discharges from municipal and
industrial activities, and areal sources, which are
defined as urban runoff and drainage from non-irri-
gated agricultural activities.

Thus, NRDI is a conceptually simple but systematic
computational procedure for evaluating various
aspects of the 1972 Law.  The inventory has the
capacity to predict potential reductions in residuals
discharged into the ambient environment and the
associated costs of the application of uniform
residual reduction technologies stipulated by EPA for
municipal and industrial residual generators.   It can
compare the resulting reductions in discharges from
these sources with those from other sources, primarily
urban storm water runoff and non-irrigated agriculture,
by river basins.  More importantly, NRDI allows for an
evaluation of policy alternatives to the uniform
application of residual reduction technologies to
legislatively defined (P.L. 92-500) point sources.
These policies reflect alternatives where in a given
river basin, achievement of the 1983 effluent limita-
tions would not make a significant improvement in
total residual reductions and ambient water quality,
and where a given level of residual reduction could
be achieved at a lower cost without the uniform appli-
cation of residual reduction technology to point sources.

NRDI consists of (a) inventories of production and
consumption activities which generate and discharge
residuals, (b) a system for analyzing the effects
of increased industrial production and population
growth, (c) an index of potential water quality
changes, and (d) residual discharge reduction policies
which include the BPT/ST and BAT/BPWTT technology
goals in the Act.

ACTIVITY INVENTORIES

The purposes of the activity inventories are twofold.
First, the inventories relate process/production
data to residual generation coefficients for calcula-
tion of residual generation for a particular activity.
Second, the inventories assign an appropriate
residual reduction technology, as specified under
the policy alternatives, thereby enabling the
computation of abatement costs and of residuals dis-
charged into the ambient environment.  The specifi-
city of the individual activity inventories for the
above process depends upon the importance of each
sector as a residual generator and upon the
availability of data.

The data input files for activity inventories contain
information on identifiable point and areal source
residual generating activities.  The point and areal
activities combined cover most major waterborne
                                                      107

-------
residual generating activities.  Information included
about these activities, where appropriate and avail-
able, includes location of activity, measures of production
(physical output, employees, land area, or population),
type of production process, and current residual
reduction technologies being used.

There are several activity inventories for the muni-
cipal, industrial, and areal categories.  The muni-
cipal category includes a sewage treatment plant inven-
tory based on the 1974 EPA Needs Survey (1).  The indus-
trial category includes an in-depth industry inventory
for the significant process water users and a general
industry inventory for the vast majority of other
residual generating industries (Table 1).

TABLE 1   NRDI INDUSTRY STUDY CATEGORIES

 INDUSTRIES STUDIED IN DEPTH
Pulp and Paper
Petroleum Refining
Textiles
Iron & Steel
Plastics & Synthetics
Organic Chemicals
Inorganic Chemicals
Steam Electric

 INDUSTRIES STUDIED IN GENERAL
Ore Mining
Coal Mining
Petroleum & Gas
Mineral Mining
Meat Processing
Dairy Products
Grain Mills
Cane Sugar
Beet Sugar
Seafood
Builders Paper
Fertilizer
Paving & Roofing
Rubber
Leather
Glass
Cement
Pottery
Asbestos
Ferroalloys
Non-ferrous Metals
Electroplating
Fruits & Vegetables
Other Organic Chemicals
Industrial plant data for the in-depth industries
were developed from numerous sources as described in
the NRDI report (2).  Plant data for the general indus-
tries were obtained from Census data (3).  Residuals
generation and water use information was obtained
from both EPA Development Documents and Census data (4).

Residual generation coefficients for industries
studied in depth are specified for each production
process within an activity category.  For the indus-
tries studied in general, total residual generation
for a four-digit SIC category is used.  The coeffi-
 cients  are  given as weights of residuals per produc-
tion output unit (for example, pounds of organic
residuals generated per ton of pulp or per barrel of
crude oil processed).   The residuals included are BOD,
TSS, and wastewater flow.

The input file also specifies the various residual
reduction technologies available for each of the
activities.   Information specified for each technology
or unit process includes costs and residual reduction
rates or removal efficiencies.
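
A minimal sketch of this bookkeeping is given below.  The plant,
the BOD coefficient, the removal efficiencies and the capital
costs are hypothetical placeholders rather than NRDI data; the
point is only the sequence generation = production x coefficient,
discharge = generation x (1 - removal), with the technology, and
hence the cost, fixed by the policy being simulated.

    # Hypothetical NRDI-style computation for a single in-depth plant.
    plant = {"production": 500.0,          # e.g. tons of pulp per day (assumed)
             "bod_coefficient": 60.0}      # lbs BOD generated per ton (assumed)

    technologies = {                       # removal efficiency and capital cost (assumed)
        "BPT":       {"bod_removal": 0.85, "capital": 4.0e6},
        "BAT/BPWTT": {"bod_removal": 0.95, "capital": 7.5e6},
    }

    def evaluate(plant, policy):
        """Residual generation, discharge (lbs/day) and capital cost ($)."""
        generation = plant["production"] * plant["bod_coefficient"]
        tech = technologies[policy]
        discharge = generation * (1.0 - tech["bod_removal"])
        return generation, discharge, tech["capital"]

    for policy in ("BPT", "BAT/BPWTT"):
        gen, dis, cost = evaluate(plant, policy)
        print(policy, round(gen), "generated,", round(dis), "discharged, $", cost)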

Areal sources include  urban runoff and non-irrigated
agriculture, both analyzed by county.  Information
for the urban runoff was obtained from the Needs
Survey, Census data (5), and an NCWQ contractor report (6).
Counties were included which were (1) part of an
SMSA and (2) had an average resident population
density of .6 persons  per acre or more.

Non-irrigated agriculture activities were defined on a
county-by-county basis by acres under cultivation,
soil types, etc.  Residuals were computed by
successive application of the universal soil loss
equation, sediment delivery ratios, and residuals
carried by sediment.  The single control policy was
developed by simulating the application of soil conser-
vation measures as outlined in the 1967 Conservation
Needs Inventory.

Estimated residual delivery and costs of control were
obtained from the NCWQ contractor (7).  The sediment
delivery ratios were back-computed and residual load-
ings were adjusted to correspond with Iowa State
sediment delivery ratios (8).
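
The successive steps named above can be sketched as follows.  The
factor values, delivery ratio and residual-per-ton figure are
placeholders rather than county data; only the form of the
universal soil loss equation (gross erosion = R x K x LS x C x P)
is standard.

    def county_sediment_residual(R, K, LS, C, P, area_acres,
                                 delivery_ratio, lbs_residual_per_ton):
        """Gross erosion (tons/acre/yr) from the universal soil loss
        equation, then delivered sediment and the residual carried with it."""
        gross_erosion = R * K * LS * C * P                       # tons/acre/yr
        delivered_tons = gross_erosion * area_acres * delivery_ratio
        return delivered_tons * lbs_residual_per_ton             # lbs/yr

    # Placeholder factors for one hypothetical county:
    print(round(county_sediment_residual(R=150, K=0.3, LS=1.2, C=0.25, P=0.8,
                                         area_acres=50000, delivery_ratio=0.2,
                                         lbs_residual_per_ton=2.0)))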

The costs of residual reduction technologies are com-
puted for the capital or initial investment cost.

GROWTH ANALYSIS

The purpose of the growth analysis is to project
future levels of residual generation and discharge.
The projected growth for industry is based on increas-
es in physical output, while that for municipal-
ities is based on population growth rates.  Growth
is not projected for urban runoff or non-irrigated
agriculture activities.  While the growth analysis
design permitted either national or ASA averages for
industrial and municipal sectors, as a first approx-
imation, only the former have been used to date.

The growth assumption does not generally influence the
results of the analysis presented here which is based
only on 1973 data.

Inputs into the growth analysis are projected
industrial production and population increases.  The
projected growth for industry is available either from
the Wharton Economic Forecasting Analysis used by
NCWQ (9) or the U.S. Department of Commerce OBERS Series
E (10).  The projected growth for municipalities is
based on U.S. Department of Commerce, Census Series E
population growth rates.
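
How such growth rates enter the projection can be sketched as a
simple compounding of base-year generation.  The functional form
and the 1.2 percent rate below are assumptions for illustration,
not NRDI values; in practice the ratios come from the Wharton,
OBERS or Census projections cited above.

    def projected_generation(base_generation, base_year, target_year, annual_growth):
        """Scale base-year residual generation by a compounded annual
        growth rate (an assumed functional form for illustration)."""
        return base_generation * (1.0 + annual_growth) ** (target_year - base_year)

    # Hypothetical example: 1973 municipal BOD generation projected to 1983.
    print(round(projected_generation(1.0e9, 1973, 1983, 0.012)))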

RESIDUALS DILUTION RANKING INDEX

The purpose of the water quality indexing procedure is
to convert the information on residuals discharged into
an approximate measure of water quality.  The procedure
is essentially a mechanism for ranking the basins
according to relative average water quality, i.e.,
"average" conditions are determined for each basin and
the basins ranked accordingly.  Such a ranking may then
be used to identify those ASAs which are relatively well
off or have problems under current conditions and
which may be significantly affected by different water
quality management policies.  Since the "average"
conditions do not reflect a real situation at any given
location or time, they cannot be used to attempt to
pinpoint specific water quality problems in a sub-basin
or stream segment.

At this time, the only water quality related data
unique to each river basin are approximations of low
and average flow conditions and number of stream
miles.
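
One way such a ranking might be computed is sketched below.  The
ASA records are hypothetical; the real ingredients are only the
basin discharge totals and flows described above, together with
the conversion that one cfs at one mg/l carries roughly 5.39
lbs/day.

    # Hypothetical ASA records: BOD discharged (lbs/day) and low flow (cfs).
    asas = {
        "ASA-101": {"bod_lbs_day": 250000, "flow_cfs": 9000},
        "ASA-102": {"bod_lbs_day": 40000,  "flow_cfs": 30000},
        "ASA-103": {"bod_lbs_day": 600000, "flow_cfs": 25000},
    }

    def bod_dilution_index(bod_lbs_day, flow_cfs):
        """Average in-stream BOD concentration (mg/l) if the basin discharge
        were fully mixed into the basin flow (1 cfs at 1 mg/l = 5.39 lbs/day)."""
        return bod_lbs_day / (5.39 * flow_cfs)

    # Rank basins from worst (highest index) to best:
    for asa in sorted(asas, key=lambda a: bod_dilution_index(**asas[a]), reverse=True):
        print(asa, round(bod_dilution_index(**asas[asa]), 2), "mg/l")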

OUTCOME SUMMARIES

The outputs resulting from each policy alternative are:

• residual generation;

• residual discharge;

• abatement costs;

• residuals dilution index.
                                                      108

-------
These outputs must be taken together to evaluate a
given policy, because no one output is by itself an
adequate evaluation measure.  However, even the
combination is no substitute for basin specific evalua-
tion.  At that level of detail, data are available for
assessing actual changes in physical, chemical and
biological parameters and for measuring the damages
which are sustained directly by human beings or
indirectly by plants and animals of value to man.

ALTERNATIVE POLICIES

A variety of alternative policies can be selected for
solution in the NRDI.  These policies include both
uniform and non-uniform abatement policies and can
simulate controls on areal as well as point sources.

In using the  model, a target year for analysis is
selected.  Standard years are 1973 (base case), 1977,
and 1983.

The model has been exercised with a variety of abate-
ment policies.  Due to the simplicity of the model, it
is relatively easy to add new policies.  The basic
policies used to date are discussed below:

a.  No control - This policy estimates residuals
discharge if no control technology is used.

b.  1973 controls - This policy estimates discharge
and costs based on control technology in place in 1973.

c.  BPT/ST - This policy estimates effects of the 1977
standards of the Act:  Best Practicable Treatment for
industry and Secondary Treatment for municipalities.

d.  BAT/BPWTT - This policy estimates effects of the
1983 standards for industry and secondary treatment
for municipalities supplemented with tertiary facil-
ities when requested in the Needs Survey.

e.  BAT/BPWTT+ - This policy is identical to (d) for
industrial sources but includes filtration for all
municipalities not requesting treatment more stringent
than secondary in the Needs Survey.

f.  Non-irrigated agricultural control - Costs and
residual implications of implementing practices out-
lined in the 1967 Conservation Needs Inventory are
included.

g.  Urban storm control - Costs and residual implica-
tions of one of five urban storm control strategies
(combined, separate storm, and unsewered) is simulated.

h.  Ocean discharges - Effects of discharge and costs
for ocean counties are excluded.  This function is
used to simulate lower levels of treatment for ocean
discharges based on using a specified set of counties.

i.  New Source Performance Standards - In this policy,
residual discharges and costs for industrial growth
are based on new source performance standards (approx-
imated by BAT).

j.  Limited technology - Simulation of stringent
effluent limitation policies can be limited to ASAs
with relatively bad water quality.

k.  Cost effective strategy - This policy uses data on
cost per quantity of residuals removed to identify
cost-effective solutions in each ASA.

These policy components can be combined in a single
run if desired.
RESULTS

Illustrative results and conclusions are presented in
this paper.  The results presented show the costs of
uniform application of BAT/BPWTT, as well as three
policy alternatives.

Table 2 presents summary results of the policies.
The results of the illustrative alternatives to
the uniform application of BAT/BPWTT are discussed
below.

    TABLE 2 - COMPARISONS OF ALTERNATIVE POLICIES

                            BOD Removed      Total Capital
 Technology                (10^9 lbs/yr)     (10^9 1975 $)

 Uniform BPT/ST                 7.4              38.5
 Uniform BAT/BPWTT              8.8              55.6

 Alternative I                  8.0              43.2

 Alternative II
   Cost-eff. BPT/ST             7.4              23.0
   Cost-eff. BAT/BPWTT          8.8              37.0

 Alternative III
   EPA counties                 8.7              55.2
   Potential counties           8.6              52.9
Alternative I:  Limit BAT/BPWTT Technology Investment
to Areas with Relatively Poor Water Quality

One alternative is to require that the BAT/BPWTT
technology objectives be met only in those areas
(ASAs) which have relatively severe water quality
problems.  This alternative limits the application
of BAT/BPWTT technologies to ASAs which have a BOD
dilution index equal to or greater than 3.0 mg/1.
The result of this alternative is that if uniform
water quality is a policy objective, it can be
obtained for only $4.7 billion more than BPT/ST,
a reduction of $12.5 billion from the costs of
uniform BAT/BPWTT.  This is achieved by applying the
more stringent effluent limitation to 21 instead of
all 99 ASAs.  The results suggest that in the re-
maining areas (78 ASAs) BAT/BPWTT may not really be
necessary because they generally may have met water
quality standards after BPT/ST.  Areas (ASAs) with
no additional cost, in this case, would be located
in the New England, Tennessee, Upper Mississippi,
Lower Mississippi, Upper Colorado, and Pacific North-
west WRRs.  This alternative illustrates that if the
ultimate goal of the law is to restore the nation's
water quality, then it can be achieved at lesser cost.

 Alternative II:  Invest Only in Cost Effective Options

The second alternative is to require only those
sources which have the least cost to invest in more
stringent technology in order to achieve the residual
reduction accomplished by uniform BAT/BPWTT.  This
 alternative is based on considering all major sources
 of residuals, including urban runoff and non-irrigated
 agriculture, and applying both BPT/ST and BAT/BPWTT
 non-uniformly in order to achieve a given level of
 residual reduction for the least cost.

 The results showed that a 33 percent reduction in
 total BPT/ST and BAT/BPWTT costs, 41 percent for
 BPT/ST and 12 percent for BAT/BPWTT, can be obtained,
 and the same quantities of residuals removed, by
 substituting a nonuniform cost effective policy
                                                     109

-------
 approach for uniformity.   The five regions (WRRs)  that
 benefit the most from a cost effective approach include
 the Upper Mississippi,  Lower Mississippi,  Missouri,
 Rio Grande and Arkansas-White-Red Water Resource Regions.
 This alternative suggests  that  if  the  quantities  of
 residuals removed after  uniform BPT/ST and  BAT/BPWTT
 are policy objectives, then  they can be achieved  at
 lesser costs.
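
 A least-cost selection of this kind can be sketched as a greedy
 procedure: rank the control options available in a basin by cost
 per pound of BOD removed and take them in that order until the
 removal achieved by the uniform policy is matched.  The options
 and figures below are hypothetical; the NRDI computation itself
 is more detailed.

    # Hypothetical control options in one ASA: incremental capital cost ($)
    # and incremental BOD removed (lbs/yr) beyond controls already in place.
    options = [
        {"source": "municipal tertiary",  "cost": 30e6, "removed": 2.0e6},
        {"source": "pulp mill BAT",       "cost": 12e6, "removed": 3.5e6},
        {"source": "urban storm control", "cost": 50e6, "removed": 1.0e6},
        {"source": "cropland practices",  "cost": 6e6,  "removed": 2.5e6},
    ]

    def least_cost_plan(options, target_removed):
        """Pick options in order of cost per pound removed until the
        removal achieved by the uniform policy is matched."""
        plan, removed, cost = [], 0.0, 0.0
        for opt in sorted(options, key=lambda o: o["cost"] / o["removed"]):
            if removed >= target_removed:
                break
            plan.append(opt["source"])
            removed += opt["removed"]
            cost += opt["cost"]
        return plan, removed, cost

    print(least_cost_plan(options, target_removed=6.0e6))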

 Alternative III:  Do Not Invest in BAT Technology in
 Areas with Potential for Ocean Discharge

 The last alternative is to not require point source
 dischargers in all counties which have the potential
 for ocean discharge to meet the BAT/BPWTT technology
 objectives.  This alternative is based on computing
 a lower bound,  which  excluded point  dischargers in
 selected  counties in  only  three regions with  generally
 recognized assimilative  capacity,  and  an upper bound
 which excluded  point  source  dischargers in  selected
 counties  in regions with potential tidal dilution
 capacity.   The  lower  bound resulted  in a national
 cost saving of $0.4 billion or only two percent of
 the NRDI estimated BAT/BPWTT costs while the upper
 bound considered  172  counties and  resulted  in a
 national  cost savings of $2.7 billion.   Savings
 included  fifteen  percent of  the uniform BAT/BPWTT
 costs for the California region and  twenty-eight
 percent for the Pacific  Northwest.   This alternative
 illustrates that  if the  nation  is  willing to  use  the
 natural assimilative  capacity of the oceans,  a few
 regions would achieve a  significant  reduction in  BAT/
 BPWTT costs.

 In summary, we examined three illustrative alternatives
 to the uniform  application of the  technology  standards.
 These alternatives  could conceivably yield  savings of
 between 2  percent and 70 percent of  the  costs of  meet-
 ing uniform standards without a significant deterior-
 ation of  either the residual reductions  or  the water
 quality gains achieved using the uniform policy.   Since
 these alternatives are not mutually exclusive, all or
 some of them could be adopted simultaneously and
 result in a significant percent reduction of the costs
 of uniform standards.   Of course, more detailed
 analysis would be needed including consideration of
 institutional issues and methods of implementation
 before  a new approach is adopted.

 CONCLUSIONS

 Analysis done to date with the NRDI has  shown it  to
 be a powerful tool which can be used to  examine the
 effects of various abatement policies at a regional
 level.  The model is unique in  its capability of
 simultaneously estimating costs, residuals discharge,
 and  residuals dilution effects of alternative policies.

Major limitations of the system include its omission
of some important sources (e.g., silviculture, construc-
tion), its limited coverage of residuals, and our
limited faith in the "water quality" estimates.

REFERENCES

1.  U.S. Environmental Protection Agency, Joint State-
    EPA Survey of Needs for Municipal Wastewater
    Treatment Facilities, computer tape, March, 1975.

2.  Luken, Basta, and Pechan, The National Residuals
    Discharge Inventory, National Research Council,
    Washington, D.C., January, 1976.

3.  U.S. Department of Commerce, Bureau of the Census,
    1972 County Business Patterns, Washington, D.C.

4.  U.S. Department of Commerce, Bureau of the Census,
    1972 Census of Manufacturers, Water Use in
    Manufacturing, Washington, D.C.

5.  U.S. Department of Commerce, Bureau of the Census,
    1972 City County Data Book, Washington, D.C.

6.  Black, Crow, and Eidsness, Study and Assessment
    of Capabilities and Costs of Technology for
    Control of Pollutant Discharges from Urban
    Runoff, NCWQ Contract, November, 1975.

7.  Midwest Research Institute, Cost and Effective-
    ness of Control of Pollution from Selected Non-
    point Sources, NCWQ Contract, July, 1975.

8.  Wade, James C., Iowa State University, letter
    communication, May, 1975.

9.  Wharton Econometric Forecasting Associates,
    Wharton Econometric Forecasting Estimates, Mark IV,
    Solution of March 4, 1975.

10. U.S. Water Resources Council, 1972 OBERS Project-
    ions, April, 1974.
                                                      110

-------
                                        A MULTI-PARAMETER ESTUARY MODEL*
   Paul A. Johanson
   Tetra Tech,  Inc.
   Lafayette, California
Marc W. Lorenzen
Tetra Tech, Inc.
Lafayette, California
William W. Waddel
Washington Public Power
Supply System
Richland, Washington
                     ABSTRACT
 To  obtain  information  needed  in  the  development  of a
 water  quality  plan  for Grays  Harbor, in  Washington
 State, the mathematical  water quality model  EXPLORE
 was modified for  application  to  the  harbor and the
 lower  Chehalis  River.   This report describes the model
 selection  criteria  and the  procedures used in applying
 the model  to a  tidally influenced estuary and river.

 Results of the  study  show that model calculations and
 observed data  correlate well, confirming that the
 model  is a valuable tool  for  evaluating  the effects
 of  various waste  discharge  schemes on the quality
 of  a water body and thus for  helping to  select a plan
 for managing water  resources. The study also indi-
 cates  that further  information about rates of benthic
 oxygen demand  and the  oxygen  content of  incoming sea-
 water  would  improve the accuracy of  the  model calcula-
 tions.

                  INTRODUCTION

 The need for high quality water  to maintain natural
 productivity and  the  need to  assimilate  waste mater-
 ials often conflict in estuarine areas (1).   Careful
 management of  water resources is essential in these
 areas, and one  tool which can be of  immense help to
 the engineer/planner  is mathematical simulation.
 Through the  use of  computer models it is possible to
 describe quantitatively the behavior and interactions
 of  various water  quality parameters  and  thereby  pre-
 dict the effects  of various management schemes.

 This report  describes  the modification of a mathemati-
 cal model  for  application to  a particular estuary
 system.  Water  quality problems  and  conflicting  water
 uses in the  estuary have indicated the need for  more
 careful resource  management,  and water quality model-
 ing can be of  obvious  benefit.

       PURPOSE  AND CONCLUSIONS OF THE STUDY

 Application  of  the  Battelle-Northwest EXPLORE water
 quality model  to  Grays Harbor and the lower Chehalis
 River  in Washington State was the major  task of  a
 recently completed  program.   The purpose of the  study
 was to provide  specific information  needed to develop
 a water quality management  plan  for  Grays Harbor
 County.  It was felt  that mathematical modeling  tech-
 niques would provide  the best method of  evaluating
 the effects of  various waste  discharge schemes on the
 water  quality  of  Grays Harbor.

 This report  reviews the characteristics  of a water
 quality model  needed  for this task and discusses
 calibration  and use of the  model.  Results are in
 reasonable agreement with observed data  and show the
 utility of the  model  as a water  management tool.
*This work was completed under partial support of Con-
 tract 68-01-1807 of the Environmental Protection Agen-
 cy in Oct. 1974, while the participants were employed
 at Battelle Northwest Laboratories, Richland, WA.
 The assistance of Charles R. Cole of BNW is grate-
 fully acknowledged.
           Results of this study indicate that further informa-
           tion is needed about rates of benthic oxygen demand
           and the oxygen content of incoming seawater, and field
           measurements of these parameters are suggested.

                     CAPABILITIES REQUIRED OF THE MODEL

           Water quality modeling in an estuarine system requires
           the determination of water flows, depths, and veloci-
           ties in order to properly transport the quality para-
           meters through the system.  Thus, hydrodynamic calcu-
           lations are prerequisite to any quality calculations.
           Once the physical transport of water has been deter-
           mined, biological and chemical reactions can be
           superimposed to calculate water quality at any loca-
           tion and time.

           The primary quality parameter of concern in Grays Har-
           bor is dissolved oxygen (DO).  Low oxygen concentra-
           tions have been observed in the estuary during periods
           of low flow.  DO is predominantly a function of bio-
           chemical oxygen demand (BOD) discharged to the system,
           benthic oxygen demand, surface reaeration, and algal
           growth and decay.  BOD and benthic oxygen demand de-
           pend on wastes discharged to the system.  Surface
           reaeration depends on water and wind velocities as well
           as oxygen deficiencies.  Algal growth and decay rates
           depend on nutrient concentrations, light penetration,
           temperature and possibly toxic substances discharged
           to the system.

           The EXPLORE model is derived from a hydrodynamic code
           developed by Water Resources Engineers (WRE) (2).
           Whereas standard approaches to the solution of un-
           steady flow equations generally represent varying
           methods of making the equations numerically discrete,
           the approach adopted by WRE involves a discretization
           of the physical system being modeled.  The region is
           subdivided into a number of nodes with channels con-
           necting adjacent nodes.  The continuity equation is
           solved at junctions or node points while the momentum
           equation is solved along connecting channels.  This
           approach has been applied successfully to the simula-
           tion of estuarine networks of the Sacramento-San
           Joaquin Delta by WRE and to the Columbia River by the
           Environmental Protection Agency.  The general purpose
           computer program originally written by WRE was later
           modified, updated and refined by the Federal Water
           Pollution Control Administration and is reported by
           Callaway, Byram and Ditsworth (3) and Feigner and
           Harris (4).  More generalized versions of the code
           have been incorporated into the Storm Water Manage-
           ment Model (5) and the Battelle-Northwest EXPLORE
           Program (6).
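
            The link-node idea can be illustrated with a much-simplified
            explicit sketch (this is not the WRE/EXPLORE code): the flow in
            each channel is adjusted from the head difference between the
            nodes it connects (momentum, here linearized), and each node's
            water-surface elevation is adjusted from the net channel inflow
            (continuity).  The dimensions, friction factor and tidal forcing
            below are invented for illustration.

    import math

    g = 9.81
    # Two nodes joined by one channel; node 0 is held at a sinusoidal tide.
    nodes = [{"h": 0.0, "area": 1.0e6},    # surface elevation (m), surface area (m^2)
             {"h": 0.0, "area": 5.0e5}]
    channel = {"up": 0, "down": 1, "length": 3000.0, "xsect": 800.0,
               "Q": 0.0, "friction": 1.0e-4}   # flow (m^3/s), linearized friction (1/s)

    dt = 60.0                              # timestep (s)
    for step in range(720):                # 12 hours
        nodes[0]["h"] = math.sin(2.0 * math.pi * step * dt / 44700.0)  # ~12.4 h tide

        # Momentum (linearized) along the channel:
        dh = nodes[channel["up"]]["h"] - nodes[channel["down"]]["h"]
        channel["Q"] += dt * (g * channel["xsect"] * dh / channel["length"]
                              - channel["friction"] * channel["Q"])

        # Continuity at the interior node:
        nodes[1]["h"] += dt * channel["Q"] / nodes[1]["area"]

    print(round(channel["Q"], 1), "m^3/s,", round(nodes[1]["h"], 3), "m")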

                               MODEL SELECTION

           The Battelle-Northwest EXPLORE hydraulic code was
           chosen for use in the Grays Harbor-Chehalis River
           estuary for three reasons:  it has been successfully
           tested and used for a number of different simulations;
           it can be effectively applied to an estuary with the
           physiographic features of Grays Harbor; and a number
           of water quality programs have been written for use
                                                       111

-------
with  this  code.

The three  most widely  used  of  these  water quality
models  are discussed in  references 4,  5  and  6.   The
EXPLORE quality  code was selected because it is the
most  comprehensive  and versatile  of  the  three  programs
and because it was  written  for maximum compatibility
with  the EXPLORE  hydraulic  code.

The water quality models used were developed by Bat-
telle-Northwest for the Environmental Protection Agen-
cy to serve as a  management tool  in  the  study  of
water resources  and pollution  abatement  programs.  The
models  afford an  overall  perspective of  the  synergis-
tic effects of various proposed plans.

Not all  of the water quality models  were used  in the
present modeling  effort  due to a  lack  of data  for  cal-
ibration,  but other constituents  can be  studied when
more  data  become  available.

A detailed description of the  procedure  for  using  the
EXPLORE code has  been  given elsewhere  (7).   Briefly:
the study  area is divided into nodes and channels.
Surface area and  depth are  determined  for each  node,
while length and  width are  measured  for  each channel.
The process of dividing  the area  into  nodes  requires
experience, three considerations  being paramount:
areas of great concern (such as waste  discharges,
municipal  areas,  or regions characterized by low water
quality) or requiring  great detail will  generally  re-
quire a large number of  nodes; channels  must be longer
than  a  limit which  is  determined  by  the  timestep used;
and more detailed systems involve greater expense  in
data  preparation  and computational time.

Tributary stream flows are added as either time-varying
or constant values, upstream control points and tidally
controlled nodes  are established  and the  tidal regime
is chosen.   The  hydraulic code then  calculates  flows
and velocities  for each channel and water surface ele-
vations  and volumes  for each node  as  functions  of time.
The  code is allowed  to  run for  a number of tidal cycles
until steady-state  conditions are  established.  Although
the code can handle transient  conditions, only  average
diurnal  variations  are considered for  specific  condi-
tions such  as low flow periods.  This  is  so  because
the effect  on the predicted  water quality of transient
conditions  caused by variations in tidal   cycles  and
river  flows is  usually  small for the  short period of
time  for which  the simulation is performed.   By aver-
aging  the variations the  computational  time is  reduced,
which significantly reduces  costs without compromising
the usefulness of the  results.

Utilizing  the output from the  hydraulic  code in con-
junction with the quality of the  waste sources  and
initial  conditions  of  each  water  quality parameter,
the quality code  calculates  the concentrations  of
all the parameters  as  functions of time  and  location.
The values  of constants which  are not  known  are set
so that the computer output simulates  field  observa-
tions.   Once the  model is calibrated,  the locations
and magnitudes of waste  sources can  be varied  to
evaluate the effects of  different management schemes.
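
The calibration step amounts to adjusting the unknown constants
until predicted and observed values agree.  The sketch below
illustrates the idea for a single constant, a benthic oxygen
demand, using a stand-in linear "model" and hypothetical
observations; in the actual study the adjustment is made by
rerunning the quality code.

    # Hypothetical observed DO (mg/l) at four stations.
    observed = {45: 5.1, 47: 4.2, 49: 3.8, 50: 4.6}

    def predict_do(station, benthic_demand):
        """Stand-in for a quality-model run: DO with no benthic demand,
        less an assumed sensitivity to benthic demand (g/m^2/day)."""
        baseline = {45: 6.0, 47: 5.4, 49: 5.0, 50: 5.6}
        sensitivity = {45: 0.4, 47: 0.55, 49: 0.6, 50: 0.45}
        return baseline[station] - sensitivity[station] * benthic_demand

    def sum_squared_error(benthic_demand):
        return sum((predict_do(s, benthic_demand) - do) ** 2
                   for s, do in observed.items())

    # One-dimensional search over trial values 0.0-5.0 g/m^2/day:
    best = min((0.1 * i for i in range(51)), key=sum_squared_error)
    print("calibrated benthic demand:", round(best, 1), "g/m^2/day")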

           CALIBRATION  AND VERIFICATION

Hydrodynamic Model

In order to calibrate  the hydrodynamic portion  of  the
model, stream flow and  tidal  data  were  needed.   August
15-20,  1971, was  chosen  for calibration  because quali-
ty data for this  period were available.   High  and  low
water predictions from Department of Commerce  Tide
Tables  were chosen  to  describe the time-stage  curve,
and river flow data for  the  same  period were taken
from Water Resources Data  for  Washington.

Water Quality Model

Because sufficient data  were not  available with regard
to chemical concentrations,  algal  nutrients, and
photosynthetic processes in  Grays  Harbor,  only BOD,
benthic oxygen demand, and surface reaeration were
considered in the model  calculations.   Information
about sources of major BOD discharge were  obtained
from the State Department  of Ecology.

The dissolved oxygen concentrations measured in the
Harbor for 1970, 1971 and  1972 were taken  from
Interim Reports and from raw data  supplied by the
Department of Ecology.

Initial calculations showed  that waste  discharges
alone would not account  for  the low observed DO.   It
became apparent that a benthic oxygen demand would
have to be assumed in areas  where  waste discharges
occurred, a reasonable assumption  since organic matter
would be expected to settle  to the bottom  in such
areas and create an oxygen demand.  The values  used
were chosen to correctly simulate  the observed  oxygen
concentrations in the estuary  for  1971  and were not
changed for the 1970 and 1972  verification periods
other than to examine the  effect of eliminating ben-
thic demand.  There is probably some benthic oxygen
demand throughout the estuary, but assumption of
values was considered to be  warranted only in the
areas where oxygen data were available  for calibration.

In the process of analyzing  the oxygen  field data it
was observed that although less BOD was  discharged
to the North Channel during  1970 than in 1971,  the
measured oxygen concentrations were lower  in 1970
than in 1971.  The most  logical explanation for this
observation is that incoming ocean  water must have
been lower in dissolved  oxygen in  1970  than in  1971.
For this reason the 1970 incoming  ocean water oxygen
concentration was fixed  at 6.0 mg/1.  In 1971 and
1972 the incoming oxygen concentration  was set  at 7.0
mg/1.

Figures 1, 2, and 3 show computer  predictions (curves)
and data for five consecutive  days  (points)  for 1970,
1971, and 1972.  DO is plotted against  Department of
Ecology station numbers  (lower axis) and node numbers
(upper axis).
Figure 1.  August 1970:  calibrated model (curve) and field data (points),
           D.O. plotted against D.O.E. station number (lower axis) and node
           number (upper axis).
                                                       112

-------
Figure 2.  August 1971:  calibrated model (curve) and field data (points),
           D.O. vs. D.O.E. station number and node number.

Figure 4.  Case 1, 1970 flow conditions:  calibrated model compared with the
           model with no benthic demand.
Figure 3.  August 1972:  calibrated model (curve) and field data (points),
           D.O. vs. D.O.E. station number and node number.
               SENSITIVITY ANALYSIS
In each  of the sensitivity studies lowest  DO was found
to occur in the same  general area, so the  figures show
only this region.

The importance of benthic oxygen  demand  in  1970 is
illustrated in Figure 4.   The model  predictions, with
the elimination of benthic oxygen demand,  are shown
with the calibrated model predictions for  1970.  It  is
readily  apparent that the benthic demand used in the
model contributes significantly to the predicted ox-
ygen sag.   Figure 5 shows the predicted oxygen concen-
trations with no benthic demand and a reduced
reaeration coefficient.   It is apparent that some
other combination of  lower benthic BOD and  reaeration
coefficients could have  simulated the data.   However,
it was felt that the  values used  were the most ap-
propriate.

Figure 6 illustrates  the effect of removing  the ben-
thic oxygen demand for the 1972 flow conditions.  With
no benthic demand, the predicted  oxygen concentrations
are significantly higher than the calibrated model had
predicted.   A comparison  of Figures  4 and 6  reveals
that the benthic oxygen  demand was more important in
producing  the oxygen  sag  during the  1972 period than
during 1970.   This result is consistent with the fact
that industrial  discharges were lower in 1972 than in
1970.   As  the discharged  BOD is decreased, the contri-
bution of  benthic demand  becomes  more important in
the total oxygen balance.
Figure 5.  Case 2, 1970 flow conditions:  calibrated model compared with the
           model with no benthic demand and one-half the reaeration rate.

Figure 6.  Case 3, 1972 flow conditions:  calibrated model compared with the
           model with no benthic demand.

The effect of eliminating  industrial discharges of
BOD is  shown in  Figure 7.  The model was  run with
ocean DO set at 6.0 and 7.0 mg/1.  Minimum DO occurs
at approximately the same point for each of these
cases and ranges from 5.8 mg/1 to 6.2 mg/1.  Benthic
demand  would be  expected to gradually decrease under
these conditions but the rate  of change  is not known.
                                                          113

-------
Figure 7.  Cases 4 and 5, 1972 flow conditions:  calibrated model compared
           with no industrial loading, for ocean D.O. of 6.0 and 7.0 mg/1.

The effects  of the proposed Department of Ecology
discharge  limits were simulated with  ocean DO of 6.0
mg/1  and 7.0 mg/1, as shown in Figure 8.   The observed
1972  conditions do not differ greatly from the pro-
posed limits.  The results show that the proposed limits
will  probably not be sufficient to meet quality
standards.
Figure 8.  Case 6, 1972 flow conditions:  calibrated model compared with the
           proposed D.O.E. discharges, for ocean D.O. of 6.0 and 7.0 mg/1.

Figure 9.  1972 flow conditions:  calibrated model (ocean D.O. = 7.0 mg/1)
           compared with the proposed D.O.E. loadings combined in node 28
           (ocean D.O. = 6.0 mg/1).

Figure  9  shows  the predicted results  of combining the
pulp mill  discharges.   The model predictions show
virtually  no  difference between results of the separ-
ate and combined discharges (Figures  8  and 9).

The last  alternative placed the entire  industrial dis-
charge  much closer to  the estuary mouth.   The results
for the proposed discharge levels and a benthic oxygen
demand of 2.0 g/m²/day between the discharge and the
estuary outlet  indicated essentially  no oxygen deple-
tion.   The calculations show that for these conditions
DO in the  estuary may  be very nearly  equal  to the
concentration in incoming seawater.   However, this
result  should be interpreted cautiously because there
is presently  no information about tidal  exchange co-
efficients.

Study of saltwater intrusion based on 1972 hydraulic
conditions showed that water similar  to ocean water
extends nearly  up to the region of DO sag  examined
earlier, corroborating the small DO change up to this
point.
           MONITORING SYSTEM IMPROVEMENTS

It is suggested that future effort be devoted to field
measurements of benthic oxygen uptake rates and the
determination of ebb and flood tide water quality at
the seaward boundary of Grays Harbor.  The benthic
measurements should be made in the North and South
Channels and up the Chehalis River to Cosmopolis.

Other monitoring system improvements, such as the
measurement of nutrient concentrations and photo-
synthetic rates, would be valuable but of secondary
importance compared to the above suggestions.

                   REFERENCES

Odum, E.P., Fundamentals of Ecology.  3rd ed.  W.B.
Saunders Co., Philadelphia, 1971.  pp. 352-362.

A Hydraulic Water Quality Model of Suisun and San
Pablo Bays.  Water Resources Engineers report of an
investigation conducted for FWPCA.  35 pp., March,
1966.

Callaway, R.J., K.V. Byram and G.R. Ditsworth, Mathe-
matical Model of the Columbia River from the Pacific
Ocean to Bonneville Dam - Part I.  Federal Water
Pollution Control Administration, Pacific Northwest
Water Laboratory.  155 pp., November, 1969.

Feigner, K.D., and H.S. Harris, Documentation Report,
FWQA Dynamic Estuary Model.  July, 1970.

Storm Water Management Model, Volumes 1-4.  Metcalf
& Eddy, Inc., Palo Alto, California; University of
Florida, Gainesville, Florida; and Water Resources
Engineers, Inc., Walnut Creek, California.

Baca, R.G., W.W. Waddel, C.R. Cole, A. Brandstetter
and D.B. Cearlock, EXPLORE I:  A River Basin Water
Quality Model.  Report to the U.S. Environmental
Protection Agency, Battelle-Northwest, Richland, WA.,
1973.

Lorenzen, M.W., W.W. Waddel, and P.A. Johanson,
Development of a Mathematical Water Quality Model
for Grays Harbor and the Chehalis River, Washington.
Report to the U.S. Environmental Protection Agency.
4 vols.  Battelle-Northwest, Richland, WA., 1974.

Interim Reports I and II, Cooperative Grays Harbor
Surveillance Program.  Washington State Department
of Ecology, 1970-1971.
                                                        114

-------
                                   MATHEMATICAL MODEL OF A GREAT LAKES ESTUARY

                                                Charles G. Delos
                                      U.S. Environmental Protection Agency
                                                Chicago, Illinois
                       Abstract
     A one dimensional steady state finite section
estuary model was applied successfully to the lower 11
miles of the Black River of Ohio.  The approach used
was necessitated by the fact that water quality in the
lower portion of the river is strongly influenced by
Lake Erie waters.  The one dimensional estuary model
represents a compromise between conventional stream
models, which are fundamentally inadequate to simulate
this type of system, and multi-dimensional models,
which require considerably greater resources to apply
successfully.  The approach used is likely applicable
to the lower reaches of nearly all rivers tributary to
the Great Lakes.

                 General Considerations

     The Black River of Ohio drains an area of 467
square miles to Lake Erie.  Water quality of the lower
11 miles of the river is severely degraded by discharges
from U.S.  Steel Lorain Works and the cities of Elyria
and Lorain sewage treatment plants.  In order to assess
the degree of waste treatment required to attain
acceptable levels of dissolved oxygen in the river,
field data collection and mathematical modeling was
initiated by U.S. Environmental Protection Agency.

     The study area shown diagramatically in Figure 1,
encompassed 11.3 miles of waterway, extending from
above Elyria STP  (River Mile 10.8) down through the
river-harbor interface  (R.M. 0.0), out to the harbor-
lake interface  (R.M. -0.5). Above approximately
River Mile  6.5, the Black River is a free flowing
stream.  Below this point water level and quality are
influenced by backwaters of Lake Erie; thus, although
it is not  saline, the system conforms to an accepted
definition of an estuary.1,2  Following the last
glacial retreat, the stage of Lake Erie is hypothesized
to have risen significantly due to gradual upwarp of
the outlet sill in response to removal of the ice
loading.1,3  The river valley formed by downcutting
during the low water period was thereby drowned,
forming the present backwater.1,2  Additional en-
largement  of the lower 3 miles of channel has been done
by man to  expedite navigation.

     While the free flowing portion of the river  (R.M.
10.8 - 6.5) is shallow and of moderate velocity and
slope, the estuary portion is quite deep and slow
moving.  Here current measurements and water quality
data indicate stratification with intrusion of cleaner,
cooler Lake Erie waters beneath the warmer effluent
waters.

     Vertical concentration gradients were not found to
be excessive, however.  The variation of dissolved
oxygen with depth averaged about 1 mg/1 in the lower
portion of the river.  Consequently, it is appropriate
to describe the system one dimensionally using the
average concentration (from top to bottom) at each
station as commonly applied to pollution analysis of
estuaries.  In this case, the transport of material
caused by  the rather complex hydrodynamic behavior in
the estuary portion of the river is described in terms
of advective and dispersive transport along the
longitudinal axis, as discussed by Harleman.4
     The hydrograph of the Black River  at  Elyria  in-
dicated that a very low and relatively steady flow
regime had been maintained for about two weeks
preceding the July 1974 survey and continued  throughout
the survey period.  Under such conditions, the system
is likely to approach a steady state.
     The mathematical description of water  quality  be-
havior in a one dimensional estuary under steady  state
conditions is well developed and is elucidated  else-
where.5  Furthermore, a number of computer programs
are available to expedite solution.  The program
utilized, the AUTO-SS version of AUTO-QUAL,  incor-
porates a finite section approach.6  The river,
between R.M. -0.6 and 10.8, was divided into a large
number of equal length segments within which mixing
was assumed to be complete.  Concentrations were
determined by advective and dispersive transport  into
and out of each section and by the sources  and  sinks
of material within each section.

     For a system thus discretized the dissolved
oxygen concentration in section j, located upstream of
section j-1 and downstream of section j+1, is defined
by

  dDO_j/dt = [ Q_j (DO_{j+1} - DO_j)
               + (E_j A_j / X)(DO_{j+1} - DO_j)
               + (E_{j-1} A_{j-1} / X)(DO_{j-1} - DO_j)
               - qout_j DO_j + qin_j DOin_j ] / V_j
             - K_C CBOD_j - K_N NBOD_j + K_A (DOsat_j - DO_j)
             + P_j - R_j - SOD_j
where
        A    = cross sectional area (L2)
        DO   = dissolved oxygen concentration (M/L3)
        E    = dispersion coefficient (L2/T)
        K_A  = reaeration coefficient (1/T)
        K_C  = carbonaceous BOD decay coefficient (1/T)
        K_N  = nitrogenous BOD decay coefficient (1/T)
        P    = photosynthetic oxygen production (M/L3/T)
        Q    = river flow (L3/T)
        qin  = effluent or tributary flow (L3/T)
        qout = diversion flow (L3/T)
        R    = algal respiration (M/L3/T)
        SOD  = sediment oxygen demand (M/L3/T)
        V    = section volume (L3)
        X    = section length (L)
     The primary difference between the finite section
 estuary model  and the  finite section stream model
 (e.g., QUAL-II7) is that the downstream boundary con-
 dition must be fixed in order to model the estuary.
 This reflects the fact that the water quality of Lake
 Erie (a large system) is relatively unaffected by
 conditions in the Black River (a small system).
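
     As a rough illustration of this finite section balance, the sketch below
marches an assumed 20-section reach toward steady state with the downstream
(lake) concentration held fixed.  It is not the AUTO-SS program; the geometry,
flow, dispersion, loads, and rate constants are invented for the example.

# Sketch of the finite-section DO balance (not AUTO-QUAL/AUTO-SS itself),
# marched in time toward steady state with the downstream boundary fixed.
# Geometry, flows, loads, and rate constants below are assumed, not survey data.
import numpy as np

n = 20                                   # sections; 0 = upstream, n-1 = lake boundary
dx = 0.55 * 5280.0                       # section length, ft
A = np.full(n, 2000.0)                   # cross-sectional area, ft^2
V = A * dx                               # section volume, ft^3
Q = 25.4                                 # net advective flow, cfs
E = 500.0                                # longitudinal dispersion, ft^2/s

day = 86400.0
KA, KC, KN = 0.3 / day, 0.14 / day, 0.05 / day   # per-day rates converted to 1/s
CBOD = np.linspace(8.0, 2.0, n)                  # assumed BOD profiles, mg/l
NBOD = np.linspace(6.0, 1.0, n)
DOsat, SOD = 8.5, 0.0

DO = np.full(n, 6.0)
dt = 600.0                               # s; well within explicit stability limits here
for _ in range(20000):
    advect = Q * (DO[:-2] - DO[1:-1])                      # inflow from the upstream section
    disperse = E * A[1:-1] / dx * ((DO[2:] - DO[1:-1]) + (DO[:-2] - DO[1:-1]))
    kinetics = (-KC * CBOD[1:-1] - KN * NBOD[1:-1]
                + KA * (DOsat - DO[1:-1]) - SOD)
    DO[1:-1] += dt * ((advect + disperse) / V[1:-1] + kinetics)
    DO[0] = DO[1]                        # zero-gradient upstream end
    DO[-1] = 8.0                         # fixed downstream boundary (lake DO)
print(np.round(DO, 2))

Holding the last section at the lake value is what distinguishes this estuary
formulation from a stream model with a free downstream end.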

                   Model  Calibration

     The calibration is based on a rather comprehen-
 sive field survey performed on July 23-26, 1974.

 Hydraulic Characteristics

     Flow conditions comparable to the once in 10 year
 7 day low flow were  observed in the river during the
                                                       115

-------
     FIGURE 1:  BLACK RIVER DIAGRAM.  Schematic profile of the study reach
     (River Mile 0 to 10) showing the dredged harbor (about 30 ft deep), the
     undredged estuary portion (2-15 ft), the shallow free flowing portion
     (about 1 ft), and the Lorain STP, U.S. Steel, and Elyria STP discharge
     locations.
July 1974 survey.  Net flow traveling past the steel
mill averaged 25.4 cfs.  This low flow is dwarfed by
the 266 cfs cycled through the steel mill intakes and
outfalls.

      Hydraulic slope in the free flowing portion of
the river (above R.M. 6.5) averaged 4.7 ft/mile.  At
R.M. 6.5 the Lake Erie water elevation essentially
intercepts the river elevation and the river bed slope
of 3-4 ft/mile results in increasing depth.  Below
R.M. 2.9, the channel is dredged to a depth of 30 ft.
Channel dimensions are shown in Figure 1.

      Current measurements were made at two cross
sections (R.M. 2.3 and 3.1)  in the estuary portion of
the river.   Four times in three days, current direc-
tion and velocity were measured at 3 foot depth
intervals at 3-5 points along each transect.  The
longitudinal component of velocity, averaged laterally
and temporally, is shown in Figure 2.  Intrusion of
lake water beneath the effluent waters is clearly in-
dicated in both cases.  Flow was more sharply strati-
fied at the more upstream cross-section.   Also
noteworthy at this cross-section is the small current
traveling downstream along the river bed, possibly
originating from cool upstream water which escaped
entrainment in the upstream water intake.

      The net advective velocity, computed for the
simulation from flow and channel dimensions, was only
0.02 and 0.001 ft/sec at the upper and lower cross
sections respectively.  These very low velocities
sharply contrast the maximum stratified current velo-
cities (shown in Figure 2) of 0.25 and 0.2 ft/sec at
the upper and lower cross sections respectively.  The
necessity for a high degree of longitudinal dispersion
in a one dimensional model is thus apparent.
        FIGURE 2:  MEASURED CURRENT VELOCITIES (longitudinal velocity,
        ft/sec, with depth at the two cross sections).
                 The longitudinal dispersion  coefficient,  E,  was
            determined from conservative materials  profiles,  using
            the trial and error  fit procedure described by Thomann
            for use with finite  difference  computation.5 The  value
            of E is shown as a function of  river mile  in Figure 3.
            Comparison of observed and predicted dissolved solids
            profiles is shown in Figure 4.  Fluoride,  chloride,
            and sulfate displayed analogous profiles and comparable
            fits.
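
A minimal sketch of this trial-and-error fitting is given below.  The reach
geometry, boundary concentrations, and the synthetic "observed" profile are
assumed stand-ins for the survey data; the point is only the pattern of
solving a steady conservative-substance balance for each trial E and keeping
the value with the smallest misfit.

# Sketch of the trial-and-error determination of E (assumed values, not the
# survey data): a steady-state conservative-substance profile is computed for
# each trial dispersion coefficient and compared with an observed profile.
import numpy as np

n, dx = 22, 0.5 * 5280.0          # sections, section length (ft)
Q, A = 25.4, 2000.0               # flow (cfs), cross-sectional area (ft^2)
c_up, c_lake = 600.0, 180.0       # boundary dissolved-solids conc. (mg/l), assumed

def steady_profile(E):
    """Solve the steady 1-D advection-dispersion balance for a trial E (ft^2/s)."""
    D = E * A / dx                # dispersive exchange coefficient
    M = np.zeros((n, n)); b = np.zeros(n)
    M[0, 0] = M[-1, -1] = 1.0
    b[0], b[-1] = c_up, c_lake    # fixed upstream and lake boundary concentrations
    for j in range(1, n - 1):
        M[j, j - 1] = Q + D
        M[j, j]     = -(Q + 2 * D)
        M[j, j + 1] = D
    return np.linalg.solve(M, b)

# synthetic "observations" stand in for the measured dissolved-solids profile
observed = steady_profile(450.0) + np.random.default_rng(1).normal(0, 10, n)
trials = [100.0, 200.0, 300.0, 450.0, 600.0, 900.0]
best = min(trials, key=lambda E: np.sqrt(np.mean((steady_profile(E) - observed) ** 2)))
print("best-fit E (ft^2/s):", best)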

                 The observed level of dispersive mixing is believ-
            ed to be a manifestation primarily of the  stratified
            flow conditions which result from gravitational insta-
            bility of the lighter effluent  waters and  heavier lake
            waters.  The 3-4 °C  vertical temperature differentials
            represent density differentials of approximately  0.001
            g/ml.  It is also likely of importance  that the steel
            mill withdraws water from some  distance beneath the
            surface and discharges heated effluent  at  the  surface.
            The dispersion coefficient, highest below  the  steel
            mill, decreases in the harbor towards values expected
            for near shore and open lake waters.

                 Wind effects, in the form  of seiches,  may also
            constitute a portion of the dispersive  energy.  Lunar
            tides, on the other  hand, are not observed on  the Great
            Lakes.

            Dissolved Oxygen Balance

                 Reaeration capacity, K_A, was calculated using the
            O'Connor formula modified as recommended by O'Connor:8

                       K_A = K_L/H     and     K_L = 12.9 (U/H)^(1/2)

                                constrained by  K_L > 2

            where K_L is the surface transfer coefficient, H is the
            depth, and U is the net velocity.
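
In code the calculation reduces to a few lines.  The sketch below assumes an
O'Connor-type surface transfer estimate, K_L = 12.9 (U/H)^(1/2) in ft/day with
U in ft/sec and H in ft, and applies the stated floor; the sample velocities
and depths are illustrative.

# Sketch of the surface-transfer reaeration calculation.  The form and units of
# the K_L estimate and the sample inputs are assumptions for illustration.
def reaeration_coefficient(u_fps, depth_ft):
    """Return K_A (1/day) from surface transfer K_L (assumed ft/day) and depth H (ft)."""
    k_l = 12.9 * (u_fps / depth_ft) ** 0.5   # assumed O'Connor-type estimate
    k_l = max(k_l, 2.0)                      # minimum surface transfer (assumed ft/day)
    return k_l / depth_ft                    # K_A = K_L / H

print(reaeration_coefficient(0.02, 30.0))    # sluggish, deep estuary section
print(reaeration_coefficient(0.5, 2.0))      # shallow free-flowing section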
            FIGURE 3:  LONGITUDINAL DISPERSION COEFFICIENT (E, plotted
            against river mile).
                                                       116

-------
      FIGURE 4:  OBSERVED AND PREDICTED DISSOLVED SOLIDS, JULY 23-26, 1974
      (plotted against river mile).
      The Tsivoglou formula10 was considered for
application to the free flowing portion but was found
to significantly underestimate reaeration capacity.
The Churchill formula, on the other hand, was consid-
ered to be inapplicable for this situation as it was
developed for streams with velocities considerably
higher than found anywhere in the study reach, and
depths greater than those found in the free flowing
portion.11  Its use would also underestimate reaera-
tion capacity.

      The bulk of the oxidizable nitrogen consisted of
ammonia.   As the rate limiting step under this condi-
tion can be expected to be ammonia oxidation, a single
first order kinetic reaction will closely approximate
the three or four stage reaction (depending on whether
starting with ammonia or organic nitrogen):8,12

               Org-N → NH3 → NO2 → NO3

Nitrogenous BOD (NBOD) was calculated based on total
Kjeldahl nitrogen concentration.

      Differences in decay rates were expected to exist
between the estuary and free flowing portions of the
river, due to differences in benthai character, ratio
of volume to benthal surface,  and rate of replacement
of fluid elements at the benthal interface.8  In the
free flowing portion (above R.M. 6.5) the decay co-
efficient was found to be 0.15 day^-1 (base e) based on
the observed rate of disappearance.   Such a low rate
is characteristic of a system dominated by gross levels
of carbonaceous BOD.8

      The decay coefficient in the estuary portion of
the river was estimated to be 0.05 day^-1, based on fit
to the observed NBOD levels.  This unusually low rate
is attributed to insufficient levels of dissolved
oxygen existing through much of the estuary.8,9,12
                                      Carbonaceous BOD (CBOD)  was determined from the
                                 long term BOD (20 or 30 day BOD) less the NBOD.  The
                                  decay coefficient, estimated from observed CBOD levels
                                  and rates of disappearance, was found to be 0.7 day^-1
                                  in the first mile below Elyria STP, 0.5 day^-1 through
                                  the remainder of the free flowing portion of the river,
                                  and 0.14 day^-1 in the estuary portion.

                                      BOD loading is summarized in Table 1.

                                  Table 1:  Oxygen demanding effluent loads (lbs/day)

                                        Source              CBOD         NBOD

                                        Elyria STP           9800         5000
                                        U.S. Steel (net)    14000         8700
                                        Lorain STP           2600         2800
                                      The diurnal dissolved oxygen variation at all
                                 stations in the estuary portion of the river was either
                                  small or inconsistent with photosynthetic activity.
                                 Negligible algal productivity is likely a consequence
                                 of rapid light extinction in the water column.

                                      Sediment oxygen demand (SOD)  measurements were
                                 made at various locations.   When converted to mg/l/day,
                                 the SOD was found to be minor relative to the oxygen
                                 uptake of BOD dissolved and suspended in the water
                                  column (K_C x CBOD + K_N x NBOD, in mg/l/day).

                                      Thus, without incurring significant error, the
                                 accumulation of organic matter  in the sediments could
                                  be assumed to have attained a steady state, with the
                                  rate of decay within the sediments balanced by the
                                  rate of deposition.13  The small sediment oxygen demand
                                  found was therefore implicitly accounted for in the BOD
                                  decay, as originally suggested by Streeter.14
      FIGURE 5:  OBSERVED AND PREDICTED DISSOLVED OXYGEN, JULY 23-26, 1974
      (DO, mg/1, plotted against river mile).
                                                      117

-------
     Comparison of observed and predicted dissolved
oxygen concentrations for the July 1974 survey is shown
in Figure 5.

                      Verification

     Water quality data collected on September 16, 1975,
were used to test the predictive capability of the model.
Net flow past U.S. Steel was approximately four times
greater, and temperatures 3-4 °C less than those found
during the July 1974 survey.

     As the U.S. Steel effluents were not monitored at
this time, previously measured net loads (concentra-
tion deltas between intakes and outfalls) were assumed.
However, because of the high degree of recirculation
through the river by the steel mill, the plant's intake
and effluent qualities are interdependent.  After a
tedious manual convergence method involving several
computer runs had been used to determine the correct
intake and outfall concentrations for a given set of
conditions, the computer program was modified to couple
each intake to the outfall it feeds (as shown in
Figure 1).
from the given change in BOD between intake and outfall,
added to the intake BOD computed during the previous
iterative step in the solution.15
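
     The coupling amounts to a simple fixed-point iteration, sketched below
with illustrative numbers rather than the Black River loads: the outfall BOD
is the intake BOD plus the given net change, and the intake BOD in turn
reflects the recirculated effluent.

# Sketch of the intake/outfall coupling solved by simple iteration.  The
# upstream BOD, net delta, and recirculation fraction are illustrative numbers.
upstream_bod = 4.0       # mg/l arriving from upstream (assumed)
net_delta = 6.0          # mg/l added between intake and outfall (assumed net load)
recirculation = 0.9      # fraction of intake water that is returned effluent (assumed)

intake = upstream_bod
for step in range(200):
    outfall = intake + net_delta                              # apply the given net change
    new_intake = (1 - recirculation) * upstream_bod + recirculation * outfall
    if abs(new_intake - intake) < 1e-6:                       # converged
        break
    intake = new_intake
print(f"after {step} iterations: intake = {intake:.2f} mg/l, outfall = {outfall:.2f} mg/l")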

     Effluent dissolved oxygen (DO) was handled in
analogous manner; however, the relationship between
the intake and outfall DO is more complex.  The follow-
ing simplification of the process is expected:

    (1) Water is pumped from the intake with DO concen-
        tration C_i.

    (2) Temperature is raised to the outfall temperature;
        DO saturation is depressed to C_so; the resulting
        deficit is D_i = C_so - C_i.

    (3) Water undergoes reaeration in returning to lake
        elevation, resulting in a new deficit D_o;
        D_o/D_i = exp(-K_A t), where K_A is a reaeration
        coefficient.

The product K_A t (or the ratio D_o/D_i) was determined from
the July 1974 data.  Since the reaeration coefficient
can be expected to be temperature dependent, an
Arrhenius rate dependency was assumed.
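
     A sketch of these three steps, with an Arrhenius-type adjustment of the
product K_A t, is given below.  The saturation polynomial is a rough general
approximation, and the reference K_A t, reference temperature, and theta are
assumed values rather than those fitted from the July 1974 data.

# Sketch of the three-step outfall DO relationship with a temperature-adjusted
# reaeration product.  All parameter values below are illustrative assumptions.
import math

def do_saturation(temp_c):
    """Rough DO saturation (mg/l) in fresh water; a general approximation only."""
    return 14.62 - 0.3898 * temp_c + 0.006969 * temp_c**2 - 0.00005897 * temp_c**3

def outfall_do(intake_do, outfall_temp_c, kat_ref=0.35, ref_temp_c=25.0, theta=1.024):
    """Steps (1)-(3): warm the water, depress saturation, then partially reaerate."""
    c_so = do_saturation(outfall_temp_c)          # (2) saturation at outfall temperature
    d_i = max(c_so - intake_do, 0.0)              # deficit created by the heating
    kat = kat_ref * theta ** (outfall_temp_c - ref_temp_c)   # temperature-adjusted K_A*t
    d_o = d_i * math.exp(-kat)                    # (3) deficit remaining after reaeration
    return c_so - d_o

print(round(outfall_do(intake_do=7.5, outfall_temp_c=30.0), 2))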

     Comparison of observed and predicted dissolved
oxygen concentrations for the September 1975 survey is
shown in Figure 6.

                   Sensitivity Analysis

   FIGURE 6:  OBSERVED AND PREDICTED DO, SEPT. 16, 1975
   (DO, mg/1, plotted against river mile).

     The sensitivity of dissolved oxygen predictions
to changes in system parameters under low flow condi-
tions was investigated.  The analysis was based on
conditions expected following implementation of
improved treatment by dischargers.

    Predicted dissolved oxygen levels were  most sensi-
tive to changes in the reaeration and dispersion
coefficients and Lake Erie BOD  (downstream  boundary
condition).  They were less sensitive to CBOD and NBOD
decay coefficients and Lake Erie dissolved  oxygen, and
were quite insensitive to river flow and upstream
boundary conditions.

    The great difference in sensitivity to  dispersion
and flow of course reflects the previously  discussed
magnitude of difference between stratified  flow
velocities and net advective velocity.

                    Conclusion

    Intrusion of Lake Erie waters into the  Black River
estuary is brought about primarily by thermally induced
density differences between Lake and effluent waters.
Under low flow conditions net advection downstream
plays little role in the transport of pollutants out of
the estuary.  Rather, transport brought about by oppos-
ing vertically stratified flows may be simulated as
longitudinal dispersion in a one dimensional,  steady
state estuary model.  The magnitude of the  dispersion
coefficient used is somewhat smaller than found for
many ocean estuaries, but greater than normally applied
to streams or lakes.

    The approach taken is adequate for planning and
enforcement purposes.

    Additionally, due to the sluggish flow  in the
backwater, reaeration is best calculated from basic
surface transfer considerations  (surface to volume
ratio), with minimum values of  surface transfer co-
efficient chosen independently of flow turbulence
 (velocity, depth, slope) considerations.

    The observed influence of the lake on water quality
in the lower reaches of the river also has  an important
implication for the modeling of the Great Lakes.  Loads to
Lake Erie calculated by multiplying the observed
concentration by the net advective flow will signifi-
cantly underestimate the true load being delivered to
the lake  (by dispersive transport).  Using  concentra-
tions observed further upstream will by-pass this
effect but will also fail to include major  waste
sources.

                    References

1. Brant, R.A., and Herdendorf, C.E., "Delineation of
   Great Lakes Estuaries", Proceedings 15th Conference
   of Great Lakes Research, page 710, 1972.

2. Pritchard, D.W., "What is an Estuary:  Physical
   Viewpoint", in Estuaries, edited by G.H.  Lauff,
   American Association for Advancement of  Science,
   Washington, D.C., 1967.

3. Hough, J.L., Geology of the  Great Lakes, University
   of Illinois Press, Urbana, 1958.

4. Harleman, D.R.F., "Diffusion Processes in Stratified
   Flow", in Estuary and Coastline Hydrodynamics,
   edited by A.T. Ippen, McGraw-Hill Book Co., New York,
   1966.

5. Thomann, R.V., Systems Analysis and Water Quality
   Management,  Environmental Science Services Division,
   New York, 1972.

-------
6. Crim, R.L., and Lovelace, N.L.  "AUTO-QUAL Modelling
   Systems" EPA-440/9-73-003, U.S.  EPA, Washington
   D.C., March, 1973.

7. Water Resources Engineers, Inc.,  "Computer  Program
   Documentation  for  the Stream Quality Model  QUAL-
   II", prepared  for  U.S. EPA, May,  1973.

8. O'Connor, D.J., Thomann, R.V., DiToro, D.M.  and
   Brooks, N.H.,  "Mathematical Modeling of  Natural
   Systems", Manhattan  College, New York, 1974.

9. Hydroscience,  Inc.,  "Water Quality Analysis for the
   Markland Pool  of the Ohio River",  prepared  for
   Malcolm Pirnie Engineers and the Metropolitan Sewer
   District of Greater Cincinnati, October, 1968.

10. Tsivoglou, E.C., and Wallace, J.R., "Characteriza-
   tion of Stream Reaeration Capacity" EPA-R3-72-012,
   U.S. EPA, October, 1972.

11. Churchill, M.A. Elmore, H.L., and Buckingham, R.A.,
   "The Prediction of Stream Reaeration Rates",
   Journal SED, ASCE, Volume 88, Number 4, SA4, July,
   1962.

12. O'Connor, D.J., Thomann, R.V.,  and DiToro,  D.M.,
    "Dynamic Water Quality Forecasting and Management",
   EPA-660/3-73-009,  U.S. EPA, August, 1973.

13. Velz, C.J., "Significance of Organic Sludge
   Deposits", Oxygen  Relationships in Streams, Public
   Health Technical Report W58-2.

14. Streeter, H.W.,  "Modern Sewage  Disposal", Federa-
    tion of Sewage Works Association, page  198, 1938.

15. Schregardus, D., U.S. EPA Michigan-Ohio District
   Office, unpublished  communication.
                                                       119

-------
                               COST-EFFECTIVE ANALYSIS OF WASTE LOAD ALLOCATIONS

                       John Kingscott, Environmental Protection Agency, Washington, D.C.
                   Introduction

     The Federal Water Pollution Control Act
Amendments of 1972 require the States to identify
those waters for which the minimum legislated
effluent limitations are not stringent enough to meet
applicable water quality standards.  Roughly 2000
segments have been identified as being water quality
limited for a variety of reasons.  The waste load
allocation procedure is used to determine effluent
limitations based on water quality considerations
rather than uniform applications of technology.
State Basin Planning under Section 303 of the Act and
currently Water Quality Management Planning under
Section 208 is attempting to ensure that the 1983
goals of fishable and swimmable water uses will be
achieved in these segments.  The potential investment
of large sums of money for advanced waste treatment
justifies a close look at the practical  implications
of the waste load allocations.  Much of the current
attention being given to nonpoint sources is the
result of a concern that these sources will negate
the upgrading of water uses despite increased levels
of point source control.  However, many segments have
predominantly point source problems, or their nonpoint
problems can be addressed independently for periods of
high flow.

     Common stream analysis practice consists of the
application of verified deterministic models to pre-
dict the water quality response during critical or
design conditions.  The behavior of any water segment
can most reasonably be approximated as a probabilistic
system since the flow, temperature, waste load, and
initial instream concentrations all vary over time.
The state of the art in model development is far be-
yond the availability of basic data and insight into
biological processes needed to broadly apply more
sophisticated methods.  This emphasizes the need for
judgment in interpreting the results of the simpler
deterministic models.

     This paper considers the relative consequences
of some procedures used in the application of determi-
nistic models, in particular the choice of design
conditions and seasonal application of waste load
allocations.  The analysis was intended to be general
and applicable to a number of situations and issues,
yet to address real rather than hypothetical
situations.  The resulting
analysis considers the costs of advanced waste treat-
ment and the effects in terms of a risk for the
violation of dissolved oxygen stream standards.  An
effluent analysis was undertaken to define an empirical
procedure for generating effluent loading factors.
The waste treatment costs were considered by combining
flow dependent unit processes to form viable treatment
systems.  Five water quality limited segments were
analyzed using historical U. S. Geological Survey
streamflow records.  Cost-effective curves were gen-
erated to define feasible treatment options for
nitrogenous and carbonaceous BOD removal.   The optimal
investment strategy for levels of treatment higher
than secondary was then used to study issues related
to waste load allocations.
                  Effluent Analysis

     A number of  factors can  be  expected to affect
treatment efficiency  and the  variability of effluent
loadings.  The need existed to produce a generally
applicable procedure  to generate BOD loadings  for  dif-
ferent treatment  schemes.

     Daily effluent BOD concentrations were analyzed
for nine Michigan and five Texas3 secondary plants.
The mean, variance, and coefficient  of skewness were
obtained for one year of operation of each  of  the
plants.  The distributions were  assumed to  be  log
normal and a method was sought to generate  synthetic
daily mean concentrations given  an annual mean effluent
load.

     Matalas6,1* has suggested a  procedure for pre-
serving the moments of a distribution  when  log values
are generated.  If "a" is the lower  bound of random
daily BOD represented by x, then y = log (x - a) is
normally distributed.  If the x  parameters  represent
the daily mean concentration of  BOD,  they are related
to the y parameters as follows:

     μ(x) = a + exp[σ²(y)/2 + μ(y)]                                     (1)

     σ²(x) = exp[2μ(y)] {exp[2σ²(y)] - exp[σ²(y)]}                      (2)

     γ(x) = {exp[3σ²(y)] - 3 exp[σ²(y)] + 2} / {exp[σ²(y)] - 1}^(3/2)   (3)

     where μ is the mean, σ² is the variance,
     and γ is the coefficient of skewness.

     To preserve the statistics of the generated BOD
values, the mean, variance, and coefficient of skew-
ness are determined for the historic distribution
(the x variables); substituted into equations 1, 2,
and 3; and solved for μ(y), σ²(y), and "a."  These are
the parametric values that are used in the generation
process to give a series of synthetic normally distri-
buted logarithms y1, y2, ..., yn.  The generated BODs
are then calculated back through the transformation
by the relation:

               x_i = exp(y_i) + a

     The independent variables chosen to describe the
distribution were the wastewater discharge Q, in mgd,
and yearly average effluent BOD concentration y(x) in
mg/1.  The following empirical relationships were
developed by stepwise regression:

  σ²(x) = 4.06 μ(x)^1.27 Q^-.25                       (4)

     multiple correlation coefficient = .83

  γ(x) = 5.81 μ(x)^-.43                               (5)

     multiple correlation coefficient = .47

     Thus normally distributed logs could be generated
and transformed into daily mean concentrations of
effluent BOD.  The standard deviations for distribu-
tions thus generated are related to those for the
historical operation of the treatment plants in
figure 1.  The response of the generated distributions
to variations of the independent variables is given in
figure 2.
                                                       120

-------
          Fig. 1.  Comparison of generated and
            observed standard deviations
            (• Michigan, o Texas plants).

  Fig. 2.  Response of the generated BOD distribution
         to changes in waste flow and mean BOD.
     This  technique  serves as a crude approximation of
reality  and does not consider numerous  important fac-
tors.  However, the  procedure is very general and can
be applied with a minimum amount of information.
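
     The procedure can be sketched compactly in code.  In the sketch below the
regression forms of equations (4) and (5) supply the variance and skewness for
a given plant size and mean effluent BOD, the moment relations (1)-(3) are
inverted by simple bisection, and a year of synthetic daily concentrations is
drawn; the plant size, mean BOD, and random seed are arbitrary illustrative
inputs.

# Sketch of the synthetic effluent BOD generator described in this section.
# The plant size and mean BOD below are arbitrary; the bisection solver is one
# convenient way to invert the moment relations.
import numpy as np

def lognormal_params(mean_x, var_x, skew_x):
    """Solve equations (1)-(3) for a, mu(y), sigma2(y) given the moments of x."""
    # skew = (w + 2) * sqrt(w - 1), where w = exp(sigma2(y)); solve by bisection
    lo, hi = 1.0 + 1e-9, 20.0
    for _ in range(100):
        w = 0.5 * (lo + hi)
        if (w + 2.0) * np.sqrt(w - 1.0) < skew_x:
            lo = w
        else:
            hi = w
    sigma2_y = np.log(w)
    mu_y = 0.5 * np.log(var_x / (w * (w - 1.0)))          # from equation (2)
    a = mean_x - np.exp(sigma2_y / 2.0 + mu_y)            # from equation (1)
    return a, mu_y, sigma2_y

def synthetic_daily_bod(mean_bod, flow_mgd, n_days=365, seed=0):
    """Generate daily mean effluent BOD (mg/l) for a plant of the given size."""
    var_x = 4.06 * mean_bod**1.27 * flow_mgd**-0.25       # equation (4)
    skew_x = 5.81 * mean_bod**-0.43                       # equation (5)
    a, mu_y, sigma2_y = lognormal_params(mean_bod, var_x, skew_x)
    y = np.random.default_rng(seed).normal(mu_y, np.sqrt(sigma2_y), n_days)
    return np.exp(y) + a                                  # back-transformation

bod = synthetic_daily_bod(mean_bod=30.0, flow_mgd=25.0)
print(round(bod.mean(), 1), round(bod.std(), 1))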
                                                       Cost Analysis

                                         Costs were  calculated  using  combinations  of
                                    various  unit  processes for  wastewater treatment and
                                    sludge disposal.10   Consideration was given  to those
                                    processes which  remove oxygen demanding material for
                                    what  might be considered secondary and advanced sys-
                                    tems.  The various  combinations of processes were
                                    classified according to characteristic effluent quality
                                     assuming a typical influent waste stream.  Compatible
                                     unit processes were combined as building blocks,
                                     considering basic design criteria, equipment sizing,
                                     and the quantities and characteristics of the sludges
                                     produced.

                                         The total cost  for each unit process was
                                    determined based on wastewater flow and includes cap-
                                    ital, operation, and maintenance.  These costs were
                                    developed based on  unit sizing as determined by
                                    standard design criteria, process loading capabilities,
                                    solids generation,  chemical and energy consumption,
                                    and manpower  requirements.  The values were trended
                                    to a  common cost level representative of February 1973.
                                    Figure 3 traces the combinations of unit wastewater
                                    processes and the effluent  values which are character-
                                    istic of the  combined systems.
                                                          Figure 3

                                                    WASTEWATER TREATMENT
                                                  UNIT PROCESS COMBINATIONS

    A1 CONVENTIONAL PRIMARY
    A3 PRIMARY WITH SINGLE STAGE LIME ADDITION
    A4 PRIMARY WITH ALUM ADDITION
    A5 PRIMARY WITH FERRIC CHLORIDE ADDITION
B1,2,3 TRICKLING FILTER
C1,2,3 ACTIVATED SLUDGE
    C4 ACTIVATED SLUDGE WITH ALUM ADDITION
    C5 ACTIVATED SLUDGE WITH FERRIC CHLORIDE
    C6 HIGH RATE ACTIVATED SLUDGE
     D FILTRATION
     E ACTIVATED CARBON
  G1,2 BIOLOGICAL NITRIFICATION
     J BREAK POINT CHLORINATION
                                                       121

-------
                 Segment Analysis

     The fundamental  equation which describes the
longitudinal  distribution of dissolved oxygen can be
developed from the principles of mass balance and
continuity.  An advective system with carbonaceous
and nitrogenous oxygen sinks and first order kinetic
relationships takes the following form:

  ∂C/∂t = -(1/A) ∂(QC)/∂x + K_a(C_s - C) - K_d L(x) - K_n N(x)    (6)

where:  C    = concentration of dissolved oxygen
        C_s  = saturation concentration of D.O.
        K_a  = reaeration rate constant
        K_d  = carbonaceous BOD oxidation rate constant
        L(x) = concentration of carbonaceous BOD
        K_n  = nitrogenous BOD oxidation rate constant
        N(x) = concentration of nitrogenous BOD
        Q    = river volumetric flow
        A    = river cross-sectional area
        x    = longitudinal distance

Equations of this nature have been solved by Li5 for
the case where boundary conditions are arbitrary func-
tions of time and by DiToro and O'Connor2 for the case
where boundary conditions are functions of time and
the flow is time-variable.  The steady state solution
to equation 6 for constant boundary conditions and
coefficients is:
  D(x) = C_s - C(x)
       = D_o exp(-K_a x/u)
         + [K_d L_o / (K_a - K_d)] [exp(-K_d x/u) - exp(-K_a x/u)]
         + [K_n N_o / (K_a - K_n)] [exp(-K_n x/u) - exp(-K_a x/u)]     (7)

     where:  D(x)           = distribution of D.O. deficit
             D_o, L_o, N_o  = initial concentrations
             u = Q/A        = stream velocity
     Historical daily streamflows were considered
along with randomly generated waste loads and charac-
teristic monthly temperatures to calculate a minimum
dissolved oxygen value with equation 7.   This method
implies that the stream flow and velocity are constant
for one day and then abruptly change to  another con-
stant value the subsequent day and so on.  It is
important to note that equation 7 describes a profile
derived from steady state conditions for constant
streamflow, waste load, and reaction rate coefficients.
However, the equation was applied by assuming the
parameters are constant for one day and  then immedi-
ately change.   Limitations to this approach1 imply
that the solution is valid only for one  day's travel
time below the waste source.  This will  be significant
for relatively small reaction rate coefficients which
cause the minimum D.O. to occur some time after one
day from the time of input.  The significance of this
fact on the analysis is unknown but could conceivably
be small if D.O. standards violations are occurring
during relatively steady flow periods on the recession
tail of the hydrograph.
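
     A compact sketch of this daily screening follows.  The segment geometry,
loads, reaeration estimate, temperature coefficients, and synthetic flow
record are illustrative assumptions rather than one of the five study
segments; the structure follows the text: each day is treated as an
independent steady state, equation (7) is evaluated over one day's travel,
and days whose minimum D.O. falls below 4 mg/1 are counted.

# Sketch of the daily screening procedure (illustrative parameters throughout).
import numpy as np

def min_do(Q, T, Wc=12000.0, Wn=6000.0, A=500.0, depth=5.0, D0=1.0, Kd=0.5, Kn=0.3):
    """Minimum DO (mg/l) along the reach for one day's flow Q (cfs) and temperature T (C)."""
    L0 = Wc / (5.39 * Q)                       # instream CBOD at the outfall, mg/l
    N0 = Wn / (5.39 * Q)                       # instream NBOD at the outfall, mg/l
    u_fps = Q / A                              # velocity, ft/s
    u = u_fps * 86400.0 / 5280.0               # velocity, miles/day
    Ka = 12.9 * u_fps**0.5 / depth**1.5        # assumed reaeration power function, 1/day
    Kd_T, Kn_T, Ka_T = Kd * 1.047**(T - 20), Kn * 1.08**(T - 20), Ka * 1.024**(T - 20)
    Cs = 14.62 - 0.3898 * T + 0.006969 * T**2 - 0.00005897 * T**3   # rough DO saturation
    x = np.linspace(0.1, u, 200)               # distances within one day's travel, miles
    D = (D0 * np.exp(-Ka_T * x / u)
         + Kd_T * L0 / (Ka_T - Kd_T) * (np.exp(-Kd_T * x / u) - np.exp(-Ka_T * x / u))
         + Kn_T * N0 / (Ka_T - Kn_T) * (np.exp(-Kn_T * x / u) - np.exp(-Ka_T * x / u)))
    return float(np.min(Cs - D))

rng = np.random.default_rng(0)
flows = rng.lognormal(mean=5.0, sigma=0.6, size=20 * 365)     # synthetic daily flows, cfs
temps = 15.0 + 10.0 * np.sin(2 * np.pi * np.arange(flows.size) / 365.0)
violations = sum(min_do(Q, T) < 4.0 for Q, T in zip(flows, temps))
print(f"violation occurrence: {100.0 * violations / flows.size:.1f}%")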

     Five stream segments that are water quality
limited for dissolved oxygen were analyzed.  The seg-
ments have been modeled through the EPA  National
River Basin Modeling Program and represent a variety
of hydrologic  conditions.
     Segment             State     Basin            Average Flow (cfs)

     Flint               Ga.       Chattahoochee            345
     Cache La Poudre     Colo.     S. Platte                130
     Schuylkill          Pa.       Delaware                1740
     Reedy               S.Car.    Santee                    85
     Upper Miss.         Minn.     Mississippi            12090
     The  stream  hydraulic  descriptions  were simplified
and assumed to be  constant for  the  entire segment.
Municipal and industrial waste  sources  were combined
for the simulation  so  either  one  or two point sources
were included depending on the  existence of industrial
discharges.  The distribution of  daily  loadings  for an
industrial source was  assumed to  have the same charac-
teristics as a municipal source but generated
independently.  The nitrogenous BOD component was
assumed to also have the same log normal
characteristics.

     U. S. Geological  Survey  stream gaging stations
exist on  all segments, and daily  flows  for the last
twenty years of record were obtained from STORET
(except the Colorado stream where twelve years are
available).  The daily stream flows were used to calcu-
late the  hydraulic response and the  reaeration rate
constant.  Daily waste loadings were produced by
assuming  a constant waste  flow  and  randomly generated
daily mean concentrations.  Consequently,  the load-
ings were assumed to be independent of  the time  of
year and  streamflow.   Representative monthly water
temperatures were determined  from the U.  S.  Geological
Survey Water Resources Data-Water Quality  Records.
Representative boundary conditions were  determined from
the original model validation studies and  assumed to be
constant.  The power functions  for  velocity,  depth,
and reaeration coefficient determination  and  reaction
rate constant temperature  adjustments were applied over
the wide  range of flows and temperatures.   Table I
shows the values which were used  for the  simulations.

     The  simulation procedure was used to  calculate the
cost-effective curves unique  to each segment  system
shown in figure 4.  For each  classification  of effluent
quality a simulation was made from the historical daily
streamflow records to determine the number  of  times the
average daily oxygen was below  a  standard  of  4 mg/1.
This standard has tentatively been identified8 as pro-
viding a  low level of protection; that is,  it  should
permit populations of tolerant  species and  successful
passage of most migrants while  there may  be  a  reduced
production or elimination of  sensitive fish.   The cor-
responding costs were determined  as the  least  costly
combination of unit processes capable of achieving the
effluent values (including sludge disposal).   The
ammonia removal  points were determined by  considering
an average of four alternatives—trickling  filter and
break point chlorination,  trickling filter  and biological
nitrification, activated sludge and break  point  chlori-
nation, and high rate activated sludge with  biological
nitrification.  The averaging was done to  generalize the
procedure as biological nitrification was  less costly
but potential  seasonal  operating  problems  and  perhaps
land requirements would not always make  it  the more
reasonable choice.  Ammonia removal was  considered on
increments of one quarter of  the  waste flow to aid the
construction of a continuous  plot.  This  possibility
is not unreasonable and might be  compared  to  split
treatment in a water softening  operation.   The result-
ing cost-effective curves provide a guide  to  the
selection of advanced treatment schemes  that  minimize
the total  cost associated with  a  given frequency of
oxygen standard violation.   The curves map  the optimal
                                                       122

-------
 choice  of advanced  treatment  schemes  and  clearly define
 the  point at  which  ammonia  removal  should be considered.
 In some segments  there  is a strong  indication that  an
 understanding of  the  process  of  stream nitrification  is
 important to  waste  load allocation  decisions and
 efforts should be made  to assess the  likelihood  of  its
 occurrence.9,10  The  adequacy of first order kinetic
 relationships should  probably also  be confirmed  against
 possible nonlinear  approaches that  may more  reasonably
 represent the autotrophic bacteria  activity.

                Haste  Load Allocation  Analysis

      The implications of a  seven-day  ten-year low flow
 for  allocations have  been questioned  as the  practice
 is based on tradition rather  than substantive justifi-
 cation.  The  effects  of the choice  of alternative crit-
 ical  conditions and varying modes of  operation deserve
 further investigation.

      For each of  the  five segments  a  3% annual growth
 rate was assumed  and  allocations calculated  for  a
 twenty-year flow  projection.   Seven-day two-,  five-,
 and  ten-year  low  flows  were calculated from  the  his-
 torical record and  allocations determined with the
 maximum monthly temperature.   The occurrence of  oxygen
 violations and corresponding  total  costs  for secon-
 dary treatment (carbonaceous  BOD   30 and ammonia
 15)  and low flow  dependent  allocations under present
 waste flow conditions are given  in  figure 5.   The
 occurrence is expressed as  a  percent  and  calculated
 by dividing the number  of days having a mean D.O.
 below 4.0 mg/1. by  the  number of historical  daily
 average flows. The segments  show varying degrees of
 sensitivity to the  choice of  critical  flow conditions.
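
     The low-flow statistics themselves can be computed as sketched below.
The synthetic daily record stands in for the USGS flows retrieved from
STORET, and the Weibull plotting position with a nearest-rank pick is only
one common convention for assigning recurrence intervals.

# Sketch of the 7-day low-flow calculation: annual minima of the 7-day moving
# average are ranked and a Weibull plotting position picks off the 2-, 5-, and
# 10-year values (nearest rank, no interpolation).  The record is synthetic.
import numpy as np

rng = np.random.default_rng(3)
years = 20
daily = rng.lognormal(mean=5.0, sigma=0.7, size=years * 365)   # synthetic daily flows, cfs

annual_min7 = []
for y in range(years):
    q = daily[y * 365:(y + 1) * 365]
    roll7 = np.convolve(q, np.ones(7) / 7.0, mode="valid")     # 7-day moving averages
    annual_min7.append(roll7.min())

ranked = np.sort(annual_min7)                   # driest year first
for T in (2, 5, 10):
    rank = (years + 1) / T                      # Weibull: recurrence T = (n + 1) / rank
    idx = max(int(round(rank)) - 1, 0)
    print(f"7-day {T}-year low flow ~ {ranked[idx]:.1f} cfs")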

      The increased likelihood of an instream violation
 as point source flows increase to the  twenty-year
 projected level while concentrations  are  held  con-
 stant is given in figure 6.   Simulations  were  also
 made at the twenty-year projected flow with  the  car-
 bonaceous reaction rate decreased by 25% as an indi-
 cation  of a possible  safety factor  resulting  from a
 biologically  more stable waste from advanced  treatment
 processes.

     The result of adherence to the  load allocation
for specified  months of  the  year  is  shown  in  figure
7.  A four-month effluent standard  (July,  August, Sep-
tember, & October) and six-month  standard  (June,  July,
August, September, October,  &  November) are compared to
yearly operation at the  allocated level.  The Georgia
         Fig. 4.  Cost-effective curves for three
                  river segments (Upper Mississippi,
                  Cache La Poudre, and Reedy:  cost
                  versus percent violation occurrence
                  for CBOD and NBOD removal).
     Fig. 5.  Cost and corresponding frequency of a
      D.O. less than 4 mg/1 for secondary treatment
      and allocations based on 7-day 2- and 10-year
      low flows.

-------
                                                          TABLE I

                             Values of Constants and Boundary Conditions used for Simulation

 SEGMENT              Municipal    Industrial    Velocity      Depth         K2               K
                      Flow (mgd)   Flow (mgd)    (ft/sec)      (ft)

 Upper Mississippi       250                     .000133Q      10.7          12.9u^.5/D^1.5   .35
 Reedy                    15                     .038Q^.716    .292Q^.41     7.6u/D^1.33      .6
 Schuylkill               25                     .06Q^.4       .8Q^.23       7.6u/D
 Cache La Poudre                                 .0855Q^.17    .44Q^.335     2.833u/D^1.33    .55
 Flint                    10            1        .09Q^.31                    .45Ah/tf

                                        Q  = stream-flow, in cfs
                                        Ah = change in water surface elevation, in feet
                                        tf = time of travel, days
                                        D  = mean depth of stream, in feet
                                        u  = mean velocity of stream, in ft/sec
and South Carolina segments show some  difference  bet-
ween four- and six-month standards,  while the Minne-
sota and Pennsylvania streams exhibit  the practicality
of a four-month requirement.   The capital  and annual
operation and maintenance costs  for  secondary and the
increment for advanced treatment are given.   Relative
to secondary treatment, a greater proportion  of the
total cost associated with the load  allocation is for
operation and maintenance.  This is  due  in part to
the higher cost of operating break point chlorination
which was averaged with biological nitrification  to
form the means of ammonia removal.
Fig. 6.  Frequency of a D.O. less than 4 mg/1 at
   present wastewater flow and 20 year projection
             at a 3% increase per year.
                                                             Fig.  7.   Amortized capital costs and annual
                                                              operation and maintenance costs for a 7-day
                                                               10-year WLA and corresponding instream
                                                               violations for time dependent operation.
    Fig. 8.  Costs and instream D.O. violations for
      7-day 2-year allocation and 6 months operation
              at 7-day 10-year allocation.

-------
     Figure 8 is a comparison of seven-day two-year
based effluent limits with seven-day ten-year based
limits applied for six months, with secondary treat-
ment provided the remaining months.  The costs for
the seasonal treatment option were calculated by as-
suming the operation and maintenance costs for the
advanced treatment processes were less by one half
for six months of operation.  The figure shows the
advantage of a seasonal  effluent limit for all segments.
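
The cost adjustment behind the seasonal option is plain arithmetic; the
sketch below (illustrative Python with invented dollar figures) only
reproduces the stated assumption that the advanced-treatment O&M is
halved under six months of operation.

    # Hypothetical illustration of the seasonal-treatment cost assumption:
    # the O&M increment for advanced treatment is halved when the AWT
    # processes operate only six months of the year.

    def total_annual_cost(amortized_capital, om_secondary, om_awt_increment,
                          seasonal=False):
        awt_om = 0.5 * om_awt_increment if seasonal else om_awt_increment
        return amortized_capital + om_secondary + awt_om

    # Example figures in $1000/yr (invented, for illustration only):
    print(total_annual_cost(900, 300, 400, seasonal=False))  # year-round AWT
    print(total_annual_cost(900, 300, 400, seasonal=True))   # six-month AWT
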

                    Conclusions

     The assumptions made in this analysis imply that
the absolute values for the cost and violation occur-
rence are of secondary importance to their relative
values.  Assuming the existence of a verified deter-
ministic stream model the procedure given may be used
to produce cost-effective curves which guide the
choice of advanced wastewater treatment schemes.
With the minimum legislated levels of treatment it is
probable that stream nitrification will become increas-
ingly important and should be given close attention
during model calibration and verification.  The waste
load allocation procedure is an effective means for
controlling the risk of a stream standards violation
and absorbing the effects of increased waste flow.  In
some segments the significant difference in the risk
associated with the two-year and ten-year recurrence
interval for the design condition indicates the advan-
tage of using the more stringent requirement. Cost-
effective considerations imply the practicality of im-
plementing variable effluent limits such as on a sea-
sonal basis.

     This work was stimulated by practical concerns.
The assumptions made were necessary to produce results
which could provide some insight into policy decisions
which must be made soon  due to legislated deadlines.
Additional  work is needed to assess the implications
of the various application-oriented procedures and
decisions which must be  addressed by those involved
with all phases of related water quality
planning.
               References

1.  DiToro, D. M., O'Connor, D. J., Thomann, R. V.,
    "Discussion to Risk Evaluation in Sewage Treat-
    ment Plant Design," Journal of the Sanitary Engi-
    neering Division, ASCE, No. SA6, December 1967,
    pp. 268-271.

2.  DiToro, D. M., O'Connor, D. J., "The Distribution
    of Dissolved Oxygen in a Stream with Time Varying
    Velocity," Water Resources Research, 4(3), June
    1968, pp. 639-646.

3.  "Evaluation of Factors Affecting Discharge Quality
    Variation," Texas A&M, Environmental Engineering
    Division.

4.  Fiering, M., Jackson, B., Synthetic Streamflows,
    American Geophysical Union, Water Resources
    Monograph Series 1, Washington, D.C., 1971.

5.  Li, Wen-Hsiung, "Unsteady Dissolved-Oxygen Sag
    in a Stream," Journal of the Sanitary Engineering
    Division, ASCE, Vol. 88, No. SA3, Proc. Paper
    3129, May 1962, pp. 75-85.

6.  Matalas, N. C., "Mathematical Assessment of
    Synthetic Hydrology," Water Resources Research,
    3(4), 1967, pp. 937-945.

7.  O'Connor, D. J., "The Temporal and Spatial Distri-
    bution of Dissolved Oxygen in Streams," Water
    Resources Research, 3(1), 1967, p. 65.

8.  Quality Criteria for Water, U. S. Environmental
    Protection Agency, preliminary draft.

9.  Tuffey, T. J., Hunter, J. V., Whipple, W., Yu,
    S. L., Instream Aeration and Parameters of Stream
    and Estuarine Nitrification, Rutgers University,
    Water Resources Research Institute, November
    1974, p. 59.

10. Van Note, R. H., Hebert, P. V., Putel, R. M.,
    Chupek, C., Feldman, L., A Guide to the Selection
    of Cost-Effective Wastewater Treatment Systems,
    EPA-430/9-75-002, July 1975.

-------
                             WASTE ALLOCATIONS IN THE BUFFALO (NEW YORK) RIVER BASIN

                                                Donald H. Sargent
                                                   Versar Inc.
                                              Springfield, Virginia
                        Abstract

      A water quality simulation model, VERWAQ, was
developed for the complex hydraulic and waste load
characteristics of the Buffalo River.  These character-
istics include very low water velocities, oscillating
flow, upstream flow, inter-basin transfer of water,
many critical conservative and non-conservative water
quality parameters, thermal pollution, and important
non-point as well as point sources of wastes.  The
developed and verified model was used to project water
quality and to allocate waste loads.

              Description of the Study Area

      The Buffalo River was the subject of a compre-
hensive evaluation of waste loadings and water quality,
performed by Versar Inc. in 1973 (under EPA Contract
68-01-1569) as part of the U.S. Environmental Pro-
tection Agency's commitments to abate and control
water pollution under the 1972 Great Lakes Water Qual-
ity Agreement between the U.S. and Canada.1  This river
in western New York was identified as one of several
concentrated areas of municipal and industrial activity
which have had poor water quality and which contributed
to the waste loads of the Great Lakes.

      The Buffalo River discharges into the easternmost
end of Lake Erie, just at the head  (southern) end of
the Niagara River.  It extends only 13 kilometers
 (8 miles) upstream from its mouth, and is located in
the City of Buffalo and in surrounding Erie County.
The watershed of the Buffalo River and its three tribu-
taries  (Cazenovia Creek, Buffalo Creek, and Cayuga
Creek) is roughly triangular in shape, extending to the
south and east of Buffalo, and has a drainage area of
446 square miles.  Except for a few miles just above
their confluence with the Buffalo River, the tribu-
taries are fast-flowing streams with primarily agri-
cultural drainage areas and with several small
communities.  The lower reach of Cayuga Creek passes
through the large urban residential communities of
Lancaster and Depew, and bears little resemblance to
its upper reaches or to the other two tributaries.

      Buffalo River itself is characterized by heavy
industrial development in the midst of a large munici-
pality.  Its waste load and water quality problems
dominate any such concerns for the entire watershed.
There are 43 individual industrial discharges into the
Buffalo River.  Very heavy waste loads into this reach
are imposed by frequent overflows, from numerous out-
falls, from the combined storm/sanitary sewer system.
The problems are aggravated by the hydraulic character-
istics of this reach and by large heat loads.  As a
result, the water quality deficiencies in the Buffalo
River were  (until recently) typified by a summertime
dissolved oxygen concentration of less than one mg/1
and by the almost complete absence of aquatic life.

         Characterization of Present Conditions

Hydraulics of the Buffalo River

      The industrialized reach of the Buffalo River is
maintained as a shipping channel to a depth of 6.7
meters  (22 feet), and has a very low slope, less than
0.2 meters per kilometer.  Most of the river's volu-
metric flow is due to industrial discharges whose in-
take source is not the River but in the Buffalo Outer
Harbor.  These industrial  flows  amount to more than
twice the natural discharge at average summertime con-
ditions and to twenty times the  natural discharge at
critical flow conditions, resulting in a relatively
stable total flow rate in  summertime.   Because of the
very large man-made river  cross-section, however,  the
calculated average velocity is very low, less than 0.02
meters per second, and the calculated  residence time in
this short reach is greater than five  days.

      Oscillating flow in  the upstream as well as  down-
stream direction  (driven by oscillations in  the level
of Lake Erie) of significantly higher  velocities than
the calculated average, was observed and measured  in
the industrialized reach of the  Buffalo River.   In-
dependent sets of time-varying water-level data for
Lake Erie at the mouth of  the Buffalo  River  and for the
Buffalo River itself also  exhibited significant oscil-
lations.  A dynamic analysis, which converted observed
water level oscillations to flow rate  oscillations,
resulted in a calculated R.M.S.  velocity of  0.096
m/sec, which is in general agreement with the R.M.S.
velocity (from direct measurements  of  velocity)  of
0.082 m/sec, and which is  five times the calculated
time-average downstream velocity of 0.018 m/sec.   An
extension of the dynamic analysis resulted in a calcu-
lated longitudinal movement of water of ± 200 meters
superimposed upon the time-average  movement.
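
The R.M.S. and time-average velocities quoted above come from
straightforward statistics of a velocity record; the short sketch below
(Python, for illustration) shows the computation on an invented
oscillating series, not the Buffalo River data.

    import math

    def rms(velocities):
        """Root-mean-square of a velocity time series, m/sec."""
        return math.sqrt(sum(v * v for v in velocities) / len(velocities))

    def time_average(velocities):
        return sum(velocities) / len(velocities)

    # Hypothetical oscillating record (positive = downstream), m/sec.
    record = [0.12, -0.08, 0.15, -0.11, 0.10, -0.05, 0.09, -0.07]
    print("R.M.S. velocity       :", round(rms(record), 3))
    print("Time-average velocity :", round(time_average(record), 3))
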

Water Quality

      Except for the lower reach of Cayuga Creek and
for the short Buffalo River itself, most of  the Buffalo
River watershed  (including all of Buffalo Creek and
the upper reaches of Cayuga Creek)  is  typified by  good
water quality.  This is consistent  with an agricul-
tural, wooded, and vacant  land use  pattern,  dotted with
small residential communities and scattered  park and
recreational areas.

      Table 1 summarizes the water  quality data for
the industrial  (dredged) reach of the  Buffalo River.
Specific contraventions of water quality standards in
this reach are an average  summertime dissolved oxygen
concentration of 0.9 mg/1  (compared to the minimum
allowable of 3.0 mg/1) and an average  iron concentra-
tion of 3.1 mg/1  (compared to the maximum allowable
of 0.8 mg/1).  Although many of  the other parameters,
including temperature, are at high  levels compared to
the natural waters, no other specific  water  quality
contraventions were found.

      Chemical analysis of bottom deposits from the
industrialized reach of the Buffalo River indicates
high levels of oxygen demand, oil, grease, and iron.
Biological sampling of these bottom deposits indicates
that this reach of river is essentially devoid of
bottom organisms, a finding consistent with the meas-
ured dissolved oxygen level of less than 1 mg/1.

Waste Loads

      The Buffalo River receives the waste loads of
its upstream tributaries,  a very heavy concentration of
industrial discharges, and frequent overflows from com-
bined sewers.

      The waste load to the Buffalo River from the
three upstream tributaries is based upon the measured
water quality and flow data for  these  tributaries

-------
under two conditions of flow:  the average summertime
flow, equivalent to the 70 per cent duration point;
and the minimum average seven-day critical discharge
with a recurrence interval of ten years  (MA7CD/10),
equivalent to the 99 per cent duration point, and
specified as critical flow by the New York State De-
partment of Environmental Conservation.  The heat  flux
of the upstream discharge is defined as  zero, with the
choice of a baseline temperature equivalent to the
temperature of this discharge  (19.0°C in summer).

      The waste load from industrial point discharges
is based upon NPDES permit applications  on file at EPA
Region II as of July 1973.  The dissolved oxygen con-
tent of the industrial discharges, not included in the
NPDES permit applications, was based upon data inde-
pendently supplied by the major dischargers.  The  heat
flux was calculated from the temperature difference
between each industrial effluent and the baseline
temperature.

      The combined sewer overflows into  the Buffalo
River were, for the purposes of this study, judged to
be quite evenly distributed in time and  in distance.
Overflows from the Buffalo combined sewer system occur
on the average of once every five days,  and are quite
evenly distributed over the year.  There are 70 over-
flow outfalls from more than 250 overflow chambers.
The fact that much of this waste is deposited on the
bottom of the industrialized reach of the Buffalo
River and affects water quality as a benthal load  is
further justification for approximating  the combined
sewer waste load as a distributed  (non-point) load.
This combined sewer overflow waste load  was quantified
from two studies of overflow quantity in Buffalo,  from
the difference between runoff and influent at the
sewage treatment plant, and from two studies which
characterized the constituents of combined sewer over-
flows in places other than Buffalo.

      A comparison of the various waste  loads at aver-
age summer flow, using BOD-5 as the parameter of com-
parison, indicates that the combined sewer overflow
accounts for 31 per cent of the wastes to the Buffalo
River.

                    Simulation Model

      In general, the widely-used steady-state uniform
flow stream models, which are essentially computerized
versions of the Streeter-Phelps analysis for the BOD-DO
relationship, are limited to the very simplest appli-
cations of point sources of wastes to a  constant-
temperature, non-dispersive, free-flowing stream.
VERWAQ, a computerized model, was developed by extend-
ing the capabilities of existing models to accommodate
the complex nature of the industrialized reach of  the
Buffalo River.

Features of VERWAQ

      Hydraulics.  The industrialized reach of the
Buffalo River exhibited longitudinally homogeneous
water quality measurements of virtually  every parameter.
The independent measurement and analysis of oscillating
flow in the upstream as well as the downstream direc-
tion  (driven by oscillations in the level of Lake  Erie)
strengthened the hypothesis that this reach may be a
well-mixed body of water as opposed to a free-flowing
stream.  The simulation model therefore  was required
to test this hypothesis; i.e., VERWAQ can be run as
either a plug-flow model  (no longitudinal dispersion)
or a completely-mixed model  (complete dispersion of all
constituents including heat).  The same  VERWAQ com-
puter program is used for both; the desired approach
is selected with an input key word.
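
A minimal sketch of the two idealizations is given below (not the VERWAQ
code itself): a plug-flow reach with first-order decay over the travel
time, against a completely-mixed reach whose steady-state concentration
balances inputs against outflow plus decay.  All numbers are hypothetical.

    import math

    def plug_flow(c_in, k_per_day, travel_time_days):
        """Concentration leaving a plug-flow reach (no dispersion)."""
        return c_in * math.exp(-k_per_day * travel_time_days)

    def completely_mixed(load_kg_per_day, outflow_m3_per_day, k_per_day,
                         volume_m3):
        """Steady-state concentration (kg/m3) in a completely-mixed reach."""
        return load_kg_per_day / (outflow_m3_per_day + k_per_day * volume_m3)

    # Hypothetical reach: 5-day residence time, first-order decay 0.3/day.
    print(plug_flow(c_in=10.0, k_per_day=0.3, travel_time_days=5.0))
    print(completely_mixed(load_kg_per_day=2000.0,
                           outflow_m3_per_day=4.0e5,
                           k_per_day=0.3, volume_m3=2.0e6))
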
      Water Quality Parameters.  A total of  26 water
quality parameters were specifically  identified by EPA
for careful attention in this study  (and in  other
Great Lakes studies).  The water quality data and the
waste load data for the Buffalo River revealed that 57
constituents were deserving of analysis in this heavily-
industrialized reach.  The simulation model  was re-
quired to track these many parameters, both  conserva-
tive and non-conservative.  In addition to the con-
ventional treatment of carbonaceous BOD as non-
conservative, the model was required  to similarly treat
nitrogenous BOD, ammonia, organic  nitrogen,  and
phenols.  Three distinct deoxygenation rate  constants
are used in the model.

      Reaeration.  The industrialized reach  of the
Buffalo River has extremely low linear velocities.
Moreover, the prevailing winds off Lake Erie are per-
sistent and of high velocity.  Consequently, the model
calculates the reaeration coefficient in two ways:  as
determined by stream velocity, and as determined by
wind velocity.  The program selects the larger of the
two coefficients for each river segment.
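
The selection logic can be sketched as below.  The paper does not identify
which reaeration formulas VERWAQ uses, so the velocity-based and wind-based
expressions shown are assumed stand-ins; only the rule of taking the larger
coefficient for each segment reflects the description above.

    # Sketch of the dual reaeration calculation (assumed formulas).

    def k2_from_stream(velocity_mps, depth_m):
        """Velocity/depth based reaeration coefficient, 1/day (assumed form)."""
        return 3.93 * velocity_mps ** 0.5 / depth_m ** 1.5

    def k2_from_wind(wind_speed_mps, depth_m):
        """Wind-driven reaeration coefficient, 1/day (assumed form)."""
        return (0.3 + 0.03 * wind_speed_mps ** 2) / depth_m

    def reaeration_coefficient(velocity_mps, depth_m, wind_speed_mps):
        """Take the larger of the two coefficients for the segment."""
        return max(k2_from_stream(velocity_mps, depth_m),
                   k2_from_wind(wind_speed_mps, depth_m))

    # Low stream velocity, deep dredged channel, persistent lake wind:
    print(reaeration_coefficient(velocity_mps=0.02, depth_m=6.7,
                                 wind_speed_mps=7.0))
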

      Thermal Analysis.  The very  large heat loads
from industrial sources into the Buffalo River, plus
the high residence times for water in this reach and
the high wind velocities, required that the model
simulate effects upon the river water temperature.  The
thermal analysis of VERWAQ includes the heat flux from
discharges, tributaries, and non-point sources, con-
vection and conduction between the stream and the
ambient air, and solar radiation to the stream.  Rate
constants are then appropriately adjusted for temper-
ature.

      Non-Point Waste Loads.  The  combined sewer over-
flows (and benthal loads) constitute  almost one-third
of the total waste loads.  The model was required to
treat non-point sources as distributed waste loads
simultaneously as it treats point  sources of other
wastes.  The Streeter-Phelps equations in differential
form were augmented by a distributed waste model
(chemical and thermal constituents) and then reinte-
grated.

      In the conventional Streeter-Phelps analysis, the
steady-state BOD balance around a  differential longi-
tudinal segment of the river (between point-source
additions) is composed of three terms:  the upstream
waste input, the downstream waste  output, and the
reaction  (oxidation) loss in the segment.  The analysis
for VERWAQ adds a fourth term, the non-point source
(distributed) waste input in the distance interval dx:
(QL/R)dx; where Q, L, and R are respectively the non-
point-source total flow rate, the  non-point-source
BOD concentration, and the longitudinal distance (reach)
over which the non-point discharge is evenly distri-
buted.

      As in the conventional Streeter-Phelps analysis,
the sum of the terms is set equal  to  zero (for steady-
state) and integrated over a longitudinal river
distance x.  In this analysis, however, the  extra non-
point-source term is included in the  sum and in the
integral.  The result is solved for the BOD  concentra-
tion which is then substituted into the equation for
deoxygenation rate.  Integration of this equation
yields the expression for oxygen deficit as  a function
of longitudinal distance.
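
A minimal numerical sketch of the augmented balance follows.  It marches
BOD and oxygen deficit downstream with the extra distributed-source term
QL/R per unit length; the cross-sectional area, rate constants, and loads
are hypothetical, and a simple Euler step stands in for the closed-form
integration used in VERWAQ.

    # Streeter-Phelps balance with a distributed (non-point) BOD input of
    # Q*L/R per unit length, integrated numerically (illustrative only).

    def profile(c0, d0, kd, ka, u, area, q_np, l_np, reach, dx=1.0):
        """March BOD concentration c and oxygen deficit d downstream.
        u in km/day, area in 1000 m2, q_np in 1000 m3/day, l_np in mg/1,
        reach and dx in km; kd, ka in 1/day."""
        c, d, x, rows = c0, d0, 0.0, []
        source = (q_np * l_np / reach) / (area * u)  # distributed input per km
        while x <= reach:
            rows.append((round(x, 1), round(c, 2), round(d, 2)))
            c += dx * (-kd * c / u + source)
            d += dx * (kd * c - ka * d) / u
            x += dx
        return rows

    for row in profile(c0=5.0, d0=1.0, kd=0.25, ka=0.4, u=1.5,
                       area=10.0, q_np=50.0, l_np=40.0, reach=8.0):
        print(row)
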

Testing of the Model in the Buffalo River

      Plug-Flow Model.  The plug-flow model  was applied
extensively to the dredged portion of the Buffalo
River, using various values and combinations of values
for the constants.  Satisfactory simulation  of the

-------
empirically-determined non-conservative water quality
parameters was not achieved, confirming the prior
conclusion of significant longitudinal mixing based
upon the river hydraulics and upon the empirical water
quality data.  Typically, the dissolved oxygen profile
calculated with the plug-flow model is a decrease in
DO from about 7 ppm to near zero in the two-mile reach
with the heaviest waste loads.  The experimentally-
determined dissolved oxygen content, however, was
uniformly low (0.0 to 1.8 mg/1)  throughout this reach.
Despite high values (consistent with high temperatures
but still reasonable)  for the deoxygenation coeffi-
cients, the plug-flow model could not approximate the
measured step change in dissolved oxygen content with
distance.

      Completely-Mixed Model.  The completely-mixed
modeling option of VERWAQ resulted in excellent
agreement (as shown in Table 1) with experimentally-
determined data, for conservative parameters and non-
conservative parameters (dissolved oxygen, BOD5,
NH3-N, and phenols), using for the most part constants
independently published by others.  For all except
fluoride and nickel, the calculated values came well
within the range of measurements.  It is possible
that slightly-soluble salts such as fluorides, whose
ions originate from different industrial discharges,
may exceed their solubilities and precipitate in the
river.  The model was then adequately verified by
comparing its water quality predictions with measured
winter time data in a completely different flow rate
regime from the upstream tributaries (two to seven
times the average summer time flow).

               Water Quality Projections

Projected Waste Loads

      The projected waste loads into the Buffalo River
were based upon implementation of Best Practicable
Control Technology Currently Available (BPCTCA).   It
was projected that three  sewage  treatment plants  cur-
rently discharging into Cayuga Creek would be phased
out during dry weather  as the  sewage is incorporated
into the Buffalo system.  The  projected industrial
waste loads were based  upon existing permits,
effluent limitation guideline  development documents,
interim effluent guidance documents, or Region II
permit summary tables;  as these  were available in
October 1973.  Several  independent  judgements were
made in the absence, at that time,  of  promulgated
effluent limitations guidelines  or  of  issued  permits.
It was also judged that several  low-volume industrial
discharges would be incorporated into  the municipal
sewer system.  It was projected  that the combined
sewer overflow waste load would  remain the same as the
present load.

Projected Water Quality and Waste Load Allocations

      The developed and verified model was utilized to
predict water quality from  the projected BPCTCA waste
loads.  These projected water  quality  data, for both
the average summer time and critical flow conditions,
are listed in Table 1.  The projected  water quality,
at critical flow conditions, marginally came within
the standards for temperature  and dissolved oxygen.
However, more stringent waste allocations were recom-
mended for iron.  Upon  implementation  of BPCTCA, which
would be effective in reducing most waste loads, the
oxygen-demanding waste  load of the  combined sewer over-
flows would then become the dominant constraint for
achieving good water quality in  the Buffalo River.

                      References

1.  Sargent, Donald H., Waste Allocations in the
    Buffalo (New York)  River Basin, Final Report,
    EPA-905/9-74-010  (February 1975).
                                                     Table 1

                               Water Quality, Dredged Portion of the Buffalo River
                                             Concentrations in mg/1

                       Water                   Measured Data                  Calculated    Projected Data
                       Quality        No. Data                                Present      Avg. Summer  Critical
                       Criteria(a)    Pts.     Max.     Min.     Average     Data(b)      Flow(b)      Flow(c)

 Dissolved Oxygen        3.0*           76      4.0      0.0      0.94        1.03         3.79         3.06
 BOD-5                   --             41     14.0      0.6      4.22        4.22         1.89         1.73
 NH3-N                   2.0*           29      1.26     0.14     0.69        0.69         0.22         0.21
 N03-N                   4              17      0.59     0.0      0.13        0.42         0.39         0.28
 Cyanide                 0.1*           28      0.05     0.0      0.01        0.034        0.03         0.038
 P-Total                 25             28      0.85     0.07     0.29        0.60         0.45         0.55
 Sulfate                 500            33     68       49       57          60.5         58.6         58.6
 Chloride                250            33     70       46       57          51.7         43.2         44.6
 Fluoride                1.5            17      0.69     0.44     0.53        1.14         1.09         1.40
 Oil & Grease            7              29      7.2      0.1      2.6         3.89         2.23         2.40
 Phenols                 0.2            29      0.266    0.008    0.027       0.02         0.010        0.013
 Arsenic                 1.0            12      0.03     0.00     0.02        0.011        0.010        0.014
 Barium                  5.0            11      0.20     0.0      0.0         0.001        0.0          0.0
 Cadmium                 0.3*           15      0.00     0.00     0.00        0.004        0.001        0.001
 Chromium                0.05           27      0.08     0.00     0.02        0.057        0.015        0.020
 Copper                  0.2*           24      0.06     0.00     0.02        0.034        0.030        0.037
 Iron                    0.8            10      5.65     0.68     3.11        3.066        2.054        2.729
 Lead                    0.1            21      0.23     0.00     0.06        0.071        0.037        0.048
 Mercury                 0.006          27      0.017    0.000    0.001       0.001        0.000        0.000
 Nickel                  0.7            12      0.00     0.00     0.00        0.027        0.026        0.036
 Selenium                2.5            21      0.004    0.001    0.003       0.000        0.000        0.000
 Zinc                    0.3*           10      0.178    0.024    0.084       0.098        0.087        0.116

 (a) Criteria labelled * are explicit in N.Y. State Standards;
     others are implied by "fish survival" criterion.
 (b) Average Summer Flow, Completely-Mixed Model.
 (c) Critical Flow, Completely-Mixed Model.

-------
                                    STREAM MODELING AND WASTE LOAD ALLOCATION
                                 James Y.  Hung,  Aolad Hossain and T.  P.  Chang
                                       Water Pollution Control  Division
                                         Indiana State Board of Health
                                             Indianapolis,  Indiana
                       ABSTRACT

The Indiana Stream Pollution Control  Board conducted
an intensive stream modeling program  for Indiana's
major rivers during the past three years.   These
stream models were used primarily for the purpose of
waste load allocation.   This paper describes the
stream self-purification system models for BOD,  DO and
ammonia.   In addition to the analysis of model  compo-
nents, problems of evaluating system  parameters are
examined.   The formulation of the waste load alloca-
tion methodology and the issues in allocation imple-
mentations are reviewed.   The paper is concluded by a
discussion of the limitations in using stream models.

                     INTRODUCTION

The Federal  Water Pollution Control Act Amendments of
1972 have  established improved river  quality as a ma-
jor goal of overall river basin planning.   The tasks
of setting water quality standards and determining
waste load allocations for dischargers are bestowed
upon the State Water Pollution Control Agencies in
conjunction with the U. S.  Environmental Protection
Agency (USEPA).  During the past three years, the In-
diana Stream Pollution Control Board  (ISPCB) has been
in the process of building stream quality models for
Indiana's  rivers.1  Because dissolved oxygen (DO) is
traditionally the main indicator of pollution,  a ma-
jor effort was made to model DO as well as the oxygen
consuming  parameters such as biochemical oxygen demand
(BOD) and  nitrogenous oxygen demand (NOD).

In the Indiana Water Quality Management Plan, Indiana
streams are divided into ninety-nine  segments with
forty-two  segments classified as water quality limited
segments.   The criteria of segment classification was
based mainly on the projected condition of dissolved
oxygen deficiency and the need of advanced waste
treatment.  Of the ninety-nine segments classified,
eighteen segments have been modeled, including the Wa-
bash River,2 White River,3 Grand Calumet River,4
Little Calumet River5 and the Mississinewa River.6
The model  was designed mainly for waste load alloca-
tion purposes and therefore emphasis was placed on a
critical  condition at low stream flow period.

Objectives of this paper are (1) to describe the ra-
tionale,  considerations and procedures of ISPCB's
stream modeling and waste load allocation processes;
and  (2) to summarize ISPCB's experience, in particu-
lar, the types of problems they encountered during
this entire endeavor.

                    STREAM MODELING

In the selection of model components,  it is necessary
to consider the local climate and stream conditions as
well as the purpose of modeling.  Indiana climate is of
the humid, continental, warm summer type.  It is char-
acterized by definite winter and summer seasons accom-
panied by  wide temperature ranges. Occasionally,
stream temperature in the summer months, May to Sept-
ember, can be above 30°C.  Annual precipitation aver-
ages approximately 38 inches and stream runoff is a-
bout 12 inches.  Indiana adopts the average seven-con-
secutive-day, once in ten years low flow in the defin-
ition of its water quality criteria.  Dry seasons are
usually between August and October.  Therefore, stream
nitrification can be significant during the summer
months with low stream flow and high water temperature.

As described earlier, the purpose of modeling is to
determine BOD and NOD allocations for municipal and
industrial  dischargers.  Only daily average DO stand-
ards were tested against the load allocation, and thus
photosynthetic and respiration factors were not con-
sidered which cause diurnal DO variations.   In view of
the fact that the sludge deposit in the stream bed is
expected to be reduced due to increasing pollution
control measures, benthal demand was also neglected
for most segments modeled.

In the ISPCB study, a modified version of the
Streeter-Phelps equation for DO deficits was utilized
which includes both carbonaceous and nitrogenous bio-
chemical  oxygen demands and atmospheric reaeration.
The revised Streeter-Phelps equations are as follows:
D(t) = [K1 L0 / (K2 - K1)] (e^(-K1 t) - e^(-K2 t))
     + [Kn N0 / (K2 - Kn)] (e^(-Kn t) - e^(-K2 t))
     + D0 e^(-K2 t)                                          (1)

L = L0 e^(-K1 t)                                             (2)

N = N0 e^(-Kn t)                                             (3)

where:

D(t) = DO deficit at time t

D0   = Initial DO deficit, mg/1

L0   = Initial carbonaceous BOD, mg/1

L    = Carbonaceous BOD at time t, mg/1

N0   = Initial NOD, mg/1

N    = NOD at time t, mg/1

K1   = Carbonaceous deoxygenation rate constant (base e), day^-1

Kn   = Nitrogenous deoxygenation rate constant (base e), day^-1

K2   = Reaeration rate constant (base e), day^-1
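
For reference, equations (1) through (3) can be evaluated directly.  The
sketch below does so in Python for an arbitrary set of initial conditions
and rate constants; the numerical values are illustrative and are not
taken from any ISPCB segment.

    import math

    def do_deficit(t, L0, N0, D0, K1, Kn, K2):
        """DO deficit D(t), mg/1, from equation (1)."""
        carb = K1 * L0 / (K2 - K1) * (math.exp(-K1 * t) - math.exp(-K2 * t))
        nitr = Kn * N0 / (K2 - Kn) * (math.exp(-Kn * t) - math.exp(-K2 * t))
        return carb + nitr + D0 * math.exp(-K2 * t)

    def cbod(t, L0, K1):
        """Carbonaceous BOD remaining at time t, equation (2)."""
        return L0 * math.exp(-K1 * t)

    def nod(t, N0, Kn):
        """Nitrogenous oxygen demand remaining at time t, equation (3)."""
        return N0 * math.exp(-Kn * t)

    # Illustrative values only (mg/1 and 1/day):
    for t in (0.5, 1.0, 2.0, 3.0):
        print(t, round(do_deficit(t, L0=10.0, N0=6.0, D0=1.0,
                                  K1=0.3, Kn=0.25, K2=0.6), 2))
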
The carbonaceous deoxygenation rate, K1, and the
nitrogenous deoxygenation rate, Kn, were determined by
the slope of the BOD and N03-N profiles respectively
when plotted on semilog paper.  The stream reaera-
tion coefficient, K2, was computed by one of the emp-
irical equations which are functions of stream temp-
erature, stream flow velocity and mean depth.  The
hydraulic data usually available for flow velocity and
depth are taken at the gaging stations.  These stations
are often located at the control  sections of the stream
where the flow velocity tends to be higher and mean
depth to be smaller than that of the normal stream
reaches.  Using the above mentioned hydraulic data
would tend to produce an overestimated K2 value.  An-
other source of the hydraulic data is the dye travel

-------
study which provides the time of travel information.
However, dye studies taken at critical low flow period
are very rare.  Previous investigators7,8 have found
that general hydraulic equations developed at various
ranges of flow can be quite different and that the
actual travel time at low flow period is longer than
that computed by hydraulic equations for high flows.

When stream DO profile data were available, the model
verification was made to compare the computed stream
DO profile with the measured profile.  In this way, an
appropriate equation of K2 was decided for a particu-
lar stream segment.  However, complete sets of stream
profile data for DO, BOD and N03-N are often difficult
to obtain.  In this case, the choice of a stream re-
aeration equation would be difficult because various
proposed equations could produce quite different re-
sults.  Frequently, individual judgement must be used
in the selection of equations.

Further complications resulted from the fact that our
purpose of modeling was waste load allocation.  Bio-
chemical characteristics are expected to be different
in the effluent and in the stream when additional
treatment and additional quantity of wastewater are
realized.  The deoxygenation rates, both K-j and Kn,
computed from existing measurements can serve only as
a reference for predicting future stream deoxygenation
rates.  The problem of deoxygenation rate prediction
is unresolved in the current state of art.   Further-
more, there is evidence in our Wabash River study that
stream deoxygenation rates are functions of dilution
ratio and therefore dependent upon the stream dis-
charge rate, in addition to stream temperature.

The one dimensional modeling of DO, BOD and NOD, such
as that represented by equations (1), (2),  and (3), may
yield poor results in a short reach immediately below
the effluent outfall because of the incomplete mixing
problem.  This is particularly true when the dilution
ratio is large.  Stream survey data used for ISPCB
model verifications were mainly composite samples
taken in a twenty-four hour intensive survey.  For
each stream cross section, samples were taken at cen-
ter, left side and right side of the stream width.
These samples were analyzed separately and their aver-
age values were used for model verification.

          WASTE LOAD ALLOCATION FORMULATIONS

PL 92-500 requires all  dischargers to provide, at the
minimum, a secondary wastewater treatment (such  as an
activated sludge process) for municipal  wastewater
plants and the best practicable treatment (BPT)  for
industrial wastewater plants.  However,  if the pre-
determined stream water quality standard in the
affected segment cannot be achieved as a result  of
this minimum treatment (defined as a water quality
limited segment), then various levels of advanced
wastewater treatment (AWT)  would be required for some
or all dischargers.  Methods for determination of each
polluter's treatment level  (or waste load allocation)
in this affected segment then becomes a question for
consideration.   The problem would be simple if only
one discharger was responsible for the affected  stream
quality.   The answer becomes somewhat cloudy when more
than one discharger is involved.   Proposed  solutions
to this problem follow two basic approaches:  the
cost effectiveness approach and the equity approach.

The cost effectiveness approach is a typical mathe-
matical programming problem of the form:

   minimize:   total  treatment costs
   subject to:
quality standard,  physical  and tech-
nical  constraints
Due to the usually nonlinear nature of the cost func-
tion associated with the treatment levels, a nonlinear
programming solution is generally required.  However,
the major difficulty in implementing this cost effec-
tiveness approach is the inequality which results from
discrimination in treatment requirements.  Difficul-
ties may also be encountered when new waste sources
enter into this segment and complete readjustments may
then be required.  In addition, this approach assumes
that the optimal solution in the stream segment being
considered is independent of the influences from both
upstream and downstream segments.  This, however, is
usually untrue.

The second proposed solution is an equity approach in
the form:

   T1 = T2 = ... = Tn

subject to:  quality goal satisfied, physical and
             technical constraints

where Ti = the degree of treatment for the i-th plant.
Again, the so-called "degree of treatment" is diffi-
cult to define, especially when comparing a privately-
owned industrial plant with a publicly-owned munic-
ipal plant.  The present practice in Indiana is to a-
dopt a combination of the two above mentioned ap-
proaches.  An example is the waste load allocation for
the Grand Calumet River Basin.

Academicians have proposed a third but not yet prac-
ticed approach,10 which is to treat the stream assimi-
lative capacity as a commodity and to offer it in a
competitively open market.  The allocations would be
settled purely by the balance of supply and demand
subject to certain constraints.  However, this approach
neglects historical factors and would require institu-
tional changes.
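
As an illustration of the equity approach, the sketch below finds the
smallest common degree of treatment T that satisfies a dissolved oxygen
goal.  The instream response function is a hypothetical stand-in for a
calibrated stream model; the bisection logic is the only part intended to
reflect the formulation above.

    # Equity approach sketch: every discharger provides the same degree of
    # treatment T (fraction of load removed), chosen as the smallest value
    # that satisfies the quality goal.

    def minimum_common_treatment(raw_loads, predicted_min_do, standard=4.0,
                                 tol=0.001):
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            t = 0.5 * (lo + hi)
            allocated = [w * (1.0 - t) for w in raw_loads]
            if predicted_min_do(allocated) >= standard:
                hi = t        # goal met; try a lower degree of treatment
            else:
                lo = t        # goal violated; require more treatment
        return hi

    # Hypothetical stand-in: minimum D.O. falls as total allocated load rises.
    response = lambda loads: 8.0 - 0.004 * sum(loads)
    print(minimum_common_treatment([500.0, 800.0, 300.0], response))
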

Allocation computation in all three approaches re-
quires the predetermination of the relationships be-
tween the effluent quality (such as biochemical oxygen
demand and ammonia concentration) and the resulting
instream water quality.  These relationships can be
established through either regression analysis or
simulation analysis (such as the Streeter-Phelps
equation for dissolved oxygen).  However, the simula-
tion method is generally preferred
because it provides better capability  in  generating
alternative solutions  such as  by-pass  piping and
timing adjustments.

              ALLOCATION RELATED PROBLEMS

Compared to wastewater treatment technology and stream
modeling techniques, studies  related to wasteload
allocation methodology are still in  their infancy.   No
established pattern or criteria exist  as  to the selec-
tion of boundary conditions and  loading frequencies in
a load allocation computation.  For  example, the head-
water source for a stream segment represents a multi-
parametric loading which consists of flow rate, temp-
erature and pollutant  concentrations  such as DO, BOD
and ammonia.   However, these  parameters are not con-
stant but rather stochastic processes.  Each parameter
follows a given statistical distribution.   The problem
is one of selecting statistically a  reasonable combin-
ation of loading concentrations  in  performing waste-
load allocation analysis.   The situation  becomes more
complex when one takes into account  simultaneously the
stochastic loadings of tributaries  as  well  as  treat-
ment plant effluents.

The present ISPCB practice in the selection of bound-
ary conditions and loading frequencies  is on a case-
by-case basis and the  factors  considered  include di-
lution ratio, loading  characteristics  and stream qual-

-------
ity criteria for that segment.

The increasing uses of biological treatment processes
in treating municipal and industrial wastewater have
made it necessary to include stream nitrification in
the stream DO analysis, especially where the summer
temperature range covers the optimal temperatures of
nitrification, that is, between 25°C to 30°C.  Under
this condition, the conventional concept of a single
valued stream assimilative capacity of BOD becomes in-
adequate because NOD is also involved, and because the
carbonaceous deoxygenation rate (K1) and the nitro-
genous deoxygenation rate (Kn) are not necessarily e-
qual.  The recent USEPA recommendation to use total
oxygen demand, which is defined as the summation of
ultimate BOD,  NOD and DO,  as a single-valued loading
allocation would have the same problem.  Instead, the
analysis would have to provide an optimal combination
of allocated BOD, DO and ammonia loadings for an ef-
fluent source.  The definition of stream assimilative
capacity becomes more elusive when multiple point
sources scattered over different locations are exist-
ent in the same stream segment.

Traditionally, one assumes that the critically low DO
concentration in the stream occurs at extremely low
flow.  This was not always found to be true in the case
of multiple sources distributed at different locations,
particularly when presented with both BOD and NOD sag
curves.  This phenomenon occurs because the stream
DO profile is formed by the superposition of all indi-
vidual sag curves.  The alternation of the shape and
location of each individual DO sag due to the change
of stream velocity and temperature can create such an
overlapping that the critical DO can take place at a
flow rate higher than a seven day, once in ten year
stream flow.
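
The superposition effect can be illustrated with a few lines of code.  The
composite deficit at each point is the sum of individual sag curves, each
offset by the travel time from its own outfall; the loads, rate constants,
and velocity values below are hypothetical, and whether the critical D.O.
actually falls at the lowest flow depends on the particular spacing and
rates.

    import math

    def single_sag(t_days, l0, k1=0.3, k2=0.5):
        """Deficit contribution of one source; zero upstream of its outfall."""
        if t_days < 0.0:
            return 0.0
        return k1 * l0 / (k2 - k1) * (math.exp(-k1 * t_days)
                                      - math.exp(-k2 * t_days))

    def composite_min_do(velocity_mi_per_day, sources, do_sat=8.0, reach_mi=30):
        """Minimum D.O. over the reach when all sag curves are superimposed."""
        return min(do_sat - sum(single_sag((x - mile) / velocity_mi_per_day, l0)
                                for mile, l0 in sources)
                   for x in range(reach_mi + 1))

    outfalls = [(0, 9.0), (6, 7.0), (14, 8.0)]  # (river mile, initial BOD, mg/1)
    for u in (2.0, 4.0, 6.0, 8.0):              # velocities for increasing flows
        print(u, round(composite_min_do(u, outfalls), 2))
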

                 IMPLEMENTATION PHASE

Stream modeling and waste load allocation of BOD, NOD
and DO are parts of the National Pollutant Discharge
Elimination System as well as the State Continuing
Planning Process.  Once the allocation is determined,
it enters into the permit as an effluent limitation.
The duration of the permit is usually five years.  At
the end of the permit duration, ISPCB reevaluates the
status of the stream water quality and the program of
wastewater treatment technology.  It then reevaluates
waste load allocations for that stream segment.

A majority of the waste load allocations are presently
designed on a year round basis.  In some cases, sepa-
rate allocation values are given for summer months and
for winter months.  Eventually, the waste allocation
may require a detailed operational  schedule for efflu-
ent limits on a monthly basis or directly tied to
daily climate and stream flow conditions.   This pro-
cess would require a higher degree of scientific so-
phistication and management which could become an
overwhelming administrative task under the present
understaffed condition in  the ISPCB.

                DISCUSSION AND SUMMARY

Computer modeling of stream self-purification systems
is a useful  tool  for water quality management,  espe-
cially in a dynamic program like waste load allocation.
However,  when  applying this tool  one has  to be mindful
of its limitations and a certain degree of flexibility
and precaution are required.   First, not  all  aspects  of
stream self-purification systems are understood at the
present stage  of  development.   The first  order differ-
ential  equation currently  used for describing the
self-purification systems  has its shortcomings,  nota-
bly in dealing with stream nitrification,  which is a
two-stage process.   Furthermore,  the K rates  in the
Streeter-Phelps equation are not constants and their
predictabilities are uncertain.  As a result, model
verification can be difficult.  Secondly, complete
sets of climate and stream quality  information are
often not available in the calibration of model char-
acteristics and individual judgement has to be substi-
tuted.  Thirdly, due to the incompleteness of the
allocation criteria relative to boundary and loading
frequencies, case-by-case negotiations and compromises
based on local circumstances are unavoidable in load
allocation determinations.

Although the problems discussed in this paper are for
BOD, NOD and DO, similar problems also exist for ther-
mal and conservative pollutants.  The situation could
become even more complex if effects of nonpoint source
pollutants are taken into consideration.

                    ACKNOWLEDGMENTS

The authors wish to express their appreciation to Mr.
Samuel L. Moore for his support and encouragement in
the preparation of this paper.  Messrs.  Burt Jacobs,
Richard Moss, and Patrick O'Connell have contributed
significantly to the Indiana Stream Pollution Control
Board waste load allocation programs.

                AFFILIATIONS OF AUTHORS

Drs. James Y. Hung, Aolad Hossain and T. P.  Chang are
presently serving as Acting Chiefs of the Engineering
System Section, the Computer Modeling Section and the
Program Support Branch, respectively, Water Pollution
Control Division, Indiana State Board of Health,  Indi-
anapolis, Indiana.

                      REFERENCES

1.  Chang, T.P., "Computer Applications in Water Pol-
    lution Control", Proc.  1975 International  Computer
    Symposium.  Vol.  II,  pp.  24-32, Taipei,  Taiwan,
    China, August 1975

2.  Water Pollution Control  Division, "Wabash River
    Models and Load Allocations",  Tech.  Report,  Indi-
    ana Stream Pollution Control  Board,  Indianapolis,
    Indiana, 1975.

3.  Water Pollution  Control  Division,  "West  Fork,
    White River Models and Load Allocations",  Tech.
    Report,  Indiana Stream Pollution Control  Board,
    Indianapolis,  Indiana,  1974.

4.  Combinatorizs, Inc., "Load Allocation Study of the
    Grand Calumet River and Indiana Harbor Ship Canal",
    Combinatorizs, Inc.,  Lafayette, Indiana,  1974.

5.  Henry Steeg and Associates, "Load Allocation
    Study - Little Calumet River Basin  in Indiana",
    Henry B. Steeg and Associates,  Indianapolis,  Indi-
    ana,  1974.

6.  Water Pollution Control  Division, "Middle Missis-
    sinewa River Models and Load Allocations",  Tech.
    Report,  Indiana Stream Pollution Control  Board,
    Indianapolis,  Indiana,  1973.

7.  Stall,  J.B., and Yu-Si  Fok, "Hydraulic Geometry of
    Illinois Streams",  University of Illinois Water
    Resources Center,  Urbana Research Report  no.  15,
    1968.

8.  Stall,  J.B., and D.W.  Hiestand, "Provisional  Time-
    of-Travel  for Illinois Streams", Illinois State
    Water Survey Report of Investigation 63,  1969.

9.  Cleary,  E.J.,  "Effluent Standards Strategy:   Re-

-------
    juvenation of an Old Game Plan", Jour. Water Pol-
    lution Control Federation, Vol. 46, pp. 9-17,
10.  Mar,  B.W. ,  "A System of  Waste  Discharge  Rights  for
    the Management of  Water  Quality", Water  Resour.
    Res.,  Vol.  7,  No.  5,  pp.  1079-1086,  1971.

-------
                                         PATUXENT RIVER BASIN MODEL
                                                 RATES STUDY
                Thomas H.  Pheiffer
                   Leo J.  Clark
       U.  S.  Environmental  Protection Agency
                    Region III
              Annapolis Field Office
                Annapolis, Maryland
               Norman L. Lovelace
                 Water  Division
        Water Planning  & Standards Branch
      U. S. Environmental  Protection Agency
                     Region  IX
            San  Francisco,  California
ABSTRACT

During the summer seasons of 1973 and 1975, inten-
sive water quality surveys were carried out in
the Patuxent River Basin for the purposes of
mathematical model calibration and validation.  In
the summer of 1973, the Patuxent was receiving
secondary effluent from eight major municipal
treatment plants.  No significant industrial waste
discharges are present in the Patuxent system.  A
steady state water quality model was calibrated
and validated using the data collected from the
1973 field surveys.  During 1975, a major treat-
ment plant was upgraded to include high BOD re-
moval and nitrification; new field surveys were
conducted and the model was recalibrated and
validated to reflect changes in the instream re-
action rates* as a result of the changed effluent
characteristics.  This paper discusses the field
studies, data results, model application procedures
and perhaps most importantly, how the procedures
that were used could be improved.

BACKGROUND

The state-of-the-art of modelling is such that
mathematical expressions can be written and trans-
lated into computer programs to represent complex
environmental interactions.  However, too little
effort is being directed towards defining the
numerous variables and/or biological coefficients
required to make these mathematical expressions
either descriptive or predictive.  Much effort is
being devoted to studying the mathematical behavior
of these equations and expressions, but not enough
is being given to real world applications.

A basic, but essential problem confronting many
modellers is what instream reaction rates to assume
for carbonaceous decay and nitrification when
treatment is upgraded above the conditions that
existed when field data was collected.  To date,
estimates of reaction rates for highly treated
municipal effluents are based solely on the best
judgement of modelling experts, not on well docu-
mented field data.  With this in mind, the Annapolis
Field Office (AFO), Region III, EPA, has attempted
to define changes in instream reaction rates re-
sulting from the upgrading of a major wastewater
treatment plant, in particular, the Parkway Plant
of the Washington Suburban Sanitary Commission,
located near Laurel, Maryland.

STUDY AREA

The Patuxent River, located entirely within the
State of Maryland, has a drainage area of approxi-
mately 930 square miles.  Its two major tributaries
are the Little Patuxent River and the Western Branch
with drainage areas of 160 and 110 square miles,
respectively.  The tidal portion extends to
Hardesty, Maryland, a distance of 54 miles from the
mouth of the estuary.
The headwaters of the Patuxent are impounded above
Laurel, Maryland in the Triadelphia Reservoir and
the T. Howard Duckett Reservoir (Rocky Gorge).  The
Rocky Gorge Dam provides for a generally regulated
flow in the mainstream of the Patuxent downstream
to the confluence of the Little Patuxent with the
mainstem, a distance of 17.5 river miles.  During
the summer months, this regulated flow amounts to
approximately 10 million gallons per day (mgd) or
15 cubic feet per second (cfs).  It is in this
critical reach (17.5 miles) that stream quality was
improved by upgrading the Parkway Wastewater
Treatment Plant.

The Little Patuxent is not regulated by dams and
exhibits irregular flow patterns following thunder-
storm activity.  Surging flows into the estuary
during the summer are attributed mainly to the
Little Patuxent.  Total annual precipitation in the
basin is estimated at 30-44 inches per year with the
maximum precipitation occurring in July or August.

DESCRIPTION OF STUDY

For the purpose of obtaining data for model
application, 52 water quality sampling stations
were located in the Patuxent River Basin.  Twenty-
seven of these stations were located in the estuary
between river mile 0.0 and 54.0.  Fourteen stations
were located in the free flowing mainstream of the
Patuxent between river mile 54.0 and river mile
81.0 at Laurel, Maryland downstream from Rocky
Gorge Dam.   Eleven sampling stations were estab-
lished in the Little Patuxent from its confluence
with the mainstem upstream to Savage,  Maryland,
river mile  18.0.

Water quality surveys were carried out in the Basin
during April  3-5, June 4-7, July 9-12, and
October 9-12, 1973.   The June and  July intensive
surveys encompassed the entire Basin.   The April
and October surveys were confined  to the estuary
with the April  survey designed to  determine rough
salinity gradients for estimating  dispersion co-
efficients.   The October survey measured surface
and bottom  dissolved oxygen concentrations for model
verification.  Data collected in the estuary were
obtained during slack water conditions.
* Reaction rates refer to first order rate constants
  The models used in these studies use first order
  rate expressions to represent the nitrification
  and carbonaceous deoxygenation processes.  The
  question of whether a first order representation
  of these processes is the most realistic is not
  a point of discussion for this paper.

-------
During the period of July 28-31, 1975, an intensive
water quality survey was carried out in the
critical reach of the free-flowing Patuxent from
below the dam of the Rocky Gorge Reservoir
(mile 81) downstream to the head of tide (mile 54).
The purpose of this survey was to obtain data for
recalibration of the existing model.  The Parkway
Plant located at river mile 74.5 had gone on line
during January, 1975 with advanced waste treatment
(AWT).  Next, a survey was initiated from October
14-16, 1975, to obtain data to validate the reaction
rates determined from the July 28-31, 1975 survey
data.

Flows for the studies were obtained from stream
discharge gages located in the free-flowing
portion of the Basin.  The United States Geological
Survey made current meter discharge measurements at
each site during the June and July, 1973 studies
and again prior to the 1975 surveys.  This enabled
the USGS to furnish stream discharges for the gage
heights read at the time of sampling.  The samples
were analyzed at the Annapolis Field Office labora-
tory during the 1973 and 1975 surveys for the
following parameters:  DO, BOD5, TOC, TC, TKN, NH3,
N02+N03, Pi, TP and chlorophyll a.  Salinity,
conductivity, temperature and pH were routinely
measured in the field.

Special studies during 1973 included long-term BOD
measurements at specified estuarine and stream
stations for the purpose of attempting to measure
in-stream carbonaceous and nitrogenous oxygen
demand rate constants.  Methyl blue, an inhibitor
to the  bacterial oxidation of ammonia nitrogen, was
injected into duplicate samples to determine the
second  stage oxygen demand (nitrogenous BOD).
Again, during July 28-31, 1975, in the critical
reach below the Parkway AWT Plant, an attempt was
made to follow a specific parcel of water based
on time-of-travel data obtained during the week of
June 30, 1975.  Long-term BOD and the nitrogen
series were run on samples from the selected
stations below the discharge of the Parkway AWT
Plant.
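
One common way to reduce such long-term BOD series to a first-order rate
constant is a log-linear fit of the remaining demand against time; the
sketch below (Python, with invented data) shows that calculation and is
not presented as the Annapolis Field Office procedure.

    import math

    def first_order_rate(times_days, bod_exerted, ultimate_bod):
        """Least-squares slope of ln(ultimate - exerted BOD) versus time;
        the negative of the slope is the rate constant K (base e)."""
        ys = [math.log(ultimate_bod - y) for y in bod_exerted]
        n = len(times_days)
        x_bar = sum(times_days) / n
        y_bar = sum(ys) / n
        slope = (sum((x - x_bar) * (y - y_bar)
                     for x, y in zip(times_days, ys))
                 / sum((x - x_bar) ** 2 for x in times_days))
        return -slope

    # Invented long-term BOD series (mg/1 exerted at each day):
    days = [1, 2, 3, 5, 7, 10]
    exerted = [2.6, 4.7, 6.3, 8.6, 9.9, 11.0]
    print(round(first_order_rate(days, exerted, ultimate_bod=12.5), 3))
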

Twenty-four hour composite samples of wastewater
treatment plant effluents were obtained from the
major plants in the basin during the June and July,
1975 surveys.  Composite treatment plant data were
also available from a survey by the Annapolis Field
Office  during October, 1972.  The nine plants
shown in Figure 1 account for approximately 96% of
the treated wastewater discharged in the entire
basin.  During the 1975 surveys, only the waste-
water treatment plants in the critical reach of the
mainstem of the Patuxent were sampled.  These
included the Maryland City, Parkway and Bowie-
Belair Plants.

DATA ANALYSIS

This discussion and those that follow will focus
on the critical reach of the free flowing Patuxent,
i.e., from below the Rocky Gorge Dam (mile 81)
downstream to the head of tide (mile 54).  As
previously mentioned, the Parkway Plant is located
at river mile 74.5 in this segment.

The data comparisons discussed below will concern
the water quality data collected during the
July 9-12, 1973 and July 28-31, 1975 surveys.  The
low flow conditions were essentially the same,
36 cfs in 1973 and 31 cfs in 1975, at the
Baltimore-Washington Parkway (mile 75) just above
the Parkway Plant.  Likewise, stream temperatures
were similar in the segment, i.e., 23°C in 1973
and 24°C in 1975.

Figure 2 illustrates the improvement in D.O. levels
between the 1973 and 1975 July surveys.  The
average D.O. concentration at mile 71.5 for the
4 day survey periods increased from 5.1 mg/1 (7/9-
7/12, 1973) to approximately 5.9 mg/1  (7/28-7/31,
1975).  The minimum observed concentration in-
creased from 3.1 mg/1 to 5.5 mg/1 for  the same
periods.

At this point, it is important to note the recent
modifications to the Parkway Plant.  Its design
flow has been expanded from a 2.4 mgd  secondary
facility to a 7.5 mgd AWT plant.  The  AWT plant
went on line during January, 1975.  The current
monthly average flow through the plant is 4.5 mgd.
It should be noted that the secondary  facility was
overloaded during the 1973 studies.  In 1973, the
flow was about the same as the current 4.5 mgd.

Prior to expansion, the Parkway Plant  was a
secondary facility utilizing trickling filters.
The expanded plant encompasses the trickling
filters plus an activated sludge system which
achieves high BOD removal and the nitrification of
ammonia nitrogen to nitrate nitrogen.  Micro-
strainers have been added to further reduce the
suspended solids.  The effluent then goes through
a chlorine contact chamber and is aerated prior to
discharge to the Patuxent (1).  This added aeration
process has resulted in D.O. levels of 7-8 mg/1
in the AWT effluent.  In July, 1973, D.O. effluent
                                                     134

-------
[Figure 2 (graphic).  Dissolved oxygen concentrations (mg/1) for the July, 1973
and July, 1975 surveys; labels mark the Little Patuxent River and the Parkway
Plant.]
   levels were  4-5 mg/1  in  the  absence  of  aeration
   and  with  secondary  treatment.

   During July,  1973,  the typical  wastewater  reduc-
   tions achieved at the Parkway  Plant  amounted  to
   83%  for BOD5  and 17 mg/1  for TKN.  The  ammonia
   levels comprised about 90% of  the  TKN in the  July,
   1973 effluent.  The AWT  effluent during July, 1975
   showed good  levels  of BOD removal, averaging  5.5
   mg/1 BOD5, while TKN  averaged  7.1  mg/1, the re-
   moval rate being 98%  for BOD5  and  72% for  TKN.
   During August, 1975,  the plant achieved an 85.8%
   removal of TKN.  A  95% BOD5  reduction is the  norm
   at the Parkway AWT  Plant.

   With the  above background information in mind,
   one  would naturally expect accompanying reductions
   in BOD and TKN downstream from the Parkway Plant.
   The  average  ammonia levels for the July 9-12, 1973
   and  the July  28-31, 1975 periods decreased from
   2.7  to 0.5 mg/1 at  river mile  73.7,  a mile below
   the  plant, and from 0.5  to 0.09 mg/1 at mile  66.4,
   about 8 miles below the  Parkway discharge  and
   just above the Bowie-Belair  discharge.  The
   N02+N03 levels at the same two stations for the
   two  periods  increased from 1.3 to  3.5 mg/1 and
   from 1.4  to  3.0 mg/1  for 1973  and  1975, respective-
ly.  The 1973 N02+N03 data show no instream increase
in N02+N03, even though the NH3 results above indicate
that nitrification was occurring.  The large increase in
   N02+N03 during July,  1975 was  due  to the Parkway
   Plant effluent containing around 10-12  mg/1
   N02+N03.  The apparent loss  of nitrogen from  the
   system will  be addressed later in  this  paper.
   Also, there were corresponding  reductions  in
instream BOD5 concentrations.  For example, the
July, 1973 BOD5 averaged 5.0 mg/1 below the plant
(mile 73.7) and 3.0 mg/1 at the end of the reach
(mile 66.4).  July, 1975 data showed a general
average of 1.0 mg/1 BOD5 throughout the reach.

MODEL APPLICATION

Two models from the CMS (Comprehensive Modelling
System), a system of mathematical models developed
by Crim and Lovelace, were applied to the Patuxent
River System.  The two models used were AUTOSS and
AUTOQD (2).

Both AUTOSS and AUTOQD contain a hydraulic
component that computes the streamflow profile,
and a water quality component that computes concen-
tration profiles.  The models are one-dimensional,
single-channel models that use first order
kinetics to represent instream biochemical
processes.  AUTOSS is a steady state model while
AUTOQD is a quasi-dynamic model.  AUTOQD represents
flow patterns as step-shaped patterns in time, and
water quality concentrations as continuous patterns.
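
As a minimal illustration of the first order kinetics these models employ
(a forward calculation only; the decay rate, loading, and travel times below
are hypothetical illustration values, not results from this study):

    import math

    # First order decay of a constituent load (e.g., ultimate CBOD)
    # along a reach: L(t) = L0 * exp(-K * t), with t the travel time
    # in days and K the rate constant (1/day, base e).
    def remaining_load(L0, K_per_day, travel_time_days):
        return L0 * math.exp(-K_per_day * travel_time_days)

    for t in (0.5, 1.0, 2.0):
        print("t = %.1f days   load = %.0f lbs/day"
              % (t, remaining_load(1000.0, 0.6, t)))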

Data on the free flowing portion (for the period
July 15-19, 1968) were available from a cooperative
study with the Maryland Department of Water
Resources.  Both the 1968 and the 1973 data show
the mainstem to contain high concentrations of TKN
nitrogen.  These high nitrogen concentrations were
attributed to wastewater treatment plant discharges
of excessive amounts of TKN and NH3 forms of
nitrogen.

AUTOSS was calibrated to simulate DO conditions in
the mainstem for the periods July 15-19, 1968 and
June 4-7, 1973.  Ultimate carbonaceous (CBOD) and
nitrogenous (NBOD) BOD loadings from the treatment
plants were entered into the model at the appro-
priate river miles.  The ultimate CBOD and NBOD
loadings were calculated from the commonly used
literature values, where ultimate CBOD = 1.45 BOD5
and ultimate NBOD = 4.57 TKN.  Relatively steady
state low flow conditions occurred during July,
1968 and medium flow conditions occurred during
June, 1973.  The outflow at the downstream junc-
tion of the model was 120.1  cubic feet per second
(cfs) and 458.3 cfs, respectively.  Stream
velocities used in model calibration were obtained
from 1968 and 1975 studies by the Annapolis Field
Office of time-of-travel and from depth measure-
ments made during 1973 studies.
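
As a hedged sketch of the loading calculation just described (the 1.45 and
4.57 multipliers are the literature values cited above; the 8.34 lb per mgd
per mg/1 factor is the standard unit conversion; the flow and effluent
concentrations are the Parkway AWT figures quoted earlier and serve only as
an example):

    # Convert effluent BOD5 and TKN concentrations (mg/1) and plant
    # flow (mgd) to ultimate CBOD and NBOD loadings (lbs/day).
    LB_PER_MGD_MGL = 8.34

    def ultimate_loadings(flow_mgd, bod5_mgl, tkn_mgl):
        cbod_ult = 1.45 * bod5_mgl * flow_mgd * LB_PER_MGD_MGL
        nbod_ult = 4.57 * tkn_mgl * flow_mgd * LB_PER_MGD_MGL
        return cbod_ult, nbod_ult

    # Example: 4.5 mgd AWT effluent at 5.5 mg/1 BOD5 and 7.1 mg/1 TKN.
    cbod, nbod = ultimate_loadings(4.5, 5.5, 7.1)
    print("ultimate CBOD = %.0f lbs/day, ultimate NBOD = %.0f lbs/day"
          % (cbod, nbod))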

During the period of July 9-12, 1973, an average
net flow of 152.3 cfs was recorded in the mainstem
of the Patuxent.  The calibrated coefficients
obtained from the July, 1968 and June, 1973 model
runs were used in the model  validation runs.
Treatment plant discharge values for carbonaceous
and nitrogenous oxygen demand loadings used in the
model reflected the results of the composite
sampling of July 10-11, 1973.  Observed DO values
were assigned to major inflows while a DO of 5.0
mg/1* was used for treatment plant effluents.  The
model verification curve for this flow period and
                                                        * D.O.  data were not taken for the final  effluent
                                                          just  prior to its discharge to the stream.   This
                                                          was an unfortunate oversight since effluents
                                                          comprise a substantial  portion of the total
                                                          flow  in the Patuxent above confluence with the
                                                          Little Patuxent.
                                                       135

-------
 the  June  4-7,  1973  calibration  are  shown  in
 Figure  3.   Documentation  of  the 1973  studies  in-
 cluding model  application to the estuary  is  set
 forth in  Technical  Report 58, by Pheiffer and
 Lovelace  (3).
[Figure 3 (graphic).  Observed D.O. (average and range) for the 6/4-6/7, 1973
and 7/9-7/12, 1973 survey periods versus river mile, with the model
calibration and verification curves.]
MODEL RECALIBRATION

As stated earlier, the Parkway AWT Plant went on
line in January, 1975.  This necessitated adjust-
ments to the existing model which had been cali-
brated and verified on instream reaction to the
discharge of secondary treated effluent from the
Maryland City  (mile 77.5), Parkway (mile 74.5),
and the Bowie-Belair Plants (mile 64.5).  With
this knowledge, the July 28-31, 1975 intensive
survey was planned to encompass the critical reach
from below Rocky Gorge Dam (mile 81) downstream to
the head of tide (mile 54).  The Washington
Suburban Sanitary Commission maintained a low flow
similar to the July 9-12, 1973 flow condition, at
the Rocky Gorge Reservoir for the study period.
Instream temperatures resembled July, 1973 water
temperatures.

Utilizing the July, 1973 and the July, 1975 stream
data, two independent methods were employed to
determine reaction rates for the carbonaceous bio-
chemical  oxygen demand (CBOD) and the nitrogenous
biochemical  oxygen demand (NBOD).  First, a semi-
logarithmic  graphic solution of plotting stream
station loadings (lbs/day) versus travel time in
days was  used.   Next, the reaction rates obtained
from the  semi-log plots were tested in the model.
Only  through  model  testing of the rates and a
knowledge  of  the  stream can the modeller select
the best values which  work in the model and yet do
not compromise the  field data.
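
For illustration, the semi-logarithmic rate determination amounts to fitting
a straight line to the natural logarithm of the segment loadings against
travel time; the negative of the slope is the first order rate K (1/day,
base e).  A minimal sketch with hypothetical loadings (not the study data):

    import math

    travel_time = [0.0, 0.5, 1.2, 2.0]          # days (hypothetical)
    loading     = [900.0, 740.0, 560.0, 410.0]  # lbs/day (hypothetical)

    # First order decay implies ln(load) = ln(load0) - K * t, so K is
    # the negative slope of a least-squares line through (t, ln(load)).
    t = travel_time
    y = [math.log(v) for v in loading]
    n = len(t)
    t_bar = sum(t) / n
    y_bar = sum(y) / n
    slope = (sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
             / sum((ti - t_bar) ** 2 for ti in t))

    print("decay rate K = %.2f per day (base e)" % (-slope))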

The above  methods for  rate determination were
employed for  the  calibration of the 1973 version
of the model, i.e.,  the one validated when the
critical stream segment was receiving secondary
effluent only.  The  existing version of the model
was recalibrated  from  rate determinations based on
the July 28-31, 1975 stream data.  This calibration
reflects the  effect  of the Parkway AWT Plant
discharge  from mile  74.5 downstream to mile 64.5.

CONCLUSIONS

Figures 4  and 5 are  semi-log plots of the July,
1973  and July, 1975  CBOD and NBOD loadings at
stream sampling stations in the critical  reach
below the  Parkway Plant, mile 73.7 to 66.4.   These
plots are  intended to  graphically show the reduced
loadings for  both CBOD and  NBOD due to improved
treatment  at  the Parkway Plant.   As previously
discussed, the stream  flow  conditions and stream
temperatures  are nearly identical  for the two
study periods.

Figure 4 shows a reduction  in the CBOD decay rate
of 51% due to higher BOD5  removal  at the  Parkway
Plant.  The K rates determined were 0.61
(1/day, base e) for the period July 9-12, 1973 and
0.30 (1/day, base e) during July 28-31, 1975.  It
should be noted that only two reliable data points,
mile 73.7 and 66.4, were obtained from the long
term BOD studies.  The grab samples were not
dechlorinated as were the long term samples,
thereby giving low, erratic BOD5 values.
                                                    136

-------
The conclusion drawn from Figure  5  is  that the  K
rate for NBOD has been reduced  37%  with  the
addition of nitrification at  the  Parkway Plant.
The determined K rates for NBOD decreased from
0.76 (1/day, base e) in 1973 to 0.48 in 1975.
[Figures 5 and 6 (graphics).  Figure 5: semi-log plot of NBOD loadings versus
travel time from Parkway (days) for the July, 1973 and July, 1975 surveys.
Figure 6: D.O. predicted using the 1973 model rates (secondary treatment),
the 1975 model calibration, and the 1975 graphic rates (semi-log plots),
compared with the 7/28-7/31, 1975 observed average and range.]
 The rate that best fit the 1973 version of the
 model for the decay of CBOD was 0.62 (1/day, base e)
 in the critical reach below the Parkway Plant.  The
 CBOD rate tested in the 1975 model was 0.40
 (1/day, base e).  Even with the restriction of
 limited BOD data for the July, 1975 period, the
 rates used in the model closely paralleled
 those determined graphically.

 For NBOD, a K rate of 0.65 was used in the upper
 half of the critical reach in the 1973 model,
 while a rate of 0.45 worked best in the lower end
 of the reach.  The calibrated rates for the 1975
 model were 0.50 and 0.27 in the same segments.

 Figure 6 represents a sensitivity analysis of the
 model decay rates discussed above.  Three
 separate model runs were made.  First, the 1973
 rates were used to predict the July, 1975 DO field
 data.  These rates gave a better fit in the lower
 half of the critical reach, but not at the sag
 point.  Next, the 1975 graphic rates were plugged
 into the model.   Thirdly, the graphic rates were
 increased for NBOD decay in an attempt to get the
 best calibration.  However, no rates were adjusted
 to the point where the TKN and BOD field data were
 compromised.  As indicated by Figure 6, the model
 prediction with the calibrated rates predicted
 higher DO levels in the lower part of the critical
 reach.  The discussion section which follows will
 address the need for further field studies and
 model adjustments.  However, the calibrated decay
 rates were tested with an independent set of data
 for October 15-16, 1975, and the model predicted
 DO quite accurately in the sag area at the dis-
 charge point of the Parkway AWT Plant.

 The general  conclusion to be drawn in this paper
 is  that modifications to model rates due to
 changes in stream loadings, changes in discharge
 locations, etc., should be based on estimates from
 actual  field data.   The best way to estimate these
 decay rates for free flowing streams is not by
 curve fitting with  the existing model.  Rather,
 free flowing stream rates should be obtained by
 plotting semi-logarithmically the actual stream
 segment loadings versus travel time.  These rates
 can then be adjusted to calibrate the model (but
 not to the degree that the BOD and nitrogen data
 are compromised) so that DO prediction profile
 matches the field data.

 DISCUSSION

 The investigators realize that there are short-
 comings in the studies both in terms of data
 requirements and model simulation.  The contribution
 of  this paper to the state-of-the-art review of
 modelling might best be described as the basic
 awareness of the investigators of the need to up-
 date a model when the conditions on which that
 model had previously been validated have changed
                                                      137

-------
and the realization that field studies must be
carried out to define changes in model coefficients
due to modifications in wastewater inputs to the
model.

Additional studies seem warranted to further
define instream changes in the decay of CBOD and
NBOD below the Parkway AWT Plant.  Field studies
should again be carried out during a steady state,
low flow condition accompanied by warm weather
stream temperatures.  The nature of effluents from
the Maryland City, Parkway and the Bowie-Belair
Plants must be better defined.  The results of
24 hour composite samples might not truly represent
the proper numbers for BOD and nitrogen to use as
a steady state input.  Either hourly grab samples
or a daily grab sample at discrete plant flow
periods should be obtained.  A weighted average
of these samples results could give more reliable
numbers for model usage.  In addition, a firm fix
on the DO level of the final plant(s) effluent
should be established for a typical  warm weather
condition.

Future studies should delineate the effluent plume
in order to determine the extent of instream
mixing, the rate of decay by the microbial popula-
tion within the plume area, and the configuration
of the plume so that the stream sampling below the
point of discharge can be designed to represent a
composite analysis of stream quality in that
segment.  In addition, oxygen sediment demand
should be measured at and below the point of dis-
charge in an attempt to quantify the effects of
any sludge deposits on the oxygen content of the
overlying water.  Benthic oxygen demand should be
quantified throughout the entire critical segment
in order to substantiate any assumption made
regarding background oxygen demand or depletion.

The investigators strongly feel that more visual
observations are needed in the critical area where
they are trying to define instream reaction rates.
As noted earlier, the data  for 1973  indicate that
nitrification  of NH3 to N03 appears  to occur.
However, the reduction of NH3 does not result in a
corresponding increase in N03.  Rather, the data
indicate a loss of nitrogen from the system.  The
loss of nitrogen through instream denitrification
does not appear probable, since DO levels do not
approach anoxic conditions.  Algae cannot account
for this decrease, since chlorophyll a_ levels are
and have been extremely low, i.e., 1-10 µg/1 in the
free flowing Patuxent.  Rooted aquatic plants, or
other shoreline vegetation, if present in sufficient
quantities, could account for the nitrogen loss
from the water column by plant utilization of the
inorganic forms of nitrogen as well  as affecting
the DO budget on a diurnal basis.  This possibility
should definitely be investigated, since field
biologists with the State of Maryland have indica-
ted the presence of rooted aquatic plants in the
study area.

It is recognized that BOD is not the ideal
parameter for determining rates.   One reason is
that laboratory error is often high.  In the
Patuxent, chlorine in the stream samples gave
erroneous values in some instances,  though once the
chlorine was detected, it was destroyed in the
laboratory before BOD was run.  But, if BOD is to
be used for rate determinations, sufficient BOD
measurements should be taken to make the rate
determination statistically valid.
It should be noted that the models discussed  in
this paper were used by the State of Maryland
in 1973 to evaluate effluent limitations  proposed
for wastewater treatment plants in the  Patuxent
River Basin Water Quality Management Plan  (4).
The recalibrated model (1975) of the free  flowing
mainstem was also given to Maryland at  the request
of the Maryland Water Resources Administration.
REFERENCES

(1)  Schell, T. "Parkway Wastewater Treatment
     Plant", Washington Suburban Sanitary
     Commission, (unpublished manuscript),
     April, 1974.

(2)  Crim, R. L. and Lovelace, N. L.  "AUTO-QUAL
     Modelling System", Annapolis Field Office,
     Region III, U. S. Environmental Protection
     Agency, Technical Report No. 54, March, 1973.

(3)  Pheiffer, T. H. and Lovelace, N. L.
     "Application of AUT0-QUAL Modelling System
     to the Patuxent River Basin", Annapolis Field
     Office, Region III, U. S. Environmental
     Protection Agency, Technical Report No. 58,
     December, 1973.

(4)  "The Patuxent River Basin Water Quality
     Management Plan", Maryland Environmental
     Service, (draft copy), April, 1974.
                                                    138

-------
                                  EFFICIENT STORAGE OF URBAN STORM WATER RUNOFF
                    J.  Robert Doyle
          US Environmental Protection Agency
               Denver,  Colorado  80201
 James  P.  Heaney,  Wayne C.  Huber and Sheikh M.  Hasan
    Department  of  Environmental Engineering Sciences
                  University of Florida
               Gainesville,  Florida  32611
A mixed integer linear programming model is used to
evaluate alternatives for use of storm water detention
in flood plains and developing areas.  This model is
suitable where a refined analysis is needed.  Mixed
integer programming is appropriate when it is necessary
to handle fixed charge problems.  This added feature
significantly increases the computational complexity
of the model as compared to standard linear program-
ming procedures.  Given an inventory of available
storage sites, both in and out of the flood plain, and
costs for other flow reduction measures, the optimi-
zation model determines the least costly combination
of storage reservoirs.  Application to the Hogtown
Creek drainage basin in Gainesville, Florida  is
included to demonstrate the techniques.
                      Introduction

Storage facilities for controlling the quantity and
quality of urban runoff are becoming increasingly
popular.1  Urban areas have numerous storage options
available such as natural depressions, rooftops and
parking lots within the drainage basin in addition to
storage in the flood plain itself.  The model presen-
ted in this paper addresses one part of the analysis
regarding selecting the number and capacity of reser-
voir sites.  The objective is to find the least costly
way of providing a specified level of service.  Other
considerations regarding environmental impacts,
implementation problems, etc., are not considered here.
Hasan presents procedures for examining these other
considerations.2

                  The Decision Model

A mixed integer programming model was used to evaluate
alternatives for storing urban runoff.3  The objective
is to minimize the cost of storing water for a specified
level of runoff control.  The model provides the
engineer or urban planner with a method for evaluating
the complete drainage system and utilizes simplified
information from each subsystem to test the consequences
of instituting various land use or water management
plans.

The objectives of the runoff control model are:

   1.  to synthesize hydrologic, land use, and
       runoff control cost data from all parts
       of an urban watershed in order to evaluate
       a storm water runoff alternative;

   2.  to find the least cost solution to the
       problem of maintaining natural stream
       flows within an urbanizing watershed,
       thus deriving certain water quality
       benefits; and/or

   3.  to assign the responsibility for control
       of urban storm water quality to land
       developers and owners by specifying
       an allowable rate of runoff from their
       lands.

The complete mixed integer programming model is listed
below with each equation or function discussed subse-
quently.

     Minimize  Z = Σ_i (S_i u_i + λ_i D_i) + Σ_k (T_k v_k + Σ_l φ_kl C_kl)     (1)

subject to

     Σ_j X_ij + S_i - Σ_m X_mi - R_i = 0        for all i                      (2)

     T_k + R_k = P_k                            for all k                      (3)

     S_i - S̄_i λ_i ≤ 0                          for all i                      (4)

     T_k - Σ_l t_kl φ_kl ≤ 0                    for all k                      (5)

     Σ_l φ_kl = 1.0                             for all k                      (6)

where

     Z      = the total fixed and variable cost of storm water
              storage in the flood plain and all sub-basins,

     i      = number designating a node; flood plain storage site
              or stream junction,

     S_i    = units of water stored at flood plain site i,

     S̄_i    = total capacity of storage site i,

     u_i    = unit cost of water stored at site i,

     λ_i    = 1 if storage site i is used, 0 otherwise,

     D_i    = fixed cost of site i,

     k      = number designating a sub-basin,

     l      = number designating a sub-basin storage alternative or
              alternative combination in sub-basin k,

     T_k    = volume of water stored in sub-basin k,

     v_k    = unit cost of water stored in sub-basin k,

     φ_kl   = 1 if sub-basin storage alternative l in sub-basin k is
              used, 0 otherwise,

     C_kl   = fixed cost of storage alternative l in sub-basin k,
                                                        139

-------
     X_ij   = volume of water which flows from node i to downstream
              node j,

     X_mi   = volume of water which flows from upstream node m into
              node i,

     R_i    = volume of water entering flood plain site i (R_i = R_k),

     R_k    = volume of water leaving sub-basin k,

     P_k    = volume of runoff entering sub-basin k, and

     t_kl   = storage capacity of alternative l in sub-basin k.
The objective function, equation (1), is minimized in
order to obtain the least total cost, subject to cer-
tain constraints.  The first summation in the objective
function is the total variable cost, (S_i u_i), and fixed
cost, (λ_i D_i), of flood plain storage throughout the
watershed.  The second summation in the objective
function is the total cost associated with use of all
sub-basin storage alternatives, the first term being
the variable cost, (T_k v_k), and the second term being
the fixed cost, (φ_kl C_kl).

Determination of the least cost is subject to the
physical laws of continuity.  Continuity constraints
are written for stream flows in the flood plain and
flows within each sub-basin as shown by equations (2)
and (3), respectively.  Continuity of stream flow must
be maintained at every point or node in the network
where two or more streams join or where storage is
permitted.  These constraints specify that the dif-
ference between the water volume which flows into and
out at a given point must be stored at that point.
Therefore, equation (2) requires that storage at node
i, (S_i), equals the difference between the storm water
inflow, (ΣX_mi + R_i), and downstream releases, (ΣX_ij).
Equation (3) states simply that within sub-basin k,
storage plus releases to the flood plain, (T_k + R_k),
must equal the total runoff volume entering the sub-
basin, (P_k).


The variables λ and φ of the objective function can
only take on values of either 0.0 or 1.0.  When one
of these variables is set equal to 1.0, a fixed charge
is incurred; when the value is 0.0, no charge is in-
curred.  Each of these zero-one variables is associated
with a possible storage site, such that if any amount
of water is stored there, the zero-one variable should
be set to 1.0.  In fact, when solving for the optimal
continuous solution,  it is very possible that many of
the zero-one variables will be set at values between
0.0 and 1.0.  If this is the case,  a branch and bound
procedure is used to determine the optimal mixed
integer solution in which all zero-one variables take
on integer values.  The functional inequalities (4)
and (5) are zero-one inducement constraints.

The first of these zero-one constraints, inequality (4),
is related to flood plain storage sites and performs two
functions in the decision model.  The first function is
to force at least a portion of the fixed cost to be
incurred at a used storage site when solving the problem
for the optimal continuous solution.  This is necessary
to ensure the proper behavior of the model.  The second
function is to increase the efficiency of the branch and
bound procedure in finding the optimal mixed integer
solution.  For all flood plain storage sites, the
storage volume used, (S_i), must be less than or equal
to the site's storage capacity, (S̄_i).


The second zero-one inducement  constraint  set,
inequality (5), is required for sub-basins which have
fixed cost storage alternatives.  These  storage alter-
natives are actually combinations of potential  storage
sites which might be utilized within a single sub-basin.
Therefore, these alternatives are mutually exclusive.
Equation (6) specifies this mutually exclusive condition
by requiring that the sum of all φ_kl terms (one term
associated with each alternative) within a sub-basin
equal 1.0.  The branch and bound integer solution pro-
cedure will require that the φ_kl terms equal either 0.0
or 1.0.  Therefore, inequality (5) actually specifies
the relationship between the variables R_k and φ_kl.

For a given value of R_k, the φ_kl term associated with
the alternative with the lowest fixed cost and suffi-
cient storage capacity, (t_kl), to satisfy the continuity
conditions specified by equation (3), will be set equal
to 1.0.  Within sub-basin k, all other φ_kl terms will be
set equal to 0.0.

All variables of the decision model must take on non-
negative values.  The variables which represent stream
flow volumes and storage volumes also have a specified
upper bound.  The stream flow variables have an upper
bound, (X  ), equal to the estimated natural stream flow
         ij                                       *
volume over a set time period.  This upper bound is cal-
culated through simulation of the natural  hydrograph
representing conditions prior to urbanization,  thus con-
straining urban storm water runoff  to rates  which
existed under natural conditions.  The upper bound on
storage volume for each facility is set by the  feasible
limitations of storing water in each facility.

The mixed integer programming model was run  using the
IBM MPSX package.  It is relatively expensive to
utilize and is not widely available.  If the fixed
charge part of the problem is eliminated,  then  one can
use standard linear programming codes which  are widely
available.  Thus, this approach is appropriate  for more
refined investigations.
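
For readers who wish to experiment with the structure of equations (1)-(6),
the following minimal sketch (not the MPSX implementation used in this study)
sets up the fixed-charge model for a toy network of one sub-basin feeding one
flood plain node, assuming the open-source PuLP library; all costs,
capacities, and volumes are hypothetical.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

    # --- hypothetical data ------------------------------------------
    u_i, D_i, S_cap = 350.0, 1500.0, 50.0   # unit cost, fixed cost, capacity at node i
    v_k = 350.0                             # unit cost of sub-basin storage
    alts = {"A": (20.0, 800.0),             # alternative: (t_kl capacity, C_kl fixed cost)
            "B": (60.0, 2000.0)}
    P_k = 80.0                              # runoff entering sub-basin k
    X_bar = 40.0                            # natural-flow bound on the downstream release

    # --- decision variables -----------------------------------------
    S_i  = LpVariable("S_i", lowBound=0, upBound=S_cap)    # flood plain storage used
    T_k  = LpVariable("T_k", lowBound=0)                   # sub-basin storage used
    R_k  = LpVariable("R_k", lowBound=0)                   # release from sub-basin to node i
    X_ij = LpVariable("X_ij", lowBound=0, upBound=X_bar)   # release downstream of node i
    lam  = LpVariable("lambda_i", cat=LpBinary)            # 1 if node i site is used
    phi  = {a: LpVariable("phi_" + a, cat=LpBinary) for a in alts}

    prob = LpProblem("storage_allocation", LpMinimize)
    prob += (S_i * u_i + lam * D_i + T_k * v_k
             + lpSum(phi[a] * alts[a][1] for a in alts))            # objective (1)
    prob += X_ij + S_i - R_k == 0                                   # continuity at node i (2)
    prob += T_k + R_k == P_k                                        # sub-basin continuity (3)
    prob += S_i - S_cap * lam <= 0                                  # zero-one inducement (4)
    prob += T_k - lpSum(alts[a][0] * phi[a] for a in alts) <= 0     # zero-one inducement (5)
    prob += lpSum(phi[a] for a in alts) == 1                        # mutually exclusive (6)

    prob.solve()
    print("cost =", value(prob.objective),
          "S_i =", S_i.value(), "T_k =", T_k.value())

If the fixed charges are dropped (λ and φ removed), the same sketch reduces to
an ordinary linear program, mirroring the simplification discussed above.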

                    The Study Area

Hogtown Creek is the major natural drainage  system for
the western portion of Gainesville, Florida, where the
University of Florida is located.  The drainage basin
has an area of around 13,000 acres  and is  made  of two
predominately different land forms.  The southern part
of the basin is primarily low lands in which water col-
lected throughout the watershed is eventually recharged
to the ground water system.

In contrast, the northern part  of the basin  comprises
uplands which have been extensively developed in some
areas with more outlying areas  currently undergoing
suburban development.  However, a significant amount of
natural and agricultural land still exists.

Increasingly severe downstream  flooding problems asso-
ciated with upstream development led to  the  passage of
a flood plain ordinance which included provisions to
retain runoff peak flows and volumes at  their pre-
development levels.  As a result of this ordinance
numerous detention facilities are in operation  in the
basin.  This modeling application was made to provide
some guidance in comparing the  suitability of alter-
native sites.

The decision model is set up by partitioning the entire
drainage basin into the twenty  subcatchments shown in
network form in Figure 1.  The  appropriate continuity
equations are written according to  this  network
                                                       140

-------
utilizing the general form shown by equations (2) and
(3).   Constraints established by estimating natural
hydrologic conditions and storage capacities are used
in equation (3), and inequalities (4) and (5).
          [Figure 1.  Hogtown Creek Network Flow Diagram.  Legend:
          detention sites, ground water, basin runoff input, junction
          nodes, and points of concentration only.]
 Variables for which input values must be determined
 prior to running the model are discussed below.

 Fixed and Variable Costs, D_i, C_kl, u_i, v_k


 The fixed cost of flood plain storage at site i, D_i,
 and of storage alternative l in sub-basin k, C_kl, were
 assumed to be $1,500 per acre of land.  The unit costs
 of water stored at site i, u_i, or in sub-basin k, v_k,
 were assumed to be $350 per 1,000 ft3.
 Runoff Volume, P_k
 Soils information, along with assumptions about the
 natural vegetative and hydrologic conditions of the
 watershed, were used to calculate natural stream flow
 hydrographs.4,5  All hydrographs were calculated
 utilizing a design event with a recurrence interval
 of 3 years and a rainfall intensity of 0.31 inches
 per hour for a duration of 15 hours.  These hydrographs
 were used to establish stream flow constraints for use
 in the decision model and were calculated using a com-
 puter simulation model which required runoff coeffi-
 cients and times of concentration as inputs.

 In order to examine ultimate urban conditions, the study
 area was first categorized into developed and undeveloped
 areas.  The developed area was further broken down into
 land use types to determine runoff coefficients.7  The
 undeveloped area was categorized by projected land use,
 which was predominantly residential.   The same runoff
 coefficients used for existing land use categories were
 used for future land uses, with the exception of resi-
 dential use.  In areas where soils were known to have
good drainage characteristics, the use of grass-lined
swales in residential areas was assumed.  Runoff coef-
ficients were calculated for the projected residential
areas by taking into account the use of grass-lined
swales for drainage.  For each hydrograph the volume of
water which flowed from the respective sub-basins during
a 75 hour time period, assuming no sub-basin storage,
was used as the input variable, (P_k), the total runoff
volume into sub-basin k.

Capacity of Storage Sites, S̄_i and t_kl, and Streamflow Volumes, X̄_ij

The study area was surveyed to determine potential
storage sites for storage within the flood plain, (S̄_i),
and to estimate each site's storage capacity and land
area.  Within each sub-basin, significant available
sites, e.g., wetlands, were inventoried to determine
t_kl.  Sub-basin storage capacity was limited to the
runoff entering the sub-basin.  An upper limit on stream
flow volume, (X̄_ij), was determined based on maintaining
bank stability.
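
As a rough illustration of how the runoff input might be assembled (the
3-year design event of 0.31 inches per hour for 15 hours is taken from the
text; the runoff coefficient and sub-basin area are hypothetical, and the
study itself generated full hydrographs with a simulation model):

    # Total runoff volume for one sub-basin from the design storm,
    # using a simple coefficient method.
    SQFT_PER_ACRE = 43560.0
    INCHES_PER_FT = 12.0

    def runoff_volume_ft3(area_acres, runoff_coeff, intensity_in_hr, duration_hr):
        rainfall_ft = intensity_in_hr * duration_hr / INCHES_PER_FT
        return runoff_coeff * rainfall_ft * area_acres * SQFT_PER_ACRE

    # Hypothetical 650 acre sub-basin, composite runoff coefficient 0.45,
    # 3-year event of 0.31 in/hr for 15 hours.
    P_k = runoff_volume_ft3(650.0, 0.45, 0.31, 15.0)
    print("P_k = %.1f (100,000 ft3)" % (P_k / 1e5))
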
                                                                                  Results
In solving the problem, the model allocates the
specified total potential runoff volumes from each sub-
basin among the sub-basin and flood plain storage sites,
while allowing only a specified volume of runoff to flow
downstream.  The model results for this example are
shown in Tables 1 and 2.

Table 1 shows how the flow volumes were allocated to
flood plain storage sites while maintaining as a maximum
flow volume the natural stream flow conditions.  In
order to minimize storage costs, the flow volumes are set
near to the natural flow conditions.  However, only for
a few stream reaches are the flow volumes equal to the
maximum limit.  These stream reaches form a constraint
for upstream flows.  Two reaches show a zero flow which
should be changed by specifying a minimum allowable flow
volume different than zero.  Depending on the minimum
volumes specified for each reach, this added constraint
could significantly alter the results given for this
example.

As shown in  Table 1, almost all of the flood plain
storage sites were utilized to capacity.  The cost of
utilizing these sites was calculated from the land area
required for each site if filled to capacity.  The cost-
effectiveness of the various sites is not the same
because the capacity/area relationship (and therefore
capacity/cost relationship) differed for each site.
However, almost all sites were fully utilized because
the cost of flood plain storage was generally much less
than storage within the sub-basins.

Table 2 summarizes all storage allocations and costs.
Only in sub-basin 7 was the total storage capacity
utilized.  This is because this sub-basin contained the
natural depression sites which were estimated to cost
much less than constructing storage facilities.  In all
other sub-basins, storage was assumed to be available
only by providing special storm water holding facilities
at $350 per 1,000 ft3 of storage which was the most
expensive alternative considered.

The fixed cost of providing storm water storage is
the sum of the flood plain storage costs and sub-basin
7 storage costs, or $440,000.  This figure is less than
3 percent of the total optimal storage cost of approxi-
mately $15 million.  However, storage within the flood
plain alone represents more than 50 percent of the total
storm water volume stored.
                                                        141

-------
  Table 1.  Summary of Model Results - Flood Plain Allocations
            (Flow and Storage Volumes in 100,000 ft3)

     Link i,j      Xij      X̄ij        Sub-basin i      Ri       Si       S̄i
     1,J1*         56.4     56.4            1          186.0    130.0    130.0
     2,J2           0.0     45.3            2            7.5      7.4      7.4
     4,J1          25.1     25.7            3          136.0     48.2     48.2
     J1,J2         81.5     82.0            4           25.5      0.0      0.0
     J2,3          81.5    127.0            7           96.1     17.0     17.0
     3,7*         169.0    169.0            8           39.1     23.3     28.3
     14,7           3.5     22.5            9          121.0      0.0      0.0
     7,8*         251.0    251.0           10            3.5      3.5      3.5
     8,J5         267.0    288.0           11           61.0      1.8      1.8
     10,11          0.0     63.6           12           99.0     35.6     35.6
     11,J3         59.2     96.1           13           80.3     57.4     57.4
     12,J3*        63.4     63.4           14            6.8      3.4      3.4
     J3,13        123.0    159.0           15           49.0     69.7     69.7
     13,15        145.0    183.0           16           35.9     18.1     18.1
     15,J4        125.0    201.0           17           52.4     17.3     17.3
     16,J4*        17.8     17.8           18           61.7     25.7     25.7
     J4,17        143.0    218.0           19           40.9      0.0      3.1
     17,9         178.0    238.0           20           58.5     16.0     16.0
     9,18         298.0    314.0
     18,J5*       334.0    334.0
     J5,19        602.0    617.0
     19,J6*       643.0    643.0
     20,J6*       426.0    426.0
     J6,Sink*     685.0    685.0

     * Xij = X̄ij
  Table 2.  Summary of Model Results - Sub-basin Storage and Costs

     Sub-basin      Storage Volume Used     Storage Capacity     Storage Cost
                       (100,000 ft3)          (100,000 ft3)           ($)
         1                  13.8                  200.0              484,000
         2                  86.6                   94.0            3,030,000
         3                   0.0                  136.0                    0
         4                  37.4                   62.9            1,310,000
         7                  37.9                   37.9               45,000
         8                   0.0                   39.3                    0
         9                   0.0                  121.0                    0
        10                 106.0                  110.0            3,720,000
        11                   0.0                   61.0                    0
        12                  34.8                  134.0            1,220,000
        13                  11.8                   92.1              412,000
        14                  28.2                   35.1              989,000
        15                   0.0                   49.0                    0
        16                  13.2                   49.1              462,000
        17                   0.0                   52.4                    0
        18                   0.0                   61.7                    0
        19                   0.0                   40.9                    0
        20                  87.6                  146.0            3,060,000
     Flood Plain           483.0                  475.0              395,000
     Total                 940.3                 1997.4           15,127,000
                              142

-------
These results are given only as an example of how the
model can be utilized.  In actual practice, the model
should be run several times to evaluate the effects of
different land use projections and different design
storm events.  Several runs with varied inputs would
provide insight into the sensitivity of the storage
allocations and resulting costs.  For example, sub-
basin 1 at the very top of the drainage system has a
greater total potential runoff volume than any other
sub-basin (more than twice the average for any sub-
basin).  What would be the effect of a decision to
maintain that area in a more natural state rather than
allowing residential development?  The resulting cost
difference could be significant due to the location
of sub-basin 1 and the large increase in potential run-
off volume caused by development.  Also of interest may
be the cost difference of providing the necessary
storage for a one year storm rather than the three year
event considered in this example.

For this example, the model was run utilizing limited
information on the availability and cost of storage.
Presently, a great deal more information is available
on the cost of storage alternatives.  Also, as indi-
cated by the model results, utilization of the flood
plain water detention sites and marsh land may provide
the most cost-effective solution to controlling storm
water runoff.  Therefore, more emphasis should be
given to considering the availability of natural
storage areas within the various sub-basins.  Certainly,
the model is better utilized when the sub-basin alter-
native storage costs differ, as they would in
evaluating various natural depression sites which might
exist throughout the watershed.

The results for this example show that the fixed costs
 for providing storage within the flood plain and in
 sub-basin 7 are a small percentage of the total costs.
For other situations, this may not be the case.  How-
ever, if fixed costs are not considered significant,
 or if they can be reasonably estimated as variable
 cost, the model can be greatly simplified and more
 easily solved.  The model presented can be easily
 altered for considering only variable costs by elimi-
nating the integer variables and zero-one inducement
 constraints.

                      Conclusions

 The mixed integer programming model provides an
 efficient method for allocating urban storm water run-
 off  among alternative storage sites and can also be
 used  to compare land use and drainage options within an
 urbanizing watershed where it is important to include
 fixed charges.  The effort required to utilize the
 decision model as a planning tool is weighted heavily
 towards data collection, e.g., storage site locations
 and capacities, hydrologic simulation, and land use;
 the model can be used most easily where existing
 planning has already developed much of the input data
 requirements.  Unless a significant portion of the cost
 associated with storage alternatives is fixed, the use
 of a mixed-integer model as presented appears to make
 the solving of the model more difficult than is
 necessary.  Where all costs can be assumed to be
 variable, the model can be greatly simplified by dropping the zero-
 one variables and constraints.  This type of decision
 model should be used after more simplified approxi-
 mations have been developed.  These simpler approxi-
 mations should be adequate for most planning studies.
 More  refined procedures such as this mixed integer
 programming model can be used in specialized cases.
                     References

1.  Poertner, H. G., On-Site Detention of Urban Storm-
   water Runoff, OWRR, USDI, 1973.

2.  Hasan, S. M., Integrated Approach to Urban Waste-
   water Quality Management, PhD Dissertation,
   University of Florida, Gainesville, 1976.
3.  Doyle, J. R., Evaluation of Land Use Alternatives
   to Control Urban Storm Water Quality, ME Thesis,
   University of Florida, Gainesville, 1973.
4.  United States Department of Agriculture, Soil Map
   for Alachua County, Florida, University of Florida
   Agriculture Experiment Station, 1954.

5.  Soil Conservation Service, SCS National Engineering
   Handbook, Section 4—Hydrology, USDA, Washington,
   DC, 1972.

6.  Herrera, S. D., "Floodplain Definition in a
   Developing Urban Area," Masters Thesis, University
   of Florida, Gainesville, 1973.
7.  Vargas, C., "Evaluation of Area Parameters Con-
   trolling Stormwater Runoff in the Hogtown Drainage
   Basin," Masters Thesis, University of Florida,
   Gainesville, 1972.
8.  Planning Division, Department of Community Develop-
   ment, "Land Use Plan, Gainesville Urban Area,"
   Gainesville, Florida, 1970.
                                                         143

-------
                                       JOINT USE OF SWMM AND STORM MODELS

                                        FOR PLANNING URBAN SEWER SYSTEMS
               Herbert L.  Kaufman
               Partner
               Clinton Bogert Associates
               Fort Lee, New Jersey
         Fu-Hsiung Lai
         Group Supervisor and Project Engineer
         Clinton Bogert Associates
         Fort Lee, New Jersey
     A joint use of the SWMM and STORM models was
demonstrated to provide a tool for sewer system plan-
ning which effectively alleviates urban flooding and
prevents pollution in the receiving waters.  Techniques
were developed for projection of runoff characteristics
from one drainage district to others for citywide
sewer planning.

     A concept making use of the characteristics of
runoff quantity and quality and interceptor capacity
for cost-effective pollution control is described.
The pollutants discharged to receiving waters for
various interceptor capacities have been comparatively
quantified.

                     Introduction

     In the past, concern with storm runoff was mainly
over the street and basement flooding and sewers were
installed to correct the problem.  With the recognition
of the pollution problems associated with storm runoff,
reduction of pollution reaching natural water bodies
has become increasingly desirable.

     Elimination of all pollution from wet weather
flow could be prohibitively expensive, nor is it nec-
essary in order to prevent damage to the environment.
The marginal benefit received by society usually
diminishes with each additional increment of pollution
abatement facility provided.  Hence, there exists an opti-
mal level of expenditure for pollution control that
society should plan to provide.  The determination of
this optimal level depends upon many factors, includ-
ing (1) the degree of flood protection justified, (2)
the characteristics of real storm runoff, (3) the
combined sewage quantity and quality and the character
of the receiving waters.

     Because of the complex nature of the urban rain-
fall-runoff and pollutant accumulation-washout-trans-
port processes, and the many management alternatives,
reliance on computer models to assist in the system
simulation becomes advantageous.  There are at least
16 mathematical models developed which permit the
planning of sewer systems to alleviate urban flooding,
and prevent pollution in the receiving waters.1,2,3
Of these models, the EPA Storm Water Management
Model,4 frequently abbreviated "SWMM", and the Corps of
Engineers' Storage, Treatment, and Overflow Model,5
abbreviated as "STORM", are probably the most useful
and comprehensive for urban sewer system design and
planning.   These two models consider both runoff
quantity and quality.

     The SWMM Model can simulate rainfall-runoff
processes in fine scale, both spatially and temporally,
and route storm runoff quantity and quality from
individual catchments and subcatchments through a
sewer pipe network.  It can be used to analyze or
design a sewer system for an actual or synthetic storm
event.   It can also be adapted for planning studies.
The STORM Model can economically analyze hourly runoff
quantity and quality for long-term precipitation
records, based on such parameters as percent  imper-
viousness and land use.  It has been used  to  evaluate
the effectiveness of storage and treatment facilities
for overflow pollution control.6  Unlike the SWMM
Model, which considers overland and sewer  flow  rou-
ting, no such routing is made in the STORM Model.

     The existing SWMM Model requires  the  use of short
time intervals (minutes) for routing runoff quantity
and quality.  It cannot be used for simulation  or
analysis based on long-term precipitation  data.  The
STORM Model, while it can provide a time history of
overflows for a given storage and treatment capacity
for continuous rainfall data recorded  hourly, does not
model the collection and conveyance system which is an
essential part in a cost-effective study.

     The advantage of joint use of the SWMM and STORM
Models has been demonstrated in the current study to
establish the optimum design for possible  alternative
sewer systems for the City of Elizabeth, New  Jersey.
This paper will attempt to demonstrate the advantage
of using these two models, somewhat modified, jointly
for flood control and pollution abatement.

     The design of sewer system components  was based
on (1) protecting the urban area from  flooding by a
storm with a 5-year return frequency,  and  (2) provid-
ing interceptor, storage and treatment facilities to
optimally minimize the frequency of, and the  pol-
lutants in, the untreated overflows.   The  amount and
frequency of overflow that can  be tolerated  would
depend upon the characteristics of the overflow and
the assimilating capacity of receiving waters.  As far
as the environmental effects are concerned, the more
frequently occurring rainfalls appear  to cause  greater
impact on the receiving waters than the more  intensive
storms with return frequencies greater than one-year.
Hence, design of storage and treatment facilities and
interceptor sewers would be based on real  rainstorms
which could be no more intense than a  one-year return
frequency storm.

               Description of Study Area

     The study area consists of the City of Elizabeth,
New Jersey.  Data developed through modeling of Drain-
age District A were applied, through correlation of
STORM and SWMM, to planning for the entire City.  Figure
1 shows the location of the study area.

     The 4400 acres of urban development are  served by
25 drainage districts.  The population of  the City is
close to saturation and is expected to have only a
moderate future growth.  The land uses in  District A
are predominantly residential (about 90 percent),
with some neighborhood commercial (about 5 percent)
and small industrial areas (about 3 percent).   The
relevant land use data are shown in Table  1.  The
district has an estimated population of 16,500, or
about 25 persons per acre.  Its impervious area equals
47 percent of the total.

     The existing sewer system in the  City is of the
combined type.  The sewers are old and undersized, as
                                                        144

-------
is the Westerly Interceptor which parallels the Eliza-
beth River.  There are numerous complaints of street
and basement flooding.  Overflows to the Elizabeth
River are frequent.

     Secondary treatment facilities are now under con-
struction at the Joint Meeting Plant.  The City has
allocated to it a peak wet weather flow capacity of 40
million gallons per day (mgd).  The Corps of Engineers
has also planned a diked storage area, with a total
capacity of about 21 million gallons, along the Eliza-
beth River near the Joint Meeting Plant.

     The Elizabeth River is tidal from its mouth to
the Penn Central Railroad.  The river, which drains
about 23 square miles, does not provide adequate dilu-
tion for the untreated initial overflows of combined
sewage.

     FIGURE 1.  Study Area

                          TABLE 1
            DRAINAGE DISTRICT A LAND USE DATA

     Land Use          % of    % Imper-    Curb Length
                       Area     vious       (Ft/Acre)
     Single Family     71.7      43            413
     Multiple Family   18.2      50            298
     Commercial         5.3      80            283
     Industrial         2.8      80            216
     Open Space         2.0      19            296

         Modification of SWMM and STORM Programs

     In addition to continuous updating of the models
as revisions become available, both models were modi-
fied.  The SWMM program was modified to allow design
capability with gutter pipes surcharged.  This elimi-
nated revising the input sewer dimension for the
elimination of such surcharge.

     The STORM program was modified to include a dry
weather flow routine for simulation of combined sewage
overflows.  Input data allow diurnal hourly variation
of dry weather flow quantity and quality for various
land uses.  A copy of the program changes has been
forwarded to the Hydrologic Engineering Center (HEC)
of the Army Corps of Engineers for incorporation in
the new version of STORM to be released.

           Quantity and Quality Considerations

     Differences between SWMM and STORM in the consid-
eration of storm runoff quantity and quality are worth
noting.

Quantity

     In STORM, runoff volume from the watershed is
calculated on an hourly basis as a function of rain-
fall plus snowmelt.  Losses of rainfall and/or snow-
melt volume due to infiltration in the watershed are
accounted for by the use of a runoff coefficient C.  C
is derived from two basic coefficients, C1 and C2.  C1
represents the runoff coefficient for pervious areas
and C2 for impervious areas.  For a given watershed,
knowing the land uses, the amount of depression storage
averaged over the watershed and the amount of rainfall
and snowmelt, C can be calculated from C1 and C2 to
determine the amount of runoff.  There is no runoff
from the watershed until the detention storage, which
is uniformly applied to the entire watershed, is
filled.
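
As a rough sketch of the hourly bookkeeping just described (the composite
coefficient and the depression storage handling are simplified here; C1, C2,
and the hourly rainfall are hypothetical, while the 47 percent imperviousness
and 0.155 inch depression storage follow the District A discussion later in
the paper):

    # Hourly STORM-style runoff: a composite coefficient C from
    # pervious/impervious coefficients, with no runoff until the
    # watershed-average depression storage is filled.
    C1, C2 = 0.15, 0.90          # pervious / impervious coefficients (hypothetical)
    IMPERVIOUS = 0.47            # fraction impervious (District A)
    DEPRESSION = 0.155           # inches of depression storage, watershed average

    C = C1 * (1.0 - IMPERVIOUS) + C2 * IMPERVIOUS

    def hourly_runoff(rainfall_inches):
        """Return hourly runoff depths (inches) for hourly rainfall depths."""
        storage_left = DEPRESSION
        runoff = []
        for p in rainfall_inches:
            filled = min(p, storage_left)      # rainfall first fills depression storage
            storage_left -= filled
            runoff.append(C * (p - filled))    # remainder runs off, reduced by C
        return runoff

    print(hourly_runoff([0.05, 0.20, 0.31, 0.10]))
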
     In SWMM, runoff volume over a fine time interval
is computed by using a number of overland flow elements
to simulate the initial collection processes.  The
amount of runoff from pervious and impervious areas is
separately considered.  Infiltration loss from per-
vious areas is computed using Horton's equation.  In
addition, rainfall on a certain percent of impervious
areas results in immediate runoff and enters the sewer
system without time delay and loss of volume.  This is
true regardless of the amount of rainfall since dwell-
ings, such as those in Elizabeth with pitched roofs,
have roof drains directly connected to a street gutter.
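
Horton's infiltration capacity decays exponentially from an initial to a
minimum rate, f(t) = fc + (f0 - fc) * exp(-k t).  A small sketch using the
District A calibration values cited later in this paper (f0 = 3.0 in/hr,
fc = 0.28 in/hr, k = 0.00138 per second):

    import math

    F0 = 3.0        # maximum (initial) infiltration rate, in/hr
    FC = 0.28       # minimum (asymptotic) infiltration rate, in/hr
    K  = 0.00138    # decay rate, 1/sec

    def horton(t_seconds):
        """Horton infiltration capacity (in/hr) after t seconds of wetting."""
        return FC + (F0 - FC) * math.exp(-K * t_seconds)

    for minutes in (0, 5, 15, 30, 60):
        print("t = %2d min   f = %.2f in/hr" % (minutes, horton(minutes * 60.0)))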

     SWMM uses a more valid concept of the hydraulics
of rainfall-runoff processes than STORM.  Parameters
required for the SWMM Model can be reasonably estimated.
The model has been applied to a number of watersheds
in the United States7,8 and the accuracy of the runoff
quantity computations has been relatively good.  If a
watershed is segmented properly, SWMM can be used for
runoff prediction in urban areas.

     The significant advantage of STORM is its ability
to analyze input from long-term rainfall records to
evaluate overall rainfall pattern effects.
                                   Quality

                                        Neglecting land surface erosion and dry weather
                                  flow, both SWMM and STORM compute street pollutant
                                  washout by storm runoff according to the amount of
                                  dust and dirt accumulated along the street curbs prior
                                  to the occurrence of a storm.  From the total pounds
                                  of dust and dirt washout, the pollutant components
                                  such as suspended solids (SS) and BOD are computed
                                  either for the available local data or from specified
                                  default values.  In study areas where quantity and
                                  quality data are not available for evaluating pollutant
                                  parameters, default values specified to either program
                                  could be used.

                                       Both SWMM and STORM use the same default values
                                  for most of the pollutant calculations except for SS
                                  and BOD.  The value used by SWMM for suspended solids
                                  is about ten times and for BOD, about 5 times that
                                  used by STORM.  More discussions and comparisons of
                                  runoff quality computation can be found in the re-
                                  ference^.
                                                        145

-------
     As SWMM is an event simulator and  STORM an
analytical tool for long-term rainfall  records,  the
computations of street dust and dirt accumulation with
dry days and street sweeping are different.   Figure 2
shows that pollutant accumulation as computed by SWMM
increases monotonically with the number of antecedent
dry days for an assumed seven-day street  sweeping
interval.  The pollutant accumulation calculated by
STORM shows periodic fluctuation of SS accumula-
tion at the street curb, reflecting the effect of
street cleaning frequency and the number of dry days
since the last street cleaning.  In SWMM, additional
accumulation of dust and dirt on the street curb is
assumed to be equal to the maximum accumulation for
the period between successive street sweepings.  It
does not credit the cleaning effect of street sweeping.
Recognition of this difference is significant when
comparing the runoff quality computed for a single
storm event with the two models.
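
     The difference in bookkeeping can be illustrated with
a short sketch (Python, not taken from either program; the
daily accumulation rate is an arbitrary illustrative value,
while the seven-day interval and 0.75 efficiency follow the
assumptions above).

    def accumulation_curves(days, rate_lb_per_day=10.0,
                            sweep_interval=7, sweep_eff=0.75):
        """Return (swmm_like, storm_like) street dust-and-dirt loadings.

        swmm_like  - accumulation grows monotonically with antecedent
                     dry days; no credit is taken for street sweeping.
        storm_like - the loading is reduced by the sweeping efficiency
                     on each sweeping day, so the curve fluctuates
                     periodically.
        """
        swmm_like, storm_like = [], []
        swmm = storm = 0.0
        for day in range(1, days + 1):
            swmm += rate_lb_per_day
            storm += rate_lb_per_day
            if day % sweep_interval == 0:      # street sweeping day
                storm *= (1.0 - sweep_eff)     # cleaning credited
            swmm_like.append(swmm)
            storm_like.append(storm)
        return swmm_like, storm_like

    swmm, storm = accumulation_curves(25)
    print("day 25:  SWMM-style %.0f lb, STORM-style %.0f lb"
          % (swmm[-1], storm[-1]))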
   [FIGURE 2.  Street SS Accumulation, SWMM Vs. STORM.
    Drainage area = 55 acres; street sweeping interval =
    7 days; sweeping efficiency = 0.75.  Curve 1 - SWMM
    using SWMM default values; Curve 2 - STORM using SWMM
    default values; Curve 3 - SWMM using STORM default
    values; Curve 4 - STORM using STORM default values.
    (SS accumulation vs. number of dry days)]

     As the use of SWMM generally requires a fine time
interval for rainfall input and flow routing, rainfall
intensity during the course of a rainstorm may be
greater than the hourly rainfall intensity required for
STORM.  Higher intensity of rainfall would mean greater
pollutant washout from streets.

        Calibration of STORM Runoff Coefficient

     Runoff coefficients C1 and C2, respectively for
pervious and impervious areas, used in STORM were
calibrated using data generated by SWMM in District A.
These coefficients were applied to other drainage
districts in the City to obtain runoff volume from
synthetic or real rainstorms.

     For use of SWMM, District A was subdivided into
279 subcatchments with an average area of 2.3 acres.
The surface runoff from these subcatchments drains
into 139 gutter pipes and subsequently to 32 trunk
sewers.  The downstream end of the sewer system con-
nects to the Westerly Interceptor.

     The following assumptions were made in the cali-
bration of C1 and C2:

1.   Surface runoff data was generated by a program
     adapted from the SWMM RUNOFF Block without gutter
     routing.  This is consistent with the STORM
     program, in which the effect of gutter flow is
     not considered.

2.   The depression storage capacities for pervious
     and impervious areas used in SWMM were 0.25 and
     0.062 inches, respectively.  25 percent of the
     impervious area was assumed to have no detention
     storage.  The equivalent depression storage for
     District A was computed as 0.155 inches, based
     upon 47 percent of the area being impervious.

3.   The infiltration capacity curve shown in Figure 3
     was used in SWMM to account for the infiltration
     loss.  A maximum rate of infiltration of 3.0
     in/hr, a minimum of 0.28 in/hr, and a decay rate
     of 0.00138/sec were used.  The antecedent condi-
     tions for rainfall events were such that the
     specified infiltration curve applied.  The corre-
     sponding assumption for STORM is that the depres-
     sion storage capacity of 0.155 inches is available
     prior to the beginning of the rainfall event.

4.   A typical rainfall event is assumed to have a
     three-hour duration and an intermediate pattern
     similar to the 5-year design storm with hourly
     interval as shown in Figure 3.  In fact, use of
     a 2-minute hyetograph or of a 1-hour hyetograph
     results in little difference in total surface
     runoff volume from a 3-hour rainfall event.  The
     5-year storm has average 1-hour, 2-hour, and 3-
     hour rainfall intensities of 1.6, 1.05 and 0.81
     inches per hour, respectively.

5.   The runoff coefficient, C2, for the impervious
     area was set equal to 1.0, since there is no
     infiltration loss from an impervious area and the
     depression storage is accounted for separately.

   [FIGURE 3.  5-Year Storm Hyetograph and Infiltration
    Curve (time in minutes)]

     The runoff coefficient for pervious area, C1, is
calibrated so that the 3-hour storm runoff volume
computed with the adapted SWMM program is the same as
that computed with the STORM program.  The calibrated
C1 values are shown in Figure 4 as a function of 3-
hour rainfalls.  As anticipated, C1 increases with an
increase in the amount of rainfall (or average rain-
fall intensity) over the specified duration.  For the
                                                        146

-------
5-year design storm used in the study, with a total
rainfall of 2.43 inches or an average intensity of
0.81 inches per hour, a C1 value of 0.55 would be
appropriate.  For other rainfall amounts, such as 0.6,
1.38, and 4.86 inches (or intensities of 0.2, 0.46 and
1.62 inches per hour respectively), the appropriate C1
values are 0.25, 0.3 and 0.78.  Rainfalls of 0.6 and
1.38 inches correspond respectively to storm return
intervals of 1.3 months and 1 year, based on the
analysis of hourly rainfall data recorded at Newark
International Airport from 1963 to 1974.  85 percent
of the rainfall events during that period produced
less than 0.6 inches.
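
     The calibration step itself can be sketched as
follows.  The code below (Python, our own illustration)
assumes a simplified STORM-type runoff relation,
R = [C1(1 - I) + C2*I](P - d), with the composite
depression storage d = 0.155 inches, impervious fraction
I = 0.47, and C2 = 1.0 given above, and solves for the
C1 that reproduces a 3-hour runoff volume taken from the
adapted SWMM run; the target value shown is only a
placeholder.

    def storm_runoff(P, c1, c2=1.0, imperv=0.47, depression=0.155):
        """Runoff depth (inches) from a simplified STORM-type relation."""
        composite_c = c1 * (1.0 - imperv) + c2 * imperv
        return composite_c * max(P - depression, 0.0)

    def calibrate_c1(P, target_runoff, lo=0.0, hi=1.0, tol=1e-4):
        """Bisect for the pervious-area coefficient C1 that matches the
        3-hour runoff volume computed by the adapted SWMM RUNOFF Block."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if storm_runoff(P, mid) < target_runoff:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Placeholder target: SWMM-computed runoff for the 2.43-in., 3-hour storm.
    print("calibrated C1 = %.2f" % calibrate_c1(P=2.43, target_runoff=1.75))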
   [FIGURE 4.  STORM Runoff Coefficient Vs. Rainfall.
    Calibrated C1 plotted against 3-hr. rainfall
    (inches); X = data point; runoff coefficient for
    impervious area = 1.0]
     Considering that the runoff coefficient, C1,
increases with the amount of rainfall and STORM assumes
a constant runoff coefficient for the entire time span
of records to be simulated regardless of the rainfall
volume or intensity, a C1 value of 0.25 was proposed
for the simulation of long-term rainfall records for
overflow pollutional evaluation.  Reducing the C1 value
to 0.15 results in a nine percent reduction in the mass
volume of overland flow from District A using 12-year
data.  The amount of pollutant washout from streets is
independent of C1.  It therefore is apparent that the
selected C1 value of 0.25 should provide sufficiently
consistent results from the STORM program to permit
valid engineering evaluation.

   Generation of 5-Year Design Storm Runoff Hydrograph

     As mentioned earlier,  the City has 25 drainage
 districts with  District  A  the largest in area.
Citywide planning requires development of a runoff
 hydrograph and  pollutograph from  each drainage basin
 for storms of interest.  These runoff hydrographs  and
 pollutographs are required for cost effective sizing
 of intercepting sewers,  storage and treatment facili-
 ties.  Upstream collection sewers in each drainage
 district are adequately  sized so  that street and/or
 basement flooding would  be prevented for a design
 storm with a 5-year return interval.

     A 5-year storm runoff  hydrograph for all drainage
districts could be obtained by making a detailed sewer
 layout and by segmenting the  catchment and preparing
 land use data in each  district.   The amount of  work
 involved is usually more than required or justified
 for master planning.   An alternative is to make a
 detailed study  in one  district for projection to other
 drainage districts.

     Storm runoff and  sewer routing in District A  were
analyzed using SWMM.  Overland  and  gutter  flows were
analyzed with the SWMM RUNOFF Block and  trunk sewer
flow with or without sanitary wastes with  the SWMM
TRANSPORT Block.  Catchment and  land use data were
prepared to the necessary detail for accuracy.
Although there are existing combined sewers  in District
A, they are totally inadequate  in size.  The new sewer
system was designed to convey the total  5-year storm
runoff.  Existing sewer data, however, were  used in
preparing sewer layout, slope,  and  other pertinent
sewer information.

     The two primary factors governing the shape and
rate of the routed hydrograph for a given  drainage
basin are area and percent of imperviousness.   The
area affects mainly the extent  of flow attenuation and
the peaking time of a hydrograph.   The land  use,  and
consequently percent of imperviousness,  affects mainly
the runoff volume.  Normalized  hydrographs,  based on
information developed for District  A, were used to
determine hydrographs for other  drainage districts.

     Comparison of the normalized hydrographs  within
the range of drainage areas to be analyzed showed the
effect on the hydrograph shape  of percent  of imper-
viousness or equivalently, the  land uses to  be insig-
nificant.  To represent the variation of hydrograph
shape with drainage area, five  normalized  hydrographs
were used.  Two of the normalized hydrographs  are shown
in Figure 5, one for a drainage  area greater than 500
acres and one for a drainage area less than  150 acres.
Figure 5 also shows the larger  drainage  area to have a
hydrograph with greater spread  and  delayed peaking
time.
   [FIGURE 5.  Normalized Hydrographs (one curve for a
    drainage area greater than 500 acres and one for a
    drainage area less than 150 acres)]
     The  5-year  storm runoff volume  in  drainage
districts other  than  District A was  obtained with the
STORM program, using  the  calibrated  runoff  coefficient
0.55 for  pervious  area and  1.0  for impervious  area and
the available  land use data.  Dividing  the  runoff
volume by the  integrated  area enclosed  by the  appro-
priate normalized  hydrograph permitted  estimation of
the outflow hydrograph.
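
     The scaling step can be sketched as follows (Python;
the normalized ordinates below are arbitrary illustrative
numbers, not the Figure 5 curves).  Dividing the district
runoff volume by the area enclosed by the normalized
hydrograph gives the factor that converts the normalized
ordinates into an outflow hydrograph of matching volume.

    def scale_hydrograph(normalized_q, dt, runoff_volume):
        """Scale a normalized hydrograph so the volume it encloses equals
        the district runoff volume (units left schematic)."""
        enclosed_area = sum(normalized_q) * dt      # rectangular rule
        factor = runoff_volume / enclosed_area
        return [factor * q for q in normalized_q]

    # Illustrative normalized ordinates at a 10-minute interval.
    normalized = [0.0, 0.2, 0.6, 1.0, 0.8, 0.5, 0.3, 0.1, 0.0]
    outflow = scale_hydrograph(normalized, dt=10.0, runoff_volume=350.0)
    print("peak outflow ordinate: %.1f" % max(outflow))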

     Figure 6  shows three computed outflow  hydrographs
expressed in cubic feet per second per acre.  It
illustrates the  effect of drainage area combined with
the percent of imperviousness on  the shape  and peak
runoff rate per  acre.

      Planning Interceptors For Pollution Control

     Conveyance  of 5-year storm runoff  by interceptors
to storage for later  treatment  would not only  be
prohibitively  costly  but  also is  not required  to
prevent pollution.  Pollutional  effects  from a  storm
occurring, on  the  average,  once in  five years, would
not cause as much  damage  as a storm  occurring  monthly.
                                                        147

-------
   [FIGURE 6.  Computed Outflow Hydrographs (cfs per
    acre vs. time since start of storm, minutes).
    Curve 1 - 655 acres, 47% imperviousness; Curve 2 -
    229 acres, 57% imperviousness; Curve 3 - 122 acres,
    73% imperviousness]
To determine the quantity of runoff to be stored and
treated, the characteristics of storm runoff and
combined sewage quantity and quality were investigated.

     SWMM was used to obtain the storm runoff and
combined sewage quantity and quality from District A.
Average conditions of four antecedent dry days, a
seven-day street sweeping interval and 75 percent
sweeping efficiency were assumed.  Quantity and
quality of dry weather flow used are in conformance
                                   with the EPA Study (10).  In computing suspended solids
from street dust and dirt washout, STORM default
values were used.

     In addition to  the 5-year storm, runoff from a  1-
year and a 1.3-month storm was analyzed.  The rainfall
characteristics of these two storms were obtained
                                   using a partial duration series analysis  of  hourly
                                    data for a 12-year period.  They were also assumed to
                                   have the same pattern as the 5-year storm.   Figure 7
                                   shows the 5-year storm hydrograph and  pollutographs
                                   from District A.  Runoff from 1-year and  1.3-month
                                   storms has characteristics similar to  the 5-year  storm,
                                   with the difference basically in the magnitude of flow
                                   and pollutant loading.

                                        Curve A of Figure 7 is the outflow hydrograph
                                    which maintains a relatively low value until one hour
                                   after the rainfall starts.  Curves B and  C respect-
                                   ively show the suspended solids (SS) concentration
                                   (mg/1) and rate (pounds per minute) of combined sewage
                                   outflow.  The SS concentration of storm runoff without
                                   sanitary wastes is shown in Curve D.

                                        Curve C illustrates the existence of two flushes
                                   in a combined sewer.  The first flush ends about 50
                                   minutes from the start of the storm when  the flow rate
                                   is computed at 207 cfs and the concentration of sus-
                                   pended solids is 25 mg/1.  This flush is mainly
                                   attributed to the deposit of solids in combined sewers
                                   from sanitary wastes  during dry days.   The second
                                   flush ends about 40 minutes later when the flow rate
                                   is estimated at 850 cfs.  This flush represents
                                   mainly the street pollutant washout.  To  contain the
                                   first flush, a storage volume equivalent  to 0.063
                                   inches of rainfall over the entire area of District A
                                    is required.  For containment of the second flush,
                                   however, 0.906 inches of storage would be necessary.
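
     For reference, a storage depth expressed in inches
over the drainage area converts directly to a volume
using the factor of about 27,154 gallons per acre-inch;
the sketch below (Python, our own arithmetic) takes
District A as roughly 650 acres (279 subcatchments
averaging 2.3 acres), an approximation on our part.

    GAL_PER_ACRE_INCH = 27154.0   # 1 acre-inch = 3,630 cu ft = ~27,154 gal

    def storage_million_gallons(depth_in, area_acres=650.0):
        """Convert a storage depth in inches over the area to million gallons."""
        return depth_in * area_acres * GAL_PER_ACRE_INCH / 1.0e6

    for depth in (0.063, 0.906):
        print("%.3f in  ->  %.1f million gallons"
              % (depth, storage_million_gallons(depth)))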

                                        Curve B, which sets forth the concentration of
                                   the pollutant discharge, permits drawing significant
                                   conclusions.  There is only one peak polluting dis-
                                   charge which ends about 56 minutes from the beginning
                                   of rainfall.  The peak polluting discharge is defined
                                   as one containing a SS concentration of more than 20
                                   mg/1.   The flow rate  is computed as 249 cfs at that
                                   time.   The storage volume required to  contain this
                                   first flush is 0.098  inches over the entire drainage
   [FIGURE 7.  5-Year Storm Hydrograph and Pollutographs
    from District A (time since start of storm, minutes)]
                                                       148

-------
basin.  Hence, because of the low  concentration of
pollutants found in the second flush  shown in Curve C,
its containment does not appear justified.

     For containment of the first  flush  (as previously
defined), from the 1-year storm, the  magnitude of
intercepted flows from District A  was computed as 549
cfs and the required storage equal to 0.144 inches.
If the criterion for the first flush limit is increased
to 22 mg/1 of SS, the design requirement reduces to
153 cfs and 0.055 inches.  For the 1.3-month storm,
the computed flows are 213 cfs and a storage of 0.103
inches for the defined first flush limitation.

     Dividing regulated flows of 35,  153,  207, 249,
and 549 cfs by the unregulated 5-year storm peak
outflow of 1406 cfs, the ratios of regulated peak out-
flow  to unregulated 5-year storm peak outflow are
0.0249, 0.1088, 0.1472, 0.1771, and 0.3905 respec-
tively.  Applying these ratios to  the 5-year storm
runoff hydrographs for other drainage districts, the
inflow hydrographs to interceptors were  obtained for
various degrees of runoff  control.

      Runoff from the City's twenty-five  drainage
districts was assumed to drain into interceptors at 14
 inlet locations.  The SWMM Transport  Block was used
 for  sizing of interceptors for conveyance  of regulated
 flow to  a storage basin near  the  treatment plant.

      STORM was also used for Drainage District A to
 analyze  the 12-year  (1963-1974) hourly precipitation
 data to  obtain annual statistics  of overflow events
 and  pollutional loadings for various  amounts of flow
 intercepted for treatment.

     Figure 8 shows, at various ratios of interceptor
capacity, (1) the combined sewage SS concentration
discharged to receiving waters, (2) the annual number of
overflow events from District A, and (3) the cost of
pumping, storage, and interceptor facilities.  Data for
storms with a return frequency of 5 years, 1 year and
1.3 months are shown.  Other
costs do not vary with the ratio of interceptor ca-
pacity to peak design storm flow.

   [FIGURE 8.  Cost-Effective Pollution Control
    Considerations.  Overflow SS concentration (SWMM),
    annual overflow events (STORM), and facility cost,
    vs. Q1/Q2, where Q1 = regulated 5-yr. storm peak
    outflow and Q2 = unregulated 5-yr. storm peak
    outflow]
     For interceptor capacity  to peak flow ratio of
0.0249, inadequate control of  pollution would  be
experienced.

     At a ratio of 0.1088, the cost  is estimated at
$39.4 million, with discharge  SS concentration of 40,
22, and 30 mg/1 respectively for the 5-year, 1-year
and 1.3-month storms.  The number  of annual overflow
events extending one hour or more  is 6.6.   These
events would discharge a total of  2702 Ibs.  of SS and
620 Ibs. of BOD.  An increase  of the ratio to  0.1472
increases costs by 15 percent  to $45.5 million,  but
reduces the overflow SS concentrations to  25,  21.4 and
26.7 mg/1 for the three storms respectively and  the
number of annual overflow events of  one hour or  more
duration to 3.8.  These would  contain a total  of 1150
Ibs. of SS and 260 Ibs. of BOD.  However,  short  duration
overflows would still occur with the 1.3-month storm.
Further increase of the ratio to 0.1771 would increase
cost by less than 5 percent to $47.6 million but would
eliminate overflow from 1.3-month  storm.   The  number
of annual overflow events of one hour or more  would be
2.3.  Further increase in the  ratio  and its  capital
cost increment would result in insignificant return in
pollution control.  Conveyance of  uncontrolled 5-year
storm runoff would cost as much as $121 million.

     Based on the above discussions,  the range of cost-
effective interceptor flows for pollution  control
would be from 10 to 18 percent of  the peak 5-year
storm runoff with the planned  storage and  treatment
capacity available in Elizabeth.

                Conclusions

     A joint use of the SWMM and STORM models  was
demonstrated to provide a useful tool for  planning
sewer systems for cost-effective flood control and
pollution abatement.

     The study shows that intercepting flows from 10
to 18 percent of the peak 5-year design storm  runoff
would be within the range of being cost-effective and
would adequately intercept the most  significant  part
of runoff pollutants.

                   Acknowledgements

     The study described in this paper was undertaken
for the City of Elizabeth, New Jersey and  financed in
part with Federal funds from the Environmental Protec-
tion Agency under Demonstration Grant No.  S-802971.

     The writers wish to acknowledge the valuable
suggestions during the course  of the study by  Messrs.
Richard Field and Anthony N. Tafuri,  Chief and Project
Officer respectively, Storm and Combined Sewer Section,
Advanced Waste Treatment Research Laboratory,  U.S.
Environmental Protection Agency.  Dr. Brendan  M.
Harley, Dr. Guillermo J. Vicens and  Mr. Richard  L.
Laramie of Resource Analysis,  Inc.,  Cambridge, Mass-
achusetts and Mr. Gerald G. Gardner  of Clinton Bogert
Associates have provided considerable input during the
course of the study.

                       References

1.   ASCE Urban Water Resources Research Program Tech.
     Memo. No. IHP1, "Urban Mathematical Modeling and
     Catchment Research in the U.S.A.", 345 East 47th
     Street, N.Y., N.Y.  10017, June, 1975.

2.   Huber, Wayne C, "Modeling for Storm Water Strate-
     gies", APWA Reporter, May 1975.

3.   Torno, Harry C., "Storm Water Management  Models"
     Conference Proceedings, Urban Runoff  Quantity and
                                                       149

-------
     Quality, held at Franklin Pierce College, Rindge,
     New Hampshire, August 11-16, 1974.

4.   Metcalf and Eddy, Inc., University of Florida and
     Water Resources Engineers, Inc. "Storm Water
     Management Model" Four Volumes, Environmental
     Protection Agency, Water Quality Office, Report
     No. 11024DOC07/71 to 11024DOC10/71 ,  1971.

5.   Hydrologic Engineering Center,  Corps of Engineers,
     "Urban Storm Water Runoff: STORM",  Generalized
     Computer Program 723-S8-L2520,  October 1974.

6.   Water Resources Engineers, the Hydrologic Engi-
     neering Center and Dept. of Public Works, City of
     San Francisco, "A Model for Evaluating Runoff-
     Quality in Metropolitan Master Planning.", ASCE
     Urban Water Resources Research program, Tech.
     Memo No. 23, ASCE, 345 East 47th Street, N.Y.,
     N.Y. 10017, April 1974.

7.   Environmental Protection Agency and University of
     Massachusetts, "Application of Stormwater Manage-
     ment Models-1976"; A short course held at Pacific
     Grove, California, January 5-9, 1976.

8.   Jewell, T.K., P.A. Mangarella,  and F.A. DiGiano,
     "Application and Testing of the EPA Stormwater
     Management Model to Greenfield, Massachusetts",
     Proceedings, National Symposium on Urban Rainfall
     and Runoff and Sediment Control, Lexington,
     Kentucky, July 26-31, 1974.

9.   Huber, Wayne C., "Differences Between Old and New
     Runoff Quality Models", Memo to N682 File, Grant
     R802411, May 28, 1974.

10.  Environmental Protection Agency, "Water Quality
     Studies", Water Program Operations Training
     Program, PB-237 586, May 1974.
                                                      150

-------
                                         SIMULATION OF AGRICULTURAL RUNOFF
                Anthony S. Donigian, Jr.
             Hydrocomp, Inc.  Palo Alto, CA
                   Norman  H.  Crawford
               Hydrocomp,  Inc.   Palo Alto,  CA
                        Abstract

The Agricultural Runoff Management (ARM) Model described
in this paper simulates runoff, snow accumulation and
melt, sediment loss, pesticide-soil interactions, and
soil nutrient transformations on small agricultural
watersheds.  The results of Model testing for simulation
of runoff, sediment, and pesticide loss are presented to
demonstrate possible uses of the ARM Model as a tool for
evaluating the water quality impact of agricultural
practices.

                      Introduction

The development of models to simulate the water  quality
impact   of  nonpoint  source  pollutants  is  receiving
considerable attention by the engineering and scientific
community.  One of the major reasons for  this  interest
is  the  passage  of the Federal Water Pollution Control
Act  Amendments  of  1972,  specifically  requiring  the
evaluation  of  the  contribution  of  nonpoint   source
pollution   to   overall   water  quality.   This  paper
describes a modeling effort whose goal is the simulation
of  water quality resulting from agricultural  lands.  The
beginnings of this research modeling  effort  date   from
1971  when  the  U.S.  Environmental  Protection Agency,
through the  direction  of  the  Environmental  Research
Laboratory  in  Athens,  Georgia (ERL-Athens), sponsored
the development and  initial  testing  of  the  Pesticide
Transport  and Runoff (PTR) Model (1).  The Agricultural
Runoff Management (ARM) Model discussed in this paper is
the combined  result  of  further  model  testing   and
refinement,  algorithm  modifications,  and inclusion of
additional capabilities not present in  the   PTR  Model.
The   ultimate   goal   of   the  continuing  ARM  Model
development effort is the establishment of a  methodology
and a  tool  for  the  evaluation  of  the  efficacy of
management  practices  to  control the loss of sediment,
pesticides, nutrients,  and  other  nonpoint  pollutants
from agricultural lands.

                  Modeling Philosophy

The guiding philosophy  of the  modeling  effort  is  to
represent,  in mathematical form, the physical processes
occurring in the transport of nonpoint pollutants.   The
hydrologic and water quality related processes occurring
on   the  land  surface  (and  in  the  soil profile) are
continuous in nature; hence,  continuous  simulation is
critical   to   the  accurate  representation  of  these
physical processes.  Although nonpoint source  pollution
from  the   land    surface   takes  place  only  during
runoff-producing events, the status of the soil moisture
and the  pollutant  prior  to  the  event  is  a  major
determinant of the amount of runoff and pollutants   that
can reach  the  stream  during the event.  In turn, the
soil moisture and pollutant status prior  to the event is
the result  of  processes  that  occur  between  events.
Cultivation   and    tillage   practices,  pesticide  and
fertilizer  applications,  pesticide   degradation   and
nutrient transformations, all critically  affect the  mass
of   pollutant  that  can  enter  the aquatic  environment
during a runoff-producing event.  Models  that  simulate
only    single   events   cannot   accurately   evaluate
agricultural    land   management    practices    since
between-event  processes  are  ignored.   Although   all
between-event processes cannot be precisely described at
the present  state  of technology, continuous simulation
provides a sound framework for their  approximation  and
for further research into their  quantification.
When  modeling  nonpoint  source  pollution,  the   above
stated  philosophy  is  joined  by  the   fact  that  the
transport mechanisms of such pollutants   are  universal.
Whether   the  pollutants  originate   from  pervious   or
impervious lands, from agricultural or urban  areas,   or
from  natural  or  developed  lands, the  major transport
modes of runoff and sediment loss are  operative.    (Wind
transport  may  be  significant  in  some areas, but  its
importance relative  to  runoff  and   sediment  loss   is
usually  small.) In this way, the simulation of nonpoint
source  pollution  is  analogous  to   a    three-layered
pyramid.   The  basic  foundation  of  the pyramid  is  the
hydrology of the watershed.  Without accurate simulation
of runoff, modeling nonpoint pollutants   is  practically
impossible.   Sediment loss simulation, the second layer
of the  pyramid,  follows  in  sequence   the  hydrologic
modeling.   Although  highly  complex  and  variable   in
nature,  sediment  modeling  provides  the other critical
transport mechanism.  The pinnacle or  final layer  of  the
pyramid is the interaction of  various  pollutants  with
sediment  loss  and  runoff,  resulting   in  the overall
transport simulation of nonpoint source pollutants.

     The Agricultural Runoff Management (ARM) Model

The  ARM  Model   simulates   runoff   (including   snow
accumulation   and   melt),  sediment,  pesticides,   and
nutrient contributions  to  stream  channels  from  both
surface  and  subsurface  sources.   No   channel routing
procedures are included.  Thus, the Model is  applicable
to   watersheds  that  are  small  enough  that  channel
processes and transformations can be assumed negligible.
Although the limiting area will vary with  climatic   and
topographic characteristics, watersheds greater than  one
to  two  square miles are approaching  the upper limit  of
applicability of the ARM Model.  Channel  processes  will
significantly   affect   the  water  quality  in  larger
watersheds.

Figure  1  demonstrates  the   general    structure    and
operation of the ARM Model.  The major components of  the
Model  individually  simulate  the  hydrologic  response
(LANDS) of the watershed,  sediment  production  (SEDT),
pesticide   adsorption/desorption   (ADSRB),   pesticide
degradation    (DEGRAD),   and  nutrient   transformations
(NUTRNT).  The  executive routine,  MAIN,  controls   the
overall execution of the program; calling subroutines  at
proper  intervals,  transferring   information   between
routines,  and performing the necessary input and  output
functions.
   [Figure 1.  ARM Model structure and operation: the MAIN
    executive routine handles input and output (e.g.,
    CHECKR, OUTMON, OUTYR) and calls the component
    subroutines, including the pesticide routines
    (PEST, ADSRB)]
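
     The calling structure described above can be pictured
with a schematic sketch (Python pseudostructure of our own;
the subroutine names follow Figure 1, the bodies are stubs,
and the calling intervals are only indicative of the 5- or
15-minute simulation interval discussed later).

    # Schematic of the ARM Model executive structure (illustrative only).

    def LANDS(state, met): pass    # hydrologic response (runoff, snow)
    def SEDT(state):       pass    # sediment production
    def ADSRB(state):      pass    # pesticide adsorption/desorption
    def DEGRAD(state):     pass    # pesticide degradation
    def NUTRNT(state):     pass    # nutrient transformations

    def MAIN(met_records, interval_min=15):
        """Executive routine: calls the component subroutines at the proper
        intervals, transfers information between them, and handles I/O."""
        state = {}                             # shared watershed state
        steps_per_day = int(24 * 60 / interval_min)
        for step, met in enumerate(met_records):
            LANDS(state, met)                  # every simulation interval
            SEDT(state)
            ADSRB(state)
            NUTRNT(state)
            if step % steps_per_day == 0:
                DEGRAD(state)                  # daily degradation factor
            # ... output routines (e.g., monthly and annual summaries) ...

    MAIN(met_records=[{}] * 96)                # one day of 15-minute records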
                                                         151

-------
In   order   to   simulate   vertical    movement    and
transformations  of pesticides and nutrients in the soil
profile,  specific   soil   zones   (and   depths)   are
established so that the total soil mass in each zone can
be specified.  Total soil mass is a necessary ingredient
in  the  pesticide  adsorption/desorption  reactions and
nutrient  transformations.   The  vertical  soil   zones
simulated  in  the ARM Model include the surface, upper,
lower, and groundwater zones.  The depths of the surface
and upper soil zones are specified by  the  Model  input
parameters,  and  are  generally  3-6  mm and 75-150 mm,
respectively.  The upper zone depth corresponds  to  the
depth  of  incorporation of soil-incorporated chemicals.
It also indicates the depth used to calculate  the  mass
of soil in the upper zone whether agricultural chemicals
are soil-incorporated or surface-applied.  The depths of
the  surface  and lower zones are  important because the
active surface  zone  is  crucial  to  the  washoff  and
degradation  of agricultural chemicals, while the extent
of the lower zone  determines  to  what  degree  soluble
pollutants  will contaminate the groundwater.  The lower
zone depth is  presently  specified  as  1.8  meters  (6
feet).   However,  the  zonal  depths will vary with the
geology  and  topography  of  the  watershed.    Further
evaluation of these zones is presently in progress.

The transport and vertical movement  of  pesticides  and
nutrients,  as  conceived in the ARM Model, is indicated
in Figure 2.  Pollutant contributions to the stream  can
occur  from  the  surface  zone, the upper zone, and the
groundwater zone.  Surface runoff is the major transport
mechanism  carrying   dissolved   chemicals,   pesticide
particles,  or  sediment  and  adsorbed  chemicals.  The
interflow component of runoff  can  transport  dissolved
pesticides  or  nutrients  occurring  in the upper zone.
Vertical chemical movement between the soil zones is the
result of infiltrating and percolating water.  From  the
surface,    upper,   and   lower   zones,   uptake   and
transformation   of   nutrients   and   degradation   of
pesticides is allowed.  On the  watersheds  tested,  the
groundwater  zone  has  been  considered a sink for deep
percolating  chemicals  since   the   groundwater   flow
contribution  has  been  negligible.  However, on larger
watersheds this contribution could be significant.

  Figure 2 Pesticide and nutrient movement in the ARM model
Model Algorithms

The algorithms,  or  equations,  used  to  describe  the
processes simulated by the ARM Model are fully discussed
in  the  final project report (2).  A brief presentation
of the general methodology is included here.

Hydrology

Hydrologic simulation by the LANDS subprogram is derived
from modifications of the Stanford Watershed  Model  (3)
and the Hydrocomp Simulation Program (4).  Through a set
of  mathematical functions, LANDS simulates continuously
the major components of the hydrologic cycle,  including
interception,  surface  runoff,  interflow,  infiltration,
and percolation to  groundwater.    In   addition,   energy
balance calculations are performed to simulate the
processes   of  snow  accumulation   and melt.    Various
publications have previously  described the   hydrologic
(1, 3, 4, 5) and snowmelt algorithms  (2, 4, 5).

Sediment

The algorithms for simulating  soil   loss,  or erosion,
were   initially  derived  from   research   by  Negev  at
Stanford  University  (6)  and   have  been  subsequently
influenced by the work  of  Meyer  and  Wischmeier  (7),
Onstad and Foster (8), and Fleming  and  Fahmy  (9).

Although Negev simulated  the  entire   spectrum   of  the
erosion  process,  only  sheet   and  rill   erosion were
included  in the ARM Model.  The  two  component processes
of sheet and rill erosion pertain to  (1)  detachment  of
soil  fines  (generally  the  silt  and  clay fraction) by
raindrop impact, and (2) pick-up and transport of
soil  fines  by  overland  flow.    These  processes  are
represented as follows:
Soil fines detachment:
     RER(t) = (1 - COVER(T))*KRER*PR(t)^JRER                (1)

Soil fines transport:
     SER(t) = KSER*OVQ(t)^JSER, for SER(t) <= SRER(t)       (2)
     SER(t) = SRER(t), for SER(t) > SRER(t)                 (3)
     ERSN(t) = SER(t)*F                                     (4)

where
 RER(t)   = soil fines detached during time
            interval t, tonnes/ha
 COVER(T) = fraction of vegetal cover as a function
            of time, T, within the growing season
 KRER     = detachment coefficient for soil properties
 PR(t)    = precipitation during the time interval, mm
 JRER     = exponent for soil detachment
 SER(t)   = fines transport by overland flow, tonnes/ha
 JSER     = exponent for fines transport by overland flow
 KSER     = coefficient of transport
 SRER(t)  = reservoir of soil fines at the beginning of
            time interval t, tonnes/ha
 OVQ(t)   = overland flow occurring during the time
            interval t, mm
 F        = fraction of overland flow reaching the stream
            during the time interval t
 ERSN(t)  = sediment loss to the stream during the time
            interval t, tonnes/ha
In the operation  of  the  algorithms,  the  soil  fines
detachment  (RER)  during  each  time  (5 or 15 minutes)
interval is calculated by Equation 1 and  added  to  the
total  fines  storage  or  reservoir  (SRER).  Next, the
total transport capacity of the overland flow  (SER)  is
determined  by  Equation  2.   Sediment is assumed to be
transported  at  capacity  if   sufficient   fines   are
available, otherwise the amount of fines in transport is
limited  by  the  fines storage, SRER (Equation 3).  The
sediment loss to the waterway in the  time  interval  is
calculated  in  Equation  4  by  the  fraction  of total
overland flow that reaches the stream.  A  land  surface
flow routing technique (1, 4, 5) determines the overland
flow  contribution  to the stream in each time interval.
After the fines storage (SRER) is reduced by the  actual
sediment  loss  to the stream (ERSN), the algorithms are
ready for simulation of the next time  interval.   Thus,
the  sediment  that doesn't reach the stream is returned
to the fines storage and is available for  transport  in
the  next  time  interval.   The methodology attempts to
represent the major  processes  of  importance  in  soil
erosion  so that the impact of land management practices
(e.g.  tillage,  terracing,  mulching,  etc.)   can   be
specified by their effects on the sediment parameters.
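
     A compact rendering of Equations 1 through 4 and the
bookkeeping just described might look like the following
(Python; the parameter values in the example call are
arbitrary).

    def sediment_step(srer, pr, ovq, cover, KRER, JRER, KSER, JSER, F):
        """One time interval of the sheet-and-rill erosion algorithm.

        srer - fines reservoir at the start of the interval, tonnes/ha
        pr   - precipitation in the interval, mm
        ovq  - overland flow in the interval, mm
        Returns (ersn, srer): sediment loss to the stream and the
        updated fines reservoir.
        """
        rer = (1.0 - cover) * KRER * pr ** JRER   # Eq. 1: detachment
        srer += rer                               # add to fines reservoir
        ser = KSER * ovq ** JSER                  # Eq. 2: transport capacity
        ser = min(ser, srer)                      # Eq. 3: limited by storage
        ersn = ser * F                            # Eq. 4: loss to the stream
        srer -= ersn                              # unreached fines stay stored
        return ersn, srer

    loss, reservoir = sediment_step(srer=0.5, pr=3.0, ovq=1.2, cover=0.4,
                                    KRER=0.1, JRER=2.0, KSER=0.5, JSER=1.6,
                                    F=0.7)
    print("sediment loss this interval: %.3f tonnes/ha" % loss)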
                                                        152

-------
Pesticides

The  process  of  pesticide  adsorption/desorption   onto
sediment particles is a major determinant  of the  amount
of   pesticide  loss  that  will  occur.    This   process
establishes the division of available  pesticide   between
the  water  and  sediment phases, and  thus  specifies the
amounts of pesticide  transported   in   solution   and  on
sediment.   The  algorithm  employed   to   simulate   this
process in the ARM Model is described  as follows:
               X/M = K*C^(1/N) + F/M                        (5)

where
 X/M = pesticide adsorbed per unit soil, ug/gm
 F/M = pesticide adsorbed in a permanently fixed
       state per unit soil.  F/M is less than or
       equal to FP/M, where FP/M is the permanent
       fixed capacity of the soil for the pesticide,
       in ug/gm.  This can be approximated by the
       cation or anion exchange capacity for that
       particular soil type.
 C   = equilibrium pesticide concentration in
       solution, mg/l
 N   = exponent
 K   = coefficient
 Basically this algorithm  is comprised   of  an   empirical
 term,  F/M,  plus  the standard  Freundlich  single-valued
 (SV)  adsorption/desorption   isotherm   (solid   line    in
 Figure  3).   The  empirical  term,  F/M,  accounts   for
 pesticides   that   are   permanently   adsorbed  to   soil
 particles and will not desorb  under   repeated  washing.
 As   indicated  in Figure  3, the  available pesticide  must
 exceed the capacity of the soil   to  permanently  adsorb
 pesticides  before the adsorption/desorption equilibrium
 is  operative.  Thus the pesticide concentration on   soil
 particles  must  exceed FP/M  before the equilibrium  soil
 and solution pesticide concentrations  are  evaluated  by
 the  Freundlich  curve.   An  in-depth  description   and
 discussion of the underlying  assumptions  is presented in
 the PTR Model report  (1).
       " T
       1 _FP
                   SINGLE-VALUED (SV)
               	NON-SINGLE-VALDE(HSV)
                    1-ADSORPTION
                    2-DESORPTION
                    3-NEVI ABSORPTION
                    (-DEWOESORP1ION
                PESTICIDE SOLUTION CONC.(C) MC/l
       Figure 3 Adsorption/desorption algorithms in the ARM model

 The  ARM   Model    includes    an    option    to   use   a
 non-single-valued   (NSV) adsorption/ desorption  function
 because research has  indicated that  the   assumption  of
 single-valued  adsorption/desorption  (Figure   3)  is not
 valid  for many pesticides  (10, 11,  12).   In these  cases,
 the  adsorption  and  desorption  processes would  follow
 different curves, as  indicated by the   dashed   lines  in
 Figure  3.   The  NSV  algorithm  utilizes   the  above SV
 algorithm (solid line) as  a base  from which   different
 desorption  curves  are  calculated.    The   form  of the
 desorption curve is identical  to Equation  5 except  that
 K  and  N values are  replaced  by K'  and N1  respectively.
 The prime denotes   the  desorption   process..  The  user
 specifies  the  N'  value as an input parameter (NP), and
 the ARM  Model  calculates  K1  as   a   function   of  the
 adsorption/desorption  parameters   (K, N,   N1)   and the
 pesticide solution  concentration (12).  The calculation
 is   performed   whenever   the  desorption process  is
 initiated.   The  end  result   is   desorption    curves
 emanating  from the base SV adsorption curve as  shown in
Figure  3.   Thus  the   NSV   function   simulates   higher
pesticide  concentrations  on   sediment  than    the   SV
function  in  order  to  represent  the  irreversibility of
the adsorption process.
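
     A sketch of the two isotherm options is given below
(Python, written from the description above).  The way K'
is computed here, by forcing the desorption curve through
the point at which desorption begins, is only one plausible
reading of the text; reference (12) gives the exact
formulation.

    def sv_adsorbed(c, K, N, f_m=0.0):
        """Single-valued (SV) isotherm, Equation 5:  X/M = K*C**(1/N) + F/M."""
        return K * c ** (1.0 / N) + f_m

    def nsv_desorption_K(c0, K, N, N_prime, f_m=0.0):
        """Assumed: choose K' so the desorption curve passes through the
        point on the SV curve where desorption is initiated (at conc. c0)."""
        x_m0 = sv_adsorbed(c0, K, N, f_m)
        return (x_m0 - f_m) / c0 ** (1.0 / N_prime)

    def nsv_desorbed(c, K_prime, N_prime, f_m=0.0):
        """Non-single-valued (NSV) desorption curve, same form as Eq. 5."""
        return K_prime * c ** (1.0 / N_prime) + f_m

    # Example: desorption beginning at a solution concentration of 2.0 mg/l.
    K, N, N_prime, f_m = 10.0, 2.0, 4.0, 1.5
    K_prime = nsv_desorption_K(2.0, K, N, N_prime, f_m)
    print("X/M on desorption at C = 1.0 mg/l: %.2f ug/gm"
          % nsv_desorbed(1.0, K_prime, N_prime, f_m))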

Attenuation of the  applied pesticide,  through   volatil-
ization and degradation  processes,  is  also   critical   to
the  accurate simulation of pesticide  transport  from  the
land surface.  These processes  are  not  well  understood
and  are  topics  of continuing research.  The ARM Model
includes a simple daily  first-order degradation   factor
(user  input) to approximate the reduction in the  amount
of pesticide that can be transported anytime during  the
growing  season.   More  sophisticated  degradation  models
are presently being investigated for addition to the  ARM
Model.
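
     One reading of the daily factor is a simple first-order
decay of the pesticide remaining on the land surface, as in
the sketch below (Python; the rate shown is arbitrary).

    def degrade(mass, daily_factor):
        """Apply a user-supplied daily first-order degradation factor."""
        return mass * (1.0 - daily_factor)

    mass = 1.1                    # kg/ha applied (e.g., the paraquat rate)
    for _ in range(30):           # thirty days after application
        mass = degrade(mass, daily_factor=0.03)
    print("mass remaining after 30 days: %.2f kg/ha" % mass)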

Nutrients
     Nutrient   simulation   in   the  ARM  Model   attempts   to
     represent   the   reactions   of  nitrogen  and  phosphorus
     compounds   in the  soil  profile  as  a basis  for predicting
     the   nutrient   content of  agricultural   runoff.     The
     nutrient model  assumes first-order reaction rates  and is
     derived from work by Mehran and Tanji (13), and Hagin
     and Amberger (14).  The processes simulated include
     immobilization, mineralization, nitrification/denitrifi-
     cation,  plant  uptake, and  adsorption/desorption.  The
     model  is presently being refined  and  tested  on   field
     data.   The final  project  report (2) includes a complete
     description of  the nutrient model  and discussions  of the
     component  processes.
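
     As an illustration of the first-order assumption (not
the ARM formulation itself, which the project report (2)
describes in full), the sketch below steps a single
transformation, ammonium to nitrate, forward in time.

    def first_order_step(nh4, no3, k_per_day, dt_days):
        """Advance one first-order transformation (e.g., nitrification):
        dNH4/dt = -k*NH4,  dNO3/dt = +k*NH4  (forward-Euler step)."""
        transformed = k_per_day * nh4 * dt_days
        return nh4 - transformed, no3 + transformed

    nh4, no3 = 50.0, 5.0          # illustrative soil-zone storages, kg/ha
    for _ in range(30):           # thirty one-day steps
        nh4, no3 = first_order_step(nh4, no3, k_per_day=0.05, dt_days=1.0)
    print("after 30 days: NH4 = %.1f, NO3 = %.1f kg/ha" % (nh4, no3))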

             ARM Model  Testing  and Simulation  Results

     The ARM Model development   effort   is  supported  by  an
     extensive  data  collection  and analysis program sponsored
     by  the  Environmental  Protection  Agency's  Environmental
     Research Laboratory   in Athens,  Georgia   (ERL-Athens).
     Test  watersheds located in Georgia and Michigan, ranging
     from  0.6 to 2.7 hectares,  have  been instrumented for the
     continuous  monitoring and  sampling  of    runoff   and
     sediment.   Collected  samples   are refrigerated on site
     and later  analyzed for pesticide and  nutrient  content.
     In  addition,   meteorologic  conditions are continuously
     monitored  and soil core samples are taken   and  analyzed
     immediately  following   application  and   periodically
     throughout  the  growing season.

     Model  testing for  runoff,  sediment loss,   and  pesticide
     loss   was  completed  on   one   year  of  data  (January
     1973-December 1973) from the P1 and P3 watersheds in
     Watkinsville, Georgia.  P1 (2.70 ha) is a natural
     watershed   while  P3   (1.26  ha) is a terraced watershed
     with   a  grass   waterway.    Both  watersheds    received
     identical   management  practices  during   1973:  minimum
     tillage was employed, soybeans were planted, and the
     herbicides paraquat (1,1'-dimethyl-4,4'-bipyridinium
     ion), diphenamid (N,N-dimethyl-2,2-diphenylacetamide),
     and trifluralin (α,α,α-trifluoro-2,6-dinitro-N,
     N-dipropyl-p-toluidine) were applied at 1.1, 3.4 and 1.1
     kg/ha,  respectively.    Pesticide   simulations    were
     performed  for paraquat and diphenamid.

     The monthly simulation results on the P1 watershed
     (Figures   4 and  5)   were  obtained from one continuous
     simulation  run  for 1973.   The  simulated  runoff  values
     (Figure  4) agree  quite  well  with recorded data except
     for the spring  period. The  hydrology  parameters  were
     calibrated  on   7.5  months   of  data  in  1972;  the
     calibration results   were   reported  in  the  PTR  Model
     report  (1).    Additional  trial runs have indicated that
     the hydrologic characteristics appear to vary on a
     seasonal basis.  During the dry summer- fall period, the
     watershed  is highly  responsive, producing short-duration
     sharp-peaked  hydrographs   from  the  thunderstorms that
                                                         153

-------
occur in the area.   In  the  wetter winter-spring  period,
the  watershed  response  is much more moderate with less
erratic hydrographs  extending over  a  longer  duration.
Since  most  pesticide  loss  occurs  during  the summer
months, the simulation  studies were concentrated on  the
critical summer period.
            [Figure 4.  1973 monthly rainfall, runoff and sediment loss
                  for the P1 watershed (recorded vs. simulated, by month)]
The monthly sediment  simulation  in  Figure  4  indicates
the impact of tillage operations.   Major storms occurred
in  May  and  June  immediately  following tillage of the
watersheds.  In fact, the recorded monthly sediment loss
in  May  and  June  was  estimated  due   to   equipment
malfunctions  resulting  from  the  high  sediment load.
Except for these two  months, the simulated and  recorded
sediment  loss are  reasonably  close.  Since the sediment
algorithms  were  modified   during   this   study,   the
simulation  shown   in  Figure  4  was   obtained  through
calibration of the  sediment  parameters.  More experience
with  the sediment  algorithms  on different watersheds is
needed to truly verify  the methodology.

The monthly pesticide simulation results  are  shown  in
Figure  5  for  paraquat  and diphenamid.  The simulation
values were  obtained  with  parameters  evaluated  from
laboratory  data  and  the   literature;  calibration  of
pesticide  parameters was minimized in order to evaluate
the applicability of  the  algorithms   using  parameters
from   the   literature.     The   agreement  between  the
simulated and recorded monthly values is fair.

              [Figure 5.  Monthly paraquat and diphenamid loss from the P1
                    watershed for the 1973 growing season (recorded vs. NSV
                    and SV simulations; pesticide analysis discontinued
                    after 9/9/73)]

The following points are indicated:
(1)  Since  paraquat  is   entirely    (and    essentially
     irreversibly)   adsorbed  onto  sediment  particles,
     pesticide   loss  closely  parallels  sediment loss.
     Both the recorded and simulated values   demonstrate
     this  behavior.   Thus  more accurate simulation of
     sediment    loss   would   improve    the    paraquat
     simulation.
(2)  Diphenamid  is  transported both in solution   and  on
     sediment  particles;  thus an initial comparison of
     the SV and  NSV adsorption/desorption algorithms was
     possible.   Although the  diphenamid  simulation  in
     Figure  5   agrees  well  with  the recorded values,
     results for various storms and other watersheds indicate
     that  further   investigation  is warranted.   The SV
     function performs better for some storms, while the
     NSV function performs better for others.
(3)  The  importance   of   attenuation   processes    is
     demonstrated   by   both  the paraquat and diphenamid
     data.  The  large  majority of pesticide  loss   occurs
     within  one to  two  months  following application
     (June 13, 1973 for the P1 watershed).  Thus the
     first    storm    events    immediately  following
     application  are the  critical  ones  for  pesticide
     transport from the land surface.


Numerous  storm   events  were  simulated  during    1973.
Figures  6  and   7   present the results for  the  storm of
June 21, 1973.    This   storm  occurred  one  week   after
planting  and  is   one  of  the  better simulated  storms
during the  summer   period.   The  original  report   (2)
includes  similar figures for various events to  indicate
the variability  of  the simulation  results.   The   storm
runoff and sediment loss for the June 21st storm is  well
simulated.  The  pesticide loss for both paraquat  (Figure
6) and diphenamid (Figure 7) is plotted in terms of  mass
removal,  i.e.   pesticide  mass  per  unit   time.    This
representation    demonstrates   the   close  association
between pesticide loss and the transport  mechanisms   of
runoff and sediment loss.  Although the SV function  more
closely  represents the  recorded  diphenamid   loss   in
Figure  7, as mentioned above the NSV function performed
better for other storms.  In essence, Figures  6   and 7
demonstrate  the type  of comparisons that  must be  made
for storm events when  analyzing the ability  of   a  model
to represent agricultural runoff.
          [Figure 6.  Runoff, sediment and paraquat loss from the P1
                watershed on June 21, 1973 (recorded vs. simulated)]
                                                         154

-------
          [Figure 7.  Diphenamid loss in water and on sediment from the
                P1 watershed on June 21, 1973 (recorded vs. NSV and SV
                simulations)]

                       Conclusions

The testing  of the  ARM  Model   has  indicated  that  the
hydrology  and sediment simulations  reasonably represent
the observed data while the   pesticide  simulations  can
show  considerable  deviation  from recorded values.  This
is especially true  for  pesticides  that  move  by  both
runoff  and   sediment  loss.    The  effects  of  tillage
operations   and management  practices need to be further
evaluated  for  hydrology   and   sediment   production.
Parameter  changes  as a result of agricultural practices
need to be quantified.  Although the results of sediment
simulation have been promising,  certain  deviations  in
the  results indicate a lack  of understanding of certain
aspects of the physical process.   Other processes in the
soil erosion mechanism, such  as  natural  compaction  of
the surface  following tillage  and the effect of rainfall
intensity    on  the  transport  capacity,  need  to  be
evaluated for possible inclusion in  the Model.  Although
the hydrology model  has  been   applied  to  hundreds  of
watersheds    in the  United   States,  the  accompanying
sediment model has  been applied to only a few.   If  the
ARM  Model   is  to   be  generally  applicable,  the most
immediate need is  to evaluate   the  sediment  simulation
capability  in varying climatic and edaphic regions.

For pesticide simulation, the  results  demonstrate  the
need  to further  investigate  the processes of pesticide
degradation  and pesticide-soil interactions.   Both  the
SV   and NSV  adsorption/desorption  functions  require
further research.   A non-equilibrium approach should  be
investigated   to    determine  its  applicability.   The
interactions  in  the  active   surface  zone  appear  to
control the  major  portion of pesticide  loss  especially
for  highly   sediment-adsorbed pesticides like paraquat.
The depth of the active surface zone and the  extent  of
pesticide  degradation  in that zone are critical to the
simulation of pesticide loss  for any storm  event.   The
need  for  testing  the  ARM Model in other regions also
pertains to  both the pesticide and  nutrient  functions.
The  processes  recommended   above  for further research
should be studied  and evaluated in many regions  of  the
country to   determine  the   impact  of soil and climatic
conditions.

The final version  of the ARM  Model will  be designed  for
use  by state  and  local  agencies across the country.
This work has demonstrated that simulation models can be
developed to represent the processes  important  to  the
quality of   agricultural runoff.  Moreover, continuous
simulation models  can be employed to develop probability
distributions for  sediment,  pesticide, and nutrient loss
as  a  basis for economic evaluation (15).  In this way,
models, like the ARM Model, can  provide  a  valuable   tool
for  planning  and  evaluation of  pesticide  regulations,
fertilizer   application,   and     other    agricultural
management practices.

                    Acknowledgments

The authors gratefully acknowledge  the financial  support
of the U.S.  Environmental Protection Agency,  Office  of
Research  and  Development.   Coordination and direction
was provided by the Environmental  Research Laboratory in
Athens, Georgia.

                       References

 1.  Crawford, N.H., and A.S.  Donigian, Jr.    Pesticide
     Transport  and Runoff Model for Agricultural Lands.
     Office   of   Research   and   Development,     U.S.
     Environmental  Protection  Agency,  Washington  D.C.
     EPA 660/2-74-013.  December 1973.   211  p.
 2.  Donigian, A.S., Jr., and N.H.  Crawford.   Modeling
     Pesticides  and  Nutrients  on  Agricultural Lands.
     Office   of   Research   and   Development.     U.S.
     Environmental Protection Agency.    September  1975.
     263 p.  (in press)
 3.  Crawford,  N.H.   and   R.K.    Linsley.     Digital
     Simulation  in Hydrology:  Stanford Watershed  Model
     IV.   Department  of  Civil   Engineering,  Stanford
     University.    Stanford,   California.     Technical
     Report No.  39.  July 1966.   210 p.
 4.  Hydrocomp   Simulation   Programming:     Operations
     Manual.  Hydrocomp Inc.  Palo Alto, California,  2nd
     ed.  1969.  p.1-1 to 1-27, p.  3-5  to 3-16.
 5.  Donigian, A.S., Jr., and N.H. Crawford.  Modeling
     Nonpoint Pollution from the Land Surface.  Office
     of Research and Development, U.S. Environmental
     Protection Agency.  February 1976.  (draft final
     report)
 6.  Negev, M.A.  Sediment Model on a Digital Computer.
     Department of Civil Engineering, Stanford
     University.  Stanford, California.  Technical
     Report No. 76.  March 1967.  109 p.
 7.  Meyer, L.D., and W.H. Wischmeier.  Mathematical
     Simulation of the Process of Soil Erosion by Water.
     Trans. Am. Soc. Agric. Eng.  12(6):754-758, 762,
     1969.
 8.  Onstad, C.A., and G.R. Foster.  Erosion Modeling
     on a Watershed.  Trans. Am. Soc. Agri. Eng.
     18(2):288-292, 1975.
 9.  Fleming, G., and M. Fahmy.  Some Mathematical
     Concepts for Simulating the Water and Sediment
     Systems of Natural Watershed Areas.  Department of
     Civil Engineering, Strathclyde University.  Glasgow,
     Scotland.  Report HO-73-26.  1973.
10.  Davidson, J.M., and J.R. McDougal.  Experimental
     and Predicted Movement of Three Herbicides in a
     Water-Saturated Soil.  J. Environ. Qual.
     2(4):428-433, October-December 1973.
11.  Van Genuchten, M.Th., J.M. Davidson, and P.J.
     Wierenga.  An Evaluation of Kinetic and Equilibrium
     Equations for the Prediction of Pesticide Movement
     through Porous Media.  Soil Sci. Soc. Amer. Proc.
     38:29-35, January-February 1974.
12.  Davidson, J.M., R.S. Mansell, and D.R. Baker.
     Herbicide Distributions within a Soil Profile and
     their Dependence Upon Adsorption-Desorption.  Soil
     Crop Sci. Soc. Florida Proc.  1973.  26 p.
13.  Mehran, M., and K.K. Tanji.  Computer Modeling of
     Nitrogen Transformations in Soils.  J. Environ.
     Qual.  3(4):391-395, 1974.
14.  Hagin, J., and A. Amberger.  Contribution of
     Fertilizers and Manures to the N- and P- Load of
     Waters.  A Computer Simulation.  Report Submitted
     to Deutsche Forschungs Gemeinschaft.  1974.  123 p.
15.  Donigian, A.S., Jr., and W.H. Waggy.  Simulation:
     A Tool for Water Resource Management.  Water Res.
     Bull.  10(2):229-244, April 1974.
                                                        155

-------
                                    MODELING THE EFFECT OF PESTICIDE LOADING

                                             ON RIVERINE ECOSYSTEMS
                     J. W. Falco
              Research Chemical Engineer
          US Environmental Protection Agency
           Environmental Research Laboratory
                 Athens, Georgia  30601
                     L. A. Mulkey
                Agricultural  Engineer
           US Environmental  Protection  Agency
            Environmental Research  Laboratory
                  Athens, Georgia   30601
                       Abstract

     A mathematical model for predicting the  fate  and
transport  of malathion in riverine ecosystems has been
developed.  The model  predicts  the  concentration  of
malathion  down  the  length  of  a  river  reach  as a
function of time and non-point source  loading.   Model
simulations predict that standing crops of various fish
species  and  other  organisms decrease with increasing
malathion concentration.  Mass die-offs were  predicted
at critical malathion loadings and concentrations.
                     Introduction
     Under   Section   208   of   Public   Law  92-500,
approximately 150 designated areas of the country  will
require area-wide waste treatment management plans.  In
light of a recent court test (Natural Resources
Defense Council versus the US Environmental Protection
Agency),  many other areas will also require basin-wide
plans.  There  are  numerous  problems  which  must  be
considered  in  area-wide  planning.   One  can broadly
classify these problems into two areas, evaluation  of:
(1)  point  source  related problems, and (2) non-point
source  (NPS)  related  problems.    Furthermore,   NPS
problems can be divided into urban and non-urban runoff
related problems.

     In  evaluating  potential   water pollution impacts
from  non-point  sources,   mathematical   models   and
statistical  correlations provide a powerful analytical
tool, particularly  when  limited  data  exist.   These
models  provide  the means to determine major potential
NPS  impacts;   e.g.,   sediment   loads   created   by
agricultural  practices  in  a   given  area.  Secondly,
these  models  provide  the  means  to   estimate   the
consequences of management decisions on a basin scale.

     A  number of models and correlations are available
in the public domain for NPS pollution evaluation.  For
estimation of pollutant loads from single storm events,
the  "Storm   watershed   Management   Model   (SWMM),"
developed  by  Metcalf  and  Eddy,  Inc.; University of
Florida; and Water Resources Engineers,  Inc.,  can  be
used.1   For  long-term pollutant loads based on annual
average loadings, statistical correlations developed by
McElroy et al. can be used.2

     For  continuous  simulation  of   multiple   storm
events,  two  models  are available.  STORM is a runoff
model developed by Water Resources Engineers, Inc., and
revised by the US Army Corps of Engineers.3  This model
accounts for variation in  surface  water  storage  but
does  not  account  for  water   storage  in  subsurface
compartments of watersheds.  Consequently, it does  not
accurately  simulate  water  movement  in the watershed
during dry periods between storm events.  Both SWMM and
STORM are particularly suited for and used  extensively
in analyzing urban NPS problems.

     Hydrocomp  Corporation  has  done  a great deal of
research on continuous simulation models which  predict
runoff continuously from multiple storm events.  Unlike
STORM,   these  models  account  for   subsurface  water
movement,  and  are  more  comprehensive   in   terms  of
estimating  water  movement  on  the   watershed between
storm events.  Two  models  which  have   recently  been
developed  by  Hydrocomp have potential application for
basin-scale planning.  The "Agricultural   Runoff  Model
(ARM)"1*  is  a continuous simulation runoff model which
simulates pesticide and nutrient loads.   In addition to
simulating pesticide-soil interactions, the model  also
simulates   soil  nutrient  transformations.   The  NPS
Model5    simulates    pollutant    contributions    to
stream channels from both urban and non-urban sources.
The model is keyed to the transport of sediments  over
the  watershed.   Potential  basin applications include
analysis of BOD-DO, temperature, and   suspended  solids
related problems.

     Similarly,  there  are  a  number of water quality
models which can be used in non-point  source   pollution
analysis.  QUAL I,6 QUAL II,7 DOSAG,8 and, to a lesser
extent, AUTO-QUAL,9 are river models which are
applicable  for analysis.  It should be noted  that most
of these models require lumping  of  non-point  sources
into  a  small  number of equivalent point sources.   In
addition, many variants of  the  previously  referenced
river  models are also available.  The most significant
of these is the EXPLORE-I model.10  In addition to  the
mass  balance  equations  incorporated  into   the above
mentioned models, EXPLORE-I also  contains  a  momentum
balance   which  permits  flood  routing  during  storm
events.

     This paper focuses on NPS problems associated with
non-urban  areas.   Specifically,  it  deals  with  the
impact  on  rivers  of  application  of  malathion,   an
organo-phosphorus    pesticide,     on     agricultural
watersheds.    We   have   chosen  this  problem  as  a
demonstrative example because  it  clearly  illustrates
the  unique  nature  of  non-point  source problems  and
because a sufficient amount of data  was  available   on
this compound.

     For evaluation purposes, ranges of malathion loads
were input into a river quality model to show the
effects on different species of fish.  These loads were
pulses in time along the entire  length  of  the  river
section  of interest.  Eventually we intend to link  ARM
with the water quality model and simulate a  series   of
storm   events   which   were   actually   observed   on
experimental  watersheds.   Based  on  biological    and
chemical  processes  for  which  we  have  data, a time
series of malathion  concentration  profiles   down  the
length of the river reach was calculated and potential
reduction  in the standing crops of Carp, Striped Bass,
and Bluegills were estimated.
               Mathematical Development

     The receiving water  model  used  to   predict  the
impact  of  malathion  loads   is  a  modification  of a
pesticide transport model developed by Falco et al.11
It is essentially a material balance which accounts for
the   transport  and  transformation  of  chemical  and
                                                       156

-------
biological constituents which are involved in the
chemical and biological degradation processes.  With
one exception, the continuity equation used for each
constituent is as follows:

    ∂Ci/∂t = D ∂²Ci/∂x² - v ∂Ci/∂x + Σj Rij + Si                (1)

where
    Ci  =  concentration of constituent i
    D   =  dispersion coefficient
    v   =  velocity in the direction of flow
    Rij =  rate of production or elimination of
           constituent i by pathway j
    Si  =  source strength of component i
    t   =  time
    x   =  distance in the direction of flow

This form of the continuity equation assumes that flow
is one dimensional and that the cross-sectional area of
the river reach is constant.  We have assumed that, in
the case of fish which are adversely affected by
malathion, these organisms are stationary; i.e.,

    ∂Ci/∂t = -Ri                                                (2)

where    Ri  =  rate of reduction of the standing crop
                of organism i due to the presence of
                malathion

Equation 2 assumes that there is no net transport of
organisms over the length of the stream.  This is not a
particularly realistic assumption.  The reason we have
used it is to clearly illustrate the deleterious
effects of high malathion concentrations on a test set
of organisms.  It should be noted that this assumption
eliminated the possibility of using this version of the
model to predict natural restocking of fish by invasion
from unaffected areas.

     Two processes which are responsible for malathion
degradation in aquatic ecosystems have been included in
the model.  The chemical degradation pathway modeled is
alkaline hydrolysis.  A detailed discussion of the
chemical reactions involved in this pathway has been
presented by Wolfe.12  Wolfe's12 results indicate two
competing temperature dependent reactions occur.  The
first reaction, favored at low temperatures, results in
the formation of an intermediate malathion monoacid
product.  An elimination reaction, favored at high
temperatures, results in production of diethyl
fumarate and O,O-dimethyl phosphorodithioic acid.

     In modeling these two reactions, we have assumed
that the overall chemical degradation of malathion is a
second order reaction; i.e.,

    Rhydrolysis = -k1 COH CM                                    (3)

where    k1   =  second-order rate coefficient
         COH  =  concentration of hydroxide ion
         CM   =  concentration of malathion

Since the two competing reactions have been combined,
the rate constant k1 is the sum of the individual rate
constants for each of the competing reactions; i.e.,

    k1 = kelim + khydrol                                        (4)

Using the data provided,12 the variation of these two
rate coefficients can be fit to an exponential function
of temperature T,

    kelim  = A1 exp(-A2/T)                                      (5)

    khydrol = A3 exp(-A4/T)                                     (6)

where A1 through A4 are empirical constants.

     At temperatures usually associated with natural
environments, the hydrolysis reaction is favored.
Thus, it was assumed in the model that chemical
degradation of malathion and the appearance of
α-malathion monoacid are stoichiometrically related.
Consequently,

    Rα-monoacid = Y1 k1 COH CM                                  (7)

where    Y1  =  yield of α-monoacid from malathion

     For microbial degradation, Paris13 proposed two
models.  The first used was the standard Monod
expression for growth of organisms and limiting
substrate utilization.  The second model assumed a
second order reaction between malathion and bacteria.
The standard deviations calculated for least squares
fits of the data to both models indicated that the
second order reaction model gave the best fit.  In the
model constructed by Falco,11 a Monod expression was
used to approximate the growth of bacteria on a readily
degradable carbon source, and a second order rate
equation was used to describe the degradation of
malathion by bacteria; i.e.,
                                                        157

-------
    RBacteria = Y μmax CD CB/(Km + CD) - k3 CB                  (8)

    RD = -μmax CD CB/(Km + CD)                                  (9)

    RM = -k2 CB CM                                              (10)

where    RBacteria = net rate of increase of bacteria
         RM    =  microbial degradation rate of malathion
         RD    =  rate of carbon utilization
         CB    =  concentration of bacteria
         CD    =  concentration of carbon source
         μmax  =  maximum growth rate of bacteria on
                  specified carbon source
         Km    =  half-saturation constant for bacterial
                  growth on specified carbon source
         k2    =  specific microbial degradation rate
                  for malathion
         k3    =  specific bacterial death rate
         Y     =  bacteria growth yield

It should be noted that equation 8 includes a term,
k3CB, to account for the death of bacteria under
starvation conditions.
     Paris observed that the major product of bacterial
mediated malathion degradation was β-malathion
monoacid.13  Consequently, malathion degradation and
formation of β-monoacid are stoichiometrically
related; i.e.,

    Rβ-monoacid = Y2 k2 CB CM                                   (11)

where    Y2  =  yield of β-monoacid from malathion

In this paper, the bacterial degradation model used is
the one developed by Falco.11
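The kinetic terms above can be made concrete with a short batch calculation.  The Python sketch below is our illustration only, not the authors' Fortran code; it steps equations 8 through 10 forward with a crude explicit time step, using the coefficient values listed later in Table 1 and hypothetical initial concentrations of bacteria, carbon, and malathion.

# Illustrative batch sketch of equations 8-10 (an assumption-laden
# reconstruction, not the published model code).  Monod-limited uptake of
# degradable carbon drives bacterial growth; bacteria die off at rate k3 and
# degrade malathion by a second-order reaction.
mu_max, Km = 7.2e-10, 6.3               # carbon uptake coefficient, half-saturation (Table 1)
k2, k3, Y = 1.21e-12, 5.16e-3, 5.73e9   # Table 1 values
CB, CD, CM = 1.0e6, 5.0, 0.2            # hypothetical initial org/l, mg/l, mg/l
dt = 0.1                                # hr, crude explicit Euler step
for _ in range(int(240 / dt)):          # ten days
    uptake = mu_max * CD * CB / (Km + CD)   # mg of carbon per litre per hour
    CB += dt * (Y * uptake - k3 * CB)       # equation 8
    CD = max(CD - dt * uptake, 0.0)         # equation 9
    CM += dt * (-k2 * CB * CM)              # equation 10
print(CB, CD, CM)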
     To summarize briefly, the water quality model used
accounts for transport of the chemical constituents
(malathion, α-monoacid, β-monoacid, and degradable
carbon) and bacteria by equation 1.  The degradation of
malathion, which appears as a sink term (Σj Rij) in
equation 1, is accounted for by equations 3 and 10.
The growth and death of bacteria are accounted for in
the transport equation for bacteria by inclusion of
equation 8.  The uptake of degradable carbon is
accounted for in its transport equation by substitution
of equation 9 for (Σj Rij) in equation 1.  The source
terms in the equations which describe the transport of
α- and β-monoacids are defined by equations 7 and 11,
respectively.  Lastly, the toxic effects of malathion
on standing crops of fish are modeled according to
equation 2, where it is also assumed that

    Ri = kt CF CM                                               (12)

and      CF  =  concentration of organisms affected by
                malathion
         kt  =  specific death rate
     The only required relationships which we have not
discussed are the boundary and initial conditions which
are applicable to the river system.  These, along with
a description of the nature of the sources of pollutant
loads (Si), are specific to the particular problem
being investigated.
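To show how the pieces fit together numerically, the sketch below is a rough Python illustration under stated assumptions, not the Fortran program cited in the next section.  It marches a single-constituent version of equation 1 forward in time with upwind advection and central dispersion, lumping the hydrolysis and microbial sinks into one pseudo-first-order loss rate; every coefficient, the grid, and the loading pulse are hypothetical placeholders.

# Minimal explicit finite-difference sketch of equation 1 for malathion only.
# k_loss stands in for k1*COH + k2*CB, treated here as constant; S is a
# uniform two-day non-point loading pulse.  All values are hypothetical.
import numpy as np

nx, dx, dt = 130, 1.0e3, 60.0      # 130 segments of 1 km, 60-second steps
D, v = 30.0, 0.15                  # dispersion (m2/s) and velocity (m/s)
k_loss = 1.0e-5                    # lumped first-order loss rate (1/s)

C = np.zeros(nx)                   # malathion concentration
S = np.full(nx, 1.0e-7)            # distributed loading during the pulse

for step in range(int(10 * 86400 / dt)):        # ten days of simulation
    src = S if step * dt < 2 * 86400 else 0.0   # pulse lasts two days
    adv = -v * (C - np.roll(C, 1)) / dx         # upwind advection
    disp = D * (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2
    adv[0] = 0.0                                # crude closed boundaries
    disp[0] = disp[-1] = 0.0
    C = C + dt * (adv + disp - k_loss * C + src)

print("peak concentration:", C.max())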
                   Results and Discussion

     The equations discussed in the previous section
were coded into a Fortran program described by Falco.11
The appropriate coefficients used for each simulation
are listed in Table 1.
         μmax (mg org⁻¹ hr⁻¹)        7.2 x 10⁻¹⁰
         Km (mg ℓ⁻¹)                 6.3
         k1 (M⁻¹ hr⁻¹)               1.43 x 10
         k2 (ℓ org⁻¹ hr⁻¹)           1.21 x 10⁻¹²
         k3 (hr⁻¹)                   5.16 x 10⁻³
         Y1 (mg/mg)                  0.915
         Y2 (mg/mg)                  0.915
         Y (org/mg)                  5.73 x 10⁹

     Table 1.  VALUES OF RATE COEFFICIENTS AND YIELD
               FACTORS USED IN ALL SIMULATIONS.

     For all simulations shown, it was assumed that a
significant point source of readily available carbon
was located 16 km from the upstream reference point,
with a discharge rate of 10.9 kg/day of usable carbon.
For simulations in which reductions in fish populations
were projected, it was assumed that malathion degraded
via alkaline hydrolysis at a rate which would
correspond to a pH of 8.  This is an extremely high pH
which is not very likely to occur in streams.  We have
used it here because it predicts a rapid decay in
malathion.  As will be shown, in spite of this rapid
decay, standing crops of fish are severely affected
under moderate malathion loads.  The point is, even
under the most favorable conditions for malathion
degradation, material entering the system can be
present long enough to have an adverse impact.

     The physical characteristics of the  system we have
simulated are shown in Table 2.
         River cross-sectional area     30 m²
         River length                   129 km
         Surface area of basin          25.8 km²
         Average river velocity         9 m/min

    Table 2.  PHYSICAL CHARACTERISTICS OF THE SYSTEM.

For simulations in which fish population reductions
were projected, the following specific death rates were
used:

     1.  For Carp, kt = 1.44 x 10⁻³ ℓ mg⁻¹ hr⁻¹
     2.  For Striped Bass, kt = 3.28 x 10⁻² ℓ mg⁻¹ hr⁻¹
     3.  For Bluegill, kt = 0.135 ℓ mg⁻¹ hr⁻¹

These values were obtained by fitting toxicity data for
24, 48, and 96 hr TLm concentrations reported by
Ferguson14 to an exponential function in malathion
concentration and time; i.e.,
                                                       158

-------
    0.5 = exp(-kt CTL tTL)                                      (13)

where    tTL  =  exposure time
         CTL  =  measured toxicity of malathion
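A least-squares reading of equation 13 is straightforward to carry out.  The short Python sketch below does so with made-up TLm values, not the Ferguson data, estimating kt from the constraint ln 2 = kt CTL tTL at each exposure time.

# Hypothetical illustration of the fit behind equation 13.  The TLm values
# here are placeholders, not the toxicity data of reference 14.
import numpy as np

t_TL = np.array([24.0, 48.0, 96.0])            # exposure times (hr)
C_TL = np.array([12.0, 7.0, 4.0])              # hypothetical TLm concentrations (mg/l)

x = C_TL * t_TL                                # regressor: concentration times time
kt = np.sum(x) * np.log(2.0) / np.sum(x * x)   # least-squares slope through the origin
print(f"fitted kt = {kt:.3e} l/(mg*hr)")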
     Figure 1 shows the steady state concentration
profile which would exist if the loading rate for
malathion were 0.126 gm/acre day.  The contributions of
microbial and chemical degradation are also shown along
with the concentration profile that would exist if
neither of these two processes occurred.  Under the
conditions simulated, both alkaline hydrolysis and
microbial degradation are important processes for the
elimination of malathion.
     Figure 1.  Comparison of steady-state malathion
                concentration profiles in response to
                a load of 0.126 gm/acre day.  (Curves
                shown: base line, microbial activity,
                hydrolysis at pH 8, and total
                degradation, versus length in km.)

     Figures  2 and 3 show the response of the  river to
a pulse of malathion loaded over a period of  two   days
in the amount of 700 mg/acre.  Figure 2 shows the
concentration profiles for malathion as  the  pesticide
is  degraded  and  diluted  out of the river.   Figure  3
shows  the  concentration  profiles  of  a-   malathion
monoacid  as it is formed and diluted out of the river.
Comparing the two graphs,  it  can  be  seen  that  the
monoacid  persists  in  the river for longer periods of
time  than  malathion.    Because   of   its    relative
persistence,   more   information  is  needed   on   this
degradation product.

     Figures 4 and 5 show the relative standing crops
of Carp, Bass and Bluegill before and after a storm
event in which malathion is loaded into the river.
Figure 4 shows the impact of runoff amounting to 700
mg/acre and Figure 5 shows the impact of runoff
amounting to 44 mg/acre.  These loads were chosen to
demonstrate the variation in species response to a
range of malathion inputs.  Based upon recommended
application rates of malathion, the range of loads used
in these examples is possible.  However, neither field
data nor simulation loading model results are available
to determine the probability of occurrence of such
loads or to delineate the actual physical conditions
under which they may occur.  The projections indicate
that severe damage could occur to both Bass and
Bluegill at the higher loading.  At low loading rates,
only Bluegills are adversely affected.
     Figure 2.  Response of a stream to a pulse of malathion
                of 700 mg/acre.

     Figure 3.  Concentration profiles of α-malathion
                monoacid as a function of time in response
                to a 700 mg/acre pulse of malathion.
                                                        159

-------
Figure 4.  Response of three species of fish to a pulse
           of malathion of 700 mg/acre.

Figure 5.  Response of three species of fish to a pulse
           of malathion of 44 mg/acre.

     In summary, we  have  presented  an  analysis  and
projected the impact of a pesticide on three species of
fish.    We  have  indicated  the  types  of information
necessary for this  analysis.   Although  the  absolute
values of coefficients and parameters may vary from one
problem  to another, the procedure should be applicable
to  many  situations.    The  use  of  this  model   and
eventually   more   sophisticated   ones  as  they  are
published should provide insight into a broad range  of
NPS pollution problems.
                                                                                 References
1.   Metcalf & Eddy,  Inc.,  University of  Florida,  and
     Water   Resources   Engineers,   Inc.   Storm  Water
     Management  Model,   Volume   II—Verification   and
     Testing.   US EPA Water  Pollution Control  Research
     Series, 11024DOC08171.

2.   McElroy, A. D.,  S.   Y.   Chiu,   J.  W.   Nebgen,  A.
     Aleti,  and  F.  W.  Bennett.    Interim  Report on
     Loading  Functions   for   Assessment    of   Water
     Pollution  from  Nonpoint Sources.   US EPA Report,
     Project 68-01-2293.  1976.

3.   Hydrologic Engineering Center.   Urban  Storm  Water
     Runoff   "STORM".    US   Army  Corps of  Engineers
     Report.  1975.

4.   Donigian, A. S., Jr. and N.  H.  Crawford.   Modeling
     Pesticides and Nutrients on  Agricultural   Lands.
     US EPA Report, EPA-600/2-76-043.   1976.

5.   Donigian, A. S., Jr. and N.  H.  Crawford.   Modeling
     Nonpoint Pollution  From  the  Land  Surface.   US  EPA
     Report.  In press.

6.   Masch, F. D. and Associates  and  the  Texas   Water
     Development Board.   Simulation  of Water Quality in
     Streams  and Canals, Theory  and Description  of the
     QUAL-I Mathematical  Modeling  System.   The   Texas
     Water Development Board, Report 128.   1971.

7.   Roesner, L. A.,  J.  R. Monser,  and D.   E.   Evenson.
     Computer  Program   Documentation  for   the   Stream
     Quality  Model   QUAL-II.   US    EPA   Intermediate
     Technical Report, Contract No.  68-01-0739.   1973.

8.   Texas Water Development  Board.   DOSAG-1 Simulation
     of Water Quality in  Streams  and  Canals   Program
     Documentation    and  Users   Manual.    Texas   Water
     Development Board Report.  1970.

9.   Crim, R. L. and  N.  L. Lovelace.   AUTO-QUAL   Model-
     ing System.  US  EPA  Report 440/9-73-003.   1973.

10.  Baca, R. G., W. W. Waddel, C.  R.  Cole,  A.   Brand-
     stetter,  and D. B.  Cearlock.   EXPLORE-I:  A River
     Basin  Water  Quality  Model.    Battelle    Pacific
     Northwest Laboratories Report.   1973.

11.  Falco, J. W., D. L.  Brockway,  K.  L.  Sampson, H. P.
     Kollig, and J. R. Maudsley.  Models for  Transport
     and   Transformation   of    Malathion   in  Aquatic
     Systems.  Proceedings of the   American  Institute
     for   Biological  Sciences   Symposium,  Freshwater
     Quality Criteria  Research   of   the Environmental
     Protection Agency, Corvallis, 1975.  In press.

12.  Wolfe, N. L., R. G.  Zepp, J. A.  Gordon, and  G.  L.
     Baughman.  The Kinetics  of Chemical  Degradation of
     Malathion   in   Water.     Environmental   Research
     Laboratory, Athens,  Georgia.   In  press.

13.  Paris, D. F., D. L.  Lewis, and  N. L. Wolfe.   Rates
     of Degradation of Malathion  by  Bacteria   Isolated
     From an Aquatic System.  Environmental Science
     and Technology.  9(2):135-138.  1975.

14.  Ferguson,  T.  L.   and   R.   von  Rumker.    Initial
     Scientific  and  Minieconomic  Review of Malathion.
     US EPA Report, Contract  No.  68-01-2448.  1975.
                                                        160

-------
                               RADIONUCLIDE TRANSPORT IN THE GREAT LAKES
               R.  E.  Sullivan, Ph.D.
          Environmental Protection Agency
           Office  of Radiation  Programs
                Washington, D. C.
                     W.  H.  Ellett,  Ph.D.
               Environmental Protection Agency
                Office of Radiation  Programs
                      Washington,  D.  C.
                     Summary

     A mathematical model has been developed to
predict radionuclide levels in the Great Lakes due to
nuclear power generation in the United States and
Canada.  The calculations have been used to verify
the feasibility of proposed International water
quality objectives for radioactivity in the Lakes.
Dose rates and doses to reference-man from the inges-
tion of Lake waters are predicted based on expected
future power generation in this region.

                    Introduction

     A recent bipartite agreement between the United
States and Canada on  water  quality  in  the  Great
Lakes  mandated  establishment  of  a  radioactivity
objective  for  the  Lakes.   The  liquid  effluents
discharged into the Great Lakes from nuclear power
plants and other nuclear facilities,  such  as  fuel
reprocessing  plants,  are of particular interest in
this   regard   since   some   of   the    entrained
radionuclides have relatively long half-lives.

     Previous work in this area1,2 has been
concerned mainly with fallout from  nuclear  weapons
tests.   Since   in  this  study   we  are primarily
concerned  with  predicting  the  concentrations  of
specific  radionuclides  emitted in the nuclear fuel
cycle and the resultant doses to reference-man,  the
contribution   from   fallout  has  been  neglected.
However,  such  source  terms  may  be  included  by
specifying  appropriate  initial  concentrations for
these radionuclides.

     A simplified model of the  Great  Lakes  system
has  been  employed which assumes perfect mixing but
allows  for  the   periodic   establishment   of   a
thermocline    by   varying   the   mixing   volume.
Corrections are made, where necessary,  for  removal
of radionuclides by sedimentation and equilibration.
The  results  are  given  in  terms  of radionuclide
concentrations in each lake and the dose  rates  and
doses  ensuing  from continuous, long-term ingestion
of system waters.  With the model described,  it  is
possible  to  obtain  analytical  solutions  for the
coupled  differential  equations  describing   these
quantities  as  a  function  of  time.   However,  a
FORTRAN computer program has been employed to reduce
the calculational effort required.

     In   succeeding   sections,   we   present    a
description  of the physical and mathematical models
developed,  the  rationale  employed  in  specifying
source  terms  for  various types of facilities, and
details of the dose calculation.  A sample  problem,
projecting   the   future   effects  of  radioactive
contamination of the Great Lakes  due  to  projected
nuclear  plant  operations,  is  described  in  some
detail.  Results from this and similar problems have
been used to verify the  feasibility  of  the  water
quality objectives set by the U.S.-Canada agreement.
              Physical Model Analysis

Radionuclide Concentrations

     The physical model of the Great Lakes comprises
a set of  five  bodies  of  water  characterized  by
constant  total volume, inflow, outflow, and surface
area.  The lakes are interconnected  so  that,  with
the  exception  of Lakes Superior and Michigan, each
may contribute radioactivity to  succeeding  members
of the chain as indicated in Figure 1.
Figure 1.  Physical Model of the Great Lakes

The governing differential equation for models of
this type, for the ith lake and a single nuclide, is:

    dCi/dt = Ri/Vi + Σj (qj/Vi) Cj - (λr + λp + qi/Vi) Ci        (1)

where

     Ci = concentration for ith lake  [Ci/cm³]

     Ri = input rate into ith lake  [Ci/yr]

     Vi = mixing volume of ith lake  [cm³]

     λr = radioactive decay constant for this nuclide

     λp = decay constant for physical removal
          (sedimentation, equilibration)  [yr⁻¹]

     qi = volumetric flow out of ith lake  [cm³/yr]

and the summation runs over the lakes j that discharge
directly into lake i.  Because of the summation on j, a
major difficulty in solving the equation arises in that
each Cj term embodies the complete differential
equation for all preceding lakes, thus complicating the
expressions for the lower lakes.
                                                      161

-------
     We have chosen to apply the Laplace transform
in order to obtain solutions to these equations.
The transformed equation for Ci is

    ci(s) = [Ri/(sVi) + Ci⁰ + Σj (qj/Vi) cj(s)] / (s + ki)       (2)

where ki = (λr + λp + qi/Vi) depends on both the
characteristics of the lake (i) and the physical
properties of the radionuclide.  Ci⁰ is the initial
lake concentration.  For Lakes Superior and
Michigan, which have no lake tributaries, the cj
term vanishes and equation (2) reduces to

    ci(s) = [Ri/(sVi) + Ci⁰] / (s + ki)                          (3)

     The general equation becomes increasingly more
complex as we proceed down the chain of lakes.
However, the transformed solutions to these general
equations comprise only terms of the form

    c(s) = f(s)/g(s)

in which f(s) is constant and g(s) is the product of
linear, non-repeated factors,

    g(s) = (s + k1)(s + k2) ... (s + kn)                         (4)

     To reduce the effort required in solving such
expressions, a variation of Heaviside's partial
fraction expansion3,

    c(t) = Σn [f(-kn)/gn(-kn)] exp(-kn t)                        (5)

is applied.  Here, gn(-kn) denotes the product of all
the factors of g(s), evaluated at s = -kn, except the
factor (s + kn).  Using (5), the solution to equation
(3) for Lake Michigan or Lake Superior is

    Ci(t) = [Ri/(Vi ki)][1 - exp(-ki t)] + Ci⁰ exp(-ki t)        (6)

     For the next lake (Huron) equation (2) includes
expressions for the preceding lakes

    ci(s) = [Ri/(sVi) + Ci⁰]/(s + ki)
            + Σj (qj/Vi) cj(s)/(s + ki)                          (7)

where the summation over j indicates the presence of
two terms, one for Lake Superior and the other for
Lake Michigan, the cj(s) terms for these lakes
corresponding to the cj(s) terms in equation (2).

     It is evident that as the differential equation
for each lake in the progression is transformed,
each term will contain an additional factor
(s + k).  Again, solutions are found by means of the
inverse transform of equation (5), which yields the
concentration of a specified radionuclide as a
function of time.
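The propagation just described, in which each lake's concentration is a sum of exponential terms that feeds the next lake in the chain, can be sketched in a few lines of Python.  The fragment below is our illustration under stated assumptions (simple poles only), not the authors' computer program, and every flow, volume, rate constant, and source value in it is a hypothetical placeholder.  Here ki plays the role of (λr + λp + qi/Vi) in equation (2).

# Sketch of the exponential-sum solution for a chain of perfectly mixed lakes.
# Each concentration is represented as {lambda: coefficient}, meaning
# C(t) = sum coefficient * exp(-lambda * t); the lambda = 0 entry carries the
# constant part.  Assumes ki never coincides with a forcing exponent.
from math import exp

def solve_lake(Ri, Vi, ki, C0, upstream):
    """Analytic solution of dCi/dt = Ri/Vi + sum qj*Cj/Vi - ki*Ci."""
    forcing = {0.0: Ri / Vi}                         # constant source term
    for qj, Cj_terms in upstream:                    # tributary-lake inputs
        for lam, coeff in Cj_terms.items():
            forcing[lam] = forcing.get(lam, 0.0) + qj * coeff / Vi

    terms = {}
    for lam, beta in forcing.items():                # particular solution
        terms[lam] = terms.get(lam, 0.0) + beta / (ki - lam)
    terms[ki] = C0 - sum(terms.values())             # match C(0) = C0
    return terms

def concentration(terms, t):
    return sum(c * exp(-lam * t) for lam, c in terms.items())

# Hypothetical two-lake chain: an upper lake discharging into a lower one.
upper = solve_lake(Ri=1.0e3, Vi=1.0e4, ki=0.5, C0=0.0, upstream=[])
lower = solve_lake(Ri=2.0e2, Vi=5.0e3, ki=0.8, C0=0.0,
                   upstream=[(1.5e3, upper)])
print(concentration(lower, 10.0))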

Dose Rate  and  Dose to Reference-Man

     The   concentrations of  radioactivity in
lake water can be used to find the  annual dose  rate
due  to  ingestion of lake  water by   reference-man.
Because the radioactivity in the lakes  is  expected
to  be  a  strongly  varying function of time,  due to
the rapid  projected  growth  of  nuclear  power,   dose
estimates  cannot be based on a constant intake of
activity   over   the  time   necessary   to   reach
equilibrium in the body except for nuclides having a
relatively short effective half-life.   Nor can the
dose over  a 50-year  period  be determined  using  the
conventional   models  given  in ICRP Publication 2.4
Rather,  in  this  study the  dose  rate  and   dose
calculations   are    based   on  equations  and   data
presented in ICRP 105 and ICRP 10A6.  However, for
organ burden, b(t), and cumulated activity, B(T),
the equations  have been  revised slightly to conform
to  program  usage.    Both   dose  and  dose rate  are
predicated,  at  present,   solely  on   an  assumed
consumption by reference-man of 2.2 liters of
drinking water per day.   This quantity  is  somewhat
larger  than that usually consumed  as drinking  water
to account for the contribution to  the  body burden
from food pathways.

     Over a time interval short enough to allow
treating the average concentration as constant, the
intake, I(t), is directly proportional to the lake
concentration.  Integration of the ICRP equations
for organ burden and cumulated activity is
straightforward if the retention function, R(t),
contains only exponential terms.  For the isotopes
of interest here retention functions of this form
are given in reference 5.  For ingestion at a
constant average intake, I,

    b(t) = I ∫[0,t] R(t-r) dr                                   (8)

and

    B(T) = I ∫[0,T] ∫[0,t] R(t-r) dr dt                         (9)

     Since the retention function, R(t), is the sum
of a series of exponentials,

    R(t) = Σn an exp(-pn t)                                     (10)
                                                     162

-------
each term in the integral defining the organ burden
will be of the form

    bn(t) = I an ∫[t1,t] exp(-pn(t-r)) dr                       (11)

There are two solutions for this equation.  The
first yields the organ burden

    bn(t) = (I an/pn) [1 - exp(-pn(t-t1))]                      (12)

at any time during the period, beginning at time t1,
of ingestion.  The second solution gives the organ
burden at any time subsequent to t2, the end of the
ingestion period.

    bn(t) = (I an/pn) [exp(-pn(t-t2)) - exp(-pn(t-t1))]         (13)

The instantaneous dose rate depends only on the organ
content at some time t.  However, cumulated activity
and, therefore, the dose depend on the whole time
history of ingestion so that the sum of equations
(12) and (13) must be used in evaluation of the
total dose over a period T.  The cumulated activity
is then

    Bn(T) = (I an/pn) ∫[T1,T2] [1 - exp(-pn(t-T1))] dt
            + (I an/pn) ∫[T2,T] [exp(-pn(t-T2)) - exp(-pn(t-T1))] dt    (14)

where T1, T2, and T are analogous to the t values
used in the organ burden equations.  Performing the
integration and collecting terms,

    Bn(T) = (I an/pn) {(T2 - T1)
            + (1/pn) [exp(-pn(T-T1)) - exp(-pn(T-T2))]}         (15)

with a similar term for each exponential needed in
the retention function.  Note that, since intake is
directly proportional to lake concentration (which
comprises only constant and exponential terms),
equations (8) and (9) may be solved analytically.
However, for the short time intervals considered
here (one year) the use of an average I is
sufficient.
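As an illustration of how equations 12, 13, and 15 are applied, the short Python sketch below evaluates the organ burden and cumulated activity for a retention function made of two exponential terms.  The retention parameters and intake rate are invented placeholders rather than ICRP values, and the sketch is ours, not the program described in the next section.

# Minimal sketch of equations 12, 13, and 15 for a retention function that is
# a sum of exponentials.  The intake rate and retention parameters below are
# hypothetical placeholders.
from math import exp

a = [0.6, 0.4]            # retention coefficients a_n (dimensionless)
p = [0.05, 0.005]         # retention rate constants p_n (1/day)
I = 1.0                   # constant intake during ingestion (activity/day)

def organ_burden(t, t1, t2):
    """Equations 12 (during ingestion) and 13 (after ingestion ends)."""
    total = 0.0
    for an, pn in zip(a, p):
        if t <= t2:
            total += I * an / pn * (1.0 - exp(-pn * (t - t1)))
        else:
            total += I * an / pn * (exp(-pn * (t - t2)) - exp(-pn * (t - t1)))
    return total

def cumulated_activity(T, T1, T2):
    """Equation 15, summed over the exponential terms."""
    total = 0.0
    for an, pn in zip(a, p):
        total += I * an / pn * ((T2 - T1)
                                + (exp(-pn * (T - T1)) - exp(-pn * (T - T2))) / pn)
    return total

print(organ_burden(400.0, t1=0.0, t2=365.0))
print(cumulated_activity(400.0, T1=0.0, T2=365.0))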

 Computer Program Analysis

      The basic program uses three loops  to  account
 for  the  dependence on time, lake and isotope.  The
 time loop is usually solved in one-year increments
 and,  to account for the existence of a thermocline,
 mixing during the first half year is  based  on  the
 total  lake  volume while in the last half year a 17-
 meter  depth  (thermocline)  is  presumed  and   the
 product  of  this  depth  and  the lake surface area
 define the mixing volume.  The  model  assumes  that
 lake  outflows  remain  constant  in the epilimnion,
 with equilibration dependent  on  the  concentration
 above the thermocline.  Nuclides in the hypolimnion
 are assumed to be removed only by radioactive decay
 and, where applicable, sedimentation.
     Several  options are available for defining the
source terms, R.  Specification of reactor  type  is
required    since   the   liquid   discharges   vary
significantly between the various  (BWR, PWR)  types.
These  releases also depend on the sophistication of
the liquid radwaste system employed by each type  of
reactor.  Since detailed examination of the radwaste
system   for  each  operating  reactor  may  not  be
practical and is not possible for  plants  scheduled
for  future operation, it has been necessary to make
some assumptions regarding these releases.  We  have
utilized  the  results  of an in-depth environmental
analysis7 which presented typical releases expected
from  four  classes each of BWR and PWR liquid waste
system  representing  a  range  of  treatment   from
minimum  to maximum.  This data is incorporated into
the computer program  for  use  in  a  source  input
option.    Thus,  the  simplest  input  consists  of
specifying the number of BWRs and PWRs (nominal 1000
MWe) on each lake along with the appropriate choice of
radwaste system type  (1 through 4) and allowing  the
program to internally generate source terms for each
lake  and isotope.  Alternatively, the actual source
terms for each lake  and  isotope  may  be  directly
entered or the two options may be combined.
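The simplest input option described above can be pictured with a small sketch.  The Python fragment below is ours, with a made-up release table rather than the values tabulated in reference 7: per-lake source terms are assembled from reactor counts, types, and radwaste-system classes.

# Sketch of the reactor-count source option.  The release table is a
# hypothetical placeholder, not the data stored in the actual program.
RELEASE_CI_PER_YR = {   # (reactor type, radwaste class) -> Ci/yr per 1000-MWe unit
    ("BWR", 1): {"H3": 40.0, "Cs137": 0.10},
    ("BWR", 4): {"H3": 40.0, "Cs137": 0.002},
    ("PWR", 1): {"H3": 350.0, "Cs137": 0.05},
    ("PWR", 4): {"H3": 350.0, "Cs137": 0.001},
}

def lake_source_terms(units):
    """units: list of (reactor_type, radwaste_class, count) tuples for one lake."""
    totals = {}
    for rtype, rclass, count in units:
        for isotope, rate in RELEASE_CI_PER_YR[(rtype, rclass)].items():
            totals[isotope] = totals.get(isotope, 0.0) + count * rate
    return totals

# e.g., a lake with three PWRs (class 1 radwaste) and two BWRs (class 4):
print(lake_source_terms([("PWR", 1, 3), ("BWR", 4, 2)]))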

     The  data required to obtain solutions for five
isotopes  (H3,  Co60,  Sr90, Cs134, Cs137) is
presently   stored   in   the   program,   but  other
radionuclides may easily be added.  At present, this
data includes correction factors, in the form of the
effective decay constants indicated in equation (2),
to  account  for  equilibration  of   tritium8 and
sedimentation of cesium9.

     The standard output consists, for each isotope,
of  the average radionuclide concentrations for each
year and Great Lake.  A summary table giving  annual
dose rates and cumulative doses by year for each
lake  is  also  printed  out.   The  critical  organ
assumed  for  each isotope is identified at the head
of each column.

                Problem Description

     The underlying purpose of this analysis and the
resulting computer  program   was  to  estimate  the
effect  on  the  Great  Lakes of nuclear power plant
operation through the year  2050.   These  estimates
were   needed   in  order  to  establish  reasonable
estimates of water quality for the lakes.  To obtain
a valid assessment it is necessary to  consider  not
only  the  effluent  from the plants themselves, but
also that from any reprocessing  plants  located  on
the  lakes.   Separate  determinations were made for
each of the sources  described  below  in  order  to
compare the relative effects of each.

                     Sources
                                                           U.S. Nuclear Power Stations

                                                                The total number of nuclear power plants in the
                                                           United States has been estimated by interpolation of
                                                           data  contained  in  a  compilation  issued  by  the
                                                            AEC.10  The apportionment of reactors to the Great
                                                           Lakes basin and to the individual lakes was taken to
                                                           have the same ratio to the total number as the known
                                                           1980 values.

                                                           U.S. Fuel Reprocessing Plants

                                                                Only one reprocessing  operation,  the  Nuclear
                                                       163

-------
Fuel  Services plant, is presently scheduled for the
Great  Lakes  basin.   The  source  terms  for  this
facility    were    taken    from   the   associated
Environmental Impact Statement.11  One additional
facility,  located  inland  but  contributing to the
tritium concentration in Lake Michigan  by  rainout,
has been postulated.

Canadian Power Stations
     Source  terms  for  the  Canadian  heavy  water
reactors  expected  to  be in use during this period
have been estimated from current data  furnished  by
facility  operators.12    It  should  be  noted that
these  values  are   very   conservative   and   may
overestimate  the  activity  entering the Lakes from
Canadian   reactors.    In    particular,    tritium
discharges  are expected to be reduced in the future
due to the  economic  incentive  to  conserve  heavy
water.

                      Results
     Using  the  general  procedure described in the
text, the nuclide concentrations,  dose  rates,  and
cumulative doses have been determined for the period
1962-2050.   Two  sets  of operating conditions have
been used: in  the  first,  the  nuclear  facilities
operating  in  the  year  2000  have been assumed to
continue operation at a constant level  until  2050.
In  the second, all sources have been presumed to be
removed after the year 2000 in order to estimate the
time  required  for  radioactive  decay   and   lake
turnover  to  clear the lakes.  The only operational
reactors  in  the  period  1969-1970  were  on  Lake
Michigan.   Nuclide  concentrations in the remaining
lakes during  this  period  are  due  to  flow  from
Michigan  through  connecting rivers.  Subsequent to
this period, generating stations begin to come on
line  in  the  other lakes until, by 1980, operating
reactors are projected for all lakes  but  Superior.
Lakes  Erie  and  Ontario, which not only have large
numbers of facilities but receive the effluent  from
other   lakes,   have   roughly  twice  the  nuclide
concentrations of the  other  lakes.   Radioactivity
concentrations  are rather insignificant until after
1980 when there is a sharp  rise  through  the  year
2000.

     To indicate the overall effect of nuclear power
generation  on  radionuclide  levels  in  the  Great
Lakes, the dose rates from each  isotope  considered
are given in Table 1 for each lake.  Table 2 shows
the cumulative doses, by lake, incurred from inges-
tion of each nuclide through the year 2050.  Both
sets of results are for operation at a constant
level through  the year  2050 assuming the number of
installations is constant after the year 2000.

     Based on the model described in  the  text,  by
far  the  largest  cumulative  dose  is  due  to the
concentration of tritium in Lake waters.   The  vast
majority   of  the  tritium  present  is  from  fuel
reprocessing activities and is in the  effluent from
Canadian heavy water reactors.  However, the maximum
seventy-year  dose —  from  about 1980 to 2050 — is
only 23 millirem, due to ingestion of  Lake  Ontario
water.   The remaining isotopes contribute less than
1 millirem to the total dose.

     It should be noted that the  results  presented
may  be altered when refinements to the model  (i.e.,
hydrographies, source, equilibration, sedimentation,
etc.) are possible.  In particular, the  effects  of
localized  near-shore  currents may  variably affect
the concentration of isotopes in the drinking  water
 intakes  of  cities adjacent to effluent discharges.
 On a  long-term basis,  however,  in  which  relatively
 perfect  mixing may be assumed, these results should
 not be  affected drastically.  Moreover, these results
 indicate  that nuclide   concentrations  arising  from
 currently  projected  nuclear  fuel cycle operations
 yield radiation doses  which lie within the  proposed
 objective for Great Lakes Water Quality.
                      References
1.  Machta, L.,  Harris,  D.  L.,  and  Telegados,   K.,
    "Strontium-90  Fallout   Over Lake Michigan," J^_
    Geophys. Res., 75,  1092-1096,  1970.

2.  Lerman, A.,  "Strontium-90 in the  Great  Lakes:
    Concentration  -  Time Model,"  J.  Geophys. Res.,
    77, 3256-3264, 1972.

3.  Churchill,   R.   V.,    "Modern    Operational
    Mathematics  in Engineering," p.44, McGraw-Hill,
    New York, 1944.

4.  INTERNATIONAL   COMMISSION   ON    RADIOLOGICAL
    PROTECTION.    Permissible   Dose  for   Internal
    Radiation, ICRP Publication 2, Pergamon Press,
    N.Y., N.Y. (1959).

5.  INTERNATIONAL   COMMISSION   ON    RADIOLOGICAL
    PROTECTION.   Evaluation  of Radiation  Doses to
    Body Tissues from Internal  Contamination due to
    Occupational  Exposure,  ICRP  Publication   10,
    Pergamon Press, N.Y., N.Y.  (1968).

6.  INTERNATIONAL   COMMISSION   ON    RADIOLOGICAL
    PROTECTION.    The   Assessment   of   Internal
    Contamination  Resulting  from    Recurrent   or
    Prolonged   Uptakes,   ICRP   Publication  10A,
    Pergamon Press, N.Y., N.Y.  (1971).
7.  U.S.    ENVIRONMENTAL    PROTECTION     AGENCY.
    "Environmental  Analysis  of  the  Uranium Fuel
    Cycle, Part II, Nuclear Power Reactors," EPA-
    520/9-73-003-C,  Office  of Radiation Programs,
    Environmental Protection Agency, Washington, D.C.
    (1973).

8.  Strom, Peter 0., "Method for Estimating Tritium
    (HTO) in the Great Lakes," USNRC, Unpublished.

9.  Wahlgren, M. A., and Nelson, D. M.,  "Residence
    Times  for  239Pu  and  137Cs  in Lake Michigan
    Water,"  ANL-8060,  Part  III,  85-89,  Argonne
    National Laboratory, Argonne, Illinois (1973).
    (Residence time estimates updated by  telephone
    communication.)

10. U. S. ATOMIC ENERGY COMMISSION.  "Nuclear Power
    Growth  1974-2000,"  WASH-1139,  p.6,  Case  D,
    USAEC, February 1974.

11. NUCLEAR FUEL SERVICES.   Environmental  Report,
    Docket Number 50-201, p.8.2-3, NFS Inc. (1973).

12. Personal  Communication,  K.  Y.  Wong,  Supv.,
    Central   Health   Physics   Services,  Ontario
    Hydroelectric,   to  A.  H.   Booth,   Director,
    Radiation   Protection  Bureau,  Department  of
    Health and Welfare (Canada) dated November  26,
    1975.
                                                     164

-------
                              TABLE 1

              DOSE EQUIVALENT RATE IN THE YEAR 2050*
                         (microrem/year)

Isotope and                     Lake      Lake      Lake      Lake
Critical Organ        Source  Michigan    Huron     Erie     Ontario

Tritium                  1      6.402     5.205     13.36     11.34
(Body Water)             2       --        --       55.92     110.8
                         3      55.38     22.48     15.91     8.797
                         4       --       137.8     97.91     257.7

Cobalt-60                1      0.007     0.004     0.017     0.012
(Total Body)             2       --        --        --        --
                         3       --        --        --        --
                         4       --       0.041     0.028     0.082

Strontium-90             1      2.633     2.434     5.189     5.121
(Bone)                   2       --        --       4.865     10.71
                         3       --        --        --        --
                         4       --        --        --        --

Cesium-134               1      0.635     0.283     2.097     0.874
(Total Body)             2       --        --       0.176     0.195
                         3       --        --        --        --
                         4       --       0.199     0.064     0.447

Cesium-137               1      0.995     0.478     2.842     1.452
(Total Body)             2       --        --       0.140     0.178
                         3       --        --        --        --
                         4       --       0.907     0.393     1.956

*After 50 years operation at constant source level.
    1.  U.S. Nuclear Power Reactors
    2.  NFS Fuel Reprocessing Plant
    3.  Postulated H-3 Rainout into Lake Michigan
    4.  Canadian Power Reactors


                              TABLE 2

                DOSE EQUIVALENT BY THE YEAR 2050*
                           (microrem)

Isotope and                     Lake      Lake      Lake      Lake
Critical Organ        Source  Michigan    Huron     Erie     Ontario

Tritium                  1      332.0     247.9     727.5     584.1
(Body Water)             2       --        --       3852.0    7245.4
                         3     2791.7     954.5     644.7     317.4
                         4       --      7646.5    5236.5   14464.5

Cobalt-60                1      0.372     0.226     0.998     0.691
(Total Body)             2       --        --        --        --
                         3       --        --        --        --
                         4       --       2.39      1.55      4.76

Strontium-90             1      100.6     79.89     218.7     209.6
(Bone)                   2       --       279.3     565.0      --
                         3       --        --        --        --
                         4       --        --        --        --

Cesium-134               1      40.06     16.68     124.3     51.76
(Total Body)             2       --        --       12.27     13.53
                         3       --        --        --        --
                         4       --       12.42     3.93      27.59

Cesium-137               1      61.81     27.74     167.2     84.43
(Total Body)             2       --        --       9.723     12.21
                         3       --        --        --        --
                         4       --       55.74     23.69    119.02

*After 50 years operation at constant source level; lifetime dose.
    1.  Nuclear Power Stations
    2.  NFS Reprocessing Plant
    3.  Postulated H-3 Rainout into Lake Michigan
    4.  Canadian Power Stations
                                                 165

-------
          FEDBAK03 - A COMPUTER PROGRAM FOR THE MODELLING OF FIRST ORDER CONSECUTIVE REACTIONS WITH
                      FEEDBACK UNDER A STEADY STATE MULTIDIMENSIONAL NATURAL AQUATIC SYSTEM


                                                 George A. Nossa
                                              Environmental Engineer
                                               Data Systems Branch
                                       U.S. Environmental Protection Agency
                                                 New York, N.Y.
ABSTRACT

The computer model described is used to compute the
steady-state distribution of water quality variables
undergoing consecutive reactions with feedback and
following first order kinetics.  The program has
been developed in a general form but is specifically
applicable to the reactions observed by nitrogenous
species and the associated dissolved oxygen uptake
in the natural environment.

The basis for this model is the theory of conserva-
tion of mass.  The approach used to solve the equa-
tions is a finite difference scheme developed by
Thomann(4,5), which has been shown to be a very
effective tool in the field of water quality manage-
ment.

INTRODUCTION

A computer model  has been developed to serve as a
useful tool  in the prediction of water quality para-
meters which react under first order kinetics and as
a system of consecutive reactions where any parameter
can react in a feedback fashion.  The problem setting
assumes an aquatic environment in which steady state
conditions can be applied.

Let us consider a system of five reactants:
                FIGURE 1
    SCHEMATIC   OF FIVE REACTANT SYSTEM

 In the above system, the consecutive feedforward and
 feedback  reaction rates are presented.  All the
 possible  reaction loops for the first reactant have
 also  been  included.  This particular system will  be
 used  in the theory development.

 THEORY

 The general estuarine advection/dispersion equation
may be written as:

    ∂N/∂t = E ∂²N/∂x² - U ∂N/∂x - KN + W(x) ........ (1)

where:
    N = concentration of constituent
    t = time, in tidal cycles
    E = tidally averaged dispersion coefficient
        which includes the dispersive effects of
        tidal motion
    U = net advective velocity
    K = first order decay coefficient of constituent N
    W(x) = direct discharge of N

Assuming steady state conditions:  dN/dt = 0  and
equation (1) becomes:

    0 = E d²N/dx² - U dN/dx - KN + W(x) ........ (2)
Direct solutions for the above equation have been
computed by O'Connor(1,2); a computer program has
been documented by EPA, Region II(3), which uses this
technique to solve for water quality parameters.

A second solution approach developed by Thomann(4,5)
solves the above differential equation directly by
replacing the derivatives with finite-difference
approximations.  This approach is used by computer
program HAR03, documented by EPA, Region II(6), to
analyze systems of consecutive reactions.  The
Thomann solution technique will also be the approach
followed in program FEDBAK03.

                                                         If we take the first reactant N-]  on figure  1  and we
                                                         incorporate all the feedback loops, equation  (2)
                                                         becomes:
                                                                                                        (3)
                                                                                      K51N5+W1
                                                         where  K-JJ  are  the  appropriate first order reaction
                                                         rate constants.  The incorporation of the reaction
                                                         term KH allows  for the first order decay of this
                                                         first  component  out of the system.

                                                         For the  other  remaining reactants equation (2)  would
                                                         be  written as:

                                                           Q   E  d2N2 _ U dN2 _ K22N2+K12N2+K32N3+K42N4+

                                                                                      K52N5+W2  ........ (4)
                                                                                      K53N5+W3  ........ (5)
                U 6N_ -  KN +
                6x
                          WN(x)
(1)
                                                     166

-------
   0 = E d²N4/dx² - U dN4/dx - K44 N4 + K14 N1 + K24 N2
        + K34 N3 + K54 N5 + W4 .......................... (6)

   0 = E d²N5/dx² - U dN5/dx - K55 N5 + K15 N1 + K25 N2
        + K35 N3 + K45 N4 + W5 .......................... (7)
The Thomann solution approach  divides the system
into completely mixed segments, as illustrated for a
one dimensional estuary in figure 2.
                       FIGURE 2
           SEGMENTED ONE DIMENSIONAL ESTUARY

where segments 1 and 'n' each form an interface with
the boundaries.  Equation (3) for the "i" segment on
figure (2) can be written as:
   E_i,j A_i,j d²N1,i/dx² - Q_i,j dN1,i/dx - V_i K11,i N1,i
        + V_i (K21,i N2,i + K31,i N3,i + K41,i N4,i
        + K51,i N5,i) + W1,i = 0 ........................ (8)

The derivatives on the equation above can be replaced
by finite-difference approximations for the dispersive
and advective transport:

   E'_i,i+1 (N1,i+1 - N1,i) + E'_i-1,i (N1,i-1 - N1,i) .. (9A)

   Q_i,i+1 (α_i,i+1 N1,i + β_i,i+1 N1,i+1)
        - Q_i-1,i (α_i-1,i N1,i-1 + β_i-1,i N1,i) ....... (9B)

   E'_i,j = E_i,j A_i,j / L_i,j ......................... (9C)

where:
   i,j   = subscripts used to denote the interface
           between adjacent segments i and j
   L_i,j = average length of segments i and j
   α_i,j and β_i,j are weight factors to correct the
           concentrations from equation (9B)

   ...................................................... (10)

   ...................................................... (11)

In order for the final concentrations to be positive,
it is required that:

   ...................................................... (12)

Substituting equations (9A) and (9B) into equation
(8) yields:

   ...................................................... (13)

Grouping terms in the above equation yields:

   ...................................................... (14)

Letting:

   a_i,i-1 = ............................................ (15)

   a_i,i   = ............................................ (16)

   a_i,i+1 = ............................................ (17)

The general equation for the ith segment becomes:

   a_i,i-1 N1,i-1 + a_i,i N1,i + a_i,i+1 N1,i+1 = W1,i
        + V_i (K21,i N2,i + K31,i N3,i + K41,i N4,i
        + K51,i N5,i) ................................... (18)

The use of this finite difference approximation
scheme has a numerical dispersion, which can be
approximated as(5)

   E_num = U l (α - 1/2) ................................ (18A)

where E_num is the numerical dispersion.  This is par-
ticularly important to stream applications where
advective velocities may be high and this effect may
lead to distorted results.

Extension to multi-dimensional analysis:

If we consider a grid of orthogonally shaped
sections such as the one illustrated in figure 3:

                        FIGURE 3
              HYPOTHETICAL TWO DIMENSIONAL SYSTEM
                                                      167

-------
Following the convention that flow entering a sec-
tion is negative and flow out of a section is posi-
tive, a mass balance due to the transport and disper-
sion of material from section i to all surrounding
sections k(s) is:(4,5,8)

   V_i dN1,i/dt = ....................................... (19)

This equation is the equivalent to equation (13) for
the one dimensional case.  The generalization of the
advection term is possible since:

   -Q_ik α_ik = ......................................... (20)

and

   -Q_ik β_ik = ......................................... (21)

Using equation (19), if the terms containing the de-
pendent variable N1,i are grouped on the left hand
side and the direct loads of this component and the
terms for the formation of N1 due to other components
are placed on the right hand side, one obtains:(8)

   Σ_k a_ik N1,k = W1,i + V_i (K21,i N2,i + K31,i N3,i
        + K41,i N4,i + K51,i N5,i) ...................... (22)

where:

   a_ik = ............................................... (22A)

   a_ii = ............................................... (22B)

For sections where flow enters a section from the
boundary with a concentration c_B:

   ...................................................... (23)

and the forcing function at the boundary is added to
the direct loads at that section by:

   ...................................................... (24)

For sections forming a boundary, where flow leaves
this section to an area with a concentration c_o:

   ...................................................... (25)

and

   ...................................................... (26)
The set of equations for component N1 for n spatial
sections in the system described in figure 1 would be:

   a11 N1,1 + a12 N1,2 + a13 N1,3 + a14 N1,4 + ... + a1n N1,n =
        W1,1 + V1 K21,1 N2,1 + V1 K31,1 N3,1 + ... + V1 K51,1 N5,1 ...... (27)

   a21 N1,1 + a22 N1,2 + a23 N1,3 + a24 N1,4 + ... + a2n N1,n =
        W1,2 + V2 K21,2 N2,2 + V2 K31,2 N3,2 + ... + V2 K51,2 N5,2 ...... (28)

   a31 N1,1 + a32 N1,2 + a33 N1,3 + ... + a3n N1,n =
        W1,3 + V3 K21,3 N2,3 + V3 K31,3 N3,3 + ... + V3 K51,3 N5,3 ...... (29)

   an1 N1,1 + an2 N1,2 + an3 N1,3 + ... + ann N1,n =
        W1,n + Vn K21,n N2,n + Vn K31,n N3,n + ... + Vn K51,n N5,n ...... (30)
In matrix notation equations  (27)  thru  (30)  can  be
written as:

   [A1](N1) = (W1) + [VK21](N2) + [VK31](N3) + [VK41](N4)
        + [VK51](N5) .................................... (31)

where:

[A1] is a square matrix of n order, containing the
     a's as defined on equations (22A) and (22B);
     note that the main diagonal has the reaction
     rate constant K11
(N1),(N2),...,(N5) are nx1 vectors of the reactant
     over all sections
(W1) is an nx1 vector of the waste loads for react-
     ant N1 for all sections
[VK21],[VK31],[VK41] and [VK51]: each of these is an
     nxn diagonal matrix of the section volume and
     the first order reaction coefficient at that
     segment.

A similar analysis as above for the second reactant
N2 on figure 1 yields:

   [A2](N2) = (W2) + [VK12](N1) + [VK32](N3) + [VK42](N4)
        + [VK52](N5) .................................... (32)

where:

[A2] is an nxn matrix similar to [A1], but the
     main diagonal contains the reaction rate
     constant K22
(W2) is an nx1 vector of direct waste loads for
     component N2 over the n sections

For the other reactants N3, N4, N5 similar equations
are generated:

   [A3](N3) = (W3) + [VK13](N1) + [VK23](N2) + [VK43](N4)
        + [VK53](N5) .................................... (33)

   [A4](N4) = (W4) + [VK14](N1) + [VK24](N2) + [VK34](N3)
        + [VK54](N5) .................................... (34)

   [A5](N5) = (W5) + [VK15](N1) + [VK25](N2) + [VK35](N3)
        + [VK45](N4) .................................... (35)

The above matrix equations (31) thru (35) can be
written as a matrix of matrices:(5,8)
                                                     168

-------
   |  [A1]    -[VK21]  -[VK31]  -[VK41]  -[VK51] | | (N1) |   | (W1) |
   | -[VK12]   [A2]    -[VK32]  -[VK42]  -[VK52] | | (N2) |   | (W2) |
   | -[VK13]  -[VK23]   [A3]    -[VK43]  -[VK53] | | (N3) | = | (W3) |  ..... (36)
   | -[VK14]  -[VK24]  -[VK34]   [A4]    -[VK54] | | (N4) |   | (W4) |
   | -[VK15]  -[VK25]  -[VK35]  -[VK45]   [A5]   | | (N5) |   | (W5) |

or

   [A](N) = (W) ......................................... (37)

where [A] is the 5n x 5n matrix above and (N) and
(W) are 5n x 1 vectors.  The solution for the five
reactants over all the spatial sections is given by

   (N) = [A]^-1 (W) ..................................... (38)
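
The inversion in equation (38) maps directly onto standard dense linear-algebra
routines.  The sketch below (in Python/NumPy rather than the Fortran IV of the
original program) shows one way the block system of equation (36) could be
assembled and solved; the names solve_block_system, A_blocks, VK and W are
illustrative and are not taken from FEDBAK03.

   # Illustrative sketch only -- not the FEDBAK03 source.  Assemble the block
   # system of equation (36) for r reactants over n sections and solve
   # (N) = [A]^-1 (W) as in equation (38).
   import numpy as np

   def solve_block_system(A_blocks, VK, W):
       """A_blocks[j] : (n, n) matrix [Aj] for reactant j (0-based).
       VK[(k, j)]     : (n,) diagonal of [VKkj], formation of j from k (k != j).
       W[j]           : (n,) direct-load vector (Wj).
       Returns a list of (n,) arrays of steady-state concentrations (Nj)."""
       r, n = len(A_blocks), A_blocks[0].shape[0]
       A = np.zeros((r * n, r * n))
       b = np.zeros(r * n)
       for j in range(r):                               # block row j
           A[j*n:(j+1)*n, j*n:(j+1)*n] = A_blocks[j]    # diagonal block [Aj]
           b[j*n:(j+1)*n] = W[j]
           for k in range(r):
               if k != j and (k, j) in VK:              # off-diagonal -[VKkj]
                   A[j*n:(j+1)*n, k*n:(k+1)*n] = -np.diag(VK[(k, j)])
       N = np.linalg.solve(A, b)                        # (N) = [A]^-1 (W)
       return [N[j*n:(j+1)*n] for j in range(r)]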

Application of  the  theory by the computer program

The program described follows a modular approach in
which the user  specifies to the main line program the
options desired, and subroutines are called accord-
ingly to perform specific tasks.

The steps to  be accomplished can be summarized as:

a) Input the  physical characteristics of the system;
   namely, the geometry, temperature, hydrologic
   characteristics, reaction schemes and correspond-
   ing reaction rates.
b) Calculate E' and α's for all the sections as
   described on equations (9C) and (10).  In order to
   handle the constraint stated on equation (12), the
   program tests the expression:

      α_ij < 1 - E'_ij/Q_ij ............................. (39)

   and if such is the case α_ij is recalculated as:

      α_ij = 1 - E'_ij/2Q_ij ............................ (40)

   which places α_ij well within the tolerable range.

c) Set up the system matrix [Ai] by computing its
   elements as given on equations (22A) and (22B).
   It should be noted that the difference between the
   [Ai] matrices in equations (31) thru (35) is the
   addition of a separate main diagonal term Vi Kjj,i.
   In order to conserve space, this matrix is set up
   without this term and during the creation of the
   matrix of matrices [A], the appropriate Vi Kjj,i
   term is added.
d) Set up the matrix of matrices [A]; this is done by
   combining an offline disk file containing all the
   Vi Ki terms for the system and the [Ai] matrix.
e) Input of  direct  discharges into  the system and
   boundary  concentrations,  and from these compute
   the system source vector  (W) as  described on  equa-
   tions (24) and (26).
f) Solve for the reactant concentrations at all seg-
   ments by inverting the matrix [A] and multiplying
   it by the waste vector (W).

Optionally, the program can also perform system sen-
sitivity analysis by varying the waste vector (W) and
re-multiplying by [A]^-1 and/or changing the reaction
rate constants for any reactants and repeating step
(f).  A second option is the computation of dissolved
oxygen deficit and the corresponding dissolved oxygen
concentration by selecting the reaction schemes pro-
ducing the deficit and the associated stoichiometric
coefficient.
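
Of the options above, the sensitivity analysis benefits from the fact that only
the waste vector changes between runs: once [A] has been factored, each new (W)
needs only a back-substitution.  A minimal sketch of that idea, with SciPy's LU
routines standing in for whatever inversion routine the program actually uses:

   # Sketch of the sensitivity-analysis option: factor [A] once, then evaluate
   # the response for any number of candidate waste vectors (W).  Changing the
   # reaction rate constants alters [A] itself and requires a new factorization.
   from scipy.linalg import lu_factor, lu_solve

   def sensitivity_runs(A, W_cases):
       """Return the concentration vector (N) for each waste vector in W_cases."""
       lu_piv = lu_factor(A)                    # factor [A] once
       return [lu_solve(lu_piv, W) for W in W_cases]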
The computer program has been written for the  IBM  370
with a Fortran (IV) G or H level compiler.  The pro-
gram occupies 140K of core to execute and takes  35
CPU seconds to solve a 10 segment, 8 component system.
As presently written, the program can accommodate  a
multi-dimensional system of up to sixty sections and
each section can have a maximum of six interfaces.
The maximum number of reactants is such that when
multiplied by the number of sections cannot exceed
120.  This present limitation can be easily expanded.

Application to nitrification

Figure 4 is a schematic representation of the nitro-
gen cycle:
                                                             FIGURE  4
                                               MAJOR FEATURES OF THE NITROGEN  CYCLE

                                       Since waste  loads are usually in the form of organic
                                       nitrogen or  ammonia,  these species will consume oxy-
                                       gen by the bacterial  reactions:(7)
   NH4+ + 3/2 O2  --Nitrosomonas bacteria-->  NO2- + 2H+ + H2O ...... (41)

followed by

   NO2- + 1/2 O2  --Nitrobacter bacteria-->  NO3- ................... (42)

From the stoichiometry of the reaction in equation
(41), it takes 3.43 grams of oxygen for the oxidation
of one gram of ammonia as nitrogen to nitrite.  The
second reaction takes 1.14 grams of oxygen for the
oxidation of one gram of nitrite as nitrogen to ni-
trate.  The entire oxidation process therefore takes
4.57 grams of oxygen per gram of ammonia nitrogen.
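
As a quick worked example of this stoichiometry (the load used is purely
hypothetical):

   # Nitrogenous oxygen demand implied by equations (41) and (42).
   G_O2_PER_G_N_STEP1 = 3.43        # NH4+ -> NO2-, eq. (41)
   G_O2_PER_G_N_STEP2 = 1.14        # NO2- -> NO3-, eq. (42)

   ammonia_n_load = 100.0           # kg N/day, a made-up illustrative figure
   oxygen_demand = ammonia_n_load * (G_O2_PER_G_N_STEP1 + G_O2_PER_G_N_STEP2)
   print(oxygen_demand)             # 457.0 kg O2/day, i.e. 4.57 g O2 per g N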

                                                          Letting:

   N1 = organic nitrogen
   N2 = ammonia nitrogen
   N3 = nitrite nitrogen
   N4 = nitrate nitrogen
   N5 = plant and animal nitrogen

The system to be solved would be the one shown in
figure 5, with the deficit reaction rates
K26 = 3.43 K23 and K36 = 1.14 K34.

                         FIGURE 5
               NITROGEN CYCLE WITH DEFICIT COMPONENT
                                                      169

-------
If we assume these reactions to follow first order
kinetics, the system can readily be solved using
program FEDBAK03.  The computation of dissolved oxy-
gen deficit can be accomplished two ways.  A deficit
"species" can be defined (noted N6 in figure 5), the
decay of which is the reaeration rate, Ka.  The reaction
schemes producing deficit are then defined, and the
corresponding reaction rate would be the product of
the stoichiometric coefficient by the reaction rate
of the reaction using up oxygen.  A second method is
to compute deficit concentrations directly; proceeding
as done for component N1 in equations (22) thru (31),
one obtains

   [B](D)1 = 3.43[VK23](N2) + 1.14[VK34](N3) ............ (43)

where [B] is a matrix similar to the [Ai] matrices
of equations (31) thru (35), except that the main
diagonal term has the reaeration rate Ka instead of
Kii.  (N2) and (N3) are nx1 vectors of the steady-
state concentrations of these reactants, and (D)1 is
an nx1 vector of the deficit concentrations over all
segments due to the oxidation of ammonia and nitrite.
The solution for the deficit concentration over all
space is given by:

   (D)1 = [B]^-1 (3.43[VK23](N2) + 1.14[VK34](N3)) ...... (44)

This method is used to compute deficit in program
FEDBAK03, by using the optional subroutine.
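
The second method is a single linear solve; a sketch with modern tools (the
function name is illustrative, and [B], [VK23], [VK34] are the matrices defined
above):

   # Sketch of equations (43)-(44): solve [B](D) = 3.43[VK23](N2) + 1.14[VK34](N3).
   import numpy as np

   def deficit(B, VK23, VK34, N2, N3):
       """Return the nx1 deficit vector (D) over all segments."""
       rhs = 3.43 * (VK23 @ N2) + 1.14 * (VK34 @ N3)
       return np.linalg.solve(B, rhs)

   # The dissolved oxygen concentration then follows as DO = DO_saturation - D.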

 The application to nitrification and dissolved oxygen
 deficit assumed first order kinetics for the bacter-
 ial reactions. This should be confirmed by laboratory
 studies, or the nature of the system should be care-
 fully considered. This computer model has been found
 to be very useful as a predictive tool and in provi-
 ding insights into the behavior of nitrogen species in
 the aquatic environment.  Its usefulness in new appli-
 cations will ultimately depend on the applicability of
 the underlying assumptions to the system of interest.

 ACKNOWLEDGEMENTS

 The author is grateful to Steve Chapra of the Great
 Lakes Environmental  Research Laboratories and Richard
 Winfield of the Manhattan College Department of Envi-
 ronmental Engineering and Science for their review
 and comments on this paper.

 The data for test applications of this  model  were made
 available by Dr. Richard  Tortoriello of the Delaware
 River Basin Commission.  His  input to this project is
 gratefully acknowledged.

 REFERENCES

 (1) O'Connor, D.J.,  "Oxygen  Balance of an Estuary".
 Jour. San.  Eng.  Div.  ASCE,  Vol  86,  May 1960,  pp 35-55

 (2) O'Connor, D.J.,  "The  Temporal  and Spacial  Distri-
 bution of Dissolved Oxygen  in Streams".  Water Resour-
 ces Research, Vol  3, No.  1,  1967,  pp 65-79.

 (3) Chapra, S.C.  and Gordimer S.,  ES001  A Steady Sta-
 te, One Dimensional,  Estuarine Water Quality Model,
 USEPA,Region II,  New York,  N.Y.  September 1973.

 (4) Thomann,  R.V.,  "Mathematical  Model  for  Dissolved
 Oxygen".  Jour.  San.  Eng.  Div.,  ASCE,  Vol  89,  No.  SA5,
 October 1963, pp 1-30.

 (5) Thomann,  R.V.,  System Analysis  and  Water Quality
Management, Environmental  Science  Division,  New York,
 1971.
     (6)  Chapra,  S.C.  and Nossa, G.A.  HAR03 A Computer
     Program for  the Modelling of Water Quality Parameters
     in Steady State Multidimensional  Natural Aquatic Sys-
     tem..  USEPA,  Region II,  New York, N.Y. October 1974

     (7)  Stratton, F.E. and McCarty, P.L., "Prediction of
     Nitrification  Effects on the Dissolved Oxygen Balance
     of Streams"  Env.  Sci. and Tech.,  Vol. 1, No. 5, May
     1967,  pp 405-410.

     (8)  O'Connor,  D.J., Thomann, R.V., and Di Toro, D.M.,
     Dynamic Water  Quality Forecasting and Management,
     USEPA,  Office  of  Research and Development, Wash.,D.C.
     Report  No. EPA-660/3-73-009, August 1973
170

-------
                              MODELING THE HYDRODYNAMIC  EFFECTS  OF LARGE  MAN-MADE

                                            MODIFICATION TO  LAKES
                                                 John F. Paul
                                         Department of Earth Sciences
                                        Case Western Reserve University
                                             Cleveland, Ohio 44106
                                                 currently at
                                         Large Lakes Research Station
                                   Environmental Research Laboratory-Duluth
                                           Grosse Ile, Michigan 48138
     A three  dimensional  hydrodynamic model is des-
cribed which  can  be  used  as  a  predictive tool for as-
sessing the possible effects of  large man-made modi-
fications  to  lakes.   The  example of the proposed jet-
port island in  Lake  Erie  is  used as a sample applica-
tion of the model.

                  Introduction

     The real value  of numerical models is in their
predictive capability.  By this  is meant their abil-
ity to be  used  for physical  situations that are dis-
tinctly different from those for which they have been
developed. The major use of models has so far been
in the verification  sense, that  is, they have been
developed  to  agree with existing sets of data.  The
purpose of this paper is  to  present an example of
the predictive  use of one particular hydrodynamic
numerical model.(1,2,3)

     A new jetport has been  proposed to be built in
the vicinity  of Cleveland, Ohio.  One possible site
being considered  is  a to-be-built dyked area in Lake
Erie near  Cleveland.  As  part  of the feasibility
studies for the proposed  lake  jetport, a numerical
model describing  the hydrodynamics of the Lake Erie
area near  Cleveland  was developed to help determine
the possible  effects of such a jetport on the summer
temperature structure in  the lake.

     A numerical  model for a situation such as the
proposed jetport  has several advantages.  First, once
the model  is  developed, simulations are relatively
inexpensive to  produce, compared to building and run-
ning a physical model or  conducting field surveys.
For example,  the  numerical model to be discussed re-
quires approximately twenty  minutes of CPU time for
one day of real-time simulation.  Second, it is ex-
tremely easy  to simulate  different physical condi-
tions on the  lake, e.g.,  different wind directions
and speeds, and different thermal structure.  Third,
it is a simple  task  to alter the model geometry to
simulate the  effect  of different jetport configura-
tions.   In this way  the model  could be considered as
a design tool.

     Another  advantage of a  numerical model is that
it may be  the only alternative for assessing a pro-
posed modification to a lake.  For this jetport ex-
ample,  field  data can only be  used to tell what is
happening  in  the  lake at  the present time, not what
happens after a jetport is built in the lake.  One
conception of the jetport is a two mile by three mile
island located five miles off Cleveland.  No previous
experience with modifications of this scale to large
lakes is available.  A physical model of this situa-
tion would be extremely expensive, and even if it
were built, its results may be questionable due to
the extreme distortion required in the model and the
inability to properly represent some of the physical
mechanisms occurring in the lake.

        Description of the Numerical Model

     The equations for the numerical model are de-
rived from the time-dependent, three-dimensional
equations of motion for a viscous, heat-conducting
fluid.  The basic assumptions used in the model are:
(a) The Boussinesq approximation is valid.  This as-
sumes that density variations are small and can be
neglected in the equations of motion except in the
gravity term.  The coupling between the energy and
momentum equations is retained.  (b) Eddy coefficients
are used to account for turbulent diffusion effects
in both the momentum and energy equations.  The hori-
zontal eddy coefficients are assumed constant but the
vertical eddy coefficients vary depending on the ver-
tical temperature gradient and other parameters.  (c)
The rigid-lid approximation is valid, i.e., the ver-
tical velocity at the undisturbed water surface is
zero.  This approximation is used to eliminate sur-
face gravity waves and the small time scales associ-
ated with them, greatly increasing the maximum time
step possible in the numerical computations.  In this
approximation, only the high frequency surface varia-
tions associated with gravity waves are neglected.
(d) The pressure is assumed to vary hydrostatically.

     The model equations, as described in detail by
Paul and Lick(3), are:

     1.  the three-dimensional, incompressible
         continuity equation,

     2.  two time-dependent, three-dimensional
         horizontal momentum equations,

     3.  the time-dependent, three-dimensional
         temperature equation,

     4.  the equation of state,

     5.  the Poisson equation for the pressure.

     The boundary conditions used with the above
equations are as follows.  The bottom and shore are
taken as no-slip, impermeable, insulated surfaces.
A heat transfer condition proportional to a temper-
                                                      171

-------
                       FIGURE 1.   AREA OF LAKE ERIE CONSIDERED FOR APPLICATION OF THE MODEL

ature difference(1) and a wind-dependent stress are
imposed at the water surface.  The pressure boundary
conditions are derived from the appropriate horizon-
tal momentum equation.  Along the open water bound-
aries either velocity and temperature values are
specified or normal derivatives of the velocity and
temperature are set to zero.

     The equations and boundary conditions are put
into appropriate finite differences form in both
space and time.  A strictly conservative numerical
scheme is used in the model.  In addition, a stretch-
ing of the vertical coordinate proportional to the
local depth is used.  With this transformation, the
same number of vertical grid points are present in
the shallow as in the deeper parts of the lake.  This
ensures that in the shallow areas there is no loss
of accuracy in the computations due to lack of ver-
tical resolution.  Refer to the report by Paul and
Lick(3) for details.
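
The stretching described here is what is now commonly called a sigma coordinate.
A minimal sketch of the idea follows; the grid dimensions and the uniform depth
are illustrative and are not those of the Lake Erie grid.

   # Sigma-coordinate sketch: every water column gets the same number of levels,
   # with physical depth z = sigma * h(x, y).  Values are illustrative only.
   import numpy as np

   def sigma_levels(depth, n_levels):
       """depth: (nx, ny) local depths; returns z with shape (nx, ny, n_levels)."""
       sigma = np.linspace(0.0, 1.0, n_levels)    # 0 = surface, 1 = bottom
       return depth[:, :, None] * sigma

   depth = np.full((16, 16), 15.2)                # e.g. a uniform 15.2 m area
   z = sigma_levels(depth, n_levels=10)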

 Application of the Numerical Model to the Jetport

     The section of Lake Erie considered is a sixteen
mile by sixteen mile area near Cleveland (Figure 1).
The jetport configuration used in this example is a
two mile by three mile island five miles from Cleve-
land in approximately fifty feet (15.2 m) of water.
The numerical model has been run with and without the
jetport island.  Sample results are presented for
14.8 hours after the start of a 12 mph (5.4 m/sec)
wind from the south.  The lake is initially stratified
with a thermocline depth of 30 ft (9.15 m), epilimnion
temperature of 75°F (24°C), and hypolimnion tempera-
ture of 55°F (13°C).  Figures 2 through 5 show results
without the jetport island and Figures 6 through 9
show results with the jetport island.
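
The initial condition just quoted amounts to a two-layer temperature field.  A
small sketch under that reading (the 10-level column is an assumption, not the
model's actual vertical resolution):

   # Two-layer initial stratification: 24 C above the 9.15 m (30 ft) thermocline,
   # 13 C below it, on an illustrative column of 15.2 m (50 ft) depth.
   import numpy as np

   def initial_temperature(z, thermocline=9.15, t_epi=24.0, t_hypo=13.0):
       """z: depths in metres, positive downward."""
       return np.where(z < thermocline, t_epi, t_hypo)

   z = np.linspace(0.0, 15.2, 10)
   T0 = initial_temperature(z)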

     Comparing the horizontal isotherm plots (Figures
2, 3, 6, 7) for the two cases, it is apparent that the
jetport island influences the temperature of the lake
over a large distance (about 6 to 8 miles) from the
island.  The velocity plots (Figures 4, 5, 8, 9) also
indicate a large region of influence.  This effect is
due to the upwelling of cold water on the eastern
edge of the island and downwelling of warm water on
the western edge.  These upwellings and downwellings
result in changes to the stratification structure in
that area of the lake.  Since this is a variable-den-
sity model, changes in the temperature structure do
cause changes in the velocity pattern.  Using a con-
stant-density, free surface model, Sheng(4) found that
the jetport island only exerted an influence over a
distance of one to two miles into the lake.

     The results presented are for only one parti-
cular wind direction.  As the wind shifts, the up-
welling and downwelling regions change their posi-
tions around the island.  Thus, it can be seen that
the effect of the proposed jetport island during the
summer season would be to erode the thermocline in
that area of the lake.  The mixing of epilimnion and
hypolimnion waters may be considered in one way to
be beneficial since it will keep the area of the
lake affected from going anoxic in the late summer,
but in another way it may not be considered benefi-
cial because of the increased nutrient input to the
epilimnion.  Also, this forcing of warm water to the
bottom may be detrimental to aquatic species depend-
ent upon colder waters for their existence or repro-
ductive activities.

     The model also could be used to predict the ef-
fect of different jetport configurations, for example,
a jetport peninsula instead of a jetport island.  The
results presented are qualitative and indicate how a
numerical model might be used to predict the effects
of large man-made modifications to lakes.

                  Acknowledgement

     This work was supported by the U. S. Environ-
mental Protection Agency and the U. S. Army Corps of
Engineers.  I would like to thank Dr. W. J. Lick for
his advice while this work was being performed.

                   Bibliography

1.  J. F. Paul and W. J. Lick.  A numerical model for
    three-dimensional, variable-density jet.  Techni-
    cal Report, Division of Fluid, Thermal and Aero-
    space Sciences, Case Western Reserve University,
    Cleveland, Ohio, 1973.
                                                      172

-------
               FIGURE 2.  SURFACE ISOTHERMS FOR MODEL WITHOUT JETPORT

               FIGURE 3.  ISOTHERMS AT 40 FT FOR MODEL WITHOUT JETPORT

               FIGURE 4.  SURFACE VELOCITIES FOR MODEL WITHOUT JETPORT

               FIGURE 5.  VELOCITIES AT 40 FT FOR MODEL WITHOUT JETPORT

               (Isotherm legend: 57.1°F to 73.9°F; velocity scale: 15 cm/sec; wind direction shown on each panel.)

-------
               FIGURE 6.  SURFACE ISOTHERMS FOR MODEL WITH JETPORT

               FIGURE 7.  ISOTHERMS AT 40 FT FOR MODEL WITH JETPORT

               FIGURE 8.  SURFACE VELOCITIES FOR MODEL WITH JETPORT

               FIGURE 9.  VELOCITIES AT 40 FT FOR MODEL WITH JETPORT

               (Isotherm legend: 57.1°F to 73.9°F; velocity scale: 15 cm/sec; wind direction shown on each panel.)

-------
2.   J.  F.  Paul and W. J. Lick.   A numerical model for
    thermal plumes and river discharges.   Proc.  17th
    Conf.  Great Lakes Res.,  IAGLR, 1974,  pp. 445-455.
3.  J. F. Paul and W. J. Lick.  Application of a
    three-dimensional  hydrodynamic model to study the
    effects of a proposed jetport island on the ther-
    mocline structure  in Lake Erie.  Report 17-6 of
    the Lake Erie International Jetport Model Feasi-
    bility Investigation.  U.S. Army Engineer Water-
    ways Experiment Station,  Vicksburg, Miss. 1975.
4.  Y. P. Sheng.  The wind-driven currents and con-
    taminant dispersion in the near-shore of large
    lakes.  Report 17-6 of the Lake Erie International
    Jetport Model Feasibility Investigation.  U. S.
    Army Engineer Waterways Experiment Station, Vicks-
    burg, Miss.  1975.
                                                      175

-------
                       AN EMPIRICAL MODEL FOR NUTRIENT ACCUMULATION  RATES  IN  LAKE  ONTARIO
                   Patricia A.A. Clark
          U.S. Environmental Protection Agency
                   Rochester, New York

                   Jane P. Sandwick
          U.S. Environmental Protection Agency
                   Rochester, New York

                   Donald J. Casey
          U.S. Environmental Protection Agency
                   Rochester, New York

                   Anthony Solpietro
          U.S. Environmental Protection Agency
                   Rochester, New York

                       Abstract

Based on the chemical concentration data collected
during the International  Field Year for the Great Lakes
(IFYGL)--May 1972 through June 1973, monthly average
rates of chemical accumulation have been determined
for total phosphate (TP), nitrite-nitrate (NO2-NO3),
ammonia (NH3), total Kjeldahl nitrogen (TKN), total
organic carbon (TOC), and sulfate (SO4).  The accumulation
rates are the consequence of such  processes as biochem-
ical transformation processes, sediment exchanges, etc.
The model relates the accumulation rate of a particular
substance with the rate of exchange of the total  mass
of that substance in the lake and  with the total  net
loading rate to the lake (tributaries, direct indus-
trial, direct municipal  and on-lake precipitation).

The total masses of each chemical  substance for each of
the 11 cruises (Figures 1-6)  have  been calculated using
the numerical integration computer program SPLOTCH
(Boyce 1973) with the input of concentration measure-
ments which were collected from about 75 stations on
the lake at depths of 1,5,10,20,25,30,40,50,100,150
meters and at the lake bottom.3 This study is de-
scribed by Casey, Clark and Sandwick (1976) together
with the U.S. tributary loading rates and the direct on-
lake precipitation loading rates.4  Canadian tributary
loading rates for the same period  were presented  by
Casey and Salbach (1975).5  The mass balance equation
relating these quantities and the  accumulation rate will
now be derived.  All quantities in the equation can be
evaluated directly on the basis of the measured lake
concentrations and loading rates so that the equation
can be solved for the accumulation rate in each case.
In addition, analysis of the equation will provide a
means for the assessment of certain assumptions which
are commonly made in large lake limnology.

             Accumulation Rate Equation

It is convenient to begin with the hydrodynamic equation
for the conservation of mass in the integral form (see,
for example, Batchelor 1967),

   d/dt ∫_V c_i dV = -∮_S c_i v·n dS + ∫_V r_i dV ........ (1)

where c_i is the concentration of chemical species i, v
is the flow velocity and r_i is the rate of accumulation
(or loss) per unit volume of the same species.  V is the
volume of the fluid (in this case the volume of Lake
Ontario), S is the total surface bounding the volume of
the lake, and n is the outward unit normal on S.

The term on the left-hand side of eq. (1) can be written
as
   dm_i/dt = V d(m_i/V)/dt + (m_i/V) dV/dt ............... (2)

where m_i = ∫_V c_i dV is the total mass of chemical
species i in the lake at time t and m_i/V is its mean
concentration.  C_2i designates the second term on the
right of eq. (2).  This term can be neglected whenever
(Δc/c) >> (ΔV/V).  The U.S. Army Corps of Engineers
measured a change of about 1 meter in the level of Lake
Ontario (Monthly Bulletin of Lake Levels, 1972 and
1973).  This corresponds to a volume increment of about
20 km³ so that ΔV/V ≈ .012.  If this is compared with
(Δc/c) ≈ .20 for total phosphate (see Table 1), which
has about the smallest concentration variation of any
of the chemical substances studied, it is apparent that
(Δc/c) >> (ΔV/V) will hold for all substances.
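
The volume ratio quoted above is easy to check; the mean volume used below
(roughly 1640 km³ for Lake Ontario) is an assumed round figure, not a value
given in the paper.

   # Check of Delta-V/V for a ~1 m level change.
   delta_V = 20.0      # km^3, volume increment from the ~1 m level change
   V = 1640.0          # km^3, assumed mean volume of Lake Ontario
   print(delta_V / V)  # ~0.012, in line with the value quoted in the text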

The second term in eq. (1) is the net loading rate

   -∮_S c_i v·n dS = L_i^T + L_i^R + L_i^S ............... (3)

where L_i^T is the net loading rate (inflow minus out-
flow) due to tributary stream flow, L_i^R is the load-
ing rate due to rainfall directly on the lake surface
and L_i^S is the net loading due to sediment (sediment
release - sediment adsorption).  L_i^T and L_i^R are shown
in Tables 1, 3, 4, 6, 7, 9.  Calculations of L_i^R are
based on precipitation chemistry measurements re-
ported by Shiomi and Kuntz (1973) and by Casey et al.
(1975) and monthly totals of lake precipitation measured
by Bolsenga and Ragman (1975).(10,4,2)  L_i^S must be
either estimated or calculated.

The third term in eq. (1) is the total net rate of
production of species i

   ∫_V r_i dV = T_i ...................................... (4)

where T_i is a function of time.

Substituting eqs. (2), (3), and (4) into eq. (1) re-
sults in the following equation:

   dm_i/dt = L_i^T + L_i^R + L_i^S + T_i ................. (5)

All quantities in eq. (5) can be determined from mea-
surements except the sum L_i^S + T_i = S_i, so this sum
can be obtained from equation (5).  Eq. (5) may be re-
written as

   dm_i/dt = L_i + S_i ................................... (6)

where L_i = L_i^T + L_i^R.

Equation (6) is similar to that obtained by
Vollenweider (1969),

   dm_w/dt = J - (Q/V) m_w - σ m_w ....................... (7)
                                                        176

-------
where m_w is the total amount of substance w in the lake
at time t, J is the rate of tributary loading of sub-
stance w to the lake, Q is the mean discharge out of
the lake, V is the mean volume of the lake and σ is
the sedimentation rate coefficient.(12)

In comparing eqs. (6) and (7), L_i is the net loading
rate (inflow - outflow) obtained directly from measure-
ment.  The comparable terms in eq. (7), J - Q m_w/V, in-
volve an assumption about the outflow.  The actual out-
flow and the Q m_w/V assumed form are compared.  The in-
clusion of a surface contribution in the source term
in these equations seems prudent in view of the dis-
cussion by Dillon and Kirchner (1975), Kirchner and
Dillon (1975), Dillon (1975) and Chapra (1975) with re-
gard to phosphate.(8,9,7,6)
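
The comparison referred to here is a simple monthly calculation.  The sketch
below shows only the form of the assumed outflow term; the inputs would come
from Tables 1-9 and the lake volume, and no measured values are reproduced.

   # Outflow term assumed in eq. (7): Q * m_w / V.  Compare this, month by month,
   # with the measured St. Lawrence loading columns of Tables 1, 3, 4, 6 and 7.
   def assumed_outflow_load(Q, V, m_w):
       """Q: mean discharge (volume/day); V: mean lake volume; m_w: total mass.
       Returns the implied outflow loading in mass/day."""
       return Q * m_w / V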

dm_i/dt is obtained from a numerical differentiation of
m_i(t) calculated by means of the SPLOTCH program.  It
is assumed that the chemical masses so obtained are
characteristic of the average chemical masses in the
lake for the month during which the cruise occurred.
The assumption seems justified on the basis of the
relatively smooth progression of mass determinations
from cruise to cruise.  For those months for which no
cruises took place, linearly interpolated values have
been obtained.  The monthly variations in chemical mass
contents of the lake are plotted in Figures 1-6 and
will be discussed in the accumulation rates section.
Numerical differentiation of m_i with respect to t is
performed by passing a parabola through 3 successive
monthly mass values, m_1, m_2, m_3.  The derivative at
the mid point is given by

   dm_2/dt = (m_3 - m_1)/(2h) + O(h²) .................... (8)

where h = t_n - t_(n-1) (see, for example, Wylie, 1951).
In this case h = 1 month; however, dm/dt is expressed in
units of metric tons/day.  The monthly values of dm/dt
for each substance are provided in Tables 2, 5, 8.
These tables also include the total monthly loading
rate L and the calculated value of the monthly source
term S for each substance.
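
The two computational steps just described, central differencing of the monthly
mass record (eq. (8)) and the source term from eq. (6), are short enough to
sketch directly; the masses and loads below are placeholders, not IFYGL values.

   # S = dm/dt - L, with dm/dt from the central-difference formula (8).
   import numpy as np

   def source_term(monthly_mass, monthly_load, days_per_month=30.4):
       """monthly_mass: metric tons; monthly_load: net loading, metric tons/day.
       Returns S (metric tons/day) for the interior months."""
       m = np.asarray(monthly_mass, dtype=float)
       dm_dt = (m[2:] - m[:-2]) / (2.0 * days_per_month)     # eq. (8)
       return dm_dt - np.asarray(monthly_load, dtype=float)[1:-1]

   S = source_term([100.0, 104.0, 101.0, 97.0], [2.0, 1.5, 1.8, 2.2])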

            Nutrient Accumulation Rates

For each substance studied, the variation of the  mass
content is shown and described.  A numerical time dif-
ferentiation of the monthly mass content has been per-
formed and is tabulated.  This quantity together  with
the monthly loading rates to the lake have been sub-
stituted into eq. (6) to yield the source term.   The
nature of the source term variation is discussed  in
order to extract information regarding the nature of
the physical processes.

Using the IFYGL data we have examined the Vollenweider
model which assumed a source term of the form, -σ m_w
(eq. (7)).  For each of the 6 substances of  this  study,
the model proved inadequate since the structures  of the
functions S and m (or mw)  for each substance are  very
different.  The assumed form of the outflow  term  in eq.
(7), Qmw/V, when compared with the measured St.
Lawrence loading proves a useful model for nitrite-
nitrate, total  Kjeldahl nitrogen, sulfate and organic
carbon while in the cases of total  phosphate and  ammo-
nia, the model  predictions deviate considerably from
the measured value.

Total  Phosphate

The total phosphate content of the lake shows an aver-
age deviation of 9.5% from the mean with a maximum
deviation of 19%.  The maximum deviations occurred in
spring 1972 and 1973 and early winter 1972, reflecting
seasonal perturbations from the mean (see Figure 1).
Thus dm/dt will be small.

   FIGURE 1.  THE MASS CONTENT (m) AND THE PRODUCTION RATE (S) OF TOTAL PHOSPHATE
              DURING THE FIELD YEAR.

Table 1 lists the various contri-
butions and the net total loading rate of total phos-
phate by month.  The total net loading rate to the lake
varied by a factor of 10, reaching a maximum in the
December 1972 through March 1973 period and a minimum
in the August through October 1972 period.   Having
     Table 1    Total  phosphate loading rates to Lake Ontario
                     (metric tons/day)

Month
Apr 1972
May
Jun
Jul
Aug
Sep
Oct
Nov
Dec
Jan 1973
Feb
Mar
Apr
May
Mean
N 1 agara
River
19.2
12.5
2 It. 0
20. It
22.1
17.2
15. If
19.3
31.5
18. G
23.5
28.9
17. 8
lit. 5
20. ll
U.S.
Tr 1 butar 1 es
18.it
12.5
11.1
12. 7
It. 6
2.9
It. 8
9.0
11|.9
12.5
8.9
18.7
11|.2
7.7
10.9
Direct municipal and
Industrial
Month
Apr 1972
May
Jun
July
Aug
Sep
Oct
Nov
Dec
Jan 1973
Feb
Mar
Apr
May
Mean
U.S.
.16
.lit
.111
.15
.13
.15
.lit
.111
.18
.18
.19
.17
.16
.15
.1C
Canada
8.3
7.3
7.1
7.7
6.7
7.9
7. It
7.3
9.1
9.6
9.9
9.0
8.3
7.9
8.1
Canad 1 an
Tr 1 butar 1 es
5.1
3.9
3.6
5.8
1.9
1.3
2. 0
2. 2
It. 8
3-D
2.6
5.7
3.9
2.7
3.5
Dl rect
Preclp.
It. 9
6.0
7.9
It. 8
6. It
5.1
5.8
7.9
8.2
2.8
k.O
7.5
7.7
5.8
6.1
St. Lawrence
Rl ver
20.7
19.8
21.7
27.2
33.6
21. G
16.9
15.8
13.3
12.9
20. It
25. If
27.0
28.0
21.7
Net loading
rate
35. it
22.5
32.1
2l4.lt
8.2
13.0
18.6
30.0
55.lt
31|.6
28. 7
It It. 6
25.1
10.8
27.lt
                                                           obtained  the  mean  monthly  numerical  derivative, dm/dt,
                                                           and  the total  net  loading  rate L,  eq.  (6)  yields the
                                                           source term.   All  of these quantities  are  listed in
                                                           Table 2.
                                                        177

-------
       Total phosphate and nitrite-nitrate   mass balance
          equation terms (metric tons/day)
      Table3 -
Nitrite-nitrate  loading rates to lake Ontario
        (metric tons/day)
           Total Phosphate
                                      Ni trIte-NItrate
Month dm/dt/lO*
May 1972 1 1.8
Jun
Jul
Aug
Sep
Oct
Nov
Dec
Jan 1973 -1
Feb
Mar 1
Apr
May
88
33
13
19
65
55
02
25
07
19
92
08
Mean ,39
•L/10*
2.25
3.21
2. 1.1.
.82
1.30
1.6G
3.00
5.51.
3.4G
2.87
4.46
2.51
1.08
2. 68
s/io*
.77
-2.33
-2.11
.69
-1.11
-1.21
-2.45
-5.52
-I..71
-2.80
-3.27
-1.59
-1.00
-2.27
dra/dt/10
-5.60
.87
• .28
-3.19
-1.C3
-1.11
-1.C1
.03
3.0C
3.14
• .34
-2.50
-2.31
.88
L/10J
.09
.28
.03
.01
.06
.06
.14
.17
.15
.07
.09
.11
.10
.10
S/10*
-5.69
.59
• .31
-3.20
-1.69
-1.17
-1.75
.14
2.91
3.07
.43
-2.61
-2.41
-.115
Throughout  the  field year, there  is  a loss rate for
phosphate,  -S,  which averages 2.27 metric tons/day.
This monthly  loss rate varies by  a factor of about 7
during the  field year with maximum losses occurring  in
the winter.   The August through October period shows  a
minimum  loss  rate.

Nitrite-Nitrate

The total nitrite-nitrate content of the lake shows
very definite seasonal variation  (see Figure 2).  Max-
imum mass content is characteristic  of the early summer
1972 and spring 1973 periods with a  low occurring in
   FIGURE 2.  THE MASS CONTENT (m) AND THE PRODUCTION RATE (S) OF NITRITE-NITRATE
              DURING THE FIELD YEAR.
the late  summer through fall  period.   The range of
variation is  a factor of about  2.3.   In Table 3 a
compilation of the partial and  net  total loading
rates for NOo-NO., is provided.   This  net total
loading rate  varied by a factor of  more than 20
during the field year, but is typically more than
an order  of magnitude smaller than  either dm/dt or  S.
This  indicates that a major  source  of nitrite-nitrate
variation is  due to the biochemical  transformation
rather than loading rate variations.

In contrast to total phosphate, the source term
changed sign during the year so that losses of NO2-NO3
occurred  in the spring and summer and production was
noted during  the winter months.
Month
Apr 1972
May
Jun
Jul
Aug
Sep
Oct
Nov
Dec
Jan 1973
Feb
Mar
Apr
May
Mean
Niagara
River
116.2
112.7
182.9
122.1
93.9
24.6
1(1.2
93.6
130.5
175.lt
133.8
153.3
137.2
218.7
121.. 0
U.S.
Tributaries
85.7
57.3
62.6
63.9
16.5
8.3
13.3
46.5
73.1
611.7
57.8
98.1
77.5
36.6
5ll.lt
Canad 1 an
Tr 1 butar I es
17.3
11.9
15.lt
17. C
3.5
1.5
2.2
9.7
16.1
111. 2
llt.S
22.2
17.1
8.5
12.3
St. Lawrence
River
100.0
158.6
62.6
222.5
169.7
29.0
60.3
90.9
103.2
136. 2
181.7
258.lt
198.9
226.5
li|3.1
Direct municipal and
Industrial
Month
Apr 1972
May
Jun
Jul
Aug
Sep
Oct
Nov
Dec
Jan 1973
Feb
Mar
Apr
May
Mean
Ammonia
U.S.
.47
.4it
.1)2
.39
.35
.35
.38
. UO
.Sit
.It 9
.53
.5I|
.5"t
.48
.45

Canada
2.3
2.2
2.1
1.9
1.7
1.7
1.9
2.0
2.7
2. It
2.6
2.7
2.7
2. It
2.2

Direct
Precl p.
49.2
60.7
79. It
It8. 5
6".. 3
51.7
58.0
80.1
58.0
28.7
1.0.2
75.9
78.1
58.0
59. U

Net loading
rate
171.2
8G.6
280.2
31.9
11.1
59.2
56.7
lltl.lt
172.7
149.7
67.7
94.2
1111.2
98.2
109.6

                                                             The total  ammonia content of the lake shows  a  strong
                                                             seasonal  variation (Figure  3).   Highest mass content
   FIGURE 3.  THE MASS CONTENT (m) AND THE PRODUCTION RATE (S) OF AMMONIA
              DURING THE FIELD YEAR.

occurred  in  the late summer  through fall of  1972.
After reaching  a midwinter minimum, the mass  content
climbed with  onset of spring 1973.  Provided  in Table
4 are the loading rate contributions of ammonia, which
show  a variation by a factor of about 3 during the
field year.
                                                          178

-------
      Table A
                Ammonia loading rates to  Lake Ontario
                    (metric  tons/day)

Month
Apr 1972
May
Jun
Jul
Aug
iep
Oct
Nov
Dec
Jan 1973
feb
Mar
Apr
May
Mean

Month
Apr 1972
May
Jun
Jul
Aug
Sep
Oct
Nov
Dec
Jan 1973
Feb
Mar
Apr
May
Mean
N 1 agar«
Klver
27.8
16.0
U7.5
33.3
27.6
17.7
17.2
60.7
lii.lt
30.5
9.5
20.8
15. It
12.0
25.0
Direct
U.S.
3.5
2.9
3.0
2.6
2.5
2. It
2.5
3.5
3.5
2.8
3.0
3.3
3. it
3.0
3.0
3 U.S.
Tributaries
16.2
9.9
13.7
10.1
3.9
2.6
6.1
17.6
19.0
15. It
13.5
18.6
lit, 6
9.8
12.2
municipal and
1 ndustr 1 al
Canada
31.it
25.9
25.9
23.7
22.6
20.7
22.6
31.8
31.1
25.6
27.1
30. 0
30.6
27.0
26.9
Canad 1 an
Tr I butar I es
it. 5
2.7
it. 7
3. It
1.3
.85
2.2
C.5
6.5
5.1
It. 7
5.7
It. 3
3. It
it.O
Dl rect
Precl p.
31.0
38.3
50.0
30.6
U0.9
33.7
36,6
50,5
52.2
18.1
25.it
U7.9
It9.2
36.6
38.6
St. Lawrence
River
3.3
it. It
12.5
29.2
32.8
13.0
11.3
11.1
13.0
19.2
26.5
36. It
27.8
16.5
18.lt
Net loading
rate
111.1
91.3
132.3
7it.5
66.0
65.0
75.9
159.5
113.7
78.3
56.7
89.9
89.7
75.3
91. It
Because of the comparable sizes  of the 3 terms  in eq.
(6) (see Table 5),  loadings as well  as such processes
as biochemical transformation and  sediment exchange are
important to changes in the ammonia  mass content  of the
lake.

    Table5   Ammon I a and total Kjeldahl  nitrogen - mass balance
               equat I on terms (r.ietr I c  tons/day)
                 Ammonla              Total Kj e1dah 1  NItrogen
Month dm/dt/101 L/10* S/10*
May 1972
Jun 3
Jul
Aug
Sep
Oct -1
Nov -3
Dec -2
Jan 1973
Feb
Mar 1
Apr 1
May
93
b2 1
81
10
25
10
15 1
38 1
Sit
13
01
17
Sli
91 .02
32 2.30
75 .07
66 .70
65 .140
76 -l.CG
CO -14.75
lit -3.52
78 -1.C2
57 .It It
90 .11
90 .27
75 .19
dm/dt/103 L/10* S/lo'




-1
_1
_ 1






53
r.i4
00
»7
10
75
31
10
18
22
76
2!
4S
18
01
13
14 -1
1)2 -1
01 -1
17 -1
23
16
13
19
21
18
35
75
13
01
12
76
"4«
33
02
35
95
42
30
Total Kjeldahl  Nitrogen

Figure 4 shows  the seasonal variation of the  total
Kjeldahl nitrogen content of  Lake Ontario  during the
field year.   High values were characteristic  of the
summer 1972  followed by low levels during  the winter
and spring 1973.   A variation by a factor  of  20 oc-
curred in the total net loading rate (see Table 6), L.
   FIGURE 4.  THE MASS CONTENT (m) AND THE PRODUCTION RATE (S) OF TOTAL KJELDAHL NITROGEN
              DURING THE FIELD YEAR.
As is indicated in Table 5 a considerable difference in
                                                              the relative  sizes of dm/dt,  L  and S was noted  during
                                                              the field year.   In spring 1972,  winter and  spring 1973
                                                              the magnitudes of the three terms are comparable while
                                                              during the summer and fall L  is smaller in order of
                                                              magnitude than dm/dt and S.   Thus the biochemical
                                                              transformations  and sediment  exchange processes are the
                                                               Table 6 • Total  Kjeldahl  nitrogen loading rates  to  Lake Ontario
                                                                                     (metric tons/day)

Month
Apr 1972
May
Jun
Jul
Aug
Sep
Oct
Nov
Dec
Jan 1973
Feb
Mar
Apr
May
Mean
Month
Apr 1972
May
Jun
Jul
Aug
Sep
Oct
Nov
Dec
Jan 1973
Feb
Mar
Apr
May
Mean
NI agara
Klver
117.3
89.8
91.6
110.0
138.0
m.s
85.9
92.9
109.2
93.9
106.5
107.6
115.8
113.5
106.9
U.S.
Tr 1 butar 1 es
6lt.3
It It. 3
146.8
U2.C
18.9
10.3
18.0
It2.l|
55.8
51.8
39.1
60.8
U9.2
31.3
Ul.O
Direct municipal and
1 ndustr 1 a)
U.S. Canada
5.2
it. it
U. 5
3.9
3.7
3.6
3.7
5.3
5.2
ll.3
It. 5
5.1
5.1
It. 5
It. 5
111. 8
35.0
35.6
31.5
29.6
29.0
28.1
30.1
U0.9
3lt.5
36.6
Il0.lt
U1.2
36.0
35.0
Canad Ian
Tr I butar 1 es
36.0
36.6
i|2.9
U5.0
18.5
10.5
18.1
36.0
l|2.i|
ill. 3
33.2
33.5
26.5
28.2
32.1
Direct
Precl p.
It7.lt
58.5
76.5
H6.8
62.5
51.5
55.9
77.2
79.7
27.6
38.8
73.2
75.3
55.9
59.1
St. Lawrence
Rl ver
98.1)
85. 7
2011.6
153.2
128.lt
211.3
197.9
117.5
106.3
99.1
130.1
132.lt
98.7
89.7
132. It
Net loading
rate
213.6
182.9
93 3
126.6
lltl.8
18.1
11.8
166. It
226.9
15U.9
128.6
188.2
2lk.lt
179.7
11(6.2
                                                           179

-------
dominant sources  for changes in the TKN  content of the
lake during the  summer-fall period while loading con-
tributions became more important in the winter and
spring.  The  TKN  source term, S, in eq.  (6)  changes
sign during the  year with losses indicated in the late
summer through winter periods and production in both
spring 1972 and  1973.
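
Read together with the tables, eq. (6) is a simple bookkeeping relation among dm/dt, L and S.  The short Python sketch below shows how the source/sink term can be backed out of monthly mass contents and net loadings, assuming eq. (6) has the form dm/dt = L + S; the numbers are illustrative only, not the study's data.

    # Sketch: recover the in-lake source/sink term S of eq. (6), assuming the
    # balance has the form dm/dt = L + S.  m is the lake mass content (metric
    # tons) at the start of each month, L the net loading rate (metric
    # tons/day).  All values below are hypothetical.

    days_per_month = 30.4

    m = [95_000, 102_000, 108_000, 104_000]   # hypothetical monthly mass contents, t
    L = [213.6, 182.9, 93.3]                  # hypothetical net loading rates, t/day

    for i in range(len(m) - 1):
        dm_dt = (m[i + 1] - m[i]) / days_per_month   # t/day
        S = dm_dt - L[i]                             # + production, - loss, t/day
        print(f"month {i}: dm/dt = {dm_dt:8.1f}   L = {L[i]:6.1f}   S = {S:8.1f}")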

Sulfate

May, June and July measurements are missing  because of
difficulties  in  the chemical analysis of these  samples.
Sulfate mass  content of the lake remained  fairly uniform
FIGURE 5  THE MASS CONTENT (m) AND PRODUCTION RATE (S) OF SULFATE
       DURING THE FIELD YEAR.
 throughout the field year so that dm/dt ≈ 0.  This then
 required a balance between S and L.   On the basis of the


                              Table 7   Sulfate loading rate to Lake Ontario
                                           (metric tons/day)

                                       Direct municipal
              Niagara      U.S.        and industrial       Canadian      Direct     St. Lawrence   Net loading
 Month        River     Tributaries    U.S.     Canada     Tributaries    Precip.       River          rate
 Apr 1972       ---         ---         ---      ---          ---           ---           ---            ---
 May            ---         ---         ---      ---          ---           ---           ---            ---
 Jun            ---         ---         ---      ---          ---           ---           ---            ---
 Jul            ---         ---         ---      ---          ---           ---           ---            ---
 Aug           6609        1327        27.3      132           656          595         18394         - 9047
 Sep           6843         731        27.0      130           358          491         20110         -11530
 Oct          11507         948        27.5      132           419          533         21043         - 7477
 Nov          16120        2815        38.9      187          1243          736         21446         -  306
 Dec          15822        4080        38.5      186          1822          760         17529           5180
 Jan 1973     14163        3605        31.6      153          1604          263         16340           3480
 Feb          14259        2855        32.4      157          1311          370         17795           1189
 Mar          14732        4373        37.1      179          1786          648         19521           2105
 Apr          14961        3260        37.6      181          1262          717         20680         -  261
 May          15149        2694        33.0      159          1348          538         19771            150
 Mean         13016        2860        33.1      160          1181          565         19263         - 1652
source term, Table  7 shows sulfate  utilization in the
summer through  fall  period and sulfate  production in
the winter.
Tahle 8 Sul

Month dm/rtt/10
Hay 1972 	
Jun ----
Ju) 	
AUK 	
Sen 0
net 0
Nov 0
nee 0
Jan 1973 0
Feb 0
Mar 0
Apr 0
May 0
Mean 0

Sulfate
L/105

....
	
- .09
- .12
.07
.00
.05
.03
.01
.02
.00
.00
.01


S/104
....
....
....
.09
.12
.07
.00
.05
.03
.01
.02
.00
.00
.01


dm/rft/10
1,35
11.63
2.76
.23
.31
-1.39
-3.69
-14.66
-2.76
.89
1.1*8
.89
1.85
.10


L/104
.20
.Oil
.15
.18
-.02
-.11
.03
.23
.2.14
.18
.32
,2I|
.19
.111


S/104
1.15
l|.59
2.61
.05
• .29
-1.28
-3.72
-11.89
-3.00
.70
1.16
.65
1.67
.116
                                                            Total  Organic Carbon

                                                            Strong seasonal variations  in the total organic  carbon
                                                            content of the lake are  illustrated in Figure  6.
                                                            Peaking in the summer-fall 1972 period, the TOC content
                                                               FIGURE 6  THE MASS CONTENT (m) AND THE PRODUCTION RATE (S) OF TOTAL ORGANIC CARBON
                                                                     DURING THE FIELD YEAR.


                                                             fell  to a midwinter minimum before beginning a  gradual
                                                             spring rise.   A comparison  of the terms in eq.  (6)  as
                                                             shown in Table 9 indicates  that the main balance was
                                                             between dm/dt and S since L was an order of magnitude
                                                             smaller.  Thus changes  in the TOC content of the lake
                                                             are mainly a consequence of biochemical transformations
                                                             rather than loading rate differences.
                                                          180

-------
                        Table 9   Total organic carbon loading rates to Lake Ontario
                                           (metric tons/day)

                                       Direct municipal
              Niagara      U.S.        and industrial       Canadian      Direct     St. Lawrence   Net loading
 Month        River     Tributaries    U.S.     Canada     Tributaries    Precip.       River          rate
 Apr 1972      1478         714        48.9      54.2          930          448          1667           2006
 May           2075         525        40.3      44.2          716          552          1921           2032
 Jun           1323         369        41.7      45.7          480          722          2604            377
 Jul           1925         379        36.7      40.5          531          441          1894           1459
 Aug           2557         325        34.5      37.9          502          590          2233           1813
 Sep           1874         281        33.5      37.2          450          486          3320          - 158
 Oct           1051         317        34.6      37.9          487          528          3562          -1107
 Nov            298         422        49.1      54.2          717          729          1935            334
 Dec           1067         519        50.1      53.7          672          753           813           2302
 Jan 1973      1884         377        39.5      43.6          458          261           651           2412
 Feb           1838         255        41.9      46.2          302          366          1002           1847
 Mar           2102         468        46.7      51.8          509          691           647           3222
 Apr           2382         364        46.9      51.6          397          710          1591           2361
 May           2400         280        41.5      45.5          376          528          1812           1859
 Mean          1732         400        41.9      46.0          538          558          1832           1483
                     References

 1.   Batchelor, G.K., 1967, An Introduction to Fluid
        Dynamics, p. 74, Cambridge University Press,
        New York, 1967.

 2.   Bolsenga, S.J. and J.C. Ragman, 1975, IFYGL
        Bulletin #16, pp. 57-62, National Oceanic &
        Atmospheric Administration, Rockville, Maryland
        20852.

 3.   Boyce, P.M., 1973, A computer routine for calcu-
        lating total volume contents of a dissolved
        substance from an arbitrary distribution of
        concentration profiles, Technical Bull.
        No. 83, CCIW, Burlington, Ontario.

 4.   Casey, D.J., P.A. Clark and J. Sandwick, 1976,
        Comprehensive IFYGL materials balance study of
        Lake Ontario (preprint).

 5.   Casey, D.J. and S.E. Salbach, 1975, IFYGL stream
        materials balance study, Proceedings 17th
        Conference on Great Lakes Research, Inter-
        national Association for Great Lakes Research,
        pp. 668-681.

 6.   Chapra, S.C., Comment on "An Empirical Method of
        Estimating the Retention of Phosphorus in Lakes"
        by W.B. Kirchner and P.J. Dillon, Water Resour.
        Res. 11, p. 1033-1034.

 7.   Dillon, P.J., 1975, The application of the
        phosphorus-loading concept to eutrophication
        research, Scientific Series No. 46, Inland
        Waters Directorate, CCIW, Burlington, Ontario.

 8.   Dillon, P.J. and W.B. Kirchner, 1975, Reply, Water
        Resour. Res. 11, p. 1035-1036.

 9.   Kirchner, W.B. and P.J. Dillon, 1975, An empirical
        method of estimating the retention of phosphorus
        in lakes, Water Resour. Res. 11, p. 182.

10.   Shiomi, M.T. and K.W. Kuntz, 1973, Great Lakes
        precipitation chemistry:  Part 1.  Lake Ontario
        basin.  Proc. of the 16th Conference on Great
        Lakes Research, pp. 581-602.

11.   U.S. Army Corps of Engineers, 1972 and 1973,
        Monthly Bull. of the Lake Levels, Detroit, Mich.

12.   Vollenweider, R.A., 1969, Möglichkeiten und
        Grenzen elementarer Modelle der Stoffbilanz von
        Seen, Arch. Hydrobiol., pp. 1-36.

13.   Wylie, C.R., Jr., 1951, Advanced Engineering
        Mathematics, McGraw Hill Book Co., New York.
                                                       181

-------
                                     MODELS FOR EXTRAPOLATION OF HEALTH RISK

                                                William M. Upholt
                                         Environmental Protection Agency
                                                 Washington, D.C.
     Extrapolation models for estimating the risk
of adverse effects on human health resulting from
low dosages of radiation to which man may be exposed
may assume a threshold dosage below which the risk
becomes vanishingly low.  Aside from theoretical
considerations such an assumption is not particu-
larly helpful unless there is a reasonable basis
for estimating at what dosage that threshold
exists.  Moreover, to be most useful in consider-
ing the balance between risk and social cost of
regulation, the model should provide both a best
estimate of risk and an estimate of confidence in
that estimate.

                    Background

     Historically, it has been the practice for
regulatory agencies to attempt to assure the
public that they can promise safety from adverse
health effects caused by the toxicants they are
regulating.  This concept was challenged in the
case of ionizing radiation in the first instance
and in the case of chemical carcinogens more
recently.  In both cases it was claimed that there
is no reason to assume a threshold and, therefore,
there can be no safe dosage.  The regulators of
radiation realized very early that there was no
way to completely eliminate all exposure to ion-
izing radiation and so they were forced to regu-
late at exposure levels of acceptable risk rather
than no risk.  In the case of chemical carcino-
gens, there is still a strong opinion that no
preventable exposure is acceptable and thus com-
plete elimination of all exposure to any control-
lable carcinogen is the only regulation that is
acceptable.  Nevertheless, for a number of years
some scientists have recognized the difficulties
of completely eliminating all exposure to certain
carcinogens and so they attempted to define an
acceptable risk as one that is mathematically
"virtually zero."  This "virtual zero" may be 1CT9>
10~H, or some other figure depending, presumably,
on the size of the population at risk.

     A more recent school of regulatory decision-
makers insists that the determination of an "ac-
ceptable risk" depends in part upon the cost of
achieving a lower risk.  Thus, according to this
school, some form of cost/benefit balancing is
necessary to rational decision-making.  For pur-
poses of this paper the latter position is taken.

              Cost/Benefit Balancing

     It is not within the scope of this paper to
discuss the many models for decision-making.
Neither is the use of the term cost/benefit bal-
ancing intended to refer necessarily to costs and
benefits in common units and thus arrive at a
critical equation for making the regulatory deci-
sion.  Rather it implies only that both costs and
benefits must be considered by the decision-maker
before he arrives at his final decision.  No spe-
cific units are prescribed nor is it necessary to
use the same units for both costs and benefits.
In fact, it may be undesirable to use dollars or
equivalents because of  the psychological implica-
tion of equating human health to dollars.  It is
even possible in some cases to describe costs or
benefits or both  in non-numerical but quantitative
terms such as "less than background" or to compare
them with other more familiar but similar  costs
or benefits.  Nevertheless,  for modeling pur-
poses, at least,  it is  desirable  to strive for
understandable numerical terms.

     Of course, both costs  and benefits  may be
reversed depending upon the viewpoint of the
observer.  Even the same observer may reverse the
terms from time to time depending upon how he
views the decision he is about to make or  the
audience to which he is addressing his argument.
To avoid needless confusion in this paper  I have
arbitrarily chosen the  viewpoint  that one  purpose
of the Environmental Protection Agency is  to
reduce risk of adverse  health effects and  thus any
reduction in risk that  can  be attributed to a
regulatory action is a  benefit.   To complete this
rationale, the deprivation sustained by society in
having to do without the product  in question or
the increased cost of the product associated with
complying with the regulation is  the  cost  to so-
ciety of achieving the  reduced risk.

                Threshold Concepts

     Accepting the rationale that costs  of  regu-
lating a product  must be justified  in terms  of
reduction in risk does  not  eliminate  the useful-
ness of the concept of  a threshold.

     A threshold  is normally defined  as  the  lowest
dosage at which a given effect is produced.  If
the effect is one commonly  found  in the  population
even in the absence of  the  substance  in  question
then the threshold becomes  the lowest dosage at
which the frequency of  the  effect rises  above the
background level.  Such a point is  difficult or
impossible to determine experimentally because of
the principles of variability and the empirical
limits to the size of experimental  populations.
Thus, it can best be approximated by  using  a suit-
able model for extrapolation from points more
easily determined experimentally.   Though it is
always simpler to build a model based upon  a
smooth continuous curve, it  is more important that
the model approximate the empirical evidence.
Thus, if there is adequate  evidence of an unex-
plained or unanticipated break or other  irregular-
ity in the slope  of the curve, the  model should be
modified to accommodate such empirical evidence if
it is to be of maximal  usefulness in  the very
practical world of regulation.

     There is a concept of  long standing in  toxi-
cology that effect of a toxicant  depends upon dos-
age.  This does not mean that higher  dosages nec-
essarily produce  more severe or more  frequent
effects, though this is often true.   For instance,
if the effect in  question is delayed  in  its  devel-
opment, higher dosages  may  produce a  more  serious
effect such as death which  prevents the  develop-
ment of the effect in question.   Actually,  it is
quite common for  a substance to have  a reversal in
type of effect at extremely low dosages  as  compared
to much higher dosages.  Thus it  is well known
that odors that are strongly attractive  at  very
low dosages may be very repugnant at  higher dos-
                                                       182

-------
ages.   It is also true that substances such as
vitamins that are essential to good health at low
dosages may be toxic at high dosages.  For such
substances it is reasonable to assume that a
dose/response curve will go through an inflection
point where the slope changes from negative to
positive if frequency of adverse effect is shown
on the ordinate and dosage on the abscissa.  Such
a point might well be considered as a threshold
even though there is a background of effects from
other causes which means that the curve never
crosses the x-axis.  Since, in such cases, the
adverse effect from very low dosages is apt to be
different, and possibly independent of the adverse
effect at higher dosages, it is probably more
reasonable to consider this point an intersection
of two curves relating effects of the substance
caused by different mechanisms.

     Clearly there may also be theoretically
sound mechanisms for a positive intersect with the
X-axis and also for inflection points showing a
sharp change in slope.  Thus, the observed effect
of a substance may require a two-stage metabolic
reaction within the body or a natural defense
mechanism may be far more effective at very low
dosages than at even slightly higher dosages.  For
instance, the normal organism may have an excess
of cholinesterase that prevents cholinergic symp-
toms at relatively low dosages of cholinesterase
inhibitors, but once the excess is exhausted, the
development of cholinergic symptoms occurs over a
very narrow range of increased dosages.  The ob-
served symptoms may thus have an apparent threshold
even though the cholinesterase inhibition
curve may have quite a different slope with no
distinct threshold.

         Models for Extrapolation of Risk

Traditional Threshold Model

     Faced with the need for estimating risk at
the relatively low dosages to which human popula-
tions are exposed, and the frequent necessity to
conduct toxicity testing at much higher dosages, a
regulatory agency has no alternative but to rely
upon extrapolation by means of some model relating
dosage to effect, unless it finds it possible to
completely stop all exposure to the substance
being regulated (or alternatively decides to
take no regulatory action).  Faced with this prob-
lem and the social demands for "safety," regula-
tory agencies have long found it convenient to
assume that there is a threshold or true "no
effect" level for any particular adverse health
effect and that levels producing no observed
effects in experimental animals are a reasonable
approximation to that true threshold.  Recognizing
the realities of experimental variance and limited
numbers of experimental subjects, a compensatory
"safety" factor was introduced to assure that the
standard for regulation was at or below the true
threshold.  The size of this factor depends upon
the same factors that affect experimental variance
(size and uniformity of test population and vari-
ability between replications in the same and dif-
ferent laboratories).  In addition, another factor
was added to cover the undetermined physiological
differences between man and the experimental
organism.  The size of this factor might depend
upon whether or not there is evidence of human
exposure and other aspects of our knowledge of
the comparative physiology and toxicology between
the species involved.
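
A minimal Python sketch of the safety-factor arithmetic described above; the no-observed-effect level and the two factors of 10 are illustrative assumptions, not values prescribed by this paper.

    # Sketch of the traditional "threshold plus safety factor" model.
    # The no-observed-effect level (NOEL) and both factors are
    # illustrative assumptions only.

    noel_mg_per_kg_day = 5.0        # highest dose with no observed effect (animal study)
    factor_experimental = 10.0      # allowance for experimental variance, small test groups
    factor_interspecies = 10.0      # allowance for animal-to-human physiological differences

    regulatory_level = noel_mg_per_kg_day / (factor_experimental * factor_interspecies)
    print(f"regulatory level: {regulatory_level:.3f} mg/kg/day")
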
     This model for estimating risk  to human  popu-
lations has been satisfactory in  those cases  where
the cost to society from the resulting regulations
has not been exorbitant and the adverse  effects
have rarely been observed in man.  It does not
provide a quantitative estimate of risk, nor  of
benefit of the regulation in terms of reduced
risk, and therefore is not very helpful  in deter-
mining whether or not the societal cost  of the
regulation is reasonable.  Thus,  it  encourages
over-regulation when the adverse  effect  is easily
detected and immediate but it encourages under-
regulation when the adverse effect is delayed or
otherwise difficult to associate with human expo-
sure.

     It is the latter aspect of this type of
extrapolation model that has led  to  a demand  that
a different model, namely a model which  postulates
no threshold, be used for regulating substances
suspected of causing cancer or other adverse
effects that may have an obscured cause.  Unfor-
tunately, many individuals have combined this very
rational demand for a "no threshold" model with
the less rational desire for absolute safety  and
concluded that the only acceptable standard for
such effects is zero exposure or as near to that
as can be achieved in the real world.

     Interestingly, the other aspect of the tradi-
tional "threshold" model (that it encourages  over-
regulation of substances causing readily apparent
adverse effects) also should have led to demand
for a more realistic model that would provide a
more quantitative estimate of risk, thus reducing
societal cost of needless over-regulation.

No Threshold Models

          The assumption of no threshold demands
more emphasis on the shape of the dose/response
curve and its position as related to the axes.  It
also requires a clearer description of the
effect to be assessed and the time at which the
effect is to be observed.  In the case of a de-
layed effect such as cancer, it may be a problem
to maintain the experimental animals alive long
enough for the cancer to be observed.  This period
often approaches the life expectancy of the unex-
posed animals.  Even at such a termination of
observation, the effect seen may not be an obvious
cancer but rather a neoplasm that must be care-
fully examined and classified by an experienced
pathologist.  If some other effect such as a
benign tumor can be described as a precursor  to
the adverse effect of principal concern  (such as
cancer) then it is possible to consider this  as
the effect to be observed.  On the other hand,
there should be a clear distinction made between
various effects in describing the extrapolation
model.  Thus, if the experimental end point is to
be cancer at 24 months of exposure,  then a benign
tumor that might become malignant at 30 months
does not meet the definition.  If, on the other
hand, benign tumors at 24 months is  the end point
then that could well include frank cancers (pre-
sumably developed from the benign tumors) as  well
as earlier deaths if benign tumors were  present at
death.  Some statisticians have attempted to  de-
velop extrapolation models that take "time-to-
cancer" into account since there  is  experimental
evidence that the latent period of cancer is
longer at lower levels of exposure.  Even in  the
case of relatively acute effects, the presence of
precursors to the adverse end effect must be
                                                       183

-------
clearly recognized and considered in developing an
extrapolation model.  For instance, in the case of
cholinesterase inhibition, mentioned above, the
adverse health effect of major concern may be
serious cholinergic symptoms.  Such symptoms,
though clearly recognizable, may not be defined
easily.  It is common to use depression of cholin-
esterase activity in peripheral blood as a more
reproducible end point.   However, significant
depression in such activity may be detected in the
absence of symptoms, and, because cholinesterase
can be readily regenerated in the normal body, a
low but detectable level of cholinesterase may be
of no particular concern.

     It is possible to carry this argument to the
extreme in which it can be claimed that any for-
eign substance reaching a living cell will produce
some reaction in that cell and thus there is no
possible "no-effect" level of exposure that re-
sults in such contact with a cell.  The conclusion
is simply that the end effect of concern must be
clearly defined as well as the point in time at
which it is to be observed.

     Having agreed upon the definition of the
effect to be observed and the period of observa-
tion, the simplest extrapolation model is a
straight line on an arithmetic scale intersecting
the origin (in the case of a "no threshold" model)
and some observable or experimentally determined
point.  This model recognizes the principle that
frequency of effect usually increases as dosage
increases.  It ignores experience that shows that
the most common curve relating observed effects to
dosage is sigmoid in shape.  C.I. Bliss  developed
a widely acclaimed model for use in the experimental
range that used probability units (standard deviation
using five as the arbitrary unit for 50 percent
effect) as the ordinate  and the logarithm of the
dosage as the abscissa.   This model often produces
a satisfactory straight line in the usual experimental
range of 30 percent to 70 percent effects.  Mantel
and Bryan  used this relationship as the basis for
their model but they chose to use a slope of one
regardless of the experimental slope (which is
frequently greater than one) on the basis that
extrapolation is always dangerous so one should be
conservative in the sense of minimizing the risk
of underestimating the probability of effect at
any given dosage.  For the same reason, they chose
as a determinant point for their model the upper
99 percent confidence limit for the estimate of
probability of effect at the highest dosage tested
at which no effect was observed.
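
A minimal Python sketch in the spirit of the Mantel-Bryan procedure just described: a probit line of slope one anchored at the upper 99 percent confidence limit on response at the highest dose showing no effect, extrapolated down to a "virtually zero" risk.  The animal count, dose, and target risk are illustrative assumptions, and the sketch is not the published procedure in full.

    # Sketch of a Mantel-Bryan-style "virtually safe dose" extrapolation:
    # probit (normal deviate) versus log10 dose with an assumed slope of 1,
    # anchored at the upper 99% confidence limit on response at the highest
    # no-effect dose.  All numbers are illustrative assumptions.

    from statistics import NormalDist
    from math import log10

    norm = NormalDist()

    n_animals = 100          # animals tested at the highest no-effect dose
    d0 = 1.0                 # that dose, mg/kg/day (illustrative)
    target_risk = 1e-8       # "virtually zero" lifetime risk

    # Exact one-sided upper 99% confidence limit on response when 0/N respond.
    p_upper = 1.0 - 0.01 ** (1.0 / n_animals)

    # Probit line with slope 1: z(d) = z_upper + (log10 d - log10 d0).
    z_upper = norm.inv_cdf(p_upper)
    z_target = norm.inv_cdf(target_risk)
    log_d_safe = log10(d0) + (z_target - z_upper)

    print(f"upper bound on response at d0: {p_upper:.4f}")
    print(f"virtually safe dose: {10 ** log_d_safe:.3e} mg/kg/day")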

     Numerous other models have been designed in
an effort to accommodate certain other observed
and theoretical characteristics of the dosage/
response relationship.  Most of these models agree
quite well with observed points (which are usually
in the range of 30 percent to 70 percent response,
except for those that are either 0 percent or 100
percent) but diverge considerably at very low
levels—those levels of greatest concern to the
regulatory agencies.

     It is perhaps surprising that most modelers
have chosen to adopt a series of assumptions, as
did Mantel and Bryan, described as conservative
and designed to avoid underestimating the risk at
any particular dosage.  Probably this can be
explained on the basis that they have been more
concerned with the risk of adverse effects than
they have been with cost to society.  This is per-
haps characteristic of an affluent society that is
accustomed to buying what it wants with little
concern for cost.  It appears  less  desirable in a
regulatory agency charged with protecting society
from adverse effects without upsetting the economy.
Thus over-conservatism as expressed above can
result in major economic problems and  even in
reducing availability of certain products that
society has come to consider essential.   This
becomes more dramatic when societal cost  is
expressed in terms of more expensive automobiles,
more expensive energy, less plastics,  and less
ease and rapidity of mobility.  Since  these are
factors that are a part of societal cost  which  may
be required to reduce incidence of  cancer and
other adverse health effects,  and since society
does not seem to be willing to pay  such costs
needlessly, it becomes increasingly important for
regulatory agencies to be realistic in their
extrapolations rather than "conservative" as that
term is used to describe a bias.

                   Conclusions

     It is becoming more important  that regulatory
agencies consider societal cost of  their  regula-
tions as well as benefits to society in terms of
less risk of adverse health effects.   Since the
risk can seldom be estimated directly  from experi-
mental data based upon relatively high dosages,  it
is essential that such agencies make judicious  use
of extrapolation models, always bearing in mind
the obvious dangers of extrapolation.

     Extrapolation models should provide  estimates
of two parameters just as is expected  of  many
statistical models.  First they should provide a
best estimate of the probability of  adverse  effect
at the dosage under consideration.   These dosages
should include the dosages to which  various  seg-
ments of the population are exposed  in the  absence
of regulation and they should  also  include  dosages
that would be expected if various alternative reg-
ulatory actions were in fact taken.

     It should be borne in mind that such alterna-
tives are typically discrete;  that  is  to  say that
they are, in the last analysis, dependent  upon
some form of technology which will  reduce pollu-
tion to a fixed degree rather  than  to  a continu-
ously variable range.  Thus, the dosages  that
should be considered are a discontinuous  set or a
step function.  They are, therefore, the  inde-
pendent variable, and the probability of adverse
effect is the dependent variable.   There  is  little
value in selecting arbitrarily a "safe" probability
of risk and then designing the technology and
regulations to match it.  The value  or benefits
attributable to each alternative regulatory  option
is thus the reduction of risk  that  can be expected
therefrom.

     Secondly, the model should provide an  esti-
mate of the degree of uncertainty associated with
the estimate of risk.  Since, by definition, there
is no experimental data in the region  of  extrapo-
lation, there can be no experimental second moment
about the mean and thus no standard  deviation in
the classical sense.  To assume some such figures
may well lead to a false confidence  in extrapola-
tion.  It is most reasonable,  then,  to be content
with a verbal description of uncertainty  of  the
estimated risk.  This can be done,  possibly by
indicating what estimates would have resulted from
other assumptions such as the  "conservative"
assumptions now commonly used.  With more experi-
ence it may be possible to develop  other  estimates
of uncertainty that are more meaningful to the
decision-maker.  In any case a "best estimate"  of
                                                      184

-------
risk or  reduction in risk coupled with meaningful
disclaimers  of accuracy,  preferably in terms of
describing some of the estimates from alternative
assumptions,  is preferable to purely subjective
guesses  either by the decision-maker, or by some
expert who very likely has already made up his
mind as  to the degree of  regulation that is justi-
fied.

                    References

1.  Bliss, C.I.  Calculation of the Dosage-
    Mortality Curve.   Ann. Appl. Biol. 22:134-137,
    1935.

2.  Mantel, N., and Bryan, W.R.  Safety Testing of
    Carcinogenic Agents.  J. National Cancer Institute
    27:455-470, 1961.
                                                       185

-------
                          USE OF MATHEMATICAL MODELS IN NONIONIZING RADIATION RESEARCH
                                                Claude M. Weil
                                          Experimental Biology Division
                                       Health Effects Research Laboratory
                                      U.S. Environmental Protection Agency
                                        Research Triangle Park, NC  27711
                       Abstract
Mathematical models are described which provide an
improved understanding of the interaction with bio-
logical objects of electromagnetic energy in the
radio frequency-microwave spectrum.  Significant
dosimetric data are derived for the absorption
characteristics and internal dose distribution, using
a multi-layered sphere model exposed to plane wave
radiation over the frequency range 0.1 to 10 GHz.
Using such data, some generalized conclusions are
presented which provide useful dosage estimation
methods to those involved in research on the health
effects of nonionizing radiation.

                   Introduction

There is increasing concern regarding the potentially
harmful effects of exposure to nonionizing electro-
magnetic radiation in the radio frequency (RF) -
microwave spectrum (wavelength range of approx. 3000
to 0.1 cms).  Such concerns have been prompted by two
factors:  the increasing proliferation of high-
powered RF and microwave sources, such as radio and
TV broadcast transmitters, radar transmitters,
domestic and industrial cooking and drying ovens,
diathermy devices, etc., leading to the potential for
excessive human exposure to man-made radiation.  The
other factor is the thousand fold difference which
now exists between the ANSI recommended protection
guide of 10 mW/cm2 maximum exposure rate in the United
States and the more conservative protection standard
adopted in the Soviet Union and Eastern Europe
(exposures greater than 2 hours).  Much research has
been done in this country on the short-term, high
level heating effects of microwave energy.  This
work has been well documented in a number of recent
review papers.1,2,3  Work on the chronic effects of
long term, low level exposure has been pursued by
Soviet workers for many years and is now being
strongly emphasized in this country.4  Findings are
frequently contradictory and often not repeatable,
leading to much controversy and speculation regarding
the effects of low level exposure.  This is un-
doubtedly due, in large part, to the difficulties
involved in measuring or estimating absorbed energy
dose for the subject undergoing irradiation.

Johnson5,6 has emphasized that observed biological
effects or phenomena can only be related to the
absorbed dose and not to the incident power density.
The- degree to which electromagnetic energy is coupled
into the irradiated subject is a very complex function
of size, shape, dielectric composition and orientation
of the subject as well as the wavelength, spatial
characteristics and polarization of the incident
radiation.  Furthermore, the internal distribution
of absorbed energy is never uniform, except when the
incident wavelength is much larger than object size,
and is frequently concentrated into localized "hot
spot" regions.  This means that for the same exposure
conditions the absorbed dose and internal dose
distribution for a small object such as an experi-
mental animal will be very different from that for a
much larger object such as a human.

Because of the general complexity of the electromagnetic
interaction problem, much use has been made of very
simplified mathematical models  in order to obtain a
better understanding of the nature of this inter-
action.  Such models consist of objects having a
simple planar, spherical or cylindrical shape that
are generally exposed to the simplest form of rad-
iation; i.e., the electromagnetic plane wave.   These
objects are composed of various layers of  homogeneous
and dissipative dielectrics which approximate the
known dielectric properties of  various biological
tissues such as muscle, fat, bone,  skin, etc.
Solutions for the planar model are very simple, but
the results are not really applicable to any closed
object with curved boundaries except when  the incident
wavelength is very short compared to object size.
The sphere model8,9 has been popular because it
better approximates a curved object and because the
solution is well known and can  be readily  handled using
high speed machine techniques.   Such a model can be
considered to be a crude representation of animal and
human heads, but the analogy is obviously  approximate
and very limited.  Some work has also been done on the
interaction of cylindrical and  spherical models with
the more complex radiation from a direct-contact
aperture source.10'I1  Models of prolate spheroid  and
ellipsoid shape which better approximate the char-
acteristically elongated bodies of laboratory animals
as well as humans, are presently being investigated.
To date, solutions have only been obtained for the
case where incident wavelength  is much greater than
object size.12  The results have underlined the
important dependency of energy  absorption  on object
orientation with respect to the polarization of the
incident electric field.  Several workers13 are
presently attempting to solve problems involving models
of arbitrary shape and homogeneity,  using  finite
element methods and numerical solutions, but such
methods are relatively costly and limited  by the total
number of elements that a computer can handle.

Detailed results for the multi-layered sphere model
exposed to plane wave radiation are now presented.
Some results for the prolate spheroid   are also
included.

            Formulation of Sphere Model Problem
Figure 1 shows the six-layered  model used  in this
study, with a plane wave, polarized in the x-direction
and propagating in the z-direction,  incident upon it.

The outermost region (p = 7) or sixth layer repre-
sents air.  The dielectric properties and  layer thick-
ness of the remaining regions  ( p = 1,2,3,4,5,6),
consisting of a core of brain-like matter  and five
concentric layers, are summarized in Table 1 for
three different sized spheres  (6.6,  12 and 20 cms
diameter).

Table 1

                     Electrical Properties of            Core Size and Layer Thickness, cms
                     Tissue at 10^9 Hz                   (outer radius = 3.3, 6 and 10 cms)
Region   Tissue      Relative        Conductivity
  (p)    Modeled     Permittivity    (ohm-m)^-1          3.3 cms       6 cms         10 cms
  1      Brain           60              0.9             r1 = 2.68     r1 = 5.27     r1 = 9.10
  2      CSF             76              1.7                0.2           0.2           0.2
  3      Dura            45              1.0                0.05          0.05          0.05
  4      Bone             8.5            0.11               0.2           0.28          0.4
  5      Fat              5.5            0.08               0.07          0.1           0.15
  6      Skin            45              1.0                0.1           0.1           0.1
                                                       186

-------
Tissue electrical properties were obtained from values
published by Schwan7 and others5.  Variations of ±30%
or more can exist in the figures quoted.  The changes
of electrical characteristics with frequency are
significant and were incorporated  into all aspects of
this work; sets of curves giving average permittivity
and conductivity changes with frequency for the various
tissues modeled, were prepared  and stored in a data
bank.
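
A minimal Python sketch of such a data-bank lookup: tissue permittivity and conductivity tabulated against frequency and interpolated on log frequency.  Apart from the brain values at 10^9 Hz, which follow Table 1, the tabulated numbers are illustrative assumptions.

    # Sketch of a tissue-property "data bank" lookup: relative permittivity
    # and conductivity tabulated against frequency and interpolated linearly
    # on log10(frequency).  Only the 10^9 Hz brain row follows Table 1;
    # the other rows are illustrative.

    from math import log10

    # (frequency in Hz, relative permittivity, conductivity in (ohm-m)^-1)
    brain_table = [
        (1e8, 75.0, 0.8),
        (1e9, 60.0, 0.9),
        (3e9, 52.0, 2.4),
        (1e10, 40.0, 8.0),
    ]

    def tissue_properties(freq_hz, table):
        """Interpolate (permittivity, conductivity) at freq_hz."""
        f = log10(freq_hz)
        for (f1, e1, s1), (f2, e2, s2) in zip(table, table[1:]):
            lo, hi = log10(f1), log10(f2)
            if lo <= f <= hi:
                t = (f - lo) / (hi - lo)
                return e1 + t * (e2 - e1), s1 + t * (s2 - s1)
        raise ValueError("frequency outside tabulated range")

    print(tissue_properties(8e8, brain_table))   # roughly (61, 0.89) near 800 MHz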

Expansion of the incident and secondary (scattered
and internally induced) fields into vector spherical
harmonics is based on Stratton's formulation (See
Fig 1).  Tangential components of E and H-fields
are then equated at the six regional boundaries in
order to determine the unknown expansion coefficients.

  Fig. 1. Plane wave incident upon spherical model with six concentric
                     shells.

      Spherical Harmonic Expansions for Electric Fields

       The incident, reflected (scattered), and internally induced
       electric fields are each expanded as series of vector spherical
       harmonics; from the expansion coefficients the scattering cross
       section Qr and the total cross section Qt are obtained, and the
       absorption cross section follows as Qa = Qt - Qr.

      Absorption Properties and Dose Distribution

For a simple object having a well known geometric
cross-section, such as a sphere, the absorption
characteristics are conveniently defined in terms of
an absorption coefficient, given by the actual
absorption cross section, Qa, divided by the shadow
cross section.  This coefficient is a measure of how
efficiently the incident energy is coupled into the
object being irradiated.  In the contour plot of
Fig 2, the absorption characteristics for different
sized spheres ranging from 2 to 12.5 cms outer radii
are shown in terms of the frequency of incident
radiation over the spectral range 100 to 10,000 MHz
(wavelength = 300 to 3 cms).  This representation readily shows
that combination of model size and incident frequency
for which the energy absorption is greatest.  Two
major "ridge" lines running diagonally across the plot
are clearly identifiable; these represent regions of
resonant absorption.  The third ridge line to the
right of the plot represents a resonant coupling of
energy into the core of the model by the outer tissue
layers.  Note that the absorption coefficient can
considerably exceed unity in the resonant absorption
regions.

 Fig. 2. Radius versus frequency diagram for multi-
 layered sphere; the contours represent lines of
 constant absorption coefficient.
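
Because the absorption coefficient is defined as the absorption cross section divided by the shadow cross section, a whole-sphere average absorbed dose rate follows directly from it, the incident power density, and the sphere's mass.  A minimal Python sketch, with an assumed coefficient and an assumed, roughly unit, tissue density:

    # Sketch: whole-sphere average absorbed dose rate from an absorption
    # coefficient read off a plot such as Fig. 2.  Input values are
    # illustrative assumptions.

    from math import pi

    radius_m = 0.06                 # 6 cms radius sphere
    density_kg_m3 = 1000.0          # assumed tissue-like density
    absorption_coeff = 1.2          # Qa / shadow cross section, assumed from the plot
    incident_mw_per_cm2 = 10.0      # incident power density

    incident_w_per_m2 = incident_mw_per_cm2 * 10.0           # 1 mW/cm^2 = 10 W/m^2
    shadow_area = pi * radius_m ** 2                          # m^2
    mass = density_kg_m3 * (4.0 / 3.0) * pi * radius_m ** 3   # kg

    absorbed_power = absorption_coeff * shadow_area * incident_w_per_m2   # W
    avg_dose_rate = absorbed_power / mass                                 # W/kg
    print(f"average absorbed dose rate: {avg_dose_rate:.2f} W/kg")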

 The internal distribution of absorbed energy  (dose)
 in the brain-like core of a 6 cms radius sphere
 (roughly equivalent to an infant sized head) is
 illustrated in Figs 3 and 4 for two different
 frequencies.  The distributions are shown  in the
 plane (φ = 0) of the incident electric field vector
 (E-plane) and the contours are normalized  iso-dose
 rate lines  (constant absorbed dose rate, normalized
 to E0 = 1 volt/meter peak).  At the resonant absorp-
 tion frequency of 800 MHz (see Fig 3), a major "hot
 spot" concentration is found to exist immediately in
 front of the sphere center, due to focusing of energy
 into the center, as well as standing wave  effects.
 The greatest internal field concentration was found
 to exist at 1650 MHz  (see Fig 4) where the original
 hot spot has now split into two separate and more
 intense concentrations that are located behind the
 sphere center on both sides of the z-axis. At still
 higher frequencies microwave energy is decreasingly
 able to penetrate very far into the sphere owing to
 the greatly increased conductivity values of the core
 dielectric  (conductivity at 3 GHz has increased  three
 fold over its value at 100 MHz).  Consequently most
 of the incident energy is now deposited in the front
 hemisphere and the internal concentrations collapse.
 By programming the computer to methodically  scan
 throughout the E-plane of the model and to select the
 maximum field strength both inside and on the surface
 of the sphere, comprehensive data are  obtained for both
 peak and average absorbed dose rates.   Fig 5 shows such
 data as a function of frequency for the 6 cms radius
 sphere exposed to an incident power density of 10 mW/cm2.
 At low frequencies  (<500 MHz), absorption is relatively
 poor and the internal distribution is  seen to be
 relatively  even.  In  the resonant region  (500-2500 MHz),
                                                         187

-------
absorption is strong and most of the energy is
internally deposited.  At frequencies above 2500 MHz,
surface heating strongly predominates and the overall
absorption gradually diminishes with frequency.
Similar data were obtained for both a smaller (3.3 cms
radius) and a larger (10 cms radius) sphere, equivalent
to a monkey and a human head respectively.
 Fig. 3. Normalized dose rate distribution in core of
 6 cms radius sphere at 800 MHz, E-plane.
 Fig. 4. E-plane distribution in core of 6 cms
 sphere at 1650 MHz.
                                                           Fig. 5. Average and peak (localized) absorbed dose
                                                           rate versus incident frequency for 6 cms radius
                                                           sphere.
             Prolate Spheroid Model

Problems involving the  interaction of plane wave
radiation with prolate spheroid and ellipsoid
models have recently received some attention.
Solutions have, so far, only been obtained for
the below-resonance approximation where incident
wavelength is still much  longer than the model
dimensions.  Durney et al.12 have obtained data
on the absorption characteristics of a large man-
size prolate spheroid, composed of muscle-
equivalent dielectric,  exposed to relatively
low frequency radiation in the 1-30 MHz band.
These results have shown a significant dependency
of energy absorption on the orientation of the
spheroid with respect to  the polarization of the
incident field.  Durney's results are reproduced
in Figs 6 and 7; maximum  absorption is seen to
occur for the electric  polarization case when the
major axis of the spheroid (length 2a) is oriented
parallel to the electric  field vector (see Fig 6).
For the two other polarization cases, when the major
axis is oriented parallel to either the magnetic field
vector or along the direction of propagation (cross
polarization), absorption is seen to be less than
that for an equivalent  sphere model of the same
volume.  In Fig 7, total  absorption of a constant
volume spheroid, normalized with respect to that of
the equivalent sphere model is plotted against the
eccentricity a/b of the spheroid.  The orientational
effects are seen to be further accentuated as the
spheroid eccentricity increases; note that for the
electric polarization case, energy absorption has
increased to a level some seven times greater than
that for the sphere model.

Solutions for the prolate spheroid problem in the
resonant region, where  absorption is greatest, are
now being attempted.  Preliminary results have shown
that a man-size model will exhibit resonant absorption,
under free-space conditions, in the frequency range
65-75 MHz.
                                                        188,

-------
Fig. 6. Average absorbed dose rate of a muscle-
equivalent prolate spheroid for three different
polarizations, electric Pe, magnetic Ph and cross Pc;
incident power density = 1 mW/cm2, spheroid volume =
0.07 m3, a = 1 m, a/b = 7.73.  The dotted line labeled
Ps represents the absorption by an equivalent sphere
of equal volume.
 (Reproduced by permission of the authors and the IEEE.)
Fig. 7.  Total power absorbed by a 0.07 m3 muscle
equivalent prolate spheroid,  relative to that absorbed
by a  sphere of equal volume,  versus spheroid
eccentricity a/b  for the three basic polarizations
considered.
(Reproduced by permission of  the  authors and the IEEE.)
                   Conclusions

Using the various model data, it is possible to draw
some generalized conclusions regarding the  inter-
action of microwaves with biological objects:  a)  All
objects exhibit a resonant behavior, marked by a
significant increase in absorbed energy when  the
incident wavelength is comparable to the object
dimensions.  Large objects respond uniformly  to a
broad spectrum of relatively low frequencies while
small objects have a narrow and more peaked response
at higher frequencies.  The response of a specific
subject will obviously depend on the subject's
anthropomorphic form as well as the other factors
already mentioned,  b)  For larger objects, where
path lengths are relatively long, hot spot  effects
are not significant and at low frequencies  the
deposited energy is relatively evenly distributed.
At higher frequencies virtually all the energy is
frontally deposited,  c)  For small and medium sized
objects, hot spot effects are significant over
essentially the same frequency range for which res-
onance absorption occurs.  As the object becomes
smaller, peak internal fields can reach prohibitively
high values at frequencies close to resonance.
d)  The higher the frequency, the poorer the energy
penetration, so that microwaves at frequencies above
about 5 GHz are incapable of penetrating even the
smallest experimental object usually considered.

Finally, it is worth repeating again the conclusions
reached by numerous other investigators in  this field:
that any effects seen during microwave exposures of
experimental animals are not necessarily extrapola-
table to man, owing to the widely differing absorption
characteristics and internal distributions  existing
for man compared to that of the animal at the same
frequency and same incident field level.  This is
clearly supported by the results of this study, which
show that much greater local and average thermal
burdens exist in a small object or animal than is the
case for the large object (incident power density and
frequency remaining the same).  Great care must
therefore be taken in the interpretation of results
obtained during animal experimentation.
                                                                            References

                                                           1.   S.F. Cleary:  "Biological Effects of Microwave
                                                               and Radiofrequency Radiation,"  CRC Critical
                                                               Reviews in Environmental Control, pp. 257-306,
                                                               (July 1970).

                                                           2.   S.M. Michaelson:  "Effects of Exposure to
                                                               Microwaves:  Problems and Perspectives,"
                                                               Environmental Health Perspectives, Vol. 8,
                                                               pp. 133-156, (August 1974).

3.  D.I. McRee:  "Environmental Aspects of Micro-
    wave Radiation," Environmental Health
    Perspectives, Vol. 6, pp. 41-53, (October
    1972).

4.  C.H. Dodge:  "Clinical and Hygienic Aspects
    of Exposure to Electromagnetic Fields,"
    Proceedings of Symposium on Biological
    Effects and Health Implications of Microwave
    Radiation held in Richmond, Va., September
    1969, pp. 140-149, NTIS Doc. No. PB 193 898.

5.  C.C. Johnson and A.W. Guy:  "Nonionizing
    Electromagnetic Wave Effects in Biological
    Materials and Systems," Proc. IEEE, Vol. 60,
    pp. 692-718, (June 1972).
                                                       189

-------
6.  C.C. Johnson:  "Research Needs for Establish-
    ing a Radio Frequency Electromagnetic Radiation
    Safety Standard," J. Microwave Power, Vol. 8,
    (3/4), pp. 367-388, (1973).

7.  H.P. Schwan:  "Radiation Biology, Medical
    Applications and Radiation Hazards," in
    Microwave Power Engineering, Vol. 2, ed. by
    E.C. Okress, Academic Press, NY, 1968,
    pp. 215-234.

8.  A.R. Shapiro, R.F. Lutomirski and H.T. Yura:
    "Induced Heating within a Cranial Structure
    Irradiated by an Electromagnetic Plane Wave,"
    IEEE Trans. Microwave Theory and Techs,
    MTT-19, pp. 187-196, (Feb 1971).

9.  C.M. Weil:  "Absorption Characteristics of
    Multilayered Sphere Models Exposed to UHF/
    Microwave Radiation," IEEE Trans. Biomed. Eng.,
    BME-22, pp. 468-476, (Nov 1975).

10. H.S. Ho, A.W. Guy, R.A. Sigelmann and J.F.
    Lehmann:  "Microwave Heating of Simulated
    Human Limbs by Aperture Sources," IEEE
    Trans. Microwave Theory and Techs, MTT-19,
    pp. 224-231, (Feb 1971).

11. H.S. Ho:  "Contrast of Dose Distribution in
    Phantom Heads due to Aperture and Plane Wave
    Sources," Annals N.Y. Academy Sciences, Vol. 247,
    pp. 454-472, (Feb 1975).

12. C.H. Durney, C.C. Johnson and H. Massoudi:
    "Long Wavelength Analysis of Plane Wave
    Irradiation of a Prolate Spheroid Model of
    Man," IEEE Trans. Microwave Theory and Techs.,
    MTT-23, pp. 246-253, (Feb 1975).
                                                  DISCLAIMER

                               This  report has been reviewed by the Office of
                               Research and Development, EPA,  and approved for
                               publication.  Approval  does not signify that  the
                               contents necessarily reflect  the views  and policies
                               of  the Environmental Protection Agency,  nor does
                               mention  of  trade names  or commercial products
                               constitute endorsement or recommendation for use.
                                                      190

-------
                                 AIR  POLLUTANT  HEALTH EFFECTS ESTIMATION MODEL
                              William  C.  Nelson,  John H.  Knelson, Victor Hasselblad
                                      Health Effects Research Laboratory
                                        Environmental Protection Agency
                                    Research Triangle Park, North Carolina
    A  computerized  system  has  been  developed which
sequentially  utilizes  estimates of air pollutant emis-
sions,  ambient  levels,  health damage functions, and
populations at  risk  to  provide  an aggregate estimate of
health  effect.   Emissions estimates  should be pollutant
specific  for  a  base  year  (and any additional  years),
can be  source specific  such as  stationary and mobile
or power  plant  and nonpower plant, and can be geo-
graphic area  specific.  Ambient levels should be
pollutant specific for a  base year.   Levels for addi-
tional  years  can either be  provided  or estimated from
the emissions estimates.  Arithmetic means are used
to estimate chronic  health  effects.   Geometric means
and standard  geometric deviations are used for acute
health  effects.   Daily or hourly averages are esti-
mated  assuming  the log  normal distribution.  The
model  can consider compound effects  such as the
variable  short-term  contribution of  mobile and sta-
tionary sources.   Health  damage functions have been
developed separately for  input  to the model for
sulfates, photochemical oxidants, carbon monoxide,
and nitrogen  dioxide.   Various  specific health ef-
fects  were considered  including mortality, aggrava-
tion of asthma,  acute  lower respiratory disease in
children, aggravation  of  chronic heart and lung dis-
ease in the elderly, chronic respiratory disease,
and transient irritation  symptoms.  Age and disease
status  specific populations at  risk  were considered.
Aggregate estimates  were  developed for each health
effect and pollutant damage function.  All estimates
are in terms of an excess above a baseline since none
of these  effects  are caused by  air pollution alone.
    Although the resulting estimates are admittedly
very rough approximations,  this first level of quanti-
fication  is valuable for  comparison  of differing con-
trol strategies  and  for establishing ranges of uncer-
tainty, which can be considered more fully in future
research.

                    Introduction

     Environmental control  policymakers require
knowledge of  the complex  relationships between air
pollutant emissions, air  quality, human exposures, and
health  damages  for a variety of pollutant categories.
This need is  particularly critical for two of our
largest and most important  industry  groups, the electric
power  industry  and the motor vehicle transportation
industry. The  necessity  for trade-offs is obvious as
our national  shortage  of  low-sulfur  fossil fuel is
superimposed  on our commitment  to the implementation
of the  Clean  Air Act amendments.
     Difficult  decisions  must  be made involving consid-
eration of benefit-cost relationships.  Figure 1 shows
the cyclical  relationship of emissions, air quality,
effects,  and  control decisions.  The pollutants emitted
by stationary sources and by motor vehicles are sub-
jected  to meteorological  factors and to physical and
chemical  forces  which  produce  an ambient air quality
level.  With  knowledge of damage functions and of the
exposure  of a target "population," human or otherwise,
one can estimate physical effects.  These physical
effects would include  effects  on human health, vegeta-
tion, animals,  and materials.   Policy decisions might
be made on the  basis of these  physical effects.
   Figure 1.  Schematic Relationship of Emissions, Air
              Quality, Effects, and Control Measures

Our computerized model presently assumes that this
is the case.   In fact, the only physical effects  con-
sidered  to date are effects on human  health.
     For completeness, damage  functions might also  be
used to  estimate economic damage.  A  control policy
must be  made and enacted into  law.  The resulting
control  measures will exert their  effect on emissions
and the  cycle  shall continue.
     Unfortunately, the research information base for
determining these critical relationships is fragmentary
rather than complete.  Nevertheless,  these available
fragments must be utilized to  provide the best possible
estimates for  these relationships.
     This model considers the  elements of Figure  1
through  the physical  (health)  damage  stage.  Since  the
resulting estimates of health  effects from various
"scenario" assumptions have been presented elsewhere,
this paper will stress the methodology of the model and
its flexibility as an estimation tool.  The following
sections describe the components of the model in some
detail.

                      Emissions

     The model requires emissions  and air quality
information for some  base period,  ideally a year.   The
simplest relation between emissions and air quality is
based on the assumption that the change in air quality
due to man-made pollution sources is proportional to
the change in man-made emissions in the region of
interest.  Therefore if emission estimates are provided
for additional years, resulting air quality for the ith
year is estimated by  the formula

     (AQi - BAQi)/(AQ - BAQ) = Ei/E

where AQ, BAQ, and E  represent total  air quality,
natural background air quality, and emissions for a
base year, and where  the subscripted  variables repre-
sent these same quantities for the ith year.
                                                       191

-------
     It should be noted that this model  assumption does
not relate air quality to emissions but only relates
changes in air quality to changes in emissions.  This
assumption provides a reasonable estimate if the
meteorologic and topographic characteristics of the
area and the temporal and spatial distribution of
emissions remain stable over the time period of inter-
est.
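
A minimal Python sketch of this proportional assumption, solving the relation above for the projected air quality in the ith year; the function name and the numerical values are illustrative only.

    # Sketch of the proportional air-quality / emissions relation:
    #   (AQi - BAQi)/(AQ - BAQ) = Ei/E
    # solved for AQi.  Numbers are illustrative only.

    def projected_air_quality(aq_base, baq, e_base, e_future, baq_future=None):
        """Return AQi assuming the man-made increment scales with emissions."""
        if baq_future is None:
            baq_future = baq          # background taken as unchanged
        return baq_future + (aq_base - baq) * (e_future / e_base)

    # Example: annual mean of 18 ug/m^3 over a 3 ug/m^3 background,
    # with emissions projected to fall by 25 percent.
    print(projected_air_quality(aq_base=18.0, baq=3.0, e_base=100.0, e_future=75.0))
    # -> 14.25
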
     Emissions have also been classified into compart-
ments such as mobile and stationary sources or power
plant and non.power plant sources.  Various "growth"
scenario assumptions can be considered for the various
cases.  For each, the simple assumption of a linear
relation between changing emissions and air quality is
made.
     Emissions estimates have also been classified by
geographic area.  The contiguous United States has been
divided into seven regions representing approximately
the Northeast, the Southeast, the Eastcentral, the
Midwest, the Southcentral, the Northern Plains, and the
West.  State boundaries have been maintained.

                      Air Quality

     As mentioned earlier, air quality information for
some base time period is required.  For our model we
utilized the data base of the National Aerometric Data
Bank (NADB).  We developed air quality data for each
of the seven geographic areas mentioned previously.
Additionally, each region was divided into four strata,
depending on population, based on the 1970 Census.
These four strata were classified as rural (including
towns of less than 2,500), urban places of less than
100,000, urban areas larger than 100,000 but less than
2,000,000, and urban areas larger than 2,000,000.  Air
quality data were derived from NADB representing each
of the 28 population and geographic classes.
     Obviously the inclusion of these 28 classes adds
a little more realism to the model since it permits
consideration of different control options for differ-
ent population size areas and for different regions of
the country.  Additional subdivision is desirable for
certain problems.  For example, the south coast air
basin of California was considered separately for the
oxidant problem.  The model can be easily modified to
permit consideration of other individual regions,
cities, or states.
     To date, information on ambient levels has been
obtained and used for suspended sulfates, oxidants,
carbon monoxide, and nitrogen dioxide.  More monitoring
data are obviously available for some of these pollu-
tants than for others.
     The particular aerometric parameter used to esti-
mate health damage is dependent on the type of effect.
Annual arithmetic means are used to estimate health
effects which are attributable to long-term pollutant
exposure.  Averages or maxima for shorter time periods
are required for estimating acute health effects.
Daily or hourly averages are calculated from the annual
geometric mean and standard geometric deviation, assum-
ing a log normal distribution.  The acute health ef-
fects from these shorter exposures are aggregated so
that all damage estimates are expressed on an annual
basis.
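     As an illustration of that lognormal assumption, the following Python
sketch (with a hypothetical geometric mean and standard geometric deviation,
not values from the NADB) draws a year of daily averages and counts
exceedances of a short-term level:

    import math
    import random

    def daily_averages_lognormal(geo_mean, geo_sd, n_days=365, seed=1):
        """Draw daily-average concentrations from a lognormal distribution
        defined by the annual geometric mean and standard geometric deviation."""
        rng = random.Random(seed)
        mu, sigma = math.log(geo_mean), math.log(geo_sd)
        return [rng.lognormvariate(mu, sigma) for _ in range(n_days)]

    # Hypothetical example:  geometric mean 10 ug/m3, SGD 1.8.
    days = daily_averages_lognormal(10.0, 1.8)
    exceedances = sum(1 for x in days if x > 25.0)   # days above a 25 ug/m3 level
    print(len(days), exceedances)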

               Health Damage Functions

     The health information base for pollutant effects
was reviewed in careful detail.  Although a great
number of research studies have been carried out, the
total information available is limited.  Methodology
variations in the individual studies usually prevent
strict comparability of study results.  The significant
effects observed in many studies, despite differences in
the health end point, target population, and pollutants
measured, indicate that pollutant effects are widespread.
The data base, therefore, provides far more qualitative
information than quantitative information.  However,
there are relatively few pollutants and health
effects for which enough reliable quantitative  data
exist from multiple studies to allow estimation of
health damage functions.
     These functions are now discussed  for each rele-
vant pollutant.  Each function has been determined on
the basis of excess risk of illness or  death  above a
baseline level  since none of these effects are  due to
air pollution alone.  The specific characteristics of
each function are also displayed in Table  1.

Carbon Monoxide

     The effect of low-level carbon monoxide  exposure
on the cardiovascular system has been investigated by
several investigators.5  There is evidence, from
laboratory animal studies and from human volunteer
studies of less severe cardiac effects, that carbon
monoxide exposure can increase the risk of death for indi-
viduals suffering myocardial infarctions.6  While
adverse effects might occur at very low carboxyhemoglo-
bin levels, the damage function was constructed assum-
ing no adverse effect could be demonstrated below  a
carboxyhemoglobin level of 2 percent.   A linear
increase in adverse effect was assumed  up  to  a
carboxyhemoglobin level of 10 percent.
       There also is evidence that carbon  monoxide ex-
posure can decrease the time to onset of chest  pain
and increase the duration of the pain for  persons  with
stable coronary artery disease.7  Studies  with  human
volunteers were conducted at 2.9 percent carboxyhemo-
globin and at 4.5 percent carboxyhemoglobin.  The
total mean time of increased disability (decreased
activity plus increased duration of chest  pain) was 87
seconds at 2.9% COHb and 144 seconds at 4.5%  COHb.
Linear damage functions were estimated  for these data
points assuming an effects threshold ranging  from 0.5%
COHb to 2.0% COHb.  The most reasonable point estimate
of the threshold has been determined to be 1.5% COHb
and this is the value shown in Table 1.
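     A minimal sketch of such a threshold-linear damage function, using the
angina parameters summarized in Table 1 (1.5% COHb threshold, 18.75 percent of
the 254-second baseline per percent COHb), is shown below; the exposure value
is hypothetical:

    def excess_effect(exposure, threshold, slope_pct_per_unit):
        """Percent excess over baseline for a threshold-linear damage function."""
        return max(0.0, exposure - threshold) * slope_pct_per_unit

    baseline_seconds = 254.0    # assumed baseline disability per attack (Table 1)
    pct = excess_effect(exposure=2.9, threshold=1.5, slope_pct_per_unit=18.75)
    extra_seconds = baseline_seconds * pct / 100.0
    print(pct, extra_seconds)   # about 26 percent excess, about 67 extra seconds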

Suspended Sulfates

     Exposure to elevated levels of sulfur oxides,
particularly suspended sulfate aerosols, has  been shown
to cause or aggravate several  health effects.   A
problem is that these effects were observed in communi-
ty studies where levels of sulfur dioxide, acid-sulfate
aerosol, and suspended particulate matter  were usually
simultaneously elevated.  Another limitation  is that
for some studies, suspended sulfate levels had to be
estimated from measured sulfur dioxide  concentrations.
     Despite these difficulties, it is  likely that
short-term elevated exposure to sulfates is largely
responsible for the perceptible increases in daily mor-
tality observed during air pollution episodes in New
York,9,10 London,11 and Oslo.12  Data points
from these studies were plotted and a linear regression
equation was estimated.  An effects threshold for a 24-
hour average sulfates concentration was estimated to be
25 µg/m3.
     Elevated short-term exposures also cause aggrava-
tion of asthma and of preexisting heart and lung
diseases.  The studies of volunteer asthmatics  were
done in the United States8 and Japan.13 The
studies of elderly volunteers with chronic heart or
lung disease were done in Chicago14 and New York.8
Results indicated that each of these susceptible
groups were more likely to experience an attack or a
worsening of their chronic symptoms on  high sulfate
days.  Data points from these studies allowed plots to
be constructed and, as with the mortality  data, linear
regression equations to be estimated.   For the  asthma
damage function, a threshold was estimated to occur at
a daily average sulfate concentration of 6 µg/m3 and
for the aggravation of preexisting heart and  lung
                                                       192

-------
Table 1.  Summary of Damage Function Characteristics

(Baseline frequency is the assumed baseline frequency of disorder within the
population at risk; the effect increase is expressed as a percent of baseline
per pollutant unit above the concentration threshold for effect.)

Carbon Monoxide -- Mortality
     Population at risk:               One-sixth of persons suffering myocardial
                                       infarctions or sudden coronary death
                                       (0.26 percent of population)
     Assumed baseline frequency:       Prevalence of one out of 200 of
                                       population at risk
     Concentration threshold:          2.0% COHb, or 13.1 mg/m3 8-hour
                                       average CO
     Effect increase above threshold:  5.0% of baseline per % COHb

Carbon Monoxide -- Angina Pectoris
     Population at risk:               Two percent of the population
     Assumed baseline frequency:       One attack per day lasting 254 seconds
                                       per attack, or about 0.07 person-hours
                                       per day
     Concentration threshold:          1.50% COHb, or 9.5 mg/m3 8-hour
                                       average CO
     Effect increase above threshold:  18.75% of baseline per % COHb

Oxidants -- Aggravation of Heart and Lung Disease in Elderly
     Population at risk:               The prevalence of chronic heart and lung
                                       disease among the 11 percent of the
                                       population older than 65 years is
                                       27 percent
     Assumed baseline frequency:       One out of five of population at risk
                                       complain of symptom aggravation on any
                                       given day
     Concentration threshold:          400 µg/m3 for one hour or more
     Effect increase above threshold:  1.75% of baseline per 100 µg/m3

Oxidants -- Aggravation of Asthma
     Population at risk:               The prevalence of asthma in the general
                                       population is 3 percent
     Assumed baseline frequency:       One out of 50 asthmatics experience an
                                       attack each day
     Concentration threshold:          400 µg/m3 for one hour or more
     Effect increase above threshold:  1.75% of baseline per 100 µg/m3

Oxidants -- Eye Discomfort
     Population at risk:               Healthy population (excludes persons
                                       with asthma or heart and lung disease)
     Assumed baseline frequency:       Five percent per day
     Concentration threshold:          260 µg/m3 for one hour or more
     Effect increase above threshold:  3.25% of baseline per 100 µg/m3

Oxidants -- Cough
     Population at risk:               Healthy population (excludes persons
                                       with asthma or heart and lung disease)
     Assumed baseline frequency:       Ten percent per day
     Concentration threshold:          400 µg/m3 for one hour or more
     Effect increase above threshold:  1.75% of baseline per 100 µg/m3

Oxidants -- Chest Discomfort
     Population at risk:               Healthy population (excludes persons
                                       with asthma or heart and lung disease)
     Assumed baseline frequency:       Two percent per day
     Concentration threshold:          420 µg/m3 for one hour or more
     Effect increase above threshold:  1.0% of baseline per 100 µg/m3

Oxidants -- Headache
     Population at risk:               Healthy population (excludes persons
                                       with asthma or heart and lung disease)
     Assumed baseline frequency:       Ten percent per day
     Concentration threshold:          100 µg/m3 for one hour or more
     Effect increase above threshold:  0.35% of baseline per 100 µg/m3

Nitrogen Dioxide -- Lower Respiratory Disease in Children
     Population at risk:               All children in the population, or 23.5
                                       percent of population
     Assumed baseline frequency:       Fifty percent of children have one
                                       attack per year
     Concentration threshold:          50 µg/m3 annual average
     Effect increase above threshold:  5.0% of baseline per 25 µg/m3

Nitrogen Dioxide -- Days of Restricted Activity from Lower Respiratory Disease
     Population at risk:               Children with a lower respiratory
                                       disease
     Assumed baseline frequency:       2.66 days per attack
     Concentration threshold:          50 µg/m3 annual average
     Effect increase above threshold:  5.0% of baseline per 25 µg/m3



                         193

-------
Table 1.  (Continued)

Sulfates -- Mortality
     Population at risk:               Total population
     Assumed baseline frequency:       Daily death rate of 2.58 per 100,000
     Concentration threshold:          25 µg/m3 for one day or more
     Effect increase above threshold:  2.5% of baseline per 10 µg/m3

Sulfates -- Aggravation of Heart and Lung Disease in Elderly
     Population at risk:               Same as above for oxidants function
     Assumed baseline frequency:       Same
     Concentration threshold:          9 µg/m3 for one day or more
     Effect increase above threshold:  14.1% of baseline per 10 µg/m3

Sulfates -- Aggravation of Asthma
     Population at risk:               Same as above for oxidants function
     Assumed baseline frequency:       Same
     Concentration threshold:          6 µg/m3 for one day or more
     Effect increase above threshold:  33.5% of baseline per 10 µg/m3

Sulfates -- Lower Respiratory Disease in Children
     Population at risk:               Same as above for nitrogen dioxide
                                       function
     Assumed baseline frequency:       Same
     Concentration threshold:          13 µg/m3 for several years
     Effect increase above threshold:  76.9% of baseline per 10 µg/m3

Sulfates -- Chronic Respiratory Disease, Nonsmokers
     Population at risk:               62 percent of population age 21 or older
     Assumed baseline frequency:       Two percent prevalence
     Concentration threshold:          10 µg/m3 for several years
     Effect increase above threshold:  134% of baseline per 10 µg/m3

Sulfates -- Chronic Respiratory Disease, Smokers
     Population at risk:               38 percent of population age 21 or older
     Assumed baseline frequency:       Ten percent prevalence
     Concentration threshold:          15 µg/m3 for several years
     Effect increase above threshold:  73.8% of baseline per 10 µg/m3
disease function the threshold was estimated to be 9
µg/m3.
     Long-term exposures or repeated short-term expo-
sures to suspended sulfates have also been linked with
increased acute respiratory disease in normal  healthy
children.  Epidemiologic studies which have related ob-
served increases in 3-year incidence rates of acute
lower respiratory disease in children 12 years old
and younger to increases in annual average concentra-
tions of suspended sulfates have been carried out in
the United States8 and England.15,16  These
studies permit the estimation of a damage function.
     Another health parameter linked to sulfur oxide
exposure is chronic respiratory disease.  Community
questionnaire surveys in several United States cities8
have consistently shown differences in prevalence
of chronic respiratory disease symptoms in adults
attributable to annual  average exposure to suspended
sulfates.  In these studies, a very important codeter-
minant of chronic respiratory disease is individual
cigarette smoking.  The available data showed that
cigarette smokers were slightly less affected by ambi-
ent sulfates than were their nonsmoking neighbors.
The relatively large sample size of these studies made
it possible to estimate a separate damage function for
smokers and for nonsmokers.

Oxidants

     Exposures to elevated photochemical oxidant levels
have been associated with increases in minor irritation
symptoms in otherwise healthy adults.  A volunteer
panel of student nurses in Southern California main-
tained daily diaries of their health symptoms.17
Significant associations were found between the daily fre-
quency rates for headache, chest discomfort, eye irri-
tation, and cough and the daily oxidant level.  The inves-
tigators estimated segmented regression lines,
known as "hockey stick" functions, for each of these
four health effects.  The functions were used as the
basis of the damage functions for our model, requiring
the ordinates to be converted from observed frequency
rates to percent excess above baseline frequencies.
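     The conversion itself is a one-line calculation; the sketch below uses
hypothetical frequency rates rather than the student-nurse data:

    def percent_excess(observed_rate, baseline_rate):
        """Convert an observed daily frequency rate to percent excess above baseline."""
        return 100.0 * (observed_rate - baseline_rate) / baseline_rate

    # Hypothetical:  12.4 percent of a panel report cough on a high-oxidant day,
    # against a 10 percent baseline frequency.
    print(round(percent_excess(0.124, 0.10), 1))   # prints 24.0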
     In addition to the previously described relation-
ships with sulfates, it is also believed that oxidants
can aggravate asthma and symptoms of chronic heart and
lung disease.  As an estimate of a lower boundary for
this functional relationship for the susceptible popu-
lation at risk, the slope of the regression line for
cough in a healthy population was used.  As can be seen
from Table 1, the baseline frequency and the target
population are different for each health end point.

Nitrogen Dioxide

     A damage function relating increased  incidence
of acute lower respiratory disease in children with
annual average concentrations of nitrogen  dioxide has
been developed.  This function and a related one esti-
mating days of restricted activity resulting from
these illnesses have been obtained from data of a study
conducted in Chattanooga, Tennessee.18  Although
the survey was conducted during a period of rapidly
decreasing nitrogen dioxide exposures, it  is possible
to form reasonable assumptions about the causes of the
observed effects.19  A regression equation was esti-
mated for three different threshold estimates; however,
the estimate providing the intermediate effect of the
three appears to be most reliable and  is therefore
shown in Table 1.  Several different estimates were
also considered for the baseline annual incidence of
lower respiratory disease, ranging from a  per child
rate of 0.5 to 2.0 attacks per year.   In order to be
conservative, the smallest of these, 0.5,  has been
incorporated into the model.

                 Population at Risk

     As mentioned previously, the appropriate  popula-
tion at risk must be determined  for  each  individual
damage function and obviously will  be  matched  as  close-
ly as possible to the target population  used  in  the
                                                       194

-------
studies upon which the damage function is based.  The
specific populations at risk which the model considers
are shown in Table 1.  For each function except
mortality attributable to elevated daily average
sulfates, some subset of the total population is used.
     The model incorporates these population subsets in
two different ways.   First, the specific population
subsets are calculated for each of 28 population
density and geographic region categories described
previously in the Air Quality section.  These cate-
gories are necessary for the model to determine aggre-
gated national estimates of health impact.  However,
there are some situations in which national estimates
are not appropriate  and estimates are required for
smaller regions.   To provide flexibility for these
situations, all  population subclasses have also been
calculated on the basis of a standard million popula-
tion.
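     A minimal sketch of how these pieces combine for a single damage function,
expressed per standard million population, is given below; the exposure is
hypothetical, and the parameters are the sulfate-asthma entries of Table 1:

    def excess_cases_per_million(frac_at_risk, baseline_daily_rate,
                                 exposure, threshold, slope_pct_per_unit,
                                 days=365, population=1_000_000):
        """Annual excess cases per standard million population for one
        threshold-linear damage function (parameters as in Table 1)."""
        at_risk = population * frac_at_risk
        excess_frac = max(0.0, exposure - threshold) * slope_pct_per_unit / 100.0
        return at_risk * baseline_daily_rate * excess_frac * days

    # Hypothetical:  asthmatics (3 percent of population), 1-in-50 baseline daily
    # attack rate, a 16 ug/m3 daily sulfate level against a 6 ug/m3 threshold,
    # and 33.5 percent of baseline per 10 ug/m3 above threshold -- assuming,
    # purely for illustration, that the same level persists all year.
    print(round(excess_cases_per_million(0.03, 1 / 50, 16.0, 6.0, 33.5 / 10)))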

                    Health Effects
     As stated in the introduction, the purpose of this
paper is not to provide specific numbers for excess
illnesses or premature deaths, but rather to summarize
the development and methodology of the computerized
model.  Therefore the effects estimated, which obvious-
ly depend heavily on the region (population) and time
period (air quality) of interest, will be covered only
very briefly.
     The magnitude of most effects is very large, if
considered for an area of moderate or large population.
An exception is mortality attributable to carbon
monoxide for which the assumption of a threshold level
of 2  percent carboxyhemoglobin  (or equivalently an 8-
hour average carbon monoxide level of approximately
13 mg/m3) ensures a small estimated effect.  However,
for effects attributable to sulfates, the set of damage
functions estimates that the annual national public
health toll is on the level of millions of excess
illnesses and thousands of premature deaths.  The
estimated figures for effects attributable to oxidants
and nitrogen dioxide are also very large as is the
estimate of increased disability from angina pectoris
attributable to carbon monoxide.

                      Conclusion

     The fact that the national estimates of health
effects attributable to air pollutant exposure are very
large brings out several major points.

     First, it must be stressed that the damage func-
tions documented in this model provide very rough
estimates of reality.   Although these functions are
believed to be the best available from the present
health information base, their limitations must con-
stantly be remembered.  These functions should be
frequently reevaluated and revised.
     Second, the commitment to performing new research
studies must be renewed.  These studies must have the
proper research design and methodology to permit valid
quantitative results.  Too often in the past, only
qualitative information has been obtained about air pollu-
tion's effects on health.  Also, investigators must
increase their efforts to extract quantitative results
from  their available data by use of improved statistical
analysis procedures.
     Third, the existence of this computerized model  at
least provides a flexible mechanism for systematic
assessment of  the magnitude of environmentally related
public health  problems.   Its use can aid in the specific
definition of  feasible alternatives and hence in the
making of many of our  present difficult environmental
decisions.
                      References

 1.  Chapman,  L.  D.  et al,  Electricity Demand:   Project
    Independence and  the Clean Air  Act,  ORNL-NSF-EP-89,
    Oak  Ridge National  Laboratory,  pp.  14-31,  November
    1975.
 2.  Finklea,  J.  F.  et al,  Health  Effects of Increasing
    Sulfur  Oxides Emissions,  EPA  In-house Report,
    March 1975.
 3.  Draft Report of the Air Quality,  Noise and Health
    Panel,  Department of Transportation  Interagency
    Task Force,  1975.
 4.  Finklea,  J.  F.  et al,  Estimates of the Public
    Health  Benefits and Risks Attributable to  Equipping
    Light Duty Motor  Vehicles with  Oxidation Catalysts,
    EPA  In-house Report, February 1975.
 5.  Air  Quality  Criteria for  Carbon Monoxide,  N. 10,
    NATO Committee on Challenges  of Modern Society,
    pp.  7-51  to  7-68, June 1972.
 6.  EPA  Memo  Knelson  to Finklea,  December 26,  1974,
    entitled  "Excess  Cardiac  Deaths Related to CO
    Exposure: Extrapolation  from Animal  Dose-Response
    Data."
 7.  Knelson,  J.  H., General Population Morbidity
    Estimates from Exacerbation of  Angina Pectoris
    Related to Low-Level Carbon Monoxide Exposure,  EPA
     In-house  Report,  August 1975.
 8.  Health  Consequences of Sulfur Oxides:   A Report
    from CHESS,  1970-71, EPA  650/1-74-004, Environmen-
    tal  Protection Agency, May  1974.
 9.  Buechley, R. W. et al, SO2 Levels and Perturbations
     in Mortality, Archives of Environmental Health
     27, pp. 134-137, September 1973.
10.  Glasser, M. and L. Greenburg, Air Pollution
     Mortality and Weather:  New York City 1960-64,
     presented at the Epidemiology Section of the
     American Public Health Association Annual Meeting,
     Philadelphia, November 1969.
11.  Martin, A. E. and W. Bradley, Mortality Fog and
     Atmospheric Pollution, Monthly Bulletin of the
     Ministry of Health (London) 36, pp. 341-344, 1963.
12.  Lindeberg, W., Air Pollution in Norway III:
     Correlations Between Air Pollutant Concentrations
     and Death Rates in Oslo, Smoke Damage Council,
     Oslo, 1968.
13.  Sugita, O. M. et al, The Correlation Between
     Respiratory Disease Symptoms in Children and Air
     Pollution, Report No. 1:  A Questionnaire Health
     Survey, Taiki Osen Kenkyu 5, p. 134, 1970.
14.   Carnow, B. W. et  al,  The  Role of Air Pollution
     in Chronic Obstructive Pulmonary Disease,  J.
     American  Medical  Association  214, pp. 894-899,
     November 1970.
15.   Douglas,  J.  W. B. and  R.  E.  Waller, Air Pollution
     and Respiratory  Infection in  Children, British
     Journal of Preventive Social  Medicine 20,  pp.  1-8,
     1966.
16.  Lunn, J. E. et al, Patterns of Respiratory Illness
     in Sheffield Infant School Children, British
     Journal of Preventive Social Medicine 21, pp. 7-16,
     1967.
17.  Hammer, D. I. et al, Los Angeles Student Nurse
     Study, Archives of Environmental Health 28, pp.
     225-260, May 1974.
18.   Hammer, D.  I. et al, Air Pollution and  Childhood
     Lower Respiratory  Disease:  Exposures  to Oxides
     of Nitrogen, EPA  In-house Report, February 1975.
19.   Knelson, J.  H. et  al, Impact on  Public  Health
     of Low-Level Long-Term N02 Exposure,  EPA  In-house
     Report, July 1975.
                                                       195

-------
                               MORTALITY MODELS:  A POLICY  TOOL

        Wilson B. Riggan, Ph.D., John B. Van Bruggen, Larry Truppi,  Marvin Hertz,  Ph.D.
                  Environmental Protection Agency, Research Triangle Park, N.C.
                    Summary

     The recent Pittsburgh air pollution epi-
sode in November 1975 presents a striking need
to use daily mortality models as a policy
tool.  In this preliminary study we found 16
excess deaths when the episode period was compared
to the same four days of the week before and the
same four days of the week following the epi-
sode.  An estimated 23 excess deaths were
found when the period of the episode was com-
pared to the same month and period in the
years 1962 through 1972.  However, after
fitting the model, which accounted for tempera-
ture and other covariates, we found only 14
excess deaths.  In the preceding comparison the ef-
fect of temperature had been assigned to air
pollution.
       Mortality Models:  A Policy Tool

     With the great improvement in air quality
monitoring technology, there is a strong ac-
companying need to quantify the health impact
from environmental pollution.  The recent
Pittsburgh episode in November 1975 is a
striking example of this present need.  The
Donora, Meuse Valley, New York, and London
episodes of previous decades, which were
handicapped by a lack of pollution exposure
data, also provide glaring examples of this
present need for more air monitoring data
which can be related to observed health
changes.

     An important tool for improving the
assessment of the total health effects of
pollution is the use of daily mortality
models.  Although man reacts to pollution
through a full spectrum of biological re-
sponses ranging from subtle physiologic
changes to death, mortality is currently the
best documented and defined health indicator
available.  It is extremely noteworthy to
recall that statistically strong effects were
not obvious at the time of some of the his-
toric pollution episodes.  The adverse health
effects in the 1952 London episode, for ex-
ample, became clear only when mortality
records became available as vital statistics.

     This paper will describe the use of
daily mortality models based on single fore-
cast equations that can apply to metropolitan
areas in the Northeastern United States.
Specifically the Pittsburgh pollution episode
of November 17-20,  1975, will be discussed,
using the model to draw mortality inferences.
The models enable epidemiologists to estimate
deaths caused by high concentration of air
pollution.  Mortality models are very useful
to prospective pollution control in that they
enable authorities to forecast the probable
effect of a specific control action and later
to assess the effectiveness of controls.

     Why use a model rather than the real
world?  Admittedly, a model is a crude "Alice
in Wonderland" simplification of the real
world.  But  it  provides  information on rela-
tionships between  measurable factors which
may be adjusted for,  or  controlled.  The model
must be scientifically valid in that it must
approximate  a microcosm  of the real world.
The validity of various  models can be compared
by how closely  they  approximate the actual
observation  data.

           Materials  and Methodology

     For the recent  Pittsburgh episode we have
three major  sources  of mortality data:
National Center for  Health Statistics; Depart-
ment of Vital Statistics,  the State of Penn-
sylvania, and Allegheny  County Health Depart-
ment.  The National Oceanic and Atmospheric
Administration  supplied  the meteorological
data.  Aerometric  data were supplied by the
Allegheny County Air  Pollution Control Board.

Background of the  Pittsburgh Episode

     The National  Weather  Service  Forecast
Office at Pittsburgh  Airport issued an Air
Stagnation warning at noon,  Monday,  November
17, 1975.  The  areas  covered included western
Pennsylvania, several eastern Ohio and nor-
thern West Virginia counties.   A large high
pressure system became stationary  over the
State of West Virginia,  causing strong sur-
face temperature inversions which  trapped
cooler air at the  ground,  particularly in
valleys such as are common around  Pittsburgh.
Pittsburgh's location also brought very light
surface winds causing poor dispersion.   Wind
speeds at the Pittsburgh Airport averaged
6.8 kph on November 17,  fell  to 4.2  kph on
November 18, 4.0 kph  on  November 19,  and rose
to 13.7 kph  on  November  20,  the last day of
the episode.  Table 1 presents  daily maximum
and minimum  temperatures,  departure  from
normal average  temperature,  afternoon mixing
depths, average wind  speed,  resultant wind
direction and speed,  and average relative
humidity.

                   Table 1
           Daily Weather Conditions
           Nov. 17 to Nov.  20,  1975

                      17      18      19     20
Temperature  (C)
  Maximum           16.7   17.2   17.2   18.3
  Minimum            1.7    2.2    1.7    1.7
Departure from
  Normal  (C)
  +4.4    +5.6    +5.0    +6.12
Afternoon Mixing
  Depth  (m)          926  1,061    869    927

Average Windspeed
  (kph)              6.8    4.2    4.0   13.7

Resultant Wind
  Direction  (deg)    230    270    160    160
Resultant Wind
  Speed  (kph)        6.3    3.4    1.6   13.2

Avg. Rel. Humidity(%) 60      63      60     56
                                             196

-------
Approach

     We secured death certificates from Alle-
gheny County Health Department.  We compiled
mortality figures for the four days of the
Pittsburgh pollution episode, and the corre-
sponding four days in the preceding and fol-
lowing weeks.  These records were not complete,
comprising 85-90 percent of the ultimate recorded
deaths.  This shortfall is due to a number of
residents who died outside the county; their deaths
will be added to the county records at a later
time.  Table 2 gives this comparison, reveal-
ing 16 excess deaths during the episode.

                   Table 2

    Mortality Figures From Allegheny County
    for the Four Days of the Pittsburgh Air
    Pollution Episode, and the Corresponding
         Four Days in the Preceding and
                 Following Week

   Deaths       Average Deaths for     Excess Deaths
   During       the Preceding and      During Episode
   Episode      Following Week

    181              163.5                  17.5
Discussion

     By using the same four days of the pre-
ceding and the following week as a control,
we have removed the day-of-week effect.  However,
the last day of the corresponding four-day
period of the following week was Thanksgiving,
which normally has a higher holiday death
rate.  This suggests that without the holiday
the excess deaths may have been greater than
17.

     We adjusted for incomplete mortality
records for November 1975 in the following
manner.  First we checked for an annual trend
and found none.  We divided the average daily
deaths of the 11 years of November (47.3) by
the average daily deaths of November 1975
(40.4).  We used this factor of 1.17 to ad-
just the daily deaths upward for November
1975.
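     The adjustment is simple enough to show directly (a sketch using the
averages quoted above and a hypothetical unadjusted daily count):

    avg_daily_1962_1972 = 47.3   # average daily November deaths, 1962-1972
    avg_daily_nov_1975 = 40.4    # average daily deaths recorded for November 1975
    factor = avg_daily_1962_1972 / avg_daily_nov_1975   # about 1.17

    unadjusted_daily_count = 52  # hypothetical incomplete daily count
    adjusted = unadjusted_daily_count * factor
    print(round(factor, 2), round(adjusted, 1))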

     Table 3 compares the average number of
deaths for November 17 through 20 for years
1962 through 1972 with the deaths during the
Pittsburgh episode of November 17 through 20.
This comparison gives an excess of 23 deaths.

                   Table 3
           Comparison of Deaths

   Day of      1975     Average      Excess
   Month               1962-1972     Deaths

     17          60        49          11
     18          52        47           5
     19          47        47           0
     20          54        47           7

                              Total     23

             Probability = .048
     The above comparison has removed the
seasonal effect; to be sure the day-of-week
effect has been removed, we selected for each
year Monday through Thursday of the week pre-
ceding Thanksgiving for comparison with Monday
through Thursday of the episode.

                   Table 4
      Comparison of Deaths by Day of Week

   Day of        1975      Average      Excess
   Week         Deaths    1962-1972     Deaths

   Monday          60         49          11
   Tuesday         52         48           4
   Wednesday       47         44           3
   Thursday        54         49           5

                               Total       23

             Probability = .048

     Hence, the difference is not due to the
day of the week or the annual cycle.

Application of Model

     Daily fluctuations in mortality rates
are primarily determined by four major
factors:

     1.  Annual cycles
     2.  Epidemic influenza-pneumonia
     3.  Temperature
     4.  Environmental pollution.

     Annual cycles of mortality are important
in determining mortality rates because the
highest death rates are in the winter and the
lowest in the summer.  Epidemic influenza-
pneumonia is important because during an epi-
demic, death rates rise far above those due to
the annual cycle.  Temperature has an effect distinct
from the annual cycle, in that a sharp
drop in temperature associated with the move-
ment of a weather front reduces mortality.
Heat waves also have an extreme effect on
mortality.  Environmental pollutants increase
mortality, but their effects are small com-
pared to the others except in air pollution
episodes.  Temperature and annual cycle may
have 15 to 20 times the effect of air pollu-
tion.  Environmental pollution has a signifi-
cant additional effect, assessable only when
the other, strong effects are adequately
measured.

Application

     Our first step in developing an empiri-
cal forecast model for Allegheny County was
to divide the 11 years of mortality data into
two periods:  5 years, 1962-1966; and 6 years,
1967-1972.  The first period was used to de-
velop the model and estimate the coefficients
while the second period was used to test the
model.

     First, daily total mortality observa-
tions were corrected to eliminate major
influenza epidemics.  Next, mortality data
were checked for trend, and adjusted daily
mortality ratios were computed as the daily
observations divided by the average of the
11 years.  We estimated coefficients for the
following model:
                                              197

-------
     Y_i = a_0 + a_1 X(1) + a_2 X(2) + a_3 X(3) + a_4 X(4)
           + a_5 t_i + a_6 t_i^2 + a_7 t_i^3

where Y_i  = Daily mortality ratio:  observed
             deaths on the ith day multiplied
             by 100 and divided by the average
             number of deaths per day for the
             11 years.

    X(1) = Lagged function distributing the
            temperature effect over 3 days
            (t_1, t_2, t_3 used as a distributed
            lag); an exponential polynomial
            function of the third power of the
            observed maximum temperature in
            degrees Celsius for the given day.

    X(2) = Observed temperature minus the
           average temperature for the
           preceding seven days.

    X(3) = Precipitation during the day in
            millimeters.

    X(4) = Holiday effect - Thanksgiving,
            Christmas, etc.

     Mortality is given as the "mortality ratio
expected."

     This standardized ratio allows direct
comparison between places and times, and
statements about percent change in mortality
per unit change in the pollution variable.
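     A minimal sketch of fitting such a forecast equation by ordinary least
squares is given below; the covariates and coefficients are synthetic
placeholders, not the Allegheny County data:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1826                                   # roughly five years of daily data

    # Synthetic stand-ins for X(1)-X(4) and the t terms of the model.
    X1 = rng.normal(15, 8, n)                  # lagged temperature function
    X2 = rng.normal(0, 3, n)                   # departure from 7-day mean temperature
    X3 = rng.exponential(2, n)                 # precipitation (mm)
    X4 = (rng.random(n) < 0.02).astype(float)  # holiday indicator
    t = np.arange(1, n + 1, dtype=float)

    A = np.column_stack([np.ones(n), X1, X2, X3, X4, t, t**2, t**3])
    y = 100 + 0.3 * X2 - 2.0 * X4 + rng.normal(0, 5, n)   # synthetic mortality ratio

    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    expected = A @ coef                        # expected mortality ratios
    print(np.round(coef[:5], 2))               # intercept and X(1)-X(4) coefficients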

     We used 1962-1966 data to estimate a set
of coefficients.   We also estimated a set of
coefficients using 1967-1972 data.  Estimated
expected deaths for 1967 through 1972 with
coefficients generated from the same data
gave a  sum of squares of deviation from
expected of 98.3.  Sum of squares of devia-
tion from expected deaths for 1967-1972 using
coefficients generated from data for 1962-1966
was 98.7.  Therefore, the relationship found
in the first period holds for the second
period.

     We felt justified in using the coeffi-
cients from 1962-1966 to calculate the expected
mortality ratios for November 1975.  The air
pollution episode was the only observable
unusual condition in November 1975 that could
have caused expected mortality to deviate so
widely.

     After adjusting deaths during the epi-
sode and for the same days of the week in
the previous and following weeks for tempera-
ture, precipitation, annual cycle, and day of
week, we still show at least 14 excess deaths
during the episode.  There seems little
possibility that this result could be due
to random chance.

Aerometric Data

     With aerometric data for only three
weeks from seven stations, we have not in
this preliminary report attempted to estimate
coefficients for a dose-response function.
     However, Figure 1 presents graphically the
results using deviations from expected deaths
generated by the model, which adjusted for
annual cycle, temperature, and other covariates.

[Figure 1 (bar chart not reproduced):  deviations
from expected deaths for the week before the
episode (SO2 = 70, COH = 1.09), the week of the
episode (SO2 = 152, COH = 3.00), and the week
after the episode (SO2 = 93, COH = 0.99).]

     Figure 1.  Comparison of Deviations from
       Expected Deaths Generated by the Model

     The above results indicate that using
deaths without considering temperature and
other covariates in the Pittsburgh episode
tends to inflate the number of deaths.

                                                 Comment and Conclusion

                                           One may ask if the excess deaths would
                                      have occurred within a few days or weeks
                                      rather than during the episode.  We simply
                                      do not know.  However, mortality rates were
                                      higher the week following the episode than
                                      the week preceding.  At least, there is no
                                      evidence that the excess deaths would have
                                      occurred during the week following the epi-
                                      sode.

                                           This preliminary study also found a need
                                      for more timely aerometric data, especially
                                      in pollution episodes.
                  References

1.  Goldberger, Arthur S., Econometric Theory,
    John Wiley and Sons, Inc., New York,
    1964, pp. 274-278.
                                              198

-------
                                A RADIOACTIVE WASTE MANAGEMENT ASSESSMENT MODEL
                      S.E. Logan
    Department of Chemical and Nuclear Engineering
         The University of New Mexico
            Albuquerque, New Mexico
                    S.M. Goldberg
            Office of Radiation Programs
        U.S. Environmental Protection Agency
                  Washington, D.C.
One of the  major environmental concerns associated
with the  projected increase in nuclear power genera-
tion is the treatment  and storage or disposal of high-
level and transuranic  radioactive waste.   This model
provides  a  detailed assessment methodology for the
short-term  as  well as  long-term quantitative effects
on the environment resulting from the release of
radionuclides  during all  phases of radioactive waste
management  operations.  This model includes a fault
tree for  determination of release probabilities and
their resultant magnitudes, an environmental model
for calculating transport of radionuclides to man by
environmental  pathways and an economic model for an
evaluation  of  associated  damages.  Full implementa-
tion of this technology assessment model will aid
EPA and others in evaluating the radioactive high-
level and transuranic  waste management programs.

                     Background

Assessment  methodology, that is both independent and
flexible, is urgently  needed for the evaluation of
the various long-term  waste disposal methods and
management  options.   High-level and transuranic radio-
active waste must eventually be placed in long-term
repositories for hundreds of thousands of years to
prevent the entry of these wastes into the environ-
ment.  Management of  these wastes must be accom-
plished in  a fashion which ensures a minimum public
health hazard  and a minimum risk to the environment
from the  detrimental effects of radioactive contami-
nation.

In this regard, a technology assessment model is being
developed to perform parametric risk calculations for
high-level  and transuranic wastes for a variety of
geologic  disposal concepts, fixation processes, and
reprocessing and repository operations.  The model is
specifically designed  to  translate probabilities and
consequences of risk occurrences so that they can be
considered in  a cost-effectiveness methodology.
During FY 1976, this assessment model is being
utilized  initially for a  specific demographic and
geographic  site and a  specific geologic concept, i.e.,
bedded salt in the Los Medanos area of Southeast New
Mexico.3   This model could be applied later to other
specific  concepts and  sites that are considered or
proposed  by ERDA as part  of their terminal storage
program.
                  Model Development

The University of New Mexico has been developing an
environmental model entitled AMRAW (Assessment Method
for Radioactive Waste Management).  This radioactive
waste management systems model has four parallel paths
(Figure 1); each path represents a phase in the waste
management sequence and includes a release or fault
tree model, an environmental model, and an economic
model.  Presently, the major effort is being applied
to the terminal or long-term storage branch for a
site-specific environment.  It is planned that the
repository operations branch will be implemented
during the next phase of work on the model.

The source terms for the environmental model are the
quantities of the significant radionuclides that will
be part of the inventory of commercial reprocessing
plants and will be transported to a Federal repository.
A screening method has been developed and applied to
select significant radionuclides.4  These include
fission product isotopes and heavy metal isotopes.
The radionuclide concentrations versus time were
obtained utilizing the ORIGEN isotope generation and
depletion code developed at ORNL5 and up-to-date fuel
and power conditions.  The waste form is assumed to
be a borosilicate glass with 25 wt% waste calcine
content.

Fault trees have been constructed to provide the
relationships between various geologic, meteorological,
and man-caused events which are potential mechanisms
for release of radioactive material to the environ-
ment.2,3,8,9  The fault tree model within AMRAW
evaluates the probability for release by each of numer-
ous potential release mechanisms (such as diapirism,
tectonic process, fractures of underlying rock,
groundwater transport resulting from aquifers, etc.),
and the fraction of the inventory released by each
such occurrence during a specific increment of time.
Each path through a fault tree which leads to a
release represents a set of conditions existing at a
given time which together can permit a release to
occur (Figure 2).  Each such path comprises a "cut
set" and has associated probability factors and
release fractions or transfer coefficients.  A flex-
ible system has been programmed in AMRAW for the fault
tree data.  For each environmental release category,
any number of cut sets can be accommodated, subject
only to an adequate DIMENSION statement.  Each cut set
may consist of a number of component probability
factors, which could provide a parametric survey for a
single or a group of initial release conditions.
Further, each component probability can be represented
by any or all of the following built-in functions:
constant, step change, ramp change, and exponential
change.  Thus, for example, the geologic process of
basin-range crustal extension is expected to occur
(and is so represented by the code) with zero
probability at the present time and a gradually
increasing ramp-function probability in the future.
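To make the bookkeeping concrete, the following sketch (not the AMRAW code
itself; the cut set, rates, and release fraction are purely illustrative)
evaluates one cut set whose component probabilities use the constant and ramp
forms:

    import math

    def constant(p):     return lambda t: p
    def step(p, t0):     return lambda t: p if t >= t0 else 0.0
    def ramp(rate, t0):  return lambda t: max(0.0, rate * (t - t0))
    def expo(p0, k):     return lambda t: p0 * math.exp(k * t)

    def cut_set_release(components, release_fraction, t):
        """Product of component probabilities at time t, and the resulting
        probability-weighted fraction of inventory released."""
        prob = 1.0
        for f in components:
            prob *= f(t)
        return prob, prob * release_fraction

    # Hypothetical cut set:  a slowly increasing crustal-extension probability
    # combined with a constant groundwater-transport factor.
    cut_set = [ramp(1.0e-7, 0.0), constant(0.2)]
    for t in (100.0, 10_000.0, 100_000.0):
        p, rel = cut_set_release(cut_set, release_fraction=1.0e-3, t=t)
        print(t, p, rel)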
                  Model Application

The environmental model determines the transport to and
accumulations at various receptors in the biosphere.
These receptors are:  air, ground surface, surface
water, and groundwater  (Figure 3).  The model does
adjust each release amount to account for environmental
removal and/or fixation processes.  The environmental
model is also used to determine pathways from environ-
mental input concentrations to radiation dose to man.
Pathways include:  immersion in air, inhalation, inges-
tion of groundwater, submersion in water, ingestion
of  contaminated  food and drink, and direct  surface
exposure.  The release  increments to the  four recep-
tors in the environment are represented from all
release events in the geologic condition  of deep  rock-
melt disposal for a variety of transuranic  material
                                                       199

-------
at several decay times (Table 1).  Transfer coeffi-
cients for environmental transport and radiation dose
are obtained by applying results from other available
environmental codes such as PERCOL (groundwater
transport model); INREM and EXREM (radiation dose
codes); and AQUAMOD, AIRDOS, and TERMOD (environmen-
tal receptor models).
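Schematically (illustrative numbers only; the actual transfer coefficients and
dose factors come from the codes just cited), the dose contribution from one
receptor is a sum over pathways of concentration times a transfer coefficient
times a dose factor:

    def pathway_dose(receptor_conc_ci, pathways):
        """Sum dose contributions for one receptor and one radionuclide.

        receptor_conc_ci -- radionuclide concentration at the receptor (Ci)
        pathways         -- list of (transfer_coefficient, dose_factor) pairs,
                            stand-ins for output of the environmental codes
        """
        return sum(receptor_conc_ci * tc * df for tc, df in pathways)

    # Hypothetical groundwater receptor with ingestion and irrigation pathways.
    pathways = [(0.05, 3.0e3), (0.01, 1.2e3)]   # (transfer coeff., dose factor)
    print(pathway_dose(receptor_conc_ci=1.7e-8, pathways=pathways))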
The economic model will calculate detailed total
damage and marginal damage costs.  These damages will
be evaluated from the appropriate residual effects
that are associated with the release of radionuclides
in the waste management process.  A study of rela-
tionships between long-term costs associated with
radioactivity and long-term costs associated with
other environmental pollutants will be started for
the purpose of placing residual effects in a proper
perspective.  These costs are presented in a para-
metric format, utilizing simplified sensitivity
analysis to allow a cost-effectiveness perspective
to be utilized in any decisionmaking process.

The AMRAW code is structured to allow incorporation
of many of the existing or newly developed nuclear
fuel cycle and environmental sciences codes as sub-
routines, thus allowing the main program to be as
simple and straightforward as possible to avoid any
"black box" mysteries.  By this means, AMRAW serves
as a vehicle to bring together data from several
disciplines in an organized manner.

                   Conclusions

Application of this technology assessment model is
planned by EPA for the following uses:  (1) to com-
pare and assess possible and proposed future storage
and/or disposal concepts and methods for high-level
waste; (2) to help develop the technical bases and
guidelines for establishing environmental policy
relative to the control of commercial alpha wastes
and high-level wastes; (3) to apply information from
the model to EPA's continuing effort to develop the
generic ability to evaluate the environmental accept-
ability of presently operating and proposed fuel cycle
facilities that produce, treat, store, and dispose of
transuranic and high-level waste; and (4) to assist
EPA in developing criteria and standards relating to
transuranic and high-level waste management activities.

Implementation of this model is possible for a whole
range of both radioactive and non-radioactive
hazardous materials which require perpetual care.
This model provides the capability to evaluate feed-
back effects from the results and to handle changes in
any of the treatment or processing operations of
these hazardous waste products, in order to minimize
environmental impact.  These feedback effects could
serve to identify options which could act as incen-
tives to transform these wastes into less hazardous
forms, such as the application of transmutation to
modify the very long-term hazard potential of trans-
uranic wastes.
                    References

 1.  S. E. Logan, "A Technology Assessment Methodology
     Applied to High-Level Radioactive Waste Manage-
     ment," Ph.D. Dissertation, The University of New
     Mexico, May 1974.

 2.  K. J. Schneider and A. M. Platt, Ed., "Advanced
     Waste Management Studies, High-Level Radioactive
     Waste Disposal Alternatives," USAEC Report
     BNWL-1900 (4 volumes), May 1974.

 3.  H. C. Claiborne and F. Gera, "Potential Contain-
     ment Failure Mechanisms and Their Consequences
     at a Radioactive Waste Repository in Bedded Salt
     in New Mexico," ORNL-TM-4639, Oak Ridge National
     Lab., October 1974.

 4.  S. E. Logan and G. H. Whan, "Selection of Signifi-
     cant Elements and Radionuclides for Waste Manage-
     ment Assessment," Trans. Am. Nucl. Soc., 19, 204,
     1974.

 5.  M. J. Bell, "ORIGEN--The ORNL Isotope Generation
     and Depletion Code," ORNL-4628, Oak Ridge
     National Lab., May 1973.

 6.  Generic Environmental Statement on Mixed-Oxide
     Fuel, WASH-1327, January 1976.

 7.  Nuclear Power 1974-2000, WASH-1175, February 1974.

 8.  F. Gera and D. G. Jacobs, "Considerations in the
     Long-Term Management of High-Level Radioactive
     Wastes," ORNL-4762, Oak Ridge National Lab.,
     February 1972.

 9.  D. H. Denham, D. A. Baker, J. K. Soldat and J. P.
     Corley, "Radiological Evaluations for Advanced
     Waste Management Studies," BNWL-1764, Battelle
     Pacific Northwest Labs., September 1973.

10.  R. C. Routson and R. J. Serne, "One-Dimensional
     Model of the Movement of Trace Radioactive Solute
     Through Soil Columns:  The PERCOL Model," BNWL-
     1718, Battelle Northwest Labs., 1972.

11.  R. E. Moore, "AIRDOS--A Computer Code for Estimat-
     ing Population and Individual Doses Resulting
     From Atmospheric Releases of Radionuclides from
     Nuclear Facilities," ORNL-TM-4687, Oak Ridge
     National Lab., January 1975.

12.  D. K. Trubey and S. V. Kaye, "The EXREM III
     Computer Code for Estimating External Radiation
     Doses to Populations from Environmental Releases,"
     ORNL-TM-4322, Oak Ridge National Lab., December
     1973.

13.  G. S. Killough, P. S. Rohwer and W. D. Turner,
     "INREM--A Fortran Code Which Implements ICRP 2
     Models for Internal Radiation Dose to Man," ORNL-
     5003, Oak Ridge National Lab., February 1975.
                                                     200

-------
[Figure 1 (flow diagram not reproduced):  the four parallel waste management
paths -- residuals generation and treatment at the reprocessing plant, waste
transport, repository operations, and long-term storage -- each with activity
transfer coefficients and damage charges assessed against residuals.]
-------
[Figure 2 (diagram not reproduced):  elements include waste transport to
shallow depth, volcanism, diapirism, melt migration, transport of waste
deposit, accidental penetration, heat barrier decay, and ground water; symbols
denote cause of release, mixing zone, transport mechanism, release media,
matrix, and combining point.]

           Figure 2.   A Simplified Version of a Possible Release Cutset For
                       Deep Rock-Melt Disposal
[Figure 3 (diagram not reproduced):  release to environment branching into
release to air, release to surface water, release to ground water, and release
to land surface.]

            Figure 3.   Categorical Breakdown of Environmental Receptors in
                        Biosphere for AMRAW
                                                      202

-------
               TABLE 1.   Concentrations of Selected Significant Transuranic Waste Material Per
                          Increment of Fuel for a Simulated Case of Deep-Rock Melt Disposal

1.  RADIONUCLIDE Pu-239
                              RECEPTOR CONCENTRATIONS (Ci)
       TIME          AIR         GROUND       SURFACE      GROUND
                                 SURFACE       WATER        WATER
          1.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
          3.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
         10.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
         30.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
        100.       5.51E-09     3.94E-07     5.28E-11     6.85E-12
        300.       1.88E-08     1.34E-06     1.80E-10     2.34E-11
       1000.       9.95E-08     7.11E-06     9.54E-10     1.24E-10
       3000.       5.37E-07     3.84E-05     5.15E-09     6.68E-10
      10000.       3.73E-06     2.67E-04     3.58E-08     4.64E-09
      30000.       1.42E-05     1.01E-03     1.36E-07     1.76E-08
     100000.       2.87E-05     2.05E-03     2.75E-07     3.57E-08
     300000.       1.13E-05     8.11E-04     1.09E-07     1.41E-08
    1000000.       1.36E-07     9.72E-06     1.30E-09     1.69E-10

2.  RADIONUCLIDE Np-237
                              RECEPTOR CONCENTRATIONS (Ci)
       TIME          AIR         GROUND       SURFACE      GROUND
                                 SURFACE       WATER        WATER
          1.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
          3.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
         10.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
         30.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
        100.       7.31E-10     5.22E-08     7.00E-12     9.08E-13
        300.       2.20E-09     1.57E-07     2.11E-11     2.74E-12
       1000.       8.48E-09     6.06E-07     8.13E-11     1.05E-11
       3000.       2.65E-08     1.89E-06     2.54E-10     3.29E-11
      10000.       9.66E-08     6.90E-06     9.26E-10     1.20E-10
      30000.       2.83E-07     2.02E-05     2.71E-09     3.51E-10
     100000.       9.90E-07     7.07E-05     9.49E-09     1.23E-09
     300000.       2.71E-06     1.94E-04     2.60E-08     3.38E-09
    1000000.       8.26E-06     5.90E-04     7.92E-08     1.03E-08

3.  RADIONUCLIDE Am-243
                              RECEPTOR CONCENTRATIONS (Ci)
       TIME          AIR         GROUND       SURFACE      GROUND
                                 SURFACE       WATER        WATER
          1.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
          3.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
         10.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
         30.       0.00E-01     0.00E-01     0.00E-01     0.00E-01
        100.       2.85E-07     2.04E-05     2.73E-09     3.54E-10
        300.       8.06E-07     5.76E-05     7.72E-09     1.00E-09
       1000.       2.71E-06     1.93E-04     2.59E-08     3.37E-09
       3000.       6.87E-06     4.91E-04     6.59E-08     8.55E-09
      10000.       1.68E-05     1.20E-03     1.61E-07     2.09E-08
      30000.       1.93E-05     1.38E-03     1.85E-07     2.40E-08
     100000.       9.47E-06     6.77E-04     9.08E-08     1.18E-08
     300000.       4.71E-08     3.36E-06     4.51E-10     5.85E-11
    1000000.       1.58E-10     1.13E-08     1.51E-12     1.96E-13
                                                  203

-------
                                 FOOD  - AN  INTERACTIVE  CODE TO  CALCULATE  INTERNAL
                                 RADIATION  DOSES  FROM CONTAMINATED  FOOD PRODUCTS

                                   D.  A.  Baker, G.  R. Hoenes  and  J.  K.  Soldat

                                                    Battelle
                                         Pacific Northwest Laboratory
                                           Richland, Washington  99352
                      Summary
An interactive code, FOOD, has been written in BASIC
for the UNIVAC 1108 to facilitate calculation of
internal radiation doses to man from radionuclides in
food products.  In the dose model, vegetation may be
contaminated by either air or irrigation water con-
taining radionuclides.   The model considers two mecha-
nisms for radionuclide contamination of vegetation:
1) direct deposition on leaves and 2) uptake from soil
through the root system.  The user may select up to
14 food categories with corresponding consumption
rates, growing periods, and either irrigation rates or
atmospheric deposition rates.  These foods include
various kinds of produce, grains, and animal products.
At present, doses may be calculated for the
total body and six internal organs from 190 radio-
nuclides.  Dose summaries can be displayed at the
local terminal.  Further details on percent contribu-
tion to dose by nuclide and by food type are avail-
able from an auxiliary high-speed printer.  This out-
put also includes estimated radionuclide concentrations
in soil, plants, and animal products.

                    Introduction

The computer program FOOD is designed to calculate
radiation doses to man from ingestion of foods, such
as produce, milk,  eggs, and meat contaminated by
radionuclides.  These radionuclides may be deposited
on vegetation and  the ground by water used for irriga-
tion or directly from the air.  A total of 14 food
categories may be  selected with corresponding con-
sumption rates, growing periods, and irrigation rates
or atmospheric deposition assigned by the user.  At
present, doses to  the total body, and six internal
organs from 190 radionuclides may be calculated.
Dose summaries are displayed at the local terminal.
Additional details on percent contribution to dose
by nuclide and by  food type are available from an
auxiliary high-speed printer.  This latter output
also includes estimated radionuclide concentrations
in soil, plants, and animal products.

The program is designed to be compatible with files
of releases and dose factors which are used by a
program, ARRRG, which calculates doses to man from
ingestion of drinking water and aquatic foods and
from aquatic recreation.  The program ARRRG has been
described in detail previously.1

                        Model
The model presented for estimating the transfer of
radionuclides (except for H-3 and C-14) from irriga-
tion water or from air to plants through both leaves
and soil to food products was derived by Soldat2 for
a study of the potential doses to people from a
nuclear power complex in the year 2000.

Deposition on Food Products

The source of the radionuclide contamination of the
foods may be either deposition with water used for
sprinkler irrigation or deposition of airborne radio-
nuclides.  In the absence of specific data, sprinkler
irrigation is normally assumed, rather than surface
irrigation, because the aerial  spray  produced by the
sprinkler leads to foliar deposition  resulting in
higher radionuclide concentrations in the plants  (and
animals consuming them) than would irrigation via fur-
rows or drip.  These latter systems can be simulated,
if desired, by setting the factor for foliar retention
in the program to zero.
Deposition by Irrigation Water.  The deposition rate d_i from irrigation water is
defined by the relation

     d_i = C_iw I     (water deposition)                                     (1a)

where:

     d_i  = deposition rate or flux [pCi/(m²·d)] of radionuclide i

     C_iw = concentration of radionuclide i in water used for irrigation (pCi/ℓ)

     I    = irrigation rate [ℓ/(m²·d)], the amount of water sprinkled on a unit
            area of field in one day.

Deposition Directly from Air.  The deposition rate onto the foliage from airborne
radionuclides is defined as:

     d_i = 86,400 V_di χ_i     (air deposition)                              (1b)

where:

     86,400 = dimensional conversion factor (sec/d)

     V_di   = deposition "velocity" of radionuclide i (m/sec)

     χ_i    = annual average air concentration (pCi/m³) of radionuclide i.
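For illustration only (this sketch is not part of the FOOD code itself), Equations (1a)
and (1b) can be evaluated as follows in Python; the numerical inputs are assumed example
values:

     # Sketch of Equations (1a) and (1b); the sample inputs are assumptions.
     SECONDS_PER_DAY = 86_400.0   # dimensional conversion factor (sec/d)

     def deposition_from_irrigation(c_iw_pci_per_liter, irrigation_l_per_m2_day):
         """Equation (1a): d_i = C_iw * I, in pCi/(m2*d)."""
         return c_iw_pci_per_liter * irrigation_l_per_m2_day

     def deposition_from_air(v_d_m_per_s, chi_pci_per_m3):
         """Equation (1b): d_i = 86,400 * V_di * chi_i, in pCi/(m2*d)."""
         return SECONDS_PER_DAY * v_d_m_per_s * chi_pci_per_m3

     # Example: 10 pCi/L irrigation water applied at 5 L/(m2*d), and an airborne
     # nuclide at 0.01 pCi/m3 with a deposition velocity of 0.001 m/s.
     print(deposition_from_irrigation(10.0, 5.0))   # 50 pCi/(m2*d)
     print(deposition_from_air(0.001, 0.01))        # ~0.864 pCi/(m2*d)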


Concentration in Vegetation

The concentration of radioactive material in vegetation resulting from deposition onto
the plant foliage and uptake from the soil of prior depositions on the ground is given
in Equation (2):

     C_iv = d_i [ r T_v (1 - e^(-λ_Ei t_e)) / (Y_v λ_Ei)
                  + B_iv (1 - e^(-λ_i t_b)) / (P λ_i) ] e^(-λ_i t_h)          (2)

where:

     C_iv = concentration of radionuclide i in edible portion of plant v (pCi/kg)

     r    = fraction of deposition retained on plant (dimensionless), taken to be 0.25

     T_v  = factor for the translocation of externally deposited radionuclide to edible
            parts of plants (dimensionless).  For simplicity it is taken to be
            independent of radionuclide and set to 1 for leafy vegetables and fresh
            forage, and 0.1 for all other produce, including grain.  (Reference 2 lists
            values of this parameter which vary with nuclide.)

     λ_i  = radiological decay constant for radionuclide i (d⁻¹)

                                                       204

-------

The second set of terms in the brackets in Equation (3) is omitted if the animal does
not drink contaminated water.  Animal consumption rates normally assumed are given in
Table 1.
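The following Python fragment is a minimal sketch of Equation (2) as reconstructed
above.  The effective removal constant, yield, soil areal density, and the exposure,
buildup, and holdup times are placeholder assumptions; only the strontium soil-to-plant
factor (2.0E-01) is taken from Table 2:

     import math

     # Sketch of Equation (2): foliar interception term plus root-uptake term,
     # both decayed over a holdup time.  Parameter values are placeholders only.
     def veg_concentration(d_i, lam_i, lam_E, r, T_v, Y_v, B_iv, P, t_e, t_b, t_h):
         """Concentration of nuclide i in vegetation, pCi/kg (Equation 2)."""
         foliar = r * T_v * (1.0 - math.exp(-lam_E * t_e)) / (Y_v * lam_E)
         root   = B_iv * (1.0 - math.exp(-lam_i * t_b)) / (P * lam_i)
         return d_i * (foliar + root) * math.exp(-lam_i * t_h)

     # Example: Sr-90 (half-life ~28.8 y) deposited at 50 pCi/(m2*d),
     # r = 0.25, T_v = 1 (leafy vegetables), B_iv = 2.0E-01 (Table 2);
     # a 14-day weathering half-life and the other times are assumed values.
     lam_sr90 = math.log(2.0) / (28.8 * 365.25)          # d^-1
     print(veg_concentration(d_i=50.0, lam_i=lam_sr90, lam_E=math.log(2.0) / 14.0,
                             r=0.25, T_v=1.0, Y_v=2.0, B_iv=0.2, P=224.0,
                             t_e=60.0, t_b=365.0, t_h=1.0))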
-------
The concentration of tritium in vegetation is:

     C_1v = C_1w F_hv / (1/9)                                                (4)

where:

     C_1w = concentration of tritium in the environmental water (pCi/ℓ)

          = concentration in irrigation water (for water release)

          = pCi ³H/m³ air ÷ absolute humidity (ℓ/m³) (for airborne release)

     1/9  = fraction of the mass of water which is hydrogen

     F_hv = fraction of hydrogen in total vegetation (see Table 3).

The concentration of tritium in the animal product is:

     C_1a = [ (C_1F Q_F + C_1aw Q_aw) / (F_hF Q_F + (1/9) Q_aw) ] F_ha        (5)

where:

     C_1F  = concentration of tritium in feed or forage (pCi/kg), calculated by
             Equation (4) above, where now F_hv is replaced by F_hF = F_hv (grain),
             the fraction of hydrogen in animal feed

     F_ha  = fraction of hydrogen in animal product (see Table 3)

     C_1aw = concentration of tritium in animal drinking water (set to 0 unless
             there is a release to water)

     Q_F, Q_aw = animal consumption rates of feed and water (see Table 1).
Similarly, the concentration of carbon-14 in vegetation is:

     C_3v = C_3w F_cv                                                        (6)

where:

     C_3w = concentration of carbon-14 in the environmental medium ÷ carbon
            concentration in that medium (pCi ¹⁴C/kg carbon)

          = pCi ¹⁴C/ℓ ÷ carbon concentration in irrigation water (kg/ℓ) for
            water release

          = pCi ¹⁴C/m³ ÷ carbon concentration in air (kg/m³) for air release

     F_cv = fraction of carbon in total vegetation.

The concentration of carbon-14 in the animal product is:

     C_3a = [ (C_3F Q_F + C_3aw Q_aw) / (F_cF Q_F + F_cw Q_aw) ] F_ca         (7)

For an air release C_3aw = 0, and since F_cw is very small compared to F_cF,
Equation (7) reduces to:

     C_3a = C_3F (F_ca / F_cF)                                               (8)

The subscript 1 refers to tritium, which is the first nuclide in the isotope
listing; similarly, the subscript 3 in Equation (6) refers to ¹⁴C.
Dose Calculations for Man

The dose, R_vr, in mrem to a person consuming vegetation is:

     R_vr = C_iv U_v D_ir                                                    (9)

Similarly, the dose from consuming a particular animal product is:

     R_ar = C_ia U_a D_ir                                                   (10)

where:

     U_v, U_a = annual consumption of contaminated vegetable or animal products
                in kg

     D_ir     = a factor which converts intake in pCi of nuclide i to dose in
                mrem to organ r.

Normally the exposure mode is assumed to be a 1-year chronic ingestion at a
uniform rate.  Dose factors are available for calculating the dose during the
year of ingestion or for calculating a 50-year dose commitment.  Additional
factors are also available for 1- and 50-year doses from single acute intakes
and for ages other than adults.  However, these have not been entered into the
routine program.  The dose and dose commitment factors employed have been
derived from the ingestion and inhalation models given in ICRP Publication 2
(Reference 5).
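A minimal sketch of Equations (9) and (10) in Python, summed over several nuclides in a
single food; the intake and dose-factor values are assumptions, not values from the dose
factor files used by the program:

     # Sketch of Equations (9) and (10): dose (mrem) to organ r from annual
     # consumption of a contaminated food.  Example numbers are assumptions.
     def ingestion_dose(conc_pci_per_kg, annual_intake_kg, dose_factors_mrem_per_pci):
         """Sum of C_i * U * D_ir over the nuclides present (Equations 9 and 10)."""
         return sum(c * annual_intake_kg * d
                    for c, d in zip(conc_pci_per_kg, dose_factors_mrem_per_pci))

     # Example: two nuclides in leafy vegetables eaten at 30 kg/yr, with
     # hypothetical dose factors for the organ of interest.
     print(ingestion_dose([120.0, 5.0], 30.0, [1.0e-4, 6.0e-3]))   # mrem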

Dose Calculations for Biota

Since the program output lists the radionuclide concentrations in the final product
from the consumption by animals of both contaminated feed and drinking water, the
internal radiation dose to animals can be estimated in a manner analogous to
calculation of internal dose to man.  If the assumption were made that the
concentration of the radionuclides in meat were similar to the average concentration
in the whole animal, then the total body dose would be similar to that in the meat.
The following equation can be used to calculate the dose rate in mrad/yr to an animal
containing a constant concentration of a radionuclide:

     Dose rate (mrad/yr) = 18.7 ε_ia C_ia                                   (11)

where:

     ε_ia = effective absorbed energy of nuclide i in the animal (MeV/dis)

     18.7 = conversion factor calculated as follows:

            (1.17 x 10⁶ dis·yr⁻¹·pCi⁻¹)(1.6 x 10⁻⁵ g·mrad·MeV⁻¹)
                 = 18.7 dis·g·mrad/(pCi·yr·MeV)

     C_ia = concentration of nuclide i in the animal (pCi/g).
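The conversion factor and Equation (11) can be checked numerically as in the sketch
below; the effective energy and animal concentration are assumed example values:

     # Check of the 18.7 conversion factor and a sample use of Equation (11).
     DIS_PER_YR_PER_PCI = 1.17e6      # dis/(yr*pCi)
     MRAD_G_PER_MEV     = 1.6e-5      # g*mrad/MeV
     CONVERSION         = DIS_PER_YR_PER_PCI * MRAD_G_PER_MEV   # ~18.7

     def animal_dose_rate_mrad_per_yr(eff_energy_mev_per_dis, conc_pci_per_g):
         """Dose rate to the whole animal, mrad/yr (Equation 11)."""
         return CONVERSION * eff_energy_mev_per_dis * conc_pci_per_g

     print(round(CONVERSION, 1))                      # 18.7
     print(animal_dose_rate_mrad_per_yr(0.2, 5.0))    # assumed example inputs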
                                                       206

-------
                    TABLE 1.  Consumption Rates of Feed and Water by Farm Animals

                                       Feed or Forage, Q_F         Water, Q_aw
                                            (kg/day)                 (ℓ/day)

     Milk Cow                           55 (fresh forage)               60
     Beef Cattle                        68 (dry feed)                   50
     Pig                                4.2 (dry feed)                  10
     Poultry (chickens)                 0.12 (dry feed)                  0.3


        TABLE 2.  Plant Concentration Factors and Animal Product Transfer Coefficients

  Element   Plant/Soil      Egg/Feed    Milk/Grass    Beef/Feed    Pork/Feed   Poultry/Feed
          (Dimensionless)   (day/kg)     (day/ℓ)      (day/kg)     (day/kg)     (day/kg)
               B_iv

    Be       4.7E-04       2.0E-02      2.0E-06      8.0E-04      1.0E-02      4.0E-01
    N        7.5E+00       9.9E-04*     1.1E-02      9.9E-04      9.9E-04      9.9E-04
    F        2.0E-02       9.9E-04      7.0E-03      2.0E-02      9.0E-02      9.9E-04
    Na       5.0E-02       2.0E-01      4.0E-02      5.0E-02      1.0E-01      1.0E-02
    P        5.0E+01       1.0E+01      1.2E-02      5.0E-02      5.4E-01      1.9E-01
    Ca       4.0E-02       1.0E+00      8.0E-03      3.3E-03      3.3E-03      3.3E-03
    Sc       1.1E-03       9.9E-04      2.5E-06      6.0E-03      1.0E-02      4.0E-03
    Cr       2.5E-04       9.9E-04      1.1E-03      9.9E-04      9.9E-04      9.9E-04
    Mn       3.0E-02       1.0E-01      1.0E-04      5.0E-03      2.0E-02      1.1E-01
    Fe       4.0E-04       1.0E-01      6.0E-04      2.0E-02      5.0E-03      1.0E-03
    Co       9.4E-03       1.0E-01      5.0E-04      1.0E-03      5.0E-03      1.0E-03
    Ni       1.9E-02       1.0E-01      3.4E-03      1.0E-03      5.0E-03      1.0E-03
    Cu       1.3E-01       2.0E-01      7.0E-03      1.0E-02      1.5E-02      2.0E-03
    Zn       4.0E-01       4.0E-03      6.0E-03      5.0E-02      1.4E-01      2.0E-03
    Se       1.3E+00       2.1E+00      2.3E-02      1.0E+00      4.5E-01      3.7E-01
    Br       7.6E-01       1.6E+00      2.5E-02      2.0E-02      9.0E-02      4.0E-03
    Rb       1.3E-01       3.0E+00      1.0E-02      1.5E-01      2.0E-01      2.0E+00
    Sr       2.0E-01       4.0E-01      1.5E-03      3.0E-04      7.3E-03      9.0E-04
    Y        2.5E-03       5.0E-04      5.0E-06      5.0E-03      5.0E-03      5.0E-04
    Zr       1.7E-04       1.2E-03      2.5E-06      5.0E-04      1.0E-03      1.0E-04
    Nb       9.4E-03       1.2E-03      1.2E-03      5.0E-04      1.0E-03      1.0E-04
    Mo       1.3E-01       4.0E-01      4.0E-03      1.0E-02      2.0E-02      2.0E-03
    Tc       2.5E-01       9.9E-04      1.2E-02      9.9E-04      9.9E-04      9.9E-04
    Ru       1.0E-02       4.0E-03      5.0E-07      1.0E-03      5.0E-03      3.0E-04
    Rh       1.3E+01       4.0E-03      5.0E-03      1.0E-03      5.0E-03      3.0E-04
    Pd       5.0E+00       4.0E-03      5.0E-03      1.0E-03      5.0E-03      3.0E-04
    Ag       1.5E-01       9.9E-04      2.5E-02      9.9E-04      9.9E-04      9.9E-04
    Cd       3.0E-01       9.9E-04      6.2E-05      1.6E-02      1.6E-02      1.6E-02
    Sn       2.5E-03       9.9E-04      1.3E-03      9.9E-04      9.9E-04      9.9E-04
    Sb       1.1E-02       7.0E-02      7.5E-04      3.0E-03      7.0E-03      6.0E-03
    Te       1.3E+00       4.0E-01      5.0E-04      5.0E-02      1.0E-02      1.0E-02
    I        2.0E-02       1.6E+00      1.0E-02      2.0E-02      9.0E-02      4.0E-03
    Cs       2.0E-03       6.0E-01      5.0E-03      3.0E-02      2.6E-01      4.5E+00
    Ba       5.0E-03       4.0E-01      4.0E-04      5.0E-04      1.0E-02      5.0E-04
    La       2.5E-03       2.0E-03      2.5E-06      5.0E-03      5.0E-03      4.0E-03
    Ce       5.0E-04       3.0E-03      1.0E-05      1.0E-03      5.0E-03      6.0E-04
    Pr       2.5E-03       4.0E-03      2.5E-06      5.0E-03      5.0E-03      1.0E-03
    Nd       2.4E-03       2.0E-04      2.5E-06      5.0E-03      5.0E-03      4.0E-03
    Pm       2.5E-03       7.0E-03      2.5E-06      5.0E-03      5.0E-03      1.0E-04
    Sm       2.5E-03       7.0E-03      2.5E-06      5.0E-03      5.0E-03      4.0E-03
    Eu       2.5E-03       7.0E-03      2.5E-06      5.0E-03      5.0E-03      4.0E-03
    Tb       2.6E-03       7.0E-03      2.5E-06      5.0E-03      5.0E-03      4.0E-03
    Ho       2.6E-03       7.0E-03      2.5E-06      5.0E-03      5.0E-03      4.0E-03
    W        1.8E-02       9.9E-04      2.5E-04      9.9E-04      9.9E-04      9.9E-04
    Pb       6.8E-02       9.9E-04      1.0E-05      9.9E-04      9.9E-04      9.9E-04
    Bi       1.5E-01       9.9E-04      2.5E-04      9.9E-04      9.9E-04      9.9E-04
    Po       9.0E-03       9.9E-04      1.2E-04      9.9E-04      9.9E-04      9.9E-04
    Ra       1.4E-03       2.0E-05      2.0E-04      9.9E-04      9.9E-04      9.9E-04
    Ac       2.5E-03       2.0E-03      2.5E-06      5.0E-03      1.0E-02      4.0E-03
    Th       4.2E-03       2.0E-03      2.5E-06      5.0E-03      1.0E-02      4.0E-03
    Pa       2.5E-03       2.0E-03      2.5E-06      5.0E-03      1.0E-02      4.0E-03
    U        2.5E-03       3.4E-01      6.0E-04      5.0E-03      6.0E-04      1.2E-03
    Np       2.5E-03       2.0E-03      2.5E-06      5.0E-03      1.0E-02      4.0E-03
    Pu       2.5E-04       2.0E-03      2.5E-08      5.0E-03      1.0E-02      4.0E-03
    Am       2.5E-04       2.0E-03      2.5E-06      5.0E-03      1.0E-02      4.0E-03
    Cm       2.5E-03       2.0E-03      2.5E-06      5.0E-03      1.0E-02      4.0E-03
    Cf       2.5E-03       2.0E-03      7.5E-07      5.0E-03      1.0E-02      4.0E-03

    *  Where value unknown, a default value of 9.9E-04 was used.

                                         207

-------
                TABLE 3.  Calculation of Fractions of Hydrogen and Carbon in
                          Environmental Media, Vegetation, and Animal Products

                                      Water    Carbon   Hydrogen   Carbon(a)   Hydrogen(b)
                                                (dry)     (dry)      (wet)        (wet)
     Food or Fodder                    f_w       f_c       f_h    F_cv, F_ca   F_hv, F_ha

     Fresh Fruits, Vegetables
       and Grass                      0.80      0.45      0.062      0.090        0.10
     Grain and Stored Animal Feed     0.12      0.45      0.062      0.40         0.068
     Eggs                             0.75      0.60      0.092      0.15         0.11
     Milk                             0.88      0.58      0.083      0.070        0.11
     Beef                             0.60      0.60      0.094      0.24         0.10
     Pork                             0.50      0.66      0.10       0.33         0.11
     Poultry                          0.70      0.67      0.087      0.20         0.10

     Absolute humidity ...................................  0.008 ℓ/m³
     Concentration of carbon in water ....................  2.0 x 10⁻⁵ kg/ℓ (c)
     Concentration of carbon in air ......................  1.6 x 10⁻⁴ kg/m³ (d)

     (a)  F_cv or F_ca = f_c (1 - f_w)
     (b)  F_hv or F_ha = f_w/9 + f_h (1 - f_w)
     (c)  Assumes a typical bicarbonate concentration of 100 mg/ℓ.
     (d)  Assumes a typical atmospheric CO2 concentration of 320 ppmv.
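The footnote relations of Table 3 can be verified directly; the short Python sketch
below recomputes the wet-basis fractions for the Beef row:

     # Sketch of Table 3 footnotes (a) and (b): wet-basis carbon and hydrogen
     # fractions from the dry-basis fractions and the water content.
     def wet_carbon_fraction(f_c_dry, f_w):
         """F_cv or F_ca = f_c * (1 - f_w), footnote (a)."""
         return f_c_dry * (1.0 - f_w)

     def wet_hydrogen_fraction(f_h_dry, f_w):
         """F_hv or F_ha = f_w/9 + f_h * (1 - f_w), footnote (b)."""
         return f_w / 9.0 + f_h_dry * (1.0 - f_w)

     # Check against the Beef row of Table 3 (f_w = 0.60, f_c = 0.60, f_h = 0.094):
     print(round(wet_carbon_fraction(0.60, 0.60), 2))    # 0.24
     print(round(wet_hydrogen_fraction(0.094, 0.60), 2)) # 0.10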
                   References

1.   Soldat, J.  K.,  et al.,  Models and Computer Codes
    for Evaluating  Environmental  Radiation  Doses,
    USAEC Report BNWL-1754, Pacific Northwest Labora-
    tory, Richland, WA, February  1974.

2.   Fletcher, J. F. and W.  L. Dotson (compilers),
    HERMES - Digital Computer Code for Estimating
    Regional Radiological  Effects from the  Nuclear
    Power Industry, USAEC  Report  HEDL-TME-71-168,
    Hanford Engineering Development Laboratory,
    Richland, WA,  1971.

3.   Mraz, F. R., et al., "Fission Product Metabolism
    in Hens and Transference to Eggs," Health Physics,
    no. 10, pp. 77-782, 1964.

4.   Ng, Y. C.,  et al., Prediction of the Maximum
    Dosage to Man from the Fallout of Nuclear
    Devices - IV,  Handbook for Estimating the Maximum
    Internal Dose from Radionuclides Released to the
    Biosphere,  USAEC Report, UCRL-50163, Lawrence
    Radiation Laboratory,  University of California,
    Livermore,  CA,  1968.

5.   International  Commission on Radiological  Protec-
    tion, Report of ICRP Committee II on Permissible
    Dose for Internal Radiation,  ICRP Publication  2,
    Pergamon Press, New York, 1959.

6.   Annenkov, I. K., et al., "The  Radiobiology and
    Radioecology of Farm Animals," USAEC Report
    AEC-tr-7523, April  1974.
                                                     208

-------
                                      AIR QUALITY AND INTRA-URBAN MORTALITY

                                                 John J. Gregor
                                  Center for the Study of Environmental Policy
                                        The Pennsylvania State University
                                          University Park, Pennsylvania
     The effect of air pollution on white mortality
for Allegheny County,  Pennsylvania is examined for the
years 1968-1972 through the use of eighteen weighted
regressions.   The mortality rates are characterized by:
age (less than 45, 45-64, and 65 and over); sex (male
and female);  and, cause grouping (overall, pollution-
related, and all other).  Air pollution is demonstrated
to have a greater effect on men than women in the two
younger age groups and approximately the same effect
on both sexes for the 65 and over age group.

     The possibility that extremely high levels of air
pollution can shorten lives and affect the quality of
human life was painfully brought to the attention of
individuals in the United States by the 1948 episode
of Donora, Pennsylvania.  Today few people would argue
that the existence of pollution concentrations of
Donora's magnitude do not have significant health
effects.  There is substantially less agreement, how-
ever, on the significance of the effects of smaller
levels of air pollution on health.
                   Background

     The health effects of air pollution have been
given increased attention in recent years.  A growing;
number of experimental, episodic, and epidemiological
studies have shown inverse relationships between air
quality and various measures of health.   This paper
makes use of the insights developed by these earlier
works in an attempt to quantify the relationship
between ambient air quality and intra-urban mortality
differentials.   Some of the experimental studies with
animals have shown the existence of synergistic
effects of sulfur dioxide (SO2) and various types of
particulates (4, 5, 10 and 21).  These,  and other ex-
perimental studies have shown changes in vital functions
and mortality in animals, but only changes in pulmonary
functions could be studied in man.  For this reason,
experimental studies have shed only limited light on
the relationship between levels of air pollution and
mortality in man.
     Episodic studies conducted by Schrenk17, Scott18, and Wilkins23 have shown
the existence of direct relationships between high levels of air pollution and
mortality.  Wilkins23 estimated that the five-day
London fog of December 1952, caused at least 4,000
deaths.  Similar, but not so drastic, estimates of
excess mortality were also presented by Schrenk17 con-
cerning the 1948 Donora episode and by Scott18 for the
1962 London episode.   As can be seen from these
episodic studies, they have the advantage of being able
to study mortality but are hindered in that they deal
only with specific episodes of abnormally high
pollution.   Hence, such studies are not applicable to
the everyday pollution levels faced by individuals in
our urban areas.  As  Anderson notes:

     Although the deleterious effects of acute
     exposures to air pollution are well estab-
     lished, it is not possible to extrapolate
     from these data  to the low levels of air
     pollution to which persons are exposed in

     a modern urban society (Anderson1, p. 585).

     For these reasons, epidemiological studies have
grown in popularity since, in theory, they allow one
to isolate the effects of lower levels of pollution on
mortality.  Most of these studies have also shown an
inverse relationship between air quality and mortality.
The procedure usually consists of calculating different
mortality rates or partial correlation coefficients for
populations exposed to different air quality conditions,
after controlling for socioeconomic class by dividing
the area under study into four or five socioeconomic

groups (Griffith9, Winkelstein24, and Zeidberg25).
     There are, however, other factors which affect
mortality which are correlated with air pollution.

Freeman7, for example, has shown that air quality
levels of white neighborhoods are higher than those of
non-white neighborhoods.  These relationships have led
Lave and Seskin to note:
     If the explanatory variables were orthogonal
     to each other, the inability to get measures
     on all variables would not be important.  If
     they were independent, one could find the
     effect of any variable on the mortality rate
     by a univariate regression.  However, ortho-
     gonality is not a reasonable assumption . . .
     This colinearity among explanatory variables
     means that univariate regression, or simple
     cross tabulations (which constitutes the
     preponderance of evidence), are not likely
     to produce results that one could interpret.
     (Lave and Seskin14 p. 295).
     In an attempt to circumvent these limitations,
Lave and Seskin estimated their own relationships using
mortality data for 117 Standard Metropolitan Statis-
tical Areas (SMSA's) for the period 1959-1961.  They
used multiple regressions to explain the variance in 35
different mortality rates (characterized by age:  under
28 days, under 1 year, 14 years and younger, 15-44,
45-64, and 65 and older; by race:  white and non-white;
and by sex:  male and female).  Their independent vari-
ables were minimum sulfates, mean particulates, minimum
particulates, percent poor, population per square mile,
percent of population 65 or older, and percent non-
white, not all of which appear in any of their
specific linear regressions since only the "best"
results were presented.  The results of their linear
regressions show a predominantly inverse relation-
ship between mortality and ambient air quality.
     Their analysis, however, was severely restricted
by data limitations, the most important of these for
the purpose of isolating ambient air quality's effects
on mortality being the use of only one monitoring
station's readings for an entire SMSA.  As they note,
"it is a heroic assumption to regard these figures as
representative of an entire SMSA in making comparisons

across areas" (Lave and Seskin14, p. 286).
     It should also be noted that the disaggregated
mortality rates they used are probably not accurate
since as Lave and Seskin note:
     These age specific death rates were derived
     by dividing the number of people who died
     by the total population.  If the age dis-
     tribution of people differs across cities,
     these approximate death rates will not
     even be proportional to the true rates.
       (Lave and Seskin14, p. 318).

The errors in  the measurement of these dependent
                                                        209

-------
variables could be a reason for the low coefficients
of determination for their disaggregated (age, age-sex-
specific) mortality rates.
     Even with these limitations, Lave and Seskin have
significantly advanced our understanding of the epidemio-
logical association between ambient air quality and
mortality.  They have shown that after
controlling for other factors which may affect mor-
tality, air quality does exhibit a significant associ-
ation with mortality.  Their experiments with alterna-
tive specifications have given results not signifi-
cantly different from the general linear model (Lave
and Seskin).  Whether these associations are causal
is still an unsettled question.  The experimental and
episodic works mentioned earlier, however, strengthen
the arguments for the existence of at least an
aggravation effect.
            Objectives and Significance
     Recognizing the probable existence of ambient air
quality's aggravation effects on mortality, it is sur-
prising that only a few attempts have been made at iso-
lating these effects in the epidemiological laboratory.
If the energy crisis continues as appears likely, this
isolation will become increasingly important as primary
air quality standards come under closer scrutiny; since
these standards are the fundamental impediments to the
use of alternative energy sources (coal and high-sulfur
oil).

     As noted earlier, with the exception of Lave and
Seskin's work, few attempts have been made to control
for the collinearity among independent variables in
explaining the variance in mortality rates for areas
of different air quality.  However, due to data con-
straints, their results are subject to some debate,
especially for the disaggregated mortality rates.
     In an attempt to overcome these data constraints
the current analysis examines long-term (1968-1972)
mortality functions for small groupings of census
tracts within Allegheny County, Pennsylvania.
Allegheny County is particularly well suited for this
type of analysis since it has neighborhoods of both
good and bad air quality.  Moreover, the relatively
rich micro-level county data base (i.e., census and air
quality data) enabled the circumvention of some of the
problems that beset Lave and Seskin.  Specifically,
the existence of a multitude of monitoring stations'
readings for air quality removes reliance on a single
monitoring station.  The relatively accurate infor-
mation of population at risk by age, sex, and race
from the 1970 census ensures that the mortality rates
calculated will be representative of their population
even in disaggregated form.
     By approaching the problem of isolating air
quality's effect on mortality at the intra-urban level
through the use of multiple regression analysis, it was
possible to control for a majority of "other" factors
which were believed to influence an individual's
probability of death.  This procedure allowed isolation
of air quality's influence on mortality in such a way
that relatively accurate shift parameters in the
various mortality functions could be estimated.

                The Mortality Model

Dependent Variables
     Any attempt to provide a more reliable estimate
of the effects of air quality on mortality, via intra-
urban, cross-sectional mortality analysis, must recog-
nize that other factors besides air quality influence
the risks of death.  Specifically, since many variables
are collinear to air quality, it is necessary to isolate
the effect of ambient air quality on mortality.  The
best procedure for controlling some of the most
important factors is by using age-sex-race-cause-
specific mortality rates.   The  following narrative pro-
vides a brief explanation  of  mortality differentials
according  to each classification:
     Age.   In general,  children are not subject to the
same hazards as adults.  This result coupled with a
decreasing  ability of the  body  to  protect itself after
a certain  age leads  to  the expectation of differentials
in mortality with respect  to  age.
     Sex.   Differentials in the risk of death can also
be expected on the basis of sex.   Although females do
experience  lower mortality rates, whether or not these
differentials are biological  or social is still an
unsettled  question.
     Race.  Differential mortality is also exhibited by
different  races within  any age-sex group.  With the
exception  of those causes  of  death related to inherent
genetic deficiencies such as sickle cell anemia, this
result is  probably due  to  social class differences.
     Cause.  Finally, it must be recognized that air
quality would be expected  only  to  accentuate the risks
of death from certain causes  (bronchitis, emphysema,
asthma, etc.).  Therefore,  deaths  should be separated
into various causes  to  enable the  isolation of air
quality's  influence  on  mortality.
     Thus,  by controlling  for age,  sex and race,  as
well as examining cause-specific mortality rates, it  is
possible partially to isolate the  effects of air qual-
ity on mortality.  This analysis was conducted using
five-year  average age-sex-cause-specific mortality
rates for  the white  population  as  the dependent vari-
ables in the multiple regression analysis.   Another
advantage  of using these specific  mortality rates  (par-
ticularly  age and sex)  as  dependent variables is that
such variables provide  valuable indicators of the  pri-
mary etiological effects of air pollution.  Specifically,
by isolating housewives the analysis avoids complica-
tions associated with work-related pollution exposures.

     Individual five-year  average  mortality rates  were
calculated by distributing the 87,349 decedents of
Allegheny  County during 1968-1972  to their correspond-
ing 1970 census tract.  This  procedure resulted in
occasional  census tract groupings  since place of resi-
dence was  originally coded based on 1960 census tracts.
Following  this step, all groups were deleted for which
complete census information was not available due  to
confidentiality suppressions.   This deletion, in
addition to the exclusion  of  individuals with no  coded
residence,  reduced the file to 84,034 (approximately
96 percent  of the original file).   The remaining census
tracts were then reaggregated in order to obtain a
minimum of  300 deaths per  area  resulting in 175 group-
ings.  It  should be  recognized  that those groupings
were constructed to minimize  the variance of certain
key socioeconomic variables (i.e., family income,
education,  percent white)  while maintaining the con-
tiguous nature of each  group.

     The reason for  reducing  the number of areas for
which mortality rates are  calculated as well as using
a five-year average  mortality rate is that such pro-
cedures will help reduce the  variance caused in these
rates by small and differing  sample sizes (population
of the census tracts).  This  is an important consid-
eration since we are concerned  with the stability of
these mortality rates and  not their absolute size.
Consider,  for example,  a census tract with twenty
individuals, one of which dies during our five-year
period.  In this case the  five-year average mortality
rate will  be 1/100 while the  yearly rates will be
highly unstable ranging from  1/20  to zero (undefined).
This non-constant variance is also the motive for using
weighted multiple regression  analysis.  Since these
mortality rates are a proportion (or a proportion
multiplied by a constant), the  variance of the observed
                                                       210

-------
mortality rates will be σ²/N, where σ² is the variance
of a proportion and N is the sample size.  Assuming
that σ² is approximately the same for all samples for
a given age group, the variance of the observed
mortality rates is inversely proportional to the
sample size.  In the weighted regression, instead of
minimizing the sum of squares of errors, ΣE², where E
is the difference between observed mortality rates and
estimated mortality rates, the objective is to mini-
mize ΣE_i²/(σ²/N_i), which, assuming σ² is constant, will
be minimized when ΣE_i²N_i is minimized, or when
Σ(E_i N_i^(1/2))² is minimized.  In essence, this means the
errors should be weighted by a factor equal to the
square root of the sample size.  (See Smith, W.20 and
also Draper and Smith6.)
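A minimal sketch of this weighting scheme is given below, using fabricated data rather
than the Allegheny County groupings; scaling each observation by the square root of its
population at risk and applying ordinary least squares is equivalent to the weighted
regression described above:

     import numpy as np

     # Weighted least squares via sqrt(N) row scaling; the data are invented
     # placeholders, not the Allegheny County census tract groupings.
     def weighted_least_squares(X, y, n_at_risk):
         """Return WLS coefficients (intercept first) by scaling rows with sqrt(N_i)."""
         w = np.sqrt(n_at_risk)
         Xw = np.column_stack([np.ones_like(y), X]) * w[:, None]
         yw = y * w
         beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
         return beta

     rng = np.random.default_rng(0)
     X = rng.normal(size=(175, 5))            # five independent variables
     N = rng.integers(2_000, 40_000, 175)     # population at risk per grouping
     y = 1000.0 + X @ np.array([-10.0, 0.05, 20.0, -5.0, 0.5]) \
         + rng.normal(0.0, 50.0, 175) / np.sqrt(N / 10_000.0)
     print(weighted_least_squares(X, y, N))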

Independent Variables

     The rationale for  using multiple regression
analysis is that there  exist important factors other
than age, sex, race,and air quality which  influence an
individual's probability of death.  Some of these
factors may be collinear with other independent vari-
ables.  These variables include income, education,
social status, occupation, residence, housing, climate,
availability and access to quality medical care,
quantity and quality of food consumed, tobacco consump-

tion, sanitation, and marital status (Kosa13, Shryock
and Siegel19).  Unfortunately, measurement or observa-
 tion of many of these  factors is  extremely difficult.
Certain proxies can, however, be  used while estimates
 can be made of other variables.   The following
 independent variables  are used in the model:

     Percentage of Adult White Population  With High
 School Education.  "Higher levels of education may be
associated with relatively more medical care at preven-

 tive stages"  (Auster,  et al.,  p. 415).  It may also be
 associated with the possession of better knowledge of
 preventive  care and willingness to seek and follow a
 doctor's advice.  In general, people with  higher
 education also tend to  have higher incomes (correlation
 coefficient of  .83 for  Allegheny  County) and thus  con-
 sume higher quality goods which should favorably affect
 their health.  Due to  this high colinearity only
 education  (as opposed  to education and income) is  used.
Negative associations between education and mortality
have been shown by Kitagawa and Hauser12, Fuchs8, and
Auster, et al.2  The estimates of this variable for
 each of the 175 Allegheny County  groupings were taken
 from 1970 census data.

     Total Particulates Multiplied by Sulfur Dioxide.
These variables represent two of  the most  available air
 quality measurements.   Although each variable  was
originally employed separately, the most consistent
 results were obtained when the measurements interacted
multiplicatively.  In  essence, this procedure  allowed
 for the possible synergistic effects mentioned previous-
 ly.  The five-year average for each of these variables
was calculated from monitoring data obtained from the
Allegheny County Department of Public Health.   Missing
values for each monitoring station were assigned the
mean value for the years when such observations were
available.  These monitoring stations were located on a
map by their USGS coordinates and, using a computerized
mapping program, estimates were made for the remaining
points in Allegheny County using  "standard" mapping
procedures.  Specifically, the calculation method  was
a weighted average of slopes and values of nearby  data
points developed from a gravity-type model and modified
to consider distance and direction.
     The resulting interpolated values were  then
plotted on a map of Allegheny County to facilitate  the
estimation of average values of SO2 and total partic-

ulates for each census tract grouping.  A clear overlay
was placed over this computer-generated map  and
weighted averages of the air quality variables were
calculated for each census tract grouping.
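As a simplified stand-in for the gravity-type mapping procedure described above (the
actual procedure also used slopes and direction), an inverse-distance weighting of
station readings can be sketched as follows; the station coordinates and readings are
invented for illustration:

     import numpy as np

     # Simplified inverse-distance weighting of monitoring-station values; this is
     # an illustrative substitute for the paper's gravity-type mapping procedure.
     def idw(stations_xy, values, point_xy, power=2.0):
         """Inverse-distance-weighted estimate at point_xy."""
         d = np.linalg.norm(stations_xy - point_xy, axis=1)
         if np.any(d == 0.0):                  # the point falls exactly on a station
             return float(values[np.argmin(d)])
         w = 1.0 / d**power
         return float(np.sum(w * values) / np.sum(w))

     stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # coordinates (km)
     tsp_so2  = np.array([8000.0, 3000.0, 5000.0])                 # TSP x SO2 readings
     print(idw(stations, tsp_so2, np.array([4.0, 4.0])))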

     Number of Days With .1 Inch or More Precipi-
tation and Number of Days With a Maximum Temperature
Less Than 32 Degrees.  These variables are the two most
consistent and significant of the fourteen climatologi-
cal variables originally considered.  Although climate
has been recognized as an important factor in mortality

(Hirsch11, Petersen16 and Berke & Wilson3),  the
literature on the cause-effect mechanism has not
developed to the level where it can be a priori deter-
mined which climatological variables are of major
importance.  The average values for these variables
were estimated using the same procedure outlined for
total particulates and SO2.
      Population Per .156 Residential Acres.
Proximity to other individuals can influence exposure
to various diseases.  This variable was calculated by
dividing the total population of the area by its
residential acreage.
                      Results
     The results of applying weighted regression
analysis to the white male and female five-year average
mortality rates for Allegheny County are presented in
Table 1.  It should be recognized that the inclusion of
additional independent variables for each mortality
function did not significantly increase the equations'
overall explanatory power.  The dichotomy of mortality
rates into overall, pollution-related and all other is
based on the a priori explanations previously dis-
cussed.  Specifically, the contention is that air
quality will only influence the probability of death
from certain causes (e.g., respiratory diseases) while
having no discernable effect on others (e.g., motor
vehicle accidents).
     The interactive air quality term (Total Particu-
lates Multiplied by SO2) is consistently positive and

significant for the pollution-related causes of death.
Moreover, the size of the coefficient increases with age,
as expected, although it differs little between sexes
for the 65 and above age group.  For the less than 45
and 45-64 age groups the coefficient is at least twice
as large for males as females, suggesting the existence
of higher exposures at work.

     The air quality term is negative only for the less
than 45 overall and the all other causes mortality
rates.  Probable explanations of this result are that:
1) individuals in this age group die from non-pollution
related causes, 2) the ability of the body to withstand
the influence of pollution on health decreases with age,
and, 3) there exists a cumulative-type influence of
pollution on mortality.  Since the coefficient in the
overall mortality function will be, approximately, the sum
of the pollution-related and all other causes coeffi-
cients, the relatively large negative sign for all
other causes in the less than 45 age group also causes
the overall coefficient to be negative.

     The signs of the remaining coefficients are pre-
dominantly significant and of the expected sign where
a priori values could be expected (percentage of adult
white population with a high school education and
population per  .156 residential acres).  For the number
of days that precipitation exceeds  .1 inch the sign  is
continually positive and significant while for number
of days with a maximum temperature less than 32 degrees
the sign is predominantly negative  and significant.
                                                        211

-------
                              TABLE 1.  1968-1972 White Male and Female
                               Age-Cause-Specific Mortality Functions*,**

                               PART A - OVERALL MORTALITY FUNCTIONS

                                    Less than 45           45-64              65 and over
Variable                          Male      Female      Male      Female      Male      Female

Percent of Adult White          -1.449    -0.639    -14.941    -5.690    -39.621   -10.752
Population with High School     (5.206)   (3.041)    (7.121)   (5.490)    (6.238)   (2.047)
Education (1)

Total Particulates              -0.001    -0.001      0.045     0.026      0.112     0.110
Multiplied by SO2 (2)           (0.327)   (0.378)    (2.257)   (2.779)    (1.757)   (2.120)

Number of Days                   2.824     1.713     19.455     6.996    111.159    60.013
Precipitation > .1" (3)         (5.371)   (4.527)    (4.354)   (3.152)    (7.121)   (4.699)

Number of Days Maximum          -1.471    -1.462     -0.729     4.246    -49.172   -27.657
Temperature < 32 degrees (4)    (1.368)   (1.935)    (0.079)   (0.943)    (1.553)   (1.070)

Population per .156              0.023     0.012      0.647     0.205      0.283    -0.365
Residential Acres (5)           (0.773)   (0.555)    (2.733)   (1.958)    (0.471)   (0.779)

Constant (6)                  3056.927  2168.157  16862.343  7492.812  29326.868 31072.048
                                (4.620)   (4.622)    (4.830)   (4.109)    (3.534)   (4.467)

R2                                .281      .185       .404      .404       .539      .491

   *  All mortality rates (dependent variables) are in deaths per 100,000.
  **  The values in parentheses are the corresponding student t values.

                 PART B - POLLUTION RELATED CAUSE-SPECIFIC MORTALITY FUNCTIONS***

                                    Less than 45           45-64              65 and over
Variable                          Male      Female      Male      Female      Male      Female

Percent of Adult White          -0.406    -0.059     -9.706    -4.351    -27.284    -8.494
Population with High School     (3.721)   (0.651)    (6.431)   (6.639)    (5.476)   (2.123)
Education (1)

Total Particulates               0.002     0.001      0.031     0.014      0.095     0.097
Multiplied by SO2 (2)           (1.656)   (1.515)    (2.180)   (2.395)    (1.906)   (2.452)

Number of Days                   0.271     0.154     13.494     3.991     77.549    43.159
Precipitation > .1" (3)         (1.313)   (0.961)    (4.198)   (2.844)    (6.333)   (4.436)

Number of Days Maximum           0.606    -0.311     -2.702     2.777    -30.882   -20.563
Temperature < 32 degrees (4)    (1.439)   (0.971)    (0.406)   (0.975)    (1.243)   (1.044)

Population per .156              0.004     0.014      0.338     0.138      0.015    -0.301
Residential Acres (5)           (0.351)   (1.574)    (1.983)   (2.082)    (0.032)   (0.843)

Constant (6)                   281.944   697.898  11738.581  3815.604  19207.563 22439.555
                                (1.087)   (3.514)    (4.674)   (3.309)    (2.956)   (4.241)

R2                                .156      .019       .348      .357       .498      .465

 ***  These mortality rates are based on total deaths from tuberculosis of the respiratory
      system, malignant neoplasms of buccal cavity, pharynx and respiratory system, major
      cardiovascular disease, acute and chronic bronchitis and bronchiolitis, emphysema
      and asthma.
                                                       212

-------
                         PART C - MORTALITY FUNCTIONS - ALL OTHER CAUSES

                                    Less than 45           45-64              65 and over
Variable                          Male      Female      Male      Female      Male      Female

Percent of Adult White          -1.043    -0.581     -5.236    -1.339    -12.336    -2.258
Population with High School     (4.353)   (3.013)    (5.729)   (2.289)    (5.471)   (1.927)
Education (1)

Total Particulates              -0.003    -0.002      0.014     0.012      0.017     0.013
Multiplied by SO2 (2)           (1.134)   (1.110)    (1.580)   (2.240)    (0.739)   (0.701)

Number of Days                   2.553     1.559      5.960     3.005     33.610    16.854
Precipitation > .1" (3)         (5.641)   (4.489)    (3.062)   (2.399)    (6.065)   (3.661)

Number of Days Maximum          -2.077    -1.151      1.973     1.469    -18.290    -7.095
Temperature < 32 degrees (4)    (2.244)   (1.659)    (0.489)   (0.542)    (1.627)   (0.761)

Population per .156              0.019    -0.002      0.309     0.067      0.268    -0.064
Residential Acres (5)           (0.738)   (0.122)    (2.999)   (1.137)    (1.254)   (0.378)

Constant (6)                  2774.988  1470.260   5123.750  3677.218  10119.306  8587.513
                                (4.871)   (3.414)    (3.369)   (3.574)    (3.435)   (3.430)

R2                                .234      .193       .315      .332       .415      .360
               References

 1.  Anderson, D. O., "The Effects of Air Contamin. on Health:  In Three Parts," Canad.
     Med. Assoc. J. 97 (Sept. 1967):  528-536, 585-593, 802-806.
 2.  Auster, Richard; Leveson, Irving; Sarachek, Deborah, "The Prod. of Health, An
     Exploratory Study," J. Hum. Res. (4) (Fall 1969):  411-436.
 3.  Berke, Jacqueline and Wilson, Vivian, Watch Out for the Weather, New York, (1951),
     Viking Press.
 4.  Bushtueva, K. A., "Toxicity of H2SO4 Aerosol," in U.S.S.R. Literature on Air Pol.
     and Rel. Occup. Dis., A Survey, Vol. 1, pp. 63-66.
 5.  _____________, "Exper. Studies on the Effects of Low Oxides of Sulfur Concentrations
     on the Animal Organism," in Limits of Allowable Concentrations of Atmospheric
     Pollutants, Book 5, pp. 92-102.
 6.  Draper, N. R. and Smith, H., Applied Regression Analysis, New York:  John Wiley and
     Sons, Inc., 1966.
 7.  Freeman, A. Myrick, III, "Distrib. of Environ. Quality," in Environmental Quality
     Analysis, pp. 243-280.  Edited by A. Kneese and B. Bower.  Baltimore:  Johns Hopkins
     Univ. Press, 1972.
 8.  Fuchs, Victor, "Some Economic Aspects of Mort. in the U.S.", 1965, NBER.
     (Mimeographed).
 9.  Griffiths, M., "A Geograp. Study of Mort. in an Urban Area," Urban Studies 8
     (June 1971):  111-120.
10.  Gross, P.; Rinehart, W. E.; deTreville, R. T., "The Pulmonary Reactions to Toxic
     Gases," Am. Indust. Hyg. Assoc. J. 28 (J/A 1967):  315-321.
11.  Hirsh, Joseph, "Comfort and Dis. in Rel. to Climate," Climate and Man, 1941 Yearbook
     of Agr., U.S. Dept. of Agr., Publ. (1941).
12.  Kitagawa, Evelyn M., and Hauser, Philip M., "Education Differ. in Mort. by Cause of
     Death, U.S. 1960," Demo. 5 (No. 1, 1968):  318-353.
13.  Kosa, John; Antonovsky, Aaron; Zola, Irving, eds., Poverty and Health, Harvard U.
     Press, 1969.
14.  Lave, Lester B. and Seskin, Eugene P., "Does Air Pol. Shorten Lives?," Proc. of
     Second Res. Conf. of Inter-University Committee on Urban Econ., Chicago, pp. 293-328
     (1970).
15.  _____________, "An Analysis of the Assoc. between U.S. Mort. and Air Poll.," J. of
     Amer. Stat. Assoc. 68 (June 1973):  284-290.
16.  Petersen, William F., M.D., Man-Weather-Sun, Charles C. Thomas, Springfield, Ill.,
     (1947).
17.  Schrenk, H., et al., Air Poll. in Donora, Pa., Wash., D.C.:  U.S. G.P.O., 1949.
18.  Scott, J. A., "The London Fog of Dec. 1962," Med. Officer 109 (1963):  250-252.
19.  Shryock, Harry S. and Siegel, Jacob S., The Methods and Mater. of Demo., Vol. 2,
     Wash., D.C.:  U.S. G.P.O., (1971).
20.  Smith, Wayne E., "Factors Associated with Age-Specific Death Rates, California
     Counties, 1964," Amer. J. of Pub. Health 58 (Oct. 1968):  1937-
21.  Toyama, T., "Studies on Aerosol.  I.  Synergistic Response of the Pulmonary Airway
     Resistance on Inhaling Sodium Chloride Aerosols and SO2 in Man," Japan. J. of Indus.
     Health 4 (January 1962):  86-92.
22.  U.S. Dept. of Health, Education and Welfare, Air Quality Criteria for Particulate
     Matter, Wash., D.C.:  U.S. G.P.O. (1969).
23.  Wilkins, E. T., "Air Pollution Aspects of the London Fog of December 1952," Royal
     Meteo. Soc. J. 80 (April 1954):  267-271.
24.  Winkelstein, Warren, et al., "The Relat. of Air Poll. and Econ. Status to Total Mort.
     and Selected Resp. System Mort. in Men.  I.  Susp. Part.," Arch. of Envir. Health 14
     (Jan. 1967):  162-171.
25.  Zeidberg, Louis D., et al., "The Nashville Air Poll. Study (Pts. 5, 6, 7)," Arch. of
     Envir. Health 15 (Aug. 1967):  214-238.
                                                       213

-------
                          EVALUATION OF HEALTH DATA IN TERMS OF ENVIRONMENTAL FACTORS
                     Meyer Katzper
           Systems and Information Analysis
                  Rockville, Maryland
                    N.  Phillip Ross
           Bureau of Quality Assurance, DHEW
                  Rockville, Maryland
Presently, improved health data is becoming available
in terms of hospital records which are to be part of
a national computerized data base.  Simultaneously,
the more comprehensive environmental monitoring which
is being implemented provides measures of environ-
mental pollutants.  The question that our models
address is whether stress due to environmental  factors
can be detected in hospital data.  To the extent that
hospital data reflects environmental stress, a dis-
criminating tool is available for determining which
pollutants are most likely to be significant from a
health viewpoint.

                      Background

A major underlying motivation for environmental studies
is the recognition that environmental factors affect
man's health.   In carcinogenesis research, efforts
have been undertaken to establish a direct link be-
tween an environmental factor such as might exist in
a special work environment and the development of
cancer.1,2  However, there have been few systematic
attempts to establish a relationship between data on
incidence of illness and broad-range environmental
monitoring data.  Epidemiological studies generally
cover severe episodes of environmental pollution.  In
such cases, changes in morbidity and mortality patterns
have been related to pollution in the community
environment.3,4  Part of the difficulty in the past in
relating health and environmental factors has been a
lack of good data.  Another difficulty has been
insufficient realization of the practical significance
of large-scale studies and a resulting lack of high-
level commitment to their support.  These drawbacks
are presently being overcome.5,6

This paper examines the potential use of a large-scale
health data base in defining standards for levels and
exposure times to environmental pollutants known to
have deleterious effects on human health.  In 1972,
Congress passed legislation creating Professional
Standards Review Organizations (PSRO).  The PSRO
program provides the unique opportunity to collect
uniform hospital discharge records on a national
basis.  For the first time there will exist a com-
prehensive data base containing health information
on an entire population.  The necessity for statisti-
cal inference will be eliminated.  The population
data which will be available could be used to validate
suspected relationships as well as to uncover rela-
tionships heretofore only hinted at in sample data
sets.  Once this data base is established, linkage
to local and national environmental data bases  will
provide a new and powerful capacity for analysis of
health data as it relates to environmental conditions.

In order to utilize this potential, there is a need
to develop a conceptual base from which meaningful
analysis of the data will be possible.  Our approach
will be to develop a series of models which will
present a conceptual framework on which to build
more complex and realistic models enabling researchers
to realize the potential of this new data base.
A series of models is considered, starting with  the
simplest and proceeding to introduce complicating
factors.  All models consider dose  levels of
deleterious substances and the length of time of
human exposure.  Within the framework of the model,
we seek to answer the question of whether a response
would be detectable by examination  of clinical
records.  When results are detectable, the model
presents a basis for setting human  health hazard
levels in terms of human health effects.  As a
canonical example we consider a population subjected
to an ongoing environmental stressor.  An attempt is
made to describe possible effects of the continued
stress on the population.  Underlying assumptions
must be made as to the characteristics of the
resultant illness.  These assumptions form the frame-
work of the model which is then mathematically and
logically formulated.  These results can then be used
to set up a quantitative methodology for establishing
acceptable stressor levels.  The direct and immediate
benefit of the modeling is development of the metho-
dology which will yield insight into the nature of
the problem if not its actual solution.
                   Model Development
Background
There is no doubt that environmental pollution has
adverse effects on human health.  The episodes of
Donora, Pennsylvania (1948) and London (1952) provide
indisputable evidence that in extreme cases environ-
mental pollution can result in considerable loss of
life and in serious illness.  Acute episodes of pollu-
tion represent abrupt and unusual exposure to high
concentrations of pollutants, and produce the most ob-
vious health effects.  However, human populations are
continually exposed to varying levels of pollutants
during their lifetimes.  Recent studies have shown
that chronic exposure to moderate concentrations of
pollutants does adversely affect human morbidity and
mortality.7

Chronic exposure of human populations to low levels of
pollutants is an inevitable consequence of man's
technological development and has become a political
and economic fact of life.  The problem of developing
standards for acceptable levels of pollutants is
complex.  In principle, a "dose-response" relationship
can be established if the exposure levels are high and
the cause and effect relationship clear; however, with
low-level chronic exposure the setting of standards
becomes very complex.

The models developed in this paper focus on long-term
exposure of populations to low-level concentrations of
environmental pollutants with known cause-and-effect
relationships to human health.  The models are simpli-
fied by design, and as stated in the introduction, are
intended to provide a structural framework from which
continued probing and analysis of the data will lead
to more realistic models and the development of
objective methodologies for the setting of standards.
                                                       214

-------
In all  the models discussed, the basic underlying
premise is that although chemical  substances are
toxic at some concentration,  a concentration exists
for all substances below which no injurious effects
will result no matter how long the exposure.8

Using this conceptual  basis,  the models are  developed
in such a manner as to provide for the generation of
dose-response curves which will  allow for estimation
of standards for pollution levels  based on analysis
of clinical data.   Even though the state-of-the-art
is such that the generation of a complete set of
dose-response curves for all  pollutants  for differ-
ent types of populations is not technically feasible,
the models can provide a basis for initial generation
of dose-response curves which will provide the basis
for decisionmaking in setting standards for individual
pollutant levels in the environment.

                        Models

The  simplest model relates effects on health to long-
term chronic exposure at low levels of a single pol-
lutant.  Exposure is assumed to result in a cumulative
effect over time with the resultant onset of clinical
symptoms when the cumulative dose reaches a critical
level.  The effect may be expressed as follows:

     Y_i = LT ≥ Q  →  Onset of Clinical Symptoms                    (1)

where

Y_i is the cumulative effect value for the ith
   observation

L   is the level of pollutant which is assumed to be
   constant over time

T   is the time of continuous exposure to the pollutant

Q   is the critical value for the cumulative effect.

This model assumes all individuals in the exposed
population are identical in their reaction from
exposure to the pollutant.  The model implies a
binary situation in which the ith individual does
not demonstrate any clinical  symptoms until the
cumulative exposure effect equals or exceeds the
critical value Q:

     P (of Clinical Symptoms) = 0  when LT < Q
     P (of Clinical Symptoms) = 1  when LT ≥ Q

Validation of this model required the examination of
the clinical records of a cohort population (i.e.,
a  group of individuals all exposed to level L of a
pollutant at the same time for the same period of
time (T)).  If the model is valid, the clinical records
for the cohort will show onset of symptoms for all
members of the cohort at the same time.  Determination
of Q in turn allows for the establishment of accept-
able standards for pollutant levels.
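A minimal sketch of model one's threshold behavior, with arbitrary illustrative values
of L, T, and Q:

     # Sketch of model one: no clinical symptoms until cumulative exposure L*T
     # reaches the critical value Q.  The numbers are arbitrary illustrations.
     def p_symptoms_model_one(level, time, q_critical):
         """P(symptoms) = 0 when L*T < Q, 1 when L*T >= Q (Equation 1)."""
         return 1.0 if level * time >= q_critical else 0.0

     Q = 120.0
     for t in (10, 20, 30, 40):
         print(t, p_symptoms_model_one(level=3.0, time=t, q_critical=Q))
     # 3.0 * 40 = 120 >= Q, so symptoms appear for the whole cohort at t = 40.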

Unfortunately, humans rarely respond in such a uni-
form fashion as is assumed by this model.  Biolog-
ical variations within the cohort population will
result in variations in response to constant exposure.
It is not unreasonable to assume that the variations
in dose-response times will be a function of biologi-
cal variability resulting in the dose-response times
being distributed as a random normal  variate.  It is
possible to modify model one to accommodate this
concept.

As in model one the modified model relates health
effects to chronic exposure at low levels of a
single pollutant; however, this model provides for
individual biological variations in the members of
the exposed population.  This model may be expressed
as follows:
Y_i = LT + e_i                                        (2)
where
e_i is a normally distributed variable representing the
individual biological variability component for the
ith individual.  The expected value of e_i is 0.

Using this model one would expect to find in a cohort
population exposed to a single pollutant at level L
for a given time T (such that LT = Q) that only a
portion of the cohort population would exhibit
clinical symptoms as a result of exposure rather than
the entire population as would be expected under
model one.  If we accept the assumption that e is a
random variable normally distributed with an expected
value of 0, then the portion of the cohort population
exhibiting clinical symptoms after exposure to level L
of the pollutant for time T would be one half (.50).
The concept of a cumulative critical value still
holds; however, each individual reacts differently to
the same exposure effectively having an individual-
ized Q level.

Y_i = LT + e_i

If LT = Q for the single pollutant, then

Y_i = Q + e_i

P (of Clinical Symptoms) = 0  for (Q + e_i) < Q
P (of Clinical Symptoms) = 1  for (Q + e_i) ≥ Q
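
A minimal simulation sketch of model two, assuming an arbitrary cohort
size and an arbitrary standard deviation for e_i (neither is specified in
the paper), illustrates why approximately one half of a cohort exposed
such that LT = Q is expected to exhibit symptoms.

    import random

    def fraction_with_symptoms(L, T, Q, n=10000, sigma=1.0, seed=1):
        """Model two: Y_i = L*T + e_i with e_i ~ Normal(0, sigma); symptoms
        appear when Y_i reaches the critical value Q."""
        rng = random.Random(seed)
        count = sum(1 for _ in range(n) if L * T + rng.gauss(0.0, sigma) >= Q)
        return count / n

    if __name__ == "__main__":
        # With L*T exactly equal to Q, about half the cohort shows symptoms.
        print(fraction_with_symptoms(L=0.5, T=20, Q=10.0))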
-------
For exposure to two pollutants, the model may be
expressed as follows:

Y_i = a_1 L_1 T_1 + a_2 L_2 T_2 + I + e_i

where

Y_i = The cumulative effect value for the ith in-
     dividual

a_1 = Weight factor for exposure to the first pollutant

a_2 = Weight factor for exposure to the second
     pollutant

L_1 = Level of concentration of first pollutant

L_2 = Level of concentration of second pollutant

T_1 = Time of exposure to first pollutant

T_2 = Time of exposure to second pollutant

I   = Synergistic or antagonistic effect of interaction
     between the two pollutants

e_i = Measure of biological variation for ith
     individual

If a cohort population is exposed to constant levels
of the first (L_1) and second (L_2) pollutants for a
specified time (T) where

T = T_1 = T_2

such that T(a_1 L_1 + a_2 L_2) = Q_12

where

Q_12 = a cumulative critical value for exposure to
       L_1 and L_2,

then Y_i = Q_12 + I + e_i.

Validation of the model from clinical records becomes
a matter of detecting the existence of an interactive
effect.  If I = 0, then the model is simply a
linearly additive function and one would expect to
detect clinical symptoms in approximately one half
the cohort population after exposure to the two
pollutants for time T.  However, if I ≠ 0, one must
examine the data to determine the presence of
synergistic or antagonistic effects due to the
interactive process.  For example, there have been
synergistic effects observed from exposure to sulfur
oxides in the presence of undifferentiated parti-
culate matter.9  Laboratory studies have shown that
a combination of sulfur oxides and particulates may
produce an effect that is greater than the sum of
effects produced by the pollutants individually.
The degree of potentiation is dependent on the mix
of pollutants and varies across different concentra-
tions.  A three- to four-fold potentiation of the
irritant response to sulfur dioxide is observed in
the presence of particulate matter capable of
oxidizing sulfur dioxide to sulfuric acid.10  In
situations where two pollutants are involved, the
validation of interactive effects through examination
of clinical records is straightforward.  Interactive
synergistic effects would result in more than one
half the cohort population exhibiting clinical
symptoms after exposure to the pollutants for a time
equal to T.  The greater the proportion of records
showing clinical symptoms, the greater the synergism.
Interactive antagonistic effects would result in
less than one half the cohort population exhibiting
clinical symptoms after exposure to both pollutants
for a time equal to T.  The greater the antagonistic
effect, the lower the proportion of records showing
clinical symptoms.
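
The validation logic described above can be sketched as a simple
classification of the observed response fraction; the tolerance used
below is an assumed allowance for sampling variation, not a statistical
test prescribed by the paper.

    def classify_interaction(n_with_symptoms, cohort_size, tolerance=0.05):
        """Assuming the joint dose satisfies T*(a1*L1 + a2*L2) = Q12, a
        response fraction near one half suggests I = 0 (additivity); a
        larger fraction suggests synergism, a smaller one antagonism."""
        fraction = n_with_symptoms / cohort_size
        if fraction > 0.5 + tolerance:
            return "synergistic (I > 0)", fraction
        if fraction < 0.5 - tolerance:
            return "antagonistic (I < 0)", fraction
        return "approximately additive (I = 0)", fraction

    if __name__ == "__main__":
        print(classify_interaction(n_with_symptoms=720, cohort_size=1000))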

Expansion of this model to involve more than two
different pollutants is possible; however, the
                                                       216
complications produced by the possibilities of
secondary, tertiary, quaternary .  .  . interactions
are limitless and present methodological problems of
extreme complexity in relating such models to actual
clinical data.

            Further Modeling Considerations

In reality, individuals are exposed to a variety of
pollutants at varying concentrations and for differ-
ent periods of time during their lifetime.  Those
exposed do not necessarily suffer from a single
specific pollution-induced disease, but rather
experience an aggregation of clinical symptoms in
part due to pollutant exposure, aggravation of a
previous weakness due to prior illness, etc.

An approach to the multifactor problems of pollution-
induced illness offered by the availability of clini-
cal information on a national basis is the capability
of grouping disease entities relative to the presence
of environmental  stressors for specified populations
(populations defined by geographic groups or logical
groupings).  Consider a patient exposed over time to
a variety of pollutants.  In such a case it would be
very difficult to determine which pollutant or
combination of pollutants was responsible for the
illness.  If there is available a large accumulation
of clinical and associated environmental records for
different populations, it would be possible to set
up a three-dimensional matrix in which we set out
environmental stressors versus clinical symptoms and
population.  By examining the matrix it would be
possible through logical elimination to determine
which stressors were not related to special disease
syndromes.  For example, consider two populations
with identical clinical symptoms: in population one,
environmental stressors A and B are present, and in
population two, environmental stressors A and C are
present.  It is reasonable to conclude that factor
A is a critical environmental stressor in contrib-
uting to the presence of the observed clinical
symptoms.  Analytical techniques such as cluster
analysis applied to such matrices could provide key
information in relating specific pollutants or
clusters of pollutants to specific disease entities
in the population.  Once identified, appropriate
models can be developed to aid in the establishment
of pollutant level standards for different populations.
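
The logical-elimination step can be sketched as follows.  The populations
and stressors are hypothetical, and a simple set intersection stands in
for the fuller three-dimensional matrix and cluster analyses described
above.

    def candidate_stressors(populations):
        """populations: mapping of population name to the set of environmental
        stressors present, restricted to populations exhibiting the same
        clinical symptoms.  Stressors common to all such populations remain
        as candidates; the rest are eliminated."""
        stressor_sets = list(populations.values())
        return set.intersection(*stressor_sets) if stressor_sets else set()

    if __name__ == "__main__":
        # Population one shows stressors A and B, population two shows A and C;
        # both exhibit identical symptoms, so A is the candidate stressor.
        print(candidate_stressors({"population one": {"A", "B"},
                                   "population two": {"A", "C"}}))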

The models developed in this paper do not address the
complexity inherent in real-world situations.  Obvious-
ly the conditions and assumptions of these models are
not frequently met in real-world situations.  The
models are indicators of analytic approaches which must
be undertaken.  The basic models must be augmented
and modified to accommodate the specific situation
addressed by the available data.  For example, one of
the aspects of environmental stress that must be
accounted for in evaluating health effects is the
level at which permanent damage to the organism
occurs.  A complementary aspect of this study is the
modeling of recuperation under improved conditions.  A
case in point is an area with good air quality which
at frequent intervals gets a peak of some pollutant.
The physiological effects may then be reversible; the
large dose received being compensated for by the long
recovery time available between peaks.  These observa-
tions can form the basis for a recuperative model of
environmental effects where the cumulative critical
value is never reached even though exposure time
exceeds that which is necessary to produce illness.
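
A minimal sketch of such a recuperative model follows; the constant
recovery rate and the exposure pattern are assumed values chosen only for
illustration.

    def effective_dose_history(levels, recovery_rate=0.2):
        """Cumulative effective dose that builds up during pollutant peaks and
        decays toward zero during clean intervals between peaks."""
        dose, history = 0.0, []
        for L in levels:
            dose = max(0.0, dose + L - recovery_rate)
            history.append(dose)
        return history

    if __name__ == "__main__":
        # Short peaks separated by long clean intervals: the effective dose
        # never approaches a critical value of, say, Q = 10.
        levels = ([1.0] * 3 + [0.0] * 12) * 4
        print(max(effective_dose_history(levels)))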

Another situation which must be addressed is the
possibility that the clinical records may show stress
symptoms which cannot be directly related to any

-------
given environmental stressor.  Studies by Martin11
in London showed direct relationships between levels
of smoke and sulfur dioxide and respiratory and
cardiac morbidity.  Empirical evidence indicates that
environmental stress is producing clinical symptoms
in individuals with prior weaknesses.  In cases
where it is suspected that environmental stressors
have exacerbated prior weaknesses, the extended
clinical records must be examined.  In some cir-
cumstances it may be desirable to carry out
retrospective studies to ascertain whether environ-
mental factors are the underlying stressors.  Such
studies require an extra level of data and extra
inductive steps which are not presently considered
by our models.

All of the models which we have discussed can be
programmed and simulated.  With increasing com-
plexity, when analytic formulations cannot be
solved, the logical modeling structure can still be
established and simulated to determine results.

                      Conclusion

The models which we have presented illustrate our
perceptions of environmental effects and indicate
how health data can provide a basis for testing of
the hypotheses underlying the models.  Presently,
the system for gathering data is being established.
With the initial gathering of data, the appropriate
ranges of parameters to be used in the models will
be determined.  At this point, simulation will be
used to indicate the effects across the empirically
determined ranges.  Results will be compared with
data and improvement will be made as our understanding
increases.

The main reason for formulation of basic models
using a minimum of hypotheses and advanced tech-
niques is that our present level of information is
insufficient to support intricate theories.   With
the basic models as a guide, fundamental questions
can be addressed and appropriate data collected
for their resolution.  Based on the new informa-
tion obtained, the models can be revised, refined,
and expanded.  In this stepwise manner with  the
interaction of theoretical  constructs with data,
a firm foundation can be laid for understanding
the environmental  effects on health.

                      References
1.  Morris, J. N., Uses of Epidemiology, Williams and
      Wilkins Co., 1964.

2.  Ember, L., "The Specter of Cancer," Environmental
      Science and Technology, 116, December 1975.

3.  National Air Pollution Control Administration, Air
      Quality Criteria for Sulfur Oxides, Department
      of Health, Education, and Welfare, Publication
      AP-50, 1969, (Part of a series of publications
      on air pollutants).

4.  Health Hazards of the Human Environment, World
      Health Organization, Geneva, 1972.

5.  Goran, M. J., et al., "The PSRO Hospital Review
      System," Supplement to Medical Care, 13, No. 4,
      April 1975.
6.  International Conference on Environmental Sensing
      and Assessment (ICESA), Report in Environmental
      Science and Technology, December, 1975.

7.  Purdom, W.  P., Environmental  Health, Academic
      Press, New York,  1972.

8.  IBID, number 4, p.  133.

9.  National Air Pollution Control Administration, Air
      Quality Criteria  for Particulate Matter,
      Department of Health,  Education, and Welfare,
      Publication AP-49,  1969.

10.  IBID, number 3.

11.  Martin, A.  E., and  Bradley, W.,  "Mortality  and
      Morbidity Statistics and  Air Pollution",
      Proceedings Royal Society of Medicine,  57,
      969-975,  1964.
                                                      217

-------
                               INTEGRATED ASSESSMENT:  CONCEPT AND LIMITATIONS
                     Lowell  Smith, Richard H. Ball, Steve Plotkin, and Frank Princiotta
                                     U.S. Environmental Protection Agency
                                              Washington, D.C.
                                                     and
                                               Peter M. Cukor
                                               Teknekron, Inc.
                                            Berkeley, California
                  Introduction

The Integrated Assessment (IA) program conducted by
the Environmental Protection Agency (EPA) is a multi-
agency, interdisciplinary effort to define and eval-
uate the various environmental and socioeconomic ef-
fects which result from energy extraction, processing,
transportation, conversion and end use activities.
Integrated Assessment at EPA traces its genesis to
the early socioeconomic and modeling work performed
within the EPA's Washington Environmental Research
Center, where research was conducted on environmental
assessment methodology, on environmental benefit deter-
minations, and on linking environmental protection
strategies with models of the U.S. economy.

In early 1974, following release of the Ray report
(reference 1), the Office of Management and Budget
established an interagency task force on "Health and
Environmental Effects of Energy Use," chaired by Dr.
Donald King, Department of State, and Dr. Warren Muir
of the President's Council on Environmental Quality.
The task force's objectives were to:
     •  Examine the existing Federal research program
        relating to the human health and environmental
        effects of energy use; and to
     •  Recommend mutually supporting multi-agency
        research programs, including a programmatic
        allocation of Federal research funds, to deve-
        lop a clearer understanding of the health and
        environmental effects of energy use.

An important conclusion of the task force was that the
social and economic consequences of alternative energy
and environmental policies needed to be considered
along with, and in coordination with, the health and
environmental impacts of such policies.  The authors
of the task force report (reference 2) recommended the
formation of a research program to identify "environ-
mentally, socially, and economically acceptable
(energy development) alternatives" by integrating re-
sults from the two research areas, socioeconomic and
health/ecological, as well as from research on cost/
benefit/risk evaluation and policy implementation
alternatives.  In response to these recommendations,
the Office of Energy, Minerals, and Industry (OEMI)
established its IA program, which is further described
herein in terms of its modeling requirements.

                  Problem Statement

The problem addressed by the IA program is one that
has become painfully apparent to society over the past
several decades.  The development of new technologies,
or the extension of technologies to undeveloped geo-
graphical areas, carries with it a chain of impacts
extending throughout the physical, economic, and social
systems.  Many of these impacts are initially unfore-
seen, yet they may have far-reaching consequences which
run counter to, and even overshadow, the intended
benefits brought by a technology.
The traditional research  programs  of the EPA have in-
cluded in  their analyses  of  energy technologies an
examination of a broad  range of  environmental effects.
This range of research  effort spans the measurement of
pollutants discharged from stacks,  outflows,  etc.,
determination of their  ecosystem and health impacts,
technologies for controlling or mitigating
these impacts, and computations  of  the  associated
control costs.  The IA  cuts  across  these several
areas of research interest,  emphasizing their inter-
connectedness for policy  analysis purposes.

Additionally, the IA program attempts to carry these
analyses further, by focusing on the secondary and
higher order impacts of the  technologies themselves
and of the environmental  controls applied to  them.
The higher order effects  considered include the
possible social and economic consequences of  techno-
logies on land use and population migration,  and the
measures of the associated impacts  on:  the social struc-
ture (e.g. changes from rural to urban  society,  in-
flux of workers with different social values  leading
to conflict), the environment (e.g.,  influx of popu-
lation creating sewage problems, destruction  of
natural habitats), and on the economy (e.g.,  demand
for new construction and  operation  workers which may
create labor shortages for previously established
industry and agriculture, and capital requirements and
their associated economic and environmental  implica-
tions).  The program also attempts  to trace  in depth
the effects of environmental controls on the  environ-
ment, society, and the economy.

An obvious prerequisite to incorporating social and
economic analysis into the health and environmental
effects research program  is  that of insuring  that the
more focused portions of  the Federal  energy research
program are complete with regard to investigating in
sufficient depth all of the  impact  areas of concern.
The IA program is responsible for identifying gaps in
the overall research effort  that prevent a complete
assessment of optimal development and environmental
control alternatives.  Thus,  work conducted within the
program consists mainly of integrative  analysis  rather
than original research or data collection.  When
further original research or data collections are iden-
tified as being necessary to allow  a  complete analy-
sis, the program will generally  turn  to the other
research programs of the  EPA and the  other Federal
agencies for assistance.

                  Program Objectives

The IA program is responsive to  the national  need of
developing a well-coordinated set of  energy policies
which will foster the joint  attainment  of energy and
environmental goals.  Further, it provides a  mecha-
nism whereby implementation  strategies  for these coor-
dinated energy policies can  be conceptually  tested as
to their full range of socioeconomic  and environmental
consequences.  Objectives of the IA program include:
                                                      218

-------
    •  Identification of energy supply and con-
       version alternatives which are acceptable
       when judged jointly by environmental, social
       and economic criteria and constraints;
    •  Evaluation of the cost/risk/benefit trade-
       offs of energy production,  conservation,
       and pollution control alternatives,  especial-
       ly as these prevent environmental  damage and
       secure related benefits;
    •  Assistance to the nation, and EPA  in parti-
       cular, in the selection of  optimized policies
       for the attainment of environmental  quality
       goals; and
    •  Identification of critical  gaps  in current
       energy-related research programs,  and of
       other priority research topics,  which must
       be addressed  in  order to  support direct EPA
       responsibilities.

                 Program Methodology

The primary analysis  tool used  by the IA program is the
Technology Assessment (TA).  Coates (reference 3)  de-
fines  TA  as "the  systematic study of the effects on
society that may  occur when a technology is  introduced,
extended, or modified with a special emphasis on the
impacts that are  unintended, indirect,  and delayed."
By this definition, TA precisely  fits the  analysis re-
quirements defined  above.  The  TA's incorporated in the
IA program will  focus on regional energy development
problems  and emerging energy technologies.

The appropriate way  to conduct  a  TA remains  a matter
for extended debate.  TA methodologies  range from
highly formal structures modeled by decision analysis
techniques, event trees, and quantitative cost/risk/
benefit analyses to less formal approaches that stress
intense interaction
within one or  more  interdisciplinary teams.   These
various techniques  for conducting Technology Assess-
ments are described  at length  in  the literature (see
references 4  through  7).  The  IA  program is  delibe-
rately neutral with respect  to  favoring any  TA meth-
odology,  at  least at  this  early stage of the program.
This attitude  is  subject to  change  as further ex-
perience  is  acquired, or as  special situations are
encountered.

Although  selection  of an appropriate methodology is ob-
viously critical,  the successful  conclusion  of a TA may
be even more  closely  linked  to  choices  made  regarding
the scope or  boundaries  of the  assessment.  These
choices are  linked  to:
     o  Identification  of the  decision-maker(s);
     o  Resources available  to  the assessment team; and
     o  Nature of the assessment  subject.

For instance,  local decision-makers will usually make
decisions based on the impacts on their jurisdictions
alone.  However,  a  TA addressed  to  this type of deci-
sion-maker must  consider what  actions outside juris-
dictions  might take if  the client chooses  a  course of
action which  is  antithetical to  the interests of these
outside jurisdictions.   Would  a  state cut  off finan-
cial assistance  if  a  locality  insisted  on pursuing a
course of action which  hurt  its neighbors? A TA that
did not consider  these  aspects  would be of limited
value.

When the  decision-maker  to be  addressed is the Federal
Government,  as is the case to  a  great extent within the
IA program,  the  definition of  project scope  becomes
quite different.   The Federal  decision-maker is nor-
mally placed  in  the rather ambiguous position of having
to incorporate simultaneously  the viewpoints and in-
terests of the Nation as a whole, as well  as the States
and other regional  or local  interest groups.  This type
of "global" perspective can rarely be fully accommo-
dated in a TA, and thus each assessment is forced,  re-
luctantly, to make critical choices as to the geo-
graphical boundaries of impacts considered, the time
frames to be examined, the types of impacts to be fo-
cused on, parts of fuel cycles to be stressed, etc. In
multi-year assessments, such choices are particularly
critical to the success of the first year efforts.

              Modeling Requirements

The IA program attempts to utilize research results
and specific models for relating a wide range of causes
with environmental and socioeconomic effects.  Some of
these include:

     •  Source emission characterization of specified
        operations as a function of operating load,
        fuel input, etc.;
     •  Operating effectiveness, economic costs,
        effects on reliability, etc., of pollution
        control technologies;
     •  Pollutant transport within and between media;
     •  Pollutant chemical transformation processes
        which occur within a given medium;
     •  Pollutant uptake and concentration in food
        webs;
     •  Acute and chronic responses of organisms to
        pollutant exposures;
     •  Pollutant effects on human welfare, including
        as yet poorly quantifiable impacts;
     •  Acute and chronic human health responses to
        ambient pollutant concentrations;
     •  Economic damage expected from pollutant re-
        leases;
     •  Local socioeconomic effects of energy develop-
        ment;
     •  Individual, corporate and institutional re-
        sponse mechanisms to changes in driving forces;
     •  Cost/risk/benefit analysis disaggregated to
        specific classes of parties at interest; and
     •  Net energy analysis for entire fuel cycles from
        extraction through transportation and conver-
        sion to pollution control and waste disposal.

Results of these analyses are integrated, through the
TA mechanism, into several possible cross-cuts or di-
mensions of comparative analysis.  Of greatest interest
are analyses which articulate the range of environmen-
tal and socioeconomic consequences for:

     •  Differing levels of pollutant control within a
        given energy technology;
     •  Alternative energy technologies within a spec-
        ific geographical region;
     •  Selected energy technologies applied to all
        geographical regions; and
     •  Differing strategies for development of a
        specified energy resource, including factors
        such as institutional constraints, time phasing
        and method of extraction.

Because of their generally wide-ranging nature, which
attempts to draw results from a number of disparate
disciplines into a policy-oriented decision structure,
the TA's within the IA program are referred to as Inte-
grated Technology Assessments (ITA's).  Each ITA at-
tempts to integrate its results across two or more of
the above listed analysis dimensions.

                  Current Projects

The IA program currently has two TA's fully underway, a
third about to be launched, and two more in the active
planning phase. The first three of these are described in
some detail in order to indicate the range of model-
                                                       219

-------
 ing requirements  and opportunities  within the IA program.

      1.   An Integrated Technology Assessment of Western
          Energy Resource Development:   The objectives of
          the Western Energy ITA are to:

      •  Assist the EPA in developing environmental con-
         trol policies and implementation strategies for
         mitigating the adverse impacts  of Western energy
         resource  development;
      •  Assist EPA's Office of Research and Development
         in evaluating that portion  of its environmental
         research  program dealing with the problems of
         Western energy development;
      •  Provide a balanced assessment of the full range
         of costs  and benefits  stemming  from alternative
         energy resource developments in the Western
         United States in order to assist Federal and
         State planning for such development.

 This ITA is being conducted jointly by  the University
 of  Oklahoma's Science and Public Policy (S&PP)  Program
 and the  Radian Corporation.  The Project Director is
 Dr.  Irvin (Jack)  White,  professor of political  science
 at  Oklahoma University and assistant director of the
 S&PP Program.

 The ITA  focuses on the impacts of developing coal,  oil
 shale, oil,  natural gas,  geothermal  and  uranium re-
 sources  in 13 western states (reference  8).  Develop-
 ment of  these resources,  and especially  of coal and
 oil shale,  has become a source of extreme contention
 among interest groups both within and outside of the
 region,  largely because the impacts, positive and
 negative,  are separated spatially and temporally.  For
 example,  the development  will  satisfy demand  for
 energy largely in the Midwest  and the Pacific coastal
 states,  while environmental damage will  largely accrue
 to  the resource rich states  inside the region.   Al-
 though in the long term the overall  financial position
 of  the resource states may possibly  improve  from the
 expanded tax base created by development,  short  term
demands for education, housing, and other services
associated with a rapidly expanding
 population will create an initial severe strain  on
 local  finances.

 The  Western  Energy ITA team does not favor a  highly
 structured approach  to  TA, and  thus  there  is  a de-
 emphasis  of  formal  decision analysis and  cost/risk/
 benefit  tools and model building.  Impact  analysis
 will focus on a series of  site-specific  and regional
 scenarios.  Energy development  levels are  set by as-
 suming levels of national  energy demand based on pre-
 vious forecasts and  allocating  shares of  the  supply
 responses to the region (possibly by utilizing the
 Gulf-SRI energy model). During  the first year of the
 study, the "boundaries" can be specified as follows:

     •  All portions of the fuel cycle (excluding end
        use) are considered except that   the uranium
        fuel cycle is examined  only  to the milling
        stage;
     •  The focus  of attention  in impact analysis will
        be the eight major resource  states.  Impacts
        outside the region are  not considered in depth
        with the possible exception  being at electri-
        city demand centers in  the Midwest; and
     •  Exogenous  variables affecting development
        rates are  not examined  in depth.

The implication of these boundaries  is that the  ITA
 focuses,  in the first year, on the question of how  to
 cope with development if it occurs.   The parallel ques-
 tions that a "complete" TA would attempt  to answer
whether or not development should occur,  and how to
 promote the level of development desired  (or,  at least,
 how to predict the level likely to occur)    require
 analyses considerably beyond the first-year  study
 boundaries.

      2.  An Integrated Technology Assessment of
          Electric Utility Energy Systems:  The Electri-
          cal Utility ITA has as its objectives:

      •  To provide a means of testing pollution  con-
         trol policies and strategies which affect the
         electric utility industry, and which must be
         formulated in response to current and  near
          term issues.
      •  To identify those issues, especially environ-
         mental issues, which are likely to require po-
         licy decisions in the future, and to identify
         the research programs which should be  initiated
         in order to provide a sound basis for  future
         decisions regarding these issues.

 The ITA is being conducted by the Energy and Environ-
 mental Engineering Division of Teknekron, Inc.,
 Berkeley,  California.  The Principal Investigator is
 Dr.  Peter  M.  Cukor;  the Project Director is
 Mr.  Glen R.  Kendall.

 The Electric Utility ITA focuses on the energy con-
 version and  pollution control technology alternatives,
 health and ecological effects,  and resultant national
 economic impacts  associated with activities of the
 electrical utility industry (reference 9).   Rapid
 depletion  of the  readily accessible fluid state dom-
 estic  energy resources for electrical power generation,
 coupled with international concerns about the quantity
 and security of  imported oil and gas,  have produced  a
 major  tilt in the industry in favor of nuclear  fission
 and coal combustion  as the electricity-producing  tech-
 nologies of  choice over  the next decade.   Additionally,
 factors such  as  the  decreasing  availability of  natural
 gas  supplies  are  continuing to  produce a shift  towards
 increasing electrical demand at  the expense of  the more
 traditional  energy sources,  even as the total national
 energy demand  has decreased over the  past two years.

The future development implied by these forces, involv-
 ing  development of new mining areas and increased pro-
 duction in established areas, development of  extensive
 new  transmission  and  storage facilities,  construction
 of  extremely  large fossil  and nuclear  generating  faci-
 lities,  and a vast quantity of supporting development,
 may  result in  the creation of new  (and the  exacerbation
 of existing)  environmental,  social  and economic problems
 that demand  the close attention  of  the Federal  govern-
 ment .

 In  contrast  to the Western  Energy  ITA,  which  is con-
 siderably broader in  terms  of the  "actors" who  play
 significant roles in  affecting the  course of  develop-
 ment,  the  Electric Utility  ITA is  structured  so as to
 consider the  actions  and effects of one industry  as it
 is affected by external  forces.  Thus,  Teknekron's
 approach to conducting this  ITA  relies  heavily  on
 creating models to predict  the behavior  of  the  electric
 utility  industry.  These models  are to  be exercised to-
 wards  the  end  of  the  first  year  by  analyzing  a  set of
 scenarios which are designed  to  display the results on
 the  industry,  environment and society  of  implementing
 alternative policy options.  A parallel  effort  will be
 conducted  to critically  review and  analyze  the  data
 and models available  to measure  the impacts of  alter-
 native  courses of development, and  to  analyze the sen-
 sitivity of current industry practices  to emerging pol-
 itical and social  changes.  A particularly important
part of Teknekron's work involves a thorough  review of
 the mechanisms of  atmospheric transport and transfor-
                                                       220

-------
mation of sulfur oxides and their associated health
impacts on human and other receptors.  Currently, this
area is rich with possibilities for modeling the rela-
tionship between regional sulfur oxide emissions and
the adverse effects of aerosol sulfates on public
health, welfare and ecological systems.

During the first year, the boundaries of the ITA are
such that it will:

     •  Focus primarily on existing coal technologies
        and secondarily on other fossil fuel technolo-
        gies;
     •  Focus on the power plant portion of the fuel
        cycle;
     •  In terms of environmental impacts focus on air
        pollutants and, more specifically, on long dis-
        tance transport and associated chemistry of
        atmospheric aerosols; and
     •  Generally confine air impact analysis to defin-
        ing exposure of populations to pollutants with-
        out calculating health effects, aesthetic or
        economic damages.

A description of the individual models being con-
structed during the first year to simulate the economic
decision making practices of the utility industry and
the resulting exposure of receptors to pollutant  re-
leases is contained in the following paper.

   3. Ohio River Basin Energy Facility Impact Study:

Congress, in a rider to EPA's FY 76 appropriation bill,
has required the Office of Research and Development to
conduct a study of the Lower Ohio River Basin, to "be
comprehensive in scope, investigating the impacts from
air, water and solid residues on the natural environ-
ment and residents of the region" which might result
from an increasing concentration of power plants in
the Ohio River Basin.  The IA program has developed a
plan for this study which casts it in the form of a
regionally-focused ITA (reference 10).  The scope is
broadened to consider the accelerated deployment of
both conventional power plants and coal-based syn-
thetic fuel plants in the Basin (which includes portions
of the states of Ohio, Illinois, Indiana, and Kentucky).
The major focus will be on an in depth examination of
the impacts of coal development and conversion pro-
cesses on the region, its people, social infrastructure,
agricultural lands and natural environments.  Mech-
anisms to mitigate potential adverse impacts and to
shape  future development along environmentally and
socially acceptable lines will be analyzed.  Addition-
ally,  attention will be paid to extra-regional con-
cerns, e.g., the role of Basin development in meeting
national energy demands, and the issue of how much
impact the long distance transport of sulfur dioxide
and the resulting sulfate aerosols may have on the
urbanized Northeast.

This ITA will be conducted over a two and a half year
period by several teams of academic researchers who
have been selected from Midwest universities.  It will
be one of the most ambitious TA's yet attempted in
terms  of the number of separate research groups and
institutions participating.

     4.  Other Planned Studies:

A regionally-oriented Appalachian ITA will commence in
 the fall of this year.  It will round out the major
regional ITA's which  focus on accelerated coal deve-
lopment and utilization.  A fifth ITA, to be oriented
towards a thorough assessment of advanced coal combus-
tion and conversion technologies, will begin in the
early  spring of 1977.
 ITA studies are also planned  to address  pollution
 control issues for industries other than primary
 energy producing industries.  These studies will
 emphasize inter-industry interactions and  aggregate
 effects of pollution from all industries.  Initial
 methodological studies will begin in  the fall  of 1976.
     5.  Strategic Environmental Assessment System
          (SEAS):
 Although it is not a TA in any sense, SEAS is an  im-
 portant enough analysis tool to merit a brief dis-
 cussion here.  SEAS, originally developed within  the
 old Washington Environmental Research Center of EPA, is
 a system of interdependent models designed to forecast
 the economic, environmental and energy consequences of
 alternative Federal environmental policies under vary-
 ing assumptions about the future.  The core of SEAS is
 an input/output model of the United States economy
 (INFORUM) which models the interactions between differ-
 ent economic sectors.

 SEAS is capable of developing estimates to 1985 of:
      •  Economic projections in terms of physical out-
         put for 350 industries and processes;
      •  Pollution control costs for 500 control tech-
         nologies; and
      •  Projections of environmental residuals and
         energy use for each of 350 industries.

 A detailed description of SEAS is available in re-
 ference 11.
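
 The input/output core of such a system can be illustrated with a minimal
 Leontief-style sketch; the three-sector coefficients, final demands, and
 residual factors below are hypothetical and are not drawn from INFORUM or
 SEAS.

     import numpy as np

     # Total sectoral output x satisfies x = A x + d, so x = (I - A)^(-1) d.
     A = np.array([[0.10, 0.20, 0.05],      # inter-industry requirements
                   [0.15, 0.05, 0.10],
                   [0.05, 0.10, 0.20]])
     final_demand = np.array([100.0, 50.0, 75.0])
     total_output = np.linalg.solve(np.eye(3) - A, final_demand)

     # Environmental residuals estimated from per-unit-of-output coefficients.
     residual_coeff = np.array([0.8, 0.3, 0.1])
     residuals = residual_coeff * total_output

     print("output by sector:   ", total_output.round(1))
     print("residuals by sector:", residuals.round(1))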

 SEAS represents a potentially important tool for tech-
 nology assessment and is thus being maintained and de-
 veloped further under the IA program.   For instance,
 SEAS offers the potential to measure the national im-
 pacts of new energy development to complement the fo-
 cus on in-region impacts of the regional TA's (such as
 the Western Energy ITA).  This type of measurement is
 crucial if Federal decision-makers are to take into
 account all of the potential impacts of development
 alternatives.

 Work currently in progress to modify SEAS includes the
 development of additional capability to predict energy
 demand in the transportation, residential and com-
 mercial, and industrial  sectors.   For  instance,  a new
 transportation model will forecast activity (vehicle
 miles traveled),  emissions and energy  demand, with
 feedbacks to the input/output model to account for
 changes in automobile mix and transportation efficiency.

 Another important part of current work is the inte-
 gration into SEAS of the Brookhaven National Labora-
 tory's ESNS energy supply model.   Addition of ESNS will
 allow the study of new energy sources  including coal
 gasification and liquefaction,  oil shale, off-shore
 oil drilling, and geothermal and  solar energy.  Finally,
 consideration is being given to extending the SEAS
 economic models to the year 2000, and  to improving the
 completeness and accuracy of the  regional data bases.

 Although the TA is the heart of the IA program,  other
 types of projects are undertaken  in support of the
 general program objectives.   These projects may be
 categorized as:

 Supplementary Studies - research projects that will
 supplement the TA's either by providing results that
 will fill research gaps identified by a TA, or by
 providing increased coverage of issues associated with
 a TA, as these become identified in the course of as-
 sessment as being crucial to EPA's fulfilling its re-
 sponsibilities.  This category also includes integrative
 studies that fall short of full TA's on topics of con-
                                                        221

-------
cern to the EPA.  Frequently, detailed modeling work
will be funded, as it supports the direct needs of an
ongoing TA.  Examples of this model development might
include:
     •  A detailed economic model of a pollution abate-
        ment technology for a specified energy conver-
        sion process, taking into account variable
        characteristics of input fuel types, several
        levels of required control, technological
        approach, etc.;
     •  Models which transform pollutant releases into
        ambient concentrations, especially for reactive
        pollutants on a regional scale; and
     •  A generalized model for displaying socioecono-
        mic impacts on idealized local communities for
        particular energy technologies.

Integrated Assessment Methodology - projects that will
develop new methods of conducting TA's and other inte-
grative analyses.  These are projects that integrate
and adapt the results of research being conducted by
the Office of Technology Assessment, the National
Science Foundation, other Federal agencies and the
private sector on TA methodology (e.g., cost/benefit/
risk analysis, multivariate decision analysis, etc.)
into a framework which is suitable for EPA's decision-
making processes.  Case studies conducted by these
projects will be chosen so as to be supportive of on-
going TA's.  This portion of the program includes
maintenance and further development of the Strategic
Environmental Assessment System (SEAS) model.

"Pass-Through Programs"- projects supporting the IA
program that are conducted by other Federal agencies
under EPA funding.  Agencies participating in this
portion of the program include USDA, TVA, ERDA, HUD,
and Commerce.

                  Conclusions
Although the Integrated Assessment program is in its
infancy, the two TA's are well enough along to have
surfaced several important issues for the program.
First, the TA's tend to deal with issues that go well
beyond the traditional interests of the EPA.  Thus,
coordination with interested Federal agencies and
other entities is vital not only for information ex-
change purposes but also to prevent questions of the
"propriety" of the research from hindering its pro-
gress.  Second, it has become clear that the object-
ive of incorporating social and economic concerns into
the decision-making process is extremely ambitious.
Defining useful but realistic analytical boundaries is
clearly one of the most crucial problems, if not the
most crucial, facing the program.  The danger here
is that these boundaries may be set so wide that the
level of analysis will become too shallow to be
credible.

Third, a fundamental limitation to the value of policy
analysis studies such as those just described is the
availability of necessary inputs in the form of tested
research results from the physical, biological and
medical sciences, and from economics, political and the
social sciences.  The uncertainties are large in much
of this desired information.  We are limited in our
ability to model many of the interactions which cri-
tically affect, or drive, policy decisions.  The error
bars on these uncertainties grow progressively wider
as we move from the models and supporting data for
pollutant releases at one end of the analysis structure,
through the media transport processes, to the effects of
pollutants upon distributions of receptors at the other
end of the analysis structure.
 Fourth, systematic methodologies to evaluate  these
 several forms of impact within a common base  of  com-
 parison are not available at present.  Finally,
 modeling the behavior of individuals' perceptions,
 social structures and institutions is in an embryonic
 stage of development.  Yet, it is these factors  which,
 in the end, determine how we treat the environment,
 what economic impacts and costs are to be internalized,
 and how much economic activity the environment must sustain.
 There are challenges here to engage our best  efforts
 for some time to come.

 REFERENCES

 1.  The Nation's Energy Future, A Report to the
     President of the United States.     December 1973,
     submitted by Dr. Dixy Lee Ray, Chairman, U.S.
     Atomic Energy Commission.

 2.  Report to the Interagency Work Group on Health and
     Environmental Effects of Energy Use. November 1974
     Prepared for the Office of Management and Budget:
     Council on Environmental Quality, Executive Office
     of the President.

 3.  Coates, Joseph F., "Technology Assessments:  The
     Benefits ... the Costs...The Consequences."  The
     Futurist;  December,  1971.

 4.  Arnstein,  Sherry R., and Alexander N.
     Christakis (1975) Perspectives on Technology
     Assessment,  based on a workshop sponsored by the
     Academy for Contemporary Problems and the National
     Science Foundation.   Columbus, Ohio:  Academy for
     Contemporary Problems.

 5.  Coates, Joseph F. (1974)  "Technology Assess-
     ment," in McGraw-Hill Yearbook of Science and
     Technology.

 6.  Coates, Vary T.  (1972) Technology and  Public
     Policy:  The Process of Technology Assessment
     in the Federal Government.   Washington,  D.C.:
     Studies in Science and Technology, 2 vols.

 7.  Jones, Martin V. (1973) A Comparative  State-
     of-the-Art Review of Selected U.S. Technology
     Assessment Studies, The Mitre Corporation,
     M73-62.

 8.  White, Irvin L., et al. (1976) First Year Work Plan
     for a Technology Assessment of Western Energy
     Resources  Development.  Washington, D.C.:   EPA-
     600/5-76-001.

 9.  First Year Work Plan, An Integrated Technology
     Assessment of Electric Utility Energy  Systems,
     December 15, 1975, Prepared for EPA by Teknekron
     under EPA Contract No. 68-01-1921.

10.  Work Plan for an Impact Assessment of Energy
     Conversion Facilities in the Ohio River Basin,
     Phase I, March 30, 1976 draft, Office of Energy
     Minerals, and Industry, U.S. Environmental
     Protection Agency, Washington, D.C.

11.  Strategic Environmental Assessment System (Draft),
     U.S. Environmental Protection Agency,  December 16,
     1975.  Can be obtained from Technical Information
     Division, Office of Research and Development, EPA
     Washington, D.C.  20460.
                                                       222

-------
                 AN INTEGRATED TECHNOLOGY ASSESSMENT OF ELECTRIC UTILITY ENERGY SYSTEMS
                                                     Peter M. Cukor
                                                      Sanford Cohen
                                                     Glen R. Kendall
                                                     Tom L. Johnston
                                                     Teknekron, Inc.
                                                     2118 Milvia Street
                                                   Berkeley, California


                                                     Stephen J.  Gage
                                                       Lowell Smith
                                          Office of Energy, Minerals and Industry
                                            Office of Research and Development
                                           U.S. Environmental Protection Agency
                                                     Washington, D.C.
                       Introduction

Teknekron,  Inc.,  under  the  sponsorship of  the  Office  of
Energy, Minerals and Industry, U.S. Environmental Protection
Agency, is conducting  an Integrated Technology Assessment
(ITA) of Electric  Utility Energy Systems.  The ITA has two
primary goals.   The first  goal  is to provide EPA with  the
capability to assess the environmental, economic,  institu-
tional and social effects of the generation of electricity and
those  activities which supply the  fuels  used  to  produce
electricity.  These effects  will be  quantified for a number of
scenarios. The  scenario elements  include alternative futures
for utility development, pollution control  and siting regula-
tions.  The second goal  is to assist EPA in developing research
and development programs whose results  are  necessary  in
order to conduct these  impact assessments.  The time frame
for the assessments is the period 1975-2000.

The ITA  is being conducted  over a three-year period.  The
scope and direction of  the   first  year's  effort  have been
defined such that  the  results will be responsive  to  the key
policy  issues which are likely to be faced by EPA over  the
next 6-18  months.    As  a   result,  the  first  year's  work
emphasizes:

     •    Fossil fuel electricity generation
     •    Primary and secondary air pollutants
     •    Short-range and long-range dispersion of air pollu-
          tants
     •    Human populations exposed to air pollutants

This paper provides a  description  of the models,  data  bases
and analytical  techniques  which are being employed  in  the
development  of the overall   ITA model.    This  model will
simulate  the environmental  effects associated  with  elec-
tricity  generation  and  the economic impacts of  alternative
policies for pollution  control on the   utility industry and
electricity consumers.  It must be  emphasized, however, that
impact prediction is not the  only  product of the  Integrated
Technology Assessment.  Rather,  impact prediction provides
the quantitative  information  which  serves  as  input to  the
technology assessment  exercise  in which the feasibility and
impact of alternative  environmental policies are appraised.
Thus, this paper focuses on only one part of the  conceptual
framework for conducting the  ITA.

                   Modeling Methodology

The basic analytical framework  for assessment of alterna-
tives with respect to a  single module of an electrical energy
fuel cycle is displayed in Figure 1.  Although this framework
is appropriate for assessing alternative regulations for pollu-
tion control  and siting  for any fuel cycle module,  quantifica-
tion of results rests upon the  assumption that such a module
will actually be constructed  and  operated (or continued in
operation, if  now existing) at a  definite level of production.
The electric utility industry can develop in alternative ways
over the coming decades.  Future developments will include
alternative  fuel  cycles,  fuel  cycle  configurations, facility
sites,  pollution control  technologies,  regulatory  policies,
demand growth rates, etc.  Control levels and  siting limita-
tions will affect not only  production costs and environmental
impacts, as indicated by Figure 1, but will also affect the
decisions made within the utility industry   with  regard  to
employing or not employing these technologies  under various
conditions.

It  is therefore necessary to  integrate the modular analysis
within  a more comprehensive framework which allows assess-
ment of the fundamental industry decisions  with regard to
utilization of  modules.   This comprehensive framework is
displayed in Figure 2.  The diagram  is intended to show the
major  technical and economic areas and interrelationships
which  must  be  integrated. It  demonstrates the  key  role, as a
driving force,  of  policy  analysis and scenario development
(represented by  the exogenously specified   inputs to  each
model  component).   Potential demand  for  electricity  and
costs of alternative methods  for  meeting this demand  under
environmental and other constraints are shown to be the basis
of utility decisions.  These decisions determine  the  course of
industry development and the consequences  which  may flow
from this course, including effects on human  populations and
ecosystems as well as socioeconomic effects.

Figure 2 has been constructed in two sections.  The section
above  the dotted line shows the configuration of components
and information flows for simulation of the development and
operation of an electric utility system.  By proper definition
of the  system,  it  is possible (with some important  limitations)
to  conduct  simulations  on   a  regional,  multiregional  or
national basis.  Inspection  of the components in the  upper
part of Figure 2 reveals that only  economic, technical and
environmental  policy decisions are involved.  The simulation
of physical, chemical  and biological phenomena,  which  must
be  addressed on a  site-specific  basis,  is described by the
configuration of  components  below  the dotted  line.   This
method of display has been selected in order to demonstrate
clearly how economics and policy affect decisions concerning
the type, quantity and location of power production facilities
and  the manner  in which  they are operated.   A realistic
assessment of how future developments in the electric utility
industry will affect environmental  quality must include an
estimate of  industry response to  alternative  policies as well
as simulation  of the  production, dispersion, transformation
and effects of environmental pollutants.

The  lower portion of Figure 2  shows the configuration of
components  and  information  flows  for simulation of  the
release,  transport and transformation of air pollutants and
determination  of populations exposed.  The  modeling  effort
must  be  carried  out on a site-specific basis.   Thus, the
information developed by the system simulation in  the upper
                                                          223

-------
                   [Figure 1 not reproduced:  facility and control selections and site
                    selections feed blocks for residuals and costs, transport, exposures
                    of receptors, damages, and trade-off analysis.]

                             Figure 1.   Elements in Scenario Specification

                   [Figure 2 not reproduced.]

                            Figure 2.   Module Diagram and Information Flows
                                            in the ITA Model
                                                            224

-------
portion of Figure 2 must be disaggregated so as to drive the
site-specific models shown in the lower portion of the figure.
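
A minimal sketch of this disaggregation step is given below; the
capacity-proportional allocation and all plant data are assumptions for
illustration, not the scheme actually used in the ITA model.

    def allocate_releases(regional_gwh, plants):
        """plants: list of (name, capacity_mw, tons_released_per_gwh).
        Regional generation from the system-level simulation is allocated to
        plants in proportion to capacity, yielding plant-level releases that
        can drive site-specific dispersion and exposure models."""
        total_capacity = sum(capacity for _, capacity, _ in plants)
        return {name: regional_gwh * capacity / total_capacity * rate
                for name, capacity, rate in plants}

    if __name__ == "__main__":
        plants = [("Plant A", 800, 9.5), ("Plant B", 1200, 7.0), ("Plant C", 500, 12.0)]
        print(allocate_releases(regional_gwh=12000, plants=plants))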

                   Scenario Development

The exogenously specified inputs to the components displayed
in Figure 2 are elements of the various scenarios which are
being addressed in the ITA.

By  the  term  "scenario" we mean a specification  of  future
events or conditions  which is sufficiently complete to allow
an evaluation of principal costs  to the industry  and consum-
ers,  effects  on air quality, cost effectiveness  of pollution
control  alternatives,  and  resource  consumption  that  will
occur if  these events or conditions do, in fact,  come  about.
Our focus is on the pollutants released and resulting  human
exposures.   Releases depend on  a number of diverse  factors
which may  be  grouped  under  the headings  of economics,
technology and policy.

For  example, the chemical species and quantities of pollu-
tants released  depend  on  the   fuels  used,  the design of
generating units and the total amount of electricity produced.
These, in turn, are dependent  upon general economic  condi-
tions and the market prices for  the fuels which  compete for
usage by utilities.  Given production  levels, generating  unit
characteristics and the properties of fuels, pollutant releases
depend upon  the technology employed for electricity produc-
tion  and  the effectiveness of any control  technology  em-
ployed.   Environmental policies  such  as  mandated emission
limits affect pollutants released from any one source.  Energy
policy affects production insofar as the price and availability
of  fuels are  concerned.  More direct effects on production
may result from efforts toward conservation, electrification
or load management.

Exposures  also depend  on when  and where pollutants  are
released and  on demographic  patterns.   Timing of releases
depends  on the temporal pattern of  demand which may be
affected  by  policies of  demand management.   Siting of
sources  is affected  by economic,  technological  and  policy
constraints.   Siting  flexibility  depends on the  existence of
cost-effective technology  for  long  distance  transmission.
Environmental controls  may take the form of siting restric-
tions.

Table I presents, in outline form, the principal elements to be
considered in specifying a  scenario.   Elements to be consid-
ered are grouped  under  the  four headings  of Economics,
Technology,  Siting and Policy.   Complete specification  of a
scenario requires hypothesizing  specific occurrences under
each of the headings.

For the  first  year's effort, an initial  list  of 25 scenarios has
been prepared by  postulating specific occurrences for  the
scenario elements identified  in  Table I.   This list  will be
subject to further refinement as the  modeling effort  pro-
ceeds.

Figure 3  exhibits  the 25  selected scenarios in event  tree
format.  In selecting these scenarios, the first criterion was
the elimination of the less probable combinations of elements
and the second was selection of those  most clearly focused on
the environmental problems associated with usage of coal.  In
the second and third years, greater emphasis will be placed on
other scenarios.

The 25 scenarios displayed  in the figure have been grouped in
five  sets  for  convenience  in   discussion.    The  lettered
elements  correspond  to the  list provided in Table I.  The
rationale for each group is as follows:

      Group A.  This group provides a basis for comparison of
policies on a more or less "business as usual" basis.  A high
economic growth rate with responsiveness to utility needs in
price setting is postulated with no major effective programs
for conservation.  These conditions have been typical of the
last  two decades, although  not representative  of the  very
recent past.  A high degree  of  dependence on either nuclear
power or  coal  for  baseload  additions  is  postulated  with
conversion of  gas fired power plants to oil.   The policy
options of baseline and relaxed controls (p1 and p2) and
maximum controls (p4) with one intermediate level (p3)
provide a  range with  which to evaluate possible costs and
benefits of alternate  levels of control.  These policy options
are combined with two siting alternatives in order to reflect
differences in  the populations  subjected  to,  or  protected
from, exposure to pollutants.

      Group B.  This group provides a contrast to Group A in
terms of showing how air pollution may be reduced from the
Group A baseline by factors other than control policies.  A
slow  economy is  predicted  with emphasis  on conservation
together with a high degree of dependence on nuclear power.

      Group C.   This group  provides  a contrast to Group A
with  respect to  costs.   A slow  economy,  emphasis on
conservation and nonresponsive  regulatory policies are  pre-
dicted.  Extensive dependence on coal is  assumed.  Control
options are  imposed  under  these  conditions of  financial
adversity for utilities.

      Group D.  Group D includes an electrification policy so
that growth in electricity usage is greater than  in Group A.
This  growth could result from the  occurrence of  several
events such as deployment of electric  automobiles, increased
use of electric  space conditioning and extensive curtailment
of natural gas  supplies.   Extensive  dependence  on  either
nuclear or coal for baseload additions is  investigated under
conditions  of natural gas curtailment (which would contribute
to the need for electrification). The  more stringent  control
policy, p4, is used together with the baseline control level,
rather than the relaxed level p2, as being more consistent with the increase in
pollutant releases that would be a result of the thrust toward
electrification.

      Group E.  Group  E provides a contrast to Group D by
isolating the impact of  the movement  toward electrification.
It  posits continued effective emphasis on conservation with
other elements the same as for Group D.

This listing of scenarios  is subject to change throughout the
ITA.  Individual scenarios may be dropped and others added.
The  basic  framework is expected to remain unchanged.  Of
course, all  the elements require extensive analysis to develop
the quantitative specifications.
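
Because a scenario is simply one element chosen from each of the four
headings of Table I, the candidate scenario space can be enumerated
mechanically before judgmental screening.  The short Python sketch below is
illustrative only; the element labels follow Table I, and the screening rule
shown is a hypothetical placeholder rather than the criteria actually used to
select the 25 scenarios.

    from itertools import product

    # Scenario elements from Table I (labels only; descriptions abbreviated).
    economics  = ["e1", "e2", "e3", "e4", "e5"]
    technology = ["t1", "t2"]
    siting     = ["s1", "s2"]
    policy     = ["p1", "p2", "p3", "p4"]

    # Every candidate scenario is one element from each heading.
    candidates = list(product(economics, technology, siting, policy))
    print(len(candidates), "possible combinations")   # 5 * 2 * 2 * 4 = 80

    # Placeholder screening rule: drop combinations judged implausible
    # (the actual first-year selection was judgmental, not rule-based).
    def keep(scenario):
        e, t, s, p = scenario
        return not (e == "e3" and t == "t1")          # hypothetical exclusion

    selected = [sc for sc in candidates if keep(sc)]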

  Description Of Components In The Simulation Framework

Specification of scenario elements provides the driving force
for the individual components  in the simulation framework
displayed  in  Figure 2.  The  role of  each component and the
inputs to and outputs  from  each component  are described
below.

Electricity Demand

Demand  for  electricity in  future years is the  fundamental
determinant  of utility growth.  Demand is thus a determinant
of  all  economic and   social   costs  and benefits  actually
accruing under any control policy.  Forecasts used  in the ITA
are being carefully evaluated.  Forecasting alternatives
include extrapolation  of past  trends (including  factoring
judgments  of  expert  individuals or  bodies),  econometric
predictions and technology forecasts.

The   Demand Component specifies  electricity demand  by
season for each year  and region of  interest.  Demand  by
season is   specified using a typical daily load shape curve for
each season and region of interest.
                                                            225

-------
Table I.  Elements in Scenario Specification

Economics

  Group I:  High growth rate, favorable conditions for
  utility financing and:

     e1   No significant effect on pattern or level of
          demand through policy initiatives.

     e2   Continuing and effective conservation efforts
          aimed at both pattern and level of demand.

     e3   Effective measures toward increased
          electrification.

  Group II:  Low economic growth rate with emphasis on
  conservation and:

     e4   A regulatory policy responsive to needs to
          attract investment to the industry.

     e5   A regulatory policy of restrained and delayed
          price increases which translates into a
          curtailment of earnings.

Technology

     t1   Extensive dependence on nuclear power for base-
          load additions.  Natural gas plants convert to oil.

     t2   Extensive dependence on coal for baseload
          additions.  Natural gas plants convert to oil.

Siting

     s1   No change in the current balance of considerations
          regarding remote versus near load center siting.

     s2   An increase in remote siting due to technical
          breakthroughs or policy decisions.

Policy

     p1   Baseline controls; air quality standards are
          attained by limiting emissions.

     p2   Relaxed controls; air quality standards are
          attained utilizing tall stacks and intermittent
          controls.

     p3   More stringent controls on precursors of sulfates.

     p4   More stringent controls on all air pollutants.


          [The event-tree diagram for Figure 3 is not reproduced.]

  Figure 3.   Scenarios for First Year ITA
                   226

-------
     Inputs—Growth rates in energy demand for each region,
"regional" average load factor and load shape curves for each
season.

     Outputs—Energy  and peak demand by region and season
(to Planning, Production Expense, Dispatching and  Regulatory
Components).
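
As an illustration of the arithmetic implied by these inputs and outputs, the
following sketch converts an assumed growth rate, seasonal energy shares and
a typical daily load shape curve into seasonal energy and peak demand for one
region.  All numbers and helper names are hypothetical and are not taken from
the ITA data base.

    # Illustrative only: derive seasonal energy and peak demand for one region.
    base_annual_energy_gwh = 120_000.0        # hypothetical base-year energy
    growth_rate = 0.055                       # hypothetical annual growth rate
    season_share = {"winter": 0.27, "spring": 0.23, "summer": 0.28, "fall": 0.22}

    # Normalized hourly load shape for the summer season (peak hour = 1.0).
    summer_shape = [0.55, 0.52, 0.50, 0.50, 0.52, 0.58, 0.68, 0.78,
                    0.85, 0.90, 0.95, 0.98, 1.00, 1.00, 0.98, 0.97,
                    0.95, 0.92, 0.88, 0.84, 0.78, 0.70, 0.63, 0.58]

    def seasonal_demand(year_offset, season, shape, days=91):
        """Seasonal energy (GWh) and peak demand (MW) from a growth rate,
        a seasonal share and a typical daily load shape curve."""
        energy_gwh = (base_annual_energy_gwh * (1.0 + growth_rate) ** year_offset
                      * season_share[season])
        average_mw = energy_gwh * 1000.0 / (days * 24.0)
        load_factor = sum(shape) / (24.0 * max(shape))     # average over peak
        return energy_gwh, average_mw / load_factor

    energy, peak = seasonal_demand(10, "summer", summer_shape)
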

Planning

Future capacity additions and conversions are being deter-
mined in accordance with announced plans of electric utilities
and modified  to  reflect different postulated  growth rates,
capital  requirements, and siting and  sociopolitical  con-
straints.   Control policies as well as economic rationale are
being  considered both with regard to environmental protec-
tion and programs aimed at national energy self-sufficiency.

     Inputs

     •     Demand (from  Demand Component).
     •     Conversions, i.e., from oil  to  coal, from gas to
           coal, etc.; Planned  Capacity Additions; Capital
           Costs;   Socio/Political  Constraints; Generating
           Unit Sites (exogenous inputs).

     Outputs  to Control  Technology  and Generation  Mix
Components:

     •     Planned conversions (oil to coal, gas to oil).
     •     New units by type brought on-line in a given year.

     Outputs to Financial and Regulatory Components:

     •     Capital  requirements for construction of units and
           control devices.
     •     Cancellation,  deferral  or  acceleration of  units
           scheduled to come on-line in future  years.

Control Technology

Alternative methods  for  air   pollution  control   are  being
specified  in terms  of costs  and  capabilities.    Modeling
reflects both  the consequences of pollutant shifts  from one
medium to  another and the possibility for creation of new
pollutants as a by-product of control.

The  Control  Technology   Component specifies costs,  effi-
ciency and  impact on plant operation of pollution control
alternatives for SO2, NOx, particulates, and chemical and
thermal effluents.

      Inputs

      •    Capacity additions and conversions (from Planning
           Component).
      •    Characteristics of existing capacity (from Gen-
           eration Mix Component).
      •    Degree of control required (exogenous input).
      •    Cost of clean fuels (from Primary Energy Supply
           Component).

      Outputs

      •    Costs for meeting a given emission or effluent
           standard (to Financial Component).
      •    Characteristics of capacity additions and retro-
           fitted units (to Generation Mix Component).
      •    Degree of pollution control (to Residuals Com-
           ponent).

Generation Mix

Most  recently available characteristics of existing capacity
are being specified on the basis of Federal Power Commission
data as updated  by information from utilities.  Component
input includes the characteristics of planned capacity addi-
tions and  modifications to existing  capacity  to reflect fuel
conversions and  retrofits for pollution controls.  The Genera-
tion Mix Component is essentially a file which contains the
characteristics of generating units for any particular year of
interest to the ITA.

The Generation  Mix Component specifies the capacity profile
as of 1974.  It is updated during the simulation to show the
"state of the system" for each year of interest in the future.

     Inputs—characterization of existing capacity as of
January 1, 1974 (data available from FPC) according to:

          Size
          Age
          Type and composition of fuel(s)
          Heat rate
          Type of air pollution controls
          Type of cooling
          Stack height
          Location
          Ownership
          Status of Section 316(a) application
          Capacity factor
          O & M expense
          Source of fuel

     Inputs—data available from other sources:

                                                                         Characteristics of additions to generating capac-
                                                                         ity  according  to size, fuel type and composition,
                                                                         type  of pollution  control,  etc. (from  Planning
                                                                         Component).
                                                                         Retirements and re-rates (from Planning Compo-
                                                                         nent).

     Outputs

     •    Inventory of generating units (output).
     •    Generating unit characteristics (to Residuals
          Component and Control Technology Component).
     •    Heat rates and fuel types for each class of facility
          (to Primary Energy Supply Component).
     •    O & M costs (to Production Expense and Dispatch-
          ing Component).
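
Since the Generation Mix Component is essentially an inventory file, one
natural representation is a record per generating unit carrying the
attributes listed under Inputs above.  The sketch below is a minimal
illustration; the field names and units are assumptions, not the actual FPC
file layout.

    from dataclasses import dataclass

    @dataclass
    class GeneratingUnit:
        """One record in the generation-mix inventory (fields follow the
        input list above; units and codes are illustrative)."""
        name: str
        capacity_mw: float            # size
        in_service_year: int          # age
        fuel_type: str                # e.g. "coal", "oil", "gas", "nuclear"
        fuel_sulfur_pct: float        # composition of fuel
        heat_rate_btu_per_kwh: float
        air_controls: str             # e.g. "ESP", "FGD", "none"
        cooling: str                  # e.g. "once-through", "wet tower"
        stack_height_m: float
        location: str                 # e.g. county or AQCR code
        ownership: str                # investor owned vs. publicly owned
        section_316a_status: str
        capacity_factor: float
        o_and_m_mills_per_kwh: float
        fuel_source: str

    # The component is then a list of such records, updated each simulation
    # year for additions, retirements, conversions and control retrofits.
    inventory: list[GeneratingUnit] = []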

Primary Energy Supply

The chemical and physical characteristics and delivered costs
of primary fuels  are being  specified according  to  source of
supply. This facilitates the identification  and assessment of
environmental and socioeconomic  impacts associated  with
fuel  extraction and processing to be conducted in  years  two
and three of the ITA.

The Primary Energy Supply Component specifies the cost of
available fuels for each generating unit.

     Inputs

     •    Heat rates and fuel types for each facility (from
          Generation Mix Component).

     Outputs

     •    Fuel cost per kilowatt hour generated for each
          facility (to Production Expense and Dispatching
          Component).
     •    Cost of clean fuels (to Control Technology Com-
          ponent).

                                                              Production Expense and Dispatching

                                                              Generating units are not operated in isolation; they function
                                                              as part  of an  integrated  system  in  which  production is
                                                           227

-------
allocated  to  units to meet  demand which varies by time of
day and by season.   Introduction of pollution controls  which
affect efficiency will cause shifts of load among units.  In
coming  years, there may be attempts to change load  curve
shapes  through special  pricing  regulations.    Changes in
allocation of load (e.g., between peaking and base load  units)
may  radically  change  pollutant  characteristics.   Seasonal
production patterns, of  course,  fundamentally  affect  the
release  of  pollutants  and,  consequently,  exposures.    The
dispatching  of  load  plus  the  operating  characteristics of a
generating unit determine the expense incurred by the  utility
and, thus, the costs borne by the consumer.  The ITA utilizes
a simple dispatching model to determine capacity factors and
production expenses.

The Production Expense  and  Dispatching  Component  calcu-
lates generating costs  for each  class of  unit  and  specifies
capacity factors such that the demand is met at least cost.

     Inputs

     •    Fuel cost for each class of unit (from Primary
          Energy Supply Component).
     •    O & M costs for each class of unit (from
          Generation Mix Component).
     •    Demand by season (from Demand Component).

     Outputs

     •    Fuel consumption for each class of unit (output).
     •    Capacity factors for each class of unit (to Re-
          siduals Component).
     •    Production expenses (to Regulatory Component).
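
The Proceedings do not reproduce the dispatching algorithm itself.  The
sketch below shows one simple merit-order scheme consistent with the
description above: unit classes are loaded in order of increasing running
cost until the seasonal energy demand is met, which yields capacity factors
and production expenses.  All capacities and costs are hypothetical.

    # Illustrative merit-order dispatch for one season of one region.
    # Each class: (name, capacity in MW, running cost in mills/kWh).
    unit_classes = [
        ("nuclear",      8_000, 6.0),
        ("coal",        12_000, 9.5),
        ("oil steam",    6_000, 22.0),
        ("gas turbine",  3_000, 35.0),
    ]
    season_hours = 2190                      # roughly one quarter of a year
    energy_demand_mwh = 35_000_000.0         # hypothetical seasonal energy

    def dispatch(classes, demand_mwh, hours):
        """Load classes in order of increasing running cost; return the
        capacity factor and production expense of each class."""
        remaining = demand_mwh
        results = {}
        for name, cap_mw, cost in sorted(classes, key=lambda c: c[2]):
            available_mwh = cap_mw * hours
            generated = min(available_mwh, remaining)
            remaining -= generated
            results[name] = {
                "capacity_factor": generated / available_mwh,
                "expense_dollars": generated * cost,   # MWh x mills/kWh = dollars
            }
        return results

    factors_and_costs = dispatch(unit_classes, energy_demand_mwh, season_hours)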

 Regulatory

 The  effects  of regulatory policies  on  the financial  and
 operating  characteristics of utilities  are being considered.
 This includes consideration of alternative pricing schemes and
 regulatory lag.  The methods of  treating production cost pass-
 through  and financing  of construction work-in-progress  will
 affect the rate of utility  response to needs for  pollution
 controls, capacity additions and fuel conversions.

 The Regulatory Component specifies electricity rates consis-
 tent with recovery of production  costs and  a  return on rate
 base.

     Inputs

     •    Production expenses (from Production Expense
          and Dispatching Component).
     •    Demand for electricity (from Demand Compo-
          nent).
     •    Regulatory environment (exogenous input).
     •    Rate base and financial needs (from Financial
          Component).

     Outputs

     •    Electricity rates (output).
     •    Production costs, prices and revenues (to Finan-
          cial Component).

Residuals And Water Consumption

Existing  data bases for the residual releases from power plant
operation are  being refined and assigned to existing and new
facilities.  Generating unit characteristics and fuel properties
are being considered in order to define residual release rates
as functions of these independent variables.   The residuals
model not only considers the removal efficiencies of control
equipment,  but also reflects increased residual  releases due
to  reduction  in  plant  efficiency  resulting  from  control
technology application.   The first year's effort focuses  on air
pollutants, especially SO2, NOx, and primary particulates.

Other residuals considered will  include  trace  elements and
waste heat. Cross-media effects are also considered.

Since calculation  of evaporative water consumption requires
some of the same  inputs as residuals generation, it is included
in this component.

     Inputs

     •    Unit capacity factors (from Production Expense
          and Dispatching Component).
     •    Properties of the fuels (from Primary Energy
          Supply Component).
     •    Characteristics of the generating unit (from Gen-
          eration Mix Component).
     •    Characteristics of control devices (from Genera-
          tion Mix Component).

     Outputs

     •    Air pollutant release rates from each generating
          unit (to Dispersion Component and output).
     •    Water consumption rate for condenser cooling
          (output).
     •    Solid waste generation rates (output).
     •    Waste heat discharged (output).

Financial

Costs of both new facilities and of pollution control equip-
ment are being evaluated in the context of utility financing.
Needs for capital, ability to finance capital expansion,
earnings and the ability to recover capital and operating costs
in revenues are basic considerations in industry decisions with
regard to development.  The impact of control policies is
being measured in terms of effects on needs for capital,
earnings, prices and return on investment.  The fundamentally
different financial structures of investor owned and non-
investor owned utilities require that simulation of the
financial impacts of future environmental policies treat the
two types of firms separately.

The Financial Component calculates financial flows and
updates balance sheets and income statements each year.

     Inputs

     •    Initial balance sheet (exogenous input).
     •    Profit and loss items (exogenous input).
     •    Rate schedules (from Regulatory Component).
     •    Production costs and revenues (from Regulatory
          Component).
     •    Capital requirements (from Planning Component)
          for new capacity, transmission, distribution and
          plant conversions.
     •    Capital requirements for pollution control equip-
          ment (from Control Technology Component).

     Outputs

     •    Updated balance sheets and income statements
          for each year (output).
     •    Aggregate capital requirements (output).

Dispersion

Pollutants are being traced  from release at the power plant
to eventual impact on sensitive receptors.  Chemical trans-
formations  and  interaction  with  other   materials  in  the
environment will  be  included in  the assessment.   The  first
year's study considers the dispersion of  air pollutants only. In
view of the results of very recent research, a major emphasis
must be placed on transport of pollutants on an inter-regional
scale (i.e., 100 to 1000 miles from the source).
                                                           228

-------
The  Dispersion  Component calculates changes  in  local air
quality  due  to releases of  pollutants  calculated in  the
Residuals Component.   Separate  models are  provided for
short range and long range atmospheric transport.
     Inputs
           Release rates for each air pollutant (from Resid-
           uals Component).
           Stack parameters for each generating unit (from
           Generation Mix Component).
           Location of each generating unit (from Generation
           Mix Component).
           Local meteorological parameters (exogenous in-
           put).
      Outputs
           Ambient concentrations  of  air pollutants at dis-
           tances less than 50 km from the generating unit.
           Concentrations are specified according to location
           parameters, e.g., census tract,  zip  code, county
           (to Exposure Component and output).
           Contributions  to  sulfate concentrations  in  se-
           lected impact  areas greater than 100 miles down-
           wind  from  the  generating units  (to  Exposure
           Component and output).
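
The short range and long range model forms are not given in this paper.  For
reference, the sketch below evaluates the standard ground-level Gaussian
plume equation for a single elevated point source, which is one common choice
for a short range (less than 50 km) calculation; it is not necessarily the
formulation used in the ITA, and all parameter values are hypothetical.

    import math

    def ground_level_concentration(q_g_per_s, u_m_per_s, sigma_y, sigma_z,
                                   stack_height_m, crosswind_y_m=0.0):
        """Ground-level concentration (g/m^3) from the standard Gaussian
        plume equation for an elevated point source, evaluated at receptor
        height zero.  sigma_y and sigma_z are the dispersion coefficients
        (m) at the downwind distance of interest."""
        lateral = math.exp(-crosswind_y_m**2 / (2.0 * sigma_y**2))
        vertical = 2.0 * math.exp(-stack_height_m**2 / (2.0 * sigma_z**2))
        return (q_g_per_s / (2.0 * math.pi * u_m_per_s * sigma_y * sigma_z)
                * lateral * vertical)

    # Hypothetical SO2 example: 500 g/s release, 5 m/s wind, 150 m stack,
    # sigma_y = 300 m and sigma_z = 120 m at the receptor distance.
    c = ground_level_concentration(500.0, 5.0, 300.0, 120.0, 150.0)
    micrograms_per_m3 = c * 1.0e6
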
Exposure
Populations at risk to air pollutants for specific geographical
regions are being identified such that projected patterns of
demographic growth correspond to growth in regional demand
for electricity.  The  demographic modeling  reflects changes
in national fertility rates as well as secular economic trends.
Since populations at risk  are defined in  terms of the spatial
relationship between  sources and receptors and the conditions
of pollutant transport, the interaction of siting, residuals and
air dispersion models is  crucial to  exposure model develop-
ment.

The Exposure Component calculates populations exposed to
increments in ambient pollutant concentrations.
      Inputs
           Pollutant  isopleths (from Dispersion Component)
according to location parameters (census tract,
           zip code, etc.).
      Outputs
           Populations exposed to specified  levels of pollu-
           tant concentrations.
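
A minimal sketch of the exposure calculation this implies is given below:
populations are tallied within bands of incremental concentration, with both
the concentration increments and the populations keyed to the same location
parameter (census tracts in this example).  All values are hypothetical.

    # Concentration increment (ug/m3) and population by census tract (hypothetical).
    increment_by_tract = {"001": 12.0, "002": 4.5, "003": 27.0, "004": 1.0}
    population_by_tract = {"001": 5200, "002": 8100, "003": 2600, "004": 9400}

    bands = [(0.0, 5.0), (5.0, 20.0), (20.0, float("inf"))]   # ug/m3 bands

    def exposed_population(increments, populations, bands):
        """Population exposed to each band of incremental concentration."""
        totals = {band: 0 for band in bands}
        for tract, conc in increments.items():
            for low, high in bands:
                if low <= conc < high:
                    totals[(low, high)] += populations.get(tract, 0)
                    break
        return totals

    print(exposed_population(increment_by_tract, population_by_tract, bands))
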
                                                           229

-------
                                   ENVIRONMENTAL IMPACT MODELLING FOR PROJECT INDEPENDENCE
R. A. Livingston
Office of Planning and Evaluation
U. S. Environmental Protection Agency
Washington, D.C.

R. W. Menchen
Hittman Associates, Inc.
Columbia, Maryland
                     G. R. Kendall
                     Teknekron, Inc.
                     Berkeley, California
                     H. P. Santiago
                     Office of Planning  and  Analysis
                     Federal Energy Administration
                     Washington, D.C.
                       I. SUMMARY

An application of the environmental residuals tech-
nique for evaluating the environmental implications of
energy policy studies is described.  This paper covers
the adaptation of the techniques to the particular
needs of the Project Independence Evaluation Systems,
some typical results of the analysis, and its limita-
tions.  Several different methods of scenario and
residual comparison are investigated.  The conclusions
are that the residuals approach can be a useful tool
in comparing alternative scenarios, but a tradeoff
must be made between degree of detail and compre-
hension of results.  Areas for further work include
extension of residuals to cover more items of interest,
introduction of time dependency, and development of
aggregate measures for comparison.

                     II. BACKGROUND

This project arose from Environmental Protection
Agency's participation in the Project Independence
interagency effort initiated in March of 1974 to
evaluate national energy problems and provide a
framework for developing a national energy policy.
The effort assembled a comprehensive energy data base,
developed a methodology for analyzing future energy
supply and demand alternatives, and investigated the
impacts and implications of major energy strategies.

The results of the initial study were presented to the
President in December 1974.  Subsequent studies uti-
lizing the Project Independence Evaluation System
(PIES) were published in connection with the Draft
Environmental Impact Statement for the Energy Inde-
pendence Act, March 1975,2 and the current version of
the Project Independence Report.

EPA was given the responsibility of assessing the
environmental impacts of the energy scenarios pro-
duced by PIES.  The assessment had to be performed
rapidly, repetitively and in a consistent fashion.
The assessment of the scenario had to be done using
quantitative techniques as much as possible.

              III. CONSTRAINTS ON ANALYSIS

A. PIES Model and Scenario Constraints

The Project Independence Evaluation System (PIES) is
a set of computer models of the technologies, demand
and the markets through which energy commodities are
extracted, transported, transformed and consumed.
Regional production, processing and conversion activ-
ities are represented within the energy network as
nodes, with links depicting transportation and dis-
tribution possibilities.  The PIES model simulates a
market system which takes into account prices,
resource requirements and capacity constraints, and
constructs a set of energy flows that satisfies the
final demands for energy.  It is a least cost linear
program.  PIES models the energy supply side and adjusts
prices (and thus demands) until the system achieves an
equilibrium balance at which no consuming sector
would be willing to pay more for an additional unit
of any energy product, and no supplier would provide
an additional unit of any energy product for less
than the prevailing prices.4

An individual energy scenario is  developed on the PIES
model by inputting a set of supply and demand data
representing particular energy policies or  conditions,
including the world price of oil.  The system then
computes a least cost solution and provides the out-
put in terms of quantities produced  and consumed,
with associated prices.  The solution applies only to
one point in time and for one world  price of oil.
The PIES system is not a dynamic  model.
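
As a toy illustration of a least cost linear program of this general type
(not the PIES formulation itself, which is far larger and models
price-responsive demand), a two-source, two-region example can be written
and solved as follows; all costs, demands and capacities are hypothetical.

    import numpy as np
    from scipy.optimize import linprog

    # Toy network: two supply sources (a coal region and an oil region)
    # serving two demand regions.  Decision variables are the flows
    # x = [coal->1, coal->2, oil->1, oil->2] in energy units.
    cost = np.array([4.0, 6.0, 9.0, 7.0])      # delivered cost per unit

    # Each demand region's final demand must be met exactly.
    A_eq = np.array([[1, 0, 1, 0],             # flows into region 1
                     [0, 1, 0, 1]])            # flows into region 2
    b_eq = np.array([50.0, 70.0])              # hypothetical final demands

    # Each source is limited by its production capacity.
    A_ub = np.array([[1, 1, 0, 0],             # coal production
                     [0, 0, 1, 1]])            # oil production
    b_ub = np.array([80.0, 60.0])              # hypothetical capacities

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    print(res.x, res.fun)                      # least-cost flows and total cost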

B. PIES Output Constraints

The PIES output becomes the data  base for the environ-
mental assessment.  However, it was  designed to suit
the objectives and data availability of the energy
analysis, and these did not necessarily correspond to
the needs of the environmental analysis.  The infor-
mation content and structure of the  PIES  output thus
constrained the scope of the subsequent environmental
information.

On the supply side, the PIES output  gave  production
statistics on an individual region basis  for coal,
oil, natural gas, and U235 extraction.  However, the
regions for each energy source were  based upon
traditional boundaries, as defined by the relevant
source of statistics.  For example,  coal  was organi-
zed by the Bureau of Mines coal provinces,  petroleum
and natural gas by NPC petroleum  provinces  and  so on.
Similarly, oil refining was broken down by  Petroleum
Administration Districts  (PAD) and electricity  by
Electric Reliability Council regions.  Generally,
the regional boundaries of one activity did not
coincide with that of another, which prevented  a
direct comparison of energy scenarios on  a  geo-
graphical basis.

Second, the PIES output specified only the  total
quantity of activity for  a particular region, without
specifying the size of  the facilities or  their  geo-
graphic location within the region.  For  example, oil
refining activity was described as  total  barrels of
throughput in a PAD.  This production could be  dis-
tributed in any number  of  locations  from  Delaware
to Florida, and in any  size facility from 10,000 to
greater than  250,000 barrels  per  day.

Finally, source factors  significant  from an environ-
mental viewpoint,  such  as  sulphur content of  coal,
were  not given  in  the  output.

C. Environmental  Data  Constraints

In addition  to  the  limitations imposed  on the envi-
ronmental  analysis by  the PIES output,  the analysis
was  further  constrained by the availability of
quantitative  environmental data,  and methodological
                                                       230

-------
problems of environmental analysis.  Warner and
Preston in their 1974 review of environmental impact
assessment methodologies, identified 17 different
approaches to environmental impact assessment and
concluded that there was "no universally applicable
procedures for conducting an adequate analysis."5

Similarly, a number of studies have attempted to
compile the environmental data related to energy on a
systematic basis.6,7,8,9  They differed among each
other in sources of data, assumptions, pollutants
covered and method of aggregation. 10  For the purposes
of the Project Independence analysis, it was decided
to use the data base prepared by Hittman Associates
for the Council on Environmental Quality.  It was felt
that this provided the most thorough, flexible and
widely utilized set of data available at the time.
However, in making this choice, the environmental
analysis was then limited to the set of pollutants
as specified in the CEQ study.  This covered seven
water pollutants, and six air pollutants, as well as
land use, solid waste, and occupational health.

D. Constraints of Project Independence Organization

The scope of the environmental analysis was also
limited to a certain extent by the role of EPA's task
force within the overall structure of Project Indepen-
dence.  In total there were 21 interagency task forces
set up to address various aspects of energy supply
and demand. In particular, questions related to water
use and availability were assigned to the Water
Resources Council and some socio-economic matters
were assigned to a Manpower Task Force headed by the
Labor Department.11
The other aspect of the Project Independence organi-
zation that limited the scope of the environmental
analysis was the schedule, which originally allotted
two months for the entire process of development and
analysis of energy scenarios.  This ruled out the
development of any major new environmental techniques,
and required that the environmental analysis involved
be capable of being done in a short time.

      IV. OBJECTIVE OF ENVIRONMENTAL ANALYSIS

Given the constraints outlined above, several objec-
tives were chosen as a basis for designing a
methodology for doing the environmental evaluation.
These were:

A. Emphasis on Interscenario Comparison

The primary purpose of the environmental evaluation
was to enable energy policy makers to appreciate the
relative environmental ranking of different strategies
to solve energy problems.  However, this does not
require that absolute environmental quality be pre-
dicted for a given scenario.

B. Level of Analytical Detail Consistent with PIES
   Output

The PIES Model operated on an aggregated regional
basis, and looked at macroeconomic questions.  The
general lack of site specific information for energy
production or consumption ruled out dispersion
modelling or health effects calculation for measuring
environmental impact.   To do so would have produced
spurious accuracy, not supported by the quality of
information available.

C. No Environmental Constraints in PIES Model

A theoretically rigorous energy model should take
into account the effect of environmental regulations
and limitations on energy supply and consumption.
However, this was not an objective for the Project
Independence evaluation for two reasons.  The first
was the problem, described above, of accurately calculating
environmental quality with the data available.
The second was the economic basis of the PIES model.
Entering environmental variables into the model's
calculation would have required converting environ-
mental impacts into external social costs.  While
theoretical models exist for calculating external
costs,12 a satisfactory practical method was not
available.

D. No Preemption of Environmental Impact Statement

The environmental analysis performed in this project
was not intended to satisfy the requirements of the
EIS.  This was partly because the Project Indepen-
dence effort was a policy study, not involving
specific physical activities which could be identi-
fied as requiring an EIS.  Another reason was to
avoid the possibility that by doing an EIS for
Project Independence as a whole, individual projects
could be relieved of the responsibility of having to
do specific EISs.

E. Rapid Response

Since the environmental evaluation came at the end of
the process of generating energy scenarios, it would
inevitably have a very short timeframe for analysis
before the final report would be written.   Therefore,
it became essential to be able to do the evaluation
on short notice, in a rapid fashion.

                 V. ANALYTICAL APPROACH

Given the constraints and objectives  outlined above,
the environmental evaluation evolved  into  a compari-
son of energy scenarios using discrete residuals
on a consistent regional basis.

A. Residuals

The term "residuals" means any measurable  quantity
which is associated with a given activity,  and which
results in environmental impacts.   Air pollutants
and water pollutants are included in  this  definition,
as well as things such as land use,  solid  waste,
water use, and manpower, which are not usually con-
sidered pollutants.  Unquantifiable items  like
esthetics are not considered to be residuals.

It should also be noted that residuals are the pre-
cursors of environmental impacts and  not the impacts
themselves.  For instance, the SO2 emitted by a power
plant is a residual, but the health effects caused by
that SO2 are an environmental impact.  The number of
acres of land disrupted by strip mining is a residual,
but the loss of the ecosystem on that land is an
environmental impact, and so on.  Thus,  the defini-
tion of residuals avoids the problems and  uncertain-
ties of calculating transport of pollutants and
their effects.

Finally, by eliminating any reference to local
environmental conditions, such as terrain, meteorol-
ogy, population, etc., it becomes possible to con-
ceptualize and compare representative energy activi-
ties rather than being forced to speak in  terms of
a particular coal mine or a particular power plant.
This aspect is important when analyzing future energy
choices, when the exact location of an activity is
unknown, thus making it impossible to calculate the
changes in ambient environmental conditions such as
air quality or water quality.  However,  it is
possible to compare the residuals associated with
the alternatives since these are independent of
                                                      231

-------
location.
Thus, the advantages of residuals are that they deal
in measurable, predictable quantities, in an objec-
tive way.  Subjective elements, such as the
ultimate environmental impact, or the relative
weighting of different pollutants, are not con-
sidered.  However, if one wishes to proceed  to this
kind of analysis, it is still necessary to know the
residuals as the first step.

Furthermore, this makes it possible to trace back
through all the steps involved in producing and trans-
porting, converting, and consuming energy.  Thus
residuals analysis can account for all the associated
pollution that may not occur at the power plant
or other energy facility, itself, but that must also
be charged to the production and consumption of a
unit of energy.

In practice, the residuals technique involves a
matrix of coefficients.  One dimension of the matrix
is energy activities, the other the residuals of
interest.  Each element in the matrix thus relates
the  production of a given residual to the throughput
of energy involved in a specific activity. Separate
matrices are developed for each major energy type.
A sample coefficient matrix for coal is presented
in Figure 1.  It is necessary to define a vector,
E_i, composed of a set of variables e_{ij} which
describes a quantity of energy in a specific series
of steps, or trajectory, from mining or extraction,
to end use.  The subscripts thus describe the
jth step, e.g. mining, in the ith energy source, e.g.
coal.  Then by multiplication, the total set of
residuals, r_{ik}, associated with this trajectory can
be calculated:
 (1)        r_{ik} = \sum_{j} e_{ij} m_{ijk}
However, since more than one energy trajectory is
usually involved, each with its characteristic
residuals matrix, the total residuals for an energy
scenario are:
 (2)        r_{k} = \sum_{i} \sum_{j} e_{ij} m_{ijk}

However, equation (2) should be modified in the case
of Project Independence analysis to take into account
the regional nature of the energy reserves.  Thus
for region 1, the residuals are:
 (3)        r_{kl} = \sum_{i} \sum_{j} e_{ijl} m_{ijkl}
Since each type of energy production specified in the
PIES output had its own set of regions, it was also
necessary to introduce an allocation factor a_{lq}
such that:

 (4)        r_{kl} = \sum_{i} \sum_{j} \sum_{q} a_{lq} e_{ijq} m_{ijkq}
Thus in order to arrive at a quantitative measure
of the residuals associated with a given energy scenario, a set
of regionalized, energy specific residual matrices
had to be generated as well as  a set of allocation
factors to convert from energy regions into a con-
sistent set of regions for residuals analysis.  The
residuals matrices were developed under contract by
Hittman Associates, Inc., utilizing residuals
matrices developed under a previous contract for
CEQ13, and modified to suit the specification of
PIES.  This effort is described in a separate
paper.
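
Equations (1) through (4) amount to a sequence of multiplications over the
coefficient matrices followed by a regional re-allocation.  The sketch below
carries out that arithmetic with randomly generated numbers; the array shapes
follow the subscripts defined above, and nothing in it reproduces the actual
Hittman coefficients.

    import numpy as np

    # e[i, j, q]: energy throughput of source i, step j, energy region q
    # m[i, j, k, q]: residual coefficient for residual k, regionalized by q
    # a[l, q]: allocation factor from energy region q to uniform region l
    n_sources, n_steps, n_residuals = 2, 3, 4
    n_energy_regions, n_uniform_regions = 5, 6
    rng = np.random.default_rng(0)

    e = rng.random((n_sources, n_steps, n_energy_regions))
    m = rng.random((n_sources, n_steps, n_residuals, n_energy_regions))
    a = rng.random((n_uniform_regions, n_energy_regions))
    a /= a.sum(axis=0)           # each energy region fully allocated over l

    # Equation (2): national residuals, summed over sources, steps and regions.
    r_national = np.einsum("ijq,ijkq->k", e, m)

    # Equation (4): residuals by uniform region,
    # r_kl = sum_i sum_j sum_q a_lq e_ijq m_ijkq
    r_by_region = np.einsum("lq,ijq,ijkq->kl", a, e, m)

    # Allocation conserves the national totals.
    assert np.allclose(r_by_region.sum(axis=1), r_national)
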
B. Regional Allocation System

From equation (4) it can be seen that the set of uni-
form regions into which the various energy regions are
allocated must be a common subset of each energy
region.  These uniform regions must also make sense
from an environmental standpoint.  This task was
assigned to another contractor  ERGO (Energy Resources
Co.).

The basic question to be decided  was which type  of
region should be used.  Air Quality  Control Regions
(AQCR) were not used originally because their
boundaries were drawn along political  subdivisions
rather than topographically separate regions,  and
because in some cases, they covered  too broad an area.
Instead it was decided to use river  basins,  as defined
by the National Oceanic and Atmospheric Adminis-
tration.  These basins represented national regions
for water quality analysis, and their  physical
boundaries also provided a fairly good basis for air
sheds.

The development of the river basins  model  and the
allocation procedure is described in more  detail in a
separate paper.15 Subsequently, a subroutine was
added to the model which provides the  ability to
allocate the residuals to AQCRs.

C. Other Considerations of Residuals Analysis

1. Energy End Use—In the initial Project  Independence
effort, the environmental evaluation associated  with
the end use of energy was not considered,  although
impacts associated with electrical generation  were
included.  End uses were not considered primarily
because the Project Independence  effort concentrated
on supply options to meet a given level of  demand.
Thus the level of demand was essentially not within
the control of policy measures under consideration.
The environmental evaluation was  intended  to bring
home the effect of conscious decisions  on the  part of
policy makers.  Secondly, the amount of effort
required to produce end use residuals  matrices and
allocation models was not available  in this  phase of
the project.  However, in subsequent efforts,  the
end use environmental evaluation  was implemented and
used.

2. Nuclear Energy—A set of residual coefficients were
derived to describe the non-radiological aspects of
nuclear energy, primarily U-235 extraction  and opera-
tional releases of radioactivity  from  power  plants.
No overall analysis of radioactivity was made. This
was because of the difficulties of comparing  the
production and decay over time of radioactivity with
other residuals.  Also, the level of nuclear power
usage was relatively constant among  energy  scenarios,
and thus would have little effect on their  relative
ranking.

3. Environmental Control Standards—It was  assumed
that all environmental control regulations  and
standards promulgated up to the time of the Project
Independence study could be in full  effect  in  1985,
the point of time chosen for the  analysis.   This had
the effect of reducing some water pollutants  to  zero,
and minimized land use impacts from  strip mining.

4. Socio-Economic Factors—No evaluation of  the  usual
socio-economic factors was made,  nor were  analysis of
environmental impacts of secondary development con-
sidered.  This was done both because it was  not
within the scope of the environmental  task force's
responsibility and because it involved data which was
not readily available.
                                                     232

-------
D. Scenario Comparison

Once the residuals had been calculated and compiled
on a consistent regional basis, it was necessary to
find a technique to compare scenarios.  A number of
different techniques were considered.

1. Single Index—The use of a measure which would
aggregate all the individual residuals into a single
index was considered and rejected.  Although it
would have provided a simple means of ranking the
scenarios, it would have involved the use of weighting
factors to be applied to the residuals.  The state
of the art has not advanced to the point where
objective weighting factors are available, and
judgemental ones would have introduced a subjective
factor into the analysis.  Moreover, it was felt that
as much information as possible should be presented
concerning the environmental impacts, and thus it was
better to display the entire list of residuals,
rather than a single number.

2. Overlays—An attempt was made to display the
residuals as overlays on a national map.  However, it
proved impossible to present more than two pollutants
at a time, or, alternatively, to compare more than
two scenarios at a time  (Figure 2).  Moreover, the
river basins introduced  too much detail.

3. Tabular Presentation—The most successful approach
on a regional basis was  in the form of tables of
residuals.  In order to make the data manageable,
the data from the river basins were aggregated into
14 major regions, corresponding to the demand
regions used by FEA.  An example of this format is
presented in Figure 3.  Here the method of comparison
is simply to match the quantities of a particular
residual from one scenario to another.

4. Graphical—The regional tabular method is the most
straightforward.  However, it tends to take up a lot
of space and major trends are difficult to extract
from the details.  Consequently, in its publication
on Project Independence, FEA resorted to presenting
the residuals on a national basis in the form of
bar charts.  This is illustrated in Figure 4.
While definitely simplifying the presentation and
making it more comprehensible, this has the effect
of discarding the information on regional impacts,
which is one of the major objectives of the PIES
approach.

5. Differential Comparison—As a way of overcoming
some of the difficulties of the regional tabular
approach, a matrix was prepared showing the
differentials in residuals between two selected
scenarios, one dimension being the individual residuals
and the other the demand regions.
The matrix is presented in Figure 5.  While it does
provide a concise detailed comparison, its dis-
advantage is that it cannot be simply extended to
comparison of three or more alternatives.
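
The differential comparison of Figure 5 can be generated mechanically once
the regional residual tables exist for both scenarios.  The sketch below
applies the comparison rule described in the figure legend (A if Accelerated
Supply has the smaller loading, B if Business as Usual does, blank if the
difference falls below the 5 percent or 1 percent thresholds); the loadings
shown are hypothetical.

    # Residual loadings by (pollutant, demand region) for two scenarios.
    accelerated = {("SOX", 1): 90.0, ("SOX", 2): 120.0,
                   ("TDS", 1): 40.0, ("TDS", 2): 55.0}
    business_as_usual = {("SOX", 1): 100.0, ("SOX", 2): 118.0,
                         ("TDS", 1): 42.0, ("TDS", 2): 75.0}

    def national_total(table, pollutant):
        return sum(v for (p, _), v in table.items() if p == pollutant)

    def compare(acc, bau):
        """Return 'A' if Accelerated Supply has less loading, 'B' if
        Business as Usual has less, '' if the difference is small."""
        result = {}
        for key in acc:
            pollutant, region = key
            diff = acc[key] - bau[key]
            small = (abs(diff) < 0.05 * max(acc[key], bau[key]) or
                     abs(diff) < 0.01 * national_total(bau, pollutant))
            result[key] = "" if small else ("A" if diff < 0 else "B")
        return result

    print(compare(accelerated, business_as_usual))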

                   VI. RESULTS

It is not the intent of  this paper to cover in a
comprehensive fashion the results of the various
scenario analyses; these can be found in the reports
published by FEA,16 ERGO,17,18 and Hittman.19
However, there were some findings that kept
recurring.

One is that on a national basis energy related
pollutant loadings will either decline or remain
constant between 1972 and 1985.  This indicates the
effect of meeting pollution control standards, and
emphasizes the importance of accurate assumptions
concerning the efficiency and degree of use of
controls.

A second finding, on a national basis, is that the variation
in residual loadings among scenarios in 1985 tends to
be less than the change between 1972 and 1985.  From a
national policy viewpoint, this suggests that the most
useful effort should be on making sure that environ-
mental standards are met, rather than attempting fine
tune energy supplies to determine the most environ-
mentally acceptable scenario.

Third, for some residuals, such as particulates, the
loadings associated with different supply scenarios,
are small in comparison with the quantities from end
use activities.  This suggests that more thought
should be given to controlling end use emissions, and
to energy conservation.

Fourth, some scenarios involving accelerated develop-
ment had the effect of reducing some residuals relative to
the business-as-usual scenarios in 1985.  This
apparently results from the more rapid retirement
of older facilities with less stringent pollution con-
trols, as well as switching away from coal to oil.

Finally, individual regions may show widely
different environmental impacts within a given
scenario and between scenarios.  This indicates that
comparing residuals solely on a national basis is not
sufficient for evaluating environmental impacts.

          VII. CONCLUSIONS AND RECOMMENDATIONS

The primary conclusion is that residuals analysis can
be useful as a practical tool in comparing environ-
mental impacts of energy alternatives.   While it is
not a substitute for a thorough study of particular
regional or local impacts associated with a definite
activity, it does provide quick "broad-brush"
answers more in keeping with the generalized informa-
tion produced in policy studies like Project Inde-
pendence.  It can be used as a way of identifying
problem regions which should receive more indepth
analysis.  Care must be taken however to use residuals
on a comparative rather than absolute basis.

There appears to be a definite tradeoff between the
amount of detail involved in the analysis and the
comprehensibility of the results.   Although the
environmental evaluation actually produces information
for over three hundred subregions of the U.S., it
becomes extremely difficult to relate this to the
overall implications of energy policy.   Consequently,
a more aggregated comparison technique, using 10 sub-
regions of the U.S., had to be carried out.  Even then,
it proved difficult to comprehend, and for the FEA
reports, it proved necessary to resort to national
summaries.

Areas for further work include developing a better
methodology for comparing and presenting residuals
and scenarios, so that the full capacity of the
techniques can be utilized.  Additional residual
coefficients need to be developed for items like
trace metals.  Some more extensive emissions and moni-
toring data is needed before a simple and practical
technique for converting residuals information into
air and water quality results can be developed.
Finally, the residual coefficients should be modified
to reflect problems that are not at steady state with
energy throughputs.  These would include effects that
are cumulative over time, such as acid mine drainage,
land use or radioactive wastes, or that are site
sensitive, such as impacts due to construction.
                                                      233

-------
Improvements in residuals modelling also imply
improvements in energy modelling.  These include
better  regional descriptions,  better characteriza-
tion  of industrial  energy facilities, additional
energy  resource characteristics,  and introduction of
dynamic elements into the model.
                    REFERENCES

 1.  FEA, Project Independence Report, November 1974,
 p. 1.

 2.  FEA,  Draft  Environmental Impact Statement for
 Energy Independence Act,  March 1975.

 3.  FEA,  National  Energy Outlook (in draft).

 4.  FEA, Project Independence Report, November 1974,
 Appendix, p. 204.

 5.  M.L. Warner & E.H. Preston, A Review of Environ-
 mental Impact  Assessment  Methodologies, U. S. Environ-
 mental Protection Agency,  April 1974, p. 1.

 6.  Battelle  Columbus,  Environmental Considerations
 in  Future  Energy  Growth,  April 1973.

 7.  Teknekron,  Inc., Residuals from Electric Power
 Generation Fuel  Cycles,  December 1974.

 8.  Atomic  Energy  Commission, WASH-1224, Risk-Cost-
 Benefit  Analysis  of Alternate Sources of Electrical
 Energy,  December  1974.

 9.  Hittman Associates,  Inc., Environmental Impacts
 Efficiency and Cost of  Energy Supply and End Use
 Final Report HIT  593,  Council on Environmental
 Quality, November 1974.

 10.  Peter  Cukor,  "Comparison of Residuals Analysis,"
 Teknekron, Inc.,  internal  communication, January 1975.

 11.  Joel Haverman & J. G. Phillip, "Project Inde-
 pendence," National Journal Reports, November 2,
 1974, Vol. 6, No. 44, p. 1637.

 12.  A. V.  Kneese,  B. T.  Bower, ed.  Environmental
 Quality Analysis.  Johns Hopkins Press,  Baltimore,
 1972.

 13.  Council on Environmental Quality, MERES and the
 Evaluation of  Energy Alternatives,  May 1975.

 14.  W. R. Menchen,  M. S. Mendis,  D.  F.  Becker,
 H.L. Schultz,  Hittman Environmental Coefficients for
 Regional Pollutant  Loading Analysis,  in press.

 15.  F. Lambie, "Environmental Residual Allocation
 Model," Energy Resources Co.,  in  press.

 16.  FEA,  op.  cit.

 17.  Energy Resources Co., Project Independence Blue-
 print: Environmental Quality Analysis Final Report -
 Phase I, December 1974.

 18.  Energy Resources Co.,  Assessment of the Environ-
mental Implication  of Project  Independence, in draft.

19.  Hittman Associates, Project Independence Final
Report, in draft.

          [Figures 1 and 2 are not legibly reproduced; their captions follow.]

           FIGURE 1. EXAMPLE OF RESIDUALS MATRIX FOR COAL MINING

FIGURE 2. ACCELERATED SUPPLY SOX AND PARTICULATES REGIONS EXCEEDING
        PRIMARY STANDARDS IN 1972 AND INCREASED LOADINGS IN 1985
                                                      234

-------
          [Figures 3, 4, and 5 are not legibly reproduced; their captions,
          and the legend of Figure 5, follow.]

                FIGURE 3. EXAMPLE OF REGIONAL TABULAR COMPARISON

                 FIGURE 4. EXAMPLE OF GRAPHICAL COMPARISON

               FIGURE 5. EXAMPLE OF DIFFERENTIAL COMPARISON
          (Entries by pollutant and demand region:  A = Accelerated Supply
          has less loading; B = Business as Usual has less loading; blank
          if the difference between strategies is less than 5 percent, or
          less than 1 percent of the national total of that pollutant in
          Business as Usual.)
                                                    235

-------
                                   AN ENVIRONMENTAL RESIDUAL ALLOCATION MODEL
                                        M. Allen            F. Lambie
                                          Energy Resources Co. Inc.
                                          Cambridge, Massachusetts
                      Summary
     An environmental residuals technique was develop-
ed to quantitatively evaluate the environmental impli-
cations of Project Independence.  Three models are
discussed that compare the regional impacts of differ-
ent scenarios of energy development:  a Residual Allo-
cation Model to predict the quantity and distribution
of 15 energy-associated pollutant loadings, a Water
Use Model to assess the compatibility of water avail-
able and water required for projected energy use, and
an Air Quality Model to compare the impacts of the
scenarios on ambient air quality.  The approach is
useful for scenario comparison, but is limited in
degree of detail and absolute accuracy.  It is con-
cluded that the level of control technology achieved
is more critical environmentally than the choice of
scenarios.  Further work should include a refinement
and extension of the residuals studied and a more de-
tailed sensitivity analysis, especially with respect
to control technology and facility siting assumptions.

                   Introduction

     The viability of any long-range planning for
national energy development rests in part on the
critical nature of the environmental component.  Al-
though national strategies for developing energy re-
sources must be chosen ultimately from the set of
alternatives that are economically and technically
practicable, the pressing nature of environmental
problems dictates that, among such alternatives, the
optimal choice should consider the relative environ-
mental impacts of the strategies.  Precisely this
reasoning led to the formulation of a model by which
to compare the environmental implications of the
various scenarios generated by the Federal Energy
Administration's (FEA) Project Independence Evaluation
System (PIES) model [1, 2, 3].  This Residual Allocation
Model, developed by Energy Resources Co., assesses
energy scenarios on the basis of the allocated levels
and geographic distribution of pollution loadings
associated with energy supply and the resulting end
use patterns.  In addition, a Water Use Model indi-
cates for each scenario the general compatibility of
energy-associated water demands with regional projec-
tions of water supply.  Finally, an Air Quality Model
evaluates emissions predicted by the Residual Alloca-
tion Model in terms of their impact on regional am-
bient air quality.   (See Figure 1.)

     The purpose of these models is to allocate pre-
dicted pollution loadings into common geographic areas
so that the relative impacts of these loadings can be
analyzed.  The study considers 15 impact categories,
or residuals, defined by the Environmental Protection
Agency (EPA):
     Air Pollutants
        Particulates
        Nitrogen Oxides
        Sulfur Oxides
        Hydrocarbons
        Carbon Monoxide
        Aldehydes
Water Pollutants
  Acids
  Bases
  Total Dissolved Solids
  Suspended Solids
  Organics
  Thermal Discharge
     Land Use Parameters
        Solid Wastes
        Fixed Land
        Maximum Incremental Land

[Figure 1 residue:  a flow chart in which coal production, natural gas and
oil production, synthetics production, power plant production, and refinery
throughput are disaggregated to fossil fuel regions and then to River Basins
or AQCR's; together with end use consumption (industrial, transportation,
residential and commercial), these feed the River Basin and AQCR loadings
and production, the Water Use Model (water use), residuals by demand region,
and the Air Quality Model (air quality).]

                    Figure 1.  FLOW CHART FOR ENVIRONMENTAL ASSESSMENT
     Projected regional levels of these residuals serve
as the basic indicators of environmental quality.  The
analysis consists of five distinct  stages:

1.  The PIES Model predicts levels  of energy activities
and resulting pollution loadings for a given scenario.
Unfortunately, the geographic regions used  for each
activity are different.

2.  The Residual Allocation Model allocates these pro-
duction and consumption levels and  the associated pol-
lution loadings among a consistent  set of smaller
regions.

3.  Impacts of pollution loadings are analyzed at
various levels of geographic aggregation.

4.  The Water Use Model estimates regional energy-
related water demand on the basis of scenario produc-
tion levels; this demand is compared to projected re-
gional water supply.

5.  The Air Quality Model assesses  scenario-related
ambient air quality on the basis of regional loadings
predicted by the Residual Allocation Model.

     The models were developed with the particular aim
of aiding time-constrained EPA and  FEA analysts in the
comparison of the environmental impacts of various
scenarios.  The need to guarantee realistic output
useful to decision-makers required  feedback between
the conceptual formulation and the  feasible approaches.
Thus, a "paper" model was conceived, the required
supporting data were sought, and the model was revised
to accept the best data found to be available.  As
with most working models, the critical factor in this
process was the availability of meaningful, reliable
data at a level of detail sufficient to support the
                                                      236

-------
allocation.  Since, as a practical matter, some data
bases are finely partitioned along one variable
(e.g. industrial category) and others along another
(e.g. plant location), a number of data bases often
had to be combined or used in series to determine an
activity's distribution.

     It should be noted that the Residual Allocation
Model and the peripheral air and water models are
comparative as opposed to predictive models.  Reliable
predictions of absolute pollution levels which will
result from various energy strategies are extremely
difficult, if not impossible, to make on the basis of
currently available information. Accordingly, the
model's results are valid only to the extent that they
provide a means of comparison between the foreseeable
impacts that different scenarios can be expected to
have on the environment or between the relative en-
vironmental impacts of a single scenario on different
regions in the analysis.

                Residual Allocation Model

     The allocation of energy production and associ-
ated residuals is performed by two computer models,
called the Supply Model and the End Use Model.  As
their names suggest, the Supply Model allocates the
production and residuals directly corresponding to the
extraction and refinement of energy resources, whereas
the End Use Model allocates the activities and resid-
uals associated with the consumption of these re-
sources for various end uses.  Because the most cur-
rent and complete data is available for 1972, that
year is taken as the base year for the projections
made by the present models.  The models' target year
for the Project Independence Blueprint [3] is 1985.

     The geographical format of the allocation con-
sists of disaggregating energy production and resid-
uals, predicted on a broad geographical scale by the
PIES model, down to the scale of 335 River Basins, de-
fined by the National Oceanographic and Atmospheric
Administration, and 243 Air Quality Control Regions
 (AQCR's), defined by the EPA.  The production and re-
siduals assigned to these smaller levels are then
available for reaggregation up to the larger regions
necessary for analysis of a wider scope.

     In the Supply Model, the residual allocation
scheme is based on the known location and production
of natural energy resources, as well as the location
and capacities of existing or planned conversion fa-
cilities.  Specifically, the allocation proceeds
according to the following rules:

1.  Production and residuals from existing coal mines
which were active in 1972 are allocated on the basis
of 1972 production levels.

2.  Production and residuals from coal mines predict-
ed to come onstream by 1985 are allocated according to
the size, type, and location of known coal deposits.

3.  Production and residuals associated with oil and
natural gas production and the extraction of oil from
oil shale are allocated on the basis of known reserves.

4.  Production from coal gasification and liquefac-
tion is assigned on the basis of the sizes and loca-
tions of proposed plants.

5.  Power plant and refinery activity is allocated
according to the location and capacities of facilities
projected to be onstream in 1983.

     The form of the allocation is quite simple:  for
each energy activity, the amount of residual k allo-
cated to River Basin (AQCR) j is

               (S(j) / S(i)) x E(i) x I(ki)

where S(i) is the 1972 level of a surrogate quantity as-
sociated with the activity in PIES region i containing
River Basin (AQCR) j, S(j) is the 1972 level of that sur-
rogate in River Basin (AQCR) j, E(i) is the forecasted
level of the activity in PIES region i, and I(ki) is
the coefficient, specific to region i, of the amount
of residual k generated per unit of activity.
choice of surrogates is dependent upon the availability
of measures well correlated to the activity, for which
data can be obtained reliably at the geographic  level
required.  Once the residuals for each activity  have
been assigned to River Basins or AQCR's, the model
sums the levels of each residual over all activities,
thereby arriving at the total River Basin and  AQCR
loadings for all residuals.
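
     As a minimal sketch of the allocation rule just described (in Python,
with purely hypothetical activity, coefficient, and surrogate values; the
model's actual implementation is not reproduced here):

    # Sketch of the surrogate-based allocation rule described above.
    # All names and numbers are hypothetical illustrations.

    def allocate_residual(E_i, I_ki, S_j, S_i):
        """Residual k allocated to River Basin (AQCR) j within PIES region i.

        E_i  -- forecasted level of the activity in PIES region i
        I_ki -- residual k generated per unit of activity in region i
        S_j  -- 1972 surrogate level in River Basin (AQCR) j
        S_i  -- 1972 surrogate level in PIES region i (sum over its basins)
        """
        return E_i * I_ki * (S_j / S_i)

    # Example: a regional coal production forecast allocated to two basins
    # in proportion to their 1972 production (the surrogate).
    E_i, I_ki = 50.0e6, 0.002                # tons coal/yr; tons residual per ton coal
    surrogate = {"Basin A": 30.0e6, "Basin B": 10.0e6}    # 1972 tons
    S_i = sum(surrogate.values())
    loadings = {j: allocate_residual(E_i, I_ki, s, S_i) for j, s in surrogate.items()}
    print(loadings)    # Basin A receives 3/4 of the regional loading, Basin B 1/4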

     The End Use Model follows essentially the same
scheme.  Activity levels in this segment of  the  Allo-
cation Model refer to fuel consumption forecasts for
various fuels grouped by use categories.  The  PIES
model generates forecasts for these fuel consumption
levels for each Demand Region defined by the Census
Bureau.  The 11 categories treated by the PIES model
are:
     Industrial Sector
        Coal
        Natural Gas
        Distillate Oil
        Residual Oil
  Transportation Sector
     Gasoline
     Jet Fuel
     Distillate Oil
     Residual Oil
     Residential and Commercial Sector
        Natural Gas
        Distillate Oil
        Residual Oil

The variety of activities included in the consideration
of end use patterns is extensive, and for most of these
activities no comprehensive fuel consumption data are
readily available.  As a consequence, many different
surrogates had to be tabulated in order to effect the
disaggregation of consumption levels to River Basins
and AQCR's.  The End Use Model currently uses the
following surrogates:
     Fuel Category                    Surrogates (1972 data)

  All Industrial Fuels             Number of employees in each
                                   major industry category.
                                   State fuel consumption by
                                   2-digit SIC.  National fuel
                                   consumption by 4-digit SIC.

  Gasoline                         State consumption data and
                                   population.

  Jet Fuel                         Number of jet takeoffs.

  Transportation Distillate        Primary rail track mileage,
                                   interstate highway mileage,
                                   population, and vessel bunk-
                                   ering data.

  Transportation Residual          Residual consumption at U.S.
                                   ports.

  All Residential and              Population.
  Commercial Fuels
                                                       237

-------
Although more appropriate measures of these consumption
levels exist, none were found that were reliable and
available at the geographic level required.

     After the disaggregation phase, the model converts
the consumption predictions into residual loadings for
the six air pollution parameters using a set of end
use pollutant coefficients developed for fossil fuels.
(No water or land use residuals were considered, since
the only direct environmental impact of the end uses of
energy is on air quality.)  The output then consists of
total levels for each of 6 air residuals in every
River Basin and AQCR.

     As with any model, the Allocation Model depends
on several specific assumptions and limitations in
scope.  One of the most critical of these is the
assumed level of pollution control technology.  In
converting 1985 production and consumption levels to
residual loadings predictions, the PIES model supposes
that existing and promulgated control standards are
enforced and that surface mine reclamation laws will be
implemented. Table 1 compares the effect of control
technology on the loadings associated with energy
supply for one scenario.  Moreover, in allocating the
new facilities required for the realization of specif-
ic scenarios, the model assumes that facility siting
patterns will obey the distribution defined by facili-
ties projected to be onstream in 1983.  This assump-
tion is consistent with the objectives set forth in
the Non-Significant Deterioration legislation now under
consideration.  Of course, the model also contains the
inherent assumptions involved in the use of surrogate
quantities  for the disaggregation and the suppositions
regarding economic and demographic development that
are imposed  by the choice of 1972 as the base year.


                     Table 1

           EFFECT OF CONTROL TECHNOLOGY

[Table 1 residue:  for one scenario, the table compares loadings under 1972
control technology with loadings under advanced control technology for the
water pollutants (acids, bases, and SS in tons/day; TDS and organics in
100 tons/day; thermal discharge in E9 Btu/day), the air pollutants
(particulates, NOx, SOx, HC, CO, aldehydes), and the land use parameters
(solids, fixed land, maximum incremental land), and lists the major
regulated activity for each (coal mining, coal power generation, oil
refining, and surface coal mining).  The numerical entries are not reliably
recoverable from the scanned text.  E9 is 1,000,000,000.]
      The Allocation Model does not attempt to disaggre-
 gate the loadings in curies expected to result from
 the mining,  processing,  reprocessing, and waste
 management associated with the uranium fuel cycle.
 Curies are a measure of the activity of radionuclides,
and as dimensions of residual loadings they  do not
indicate accurately the quality of radiation in terms
of human health risks.  Furthermore, the pollution
loadings which are analyzed include only those direct-
ly attributable to the extraction, processing, conver-
sion, and end use of energy resources, i.e.  those
which can be quantized per unit of energy  in some rea-
sonable fashion.  This restriction excludes  from the
model's scope any pollutants resulting from  the con-
struction of energy facilities or from secondary de-
velopment induced by the exploitation of energy re-
sources. Finally, the model does not take  into account
pollution loadings, such as spills from pipelines,
trains, tankers, etc., that result from the  transpor-
tation or transmission of energy.  The End Use Model
does account, however, for vehicle emissions associated
with the transport of fuels.

                    Water Use Model

     Ideally, a model to indicate energy impacts on am-
bient water quality would provide the basis  on which  to
evaluate scenario water use implications.  Unfortunate-
ly, the physical and chemical dynamics of  hydrological
phenomena occur on such a small scale that nationwide
and even basin-wide predictions of water quality are
hardly possible.  A river which is anaerobic 30 yards
downstream from a paper mill can rid itself  of a sig-
nificant amount of BOD in 30 miles, defying  any analy-
sis which looks no closer than the River Basins used
for this model.  Because of this difficulty  of scale,
a model to predict water use patterns was  developed
instead.

     The Water Use Model uses an accounting  scheme to
assess the impacts of scenario-related energy develop-
ment on water use patterns.  The model consists of two
sections.  The  first  takes as input the energy produc-
tion levels predicted by the Supply Model  and from
these computes energy-associated water withdrawal and
consumption for each  River Basin.  The second section
evaluates the resulting energy use predictions in re-
lation to the amount of water available for  energy-
associated activities  (i.e. total water supply minus
non-energy use).

     More specifically, once all relevant  energy acti-
vity levels are known for every River Basin, the model
converts them to water demand estimates using a set of
water use coefficients.  Summing the results of this
conversion over all energy activities within a basin
gives the total energy-related withdrawal  and consump-
tion of water for that River Basin.  The total water
available for all uses in each region is just the sum
of  inflows from other watersheds, groundwater supply,
and indigenous supply from natural runoff, minus ex-
ports to other  regions.  The water  available for energy
use is then  simply the total water  available minus non-
energy associated water consumption.
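
     The accounting just described can be summarized in a short sketch
(Python); the basin, activity levels, and coefficient values below are
hypothetical and are not taken from the model's data base:

    # Sketch of the Water Use Model accounting scheme described above.

    def energy_water_demand(activity_levels, water_coefficients):
        """Energy-related water demand for one River Basin:  the sum over
        energy activities of activity level x water use coefficient."""
        return sum(level * water_coefficients[a] for a, level in activity_levels.items())

    def water_available_for_energy(inflows, groundwater, indigenous_runoff,
                                   exports, non_energy_consumption):
        """Total water available minus non-energy-associated consumption."""
        return inflows + groundwater + indigenous_runoff - exports - non_energy_consumption

    # Hypothetical basin (water quantities in acre-feet per year).
    levels = {"coal mining": 12.0e6, "power generation": 8.0e9}    # tons/yr, kWh/yr
    coeffs = {"coal mining": 0.05e-3, "power generation": 0.4e-6}  # acre-ft per unit
    demand = energy_water_demand(levels, coeffs)
    supply = water_available_for_energy(2.0e6, 0.5e6, 3.0e6, 0.8e6, 1.5e6)
    print(demand, supply, demand <= supply)   # is the scenario's water use compatible?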

                   Air Quality Model

     Although comparison of  air pollution  loadings
among  scenarios provides a useful means  for  evaluating
major  environmental  impacts  on  a  geographical basis,
the comparison  of  the effects  these loadings will have
on  ambient air  quality  is much more meaningful to de-
cision makers.  Unlike hydrological phenomena, atmos-
pheric mixing occurs  on  a  large  enough  scale to enable
rough  estimates of the  general  air  quality in the
AQCR's.   In  order  to accomplish  such an analysis, the
Air Quality  Model  relates  1972  air  quality to emissions
and then applies  this relationship  to the  emissions
predicted for a given scenario.   Using such a process,
comparisons  may be made  among scenarios of their
relative impacts  on  regional air quality.
                                                       238

-------
     Input to the Air Quality Model consists of energy-
associated emissions generated in AQCR's by the Resi-
dual Allocation Model for particulates and SOx.  (The
lack of adequate monitoring data to evaluate 1972 air
quality for other pollutants precluded their consider-
ation.)  The model converts the emissions for a single
AQCR to an air quality measure and range using con-
version factors based on the ratio of indices of 1972
quality to 1972 emissions data.  The indices of 1972
air quality are generally the minimum, median and maxi-
mum of the 1972 average annual concentrations for all
monitoring stations in each AQCR.  However, since not
all AQCR's were monitored for both pollutants in 1972,
some of the data used in the present model are taken
from 1974 data, or, in some cases, from data for AQCR's
judged to possess similar topographic, atmospheric,
demographic, and industrial characteristics.
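
     A minimal sketch of this conversion (Python; the quality indices and
emission totals below are hypothetical, not monitoring data):

    # Sketch of the linear "roll-back" conversion described above:  scenario
    # air quality is estimated by scaling the 1972 quality indices by the
    # ratio of predicted to 1972 emissions in an AQCR.

    def projected_quality(q_1972, e_1972, e_scenario):
        """Scale each 1972 quality index (min, median, max of the annual
        average concentrations) by the emissions ratio, assuming the
        emissions-to-quality relationship is linear and time independent."""
        ratio = e_scenario / e_1972
        return {k: v * ratio for k, v in q_1972.items()}

    q_1972 = {"min": 20.0, "median": 45.0, "max": 90.0}   # ug/m3, particulates
    print(projected_quality(q_1972, e_1972=150.0e3, e_scenario=180.0e3))
    # 20 percent more emissions -> 20 percent higher estimated concentrations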

     The crucial assumption in the Air Quality Model is
that the relationship between emissions and ambient
quality is linear and time independent in every AQCR.
This is justifiable only insofar as no drastic changes
occur  in the distribution or overall quantity of par-
ticulates and SOx emitted between 1972 and 1985.  It
also requires that each AQCR experience no drastic
climatological changes during the interim.  Moreover,
because the analysis is conducted on a basin-wide
scale, no distinction is made between point and area
sources of residuals or between stack heights at which
pollutants are emitted.  Site-specific air quality
models cannot be applied in this context because pro-
jections of the location of energy activities cannot be
accurate beyond the AQCR level.  At this point in time,
attempts at formulating a more precise functional rela-
tionship between emissions and quality are also hamper-
ed by  the fact that monitoring of emissions and air
quality only now approaches a comprehensive network.
Until  a substantial history of comprehensive monitor-
ing in all AQCR's becomes available, roll-back approxi-
mations like that used in the Air Quality Model will
be the best empirical relationships obtainable for this
level  of analysis.  For this reason, it is imperative
that the output of the model be used for comparison
purposes only.  The air quality estimates do not nec-
essarily provide realistic predictions of 1985 air
quality.  However, since the same analysis is used re-
gardless of the scenario being considered, the model
is useful as a basis for comparison among scenario
impacts on air quality [7].

           Conclusions and Recommendations

     The results of the model are useful for comparison
of the environmental impacts of energy alternatives and
as indicators of specific regions needing further study
on the effects of energy development.  The output can-
not be used as a definitive indicator of environmental
"hotspots."  The major conclusion drawn from the analy-
sis performed up to now stems from the model's sensi-
tivity to assumptions regarding pollution control tech-
nology.  The degree to which control technology will be
implemented between now and 1985 is the single most
influential factor of the impacts which various scenar-
ios exert on the environment.

     Most of the important general limitations of the
model  have been outlined earlier.  The model is clearly
not site-specific in its methodology;  the allocation
of residuals goes no further than the River Basin
level, with no explicit consideration of topography,
climate, or severity of particular point sources.  The
model  also inherits all of the limitations of its vari-
ous inputs, including the PIES Model output, the resid-
ual and water use coefficients, and the data taken from
the many other sources.  Another crucial concept whose
limitations must be taken into account is that of a
"residual."  The fifteen residuals allocated by  the
model define relatively broad classes of environmental
impact.  For example, predictions of loadings  in organ-
ics, measured in hundreds of tons per day, do  not dis-
tinguish between chemically different hydrocarbons.
Loadings estimates for solids, measured in thousands of
tons per day, lump together all residuals which  do not
enter the air or water.  They include such different
wastes as spent shale from oil shale processing  and
scrubber sludge from coal-fired power plants.  Resid-
uals provide a convenient, allocable set of impact cat-
egories by which the overall effects of energy develop-
ment can be compared.

     Several areas needing further study or attention
have become evident through the development and  appli-
cation of the Residual Allocation Model.  There  is an
obvious need for more extensive environmental  monitor-
ing.  Also, if energy planning is to advance,  better
regional census data must be gathered.  Much of  the un-
certainty imposed by the use of surrogates could be
mitigated if a more complete inventory of regional and
local energy consumption patterns were available.  A
sensitivity analysis of the model should be conducted
to include variations in the method of projecting
facility siting, along with an investigation into the
degree of control technology that one realistically
can expect to be implemented by 1985.

     The analysis should be expanded to include  at
least three additional areas of environmental  impact.
To begin with, some assessment ought to be made  of the
ramifications of facility construction on surrounding
areas.  The construction of water-diverting facilities
for a hydroelectric plant, for example, may take ten
or more years.  The resulting temporary and long-term
changes in the surrounding water quality are not in-
cluded in the present model.  Further study should
also be conducted into the possibility of presenting
radioactivity and trace metal loadings on a regional
basis in terms that relate to potential human  health
hazards.  Finally, it might be useful to introduce an
energy accounting scheme which would examine Btu pro-
duction and consumption on both the regional and na-
tional levels.  Such a program would indicate  the
relative energy productivity and demand distributions
among River Basins or AQCR's and would help measure
the degree of environmental damage export.  The  out-
put of the model then could be used to examine the
questions of who is producing the energy for whom
and at what environmental cost.
                     References

1.  Federal Energy Administration, Draft Environmen-
tal Impact Statement: Energy Independence Act of 1975
and Related Tax Proposals, March 1975.

2.  Federal Energy Administration, Environmental
Report on Modifications to the Mandatory Oil Import
Program:  A $3 Import Fee, January 1975.

3.  Federal Energy Administration, Project Indepen-
dence Report, Washington, D.C.:  Government Printing
Office, November 1974.

4.  U.S. Environmental Protection Agency, Summary
Report of the Pollutants Associated with the Resource
Activities of the Hittman Environmental Coefficient
Matrices.  Prepared by Hittman Associates, December
1974.

5.  U.S. Environmental Protection Agency, Project
Independence Blueprint:  Environmental Quality Anal-
                                                       239

-------
ysis Final Report, Phase I.  Prepared by Energy Re-
sources Co. Inc., December 1974.

6.  U.S. Environmental Protection Agency, Environ-
mental Pollutant Coefficients for Fossil Fuel End Use
in the Transportation, Industrial and Residential/Com-
mercial Sectors.  Prepared by Hittman Associates, 1975.

7.  U.S. Environmental Protection Agency, Assess-
ment of the Environmental Implications of Project
Independence.  Prepared by Energy Resources Co. Inc.
(in draft).

8.  U.S. Environmental Protection Agency, Final
Water Withdrawal and Consumption Coefficients for the
Project Independence Energy Production Activities.
Prepared by Hittman Associates, 1975.
Acknowledgements:
We wish to acknowledge Richard Livingston and Glen
Kendall.  Without their initiative and subsequent help
this model would not exist.
                                                      240

-------
                              HITTMAN REGIONAL ENVIRONMENTAL COEFFICIENTS  FOR THE
                              PROJECT INDEPENDENCE EVALUATION SYSTEMS  (PIES) MODEL

                                            Matthew S. Mendis, Jr.
                                              William R. Menchen
                                              H. Lee Schultz, III
                                            Energy Systems Department
                                             Hittman Associates,  Inc.
                                                  Columbia, MD
     The development and utilization of environmental
coefficients for the environmental/policy analysis of
energy strategies is described in this paper.  The
paper outlines Hittman Associates'  efforts as a mem-
ber of the Project Independence Blueprint Environ-
mental Task Force.  The extensive environmental data
bank developed for the Project Independence Evaluation
Systems (PIES) model is described along with its
utility in the policy analysis decision stream.  The
limitations of the data bank are discussed, and
suggested modifications recommended.  The technique
for determining environmental residuals as accom-
plished by the PIES Environment Report is also
described.  The conclusion is that the Hittman en-
vironmental coefficients can be utilized effectively
in a first-cut comparative environmental analysis of
energy scenarios.
                    Introduction
     The Hittman Regional  Environmental Coefficients
were developed for the environmental analysis of the
Project Independence Blueprint (PIB).  The purpose of
PIB was to analyze the economic, environmental and
social impacts of different possible Government
policies on future energy supply and demand.  In or-
der to achieve this, the Federal Energy Administra-
tion, in an interagency effort, developed the Project
Independence Evaluation Systems (PIES) model to fore-
cast energy supply and demand for different sets of
assumed government policies and imported oil prices.
Attached to the PIES model  was an "Environment
Report" submodel which was  developed by  the Environ-
mental Cross-Cut Task Force headed by the Environ-
mental Protection Agency (EPA).

     As a member of the PIB Environmental Task
Force, Hittman Associates,  Inc. (HAI) was assigned
the task of: 1) defining the environmental indicators
to be used in the energy assessment; 2) establishing
the environmental data requirements; 3) defining the
pollution abatement technologies; 4) identifying en-
vironmental  constraints to  be included in the price
equilibrium mode; and 5) developing environmental
data for quantitative analysis of each of the PIES
generated scenarios.

     The principal  and most important task was the
development of regional environmental coefficients
for the quantitative analysis of PIB.  To accomplish
this task, HAI aggregated  into the PIES model an
extensive environmental data bank previously developed
under contract with the Council on Environmental
Quality [1].  Where necessary, the earlier coefficients
were updated and revised or new coefficients de-
veloped.   The coefficients  represent units of pollu-
tants (i.e., air and water pollutants, solid waste,
land use and occupational health) per unit of energy
supplied,  converted or consumed.   The environmental
coefficients were also regionalized according to the
energy activity considered.  For example, environ-
mental coefficients for coal supply were developed
for the twelve coal supply regions shown in Figure
1.  The regional coefficients reflect variations
in the characteristics of the energy resource, ex-
traction, processing, storage and utilization.

     The environmental coefficients developed for the
PIB were integrated into PIES as a subprogram.  The
subprogram utilizes the output of the supply/demand
portion of PIES and the Hittman Regional Environmental
Coefficients to generate the PIES Environment Report.
This report presents the environmental residuals
(i.e., total quantity of each pollutant) associated
with each energy supply, conversion and end use acti-
vity.  The environmental residuals were determined by
taking the product of the level of an energy activity
(i.e., tons of coal supplied per day) and that energy
activity's environmental coefficients (i.e., lbs of
pollutant per ton of coal supplied).  A flow diagram
of the Environment Report algorithm is presented in
Figure 2.

     A generalized discussion of the Hittman Regional
Environmental Coefficients is presented in the
following sections.  A detailed presentation of these
coefficients is available in Reference 2.
                  Data Development
Basis for the Environmental Coefficient Matrices

     The basic references used in preparing the en-
vironmental coefficient matrices supplied to the FEA
Project Independence model were two reports pre-
pared by Hittman Associates for the Council on Environ-
mental Quality (CEQ) on the environmental impacts,
efficiency, and cost of energy supply and end use [1].
These reports, issued in final November 1974, pre-
sented quantified data on the broad range of environ-
mental impacts to land, water, and air for each step
in the fossil fuel supply and end use chain.  The
reports covered the fossil fuel supply system com-
ponents, all electric power plant conversion for
coal, oil, and natural gas and some of the future
supply activities including high and low Btu coal
gasification, oil shale, and coal liquefaction.  New
data was obtained by HAI for the nuclear fuel cycle,
hydroelectric power plants, transportation of
energy resources and energy end use (transportation,
residential/commercial, and industrial) activities.

     The format of the data presented in the CEQ
reports was at a lower level of aggregation than
that required for the Project Independence model.
Since the CEQ impact data was derived for each step
in the energy supply trajectory, it was necessary
to aggregate the environmental impacts from several
steps in order to arrive at a set of coefficients
consistent with the level of aggregation specified
in the PIB model.  Also, the CEQ data is presented
                                                       241

-------
in terms of impacts per trillion Btu  input  to  each
process, and it was desired to put these impacts on
a physical units basis (tons of coal, bbl of oil,
etc.) to be consistent with the energy flow format
of the PIB model.

     Coal production provides an illustrative  example.
The PIB model specifies the tons of coal  produced,
ready-for shipment, from each of twelve coal supply
regions.  To get to this point, several unit opera-
tions or activities must be performed on  the coal
resource in the ground.  First the coal must be ex-
tracted.  It must then be transported locally  to  a
preparation site and stored.  It may  then be prepared
by washing to remove impurities or just sized  for
shipment.  Each of these activities may,  in turn, be
performed by one or more specific processes.   Ex-
traction of coal in the Hittman data  base is comprised
of underground (room and pillar, and  longwall) and
surface (auger, strip, and contour) mining  techniques
or processes.  Local transportation of the  coal can
be performed by mine rail, conveyor,  or trucks.
Preparation of the coal may involve washing (dense
media) or simple breaking and sizing.

     As a concrete illustration of the methodology
employed to manipulate the Hittman data for use with
the PIB model, consider coal production from existing
underground mines in Northern Appalachia.   From Bureau
of Mines' data for this region it was determined  that
75 percent of the coal extracted from the ground  re-
ceived some type of mechanical cleaning and that  the
remaining 25 percent was just mechanically  crushed or
sized.  Bureau of Mines' data further showed that 95
percent of the coal from this region was extracted by
room and pillar operations, while 5 percent was ex-
tracted by the longwall method.  These relative
fractions were the basis for determining the combina-
tion and weighting of processes used from the data
base.

     The procedure used was to work backward from the
point of coal ready for shipment and determine, using
the respective process efficiencies, the Btu input
needed to deliver this amount of coal.  The Btu input
numbers (recall that the data base is on a  trillion
Btu input basis) represent the multipliers  used with
the data base environmental coefficients to deter-
mine the environmental impact from that respective
operation.  A summation of impacts over all opera-
tions then gives the coefficients for use with the
PIB model.  Table 1 and Figure 3 illustrate the
derivation of one (of 18) PIB environmental coeffi-
cients for underground coal production from the
Northern Appalachian region.
       Table 1.  Derivation of PIES Coefficient From CEQ Data

                             Data Base Solid     Btu Input      Tons Solid
                             Waste Impact/       Multipliers    Waste per
                             10^12 Btu Input                    10^12 Btu

  Room and Pillar
   Extraction                    1.19+03            1.71          2.03+03
  Longwall Extraction            1.78+03             .06          1.07+02
  Mine Rail Transport               0               1.028            0
  Steam Coal Preparation         5.97+03             .778         4.64+03
  Breaking and Sizing            2.54+00             .25          6.35-01
                                                    Total         6.78+03

  6.78+03 tons solid waste     10^12 Btu for delivery      1.59+02 tons solid waste
  ------------------------  x  ----------------------  =   ------------------------
       10^12 Btu                42.4x10^3 tons coal            10^3 tons coal
                                                                for delivery
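
     The aggregation illustrated in Table 1 can also be written as a short
sketch (Python); the numbers simply reproduce the solid waste example for
Northern Appalachian underground coal, and the per-ton conversion uses the
42.4x10^3 tons of coal delivered per 10^12 Btu shown above:

    # Sketch of the Table 1 aggregation:  CEQ impacts per 10**12 Btu input to
    # each process are weighted by the Btu input multipliers, summed, and then
    # converted to a PIES coefficient on a physical-unit (tons of coal) basis.

    ceq_impact = {                 # tons solid waste per 10**12 Btu input
        "room and pillar extraction": 1.19e3,
        "longwall extraction":        1.78e3,
        "mine rail transport":        0.0,
        "steam coal preparation":     5.97e3,
        "breaking and sizing":        2.54,
    }
    btu_multiplier = {             # 10**12 Btu input required per 10**12 Btu delivered
        "room and pillar extraction": 1.71,
        "longwall extraction":        0.06,
        "mine rail transport":        1.028,
        "steam coal preparation":     0.778,
        "breaking and sizing":        0.25,
    }

    waste_per_e12_btu = sum(ceq_impact[p] * btu_multiplier[p] for p in ceq_impact)
    tons_coal_per_e12_btu = 42.4e3          # tons of coal delivered per 10**12 Btu
    pies_coefficient = waste_per_e12_btu / tons_coal_per_e12_btu * 1.0e3
    print(round(waste_per_e12_btu), round(pies_coefficient, 1))
    # about 6.78+03 tons solid waste per 10**12 Btu and about 1.6+02 tons
    # solid waste per 10**3 tons of coal delivered, matching Table 1 to rounding
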
                                                Additional  data  were obtained through a litera-
                                           ture search  for  those energy activities not in-
                                           cluded  in  the  CEQ  report.   Specifically, environ-
                                           mental  matrices  were  developed independent of the
                                           CEQ report for the nuclear fuel  cycle, hydroelectric
                                           power plants,  energy  transportation, and all the end
                                           use activities.
                                            Data  Format

                                                 The  Hittman  environmental data was designed to
                                            facilitate  its  incorporation and utilization in the
                                            PIES  model  or any other similar energy model.  En-
                                            vironmental  coefficients are given on the basis of
                                            unit  of pollutant produced per unit of energy input
                                            or  output associated with an energy activity.  For
                                            example,  the nitrogen oxides (NOX) associated with

                                            the extraction of natural gas was determined to be
2.75x10^-3 tons NOx per 10^6 SCF natural gas.  Thus,
if a region produces 250x10^6 SCF of natural gas per
day, the associated environmental residual can be
determined as:

                           2.75x10^-3 tons NOx
     250x10^6 SCF/day  x  ---------------------  =  6.875x10^-1 tons NOx/day
                                10^6 SCF
                                            The environmental data for  each  energy activity is
                                            presented in a matrix where the  energy activity and
                                            regions make up the rows  and the environmental  pollu-
                                            tants make up the columns.   An example matrix can
                                            be found in Figure 4.

                                                 A discussion of the  energy  activities,  the en-
                                            vironmental pollutants and  the regional  delineations
                                            that make up the Hittman  Environmental  Coefficient
                                            Matrices is presented below.
                                                              Data Description
                                            Definitions
                             In order to describe  the environmental  impacts
                        associated with energy  supply, conversion and end
                        use, a number of definitions  were adopted:

     Term                Example/Definition

  Pollutant           SO2 emission from the combustion
                      of coal for steam generation

  Process             Combustion of coal for steam gen-
                      eration (results in a set of
                      pollutants)

  Activity            A combination of processes (i.e.,
                      electricity generation by coal-
                      fired power plants)

  Environmental       Unit of pollutant per unit of
  Coefficient         energy into an activity (i.e.,
                      lbs of SO2 emitted per ton of coal
                      into a coal-fired power plant)
                                                       242

-------
  Environmental      The total  quantity of a pollutant
  Residual           for a given time period resultant
                    from an energy activity (i.e.,
                     tons of SO2 emitted per day)
Controlled Vs.  Uncontrolled

     All  environmental  coefficients  incorporated in
the PIES  model  were designated controlled.   "Con-
trolled"  implies that impacts are consistent with the
use of control  technology which will  probably be re-
quired and/or available in 5 to 10 years.   As an
illustration, past laws that governed the  reclamation
of surface mined lands  minimally required  that effort
be made to restore the  land.  This included partial
backfilling and an attempt at revegetation.  However,
since the degree and success of reclamation were not
mandatory (for the "uncontrolled" condition), reclama-
tion was  not assumed for area stripping operations,
and only partial backfilling was assumed for contour
mines.  In the controlled situation,  contour backfill-
ing and revegetation were assumed required for either
type of stripping operation.  The attainment of this
high level of reclamation will require such practices
as stockpiling and redistribution of the topsoil,
segregation of toxic overburden, and seed  bed prepara-
tion.  Generally speaking, the controlled  condition
incorporates the environmental standards proposed or
soon to be implemented  by the EPA.  A more detailed
explanation of controlled as it is related specifi-
cally to each process in the energy activity chain
is to be found in the writeups preceding each energy
activity environmental  matrix in Reference 2.

     Uncontrolled environmental coefficient matrices
were developed for comparison of a "base case" to
the PIES scenarios.  "Uncontrolled," according to
the ground rules adopted in this study, means that
impacts are the current national or regional aver-
age value.  In the absence of current (1972-73)
data, impacts typify the use of least stringent
environmental controls.  The uncontrolled environ-
mental coefficients are not presented in Reference
2.  However, uncontrolled environmental coefficients
in the CEQ format are presented for fossil fuel
energy activities in Reference 1.
 Energy Activities

     The energy related activities evaluated for
 environmental residuals in the PIES data base were:

     •  Coal Supply:
           -  Underground
           -  Surface
     •  Coal Gasification:
           -  Low Btu
           -  High Btu
     •  Coal Liquefaction
     •  Shale Oil Supply:
           -  Underground
           -  Surface
           -  In-Situ
     •  Natural Gas Supply:
           -  Extraction
           -  Processing
     •  Crude Oil Supply:
           -  Domestic Onshore
           -  Domestic Offshore
     •  U235 Extraction
     •  Energy Transportation:
           -  Coal:  Railroad, Barge, Pipeline Slurry
           -  Crude & Syncrude Oil:  Pipeline
           -  Crude Oil:  Barge, Railroad
           -  Crude & Refined Oil:  Tanker
           -  Oil Products:  Barge, Truck, Rail, Pipeline
           -  Natural & Synthetic Gas:  Pipeline
           -  Liquefied Natural Gas:  Tanker
           -  Deep Draft Port Facility:  Monobuoy Mooring System
     •  Power Plants:
           -  Coal Fired
           -  Oil Fired
           -  Gas Fired
           -  Gas Turbine Simple Cycle
           -  Low Btu Gas/Steam Turbine Combined Cycle
           -  Hydroelectric
           -  Nuclear
     •  Electricity Transmission & Distribution
     •  Oil Refineries:
           -  Existing
           -  New (Fuel Oil)
           -  New (Gasoline)
     •  Transportation Energy End Use
     •  Residential/Commercial Energy End Use
     •  Industrial Energy End Use
     The energy activities analyzed for environ-
mental residuals are only those directly attributable
to energy material extraction, processing, and utili-
zation on a per unit of energy basis.  Because
residuals from other energy-related activities cannot
be estimated on the basis of a per unit of energy in-
put or output, environmental coefficients were not
developed for pollutants resulting from:

     •    Construction of energy facilities
     •    Conjunctive development induced by
          energy development
     •    Secondary pollutants resultant from
          interaction of primary pollutants
          with the environment
Environmental Pollutants

     The environmental pollutants considered in the
PIES data base were:

     •  Water Pollutants:
          Acids, Bases, Total Dissolved Solids,
          Suspended Solids, Organics, Thermal
     •  Air Pollutants:
          Particulates, Nitrogen Oxides, Sulfur
           Oxides, Hydrocarbons, Carbon Monoxide,
          Aldehydes
     •  Land Impacts:
          Solid Waste, Permanent Land Use
          (Fixed Land), Temporary Land Use
          (Maximum Incremental Land)
     •  Occupational Health:
          Deaths, Injuries, Man-days Lost

     The water and air pollutants are aggregated in
broad categories such as acids, bases, particulates,
etc.  The constituent pollutants (i.e., sulfuric
acid, calcium carbonate, trace metals, etc.) are not
identified for two reasons.  First, the time allo-
cated for development of the environmental coeffi-
cients for PIES precluded any further breakdown.
Secondly, the level of information on the energy
activities generated by PIES was such that accuracy
would not be enhanced if any further breakdown in the
                                                      243

-------
pollutant categories was attempted.  The broad  pollu-
tant  categories had the advantage of facilitating
qualitative environmental analysis of the  PIES
scenarios.

      Solid  wastes are considered to be all residuals
not entering the air or water that result  from  the
basic fuel  resource, or from the system processes
that  make fuels useful for consumption.

      The land impacts include areas required for ex-
traction, structures, disposal of solid wastes, roads,
ports, pipelines, storage, and buffer zones.  Both
fixed and incremental land effects are considered.
Fixed land  effects are those associated with facili-
ties  such as processing plants, pipelines  and storage
tanks, whereas incremental land effects are those
associated  with excavation, such as strip  mining,
and solid waste disposal.

      Occupational health is considered on  the basis
of deaths,  injuries, and man-days lost due to in-
juries.
 Regionalization

      In  order to be compatible with the PIES output,
 environmental  coefficients for each energy activity
 were  determined regionally to reflect variations in
 the characterization of the energy activities.  PIES
 regions  for each energy activity were defined to
 "correspond to natural data divisions appropriate to

 each  resource, conversion facility and demand."
 The results of this regional division are shown in
 Table 2.

         Table 2.  PIES Energy Regions

  Energy Activity                    Number and Region Definition

  Coal Supply
  High Btu Gasification              12 FEA Coal Supply Regions
  Low Btu Gasification                 (See Figure 1)
  Coal Liquefaction

  Oil Production
  Natural Gas Production             14 NPC Petroleum Provinces of
  Natural Gas Processing               the U.S.

  Energy Transportation              National

  All Electric Generating
  Power Plants                       9 Census Regions

  Oil Refineries                     7 Petroleum Administration for
                                       Defense (PAD) Districts

  Oil Shale Recovery
  U-235 Extraction                   National
  Electricity Transmission
   and Distribution

  Residential/Commercial End Use
  Industrial End Use                 9 Census Regions
  Transportation End Use
     Environmental coefficients were developed for the
PIES regions utilizing a weighting  system to reflect
the variation within a given  region of:

     •    Energy resource characteristics
     •    Energy production,  conversion  and
          utilization
     •    Regulatory requirements on energy
          supply, conversion  and end use
          (i.e., environmental constraints)
     •    Energy consumption  patterns
                                Environmental Coefficient Matrices

                                     The result of the HAI/PIES  effort was the develop-
                                ment of a comprehensive set of environmental coeffi-
                                cient matrices for energy supply, conversion and end
                                use.  Thirty-three matrices were developed encompassing
                                the energy activities, pollutants,  and regions dis-
                                cussed above.  These environmental  matrices, the
                                associated assumptions, the methodology for their
                                utilization and a sensitivity analysis are presented
                                in  Reference 2.
                                                 Data Application
                                General
     The Hittman  Regional  Environmental Coefficients
developed for  the PIES  model  have potential for a
wide range of  applications.   The present format of
units of pollutant per  unit  of energy input or output
has the advantage (over the CEQ format of 10^12 Btu's
into a system) of compatible application to most
existing data  bases.  The  use of consistent environ-
mental coefficients also enables energy policy makers
to obtain relative environmental rankings of various
energy strategies.   The regional characteristics of
most of the coefficients allow for the focusing of en-
vironmental analysis  to specific regions without a
loss of resolution.  Environmental coefficients and
the subsequent environmental residuals analysis have
the advantage  of  allowing  the comparison between
energy alternatives without  getting into the site
specific nature of the  energy facilities.  Thus, it
becomes possible  to discuss  representative energy
activities rather than  specific facilities (i.e.,
a particular power plant).   Additionally, the
residual analysis has a further advantage in that it
is required as a  primary step if one wants to proceed
to an environmental quality  analysis and subsequent
environmental  impact assessment.

Application to PIES

     The output of PIES for  the Environment Report
is a series of energy activity levels relating to
energy supply, conversion,  transportation and end
use within a given region.   The environmental evalua-
tion of each PIES scenario  requires explicit con-
sideration of  the entire set of energy activities
and the environmental residuals that are generated
by these activities.  The  Hittman Environmental
Coefficients,  as  utilized  in PIES for residual analy-
sis, are described below.

     Consider the following:  i energy activities,
j regions, and k environmental pollutants.  Then we
can define the following:

     E(ij)  = the level of the ith energy activity
              in the jth region

     P(ijk) = the kth environmental coefficient
              for the ith energy activity in the
              jth region

Utilizing the definitions above, we can determine
various aggregations of environmental residuals (ER)
as is done in PIES by the following operations:

     ER(ijk) = E(ij) x P(ijk)
                    :  the residuals associated with a
                       regional energy activity

     ER(jk) = SUM over i of [E(ij) x P(ijk)]
                    :  the residuals associated with all
                       regional energy activities

     ER(k) = SUM over i and j of [E(ij) x P(ijk)]
                    :  the residuals associated with
                       total national energy activities
     The procedure of product summations to determine
environmental  residuals  allows for incorporation into
other regional  or national  energy models.
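
     As a minimal sketch of these product summations (Python; the
activities, regions, levels, and coefficients below are hypothetical
placeholders rather than PIES data):

    # Sketch of the ER aggregations defined above.

    activities, regions, pollutants = ["coal", "oil"], ["R1", "R2"], ["NOx", "SOx"]

    E = {("coal", "R1"): 100.0, ("coal", "R2"): 40.0,    # activity levels E(ij)
         ("oil",  "R1"):  30.0, ("oil",  "R2"): 60.0}
    P = {(i, j, k): 0.01                                 # coefficients P(ijk)
         for i in activities for j in regions for k in pollutants}

    # ER(ijk):  residuals associated with a regional energy activity
    ER = {(i, j, k): E[i, j] * P[i, j, k]
          for i in activities for j in regions for k in pollutants}

    # ER(jk):  summed over activities -- all regional energy activities
    ER_region = {(j, k): sum(ER[i, j, k] for i in activities)
                 for j in regions for k in pollutants}

    # ER(k):  summed over activities and regions -- total national activities
    ER_national = {k: sum(ER_region[j, k] for j in regions) for k in pollutants}
    print(ER_national)    # {'NOx': 2.3, 'SOx': 2.3} for these placeholder numbers
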
        Conclusions and Recommendations
     The immediate conclusion is that the Hittman
environmental coefficients can be utilized as an
effective tool in the policy analysis of energy
strategies.  The relative environmental residuals
comparison of various energy strategies can aid in
pointing out environmental problem areas of future
energy development.  The environmental residuals
technique allows for a rapid, broad analysis of
energy strategies and can indicate those strategies
or regions that should be studied in more depth.
However, the environmental residuals technique can
be utilized only as a means of comparison between
strategies, and not as a statement of environmental
quality.

      Several  recommendations  for  refinement  of  the
 environmental coefficients  technique  can  be  identi-
 fied.   These include:

      (1)   Development of  additional environmental
           coefficients for  pollutants  not considered
           in PIES  such as specific trace  metals,  etc.

      (2)   Consideration of  secondary  pollutants  that
           result from the interaction  of  primary
           pollutants with the environment
(3)   Development of techniques to reflect resi-
     duals that are time variant with energy
     throughputs (i.e., acid mine drainage,
     reclamable land use, radioactive waste,
     etc.).

(4)   Development of more uniform regions so
     as to optimize coefficient resolution
     within  the available data base

(5)   Finally, better techniques should be
     developed to utilize the residuals data
     for a generalized environmental  quality
     analysis (i.e., building in quality indi-
     cators  for each type of pollutant in order
     to reflect the sensitivity of any given
     region  to an increase in a given environ-
     mental  residual)
                     References

1.  Environmental Impacts, Efficiency and Cost of
    Energy Supply and End Use, Vol. I and II,
    Final Report, HIT-593, Hittman Associates, Inc.,
    November 1974.

2.  Hittman Associates Project Independence Final
    Report, in draft, due for publication April
    1976.

3.  Federal Energy Office Memorandum from William
    W. Hogan - Balancing Task Force, to Gorman
    Smith, May 2, 1974.

4.  Environmental Impact Modeling for Project
    Independence, Richard A. Livingston, G.R.
    Kendall and W.R. Menchen.
[Figure 1 is a map of the twelve PIES coal supply regions, keyed as follows:]

      1.  Northern Appalachia
      2.  Appalachia
      3.  Southern Appalachia
      4.  Midwest
      5.  Central West
      6.  Gulf
      7.  Eastern Northern Great Plains
      8.  Western Northern Great Plains
      9.  Rockies
     10.  Southwest
     11.  Northwest
     12.  Alaska

                Figure 1.  PIES Coal Supply Regions

                                                       245

-------
                     PIES Supply/Demand
                     Integration Model
                         Regional
                     Energy Conversion
Hittman Regional
  Energy Supply
  Environmental
  Coefficients
Hittman Regional
Energy Conversion
  Environmental
  Coefficients
                           Regional
                    Environmental  Residuals
                   [PIES  Environment  Report]


      FIGURE 2.   Environment Report  Generation
  CEQ Room and Pillar Mining Variables (Land Impact):
    Coal Heat Value, Coal Density, Seam Thickness, Mine Depth,
    Angle of Draw, % Subsides, Mine Life, Process Efficiency, etc.
      -->  CEQ Room and Pillar Mining Land Impact Coefficient

  CEQ Longwall Mining Variables (Land Impact):
    Coal Heat Value, Coal Density, Seam Thickness, Mine Depth,
    Angle of Draw, Mine Life, Process Efficiency, etc.
      -->  CEQ Longwall Mining Land Impact Coefficient

  CEQ Steam Coal Plant Variables (Land Impact):
    Process Efficiency, Clean Coal Recovery, Raw Coal Feed,
    Magnetite Losses, Tramp Iron Losses, etc.
      -->  CEQ Steam Coal Plant Land Impact Coefficient


    FIGURE 3.  Flow Diagram of CEQ Data Conversion
                 to PIES Coefficients
      [FIGURE 4.  Environmental Coefficient Matrix for Coal Supply:  water
       impacts and air impacts (tons/unit) tabulated for coal supply regions
       VIII (Western Northern Great Plains) through XII (Alaska), by old/new
       mines and underground (U)/surface (S) mining.]

                              246

-------
                            INTEGRATED ECONOMIC-HYDROSALINITY-AIR QUALITY ANALYSIS
                                FOR OIL SHALE AND COAL DEVELOPMENT IN COLORADO
                  Charles W. Howe
                   Bernard Udis
                  Robert C. Hess
                  Douglas V. Orr
                 Jeffrey T. Young
               Department of Economics
                University of  Colorado
                                    Jan F. Kreider
          Environmental Consulting Services
                  Boulder, Colorado
     The objective  of  this  study  was  to analyze the
economic, hydrologic,  water quality,  and air quality
implications  of  establishing a shale  oil industry and
expanding coal mining  in  Colorado.  The main tools of
analysis consisted  of  (1) the Colorado State Univer-
sity input-output (I-O) model of the State; (2) the
University of Colorado (I-O) models of the Upper Main
Stem of the Colorado  (UMS)  and Green  River Basin
economies;  (3) the  C.U. hydro-salinity model cali-
brated to the UMS,  White, and Yampa basins;  (4) the
C.U. air quality model; and (5) state 1970 data on
employment  by industry and  skill, the latter reduced
to a set of employment coefficients per million dol-
lars of output for  the various sectors of the State
economy.  The strategy was  to analyze three steady-
state scenarios:   a shale oil scenario, an under-
ground coal mining  expansion scenario, and a strip
mining coal expansion  scenario.  Changes in outputs
and employment due  to  oil shale and  coal expansion
were estimated.

     The  coal scenario consisted of  6 underground
expansions  totaling 10.35 million tons per year in
the UMS Basin and  5 strip mining operations totaling
12.45 million tons  per year in the Green River Basin.
The shale oil scenario consisted of  4 operations
totaling  134,000 bbls/day in the UMS  Basin and 3
operations  totaling 146,000 bbls/day  in the Green
River Basin.  The  steady-state increase in statewide
output levels due  to shale  oil was about $1.6 billion.
Increased payments  to  households totaled $134 million
statewide.  The  statewide increase in employment was
16,670.

     The  underground coal expansion  induced an expan-
sion in statewide  output  levels of $171 million, in-
creased payments to Colorado households of $27 mil-
lion, and  direct and indirect increases in employment
of 4964 persons.

     The  strip mining  expansion increased statewide
output levels $47  million,  increased  payments to
Colorado households by $3.8 million  statewide, and
increased employment by 1300 persons.

     The water implications included  direct and in-
direct consumptive  uses of  31,668 acre-feet per year
in the UMS, 34,164  per year in the White River Sub-
basin, and  8520  in  the Yampa.  Added  salt loadings
were 3576,  5568,  and 3204 tons per year in the 3
basins, assuming that  brine and spent shale problems
will be totally  controlled.  Shale oil production is
the major air polluter.   Some 17 phases of the shale
oil process contribute significantly  to air pollution.
It is predicted  that a significant degradation of air
quality will  occur  in  Garfield and Rio Blanco Coun-
ties from the postulated  280,000 bbl/day industry on
the assumption that processes similar to TOSCO II
will be used. Occasional conditions  of poor disper-
sion could  lead  to  much more severe  short term
episodes.  Ambient air quality  impacts  in  the vicinity
of the plant itself are seen  to be  critically depen-
dent on plant location with respect  to  topographic
features.

                Methods of Analysis

     The scenarios analyzed are given below in Tables
1 and 2.  They represented the  best  available esti-
mates of likely developments  as of June, 1975.   It
should be noted that only the direct and indirect
effects of coal and shale oil production have been
analyzed.  The coal and oil produced have  been
treated as exports from Colorado, even  though some
are intended for use within the State.
                                  Table 1.
                               Coal Scenario

                                        Ultimate
  Company/Location                      Tonnage     Method   Uses
Colorado River Basin
  1.  Colo. Consol. (Columbine
      Glass), Paonia                    2.0x10^6      UG     Export from State
  2.  Adolph Coors, Bowie-Paonia        4.0x10^6      UG     To Golden
  3.  Pittsburgh/Midway, Paonia         1.0x10^6      UG     Export from State
  4.  Atlantic Richfield, Somerset      2.0x10^6      UG     Export (?)
  5.  Western Slope Carbon, Somerset    0.6x10^6      UG     To Pueblo
  6.  Public Service Co., Cameo         0.75x10^6     UG     Thermal, Cameo
      TOTAL                            10.35x10^6

Green River Basin
  1.  Empire Energy, Moffat City        2.0x10^6      S      Slurry Pipe to Texas*
  2.  Utah International, Craig
      (Moffat C.)                       2.6x10^6      S      Thermal, Craig
  3.  W. R. Grace, Moffat County        3.0x10^6      S      Slurry Pipe to Texas*
  4.  Peabody, Routt County             0.85x10^6     S      Thermal, Hayden #2
  5.  Energy Fuel, Routt County         4.0x10^6      S      To Denver
      TOTAL                            12.45x10^6

                                            * requires water & power

     The basic tool of economic analysis used was the
input-output (I-O) type of model.  Excellent ex-
positions of this type of model are available in the
literature (e.g., Baumol^1 or Miernyk^2) but, in brief,
such a model shows the linkages which exist among the
various economic sectors of a region by virtue  of
                                                      247

-------
                                  Table 2.
                             Shale Oil Scenario

                                          Retorting       Mining
  Company/Location          bbls/day      Tech. 2/        Tech.
Colorado River Basin 1/
  Colony                     46,000       TOSCO II          UG
  Union                      50,000       Union Underfeed   UG
  Occidental                 30,000       In Situ
   (Garrett Res.)
  Paraho                      8,000       Paraho            UG

    TOTAL                   134,000

Green River Basin
  Superior                   50,000       Superior          UG
                                           3 Minerals
  Rio Blanco (C-a)           50,000       TOSCO II          UG
  Shell Oil (C-b)            46,000       TOSCO II          UG
   (formerly ARCO)

    TOTAL                   146,000

1/ Omitting all Utah developments.
2/ We will assume TOSCO II for all plants.
supplying one another with inputs.  The linked sectors
include households and a local-state government sector.
When one sector expands its output (say to satisfy a
new national demand for an energy commodity like coal),
it demands more inputs from other economic sectors of
the region, and they, in turn, demand more from their
suppliers (including households which supply the labor
inputs required).  Some inputs are imported from out-
side the region, and such "leakages" finally cause the
total regional requirements to converge to new equili-
brium levels of output for each regional economic
sector.
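     As an illustration of this propagation, the calculation can be sketched in
a few lines of Python.  The three-sector coefficient matrix and the demand
increase below are hypothetical, not those of the Colorado or basin models:

# Sketch of input-output propagation: a new export demand is traced through
# inter-sector purchases until regional outputs reach a new equilibrium.
# The coefficients and the demand increase are hypothetical.
import numpy as np

# A[i, j] = dollars of sector i's output needed per dollar of sector j's output
A = np.array([
    [0.05, 0.10, 0.02],    # mining
    [0.15, 0.05, 0.20],    # trade and services
    [0.25, 0.30, 0.05],    # households (labor), treated as an endogenous sector
])

delta_demand = np.array([10.0, 0.0, 0.0])   # new export demand, $10^6

# Equilibrium output change solves (I - A) x = delta_demand (Leontief inverse).
x = np.linalg.solve(np.eye(3) - A, delta_demand)

# The same total emerges from successive spending "rounds", which converge
# because part of every round leaks out of the region as imports.
total, rounds = np.zeros(3), delta_demand.copy()
for _ in range(50):
    total += rounds
    rounds = A @ rounds

print("equilibrium output change (direct solve):", np.round(x, 3))
print("equilibrium output change (50 rounds):   ", np.round(total, 3))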
     Should it be desired to know how much of a pollut-
ant will be generated by the regional economy or how
much of some natural resource like water will be re-
quired, it is then possible to multiply the output
levels by corresponding coefficients representing the
generation of the waste or use of the resource to ar-
rive at a total.  This .procedure is followed, for
example, in determining non-agricultural consumptive
water uses, the total level of wastes generated, and
employment by sector.  More complex models are needed
to determine the water use of agriculture and the pat-
terns by which air-borne wastes are distributed at
ground levels.
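     The second step described above, converting equilibrium outputs into
resource and residual totals, amounts to a dot product of per-output
coefficients with the output changes.  A sketch follows; except for the
326.8 and 6.8 AF/$10^6 consumptive-water coefficients quoted later for strip
and underground coal, the numbers are illustrative:

import numpy as np

output_change = np.array([24.9, 72.5, 10.0])   # $10^6 of output: strip coal, UG coal, other
water_coeff   = np.array([326.8, 6.8, 0.5])    # acre-feet per $10^6 of output
salt_coeff    = np.array([87.0, 5.3, 0.2])     # tons TDS per $10^6 of output (illustrative)

print(f"consumptive water use: {water_coeff @ output_change:,.0f} AF/yr")
print(f"salt loading:          {salt_coeff @ output_change:,.0f} tons/yr")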
     The regions for which I-O models exist are:  the
Green River Basin, the Upper Main Stem of the Colorado
River (UMS), the San Juan River Basin, and the State
of Colorado.  The first two were used to represent the
economic structures of their respective portions of the
State of Colorado.  The Green and UMS I-O models were
used in conjunction with Gray's I-O model^3 of the en-
tire State to analyze statewide and regional economic
effects.
     To trace water use in greater detail and to anal-
yze the salinity (total dissolved solids) effects on
water quality of the energy developments under study,
the hydro-salinity model (Udis, Howe, and Kreider^4) was
calibrated to three separate sub-basins:  the Colorado
UMS below Glenwood Springs (excluding the Gunnison-
Uncompaghre systems); the White River; and the Yampa
River.   The outputs of the models include monthly and
annual river basin outflows, and total dissolved solids
loadings by month and year.
     The air pollution model does not cover regions but
calculates ground level concentrations of particulates,
sulphur dioxide, oxides of nitrogen, carbon monoxide,
and unburned hydrocarbons  for  areas  surrounding im-
portant point and  diffuse  sources.   A point source
would be, for example, a coal  mine or a thermal elect-
ric plant.  A diffuse source would be a town where
there are many small point and mobile sources.
     The strategy  of applying  these  models to the
analysis of the  coal and shale oil scenarios consisted
of the following steps:
     1.  use the state-wide I-O model to get total
state output and employment effects;
     2.  use the UMS I-O model to get the output and
employment effects occurring in the  UMS region
(roughly Garfield, Mesa, Delta, and  Montrose Counties)
as a result of the coal and shale oil developments;
     3.  use the Green River Basin I-O model to esti-
mate the output  and employment effects of the coal and
shale oil developments assumed for that region (Rio
Blanco, Moffat,  and Routt  Counties);
     4.  assume that the "rest of the State" effects
are given by the quantities in (1) less those in (2)
and (3);
     5.  apply the hydro-salinity models to the im-
mediate basins where the developments are occurring,
since that is where any critical water problems will
arise;
     6.  apply the air pollution model to the import-
ant new point sources.
     For shale oil and the  new strip and underground
coal mining processes, the  major problem was to create
the column of technical coefficients showing the in-
puts from the regional (or  state) sectors.   Each en-
try requires two facts:  (1) the technologically re-
quired input from  the particular sector and (2)  the
portion of that  input likely to be supplied by the
regional (state) sector (the remainder being the
amount imported).  It was decided that the rows corre-
sponding to the new energy activities and showing the
distribution of  their output would all be zeroes,  the
output being completely exported from the State.  (This
is not the case  for all coal output;  see the Coal
Scenario in Table 1).
     The major sources consulted during the construc-
tion of the shale oil column were references 5 through
10.  Reference 11 would have been extremely useful had
it been known in time.  Interviews with officials of
the Shale Oil Corporation provided new information and
verification of  data from other sources.
     The final data used to characterize the oil shale
sector were stated in terms of the annual inputs  into
a 50,000 bbl/day plant, the output of which was  eval-
uated at $12 per barrel.  These are  given in Table 3.

                      Table 3.
         Major Inputs Into a 50,000 bbl/Day
           Shale Oil Plant (1970 dollars)

    Electric power                        $  9,300,000
    Payments to state and
     federal government                      9,500,000
    Wages and salaries                      15,000,000
    Imports (out of state)                 165,200,000
    Depreciation                            20,000,000
    Water                                    7,200 acre-feet
    Water use (consumptive)                  32.87 acre-feet/$10^6 output
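As a rough check on how Table 3 is normalized:  the annual output of a
50,000 bbl/day plant valued at $12 per barrel is about $219 million, and
dividing each annual input by that figure gives the technical coefficients;
the water entry then reproduces the 32.87 AF/$10^6 line to within rounding.
A back-of-the-envelope sketch (full-year operation is an assumption here):

# Normalization implied by Table 3: annual inputs divided by the value of
# annual output (50,000 bbl/day at $12/bbl, full-year operation assumed).
bbl_per_day, price_per_bbl = 50_000, 12.0
annual_output = bbl_per_day * 365 * price_per_bbl          # ~$219 million/yr
output_millions = annual_output / 1e6

inputs = {                                   # annual dollar inputs from Table 3
    "electric power":          9_300_000,
    "state/federal payments":  9_500_000,
    "wages and salaries":     15_000_000,
    "imports (out of state)": 165_200_000,
    "depreciation":           20_000_000,
}
for name, dollars in inputs.items():
    print(f"{name:24s} {dollars / annual_output:.4f} $ per $ of output")

water_af = 7_200                             # acre-feet per year, from Table 3
print(f"consumptive water: {water_af / output_millions:.2f} AF per $10^6 output")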
                                          In retrospect, other "guesstimates" of additional in-
                                          puts could have been attempted and probably some of
                                          the "imports" (e.g.  ceramic balls for retorting)
                                          should have been allocated to Colorado.  In sum, the
                                          economic impacts are biased downward by omitting other
                                          positive inputs, but the amount of this bias is diffi-
                                          cult to estimate.
                                               Coal mining was characterized as new underground
                                          or new strip.  For the former, data were obtained from
                                          industry sources and a promise of confidentiality was
                                          made.  However, the following classes of inputs were
                                          included:  wages and benefits, chemicals and explo-
                                          sives, fuel and power, supplies, and other.  In the
                                                       248

-------
case of chemicals and explosives, it was assumed that
they were available within the State, but not in the
UMS or Green Basins.  Consumptive water use was esti-
mated to be 6.8 acre-feet per million dollars of out-
put (at $7/ton).
     The new strip mining sector was estimated from
U.S. Bureau of Mines data, 1972 Circular IC 8535.
Estimates were based on a 5 million ton per year oper-
ation and are given in Table 4.
                       Table 4.
         Major Inputs Into a 5 Million Ton/Year
            Western Strip Mining Operation

    Wages and salaries         $850,800
    Local taxes                 250,000
    Federal taxes               314,400
    Chemicals & explosives      850,600
    Oil and gas                 111,000
    Electric power              126,000
    Water use (consumptive)     326.8 acre-feet/$10^6 output
The same assumption was made regarding the source of
the "chemicals and explosives" input.  In all cases,
strip coal was valued at $2/ton and underground coal,
being of much higher quality and heat content suited
primarily for metallurgical uses, at $7/ton.
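The $2/ton valuation matters when Table 4 is expressed per unit of output:
a 5 million ton/year strip operation is only $10 million of output, so each
dollar figure in Table 4 is divided by 10 to obtain a coefficient per $10^6.
The short sketch below also applies the 326.8 AF/$10^6 water coefficient to
the full 12.45 million ton/year strip scenario; the result is of the same
order as the direct Yampa Basin consumptive use reported in the Results:

# Per-output coefficients implied by Table 4 at the $2/ton strip coal
# valuation, plus a scale check on direct consumptive water for the scenario.
value_per_ton = 2.0
output_millions = 5_000_000 * value_per_ton / 1e6          # = 10 ($10^6/yr)

wages = 850_800
print(f"wages and salaries per $10^6 of output: ${wages / output_millions:,.0f}")

scenario_millions = 12_450_000 * value_per_ton / 1e6       # 12.45 million tons/yr
print(f"direct consumptive water, strip scenario: "
      f"{326.8 * scenario_millions:,.0f} AF/yr")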
                        Water

     Water "use" must be specified both in terms of
how much is withdrawn from the source and how much is
actually consumed.  Return flows - the difference be-
tween withdrawals and consumptive use - can represent
a large part of the water diverted (1/2 to 2/3 for
residential uses, as much as 1/2 for irrigation) and
are quite important to the maintenance of flows for
downstream users.  It is also necessary to distinguish
between direct and indirect water uses.
     Water quality in this study is defined as total
dissolved solids (TDS), either in tons of total salt
load or in concentration.  TDS is affected by natural
sources, the contents of return flows, and the concen-
trating effects of consumptive water uses.  A change
in economic activity will induce both direct and in-
direct salt loadings and both are computed by the
hydro-salinity model.
     It is not yet known whether or not serious salt
problems will follow from shale oil development.  Pro-
blems might relate to the use or disposal of brines
recovered with the shale and the possible leaching of
salts from the spent shale.  Industry sources have
asserted that these will not be significant sources of
pollution.  The calculations in this study cover only
TDS additions from sanitary and clean-up water uses
typical of industry.
                     Air Pollution
     The impacts of the three energy development sce-
narios include possible degradation of air quality in
Rio Blanco, Routt, Moffat, Garfield, Gunnison, and
Delta counties.  An attempt was made to calculate the
dispersion or diffusion of airborne pollutants through
the region by use of a mathematical simulation model.
In this section, only the direct air pollution impacts
of energy growth are considered.  Although the input-
output framework provides a mechanism by which both
direct and indirect effects can be calculated, the
scope of this section includes only direct air quality
impacts.
     All energy extraction scenarios considered here
are centered in Northwestern Colorado.  This area is
typified by a proliferation of ridges, valleys and
mesas and is generally quite variable in form and con-
tour.  This type of conformation results in local
micro-climates influenced less by synoptic (mesoscale)
climatic events and more by local (microscale) char-
acteristics.   As a result, dispersion of airborne pol-
lutants varies from site to site and can only be mod-
eled approximately by the best of air pollution models
now extant.
     Mean temperatures for a year range from 40°-60°F
and insolation is about 1500 Btu/(ft^2)(day) on a hori-
zontal surface.  Precipitation averages from 8 to 16
inches per year in the region.  In this arid, sunny
climate, fugitive dust emissions require more control
effort than in other areas of the United States.  High
surface winds associated with the passages  of cold
fronts may exacerbate the fugitive  dust  problem at
mine sites and temporarily unvegetated areas.
     With the exception of the  aerological  environment
of large municipalities, air quality  in  the region is
excellent.  However, few data on air  quality exist for
the area other than measurements of particulate con-
centrations made near the towns of Meeker,  Grand
Valley, Rio Blanco and Rangely by the Colorado  Depart-
ment of Health.  There are indications that natural
particulate hazes from windblown dust may exceed ac-
ceptable air quality standards periodically at  the
present time with no industrial development in  the
region.  Airborne hydrocarbons may  exist in some areas
of the region resulting from emissions from vegetation
(sagebrush).
     An air pollutant dispersion model APGDM has been
developed at the University of  Colorado's Bureau of
Economic Research.  The model is of the  Gaussian type
and is described in Udis et al.^4  This simulation
model includes the effects of the following primary
variables upon the diffusion of pollutants  from a
given source into the atmosphere:   (1) stack height,
exit temperature, exit velocity  and plume rise;  (2)
wind speed, wind direction, ambient temperature,
atmospheric stability, temperature  gradient,  and inso-
lation; (3) inversion depth; (4) background air qual-
ity; (5) arbitrary receptor location; (6) terrain
variations downwind; (7) arbitrary  time  period.
     Since the behavior of plumes in  the present im-
pact areas may not conform to all Gaussian  model as-
sumptions, the dispersion results presented in  the
shale oil impact analysis must be viewed as approxi-
mate.
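     The kind of calculation such a model performs can be illustrated with the
textbook Gaussian point-source formula for ground-level concentration.  The
sketch below is not the APGDM code; the dispersion-coefficient fits and the
source parameters are illustrative placeholders, and the caveats about complex
terrain in the preceding paragraph apply with full force:

# Ground-level, flat-terrain Gaussian plume concentration downwind of an
# elevated point source (steady state, with total reflection at the ground);
# sigma_y and sigma_z use illustrative power-law fits.
import numpy as np

def ground_level_conc(q_g_per_s, u_m_per_s, h_eff_m, x_m, y_m=0.0):
    sigma_y = 0.08 * x_m ** 0.92            # horizontal spread (m), illustrative
    sigma_z = 0.06 * x_m ** 0.91            # vertical spread (m), illustrative
    return (q_g_per_s / (np.pi * u_m_per_s * sigma_y * sigma_z)
            * np.exp(-0.5 * (y_m / sigma_y) ** 2)
            * np.exp(-0.5 * (h_eff_m / sigma_z) ** 2))

# Hypothetical SO2 source: 250 g/s, 4 m/s wind, 100 m effective stack height.
for x in (500.0, 1000.0, 2000.0, 5000.0):
    c = ground_level_conc(250.0, 4.0, 100.0, x)
    print(f"x = {x:6.0f} m   ground-level concentration ~ {c * 1e6:8.1f} ug/m^3")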
     Direct airborne emissions  from underground coal
mining are negligible.  The transportation, storage
and distribution phases of underground-mined coal are
also very clean since conveyors or  trains are used
for transport.  Surface mining  of coal can  result in
significant air pollution from  the  mining and trans-
port phases.
     Unlike the two coal extraction scenarios con-
sidered in this report, the development  of  the  postu-
lated 280,000 bbl/day shale oil  extraction  industry
will have major impacts on the  air  quality  in Rio
Blanco and Garfield Counties.   Since  sufficient tech-
nical data were available only  on the TOSCO II  pro-
cess, it has been assumed that all plants are to use
that process so that the related calculation can be
demonstrated.  Estimates of emissions from  a steady
state TOSCO II plant with underground shale mining
have been based on the Environmental  Impact Statement
prepared by the Colony Development Operation^12 for
their proposed 50,000 bbl/day facility to be located
at Parachute Creek, Colorado.   Significant  emissions
arise from some 17 phases of the operation  includ-
ing shale transport and crushing, retorting, power
plant operation, and on-site kerogen  storage.   Cemen-
tation reactions and revegetation are assumed to con-
trol fugitive dust from large spent shale disposal
areas.  The total annual emissions  from  one 50,000
bbl/day TOSCO II plant and from the seven plants
postulated in the present scenario  are shown in
Table 5.
     The calculated emissions agree with those  pre-
sented in the FEA Project Independence Oil  Shale Task
Force Report with the exception of NOx emissions.  The
FEA report, inexplicably, does not include the domi-
nant NOx emitter of the TOSCO II process (raw shale
                                                       249

-------
preheat system).   Fugitive dust emissions are expected
to be much smaller (46 TPY for a 50,000 B/D plant) than
those associated with the processing plant.^13

                       Table 5
        Annual Direct Emissions From Shale Oil
  Extraction (tons/year)* Assuming TOSCO II Technology
                               Production Level
Pollutant                 50,000 B/D       280,000 B/D
Particulates                 3075             17220
SOx                          6950             38920
NOx                         24600            137760
CO                            250              1400
Hydrocarbons                 1700              9520

* These figures are based on the assumption that all
shale oil projects would be using the TOSCO II process
and experiencing a 90% load factor at each plant.
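A quick arithmetic check of Table 5:  the 280,000 B/D column is simply the
50,000 B/D column scaled by 280,000/50,000 = 5.6, the same 90% load factor
applying to both production levels:

# Reproduce the 280,000 B/D column of Table 5 from the 50,000 B/D column.
base = {"Particulates": 3075, "SOx": 6950, "NOx": 24600, "CO": 250,
        "Hydrocarbons": 1700}                       # tons/year at 50,000 B/D
scale = 280_000 / 50_000
for pollutant, tons in base.items():
    print(f"{pollutant:13s} {tons * scale:8.0f} tons/yr at 280,000 B/D")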

     Model outputs are in terms of incremental pollu-
tion increases for the region surrounding each of the
seven postulated plants considered.  The physical dis-
tributions of long term average pollution are presented
on isopleth (constant pollutant quantity) maps which
are a convenient visual means of determining the nature
of pollutant distribution in the areas around a given
source.  Incremental isopleth maps for S02 and NO^
are illustrated in Figure 1.

                        Results

     Selected results are given in Tables 6-11 below.

                        Table 6
      Total Output Impacts of Energy Developments
             (millions of 1970 dollars)
               Shale Oil   UG Coal   Strip Coal   Total
UMS Basin         722        120          -         842
Green Basin       800          -         33         833
Statewide       1,654        171         47       1,872
% 1970           4.3%       0.4%       0.1%        4.9%
                        Table 7
         Total Increases in Income Payments to
                  Colorado Households
              (millions of 1970 dollars)
               Shale Oil   UG Coal   Strip Coal   Total
UMS Basin          64         24          --         88
Green Basin        72          -         3.9         76
Statewide         136         27         3.9        167
% 1970           1.7%       0.3%       0.05%       2.1%
                        Table 8
              Total Increases in Employment

               Shale Oil   UG Coal   Strip Coal   Total
UMS Basin        6,480      4,326         -       10,806
Green Basin      7,525          -       1,206      8,731
Statewide       16,670      4,964       1,300     22,934
% 1970            2.0%       0.6%        0.2%       2.7%

                        Table 9
        UMS Increases in Annual Consumptive Use
                   and Salt Loadings
                      Shale Oil  New UG Coal   Both
Increased consumptive
  use (AF/yr)           31,320       348      31,668
Increased salt
  loading (ton/yr)       3,444       132       3,576
                        Table 10
      White River Increases in Annual Consumptive
                   Use and Salt Loadings
                                        Shale Oil
Increased consumptive use (AF/yr)        34,164
Increased salt loading (tons/yr)          5,568

                        Table 11
              Yampa River Increases in Annual
             Consumptive Use and Salt Loadings
                                                     Coal &
                           New Strip Coal          Spillovers*
Increased consumptive
  use (AF/yr)                   8,148                8,520

Increased salt loading
  (tons/yr)                     2,172                3,204

*Economic spillovers from shale oil development in
 White River Basin.

[Figure 1.  Long-term average isopleth maps for Rio Blanco (C-a Tract)
 (50,000 bbl/day) for (a) SO2 and (b) NOx; 250,000-series map, contour
 intervals 2 and 10 mcg/cubic meter, respectively.]

                                                       250

-------
                      References

1.  Baumol, William J., Economic Theory and Operations
      Analysis, Prentice-Hall, Englewood Cliffs, N.J.,
      1972.

2.  Miernyk, William H., The Elements of Input-Output
      Analysis, Random House, New York, 1965.

3.  Gray, S. L., J. R. McKean, and D. D. Rohdy,
      "Estimating the Impact of Higher Education from
      Input-Output Models:  A Case Study," Rocky
      Mountain Social Science Journal, V. 12, No. 1,
      January 1975.

4.  Udis, B., C. W. Howe, and J. F. Kreider, The
      Interrelationship of Economic Development and
      Environmental Quality in the Upper Colorado
      River Basin, National Technical Information
      Service Accession No. COM-73-11970, Springfield,
      Va., 1974.

5.  Udis, Bernard, et al., "An Interindustry Analysis
      of the Colorado River Basin in 1960 with
      Projections to 1980 and 2010, Appendix, Part II,"
      prepared under Contract No. WA 67-4 with the
      Federal Water Pollution Control Administration,
      U.S. Department of Interior, June 1968
      (unpublished).

6.  Colorado, Office of the Governor, Oil Shale Plan-
      ning and Coordination, IMPACT:  An Assessment
      of the Impact of Oil Shale Development,
      Colorado Planning and Management Region 11,
      Vol. I, Executive Summary, December 1974.

7.  Colorado, Legislative Council, Committee on Oil
      Shale, Coal, and Related Minerals:  Report to
      the Governor and the Colorado General Assembly,
      Leg. Co. Research Publication No. 208,
      December 1974.

8.  Colorado West Area Council of Governments, Oil
      Shale and the Future of a Region:  A Summary
      Report, September 1974.

9.  U.S. Federal Energy Administration, Project Inde-
      pendence Blueprint Final Task Force Report:
      Potential Future Role of Oil Shale, Prospects
      and Constraints, Interagency Task Force on Oil
      Shale, Department of Interior, November 1974.

10.  Sladek, Thomas A., "Recent Trends in Oil Shale,
      Parts 1 and 2," Mineral Industries Bulletin,
      Colorado School of Mines Research Institute,
      November 1974 and January 1975.

11.  Just, J., B. Borko, W. Parker, and A. Ashmore,
      New Energy Technology Coefficients and Dynamic
      Energy Models, Vol. 1, The Mitre Corp.,
      January 1975.

12.  Colony Development Corporation, An Environmental
      Impact Analysis for a Shale Oil Complex at
      Parachute Creek, Colorado, 3 Vols., 1974.

13.  Federal Energy Administration, Project Indepen-
      dence:  Potential Future Roles of Oil Shale,
      USGPO No. 4118-00016, Washington, D.C., 1974.
                                                       251

-------
                                       CSMP CONCEPT AND APPLICATIONS TO

                                     ENVIRONMENTAL MODELING AND SIMULATION
                                        Grace Chang and C. Lindsay Wang
                                           Systems Architects, Inc.
                                              Arlington, Virginia
ABSTRACT

The Continuous System Modeling Program (CSMP) is a
continuous system simulation language that allows
models to be prepared directly and simply from
either a block diagram representation or a set of
differential equations.  A CSMP program is
constructed from three types of statements:
Structure Statements which define the model, Data
Statements which assign numerical values to
parameters, constants and initial conditions, and
Control Statements which specify the execution and
report generation options.  CSMP accepts most
FORTRAN statements to supply the user with logic
and algebraic capability.  Computer graphics and
interactive execution capabilities are also
available in CSMP.

In this paper, the fundamental concepts of CSMP
that are related to environmental modeling and
simulation are summarized.  Procedures for applying
the concept to environmental models are described.
Sample cases for environmental problems are
presented.

CSMP OVERVIEW

Continuous System Modeling Program III (CSMP III)
is an IBM program product which aids development
and execution of simulation models for continuously
changing systems.  It is written in FORTRAN IV
language and ASSEMBLER language, and has been
installed at many IBM 360/370 facilities.

CSMP III is a continuous system simulation
language (CSSL) that allows the digital simulation
of continuous processes on large-scale digital
machines.

The program provides an application-oriented
language which permits models to be prepared
directly and simply from either a block diagram
representation or a set of ordinary differential
equations.  It includes a basic set of functional
blocks (also called functions) which can represent
the components of a continuous system and accepts
application-oriented statements defining the
connections between these functional blocks.

A CSMP III program is constructed from three types
of statements:
Structure Statements   which  define  the  model.
They consist of FORTRAN  statements and functions,
and functional blocks  (also called functions)
designed for CSMP.

Data Statements   which  assign  numerical values  to
parameters, constants  and initial conditions.

Control Statements   which specify options  for the
execution of the program and  the choice  of  output.

It accepts FORTRAN statements,  thus  supplying the
user with logical and  algebraic capability.  Hence
the user can readily handle complex  nonlinear and
time-variant problems.

This program is specifically  designed  to satisfy
the needs of scientists  and engineers, who wish to
simulate physical phenomena without having to
spend time and resources learning the  intricacies of
sophisticated computer programming.

Applications in which  CSMP III  can be used include
studies of nuclear reactors,  control system design,
parameter estimation,  studies of blood circulation
and other physiological processes, studies of
chemical refineries, natural  gas transmission,
process control, investigation  of aircraft landing
and take-off, plant growth, natural  resources
management, simulation of corporate  financial
policies and industrial dynamics.

CSMP III has the following basic functional
capabilities:

•  Powerful Standard Functions   The CSMP III
   language contains 42 powerful simulation
   functions for performing such operations as
   integration, differentiation, signal  and
   function generation, Laplace transformation,
   switching and logical operations.

•  Capability to Develop Additional  Functions
   By combining standard CSMP III functions and/or
   FORTRAN statements, the user may  build larger,
   more powerful functions specifically  suited to
   his particular field  of study.  These functions
   become part of his  CSMP III  language  and they
   may be used in a manner identical to  the
                                                      252

-------
standard CSMP III functions.
•  Extensive Function Generation Capability   The
   user may incorporate arbitrary or experimental
   data into his model.  Such data may be the
   function of one or two variables.  Interpolation
   between data points is handled automatically,
   including interpolation of functions of two
   variables.

•  Powerful Array-Handling Capability   The
   storage, manipulation, and printing of arrays is
   easily performed.   Integrator arrays are also
   easily specified and handled.

•  FORTRAN-Based System   FORTRAN statements can be
   intermixed (with a few minor exceptions) with
   CSMP III statements, thereby placing the logic
   and algebraic capability of the FORTRAN
   language at the user's disposal.

•  Extensive Library Facilities   The library
   facilities of CSMP III allow the user to
   develop and maintain libraries of functions,
   sub-models, arbitrary or experimental data,
   tables, and complete models.

•  Wide Selection of Integration Algorithms   The
   user has a wide range of integration algorithms
   from which to choose: both single and double
   precision;  fixed and variable step, including
   one specifically designed for "stiff" equations.

•  Numerous Output Options   The values of one
   through 55 selected variables may be printed
   during the simulation run.

•  Improved Coding and Debugging Aids   The CSMP III
   language (including FORTRAN when used in
   conjunction with CSMP III) is free-form.
   Extensive debugging aids are available to the
   user to check out his CSMP III and his FORTRAN
   coding.

•  Flexible Installation   CSMP III may be tailored
   to the user's particular hardware configuration.

The CSMP III graphic feature provides the
following capabilities:

•  Interactive Interrogation of Results   Using the
   graphic device (such as IBM 2250 graphic display
   terminal),  the user may quickly display and
   analyze the results of the simulation run and
   select those variables which are to be printed
   or print-plotted for later reference and
   evaluation.

   One to four grids  may be simultaneously displayed,
   with one to four variables plotted per grid.

   Graphic plots to logarithmic scales are readily
   available.

   The value of a plotted variable may be obtained
   merely by touching the display with the light pen
   at the appropriate point on the curve.
•  On-Line Reference Manual   Whenever the user is
   in doubt about the use of a CSMP III statement,
   he can immediately obtain a graphical display
   of instructional messages relating to rules and
   proper usage.

•  Interactive Simulation Run Control   By
   dynamically displaying selected variables
   during a simulation run, the user can monitor
   the simulation and interrupt the run at will to
   change the model, model data, execution
   specifications or to vary the display itself.

•  Interactive Model Development   With its highly
   versatile set of editing features, Graphic
   CSMP III makes it easy for the user to develop
   simulation models, completely "on-line".

   With merely a few touches of the light pen, the
   user may store and retrieve data, sub-models,
   or entire models using the CSMP III library.
   This assures continuity of model development
   and helps the individual user to quickly
   incorporate commonly used sub-models and data
   into his model.

CSMP OPERATION OVERVIEW

CSMP III uses five phases, in the following order,
to build and execute a CSMP III model:  Input
Processor, Translator, FORTRAN, Linkage Editor,
and Execution.

1. The Input Processor phase reads the next
   CSMP III model from the Input file, accesses
   and retrieves any data referenced in the
   symbolic library by INCLUDE statements, and
   builds the input for the Translator phase.

2. The Translator phase analyzes the CSMP III
   statements from the Translator input file and
   builds two separate files:  a FORTRAN input
   file containing FORTRAN subprograms representing
   the logic of the CSMP III model's structure,
   and an Execution input file containing the
   CSMP III data and execution control statements.
3. The FORTRAN phase converts the FORTRAN
   subprograms from the Translator phase to a
   machine-language object module.

4. The Linkage Editor phase combines the machine-
   language object module produced by the FORTRAN
   phase with the precompiled CSMP III load
   module library (for integration, plotting, etc.)
   to produce the Execution phase load module.

5. The Execution phase (built by the Linkage
   Editor phase) first interprets the data and
   execution control statements from the
   Translator phase for the next run and then
   proceeds to execute that simulation run,
   storing simulation results on the Prepare
                                                      253

-------
   data set when required.  This is repeated until
   all the execution runs have been exhausted.
   Print documents are generated during each exe-
   cution run, while output documents are
   generated at the end of each execution case.

STRUCTURE OF THE MODEL

The CSMP formulation of a model is divided into
three segments, INITIAL, DYNAMIC and TERMINAL,
that describe, respectively, the computations to be
performed before, during and after each solution.

INITIAL Segment   which is intended exclusively
for the computation of initial condition values
and those parameters that the user prefers to
express in terms of the more basic parameters.
This segment is optional.

DYNAMIC Segment   which is the most extensive in
the model.  It contains the complete description
of the systems dynamics, together with any other
computations required during the solution of the
system.  The structure statements within this
segment are generally a mixture of CSMP and
FORTRAN statements.

The DYNAMIC segment is required.  This segment
may be declared explicitly by a DYNAMIC statement
or implicitly by the absence of INITIAL, DYNAMIC,
or TERMINAL statements.

TERMINAL Segment   which is used for those
computations required at the end of the run, after
completion of the solution.  This segment is
optional.

These segments represent the highest level of the
structure hierarchy.   Each of the segments may
include one or more sections which represent
rational groupings of the structure statements
and may be processed as either parallel or
procedural entities.


SEGMENT
INITIAL /
DYNAMIC A
TERMINAL \





. SECTION «^
/
	 SECTION -^
X.
\
\ SECTION ^
SORT ^
NOSORT
STATEMENTS
•^ STATEMENTS
-^ STATEMENTS
STATEMENTS
-"""' STATEMENTS
-\ STATEMENTS

STATEMENTS
r±T STATEMENTS
"\ STATEMENTS

           Structure of the CSMP TV Model
These sections contain the structure  statements
that specify model dynamics and associated
computations.

ELEMENTS OF THE CSMP III

The basic elements in the preparation of CSMP
statements are:

1.  NUMERICAL CONSTANTS   which are unchanging
    quantities specified in numeric form in
    the input statements.

2.  SYMBOLIC NAMES   which represent  quantities
    that may either change during a run  or be
    changed by the program between successive runs
    of the same model structure.

3.  OPERATORS   which are used instead of
    functional blocks to indicate basic arithmetic
    functions or relationships.  As in FORTRAN,
    these operators are +, -, *, /, **, = and ( ).

4.  FUNCTIONAL BLOCKS   which are used for more
    complex mathematical operations,  such  as:
    integration, time delay, quantization  and
    limiting.
                  BLOCK REPRESENTATION

    [Diagram:  a functional block with inputs X1, X2, ..., XN and
     parameters P1, P2, ..., P5 producing outputs Y1, Y2, ..., YN
     according to a mathematical expression.]

                 EQUIVALENT CSMP III STATEMENT

        Y1,Y2,...,YN = DEVICE(P1,P2,...,P5,X1,X2,...,XN)

    Example:
                 Y = INTGRL(IC,X)

    which states that the output, Y, is obtained by
    integrating X, with Y at the starting time
    equal to IC.

5.  LABELS - which are the first word of CSMP
    data and control statements that tell the
    program the purpose of the statement.  Some
                                                      254

-------
  statements contain only the label, such as
  INITIAL, NOSORT, and ENDMAC.  Others contain a
  label and appropriate data.
  Example:

  TIMER  DELT = 0.025,  FINTIM = 100
  which specifies the integration  interval and the
  "finish time" for a run.

PROBLEM DESCRIPTION

Oxygen balance studies of a polluted stream
usually result in one or more dissolved  oxygen
profiles along the course of the stream.  Dissolved
oxygen is a very commonly used water quality
criterion; it is an important general index of
quality albeit not all-pervasive.

Following is a general equation describing dissolved
oxygen relations in a stream receiving oxygen-
consuming waste:

          dB/dt = -(K1 + K3)*B + R                 (1)

where

   dB/dt = the rate of change of BOD (Biochemical
           Oxygen Demand) with respect to time

   B     = BOD present

   R     = the rate of BOD addition due to runoff
           and scour

   K1    = the rate constant for deoxygenation

   K3    = the rate constant for sedimentation

A related expression, using the dissolved oxygen
deficit, D, rather than BOD, B, is the following:

          dD/dt = K1*B - K2*D - A                  (2)

where

   dD/dt = the rate of change of dissolved oxygen
           deficit with respect to time

   D     = the existing oxygen deficit (the
           difference between the saturation
           concentration and the existing dissolved
           oxygen concentration)

   A     = the net rate of oxygen production due to
           photosynthesis and respiration of phyto-
           plankton and/or waterweeds

   K2    = the rate constant for reaeration

 Integrating these two equations gives B and D as
functions of time; the resulting dissolved oxygen
profile is the familiar oxygen sag curve shown in
Figure 1.

                              255

-------
[Figure 1.  Oxygen sag curve.]  On the oxygen sag
curve, Cs is the dissolved oxygen (DO) saturation; Do
is the initial DO deficit; Dc is the critical DO
deficit; Cc is the critical DO level.
MODEL DESCRIPTION

Figure 2 shows a complete  listing of the CSMP III
statements for the sample  problem.

The INCON and PARAMETER statements assign the
values of initial conditions  and parameters.
K1 = (0.26, 0.27) means that two simulation runs

will be made, each with a different K1 value.

DYNAMIC card indicates  the end of the initializa-
tion statements, and the beginning of the dynamic
portion of the simulation.  The INTGRL function is
used to perform integration.   TIMER FINTIM specifies
the finish time for terminating this simulation.
OUTDEL indicates the time  interval of output
printing and plotting.  PRINT statement presents
the variables which are printed during execution
of the run.  TITLE allows  the user to specify
the text of a heading to appear at the top of each
page of the print document.   OUTPUT statement
lists the variables to  be  print-plotted after
completion of the case.  LABEL specifies the text
of a heading appearing  at  the top of each page of
the print-plot document.   PAGE MERGE indicates
that two curves, B and  D,  are to be merged on a
single output print-plot.   The END and STOP
statements define the end  of  the model.
INCON      B0=5.2,  D0=6.9
PARAMETER  K1=(0.26,0.27),  K2=0.11,  K3=0.36
PARAMETER  A=0.43,  R=2.8
DYNAMIC
           DB=-(K1+K3)*B+R
           DD=K1*B-K2*D-A
           B=INTGRL(B0,DB)
           D=INTGRL(D0,DD)
TIMER      FINTIM=25.,  OUTDEL=0.5
PRINT      DB,DD,B,D
TITLE        OXYGEN  BALANCE  IN  POLLUTED  WATERS
OUTPUT     B,D
LABEL        OXYGEN DEFICIT CURVE AND BOD CURVE
PAGE       MERGE
END
STOP

                  FIGURE 2.  SAMPLE INPUT
RESULT
Figures 3, 4, 5 and 6 show the tabular printing
output and merged print-plotting output for the runs
with K1 = 0.26 and 0.27, respectively.  From these
outputs we can find that the critical time is 3.0 and
the critical oxygen deficit is 7.0585 for K1 = 0.26,
and the critical time is 4, critical oxygen deficit
is 7.1567 for K1 = 0.27.
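The same run can be sketched outside CSMP with a simple explicit integration.
The parameter values below are those reconstructed from the Figure 2 listing
(K2 = 0.11, K3 = 0.36, A = 0.43, R = 2.8) and should be treated as assumptions;
the peak deficit comes out near 7.06 at t of roughly 2.8 for K1 = 0.26 and near
7.16 at t of roughly 3.9 for K1 = 0.27, consistent with the critical values
read from the 0.5-interval print grid above.

# Euler integration of equations (1) and (2); parameter values reconstructed
# from the sample listing (treat as assumptions, not published constants).
def simulate(k1, k2=0.11, k3=0.36, a=0.43, r=2.8, b0=5.2, d0=6.9,
             fintim=25.0, dt=0.005):
    t, b, d = 0.0, b0, d0
    history = [(t, b, d)]
    while t < fintim:
        db = -(k1 + k3) * b + r           # equation (1): BOD balance
        dd = k1 * b - k2 * d - a          # equation (2): DO deficit balance
        b, d, t = b + db * dt, d + dd * dt, t + dt
        history.append((t, b, d))
    return history

for k1 in (0.26, 0.27):
    t_crit, _, d_crit = max(simulate(k1), key=lambda row: row[2])
    print(f"K1 = {k1}:  critical time ~ {t_crit:.1f},  "
          f"critical deficit ~ {d_crit:.3f}")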
     [FIGURE 3.  Tabular printing output for K1 = 0.26:  TIME, DB, DD, B,
      and D printed at intervals of 0.5 from 0 to 25.]
REFERENCES

1.  CSMP III Program Reference Manual (SH19-7001-2),
    IBM Corporation (1972)

2.  CSMP III and Graphic Feature General Information
    (GH19-7000-1), IBM Corporation (1971)

3.  CSMP III Operations Guide (SH19-7002-1), IBM
    Corporation (1972)

4.  CSMP III Graphic Feature Program Reference
    Manual (SH19-7005-1), IBM Corporation (1972)

5.  CSMP III Graphic Feature Operations Guide
    (SH19-7004-1), IBM Corporation (1972)

6.  Gordon, Geoffrey, "System Simulation,"
    Prentice-Hall, Inc. (1969)

7.  Ciaccio, Leonard, Water and Water Pollution
    Handbook, Volume 1, Marcel Dekker, Inc., New
    York (1971)

8.  Camp, Thomas R., Water and Its Impurities,
    Reinhold Publishing Corporation (1963)
                                                      256

-------
     [FIGURE 4.  Merged print-plot output of the oxygen deficit curve (D)
      and the BOD curve (B) for K1 = 0.26.

      FIGURES 5 AND 6.  Tabular printing output and merged print-plot
      output for the run with K1 = 0.27.]
                                                    257

-------
                      GASP IV CONCEPTS APPLICABLE TO ENVIRONMENTAL MODELING AND SIMULATION

                                               A. Alan B. Pritsker
                                Purdue University and Pritsker & Associates, Inc.
                                               Lafayette, Indiana
                       Abstract

     Environmental modeling and simulation involves the
characterization of a system in order to determine the
dynamic performance of the system over a specified
period of time.   To obtain this dynamic portrayal of
the system variables, it is necessary to identify and
model the state  variables of the system and the points
in time at which logical decisions are made to change
the status of the system.  There has been a growing
trend to model systems that involve continuous variables
with discrete events superimposed in order to alter the
behavior of the  system status.  This paper presents the
fundamental concepts of the GASP IV simulation language
that are used to obtain combined simulations.  Speci-
fically the paper includes definitions and explanations
of the following basic simulation concepts:  system
status representation; time-events and state-events;
time advance procedures; and data collection and
analysis.  Two examples of combined models are pre-
sented that illustrate the concepts as applied to en-
vironmental modeling and simulation:
  1) Simulation of electroplating operations to evalu-
     ate different operating procedures and control
     policies regarding metal flow and concentration
     levels; and
  2) A model of an urban area with discrete events
     superimposed.

     This paper presents new modeling and  simulation
concepts that are useful for resolving environmental
problems.

          Description of GASP IV^8,9,10,17

      GASP  IV  is a  FORTRAN  based simulation language
 that can be used  for discrete, continuous, or  combined
 simulation.*  The  interactions between discretely and
 continuously  changing variables are  easily modeled in
 GASP IV.  Extensive  use  and applications have  been
 made of  GASP  IV.

      In  GASP  IV a  system  is modeled  in two dimensions,
 the time dimension  and  the state-space dimension.
 These  dimensions  are  further  decomposed  into manageable
 elements.   In the  time  dimension  this  involves the de-
 fining of  events  and  the  potential  changes to  the
 system when an  event  occurs.  The user must specify  the
 causal mechanisms  by which events can  occur.   GASP IV,
 however,  sequences these  events  in  simular time.   Thus,
 the user must define only the mathematical-logical re-
 lations  that  transpire  at  an  event  occurrence, and he
 is not required  to model  the  timing  of  the events
 during the  simulation.

      In the  state-space dimension the  system is decom-
 posed into its  entities which are described by attri-
 butes.  The  attributes  are further classified  as dis-
 crete or continuous.  The value  of a discrete  attribute
 remains constant  between event  times.   The value of a
 continuous attribute, hereafter referred to as a state
 variable,  may change between  event times according to  a
 prescribed  dynamic behavior.   Special  storage  arrays
 are provided  by GASP IV for storing values of  state
 * GASP PL/I is a PL/I version of GASP IV.11
variables and, if required, their derivatives and im-
mediate past values.

     A dynamic simulation is then obtained by modeling
the events of the system and by advancing time from one
event to the next.  Events usually cause changes in the
status of the system or in the equations defining the
state variables of the system.  However, change, either
discrete or continuous, need not occur at an event time.
Events could occur at decision points where the decision
is not to change the status of the system.  Conversely,
the system status may change continuously without an
event occurring as long as these status changes have
been prescribed in a well-defined manner.

     Those events that occur at a specified projected
point in time are referred to as time-events.  They are
commonly thought of in conjunction with next-event sim-
ulation.  Those events that occur when the system
reaches a particular state are called state-events.
Unlike time-events, they are not scheduled in the
future but occur when state variables meet prescribed
conditions.  In GASP IV, state-events can initiate
time-events and time-events can initiate state-events.

     The behavior of a system model is simulated by
computing the values of the state variables at small
time steps and by computing the values of the attributes
at event times.  The time step increment is automatical-
ly determined by GASP IV based on the equation form for
the state variables, the time of the next event, and
accuracy and output requirements.

     When an event occurs, it can change the system's
status in three ways:  it can alter the value of state
variables or the attributes of the entities; it can
alter the relationships that exist among entities or
state variables; or it can change the number of enti-
ties present.  Any of these changes can result from the
occurrence of an event.  Between event times, only the
values of the state variables can change and such
changes must be in accordance with prescribed equations.

     At each time step, the state variables are evalu-
ated to determine if the conditions prescribing a
state-event have occurred.  If a state-event was passed,
the step size was too large and is reduced.  If a
state-event occurs, the model status is updated accord-
ing to the user's state-event subroutines.  Step size
is automatically set so that no time-event will occur
within a step.  This is accomplished by setting the
step size so that the time-event ends the step.
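     To make the time-advance and state-event logic just described concrete, the following is a minimal sketch in Python (GASP IV itself is a set of FORTRAN subprograms; none of the names below are GASP IV routines).  A continuous state variable decays between events, a step never extends past the next scheduled time-event, and a step that passes the state-event threshold is treated as too large and is cut down until the crossing is located.

    # Illustrative sketch only; names and rates are assumptions, not GASP IV.

    def integrate(ss, dt, rate=0.1):
        # Advance the continuous state variable one step (simple Euler rule).
        return ss - rate * ss * dt

    def locate_state_event(ss, tnow, dt, threshold, n_bisect=40):
        # The trial step passed the state-event, so the step was too large:
        # shrink it (here by bisection) until the step ends at the crossing.
        lo, hi = 0.0, dt
        for _ in range(n_bisect):
            mid = 0.5 * (lo + hi)
            if integrate(ss, mid) > threshold:
                lo = mid           # crossing lies beyond mid
            else:
                hi = mid           # crossing lies at or before mid
        return integrate(ss, hi), tnow + hi

    def advance(ss, tnow, t_next_event, threshold, hmax=1.0):
        # Advance toward the next time-event, watching for the state-event
        # "ss falls to threshold".
        while tnow < t_next_event:
            dt = min(hmax, t_next_event - tnow)   # never step past the time-event
            trial = integrate(ss, dt)
            if trial <= threshold:                # step passed the state-event
                ss, tnow = locate_state_event(ss, tnow, dt, threshold)
                print("state-event at t = %.4f, ss = %.4f" % (tnow, ss))
                return ss, tnow                   # event routine would run here
            ss, tnow = trial, tnow + dt
        print("time-event reached at t = %.4f, ss = %.4f" % (tnow, ss))
        return ss, tnow

    advance(ss=10.0, tnow=0.0, t_next_event=50.0, threshold=5.0)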

     Since time-events are scheduled happenings, certain
attributes are associated with them.  At the minimum, a
time-event must have attributes that define its time of
occurrence and its type.

     In addition to the just described functions of pro-
viding automatic time advance, event scheduling and con-
trol, continuous variable integration with variable step
size and user specified accuracy requirements, and
discrete-continuous interaction procedures; GASP IV also
provides subprograms that accomplish statistical data
collection, random deviate generation, program monitoring
                                                       259

-------
and error reporting, information storage and retrieval,
automatic statistical computation and reporting, stan-
dardized simulation reports, tabular and plotted
histograms, automatic plotting routine, and built-in
flexibility in output reports and other provided func-
tions.  Table 1 presents a list of the GASP IV subpro-
grams and user-written subprograms that are used to
accomplish these functional capabilities.

Table 1. Categorization of GASP IV and User-Written
         Subprograms According to Functional Capability
Function                        GASP IV Provided                 User-written*

Time advance and status         GASP                             STATE, SCOND, EVNTS, and
  update                                                           specific event subprograms

Initialization                  DATIN, CLEAR, SET                Main program, INTLC

Data storage and retrieval      FILEM, RMOVE, CANCL, COPY,
                                NPRED, NSUCR, NFIND

Location of state-events        KROSS

Monitoring of system            MONTR                            UMONT
  simulation

Error reporting                 ERROR                            UERR

Data collection and             COLCT, TIMST, TIMSA, HISTO,      SSAVE, OTPUT
  reporting                     GPLOT, PRNTQ, PRNTS, SUMRY

Miscellaneous support           SUMQ, PRODQ, GTABL, GDLAY

Random deviate generation       DRAND, UNFRM, TRIAG, RNORM,
                                ERLNG, GAMA, BETA, NPSSN,
                                EXPON, WEIBL, DPROB, RLOGN

* Only those subprograms required by a specific application need be provided
  by the user.
     Through the use of these subprograms,  a GASP  IV
simulation model is developed.   The model  includes a
set of event programs or state variable equations  or
both that describe the system's stochastic/dynamic be-
havior; lists and matrices that store information; an
executive routine that directs the flow of  information
 and  control  within the model;  and various support
 routines.

      GASP  IV concepts  provide  a view of the world that
 simplifies model  building.   These concepts facilitate
 the  representation of  the relevant aspects of system
 behavior.  As a programming language, GASP IV gives the
 computer programmer a  set of FORTRAN subprograms de-
 signed  to  carry out the most important functions in
 simulation programming.  Modeling concepts are trans-
 lated by GASP IV  into  FORTRAN  routines that can be
 easily  used.   GASP IV  provides the link between the
 modeling and programming activities that is so important
 to a successful simulation  study, as well as providing
 a common basis for modeling diverse systems and a well-
 developed  framework which fosters communication between
 simulation modelers.

      In the  following  sections,  two examples are given
 that illustrate the combined modeling capabilities in-
 herent  in  GASP IV as applied to  environmental problems.

  Simulation of Electroplating Operations 2,3,4,13,14

      Cadmium  is used in the electroplating industry  to
 provide iron  and  steel  products  with protection against
 corrosion.  A cadmium  coating  also provides an attrac-
 tive appearance and  good  solderability.   The discharge
 of cadmium into the nation's waterways,  however,  poses
 an environmental  threat in  that  cadmium  has been  associ-
 ated with  several  chronic and  acute  effects in man and
 other species  even when present  in only  trace amounts.

     The barrel plating line simulated in this paper is
 represented  in  Figure  1.  Parts  to be plated are  placed
 in large perforated barrels.   Using  an overhead crane,
 an operator lowers  a barrel into the plating bath.  The
 parts in the barrel become  the cathode and  attract cad-
mium ions  from cadmium  anodes  which  are  periodically
 replenished by  the plater.   The  electrolyte is composed
 of sodium  cadmium-cyanide,  excess  sodium cyanide,  and
 sodium  hydroxide plus additional agents  and brighteners.
After a specified  time, the barrel  is lifted  out of the
bath.   The parts in the barrel retain a  certain volume
of bath liquid  at  the very  high  cadmium  concentration
of the  bath.   This is called dragout volume and dragout
concentration.  The barrel  is  then  immersed  in a running
rinse.  Running rinses  are  supplied  with  fresh water at
the bottom of the tank  and  empty via the  overflow at
 the  top of the  tank.  The dipping of a barrel  causes an
 increase in the rinse concentration  while  the  flowing
water decreases this concentration.  The  next  rinse is
an acid bath designed to brighten plated  parts with an
                          Figure 1.  A Barrel Plating Line.
        (Barrels carrying dragout pass from the plating bath through a
        running rinse, an acid rinse, and a second running rinse; the
        running rinses are supplied with fresh water and discharge
        effluent streams, and the acid rinse is dumped monthly.)

                                                       260

-------
attractive finish.  The  final rinse is another running
rinse.  The barrels are  then emptied, washed and reused.
The discharge of cadmium is  due to the continuous ef-
fluent flow from the running rinses and the periodic
dumping of the acid rinse.

     The process is modeled  in terms of the events at
which cadmium concentrations or equations describing
these concentrations can be  altered in the various
parts of the process.

     The insertion and removal of  barrels in the rinse
tanks and the dumping of tank contents are the time-
events of the process.   Each barrel has attributes as-
sociated with it that characterize the type of parts in
the barrel, the dragout  volume in  the barrel, the con-
centration of this dragout volume, and codes denoting
the next processing point (next event type) for the
barrel and the time of occurrence  of this event.  When
a barrel is placed in a  running rinse, it causes a
surge of effluent equal  to the displacement of the
barrel.  The effluent due to the surge is assumed to
have the current cadmium concentration of the rinse and
occurs instantaneously.  The barrel then immediately
causes an increase in rinse concentration dependent
upon the current amount  of cadmium in the rinse, the
volume of the rinse, and the volume and concentration
of the dragout in the barrel.
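     As a sketch of the mass balance implied by this event description (the names and numbers below are illustrative assumptions, not values from the study), the new rinse concentration after a barrel insertion might be computed as follows:

    # Illustrative mass balance for the barrel-insertion time-event.
    def insert_barrel(rinse_conc, rinse_vol, barrel_disp, dragout_vol, dragout_conc):
        # Concentrations in mg/l (ppm), volumes in liters; all values assumed.
        # Surge: a volume equal to the barrel displacement overflows instantly
        # at the current rinse concentration.
        surge_mass = barrel_disp * rinse_conc                 # mg of cadmium out
        remaining_mass = rinse_conc * rinse_vol - surge_mass
        remaining_vol = rinse_vol - barrel_disp
        # The dragout then mixes into what is left in the tank.
        new_mass = remaining_mass + dragout_vol * dragout_conc
        new_conc = new_mass / (remaining_vol + dragout_vol)
        return new_conc, surge_mass

    conc, surge = insert_barrel(rinse_conc=2.0, rinse_vol=1000.0,
                                barrel_disp=40.0, dragout_vol=1.5,
                                dragout_conc=20000.0)
    print("new rinse concentration = %.2f ppm, surge discharge = %.0f mg" % (conc, surge))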
     The barrel stays in the rinse until it is scheduled to be
withdrawn.  During this stay in the rinse, the concentration of
the rinse decreases due to the fresh water supply.  Immediately
after withdrawal, the tank begins to refill, and the
concentration decreases in a different manner than when the
barrel was in the tank.  This is because fresh water is entering
the tank but no effluent is leaving.  When the tank is
completely refilled, the effluent again starts pouring from the
tank.  Other tanks of the system are modeled in a similar
fashion.
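     A sketch of the two continuous regimes just described (a well-mixed tank is assumed; the names, volumes, and flow rates are illustrative only):

    import math

    def conc_with_barrel_in(c0, fresh_flow, liquid_vol, t):
        # Overflowing tank: fresh water enters and effluent leaves at the same
        # rate, so the concentration dies away exponentially.
        return c0 * math.exp(-fresh_flow * t / liquid_vol)

    def conc_while_refilling(c0, vol_at_withdrawal, fresh_flow, t):
        # After the barrel is lifted out the tank refills: fresh water enters
        # but no effluent leaves, so the cadmium mass stays constant and only
        # the rising volume dilutes the concentration.
        return c0 * vol_at_withdrawal / (vol_at_withdrawal + fresh_flow * t)

    # Example: a 1000-l rinse fed with 50 l/min of fresh water.
    print(conc_with_barrel_in(c0=5.0, fresh_flow=50.0, liquid_vol=1000.0, t=3.0))
    print(conc_while_refilling(c0=5.0, vol_at_withdrawal=960.0, fresh_flow=50.0, t=3.0))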

     In Figure 2, a plot for one of the simulation runs of the
GASP IV model of the electroplating line is presented.  Time, in
three minute intervals, is plotted on the independent axis.
Cadmium concentration, in parts per million, and also cadmium
amounts in ounces are represented on the dependent axis.  The
symbol 1, representing cadmium concentration in rinse tank 1,
has a sawtooth behavior pattern.  Increases in this variable are
the result of barrel inserts into the tank.  The die-away curves
following the increases are the result of fresh water diluting
the concentration as it enters the tank.  The total process
effluent is plotted using the symbol 9.  As expected, the plot
shows a continual increase in cadmium released as the process
continues.  Figure 3 shows a statistical summary of dragout
volume observations made during the simulation.  A histogram
                          Figure 2.   Plot of State Variables for Plating Line  Simulation.
                           Figure  3.   Statistical Summary for Plating Line  Simulation.
                                                        261

-------
                  Figure 4.  Histogram of Peak Cadmium Concentration of Rinse 1.
of interest to the modeler and not currently available
from the direct measuring procedures used is shown in
Figure 4.  It represents observations on peak concen-
tration values (occurring immediately after a barrel
insertion) of the concentration in rinse 1.

     An interesting feature of this model is the trade-
off between production rate and amount of pollution in
the effluent.  One way to decrease the amount of cadmium
in the effluent is to decrease the amount of cadmium in
the dragout from the plating bath.  This can be accom-
plished by decreasing the process flow by requiring a
dwell time over the plating bath.  Simulation runs were made to
evaluate the tradeoff between the increased cost due to the
production slowdown and meeting EPA standards on cadmium
concentrations in the effluent.

   GASP IV Model of Cadmium Flow in an Urban Area 15

     In this example, the sixty square mile region of
extreme northwestern Indiana which includes the cities
of Gary and East Chicago was modeled.  There are several
hundred sources of cadmium emissions in the region, which
can be to air, to water, or to both.  The impact of
each source on the various ecosystems in the region is
unclear.  Questions that were raised and for which the
model was designed are:  1) what are the flow patterns
and characteristics of cadmium in the urban area under
study?; 2) what are the levels of cadmium on urban
structures?; and 3) what control policies may be useful
in meeting pollution standards?

     The major compartments for one portion of the
model developed are shown in Figure 5.  In this flow-
chart, the rectangles represent compartments (or levels
in systems dynamics terminology), the circles represent
generation rates, and the lines between circles and/or
rectangles represent transfers.  Equations relating the
compartments to one another were developed, and these
were used to define the state variables for the system.
Source emission data was used in order to obtain the
input components for the model.  The inputs along with
the equations for the state variables constitute the
continuous portion of the model.  Superimposed on this
continuous model are rainfall events which wash down
the urban structures and provide inputs of cadmium to
              Figure 5.  Major Compartments for Urban Submodel.
              (Compartments shown include sediment, sludge, and water out.)
                                                       262

-------
the municipal plant.  These discrete events are significant in
that they can cause overflow conditions to occur at the
municipal plant.  Such overflow conditions, occurring at the
time when cadmium and other particulate matter is washed from
the urban structures, could be a significant component of the
levels of pollution in the waterways to which the overflow is
directed.
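     The structure just described, continuous compartment transfers with discrete rainfall washoff events superimposed, can be sketched as follows; every compartment, rate constant, and rain schedule here is an illustrative assumption and is not taken from the published model.

    # Illustrative compartment sketch; all numbers below are assumptions.
    structures = 100.0      # cadmium on urban structures (kg)
    waterway = 10.0         # cadmium in the receiving waterway (kg)

    EMISSION_RATE = 0.5     # kg/day deposited on structures from air sources
    TRANSFER_RATE = 0.01    # 1/day, slow dry transfer from structures to waterway
    WASHOFF_FRAC = 0.4      # fraction of the structure loading removed per rain
    PLANT_CAPACITY = 30.0   # kg per event the municipal plant can accept

    def continuous_step(dt):
        # Advance the continuous (state-variable) portion of the model.
        global structures, waterway
        transfer = TRANSFER_RATE * structures * dt
        structures += EMISSION_RATE * dt - transfer
        waterway += transfer

    def rainfall_event():
        # Discrete event: washoff goes to the plant; any excess bypasses it
        # and reaches the waterway directly (the overflow condition).
        global structures, waterway
        washed = WASHOFF_FRAC * structures
        structures -= washed
        bypassed = max(0.0, washed - PLANT_CAPACITY)
        waterway += bypassed
        return bypassed

    for day in range(1, 31):
        continuous_step(dt=1.0)
        if day % 10 == 0:                      # assume rain every tenth day
            print("day %d: bypassed %.1f kg" % (day, rainfall_event()))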
     The time behavior of each level of each compartment was
obtained during the simulation.  In addition, the peak values
for particulate matter on the urban structures were collected,
along with the percent of time that the municipal plant was
bypassed and the amount of pollution in the effluent that was
bypassed.  Sensitivity studies were performed to assess the
effects of different rates of increase (or decrease) of total
emissions on the levels of the state variables describing the
system.  Results from this study have been published
previously.15

                        Summary

     The basic concepts of combined simulation inherent in the
GASP IV simulation language have been presented.  The
interaction between state variables, time-events, and
state-events has been demonstrated through examples.  The two
examples presented in this paper illustrate the types of
environmental systems that can be studied and analyzed using
GASP IV.

                      References

1.  Fishman, G. S., Concepts and Methods in Discrete
    Event Digital Simulation, New York:  John Wiley &
    Sons, Inc., 1973.

2.  Grant, F. H., and A. A. B. Pritsker, "Models of
    Cadmium Electroplating Processes," NSF(RANN) Grant
    No. GI-35106, Purdue University, December 1974.

3.  Grant, F. H., and A. A. B. Pritsker, "User's
    Manual for the Electroplating Simulation Program
     (ESP)," NSF(RANN) Grant No. GI-35106, Purdue
    University, December 1974.

 4.  Grant, F. H., and A. A. B. Pritsker, "Technical
    Description of the Electroplating  Simulation
    Program (ESP)," NSF(RANN) Grant GI-35106, Purdue
    University, December 1974.

 5.  Green, R., "AN-PTC-39 Circuit Switch Simulation,"
     1975 Winter Computer Simulation Conference,
    December  18-19,  1975, Sacramento,  California.

 6.  Mihram, G. A., Simulation:  Statistical Foundations
    and Methodology, New York:  Academic Press, 1971.

 7.  Nagy, E. A.,  "Intermodal Transshipment Facility
     Simulation:   A Case Study," 1975 Winter Computer
     Simulation Conference, December 18-19, 1975,
    Sacramento, California.

 8.  Pritsker, A.  A.  B., The GASP IV Simulation Language,
    New York:  John  Wiley & Sons, Inc., 1974.

 9.  Pritsker, A.  A.  B., "Three Simulation Approaches to
    Queueing  Studies Using GASP IV," Computers &
     Industrial Engineering, Vol. 1, No. 1, 1976.

10.  Pritsker, A.  A.  B., "GASP IV Simulation for
    Scientists of Systems," Proceedings of the Annual
    Meeting of the Society for General  Systems
    Research, January 27-30, 1975, New York.

11.  Pritsker, A. A. B., and R. E. Young, Simulation
    With GASP PL/I, New York:  John Wiley & Sons, Inc.,
    1975.

12.  Schooley, R. W., "Simulation in the Design of a
    Corn Syrup Refinery," 1975 Winter Computer
    Simulation Conference, December 18-19, 1975,
    Sacramento, California.

13.  Sigal, C. E., "Designing a Production System with
    Environmental Considerations," Proceedings of the
    AIIE Fall Institute Conference, New York, 1973.

14.  Sigal, C. E., "Modeling Cadmium Discharge from an
    Electroplating Line with the GASP IV Simulation
    Language," Proceedings of the First Annual NSF
    Trace Contaminants Conference, Oak Ridge National
    Laboratory, pp. 89-107.

15.  Talavage, J., and M. Triplett, "GASP IV Urban
    Model of Cadmium Flow," Simulation, Vol. 23, No. 4
    (October 1974), pp. 101-108.

16.  Wong, G., "A Computer System Simulation with
    GASP IV," 1975 Winter Computer Simulation
    Conference, December 18-19, 1975, Sacramento,
    California.

17.  Wortman, D. B., "The Anatomy of GASP IV,"
    Simuletter, Vol. 6, No. 1 (October 1974), pp. 60-
    64.

                                                       263

-------
                  RADIONUCLIDE REMOVAL BY THE pH ADJUSTMENT OF PHOSPHATE MILL EFFLUENT WATER
                                      David L. Norwood and Jon A. Broadway
                                    Eastern Environmental Radiation Facility
                                           Montgomery, Alabama 36109
                       Abstract

      Application of the GASP IV simulation system to
the waste water treatment process in a phosphate ore
milling industry has been presented.  Specific
attention has been directed to a quantitative evalua-
tion of precipitation of radionuclides due to the
liming treatment (pH adjustment) used in a wet process
plant and the residual radionuclides in effluent water.
The variation in output radionuclide concentrations
was studied as a function of important system parame-
ters such as flow rate, liming rate, and pH.  Extension
of this modeling capability to other large industrial
applications has been discussed and implications for
further study have been indicated.

                       Introduction

      A study of effluents from the phosphate mining
and milling industry in Florida has been underway since
1974 by the Eastern Environmental Radiation Facility
(EERF) in Montgomery, Alabama.  One goal of this study
has been to develop a simulation of the liming process
used to treat waste water before its discharge into
the environment.  This liming treatment is used to
adjust the pH of the waste water for the removal of
fluoride and phosphorus.  It was shown by the EERF
that radionuclides are also removed by this process.

      Results from further field work which will pro-
vide information on the parameters important in the
radionuclide removal process will be incorporated into
the model presented in this report.  The objective of
this study is to establish a computer model which will
be helpful in estimating the radionuclides, fluoride,
and phosphorus which will be present in phosphate
industry effluents.

      Field measurements made during 1974 produced data
which indicate that the liming process reduces 226Ra
concentrations over a range of 95% to greater than 99%.
The data used in this report were collected at an
operating wet process phosphoric acid plant in central
Florida.  An effort was made, however, to see that the
model presented here is sufficiently general that it
could easily be adapted to other plants which also use
the liming method of pH adjustment.  The parameters
which characterize this model are all defined in one
subroutine, which sets all initial conditions, or are
set up as input values to the model.

          Overview of the Effluent Treatment

      The wet processing of phosphate ore involves the
addition of sulfuric acid to the phosphate ore to
produce gypsum and phosphoric acid.  The gypsum is
removed from the process water by allowing it to settle
out.  The process water is retained in a lake for con-
tinued sedimentation and reuse in the plant.  Due to
rainfall into this lake it is sometimes necessary to
release some process water to the general environment.
Prior to this release, the process water is routed
through a series of ponds where it is treated with a
lime slurry to effect pH adjustment.  These ponds also
allow for settling out of solids precipitated when the
pH is raised.  Both the raising of pH and adequate
settling time are necessary for effective removal of
fluorides, phosphorus, and radioactivity.
      The treatment under consideration in this study is the
 double liming process used at a wet process plant in central
 Florida.  This process consists of adding lime to the process
 water as it enters the system of ponds.  The process water
 contained in the holdup pond is typically at a pH of 1.5 to 3.0
 before the liming treatment begins.  For the purposes of the
 model presented in this paper, the process water was assumed to
 be at a pH of approximately 2.5 at the start of the first
 liming stage.  Contact with the lime causes sedimentation and a
 pH increase until the second liming occurs.  The second liming
 stage starts at a pH of approximately 4.0, and continued
 contact with the lime solution causes increased sedimentation;
 the pH is increased to a range of 7 to 10 at the point of
 release to the surface water system.

    Laboratory Measurement  of Sedimentation Rates

      One portion of this study involved a series of
 laboratory experiments to characterize the sedimentation rate
 of 226Ra from the process water treated by
 the lime as a function of  time for given values of
 starting pH levels.  The actual process water and
 lime slurry as used by the phosphate plant were used
 in  the laboratory study.

      A process water sample of six liters was stirred
 continuously while lime was added to reach the pH
 value for the sedimentation study.  Lime addition was
 stopped at a pH of 2.5 and a timer was then started
 to  measure sedimentation rates.  Samples of the super-
 nate were removed at t = 0, 5, 10, 100, and 400 minutes for
 226Ra analysis by the radon emanation
 method as described in the American Public Health
 Association's Methods for the Examination of Water and Waste
 Water.2  A similar experiment was performed
 for the second sedimentation process with a second
 process water sample with  adjustment of the solution
 to  an initial pH of 4.0, in order to approximate the
 conditions at the start of the second liming step in
 the industrial process.  Measured concentrations of 226Ra at
 the two initial pH levels are given in Table 1.  Total 226Ra in
 this case is the sum of the dissolved and undissolved 226Ra.
                            Table 1
     226Ra Concentrations in Process Water After Liming as
             Measured in Laboratory Experiment

  Sedimentation time        Total 226Ra for     Total 226Ra for
  after initial pH is       initial pH = 2.5    initial pH = 4.0
  obtained (min.)               (pCi/l)             (pCi/l)

          0                       5.5                 6.6
          5                       2.1                 8.42
         10                       2.94                6.62
        100                       1.54                0.54
        400                       2.16                0.42
      For a given pH we have assumed that the
 retention rate of the radionuclides in the effluent
 water after treatment is dependent solely on the time
 spent in the liming ponds before release to the out-
 side environment, and since the settling rate at any
 time is proportional to the amount of radionuclides
 present in the process water, we are led to an
 equation of the form:
                                                       264

-------
           C = C0 e^(-λt)                                     (1)

 where:     t  = time,
            C  = concentration at time t,
            C0 = initial concentration, and
            λ  = settling factor to be determined.
     Using a stepwise Gauss-Newton iteration procedure on the
parameters λ and C0, a nonlinear least squares curve of the form
(1) was calculated.  This was facilitated by running the BMD07R
program from the Bio-Medical Statistical package of programs
developed by the UCLA Health Sciences Computing Facility5 on an
IBM 370 computer operated by Optimum Systems Incorporated of
Bethesda, Maryland.  This program was run once with each of the
two sets of data given in Table 1.  The complete results of the
two runs are given in the Appendix.  For an initial pH of 2.5,
the procedure produced a value of λ = 0.0018, and for an initial
pH of 4.0 the procedure produced a value of λ = 0.020.  Al-
though these two values need further refinement, they
do give a reasonable approximation of the results ob-
tained in actual field measurements.  Further experi-
ments can be run with an assortment of pH values in an
effort to develop a single, reliable equation relating
the concentration at time t to both the pH and the time
spent in the liming ponds.  When this final equation is
determined, the model will provide  a simple mechanism
for  estimating the effects of various combinations of
liming practices at  the  two liming  points.
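     For readers without access to the BMD package, the same kind of fit can be sketched with a modern nonlinear least squares routine; the data are those of Table 1, and the fitted λ values should come out close to the ones quoted above (scipy is used here only as a stand-in for BMD07R, not as the program the authors ran).

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([0.0, 5.0, 10.0, 100.0, 400.0])         # minutes
    ra_ph25 = np.array([5.5, 2.1, 2.94, 1.54, 2.16])     # total 226Ra, pH 2.5 run
    ra_ph40 = np.array([6.6, 8.42, 6.62, 0.54, 0.42])    # total 226Ra, pH 4.0 run

    def model(t, c0, lam):
        return c0 * np.exp(-lam * t)                     # equation (1)

    for label, y in (("pH 2.5", ra_ph25), ("pH 4.0", ra_ph40)):
        (c0, lam), _ = curve_fit(model, t, y, p0=(y[0], 0.001))
        print("%s: C0 = %.3f, lambda = %.5f" % (label, c0, lam))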
                       GASP IV

      A decision to use GASP IV as  the modeling
  language was made because of its combined
  discrete/continuous modeling capabilities and because
  of  the authors' familiarity with FORTRAN, the host
  language of GASP  IV.  Actually, the initial system
  presented here could have been modeled with a strictly
  continuous language, but  future embellishments will be
  more easily implemented if we also have the discrete
  event case available.

      GASP IV is a combined discrete/continuous FORTRAN
  based simulation language which comes to the user as a
  set of FORTRAN subroutines.  Because of its complex-
  ity, it requires more effort to use successfully than
  some of the other strictly discrete or strictly
  continuous simulation languages; however, if a model
  has both discrete and continuous components, then
  GASP IV can be well worth the extra effort.  The fact
  that it is FORTRAN based simplifies the writing of
  subroutines required to customize GASP IV for the
  user's application and generally obviates the need to
  learn another high level computer language.  The user
  has to provide subroutines (in FORTRAN) to process
  and schedule events and to initialize his continuous
  and non-GASP variables, and to allow for any addition-
  al output and/or error messages which are desired but
  not supplied by GASP itself.

                     The GASP Model

      The basic model which is described in this report
  is an attempt to simulate the flow of liquid effluents
  from the process water pond through the liming ponds.
 We assume that the initial liming occurs at the
  entrance to the liming system,  and that the second
  liming occurs somewhere between entry into the system
 and exit from it.   For simplification, we have
  initially made the assumption that the system we are
 dealing with has reached equilibrium in the sense
 that the volumetric flow rate of the effluents through
 the system is essentially constant.  As more precise
 information about  the topography of the liming ponds
 is obtained we will easily be able to incorporate this
into the model, but for now, only small errors will
probably be induced by the assumption of constant
volumetric flow rate.  We have also made the  assump-
tion that the cross-section of the liming pond system
at any point is a segment of a circle.  This  does not
seem like an unreasonable assumption and it allows us
to calculate the cross-sectional area from other known
parameters.4

     In setting up the GASP model for the flow of the
effluents through the liming ponds it seemed  that
distance traversed through the liming pond system
would be a more natural independent variable  than
actual time spent in the liming pond system.  Thus,
for purposes of this simulation the GASP variable
TNOW was used to represent the distance that  the
effluent had traveled through the system.  This is
one indication of the adaptability of the GASP IV
simulation language.  In fact, no inconsistencies at
all are introduced by using distance instead  of time
as the independent variable.  Thus, in addition to
TNOW, the GASP variables TTBEG, TTLAS, TTNEX  and TTFIN
refer to distance from entry into the liming  pond
system at initial liming, the last update point, the
projected next update point, and exit from the system,
respectively.

     The state variables SS(1) and SS(2) are  used to
denote the concentration at distance TNOW from entry
into the liming system, and the time, in minutes, that
it took to traverse the distance TNOW.

     Other variables introduced into the program are
VF = Volumetric Flow Rate of the effluent through the
system (recall our assumption of a constant VF
throughout the system), TM = elapsed time from last
state event, DLIM2 = the distance from entry  into the
system that the second liming occurs, and a few
variables which are used only in the user added
FUNCTION AREA which calculates the cross-sectional
area of the pond as a function of distance from entry
point.

     For our purposes at EERF, we have modified the
original GASP IV software somewhat to achieve faster
throughput and turn around of our runs, at the expense
of storage for the rather large arrays which  are pro-
vided with the stock version of GASP IV.  A list of
the arrays which can be conveniently reduced  in size
in this manner is provided on pages 77 and 80 of Dr.
Pritsker's book, The GASP IV Simulation Language.
Following is a brief discussion of each of the user
written subprograms used in this model.  A block
diagram of their interrelationship to each other and
GASP IV is given in Figure 1.

Subroutine STATE

     Subroutine STATE first calculates the time
increment since the last update of the concentration.
Since distance is the independent variable, DTNOW
gives us the distance traveled since the last update.
Thus, if we know the volumetric flow rate and the
average cross-sectional area of the ponds over that
distance we can easily calculate the time lapse, (TM)
by

          TM = DTNOW*AREA(X)/VF                   (2)
where     AREA(X) = cross-sectional area at
          distance X from initial liming
and       TTLAS ≤ X ≤ TNOW.
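     A sketch of the update such a STATE routine performs, with distance as the independent variable (the area profile, flow rate, settling factor, and pond length below are placeholders, not the plant's values):

    import math

    def area(x):
        # Cross-sectional area (m2) at distance x (m); a placeholder profile.
        return 150.0

    def state_update(conc, elapsed_time, x_last, x_now, vf, lam):
        # Advance the concentration and the elapsed time over the distance
        # step DTNOW = x_now - x_last, per equations (2) and (1).
        dtnow = x_now - x_last
        tm = dtnow * area(0.5 * (x_last + x_now)) / vf    # time lapse, eq. (2)
        conc = conc * math.exp(-lam * tm)                 # settling, eq. (1)
        return conc, elapsed_time + tm

    c, t = 5.5, 0.0
    for x in range(0, 600, 10):                           # 10-m distance steps
        c, t = state_update(c, t, x, x + 10, vf=16.0, lam=0.0018)
    print("after 600 m: t = %.0f min, concentration = %.4f" % (t, c))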
-------
Function Area

     The basic assumptions  made in calculating the
cross-sectional area  of  the pond system is that the
cross-section is a segment  of a circle.  We are then
able to get the cross-sectional area if we can get
the chord of this segment of a circle and the
perpendicular distance from the chord to the arc of
the segment by using  the law of cosines to get the
radius and applying the  standard formula for area of
a segment of a circle.4  The chord is just the width
of the liming pond at that  point and since the ponds
are kept dredged to a depth of about 7 feet, that
suffices as the other measurement.
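     A sketch of that calculation follows; this version obtains the radius from the standard chord/sagitta relation rather than spelling out the law-of-cosines step cited above, but the segment area is the same, and the pond width used is only an assumed example value.

    import math

    def segment_area(chord, depth):
        # Area of a circular segment given its chord (pond width) and its
        # height above the chord (the dredged depth).
        r = depth / 2.0 + chord ** 2 / (8.0 * depth)      # radius of the circle
        return (r ** 2 * math.acos((r - depth) / r)
                - (r - depth) * math.sqrt(2.0 * r * depth - depth ** 2))

    # Example: a pond 100 ft wide dredged to a depth of about 7 ft.
    print("cross-sectional area = %.1f sq ft" % segment_area(100.0, 7.0))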

Subroutine INTLC

     The initial conditions subroutine is used to
input the initial values for the volumetric flow rate
(VF), distance at which  the second liming occurs
(DLIM2), and the initial concentration (SS(1)), and
the initial time (SS(2)).

Subroutine SSAVE

     This subroutine  is  used to tell the GASP IV
executive which variables  (in this case distance,
TNOW, and time, SS(2)) are  to be plotted by the GASP
IV provided plot routine.

                      Conclusions

     Three simulations have been run at this time using
three different volumetric  flow rates.  The results of
                                 3
using a typical low VF of  10.6 m /min, and "average"
            3                             3
VF of 16.0 m /min and a  high VF of 38.9 m /min are
shown in Figure 2.  From these three outputs the
effect of modifying the  volumetric flow rate is
obvious.  The higher  the flow rate, the less time the
water spends in the liming  system and consequently
the more radionuclides that are released to the
environment.  Obviously  there is an optimal VF some-
where since a stagnant liming pond system (VF = 0) is
clearly not ideal.  As more information is gathered
concerning the phosphate plant release to the process
water pond and their  behavior in that pond, this model
can be used to determine the optimal VF.

     This model is only  a  first step in the process of
simulating the removal of radionuclides from the effluents
of a phosphate plant. However, with this basic model
and the discrete/continuous capabilities of GASP IV
more complex functions such as seasonal effects and
discrete lime addition can  be included.  Such refine-
ments will increase the  model's effectiveness as a
tool in the evaluation of  the release of radionuclides
into the environment  from phosphate plants using the
liming procedure.

                       References

1.  Guimond, R. J. and Windham, S. T., Radioactivity
Distribution in Phosphate Products, By-Products,
Effluents and Wastes, Technical Note, Environmental
Protection Agency, Office of Radiation Programs,
August 1975.

2.  American Public Health  Association, Methods for
the Examination of Water and Waste Water.

3.  Pritsker, A. A. B.,  The GASP IV Simulation
Language, New York, John Wiley & Sons, 1974.

4.  Standard Mathematical Tables, 14th Edition,
Chemical Rubber Company, 1964.
5.  BMD Biomedical Computer Programs, University  of
California Press, Berkeley, California, 1973.
       Figure 1.  Relationship of the User-Written Subprograms to GASP IV.

              Figure 2.  Output of GASP Simulation Using Various VF Factors
                         (Outputs From First Liming).
                                                        266

-------
                               APPENDIX

     BMDX85 - Non Linear Least Squares (revised November 19, 1971), Health
Sciences Computing Facility, UCLA:  output of the two curve fits of equation
(1) described in the text.

     Run 1 (problem code PH4; the initial pH = 4.0 data of Table 1):  the
iteration converged to C0 = 7.855 and λ = 0.0198, with an error mean square
of 1.260.

     Run 2 (problem code PH 2.5; the initial pH = 2.5 data of Table 1):  the
iteration converged to C0 = 3.304 and λ = 0.00183, with an error mean square
of 2.696.
                         268

-------
                                                AN APPLICATION
                                         OF BIASED ESTIMATION THEORY
                              TO ITERATIVE MAXIMUM LIKELIHOOD SEARCH STRATEGIES

                                             David J. Svendsgaard
                                           Mathematical  Statistician
                                      Health Effects Research Laboratory
                                     U.  S. Environmental Protection Agency
                                    Research Triangle Park, North Carolina
     In  scientific investigations, bias is generally
regarded as  something that is not wanted.   In theoret-
ical  statistics,  biased estimation has been studied
with  a view  toward obtaining improved estimators.
Oftentimes, such improved estimators require sub-
jective  input from the investigator — another
something that is not generally accepted in scientific
investigations.   This paper documents a situation
where the use of  biased estimation works better than
unbiased estimation.   Although we believe the
situation is important in its own right, we consider
it as evidence that biased estimation theory deserves
serious consideration in environmental modeling work.
     Here, we describe a modification to an existing
iterative scheme  for obtaining the maximum likelihood
estimators of the parameters of the cumulative
logistic function adjusted for natural responsiveness.
This  modification incorporates the concept of biased
estimation,  while the original scheme is viewed as
using unbiased estimation.  Empirical results indicate
that  a higher percentage of successful convergences
occur when the modification is used.  We believe that
the modification  is less dependent on the starting
values,  and  that  the modification requires subjective
input in a form that is easy for most potential users
to specify.
     We conclude  our discussion with comments on the
fitted dose-response model.
                     Introduction
Background
     Parametric dose-response models have great
 utility in quantifying the relationship between an
 agent and a health effect of that agent.  Knowledge
 of such relationships is key information for the
 determination of air quality standards.
     The parameters of such models are estimated from
 observed data.  The maximum likelihood estimators
 (MLE) of these parameters have good statistical
 attributes, and so the MLE is often calculated.
 Even for simple mathematical models, this calculation
 usually involves the use of iterative methods.  Such
 iterative schemes do not always successfully converge
 to the MLE.
     We have attempted to determine the MLE for the
 cumulative logistic function adjusted for natural
 responsiveness.  The data was obtained in an epidem-
 ic! ogical panel study.  Generally, there are no
 control groups for such studies and existing iterative
 methods can perform poorly under such circumstances.
 Difficulty in using one such method was experienced
 in this case.
     The success of the iterative method we used
 depended heavily on our ability to start the iteration
 with good starting estimates of the parameters
 (starting values).  If one set of starting values
 doesn't result in convergence, one tries another set.
 Complex subjective judgments may be required to obtain
 good starting values.  Since starting with the MLE
 as a set of starting values will usually result in
 convergence, one can always be accused of using poor
 starting values when the iterative scheme fails.
Statement of the Problem

     There are a couple of ways that an iterative
scheme can fail.  When very poor starting values are
used, convergence to a relative maximum may take
place, or the specified number of iterations may be
used up before convergence to the MLE has occurred.
Another type of failure results when the revised
estimates overshoot the MLE and get progressively
worse.  This type of failure occurs when a certain
matrix requiring inversion is ill-conditioned.  It is
this type of failure that was experienced in our case,
and it is this type of problem that our modification
is designed to handle.

Biased Estimation
     This type of failure can be viewed as the fault
of unbiased estimation.  At each iteration the
revised estimates are derived from a solution vector.
This solution vector can be considered the least
squares estimator of the parameters of a particular
linear model.  Although the least squares estimator
attains minimum variance among unbiased estimators,
this variance can be large.
     The theory of biased estimation is based on the
fact that mean squared error (bias plus variance) can
be smaller for a slightly biased estimator with much
smaller variance, than for an unbiased estimator.
This theory is applied to develop a class of
estimators appropriate for this problem.  The result
is an iterative maximum likelihood search strategy
in which the user of the method simply specifies a
bound on the difference between the MLE and the
starting value for the proportion of natural responses.
The method using the least squares estimator can be
viewed as considering this difference to be unbounded.
Thus, the user can specify a very crude bound and
expect to do better by incorporating this bound into
the modified method.

Overview

     First, we present the theory involved.  This
includes a description of the dose response model, a
description of the iterative method, the development
of a modification to this iterative method, and a
discussion on the subjective input required to use
the modification.
     Second, we do an empirical evaluation of this
modification.  The data used in the evaluation is
described.  Next, we describe the evaluation, list
the results, and draw conclusions.
     Finally, we make some comments on the application
of this model when it is fitted to epidemiological
panel studies.
                                                      269

-------
               Theoretical  Development

Description of the Cumulative Logistic Adjusted for
Natural  Responsiveness Dose-Response Model

     Consider a dose-response experiment where doses Zi at N
different levels are applied to ni subjects and at each dose
level ri subjects respond.

     If the administration of a dose Z causes a proportion P(Z)
of the test subjects to respond and other independent factors
acting on the test subjects during the experiment cause a
proportion C to respond, then the total expected proportion
responding will be

     P'(Z) = P(Z) + C - P(Z)C = C + (1 - C) P(Z).

This equation is called Abbott's Formula.  If P(Z) is the
cumulative logistic function

     P(Z) = 1 / [1 + exp(-(A + BZ))]                         (1)

then P'(Z) is known as the cumulative logistic adjusted for
natural responsiveness.
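     A small numerical sketch of these two formulas (the parameter values are arbitrary examples, not fitted values):

    import math

    def logistic(z, a, b):
        # P(Z) of equation (1).
        return 1.0 / (1.0 + math.exp(-(a + b * z)))

    def adjusted(z, a, b, c):
        # Abbott's Formula: P'(Z) = C + (1 - C) P(Z), the expected proportion
        # responding at dose Z when a proportion C responds naturally.
        return c + (1.0 - c) * logistic(z, a, b)

    for z in (0.0, 1.0, 2.0):
        print(z, round(adjusted(z, a=-2.0, b=1.0, c=0.10), 3))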

Tolerance Concept

     This model is motivated  by the concept  of a
tolerance.  An individual's tolerance is  defined  as
that level of dose Z such that doses higher than Z always cause
a response.  The purpose of the model is
to make inferences about  the  distribution of tolerances
in a target population.

Sampling Considerations

     In fitting such models, it is usual to assume that the ni
subjects exposed to a dose level Zi were randomly selected from
the target population.  Assume in addition that the proportion
of tolerances less than Z is P(Z) for each Z.  When these two
assumptions are correct, the probability of observing ri
responses from the ni subjects dosed at level Zi for
i = 1,2,...,N is

                     N
     L(A,B,C)  =     Π  (ni choose ri) P'(Zi)^ri [1 - P'(Zi)]^(ni-ri).
                    i=1

     In general, L(A,B,C) is called the likelihood function.
Those choices of A, B, and C (denoted by Â, B̂, and Ĉ) that
maximize L(A,B,C) are the MLE.
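     In practice the log of L(A,B,C) is maximized, which avoids numerical underflow; the following sketch evaluates it for made-up data (the dose levels, group sizes, and response counts are not from the panel study).

    import math

    Z = [0.0, 1.0, 2.0, 3.0]     # dose levels Zi (made-up)
    N = [50, 50, 50, 50]         # subjects ni at each level
    R = [6, 14, 30, 44]          # responders ri at each level

    def log_likelihood(a, b, c):
        # log L(A,B,C) for the cumulative logistic adjusted for natural
        # responsiveness; maximizing this over (a, b, c) gives the MLE.
        total = 0.0
        for z, n, r in zip(Z, N, R):
            p = c + (1.0 - c) / (1.0 + math.exp(-(a + b * z)))   # P'(Zi)
            total += (math.log(math.comb(n, r))
                      + r * math.log(p) + (n - r) * math.log(1.0 - p))
        return total

    print(log_likelihood(a=-2.0, b=1.5, c=0.10))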

Description of an Existing Iterative Method

     Based on an approximation to the first derivative of the
log likelihood evaluated at the MLE by the first order terms of
the Taylor-Maclaurin expansion, Finney1 has derived the
expressions used in iteratively solving for the MLE of the
parameters of the probit dose-response model adjusted for
natural responsiveness.  These expressions can be easily applied
to the logistic model described in (1).  When A_S, B_S and C_S
are the starting values for the parameters A, B, and C
respectively, at iteration number S, the revised estimates
(A_(S+1), B_(S+1), C_(S+1)) are obtained by adding the
corrections a0, a1, and a2 to A_S, B_S, and C_S, respectively.
     The formulae for a0, a1 and a2 are algebraically the same
as the formulae for the least squares estimator of the
parameters of the linear model

     yi = a0 x0i + a1 x1i + a2 x2i + ei                      (2)

for i = 1,2,...,N, where the ei are uncorrelated random
variables with zero mean and common variance σ².  The x0i, x1i
and x2i are all determined from A_S, B_S and C_S, and the
formula for the yi also involves ri.  The formulae defining the
xji and yi for j = 0, 1 and 2 involve the logistic proportions
Pi evaluated at the current estimates and associated weights wi,
and follow the expressions derived by Finney.1
     If C1 is obtained from inspection of the data, A and B can
be obtained by iteration to maximize the likelihood for this
value of C1.  The appropriate expressions involved in this first
stage of iteration are algebraically equivalent to the least
squares estimators a0 and a1 for the parameters of the model

     yi = a0 x0i + a1 x1i + ei,    i = 1,2,...,N;            (3)

which is obtained from (2) by setting a2 to zero.  Of course,
starting values for A and B are required even for this first
stage of iteration; for B0 one could use zero and set

     A0 = ln[(P̄ - C1)/(1 - P̄)],

where P̄ is the overall observed proportion of responses.  This
choice of starting value for A0 maximizes the likelihood
function for these values of B0 and C1.  Having obtained revised
estimates for A and B after a number of iterations from this
first stage, one could proceed to the second stage of iteration
using (2) and these revised estimates as starting
                                                      270

-------
values.
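
     As a minimal sketch of the two-stage idea just described (and not the
authors' program), the following Python fragment maximizes the binomial
log-likelihood numerically instead of using the working least squares
formulae; the function names, the use of scipy's Nelder-Mead routine, and
the clipping constant are illustrative assumptions.

# A minimal sketch (not the authors' code) of the two-stage scheme, assuming
# direct numerical maximization of the binomial log-likelihood in place of
# the working least squares formulae; all names here are illustrative.
import numpy as np
from scipy.optimize import minimize

def prob(A, B, C, z):
    # P = C + (1 - C)/(1 + exp(-(A + B z))), the logistic model with
    # natural responsiveness C, as in (1).
    return C + (1.0 - C) / (1.0 + np.exp(-(A + B * z)))

def neg_log_lik(params, z, r, n):
    A, B, C = params
    P = np.clip(prob(A, B, C, z), 1e-10, 1 - 1e-10)
    return -np.sum(r * np.log(P) + (n - r) * np.log(1.0 - P))

def two_stage_fit(z, r, n, C1):
    # Starting values: B = 0 and A = ln[(Pbar - C1)/(1 - Pbar)],
    # where Pbar = sum(r)/sum(n), as in the text.
    Pbar = r.sum() / n.sum()
    A0 = np.log((Pbar - C1) / (1.0 - Pbar))
    # First stage: hold C fixed at C1 and maximize over (A, B).
    stage1 = minimize(lambda ab: neg_log_lik((ab[0], ab[1], C1), z, r, n),
                      x0=[A0, 0.0], method="Nelder-Mead")
    # Second stage: free all three parameters, starting from the
    # first-stage estimates of A and B and the chosen C1.
    stage2 = minimize(neg_log_lik, x0=[stage1.x[0], stage1.x[1], C1],
                      args=(z, r, n), method="Nelder-Mead")
    return stage2.x  # fitted (A, B, C)

With the dose midpoints z, positive responses r, and group sizes n supplied
as numpy arrays, together with a chosen starting value C1 for C,
two_stage_fit returns fitted values of (A, B, C).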
     Based on empirical results, it appears that the
success of such a two-stage scheme depends on how well
C_1 is chosen and on the criterion used to determine
when to stop the first-stage iterations and go on to
the second stage.  A very stringent criterion is
wasteful of computer time if C_1 is chosen close to C.
When a very loose criterion is employed, e.g. omitting
the first stage entirely, failure rates are high.
The failures are usually the result of high correla-
tions among the x_{ji}, resulting in a singularity in
the matrix requiring inversion.  Invariably, such a
singularity is preceded by an unreasonably large
overestimation of the magnitude of a_2.
     When we regard (2) as the true model whose
parameters are to be estimated, the problems mentioned
in the last paragraph are a familiar weakness of
least squares estimation.  That is, even though the
least squares estimator achieves minimum variance
among unbiased estimators, this variance can be very
large when the independent variables are highly
correlated.  In those cases cited above, we view this
variance as being intolerably large.  Possibly a
slightly biased estimator with much smaller variance
can achieve a small enough mean squared error (squared
bias plus variance) to usually avoid these types of
problems.
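
     The point can be illustrated numerically.  The sketch below is not
from the paper: it fabricates a small, nearly collinear design with
arbitrary hypothetical coefficients and compares the Monte Carlo mean
squared error of the least squares estimate of a_2 with that of a crudely
shrunken (biased) version.

# A minimal numerical illustration (not from the paper): when two regressors
# are highly correlated, a slightly biased estimator of a coefficient can
# have much smaller mean squared error than least squares.  All quantities
# below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N, a2, sigma = 20, 0.3, 1.0
x1 = np.linspace(0, 1, N)
x2 = x1 + 0.01 * rng.standard_normal(N)      # nearly collinear with x1
X = np.column_stack([np.ones(N), x1, x2])

ls_est, shrunk_est = [], []
for _ in range(2000):
    y = 1.0 + 0.5 * x1 + a2 * x2 + sigma * rng.standard_normal(N)
    a_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    ls_est.append(a_hat[2])
    shrunk_est.append(0.2 * a_hat[2])        # crude shrinkage toward zero
ls_est, shrunk_est = np.array(ls_est), np.array(shrunk_est)

mse = lambda est: np.mean((est - a2) ** 2)
print("MSE of least squares a_2:", mse(ls_est))
print("MSE of shrunken (biased) a_2:", mse(shrunk_est))

With a design this collinear, the variance of the least squares estimate
dominates its mean squared error, so even an arbitrary shrinkage factor
does much better; the estimator developed below chooses the amount of
shrinkage from a specified bound rather than arbitrarily.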

Development of an Alternative Estimator via the
Consideration of Biased Estimation Theory
     Consider the class of biased estimators of the
 form
                                                    (4)
 where  kp, KI and k2 are constants,  we shall consider
 using  the a- in place of the a. in a single phase
 iteration scheme.  Note if the k^ are zero then the

 a.j  are the least squares estimators of the model
 described in (3) which are biased when a2 is nonzero.
 It  is  also possible to obtain the u. from (4) by

 letting the ki take on certain values.
     We shall use an error criterion that seems
appropriate for this problem and show how the k_i can
be chosen so that, for a wide range of a_2 values,
better performance in terms of this criterion is
attained by an estimator using the chosen k_i's than is
attained by using either of the estimators mentioned in
the paragraph above.  The choice of the k_i is made by
specifying an upper bound on a_2^2/σ^2.
     It turns out that the formula for the k_i that we
suggest using involves the specified upper bound and the
correlation between the x_{ji}.  Generally, this
correlation changes from one iteration to the next, so
that even though only one bound is used, the k_i vary
between iterations.  In those cases where successful
convergence takes place, the estimators are similar to
those that were suggested for the two stage scheme.
Initially, the x_{ji} are highly correlated.  In later
iterations, the correlations become smaller and the
ã_j's approach the â_j's.  Thus, instead of selecting a
criterion for deciding when to jump from the first to
the second stage of iterations, the use of the proposed
estimator eliminates the need to make this decision by
automatically incorporating it into the computation
process, based on the value of the specified upper
bound.
     Let η denote the true model at each iteration, and
consider approximating η over an interval of interest
ξ_0 ≤ x ≤ ξ_1 with

     η̂ = a_0 x_0 + a_1 x_1 + a_2 x_2 ,

where (a_0, a_1, a_2) is a vector of estimated regression
coefficients.  A reasonable measure of the closeness of
η̂ to η is the integrated mean square error

     J = (N/σ^2) ∫ E(η̂ - η)^2 dx ,

the integral being taken over the interval of interest.
Note that J = B + V, where

     B = (N/σ^2) ∫ (E(η̂) - η)^2 dx

and

     V = (N/σ^2) ∫ Var(η̂) dx .
     Let Y be the vector whose ith element is y_i, and
denote the design matrix whose ith row is
(x_{0i}, x_{1i}, x_{2i}) by

     X = [X_1 : X_2] ,

where X is N x 3, X_1 is N x 2, and X_2 is N x 1.  Then
E(Y) = Xa and Var Y = σ^2 I_N.
     We shall consider the class K of estimators of the
general form in (4), where

     H = I_N - X_1 (X_1'X_1)^{-1} X_1' .
                                                       271

-------
     Here

     γ = -(X_1'X_1)^{-1} X_1'X_2 ,

σ̂^2 is an estimator of σ^2 with E(σ̂^2) = σ^2, W denotes
the 3 x 3 matrix of integrated cross products of
(x_0, x_1, x_2) over the interval of interest, partitioned
conformably with X = [X_1 : X_2] into 2 x 2, 2 x 1, 1 x 2,
and 1 x 1 blocks, and k denotes the 2 x 1 vector
(k_0, k_1)'.
     The integrated mean square error J(k) of an estimator
in K is the sum of a bias term and a variance term, the
latter containing a contribution of the form
N trace(W) k'(X_2'HX_2)^{-1} k.  When k = γ the estimator
is unbiased, and when k = 0, V is smallest for any
estimator in K.  Differentiating J(k) with respect to k
and setting the derivatives to zero yields that the
minimum-J estimator in K is obtained by taking k = k*γ,
with

     k* = (a_2^2/σ^2) / [ a_2^2/σ^2 + (X_2'HX_2)^{-1} ] .

This value of k requires that one know a_2^2/σ^2, but by
specifying an upper bound M on a_2^2/σ^2 and using in its
place

     k+ = max( 0, [M(X_2'HX_2) - 1] / [M(X_2'HX_2) + 3] ) γ ,           (5)

it can be shown that

     J(k+) ≤ min( J(0), J(γ) ) .                                        (6)

     So if k+ is obtained by specifying any value for M
such that (6) is true, smaller J will be achieved using
(ã_0, ã_1, ã_2) with k = k+ than when k = 0 or k = γ.
Considerations on Choosing M

     One way to choose a value M that bounds a_2^2/σ^2 is
to find an upper bound for a_2^2 and a lower bound for
σ^2.  Therefore, first consider a_2 as being C - C_S,
since C = a_2 + C_S.  Partial justification for
considering a_2 = C - C_S can be based on a result from
large sample theory:  "...if first approximations are of
nonzero efficiency, one cycle of computation will yield
fully efficient estimates".1  By (1), C is a proportion,
so if we agree to set C_1 to some value between zero and
one, then (C - C_1)^2 ≥ a_2^2, and this bound is less
than one.
     Considering only the variability contributed to y_i
by p_i, we obtain an expression for Var y_i involving C.
When we assume that the starting values are equal to the
respective true parameters of the model, we have that
σ^2 is one.  For these reasons we suggest using M = 1.
     Smaller values for M, such as P_1, might be used,
but we have only shown that (6) is true when (5) is true
for the case where k is deterministic.  The use of any
information gained from the data in fixing k would
require that k be treated as random.
     Actually, considering the error criterion we are
using, even when M is carefully chosen the error may be
unacceptably large.  So if the iteration routine fails
one should consider varying M; however, choosing M
larger than 1 seems unnecessary.

                                                                 Empirical Evaluation of Modification

Description of the Data Used in Evaluating the Itera-
tion Methods
    Hammer et al.2 reported on data from student nurses
in Los Angeles who completed daily symptom diaries
during the period of their training.  The total numbers
of yes/no responses in four symptom categories, pooled
over days having the same range of maximum hourly oxi-
dant levels, were computed from Table 3 of this refer-
                                       ence.  The doses were the midpoints of these oxidant
                                       ranges.  The symptom categories are Headache, Eye Dis-
                                       comfort, Cough and Chest Discomfort with no accompany-
                                       ing fever, chills or temperature.  Due to the restric-
                                       tions, the positive responses are indicators of only
                                       mild personal discomfort.
    Assuming independence between days of responses
from the same student nurse, the numbers of positive
responses are taken to be binomially distributed with
parameters P_i and n_i, where

    P_i = C + (1 - C) / (1 + exp[-(A + B z_i)])     for i = 1, 2, ..., 9.

The MLE's for the parameters (A, B, C) of this model
are listed in Table I for each symptom.  The i's are
ordered so that z_1 is the lowest oxidant range and
z_9 is the highest.
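
    As a small illustration (not part of the original analysis), the sketch
below evaluates the fitted dose-response curves at the Table I maximum
likelihood estimates; the oxidant values in z_grid are hypothetical
placeholders rather than the study's actual dose midpoints.

# A small sketch, not from the paper: evaluating the fitted model
# P(z) = C + (1 - C)/(1 + exp(-(A + B z))) at the Table I maximum
# likelihood estimates.  The values in z_grid are hypothetical.
import numpy as np

mle = {  # (A, B, C) as reconstructed in Table I
    "Headache":         (-4.6269, 0.0407, 0.0954),
    "Eye Discomfort":   (-5.0457, 0.0931, 0.0417),
    "Cough":            (-9.9495, 0.1690, 0.0944),
    "Chest Discomfort": (-13.2602, 0.2243, 0.0177),
}

def fitted_prob(z, A, B, C):
    return C + (1.0 - C) / (1.0 + np.exp(-(A + B * z)))

z_grid = np.array([5.0, 15.0, 25.0, 35.0, 45.0])   # hypothetical oxidant levels
for symptom, (A, B, C) in mle.items():
    print(symptom, np.round(fitted_prob(z_grid, A, B, C), 3))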
                                                      272

-------
Description of Evaluation

     Iterations were run using three different start-
ing values for C and four different bounds on a_2^2/σ^2.
The starting values for C were:
     (a)  The observed proportion of total responses
corresponding to the lowest oxidant level.  For all
four symptom categories this proportion turned out to
differ from the MLE for C by less than 0.01.  The min-
imum observed proportion was used as a starting value
for C in the case of Chest Discomfort in order to
avoid the computation of the logarithm of a negative
number in the process of getting a starting value for
A.
     (b)  One half the proportion used in (a).
     (c)  Zero.
     The starting value for B was zero and the start-
ing value for A was
          ln[(P̄ - C_1)/(1 - P̄)] ,

where P̄ = (Σ_i r_i)/(Σ_i n_i) and C_1 is the starting
value used for C.
     The four different values for M used were P_1^2,
P_1, 1, and ∞.  Infinity corresponds to using the least
squares estimator for (2).
     Iterations were continued until the change in the
likelihood was zero (to the accuracy of the computer),
or until 30 iterations, or until the program faulted.
The maximum number of iterations actually required was 25.

                        Table I
      Maximum Likelihood Estimates for Each Symptom

  Symptom                    A          B         C

  Headache                -4.6269     0.0407    0.0954
  Eye Discomfort          -5.0457     0.0931    0.0417
  Cough                   -9.9495     0.1690    0.0944
  Chest Discomfort       -13.2602     0.2243    0.0177
 Results

     Table II lists the occurrences of successful con-
vergence.  From this table it can be seen that the
percent of successful convergences was higher in all
cases when a finite bound on a_2^2/σ^2 was used.  When
P_1^2 was specified as a bound, the failures were due to
convergence to the MLE of A and B for the starting
value C_1.  The highest percent of successful conver-
gences when a starting value for C of either 0 or
(1/2)P_1 was used occurred when M was set equal to one.

 Conclusions

     If good starting values are used, convergence can
 always take place using Finney's formula.  However,
 how good these starting values must be depends on the
 data.
     The modification of Finney's formula developed
 here yields successful convergence for starting values
 that are too poor to be used directly in Finney's
 formula.  When very poor starting values for C are
 used neither the modification nor Finney's formula
 works all the time.  However, for some values of M,
 a higher success rate can be obtained using the modi-
 fication.  Moreover, when seemingly adequate C start-
 ing values are used, successful  convergence is ob-
 tained using the modification in cases where the use
 of Finney's formula had failed.
     The use of smaller M values seems to yield poorer
 success rates for poor choices of C starting values,
                                                      but can achieve improved success rates for seemingly
                                                      adequate C starting values.
                                                           It is recommended that a number of C starting
                                                      values be used with this modification.  When no other
                                                      information is available, M should be taken to be one.
                                                      If convergence fails due to a singularity, a tighter
                                                      bound should be tried.  A plot of the likelihood
                                                      obtained at the final iteration versus the correspond-
                                                      ing C value is helpful for deciding if more starting
                                                      values should be tried and what values to use.

                       Table II
    Successful (S) and Failing (F) Iteration Attempts

                              Starting Value for C
   M      Symptom             0      (1/2)P_1    P_1

  P_1^2   Eye Discomfort      S          S        S
          Headache            F          F        S
          Cough               F          F        S
          Chest Discomfort    F          F        S
          % Success          25         25      100

  P_1     Eye Discomfort      S          S        F
          Headache            F          F        S
          Cough               F          F        S
          Chest Discomfort    F          F        F*
          % Success          25         25       50

  1       Eye Discomfort      S          S        F
          Headache            S          S        S
          Cough               F          F*       S
          Chest Discomfort    F          F*       F*
          % Success          50         50       50

  ∞       Eye Discomfort      F*         F*       F*
          Headache            F*         F*       F*
          Cough               F*         F*       S
          Chest Discomfort    F*         F*       F*
          % Success           0          0       25

*  Indicates failure occurred due to program fault.

                                                      Some Comments on Applications of Dose-Response Model
                                                      When Fitted  to  Data From Panel Studies

                                                           Both  the logistic and the probit models were fit
                                                      to  the  Nurse eye discomfort data.  The fit of both
                                                      models  was almost  identical.  The  Probit fit just a
                                                      little  better,  but the difference  in fit provided no
                                                      substantial  grounds for choosing between the models.
The adjustment for natural responsiveness was signifi-
cantly different from zero (p ≤ 0.05).  There was also
significant lack of fit (p ≤ 0.05) for both models.  A
                                                      test of equal proportions reporting eye discomfort on
                                                      days having  the same maximum oxidant reading was also
                                                      rejected.
                                                           It is felt that these models  when adjusted for
                                                      natural  responsiveness adequately  describe the rela-
                                                      tionship between eye discomfort and oxidant measure-
                                                      ments for  these nurses.  The fact  that there was
significant lack of fit for these models is attributed
to dose error, probably resulting from spatial
variation.  That is, the fixed air sampler is only a
                                                      crude indicator of dose for these  nurses.
                                                           For purposes  of selecting an  air quality stan-
                                                      dard, it is  felt that this modelling effort is
                                                      sufficient.

                        References

[1]  Finney, D. J. (1971).  Probit Analysis.  Cambridge
     University Press, Cambridge.  Chapters 4 and 7.
[2]  Hammer, D. I.; Hasselblad, V.; Portnoy, B.; and
     Wehrle, P. F.  Los Angeles Student Nurse Study.
     Arch. Environ. Health 28, 255-260.
                                                      273

-------
                  ECONOMIC AND DEMOGRAPHIC MODELING RELATED TO ENVIRONMENTAL MANAGEMENT

                          Allen V.  Kneese, Professor,  Department of Economics
                        University  of New Mexico,  Albuquerque, New Mexico 87131
     I will  select a few main modeling issues for dis-
cussion.  First, economic-social-environmental  problems
are accumulating at a great rate.   As a consequence, I
fear that there will be a great temptation to apply
models to complex issues when, in  fact, they are not
well designed to deal with them, and then to base poli-
cy on conclusions which may be quite unrealistic.
Second is the old but still very  important issue of how
far one should carry the explicit  incorporation of
interdependencies in models, and the related question
of whether one is better advised to use optimization or
simulation approaches and under what circumstances.

     I will  conclude the paper by  discussing an optimi-
zation model designed to test a number of hypotheses
about quantitative modeling and the environment.  From
these hypotheses I will single out one which I  feel is
currently of particular importance to the Environmental
Protection Agency.  When EPA was  formed, a main ratio-
nale was that the environmental media, the land, the
water, the air, should be treated  simultaneously in the
policymaking and administrative process.  This  has not
occurred, and I wish to indicate evidence concerning
its importance and to discuss the  role of a formal quan-
titative model in generating this  evidence.

                      Introduction

     "But I  deeply fear a much worse outcome.  We are
seeing a proliferation of costly attempts to establish
environmental management 'data banks' containing every-
thing up to and including the kitchen sink.  These are
to be linked by some forms of vaguely specified models
constructed by loosely organized  interdisciplinary and
interuniversity teams.  When these huge jerry-built
structures come crashing down, as  many of them surely
must, we may well see a backlash  on the part of spon-
soring agencies deeply embarrassed by their inability
to show useful results from enterprises which have run
into the millions of dollars.  This may mean that all
economic ecological modeling enterprises become dis-
credited, even sober and well thought through ones.  If
this happens, it could greatly retard the further suc-
cessful application of management  science to this impor-
tant area of national concern."1

     The paper from which the above quote is taken was
published in 1973 but was written  in 1970.  It was in
response to the euphoria and enthusiasm about mathemati-
cal modeling applied to social problems which character-
ized the latter part of the 1960's.  Most unfortunately,
the fears expressed in this paper  have come true. Today,
to mention mathematical modeling  in proposals going to
the NSF-RANN program for funding  is the end of the pro-
posal.  We hold this conference in an atmosphere char-
acterized by enormous skepticism about mathematical
modeling in general and about large quantitative models
in particular.  The skepticism is  a reaction to a number
of things:  the exaggerated claims which have been made
for modeling, modeling without data, indulging in the
circular reasoning of drawing conclusions about the real
world from assumed relationships  in models, specifica-
tion of irrelevant objective functions, the definition
of "systems" which correspond to no present or potential
decisionmaking unit, and so forth.  One result has been
the creation of a number of models which, at worst, pro-
duced results that are deceptive or, at best, are use-
less and costly.  This is not to say that there have not
been some notable successes, but at the moment we  are
suffering heavily from past mistakes.

               Asking Models Questions
          They Were Not Designed To Analyze

     Attitudes toward models in the natural resources
and environmental area tend to fall at polar extremes:
on the one hand the totally skeptical, and on the other
those who uncritically accept whatever data a model pro-
duces.  The recent Ford Foundation Energy Report is a
case in point of the latter.  Among
other things, the Ford Energy Report incorporated a
mathematical projections model to answer questions about
the economic effects of reduced rates of energy use.
Before discussing projections, a quote from Mark Twain
creates the proper mood for attempting a long-range look
into the future.

     "In the space of one hundred and seventy-six years
the Lower Mississippi has shortened itself 242 miles.
That is an average of a trifle over one mile and a third
per year.  Therefore, any calm person who is not blind
or idiotic, can see that in the old Oolitic Silurian
Period, just a million years ago next November, the
Lower Mississippi River was upward of one million three
hundred thousand miles long. By the same token any per-
son can see that seven hundred and forty-two years from
now the Lower Mississippi will be only a mile and three
quarters long.  There is something fascinating about
science.  One gets such wholesale returns of conjecture
out of such a trifling investment of fact."

     The language of the Ford Foundation Report with
respect to its model is revealing.  "An economic model
developed for the project by Data Resources  Incorporated
provides a broad-based measure of the impact of reduced
energy growth and concludes that a transition to a
slower growth—even zero energy growth—can indeed be
accomplished without major economic cost or upheaval.
The study indicates that it is economically efficient  as
well as technically possible over the next 25 years to
cut rates of energy growth at least in half.  Energy
consumption levels would be 40 to 50 percent lower than
continued historical growth rates would produce at a
very moderate cost of GNP--scarcely 4 percent below the
cumulative total under historical growth in the year 2000,
but still more than twice the level of 1975."  No quali-
fying statements are made.2

     First, as all of us here are painfully aware, the
degree of accuracy of all quantitative models of the
economy is questionable, especially when they are used
for projecting for long periods into the future.   The
idea that they could identify a 4 percent difference in
cumulative GNP over many years is incredible.  Even if
there are no problems with the structure of the model,
there are data problems.  For example, the parameters of the
model used in the Ford Foundation study were estimated
from data from the period covering 1950 to 1970.   In
many respects we have moved entirely outside the range
of variation covered during that period, especially in
the areas pertinent to the model, such as fuel prices,
domestic energy sources, and international trade condi-
tions.

     Second, one may have questions about the specific
structure of the model.  It is, for example, highly
aggregated; it contains only nine sectors of which five
are energy sectors.  Is it really possible to address
                                                        274

-------
the problem at hand with so few production sectors?
Moreover, it appears that the basic structure of the
model  will predetermine one of the main results that it
is said to have found, i.e., that energy usage does not
have much to do with economic growth.  The relationship
between energy input and economic output and energy
cost is based on the assumption that the increasing use
of energy per unit output during the period when energy
costs were falling can be extrapolated to a future
situation in which energy costs rise.  That is, the
energy output relationship is reversible given a re-
versal of the historical trend in real prices.  Aside
from the question of whether dynamics of the real eco-
nomic system permit such reversibility, there is the
question of relationships between energy input and man-
hour productivity.  One may reasonably hypothesize that
one factor in the increase in man-hour productivity in
the post-war period is the substitution of inanimate
energy for human energy.  If this substitution effect
is important, then a reversal of the situation would
surely result in a reduction in productivity and a
slower rate of economic growth.  In the model, however,
the factors determining aggregate output appear to be
entirely uncoupled from any relations of this sort.
Productivity is exogenously given.  Should we then be
surprised that the cost and rate of growth of energy
use do not much affect the rate of GNP growth?  That
they do  not is one of the principal conclusions of the
study, but whether it is a conclusion or whether it is
built  into the assumptions of the model is questionable.

     This problem concerning the appropriateness of
asking models to answer policy-type questions is unan-
swered.   It may be wise, at least when such models are
used  by  public agencies and particularly Federal agen-
cies,  to create a model review board.  Its duty would
be to  assess and pass judgment on the suitability of
various  models to address agencies' problems.

         Models of Economic Environmental Systems

     To  try to understand the results of policy actions,
different institutional structures for decisionmaking,
alternative technologies, etc., a number of models of
economic-ecological systems have been built.  Since
these  are of more direct interest to our session than
the energy model discussed in the last section, a few
generalizations about this type of modeling would be
useful before moving on to a specific application.

      It  is often said that the first principle of ecol-
ogy  is that everything  is connected to everything else.
This  is  perhaps true but somewhat unhelpful;  however,
 it does  bring into focus the question of how  far mod-
elers  should go in the  explicit incorporation of inter-
dependencies.  This is  a general problem in systems
analysis but it takes on additional force in  connection
with  environmental problems because of the prominence
of "ecological thinking" in the field.  The "frog in the
 hole  in  the bottom of the sea" chain of reasoning of
 some  ecologists has led to visions of the environmental
management problem which push it beyond the bounds of
 successful modeling.  Years ago, when operations re-
 search was first being  explored by economists, Robert
Dorfman  wrote a fine article stating the general point
very well.3  In it, he  said:

      "As a result of complexity the operations analyst,
like every other worker, lives always near the end of
his tether.  He simplifies his problem as much as he
dares  (somewhat more than he should dare), applies the
most powerful analytical tools at his command, and with
luck just squeaks through.  But if all established
methods  fail, either because the problem cannot be
forced into one of the standard types or because after
all acceptable simplifications it is still so large or
complicated that the equations describing it cannot be
solved.  When he finds himself in this fix, the opera-
tions analyst falls back on simulation or gaming."

     One result of the ability of simulation to treat
relationships beyond those manageable in optimization
problems is that much discipline and order is lost, and
the problem of choosing among alternative outcomes of
the simulation can easily become impossible.

     Consider a small simulation model in which there
are 28 variables (an actual environmental model may
easily have many hundreds), each of which may be set at
any one of three levels.  There are then 3^28 possible
designs of the system.  This is approximately 23 thou-
sand billion.  If it takes 2 minutes of computer time
to simulate each design, about 100 million years could
be required to complete the simulation.  Of course no
simulator would attempt the complete enumeration of out-
comes in a large problem, but this calculation does
suggest the complexities involved.  This and the paucity
of data available for defining relationships and speci-
fying coefficients have been among the problems which doomed
some of the more ambitious ecological modeling efforts
to costly failure.
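
     The enumeration arithmetic can be checked directly; the short sketch
below (not part of the original paper) simply reproduces the count of
designs and the implied running time, assuming 2 minutes of computer time
per design.

# A quick check of the enumeration arithmetic above: 28 variables, 3 levels
# each, and 2 minutes of computer time per design.
designs = 3 ** 28
minutes = 2 * designs
years = minutes / (60 * 24 * 365)
print(f"{designs:,} designs")     # 22,876,792,454,961, i.e. about 23 thousand billion
print(f"{years:,.0f} years")      # roughly 87 million, i.e. on the order of 100 million years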

     The elementary set of needs in economic-ecological
modeling is that (1) the model must be persuasive, i.e.,
it must represent something in the real world with
sufficient fidelity that a decisionmaker could with some
confidence base a decision on it and (2) that some
reasonably straightforward and efficient criteria must
exist for choosing among vast numbers of alternative
results.

     Of course simulations can be very useful if in-
formed judgment readily yields a few alternative systems
for analysis.  But it seems this is an unusual situation
when large environmental systems are at issue.  Simula-
tion models can also be supplied with objective func-
tions and one or another form of sampling can be used to
generate a "response surface."  The principles of sam-
pling for this kind of problem are not well  understood,
however, and providing an adequate sample may be an
extremely large problem in itself if the number of vari-
ables and alternative scales is great and the response
surface is irregular.  Otherwise, the process may come
to a halt at the top of a gentle rise while totally
ignoring the neighboring mountain peak.

     Perhaps it is essential that we accept some form of
optimization models, with all their limitations, as the
only ones likely to be useful for decisionmaking in
large problems like environmental management--although
our experience is not so extensive that this conclusion
can be drawn with certainty.  Optimization models can,
of course, incorporate simulation submodels to provide
descriptive linkages in a set of nested models.  But
they do not sacrifice the specification of a criterion
function and an orderly approach to the optimal solution.
Selected parameters of the optimization model can be
varied and new solutions found.  This would almost
always be desirable in any real decision situation.  The
device of an objective function with constraints and a
specific solution procedure is not abandoned, however.

     Clearly it is necessary to recognize that these
models can never be "comprehensive" in the sense that
they consider all linkages and all alternatives.  Great
care must therefore be taken to specify what aspects of
reality are and are not included.  In this connection, a
well functioning market system can be of great help, a
point often ignored in the more ecologically oriented
models.  We may exclude many aspects of resource use
from explicit consideration in our environmental policy
or management models on the grounds that they are appro-
priately handled by the market exchange system.  The
interface between these processes and the model as such
                                                        275

-------
is through the system of values generated by the market,
that is, prices.  The model itself can then focus ex-
plicitly on those aspects of resource use where the
market exchange is known to fail seriously as an allo-
cative device, e.g.,  with respect to allocating common-
property environmental resources of air, water, and
associated ecological systems.

     Stochastic and dynamic elements are likely to be par-
ticularly difficult to handle adequately in optimiza-
tion models, and these are important deficiencies.
Sensitivity analysis  and the like can help, but the
wise modeler will never let himself think that his
models will provide a complete basis for decisionmaking
either on conceptual  or empirical grounds in a field as
complex as environmental quality.  Models must be
viewed as potentially helpful tools which constitute an
element, albeit a major one, in the decisionmaking  pro-
cess.  They are tools which can reveal obscure impacts
of common-sense policies and quantify them to some
extent.  They must be built and used because of the in-
herent logic demands  of the problem and because we  have
nothing better.

               The Regional  Residual Model

     I would like to conclude this introductory paper
by discussing in general terms an environmental modeling
enterprise which took place in the Quality of the En-
vironment program at Resources for the Future while I
was program director.  The specific form and structure
of the model was very much a result of the preceding
type of consideration.  It was developed by a team  of
researchers representing several disciplines and is
often called the Russell-Spofford model, or more fully,
the Regional Residuals Management model.  It is a static
optimization type model built for application to the
Delaware Estuary Region for several purposes.  For  ex-
ample, it could help test the impact on the cost of en-
vironmental management of introducing exotic technolo-
gies, such as stream reaeration, into the system.  It
could play out economic, in the sense of efficiency and
distributional, implications of setting ambient stan-
dards at various levels in the region.  In a politicized
version it was useful in testing certain hypotheses on
how the structure of legislative processes would affect
decisions on environmental quality, e.g., referenda
versus COGS versus small district representation in
legislative assemblies.  (For a relatively full report,
see Reference 4.)

     Here I wish only to say something about the struc-
ture of the model and particularly the light it sheds
on one of the main hypotheses we sought to test with
it--that there are important nonmarket linkages among
the environmental media of land, water, and air, and
that treating each in isolation as is done in current
legislation and administrative practice is likely to
lead to unanticipated and probably inefficient results.
This is what might be called the basic EPA hypothesis.

               The Russell-Spofford Model*

     The Russell-Spofford model is, as already implied,
designed to deal simultaneously with the three major
general types of residuals—airborne, waterborne, and
solid--and reflects the physical links among them in a
regional context.  It "recognizes," for example, that
the decision to remove waterborne organic wastes by
standard sewage treatment processes creates a sludge
which, in turn, represents a solid residuals problem;
*This portion of the paper is based largely on material
prepared by my former associates at RfF, Clifford
Russell  and Walter Spofford.
the sludge must either be disposed of  on  land  or burned,
the latter alternative creating airborne  particulates
and gaseous residuals.

     The model also can incorporate the nontreatment
alternatives available (especially to  industrial  firms)
for reducing the level of residuals generation.   These
include:  input substitution  (as natural  gas for coal);
change in basic production methods (as in  the  conver-
sion of beet sugar refineries from the batch to  contin-
uous-diffusion process); recirculation of  residual-
bearing streams (as in recirculation of condenser
cooling water in thermal-electric generating plants);
and materials recovery (as in the recovery and reuse of
fiber, clay, and titanium from the "white  water"  of
paper-making machines).  These alternatives are included
by means of industrial linear programming  submodels.

     The model uses environmental diffusion models but
it is also capable of incorporating environmental simu-
lation submodels.   In practice, the latter takes  the
form of an aquatic ecosystem model which translates re-
siduals discharges into impacts upon various species of
concern to man.

     In addition to these features, the model  incor-
porates a unique political (collective choice) feature.
I think it is fair to say that this model  is at  the
frontier of quantitative research in environmental eco-
nomics.

     The model containing these features is shown sche-
matically in Figure 1.  The three main components of the
overall framework may be described as follows:

     A Linear Programming Model.   This model  relates in-
puts and outputs of selected production processes and
consumption activities at specified locations  within a
region, including:   the unit amounts and types of resi-
duals generated by the production of each product, the
costs of transforming these residuals from one form to
another (gaseous to liquid in the scrubbing of stack
gases), the costs of transporting the residuals from one
place to another,  and the cost of any final discharge-
related activity such as several  types of landfill opera-
tions.

     The programming model, which actually consists of
an array of submodels pertaining to individual industrial
plants, landfill operations, incinerators, and sewage
treatment plants,  permits a wide range of choices among
production processes, raw material input mixes, by-
product production, materials recovery, and in-plant
adjustments and improvement.  All these choices can re-
duce the total quantity of residuals to be disposed of.
That is, the residuals generated are not assumed fixed
either in form or in quantity.  This model also allows
for choices among treatment processes and hence among
the possible forms of the residual to be disposed of in
the natural  environment and, to a limited extent, among
the locations at which discharge is accomplished.

     Environmental Models--Physical, Chemical, and Bio-
logical.  These component models describe the fate of
various residuals after their discharge into the natural
environment.  Essentially, they may be thought of as
transformation functions operating on the vector of re-
siduals discharges and yielding another vector of am-
bient concentrations at specific locations throughout
the environment (these are the now familiar diffusion
models) and, in some instances, impacts on living things
(these are aquatic ecosystem models reaching beyond the
Streeter-Phelps formulation).  In aquatic ecosystem
models, living creatures which participate in  these pro-
cesses are explicitly included in the model and the out-
put is stated in terms of impact on living things (e.g.,
                                                        276

-------
plankton and fish)  rather than on physical  parameters
such as dissolved oxygen.

     A Set of Receptor-Damage Functions.  These func-
tions relate the concentration of residuals in the en-
vironment and the impact on living things to the re-
sulting damages, whether these are sustained directly
by humans or indirectly through impacts on material
objects or on such  receptors as plants or animals in
which man has a commercial, scientific, or aesthetic
interest.  Ideally,  for a full-scale overall efficiency
version of the model,  the functions relating concentra-
tions and impacts on species to damage should be in
monetary terms.  In  fact, the model as finally imple-
mented is designed  to  meet environmental standards.
However, its solution  technique is based upon "penalty
functions" and, therefore, proceeds as though violations
of the standards carried a price.  Since the objective
is to meet environmental standards the price of viola-
tion is set very high.

     The linkage between the components of the model
and the method of optimum seeking may be illustrated as
follows:  Solve the linear programming model initially
with no restrictions or prices on the discharge of re-
siduals.  Using the resulting initial set of discharges
as inputs to the models of the natural environment, and
the resulting ambient concentrations and impacts on
living things as the arguments of the penalty functions,
the marginal penalties can be determined as the change
in penalties associated with a unit change in a specific
discharge.  These marginal penalties may then be applied
as interim effluent charges on the discharge activities
in the linear model, and that model solved again for a
new set of production, consumption, treatment, and dis-
charge activities.   With appropriate bounds constraining
consecutive solutions, the procedure is repeated until
a position close to the optimum is found.  This process
can be looked upon as a steepest ascent technique for
solving a nonlinear programming problem.
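
     The iteration just described can be caricatured in a few lines.  The
following toy sketch is not the Russell-Spofford model: it chooses discharge
reductions for two hypothetical sources by steepest descent on the sum of
reduction costs and a quadratic penalty on violations of an ambient
standard, with the marginal penalties playing the role of the interim
effluent charges.  The transfer matrix, costs, and standards are invented.

# A toy numerical sketch of the penalty-function iteration described above;
# all numbers are hypothetical, and the real model solves a large linear
# program at each pass instead of taking a small gradient step.
import numpy as np

transfer = np.array([[0.8, 0.3],        # ambient concentration at 2 receptors
                     [0.2, 0.9]])       # per unit of discharge from 2 sources
standard = np.array([4.0, 4.0])         # ambient standards
cost = np.array([1.0, 2.0])             # cost per unit of discharge reduction
uncontrolled = np.array([10.0, 10.0])   # discharges with no controls
penalty_weight = 50.0                   # high price on standard violations
step = 0.002

reduction = np.zeros(2)
for _ in range(5000):
    discharge = uncontrolled - reduction
    violation = np.maximum(transfer @ discharge - standard, 0.0)
    # Marginal penalties on each discharge act like interim effluent charges.
    charge = penalty_weight * (transfer.T @ violation)
    gradient = cost - charge            # d(reduction cost + penalty)/d(reduction)
    reduction = np.clip(reduction - step * gradient, 0.0, uncontrolled)

print("reductions:", np.round(reduction, 2))
print("ambient:", np.round(transfer @ (uncontrolled - reduction), 2))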

     The Russell-Spofford model was designed for the
analysis of residuals  management in regions where the
scale and severity of the problems justify a consider-
able investment in data and analysis.  The model is
still  in the process of being applied to the Delaware
Valley, which is discussed in the next section.

            The Lower Delaware Valley Region

     The Lower Delaware Valley region, chosen for this
application, is a complex region with many individual
point and nonpoint sources of residuals discharges.   It
is defined by county boundaries, shown in Figure 2.  The
grid superimposed on the figure is used for locating
air pollution sources and receptors in the model.  It
is related to the Universal Transverse Mercator grid.

     The region consists of Bucks, Montgomery, Chester,
Delaware, and Philadelphia counties in Pennsylvania;
Mercer, Burlington,  Camden, Gloucester, and Salem
counties in New Jersey; and New Castle County in Dela-
ware.  The major cities in the area are Philadelphia
(coterminous with Philadelphia County); Trenton in Mer-
cer County; Camden  in  Camden County; and Wilmington in
New Castle County.   Overall, the population of the area
in 1970 was a little more than 5.5 million.  Of this,
35 percent is accounted for by Philadelphia alone, with
a further 5 percent found in Trenton, Camden, and Wil-
mington.  However,  other parts of the region are also
heavily urbanized.

     The region as  a whole contains an abundance of
manufacturing plants.   In fact, it is one of the most
heavily industrialized areas in the United States.  It
has, for example, 7  major oil refineries, 5 steel plants,
16 major pulp and paper or paper mills, 15  important
thermal power generating facilities, numerous large and
small chemical and petrochemical plants, foundries, and
large assembly plants for the auto and electronic indus-
tries.  This, of course, made the task of identifying
sources of residuals discharges, estimating the costs of
discharge reduction for them, and including them in the
regional model an enormous one.  The model used contains
125 industrial plants, 44 municipal sewage treatment
plants, and 23 municipal incinerators, which are all
dealt with as point sources.  In addition, there are 57
home and commercial heating sources with controllable
discharges, each of which is treated as an area source,
i.e., not tied to a specific stack location.  Other
point and nonpoint sources distinguished in the region
are incorporated as background discharges.

     The large population of the region naturally pro-
duces vast quantities of residuals from consumption
activities requiring correspondingly large facilities
for their handling and disposal.  There are 7 municipal
sewage treatment plants with flows greater than 10 mil-
lion gallons per day (mgd) and 17 with flows greater
than 1 mgd, counting only those discharging directly to
the Delaware Estuary.  On the major tributaries to the
estuary and the Schuylkill River, there are more than
120 municipal treatment plants of widely varying sizes.
For the disposal of solid residuals there are 17 incin-
erators currently operating with an aggregate capacity
of about 6,000 tons per day, and many major and minor
landfill operations.  Together, on an annual basis, the
heating of homes and commercial buildings is responsible
for about one-quarter of the total discharges of SO2 and
10 to 15 percent of the particulate discharges in the
region.

     The major recipient of waterborne residuals in this
area is the Delaware Estuary itself.  The Estuary is
generally taken to be the stretch of river between the
head of the tide at Trenton and the head of the Delaware
Bay at Liston Point, Delaware.  For analysis purposes,
the estuary was divided into the same 22 reaches shown
in Figure 3.

     The low flow of the river varies widely from month
to month and year to year.  For aquatic ecosystem model-
ing purposes, a relatively low flow period was selected.

     For the modeling of air quality, the atmospheric
conditions used represent the annual joint probability
distribution of wind speed, wind direction, and stability
conditions for 1968, assumed uniform throughout the
region.  Conditions representing rare events were not
used in the model for either air or water quality analy-
ses.  Ideally, explicit attention would also be given to
this aspect of the modeling, but mathematical program-
ming models do not lend themselves well to the ideal
analysis of systems in which random events occur.  As I
mentioned earlier, this is one of their weaknesses.

                 Contents of the Model
     The model framework was discussed in general terms
earlier in this paper in connection with Figure 1.  Here
the discussion is more detailed in order to grasp the
nature of the actual application of the concepts out-
lined there.  The model is designed to provide the mini-
mum-cost method of simultaneously meeting several sets
of exogenously determined standards.  Two of them are of
interest here.

     Minimum Production Requirements.  This means bills
of goods for the individual industrial plants, heat re-
quirements for home and commercial space heating, and
specified quantities of liquid and solid residuals re-
quiring some disposal action by municipalities.
                                                       277

-------
     Levels of Ambient Environmental Quality.  This is
represented, for example, by maximum concentrations of
SO2 and suspended particulates at a number of receptor
locations in the region, minimum concentrations of dis-
solved oxygen and fish biomass in the estuary, maximum
concentrations of algae in the estuary, and restrictions
on the types of landfill operations which can be used
in the region.
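
     To make the minimum-cost formulation concrete, the following sketch
(not the actual model, which has thousands of rows and columns) states a
toy version as a linear program: choose discharge reductions at two
hypothetical sources so that ambient standards at two receptors are met at
minimum cost.  Production requirements would enter an analogous formulation
as additional constraints; all coefficients here are invented.

# A highly simplified linear-programming sketch in the spirit of the
# formulation described above; every number is hypothetical.
import numpy as np
from scipy.optimize import linprog

cost = np.array([1.0, 2.0])              # cost per unit of discharge reduction
transfer = np.array([[0.8, 0.3],         # ambient concentration per unit
                     [0.2, 0.9]])        # discharge (2 receptors x 2 sources)
uncontrolled = np.array([10.0, 10.0])    # discharges with no controls
standard = np.array([4.0, 4.0])          # maximum allowed ambient levels

# Require transfer @ (uncontrolled - reduction) <= standard, i.e.
# -transfer @ reduction <= standard - transfer @ uncontrolled.
A_ub = -transfer
b_ub = standard - transfer @ uncontrolled
bounds = [(0.0, u) for u in uncontrolled]   # cannot reduce more than is emitted

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("minimum-cost reductions:", np.round(result.x, 2))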

     In Figure 4, more detail is given on how the model
functions.  In the upper left of the diagram is found
the basic driving force for the entire model, the linear
programming model of residuals generation and discharge.
It is in this part of the model that minimum "produc-
tion" constraints are found.  A key output of this
part, as mentioned in connection with Figure 1, is a
vector of residuals discharges, identified by substance
and location.  These discharges feed into the environ-
mental models--the model of the aquatic ecosystem and
the dispersion model for the suspended particulates and
SO2 discharges.  This section of the overall model, in
turn, produces as output a vector of ambient environ-
mental quality levels (for example, S02 concentrations)
at numerous designated points in the region.  These con-
centrations are then treated as input to the "evalua-
tion" submodel found in the lower right of the diagram.
Here the concentrations implied by one solution of the
production submodel are compared with the constraints
imposed for the model run, and the penalty function pro-
cedure is used to iterate the model until all constraints
are met, within some specified tolerance.

                  The Production Model
     In fact, as was also indicated in connection with
Figure 1, the production model consists of a number of
sets of linear programs with each set arranged in a
module.  The modules reflect the chronological develop-
ment of the model as it was expanded over time to en-
compass more and more of the activities in the region.
A summary of this part of the model is shown in Table 1.
The modules are shown in the first column.  The designa-
tion MPSX derives from the particular computational
routine used in the analysis.  The next three columns
give the dimensions of the LP matrix for each module
and the number of discharges.  Residuals generated by
the linear programs for these activities reflect opera-
ting conditions as of about 1970, and represent genera-
tion under steady-state conditions.  Variability of
residuals generation in the various activities was not
considered.  It will be noted that the overall program
has over 3,000 rows and nearly 8,000 columns.  Included
are 306 sources of discharges, with options for reducing
discharges.  For the number of discharges and types of
residuals being considered, there are nearly 800 speci-
fied residuals being discharged to the various environ-
mental  media.  The next column gives the type of acti-
vities in each module.

     Only the powerful capacity of contemporary compu-
ting machines makes it feasible to solve a problem of
this size.  Even so, scaling down the model to fit the
capability of even a large computer was a difficult
practical  problem.

                The Environmental Models

     The overall model incorporates a 22-reach nonlinear
ecosystem model  of the Delaware Estuary.  Inputs of li-
quid residuals discharges to this model include:  or-
ganics  (BOD), nitrogen, phosphorus, toxics, suspended
solids, and heat (Btu).  Outputs are expressed in terms
of ambient concentrations of algae, bacteria, zoo-
plankton,  fish,  oxygen, BOD, nitrogen, phosphorus,
toxics, suspended solids, and temperature.  Three of
these outputs--algae, fish, and oxygen—are constrained
 (all can  be constrained  in  the  model).   In addition, the
model  includes  two  57  x  251  (57 receptor locations and
251 dischargers)  air dispersion matrices, one for sulfur
dioxide and one for suspended  particulates.   These re-
late ambient  ground-level concentrations to residuals
discharges (SO2 and particulates).
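
     At the level of shapes, the role of these dispersion matrices can be
sketched as a single matrix-vector product; the entries below are random
placeholders, not the model's actual transfer coefficients.

# A shape-level sketch of the air dispersion step: each matrix is 57
# receptor locations by 251 dischargers, and the discharge vector and
# matrix entries here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
dispersion_so2 = rng.random((57, 251)) * 1e-3   # hypothetical transfer coefficients
so2_discharges = rng.random(251) * 100.0        # hypothetical source discharges

ambient_so2 = dispersion_so2 @ so2_discharges   # concentrations at 57 receptors
print(ambient_so2.shape)                        # (57,)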

                       Results
     At the time of  this  writing,  production runs with
the large model have just gotten  underway.   A few pre-
liminary results,  however,  can  be  presented with respect
to the "EPA hypothesis."

     First, the model  shows  that  in  this  realistic set-
ting of an actual  case there are  significant linkages
among the management aspects of the  different residuals
types.  Tighter ambient standards for the atmosphere do
significantly affect the cost of maintaining water qual-
ity standards, and vice versa.  This can be seen by con-
sidering the following example  results from production
runs with the model.
                                           Air

                              Easy ambient      Tight ambient
                                standards          standards

            Easy ambient
            standards          $  395,640         $1,064,892
   Water
            Tight ambient
            standards          $  422,031         $1,309,271

        (Total additional costs to the region, in dollars per
        day.  High quality landfill required for all runs.)
     All the numbers in the table refer  to  total addi-
tional costs to the region for meeting environmental
standards in dollars per day.  The sample runs show that
going from easy (relatively low) water standards to tight
(relatively high) water standards costs  about $26,000
per day when only easy air standards are imposed.  If,
however, tight air standards are required,  going from
easy to tight water standards costs about $244,000, or
almost ten times as much.  Thus, the model tends to
support the EPA hypothesis that there are linkages among
the different environmental media in the realistic setting
of an important region and that they are of substantial
magnitude.  Policymaking, planning, and administration which
ignore them are likely to encounter untoward surprises.
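
     The increments quoted above follow directly from the table; the short
sketch below merely repeats that arithmetic.

# Recomputing the incremental costs implied by the table above (dollars per
# day of additional cost to the region); the figures are those reported for
# the sample production runs.
costs = {("easy water", "easy air"): 395_640,
         ("tight water", "easy air"): 422_031,
         ("easy water", "tight air"): 1_064_892,
         ("tight water", "tight air"): 1_309_271}

easy_air_increment = costs[("tight water", "easy air")] - costs[("easy water", "easy air")]
tight_air_increment = costs[("tight water", "tight air")] - costs[("easy water", "tight air")]
print(easy_air_increment)                                    # 26,391  (about $26,000 per day)
print(tight_air_increment)                                   # 244,379 (about $244,000 per day)
print(round(tight_air_increment / easy_air_increment, 1))    # roughly 9.3, "almost ten times"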

     Perhaps even more important, the Delaware applica-
tion indicates that it is possible to develop an inte-
grated residuals management model for a  large region at
a manageable cost.  The cost of this model  (granted that
much of the basic data had been collected already) was
about 10 man-years of effort on the part of the senior
researchers, some research assistance, and  perhaps
$100,000 worth of computer time at commercial rates.  In
dollars, the cost could be put at roundly $1 million, or
about one day's worth of the extra cost to the region of
operating with tight environmental standards.

                      Conclusion

     In conclusion, it seems appropriate to return to
the question of the choice of models.  Mathematical
models for analysis of environmental and other resources
problems must be designed for specific purposes if they
are to be useful.  There is no such thing as a general
model.  If models are regional in character, it is
                                                       278

-------
usually even difficult to transfer them to another re-
gion.   Such models must not  be  asked  questions they
were not designed to answer.  This seems obvious but
there are important instances in which this is happening.

     On the matter of optimization versus simulation
models, the question may be  discussed by summarizing
the considerations which enter  into the choice in rela-
tion to residuals-management-type  models.  It should be
remembered that, as illustrated by the Russell-Spofford
model, combinations of simulation  and optimization are
possible.

     1.  Mathematical optimization imposes a valuable
discipline on the modeler and the  modeling process.

     2.  If ambient standards are  the targets to be
achieved by a control strategy  and there are a large
number of dischargers, a large  number of receptors, and
a  large number of possible discharge  reduction options,
mathematical optimization is usually  the reasonable way
to proceed.

     3.  If there are only a few major dischargers and
a  few  options for reducing residuals  discharges, simula-
tion may be easier and sufficiently efficient.

     4.  If there are many similar sources of residual
discharges, but only one or  two options for reducing
discharges at each source, and  the objective is defined
in terms of required reductions or percent reduction at
the sources rather than in terms of ambient conditions,
simulation again may be easier and sufficently efficient.

     5.  Large mathematical  optimization models are ex-
pensive and difficult, unless they are linear.

     6.  Large, nonlinear mathematical optimization
models frequently have multiple "optima" and it is
usually difficult to identify and  deal with this situa-
tion when it exists.

     7.  The linearity assumptions may or may not do
great  violence to the real world.

     8.  Synergistic and antagonistic effects are more
difficult to handle in mathematical optimization than
in simulation models.

     9.  Economies of scale  or  analogous increasing-
return situations cannot be  handled satisfactorily by
any mathematical optimization technique.

    10.   Time-dependent phenomena are difficult to
handle in optimization models.

     The balance of these considerations will usually
point to optimization as the preferred approach if the
problem is large and complex.

     Finally, it should be remembered that any approach
to analyzing a complex environmental  management problem
involves obtaining, arranging,  and handling large amounts
of empirical data.  In many  cases  the major problem is
that of obtaining empirical data on activities--production
processes, residuals generation, and residuals discharge
reduction options and their costs--and on the
effects of discharges on ambient conditions.

                       References

1.   Allen V. Kneese, "Management Science, Economics and
     Environmental Science," Management Science, Vol. 19,
     No. 10, June 1973, p. 1126.

2.   Energy Policy Project of the Ford Foundation, A
     Time To Choose: America's Energy Future, Cambridge,
     Mass.:  Ballinger Publishing Company, 1974, p. 135.

3.   Robert Dorfman, "Operations Research," American
     Economic Review, Vol. 50, No. 4, 1960.

4.   Blair Bower and Allen V. Kneese, Residuals Environ-
     mental Quality Management, to be published by
     Resources for the Future, Inc.  Also a publication
     by the modeling team is in preparation.
Figure 1. Schematic of Regional Residuals Management Model
          [Schematic linking a regional linear programming model of industrial,
          household, and governmental activities, through effluent restrictions
          or charges, to an evaluation section containing ambient standards and
          damage functions.]
         Figure 2.  Lower Delaware Valley Region
         [Map showing county lines, state lines, and the region boundary.
         Note: The grid is in kilometers and is based on the Universal Transverse
         Mercator (UTM) grid system.]
                                                        279

-------
                         Figure 3.  Map of the Delaware Estuary Showing
                                     Analysis Sections
                         (Source:  FWPCA, Delaware Estuary Comprehensive Study.)
                     Figure 4.  Schematic Diagram of the Regional Residuals
                                 Management Model
                     [Schematic linking (1) a linear programming model of production,
                     residuals disposal, and related activities, including models of
                     generation and modification of residuals in production processes
                     and in consumption, recycling alternatives (residential,
                     commercial, and industrial waste paper), and constraints on the
                     distribution of costs (e.g., increases in municipal sewer and
                     water bills, percentage increases in electricity and home-heating
                     costs); (2) trial "effluent charges" applied to discharges of
                     residuals, differentiated by type and location of discharge,
                     with instream aeration (discharge of oxygen) among the options;
                     (3) environmental models--physical dispersion models for
                     suspended particulates and sulfur dioxide and aquatic ecosystem
                     models yielding ambient concentrations of residuals, biomass,
                     etc.; and (4) an environmental evaluation section imposing
                     constraints (with penalty functions) on ambient concentrations,
                     species populations, etc., with penalties attributed to
                     individual dischargers.]
                                                  280.

-------
                          Table 1. Delaware Valley Model

                      Residuals Generation and Discharge Modules
                                 Size of Linear Program
Module        Rows    Columns    Discharges    Description                            Extra Cost Constraints

MPSX 1         286      1649        130        Petroleum Refineries (7)               57 electricity (percent
                                               Steel Mills (5)                          extra cost)
                                               Power Plants (17)

MPSX 2         741      1474        114        Home Heat (57)                         57 fuel (percent extra cost)
                                               Commercial heat (57)                   57 fuel (percent extra cost)

MPSX 3         564      1854        157        "Over 25 ugm/m3"
                                               dischargers (75)

MPSX 4         468       570        180        Delaware Estuary Sewage                36 sewage disposal (5 per
                                               Treatment plants (36)                    household per year)

MPSX 5         923      1778         86        Paper plants (10)                      57 solid residuals
                                               Municipal Incinerators (23)              disposal (percent
                                               Municipal solid residuals                extra cost)
                                               handling and disposal
                                               activities

MPSX 6         228       394        116        Delaware Estuary industrial            57 instream aeration
                                               dischargers (22)*                        (absolute extra cost)
                                               Instream aeration (22)

Total         3210      7719        783

 * Twelve of the Delaware Estuary industrial wastewater discharges in MPSX 6 are
   also represented by SO2 and/or particulate discharges in MPSX 3.
                                          281

-------
                   ECONOMIC IMPLICATIONS OF POLLUTION-INTENSIVE EXPORTS  BY DEVELOPING COUNTRIES
                                                 Peter A. Petri
                                            Department of Economics
                                              Brandeis University
                                             Waltham, Massachusetts
                       SUMMARY

     A new, detailed world model is described and used
 to  identify some economic implications of exports of
 pollution-intensive products by so-called "fourth
 world" countries.  For these nations, pollution-
 intensive exports are found to be unattractive
 relative to conventional types of exports from the
 viewpoint  of various economic criteria.  Still,
 pollution-intensive exports are found to generate
 net earnings of foreign exchange, a factor critical
 to  the development prospects of resource-poor, low-
 income regions.

                   1. Introduction

     At the request of the United Nations, a team of
 economists at Brandeis and Harvard Universities has
 recently assembled a comprehensive, disaggregated,
 multi-regional model of the world economy.*  The
 purpose of the model is to shed light on the alter-
 native paths the world economy could follow over the
 next three decades.  To this end, the model contains
 a large amount of detailed information on natural
 resources, agriculture, industry, and the environ-
 ment.  It is used here to assess the economic conse-
 quences of certain alternative export strategies for
 a group of developing nations.

     In a series of projections described elsewhere,
 the model has shown that the attainment of reasonable
 growth targets for a large part of the developing
 world will require substantial improvements in their
 export capabilities.  If the export positions of
 these nations were to evolve along historically
 established trends, the sharp disparities in income
 that now prevail would widen in the future.  The
 prospects for developing countries without sub-
 stantial resource endowments (the so-called "fourth
 world") are especially vulnerable to the projected
 external payments imbalances.  These problems
 motivate the search for alternative export strategies
 for these nations—including two particular approaches
 investigated in this paper.

     The first of these strategies would involve
 stepped-up exports of commodities that are conven-
 tionally thought to represent the comparative advan-
 tages of a developing economy.  The second strategy
 would focus on exports of products manufactured by
relatively pollution-intensive processes.  This  latter
approach rests on the assumption that the absorptive
capacities of the environment are greater in areas
that have not yet experienced extensive  industrializa-
tion, and that this fact might obviate the need  for
costly abatement measures.  It is not our purpose to
test the validity of these assumptions.  Rather, we
concentrate on the purely economic  (in contrast  to
environmental) implications of the  two alternatives.

     An important secondary objective of this paper is
to provide a direct, though simple, example of how the
new world model can be used to study questions in
international policy.  The next section offers a general
overview of the model's structure,  though obviously its
precise technical details cannot be adequately described
in so short an essay.

                    2. The Model

     The world model consists of 15 sub-models, each
representing (in terms of some 175  equations and 229
variables) the economic structure of a particular
region.  The sub-models are linked by a network of
trade, that is, by detailed inter-regional flows of
goods, services, and various types of capital.

     The parameters of the system,  that is, the
quantitative descriptions of the regional units, were
first estimated for 1970, and then projected forward
to 1980, 1990, and 2000.  These four parallel systems,
describing the state of the world economy at the end
of each decade, are linked in turn by the growth (in
the case of capital) and depletion  (in the case of
resources) of certain detailed stocks.

     Figure 1 illustrates, albeit in highly simplified
form, the critical interdependencies within a given
regional sub-model.  Figure 2 gives some indication of
how the sub-models are linked.  The system is in fact
a good deal more flexible than Figure 1 indicates.
New variables or equations can be readily added.  Also,
the direction of the logical flow can be changed by
varying which variables are prespecified and which are
endogenously determined in a given  application.
Figure 1 presents the basic specification used in the
analyses of later sections; this structure reflects
one of many possible specifications of the system.
  Officially, this project was described as the "Study on the Impact of Prospective Environmental Issues and Poli-
cies on the International Development Strategy."  The senior research team was headed by W. Leontief, and included
A. P. Carter, J. J. Stern and the author.   The model used in this paper is, of course, a joint product of the group;
however, the other members of the team do not necessarily share the particular conclusions and views expressed here.

  P. A. Petri and A. P. Carter, "Resources, Environment, and the Balance of Payments:  Application of a Model of
the World Economy," delivered at the Third Reisenburg Symposium on the Stability of Contemporary Economic Systems,
forthcoming in the Conference volume.

  Preliminary "Technical Report" prepared for CDPPP, United Nations, May 1975, describes the model more fully.  A
final version will be published by United Nations shortly.
                                                       282

-------
Figure 1. Internal Structure of a Region
[Flowchart showing how development targets and environmental standards drive a
regional sub-model: the consumption level and gross domestic product determine
government purchases (influenced by population and urban population), investment
levels, and abatement levels, which in turn yield gross and net emissions; detailed
exports, together with consumption, investment, and government demands, form the
final demands, which are added to intermediate demands to give total demands;
total demands determine production and extraction levels (subject to extraction
limits and resource depletion), capital and labor required, and detailed imports;
detailed exports and imports, valued with the price model's price projections,
yield the region's trade balance.]
-------
                       Figure 2

               Interaction of Regions
                  World Trade Pools
[Diagram showing regional sub-models (e.g., Region A's and Region B's structures,
each with its GDP) linked through world trade pools; for Region A the diagram
flags labor excess or under-employment, and for Region B the balance of payments
surplus or deficit.]
     The first full row of variables in Figure 1 shows
how  the overall level of consumption is determined.
Here exogenous targets for gross domestic product
(GDP) are given and the endogenously determined levels
of investment, government expenditures, and foreign
trade follow from them.  The GDP targets in this case
were supplied by the United Nations, and are consis-
tent with the goals of the U.N.'s Second Development
Decade.  Government expenditures are determined
partly by the GDP, and partly by the size of the
urban population as it influences the requirements for
urban services.  Investment and import levels are
determined by relationships further along the flow-
chart.

     Exports are exogenous to a given region but
endogenous to the world system as a whole.  Figure 2
describes the trade model that links the 15 regional
sub-models.  The trade model leaves the determination
of each region's imports to the structural equations
describing that region.  Import requirements are then
summed across regions, and the resulting world
demands are allocated to the regions as exports.   The
allocations are accomplished by export share coeffi-
cients specific to each traded commodity.  These
regional export shares, initially projected with
regression studies based on historical and cross-
section data, can be readily changed to accommodate
alternative assumptions, as is done later in this
paper.
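
     A minimal sketch of that pooling step is given below in Python; the
region names, commodities, import figures, and export shares are invented
for illustration and are not the model's data.

    # Hypothetical import requirements by region and commodity ($ bill.).
    imports = {
        "Region A": {"steel": 2.0, "grain": 1.0},
        "Region B": {"steel": 0.5, "grain": 3.0},
    }

    # Hypothetical export share coefficients: each region's share of the
    # world pool for a given commodity (shares sum to one per commodity).
    export_shares = {
        "steel": {"Region A": 0.3, "Region B": 0.7},
        "grain": {"Region A": 0.6, "Region B": 0.4},
    }

    # World demand for each commodity = sum of regional import requirements.
    world_demand = {}
    for requirement in imports.values():
        for good, amount in requirement.items():
            world_demand[good] = world_demand.get(good, 0.0) + amount

    # Allocate each world pool back to the regions as exports.
    exports = {region: {} for region in imports}
    for good, pool in world_demand.items():
        for region, share in export_shares[good].items():
            exports[region][good] = share * pool

    print(exports)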

     Given the detailed list of exported commodities,
and  the levels of consumption, government expenditures
and  investment, it is now possible to specify the
detailed final demands facing each regional economy.
The  transformations from overall demand levels to
demands for specific commodities are accomplished by
various converter coefficients representing the compo-
sition of consumption, investment, etc.  These co-
efficients depend on the region's per-capita income,
and also reflect,  in some cases, region-specific
influences on the commodity breakdown of demand.   The
detailed final demands are added to intermediate
demands (the input requirements of producers)  to arrive
at the total domestic demands facing the regional
economy.
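
     The sketch below illustrates that two-step accounting with two invented
commodities: converter coefficients spread the aggregate consumption,
investment, and government levels over commodities, exports are added to
form the detailed final demands, and an input-output step supplies the
intermediate demands.  (For brevity the sketch ignores the import/domestic
split discussed below.)

    import numpy as np

    # Aggregate levels (illustrative).
    consumption, investment, government = 100.0, 40.0, 30.0
    exports = np.array([5.0, 10.0])

    # Converter coefficients: commodity composition of each aggregate.
    c_cons = np.array([0.7, 0.3])
    c_inv  = np.array([0.2, 0.8])
    c_gov  = np.array([0.5, 0.5])

    # Detailed final demands facing the regional economy.
    final_demand = (consumption * c_cons + investment * c_inv
                    + government * c_gov + exports)

    # Intermediate demands come from input-output coefficients A, so total
    # output x solves x = A x + final_demand.
    A = np.array([[0.10, 0.05],
                  [0.20, 0.15]])
    total_output = np.linalg.solve(np.eye(2) - A, final_demand)
    print(final_demand, total_output)
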
      Unlike most input-output based models, the system
 treats  several key commodities and activities in their
 natural physical dimensions.   Five agricultural commo-
 dities,  nine exhaustible resource commodities, as well
 as  eight pollutants are measured in metric tons or
 other relevant physical units.  This feature makes it
 possible to use physical quantity data and to check
 the results against other detailed projections.

      Domestic demands  can be  satisfied either by
 imports  or  by local production.   Depending on the
 commodity,  two different approaches are used to effect
 these allocations.   In the case  of exhaustible resource
 commodities,  importing regions are assigned output
 levels  consistent  with the amount of regional resource
 reserves still available,  and imports are used to fill
 all remaining unsatisfied demands.   In the case of
 manufactured products,  import dependency ratios are
 specified using, once  again,  regressions based on
 historical  and cross-section  data.   These regressions
 indicate that a region's import  coefficients generally
 vary inversely with economic  size,  and depending on
 the commodity,  either  increase or decrease as  a
 function of the region's development level relative
 to  the  development  levels  of  its  foreign competitors.
 With one or the other  of these approaches,  the
 domestic demand for each commodity  was apportioned
 between  imports  and domestic  production.
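
     A toy version of the two allocation rules might look as follows; the
categories, numbers, and the simple linear form of the import-dependency
rule are assumptions for illustration, not the model's actual equations.

    def split_demand(kind, demand, reserve_limited_output=None,
                     import_dependency=None):
        """Apportion domestic demand between local production and imports."""
        if kind == "resource":
            # Output is held to the level consistent with remaining reserves;
            # imports fill all remaining unsatisfied demand.
            production = min(demand, reserve_limited_output)
            imports = demand - production
        else:
            # Manufactured products: apply an import-dependency ratio.
            imports = import_dependency * demand
            production = demand - imports
        return production, imports

    print(split_demand("resource", 120.0, reserve_limited_output=80.0))
    print(split_demand("manufactured", 50.0, import_dependency=0.25))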

      In  turn,  the  domestic output levels  determine  the
 demand  for  intermediate  inputs and  the requirements
 of  various  types of capital and  labor.   These  require-
 ments are calculated using input-output coefficients
 that vary with  the  region's development level, with
 time, and in  some  instances,  with certain region-
 specific conditions.

     The  capital requirements of  producers,  along with
 the capital requirements of households and  of  the
 abatement activities are next used  to determine  the
 economy-wide  level  of  total investment.   These compu-
 tations  begin with  the capital stocks  available  at
 the beginning of each  decade,  and arrive  at  the
 investment  levels that provide for  the replacement of
 worn-out  plant  and  equipment  and  for the  required net
 expansion of  capacity.
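
     The flavor of that computation is sketched below; the depreciation rate
and the one-line accounting are assumptions, not the model's actual
formulation.

    def decade_investment(initial_stock, required_stock,
                          replacement_rate=0.05, years=10):
        """Investment over a decade = replacement of worn-out plant and
        equipment plus the required net expansion of capacity."""
        replacement = replacement_rate * initial_stock * years
        net_expansion = max(required_stock - initial_stock, 0.0)
        return replacement + net_expansion

    print(decade_investment(initial_stock=500.0, required_stock=650.0))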

     As  shown on the left  side of Figure  1, production
 also results  in "gross emissions" of various specific
 pollutants.  A  set  of abatement activities,  e.g.,
 various  levels of waste-water treatment,  is specified
 as  a means  of controlling  the pollutants  generated in
 production and by households.  Untreated  emissions
 plus the  residuals  emitted by the abatement processes
 are  summed  to show  the net emissions  released into the
 environment.  The usefulness  of this  information is
 limited,  of course, by the present  inability of  the
model to  determine  specific local concentrations
within its  large geographical areas.
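
     A highly simplified reading of that accounting is sketched below; the
single treated share and removal efficiency stand in for the model's menu
of abatement activities and are purely illustrative.

    def net_emissions(gross, treated_share, removal_efficiency):
        """Untreated emissions plus the residuals still emitted by the
        abatement processes (illustrative one-pollutant version)."""
        treated = treated_share * gross
        untreated = gross - treated
        return untreated + treated * (1.0 - removal_efficiency)

    print(net_emissions(gross=100.0, treated_share=0.6, removal_efficiency=0.85))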

     On  the right side of  Figure  1,  the detailed
 export and import levels are  valued  at  projected prices
 in  order  to arrive  at projected trade  balances.  The
price model is independent of  the physical  system dis-
 cussed so far, though it is based on many of the same
 assumptions, relationships, and structural  coefficients.

     The price model relies on the  inter-industry
 structure of an advanced developed  economy  to examine
 the price implications of  the various  projected  changes
 in  the structural coefficients.   The prices obtained
are normalized to keep the value  of  a  bundle of  con-
 sumption  goods constant  throughout  the projection
period, and should  therefore  be interpreted as rela-
 tive rather than absolute  price projections.
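
     The normalization can be pictured as rescaling the raw projected prices
so that an assumed consumption bundle keeps its base-year value; the bundle
and prices below are placeholders.

    import numpy as np

    bundle = np.array([3.0, 1.0, 0.5])        # assumed consumption bundle
    base_prices = np.array([1.0, 1.0, 1.0])   # base-year prices
    projected = np.array([1.4, 0.9, 2.0])     # raw projected prices

    # Rescale so the bundle costs the same as in the base year.
    scale = (base_prices @ bundle) / (projected @ bundle)
    relative_prices = projected * scale
    print(relative_prices, relative_prices @ bundle)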

     The  inputs structures of  the resource  industries
played an important role in the estimation  of  future
prices.   As each region exhausted its  high-grade
 reserves of a specific resource it was  assumed to move

-------
to lower quality deposits.  This meant, in turn, that
extraction would entail higher input requirements
(per unit of usable resource output) and hence higher
costs.  Specific (though obviously tentative) data
about the quantity and quality of deposits in each
region were used to trigger the changes in extraction
input requirements.  Of the relative price changes
projected, a large part can be traced to the successive
exhaustion of high-grade reserves of a number of
resource commodities.  Other, smaller effects arise
because labor productivities are projected to grow at
unequal rates in different sectors, and because
various kinds of input substitution are expected to
take place.
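
     The trigger mechanism can be sketched as a simple switch on cumulative
extraction; the deposit figures and input columns below are invented.

    def extraction_inputs(cumulative_extraction, high_grade_reserve,
                          high_grade_inputs, low_grade_inputs):
        """Input requirements per unit of usable resource output, switching
        to the costlier column once high-grade reserves are exhausted."""
        if cumulative_extraction < high_grade_reserve:
            return high_grade_inputs
        return low_grade_inputs

    print(extraction_inputs(120.0, 100.0,
                            high_grade_inputs={"energy": 1.0, "equipment": 0.4},
                            low_grade_inputs={"energy": 1.8, "equipment": 0.9}))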

     Our description has, so far, dealt with a
particular structural specification based on exo-
genous GDP targets.  Alternative specifications
make it possible to ask rather different types of
questions.  For example, it might be useful to know
what GDP levels might be attained given, say, limits
on the available labor force.  Figure 2 shows a
schematic approach that assigns a unique specifica-
tion to each regional sub-model, depending on the
particular factors that are expected to govern that
region's future development.  In Region A, the supply
of labor is assumed to limit the level of GDP.  Con-
ceptually, the appropriate level of GDP could be found
by trial-and-error, as the level that (through the
interactions described in Figure 1) results in
"correct" labor requirements.  In the case of Region B,
balanced (or appropriately imbalanced) trade might be
viewed as the relevant target.  The computations do
not in fact need to depend on a trial-and-error pro-
cedure:  the results of the new specification can be
obtained directly by modifying and solving the original
simultaneous equation system used to describe the
region.

     A further implication of this approach is that
several different instruments could be used to achieve
the same ultimate outcome for any given target vari-
able.  In order to obtain balanced international
accounts, for example, the region's share of world
exports might be assumed to increase even though its
GDP is held constant.  Alternately, its import-
dependency coefficients might be changed, or new
assumptions might be introduced concerning inter-
national aid and capital flows.

                  3. The Experiment

     The two hypothetical trade development programs
 (representing the conventional and pollution-intensive
export strategies) will be examined in the context of
the GDP-target specification shown in Figure 1.
Earlier projections based on this structure have
generally shown sizeable and deteriorating balance of
payments deficits for developing regions without sub-
stantial resource endowments.  These deficits are due
to the rising prices of imported resource commodities,
to the high import-intensity of capital formation in
developing regions, and to the relatively slow growth
of markets for the key exports of the developing
world.

     Against this background we turn now to examine
the implications of a pollution-intensive export pro-
gram for three developing regions:  Latin America
(Medium Income), Asia (Low Income), and Arid Africa.*
It is clear, of course, that any strategy  that  results
in a vigorous expansion of exports will also  improve
the exporter's balance of payments position.  In
order to identify the specific characteristics  of
the pollution-intensive program, we perform a con-
trolled experiment; one that contrasts the effects
of this strategy with the implications of more  tradi-
tional approaches to export development.

     In Experiment A, export increases are assumed to
occur in five sectors (textiles and apparel,  wood
products, furniture and fixtures, printing, and
miscellaneous manufactures) characterized by  labor-
intensive and relatively standardized production
technologies.  In Experiment B,the same amount  of
exports is assumed to be produced by a different
group of sectors (primary metal processing, paper,
industrial chemicals, rubber, and other chemicals),
industries that are generally recognized as the most
pollution-intensive among manufacturing processes.
     Both programs are constructed to generate  (by
the year 2000) $50 billion of new exports for the
three regions taken together.  The $50 billion is
divided among the regions in proportion to the
currently projected exports.  The increases are further
assigned to specific products—within the industry
group relevant to each experiment—in proportion to
overall world trade in these products.  The hypotheti-
cal export increases calculated this way are  shown in
Table 1.

  Table 1. Alternative $50 bill. Export Strategies
           ($ bill., 2000 relative prices)

               A. Conventional Exports**
                      Latin                 Arid
                     America     Asia      Africa
Textiles, Apparel     13.0       20.5        1.69
Wood Products          1.7        2.7         .23
Furniture, Fixtures     .3         .4         .03
Printing               1.2        1.8         .15
Miscellaneous Mfg.     2.3        3.6         .30
Totals  (Sum = 50.0)  18.5       29.1        2.40

           B. Pollution-Intensive Exports

Primary Metals         6.0        9.4         .78
Paper                  4.0        6.3         .52
Rubber                 1.0        1.6         .13
Ind'l Chemicals        4.4        7.0         .57
Other Chemicals        3.0        4.7         .39
Totals  (Sum = 50.0)  18.5       29.1        2.40

 ** Detail may not add due to rounding.

  * These three regions include (a) Argentina, Brazil,
    Mexico, Chile, Cuba; (b) Bangladesh, Pakistan, India,
    Indonesia; (c) Egypt, Ethiopia, Morocco, Sudan, plus
    in each case smaller countries.
    The experiments were implemented by inflating
each region's export share coefficients so as to
generate the export increases of Table 1.  Both pro-
grams are assumed to be phased in gradually over the
1970-2000 period, reaching the specified increases in
the year 2000.
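
     The mechanics of the experiment can be sketched as below; the regional
export projections and world-trade weights are placeholders chosen only to
show the proportional apportionment and the linear phase-in, not the
figures actually used in the model.

    # Apportion a $50 bill. export increase across regions in proportion to
    # currently projected exports, and across products in proportion to
    # world trade in those products (placeholder numbers).
    total_increase = 50.0
    projected_exports = {"Latin America": 37.0, "Asia": 58.2, "Arid Africa": 4.8}
    world_trade_weights = {"textiles": 0.69, "wood products": 0.09,
                           "furniture": 0.015, "printing": 0.06,
                           "misc. manufactures": 0.145}

    total_projected = sum(projected_exports.values())
    increase = {region: {good: total_increase * (exp / total_projected) * w
                         for good, w in world_trade_weights.items()}
                for region, exp in projected_exports.items()}

    # Linear phase-in over 1970-2000, reaching the full increase in 2000.
    def phased(amount, year, start=1970, end=2000):
        return amount * min(max((year - start) / (end - start), 0.0), 1.0)

    print(round(phased(increase["Asia"]["textiles"], 1990), 2))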

    In the context of fixed GDP targets, the export
increases have two immediate effects:  first, exports
displace consumption goods in production and, second,
the export increases reduce the region's trade deficit.
Secondary consequences follow from the fact that the
input structures of the export-oriented sectors may be
quite different from the input structures of the con-
sumption-oriented activities that they replace.  These
precise consequences depend, of course, on the sectoral
mix of the export program, and shall be examined
shortly.
                                                       285

-------
      While using  the  fixed-GDP  specification,  we shall
 not  be  able to  measure  directly the  growth-generating
 consequences of either  program.   To  do  so,  we  would
 have to identify  a  specific  limit on each  region's
 development (as is  done schematically in Figure 2)
 and  experiment  with the alternative  export  strategies
 under that particular constraint.  To the  extent that
 the  programs affect the limiting factor differently,
 they would generate  different  rates of long-term growth.

      At this stage, however,  the fixed-GDP  context
 offers  a more general way of  assessing  the  effects  of
 each strategy on  a  set  of different  factors that
 potentially limit growth (including  foreign exchange,
 capital, and labor)—without  prejudging which  of these
 factors is most acutely limiting.  This is  done in
 Section 4.   The next  logical  step, the  quantification
 of the  ultimate development impacts  of  these effects,
 is left to future work.

     4.  Comparison of  Alternative Export Strategies

      Table 2 presents the balance of  trade  implica-
 tions of the two  strategies.  In neither case  is the
 net  improvement (for  the three  regions  taken together)
 as great as the $50 billion export increase; both
 export  strategies involve leakages in the form of
 added import requirements.  The  import  leakages  of
 pollution-intensive exports are  some  $5.5 billion
 higher  than those of  the conventional strategy.
 Pollution-intensive industries  tend  to  have  above-
 average raw material  and capital requirements,  and
 these two  kinds of  commodities  are heavily  imported.
 In addition,  the  inter-industry  purchases of
 pollution-intensive industries  tend  to  favor the
 least developed and therefore most import-dependent
 sectors  of a developing  economy.

        Table 2.  Balance of Trade Results, 2000

           A. Improvement of LDC Balances
           ($ bill., 2000 relative prices)

                                       Export Strategy
                       Central
                      Projection        A          B
Latin America           -84.6         -65.5      -69.1
Asia                    -81.6         -58.8      -60.9
Arid Africa             - 7.9         - 6.4      - 6.2
  Subtotal:            -174.1        -130.7     -136.2
Improvements over
  Central Projection:                 +43.4      +37.9

  B. Regional Distribution of the Losses Offsetting
         the LDC Improvements (percentages)

                                     Export Strategy
                                       A         B
Western Europe (High Income)          44        41
North America                          9        27
Japan                                 16         9
Soviet Union                           4        10
Eastern Europe                        10         4
Western Europe (Medium)                6         1
Middle East                            3         2
Asia, Centrally Planned                3         3
Latin America (Low Income)             1         1
Tropical Africa                        1         1
Southern Africa                        1         1
  Total Losses                       100%      100%
  Total Losses in $                 -43.4     -37.9

     The rest of the world is differently affected
by the two export alternatives.  As the second part
                           of Table 2 shows, the predominant  effect of both
                           strategies is to reduce  the  trade  surpluses (or to
                           increase  the deficits)  of advanced  developed
                           countries.  The conventional exports are apparently
                           diverted more sharply from Europe  and Japan, while
                           the pollution-intensive  products are obtained more at
                           the expense of North America and the Soviet Union.

                                The internal consequences  of  the trade alterna-
                           tives are summarized in  Table 3.   The most  striking
                           differences emerge in the overall  capital stock re-
                           quired in production.  In Experiment A,  the diversion
                           of gross product from consumption-oriented  activities
                           to conventional exports  has in  fact  saved capital  in-
                           puts.  In contrast, the  pollution-intensive program
                           imposes added capital requirements on the regional
                           economy.  While the $34.4 billion  difference between
                           the programs is small relative  to  the  total capital
                           stocks of the three economies,  it  is  large  when com-
                           pared to the capital requirements  of  the  programs
                           themselves.  It would take about 40% more capital to
                           implement the pollution-intensive  strategy  than it
                           would to produce equally valued conventional exports.


                                     Table 3.  Additional Requirements
                                      of Pollution-Intensive Strategy
                                 over the $50 bill. Conventional Strategy

                                   ($ bill., 1970 prices except as noted)

                                                                 All Three
                                                            Developing Regions
                           Building & Plant Capital                18.1
                           Equipment Capital                       16.3
                           Subtotal:  All Fixed Capital            34.4
                             As % of Economy-wide Stock (%)         1.2
                           Annual Investment                        2.3
                           Employment (mill. myr.)                  1.5
     The employment differences between  the programs
are small and mixed.  Quite surprisingly, both pro-
grams tend to generate about the same amount of
employment when all indirect effects are taken into
account.  Differences might be found if  employment
were further itemized by skill categories, but this
refinement has not yet been implemented  on the world
system.

     The environmental implications are  shown in
Table 4.  While the economic differences between the
alternative programs are found to be small relative
to overall economic magnitudes, this is not the case
with respect to the net emissions of specific pollu-
tants.  These added pollutant loadings are not easily
interpreted in the absence of geographically detailed
projections.  Nevertheless, they suggest that a
pollution-intensive export program that is large
enough to affect the balance of trade will also have
non-negligible environmental consequences.

     If the difficulty of achieving one or the other
of the export targets is related to the  extent of the
required structural transformation of the economy,
our solutions can also shed light on the relative
feasibility of the two programs.  Table 5 shows the
output changes implied by each of the programs
relative to the standard projections for 2000.

     The structural changes implied by the pollution-
intensive alternative are typically larger than those
for the conventional program, and especially so for
the two poorest regions, Asia (Low Income) and
Arid Africa.  This was to be expected, of course,
                                                      286

-------
since the conventional export bundle was designed to
emphasize the strengths of a developing economy.  The
radically different output profiles obtained under
Experiment B suggest that major changes would have  to
take place (in terms of new infra-structure, manpower
training, and the establishment of supplying indus-
tries) before the new activities can be absorbed in
the economic fabric.


         Table 4. Additional Net Emissions
  with Pollution-Intensive Export Strategy, 2000

   (percentages relative to projected emissions)

                       Latin               Arid
                      America    Asia     Africa
Pesticides              -1        -2        -1
Particulates (Air)      68        11         9
BOD                     12         5         3
Nitrogen (Water)         8         0         0
Suspended Solids        16        23         U
Dissolved Solids         8         8         3
                                         In sum, several economic criteria militate in
                                    favor of the conventional and against the pollution-
                                    intensive strategy—provided, and this is quite
                                    important, that  a choice exists  at all.   Even the
                                    pollution-intensive approach, relying as it does on
                                    some resources that are especially scarce to a
                                    developing economy, does generate foreign exchange,
                                    and would, in the absence  of  other earning oppor-
                                    tunities, most likely  contribute  to the development
                                    process.
        Table 5. Additional Output Required
      to Implement Export Strategies in 2000

     (percentages relative to projected output)

              A. Conventional Exports

                        Latin               Arid
                       America    Asia     Africa
 Textiles, Apparel        27       20        19
 Wood Products            16       32        50
 Furniture, Fixtures      -2*      -2*        0
 Printing                  8       37        40
 Miscellaneous Mfg.       24       23        18
 Five Industries
  Together                15       17        16

          B. Pollution-Intensive Exports

 Primary Metals           19       67        42
 Paper                    26       75        40
 Rubber                   11       41        33
 Ind'l Chemicals          20       35        17
 Other Chemicals          17       33        30
 Five Industries
  Together                22       49        30

  * Since in our computation exports displace consump-
 tion in (fixed) GDP, and since furniture is important
 in consumption and not so in the export package,
 furniture output would decline if the strategy were
 implemented.
                  5. Conclusions

     By postulating two hypothetical  export  programs,
 we have attempted to identify  some  economic  conse-
 quences involved in the choice between  conventional
 and pollution-intensive export strategies.   The
 pollution-intensive approach is found to be  more
 expensive in terms of capital, and  requires  more
 imports.  Moreover, this approach would imply  size-
 able shifts in the output mix  of the  developing
 economy, and might create non-negligible environmental
 repercussions.  Also, at least initially,  the  pollution-
 intensive sectors would tend to operate as an  enclave
 within the regional economy.
                                       Some  dynamic  theories  of  development  suggest  that
                                     the  early establishment  of  basic  industries—as the
                                     pollution-intensive  sectors typically are—can
                                     actually hasten development through  a variety of
                                     backward and  forward linkages with other sectors
                                     of the  economy.   These effects would have to be
                                     large in order  to offset the static  disadvantages
                                     cited above.
                                                       287

-------
                                      A TAXONOMY OF ENVIRONMENTAL MODELS
                                           Robert U.  Ayres
International Research and Technology Corporation, 1501 Wilson Boulevard, Arlington, Virginia 22209
                   The Role of Models

     Problems of the environment are essentially pro-
blems of production and consumption, concerned as they
are with real physical materials and energy.  These real
physical materials must be derived from the natural
environment, where they are distributed unevenly.  Their
usage involves successive stages of processing and
transformation which inevitably result in social costs
(externalities), ranging from noise to the discharge
of toxic waste materials.  Similarly, so-called "final"
consumption—that convenient abstraction from classi-
cal economics—is not final at all in terms of dis-
position of real materials and energy.  On the con-
trary, consumption of material goods means in practical
terms that the goods have lost their utility value and
become wastes.  But these still have to be disposed of
and are still capable of causing very serious harm to
the natural environment  and to man, depending upon
the location and method of discharge or disposal.
     In short, a whole collection of new problems
associated with stocks and flows of physical  materials
and energy has come to the fore.  They were with us
all along, but in the 1930's other problems associated
with organizing the economy and fully utilizing its
resources were far more urgent.  Natural resources, on
the other hand, are becoming painfully scarce in the
industrialized world.  Many of the most abundant
natural resources of the earth have already been dis-
sipated, not to say wasted.  The surface of the earth
still has abundance to offer, but we must dig deeper,
scrape the bottom of the ocean, or look at more remote
areas of mountains, deserts, and jungles.  Extracting
these resources is creating newer and more serious
problems also.  Delicate ecosystems such as the Arctic
tundra or the tropical oceans are beginning to show
adverse"effects from this intensification of digging,
drilling, and quarrying.  Mine wastes, oil leakage,
combustion products of a vast variety of types and
kinds of pollution are fouling our environment,  and
making survival impossible for many harmless species
of plants and animals which formerly shared the earth
with us.  Some have even questioned whether human life
itself can survive for long amidst this environmental
carnage.
     No doubt, some of the immediate fears of resource
exhaustion are overblown, but it is not our purpose
here to evaluate their validity.  It is enough to say
simply that the "new" problems facing economists in
the 1970's and 1980's are intimately associated with
the real properties of physical materials and energy,
and above all, with their stocks and flows—that is
with their physical quantities.  To deal with these
problems we need first of all an appropriate economic
theory.  The conventional paradigm addresses the
economy as a set of relationships between production,
investment and consumption expressed in monetary terms,
and defines its concerns as determining conditions for
maximizing consumer utility and social welfare by
optimizing these relationships.  But another paradigm
is needed in which the economy is viewed as a set of
transformations of physical materials from the raw
state through successive stages of extracting and pro-
cessing to goods and services, and finally to waste
flows.  Even physical dispersion and biological  impacts
must be considered.  The problem of optimization is
correspondingly broadened.  This broader theory must
address the problem of production of externalities as
well as economic services, and the allocation of such
externalities.  It must deal with the problem of defin-
ing and maximizing social welfare subject to pervasive
resource supply constraints, the laws of thermodynamics,
                                                     and the existence of externalities resulting from
                                                     waste residuals; and it must provide theoretical tools
                                                     to facilitate our understanding of the appropriate
                                                     mechanisms for managing the economy.
                                                          One characteristic of the "new" problems in
                                                     environmental  economics is that they are increasingly
                                                     at the "micro", rather than the "macro" level of ag-
                                                     gregation.  Resource and environmental concerns tend
                                                     to involve consideration of technological particulars,
                                                     as contrasted with interrelationships among broad ag-
                                                     gregates such as population or Gross National Product.
                                                          This increasing concern with detail is character-
                                                     istic of the development of any science.  In the early
                                                     stages of development of theoretical chemistry, it was
                                                     convenient to proceed by lumping all the chemical
                                                     elements together, calling them "matter" and looking
                                                     for generalized "laws of matter".  But the number of
                                                     valid inferences that can be made from this simple
                                                     model is quite limited.  Each element is different,
                                                     each has different properties, each reacts differently
                                                     from the others.  Sooner or later the chemist is
                                                     forced to recognize and take into account these dif-
                                                     ferences by developing a more elaborate model.
                                                          However,  the chemist continues to consider mole-
                                                     cules essentially as complete entities.  These  are the
                                                     objects of his research.  It is precisely the combina-
                                                     tions between  atoms (of similar or dissimilar species),
                                                     which are governed by electromagnetic forces, that
                                                     chemistry is all about.
                                                          Note that the analytical methods of chemistry are
                                                     of no value in studying reactions that occur within
                                                     the nucleus of a single atom.  These are governed by
                                                     different—stronger but shorter range—forces which
                                                     have essentially no influence on the interactions be-
                                                     tween atoms and molecules.  Conversely, the electrical
forces which govern inter-atomic relations have no
                                                     measurable effect on the probabilities or rates of
                                                     nuclear reactions.  The two classes of phenomena are
                                                     virtually independent of each other, though neverthe-
                                                     less governed  by the same fundamental physical  laws.
                                                          On the other extreme, consider the gravitational
                                                     forces that control the motions of stars and planets.
                                                     These are the  only forces in the universe that  are ef-
                                                     fective at truly long distances.  But, by the same
                                                     token, the gravitational forces are incredibly  weak by
                                                     comparison to electromagnetic forces.  Only when
                                                     enormous numbers of particles are collected together
                                                     as "mass" is the gravitational effect significant--
                                                     whereas the electrical forces are cancelled out at a
                                                     distance by the fact that positive and negative charges
                                                     are present in equal numbers.  The laws of chemistry,
                                                     then, have little to say about the motions of cosmo-
                                                     logical objects.  Similarly, astrophysics contributes
                                                     little to chemistry.
                                                          Each branch of science lumps together and  aggre-
                                                     gates "over" the objects and forces that are too weak
                                                     or too short range to influence the phenomena with
                                                     which it concerns itself.  However, it also aggregates
                                                     into categories those objects and forces with which it
                                                     is concerned.   Chemistry aggregates atoms and molecules
                                                     by species.  It does not examine single atoms.  Similar-
                                                     ly, other sciences such as physics, astronomy,  and
                                                     biology tend to classify their objects in such  a way as
                                                     to distinguish differences that matter from differences
                                                     that do not matter at that level of aggregation.
Aggregation, then, is at the heart of theoretical
                                                     science.  If an investigator examines only individual
                                                     cases, in an individual way, patterns are indistin-
                                                     guishable and  one is soon lost in a sea of particulars
                                                     to which no general significance can be attached.  On
                                                     the other hand, if aggregation is carried too far,
                                                     unlike elements are lumped together, essential
                                                       288

-------
differences are obscured and again, the expansion of
knowledge is limited.
     The environmental  or social scientist's problem
in this respect is obviously more complex than that of
the chemist, who has to deal with a very limited num-
ber of clearly defined elements, with known atomic
structure,  which he combines or separates in endless
ways.  The  environmental or social  scientist, on the
other hand, has no "unit" of absolutely fixed value or
quantity.  Measures of utility or value are ambiguous.
Even physical  measures of the quantity of a commodity
(tons or cubic feet) fail to take account of physical
transformations, not to mention variations in the
quality.  Not only is everything influenced by every-
thing else, but everything fluctuates in relation to
everything  else.
     In order to find answers to many of the pressing
problems of an era of rapid technological change, an
environmental  economist must be able to carry out
analysis at a level of aggregation that is appropriate
to take into account the widely different resources,
materials, forms of energy and production processes to
which technological changes specifically apply.
     A list of examples of "new" problems facing
economists  arising from the resource-environment-
technology interface could be made arbitrarily long.
Almost every major technological decision has a
resource/environmental dimension.  The controversy
over the SST is an excellent illustration.  To build
and operate such an aircraft will result in increased
demands for hydrocarbon fuels, and it will result in
physical disturbances to the stratosphere that may ul-
timately affect the intensity of both ultraviolet and
visible solar radiation on the Earth's surface.  All
of these impacts have potential economic consequences
of significant magnitude, which require assessment.1
Similarly,  problems arising initially out of perceived
resource-needs immediately reveal technological and
environmental  aspects.  Thus, the exploitation of
Alaska's north slope involved building an enormous
north-south pipeline across Alaska, with immense
potential for environmental disturbance.  Proposals to
exploit Colorado oil shale or Wyoming-Montana coal are
seen to require diversion of large amounts of scarce
water away from traditional agricultural uses.  The
proposals to solve a resource problem by relying more
heavily on nuclear power, especially "breeder reactors"
and  plutonium reprocessing, are also evidently fraught
with environmental risks.
     The history of the last few years has been—and
that of the next decades undoubtedly will be—increasingly
preoccupied by the need to choose among and between
complex, expensive and uncertain technological pro-
grams; each of which involves large potential environ-
mental risks and hazards as well as possible benefits.
The  cheap alternatives and easy choices are no longer
available.
     And the factors which must be weighed to arrive
at rational choices are intrinsically concerned with
detailed technological and environmental questions.
What is the marginal impact on the human environment
of one unit more (or less) of mercury? or PVC?  What
is the net environmental benefit of electric cars using
nuclear power vis-a-vis ICE powered cars using gasoline?
What if we burn coal to make electricity in large power
plants in remote areas vis-a-vis converting it to gas
and burning it in local "total energy" installations?
How much will  it cost to desulfurize coal? oil?  What
are the potential markets for by-product sulfur? Etc.,
etc.
     Lawrence Klein has defined a model as a "schema-
tic simplification that strips away the non-essential
aspects to  reveal the inner workings, shape, or design
of a more complicated mechanism".2 Aggregation is the
key to model design, just as it is the key to theoreti-
cal science in a more general sense.  The fact that a
model cannot hope to reproduce  "all"  the  details—even
the more important ones—of a complex reality  is  an
inherent limitation on what can  be  accomplished by  it.
Yet its comparative simplicity  is also a  strength.  An
excessively detailed model would be cumbersome to
handle and expensive to maintain, yet still  imperfect.
On the other hand, the simplification of  reality  im-
plicit in a model can be a trap, for  if the  model omits
key factors that may have a determining effect on
possible outcomes, it can depart too  far  from  reality
and lead to false conclusions.   It  is all too  common
for even experienced investigators  (who should know
better) to be hypnotized by the neat  rows of figures
emerging from a computer and mistake  them for  a por-
trayal of real facts.
     Central to the design problem, too, is the decision
of what factors the model should explicitly  take  into
account.  This judgment is determined by the questions
the model is designed to answer.  If  the  investigator
wants to know how fast DDT is building up in the ocean,
he may project DDT use in cotton farming and malaria
control.  This can be useful, although it does not tell
him where the effects will be concentrated or which
species will be affected.  Its predictive accuracy, of
course, hangs on the accuracy of the  projections  of the
two aggregates: cotton production and public health
use.  If the tradeoff between malaria control  and over-
population is to be explored—by country—a  much  more
detailed analysis of alternate strategies is needed.
To accomplish the practical purposes  of applied
environmental analysis, clearly, we must build com-
puterized accounting and optimization models to reflect
a wide variety of such factors.
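
     As a toy illustration of projecting from two aggregates, the sketch
below accumulates oceanic DDT from assumed cotton-farming and malaria-
control use; every parameter is invented, and, as noted above, nothing in
it says where the effects would be concentrated or which species would be
affected.

    def ddt_ocean_stock(years, cotton_use, health_use,
                        runoff_fraction=0.25, decay_rate=0.05, stock=0.0):
        """Accumulate the DDT stock reaching the ocean (thousand tonnes)
        from two aggregate uses; all parameters are illustrative."""
        for _ in range(years):
            stock += runoff_fraction * (cotton_use + health_use) - decay_rate * stock
        return stock

    print(round(ddt_ocean_stock(years=20, cotton_use=60.0, health_use=30.0), 1))
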
         Qualitative vs. Quantitative Models

     The first and most important distinction that
needs to be made is between qualitative and  quantita-
tive models.  The term "qualitative"  may seem  inappro-
priate as applied to models, but it is not.  Diagrams
and pictures—said to be "worth  a thousand words" —
are clearly models, by Klein's definition.   (Even a
photograph is a "model", since it represents a 3-
dimensional dynamic reality in a 2-dimensional static
form.) For that matter, a verbal description of a real
event is also a kind of model.  Qualitative models can
be highly precise and rigorous in expressing certain
types of information.  A classic example is the famous
periodic table of the elements, developed by Mendeleyev.
It is clearly a model, and clearly qualitative.   There
are no measures involved.
     Despite the obvious importance of qualitative
models, the remainder of this paper will  be concerned
explicitly with quantitative models.
         Simulation vs. Optimization  Models
     A second fundamental dichotomy can be drawn  be-
tween simulation (or forecasting) models and optimiza-
tion models.  While both types are applicable in
economics, the latter type is more fundamental  to the
discipline,  whereas physical  models are more likely to
be of the former kind.  Indeed, economics has generally
been defined as the study of optimal  allocation of
scarce (limited in availability) resources among  com-
peting ends  or uses (in ordinary language, this pro-
cess is often simply called "economizing").
     In particular, classical economics is concerned
with the allocation of investment capital among compet-
ing projects and localities, the allocation of income
among competing expenditure categories (or means of
achieving satisfaction), and so on.   The newer pro-
blems of economics, as already noted, concern optimal
allocation of natural resources among sectors of
society, selection of technologies to maximize desired
outputs and  minimize costly inputs and/or costly wastes;
selection of optimal  pollution abatement strategies,
investment schedules, etc.
                                                       289

-------
               Static vs. Dynamic Models

     A third key distinction is between static and
dynamic models.  The static/dynamic distinction con-
cerns the treatment of time.  In a static model, time
is not a variable: the solution is valid either for a
particular point in time (only), or for all time, de-
pending on whether the exogenous variables of the pro-
blem are themselves time dependent or not.  "Cross-
sectional" survey data is often used.  In a dynamic
model, time is an explicit variable and the solution
evolves with time.  Longitudinal (time-series) data is
appropriate.  Clearly the dynamic optimizing case is
most general, but is also most difficult to formulate
(and to compute).  Static non-optimizing models are of
comparatively little importance.  The only significant
example I know of in economics is the input-output
model.  I know of no good examples in the physical
sciences.  Combining the two dimensions, there are four
major categories:
     •  static simulation
     •  dynamic simulation (forecasting) models
     •  static optimizing models
     •  dynamic optimizing models.

           Causation vs. Correlation Models

     Quantitative models may also be divided along
another axis, depending on the use of phenomenological
causality.  This is not a distinction that can always
be made cleanly by an outside observer, since it some-
times depends on the modeller's intentions.  There is a
wide spectrum.  At one extreme one finds "blind" ex-
trapolation of exponential curves as straight lines on
log-paper--where no strict causality is even suggested
and the forecaster assumes (in effect) that the curve
has a kind of independent life.  At the other extreme
is the completely analytic model where everything is
explained in terms of a fundamental physical theory,
such as the "laws of gravitation".  Obviously even
fundamental  theories are not unchangeable.  (Too many
have been upset during the present century for us to
feel confident that we know all the basic laws govern-
ing matter and energy, still  less living organisms.)
Thus dependence on "causality" is always relative.
     Econometric models, based on correlative relation-
ships between variables determined by statistical
analysis of time-series data, tend to be weak in
causality.  In a sense, the use of sophisticated
statistical  methods can be a substitute for understand-
ing underlying mechanisms and relationships.  (This is
not necessarily the case: a good theoretical model can
be improved by the application of refined statistical
techniques for empirical estimation of key parameters.
But often statistical means are employed to select
relationships between variables precisely because
fundamental  theory is lacking; indeed, the theory may
develop more rapidly as a consequence of such analysis.)

            Realistic vs. Abstract Models
     Causal  models, based on fundamental theory—as
opposed to correlative models developed by statistical
analysis of empirical data—may be either realistic or
abstract.  In the former case they are intended to
represent real phenomena and to assist either in fore-
casts or in  determining optimum arrangements.   On the
other hand,  abstract (data-free) models are intended
only to generate "theorems", elucidate limiting cases,
and so forth.  They tend to be deliberately oversimpli-
fied.  Much  of theoretical  resource environmental
economics in recent years has been concerned with ex-
ploring the  properties of very simple models that
(presumably) exemplify more general principles.

            Short-term vs.  Long-term Models

     An important subsidiary distinction, applicable to
realistic forecasts only, must be made between short-
term and long-term dynamic simulation models.   The
 distinction  is  extremely important in practice, yet
 often  overlooked (or not understood) even by many
 practitioners.   The difference hinges on how data is
 used  in  the  model  and how the results are interpreted.
 Briefly,  long-term models are concerned with moving
 averages  or  trends, in which temporary departures from
 equilibrium  are deliberately ignored.  Indeed, the more
 precisely calibrated the model is for use in short-term
 forecasts, the  faster it will depart from the long-term
 trends.   Conversely the more closely it is tied to
 long-term trends the lower the correlation with short-
 term fluctuations.   Hence fluctuations in the input
 (i.e.  historical)  data are regarded as "noise" and are
 normally  smoothed  over.   On the other hand, short-term
 models are precisely concerned with the fluctuations
 away from equilibrium.   Hence they tend to utilize all
 historical data, with the smallest possible time incre-
 ments—usually  quarterly—and, as a rule,   recalibrate
 and recompute after each new set of data points is
 added  to  the series.   And because the non-equilibrium
 aspect is  vital, multiple correlation regression
 analysis  is  heavily used in developing the equations.
     The  important  point here is that short-term models
 are predictions because they take off from an actual
 set of initial  conditions,  such that all  values of
 variables  and parameters are guaranteed to be realistic,
 at least,  at time  zero.   The predictive value declines
 fairly rapidly  as  the forecast horizon is  extended, of
 course, because the starting point is always  off-
 equilibrium.
     Long-term  models,  on the other hand,  do  not give
 predictions, but are used instead to make  projections
 (usually in sets of possible alternatives).  A long-term
 model has no predictive  value even in the  short-run,
 because it is concerned  with trends  and smoothed
 averages.  These are  seldom in agreement with actual
 current values  of the model  variables.   And because
 equilibrium conditions are  mainly of concern, it is
 reasonable to depend  heavily on  accounting  identities
 (e.g., input-output relationships,  materials, energy
 balances, etc.)  which can  be relied  upon to change
 fairly slowly,  if at  all.   The purpose of  a long-term
 model is  to examine the  quantitative consequences of
 changes in exogenous  trends  in  parametric  relationships
 or in constraints.   Conclusions  drawn from  long-term
 models should always  be  explicitly contingent on the
 particular set  of starting  assumptions.   (A contingent
 statement is always of the type:  "if ... then ...".
 The assumptions  are  an intrinsic  part of the  statement.)
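
     The contrast can be made concrete with a small
sketch in Python (the series, the eight-quarter window,
and the growth rates below are invented for illustration
and are not drawn from any model discussed here): the
long-term use of a series fits a trend to smoothed annual
averages and yields projections, while the short-term use
recalibrates on the most recent quarters and launches its
forecast from the actual last observation.

# A minimal sketch, assuming a synthetic quarterly series.
import numpy as np

rng = np.random.default_rng(0)
quarters = np.arange(40)                            # 10 years of quarterly data
series = 100 * np.exp(0.01 * quarters) + rng.normal(0, 3, quarters.size)

# Long-term use: smooth out fluctuations (annual means) and fit a log-linear
# trend; the result is a projection, contingent on the fitted trend.
annual = series.reshape(10, 4).mean(axis=1)
years = np.arange(10)
b, a = np.polyfit(years, np.log(annual), 1)          # log(annual) ~ a + b*year
long_term = np.exp(a + b * np.arange(10, 15))        # next five annual levels

# Short-term use: regress on the last eight quarters only and start the
# forecast from the actual last observation, so values are realistic at
# "time zero" and are recomputed as each new data point arrives.
slope, intercept = np.polyfit(quarters[-8:], series[-8:], 1)
short_term = series[-1] + slope * np.arange(1, 5)    # next four quarters

print("long-term projected annual levels:", np.round(long_term, 1))
print("short-term quarterly predictions: ", np.round(short_term, 1))
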
     Because short-term  and  long-term models  are used
 for different purposes  (and  supported by different
 institutional sponsors)  the  developers  are  often out of
 touch with each  other and—at times—unjustifiably
 contemptuous of  each other's  methodologies.   In  parti-
 cular, there is  a tendency  for each  to  encroach  on the
 others' domain,  rather than  to develop  a synergistic
 dialog.  But it  is  becoming  increasingly urgent  that
 such a dialog be initiated,  both  to  minimize  duplica-
 tion of effort  and  rediscovery of the wheel and for
 more fundamental reasons.   The  latter have  to do with
 the need to develop  realistic  long-term dynamic opti-
 mizations in which  short-run  departures from  equili-
 brium play an explicit role.
     For the moment  it is worth  noting, simply, that
 realistic optimization models  currently tend  to draw
more on engineering and  physical  information  than on
 statistical (time-series) analysis—which may explain
why econometricians have  largely  ignored this area in
 the past.

                 Levels of Aggregation

     Considerations  noted earlier suggest  that one use-
 ful way of further  subdividing  realistic models  is by
 level  of aggregation.  There  exists  a  natural hierarchy
of aggregation  levels in  economics,  each level  useful
 for some particular purpose.   Most  highly aggregated
                                                      290

-------
(call  this  Level  1) is the model based primarily on
such broad  measures as total population, total labor
force,  unemployment rate, gross national product, gross
capital  formation, private consumption expenditures,
wholesale and consumers'  price indexes, rate of price
inflation and so  on.   Non-economic models at the same
level  would introduce total  energy flow, biomass, etc.
Some of these aggregates, notably population and labor
force,  can  be estimated for a number of years ahead
because birth and death rates change relatively slowly,
and age distributions of the population can be pro-
jected with fair  accuracy.
     Beyond these pivotal aggregates, forecasts of
economic quantities become increasingly difficult and
have to be  based  on a priori assumptions, such as the
proportionality of inputs to outputs or the assumption
that relationships between the various quantities that
have held true during past periods of time will con-
tinue to hold true in the future.  Such relationships
often take  the form of equations, by which, if certain
macro-variables such as population, labor force and
productivity are  assumed, the other quantities can be
estimated.   The trends projected by such techniques
are more reliable for short-term forecasts than for
long-term ones; and therefore, as forecasts extend
further and further into the future, they are likely
to become increasingly unrealistic.
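
     A minimal sketch of such an equation system (all
figures below are hypothetical, chosen only to show the
mechanics) is given here in Python: assumed paths for
population, labor-force participation, and productivity
determine the remaining aggregates by identity.

# Hypothetical Level 1 projection: aggregates follow from assumed macro paths.
population = 210e6            # persons (assumed)
participation = 0.41          # labor force / population (assumed constant)
productivity = 12_000.0       # output per worker, $/yr (assumed)
prod_growth = 0.02            # assumed annual productivity growth
pop_growth = 0.01             # assumed annual population growth

for year in (0, 5, 10):
    pop = population * (1 + pop_growth) ** year
    labor_force = pop * participation
    gnp = labor_force * productivity * (1 + prod_growth) ** year
    print(f"year {year:2d}: labor force {labor_force / 1e6:6.1f} M, "
          f"GNP ${gnp / 1e9:7.1f} B")
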
     A special problem with long-term forecasts at
this level  of aggregation is that the key elements af-
fecting long-term changes in the economy are not
necessarily the same as those affecting the short term.
For the long-term, the key elements include technologi-
cal changes, possible raw material shortages, rising
energy costs, material substitutions, changes in social
customs, changes  in the educational level of the popu-
lation, environmental deterioration, and many other
factors not explicitly accounted for in the aggregates
generally used in Level 1.
     A second level of aggregation (call it Level 2)
involves dividing up the economy by industry sector
and/or region.  A familiar example of this is the
input-output model which takes the form of a matrix
recording the pattern of flow of materials and energy
(or the pattern of purchases and sales) between indus-
try  sectors, between each sector and the government,
and between each  sector and the final customer.  Such
a table does not  identify the particular commodities
or energy forms that flow into and out of the sectors,
nor the transformations that take place in the produc-
tion processes, but it accounts for all inflows and
outflows in total.  These inflows and outflows must
balance, both for the system as a whole and for each
individual  sector  after accounting for waste and for
materials drawn from the environment.
     The development of input-output models provided
for the first time a comprehensive view of the struc-
ture of the economy,  like a still photograph that
catches an  action in mid-motion.  These models
facilitated studies to be made that had formerly been
extremely tedious, if not impossible.  With them one
could determine the effects a change in one sector
might be expected to have on the other sectors.  For
example, if automobile size and weight are reduced,
the direct  effects on the iron and steel, coal mining,
petroleum refining, glass, nonferrous metals, synthe-
tic rubber, chemical, machine tools, and many other
sectors can be traced.  But beyond these direct effects
are secondary and tertiary effects: the reduced demand
for intermediate  products entering into the automobile
industry affects  communications, transportation, elec-
tric power, and so on.  The effects ripple through a
maze of inter-relationships.
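
     The accounting behind such ripple calculations is
compact.  The following sketch (in Python, with a purely
hypothetical three-sector coefficient matrix rather than
any published table) shows the mechanism: total outputs X
satisfy X = AX + Y, so X = (I - A)^-1 Y, and a change in
final demand for automobiles appears as changed output
requirements in every other sector.

import numpy as np

sectors = ["autos", "steel", "energy"]
# A[i, j] = input from sector i required per unit of output of sector j
# (hypothetical coefficients).
A = np.array([[0.05, 0.02, 0.01],
              [0.20, 0.10, 0.05],
              [0.10, 0.30, 0.05]])
Y = np.array([100.0, 20.0, 50.0])          # final demand (hypothetical units)

leontief_inverse = np.linalg.inv(np.eye(3) - A)
X_before = leontief_inverse @ Y            # total output required

Y_small_cars = Y - np.array([10.0, 0.0, 0.0])   # final demand for autos falls
X_after = leontief_inverse @ Y_small_cars

for name, before, after in zip(sectors, X_before, X_after):
    print(f"{name:6s}: {before:7.1f} -> {after:7.1f}")
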
     For some industries  the level  of aggregation used
in published input-output tables—even if regionalized--
is still  much too broad for accurate analysis.  For
example,  the sector "Industrial  Chemicals" includes a
wide variety of products  made from different raw
materials  by   different processes and used in differ-
ent sectors.  Thus benzene is derived  both from coal  and
from petroleum.  It is used in petroleum refining (to
make gasoline), in manufacturing synthetic rubber, in
making the plastic styrene, and in the manufacture of
many other chemicals.  A
shift in the proportions of  these products within the
sector can alter significantly the inputs and  outputs
for the sector.
     At the commodity level  of disaggregation, it is
apparent that the same commodity can, in many  cases, be
produced by several alternative processes.  An example
is the production of PVC bottles.  At least 20 differ-
ent processes may be involved in PVC manufacture and
these can be combined in more than 60 different ways-
each with different environmental impacts.  In other
words, there are over 60 possible chains of processes
leading from raw materials to finished PVC, all requir-
ing different inputs and yielding different wastes.  A
study of the environmental consequences of regulations
affecting process technology or energy would need to
take these alternative chains of processes into account.
To deal with problems where  this level of detail is un-
avoidable requires a still higher level of disaggrega-
tion (Level 3).
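
     The combinatorial point can be illustrated with a
short sketch; the stage and process names below are
hypothetical stand-ins, not the actual PVC process
inventory, but every route from raw material to finished
product is one choice of process at each stage, and the
number of distinct chains multiplies accordingly.

from itertools import product

# Hypothetical stages and process choices; each chain has its own inputs
# and wastes.
stages = {
    "feedstock":      ["coal-acetylene", "petroleum-ethylene"],
    "chlorination":   ["direct chlorination", "oxychlorination"],
    "polymerization": ["suspension", "emulsion", "bulk"],
    "fabrication":    ["blow molding", "injection molding"],
}

chains = list(product(*stages.values()))
print(f"{len(chains)} distinct process chains from these choices alone")
for chain in chains[:3]:
    print(" -> ".join(chain))
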
             Determinism vs. Uncertainty
     A final distinction of  utmost importance must be
made between models that are endogenously deterministic
with fixed inputs (in the clockwork sense) and models
that explicitly provide for  stochastic or irregularly
variable inputs.  Variable inputs may be distributed
according to different rules, ranging from normal
(Gaussian) or log-normal distributions to ad hoc
heuristic "scenarios" based on extra-model intelligence.
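
     The distinction can be sketched in a few lines of
Python; the toy residuals model, its parameters, and the
three "policy" cases below are assumptions made only for
illustration.  The same deterministic relationship is
evaluated once with fixed inputs, then repeatedly with
inputs drawn from normal and log-normal distributions,
and finally under discrete scenarios.

import numpy as np

def residuals(output, removal_efficiency):
    """Toy clockwork model: wastes = activity level * (1 - removal)."""
    return output * (1.0 - removal_efficiency)

rng = np.random.default_rng(1)

# Deterministic (clockwork) run with fixed inputs.
print("deterministic:", residuals(1000.0, 0.80))

# Stochastic runs: output log-normally distributed, removal efficiency normal.
output = rng.lognormal(mean=np.log(1000.0), sigma=0.10, size=5000)
removal = np.clip(rng.normal(0.80, 0.05, size=5000), 0.0, 1.0)
runs = residuals(output, removal)
print(f"stochastic mean {runs.mean():.0f}, "
      f"95th percentile {np.percentile(runs, 95):.0f}")

# "Scenario" inputs stand in for policy choices rather than fitted distributions.
for policy, eff in {"lenient": 0.70, "current": 0.80, "strict": 0.95}.items():
    print(policy, residuals(1000.0, eff))
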
     In the case of physical models stochastic or normal
distributions are common.  For instance, the "mix" of
local weather conditions, genetic variability, or other
factors is likely to be subject to a normal type of
distribution.   Apart from this, however, indeterminacy
is not a serious limitation for physical  models since
it comes into play only at the atomic or sub-atomic
level of disaggregation.  [The Indeterminacy Principle,
formulated by Heisenberg, states that the product of the
uncertainties of complementary variables such as momentum
and position will always exceed a specified minimum
value, of the order of Planck's constant, h.]
     In the case of economic, social  or political
models the situation is significantly less favorable
for the modeller, however.  Here the indeterminacy
principle becomes a factor if one attempts to predict
the behavior of individuals, committees or structured
organizations (including governments) having a small
number of effective decision-makers.   This is a logical
implication of the free-will of the individual.  But,
even if humans were actually instinctually pre-pro-
grammed, in the same sense as insects, an indeterminacy
principle would still be applicable because it is
clearly impossible to monitor an individual human's
behavior (including thoughts) closely enough to pre-
dict his or her actions without disturbing the object
of the surveillance to the point of affecting those
actions significantly.
     To deal with indeterminacy in the realm of human
behavior it is convenient to introduce "policy"
variables (or parameters) in the models.  Rather than
trying to predict what human behavior will be, which
is impossible where a small number of effective
decision-makers are involved, the model must be formu-
lated to explore the implications of alternative
decisions or "policy options".

            Mapping Model Types to Issues

     Space limitations do not permit a detailed ex-
ploration of this topic.  A few brief comments must
suffice to conclude this paper.  How can one decide
what type of model  is most appropriate to a given
policy issue?  The foregoing taxonomy constitutes a
                                                      291

-------
kind of hierarchical checklist for the classification
of models.   In effect it defines a large number of
possible "pigeon holes".  On reflection, it is clear
that the same screening process can also be applied to
problems by examining the relevant variables and rela-
tionships:
•  Are the pertinent data quantitative?
•  Is the question one of prediction?  Or is it a
     question of characterizing the best (optimal)
     solution?
•  Is time an explicit variable?
•  Is there an underlying phenomenological theory
     available?  Or must one rely on observed correla-
     tions between independent observations?
•  Is realism desired?  Or is it the object of the
     exercise to deepen one's understanding of the
     fundamentals by systematic simplification?
•  If time is a variable and realistic simulation is
     desired, is the time-scale short or long?
•  What is the level of aggregation at which the key
     phenomena are observable (viz. national? sectoral?
     regional? commodity? process?)
•  Are there stochastic or random elements in the
     problem?  Is the behavioral response by individual
     decision-makers or small groups of decision-
     makers a factor in the problem?
Disregarding the last three items, for quantitative
models there are 12 possible combinations of the
various characteristics noted above, of which 8 or 9
(at least) seem to be relevant (i.e. the boxes are
occupied).  For two of the categories a further short/
long subdivision seems called for.  Any of the cases
may be at any level of aggregation and may involve
stochastic or human choices.
                              REALISTIC                      NON-REALISTIC

                        Non-causal         Causal-           Causal-
                        (Statistical)      Empirical         Abstract

Static Simulation            ?                 X                  X

Dynamic Simulation      Short / Long       Short / Long           X

Static Optimization                            X                  X

Dynamic Optimization                                              X

As noted above, classifications are not always unam-
biguous.  For instance, an "equilibrium" air pollution
dispersion (e.g. "plume") model would certainly be
classed as a causal-empirical, but it is not quite
clear whether the term "static" or "dynamic" is appli-
cable.  A pollution forecasting model utilizing
empirically determined pollution coefficients for
highly aggregated sectors would certainly be classed
as static-simulation, but there might be a question as
to whether truly causal relationships are used.  If
the pollution output for a sector were developed based
on more detailed process-level analysis, explicitly
incorporating materials and energy balances, there
would be no ambiguity, of course.
     As a matter of possible interest, SEAS belongs in
the causal-empirical-dynamic simulation (long-term)
category.3  The Materials-Process-Product Model (IR&T)4
and the Russell-Spofford Model (RFF)5 probably belong
in the causal-empirical static optimization category.  Examples
of abstract models in the field of biology, ecology,
and environmental economics are quite plentiful.  How-
ever, realistic dynamic optimization models have not
yet been developed to my knowledge.
1.  A team of investigators composed of Wassily Leon-
    tief, Anne Carter, Peter Petri, and Joseph Stern,
    "Technical Report on the Study on the Impact of
    Prospective Environmental Issues and Policies on the
    International Development Strategy", prepared for
    the UN Center for Development Planning Projections
    and Policies Office, forthcoming.

2.  Lawrence R. Klein, "What is a Model" (unpublished).

3.  U.S. Environmental Protection Agency, Strategic
    Environmental Assessment System, Prototype Documen-
    tation, Phase III, 1975.

4.  Robert U. Ayres, James Saxton and Martin Stern,
    "Materials Process-Product Model", International
    Research and Technology Corporation, July 1974.

5.  C.S. Russell and W.O. Spofford, "A Quantitative
    Framework for Residuals Management Decisions", in
    A.V. Kneese and B. Bower, Environmental Quality
    Analysis: Theory and Method in the Social Sciences,
    Resources For the Future, Inc., Johns Hopkins
    University Press, 1972.
                                                      292

-------
                           A STOCHASTIC MODEL FOR SUBREGIONAL POPULATION PROJECTION
                                                  Peter M.  Meier
                                         Energy  Policy  Analysis Division
                                             Dept.  of Applied  Science
                                         Brookhaven National Laboratory
                                             Upton, New  York   11973
ABSTRACT

A stochastic method for the projection of subregional
population is developed, to be used in conjunction
with regional OBERS projections.  The model is based
on a Monte Carlo simulation and numerical integration
technique to generate probability distributions of
future population, given some analytical formulation
of population density in urban areas, and is addres-
sed particularly to consulting engineers who must
make decisions on design capacity for waste treatment
facilities.

INTRODUCTION

Projections of population,  and the associated speci-
fication of service area, waste flows and pollutant
loadings, lie at the very heart of the current nat-
ional effort to reduce water pollution.  Yet, as the
recent controversy over excess capacity in intercep-
tors has amply demonstrated,  the level of competency
current in the environmental engineering profession
in matters of socioeconomic planning is far from
adequate.  This deficiency is all the more serious
in light of some of the very sophisticated optimi-
zation and design techniques employed in the engineer-
ing of the pollution abatement facilities themselves;
it clearly makes little sense to devote substantial
resources to detailed design and engineering optimi-
zation if the initial premise of design capacity and
expected loadings is in substantial error.  In view
of the crude methods used to project population,2 and
the rigid use of design standards derived decades
ago,  it should come as no surprise that a number of
recent investigations have found fault with current
facility planning practices.  Indeed, it is a sorry
reflection on the profession that the current EPA
guidelines for planning treatment facilities find it
necessary to provide sample calculations of a present
worth analysis.

In this paper, we focus on one particular area of
socioeconomic planning, population projection.  The
intent here is to illustrate some lines of analysis
that might profitably be applied to any large public
sector capital investment,  but with particular em-
phasis on the planning of waste treatment facilities.
Experts in mathematical modelling will find little in
the way of sophisticated analytical formulation, but
then its content is addressed to the practicing pro-
fessional.  Indeed, the most important criterion for
the successful development of a modelling technique
is the degree to which it is consonant with the
ability of the consulting profession to apply it, and
the ability of the public and the political process
to understand its assumptions.  Highly complex models
will not aid the planning process if only their
originators can fully use them or fully understand
them.

THE OBERS PROJECTIONS

The OBERS projections of population and economic ac-
tivity, prepared by the U.S. Department of Commerce
and the U.S. Department of Agriculture for the Water
Resources Council,4 are now in fairly broad use as a
basis for planning activities by Federal Agencies.
And the U.S. Environmental Protection Agency (EPA)
now requires that plans prepared under Section 201 of
PL 92-500 relate the proposed facility capacity, and
the forecast population to be served, to the applica-
ble OBERS projection.  Unfortunately, the smallest
spatial unit for which an independent OBERS projection
is available is the Standard Metropolitan Statistical
Area (SMSA), yet the service areas of even large regional
wastewater treatment facilities often cover far less
than an entire SMSA.  There thus seems some need to
develop a rational method of sub-regional population
projection within the framework of the regional OBERS
forecast.

The classical solution to this problem is the so-
called step-down projection technique, in which the
total regional growth is assigned to sub-regional
areas on the basis of some deterministic allocation
scheme.   Unfortunately, such simple allocation for-
mulae  tend to be quite inadequate for the level of
disaggregation necessary, say, for interceptor sizing,
which requires minor civil divisions or census tracts
as the basic spatial unit.  Moreover, the most obvious
deficiency in current methods of population projection
is their deterministic nature, despite the widely
known and accepted circumstance that most population
projections prove to be in error.6  The urgent prior-
ity, then, is to formalize the notion of the probabil-
istic population projection in terms suitable for use
in design algorithms.7  That, of course, demands ex-
pression of a population projection as a probability
distribution, rather than as the currently adopted
expedient of specifying a "high" and a "low" projec-
tion, with a "most likely" case somewhere in between.
Indeed, given a probabilistic expression of population
and a corresponding probability distribution of ex-
pected flows, a number of algorithms developed in the
Operations Research field can quickly identify the
correct investment strategy, taking into account scale
economies, interest rates, and planning horizons.
This approach, however, can only be used if
probabilistic projections are available;  and
until that occurs, one can hardly fault those who
adopt the traditional hedge against uncertainty that
rests on overdesign.
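
For reference, the classical step-down allocation criticized
above amounts to no more than the following sketch (the
regional projection, base populations, and tract names are
hypothetical):

# Classical deterministic step-down: regional growth allocated by fixed shares.
region_projection = 350_000                  # assumed OBERS-type regional total
region_base = 300_000                        # assumed current regional population
subarea_base = {"tract A": 18_000, "tract B": 9_000, "tract C": 5_000}  # assumed

for tract, pop in subarea_base.items():
    share = pop / region_base                # fixed allocation ratio
    print(f"{tract}: {share * region_projection:,.0f}")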

The two major objectives of this paper, then, are to
develop a probabilistic approach to step-down pro-
jections, and to suggest a step-down procedure that
rests not on simple arithmetic allocation ratios but
on the overall patterns of regional population dis-
tribution, obtaining the subregional population of
interest by integration of the chosen functional re-
presentation over the appropriate spatial limits.

In the interest of clarity, we shall use a simple
exponential model of urban population density as the
basis for exposition; more complex analytical repres-
entations would pose only additional computational
effort, without change to the approach itself.
                                                      293

-------
THE DETERMINISTIC CASE

In order to develop the sub-regional integration pro-
cedure, consider first the deterministic case of the
classic exponential model of urban population
density,9 given by

     d(r,t) = d(o,t) exp[-a(t) r]                       (1)

where d(r,t) is the population density at time t at
distance r from the city center, d(o,t) is the pop-
ulation density at the city center, and a(t)  is the
density gradient at time t.  The total number of in-
habitants within some radius R is then given by the
integral

               θ   R
     P(R,t) =  ∫   ∫  d(o,t) exp[-a(t)r] r dr dθ        (2)
               0   0

               θ d(o,t)
            =  --------- [1 - (1 + a(t)R) exp(-a(t)R)]  (3)
                a(t)²

where θ is the sectoral angle of the city, in radians.
Given this model, we note that changes in population
can only be accommodated by changes in the density
gradient 
-------
cannot solve Eq. (3) explicitly for a(t), one can
again turn to a numerical solution procedure, and
a(t) is in fact given by the root of

                     θ d(o,t)
 y(a(t)) = P(R,t) -  --------- [1 - (1 + a(t)R) exp(-a(t)R)] = 0       (9)
                      a(t)²

In general, for a model of spatial population distri-
bution of g parameters, g-1 parameters must be pro-
jected into the future, the g-th being determined by
the specified parameters and the OBERS projection
P(R,t).
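
As a sketch of this computation (with hypothetical
Trenton-like values for the sector angle, radius, central
density and OBERS total; since Eq. (5) is not reproduced
above, an annular sector between two radii stands in for
the subarea of interest), the density gradient can be
obtained with a standard root-finder and the subarea
population from the corresponding closed-form integral:

import numpy as np
from scipy.optimize import brentq

theta, R = np.pi, 15.0        # sector angle (rad) and SMSA radius (miles), assumed
d0 = 30_000.0                 # central density d(o,t), persons/sq. mile, assumed
P_target = 340_000.0          # OBERS projection for the whole sector, assumed

def cumulative_population(a, r):
    """Population within radius r for the exponential model (Eq. 3 form)."""
    return theta * d0 / a**2 * (1.0 - (1.0 + a * r) * np.exp(-a * r))

# Eq. (9): find the gradient a(t) whose implied total matches the OBERS figure.
a_t = brentq(lambda a: P_target - cumulative_population(a, R), 1e-3, 5.0)
print(f"density gradient a(t) = {a_t:.3f} per mile")

# Subarea population: annular sector between r1 and r2 with angle theta_j
# (an assumed stand-in for the integration limits of Eq. 5).
theta_j, r1, r2 = np.pi / 6, 3.0, 6.0
P_j = theta_j * d0 / a_t**2 * ((1 + a_t * r1) * np.exp(-a_t * r1)
                               - (1 + a_t * r2) * np.exp(-a_t * r2))
print(f"projected subarea population: {P_j:,.0f}")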

At this point, then, having a projection for a(t) and
d(o,t), t = 1,...,n, we can apply Eq. (5) to project the
population of any subarea j.  The beauty of the
approach, of course, lies in the fact that one can
generate projections for an entire set of subregions
(by merely changing the limits of integration in Eq.
(5)) , projections which are not only all consistent
with the overall SMSA projection, but also with each
other.  This is in contrast to the procedure followed
in most regional plans for wastewater or water supply,
where the population of each community tends to be
projected separately, with total regional population
given as the total (which may or may not agree with
an independent projection for the region as a whole).

THE STOCHASTIC CASE

In addition to the measurement and model specification
error discussed above, the most obvious source of
uncertainty lies in the OBERS projection itself.  These
projections are based on certain expectations regard-
ing national birth and death rates, inter-regional
migration patterns, and shift-share analysis of region-
al economic activity; and, although the OBERS project-
ions are generally regarded as methodologically sound,
even their most avid proponents do not claim perfect
foresight.  To be realistic, then, even the OBERS
projections must be expected to show error, once we
have the benefit of hindsight.  However, for purposes
of  this paper  (and indeed for facilities planning
under  current EPA Guidelines) , we can assume that the
OBERS projections are the best in existence, and we
shall  simply assume that any uncertainty is sufficient-
ly small in comparison to other sources that it may
be  ignored.  Indeed, it should be intuitively obvious
that the uncertainty associated with the projection of
large  spatial units is much less than projections of
small  regions, a notion fully supported by statistical
evaluations of projection accuracy.

Given  the premise, then, that the major source of un-
certainty lies in our ability to accurately predict  the
distribution of population within the SMSA, what does
this imply for our model?  In terms of the exponential
representation used in this paper, it simply means that
the projection of d(o,t) into the future (or, alter-
natively, of a(t)), is a stochastic rather than a
deterministic procedure.  Thus, rather than specifying
a single set of values d(o,t), t = 1,...,n (which in turn
specifies a single series of a(t), and subsequently
a single deterministic projection for Pj(t)), we admit
that for each  time point, the value of d(o,t) is in
fact given by some probability distribution.  This,  in
turn, implies that a(t) is also a random variable, as
is Pj(t).  Thus, by generating a probabilistic state-
ment for d(o,t), one can also generate the desired
probabilistic projection for Pj(t).

Unfortunately, even using a model as simple as the one
used here, analytical solution is quite intractable.
That is, given a known probability distribution for
d(o,t), it is not possible to easily derive an ex-
pression for the probability distribution of Pj(t).
    Indeed, even the relationship between the mean and
    variance of d(o,t) and Pj is not easy to determine
    analytically, in view of the difficulty of explicit
    solution of Eq. (3).  But even in the event that a
    model were chosen that allowed explicit solution of
    parameters, the effort of analytical solution would
    be beyond the capabilities of most non-mathematicians.

    This type of situation, however, is almost ideal for
    applying the so-called Monte Carlo, or stochastic
    simulation technique.  This technique simply repeats
    the projection procedure defined for the deterministic
    case some number of times, say q times; but each time
    the projection is repeated, a value for d(o,t) is
    drawn from an urn which contains a set of d(o,t)
    values that corresponds to the probability distribu-
    tion chosen.  The result is that the procedure genera-
    tes q projections for Pj(t); but since each projection
depends on a different value of d(o,t), Pj(t) will
    also differ from projection to projection.  In turn,
    the q projections for Pj(t) define a probability
    distribution, whose moments can readily be calculated.
    Of course, the urn in a computer program is simply a
    subroutine that generates successive random values in
    accordance with the desired probability distribution.
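
A sketch of the procedure, using the same exponential
model and hypothetical parameters as in the deterministic
sketch above, with the "urn" implemented as a normal
random-number generator:

import numpy as np
from scipy.optimize import brentq

theta, R, P_target = np.pi, 15.0, 340_000.0        # assumed, as before
theta_j, r1, r2 = np.pi / 6, 3.0, 6.0              # assumed subarea

def subarea_population(d0):
    """One deterministic projection for a given draw of d(o,t)."""
    total = lambda a: theta * d0 / a**2 * (1 - (1 + a * R) * np.exp(-a * R))
    a_t = brentq(lambda a: P_target - total(a), 1e-3, 5.0)   # Eq. (9)
    return theta_j * d0 / a_t**2 * ((1 + a_t * r1) * np.exp(-a_t * r1)
                                    - (1 + a_t * r2) * np.exp(-a_t * r2))

rng = np.random.default_rng(0)
q = 2000
draws = rng.normal(30_000.0, 2_500.0, size=q)      # the "urn" for d(o,t), assumed
projections = np.array([subarea_population(d) for d in draws])

print(f"E{{Pj}} = {projections.mean():,.0f}, "
      f"std = {projections.std():,.0f}, "
      f"90th percentile = {np.percentile(projections, 90):,.0f}")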

    A sample calculation will illustrate the procedure.
    Figures 2, 3, and 4 show the result of such a Monte
    Carlo simulation projection for part of the Trenton,
    N.J. SMSA.  Figure 2 shows the distribution of
    d(o,1985) and d(o,2000), as generated by the random
    value subroutine; Figure 3 shows the distribution of
    a(1985), a(2000), and finally Figure 4 shows the dis-
    tribution of sample projection values for Pj(1985)
    and Pj(2000).  From this distribution of sample
    values, we compute the desired lower moments of the
    projection; the expected values in this case compute
    to
          E{Pj(1985)} = 19600

          E{Pj(2000)} = 21200
      Figure 2:  Distribution of d(o,1985) = d(o,2000)
295

-------
        Figure 3:  Distribution of a(t) for 1985 and 2000

        Figure 4:  Distribution of Pj(t) for 1985 and 2000
This particular series was generated by assuming a
normal distribution for d(o,t);  we note that the dis-
tribution of a(t), however, is skewed,  as  is the
resulting distribution of Pj(t).
Depending on the assumptions made for the  distribution
of d(o,t), different types of probability  distribution
would be generated for Pj(t).
It  could be argued  that  the  procedure  here is some-
what arbitrary,  given  the  infinite  number of possible
probability specifications for  d(o,t).   However,  a
closer examination  indicates that  the  definition  of
d(o,t) is  little different from the definition of most
of  the parameters selected,  say, in designing a treat-
ment facility; the engineer applies his judgement and
experience in selecting  the  major  design variables,
and, as he runs  through  the  entire  process chain,
returns to make  adjustments  to  previously selected
design parameters so that  the final design product
has overall consistency.  In a  similar manner,  then,
an  experienced planner makes judgements  on the  ex-
pected development  of  central population density,  a
judgement  that would be  based on analysis of land  use
developments in  the region,  likely  trends in zoning,
urban renewal, and  central city revitalization  efforts
and so on.  Different developments  can readily  be
associated with  different assessments of likelihood,
from which a probability distribution of outcomes  can
readily be generated.  The use  of the model,  then,
forces the decision-maker to  think  about the  factors
that determine the  distribution of  population,  a pro-
cess that  is not engendered  by  the  more  crude methods
of  fitting regression lines  to  historical data, or
simply drawing arbitrary extrapolations.

CONCLUSIONS

In  this brief exposition  we have suggested  some
approaches to sub-regional population projections
that should lie within the bounds of comprehension of
the average consulting firm.  The emphasis has been
on developing the theme of the  stochastic  population
projection, and  the imposition  of a planning frame-
work that forces the individual who makes  decisions
on design capacity of treatment facilities to give
more detailed thought to the underlying  forces of
urban and  regional  population growth patterns than is
now customary.   More complex representations of urban
spatial growth than the  one  employed here could
readily be incorporated, and indeed the  computer pack-
age is designed  in  such  a way as to encourage the
planner to experiment with alternative formulations.
Hopefully, through such experimentation and simulation,
the decision-maker will gain a better understanding of
the relationships between planning assumptions and the
sensitivity of population projections used as a basis
for large capital investments, and the design engineer
will be provided with a rigorous definition of
uncertainty in terms of well-defined probability
distributions.

ACKNOWLEDGEMENT

This paper is based on work  supported by the Division
of  Biomedical and Environmental Research,  U.S.  Energy
Research and Development Administration,  as  part of
the Brookhaven Regional  Energy  Studies Program.
The content of the  paper, however,  reflects  the per-
sonal judgements  of the  writer, and should not  neces-
sarily be  viewed as the  position of either the
Brookhaven National Laboratory or the U.S. Energy
Research and Development Administration.

NOTES

1.  See e.g., Urban Systems  Research and Engineering,
"Interceptor Sewers and  Suburgan Sprawl:  the Impact
of  Construction  Grants on Residential Land Use",
Report to  CEQ, Sept.,  1974;  and e.g. editorial  comment
in  recent  issues of the  WPCF Journal, especially Vol.
47, No. 7, p. 1823  (July 1975).

2.  One need only scan the most recent texts in the
field, e.g. Metcalf and  Eddy's  recent book,  or  recent
                                                     296

-------
editions of Fair, Geyer, and Okun, to ascertain the
primitive nature of current techniques of population
projection in comparison to the remaining facets of
engineering design.

3.  For example, R. Zanoni and R. Rutkowski, in "Per
Capita Loadings of Domestic Wastewater", J. WPCF.
Vol. 44, No. 9, p.1757 (Sept.1972), point out that
the per capita loading factors of BOD and Suspended
Solids, still prescribed by many state standards for
facility design, are based on wastewater characteris-
tics of two and three decades ago, and often quite
different from currently encountered values.

4.  U.S. Water Resources Council, Washington, D.C.,
"OBERS Projections of Regional Economic Activity in
the U.S.,  Series E Population, Vol.5, Standard
Metropolitan Statistic Areas", April, 1974.

5.  For an overview of these methods, see e.g. W.
Isard et al., "Methods of Regional Analysis:  An
Introduction to Regional Science", MIT Press, Cam-
bridge, Mass., 1960, Chapter 2.

6.  P. M. Berthouex, in "Some Historical Statistics
Related to Future Standards", Journal Env. Engr. Div.,
ASCE, Vol. 100, No. EE2, p.423, April, 1974, includes
a very  interesting analysis of the error distribution
of  population projections made by consulting engineers
over  the past few decades.

7.  See e.g. Berthouex and L. B. Polkowski, "Optimum
Waste Treatment Plant Design under Uncertainty", J.
WPCF, Vol. 43, No. 9, p.1589 (Sept., 1970) for an illus-
tration of engineering optimization given an input of
specified probabilistic characteristics.
 8. For further details on probabilistic population
projections and their use in Environmental Engineering
Design, see P. M. Meier, "Population Projection at
Design Level", Journal San. Engr. Div., ASCE, Vol. 98,
No. SA6, p.883 (Dec., 1972).  The relationship between
interest rate, scale economies, and projection uncer-
tainty is given in P. Berthouex and L. Polkowski,
"Design Capacities to Accommodate Forecast Uncertain-
ties", Journal Sanitary Engr. Div., ASCE, Vol. 96,
No. SA5, p.1183-1210.

 9. This, of course, is the classic model of R. Clark,
"Urban Population Densities", Journal of the Royal
Statistical Society, Series A, Vol. 114, No. 4, p.490,
(1951).  Countless further empirical studies have
confirmed the general validity of this model for all
but oriental cities.

10. For definition and discussion of the spatial geo-
metry of SMSA's, see P. M. Meier and M. McCoy, "An
Analytical Approach to the Determination of Urban
Population Density Gradients and its Application to
Energy Planning Problems", Energy Policy Analysis
Group, Brookhaven National Lab., Report BNL 20916,
Jan. 1976.

11. H. Winsborough, "City Growth and Urban Structure",
Journal of Regional Science, Vol. 4, No. 2 (1962).

12. A. Guest, "Urban Growth and Population Densities",
Demography, Vol. 10, p. 53 (1973).

13. See Meier and McCoy, Note 10, supra, Chapter VIII.

14. The National Planning Data Corporation of
Rochester, N.Y. has made census tract area determin-
ations for all SMSA's using advanced electronic
planimetry; however, use of this proprietary computer
data base runs to several hundred dollars for an
average SMSA.

15. Meier and McCoy, Note 10, supra, at p. 14.

16. Many firms, for example, have access to the
national computing networks of the Boeing and McDonnell
Douglas Companies via remote batch station, especially
firms active in structural engineering.

17. The use of Monte Carlo Simulation in population
projection is developed in Meier, Note 8, supra.
                                                       297

-------
                                  USE OF THE CLIMATOLOGICAL DISPERSION MODEL

                                    FOR AIR QUALITY MAINTENANCE PLANNING

                                       IN THE STATE OF RHODE ISLAND


                                               Peter H. Guldberg

                                    Walden Research Division of Abcor, Inc.

                                               Cambridge, Mass.

                                               Thomas E. Wright

                                Rhode Island Division of Air Pollution Control

                                               Providence, R.I.

                                             Audrey R. McAllister

                                         Environmental Protection Agency

                                                  Boston, Mass.
      An  air  quality modeling analysis was performed
 in  preparation  of an Air Quality Maintenance Plan
 (AQMP) for the  State of Rhode Island.1  The Climato-
 logical Dispersion Model (CDM), developed by EPA,2 was
 used to project future air quality levels and to
 test maintenance strategies for the years 1978, 1980,
 and  1985.

      The choice of the CDM for maintenance analysis
 over the Air Quality Display Model3 (AQDM) is dis-
 cussed.   The accuracy of the CDM is demonstrated,
 and  suggestions for improvement of the model  are made.

                     Introduction
      In June, 1973, EPA published regulations4 re-
 quiring all states to identify areas that might, as
 a  consequence of current air quality and/or of the
 projected growth rate of the area for the next 10
 years, have the potential for exceeding any National
 Ambient Air Quality Standard (NAAQS).  States were
 also  required to submit a detailed analysis of the
 impact on air quality of projected growth in each
 such  designated Air Quality Maintenance Area (AQMA).
 Where NAAQS maintenance problems are identified by
 analysis, the states must submit a long-term Air
 Quality Maintenance Plan (AQMP) containing measures
 to ensure maintenance of NAAQS for a 10-year period
 from the date of submission of the plan.   The sub-
 mittal of long-term plans will  be made  according
 to time schedules  to be published by the  Administra-
 tor no later than  July  1976.   In the interim,
 EPA Region I is  requiring states in  their jurisdiction
 to submit attainment and short-term  maintenance (i.e.,
 1975 to 1978)  plans for Set I  pollutants  (S02 and TSP)
 only.
     Based upon  information supplied to EPA in 1974
 by the Rhode Island Department of Health, Division
 of Air Pollution Control (DAPC), one AQMA in Rhode
 Island was identified  by the Administrator as having
 the potential  for  violating NAAQS in the  10-year
 period between 1975 and 1985$.   The  boundaries of
 this AQMA are shown by the shaded area  in Figure 1;
 they include 21  municipalities  centered around
Metropolitan Providence.  The pollutants  for which
 the Metropolitan Providence AQMA has been identified
are Sulfur Dioxide  (S02),  Total  Suspended Parti-
culates (TSP), and  Photochemical  Oxidants.
Figure 1.  The Metropolitan Providence Air Quality
Maintenance Area:  AQMA with Potential for Violating
NAAQS Between 1975 and 1985

                   Objectives of Study

     Technical assistance was provided to the State
of Rhode Island DAPC to develop an AQMP for the
Metropolitan Providence AQMA.  As a result of this
work, the state submitted a draft summary of their
short-term plan to EPA in January 1976.  Air quality
projections and control strategy analyses were per-
formed for the years of 1978, 1980, and 1985 to:

     •  Determine areas in the Metropolitan Provi-
       dence AQMA where annual NAAQS for S02 and
       TSP will be exceeded
                                                     298

-------
     •  Evaluate control  strategies which will  ensure
       maintenance of standards in these areas
       through 1985.

     An investigation of photochemical  oxidants  was
not undertaken as  part of this  study.


                  Technical  Approach

     Air quality modeling analyses were performed
to project future air quality levels and to test
maintenance strategies.   The work effort entailed
the execution of the following  tasks:

     • The Rhode Island point source emissions
       inventory was updated to the base year of
       1974, and the information was submitted to
       the NEDS data bank at EPA.

     • Modifications to the CDM were made to allow
       for representation of stable atmospheric
       dispersion conditions.

     • Utilizing the base-year point and area source
        emission inventories as inputs to the CDM,
       a validation of the model was performed by
       comparing predicted annual S02  and TSP
       levels for 1974 with measured air quality
       data in the AQMA from EPA's Storage and
        Retrieval of Aerometric Data (SAROAD) system.

     • The CDM was calibrated using the statistical
       relationship between measured and predicted
       pollutant concentrations derived in the
       model validation.

     • Using data from the State of Rhode Island
       Land Use Plan6 and methodologies outlined
       in EPA's AQMP Guidelines, growth factors
       were developed for projecting point and
       area source S02 and TSP emissions for 1978,
       1980, and 1985.

      • The above growth factors were applied to the
       base-year  (1974) emissions inventories for
       1978, 1980, and 1985.  These projected base
       case emissions incorporate the effects of
       Federal New Source Performance Standards (NSPS)
       and the Providence Transportation Control Plan
       and assume full compliance with current state
       air pollution control regulations.

      • Utilizing the projected  inventories as inputs
        to the CDM, future annual S02 and TSP levels
       in the Metropolitan Providence AQMA were pro-
       jected for the years 1978, 1980, and 1985.
       Based on these modeling  projections, areas in
       the AQMA where annual NAAQS for S02 and TSP
       will be exceeded were identified.

      • Control strategies were  entered into the
       future air quality predictions through ad-
       justments to the projected point and area
        source emission inventories.  The CDM was
        used to evaluate the effectiveness of the
        various control strategies in maintaining
        annual NAAQS for S02 and TSP through 1985.
      Description of Atmospheric Diffusion Model

Model Choice

     Successful application of generalized models to
specific emission sources requires definition of the
source characteristics.  The air quality maintenance
analysis undertaken in the current study required an
atmospheric transport and diffusion model capable
of predicting annual average S02 and TSP concen-
trations at specified receptor points due to an
array of both point and area emissions sources.  In
addition, the reactive nature of S02 necessitated
the use of a model which could simulate pollutant
decay processes as a function of atmospheric resi-
dence time.  The two most widely used and accepted
atmospheric diffusion models which met the above
criteria are the Air Quality Display Model3 (AQDM)
and the CDM2.  Both models are based on the Gaussian
plume configuration, i.e., they simulate atmospheric
transport and diffusion processes by assuming the
concentrations of pollutants downwind within a plume
generated by point and area source emissions can be
represented by a Gaussian distribution in both the
crosswind and vertical directions.  Emissions sources
are assumed to be continuous for the time analyzed.
As the plume expands due to diffusion and turbulence,
it is diluted and transported downwind, principally
by the mean wind.  The rate of expansion is charac-
terized by a series of empirical dispersion curves
which are dependent on the stability of the atmos-
phere, as determined in studies made by Pasquill8
and reported by Turner9.  A stability-dependent mix-
ing height is also used to simulate diffusion pro-
cesses in the atmospheric mixing layer.
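
     The single-plume expression underlying this
description is sketched below; the dispersion-curve
coefficients and source parameters are illustrative
assumptions, and the CDM itself applies a sector-averaged,
climatological form of the calculation rather than this
single point estimate.

import numpy as np

def plume_concentration(Q, u, x, y, z, H, a_y, b_y, a_z, b_z):
    """Continuous point-source Gaussian plume with ground reflection.

    Q in ug/s and distances in meters give concentration in ug/m3.
    sigma_y = a_y * x**b_y and sigma_z = a_z * x**b_z are assumed power-law
    fits standing in for stability-dependent dispersion curves.
    """
    sigma_y, sigma_z = a_y * x**b_y, a_z * x**b_z
    crosswind = np.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (np.exp(-0.5 * ((z - H) / sigma_z) ** 2)
                + np.exp(-0.5 * ((z + H) / sigma_z) ** 2))   # image source
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * crosswind * vertical

# 100 g/s source, 50 m effective emission height, assumed coefficients.
c = plume_concentration(Q=100e6, u=5.0, x=2000.0, y=0.0, z=0.0, H=50.0,
                        a_y=0.08, b_y=0.9, a_z=0.06, b_z=0.85)
print(f"centerline ground-level concentration: {c:.0f} ug/m3")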

     The CDM differs in some respects from AQDM,
which has been used extensively for simulation
purposes.  Although both predict long-term pollu-
tant concentrations, the CDM determines emission
contributions from area sources more accurately than
AQDM, using numerical integration techniques.  Ef-
fective emission heights for point sources are cal-
culated in both models using the well accepted
Briggs* plume rise formulae10,11, and both models
make use of an exponential decay term to simulate
the reactivity of S02 with other atmospheric con-
stituents.  However, the CDM allows a realistic
atmospheric stability-dependent power law increase
in wind speed with height that is lacking in EPA's
AQDM.
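
     The two features mentioned can be sketched directly;
the profile exponent and the SO2 half-life below are
assumed values, not figures taken from the CDM
documentation.

import numpy as np

def wind_at_height(u_ref, z, z_ref=10.0, p=0.25):
    """Power-law profile u(z) = u_ref * (z / z_ref)**p; p varies with stability."""
    return u_ref * (z / z_ref) ** p

def decayed_emission(Q, travel_time_s, half_life_s=3.0 * 3600.0):
    """First-order (exponential) decay of a reactive pollutant such as SO2."""
    return Q * np.exp(-np.log(2.0) * travel_time_s / half_life_s)

print(f"wind at 75 m: {wind_at_height(5.0, 75.0):.1f} m/s")
print(f"SO2 remaining after 1 h of travel: {decayed_emission(100.0, 3600.0):.1f} g/s")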

     Finally, a validation study conducted at the
National Environmental Research Center12 has shown
that the CDM yielded smaller errors than the AQDM,
with concentration maxima and means nearer those
of the measured data.  For these reasons, the CDM
was judged to be the model best suited to air
quality maintenance analysis and therefore, was
chosen for use in this study.
 *The original version of EPA's AQDM described in
  the user's manual? has been subsequently modified
  to replace the Holland plume rise equations with
  those developed by Briggs.
                                                       299

-------
                 Model Modifications

      The CDM model computes concentrations at re-
 ceptor points which are  assumed  to  be  located  in
 urban areas.   The  lower  layer  of the urban  atmosphere
 is  generally  more  unstable than  the corresponding  ad-
 jacent rural  atmosphere.   In fact,  the turbulence
 and heating present in the lower urban atmosphere
 precludes the occurrence of stable  atmospheric con-
 ditions associated  with  nighttime radiational  cool-
 ing in the rural environment.  To account for  this
 effect, the CDM uses empirical dispersion coeffici-
 ents associated  with less stable atmospheric con-
 ditions in computing pollutant concentrations.  While
 this procedure is  correct when both receptor and
 source are located  within the  urban environment, it
 was felt  that some  adjustment  was necessary in order
 to  model  the  rural  portion of  the Metropolitan
 Providence AQMA.   Rural  receptors are  by definition
 located far from the urban core, and the travel time
 from the  source  to  the receptor  is  dominated by the
 rural component.   Thus,  in calculating pollutant
 concentrations at  rural  receptor sites, the dis-
 persion of emissions was determined using the  em-
 pirical dispersion  coefficients  of  Pasquill8 directly
 without adjustments for  the urban environment.  This
 allowed modeling of stable atmospheric conditions
 when applicable  in  the rural environment.   In
 addition, the Briggs plume rise  formulae used  in the
 CDM were updated to include equations for plume rise
 under stable  atmospheric conditions.

                      Model Input

      The CDM accepts as input the joint frequency
 distribution  of  meteorological conditions,  the
 average afternoon  and nocturnal  mixing heights,
 the locations and  emission rates of both area  and
 point sources, and  the locations of the desired re-
 ceptor points.  At  each  receptor, the  concentration
 due to each point  and area source is calculated.
 These calculations  assume transport and diffusion
 processes which  represent the  frequency distribution
 of  meteorological  conditions input  to  the model.

      The  basic meteorological  input  to the model con-
sisted  of standard   "Day/Night STAR"  data from the
National  Climatic Center in Asheville,  North Carolina.
Weather observations for Providence, Rhode Island
(Station #14765) are taken hourly by the National
Weather Service at  Green  Airport.  These data,  aggreg-
ated  to eight observations per  day and distributed  as
STAR  data, are representative of the meteorological
conditions for the  State  of Rhode Island.   The  result
is a  joint frequency distribution which gives  the  joint
frequency  of occurrence  of a  wind direction  sector,
wind  speed class, and stability class.   There  are  16
different  wind direction  sectors, 6  wind speed  classes,
and 6 atmospheric stability classes.  The CDM is well
suited  for AQMP evaluation since  the NCC can provide
"Day/Night STAR"  data for any year at  any  station  in
the U.S. where daily weather  observations  are  taken.
Measured data  in  the joint frequency distribution  are
divided into two  classes  indicating  their  occurrence
during either  the day or  night.  This  information  is
used along with factors input to the CDM, which esti-
mate the diurnal  variation of emissions, to  more
accurately predict  ambient pollutant concentrations.
The  effects of seasonal  variations in  emissions on
ambient concentrations can also be accounted for through
the use of quarterly emissions data in the CDM with
quarterly  "Day/Night STAR" data available from  NCC.
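
     In essence, the STAR input is a joint relative-
frequency array.  The sketch below builds one from
synthetic observations (random values, not Providence
data) to show the structure: 16 direction sectors, 6
speed classes, and 6 stability classes, split into day
and night occurrences.

import numpy as np

rng = np.random.default_rng(2)
n_obs = 8 * 365                                  # one year, eight observations/day
direction = rng.integers(0, 16, n_obs)           # wind direction sector 0..15
speed = rng.integers(0, 6, n_obs)                # wind speed class 0..5
stability = rng.integers(0, 6, n_obs)            # stability class 0..5
is_day = rng.random(n_obs) < 0.5                 # day/night flag

star = np.zeros((2, 16, 6, 6))
for d, s, st, day in zip(direction, speed, stability, is_day):
    star[int(day), d, s, st] += 1
star /= n_obs                                    # joint relative frequencies

print("frequencies sum to", round(float(star.sum()), 6))
print("most frequent nighttime (sector, speed, stability):",
      np.unravel_index(star[0].argmax(), star[0].shape))
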
      For  the model  validation,  in which predictions
 of  annual  concentrations  of S02 and TSP in 1974 were
 made,  the 1974  annual  STAR data were used as input
 to the CDM model.  To model future years, a 10-year
 climatological  average STAR data set for 1964 to 1973
 was used.   Although use of STAR data from a parti-
 cular  year may  have yielded higher predicted con-
 centrations at  some points in the AQMA, it was not
 possible  to objectively determine the annual worst
 case meteorological  conditions  for the entire AQMA
 due to complex  source-receptor  relationships in the
 urban  area studied.  Therefore, climatological  aver-
 age meteorological  data were judged to be the best
 representation  of  possible future conditions.

           Model  Validation and  Calibration

     For  the purpose of verifying the accuracy of CDM
 in predicting annual average S02 and TSP concentra-
 tions  for  the State of Rhode Island, a model  valida-
 tion exercise was  performed.  This validation
 consisted  of comparing air quality model  predictions
 for 1974 with actual 1974 annual  concentrations
 measured at 22  intermittent and continuous-monitoring
 stations   throughout the  AQMA.   The measured data
 were obtained from  EPA's  SAROAD data file at the
 National Aerometric Data  Bank.

     Scatter diagrams  of  measured versus  predicted
 1974 annual average concentrations of SO2 and TSP
 are shown, respectively, in Figures 2 and 3.
          (Scatter diagram; horizontal axis Xp:  Predicted SO2 Concentration, µg/m3)
Figure 2.  Comparison of Measured and Predicted 1974
Annual SO2 Concentrations in the Metropolitan
Providence AQMA
                                                      300.

-------
          (Scatter diagram; horizontal axis Xp:  Predicted TSP Concentration, µg/m3)

Figure  3.   Comparison  of  Measured  and  Predicted  1974
Annual  TSP  Concentrations in  the Metropolitan
Providence  AQMA
     The correlation coefficients (r) between measured
and predicted concentrations are 0.86 and 0.78, re-
spectively, for SO2 and TSP.  These results indicate
good correlation between these quantities and demon-
strate the accuracy of the CDM in predicting air
quality levels throughout the AQMA.  The correlations
may also be interpreted quantitatively, since r2 is
equal to the percentage of the variance of the meas-
ured concentrations that can be accounted for by a
linear relationship with the predicted concentrations.
These values are 74% and 61%, respectively, for SO2
and TSP.  The standard errors of estimate for the
linear regression relationships were found to be
10.0 µg/m3 and 8.3 µg/m3, respectively, for SO2 and
TSP.

     The purpose of a calibration is to adjust model
estimates based on the relationship between measured
(Xm) and predicted (Xp) concentrations determined
from the model validation.  This is accomplished by
a linear regression on the validation data, whose
slope and intercept are then used as correction
factors in making future predictions.  Referring to
the scatter diagram for SO2 in Figure 2, the linear
regression line of best fit was found to be
Xm = 0.52 Xp + 9.9 by the method of least squares.
Referring to Figure 3 for TSP, the linear regression
line was found to be Xm = 0.67 Xp + 31.9.
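
     As an illustration of this calibration step, the
least-squares fit can be sketched as follows; the arrays
are placeholders, not the 22-station data set quoted above.

    # Minimal sketch of the least-squares calibration described above.
    import numpy as np

    x = np.array([20.0, 35.0, 50.0, 65.0, 80.0])   # predicted annual means (illustrative)
    y = np.array([21.0, 28.0, 36.0, 42.0, 52.0])   # measured annual means (illustrative)

    slope, intercept = np.polyfit(x, y, 1)          # line of best fit, y = slope*x + intercept
    r = np.corrcoef(x, y)[0, 1]                     # correlation coefficient

    def calibrate(xp):
        # Calibrated estimate for a future model prediction xp.
        return slope * xp + intercept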

     These equations were used to calibrate the
model for all future modeling predictions.  Note that
both lines have a positive intercept.  There are
three possible physical interpretations of these in-
tercepts:  (1) background pollutant concentrations,
(2) systematic bias in the measured data, and (3)
systematic bias in the model predictions.  For par-
ticulate matter, 31.9 µg/m3 is within the normal
background level range due to road dust, pollen, and
other fugitive dust sources, supported by the lowest
measured annual average TSP value in the State in
1974 of 23 µg/m3 in Washington County.  For SO2, the
background level of 9.9 µg/m3 is supported by the
lowest measured annual average value in the State
in 1974 of 11 µg/m3 in the town of Westerly.
             Future Base Case Air Quality

     Calibrated modeling predictions of future air
quality levels indicate that the annual primary
standard for SO2 of 80 µg/m3 will not be exceeded
in the Metropolitan Providence AQMA through 1985.
The maximum predicted concentration was 54 µg/m3,
occurring in the city of Providence in 1985.  Even
if the standard error of estimate of the SO2 model
predictions of 10.0 µg/m3 is added to this value,
the result is still well within the standard*.

     Future TSP levels are also predicted not to
exceed the annual primary standard of 75 µg/m3.  How-
ever, TSP levels are predicted to exceed the annual
secondary standard of 60 µg/m3 by 1.4 µg/m3 in the
city of Providence in 1985.  It should be noted that
this increment is less than the standard error of
estimate of the model calibration of 8.3 µg/m3.
Concentration isopleths and the predicted area of
violation for this case are shown in Figure 4 as an
example of results obtainable from the CDM.
 Figure 4.  Predicted Base Case 1985 Annual Average
 TSP Concentrations in Rhode Island (µg/m3)

       The annual secondary TSP standard of 60 µg/m3
  is to be used only as a guide in assessing imple-
  mentation plans to achieve the 24-hour secondary
  TSP standard of 150 µg/m3.  Therefore, an analysis
  was undertaken to extrapolate statistically the
  future base case annual average TSP concentrations
  predicted by the CDM to 24-hour maximum values, using
  the techniques of Larsen13.  The purpose of this ana-
  lysis was to verify that the predicted TSP levels ex-
                                                           *The State of Rhode Island has adopted air quality
                                                            standards which are identical to the NAAQS.
                                                       301

-------
ceeding the annual secondary TSP standard in the AQMA
imply violations of the 24-hour secondary standard
and hence the need for air quality maintenance
measures to further control particulate emissions.
Note that the secondary 24-hour TSP standard is
not to be exceeded more than once per year.  Thus,
in order for a violation of the standard to occur,
the second highest 24-hour TSP concentration in a
given year must exceed 150 µg/m3.  Larsen's statisti-
cal techniques were used to extrapolate such values,
and the results show a predicted violation of the
24-hour standard in the Metropolitan Providence area,
with values as high as 188 µg/m3 in downtown Provi-
dence through 1985.  The output from the CDM is
easily used in this extrapolation, and one possible
improvement to the model would be to add a Larsen
subroutine to the computer code so that both annual
average and 24-hour maximum concentrations could be
output simultaneously.
     Thus, the modeling results indicate that further
control of particulate emissions in the Metropolitan
Providence AQMA will be necessary to maintain am-
bient air quality standards for TSP through 1985.

   Evaluation of Air Quality Maintenance Strategies

     The particulate control strategies tested for
air quality maintenance were:
     • An annual  inspection and periodic maintenance
       program for all  large industrial, commercial,
       and institutional boilers included in the
       point source inventory,  including electric
       generating power plants
     • Use of unleaded gasoline by all  1975 and
       newer light-duty motor vehicles included
       in the area source inventory.

These strategies were evaluated by making appropriate
adjustments to the future projected emission inven-
tories and then rerunning the CDM to predict future
air quality.  The results indicate only the use of
the second strategy will maintain annual TSP levels
in the AQMA below the secondary standard through
1985.  Projected effects of the first strategy are to
reduce future TSP levels, at most, by 1 µg/m3.
     Although predicted concentrations output by
the CDM show the split between total area and total
point source contributions, a detailed source-
contribution file* is not produced.  We consider this
omission to be the principal weakness of the CDM
used in this study, as such data would eliminate the
need to adjust emission inventories and constantly
rerun the model in order to test different control
strategies.  By compiling source contributions at
all receptor points, alternative maintenance strate-
gies can be quickly evaluated through the application
of appropriate scaling factors to only those sources
affected by the strategy.  Since this work was done,
a version of the CDM with this capability has been de-
veloped for EPA Region V.
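
     A minimal sketch of the scaling-factor evaluation
just described, assuming a hypothetical per-source
contribution table (which the version of the CDM used in
this study did not produce), is:

    # Evaluate a control strategy from a saved source-contribution table
    # instead of rerunning the dispersion model.  The table layout and
    # names are illustrative assumptions, not an actual CDM output format.
    def apply_strategy(contributions, scale_factors, background=0.0):
        """contributions: {receptor: {source_id: concentration}};
        scale_factors: {source_id: emission scaling under the strategy}."""
        result = {}
        for receptor, by_source in contributions.items():
            total = background
            for source_id, conc in by_source.items():
                total += conc * scale_factors.get(source_id, 1.0)
            result[receptor] = total
        return result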

                     Conclusions

      The CDM was used to identify areas in the
Providence AQMA where annual NAAQS for SO2
and TSP are likely to be exceeded, and to test
control strategies to maintain  air quality standards.
The CDM was found to be an accurate model, well
suited to air quality maintenance analysis.  The
principal weakness of the CDM was its inability
to produce a source-contribution file that would
allow a more analytical approach to the evaluation
of maintenance strategies.  We note that subsequent
  to this  study,  a  version  of CDM has been developed
  for  EPA  Region  V  which  includes this capability.
  The  usefulness  of the CDM in AQMP evaluation could
  also be  improved  upon by  the addition of a Larsen
  subroutine  to statistically extrapolate annual
  levels to 24-hour maximum concentrations.

                       References

  1.   Guldberg, P.M.,  Kemerer,  B.L.,  and Shah, M.C.,
      Technical Support to  the State  of Rhode Island
      on Development of an  Air Quality Maintenance
      Plan, Publication No. EPA 901/9-75-001, pre-
      pared by Walden Research Div. of Abcor, Inc.,
      Cambridge, Mass., under Contract No. 68-02-1377,
      Task Order  6,  September  1975.

  2.   Busse, A.D.,  and  Zimmerman,  J.R.,  User's Guide
      for the Climatological  Dispersion  Model, U.S.
      Environmental  Protection  Agency,  Publication
      No. EPA-R4-73-024,  Research  Triangle Park, N.C.,
      December 1973.

  3.   Air Quality Implementation  Planning  Program,
      prepared for  the  Environmental  Protection
      Agency, Washington, D.C.  by  TRW  Systems  Group,
      1970.

  4.   Federal Register, Vol. 38, pg. 15834; as amended
     by Vol. 39, pg. 16343, May 8, 1974, and by Vol.
     40, pg. 25814, June 19, 1975.

  5.   Federal Register, Vol. 40, pg. 18726.

  6.  State Land Use Policies and  Plan, Rhode  Island
     Statewide Planning Program,  Report No.  22,
     Providence,  January 1975.

 7.  Guidance for Air Quality Maintenance Planning
     and Analysis, Volumes 5, 7, and 11, U.S. Environ-
     mental Protection Agency, Publication Nos. EPA-
     450/4-74-006, -014, and -008, Research Triangle
     Park, N.C.

 8.  Pasquill,  F.,  Atmospheric Diffusion, D. Van
     Nostrand Company, Ltd., London,  1962.

 9.  Turner,  D.B.,  Journal  of Applied Meteorology,
     February 1964.

10.  Briggs, G.A.,  "Some Recent Analyses of Plume
     Rise Observation," Proceedings of the Second
      International  Clean Air Congress, New York,
     Academic Press, 1971.

11.  Briggs, G.A.,  "Discussion on Chimney Plumes  in
     Neutral  and  Stable Surroundings," Atmos. En-
     viron., 6:   507-510, 1972.

12.  Turner, D.B., Zimmerman, J.R., and Busse, A.D.,
     An Evaluation of Some Climatological Dispersion
     Models, presented at the 3rd meeting of the
     NATO/CCMS Panel on Modeling.

13.  Larsen, R.I., A Mathematical Model for Relating
     Air Quality  Measurements to Air Quality
     Standards,  U.S. Environmental Protection Agency,
     Office of Air Programs, Publication No. AP-89,
     Research Triangle Park, November 1971.
 *Such as the source-contribution file produced
  by EPA's Implementation Planning Program.3
                                                      302

-------
                                    IMPROVEMENTS IN AIR QUALITY DISPLAY MODEL
                                          Chandrika Prasad, Ph.D., P.E.
                                        State Air Pollution Control Board
                                            Richmond,  Virginia  23219
     The Air Quality Display Model (AQDM) is widely
used to predict  the  concentration of sulfur dioxide
and suspended particulates  in  the ambient  air.   This
paper describes  some of the difficulties encountered
in the use of this model and the modifications  to im-
prove the model.
     It is suggested that (1) in calibrating the model
the line  of the  best fit be determined on  the basis of
the measured  minus the background concentration and
the calculated concentration,  (2) the actual  number of
samples (which were  used to compute the standard geo-
metric deviation)  be used in computing the highest and
the second highest concentrations, (3) in  computing the
source contributions to a receptor the y-intercept and
a part of the background not be apportioned in  the in-
dividual source contribution and (4) a few modifica-
tions to  the  input and output  formats be made.   These
modifications add to the usefulness of the model and
the ease  in reviewing the results.

                      Nomenclature

A  = y-intercept
B  = slope of the  regression line
Ci = uncalibrated concentration due to source i at
     a receptor
C  = uncalibrated total calculated concentration at
     a receptor
D  = standard geometric deviation
E  = a variable  (see Appendix  A)
F  = plotting position frequency
G  = background  concentration
i  = a subscript
N  = number of samples
R  = rank order  (highest, second highest,  etc.)
Si = concentration contribution (calibrated) from
     source i to a receptor
S  = concentration contribution (calibrated) from all
     sources to a receptor
T  = total concentration (calibrated) at a receptor
     (annual arithmetic mean)
Tmax = highest concentration (short-term)
u  = uncertainty
y  = measured air quality concentration
z  = number of standard deviations from the mean

                      Introduction

     Atmospheric  simulation models are frequently used
to relate pollutant  emissions  to pollutant concentra-
tions.  Some  of the  models most readily available to
air pollution control agencies and representative of
the state-of-the-art of atmospheric simulation  models
are given in  Reference 1.
     The  models  estimate the concentration for  a 1-hour
period or for seasonal or annual averages. The long-
term models such as AQDM2* and CDM3 are widely used
to predict the concentration of sulfur dioxide and sus-
pended particulates.  The AQDM is preferred and is the
minimum acceptable4 by the Environmental Protection
Agency for air quality analysis for these two pollu-
tants in developing the air quality maintenance plans.
* Numbers in superscripts denote references cited.
     The AQDM is essentially a long-term model.  The
model determines the impact of a variety of sources at
a given receptor for a given set of meteorological con-
ditions.  It then weighs this concentration by the fre-
quency with which that particular set of meteorological
conditions occurs and then sums up over all meteorolog-
ical conditions, thus producing a long-term average
concentration.  Basic input to the model is a compre-
hensive emission inventory on both point and area
sources.  The meteorological input to the model is a
joint frequency distribution of wind speed, direction,
and stability classes along with an average mixing
depth.
     Through a mathematic simulation of the atmospher-
ic diffusion process, the model determines the arith-
metic average of ground level concentrations over an
annual period.  To provide a comparison with the Nation-
al Ambient Air Quality Standards the program includes a
statistical model5 which is used to relate the annual
arithmetic mean to annual geometric mean and the high-
est concentration for a selected number of receptors.
     Climatological Dispersion Model (CDM) is another
long-term model frequently used.  The AQDM and CDM have
differences in calculation techniques; however, the two
require basically the same inputs and have the same types
of outputs.  A significant difference between the two is
a source contribution table generated by the AQDM
which allows the impact of each source on air quality
to be evaluated.  Such a source contribution table is
highly desirable, especially when developing a strategy
to maintain the standards.  Such a table is also useful
in evaluating the impact of a new source or the impact
of a control program for an existing source.

                Modifications To The AQDM

     Several difficulties were encountered in the use of
the AQDM.  These are discussed in the paragraphs which
follow.  Also discussed are the methods to overcome
these difficulties by modifying the model.  These modi-
fications cover the following areas:

            Calibration procedure, used to compute y-
            intercept and slope of the line of the best
            fit during calibration of the model

            Statistical model,  used to compute annual geo-
            metric mean and highest concentration

            Source contribution table, used to generate
            individual source contribution to selected
            receptors

            Input and output formats, used for data input
            and results output.

 Existing Calibration Procedure

     Before using the AQDM to estimate regional air
quality, the model is first calibrated using existing
air quality data.  This is accomplished by making an
AQDM run using the emission,  meteorological, and mea-
sured air quality data for a specific year.  The cali-
bration procedure begins with the use of the model to
calculate concentration values at each of the monitor-
                                                       303

-------
ing stations for which measured air quality data are
available.  The line of the best fit (which describes
the relationship between the measured and calculated
concentration) is obtained using the least-square tech-
nique.  One of the following two procedures is used to
handle the background concentration.

     Use Of Background Concentration As Input.  If
an accurate value of the background concentration is
available, it is included in the input data.  The AQDM
program adds the background concentration to the calcu-
lated concentration before attempting to determine the
line of the best fit as shown in Figure 1.  The regres-
sion line in this case can be described by the equation:
                 y = A + B(C + G)                   (1)

     In this case, since the accurate value of the
background concentration, G, was used as input, the y-
intercept, A, can be considered as the unknown and
might be attributed to natural sources and/or
man-made sources outside the area being modeled, or it
may be considered as an uncertainty due to uncertain-
ties in the input data on emissions, meteorology, and
measured air quality.  For the sake of brevity, let it
be called the uncertainty and denote it by 'U'.  Hence,
in this case, U = A.
     (Figure 1:  measured versus calculated-plus-background
     concentration, with the line of best fit.)
-------
Figure 1, 2, and 3.

Computation Of Calibrated Concentration

     Once the model is calibrated, the y-intercept and
the slope of the regression line are used to calibrate
or adjust the calculated concentrations at all the re-
ceptors.  The model (without modifications) uses the
following relation to compute the calibrated concen-
tration:

                 T=A+B(C+G)                (2)

     In the modified version of the program, this is
done by using the relation:

                 T = A + G + B C                    (3)
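
     For illustration, the two relations may be placed
side by side; the values of A, B, C, and G below are
illustrative only.

    # Sketch of the calibration relations discussed above (Equations 2 and 3).
    # A = y-intercept, B = slope, C = uncalibrated calculated concentration,
    # G = background concentration.
    def calibrated_original(C, A, B, G):
        return A + B * (C + G)          # Equation 2 (unmodified program)

    def calibrated_modified(C, A, B, G):
        return A + G + B * C            # Equation 3 (modified program)

    if __name__ == "__main__":
        A, B, G = 10.0, 0.8, 15.0
        for C in (20.0, 40.0, 60.0):
            print(C, calibrated_original(C, A, B, G), calibrated_modified(C, A, B, G))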

Existing Procedure In The Statistical Model

     The air quality standards for sulfur dioxide and
suspended particulates are in terms of annual mean con-
centrations.  In addition, the standards include a 24-
hour value not to be exceeded more than once per year.

     The AQDM program includes a statistical model to
convert the annual arithmetic mean to the annual geo-
metric mean and the highest concentration.  To do this,
the model requires the value of standard geometric devia-
tions for all the receptors which are selected for the
statistical output.
     It should be noted that the standard geometric de-
viations are available only from the past historical
data on measured air qualities and are available only
for those receptors where there is a monitoring station.
At this time there is no technique available to project
standard geometric deviations for future years.  At the
same time, the standard geometric deviations are based on
a limited number of samples, the prevalent sampling fre-
quency being every third to every sixth day,  with the
sample  size ranging from 60 to 120 per year.  In comput-
ing the highest concentration the model assumes a sample
size of 365 (continuous sampling).  It is further realiz-
ed that to obtain 365 samples (24-hour samples) using a
Hi-Vol sampler will require at least two samplers side
by side.  The mathematical relations used in the model
to compute the highest concentrations are given in
Appendix A, with the only difference being that the model
assumes the number of samples to be 365 in all cases.
     If the standard geometric deviation, which is based
 on a limited number of samples, is assumed to be the
 same for continuous sampling (or 365 samples), it intro-
 duces significant errors in the computed values of the
 highest concentrations.  Figure 4 shows the ratio of the
 highest  concentration to the annual  arithmetic mean as
 a function of sample size for several values of the
 standard geometric deviation.
     At the same time the model does not compute the
 second  highest concentration which is actually  desired
 for a comparison with the air quality standards.

 Modified Procedure For The Statistical Model

      It is suggested that for direct comparison with
the standards, the second highest concentration be com-
puted using the actual number of samples  (which were
used to compute the standard geometric deviation) for
 the selected receptors  using the procedures given  in
 Appendix A.  Figure  5 shows the ratio of the  second
 highest concentration to the annual  arithmetic  mean as
 a function of the  number of  samples  for  several values
 of the  standard geometric deviations.

  Existing Procedure For  Source Contribution Table

     The AQDM provides  a table which gives the  contri-
bution  from each source to each of five  selected recep-
tors.   If the five receptors are not selected by the
   Figure 4.  Ratio of the Highest Concentration to the
              Annual Arithmetic Mean as a function of
              sample size and standard geometric
              deviation.

   Figure 5.  Ratio of the Second Highest Concentration
              to the Annual Arithmetic Mean as a function
              of sample size and standard geometric
              deviation.

user, the program automatically selects five receptors
with the highest, second, third, fourth, and fifth high-
est concentrations.  The source contribution is given
both as a concentration and a percentage of the total
concentration for that receptor.
     The model starts with computing the uncalibrated
form of the total concentration for the specific recep-
tor under consideration by using the following relation
(which is derived from Equation 2):

                  C = (T - A)/B - G                 (4)

     The model also calculates the concentration due to
each source at this receptor.  Let Ci be the concentra-
tion contribution due to source i to this receptor.  The
source contribution, Si, due to source i to this recep-
tor is calculated using the following relation:
                                                        305

-------
            Si = B Ci + [A + (B - 1) G] (Ci / C)             (5)

     The total contribution from all sources to this
receptor is given by the summation of equation 5 over
all sources.

            S = Σ Si = Σ B Ci + [A + (B - 1) G] (Σ Ci / C)   (6)

      or,
            S = B C + A + (B - 1) G                          (7)
     Adding the background, G, to the total source con-
tribution, S, gives the total calibrated concentration
at this receptor as

          T=S+G=A+B(C+G)               (8)

which is the same as Equation 2.

Modified Procedure For Source Contribution Table

     A close examination of  Equation 5 indicates  that  a
part of the background, G, and the y-intercept,  A, are
apportioned in the ratio of the individual source con-
tribution, Ci, to the total concentration, C.  If
there is a change in the total concentration,  C,  due to
emissions data changes in sources other than i,  the con-
tribution from source i to the same receptor will change
even though all other conditions  are unchanged.   This
presents problems when comparing  the results of  one
AQDM run with the other.
     The procedure described below overcomes this diffi-
culty and gives consistent results in each case.
     The source contribution from source i to a  given
receptor should only be adjusted  for the slope of the
regression line and should be computed as follows:

                      Si = B Ci                     (9)

     The total source contribution, S, will be the sum-
mation of Equation 9 over all sources as given by

                      S = B C                       (10)

     Adding the background, G, and the y-intercept, A,
will give the total concentration

                      T = A + G + B C               (11)

which is the same as Equation 3.
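
     A small numerical sketch, with illustrative values,
shows the practical difference between the two apportion-
ment rules:  a change in the total C (because other
sources changed) shifts the Equation 5 contribution of
source i but leaves the Equation 9 contribution unchanged.

    # Contrast of the two apportionment rules (Equations 5 and 9).
    def contribution_eq5(Ci, C, A, B, G):
        return B * Ci + (A + (B - 1.0) * G) * (Ci / C)   # Equation 5 (unmodified)

    def contribution_eq9(Ci, B):
        return B * Ci                                    # Equation 9 (modified)

    if __name__ == "__main__":
        A, B, G = 5.0, 0.9, 20.0
        Ci = 10.0
        for C in (50.0, 80.0):   # total changes because other sources changed
            print(C, contribution_eq5(Ci, C, A, B, G), contribution_eq9(Ci, B))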

 Modifications To The Input And Output Format

     Source Data.  One of the basic inputs to the
model is a comprehensive emission inventory on both
point and area sources.  The emission data for each
emission point are entered on two cards.  The first
card (designated as SOURCE =) contains the data on
source location, emission rates and stack height, as
shown in Table 1.  The second card (designated as PLUME
=) contains stack diameter, exit gas velocity and tem-
perature, as shown in Table 2.  All the SOURCE cards are
assembled together.  Similarly all the PLUME cards are
assembled together in the same order as the SOURCE
cards, and are usually placed behind the last SOURCE
card.  In processing these emission data (Source and
Plume Data), the Plume Data read from the first PLUME
card are assigned to the first SOURCE card, the data on
the second PLUME card are assigned to the data on the
second SOURCE card, and so on.  The model (without modi-
fications) makes no comparison to ascertain that Plume
Data are assigned to the correct Source Data.

     TABLE 1.  An Example Of SOURCE Data Input

 Column:   10        20       30      40        50        60      70      80
         SOURCE =  LOCATION LOCATION  AREA   EMISSION  EMISSION  STACK  SOURCE
                    HORIZ.    VERT.          SO2 (TPY) TSP (TPY)   HT.    ID*
                    290.2    4243.0    0.0    53.818     2.037   122.0  '9952'
                    354.3    4267.7    0.0    72.467     9.267   122.0  '9953'
                    223.0    4324.6    0.0     0.000     4.247    15.0  '5701'
                    223.0    4324.6    0.0     0.000     0.767    16.7  '5703'
                    221.5    4312.3    0.0    12.657    38.062    58.2  '4751'
                    221.5    4312.3    0.0     8.438    11.671    58.2  '4752'

 * Source ID Is Additional Input Under Modified Program.

     The modified program reads the source identifica-
tion number entered in columns 75 to 80 of SOURCE and
PLUME cards.  For each point source, identical identifi-
cation numbers are used on both the SOURCE and PLUME
cards.

     TABLE 2.  An Example Of PLUME Data Input

 Column:   10       20         30         40       50-70      80
         PLUME =  GAS VEL.  STACK DIA.  GAS TEMP            PLUME ID*
                   12.4        5.0        392.0             '9952'
                   21.6        7.9        391.0             '9953'
                   53.3        4.0        419.0             '5701'
                   12.5        4.0        452.0             '5703'
                   12.8        4.0        401.0             '4751'
                   21.5        4.3        395.0             '4752'

 * Plume ID Is Additional Input Under Modified Program.

     For area sources the PLUME cards are internally
generated by the program.  The modified program compares
the source identification number with the plume identi-
fication number and if a mismatch is found, further
processing of the data is stopped and the mismatch in-
formation is printed as shown in Table 3.

     Sampling Station Data.  Input format for the
measured air quality data has been modified (as shown
in Table 4) to include an identification number for
data from each sampling station, and the program is modi-
fied to read these additional data.  In the 'CORRELATION
DATA' output these identification numbers are reprinted.
At the same time, the modified program prints out cali-
brated concentrations.  This is helpful in reviewing the
Correlation Data output, an example of which is shown
in Table 5.
                    TABLE 3.  An Example Of Source Data Output

 SOURCE  SOURCE LOCATION  SOURCE   EMISSION RATE           STACK DATA            ID NO.
   NO.        (KM)         AREA      (TON/DAY)      HT.   DIA.   VEL.   TEMP.   S*     P*
         HORIZ.   VERT.             SO2      TSP    (m)   (m)    (m/s)  (°K)
   1     290.2   4243.0    0.0    53.818   2.037   122.0   5.0   12.4   392.0  9952   9952
   2     354.3   4267.7    0.0    72.467   9.267   122.0   7.9   21.6   391.0  9953   9953
   3     223.0   4324.6    0.0     0.000   4.247    15.0   4.0   12.5   452.0  5701   5703   S/P MISMATCH
   4     223.0   4324.6    0.0     0.000   0.767    16.7   4.0   53.3   419.0  5703   5701   S/P MISMATCH
   5     221.5   4312.3    0.0    12.657  38.062    58.2   4.0   12.8   401.0  4751   4751
   6     221.5   4312.3    0.0     8.438  11.671    58.2   4.3   21.5   395.0  4752   4752

 * Additional output generated by the modified program.
   This AQDM run was unsuccessful due to source/plume data mismatch.
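
     A minimal sketch of the identification-number
comparison described above, using the example identifica-
tion numbers of Tables 1 through 3 (card reading and
further processing are omitted), is:

    # Compare SOURCE and PLUME identification numbers card by card and
    # report mismatches; the modified AQDM stops further processing when
    # a mismatch is found.
    def check_ids(source_ids, plume_ids):
        mismatches = []
        for n, (s, p) in enumerate(zip(source_ids, plume_ids), start=1):
            if s != p:
                mismatches.append((n, s, p))
                print(f"SOURCE NO. {n}: S/P MISMATCH  (S={s}, P={p})")
        return mismatches

    check_ids(['9952', '9953', '5701', '5703', '4751', '4752'],
              ['9952', '9953', '5703', '5701', '4751', '4752'])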

                                                        306

-------
     TABLE 4.  An Example Of SAMPLING STATION Data Input

 Column:    10       20        30            40           50
          FAROE =  HORIZ.    VERT.    MEASURED ANNUAL   STATION
                   COORD.    COORD.   ARITHMETIC MEAN     ID*
                    386.0    4075.0         74.0         '176A'
                    386.5    4075.1         85.0         '176D'
                    384.5    4069.7         84.0         '176E'
                    384.5    4071.4         97.0         '176F'
                    328.0    4060.8         65.0         '178B'

 * Additional Data Input Under Modified Program.

     TABLE 5.  An Example Of CORRELATION Data Output

 SITE   RECEPTOR LOCATION    PARTICULATE CONC.        ID     CALIBRATED*
  NO.    HORIZ.    VERT.    OBSERVED  CALCULATED     NO.*      CONCEN.
   1     386.0    4075.0      74.0       59.0        176A       84.8
   2     386.5    4075.1      85.0       43.0        176D       72.7
   3     384.5    4069.7      84.0       63.0        176E       87.7
   4     384.5    4071.4      97.0       66.0        176F       89.8
   5     328.0    4060.8      65.0       23.0        178B       57.0

 * Additional Output Generated By The Modified Program.

      Source Contribution Table.  The AQDM program
 has been modified so that all sources (point or area)
 contributing more than 1% to any of the five receptors
 are identified and their source identification numbers
 printed as shown in Table 6.  At the same time the
 maximum percentage contribution to any of the five re-
 ceptors is printed.  This makes the output very useful.

      TABLE 6.  An Example Of SOURCE CONTRIBUTION TO
                FIVE MAXIMUM RECEPTORS Output

 SOURCE   RECEP.   RECEP.   RECEP.    RECEP.   RECEP.    ID*      %*
 NUMBER    244      60       213        49      225      CODE    FLAG
  367     0.0635   0.0470    1.3050   0.0719   0.2610    1701    >2%
          0.12%    0.09%     2.33%    0.14%    0.50%
  368     0.0361   0.0301    0.0412   0.0415   0.1118    1771
          0.07%    0.06%     0.07%    0.08%    0.22%
  369     0.0065   0.0045    0.0196   0.0057   0.1100    1801
          0.01%    0.01%     0.04%    0.01%    0.21%
  370     0.0022   0.0015    0.0065   0.0025   0.0373    1802
          0.00%    0.00%     0.01%    0.00%    0.07%
  371     0.0284   0.0230   15.5964   0.0319   0.0866    1811    >27%
          0.05%    0.04%    27.88%    0.06%    0.17%

 * Additional Output Generated By The Modified Program.

                      Conclusions

     The  various modifications described above can be
grouped together under the  following two categories:

Modifications  To The  Mathematical Logics

     Under this category  it was suggested that
     (1)  During calibration of the model the line of
the best fit be determined by comparing the computer-
calculated concentrations with the measured minus the
background concentration
     (2)  The second highest concentration be computed
using the actual number of  samples to compare the  re-
sults with the short-term National Ambient Air Quality
Standards
     (3)  The y-intercept and the background not
be apportioned while computing the source contributions
from individual sources to a receptor.
     The  only  additional  input required to carry out the
modifications  under this  category is the data on the
number of samples  for those receptors which are  select-
ed for the statistical output.

Modifications  To The  Input  And Output Formats

     Under this category  the additional input data re-
quired are (l) the SOURCE and  PLUME  identification
numbers and  (2) the sampling station identification
numbers.  These data  are  reproduced  in the outputs on
SOURCE DATA, CORRELATION  DATA  and SOURCE CONTRIBUTION
TO FIVE RECEPTORS.
     These modifications  are helpful in reviewing  the
output results and avoid a lot of cross-referencing.

                      Acknowledgement

     The modifications to the Input and Output formats
as described previously were incorporated into  the
model by Erik Hougland (presently with the Tennessee
Valley Authority) while working at Virginia Polytechnic
Institute and State University on a contract from the
Virginia State Air Pollution Control Board.  His work in
this area is acknowledged.  The author also wishes to
thank his colleague Limon E. Fortner for first  pointing
out the problem associated with generation of the
Source Contribution Table.

                      References

 1. "Guidelines For Air Quality Maintenance Planning
    and Analysis, Vol. 12, Applying The Atmospheric
    Simulation Models To Air Quality Maintenance Areas,"
    EPA-450/4-74-013, U.S. EPA., Research Triangle Park,
    N.C., Sept. 1974.
 2. "Air Quality Display Model," TRW Systems Group Man-
    ual for NAPCA, HEW, Contract No. PH-22-68-60,
    Washington, D.C., Nov. 1969.
 3. "User's Guide For The Climatological Dispersion
    Model," Environmental Monitoring Series, EPA-RU-73-
    021* (mis PB 2273l*6AS) NERC, EPA., Research
    Triangle Park, N.C., Dec. 1973.
 1*. "Maintenance of Ambient Air Quality Standards, Re-
    quirements For Preparation, Adoption and Submittal
    of Implementation Plans ," Federal Register, Oct. 20,
    1975-
 5. R. I.  Larsen, "A Mathematical Model For Relating Air
    Quality Measurements To Air Quality Standards," U.S.
    EPA.,  Publication No. AP-89, Research Triangle Park,
    N.C.,  Nov. 1971.

                       Appendix A

Computation Of Highest And Second Highest Concentration

     First compute the plotting position frequency, F ,
using Larsen's technique5 as given below:
                 F = 100 (R - 0.4) / N                      (A-1)

     Next compute the number of standard deviations from
the mean from the relations:

                         2.52 + 0.80 E + 0.01 E²
          Z  =  E  -  ----------------------------------    (A-2)
                       1 + 1.43 E + 0.19 E² + 0.001 E³

          E  =  {ln [(100/F)²]}^1/2                         (A-3)
     Finally, compute the highest concentration by the
relation given below:
                  Tmax = Tg D^Z                             (A-4)
where the annual geometric mean, Tg, is related to the
annual arithmetic mean, T, by the relation:

                  Tg = T / EXP (0.5 ln²D)                   (A-5)
     In the procedure described above, use R = 1 and
R = 2 for computing the highest and the second highest
concentrations, respectively.
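
     As a numerical illustration of the Appendix A rela-
tions as reconstructed above (the values of T, D, N, and
R are illustrative only):

    # Sketch of the Appendix A computation for the R-th highest concentration.
    import math

    def rth_highest(T, D, N, R):
        """Annual arithmetic mean T, standard geometric deviation D,
        N samples per year; R = 1 for highest, R = 2 for second highest."""
        F = 100.0 * (R - 0.4) / N                                   # (A-1)
        E = math.sqrt(math.log((100.0 / F) ** 2))                   # (A-3)
        Z = E - (2.52 + 0.80 * E + 0.01 * E**2) / (
                1.0 + 1.43 * E + 0.19 * E**2 + 0.001 * E**3)        # (A-2)
        Tg = T / math.exp(0.5 * math.log(D) ** 2)                   # (A-5)
        return Tg * D ** Z                                          # (A-4)

    print(rth_highest(T=75.0, D=1.8, N=61, R=2))   # second highest, every-6th-day sampling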
                                                       307

-------
                           AIR POLLUTION MODELING IN THE DETROIT METROPOLITAN AREA
             A. Greenberg
             B. Cho
             Air Pollution Control Division
             Wayne County Department of Public Health
             Detroit, Michigan
                  James  A.  Anderson,  P.E.


                  Wayne  State  University and
                  Physical  Dynamics,  Inc.
                  Detroit,  Michigan
                       SUMMARY

The Air Quality Display Model (AQDM) and the Climato-
logical Dispersion Model (CDM) have been used by the
Air Pollution Control Division of the Wayne County
Department of Public Health and Wayne State Univer-
sity to simulate annual averages of suspended
particulate and sulfur dioxide in the Detroit Metro-
politan area.  Several meteorological models includ-
ing the STAR program were evaluated to determine the
effect of meteorological input data on predicted con-
centrations.  The Briggs and Holland plume rise
formulas were similarly evaluated.  Correlations of
predicted concentrations with observations by the
Wayne County Air Pollution Control Division range
from .75 to above .90, depending on the combination
of models used.

Applications of air pollution modeling at the Wayne
County Air Pollution Control Division have dealt with
developing a better understanding of how sulfur
dioxide and particulate controls should be applied
and in evaluating the impact of new sources on air
quality.  In addition, several divisions of Wayne
State University are using the air pollution diffu-
sion models in a variety of applications. The College
of Engineering is developing algorithms for determin-
ing optimal air pollution control strategies with
limited energy resources.  The College of Lifelong
Learning uses the models in an environmental simula-
tion game called ENVIRO-ED as part of an educational
effort to develop better public awareness of the en-
vironmental problems in the Detroit region.  The
Ethnic Studies Division of the Center for Urban Stud-
ies uses the same models for analyzing the socio-
economic impact of air pollution on Detroit area
residents.

As a result of the success with the AQDM and CDM, dif-
fusion models have become a primary tool in the
Detroit area for predicting the future impact of pol-
lutant sources on ambient air quality and for criti-
cal decisionmaking such as locating major emission
sources, assessing fuel and/or process changes,
locating air monitoring stations, and reviewing
emission standards.

                     INTRODUCTION

The Detroit metropolitan area,  with a population of
approximately 4.7 million people, is the major
industrial center of Michigan.  The region consists
of seven Michigan counties:  Livingston, Macomb,
Monroe, Oakland, St. Clair, Washtenaw, and Wayne,
covering 4500 square miles and a portion of Ontario,
Canada.  The city of Detroit and Wayne State Uni-
versity are both located in Wayne County.  The Air
Pollution Control Division is responsible for the
enforcement of applicable air pollution regulations
in Wayne County which in part are designed to meet
the National Ambient Air Quality Standards (NAAQS).
The Division operates a basic air monitoring network
of 14 continuously recording sampling stations that
telemeter data back to a central processing station,
plus three additional manually operated high volume
air samplers.  Sulfur dioxide  concentrations  are re-
corded continuously at  the  14  stations  and suspended
particulate is measured at  all 17  stations on a 24-
hour sample - 6 days per week  schedule.   This system
provided the ambient air quality data for verifying
and calibrating the AQDM and CDM.   Two  major  airports
in the region are National  Weather Service stations
and provide local climatological data (LCD) for the
meteorological needs of the models.

The 1974 emission inventory of sulfur dioxide and
suspended particulate that  was used  in  this study
identified 506 point sources,  395  of  which were in
Wayne County.  To qualify as a point  source,  the
source must have emitted at least  5  tons  per  year and
be part of a plant that emitted at least  25 tons per
year.  The remaining known  emissions  were included
in the area source inventory.

MODEL INPUT DATA

The two major input data sets  required  for diffusion
modeling are an emission inventory for  point  and area
sources, and the various meteorological parameters.
The sensitivity of the model to this  data is  such that
every effort should be made to obtain the most  ac-
curate data possible.

EMISSION INVENTORY

The total inventory of emissions for  the  year  1974 was
divided into three major categories:  point sources
(public utilities and major industrial, institutional,
and governmental facilities);  area sources (resident-
ial, commercial,and small industrial);  and mobile
sources (automobiles, aircraft, and vessels).

Point source emission data  is  determined  by the amount
of fuel consumed and the manufacturing  process.  Area
source emissions are based  on  the  fuel  consumption
in the residential neighborhoods,  small-size  domestic
and commercial incinerators, and on small manufactur-
ing processes.

Mobile source emissions primarily  depend  on the gas-
oline consumption data for  passenger  and  commercial
vehicles, but these emissions  were neglected  in this
study because their contributions  to  the  total mass
emission rate of sulfur dioxide and suspended  parti-
culates throughout Wayne County were  approximately 2%
and 6%, respectively.  In addition, these sources,
distributed throughout the  County, mathematically con-
tribute negligible quantities  at the  receptor  sites.
We did not feel that collecting the total mobile
source emissions for a grid square and  placing  them
at some point in the grid square is a proper  proced-
ure.  Rather, they should be treated  as moving  line
sources.

Area source information was only available from Wayne
County at the time. Using the  UTM  system, a grid of
5000-meter squares was established over Wayne County
to arrive at the area source emissions.   Unfortunate-
                                                      308

-------
ly,  our  area  source information was originally based
on grid  squares  one mile in length.  The layout of
Detroit  and Wayne County made the choice of the mile
grids very logical at the time.  However, grid sizes
of kilometer  integers are the only size acceptable to
the CDM.  Since  5 kilometers is very close to 3 miles,
this was the  smallest grid square we could use.

METEOROLOGICAL INPUT DATA

The various meteorological data such as wind speed,
wind direction,  sky cover, etc., were available in
LCD form from the Wayne County Detroit Metropolitan
Airport  which is about 20 miles southwest of Detroit,
or from  the Detroit City Airport which is in the
northeast part of the city (and Wayne County).  Neither
of these weather stations are in the immediate vicin-
ity of the major pollution sources, but Metropolitan
Airport  seemed to provide better overall correlation
and was  used  in  most cases.

Over a period of one year, 2920 atmospheric observa-
tions are recorded at each weather station. A meteor-
ological model then statistically calculates a joint
frequency distribution of wind direction, wind speed,
and atmospheric stability.  The Day-Night STAR program,
which was developed by the National Climatic Center
and is a part of the CDM, uses 16 sectors for direct-
ion, 6 classes of wind speed, and 6 stability cate-
gories.(5)  For the 1974 Metropolitan Airport data,
this model aggregated the 2920 three-hour readings
into a distribution of 153 distinct probabilities
which were  used  for the CDM simulations.
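
A minimal sketch of this aggregation, with assumed (illus-
trative) speed-class boundaries rather than the STAR pro-
gram's actual class definitions, is:

    # Aggregate weather observations into a joint frequency distribution
    # of wind sector, speed class, and stability class.  The class
    # boundaries below are illustrative assumptions only.
    from collections import Counter

    SPEED_BOUNDS = (1.5, 3.0, 5.0, 8.0, 11.0)   # m/s upper bounds, classes 1-5; class 6 above

    def speed_class(speed):
        for k, bound in enumerate(SPEED_BOUNDS, start=1):
            if speed <= bound:
                return k
        return 6

    def wind_sector(direction_deg):
        # 16 sectors of 22.5 degrees each, centered on north, NNE, and so on.
        return int(((direction_deg + 11.25) % 360) // 22.5) + 1

    def joint_frequency(observations):
        """observations: iterable of (direction_deg, speed_mps, stability 1-6)."""
        counts = Counter((wind_sector(d), speed_class(s), st) for d, s, st in observations)
        total = sum(counts.values())
        return {key: n / total for key, n in counts.items()}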

A similar model  developed at Wayne State Univer-
sity, that enables the user to aggregate the meteoro-
logical data  into intervals as fine as the original
data, was used to evaluate the STAR program. The
LCD data was  originally collected in speed intervals
of 1 knot and directional intervals of 10 degrees.
Using the conventional 6 stabilities, and 10 degree
directional intervals, the number of distinct prob-
abilities in  the joint frequency distribution varies
from 316 to 857, as shown in Table 1.
       Speed Intervals          Number of Distinct
     (Meters per second)          Probabilities
              1                        857
              2                        582
              3                        448
              4                        372
              5                        316

        TABLE 1.  Joint Frequency Distribution

 The five different distributions summarized in Table 1
 were used as input to the AQDM to evaluate the useful-
 ness of this approach.  In all cases, more data are
 provided compared to the STAR model (which requires
 more computer time), but as the "fineness" of the
 distribution increased, the correlation of simulated
 versus observed data increased.  These results suggest
 that further work is needed in this area.

 Both the CDM and the AQDM used the curves of Pasquill-
 Gifford (4)  to  approximate stabilities in Detroit.
 The only difference between the two is that the CDM uses
 an empirical power law to approximate the functions,
 whereas the AQDM uses a table.

 Four CDM computer simulations were made (Table 2). The
 first included  only Wayne County sources in the emis-
 sion inventory.  We then  hoped to improve the linear
                             regression analysis in  the  second  CDM simulation by
including sources outside of Wayne County that could
                             affect Wayne County air quality.   These were few in
                             number, but an increase  in estimated concentrations  at
                             most stations of 1 to 2 micrograms per cubic meter
                             for suspended particulate and 2 to 6 micrograms per
                             cubic meter for sulfur  dioxide was found.   For  CDM
                             simulation number 3, the principal change  was to
                             increase the sulfur dioxide half-life to 3 hours.
Half-lives in simulations 1 and 2 were both 1.25 hours.
                             Because of this increase in half-life, estimated
                             sulfur dioxide concentrations increased at most sta-
                             tions by 3 to 6 micrograms per cubic meter.

                             The fourth CDM simulation used a sulfur dioxide half-
                             life of 3 hours, but the meteorological data was rep-
                             resentative of Detroit  City Airport which  is located
                             in a residential section of Detroit,  while the  previ-
                             ous simulations all used a joint frequency distribu-
                             tion from Metropolitan  Airport.  The correlation of
                             the regression analysis for particulate did  not im-
                             prove by using Detroit  City Airport data,  but the cor-
                             relation for sulfur dioxide did improve.
                  PARTICULATE         SULFUR DIOXIDE
 Simulation       Correlation     Intercept  Slope  Correlation
                  Coefficient                       Coefficient
     1               0.70           15.7      1.38     0.76
     2               0.84           12.3      1.36     0.75
     3               0.84            9.0      1.26     0.73
     4               0.77            6.6      1.49     0.81

 CONDITIONS
     1.  Wayne County Sources Only
     2.  All Sources In or Near County
     3.  Sulfur Dioxide Life of 3 Hours
          (previously 1.25 Hours)
     4.  City Airport Meteorological Data
          (previously Metropolitan Airport)

     TABLE 2.  Linear Regression Analysis of CDM
               Simulations
As can be seen in the table, the best correlation for
particulates was simulation number 2, due to the add-
ition of sources outside the county.  For sulfur di-
oxide, the best correlation was achieved with simula-
tion number 4 using  a half-life of 3 hours and in-
cluding sources outside the county.  With a background
defined by the y intercept, the high particulate
background, as will be seen, is justified and under-
standable.  The sulfur dioxide background is accept-
ably low in the area of only 0.003 parts per million.

                     CONCLUSIONS

The different simulations point out important practi-
cal aspects that must be considered when using the
state of the art diffusion models for decision making
purposes.

The emission inventory is extremely critical and
should be arrived at very carefully.  The Wayne County
Air Pollution Control Division determined the point
source emission inventory with data gathered princi-
pally by stack tests and emission factors (2) based
                                                      309

-------
on fuel consumption and the manufacturing process.
A significant amount of time was devoted to the emis-
sion inventory, yet the accuracy of the suspended
particulate inventory is still being questioned.
Stack tests are performed under ideal conditions not
always representative of day-to-day emissions.  Con-
sequently, the data supplied by industry had to be
carefully scrutinized before it was used to estimate
emissions.  For example, collector efficiency is
frequently supplied from the designer blueprints, but
it is unlikely the claimed efficiency is the normal
operative efficiency.  Even the particulate inventory
does not represent only the emission of suspended
particulates, but includes varying amounts of settle-
able matter, dependent on the type and efficiency
of the collector and the resulting size distribution
of the emissions.  Finally, the particulate emission
inventory certainly represents only a portion of the
total suspended particulate burden in the atmosphere.
Much of the uninventoried suspended particulate is
not the so-called natural background, but includes
man-made background, i.e., suspended particulate
emitted not from intended points of emission such as
smoke stacks, but from wind-blown factory dust, coke
piles, ventilation air, ground-up street dust from
mobile sources, etc.  The Wayne County Air Pollution
Control Division undertook a program, sampling the
ventilation air from an iron foundry and a casting
plant which are considered point sources based on the
quantity of emissions from their stacks.   We found a
greater particulate emission rate from the ventilator
ducts than from the respective smoke stacks.  It is
mistakenly believed that except for the natural
background, a conventional particulate emission in-
ventory can account for essentially all of the atmo-
sphere's particulate burden.

Because the man-made or unaccountable particulate
background depends largely on the type of industry
involved, the total particulate background (man-made
plus natural) will vary across the study area. The
heavy industrialized regions of Wayne County contain-
ing large power plants, iron foundries, steel mills,
coke ovens, cement plants, slag piles, etc., use
many processes which are dirty operations in the sense
that there are many chances for the evolution of
particulate matter at low heights above grade into
the open air.  It takes a source with excellent house-
keeping practices and "total enclosure" controls to
avoid these fugitive types of emissions. It is the
nature of the processes in a heavy industrialized
area that makes this a very difficult task to ac-
complish.

As the above regression analyses indicate, the CDM is
doing a very acceptable job of estimating particulate
concentrations.  The y intercept (which is total
background) is approximately 65 micrograms per cubic
meter county-wide.  As expected, the total background
is higher in those county areas recording the heaviest
suspended particulate concentrations.  By taking the
difference between the uncalibrated CDM estimated
particulate concentrations and the measured particu-
late concentrations, the result is total estimated
background concentrations.  This is done for each
receptor site.  (While this may not be an exact method,
it gives a feeling as to the magnitude of the unac-
countable sources.)  If natural background remains
constant throughout the area (not too bad an assump-
tion) the varying concentrations are due to man-made
unaccounted sources.  It turns out, as expected, that
we can draw isopleths of total background (Figure 1).
Figure 1.  Total Particulate Background (µg/m3),
               Wayne County, Michigan
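
A minimal sketch of this receptor-by-receptor difference,
with illustrative values, is:

    # Background estimate described above: measured minus uncalibrated
    # model estimate at each receptor (values illustrative).
    def estimated_background(measured, uncalibrated):
        """Both arguments: {receptor_id: annual mean concentration, ug/m3}."""
        return {r: measured[r] - uncalibrated[r] for r in measured}

    print(estimated_background({'site_1': 95.0, 'site_2': 120.0},
                               {'site_1': 31.0, 'site_2': 18.0}))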
We find that a section of the heavy  industrialized
area has an annual mean background of over  100 micro-
grams per cubic meter.  Put another  way, if somehow
we were able to plug up all well inventoried and
intended sources of emission, the measured  particu-
late concentrations in certain areas of the county
would be well over 75 micrograms per cubic meter, the
primary standard.  The importance of this cannot be
overstated.  It relates to the practicality of ever
achieving the suspended particulate  NAAQS as they
presently stand, and the importance  of enforcement
officials focusing on and paying careful attention to
all possible points of emission.

In Wayne County, we have seen the importance of fugi-
tive-type emissions on air quality as a result of a
comprehensive control program at the large  industrial
park of the Ford Motor Company Rouge Complex.  It
contains basic oxygen furnaces, coke ovens, power
plant, blast furnaces, large material storage piles,
etc. A substantial part of the control program was to
reduce or end the fugitive type of emissions attend-
ant to a large array of processes through different
types of good housekeeping practices.  The  once high
suspended particulate concentrations measured im-
mediately downwind of the complex and less  than a
quarter of a mile from the border have been reduced
in two years by 30%.

CONTROL STRATEGY DEVELOPMENT APPLICATIONS

Air pollution diffusion models are also being used in
the Detroit area to develop and evaluate alternative
air pollution abatement strategies.  The State of Mich-
igan is a long way from being "energy independent."
Estimates of the basic fuels imported to the state
range as high as 95% of demand, but the state obvious-
ly has a good supply of water, which  suggests that it
will continue to be a major manufacturing center and
perhaps even an energy converting center.   As the
supplies of oil and gas dwindle and  prices  increase,
however, greater emphasis will surely be placed on
using more coal, which raises at least two  questions.
First, would it be possible, practical, or  desirable
to convert many of the boilers that  once burned coal
                                                       310

-------
back to coal?  If the answer is yes, the second
question is, which sources should be burning coal
and how should emissions be controlled so as to reach
or maintain the NAAQS in the "best" way?  Many algo-
rithms have been developed recently in an attempt to
find the "optimal" solution to this problem (1).
However, the existing models usually have not in-
cluded fuel availability as a variable, and more
importantly,  the effect of a new set of fuel demands
on price.  Consequently, the "optimal" solution
often suggests a large-scale conversion from coal to
gas or oil, neither of which are in great supply.
Even if the fuels were available, the new set of de-
mands could very easily affect the prices to the
point where the "optimal" solution is no longer
optimal. This problem is being examined at Wayne
State University by coupling an energy supply-demand
model with an air pollution dispersion model similar
to the AQDM.   Basically, the energy model assumes
that the quantity of a particular fuel available is
a function of price up to the ultimate reserves of
the mine or well.  The model has a primary energy
system that simulates the extraction of basic fuel
resources and the conversion of these basic fuels to
coal, oil, or natural gas.  For example, coal should
be converted to natural gas in the primary energy
model. A secondary energy model then simulates the
conversion of the basic fuels to other energy forms
such as electricity and/or the transportation to the
final end user, where the final energy conversion
takes place.  The air pollution diffusion model is
used to predict the impact of all energy conversions,
hence pollutant generation, on the local environment.
The overall model is structured to operate in simu-
lation or optimization mode. When the system operates
in optimization mode, the air pollution model attempts
to find the combination of manufacturing levels and
energy end uses that will satisfy a set of pollution
constraints, such as the NAAQS, while maximizing the
net benefit to the region from energy utilization.
The net benefit is defined as production profits
minus the total cost of energy to the region, in-
cluding industrial uses.  Total cost of energy is
determined by adding the cost of the basic fuels at
various levels of extraction and all later conversion
and transportation costs.  The entire model is cur-
rently being tested on a small scale with a hypo-
thetical community of 40,000 people.  In the future,
we hope to extend the model to a major metropolitan
area such as Detroit.
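As a rough illustration of the optimization mode described above (and not the Wayne State formulation itself), the following sketch poses a toy fuel-allocation problem as a small linear program; every number in it, including the single-receptor transfer coefficient, is hypothetical:

```python
# Illustrative sketch only: a toy linear-programming version of the
# "optimization mode" described in the text.  All numbers (prices, emission
# factors, dispersion transfer coefficient, benefit rates, supplies) are
# hypothetical and are not taken from the Wayne State model.
import numpy as np
from scipy.optimize import linprog

fuels = ["coal", "oil", "gas"]
benefit = np.array([40.0, 40.0, 40.0])   # $ of production per unit energy used
price = np.array([10.0, 25.0, 30.0])     # $ per unit energy delivered
emis = np.array([8.0, 3.0, 0.5])         # tons SO2 per unit energy burned
chi = 0.15                                # ug/m3 at the receptor per ton emitted (hypothetical)
naaqs = 80.0                              # annual SO2 standard, ug/m3
demand = 100.0                            # total energy the region must supply
supply = np.array([120.0, 40.0, 30.0])    # availability of each fuel

# Maximize net benefit = sum((benefit - price) * x)  ->  minimize the negative.
c = -(benefit - price)
A_ub = [chi * emis]                       # air quality constraint: chi * emis . x <= naaqs
b_ub = [naaqs]
A_eq = [np.ones(3)]                       # meet the energy demand exactly
b_eq = [demand]
bounds = [(0.0, s) for s in supply]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
for name, x in zip(fuels, res.x):
    print(f"{name}: {x:.1f} energy units")
print(f"net benefit: {-res.fun:.1f}, receptor SO2: {chi * np.dot(emis, res.x):.1f} ug/m3")
```

With these hypothetical numbers the concentration constraint binds, so the cheapest fuel (coal) is used only up to the point allowed by the standard and the remainder of the demand shifts to cleaner fuels.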

One of the problems encountered in any project of
this scope is handling the large quantities of input
and output data.  The AQDM and CDM models are quite
satisfactory for experienced users who are dealing
with a specialized air pollution problem.  However,
when working on a more generalized problem where air
pollution is one of many constraints, such as the
energy-environment dilemma, a data management system
is needed to integrate those models with other sim-
ulation programs and the many sources of input data.
In addition,  graphic display of the simulated output
data is very useful when examining regional patterns.
In response to these needs, we have developed an Air
Quality Information System (AQIS) that integrates
the CDM, AQDM, and other related simulation models
developed at Wayne State University with the U.S.
Census, Census of Manufacturers, Local Climatological
Data, and health data for the Detroit area.  The
system also includes a capability that enables the
user to generate, among others, contour maps of air
quality.  The output from the mapping program can
then be examined live on a graphic display terminal
or routed to a plotter for hard copy.
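A minimal sketch of the kind of contour display the mapping capability provides, assuming gridded annual concentrations are already available as an array; the grid, values, and file handling of the actual AQIS are not represented:

```python
# Illustrative sketch only: contouring a hypothetical gridded annual-average
# concentration field, in the spirit of the AQIS mapping output described above.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 40 x 40 km grid of simulated TSP concentrations (ug/m3).
x = np.linspace(0.0, 40.0, 41)            # km east of an arbitrary origin
y = np.linspace(0.0, 40.0, 41)            # km north
X, Y = np.meshgrid(x, y)
conc = 60.0 + 55.0 * np.exp(-((X - 12.0) ** 2 + (Y - 20.0) ** 2) / 50.0)

fig, ax = plt.subplots()
cs = ax.contour(X, Y, conc, levels=[60, 75, 90, 105], colors="black")
ax.clabel(cs, fmt="%d")                    # label isopleths in ug/m3
ax.set_xlabel("km east")
ax.set_ylabel("km north")
ax.set_title("Simulated annual TSP isopleths (hypothetical data)")
fig.savefig("tsp_isopleths.png")           # hard copy; or plt.show() for a display terminal
```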

EDUCATIONAL APPLICATIONS

ENVIRO-ED is a computer-assisted instruction package
that deals with the involvement of the individual in
problems of energy consumption, resource depletion,
and environmental pollution (3).  Part I of ENVIRO-
ED introduces the user to air pollution, water pol-
lution, and solid waste disposal problems and is
usually the first program used. Once the individual
becomes familiar with the problems, he/she is asked
to clean up the air over Detroit by imposing various
controls on emissions, allocating fuels, or limiting
certain industrial and residential activities.  Once
a control strategy is identified, a simulation of air
quality in the region and the control strategy costs
is done with the AQIS.  If the NAAQS are not met,  the
student is required to continue adding constraints
until the standards are met. The simulation data can
be displayed live on graphic display terminals such
as the TEKTRONIX, but few of the users to date have
had that type of hardware.
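The exercise the student works through can be summarized as a simple loop; the function names below are hypothetical placeholders for AQIS operations, not actual ENVIRO-ED routines:

```python
# Illustrative sketch only: the iterative control-strategy exercise described
# above.  simulate_air_quality, meets_naaqs, and choose_tighter_constraint are
# hypothetical placeholders, not actual ENVIRO-ED or AQIS routines.
def design_strategy(strategy, simulate_air_quality, meets_naaqs, choose_tighter_constraint):
    """Keep adding emission, fuel, or activity constraints until the NAAQS are met."""
    while True:
        concentrations, cost = simulate_air_quality(strategy)
        if meets_naaqs(concentrations):
            return strategy, cost
        strategy = choose_tighter_constraint(strategy, concentrations)
```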

ENVIRO-ED has been used by the College of Engineering
at Wayne State University with excellent success and
the College of Lifelong Learning is experimenting
with the programs in its new General Studies degree
program.

OTHER APPLICATIONS

The Ethnic Studies Division of the Center for Urban
Studies, Wayne State University, is researching the
dynamics of ethnic neighborhoods in the Detroit area
to identify factors causing changes in these com-
munities and to attempt to predict future patterns.
Since many of these communities originally formed
near major manufacturing facilities, not too surpris-
ingly, air pollution has always been identified as a
major problem by the residents.  In order to quantify
the situation, the Ethnic Studies Division has used
the AQIS to simulate particulate and sulfur dioxide
concentrations in these ethnic neighborhoods.  When
these data are coupled with the 1970 U.S. Census data,
they provide a cross-tabulation of pollutant concentra-
tions with various population characteristics of
Detroit, such as ethnic group, age, and income dis-
tributions.
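A minimal sketch of such a cross-tabulation, assuming the simulated tract-level concentrations and census characteristics have already been merged into one table; the column names and values are hypothetical:

```python
# Illustrative sketch only: cross-tabulating simulated concentrations against
# census characteristics.  The column names and data are hypothetical.
import pandas as pd

tracts = pd.DataFrame({
    "ethnic_group": ["Polish", "Polish", "Black", "Black", "Mexican", "Mexican"],
    "median_income": [9500, 11200, 8200, 7600, 8900, 10100],
    "tsp_ugm3": [88.0, 72.0, 95.0, 103.0, 91.0, 69.0],   # simulated annual TSP
})

# Bin concentrations, then count tracts per ethnic group in each exposure class.
tracts["tsp_class"] = pd.cut(tracts["tsp_ugm3"], bins=[0, 75, 90, 150],
                             labels=["<=75", "75-90", ">90"])
table = pd.crosstab(tracts["ethnic_group"], tracts["tsp_class"])
print(table)
```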

FUTURE WORK

In the future, we plan to continue evaluating the
effect of certain input data and program sub-models
on the estimated concentrations of sulfur dioxide and
particulate matter.  We will attempt to continue
updating and improving our emissions inventory with
emphasis on area sources since they had a greater
impact than point source emissions on selected re-
ceptor sites.  Size distributions of suspended part-
iculate and the half-life of sulfur dioxide in the
Detroit area will also be investigated, which should
improve the accuracy of the simulations.  In addi-
tion, our efforts to evaluate alternative control
strategies will receive special emphasis in case the
need for large-scale conversion to coal becomes
necessary in the region.  The AQDM and CDM models
have been found quite satisfactory for long-term sim-
ulations of sulfur dioxide and particulate. However,
similar success has not been enjoyed in our efforts
to model these same pollutants on a short-term basis.
Consequently, we are currently developing a real-time
simulation model for sulfur dioxide and particulate
                                                     311

-------
matter and will continue this work in the future.

               REFERENCES

1.  Atkinson, S.E. and Lewis, D.H., A Cost Evaluation of Alternative Air Quality Control Strategies,
    U.S. Environmental Protection Agency, 1974.

2.  "Compilation of Air Pollutant Emission Factors,"
    (AP-42),  U. S. Environmental Protection Agency,
    Research Triangle Park, North Carolina, 1975.

3.  Anderson, J.A., Harlow, C.D., and Bartalucci, S.,
    "ENVIRO-ED, A CAI Experiment in Developing Public
    Awareness,"  Energy, Ecology and Society, Mich-
    igan Academy of Science, Arts and Letters, 1974.

4.  Pasquill, F.,  "The Estimation of the Dispersion
    of Windborne Material,"  Meteorological Magazine,
    Vol.  90,  1961.

5.  Busse, A.D. and Zimmerman, J. R., User's Guide
    for the Climatological Dispersion Model, U. S.
    Environmental Protection Agency,  1973.
                                                      312

-------
                                     SENSITIVITY TESTS WITH A PARAMETERIZED
                             MIXED-LAYER MODEL SUITABLE FOR AIR QUALITY SIMULATIONS

                                                  Daniel Keyser
                                                Richard A. Anthes

                                        The Pennsylvania State University
                                          University Park, Pennsylvania
     Several modifications to the one-layer mesoscale
numerical model which Lavoie developed and applied to
Great Lake snowstorms are formulated and tested.  The
model atmosphere consists of a parameterized constant-
flux layer of fixed depth, a well-mixed layer capped by
an inversion, and a deep layer of stable air overlying
the mixed layer.  Time-dependent calculations of the
horizontal components of the wind velocity, potential
temperature and the height of the base of the inversion
are performed over a mesoscale grid.  Since the mixed-
layer assumption eliminates the dependence of the prog-
nostic variables on height, the low-level mean flow can
be predicted  far more cheaply than with multi-layer
models.

     The major refinements introduced in this paper lie
in the parameterization of the effects of the stable
layer on the  mixed-layer, the entrainment of mass, heat
and momentum into the mixed-layer by subgrid-scale
eddies, and the erosion of the inversion by heating.
The sensitivity of the model solutions to the initial
inversion height and strength, the stability of the
upper layer,  the vertical shear of the geostrophic
wind, and the height of the undisturbed level in the
overlying stable layer is investigated.  These tests
are performed for an east-west cross-section for mod-
erate flow over complex terrain.

                  The Mixed-Layer Model

     Multi-level primitive equation models can simulate
complex, mesoscale flow patterns realistically.  How-
ever, high-resolution, multi-level models require such
large amounts of computer storage and time that they
currently are not practical tools for everyday opera-
tional use.  A simpler numerical model which requires
far less computing power, but at the same time can
duplicate the main results of a complicated model,
would be a desirable alternative.

     Since meteorologists universally recognize the im-
portance of terrain patterns in "tuning" the local
weather conditions, it is necessary to design a meso-
scale model which can resolve topographic detail.  Var-
iations in surface heating and roughness also cause or
"force" mesoscale circulations.  Since these three
effects act at the surface, one intuitively might ex-
pect them to be most important in the planetary bound-
ary layer (PBL).

     Under conditions of moderate flow or strong sur-
face heating, the PBL can be considered well-mixed; the
turbulent eddies distribute heat, moisture, and momen-
tum uniformly in the vertical.  Under such conditions,
one may treat the PBL as a single layer and consider
only horizontal variations in the flow.  Lavoie devel-
oped a prototype mixed-layer model and applied it to
mesoscale studies of Great Lake snowstorms and convec-
tive precipitation over Hawaii.3,4  We contend that
this type of model may be relevant  in modeling  low-level
flow in more general situations.

     Because the model explicitly predicts  the  atmos-
phere's behavior for the mixed-layer only,  the  behavior
of the rest of the atmosphere must  be parameterized.
In the remainder of this section, we present  the model
and discuss its assumptions and parameterizations.

Structure of the Model Atmosphere

     The lowest layer of the model  consists of  a thin
(50m) surface layer which contains  most of  the  wind
shear and a super-adiabatic lapse rate, and follows  the
variable terrain (Fig. 1).  The elevation of the terrain
is denoted by Z_G; the height of the top of the surface
layer is denoted by Z_s.  The main layer extends from Z_s
to the base, h, of a stable upper layer.  This second
layer is assumed to be well-mixed so that potential
temperature, θ, and the horizontal wind velocity, V, are
approximately constant in the vertical.  The upper
stable layer is marked by a zero- or first-order discontinuity in θ at its base and contains a lapse rate that
is vertically constant.  The winds in the stable layer
are assumed to be geostrophic and may include a constant
shear in the vertical.
Figure 1.  Hypothetical cross-section of mixed-layer model.
                                                       313

-------
     The height, h, of the mixed-layer is a material
surface with the exception that subgrid-scale eddies
may entrain mass, heat and momentum from the stable
layer above.  The entrainment depends on the surface
fluxes of heat and momentum.  The height, H, is the
level at which the mesoscale perturbation in the poten-
tial temperature structure induced by heating or the
terrain pattern is assumed to vanish.

     Fig. 1 shows a cross-section view of possible variations of the potential temperature structure permitted
in the model.  The θ_1 isentrope intersects h as a first-order discontinuity in potential temperature, while the
θ_2 isentrope intersects h as a zero-order discontinuity.
The vertical isentropes between Z_s and h are characteristic of a well-mixed layer.  The θ-pattern between h
and H indicates the response of a stable layer to a
mesoscale perturbation in the PBL.

Model Equations

     The derivation of the system of equations describing the physical behavior of a mixed-layer is not presented here.  Instead, we present the final system of equations and discuss some of its basic properties.  In order to simplify the discussion further, we only consider variations in the west to east direction.  Since the vertical variations in the mixed-layer are suppressed, only horizontal variations are permitted and the model is effectively one-dimensional.  The one-dimensional version retains the essential physics, but simplifies the interpretation of the results of the sensitivity tests that follow later.

     Equations of motion.  The partial differential equations governing the evolution of the west-east, (u), and north-south, (v), components of the horizontal wind velocity in the mixed-layer are as follows:

$$ \frac{\partial u}{\partial t} = -u\frac{\partial u}{\partial x} + f\left[v - v_g(H)\right] + \frac{g(H-h)}{\theta}\left(\frac{\partial \theta_h}{\partial x} - \gamma\frac{\partial h}{\partial x}\right) - \frac{g(\theta_h - \theta)}{\theta}\frac{\partial h}{\partial x} + \frac{g(h - Z_s)}{2\theta}\frac{\partial \theta}{\partial x} - \frac{C_D|\vec{V}|\,u}{h - Z_s} + K_M\frac{\partial^2 u}{\partial x^2} + \left(\frac{\partial u}{\partial t}\right)_{entrainment} \qquad (1) $$

$$ \frac{\partial v}{\partial t} = -u\frac{\partial v}{\partial x} - f\left[u - u_g(H)\right] - \frac{C_D|\vec{V}|\,v}{h - Z_s} + K_M\frac{\partial^2 v}{\partial x^2} + \left(\frac{\partial v}{\partial t}\right)_{entrainment} \qquad (2) $$

     In (1) and (2), x is distance and t is time.  The Coriolis parameter, f, is 10⁻⁴ s⁻¹ and is constant in the x-direction; g is gravity, u_g(H) and v_g(H) are the west-east and north-south components of the geostrophic wind at the height, H; θ denotes the potential temperature in the mixed-layer; θ_h is the potential temperature in the stable layer at h, the height of the top of the mixed-layer; γ is the lapse rate of θ in the stable layer; C_D is the bulk-aerodynamic drag coefficient, and K_M is the coefficient of eddy-viscosity for horizontal momentum.

     The first term on the right side of (1) is the advection of u.  The second term represents the linear acceleration associated with the departure of the mixed-layer wind from the large-scale, geostrophic wind at H.  The third term represents contributions to the pressure-gradient force from baroclinity and the static stability, γ, in the stable layer.  The fourth term represents a restoring force associated with horizontal deformations in the height of the mixed layer.  The fifth term represents the modification to the pressure gradient force due to the baroclinity of the PBL.  The last three terms represent the effects of surface friction, horizontal mixing, and entrainment of u-momentum across h.  The parameterized contribution of entrainment to the u-tendency is denoted by (∂u/∂t)_entrainment.

     The v-equation is simpler because in the one-dimensional case north-south gradients in v, θ, θ_h, and h vanish.  The mesoscale pressure gradient terms do not appear.

     Thermodynamic equation.  The thermodynamic equation governs the local time rate of change of θ in the boundary layer,

$$ \frac{\partial \theta}{\partial t} = -u\frac{\partial \theta}{\partial x} + \frac{F_H(Z_s)}{\rho_s c_p (h - Z_s)} + K_H\frac{\partial^2 \theta}{\partial x^2} + \left(\frac{\partial \theta}{\partial t}\right)_{entrainment} \qquad (3) $$

In (3) F_H(Z_s) is the flux of sensible heat by eddy motions into the mixed-layer through Z_s; ρ_s is the air density at Z_s, c_p is the specific heat for dry air at constant pressure, K_H is the eddy coefficient for the horizontal transport of heat, and (∂θ/∂t)_entrainment represents the effect of entrainment across h.  The heat flux term must be non-negative; otherwise, the mixed-layer assumption might be violated.

     Mixed-layer height equation.  The prognostic equation controlling the development of h is

$$ \frac{\partial h}{\partial t} = -u\frac{\partial h}{\partial x} + W(h) + S \qquad (4) $$

where W(h) is the vertical velocity at the height h, and S is the time rate of change of h due to entrainment of mass from the stable layer into the mixed layer.

     Diagnostic equations and parameterizations.  The vertical velocity is determined by neglecting local density variations and integrating the continuity equation from Z_s to h,

$$ W(h) = W(Z_s) - (h - Z_s)\frac{\partial u}{\partial x} \qquad (5) $$

where W(Z_s) is the vertical velocity at Z_s.  The first term represents the effect of sloping terrain,

$$ W(Z_s) = u\,\frac{\partial Z_G}{\partial x} . \qquad (6) $$

     The entrainment term, S, follows a parameterization introduced by Tennekes:7

$$ S = \left[\frac{0.1\,F_H(Z_s)}{\rho_s c_p} + \frac{2.5\,u_*^3\,\theta_0}{g\,(h - Z_s)}\right]\bigg/\,\Delta\theta_h \qquad (7) $$

where θ_0 is the potential temperature at the earth's surface and the friction velocity, u_*, is the downward flux of momentum through Z_s.  For a neutral boundary layer u_* is given by

$$ u_* = C_D^{1/2}\,|\vec{V}| . \qquad (8) $$
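To make the roles of equations (4)-(6) concrete, the following is a minimal one-dimensional sketch of the mixed-layer-height tendency on a uniform grid with a simple forward step; the actual model uses a staggered, stretched grid with leapfrog differencing, and here the entrainment rate S is supplied rather than computed from (7):

```python
# Illustrative sketch only: the mixed-layer height tendency of Eqs. (4)-(6),
#   dh/dt = -u dh/dx + W(h) + S,   W(h) = W(Zs) - (h - Zs) du/dx,
#   W(Zs) = u dZG/dx,
# on a uniform grid with centered differences and a forward time step.
# The real model uses a staggered, stretched grid and leapfrog differencing.
import numpy as np

def step_mixed_layer_height(h, u, z_g, z_s, S, dx, dt):
    """Advance h by one forward step; h, u, z_g, z_s are 1-D arrays."""
    dhdx = np.gradient(h, dx)          # centered interior differences
    dudx = np.gradient(u, dx)
    dzgdx = np.gradient(z_g, dx)
    w_zs = u * dzgdx                   # Eq. (6): terrain-induced vertical motion
    w_h = w_zs - (h - z_s) * dudx      # Eq. (5): continuity integrated over the layer
    dhdt = -u * dhdx + w_h + S         # Eq. (4)
    return h + dt * dhdt

# Hypothetical example: 10 m/s flow over a 500 m bell-shaped ridge.
x = np.arange(0.0, 400.0e3, 20.0e3)
z_g = 500.0 * np.exp(-((x - 200.0e3) / 60.0e3) ** 2)
z_s = z_g + 50.0                       # 50 m surface layer following the terrain
h = np.full_like(x, 1400.0)
u = np.full_like(x, 10.0)
h_new = step_mixed_layer_height(h, u, z_g, z_s, S=0.0, dx=20.0e3, dt=240.0)
```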
                                                       314

-------
Following Deardorff,2 the inversion strength, Δθ_h, in (7) may be defined by

$$ \Delta\theta_h = \mathrm{maximum}\left[\,\theta_h - \theta ,\; 0.09\,(h - Z_s)\,\gamma\,\right] . \qquad (9) $$

Equation (7) states that entrainment of mass depends on the surface heat and momentum fluxes.  The entire term is modulated by Δθ_h, which represents the resistance of the overlying stable layer to penetration by turbulent eddies.

     The height of the undisturbed level, H, is arbitrarily assumed to be proportional to the depth of the perturbation induced in the h field,

$$ H = h_{max} + a\,(h_{max} - \bar{h}) \qquad (10) $$

where h_max and h̄ are the maximum and average values of h on the domain at a given time and a is a constant of order one.

     The potential temperature lapse rate or static stability, γ, in the stable layer is given by

$$ \gamma = \frac{\theta_H - \theta_h}{H - h} , \qquad (11) $$

where θ_H is determined by

$$ \theta_H = \theta_h^i + \gamma^i\,(H - h^i) . \qquad (12) $$

In (12), the superscript, i, indicates the initial value of the variable.

     The geostrophic wind at H is determined by assuming the following profile:

$$ \vec{V}_g(H) = \vec{V}_g(h^i) + \frac{\partial \vec{V}_g}{\partial z}\,(H - h^i) . \qquad (13) $$

The parameters V_g(h^i) and ∂V_g/∂z must be specified since they are external large-scale variables.

     The effects of entrainment on the momentum and potential temperature in the PBL are calculated with the assumption of conservation of mass and enthalpy.  If Δh = S Δt is the change in the inversion height due to entrainment over a time step, Δt, then θ_h, θ, u, and v are calculated from

$$ \theta_h(t + \Delta t) = \theta_h(t) + \gamma\,S\,\Delta t \qquad (14) $$

$$ \phi(t + \Delta t) = \frac{(h^* - Z_s)\,\phi^* + 2\,\Delta h\,\phi_h}{h^* + 2\,\Delta h - Z_s} \qquad (15) $$

where φ is u, v, or θ, and an asterisk indicates the value of a variable before the effects of entrainment are considered.  The factor of 2 before Δh in (15) is a consequence of using centered time-differencing.

Numerical Procedure

     Time and space derivatives in the preceding equations are approximated by centered-in-time and centered-in-space finite differences.  The system is integrated forward from a specified set of initial conditions over a staggered grid that is "stretched" at the boundaries.  The grid increment, Δx, increases near the boundaries in order to minimize their effect on the solutions in the interior of the domain.  The variables θ, θ_h, and h are fixed at the boundaries; the velocity components are time-dependent and are determined by their values in the interior of the domain in a manner prescribed by Anthes, et al.1  The solutions are smoothed in space and time once each hour during the integration to suppress non-linear instability.

                  Experimental Results

     The following experiments indicate the sensitivity of the model solutions to changes in input parameters.  Such information is useful if the parameter cannot be specified accurately by atmospheric measurements, or if the applicability of the parameterization is questionable.

Airflow over a Mountain

     Before discussing the results of the sensitivity tests, we present a model simulation of moderate airflow over complex terrain.  This case demonstrates some of the physical capabilities of the model and serves as a control experiment for the following sensitivity tests.

     The experiment contains the following specifications:

     Δx_min = 20 km
     Terrain profile:  smoothed west-east Appalachian profile at approximately 38°N latitude
     C_D = 1.5 x 10⁻³ over water; 7.0 x 10⁻³ over land
     u_g = 10 m s⁻¹, v_g = 0 m s⁻¹
     γ^i = 5°K km⁻¹
     h^i = Z_Gmax + 500 m = 1426.59 m
     θ^i = 290°K; θ_h^i = 293°K
     ρ_s = 1.21 kg m⁻³; 1.03 kg m⁻³
     No surface heating
     H:  a = 1 in (10)
     K_M = K_H = [5 x 10⁴ + 0.08 (Δx)² |∂V/∂x|] (Δx/Δx_min)²   [m² s⁻¹]

     The superscript, i, indicates the initial value of a variable; the subscripts max and min denote maximum and minimum values, respectively.  The formulation for K_M and K_H increases the damping near the boundaries where Δx is larger than in the interior of the domain.

     The initial u and v are calculated assuming a steady-state balance between frictional, Coriolis, and pressure-gradient forces.  This procedure helps minimize the initial shock of starting the model.1  For this set of specifications:

     u^i = 8.61 m s⁻¹, v^i = 3.38 m s⁻¹ over land;
     u^i = 9.92 m s⁻¹, v^i = 0.83 m s⁻¹ over water.

     The model is integrated for 12 h using a 240 s time step.  The solutions contain transient oscillations over the integration period; these oscillations result from initial imbalances that the terrain produces in the flow.  Additional experiments indicate that about 18 h of integration time are needed to attain a steady state.  This adjustment time corresponds closely with the inertial period of 17.5 h associated with f = 10⁻⁴ s⁻¹.  Inertial-gravity waves apparently effect the adjustment process towards a steady state; after one cycle the model variables are in balance.
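A generic sketch of the centered-in-time, centered-in-space integration with periodic smoothing described above; it assumes a uniform grid, fixed boundary values, and a simple 1-2-1 smoother applied once per simulated hour, and is not the actual staggered, stretched-grid code:

```python
# Illustrative sketch only: leapfrog (centered-in-time, centered-in-space)
# integration of a generic prognostic variable q with a 1-2-1 spatial smoother
# applied once each simulated hour to suppress non-linear instability.
# Uniform grid and fixed boundary values; the actual model uses a staggered,
# stretched grid and the boundary treatment of Anthes et al.
import numpy as np

def tendency(q, u, dx):
    """Simple advective tendency -u dq/dx with centered differences."""
    return -u * np.gradient(q, dx)

def integrate(q0, u, dx, dt, hours):
    q_old = q0.copy()
    q_now = q0 + dt * tendency(q0, u, dx)                   # forward start
    steps_per_hour = int(round(3600.0 / dt))
    for n in range(1, int(hours * 3600.0 / dt)):
        q_new = q_old + 2.0 * dt * tendency(q_now, u, dx)   # leapfrog step
        q_new[0], q_new[-1] = q0[0], q0[-1]                  # fixed boundaries
        if n % steps_per_hour == 0:                          # hourly 1-2-1 smoothing
            q_new[1:-1] = 0.25 * q_new[:-2] + 0.5 * q_new[1:-1] + 0.25 * q_new[2:]
        q_old, q_now = q_now, q_new
    return q_now
```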
                                                       315

-------
     We desire a quasi-steady  state for the sensitivity
testing because we would like to investigate the model's
physical response to the fixed terrain profile with its
associated surface drag, and the  synoptic-scale pressure
gradient force.  Although  the  final steady-state is not
reached until after about  18 h, the solutions at 12 h
are quite similar to the ultimate steady-state solu-
tions.  For example, Fig.  2 depicts the 12 h and steady-
state solutions of u, W(h) and h.   The steady solution
is the time average between 18 h  and 24 h.  Because of
the qualitative agreement  between the 12 h and steady-
state solutions, the results from the sensitivity tests
will be compared at 12 h.
At 12 h the h field roughly parallels the terrain profile, and the perturbation in W(h) has helped produce a perturbation at the coast.

                                       Sensitivity Tests

                                            The fields of mixed-layer height, h, at 12 h  will
                                       be compared since they represent an integrated response
                                       to the velocity components  and entrainment.  Fig.  3 con-
                                       tains the results of the tests.
Figure 2.  Control-experiment solutions of u, W(h), and h: 12 h (solid) and 18-24 h average (dashed).

Figure 3.  Mixed-layer height, h, at 12 h for the sensitivity tests in which h^i, θ_h^i - θ^i, γ^i, a, and ∂V_g/∂z are varied.
     Variation of h^i.  Two experiments were run; one with h^i - Z_Gmax = 250 m, the other with h^i - Z_Gmax = 1000 m.  The h curves are plotted so that their endpoints coincide despite differences in the numerical values.  This presentation demonstrates their relative changes.  The greatest effect of changing the initial height of the inversion occurs at the mountain, and the least variation occurs over the water.  Entrainment is the factor causing the differences.  In these adiabatic experiments, it is inversely proportional to the PBL depth, and is stronger over land where the roughness is greater.

     Variation of initial inversion strength, θ_h^i - θ^i.
                                                       316

-------
Experiments were run for inversion strengths of 1.5°K and 6.0°K.  The case with the weaker initial inversion strength shows the greater growth in h at 12 h.  As in the preceding case, the changes are negligible over water where entrainment is weak.  The perturbation in h at the coast is more accentuated in the less stable case (θ_h^i - θ^i = 1.5°K).
     Variation of γ^i.  Experiments with γ^i = 2.5°K km⁻¹ and 10.0°K km⁻¹ are performed.  The results indicate that for weaker γ^i, stronger perturbations in h develop at the mountain.  Weaker perturbations develop when the upper layer is more stable.  Qualitatively, a large restoring force from the upper layer causes weaker upward motions and less growth in h.  Also, entrainment is weaker when γ^i is large.

     Variation of H.  We change H by varying the para-
meter a in (10) from 0.5 to 2.0.  At 12 h for a equal
to 0.5, H equals 2330 m; for a equal to 1.0, H equals
2510 m; and for a equal to 2.0, H equals 3100 m.  In-
creasing a by a factor of 4 increases H by about 1/3.
The h fields coincide very closely, so the variations
in H have little effect on the model dynamics through
the restoring force.  This case does not contain shear
of the geostrophic wind, so V_g(H) is independent of the
value of H.
     The one-dimensional mixed-layer model data can be
used as input for  a  cross-sectional air quality model
of the mixed layer.  We  have designed such an air quality
model that considers sources,  sinks and vertical diffu-
sion on an Eulerian  grid,  and treats the advective trans-
port with a particle-in-cell technique.  Fig. 4 depicts
the meteorological state predicted by the mixed-layer
model and the associated concentration pattern of a pas-
sive contaminant in  the  PEL.  This example demonstrates
the potential of combining mixed-layer model data with
a typical air quality  model.
     Vertical shear of u_g.  In our final test we introduce a vertical shear in the geostrophic wind; ∂u_g/∂z equal to 10 m s⁻¹ km⁻¹.  This value corresponds to a north-south potential temperature gradient of -2.96°K (100 km)⁻¹, a value characteristic of frontal zones.

      At 12 h the h pattern exhibits a maximum at the
 mountain  ridge which is 500 m higher than in the case
 without shear.   The  perturbation at the coast is ampli-
fied.  Including shear dramatically increases the kinetic energy in the model.  The geostrophic wind, u_g(H), is higher, causing stronger horizontal winds and vertical velocities which intensify the perturbation in the h-field.  In turn, H rises, increasing u_g(H).  At 12 h, H = 3096 m and u_g(H) = 26.7 m s⁻¹.  The model results are reasonable for this strong a pressure gradient; however, in physically realistic experiments shears would be weaker and there would be limits on H and u_g(H).

     To summarize, changing h^i, θ_h^i - θ^i, γ^i, and a causes relatively small differences in the model's behavior, while changing u_g(H) produces a significant
 effect.   The geostrophic wind represents synoptic-
 scale forcing  while  the other parameters represent
 mesoscale processes.  These model results indicate that
 in the absence of  surface heating, the mesoscale motions
 are most  sensitive to  variations in synoptic-scale pres-
 sure gradients under moderate to strong wind conditions.

        Application  to  Air Quality Simulations

      Because the one-dimensional model results are en-
 couraging, we  intend to conduct further tests with di-
 abatic heating as  an additional forcing term.  We then
 plan experiments with  a two-dimensional version of the
 model that will  utilize real data in order to determine
 how realistically  a  simple model can depict mean bound-
 ary-layer behavior.

      The  output  of u,  v, W, 6, and h can serve as input
 for a regional scale air quality model of a well-mixed
 boundary  layer,  where  to a first approximation the con-
 centration of  a  pollutant is vertically uniform.  In or-
der to maintain large horizontal gradients, particle-in-cell techniques can be used in simulating the advection of concentration patterns.  If sources and sinks can be modeled realistically, reasonable estimates of pollutant concentrations should be obtainable from such a model.5  We feel that mixed-layer models can generate the meteorological data such a model requires.

Figure 4.  Cross-section depicting 12 h mixed-layer model output and concentration of a passive contaminant derived from the air quality model.  Vertical arrow at x = 50 km represents center of 20 km-wide area source of strength 35 μg m⁻² s⁻¹.  Deposition is simulated east of the coast (x ≥ 250 km); the deposition velocity is 3 cm s⁻¹.  Arrows represent wind direction in the mixed-layer.  Dotted lines are isopleths of contaminant concentration in units of μg m⁻³.  Dashed lines are isentropes.  Potential temperature at 12 h in the mixed-layer varies from 290.55°K to 290.90°K.
                                                                            Acknowledgements

                                                               This research  was supported by the Environmental
                                                          Protection Agency through Contract 800-397-03.  Daniel
                                                          Keyser holds a  National Science Foundation Graduate
                                                          Fellowship in Meteorology.  Gail Maziarz capably typed
                                                          the  manuscript.
                        Sources

1.  Anthes, R. A., N. Seaman, J. Sobel, and T. T. Warner, 1974, "The Development of Mesoscale Models Suitable for Air Pollution Studies," Select Research Group in Air Pollution Meteorology, Second Annual Progress Report: Vol. 1, Environmental Protection Agency, Research Triangle Park, N.C. 27711.

2.  Deardorff, J. W., 1972, "Parameterization of the Planetary Boundary Layer for Use in General Circulation Models," Mon. Wea. Rev., 100, 2, 93-106.

3.  Lavoie, R. L., 1972, "A Mesoscale Numerical Model of Lake-Effect Storms," J. Atmos. Sci., 29, 6, 1025-1040.

4.  Lavoie, R. L., 1974, "A Numerical Model of Trade Wind Weather on Oahu," Mon. Wea. Rev., 102, 9, 630-637.

5.  Nordlund, G. G., 1975, "A Quasi-Lagrangian Cell Method for Calculating Long-Distance Transport of Airborne Pollutants," J. Appl. Meteor., 14, 6, 1095-1104.

6.  Queney, P., 1948, "The Problem of Air Flow over Mountains," Bull. Amer. Meteor. Soc., 29, 16-26.

7.  Tennekes, H., 1973, "A Model for the Dynamics of the Inversion Above a Convective Boundary Layer," J. Atmos. Sci., 30, 4, 558-567.
                                                        317

-------
               PREDICTION  OF CONCENTRATION PATTERNS IN  THE ATMOSPHERIC SURFACE LAYER
                       S. Hameed
     Laboratory for Planetary Atmospheres Research
                Department of Mechanics
             State University of New York
               Stony Brook, N. Y. 11794
                     S.  A.  Lebedeff
               Institute for Space Studies
                   NASA, 2880 Broadway
                  New York, N. Y. 10025
                       Abstract

     We present a study of turbulent diffusion in the
atmospheric surface layer under conditions of neutral
stability.  The two-dimensional semi-empirical diffu-
sion equation is solved using the integral method,
previously described by us [3-6].  Concentration dis-
tributions at the ground are obtained for area sources
and for line sources situated perpendicular to the mean
wind direction.  It is found that the concentration
distributions are represented by simple formulas for
downwind distances x » zo, where zo is the roughness
length.  Also, at such large distances, the equation
for the shape of the boundary of the polluted layer
obtained by the integral method is found to be the same
as given by Lagrangian Similarity Theory.  This equation yields the value λ = 0.65 for the ratio of the mean vertical velocity to the friction velocity in the neutral surface layer, which compares well with the value λ = 0.75 found by Kazanskii and Monin [7] from experiment.

                     Introduction
     Prediction of pollutant concentration patterns
requires solution of the diffusion equation in which
the mean wind velocities and eddy diffusion coeffi-
cients appear as input quantities.  In practice the
mean wind velocity as a function of height is obtained
from measurements while the diffusivity function is
estimated with the help of a boundary layer model.
Usually models of the boundary layer are based on
simplifying assumptions, such as existence of steady
state conditions and homogeneity in the horizontal
direction, which do not correspond realistically to the
urban problem where pollution dispersion is of the
greatest interest.  Such simplified models are, none-
theless, of interest in dispersion studies because,
hopefully, they lead to an understanding of the basic
processes involved.  Also, methods developed for study-
ing such simple models could be of use in the future
when more realistic models of the urban boundary layer
become available.  In this paper we will focus our
attention on perhaps the simplest dispersion problem,
that is, the prediction of concentration distributions
from area and line sources in the atmospheric surface
layer.  The surface layer is the lowest part of the
planetary boundary layer in which Reynolds stresses are found to be nearly constant with height.  According to Monin-Obukhov similarity theory the turbulent transport processes in the surface layer are completely characterized by only two parameters, namely, the friction velocity u_* = (τ₀/ρ)^(1/2), where τ₀ is the constant value of turbulent stress and ρ is the air density, and the length scale

$$ L = \frac{-u_*^3}{k\,(g/T_0)\,(q/c_p\rho)} , $$

where k is the von Karman constant, g is the gravitational acceleration, T₀ the air temperature, q the vertical heat flux and c_p is the specific heat capacity of air.  In neutral conditions q = 0 and |L| = ∞; in stable conditions q < 0 and L > 0, and when unstable conditions prevail q > 0 and L < 0.  In particular, the mean wind velocity u(z) and the eddy diffusivity K(z) can be represented in terms of a function φ(z/L):

$$ u(z) = \frac{u_*}{k}\,f(z) , \qquad f(z) = \int_{z_0}^{z} \phi\!\left(\frac{z'}{L}\right)\frac{dz'}{z'} , \qquad (1) $$

$$ K(z) = \frac{k\,u_*\,z}{\phi(z/L)} , \qquad (2) $$

where z₀ is the roughness length characterizing the ground roughness, which is assumed to be uniform in the x,y directions.  The determination of the function φ(z/L) has been the subject of several theoretical and experimental investigations [1]; in particular, Businger et al. [2] have found from observations that

$$ \phi = 1 + 4.7\,z/L , \qquad L > 0 , \qquad (3) $$

and

$$ \phi = \left(1 - 15\,z/L\right)^{-1/4} , \qquad L < 0 . \qquad (4) $$

It may be noted that these forms of φ reduce to φ = 1 in neutral conditions.  The strength, Q, of the area source at the ground is specified by the boundary condition

$$ K(z)\,\frac{\partial c}{\partial z} = -Q , \qquad z = z_0 . \qquad (5) $$

Also,

$$ c(x,z) = 0 , \quad x = 0 ; \qquad c(x,z) = 0 , \quad z = \infty . \qquad (6) $$

With u(z) and K(z) given by equations (1,2), the diffusion equation becomes

$$ \frac{1}{k^2}\,f(z)\,\frac{\partial c}{\partial x} = \frac{\partial}{\partial z}\left[\frac{z}{\phi}\,\frac{\partial c}{\partial z}\right] . \qquad (7) $$
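For concreteness, a small sketch of evaluating the profile and diffusivity relations (1)-(4) by direct numerical quadrature of f(z); the quadrature scheme and parameter values are illustrative only:

```python
# Illustrative sketch only: evaluating u(z) and K(z) from Eqs. (1)-(4) by
# simple numerical quadrature of f(z).  Parameter values are hypothetical.
import numpy as np

K_VON_KARMAN = 0.4

def phi(z, L):
    """Businger et al. forms, Eqs. (3)-(4); phi = 1 in the neutral limit."""
    z = np.asarray(z, dtype=float)
    if np.isinf(L):
        return np.ones_like(z)
    if L > 0:                                        # stable, Eq. (3)
        return 1.0 + 4.7 * z / L
    return (1.0 - 15.0 * z / L) ** (-0.25)           # unstable, Eq. (4)

def wind_and_diffusivity(z, u_star, z0, L=np.inf, n=2000):
    """Return u(z) and K(z) for an array of heights z (all > z0)."""
    z = np.asarray(z, dtype=float)
    u = np.empty_like(z)
    for i, zi in enumerate(z):
        zp = np.linspace(z0, zi, n)                  # quadrature nodes for f(z)
        u[i] = (u_star / K_VON_KARMAN) * np.trapz(phi(zp, L) / zp, zp)
    K = K_VON_KARMAN * u_star * z / phi(z, L)
    return u, K

# Example: neutral surface layer, u_* = 0.3 m/s, z0 = 1 cm.
heights = np.array([0.5, 2.0, 10.0, 50.0])
u, K = wind_and_diffusivity(heights, u_star=0.3, z0=0.01)
```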
This equation can, of course, be  solved  by  numerical
methods but we will apply an integral method to obtain
its solution.  We have studied  application  of the
integral method to the diffusion  equation in our
previous work [3, 4, 5, 6] and  we find that it is con-
siderably simpler than numerical  integration and also
gives accurate solutions for the  concentration distri-
bution at the surface.  In the  present problem we find
that the method becomes particularly simple and yields
interesting results for the surface layer under condi-
tions of neutral stability.  In the following, there-
fore, we will solve equation  (7)  for the neutral sur-
face layer; the treatment of stable and  unstable
                                                       318

-------
surface  layers is somewhat more involved and will be
presented elsewhere [6].

     For neutral stability L = ∞, φ(z/L) = 1 and f = log (z/z₀).  Hence equation (7) becomes

$$ \frac{1}{k^2}\,\log\!\left(\frac{z}{z_0}\right)\frac{\partial c}{\partial x} = \frac{\partial}{\partial z}\left(z\,\frac{\partial c}{\partial z}\right) \qquad (8) $$

and the boundary condition (5) reduces to:

$$ k\,u_*\,z\,\frac{\partial c}{\partial z} = -Q ; \qquad z = z_0 . \qquad (9) $$

Let us define dimensionless variables:

$$ \xi = \frac{k^2 x}{z_0} , \qquad \zeta = \frac{z}{z_0} , \qquad N(\xi,\zeta) = \frac{k\,u_*\,c}{Q} . \qquad (10) $$

Then equations (8,9) become:

$$ \log\zeta\;\frac{\partial N}{\partial \xi} = \frac{\partial}{\partial \zeta}\left(\zeta\,\frac{\partial N}{\partial \zeta}\right) \qquad (8a) $$

$$ \zeta\,\frac{\partial N}{\partial \zeta} = -1 ; \qquad \zeta = 1 . \qquad (9a) $$
In the integral method it is assumed that the distribution of the contaminant above the area source is limited to a finite depth ζ = δ(ξ) and the contaminant concentration and its flux vanish for larger values of ζ:

$$ N(\xi,\zeta) = 0 , \qquad \zeta\,\frac{\partial N}{\partial \zeta} = 0 , \qquad \zeta = \delta(\xi) . \qquad (11) $$

Thus δ(ξ) represents the top of the polluted layer.  We have found [5,6] that the integral method yields an accurate solution of a problem characterized by a linear diffusivity function, such as equation (8a), if we assume the solution to be of the form:

$$ N(\xi,\zeta) = n_0(\xi)\left(1 - \frac{\zeta}{\delta}\right)^{2}\log\!\left(\frac{\delta}{\zeta}\right) . \qquad (12) $$

Substitution of this expression in the boundary condition, equation (9a), gives:

$$ n_0(\xi) = \frac{\delta^2}{(\delta - 1)(\delta - 1 + 2\log\delta)} . \qquad (13) $$

Concentration at the surface z = z₀ is obtained by using equation (13) with equation (12) and taking ζ = 1:

$$ N_0(\xi,1) = \frac{(\delta - 1)\,\log\delta}{\delta - 1 + 2\log\delta} . \qquad (14) $$

     We now integrate the diffusion equation (8a) from ζ = 1 to ζ = δ to obtain:

$$ \frac{d}{d\xi}\int_{1}^{\delta}(\log\zeta)\;n_0(\xi)\left(1 - \frac{\zeta}{\delta}\right)^{2}\log\!\left(\frac{\delta}{\zeta}\right)d\zeta = 1 , \qquad (15) $$

where we have used conditions (9a) and (11) on the right hand side.  Integration over ξ then gives:

$$ \xi = \frac{\left(\dfrac{11}{18}\,\delta^3 + \delta^2 - \dfrac{\delta}{2} + \dfrac{1}{9}\right)\log\delta \;-\; \dfrac{85}{54}\,\delta^3 + 2\delta^2 - \dfrac{\delta}{2} + \dfrac{2}{27}}{(\delta - 1)(\delta - 1 + 2\log\delta)} . \qquad (16) $$

This is an algebraic relationship for determining δ(ξ), which together with equation (14) gives the concentration distribution at z = z₀.

     With reference to the definitions (10) we note that δ has been expressed in units of z₀.  Since z₀ is usually small (~1 cm) we have δ ≫ 1, except in a small region very close to the upwind edge of the area source.  Hence we may use the approximation

$$ \xi \approx \frac{11}{18}\,\delta\,\log\delta - \frac{85}{54}\,\delta , \qquad \delta \gg 1 , \qquad (17) $$

and replace equation (14) by:

$$ N_0(\xi,1) = \log\delta . \qquad (14a) $$

The increase in the depth of the pollutant cloud with distance according to equation (16) is compared with the approximation (17) in Fig. 1, and the corresponding increase in the surface concentration over an area source, N₀(ξ,1), according to equations (14) and (14a), is shown in Figure 2.  It may be noted that this solution is for a uniform area source which extends from x = 0 to x = ∞.  Concentration distributions for sources of finite extent or spatially varying emission strengths can be constructed from this basic solution by superposition, as shown in [3,6].  Furthermore, if M(ξ,ζ) is the concentration distribution due to an infinitely long line source situated at right angles to the mean wind direction, then, as explained in [6],

$$ M(\xi,\zeta) = \frac{\partial N}{\partial \xi} . \qquad (18) $$

We differentiate equation (14) and consider the case δ ≫ 1 and obtain:

$$ M_0(\xi,1) \approx \frac{1}{\delta}\,\frac{d\delta}{d\xi} = \frac{1}{\delta\left(\dfrac{11}{18}\log\delta - \dfrac{26}{27}\right)} . \qquad (19) $$

This result is shown as the dash-dot curve in Fig. (2).
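A short sketch of how the surface concentration can be evaluated from these relations: equation (16), as reconstructed above, is inverted numerically for δ(ξ) by bisection and substituted into equation (14); the dimensional values at the end are hypothetical:

```python
# Illustrative sketch only: invert Eq. (16) (as reconstructed above) for
# delta(xi) by bisection, then evaluate the surface concentration from Eq. (14).
import numpy as np

def xi_of_delta(d):
    """Right-hand side of Eq. (16)."""
    num = (11.0 / 18.0 * d**3 + d**2 - d / 2.0 + 1.0 / 9.0) * np.log(d) \
          - 85.0 / 54.0 * d**3 + 2.0 * d**2 - d / 2.0 + 2.0 / 27.0
    return num / ((d - 1.0) * (d - 1.0 + 2.0 * np.log(d)))

def delta_of_xi(xi, lo=1.0 + 1e-6, hi=1.0e6, tol=1.0e-8):
    """Solve xi_of_delta(delta) = xi by bisection (the relation is monotonic)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if xi_of_delta(mid) < xi:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def surface_N(xi):
    """Dimensionless surface concentration N0(xi, 1), Eq. (14)."""
    d = delta_of_xi(xi)
    return (d - 1.0) * np.log(d) / (d - 1.0 + 2.0 * np.log(d))

# Hypothetical dimensional example: Q = 50 ug m^-2 s^-1, u_* = 0.3 m/s,
# z0 = 1 cm, x = 1 km; xi = k^2 x / z0 and c = Q N / (k u_*), per Eq. (10).
k, u_star, z0, Q, x = 0.4, 0.3, 0.01, 50.0, 1000.0
xi = k**2 * x / z0
c_surface = Q * surface_N(xi) / (k * u_star)      # ug/m3
```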
                                                                      Shape of the Contaminated Layer

                                                               Calculation of the shape of the boundary of the
                                                          contaminated layer due to a source located on the
                                                          ground has been the subject of several investigations
                                                          on the basis of the Lagrangian Similarity Theory [1].
                                                          Also, Kazanskii and Monin [7] have experimentally
                                                          observed the dispersion of smoke from a maintained line
                                                          source in the surface layer in near-neutral conditions.
                                                          They have explained the observed shape by arguing that,
                                                          since the friction velocity u* is the only velocity
                                                          scale in the surface layer, the mean vertical velocity
                                                          of particles at the top of the contaminated layer
                                                          should be:

$$ \frac{dZ}{dt} = \lambda\,u_* \qquad (20) $$

where Z represents the top of the layer and λ is a constant.  Also, neglecting the effect of horizontal diffusion, the smoke particles may be taken to move along the horizontal direction with the mean velocity of the wind:

$$ \frac{dx}{dt} = u(Z) = \frac{u_*}{k}\,\log\!\left(\frac{Z}{z_0}\right) . \qquad (21) $$

Thus, combining equations (20,21) one obtains for the shape of the boundary:

$$ \frac{dZ}{dx} = \frac{\lambda\,k}{\log(Z/z_0)} . \qquad (22) $$

     In the integral method used in the previous section, the shape of the boundary is given by equation (16), which yields for δ ≫ 1:

$$ \frac{d\xi}{d\delta} = \frac{11}{18}\,\log\delta - \frac{26}{27} . \qquad (23) $$

With reference to the definitions (10), writing δ = Z/z₀, and neglecting the constant term asymptotically, equation (23) becomes:

$$ \frac{dZ}{dx} = \frac{18\,k^2}{11\,\log(Z/z_0)} , \qquad (24) $$

which on comparison with equation (22) shows that the shape of the pollutant layer obtained by the integral method is in essential agreement with that obtained by Lagrangian similarity arguments in the region suffi-
                                                       319

-------
Figure 1.  Variation of the depth δ with the downwind distance x/zo.  The error in equation (17) is 1.6 percent
at δ = 100.

Figure 2.  Variation of ground level concentration with the downwind distance x/zo.  Curves: Area Source, equation
(14); Area Source, equation (14a); Line Source, equation (19).  The error in equation (14a) is nearly 10 percent
at δ = 100.
                                                         320

-------
ciently removed from the upwind edge of the source.  If
the right hand sides of equations (22) and (24) are
equated we obtain:

                    λ = 18k/11 = 0.65                      (25)

which may be compared with the value λ = 0.75 obtained
by Kazanskii and Monin from estimates of u* and (dz/dt)
in their experiments.  Considering the necessarily
ambiguous definition of the top of the contaminated
layer, this agreement is encouraging because it shows
the essential validity of the integral method.
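
     As a purely illustrative aside (not part of the original analysis), equation (22) can be integrated
numerically to trace the boundary of the contaminated layer.  The short Python sketch below assumes k = 0.4,
an arbitrary roughness length and starting depth, and the value λ = 18k/11 from equation (25).

    # Sketch: integrate dZ/dX = lambda*k/log(Z/zo), equation (22),
    # to trace the top of the contaminated layer downwind.
    import math

    k = 0.4                          # von Karman constant
    lam = 18.0 * k / 11.0            # equation (25), approximately 0.65
    z0 = 0.01                        # roughness length, m (assumed)
    Z = 10.0 * z0                    # starting depth (assumed)
    steps, dX = 10000, 0.1           # simple Euler integration

    for _ in range(steps):
        Z += dX * lam * k / math.log(Z / z0)

    print("plume depth Z =", round(Z, 2), "m after", steps * dX, "m downwind")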

                      Conclusions

     We have analyzed the dispersion process in  the
atmospheric surface layer under steady state conditions.
By applying the integral method to the case  of neutral
stability we find that for a semi-infinite area  source
the concentration distribution at the surface  z  = zo  is
given by:
                ca(x,zo) = (S/ku*) log(Z/zo)

where Z is the depth of the contaminated layer given by
equation (16).

                       References

7.  Kazanskii, A. B. and A. S. Monin (1957), "The Form of
    Smoke Jets", Izvest. Akad. Nauk SSSR, Ser. Geofiz.,
    No. 8, 1020.  See also A. S. Monin (1959), "Smoke
    Propagation in the Surface Layer of the Atmosphere",
    Adv. Geophys. 6, 331.
-------
                              A TIME DEPENDENT, FINITE GAUSSIAN LINE SOURCE  MODEL
                    John C. Burr
       Environmental Evaluation Section Chief
                      Ohio EPA
                   Columbus, Ohio
                                                                               Robert  G.  Duffy
                                                                Water Quality  Surveillance  Section Chief
                                                                                 Ohio EPA
                                                                               Columbus, Ohio
Introduction

     Mathematical models of atmospheric diffusion range
between two extremes.  On the one hand there exist
models which depend upon the facility with which com-
puters are able to crunch numbers.  In some situations
they are the only means of obtaining a worthy answer.
However, cause and effect are difficult to trace.

     On the other hand are models which rely upon sim-
plifying assumptions to yield a closed-form solution.
These solutions represent a limited number of turbulent
conditions.  To extend their applicability, ad hoc
assumptions are often introduced to represent
additional conditions.

     The model presented here lies somewhere between
these extremes.  It expresses the concentration in
terms of the more readily observed physical parameters
and does so in a realistic and continuous manner.  The
dispersion sub-model further extends the model to
explicitly incorporate mechanical and thermal
turbulence.
The Field Equations
     The field equation for concentration c of a
conservative material at a point in a flow of uniform
velocity u is
     ∂c/∂t + u ∂c/∂x = ∂/∂x(K1 ∂c/∂x) + ∂/∂y(K2 ∂c/∂y) + ∂/∂z(K3 ∂c/∂z)     (1)

The speed u is parallel to the x axis, z is the vertical
axis, and the transfer coefficients in each direction
are represented by K1, K2, K3.  In general

                    K = K(x,y,z,t).

     Looking toward the dispersion sub-model, we adopt
a representation which satisfies Taylor's Hypothesis.1
That is, for times, t, which are small compared to the
Lagrangian time scale of the diffusion process, the rms
dispersion σ is given by σ = at, where a is the disper-
sion speed.  We then represent the transfer coefficients
by Kj = aj σj, where σj is the instantaneous scale of
dispersion.

     We now assume that a is invariant in space and
time, i.e., aj σj = aj²t, and solve equation (1) for the
boundary condition of an instantaneous point release at
the origin, c(x,y,z,0) = δ(x)δ(y)δ(z), where δ is the
Dirac delta function, whence

  c(x,y,z,t) = (2π)^(-3/2)(a1 a2 a3 t³)^(-1)
               exp{-[(x - ut)²/2a1²t² + y²/2a2²t² + z²/2a3²t²]}              (2)
This equation represents the concentration due to an
instantaneous release at t = 0, x = 0, y = 0, z = 0
for fixed dispersion speeds.  It may be integrated in
closed form for various conditions of interest.  Two of
                                                          these, the unsteady  plume,  and  the infinite,  unsteady
                                                          line source have been dealt with by Lissaman.2
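
                                                               For readers who wish to experiment with equation
                                                          (2), a minimal Python sketch is given below.  It evalu-
                                                          ates the instantaneous-release (puff) concentration for
                                                          a unit release; the wind speed, dispersion speeds, and
                                                          receptor location are illustrative assumptions, not
                                                          values from the paper.

    import math

    def puff_concentration(x, y, z, t, u, a1, a2, a3):
        """Equation (2): concentration of a unit instantaneous release
        at the origin at t = 0, for fixed dispersion speeds a1, a2, a3."""
        norm = (2.0 * math.pi) ** 1.5 * a1 * a2 * a3 * t ** 3
        arg = ((x - u * t) ** 2 / (2 * a1 ** 2 * t ** 2)
               + y ** 2 / (2 * a2 ** 2 * t ** 2)
               + z ** 2 / (2 * a3 ** 2 * t ** 2))
        return math.exp(-arg) / norm

    # Example with assumed values: u = 3 m/s, a1 = a2 = 0.5 m/s, a3 = 0.2 m/s.
    print(puff_concentration(x=300.0, y=10.0, z=2.0, t=100.0,
                             u=3.0, a1=0.5, a2=0.5, a3=0.2))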
                                                          The Unsteady Line Source

                                                               Consider a temporally varying line source
                                                          (Figure 1) of strength Q(t) and length L, oriented at
                                                          an angle φ to the x-axis, with u, as before, parallel
                                                          to the x-axis.  A receptor located at (x,y) has coor-
                                                          dinates (x - ξ cos φ, y - ξ sin φ) relative to an arbi-
                                                          trary point (ξ,φ) of the source.  At the point (x,y,z)
                                                          the concentration for a time period T = T2 - T1 may
                                                          be written

  C(x,y,z,T) = Q ∫ from T1 to T2  ∫ from 0 to L  c(x - ξ cos φ, y - ξ sin φ, z, t) dξ dt      (3)

                                                          for a constant emission rate, Q(T), over the period T.
                                                          After introduction of the explicit form of c(x,y,z,t)
                                                          from equation (2), equation (3) may be integrated to
                                                          yield

  C(d,ℓ,z,T) = Q[4(2π)^(1/2) ah av r]^(-1) exp{-z²u²sin²φ/(2ah²av²r²)}

       × [erf{2^(-1/2)[(d/ah²r)u sin φ - r/T2]} - erf{2^(-1/2)[(d/ah²r)u sin φ - r/T1]}]      (4)

       × [erf{2^(-1/2)ah^(-1)[u cos φ + (L-ℓ)/T]} - erf{2^(-1/2)ah^(-1)[u cos φ - ℓ/T]}]

                                                          where ah ≡ a1 = a2, av = a3, and x,y have been trans-
                                                          formed to the source coordinates:

                                                               d = x sin φ - y cos φ,    ℓ = x cos φ + y sin φ

                                                          with

                                                               r² = z²/av² + d²/ah²

                                                          and where

                                                               erf(X) = (2/√π) ∫ from 0 to X exp(-x²) dx.

                                                          Note that erf(-X) = -erf(X).  Substitution of L/a2 for
                                                          L/ah, u/a1 for u/ah, [(x/a1)sin φ - (y/a2)cos φ]
                                                          for d/ah, and [(x/a1)cos φ + (y/a2)sin φ] for ℓ/ah
                                                          removes the restriction that a1 = a2.
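
                                                               One practical way to check a transcription of the
                                                          closed form above (our suggestion, not the authors'
                                                          procedure) is to evaluate equation (3) directly by
                                                          numerical quadrature of the puff solution (2) over
                                                          source length and time.  A coarse sketch, with every
                                                          numerical input assumed:

    import math

    def puff(x, y, z, t, u, ah, av):
        # Unit-release puff, equation (2), with a1 = a2 = ah and a3 = av.
        norm = (2.0 * math.pi) ** 1.5 * ah * ah * av * t ** 3
        arg = ((x - u * t) ** 2 + y ** 2) / (2 * ah ** 2 * t ** 2) \
              + z ** 2 / (2 * av ** 2 * t ** 2)
        return math.exp(-arg) / norm

    def line_source(x, y, z, T1, T2, L, phi, Q, u, ah, av, n=200):
        """Equation (3): integrate Q * puff over source length (0..L)
        and over time (T1..T2) with a simple midpoint rule."""
        dxi, dt = L / n, (T2 - T1) / n
        total = 0.0
        for i in range(n):
            xi = (i + 0.5) * dxi
            xr = x - xi * math.cos(phi)     # receptor position relative
            yr = y - xi * math.sin(phi)     # to the source element
            for j in range(n):
                t = T1 + (j + 0.5) * dt
                total += puff(xr, yr, z, t, u, ah, av) * dxi * dt
        return Q * total

    # Assumed inputs: 500 m source, wind 2 m/s at 60 degrees to the source.
    print(line_source(x=200.0, y=150.0, z=1.5, T1=1.0, T2=3600.0,
                      L=500.0, phi=math.radians(60), Q=1.0,
                      u=2.0, ah=0.5, av=0.2))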
Characteristics

     The solution appears rather complicated.  However,
when one notes that certain combinations  of  variables
appear more than once, it doesn't  seem quite so
formidable.

     One is tempted to simplify the equation by taking
the limit as T1 approaches zero.  However, this produces
a discontinuity at ℓ = 0, L although it yields
solutions having the proper form when 0 < ℓ < L or for
ℓ outside this range.  The difficulty arises when one
attempts to evaluate terms such as lim{ℓ/T}
                                                        322

-------
as ℓ and T both approach zero,
because there is no "right" way to evaluate the limit.
For this reason we recommend retaining T1 finite,
though small.

     In order to gain some insight into this solution
it is helpful to examine how it represents various
limiting cases such as:  symmetric conditions;
parallel, perpendicular, and zero wind; infinite line
source (L, ℓ → ∞); and steady state (T2 → ∞) conditions.
For ease of examining these cases we'll work with the
form in which T1 → 0 and restrict the receptor to the
range 0 < ℓ < L:

  c(d,ℓ,z,T) = Q[4(2π)^(1/2) ah av r]^(-1) exp{-z²u²sin²φ/(2ah²av²r²)}

       × (2 + erf{2^(-1/2)[(d/ah²r)u sin φ - r/T]})                          (5)

       × [erf{2^(-1/2)ah^(-1)[u cos φ + (L-ℓ)/T]} - erf{2^(-1/2)ah^(-1)[u cos φ - ℓ/T]}]


Wind Vector

     A most satisfying  feature  of this model  is  that
the expression  retains  its significance as  the wind
vector approaches zero; that is, as u and/or φ approach
zero.  The terms u sin φ and u cos φ represent the
perpendicular and parallel components of the wind
respectively.  When the wind is perpendicular (φ = 90°)
to the line source the third term reduces to:

     erf{2^(-1/2)(L-ℓ)/(ah T)} + erf{2^(-1/2)ℓ/(ah T)}                       (6)

The concentration is symmetric about ℓ = L/2, as
may be seen by substituting ℓ = L/2 ± Δℓ.  It is a
maximum at the midpoint, and decreases toward either
end.

     If the source may be considered to have infinite
extent, i.e., ℓ/(2^(1/2) ah T) ≥ 4, this term reduces to a
value of 2.  Thus for an infinite source with perpen-
dicular wind, the concentration reduces to:

  c(d,z,T) = Q[2(2π)^(1/2) ah av r]^(-1) exp{-z²u²/(2ah²av²r²)}
             × (2 + erf{2^(-1/2)[(d/ah²r)u - r/T]})                          (7)

which agrees with the expression derived by Lissaman.2

     When the wind is parallel  to the source the second
term reduces to 2 - erf{2^(-1/2)r/T}.  This term represents
the process whereby the emitted "cloud" diffuses to a
receptor located at a distance  d from the source with
no advection occurring.

     The zero-wind speed solution is seen to be:

  c(d,ℓ,z,T) = Q[4(2π)^(1/2) ah av r]^(-1) [2 - erf{2^(-1/2)r/T}]
               × [erf{2^(-1/2)(L-ℓ)/(ah T)} + erf{2^(-1/2)ℓ/(ah T)}]         (8)


with  the entire,  foregoing  discussion  about symmetry,
diffusion,  and  advection  being  applicable.
     Subject to the initial restriction regarding
Lagrangian time scales, and a periodically steady
source, the model provides a realistic representation
of temporal effects.  A very important benefit of the
explicit representation of time in this model is the
elimination of the question of what sampling time is
represented by the model.
    Equation (5) yields a steady-state solution:

  C(d,ℓ,z) = Q[2(2π)^(1/2) ah av r]^(-1) exp{-z²u²sin²φ/(2ah²av²r²)}
             × [2 + erf{2^(-1/2)(d/ah²r)u sin φ}]                            (9)

The infinite source, steady-state solution (cf.
equation (6)), on the other hand, is:

  C(d,z) = Q[2(2π)^(1/2) ah av r]^(-1) exp{-z²u²sin²φ/(2ah²av²r²)}
           × [1 + erf{2^(-1/2)(d/ah²r)u sin φ}]                              (10)
     Both equation (9) and (10) are independent of
source extent which, in fact, they should be.  The
two different forms illustrate the problem of taking
simultaneous limits of time and geometry.

     The approach to a steady-state solution in the
special cases of perpendicular, parallel,  and zero
wind may be inferred from the previous discussion.
In all cases the concentration increases as the
steady-state is approached.
Dispersion Relations

     The dispersion speeds are evaluated from the
Monin-Obukhov3 similarity theory of turbulence in the
lower boundary layer of the earth.  This model states
that a steady self-similar two-dimensional  boundary
layer can be described by ground roughness, zo, tem-
perature, T, and the height invariant heat and momen-
tum fluxes.  These fluxes are characterized by u*, the
friction velocity, and H, the convective heat flux.
The Monin-Obukhov scale length is defined by

             L = -u*³ρCpT/kgH

where p is the density, Cp is the specific heat at
constant pressure, k is von Karmans constant, and g
is the gravitational constant.  Both turbulent and
mean speeds are expressible as u/u* = f(z/z0,z/L)
where the form of f is a function of the type of
velocity.  Lumley and Panofsky^ express the mean
horizontal speed as:
                                                                u = u*[ln(z/zo) - ψ(z/L)]/k                  (11)

                                                           where ψ, the non-adiabatic part of the profile, is
                                                           plotted in reference 4.

                                                                According to the theory,  the vertical  fluctuation
                                                           velocity, av, is given by an equation of the form
                                                           av = u*F(z/L).  Panofsky and McCormick5 postulated
                                                           that av should be a function of height, z,  the rate
                                                           of energy supply by mechanical  turbulence,

                                                                              ε1 = u*²∂u/∂z

                                                           and the rate of supply of convective energy

                                                                              ε2 = gH/ρCpT.
                                                       323

-------
They derive an expression for av which may be written
as
     av = 1.05(u*³S/k + 2.4 zgH/ρCpT)^(1/3)                  (12)

where S, the wind shear, is given by S = (kz/u*)∂u/∂z.

     The effect of the convective heat flux, H, on the
dispersion speed is much smaller than the effect of
the wind speed and, consequently it doesn't require
as great precision in its determination.  Measured
values of H may be used, or,  near the ground, it may
be estimated from temperature and insolation data.

     The wind shear, S, may be derived from measure-
ments of the variation of wind speed along the vertical
axis.  In the more common case where S is not known,
additional relationships are needed among S, L, and ψ
in order to fit the observed  wind speed by means of
equation (11) and subsequently to solve for av from
equation (12).
     We make use of Ellison's6 interpolation equation

                    S⁴ - 18(z/L)S³ = 1                       (13)

and, by taking ξ = z/L,

                    ψ(z/L) = ∫ from 0 to z/L of (1 - S) dξ/ξ (14)

By solving for ξ and dξ from equation (13) and using
partial fractions the integral may be evaluated in
closed form.  Equation (13) is solved for S by use of
a Maclaurin series expansion for values of z/L near the
origin, and by two iteration forms for z/L positive or
negative.   They converge in five or six cycles.

     Ellison's interpolation equation represents well
the wind shear for unstable, neutral, and slightly
stable conditions.  However, neither it nor  any other
equation known to the authors does a good job of
representing more stable conditions.  Therefore this
model should be used in stable cases with caution.
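
     The following sketch illustrates one way to evaluate the dispersion sub-model numerically:
equation (13) is solved for S by simple bisection (rather than the authors' Maclaurin-series and
iteration forms), the mean speed follows from equation (11) with the non-adiabatic correction ψ
neglected purely for simplicity, and av follows from equation (12).  All numerical inputs are assumed.

    import math

    G, K = 9.81, 0.4                       # gravity, von Karman constant

    def wind_shear(zeta, tol=1e-8):
        """Solve Ellison's interpolation equation (13),
        S**4 - 18*zeta*S**3 = 1, for S by bisection (zeta = z/L)."""
        f = lambda S: S ** 4 - 18.0 * zeta * S ** 3 - 1.0
        lo, hi = 1e-6, max(1.0, 18.0 * zeta + 2.0)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    def dispersion_speeds(z, z0, u_star, H, rho, cp, T):
        """Equations (11) and (12), with psi neglected (an assumption
        made only for this sketch)."""
        L = -u_star ** 3 * rho * cp * T / (K * G * H)   # Monin-Obukhov length
        S = wind_shear(z / L)
        u = u_star * math.log(z / z0) / K               # eq. (11), psi = 0
        av = 1.05 * (u_star ** 3 * S / K
                     + 2.4 * z * G * H / (rho * cp * T)) ** (1.0 / 3.0)
        return u, av

    # Assumed surface-layer inputs (SI units): unstable daytime case.
    print(dispersion_speeds(z=10.0, z0=0.03, u_star=0.35,
                            H=150.0, rho=1.2, cp=1005.0, T=293.0))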
Temporal Effects

     Since the model  contains an explicit representa-
tion of time one would hope to be able to describe the
effect of step changes in emission rate, wind vector,
and dispersion speed.  Important temporal changes
immediately come to mind such as varying vehicle speeds
and/or varying vehicle-to-capacity ratios.   Another
important practical condition occurs when pollution
is caused to shift back and forth across a receptor
under light and variable wind conditions.

     The principal problem of describing these effects
is that of either (a) representing the change in the
concentration after the step change of a variable or
(b) matching concentrations at the time of a step
change of a variable.

     We wish to consider the representation of concen-
tration produced by step changes in the emission rate,
Q, the wind, u, and the dispersion speed, a.  Thus

                    C(u,a,T) = Q c(u,a,T)

will be the general expression for the concentration at
time T.
                                             Change of Emission Rate

                                                   Imagine that a source operating under conditions
                                              u1, a1 undergoes a step change in emission rate from
                                              Q1 for times T < T1 to Q2 for times T > T1.  The con-
                                              centration at time T > T1 may be calculated by adding
                                              a differential source, of emission rate (Q2 - Q1),
                                              which commences at time T1, to the continuing, old
                                              source:

                                                   C(u,a,T) = Q1 c(u1,a1,T) + (Q2 - Q1) c(u1,a1,T - T1)

                                              For T sufficiently large, c(u,a,T - T1) becomes equal
                                              to c(u,a,T) so that if Q2 = 0, the concentration even-
                                              tually decays to zero.
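
                                                   As a small worked illustration of this superposi-
                                              tion (with an arbitrary stand-in for the concentration
                                              kernel c, since any of the preceding closed forms could
                                              be substituted):

    import math

    def concentration_after_rate_change(c, Q1, Q2, T1, T, u1, a1):
        """Superpose the continuing source and a differential source of
        strength (Q2 - Q1) that starts at T1, as described in the text."""
        if T <= T1:
            return Q1 * c(u1, a1, T)
        return Q1 * c(u1, a1, T) + (Q2 - Q1) * c(u1, a1, T - T1)

    # Toy kernel (purely illustrative): unit-rate concentration that
    # approaches a steady value as T grows.
    toy_c = lambda u, a, T: (1.0 - math.exp(-T / 600.0)) / (u * a)

    print(concentration_after_rate_change(toy_c, Q1=5.0, Q2=0.0,
                                          T1=1800.0, T=7200.0, u1=2.0, a1=0.5))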
Wind Change

     The wind, u, or its components u sin φ along d
and u cos φ along ℓ, in equation (4), acts merely to
transport (advect) the pollutant from the source to the
receptor located at (d,ℓ,z).  Imagine that a source
operating at a rate Q1 with dispersion speed a1, is
subject to a step change of wind from u1 at times
T < T1 to u2 at times T > T1.  Since the wind acts only
to transport the pollutant, all we need do is describe
how the wind changes from u1 to u2 at times T > T1.
This is given by:

              ū = ū(T) = u2 + (u1 - u2)T1/T.

Thus, if the wind undergoes a step change from u1 at
times T < T1 to u2 at times T > T1, the concentration
at any time T > T1 becomes:

              C(u,a,T) = Q1 c(ū,a1,T)

This is equivalent to what Lissaman2 calls the
standard solution for a moving receptor . . . .

     If the emission rate also changes from Q1 at times
T < T1 to Q2 at times T > T1 one again adds a differ-
ential source, of emission rate (Q2 - Q1), which
commences at time T1 but which now is subject to a
wind u2.  In this case the concentration at time
T > T1 is given by

     C(u,a,T) = Q1 c(ū,a1,T) + (Q2 - Q1) c(u2,a1,T - T1)
Change of Dispersion Speed

     The foregoing are exact solutions for the concen-
tration existing at times T > T1 when, at T = T1, the
emission rate changes from Qi to Q2 and/or the wind
changes from uj to u2.  It is not clear that changes
in the dispersion speed, a, may be treated exactly.

     Physically the dispersion speed serves to specify,
as a function of time, the distance of a "labeled"
parcel of the dispersing "cloud" from an observer
moving with the "cloud".  Thus, a reasonable approxi-
mation would appear to be to treat step changes in
the dispersion speed, a, in the same manner as step
changes in wind speed are treated.  Thus if a source,
operating at a rate Q1 under a wind u1, is subject to
a step change from a1 for times T < T1 to a2 at times
T > T1, we define:

              ā = ā(T) = a2 + (a1 - a2)T1/T
                                                       324.

-------
then the concentration at any time T > T1 becomes:

              C(u,a,T) = Q1 c(u1,ā,T)
      Within the constructs outlined here the effect of
combinations of step changes in emission rate, disper-
sion speed, or wind may be calculated.
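
     The step-change rules above can be collected into a single sketch; the use of a2 and u2 for the
differential source is our extrapolation by analogy with the wind-change case, and the concentration
kernel c is again left abstract.

    def effective(x1, x2, T1, T):
        """Effective (time-averaged) wind or dispersion speed after a
        step change from x1 to x2 at time T1, for T > T1."""
        return x2 + (x1 - x2) * T1 / T

    def concentration(c, Q1, Q2, u1, u2, a1, a2, T1, T):
        """Concentration at T > T1 when emission rate, wind, and dispersion
        speed all undergo step changes at T1 (the differential source is
        given the post-change wind and dispersion speed by analogy with
        the wind-change case; an assumption of this sketch)."""
        u_bar = effective(u1, u2, T1, T)
        a_bar = effective(a1, a2, T1, T)
        return Q1 * c(u_bar, a_bar, T) + (Q2 - Q1) * c(u2, a2, T - T1)

    # Usage: pass any concentration kernel c(u, a, T), for example the
    # toy kernel from the previous sketch.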
Validation

      We have not had the opportunity to validate the
model.  However, Panofsky and McCormick5 present
validation data for the dispersion speed sub-model.
Lissaman2 found that concentrations of carbon monoxide
predicted by the infinite line source model correlated
well with measured concentrations without any adjustment
of the predicted values.
Conclusions

      A reasonably simple, closed-form model of the
finite line source has been found.  The model produces
finite solutions in a continuous manner for the limit-
ing conditions of steady-state, infinite source, and
zero-wind.  It predicts reasonable concentrations at
any point relative to the source.  The entire model,
including the dispersion relation, incorporates the
important parameters of wind, surface roughness, and
heat flux in a rational manner.  It can be extended to
model unsteady conditions.  The simple analytical form
of the model makes it suitable for construction of
network roadway models.
Figure 1.  Source-Wind-Receptor Geometry
 References

 1.  Taylor, G. I., "Diffusion by Continuous Movements",
    Proc. London Mathematical Soc., 20, pp 196-212
    (1921).

 2.  Lissaman, P.B.S. "A Simple Unsteady Concentration
    Model Explicitly Incorporating Ground Roughness
    and Heat Flux", Paper #73-129, 66th Annual Meeting
    of the Air Pollution Control Association  (1973).

 3.  Monin, A. S. and A. M. Obukhov, "Basic Laws of
    Turbulent Mixing in the Ground Layer of the
    Atmosphere" translated from Akademiia Nauk SSSR,
    Leningrad, Geofizicheskii Institut, Trudy, 151,
    #24, pp 163-187 (1974).

 4.  Lumley, J. L. and H. A. Panofsky,  "The Structure
    of Atmospheric Turbulence", Interscience  Publishers,
    John Wiley and Sons, New York (1964).

 5.  Panofsky, H. A. and R. A. McCormick, "The Spectrum
    of Vertical Velocity Near the Surface", Quart. J.
    Roy. Meteorol. Soc., 86, p 495 (1960).

 6.  Ellison, T. H., "Turbulent Transport of Heat  and
    Momentum From an Infinite Rough Plane", J. Fluid
    Mech., 2, p 456 (1957).
                                                        325

-------
                                         WATER QUALITY MODELING IN TEXAS
                         Joseph J. Beal, P.E.; Andrew P. Covar; and Dale W. White, P.E.
                                    Engineering Analysis and Modeling Section
                                            Texas Water Quality Board
                                                  Austin, Texas
                      Abstract

The State of Texas, acting through the Texas Water
Quality Board, has been intensely interested in water
quality modeling for the last three years.   In the
past, this effort has dealt mainly with the waste load
evaluation program, made necessary for the  allocation
of point source waste discharges by Public  Law 92-500.
A considerable amount of water quality modeling will
be required for the evaluation of treatment alterna-
tives which will be developed under Section 208 of the
same law.  This modeling effort will consider the ef-
fects of point and nonpoint waste sources on receiving
water quality, both under steady-state and  time
variable conditions.

                    Introduction

As far back as 1968, mathematical modeling  studies in
the State of Texas were being conducted to  determine
how much to restrict the discharge of pollutants.  Dis-
solved oxygen is the parameter most often evaluated by
our modeling studies.  Other parameters ranging in
difficulty from conservative substances to  eutrophica-
tion processes have been studied.  The objective of
this paper is to show how applied models are used in
planning problems and water quality management deci-
sions in the State of Texas.  The various types of
models currently in use are discussed along with the
State's future need for models.

        The Role of Modeling in Water Quality
        Management Prior to Public Law 92-500

Prior to Public Law 92-500, the Texas Water Quality
Act was the basic legal  authority for the Texas Water
Quality Board's surface water protection program.  The
Act directed the agency as follows:  "It is the policy
of this state and the purpose of this Act to maintain
the quality of the water in the state consistent with
the public health and enjoyment, the propagation and
protection of terrestrial  and aquatic life, the opera-
tion of existing industries, and the economic develop-
ment of the state; . .  .  and to require the use of all
reasonable methods to implement this policy."  The
agency under the "all reasonable methods" clause uses
a permit system as its  basic regulatory device to con-
trol  the point discharge of pollutants.  Obviously, if
you restrict the discharge of pollutants, the Act will
require the expenditure of public and private funds for
wastewater treatment systems.  Therefore, you must know
the degree of pollutant discharge restriction needed in
order to avoid wasting resources, and whether or not the
permits are accomplishing their objective.  This was
accomplished with descriptive studies of water quality
and later with mathematical modeling.

The first modeling endeavors by the Texas Water Quality
Board were conceptual models prepared by consultants.
During these early endeavors, the reports or users
manuals were complicated with technical language.  The
model limitations were  often not explained.  This re-
quired the users to contact the consultants for assis-
tance.   Due to these early problems, many people at the
state level failed to accept mathematical models of
water quality as useful  decision making tools.
        Planning Aspects of Public Law 92-500

The Federal Water Pollution Control Act Amendments  of
1972 (FWPCAA) established numerous requirements  the
States would need to satisfy in order to  be  eligible
for construction grant and program grant  funds from
the Environmental Protection Agency.

The period of 1973-1977 is generally referred to as
Phase I of the Act's implementation.  In  Phase I, the
emphasis has been on issuing discharge permits and
making construction grants in order to control point
sources of pollution which are easily identifiable
and correctable.  For many areas of the nation, the
achievement of this requirement will be all  that is
necessary for attainment of the 1983 goal.

In Phase II (1978-1983), the emphasis will be on solv-
ing the more severe and complex problems  produced by
point and nonpoint sources of pollution.  The identi-
fication of the programs necessary in achieving the
1983 goal in complex problem areas is to  be  accom-
plished through the preparation of plans  required by
Section 208 of the Act.

Section 303 of the Act sets forth requirements for
each State to establish water quality standards and
implementation plans.  Under this section each State
is required to have a continuing planning process
which will result in plans for all navigable waters
within the State.  These plans are required  to contain
certain items including the following: 1) total maxi-
mum daily load of pollutants for waters which cannot
achieve water quality standards using the minimum
wastewater treatment levels set forth in the Act,
2) adequate implementation for revised or new water
quality standards, and 3) controls over the  disposi-
tion of all residual waste from any water treatment
processing.

Pursuant to Section 303, the EPA issued regulations on
State Continuing Planning Process (40 CFR Part 130)
and Preparation of Water Quality Management  Basin
Plans (40 CFR Part 131).  These regulations  require
that revisions to basin plans after July  1,  1975,
shall "reconsider current actions with respect to the
most recent data or analysis and shall concentrate, if
appropriate, on the identification and evaluation of
methods and procedures (including land use require-
ments) to control, to the extent feasible, non-point
sources of pollution".  The current 40 CFR 130 and 131
guidelines also include the planning requirements of
Section 208 of the Act.

For each stream segment with water quality problems
caused by nonpoint source discharges, the following
minimal information is required: 1) type of  problem;
2) identification of waters affected; 3)  identifica-
tion of nonpoint discharges contributing to  problem;
and 4) alternative procedures and methods (including
land use requirements) to feasibly control significant
nonpoint source discharges (this evaluation  should
consider the technical, legal, institutional, economic,
and environmental feasibility).  The 40 CFR  Part 131
regulations further specify that controls over resid-
ual wastes be included in basin plans.  Residual
                                                      326

-------
wastes  to be considered include all residual waste from
any municipal, industrial  or other water or wastewater
treatment processing.  The regulations also address
land and subsurface disposal practices.  Basin plans
are required to establish a process to control the
disposal of pollutants on land or in subsurface excava-
tions wherever such disposal causes or may cause viola-
tion of water quality standards or materially affect
groundwater quality.

There are two types of areawide planning in which the
TWQB is involved - Designated Areawide Planning and
Planning in Non-Designated Areas.

     Designated Areawide Planning is the planning
     required in areas designated by the Governor
     as having substantial water quality problems
     as a result of urban-industrial concentrations
     or other factors.  Eight areas have been des-
     ignated as 208 areas.

     Planning in Non-Designated Areas is the planning
     required in all  other areas of the State that
     are not considered to have substantial water
     quality problems.  These are considered to be
     State planning areas in which the Texas Water
     Quality Board is the Planning Agency.

The level of detail of planning for each State planning
area will be contingent upon the type and complexity of
problems in the planning area, and consequently, the
planning tools that are required differ from one area
to another.

The purpose of 208 type planning is to: 1) develop
methods of achieving or maintaining adequate water
quality in the Nation's streams, and 2) insure that
construction grant funds spent on construction of
domestic sewage treatment plants are spent in a cost
effective manner.

In other words, this planning process should determine
such things as whether or not any particular
stream has the ability to meet 1983 stream standards
through the application of effluent limitations on
dischargers or whether that particular stream is beyond
the point of ever meeting existing stream standards
for current designated uses.  In fact, the most current
EPA regulations (40 CFR 130) indicate that a State may
establish less restrictive uses than those contained in
existing water quality standards by demonstrating one
of the following:  1) existing designated use is not
attainable because of irretrievable man-induced condi-
tions, 2) existing designated use is not attainable
because of natural background, or 3) the application
of effluent limitations for existing sources more
stringent than those required pursuant to the EPA
Effluent Limitation Guideline program in order to
attain the existing designated use would result in
substantial and widespread adverse economic impact
(of course, in order to make these kinds of determina-
tions, both point source and nonpoint source modeling
efforts will be required).

The nonpoint source pollution program is an integral
part of the basin  planning and areawide planning pro-
grams.   The FWPCAA requires a nonpoint source program
element in all areawide plans conducted in areas desig-
nated pursuant to  Section  208(a)(2).  In developing the
nonpoint source planning element in the Basin Plan, the
nonpoint program in the designated 208 areas can be
more closely coordinated with the other nondesignated
areas.   Basin planning regulations require a nonpoint
source program element for each water quality segment
in which nonpoint  source discharges contribute to the
water quality problem.
The nonpoint source discharge program will also  provide
valuable input to, or require input from, other  State
programs:

1.  Water quality standards must reflect achievable
    goals.  At such time as these standards are  revised
    consideration will be given to the impact of non-
    point source discharges on water quality and the
    feasibility of controlling such discharges.

2.  Waste load evaluations should reflect the contri-
    bution of nonpoint source discharges to the total
    load.  Detailed information on the origin, magni-
    tude, and frequency of nonpoint source discharges
    will improve the accuracy and reliability of water
    quality models.

3.  The feasibility of controlling nonpoint source dis-
    charges will influence treatment level requirements
    for point source discharges.

4.  In some cases, waste control orders (discharge
    permits) will be required for nonpoint source
    discharges.

5.  Monitoring programs should be adequate to assess
    the magnitude and frequency of significant nonpoint
    source discharges.  Either routine monitoring or
    special surveys may be required to fulfill this
    requirement.

As indicated earlier, both traditional steady-state
point source modeling and non-steady-state run-
off type modeling are required to achieve the integra-
tion of a viable point and nonpoint source control
program.

        The Role of Modeling in Water Quality
         Management After Public Law 92-500

Each year the Texas Water Quality Board is called upon
to develop a statewide water quality management program
which can successfully provide guidance in the imple-
mentation of the Texas Water Quality Act and the
FWPCAA.  The aim of this program is to:  first, bring
about an improvement in water quality in areas where
violations of the Texas Water Quality Standards are
known to exist and secondly, to preserve the existing
quality of the navigable waters of the state where
conditions are already acceptable,  and further, to
implement the necessary requirements for areawide and
basin plans in order to insure good water quality in
the future.

One of the main efforts toward meeting these objectives
lies in preparation of waste load evaluations as re-
quired by the FWPCAA Section 303(d)(l)(C).   These waste
load evaluations as previously mentioned become a part
of the Basin plans, as basin planning strategy.   They
are taken into account during consideration of permit
applications as well as determination of new stream
standards.

Routinely waste load evaluations will  be updated or
revised as necessary to accomplish national  water
quality objectives in conformity with the requirements
of the Act and the Continuing Planning Process.  For
critical segments within each 208 Designated Area,
these updates are currently underway.

Texas has twenty-three designated river basins which
have been further divided into 297 discrete hydrologic
segments.  Of these, 230 or 78 percent are considered
as "effluent limited" where the minimum treatment re-
quired by law will accomplish our stream standards.
The remaining 67 segments or 22 percent are considered
                                                       327

-------
"water quality limited" where a higher level of treat-
ment is required to meet the desired stream standard.
Since the start of implementation of the FWPCAA, 89
waste load evaluation reports have been prepared.  Of
these, 59 segments were water quality limited while 30
were effluent limited.

A determination of the assimilative capacity of a
stream segment requires the quantitative assessment of
the effects on the environment of various alternative
measures.  This is the forte of mathematical modeling.
As such, modeling plays a significant role in the deci-
sion making process of several facets of Texas Water
Quality Board activities, especially the formulation
of waste load evaluations and the issuing of waste
discharge permits.

The assimilative capacity of a stream is determined
in some cases by complex mathematical models utilizing
digital computers; while in other situations, simple
engineering calculations are used.  This leads us to
a categorization, or hierarchy of available steady-
state models to be used in the management process.

The first type of model, the type that has the greatest
weight in the decision making process, is the model
that is completely calibrated and verified for a par-
ticular stream segment.  This type of model would have
been verified during several different flow conditions
with predictions closely approximating known conditions.

The second level of model is somewhat similar to the
first, but lacks verification while being adequately
calibrated.  This type of model may have been partially
verified for only one flow condition.

The third level in the hierarchy of models is that
model which has neither been calibrated nor verified,
but has been developed with "text-book" or assumed
values.  This type of model would not have had its
predictions matched with actual conditions, nor would
it have utilized actual load input conditions but only
estimations of what the loads might have been at a
given time.

The fourth and lowest level of modeling, although fre-
quently used for the first rough cut at solving dis-
solved oxygen problems, is the model  that does not
actually mathematically account for the transport of
waste materials, but computes the assimilative capacity
of a body of water assuming the body of water is a com-
pletely mixed reactor.  This type of model  is useful
for identifying a target load for a stream segment and
gives the user an idea of the magnitude of the problem
which must be solved.

An example of the first level of modeling would be the
Houston Ship Channel  model prepared during the Galves-
ton Bay Project.  This model was verified numerous
times for steady-state conditions.  Very few other
bodies of water have been modeled to this extent.   The
second level  of modeling is typified by the well-known
QUAL-I or QUAL-II type models when flow and loading
data is somewhat limited.   The third level  might be
QUAL-I with assumed values for much of the input data.
The fourth level  is based on the amount of oxygen that
can enter through the  water surface.

Each level  of modeling is  useful,  for different pur-
poses, and has a place in sound water quality manage-
ment.   For instance,  relatively great reliance can be
put on the predictive  capacity of a level  1  model  for
analyzing alternative  water quality management actions.
Treatment levels can  be closely evaluated for waste
load allocations  with  only limited additional  input
required  for  management decisions, these additional
inputs being  principally  cost  of treatment and benefits
of maintenance of water quality.   For a level  2 model-
ing situation, less  reliance can  be  placed on  model
output, and consequently,  other  factors may take on
more weight when considering possible alternatives.
These factors include overall  water  quality considera-
tions as well as cost of  control  measures.  When a 3  or
4 type modeling effort is  employed,  the management de-
cision is usually of the nature of requiring stepwise
reductions in waste  loading, followed by an evaluation
of stream quality response, and  then a further control
action should the initial  step prove unsuccessful.

It should be noted here that the  Quality Board does not
rely solely on model predictions  during the planning
process.  This would be not only  unsound,  but  also un-
warranted.  Indeed,  all information  available  should  be
considered.  Consequently, public hearings are conduct-
ed by the Texas Water Quality  Board  after dissemination
of engineering reports concerning the modeling and
water quality management decisions under consideration
for implementation.  These hearings work in two ways.
That is to say, the Texas Water Quality Board  gains
additional pertinent information,  and at the same time,
provides the interested public with  a better knowledge
of what has gone on during the decision making process.

Problems more complex than determining the total  as-
similative capacity of a stream segment are: 1)  How
should the total assimilative  capacity be  divided
among the various waste discharges,  and 2)  How much
allowance for expansion should be provided?

In dividing the assimilative capacity among  dischargers
we have strived for an equal effort  among  all  dis-
chargers.  Some dischargers, however,  feel  that  we have
picked on them or that the assimilative capacity has
been divided inequitably.  In  some areas where the
majority of pollutant loadings comes  from  municipal
discharges, we have taken a position  that  the  larger
plants which account for the majority of the total load
should be required to improve  their wastewater treat-
ment.  An example of this is the  Dallas-Fort Worth Area
where 4 large wastewater treatment plants  account for
approximately 90 percent of the total  organic  load
while the remaining 5 small plants account  for approx-
imately 10 percent of the loading.  The four large
plants are required to discharge  an effluent with a
BOD5  of not more than 10 mg/1  on a monthly  average
basis while the five small plants may discharge an
effluent BOD5 of 20 mg/1  on a monthly average  basis.

In areas dominated by industrial  discharges, we gen-
erally use as a starting point the U.S.  Environmental
Protection Agency's industrial  effluent guidelines.
Federal  law has required the EPA  to define for most
industrial categories what are known  as: 1)  Best
Practical Treatment, and 2) Best  Available Treatment.
These guidelines were developed to impose  an equal
effort in waste treatment on each type  of  industry.
In the most complex waste load allocation  performed
by this Agency, the Houston Ship  Channel, where  165
wastewater treatment plants and 206  industrial dis-
charges were involved, the assimilative capacity was
divided up among the various industries  by requiring
each to provide treatment such that all  discharges are
an equal percentage of the differential  between  best
practical and best available treatment.  This  is illus-
trated mathematically by the following  equation:
   Allowable Discharge = BPT - X(BPT - BAT)

where:
        BPT = best practical treatment
                                                       328

-------
        BAT = best available treatment

          X = percent reduction required
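
For example, treating X as a fractional reduction between 0 and 1 (an interpretation we are
assuming here), the allocation rule can be applied as follows; the numbers are illustrative only.

    def allowable_discharge(bpt, bat, x):
        """Allowable discharge = BPT - X*(BPT - BAT), where X is the common
        fractional reduction applied to every discharger (assumed 0 <= X <= 1)."""
        return bpt - x * (bpt - bat)

    # Illustrative numbers only: BPT = 1200 lb/day BOD5, BAT = 300 lb/day.
    print(allowable_discharge(bpt=1200.0, bat=300.0, x=0.6))   # 660.0 lb/day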

Over a  period  of time,  the waste loading to any given
stream  is  constantly changing.   Therefore, as a waste
load allocation  is developed, a buffer for growth of
existing waste dischargers and the addition of new dis-
chargers to  a  stream segment is included.  This buffer
is to insure that as additional loads are imposed upon
the stream,  the  stream  is not immediately out of com-
pliance.  However, as this buffer is consumed and as
the quantity of  waste loads approaches the calculated
assimilative capacity of the stream, the staff of TWQB
must be aware  so that further action (either continued
monitoring if  the water quality is acceptable and not
deteriorating  or additional waste load allocations)
can be  taken.  Consequently, the TWQB has developed
an automatic data processing system to account for the
current waste  load conditions on a stream segment and
to compare existing conditions of loading to the waste
load allocation.  By using this system, it is possible
to quickly determine potential  problem areas and to
establish  priorities for analysis of stream segments
utilizing  limited engineering resources.
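
A minimal sketch of the kind of bookkeeping such a system performs is shown below; the field
names, threshold, and data are our assumptions and do not describe the TWQB system itself.

    def flag_segments(segments, warning_fraction=0.85):
        """Compare current waste loads with allocated loads for each stream
        segment and flag those approaching or exceeding the allocation."""
        flagged = []
        for seg in segments:
            used = seg["current_load"] / seg["allocated_load"]
            if used >= warning_fraction:
                flagged.append((seg["segment_id"], round(used, 2)))
        return sorted(flagged, key=lambda item: item[1], reverse=True)

    # Illustrative data only (lb/day of BOD5).
    example = [
        {"segment_id": "1006", "current_load": 900.0, "allocated_load": 1000.0},
        {"segment_id": "0805", "current_load": 400.0, "allocated_load": 1200.0},
    ]
    print(flag_segments(example))   # [('1006', 0.9)]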

The modeling work previously discussed covers Phase I
of the  FWPCAA.  The Texas Water Quality Board is pre-
sently  in  the  development of methodology stage of
modeling for the 208 areawide planning program.  The
types of models  that will be used are discussed briefly
under modeling inventory.

                  Modeling Inventory

Over the last  four years, the Texas Water Quality Board
has acquired a number of computer models for use in the
various water  quality management tasks given to us.  The
models  vary in complexity from simple one-dimensional
steady-state river models to two-dimensional and time
variable estuarine models.

Our general  purpose models include: 1) the QUAL models
-  developed by the Texas Water Development Board and
modified by Water Resources Engineers, Inc., 2) the
AUTO-QUAL models   developed by the Environmental
Protection Agency, and 3) the ESTPOL models   developed
by Texas A&M University.  The primary use of these mod-
els is  in the  waste load evaluation work performed by
the Texas  Water  Quality Board.   These are basically
steady-state applications.

In time variable applications, the TWQB is now applying
or evaluating  1) the AUTO-QUAL model; 2) the STORM mod-
el - developed by Water Resources Engineers, Inc.; 3)
the STORM WATER  MANAGEMENT model - developed by Metcalf
&  Eddy, Inc.,  the University of Florida, and Water
Resources  Engineers, Inc.; and 4) RECEIV II - developed
by Raytheon for the EPA.  One or more of these models
will be used in  the upcoming 208 program dealing with
nonpoint waste loadings.

The TWQB has acquired two general purpose lake models.
These are 1) EPARES, and 2) RIVER-RESERVOIR, both
developed  by Water Resources Engineers, Inc.  We have
not yet had the  opportunity to use either of these
models, but we have tried to become familiar with their
basic features.

At times the TWQB has turned to basin specific models
to solve particular water quality management problems.
The largest example of  this type of work has been the
Galveston  Bay  Project.   In this project, a two-dimen-
sional  model of  Galveston Bay was developed along with
a  one-dimensional  steady-state model of the Houston
Ship Channel.  The development of these models was
accomplished by Tracor and Hydroscience, Inc.  The ship
channel model was used in the waste  load  evaluation of
that segment.  Hydroscience, Inc.  has also  developed a
basin specific model for the Trinity River  and  they are
presently under contract to develop  an eutrophication
model of Lake Livingston, an 82,600-acre lake northeast
of Houston.  The TWQB continues to look for new models
and for new applications of existing models.
                                                       329

-------
                       A DYNAMIC WATER QUALITY SIMULATION MODEL FOR THE THAMES RIVER
                                              D.  G.  Weatherbe
                                           Water  Resources  Branch
                                    Ontario Ministry of the Environment
                                              Toronto,  Ontario
     The Thames River basin is experiencing
problems of water quality and flooding,  heightened
by intensive agricultural use and an expanding
urban population.  A study was initiated to
provide solutions to these problems as well as
problems of erosion, unsatisfied recreational
demand, and conflicts in reservoir use.   In order
to provide a suitable tool for the analysis and
projection of the water quality problem, a dynamic
water quality simulation model was developed and
applied to the major growth center, the  City of
London.  This paper describes the major  objectives
of the water quality modelling, the model structure
and processes, as well as model input and output
summaries.  The application of the model to
evaluate various water quality management options
is described.
                      Need  for a  Study

                           The river  experiences  water quality impairment
                      problems caused by excessive  inputs of nutrients,
                      oxygen  demanding materials, bacteria and suspended
                      solids  from urban  and  rural sources.   The largest
                      city  in the basin,  the City of London,  is expected
                      to  grow from a  population of  220,000 in 1971,  to
                      500,000 in  2001, with  a proportional increase  in
                      sewage  discharges  to the  river.   Options for
                      controlling present and future problems for  the
                      City  of London  consisted  of increased levels of
                      sewage  treatment,  sewage  diversion directly  to
                      Lake  Erie by pipeline,  low  flow augmentation from
                      proposed reservoirs and urban growth restrictions.
                      This  part of the study was  initiated primarily to
                      evaluate these  water quality  options.
Figure 1:  Thames River drainage basin, Province of Ontario.
           Thames River Study

Basin Description

The Thames River in southwestern Ontario  (Fig.
1), drains 2,250 square miles of mainly agricultural
land with a total 1971 population of  415,000,
334,000 in urban areas and 81,000 in  rural.
Major surface water uses include those for
sewage disposal, recreation and fish  and wildlife
habitats.  Municipal water supplies are either of
ground water origin or are imported from the
Great Lakes by pipeline.  Three multiple use
reservoirs in the basin, with a maximum combined
storage volume of 72,500 acre-feet, are used for
flood control, low flow augmentation  and recreation.
                      Study  Objectives

                      General  -  The  overall  objective of  the  study was
                      "to  develop  guidelines for  the  management  of the
                      basin's  water  resources to  ensure that  adequate
                      quantities of  water  of satisfactory quality are
                      available  for  the  recognized uses at the lowest
                      possible cost,  and that erosion and flood  protection
                      are  provided consistent with appropriate benefit-
                      cost criteria"!.

                      Water Quality Objective - The general water
                      quality  objective  defined during the study was to
                      maintain existing  water quality where it is
                      satisfactory for fish  and aquatic life  and
                      recreation,  and to improve  quality  to that level
                      in those areas where it is  presently degraded.
                          Appropriate dissolved  oxygen criteria to
                      achieve  this objective were identified  for major
                      sections of  the river, based on published  Ontario
                                                     330

-------
guidelines^
             These criteria were  redefined  in
statistical terms to allow comparison with model
output summaries.  For example, Criteria C, which
represents an acceptable quality of water with
some stress, to be applied for warm water fish
species in non-spawning periods, is stated as
follows:  "the dissolved oxygen concentration
should be above 5 mg/1 95 percent of the time in
a given month.  Concentrations may range between
5 mg/1 and 4 mg/1 for periods up to four hours in
length within any 24 hour period, provided that
water quality is favourable in all other respects".
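
Stated in statistical terms, the criterion is straightforward to test against simulated output.
The sketch below checks a month of hourly dissolved oxygen values against a simplified reading
of Criteria C (consecutive-hour excursions only); the series and the hourly time step are
assumptions of this example.

    def meets_criteria_c(hourly_do):
        """Check a month of hourly DO values (mg/l) against Criteria C:
        above 5 mg/l at least 95 percent of the time, and excursions
        between 4 and 5 mg/l lasting no more than 4 consecutive hours."""
        above_5 = sum(1 for d in hourly_do if d > 5.0) / len(hourly_do)
        if above_5 < 0.95:
            return False
        run = 0
        for d in hourly_do:
            if d < 4.0:                 # never acceptable under Criteria C
                return False
            run = run + 1 if d < 5.0 else 0
            if run > 4:                 # excursion longer than four hours
                return False
        return True

    # Example: a flat 6 mg/l month with one brief 4.5 mg/l dip.
    series = [6.0] * 720
    series[300:303] = [4.5, 4.4, 4.6]
    print(meets_criteria_c(series))     # True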

      Dynamic Water Quality Simulation Model

Model Description

     The dissolved oxygen model used in the
Thames River Study takes account of the effects
of carbonaceous and nitrogenous oxygen demand,
atmospheric aeration, aeration at weirs, respir-
ation in bottom sludges, photosynthetic oxygen
production, and respiration of aquatic plants and
algae.  Model parameters are adjusted to account
for the effect of changes in temperature and
channel flow.
     The model expressed as a differential equation,
in terms of the oxygen deficit D, is given below
as a function of time, t, and distance, x.  The
oxygen deficit D is the difference between the
oxygen saturation concentration and the actual
concentration.
     ∂D/∂t + V ∂D/∂x = -Ka D + Kd L(x) + Kn N(x) + S - P(t) + R

 where:
 D    = oxygen deficit, mg/l
 V    = velocity of stream, ft/sec
 t    = time, days
 x    = distance, ft
 Ka   = aeration coefficient, day^-1 (O'Connor and
        Dobbins, 1958)3
 Kd   = deoxygenation coefficient, day^-1
 L(x) = carbonaceous oxygen demand as a function of
        x, given by L(x) = Lo e^(-Kr(x/V))
 Lo   = initial concentration of carbonaceous
        oxygen demand, mg/l
 Kr   = oxygen demand removal coefficient, day^-1
 N(x) = nitrogenous oxygen demand as a function of
        x, given by N(x) = No e^(-Kn(x/V))
 No   = initial nitrogenous oxygen demand, mg/l
 Kn   = nitrogenous oxidation coefficient, day^-1
 S    = benthic bacterial respiration, mg/l/day
 P(t) = photosynthetic oxygen source as a function
        of time, of the form P(t) = Pm sin[(π/p)(t-ts)]
        for daylight hours.  A step function
        approximation F(t), of the function P(t), is
        used which assumes a constant rate of
        photosynthesis over a time step, h (two
        hours)
 Pm   = maximum rate of photosynthetic production,
        mg/l/day
 p    = period of sunlight, days (fraction)
 ts   = time of sunrise, days (fraction)
 R    = algal respiration, mg/l/day
     This formulation and solution are as  expressed
 by O'Connor and DiToro  (1970)4 except for  the
 step function, F(t).
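     As a rough illustration only (this is not the code used in the study), the formulation above can be integrated numerically by following a parcel of water down a reach.  The sketch below assumes the exponential forms of L(x) and N(x) given above and a constant photosynthesis rate F over the step; every function name and parameter value is hypothetical.

import math

def deficit_at_reach_end(D0, Lo, No, Ka, Kd, Kn, Kr, S, F, R, V, reach_length_ft):
    # Follow a water parcel: dD/dt = -Ka*D + Kd*L + Kn*N + S - F + R,
    # with L and N decaying exponentially in travel time (hypothetical sketch).
    travel_days = reach_length_ft / (V * 86400.0)
    steps = 200
    dt = travel_days / steps
    D, t = D0, 0.0
    for _ in range(steps):
        L = Lo * math.exp(-Kr * t)      # carbonaceous demand along the reach
        N = No * math.exp(-Kn * t)      # nitrogenous demand along the reach
        D += (-Ka * D + Kd * L + Kn * N + S - F + R) * dt
        t += dt
    return D

# Hypothetical values: deficit (mg/l) at the end of a 20,000 ft reach
print(deficit_at_reach_end(D0=2.0, Lo=10.0, No=5.0, Ka=2.0, Kd=0.3, Kn=0.2,
                           Kr=0.35, S=0.5, F=3.0, R=1.0, V=1.0, reach_length_ft=20000.0))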
[Figure 2:  Thames River water quality simulation model - geometry of the river
 system, showing the North Thames River, the Thames River south branch and the
 main Thames through the City of London.

                         LEGEND
                         1-17   = Junction Points
                         S1-S5  = Sewage Treatment Plants
                         G1-G3  = Generated Flows
                         H1-H15 = Local Inflows
                         W1-W5  = Withdrawal Flows
                         ------ = City of London]
 River System Geometry

      The dissolved oxygen levels throughout a
 river are described by treating the river as a
 collection of reaches with constant conditions in
 each reach, calculating the effect of all inputs
 and withdrawals at the head of the reach, and
 using the model formulation to calculate the
 concentrations of dissolved oxygen and waste
 constituents at the end of the reach.  Weir
 aeration is assumed to take place at the head of
 the reach.  Figure 2 shows the geometry of the
 river system modelled, indicating the reach
 junctions, sewage treatment plant inputs, local
 inflows (tributaries), withdrawal flows and
 upstream flows.
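     A minimal sketch of the head-of-reach bookkeeping just described, assuming simple flow-weighted mixing of the upstream water, local inflows and treatment plant effluent (the function and its arguments are hypothetical, not the study's code):

def mix_at_reach_head(upstream, inflows, withdrawal_cfs=0.0):
    # upstream and inflows are (flow_cfs, concentration_mg_l) pairs for any
    # constituent (DO, CARBOD or NOD); withdrawals remove water at the mixed
    # concentration.
    sources = [upstream] + list(inflows)
    q_in = sum(q for q, _ in sources)
    conc = sum(q * c for q, c in sources) / q_in
    return q_in - withdrawal_cfs, conc

# Hypothetical reach head: river water, one tributary, one STP outfall, one withdrawal
flow, do = mix_at_reach_head((300.0, 7.5), [(40.0, 8.0), (10.0, 2.0)], withdrawal_cfs=15.0)
print(flow, do)

The mixed concentrations would then be routed to the end of the reach with the model formulation, with weir aeration applied at the head as stated above.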

 Input Variations

      The dynamic water quality simulation model
 was developed to take account of the variations
 of natural and man-made conditions that affect
 water quality, to provide increased information
 about the possible effects of water management
 planning alternatives in the river basin.  The
 inputs to the system vary with time to reproduce
 the variations in conditions that occur in the
 real system.  Causes of variation in water
 quality accounted for in the dynamic simulation
 model include:

-------
  i    Streamflow from the upstream main channel
     and from tributaries.   Daily streamflows
     were generated for the upstream gauge
     locations,  based on the historical record.
     The method  generates extensive traces of
     streamflow data from available historical
     records using stochastic techniques and thus
     allows the  water quality simulation model to
     be run for  as long a period as required
     (Singer, 1974)6.  Channel flow velocities,
     aeration rates, respiration rates and
     photosynthetic rates are affected by changes
     in streamflow.

ii   Water quality from upstream and from tributaries.
     Probability distributions based on observed
     water quality are used in the model to
     reproduce daily variations in dissolved
     oxygen (DO), carbonaceous (CARBOD)  and
     nitrogenous oxygen demand (NOD).   Diurnal
     variations  of dissolved oxygen are also
     included.

iii  Waste treatment plant loads.  Observed daily
     mean effluent flows are reproduced by
     mathematically describing seasonal and
     within-week trends and adding a random
     component.   Within-day variations in treatment
     plant flows are also included.   Daily mean
     water quality parameter concentrations (DO,
     NOD, CARBOD) are randomly chosen from
     probability distributions based on observed
     data.  Table 1 describes the probability
     distributions for the water quality of the
     main channel flows and sewage treatment
     plants.

iv   Sunlight energy.  A probability distribution
     of sunlight energy for each month is used to
     calculate variations in the average photosynthetic
     rates of plants and algae for each reach,
     and each day.

v    Temperature.  Mean daily water temperatures
     are calculated in the model according to
     observed trends.  Oxygen saturation concentrations,
     aeration rates and respiration rates depend
     on temperature.
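     A rough sketch of how daily water-quality inputs of the kind described in items ii and iii might be drawn.  The study reports only the 10, 50 and 90 percent points (Table 1), so the lognormal family used here, and the helper name, are assumptions made for illustration.

import math, random

def lognormal_sampler(p10, p50, p90):
    # Fit a lognormal to reported 10th/50th/90th percentiles; z(0.90) = 1.2816.
    mu = math.log(p50)
    sigma = (math.log(p90) - math.log(p10)) / (2.0 * 1.2816)
    return lambda: random.lognormvariate(mu, sigma)

# Daily CARBOD (mg/l) for an STP with Table-1-style percentiles of 13.4 / 30.0 / 68.0
sample = lognormal_sampler(13.4, 30.0, 68.0)
month_of_carbod = [sample() for _ in range(30)]
print(month_of_carbod[:5])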
Model Application
Model Runs
     Computer simulation runs were undertaken to
evaluate various cases defined by input conditions.
Each run consisted of the simulation  of dissolved
oxygen, carbonaceous and nitrogenous  oxygen
demand at each reach node, every two  hours,  for
thirty years, for each month simulated.   Typically,
the critical months of May, June, July  and August
were simulated.

     Input conditions for streamflow, sewage
treatment quality, sewage flow, and sewage
outfall location were altered to define various
management possibilities as outlined  below:

Streamflow - Cases modelled consisted of unregulated
flows generated from historic records,  regulated
flows from the operation of three existing reservoirs
and regulated flows from the addition of two
proposed reservoirs.

Sewage Treatment Quality - Cases modelled consisted
of: existing quality as described in Table 1,
based on 1972 data; improved quality, consisting
of nitrified secondary effluent approximated by
the quality of Greenway STP, shown in Table 1; and
zero pollutants, defined by negligible concentrations
of pollutants and high effluent dissolved oxygen.

Sewage Flow - Cases modelled were the existing (1972)
flow rate, with a total of 27.6 MIGD (51.2 cfs) on
the average, and the 1991 projected sewage flow rate of
49.5 MIGD (92 cfs).

Sewage Outfall Location - Cases modelled were: the
existing (1972) arrangement, with 1991 flows distributed
to the existing STPs on a proportional basis; a new
plant downstream accepting all flow increases; and
complete sewage diversion to Lake Erie.
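     The run matrix implied by these four factors can be enumerated mechanically.  A small sketch follows; the option labels are paraphrased, and not every combination was necessarily simulated in the study.

from itertools import product

streamflow = ("unregulated", "3 existing reservoirs", "3 existing + 2 proposed reservoirs")
quality    = ("existing 1972 quality", "nitrified secondary", "zero pollutants")
flow       = ("27.6 MIGD (1972)", "49.5 MIGD (1991)")
outfalls   = ("existing, proportional", "new downstream plant", "diversion to Lake Erie")

runs = list(product(streamflow, quality, flow, outfalls))
print(len(runs), "possible input combinations")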
  TABLE 1:   Thames River Simulation,  Water Quality Inputs,  Description of Probability
            Distributions and Mean Sewage Flows
Input Location           Carbonaceous O.D.(a)            Nitrogenous O.D.(b)             Daily Mean Flow
                         10%(c)    Median    90%         10%       Median    90%         1972(d)
                         (mg/l)    (mg/l)    (mg/l)      (mg/l)    (mg/l)    (mg/l)      (cfs)

Adelaide STP              13.4      30.0      68.0        59.4      98.0     125.0         5.1
Pottersburg STP           12.0      35.0     109.0        11.2      45.0     114.0         6.6
Vauxhall STP              15.8      40.0      98.0        20.3      64.0      98.0         6.3
Greenway STP               8.0      20.0      54.0         4.7       7.0      23.0        32.0
Oxford STP                10.0      20.0      44.0        60.7     123.0     180.0         1.4
North Thames R.            1.2       1.8       5.2         2.4       3.2       6.6         ---
Thames R. (S. Branch)      1.0       1.6       3.3         2.6       3.2       4.9         ---

  Notes:
  a    Carbonaceous oxygen demand (CARBOD) estimated by multiplying BOD5 data by the
       CARBOD/BOD5 ratio determined through laboratory analysis.  Ratios of 2 for the
       STPs and 1 for the stream inputs were used.
  b    Nitrogenous oxygen demand (NOD) determined by multiplying Kjeldahl nitrogen
       data by 4.57, the ratio of NOD to unoxidized nitrogen, determined by
       stoichiometric balance.
  c    10 percent of observations did not exceed value.
  d    Based on the period from May to October 1972.

-------
[Figure 3:  Thames River water quality simulation model - dissolved oxygen output
 summary for the month of July, based on inputs for existing conditions (1972).
 Percent of time in violation is plotted by reach number (end of reach) along the
 North Thames, Thames south branch and main Thames, with shading for DO less than
 5, 4, 3, 2 and 1 mg/l, sewage treatment plant locations, and the 5 percent
 criteria condition indicated.]
[Figure 4:  Thames River water quality simulation model - dissolved oxygen output
 summary for the month of July, based on input conditions of improved quality
 effluent, existing sewage flow, existing dam operation, existing outfall
 location.  Plotted by reach number (end of reach) in the same format as
 Figure 3.]
Model Output - For each month and each reach, the
model prints out tables consisting of the number of
violations of dissolved oxygen criteria, the
distribution and average duration of violations,
and the total time in violation.  Optional output
consists of cross-tabulations of output parameters
(dissolved oxygen, carbonaceous and nitrogenous
oxygen demand) with streamflow for each reach,
and a plot of output parameters.
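     A sketch of how violation statistics of this kind can be tallied from a two-hourly dissolved oxygen trace; the function name and the sample data are hypothetical.

def violation_summary(do_trace_mg_l, criterion_mg_l=5.0, step_hours=2):
    # Count excursions below the criterion, their durations, and total time in violation.
    durations, run = [], 0
    for do in do_trace_mg_l:
        if do < criterion_mg_l:
            run += step_hours
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    total = sum(durations)
    return {"violations": len(durations),
            "average_duration_hours": total / len(durations) if durations else 0.0,
            "total_hours_in_violation": total,
            "percent_time_in_violation": 100.0 * total / (len(do_trace_mg_l) * step_hours)}

print(violation_summary([6.1, 5.4, 4.8, 4.2, 4.9, 5.6, 6.0, 4.7]))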

Existing Conditions - The  output  table showing
the percent time  in violation is  shown graphi-
cally in Figure 3 for the  existing conditions,
defined by the model input combination of regu-
lated flow, existing sewage quality, existing
sewage flow and existing outfall  locations.   The
criteria condition  that allows occurrences of
dissolved  oxygen  less than 5 mg/1 for 5 percent
of the time is also shown.   From  this, it can be
seen that  significant criteria violations occur
in eight of the sixteen reaches.

Model Verification  - The model is constructed
with calibrated parameters based  on actual
surveys and actual  data are used  in the input
variations.   Consequently,  results are thought  to
represent  the real  conditions.  Intensive survey
data, and  long  term monitoring data confirm that
a dissolved oxygen  problem exists;  however,
continuous data comparable to model output are
presently  not available for verification.   The
                               model provides the  "best" estimate  of water
                               quality, but the  absolute values predicted cannot
                               be verified.  Consequently,  the model is  most
                               useful  for  comparing  the relative effectiveness
                               of optional control measures.

                               Conclusions Derived from Model Applications

                               Existing Conditions - Dissolved oxygen  conditions
                               presently represent an unacceptable quality.
                               Urban expansion without improvement should not be
                               allowed.

                               Nitrification - Improvement  of sewage quality  by
                               nitrification of  effluents significantly  improves
                               water quality in  the  river.   Figure 4 shows  the
                               predicted water quality resulting from  nitri-
                               fication of effluents. The  model predicts an
                               insignificant negative effect from  increased
                               sewage  flows when only nitrified effluents are
                               discharged.

                               Zero Pollutants - Treatment to a zero pollutant
                               level further improves water quality; however,
                               violations are still predicted by the model.
                               This is due to  the combined  effects of  upstream
                               quality and algal and sludge respiration.

-------
 Flow Augmentation - The addition of upstream
 reservoirs operated  to provide  flow  augmentation
 would significantly  improve water quality.

 Outfalls - The location of sewage outfalls within
 the  city  causes no significant  difference in
 water quality.  Diversion of  sewage  to  Lake  Erie
 would improve quality, but not  to the level
 provided  by  treatment to a "zero  pollutant"
 level.  This is because  diversion of sewage
 reduces both the waste load and the  flow to  the
 river.

 Verification - Dissolved oxygen data of a continuous
 nature are required  for  proper  verification  of
 the  model (a continuous  monitor was  installed in
 1974).

 Urban Runoff - The model parameters and input
 variations include the indirect effects of urban
 runoff which were present in  the  stream during
 intensive surveys; however, the urban runoff
 effect is inseparable from other  effects.
 Consequently, studies are being undertaken to
 determine the significance of this source.

 Eutrophication - The effects of nutrient controls
 on dissolved oxygen could not be estimated, since
 no quantification was available for the relationship
 between nutrients, plant growth and dissolved oxygen.
 Studies are being undertaken to investigate this
 phenomenon.

             Waste Loading Guidelines

      Conclusions derived from model  runs were
 used in the  statement of allowable waste dis-
 charge rates for each reservoir construction
 alternative.  These statements, called waste
 loading guidelines, are based on  loading rates
 which produce marginally acceptable water quality.
 The loading rate which produced the model output
 shown in Figure 4 was considered marginally
 acceptable, in spite of the predicted criteria
 violations, because of the lack of model veri-
 fication and because treatment to the zero pol-
 lutant level or diversion still produced criteria
 violations.  An arbitrary limit on the dilution
 ratio acceptable at low  flow was also incorporated
 in the identification of allowable waste discharge
 rates as  follows:  tertiary treatment (to stream
 quality - approximately  15 mg/1 total oxygen
 demand) should be initiated when the dilution
 ratio of  stream flow to sewage reaches 1.5:1, and
 increases in discharge should stop when the
 dilution  ratio reaches 1:1.
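     The dilution-ratio limits just stated can be written as a simple check; the function name and the returned wording are paraphrased for this sketch and are not part of the guideline itself.

def dilution_guideline(stream_flow_cfs, sewage_flow_cfs):
    ratio = stream_flow_cfs / sewage_flow_cfs
    if ratio <= 1.0:
        return "stop further increases in sewage discharge"
    if ratio <= 1.5:
        return "initiate tertiary treatment (to about 15 mg/l total oxygen demand)"
    return "dilution limits not yet reached"

print(dilution_guideline(stream_flow_cfs=75.0, sewage_flow_cfs=51.2))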
      "System Options", which were combinations of
 a waste disposal option and a reservoir construction
 option were defined on the basis of the above
 analysis.   Water quality benefits were assumed to
be constant for all options.   The present value
of costs  for each option was  estimated at various
 interest rates,  including the negative costs
 (i.e. benefits)  from flood control.  A least cost
ordering of the options then  provided input to a
 subsequent analysis of unquantified costs and
benefits for various  options.
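     A sketch of the least-cost screening described in this paragraph, using entirely hypothetical option names and cost streams (negative entries stand for flood-control benefits; figures are in arbitrary units).

def present_value(annual_costs, rate):
    # Discount an annual cost stream to present value at the given interest rate.
    return sum(c / (1.0 + rate) ** year for year, c in enumerate(annual_costs))

options = {
    "existing plants, existing reservoirs":  [500.0] * 20,
    "new downstream plant, added reservoirs": [900.0] * 5 + [300.0] * 15,
    "diversion to Lake Erie":                 [1500.0] * 3 + [-50.0] * 17,
}
for rate in (0.04, 0.08):
    ranking = sorted(options, key=lambda name: present_value(options[name], rate))
    print(rate, ranking)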
               References

1.  Thames River Basin Water Management Study,
    Ontario Ministry of the Environment and the
    Ontario Ministry of Natural Resources, 1975,
    Toronto, Ontario.
2.  Guidelines and Criteria for Water Quality
    Management in Ontario, Ontario Ministry of
    the Environment, 1974, Toronto, Ontario.
3.  O'Connor, D.J., and Dobbins, W.E., "Mechanism
    of Reaeration in Natural Streams", ASCE, 123,
    641, 1958.
4.  O'Connor, D.J., and DiToro, D.M., "Photosynthesis
    and Oxygen Balance in Streams", ASCE, J. Sanitary
    Engineering Div., 90, April 1970.
5.  Thackston, E.L., and Spence, R.E., "Review of
    Supplemental Reaeration of Flowing Streams",
    J. WPCF, 38(10), October 1968.
6.  Singer, S., Daily Streamflow Simulation on the
    Thames River Basin, Water Resources Paper 7,
    Ontario Ministry of the Environment, 1974,
    Toronto, Ontario.

-------
                            DISPERSION MODEL FOR AN INSTANTANEOUS SOURCE OF POLLUTION
                                 IN NATURAL STREAMS AND ITS APPLICABILITY TO THE
                                            BIG BLUE RIVER (NEBRASKA)

                                         Mahendra K. Bansal, Ph.D., P.E.
                            Nebraska Natural Resources Commission, Lincoln, Nebraska
     Dispersion behavior in natural streams depends
upon dispersion rates,  channel configuration, turbu-
lent flow characteristics,  and biochemical changes
taking place in the stream environment.  This is true
for an instantaneous source of pollution for all times.
This also holds good for a continuous source of pollu-
tion during transition  periods when mixing is not com-
plete in the reach.  Therefore, the prediction of
turbulent dispersion coefficients is important in the
determination of water  quality constituent concentra-
tion in natural streams.  However, the longitudinal
dispersion rates predicted by the QUAL model are low,
which results in higher concentration peaks of short
durations.  In steady-state conditions, for a continu-
ous plane source of pollution, the dispersion behavior
in natural streams does not depend upon the dispersion
rates.  Under these conditions, an exact solution of
the dispersion equation is available, and as such a
finite-difference approximation technique should not
be used.  One- and three-dimensional mathematical models
of dispersion are presented in this study.  The tur-
bulent dispersion coefficients calculated were tested
for the Big Blue River  in Nebraska.  The dispersion
model developed is not  dependent on channel size or
regional location of the stream.
                     Introduction

     A pollutant, whether agricultural or domestic,
municipal or industrial, hot or cold, when discharged
into a stream, will mix and disperse according to tur-
bulent flow characteristics of the stream.  Presently,
significant advances have been made in the understand-
ing of the basic mechanism of dispersion, but the
problem of predicting the time-concentration distri-
bution of a water quality constituent still remains to
be settled.  There are some dispersion models avail-
able, but their applicability is often limited.

     The U. S. Environmental Protection Agency, in its
effort to deal with the stream pollution under Section
303 of the Federal Water Pollution Control Act Amend-
ments of 1972, adopted the QUAL-I and QUAL-II models
to help in formulation of the water quality management
plans for various river basins in Nebraska and other
states.  The water quality model, QUAL-I1,2, was
designed to simulate the dynamic behavior of conser-
vative minerals, water temperature, carbonaceous BOD,
and dissolved oxygen levels in various segments of
natural streams.  The QUAL-II3 model, which is a modi-
fied version of the QUAL-I model, additionally
simulates benthal oxygen demand, nitrogenous BOD,
phosphorous, coliforms, chlorophyll-A, and radioactive
constituents in natural streams.  The simulation of the
dispersion component is identical in the QUAL-I and
QUAL-II models.

     The Nebraska Natural Resources Commission, on
request from the State Department of Environmental
Control, tested the suitability of the QUAL-I and QUAL-II
models in simulating the dynamic behavior of conserva-
tive minerals in the Big Blue River in Nebraska.  The
time-of-travel data4, gathered by the U. S. Geological
Survey on the Big Blue River during August 1973 and
May 1974, were analyzed to characterize the turbulent
dispersion behavior in natural streams.
                     Mathematical Model of Dispersion

     The concentration distribution of a water quality
constituent in  turbulent  streams  is governed by the
law of conservation  of  mass.   The diffusive mass-trans-
port equation for a  conservative  pollutant, where there
are no bio-chemical  changes  taking place in the stream
environment, assuming longitudinal flow and no other
sources and sinks in the  reach, is:
     dc/dt + u dc/dx = d/dx(Dx dc/dx) + d/dy(Dy dc/dy) + d/dz(Dz dc/dz)        ...(1)

where c = local mean concentration; x, y, z = spatial
coordinates in the longitudinal, lateral, and vertical
directions, respectively, measured from the center of
the stream surface as datum; u = mean flow velocity in
the longitudinal direction; Dx, Dy, Dz = turbulent diffu-
sion coefficients in the x, y, z directions, respectively;
and t = time elapsed since  injection  of  the  pollutant
or dye in a natural stream.  Equation 1 is based on
Fick's law of diffusion, where the transport asso-
ciated with the  turbulent fluctuations is proportional
to the concentration gradient.   For one-dimensional
flow, the diffusive mass-transport equation  reduces to:
     dc/dt + V dc/dx = d/dx(DL dc/dx)                                          ...(2)
where DL = longitudinal dispersion coefficient; and
V = average velocity of flow in the reach.  Equations
1 and 2, assuming the dispersion coefficients are
constant in a reach, correspond to:
     dc/dt + u dc/dx = Dx d²c/dx² + Dy d²c/dy² + Dz d²c/dz²                    ...(3)
For sampling stations far downstream of the injection
site, where mixing in the lateral and vertical direc-
tions is almost completed, it can be assumed that
DL = Dx, because

     d²c/dx²  >>  d²c/dy², d²c/dz²

Otherwise, DL would remain a function of Dx, Dy, Dz,
d²c/dx², d²c/dy², d²c/dz², and the non-linearity of flow.
In this study, DL is taken to be a function of Dx
and u/V.
Instantaneous Source

     Let M be the amount of conservative  constituent
injected as a plane source at any point of  the  stream.
The initial and boundary conditions are:

c(x,0) = 0          for all x,

Integral of A(x) c(x,t) dx = M - constituent losses in the
                             reach, for all t,

c(inf,t) = 0        for all t,

dc/dx -> 0   as t -> inf.

The solution of Eq. 3 that satisfies the above condi-
tions is well known and is given by,

-------
     c(x,t) = [M / (A sqrt(4 pi DL t))] exp[-(x-Vt)²/(4DLt)]  -  losses in the reach      ...(4)

The one-dimensional mathematical model of dispersion,
which is also a solution of Eq. 3, adopted in this
study is:

     c(x,t) = [M / (A sqrt(4 pi DL t))] exp[-(x-Vt)²/(4DLt) - k0 V t / L]                 ...(5)

where k0 is a loss factor and L is some characteristic
length.  In this study, k0 is kept unity, and L is
taken equal to x.  If the value of DL is known for a
reach, the concentration distribution can be computed
from Eq. 5.

     The three-dimensional model of dispersion, which
is a solution of Eq. 1, adopted in this study is:

     c(x,y,z,t) = ( ... /sqrt(4 pi DL t)) exp( ... ) ( ... /2H) exp( ... )                ...(6)

To use Eq. 6, three of the four unknowns c, DL, Dy,
and Dz must be known.  The value of DL is first deter-
mined from the one-dimensional model of dispersion,
and Dy and Dz are considered inter-related.  Based on
past work, the empirical relationship is taken as,

     Dy/Dz = ...                                                                          ...(7)

where u = Q/A, V = x/tp, H = A/B; Q, A, B are discharge,
cross-section area, and top width of flow; x is the
reach length; and tp is the time to peak arrival of
the constituent concentration at a sampling station.
The value of Dz which gives the best fit between the
computed and measured time-concentration curves is
taken to be the approximate value of Dz for that
stream reach.

     The empirical equations used in the evaluation of
the turbulent dispersion coefficients, DL, Dy, Dz, in the
longitudinal, lateral, and vertical directions, based
on past work5,6,7,8, are:

     ( ... ) = 6.45 - 0.762 log( ... )                                                    ...(8)

     ( ... ) = -8.1 + 1.558 log( ... )                                                    ...(9)

where rho, mu, and nu are the density, coefficient of
viscosity, and kinematic viscosity of flow at stream
water temperature; and K is a regional dispersion
factor assumed to be unity for the Big Blue River.

     Equations 7, 8, and 9 reveal that the dimension-
less dispersion parameters are a function of the
Reynolds number of flow and the channel configuration
of the stream.  Knowing the dispersion coefficients,
the concentration distribution for an instantaneous
source of pollution can be computed at any point of
the stream.

Continuous Plane Source

     Let m units per unit area of a conservative con-
stituent be dumped at the origin for times t>0 in a
stream moving with a uniform velocity u in the x-direc-
tion.  The concentration at any point (x,t) due to the
constituent m dt' injected during a time element dt'
at time t' is:

     dc(x,t) = [m dt' / sqrt(4 pi DL (t-t'))] exp{-[x - u(t-t')]² / [4DL(t-t')]}

The resulting concentration at time t due to the
water quality constituent dumped continuously at the
origin from time 0 to t is given by,

     c(x,t) = [m / sqrt(4 pi DL)] Int(0 to t) exp{-[x - u(t-t')]² / [4DL(t-t')]} dt'/sqrt(t-t')   ...(10)

     The exact solution of Eq. 10 is not available,
but it can be approximated as,

     c = (m/2u) erfc( ... )                                                               ...(11)

where p is an undefined constant, and erfc is the com-
plementary error function.  The value of x must not
exceed the characteristic length L after which the
mixing in lateral and vertical directions is almost
completed.  The length L can be taken as a multiple of
16DL/V.  For steady-state conditions, when t -> inf,

     c = m/u    for all x                                                                 ...(12)

If M units are the total amount of the constituent
dumped in the stream per unit time, then c = M/Q, or
cQ = constant; therefore,

     cQ = c1 Q1 + c2 Q2                                                                   ...(13)

where c1, Q1 = incoming constituent concentration and
discharge in the stream; c2, Q2 = effluent concentra-
tion and flow discharged into the stream; and c, Q =
resulting outgoing constituent concentration and dis-
charge downstream of the injection site under steady-
state conditions.

Dispersion Component in the QUAL Model

     The diffusive mass-transport equation used in the
QUAL-I and QUAL-II models is the same as Eq. 2.  A
finite-difference method is used for its solution.  To
avoid instability of the solution, an implicit technique
of backward-difference approximation is used to solve
Eq. 3.  The linear equation adopted at time step n + 1,
when its spatial distribution for all distance steps
i at time step n is known, is:

     c(i,n+1) = G(i) - W(i) c(i+1,n+1)                                                    ...(14)

where

     W(i) = d(i)/[b(i) - a(i)W(i-1)],   G(i) = [z(i) - a(i)G(i-1)]/[b(i) - a(i)W(i-1)],

and the coefficients a(i), b(i), d(i), z(i) involve the terms DL/dx², V/dx, dt
and the incremental load dQx(i) dcx(i).

-------
For a head reach, W(1) = d(1)/b(1) and G(1) = z(1)/b(1), with

     z(1) = c(1,n) + dQx(1) dcx(1) dt ( ... )

In Eq. 14, c(i+1,n+1) is unknown, therefore, the solution
starts backward from the last reach, given by,

     c(i,n+1) = G(i)                                                                      ...(15)

where G(i) is defined as in Eq. 14 and z(i) = c(i,n) + dQx(i) dcx(i) dt ( ... ).
This is possible when W(i) = 0.  The assumption of W(i) in
the last reach to be zero for all time increments is
questionable.  During initial periods, when mixing is
not complete, W(i) should not be zero.

     A perusal of numerical values of a, b, d, and z
would reveal that d is approximately zero for all
times, and the resulting concentration can be approx-
imated as,
     c(i,n+1) = G(i)    for all times                                                     ...(16)
 This is not true for the last reach only, as it has
 been adopted in the QUAL model through Eq. 15.
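     The W-G recursion of Eqs. 14 and 15 is the familiar forward-elimination, back-substitution sweep for a tridiagonal system.  The compact sketch below assumes the system has the form a[i]*c[i-1] + b[i]*c[i] + d[i]*c[i+1] = z[i] and leaves the construction of the coefficients aside; all names and the example numbers are hypothetical.

def backward_difference_solve(a, b, d, z):
    n = len(b)
    W, G = [0.0] * n, [0.0] * n
    W[0], G[0] = d[0] / b[0], z[0] / b[0]              # head reach
    for i in range(1, n):
        denom = b[i] - a[i] * W[i - 1]
        W[i], G[i] = d[i] / denom, (z[i] - a[i] * G[i - 1]) / denom
    c = [0.0] * n
    c[n - 1] = G[n - 1]                                # last reach, W taken as zero (Eq. 15)
    for i in range(n - 2, -1, -1):
        c[i] = G[i] - W[i] * c[i + 1]                  # Eq. 14
    return c

# Tiny hypothetical four-reach system
print(backward_difference_solve(a=[0.0, -0.2, -0.2, -0.2],
                                b=[1.4, 1.4, 1.4, 1.4],
                                d=[-0.2, -0.2, -0.2, 0.0],
                                z=[5.0, 4.0, 3.0, 2.0]))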

     It is to be further noted that the QUAL-I and
 QUAL-II models are designed for a continuous source of
 pollution.  This is evident from the values of z(i),
 which are computed cumulatively for constituent in-
 crements, dcx(i), injected at the origin at time intervals
 of At integrated over 0 to t.  However, under steady
 state conditions, an exact solution is available for a
 linear uniform flow, given by Eq. 13.  In this case,
 the resulting concentration is not dependent on the
 dispersion coefficient.  Equation 13 is important
 because steady-state solutions are often required to
 be studied in evaluation of the water quality manage-
 ment plans.

     In the case of a continuous source of pollution,
 the finite-difference approximate solution is there-
 fore valid only for the transition period during which
 the mixing is not completed.  It should also hold good
 for an instantaneous source of pollution, where the
 concentration distribution is dependent on the disper-
 sion rates for all times.

     The QUAL model uses Elder's equation9 to
 evaluate the longitudinal dispersion coefficient,
 described by

     DL = 5.93 U* D

 where U* is the bed shear velocity, given by sqrt(g D Se),
 and Se is the energy slope calculated from the Man-
 ning's equation.  The resulting dispersion equation
 is expressed as

     DL = 22.6 N u D^0.833                                                                ...(17)

 where N is the Manning's coefficient for the reach.
The mean velocity and depth  of  flow are calculated
from the relationships,

     u = alpha Q^beta,  and  D = gamma Q^delta

where alpha, beta, gamma, delta are constants of proportionality pre-
determined from the velocity- and stage-discharge
rating curves.
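     A sketch of how Eq. 17 would be evaluated from a discharge record and the rating-curve constants; the constants below are hypothetical placeholders, not fitted values for the Big Blue River.

def qual_longitudinal_dispersion(Q_cfs, manning_n, alpha, beta, gamma, delta):
    # Eq. 17: DL = 22.6 * N * u * D**0.833, with u and D taken from the rating
    # curves u = alpha*Q**beta (ft/sec) and D = gamma*Q**delta (ft).
    u = alpha * Q_cfs ** beta
    D = gamma * Q_cfs ** delta
    return 22.6 * manning_n * u * D ** 0.833     # ft²/sec

print(qual_longitudinal_dispersion(Q_cfs=25.0, manning_n=0.035,
                                   alpha=0.3, beta=0.4, gamma=0.5, delta=0.3))

The QUAL-model coefficients listed in Table 1 (roughly 0.9 to 2.3 ft²/sec) are of this general magnitude, far below the estimated DL values in the same table.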

     The values of longitudinal dispersion  coeffi-
cients calculated from Eq. 17 and Eq.  8 differ  con-
siderably.  It is later found that Eq.  17 cannot be
used for all the hydraulic flow conditions  in natural
streams.

      Dispersion Simulation  in  the Big Blue River

     A summary of the time-of-travel data and dis-
charge measurements collected by the U. S. Geological
Survey between the Seward and Barneston stations  on
the Big Blue River during August  1973  is given  in
Table 1.  A fluorescent dye, Rhodamine WT,  20 percent
solution in acetic acid, was injected at three loca-
tions along the river and the fluorescence  was  moni-
tored at regular intervals for  about ten days at
fifteen sampling stations, as shown in Fig.  1.   It is
well known that a natural stream  profile is not uni-
form.  Its configuration changes  from  section to
section.  There are some dead pockets  present in a
stream reach, where the dye  is detained temporarily
and is released at later times.   In some cases,  the
measured concentration curve has  two or more peaks.
To make the hydraulic data as consistent as possible,
the observed top-width and area of cross section of
flow are plotted against the measured  discharge for
each gaging station.  Such a plot for  the Big Blue
River at Crete is shown in Fig. 2,  which indicates
that the cross-section area  is not zero for a zero
discharge, and the effective cross section  is then
determined.

     To verify the dispersion model for the Big  Blue
River, the time-concentration distribution  curves  were
computed from the longitudinal dispersion rates  adopt-
ed in the QUAL model (Eq. 17) and also from the one-
and three-dimensional models of dispersion,  Eqs.  5 and
6.  They are shown plotted against the  site  measure-
ments in Figs. 3 and 4, respectively,  for the sampling
site near Seward.  The dispersion  coefficients  evalu-
ated from Eqs. 7, 8, 9, and  17 are also given in
Table 1.  It is evident from Fig.  3 and Table 1  that
the longitudinal dispersion coefficients computed  in
the QUAL model are much less   than those required to
simulate the observed time-concentration distribution.
The concentration distribution resulting from the
QUAL model, therefore, consists of higher peaks  span-
ning short durations only.  The suitability of
Elder's equation for simulating the dispersion
component in the QUAL model therefore needs further
investigation.

     In order to verify the applicability of the
longitudinal dispersion coefficient predicted from
Eq. 8, a test was made to check the response of peak
concentration at different locations along  the  stream.
Variations in the computed peak concentrations  and the
site measurements are shown in Fig. 5  for the stream
reach between the Seward and Crete  stations  on  the Big
Blue River.  Figure 5 indicates that the one-dimen-
sional model of dispersion, Eq. 5, behaved  satisfac-
torily, and the values of longitudinal dispersion
coefficients predicted from Eq. 8 agree well with the
turbulent dispersion characteristics of  the  stream.
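     As an illustration of how the Eq. 5 time-concentration curve is generated for a sampling site, the sketch below assumes the usual plane-source prefactor M/(A sqrt(4 pi DL t)), with k0 = 1 and L = x as stated earlier; the geometry is taken loosely from row 1 of Table 1 and the function name is hypothetical.

import math

def concentration_eq5(x_ft, t_sec, M_lb, A_ft2, DL_ft2_s, V_ft_s, k0=1.0, L_ft=None):
    # One-dimensional model, Eq. 5; result in lb/ft**3.
    L_ft = x_ft if L_ft is None else L_ft
    return (M_lb / (A_ft2 * math.sqrt(4.0 * math.pi * DL_ft2_s * t_sec))) * math.exp(
        -(x_ft - V_ft_s * t_sec) ** 2 / (4.0 * DL_ft2_s * t_sec)
        - k0 * V_ft_s * t_sec / L_ft)

x = 1.20 * 5280.0                      # first sampling site below Seward, ft
V = x / (3.80 * 3600.0)                # velocity implied by the 3.8 hr time to peak
for hours in (2.0, 3.8, 6.0, 9.0):
    t = hours * 3600.0
    print(hours, concentration_eq5(x, t, M_lb=6.0, A_ft2=24.2, DL_ft2_s=90.84, V_ft_s=V))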

-------
                                                     TABLE 1
                                           TIME-OF-TRAVEL MEASUREMENTS
                                            BIG BLUE RIVER (NEBRASKA)
                                                      1973

                                                                       QUAL Model      Estimated Dispersion
 Sl.   Distance             X-section   Top      Water    Time to     Disp. Coeff.     Coefficients (ft²/sec)
 No.   Traversed   Flow     Area        Width    Temp.    Peak        (ft²/sec)        DL        Dy        Dz
       (miles)     (cfs)    (ft²)       (ft)              (hrs)

 Dye Dumped at Highway 34 at Seward (amount 6 lb)
  1      1.20       25.50    24.20       36.00     25        3.80        0.92           90.84    0.0368    0.000013
  2      5.36       25.00    59.00       38.00     25       28.60        0.91           28.60    0.0031    0.000037
  3     20.46       25.00    36.10       32.00     24       91.00        0.91           52.95    0.0103    0.000027
  4     25.26       25.00    40.00       36.00     24      197.20        0.91           36.32    0.0279    0.000032
  5     31.06       25.00    22.00       24.00     24      232.90        0.91           74.99    0.1046    0.000032

 Dye Dumped at Site Below Dam Southeast of Milford (amount 20 lb)
  6      5.80       32.30    27.10       26.00     28       17.00        0.65          123.66    0.0210    0.000027
  7     19.76      132.40   103.00       85.00     28       37.00        1.69          171.99    0.1122    0.000029
  8     49.00      152.00    85.00       50.00     26       87.00        1.87          293.33    0.0973    0.000057
  9     54.15      156.00   100.00       82.00     26       94.10        1.90          227.53    0.1785    0.000031

 Dye Dumped at Damsite at DeWitt (amount 35 lb)
 10      5.15      194.00   161.00      110.00     28        6.30        2.21          200.31    0.0586    0.000031
 11     23.07      210.00    96.40       79.00     28       32.50        2.23          380.74    0.3572    0.000034
 12     29.25      200.00   112.00      110.00     28       45.00        1.78          271.15    0.4411    0.000024
 13     32.70      192.00   112.00      127.00     28       67.00        1.73          218.86    0.9610    0.000022
 14     38.50      295.00   190.00      126.00     28       90.50        2.27          210.29    0.6905    0.000049
 15     48.00      300.00   148.00       86.00     28      161.20        2.29          269.21    1.5992    0.000081
                      References

1.  Simulation of Water Quality in Streams and Canals,
      Texas Water Development Board, Austin, Texas,
      Report 128 (Aug. 1971).
2.  QUAL-I Simulation of Water Quality in Streams and
      Canals:   Program Documentation and User's Manual,
      Texas Water Development Board, Austin, Texas,
      (Sept. 1970).
3.  Computer Program Documentation for the Stream
      Quality  Model QUAL-II,  Water Resource
      Engineers, Inc.,  Walnut Creek, California,
      (May 1973).
4.  "Time-of-Travel Measurements on the Big Blue
      River, August 1973 and May 1974".  Unpublished
      Report,  Water Resources Div., U. S. Geological
      Survey,  Lincoln,  Nebraska.  Letters to Mr. Dayle
      Williamson, Exec. Sec., Nebraska Natural
      Resources Comm. of Feb. 5, 1974 and Nov. 8,
      1974.
5.  Bansal, Mahendra K., "Dispersion and Reaeration in
      Natural  Streams", Ph.D. Thesis, Univ. of Kansas,
      Lawrence, Kansas  (May 1970).
6.  Bansal, Mahendra K., "Dispersion in Natural
      Streams".  Jour,  of Hydr. Div., ASCE, HY-11,
      1867 (Nov. 1971).
7.  Bansal, Mahendra K., "Turbulent Dispersion in
      Natural  Streams and Laboratory Channels".  Proc.
      XV Cong., Int. Assoc. of Hydr. Res.,  Istanbul,
      Turkey,  vol. 2, B-6, 39 (Sept. 1973).
8.  Bansal, Mahendra K., Simulation of Dispersion
      Component in Water Quality Model, Big Blue
      River (Nebraska),  Tech. Publ., Nebraska
      Natural  Resources Comm., Lincoln, Nebraska,
      (July 1975) .
9.  Elder, J.  W., "The  Dispersion of a Marked Fluid
      in Turbulent Shear Flow".  Jour. Fluid Mech.
      Vol. 5,  pt. 4 (1959).
                                                                Fig.  1  Time-of-Travel Sampling Stations
                                                                        along the Big Blue River
                                                                        (Nebraska)

-------
[Fig. 2  Discharge Measurements on the Big Blue River near Crete (Nebraska) -
         observed cross-section area and top width plotted against discharge in cfs.]

[Fig. 3  QUAL-I Computed versus Measured Time-Concentration in the Big Blue
         River near Seward (Nebraska) - observed and QUAL-model concentration
         distributions versus time elapsed since injection of dye, in hours.]

[Fig. 4  Simulated versus Measured Time-Concentration in the Big Blue River
         near Seward (Nebraska).]

[Fig. 5  Peak Concentration Variation in the Big Blue River between Seward
         and Crete (Nebraska) - peak concentration versus distance traveled
         in miles from the injection site.]

-------
                                 SELECTING THE PROPER REAERATION COEFFICIENT FOR
                                           USE IN WATER QUALITY MODELS
                                                 Andrew P. Covar
                                            Administrative Operations
                                            Texas Water Quality Board
                                                  Austin, Texas
                      Abstract

Various methods for calculating atmospheric reaeration
coefficients for use in mathematical  water quality
models are reviewed, and a rational  engineering method
is developed to guide the engineer in the selection of
a proper predictive equation.

                    Introduction

The waste assimilation capacity for oxygen demanding
materials of a body of water is primarily a function
of three parameters: the volume of the water body, the
allowable dissolved oxygen deficit,  and the rate at
which oxygen enters the water body from the atmosphere.
The first two are simple concepts and are easily cal-
culated, but the third, the reaeration coefficient, is
more complex and more difficult to obtain.  Many for-
mulae have been developed for the water quality analyst
to use in estimating this important coefficient, and
in the past, it has been the practice to use whichever
of these formulae produced the best "fit" to available
dissolved oxygen data.  While this may be an acceptable
practice in some circumstances, it does leave the esti-
mation of a very important parameter open to personal
interpretation and bias.  For this reason, a more
objective method is suggested.

                       Review

Streeter and Phelps1 (1925)

Streeter and Phelps in 1925 developed the classic oxy-
gen sag equation for the prediction of the oxygen
profile in a flowing stream.  The equation was
                    dD/dt = K1 L - K2 D                                  (1)
where dD/dt is the rate of change of the dissolved
oxygen deficit, L is the amount of oxygen demanding
material in the stream, D is the dissolved oxygen
deficit, and K1 and K2 are the rate constants of
decay and reaeration.
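For reference, Eq. 1 with constant K1 and K2 (and L decaying as L0*exp(-K1*t)) has the familiar closed-form sag; the sketch below uses hypothetical numbers only to show how strongly the computed profile depends on the choice of K2.

import math

def deficit(t_days, L0, D0, K1, K2):
    # Classic sag solution: D(t) = K1*L0/(K2-K1)*(exp(-K1*t)-exp(-K2*t)) + D0*exp(-K2*t)
    return (K1 * L0 / (K2 - K1)) * (math.exp(-K1 * t_days) - math.exp(-K2 * t_days)) \
        + D0 * math.exp(-K2 * t_days)

for K2 in (0.6, 1.8):                          # two candidate reaeration rates, per day
    print(K2, [round(deficit(t, L0=15.0, D0=1.0, K1=0.3, K2=K2), 2) for t in range(6)])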

The prediction equation for K2, the reaeration rate
constant, was

                    K2 = [Z U^m / (H')²] x 2.31                          (2)

                          NOTE:  (all equations give
                                 K2 to the base e)

where U is the stream velocity,  H' is the depth above
minimum low water, Z is an irregularity factor, and m
is a function of the mean change in velocity per change
in gage height.  Several  of the  variables in this ex-
pression must be empirically evaluated.  This work did
set the precedent for equations  of this type and much
of the subsequent research has been on the evaluation
of constants to use in similar equations.
                                                          O'Connor-Dobbins2 (1958)

                                                          This work was based on the theories of turbulent flow
                                                          and the rate of renewal of saturated surface waters.   A
                                                          theoretical  development was presented along with cer-
                                                          tain lab findings which tend to support some of the
                                                          assumptions  made.  The predictive equation developed
                                                          for streams  displaying isotropic turbulence was
                                                               K2 = (Dm U)^(1/2) / H^(3/2)                (3)
where Dm is the molecular diffusion  coefficient and H
is the average stream depth.  An additional  equation
was developed for streams with nonisotropic  turbulence
but O'Connor has since recommended using  only  the form
shown.  Based on field data reported  by others and some
rather rough estimates of system geometry, a comparison
was made of reported values vs. computed values.  The
authors reported good agreement.

Part of the criticism of their work  has centered on the
somewhat arbitrary manner in which they classified
streams as isotropic vs. nonisotropic.  This is not sig-
nificant in light of the O'Connor recommendation that
only the "isotropic" equation be used.  Other criticism
involves some of the theoretical assumptions made in
the development of the equation, and  the  estimates of
stream geometry in the field area.  Churchill, et al .3
(1962) imply that the field data were for polluted
streams which would interfere with verification.  There
is general agreement that the equation fairly accurate-
ly predicts reaeration coefficients for many different
systems.

Churchill-Elmore-Buckingham3 (1962)

This work was based on observed reaeration rates below
dams from which oxygen-deficient water was released at
a constant flow during the course of  the  experiment.
This work is generally felt to be the most extensive
and reliable set of field data available  for the calcu-
lation of reaeration rates.  Many different equations
were tested and one important finding was that the
reliability of the equation was not significantly im-
proved by the addition of terms for slope, viscosity,
surface tension, or any other of the many factors which
could have an effect on the reaeration process.  The
equation suggested for use
                                                               K2 = (5.026 U^0.969 / H^1.673) x 2.31      (4)
                                                          is  of the same general  form as the O'Connor-Dobbins
                                                          equation.

                                                          Owens-Edwards-Gibbs4 (1964)

                                                          This  study involved the deaeration of six English
                                                          streams  with  sodium sulfite and monitoring the oxygen
                                                          recovery.   Combining their data with that of Gameson,

-------
et al.5 (1955) resulted in

                    K2 = (9.41 U^0.67 / H^1.85) x 2.31                   (5)
The streams involved in this study were generally  less
than 2.0 feet deep.

Langbein-Durum6 (1967)

Langbein and Durum combined the field data of O'Connor
and Dobbins (1958) and of Churchill, et al_.  (1962)
with the lab data of Krenkel and Orlob (1963) and  of
Streeter, et al.7 (1936) and obtained
                    K2 = (3.3 U / H^1.33) x 2.31                         (6)
 Not all of Churchill's data were included  in  the  analy-
 sis.  The grouping of vastly dissimilar data  to derive
 a single equation could be questioned.  A  better  ap-
 proach might have been to apply the separate  equations
 only for stream conditions similar to  those used  in
 their derivation.

 Isaacs-Gaudy8 (1968)

 Using a circular trough with recirculating water, a
 regression analysis on reaeration data yielded
                    K2 = (3.053 U / H^1.5) x 2.31                        (7)
 The applicability of this equation  has  been  criticized
 due to differences in stream  flow characteristics  in
 circular tanks and those found  in natural  streams.

 Negulescu-Rojanski  (1969)

 Similar to the work of  Isaacs and Gaudy,  this study
 involved a recirculating flume  to yield
                    K2 = 4.74 (U/H)^0.85 x 2.31                          (8)
 Due to  the type of apparatus,  the  depths  were limited
 to less than 0.5 feet.

 Thackston-Krenkel-Orlob10'11'12 (1963,  1966,  1969)

 Much work has been done  by  these investigators using
 laboratory flumes and  some  field data.  There is a
 lack of agreement among  the various equations derived,
 and, as in similar work,  the applicability of labora-
 tory flume data to wider and deeper channels  can be
 questioned.  The reaeration process is  clearly related
 to stream turbulence and similarities between turbu-
 lent flow in a laboratory flume and the turbulence in
 a more  geometrically complex natural  channel  may not
 be as great as was assumed.

Tsivoglou13,14 (1967, 1972)

 Using a gas tracer technique developed  in 1966, the
 author was able to directly measure the rate  of gas
 transfer between a stream and the  atmosphere.
 Tsivoglou concludes the  "reaeration rate  coefficient
 is directly proportional  to the rate of energy expen-
 diture  in nontidal freshwater streams."  In other
 words,  K2 equals the change in water elevation per
 unit time times some constant of proportionality.
 The equation suggested was
                    K2 = 0.054 (Δh/t)    at 25°C                         (9)
with typical units for Δh and t being feet and days.
Using 0.054 as the constant all  the  observed values for
K2 fell within ±50% of that value  predicted by the
equation.  The author further  concluded  that flow
changes by a factor of 2 or 3  do not significantly
affect K2.  While the tracer technique offers the first
reliable method for directly measuring the rate of gas
transfer in natural streams, there was considerable
"scatter" in the data.

Review Summary

In general, the laboratory studies involved flow re-
gimes much different than those  found in natural
streams.  These studies are useful in testing some of
the conceptual models which strive to explain the re-
aeration process but their applicability to natural
streams can be questioned.  The  investigators using
field data used the best techniques  available at that
time to measure or estimate the  reaeration coefficient.
For a more detailed critique of  previous work than has
been presented here, see Lau15 (1972) and Bennett and
Rathbun16 (1972).

                  A Suggested  Method

The arbitrary selection of an  equation to predict the
reaeration coefficient can significantly bias the re-
sults of an analysis.  This is illustrated in Figure  1.
The only difference in the three dissolved oxygen sags
shown is in the reaeration coefficient as predicted by
the three equations suggested  by O'Connor-Dobbins;
Churchill, et^ aj_.; and Owens,  e_t ]»!_.   As you can see,
the O'Connor-Dobbins curve predicts  a minimum dissolved
oxygen concentration of 5.0 mg/1 while the Churchill,
et al. curve predicts a minimum  of 2.3 mg/1  and a zone
oT dissolved oxygen less than  5.0  mg/1 extending for
80 miles.  It would obviously  make a great deal  of
difference in an analysis which  equation was used to
predict this coefficient.  Some  of these differences
might be eliminated given a good set of  data and a
proper calibration procedure but adequate data are
frequently not available.

Of the several equations available to the water quality
analyst, the two suggested by  O'Connor and Dobbins and
Churchill, et al. have much in their favor.  The two
equations are similar in form  and  have each been used
extensively.  Considerable effort  was made to verify
                                                 [FIGURE 1. Effects of Different K2 Equations on Dissolved Oxygen Sag -
                                                  dissolved oxygen versus river mile for the three K2 equations, with the
                                                  saturation dissolved oxygen level indicated.]

-------
each with available data.   Figure  2  illustrates the
data used by O'Connor and  Dobbins  in the field veri-
fication of their equation  and  the data  used by
Churchill, et al. in their field work.  Also shown
is the data used by Owens, et al. in their work.  It
is evident that each equation was  derived from or veri-
fied using streams with significantly different flow
characteristics.  There is  very little overlap in the
data.  Since each of the authors cautioned against the
use of their equation for streams  with vastly different
hydraulic characteristics,  the  figure has been divided
into three regions each of  which roughly includes most
of the data used in one of  the  studies mentioned.  The
"A" line which divides the  data of O'Connor and Dobbins
from that of Churchill, et al. is also a line where the
two equations suggested by  these authors yield identi-
cal answers.  The fact that this equivalence line sep-
arates the data so neatly tends to support the work of
both groups.  The "B" line  was  arbitrarily set at a
depth equal to 2.0 feet to  define  the region containing
most of the Owens, et al. data.

It is suggested that each of the three equations cited
is most appropriate for combinations of depth and
velocity similar to those used in the original research.
For depths less than 2.0 feet, the equation suggested
by Owens, et al. seems to have the strongest backing.
For streams with depths greater than 2.0 feet,  the
selection of a proper equation  would depend on which
area of Figure 2 best describes the  hydraulic proper-
ties of the stream in question.
The  effect of such a selection method  is  shown in
Figure  3.   For each of the three areas, the  reaeration
coefficient has been calculated and  plotted  on the
graph using the proper equation.  Note  the exact agree-
ment between the O'Connor and Dobbins equation and the
Churchill, et al. equation at all points along the
upper line.   There is fair agreement between the upper
and  lower  areas of the graph at either  end of the line
drawn at a depth of 2.0 feet with progressively less
agreement  toward the center.
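
A short sketch may make the selection rule concrete.  The
coefficient values below are the forms of the three equations as
commonly quoted in later reviews (K2 per day at 20 C, velocity in
feet per second, depth in feet) and should be verified against the
original papers; the function names, and the tie-breaking rule used
when the region of Figure 2 is not specified, are illustrative
assumptions rather than part of the method described above.

# Illustrative selection of a K2 equation by depth and velocity.
# The coefficients are the commonly quoted forms (K2 in 1/day at
# 20 C, U in ft/s, H in ft); check them against the original papers.

def k2_oconnor_dobbins(U, H):
    return 12.9 * U**0.5 / H**1.5

def k2_churchill(U, H):
    return 11.6 * U**0.969 / H**1.673

def k2_owens(U, H):
    return 21.6 * U**0.67 / H**1.85

def select_k2(U, H, region=None):
    """Apply the suggested method: Owens, et al. below the 2-ft "B"
    line; otherwise the equation for the region of Figure 2 in which
    the stream falls, the regions being separated by the "A" line
    where the O'Connor-Dobbins and Churchill, et al. equations agree."""
    if H < 2.0:
        return k2_owens(U, H)
    if region == "oconnor_dobbins":
        return k2_oconnor_dobbins(U, H)
    if region == "churchill":
        return k2_churchill(U, H)
    # If the region is not specified, assume (an assumption, not part
    # of the paper) that each equation gives the larger estimate on
    # its own side of the equivalence line and take the larger value.
    return max(k2_oconnor_dobbins(U, H), k2_churchill(U, H))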

                      Conclusions

1.   Much research has been done concerning the  evalua-
tion of the  reaeration rate coefficient and  its  appli-
cation to  the field of water quality management.

2.   With some notable exceptions there is  generally
poor agreement between the various equations  derived.

3.   Equations  derived from very shallow laboratory
flume data should not be  applied to natural  streams
which probably have entirely  different hydraulic
characteristics.

4.   Equations  derived from field studies are best
applied in instances  where stream conditions  are
similar to those  from which the equations  were derived.
       FIGURE 2.  Field Data Considered By Three Different Investigators.
                  (velocity, feet per second, vs. depth)

       FIGURE 3.  K2 vs Depth and Velocity Using The Suggested Method.

-------
                     References

1.   Streeter,  H.  W.,  and Phelps,  E.  B.,  1925.   A Study
    of  the  Pollution  and Natural  Purification  of the
    Ohio  River.   U.S.  Public  Health  Service, Bull.  146.

2.   O'Connor,  D.  J. and  Dobbins,  W.  E.,  1958.   "Mech-
    anism of  Reaeration  in  Natural Streams."   Am.  Soc.
    Civil Engineers Trans., v.  123,  p.  641-684.

3.   Churchill,  M.  A.,  Elmore, H.  L.,  and Buckingham,
    R.  A.,  1962.   "The Prediction of Stream Reaeration
    Rates."  Am.  Soc.  Civil Engineers Journ.,  v.  88,
    no.  SA-4,  p.  1-46.

4.   Owens,  M.,  Edwards,  R.  W.,  and Gibbs, J. W.,  1964.
    "Some Reaeration  Studies  in  Streams."  Int.  Jour.
    Air and Water Pollution,  v.  8, p. 469-486.

5.   Gameson,  A.  L.  H., Truesdale, G.  A., and Downing,
    A.  L.,  1955.   "Reaeration Studies in a Lakeland
    Beck."   Inst.  Water  Engineers Jour., v.  9,  no.  7,
    p.  571-594.

6.   Langbein,  W.  B.,  and Durum,  W. H., 1967.   The
    Aeration  Capacity of Streams. U.S.  Geological
    Survey, Circ.  542.

7.   Streeter,  H.  W.,  Wright,  C.  T.,  and Kehr,  R.  W.,
    1936.  "Measures  of  Natural  Oxidation in Polluted
    Streams."   Sewage Works Journal, v.  8, no.  2.

8.  Isaacs, W.  P., and Gaudy, A.  F., 1968.  "Atmospher-
    ic Oxygenation in a  Simulated Stream."  Am.  Soc.
    Civil Engineers Jour.,  v. 94, no. SA-2, p.  319-344.

9.  Negulescu, M., and Rojanski,  V., 1969.  "Recent
    Research  to Determine Reaeration Coefficient."
    Water Research, v. 3, no. 3,  p.  189-202.
10.  Krenkel, P. A., and Orlob, G. T., 1963.  "Turbu-
     lent Diffusion and the Reaeration Coefficient."
     Am. Soc. Civil Engineers Trans., v. 128, p. 293-
     334.

11.  Thackston, E. L., 1966.  "Longitudinal Mixing
     and Reaeration in Natural Streams."  Vanderbilt
     University, Ph.D. dissertation.

12.  Thackston, E. L., and Krenkel, P. A., 1969.
     "Reaeration Prediction in Natural Streams."  Am.
     Soc. Civil Engineers Jour., v. 95, no. SA-1,
     p. 65-94.

13.  Tsivoglou, E. C., 1967.  Tracer Measurement of
     Stream Reaeration.  Federal Water Pollution
     Control Administration, June, 86 p.

14.  Tsivoglou, E. C., and Wallace, J. R., 1972.
     Characterization of Stream Reaeration Capacity.
     U.S. Environmental Protection Agency, Rept.
     R3-72-012.

15.  Lau, Y. L., 1972.  A Review of Conceptual Models
     and Prediction Equations for Reaeration in Open-
     Channel Flow.  Dept. of Environment, Canada,
     Technical Bull. no. 61.

16.  Bennett, J. P., and Rathbun, R. E., 1972.
     Reaeration in Open-Channel Flow.  U.S. Geological
     Survey, Professional Paper 737.

-------
          RECEIV-II, A GENERALIZED DYNAMIC  PLANNING MODEL FOR WATER QUALITY MANAGEMENT

      Charles  V.  Beckers,  Peter E. Parker*,  Richard N.  Marshall and Stanley G.  Chamberlain

                                      Raytheon Company
                           Oceanographic  and Environmental Services
                               Portsmouth,  Rhode Island 02871
                  Introduction

     Under funding by the US Environmental
Protection Agency, Raytheon has developed
the RECEIV-II Water Quantity and Quality
Model2.  In this paper, we discuss the
background of the model development work
and describe the model in some detail.
To illustrate model applications, we
present results from its use on the
Pawtuxet River (RI)3 and Norwalk Harbor (CT).
We conclude the paper with a brief discussion
of some of the apparent limitations of the
model and areas in which improvements have
already been made by Raytheon.
                  Background


     The need for RECEIV-II stems primarily
from Public Law 92-500, the Federal Water
Pollution Control Act Amendments of 1972.
Several sections of the law call for the
states or their designated agencies to pro-
duce plans, commonly designated "303e" and
"208" plans, for achievement of the stream
water quality standards through selective
application of discharge limitations and
other institutional controls on the quality
of water reaching the streams.  In each case,
accurate forecasting of stream quality is
needed to determine the potential of the
proposed plans for achieving these goals.
Such forecasting is best done using comput-
erized, mathematical models of water quality5,
allowing rapid assessment of expected water
quality under rarely observed conditions,
e.g. the 7-day, 10-year low flow.

     Anticipating the need to produce such
plans on a number of New England waterways,
USEPA awarded Raytheon a contract to develop
and calibrate water quality models for some
18 waterways in Rhode Island and
Connecticut1.  The calibrated models were to
be capable of representing the following
appropriately linked water quality
constituents in each waterway:

        • phosphorus

        • coliforms

        • ammonia nitrogen

        • nitrite nitrogen

        • nitrate nitrogen

        • total nitrogen

        • carbonaceous biochemical oxygen demand (BOD)

        • chlorophyll a

        • dissolved oxygen

        • salinity

        • a non-conservative metal ion to be selected by Raytheon.

 * Now at Institute of Paper Chemistry, 1043 East South River,
   Appleton, Wisconsin 54911.
     While the waterways to be modeled are
extremely diverse, ranging from upland
streams through shallow lakes and  impound-
ments to estuaries, it was concluded  that
overall project constraints could  best be
satisfied through use of a single, general-
ized set of coding.  The generalized model
had to be capable of representing  the
following basin features:

        • Time-varying to handle tidal
          conditions and the chlorophyll a
          growth-death cycle.

        • Two-dimensional to permit
          modeling the broad, vertically
          homogeneous areas of the
          vertically well-mixed estuaries.

        • Multiple ocean inlets to cover
          such cases as New Haven  Harbor
          and Narragansett Bay, where there
          are multiple inlets at the  ocean
          boundary.

        • Multiple shallow dams to deal with
          the multiplicity of mill ponds
          found on the New England rivers.
          (In the State of Rhode Island alone,
           there are nearly 400 dams, most
          dating from the nineteenth  century
          textile industry.)

        • Conservative and non-conservative
          constituents to handle the variety
          of water quality constituents
          considered in the project.

In addition, contractual clauses required
that the models be capable of running on a
variety of computers and of employing metric
units.

     Since no existing model satisfied all
the requirements, it was decided to modify
the Receiving Water Block  (RECEIV) of the
USEPA1s Storm Water Management Model  (SWMM)6.
RECEIV was selected because it already included
many of the necessary features, particularly
the time-varying property, and could  be
straightforwardly modified to incorporate
the others.  It also had the incidental
advantage of providing previously  unavail-

-------
able capability to SWMM.  To clearly indicate
its lineage,  the new model was named
RECEIV-II.
          Description of RECEIV-II

     RECEIV-II is a generalized, stand-alone
model intended for use in forecasting water
quality on a basin-wide scale, under alterna-
tive conditions of point and non-point
discharge, stream flow and desired waterway
usage.  The emphasis is on forecasting the
far-field effects of an individual discharge
or assembly of discharges.  Examples include
the installation of additional treatment
capacity at a municipal or industrial dis-
charge, development of a new industrial site,
population growth, institution of new zoning
regulations, or establishment of a new
control policy for flow regulation.

     Fundamentally,  RECEIV-II consists of two
separated models coupled together.  Hydraulic
conditions are modeled first, using the
QUANTITY section of RECEIV-II.  Data on the
temporal and spatial distribution of flow are
then automatically transferred to the QUALITY
section, in which water quality conditions
are computed.

     The analytical framework used to describe
the waterway discretizes space and time, per-
mitting numerical integration of the partial
differential equations of hydrodynamics and
water quality.  The spatial framework uses the
discrete element method7 in which state
variables such as surface elevation and con-
stituent concentration are computed at nodes
and transport (flow and velocity) is computed
in channels linking the nodes.  The temporal
framework consists of discrete, uniform
timesteps, with step duration selected in-
dependently in the QUANTITY and QUALITY
sections.  (For computational reasons, the
QUALITY timestep must always be greater than
or equal to the QUANTITY timestep.)
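
The nesting of the two timesteps can be sketched as follows (an
illustrative Python outline, not RECEIV-II coding; the routine names
are placeholders, and the QUALITY step is assumed here to be an
integer multiple of the QUANTITY step).

# Outline of the two-level timestep framework (placeholder names).
# step_quantity and step_quality are callables supplied by the user;
# the QUALITY step is an integer multiple of the QUANTITY step.

def run_simulation(state, dt_quantity, dt_quality, t_end,
                   step_quantity, step_quality):
    assert dt_quality >= dt_quantity
    n_sub = round(dt_quality / dt_quantity)   # hydraulic substeps per quality step
    t = 0.0
    while t < t_end:
        for _ in range(n_sub):                # advance stage, flow and velocity
            step_quantity(state, dt_quantity)
        step_quality(state, dt_quality)       # advance constituent concentrations
        t += dt_quality
    return state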

Fundamental Equations

      The fundamental equations of the QUANTITY
model are the reduced, one-dimensional form of
the equation of motion for uniform, incompress-
ible  flow in the open channels between the
nodes:
$$\frac{\partial v}{\partial t} = -v\,\frac{\partial v}{\partial x} - g\,\frac{\partial H}{\partial x} - F_f + F_w \qquad (1)$$

and the continuity equation expressing conservation of mass for an
incompressible fluid in the open-topped nodes:

$$\frac{\partial H}{\partial t} = -\,\frac{Q}{A} \qquad (2)$$
where:
      v  = velocity (m/s)
      t  = time (s)
      x  = distance along the channel (m)
      H  = water surface elevation referenced to the datum plane
           of the model (m)
      g  = gravitational acceleration (= 9.8 m/s2)
      Ff = acceleration due to fluid resistance (m/s2)
      Fw = acceleration due to wind stress (m/s2)
      Q  = the net flow out of the node (m3/s)
      A  = the surface area of the node (m2)
      The acceleration due to fluid resistance is
      estimated by the Manning formula:

$$F_f = \frac{g\,n^2}{R^{4/3}}\,v\,|v| \qquad (3)$$

      where  n = Manning's roughness factor (s/m^{1/3})
             R = hydraulic radius (m)

      The acceleration due to wind stress is
      estimated by the Ekman formula:

$$F_w = \frac{K}{R}\,\frac{\rho_a}{\rho_w}\,U^2 \cos\psi \qquad (4)$$

      where  K       = wind stress coefficient (= 0.0026)
             ρa/ρw   = ratio of air density to water density
                       (= 1.165 x 10^-3)
             U       = wind speed (m/s)
             ψ       = angle between the wind direction and the
                       axis of the channel
      The numerical technique used to integrate
      equations (1) and (2) is detailed in the
      RECEIV-II Documentation Report2.
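
For illustration only, a generic explicit link-node update consistent
in spirit with equations (1) through (4) is sketched below; it is not
the scheme of the Documentation Report.  The convective term
v dv/dx is omitted, channel geometry is held fixed, and all structure
and variable names are invented for the sketch.

# Generic explicit link-node step (illustrative only; not the
# RECEIV-II integration scheme).  Constants follow equation (4).

from dataclasses import dataclass
import math

G = 9.8                # gravitational acceleration (m/s2)
K_WIND = 0.0026        # wind stress coefficient
RHO_RATIO = 1.165e-3   # air/water density ratio

@dataclass
class Node:
    H: float           # water surface elevation (m)
    area: float        # surface area (m2)

@dataclass
class Channel:
    up: int            # index of upstream node
    dn: int            # index of downstream node
    length: float      # channel length (m)
    xsect_area: float  # flow cross-section (m2)
    hyd_radius: float  # hydraulic radius R (m)
    n_manning: float   # roughness (s/m^(1/3))
    v: float = 0.0     # velocity (m/s)

def step_quantity(nodes, channels, dt, wind_speed=0.0, wind_angle=0.0):
    # update channel velocities from equation (1), convective term omitted
    for c in channels:
        dHdx = (nodes[c.dn].H - nodes[c.up].H) / c.length
        Ff = G * c.n_manning**2 * c.v * abs(c.v) / c.hyd_radius**(4.0 / 3.0)
        Fw = (K_WIND / c.hyd_radius) * RHO_RATIO * wind_speed**2 * math.cos(wind_angle)
        c.v += dt * (-G * dHdx - Ff + Fw)
    # update node elevations from equation (2): dH/dt = -Q/A, Q = net outflow
    net_out = [0.0] * len(nodes)
    for c in channels:
        Q = c.v * c.xsect_area
        net_out[c.up] += Q
        net_out[c.dn] -= Q
    for node, Q in zip(nodes, net_out):
        node.H -= dt * Q / node.area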

           The fundamental form of the equations
      describing volumetric average water
      quality constituent concentration in a
      node is:
$$\frac{dC}{dt} = \frac{1}{V}\,\frac{dM}{dt} - \frac{M}{V^2}\,\frac{dV}{dt} \qquad (5)$$

      where  C = volumetric average constituent concentration
                 (typically, gm/m3)
             M = constituent mass in node (typically, gm)
             V = volume of node (m3)

      Equation (5)  expresses the concept of
      conservation of mass in a control volume,
      frequently called a continuously stirred
      tank reactor (CSTR)8.   The derivatives
      on the right can be evaluated in terms
      of the flows and constituent masses
      crossing the boundaries of the node,
      and in terms of the bio-chemical reactions

-------
taking place in the node:

$$\frac{dC}{dt} = \sum_j \frac{Q_j}{V}\,(C_j - C) + \sum_i \frac{Q_i}{V}\,(C_i - C) + \frac{1}{V}\left(\sum M_g - \sum M_l\right) \qquad (6)$$

where  Q_j = flows entering the node from upstream nodes (m3/s)
       Q_i = flows entering the node from point and non-point
             sources (m3/s)
       C_j = concentration of constituent entering the node from
             upstream nodes (typically, gm/m3)
       C_i = concentration of constituent entering the node from
             point and non-point sources (typically, gm/m3)
       M_g = rate of constituent mass gained due to biological,
             physical or chemical processes in the node
             (typically, gm/s)
       M_l = rate of constituent mass lost due to biological,
             physical or chemical processes in the node
             (typically, gm/s)
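
Evaluation of equation (6) for a single node and constituent can be
sketched as follows (illustrative Python; the argument names are not
RECEIV-II variables, and sum_gain and sum_loss stand for the
biochemical source and sink terms such as equation (7) for dissolved
oxygen).

# Sketch of equation (6) for one node and one constituent
# (illustrative names, not RECEIV-II variables).

def dCdt_node(C, V, upstream, sources, sum_gain, sum_loss):
    """C: nodal concentration (gm/m3); V: node volume (m3);
    upstream, sources: lists of (Q, C_in) pairs (m3/s, gm/m3);
    sum_gain, sum_loss: total reaction gain and loss rates (gm/s)."""
    dCdt = 0.0
    for Q, C_in in upstream:           # inflow from upstream nodes
        dCdt += Q / V * (C_in - C)
    for Q, C_in in sources:            # point and non-point sources
        dCdt += Q / V * (C_in - C)
    dCdt += (sum_gain - sum_loss) / V  # biochemical gains and losses
    return dCdt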
The interactions among the eleven water quality constituents
modeled in RECEIV-II are presented in the ΣMg and ΣMl terms.
For example, the rather complex interactions affecting the
dissolved oxygen are formulated as:
$$k_9\,(C_9^{*} - C_9) \;-\; k_7\,C_7 \;-\; a_{9,6}\,k_6\,C_6 \;-\; a_{9,5}\,k_5\,C_5 \;+\; a_{9,8}\,(G_8 - D_8)\,C_8 \;-\; b/R \qquad (7)$$

where  C_9     = nodal concentration of DO
       C_9^{*} = saturation concentration of DO
       k_9     = DO reaeration rate
       C_7     = nodal concentration of carbonaceous BOD
       k_7     = rate of oxidation of carbonaceous BOD
       C_6     = nodal concentration of ammonia nitrogen
       k_6     = rate of oxidation of ammonia nitrogen to
                 nitrite nitrogen
       a_{9,6} = stoichiometric ratio of oxygen in nitrite
       C_5     = nodal concentration of nitrite nitrogen
       k_5     = rate of oxidation of nitrite nitrogen to
                 nitrate nitrogen
       a_{9,5} = stoichiometric ratio of oxygen in nitrate
       C_8     = nodal concentration of chlorophyll a
       G_8     = "growth" rate of chlorophyll a
       D_8     = adjusted "death" rate of chlorophyll a
       a_{9,8} = stoichiometric ratio of oxygen produced per unit
                 "growth" of chlorophyll a
       b       = benthic oxygen demand
All reaction rates  (k's) are adjusted for
the effects of temperature during computation.
Equations for computation of BOD oxidation
rate, DO surface reaeration, DO reaeration
at dams, saturation DO and exchange at the
tidal boundaries are detailed in the RECEIV-II
Documentation Report2, along with the
numerical integration procedures used in
the QUALITY section.
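
Reading the dissolved-oxygen term in the form reconstructed as
equation (7) above, its evaluation can be sketched as follows.  This
is an illustration, not RECEIV-II coding; the rates are assumed to
have been temperature-adjusted already, as noted, and the symbol
names simply follow the where-list above.

# Illustrative evaluation of the DO source/sink term of equation (7)
# (sketch only; all rates assumed already adjusted for temperature).

def do_reaction_term(C9, C9_sat, C7, C6, C5, C8,
                     k9, k7, k6, k5,
                     a96, a95, a98, G8, D8, b, R):
    reaeration   = k9 * (C9_sat - C9)    # surface reaeration
    cbod_sink    = k7 * C7               # carbonaceous BOD oxidation
    nh3_sink     = a96 * k6 * C6         # ammonia -> nitrite oxygen demand
    no2_sink     = a95 * k5 * C5         # nitrite -> nitrate oxygen demand
    algae_net    = a98 * (G8 - D8) * C8  # net algal production/respiration
    benthic_sink = b / R                 # benthic oxygen demand
    return (reaeration - cbod_sink - nh3_sink - no2_sink
            + algae_net - benthic_sink)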

Program Structure

     RECEIV-II is written in ANSI-standard
FORTRAN9 to assure inter-computer compat-
ibility and has been successfully tested on
the CDC 6700, CDC Cyber 73, IBM 370/155 and
Honeywell 6000/60 computers.  It consists
of a main program and 14 subroutines that
perform various computational, input and
output functions.  The program can be
modularized to minimize the total computer
memory requirements, with the QUANTITY and
QUALITY sections typically loaded and run
independently.  Information transfer among
the program modules is via mass storage,
either mag tape or disk.  The coding of
RECEIV-II has been carefully structured to
preserve compatibility with the present
SWMM, but the two models have not yet been
integrated.

Model Input and Output

     In order to compute the primary model
outputs, RECEIV-II requires a wide range of
information on the river basin being
modeled.  Since the equations used in
RECEIV-II constitute our concept of how
the waterway "works", the data required
by RECEIV-II would be required under any
circumstances to provide an adequate
description of the waterway.  Table 1
summarizes the necessary input data, and
characterizes it according to the spatial
and temporal intensity needed.

     The primary computational outputs of
RECEIV-II are:

     • Stage or tidal height at each node
      (m above datum)

     • Flow in each channel  (m3/s)

     • Velocity in each channel  (m/s)

     • Constituent concentration in each
       node  (typically, mg/l)

-------
The frequency of output for each of these results is under user
control and is an integer multiple of the basic timestep of the
model.
    TABLE 1.  RECEIV-II DATA REQUIREMENTS

GEOGRAPHICAL
     Source location
     Node location
     Node-channel connectivity
     Dam location
     Ocean interface location
     Surface area
     Bottom elevation
     Length
     Width

METEOROLOGICAL
     Wind speed and direction
     Evaporation rate
     Precipitation rate

HYDRAULIC
     Tidal height
     Stage height
     Velocity
     Manning coefficient
     Dam height, width & shape
     Background flow

WATER QUALITY
     Water temperature
     Temperature compensation coefficient
     Constituent concentration
     Background concentration
     Ocean concentration
     Ocean exchange rate
     Reaction rate
     Benthic oxygen demand
     Reaeration constants
     Sunrise
     Sunset
     Maximum light intensity
     Saturation light intensity
     Extinction coefficient
     Growth coefficient
     Respiration rate
     Michaelis-Menten constants
     Nutrient ratios
     Grazing rate

SOURCE CONDITIONS
     Flow
     Constituent concentration (or mass rate)
                   Results

     Under the original USEPA funding,
Raytheon calibrated RECEIV-II to 18 New
England waterways, with varying degrees
of success using the data available in
197310,11.  The calibration results ranged
from good to unusable, with streams and
rivers typically turning out better than
bays and harbors.  In at least one case,
the Quinnipiac River, state officials have
used the Raytheon-calibrated model to
develop 303e waste load allocations, with
only minor updating to reflect more recent
data.  Load allocations on other rivers and
harbors have subsequently been achieved
using improved data bases.

     Availability of data was invariably the
single most important factor controlling the
success of the calibration effort in each
waterway.  In several cases, we were unable to
find the minimum two sets of data needed for
calibration.  The data on bays and harbors
were substantially weaker than on streams and
rivers.  Despite the heavy use of the bays and
harbors for navigation, basic data on tides
and currents were noticeably absent.  As
discussed above, the data necessary to RECEIV-
II calibration are fundamental to an under-
standing of the processes active in a waterway.
Failure to develop adequate RECEIV-II calibra-
tions for a number of the 18 waterways is
indicative of the state of knowledge of those
waterways.

     Another factor contributing to difficulty
in calibration is a fundamental limitation of
the model in representing the stream bed as a
rectangular channel.  RECEIV was originally
developed as a stormwater model, representing
relatively high-flow conditions.  In contrast,
application of RECEIV-II in the context of
303e load allocations requires modeling of
very low flow conditions.  In many cases, the
use of a rectangular channel approximation
breaks down under low flow, due either to
reduction in stream dimensions or separation
of the flow due to stream bed bathymetry.

     However, in those waterways for which
sufficient data were available, RECEIV-II has
performed quite well.  The Quinnipiac River is
one example, and Raytheon has recently complet-
ed preliminary waste load allocations using
RECEIV-II on Norwalk Harbor and the Pawtuxet
River, under contract to Connecticut and Rhode
Island, respectively.  In each case, addition-
al refinement of the basic calibration was
required to achieve satisfactory results.
This was accomplished through additional data
collection and detailed analysis of available
data.  The results of these two wasteload
allocations illustrate the desirability of
using a dynamic quantity and quality model
for 303e planning.

Pawtuxet River

     The preliminary results of the load
allocation work on the Pawtuxet River
illustrate the importance of nitrogenous
oxygen demand in achieving water quality
standards for rivers.  A single unit of
ammonia nitrogen discharged at the Cranston
Treatment Plant (approximately 14.4 MGD) has
the equivalent effect on dissolved oxygen at

-------
the Broad Street Dam of 5.5 units of carbon-
aceous BOD.  While the necessary removal of
carbonaceous BOD is thought to be achiev-
able by Cranston's consulting engineers,
there is concern for the ability to maintain
the equivalent nitrogen allocation necessary
to maintain water quality standards, espec-
ially during the colder months of the year
when biological nitrification processes are
difficult to maintain.  For this reason, the
dynamic capabilities of RECEIV-II have been
exercised to determine the sensitivity of
the Pawtuxet River to nitrogenous demand
under varying flow and temperature conditions.
It is of interest to note that, with
respect to the total treatment plant-river
system, winter conditions may turn out to be
the "critical period" for maintenance of the
water quality standards, not the summer
months as commonly assumed.

Norwalk Harbor

     A similar conclusion concerning the
"critical period" has been reached in the
case of Norwalk Harbor, but for entirely
different reasons.  Algal growth in Norwalk
Harbor appears to play a major role in the
oxygen cycle, especially as it affects the
carbonaceous and nitrogenous BOD assimilation
capacity.  During cold, dry weather, algal
oxygen production is sufficiently reduced
to constitute the maximum restriction on
total BOD discharged into the harbor.  In
cold, wet weather, higher nutrient loadings
tend to be offset by increased algal oxygen
production.  While warm, wet weather is
relatively non-restrictive on total BOD, it
is most restrictive on carbonaceous BOD
removal at the Norwalk STP.  The dynamic
features of RECEIV-II permit detailed
analysis of each of these cases without the
need to recalibrate the model for changing
conditions.  In addition to allowing study
of the seasonal variations, RECEIV-II also
allows confirmation of such details of the
oxygen cycle as the early morning minimum,
which results from algal respiration through
the nighttime hours.  Sensitivity analyses
also indicate Norwalk Harbor dissolved oxygen
levels to be very sensitive to variations in
sunlight intensity, as might be expected
from the strong dependency on algal popu-
lation.  A similarly strong dependency on
tidal range is exhibited, due to reduced ex-
change with Long Island Sound during neap
tides.
                 Discussion

     Thus, when sufficient data are avail-
able to adequately describe the waterway,
RECEIV-II has proven itself to be a useful
and dependable approach to forecasting water
quality.  Its unique dynamic features have
allowed the examination of aspects of water
quality that might otherwise have escaped
notice.  Because of the dynamic formulation,
it is preferred for application to estuaries,
for use in cases of unsteady discharge (e.g.
stormwater runoff), for extrapolation to low
flow conditions, and for examination of
seasonal, daily or tidal variations in water
quality.
     As with all models,  experience in the
use of RECEIV-II has  highlighted areas in
which the model can be  improved.   When those
improvements have  related to features of the
model developed under USEPA contract, Raytheon
has publicized corrections and changes
through release of Program Modification
announcements.  The objective is  to assure
the integrity of Raytheon's model in its use
by the many  agencies  and  firms now holding
copies.  To  date,  nine  Program Modifications
have been distributed to  the user group
through the  USEPA.

     In addition,  Raytheon has found it nec-
essary to further  expand  the model's capa-
bility beyond that originally required for
RECEIV-II.   To satisfy  various contractual
commitments,  Raytheon has  developed the
proprietary  RECEIV-III  model.   Among the
numerous improvements incorporated in RECEIV-
III are coding to  model water temperature  and
organic nitrogen (coupled  to  chlorophyll 
-------
5.  Jon B. Hinwood and Ian G. Wallis.  "Clas-
    sification of Models of Tidal Waters",
    Jour. Hydr. Div., Proc. ASCE, 101 (HY10):
    1315-1331.  October 1975.

6.  Metcalf & Eddy, Inc., University of
    Florida, and Water Resources Engineers,
    Inc.  Storm Water Management Model.  US
    Environmental Protection Agency, Wash-
    ington, DC.  Report Nos. 11024 DOC 07/71,
    11024 DOC 08/71, 11024 DOC 09/71, 11024
    DOC 10/71.  July-October, 1971.  1132p.

7.  R.P. Shubinski, J.C. McCarthy and M.R.
    Lindorf.  "Computer Simulation of
    Estuarial Networks", Jour. Hydr. Div.,
    Proc. ASCE, 91 (HY5): 33-49.  September
    1965.

8.  C.W. Chen and G.T. Orlob.  Final Report,
    Ecological Simulation for Aquatic Envir-
    onments.  Office of Water Resources Res-
    earch, US Department of the Interior,
    Washington, DC.  Report No. OWRR C-2044.
    December 1972.  155p.

9.  ASA Sectional Committee on Computers and
    Information Processing.  American
    National Standard FORTRAN.  American
    National Standards Institute, Inc., New
    York, NY.  Standard ANSI X3.9-1966.
    March 7, 1966.

10. Raytheon Company.  New England River
    Basins Modeling Project Final Report,
    Volume III - Documentation Report, Part 2
    - Appendix D: Connecticut River Basins
    Calibrations.  US Environmental Protection
    Agency, Washington, DC.  January 1975.
    254p.

11. Raytheon Company.  New England River
    Basins Modeling Project Final Report,
    Volume III - Documentation Report, Part 3
    - Appendix E: Rhode Island River Basins
    Calibrations.  US Environmental Protection
    Agency, Washington, DC.  February 1975.
    182p.

-------
                                        MODIFICATIONS TO QUAL-II TO

                                          EVALUATE WASTEWATER STORAGE
                                                 John S. Tapp
                                           Technical Support Branch
                                                Water Division
                                                EPA, Region IV
                                               Atlanta, Georgia
To help evaluate the feasibility of storing waste-
water to alleviate adverse effects on receiving water
quality during low flow conditions, the water quality
model QUAL-II was modified.  The general modifica-
tions are described and situations which lend them-
selves to application of the modified model are
discussed.

                  Introduction

Effluent limitations for wastewater discharges are
commonly established based on maximum temperature at
some critical statistical low stream flow condition,
normally defined by water quality standards.  Thus,
commonly utilized procedures of arriving at effluent
limitations based on water quality standards in some
situations do not take into account (a) water temp-
erature changes, (b) flow rate fluctuations, or
(c) other seasonal variations.  For situations where
assimilative capacity is limited and the storage of
effluent from a wastewater treatment facility is
feasible, the normal seasonal changes in water temp-
erature and flow rates can be utilized to allow in-
stream water quality standards to be maintained, at
a cost less than by utilizing treatment alone.
Under extremely critical conditions, even the possi-
bility of no discharge could be investigated.

To use the wastewater storage approach requires
curves of stream flow versus wastewater flow at a
given quality to  maintain some instream constituent
concentration, generally dissolved oxygen.  To estab-
lish the curves of stream flow and allowable waste-
water flow, a mathematical model is very useful.
However, most dissolved oxygen models are not con-
structed such that these curves can be determined
without much trial and error input and manipulation.
This paper describes the modifications to a
commonly used model to allow easy generation, without
trial and error manipulation, of curves of stream
flow versus wastewater flow (at a given oxygen demand
concentration) to maintain a given instream dissolved
oxygen concentration.

                     The Model

The model selected for modification was QUAL-II as
developed by Water Resources Engineers1.  QUAL-II is
a modification of QUAL-I originally developed for
the Texas Water Quality Board.  As developed, QUAL-II
can simulate the dynamic behavior of conservative
materials; temperature; carbonaceous biochemical ox-
ygen demand; chlorophyll a; the nitrogen forms of
ammonia, nitrite, and nitrate; phosphorus; benthic
oxygen demand; dissolved oxygen; and coliforms.  All
constituents except temperature can also be simulated
directly in the steady state mode.  The general model
constituent layout is shown in Figure  1.
                    Figure 1
   The General Model Constituent Layout  for QUAL-II

QUAL-II is structured as one main program supported by
19 subroutines.  The general structure of the model
is shown in Figure 2.  The various subroutines and
their functions are described by Water Resources
Engineers1 and need not be repeated here.  The main
focus of the modifications to QUAL-II
was to take advantage of subroutine  FLOAUG to check
for a prespecified instream dissolved oxygen target
level.  If that level was not met all flows would
be incrementally increased based on  drainage area con-
tribution to arrive at a flow in a prespecified head-
water which would allow the dissolved oxygen target
to be met in all reaches below the discharge under
study.  All data necessary to utilize the modified
model is read in subroutine INDATA and model control
is maintained by the main program QUAL - II•

The detailed modifications made to QUAL-II are descri-
bed by Tapp2.  Basically, the model was modified such
that a drainage area and a drainage  area factor would
be assigned to each reach, to each direct input trib-
utary, and to each headwater.  A range  of wastewater

-------
flows  for  the  specific discharge to be studied is
specified.   For  a given wastewater flow the model uses
subroutine  FLOAUG to check for a minimum instream dis-
solved oxygen  concentration.   If the dissolved oxygen
concentration  is  less than the minimum, then subrou-
tine FLOAUG increases the  flow in each reach, direct
input  tributary,  and headwater in proportion to the
drainage area  and drainage area factor.  When the
dissolved  oxygen concentration below the discharger
under study is above the specified minimum, the seq-
uence  returns  to the main  program where a new waste-
water flow is  taken  through  the same procedure.  The
output from the  model is a series of wastewater flows
with corresponding stream  flows for the headwater
above  the  discharger under study at a given tempera-
ture and given wastewater  effluent quality.  The
modifications  were developed  for the steady state mode
and have not been tested in  the dynamic mode.
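
The flow-augmentation search described above can be paraphrased as
follows (an illustrative Python outline, not the FORTRAN of sub-
routine FLOAUG; simulate_min_do stands for one steady-state QUAL-II
run returning the minimum dissolved oxygen below the discharger, and
all other names are hypothetical).

# Paraphrase of the flow-augmentation search (hypothetical names).
# simulate_min_do(ww_flow, flows) stands for a steady-state QUAL-II
# run returning the minimum DO below the discharger under study.

def required_headwater_flow(ww_flow, elements, base_flow, da_factor,
                            headwater, do_target, simulate_min_do,
                            increment=0.1, max_scale=1.0e6):
    """Raise the flow of every reach, direct input tributary and
    headwater in proportion to drainage area times its drainage-area
    factor until the DO target is met; return the headwater flow."""
    scale = 0.0
    while scale <= max_scale:
        flows = {e: base_flow[e] + scale * da_factor[e] for e in elements}
        if simulate_min_do(ww_flow, flows) >= do_target:
            return flows[headwater]
        scale += increment
    raise RuntimeError("DO target not attainable within max_scale")

def flow_curve(ww_flows, **kwargs):
    # one (wastewater flow, required headwater flow) point per run
    return [(q, required_headwater_flow(q, **kwargs)) for q in ww_flows]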
                    Figure 2
   The General Model Structure of QUAL-II
                  Model Utility

 The type of curve which can be plotted from the infor-
 mation developed by the modified QUAL-II model is
 shown in Figure 3.   For any stream flow in the head-
 water above the discharger under study, an allowable
 wastewater flow to meet a given dissolved oxygen tar-
 get level can be determined at a given stream tempera-
 ture and effluent quality.  Knowing the hydrology of
 the headwater above the discharge, the storage volume
 necessary to maintain the dissolved oxygen target can
 be determined.  This can be accomplished for a dis-
 charger and a given effluent quality by generating
 a curve of wastewater flow versus stream flow to meet
 a given instream dissolved oxygen concentration for
 the water temperature associated with each month of
 stream flow records.   A temperature simulation model
 based on meteorological data could be linked with the
 hydrologic data to  provide daily water temperatures,
 if desired.  Knowing the water temperature and a
 daily stream flow value,  one could go to the waste-
water flow versus stream flow curve for that  tempera-
ture and effluent concentration and arrive at an allow-
able wastewater flow.  The average daily flow of the
discharger would then be compared with the allowable
wastewater flow and if the flow from the discharger
was greater than the allowable flow the difference
would be put into storage.  If the discharger flow was
less than the allowable flow then the difference would
be taken from storage and put into the stream along
with the flow from the discharger.  For the period of
hydrologic records under evaluation, this procedure
could be utilized to arrive at the maximum storage
volume required for the given effluent quality and
would be one point on a storage volume versus effluent
quality curve.  Where large amounts of hydrologic
records are to be utilized the wastewater flow versus
stream flow curves can be defined by fitted equations
and a curve at one temperature can be related to a
curve at another temperature by the use of tempera-
ture coefficients .  Once these relationships are estab-
lished the storage volume can easily be calculated by
simple digital computer routines.
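
The storage accounting just described can be sketched as follows
(hypothetical names; a paraphrase of the procedure rather than code
from the reference).  The function allowable_flow(stream_flow,
temperature) is assumed to come from the fitted wastewater-flow
versus stream-flow curves, and flows are treated as daily volumes so
that daily differences accumulate directly as storage volume.

# Sketch of the storage accounting over a hydrologic record
# (hypothetical names).  Flows are daily volumes, so each day's
# difference adds to or draws down the storage volume directly.

def max_storage_required(daily_records, discharge_flow, allowable_flow):
    """daily_records: iterable of (stream_flow, water_temperature);
    discharge_flow: average daily wastewater flow;
    allowable_flow(stream_flow, temperature): from the fitted curves."""
    storage = 0.0
    max_storage = 0.0
    for stream_q, temp in daily_records:
        allowed = allowable_flow(stream_q, temp)
        if discharge_flow > allowed:
            storage += discharge_flow - allowed      # excess goes to storage
        else:
            release = min(storage, allowed - discharge_flow)
            storage -= release                       # draw stored wastewater down
        max_storage = max(max_storage, storage)
    return max_storage                               # one point on the volume curve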

Utilizing the above procedure, other storage volumes
are then determined for varying effluent concentrations
and a curve of effluent quality as measured by ulti-
mate oxygen demand (UOD) versus the storage volume
necessary to maintain a given instream dissolved oxy-
gen level can be determined as shown in Figure 4.
From the information of oxygen demand in the effluent
versus storage volume, costs of wastewater treatment
and storage costs can be used to develop the curves
shown in Figure 5.  The minimum point on the total
cost curve would give the least cost combination of
storage and treatment.
                     Figure 3
   An Example of the Curve Which Can be Plotted from
     Points Generated by the Modified QUAL-II Model
     (allowable wastewater flow vs. stream flow, at a
      given temperature and effluent concentration)

-------
                       Figure 4
    A Curve of Oxygen Demand in the Effluent Versus
                    Storage Volume

                       Figure 5
    The Curves Showing the Least Cost Combination
              of Storage and Treatment

Another example where the modified QUAL-II model could
prove useful would be a situation where instream dis-
solved oxygen standards could not be met with an indus-
try discharging at an effluent quality defined as Best
Available Treatment (BAT).  In this instance, effluent
concentration would be left constant and storage volumes
tested against allowable instream dissolved oxygen
concentrations.  A study of this type was conducted by
Tapp3, where the curves of allowable wastewater flow
versus stream flow were developed by trial and error,
thus pointing to the need for the modified model.  In
this study, an industry which was the definition of BAT
for its class had an existing storage pond of a given
volume.  An idea of what instream dissolved oxygen
concentration could be maintained, based on over 30
years of past hydrologic records on the stream, was
desired.  A curve of allowable wastewater flow versus
stream flow for each average monthly water temperature
was plotted and broken into two sections, each fitted
by a straight line with a slope and a Y-intercept.
The slopes and Y-intercepts at different temperatures
were related by coefficients such that for each
instream dissolved oxygen level investigated the
allowable wastewater flow was only a function of the
stream flow and the temperature.  A digital computer
program was written to read in over 30 years of
stream flow records, assign the proper temperature,
and calculate and accumulate the storage volume of
wastewater.  Based on this output, an equation for
operation of the storage pond to meet the required
instream dissolved oxygen concentration was developed.

                    References

1. Water Resources Engineers, Inc., "Computer Program
   Documentation for the Stream Quality Model QUAL-II,"
   prepared for the Environmental Protection Agency,
   Systems Analysis Branch, Washington, D.C., 1973.

2. Tapp, J. S., "User's Manual for Wastewater Storage
   Option, QUAL-II," Environmental Protection Agency,
   Region IV, Technical Support Branch, Atlanta,
   Georgia, February 1976.

3. Tapp, J. S., "In Plant Wastewater Storage and
   Release for Water Quality Control:  A Case Study,"
   presented at the ASCE Environmental Engineering
   Division Second Annual National Conference on
   Environmental Engineering Research, Development,
   and Design, Gainesville, Florida, July 20-23, 1975.

-------
                        WATER POLLUTION MODELING IN THE DETROIT METROPOLITAN AREA
       Michael Selak

       Robert Skrentner,  P.E.

       City of Detroit
       Detroit Water and  Sewerage  Dept.
       Detroit, Michigan
             Carl Harlow

             James Anderson, P.E.

             College of Engineering
             Wayne State University
             Detroit, Michigan
                   SUMMARY

The EPA Storm Water Management Model (SWMM)  has
been used by the Detroit Water and Sewerage
Department and Wayne  State  University to simulate
waste water flow in the Oakwood  Sewer District of
the City of Detroit.  This  District is a 1,500-acre
(3,705-hectares) residential/industrial area with
combined sewers from  which  flow  is pumped to the
Detroit waste water treatment plant and/or to the
Rouge River during periods  of high rainfall.  After
several minor modifications to the SWMM, the
simulation results from the Runoff and Transport
blocks of SWMM compared favorably with observa-
tions by the computerized monitoring system  of the
Detroit Water and Sewerage  Department.

The output from the SWMM Transport block is  routed
to a computer simulation of the  Detroit waste water
treatment plant called STPSIM, which was developed
at Wayne State University.   This model enables the
user to evaluate the  effect of storm flow on plant
performance and to compare  various strategies for
treating the stored waste water.   The simulated
results from STPSIM appear  to be quite represen-
tative of the actual  treatment plant performance.
However, model calibration  has been difficult due
to a shortage of real-time  data  from the plant.

All of the water pollution  simulation models operate
on Wayne State University's 360/67 computer  system
in time sharing or batch mode through an executive
program called the Detroit  Water Quality Informa-
tion System (DWQIS).  The DWQIS  was developed at
Wayne State University with assistance from  the
Detroit Water and Sewerage  Department.  In
addition to the above models, the DWQIS contains
census and local climatological  data for the
Detroit area which was used to provide some  of the
necessary input data  for the SWMM.
                  INTRODUCTION

Ninety-seven communities  of the Detroit metropol-
itan area are  supplied with potable water and
seventy-four communities  receive waste water
collection and treatment  services from the Detroit
Water and Sewerage Department (DWSD).   As of
July  1975,  the waste  water service area under
DWSD contract  was  1075 square miles (2784 sq. km.).
The collection system  contains separate sanitary
and combined sewers that  normally flow to the DWSD
regional  treatment plant  located at the confluence
of the Detroit and Rouge  Rivers.   This plant
provides  advanced  secondary treatment  to a flow
of ISO million gallons per  day (19.7 cms).  Total
plant flow averages 800 MGD (35.0 cms).  Seventy-
six possible points of overflow to the Detroit and
Rouge Rivers exist to relieve the system during
periods of extreme flow.  The plant is currently
undergoing a major improvement program including
the installation of a computer system that will
monitor and aid in the control of the treatment
unit processes.

The DWSD, with an EPA research and demonstration
grant, installed a system-monitoring and remote
control network to better integrate the waste water
collection and treatment operations.  The data
collected by the network is transmitted to the DWSD
System Control Center and provides operators with
information to remotely control dry weather and
storm pump stations, flow regulating devices,
inflatable dams, and the entire water distribution
network.  The DWSD computers monitor events as
they occur, but they are not capable of predicting
system response to storm events and the resulting
changes in waste water quality and quantity.  For
these reasons, the DWSD has begun a program of
utilizing computer models to predict system
behavior, which is a primary goal for optimizing
the performance of an existing system.  The
objective of the DWSD is to implement methods that
will provide system response information in an
attempt to utilize the storage capabilities of the
sewer system more efficiently, improve upon the
system operation to prevent overflows, and
operate the treatment plant more efficiently.  To
meet these objectives, the DWSD is applying the EPA
model SWMM and the Wayne State University treatment
plant simulation model STPSIM to the Detroit system.
APPLICATION OF SWMM TO THE OAKWOOD SEWER DISTRICT

The Oakwood Sewer District of the City of Detroit
was selected to test and evaluate SWMM because:
a) it is relatively isolated from the balance of
Detroit, b) its land use is typical of the City,
and c) it is of a manageable size.  The district
has a population of 17,250 and an area of 1500
acres (607.5 hectares).  46% of the land area is
classified as residential, 44% as industrial, 3%
as commercial, and 7% parkland.  A combined sewer
system collects and transports sanitary flow,
industrial wastes, and storm water to a single
pump station.  Under normal conditions, a 20.0 cfs
(0.6 cms) pump transports the dry weather flow to
the main plant.  During periods of high flow due
to storm water, additional pumps with a total
capacity of up to 488 cfs (13.8 cms) can be used to
relieve the system into the Rouge River near its
confluence with the Detroit River.  As part of the

-------
system-monitoring network, the pump station is
equipped with a weighing bucket and a tipping bucket
rain gage.  An additional tipping bucket rain gage
is located in the near vicinity.  In addition, three
level sensors are operating in the major sewers of
the district.  During periods of rainfall, the
system-monitoring network is capable of collecting
rainfall data with time intervals as small as five
minutes.  The level sensors provide flow data on a
continuous basis.

The major blocks of the SWMM that were utilized in
this study include RUNOFF, Version I and TRANSPORT,
Version II.  Version II of the RUNOFF block is
currently being evaluated.  The dual-stage pump
capacity built into the TRANSPORT block was found to
be inadequate for the conditions at Oakwood and had
to be modified to allow for multiple pumps of
different capacities.  Several storms with rainfalls
of up to 2 inches extending over 40 hours were used
to evaluate the models.
RUNOFF BLOCK

The Oakwood district was initially divided  into  150
subcatchments with an average size of 10 acres
(4.05 hectares).  Land use divisions, sewer maps,
U. S. Census data, previous DWSD reports, and
topographic maps all played an important role in
this selection process.  Pipes up to 24 inches
(61 cm.) in diameter were included in the RUNOFF
block.  Due to the long computer times needed for
this configuration, later analyses used as  few as
50 subcatchments of up to 80 acres.  Storm  durations
ranged from 3 to 40 hours with a maximum of 120
time steps of 15 minutes used for the 40-hour storm.
Although this is in excess of the SWMM recommenda-
tions, it was about the smallest that could be used
without altering the RUNOFF block, and no serious
difficulties were encountered.
TRANSPORT BLOCK

The output from RUNOFF becomes the input to the
TRANSPORT block.  The combined sewers of the
Oakwood district are circular or egg-shaped,
monolithic concrete or brick, and range in diameter
from 3 to 10 feet (.92 - 3.05 meters).  Most of  the
necessary input data for this block was taken  from
DWSD sewer maps, the U. S. Census, and DWSD water
use data.  The major sewers were divided into  138
elements, and as in RUNOFF, 120 time steps of  15
minutes each were used for the simulation.  The
Oakwood pump station has four pairs of pumps with
capacities of 20 cfs (0.6 cms), 40 cfs  (1.1 cms),
98 cfs (2.8 cms), and 106 cfs (3.0 cms), respectively.
Under normal conditions, one 20-cfs (0.6-cms)  pump
handles dry weather flow.  As the level in the wet
well rises, additional pumps are activated as
needed.  For the storms modeled in this study, the
20-cfs (0.6-cms) pump handled dry weather flow and
two 40-cfs (1.1-cms) pumps were used during the
overflow condition.
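
The staged pump operation that TRANSPORT had to be modified to
represent can be sketched as follows; the wet-well switch-on levels
shown are invented for illustration and are not Oakwood data.

# Staged pump operation (illustrative; switch-on levels are invented).
OAKWOOD_PUMPS = [       # (wet-well switch-on level in ft, capacity in cfs)
    (0.0,  20.0),       # dry weather pump
    (3.0,  40.0),       # first storm pump   (hypothetical level)
    (5.0,  40.0),       # second storm pump  (hypothetical level)
    (7.0,  98.0),       # larger storm pumps (hypothetical levels)
    (9.0, 106.0),
]

def pumped_flow(wet_well_level, pumps=OAKWOOD_PUMPS):
    """Total pumping rate (cfs) for a given wet-well level (ft)."""
    return sum(capacity for on_level, capacity in pumps
               if wet_well_level >= on_level)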
The dry weather  flow,  infiltration, inflow, and
insystem storage option of TRANSPORT were used in
this study, but  the  insystem storage model was
found to be inadequate for the conditions in
Oakwood.  Consequently, DWSD computer programs
were used to determine the storage potential of the
Oakwood sewers which was then utilized in TRANSPORT
as an "irregular reservoir".
SWMM RESULTS

In general, the results  generated by SWMM correlate
satisfactorily with observations and it seems that
SWMM is capable of predicting the waste water system
behavior.  Figure 1 shows  the relationship between
the rainfall hyetograph  and the resulting runoff
hydrograph.  No data was collected to verify this
relationship, but the  pattern is clearly represen-
tative.  Figure 2 is a plot of the actual and
simulated overflow from  the Oakwood pump station.
   FIGURE 1 - RAINFALL HYETOGRAPH & RUNOFF HYDROGRAPH VS. TIME

             FIGURE 2 - PUMP FLOW VS. TIME
               (SOLID - MODEL, DASHED - REAL)
The quantity of waste water that overflowed was
predicted within the accuracy of the measured data.
However, the time that the pump was predicted to
have gone on deviated from the actual pump records
by approximately two hours.   Based on an elapsed
time of 20 hours from the  beginning of the simula-
tion, this represents a  deviation of approximately
10%.  Considering the accuracy of the input data and
the state-of-the-art of  the models, this deviation
seems quite reasonable.

-------
       WASTE WATER TREATMENT PLANT SIMULATION
STPSIM is a dynamic waste water treatment plant
model which uses the digital computer to simulate
plant behavior.  The computer program is written in
Fortran and consists of various subroutines totaling
approximately 1500 lines.  STPSIM currently has the
capability of modeling: 1) settleable solids, 2)
suspended solids, 3) dissolved solids, 4) B.O.D.,
and 5) chlorides.  The output from STPSIM is
displayed in tabular and graphical forms with the
graphical display being optional.  Through STPSIM,
the user can study plant behavior by adjusting
various operation parameters such as tanks in
service, recycling, chemical feeds, etc.

There are many reasons for studying the dynamic
behavior and operational characteristics of waste-
water treatment processes, including optimizing the
treatment processes by properly adjusting opera-
tional controls.  Some of the possible benefits that
could be derived from this optimization are:

  1.  Reduced costs:  savings on chemicals and
      electrical energy.
  2.  Better treatment of dry weather flow:
      higher removal efficiencies might be
      obtainable during normal plant operation.
  3.  Higher treatment capacities:  through
      better control it might be possible to
      treat higher capacities during high runoff
      periods while maintaining water quality
      standards.
However, treatment processes are usually operated
as steady-state systems with general operating
procedures based on flow quantity rather than flow
quality.  Furthermore, to report the efficiency of a
given process, the samples are usually extracted
from the inlet and the outlet at approximately the
same time.  This obviously assumes that the process
is at steady state.
STPSIM REQUIRED INPUT DATA

STPSIM requires the following types of information:

  1.  Time information:  time step and length
      of analysis.
  2.  Quality information:  influent concentra-
      tions for the pollutants that will be
      analyzed for each time period.
  3.  Quantity information:  flow rates for each
      time period.
  4.  Plant operating information:  which tanks
      will be in service, initial concentrations
      of tanks, routing schemes, recycle schemes.
  5.  Calibration information:  (optional) reaction
      coefficients, reactor behavior type.
  6.  Desired output:  graphical, tabular, or both.

The quality and quantity data can be the output from
the TRANSPORT block of SWMM or from plant observa-
tions .
A dynamic model is needed since waste water treat-
ment systems are rarely at steady-state due to
significant variations in flow quantity and quality
during a given day (1).  This can be seen in
Figure 3 where the concentration of suspended solids
of the influent was measured hourly for the Detroit
waste water treatment plant on February 23 and 24,
1975.
        FIGURE 3 - OBSERVED PLANT INFLUENT FEB. 23-24
                 (SS - SUSPENDED SOLIDS, MG/L)
BASIC ALGORITHM

The basic reactor type used for modeling the various
processes is the "complete mixed" reactor.  Some of
the processes act as plug flow reactors while others
have complete mix characteristics (5).   Plug flow is
usually approximated in long tanks with a high
length-to-width ratio in which longitudinal disper-
sion is absent or at least minimal.  This condition
is usually characteristic of primary sedimentation
tanks.  Complete mixing occurs when the pollutant
entering the tank is immediately dispersed throughout
the tank.  Round or square tanks are generally rep-
resentative of this condition.  Aeration tanks for
the activated sludge process are usually operated
under complete mix conditions (1,2).  However, in
many instances, the reactors cannot be properly
characterized by either of the above ideal types and
therefore are classified as non-ideal reactors.

The complete mix reactor was chosen as the basic
reactor for STPSIM since it is possible to model
plug flow and non-ideal reactors with this model in
parallel and series combinations.

For a single complete mix reactor, a mass balance
yields, for a given time,
  Mass in = Mass out + Disappearance by reaction + Accumulation

$$Q\,C_{in} = Q\,C - R\,V + V\,\frac{dC}{dt}$$

where  Q    = flow, constant for a given tank
       V    = volume of the tank, constant over time
       C    = concentration of pollutant in tank
       C_in = incoming concentration of pollutant
       R    = reaction mechanism
       t    = time
This basic differential equation is solved numerically
at each time step for each pollutant and for each
reactor.
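
A minimal sketch of one such numerical step for a single complete
mix reactor is shown below (an explicit Euler update, with a first-
order disappearance R = kC used as the example reaction mechanism;
this is an illustration, not the STPSIM coding).

# One explicit Euler step of the complete mix mass balance
# (illustration only; R is taken as first-order disappearance k*C).

def cstr_step(C, C_in, Q, V, k, dt):
    """Advance the tank concentration C over one timestep dt.
    Q: flow (volume/time); V: tank volume; k: first-order rate (1/time)."""
    dCdt = (Q / V) * (C_in - C) - k * C   # dilution/washout plus reaction
    return C + dt * dCdt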

The various reactor types are handled in the follow-
ing manner:
   1.  Complete Mix - use of the basic algorithm
   2.  Plug Flow - use of "N" complete mix sub-
       reactors in series.  As "N" approaches
       infinity, the behavior will approach that of
       plug flow, Figure 4.
                FIGURE 4 - PLUG FLOW REACTOR

    3.  Non-ideal types - using complete mix (CM) and
        plug flow (PF) reactors in parallel and series
        combinations along with recycle schemes,
        Figure 5.

                FIGURE 5 - NON-IDEAL REACTOR

     FIGURE 6 - SIMULATED & OBSERVED (+) PLANT EFFLUENT
          FEB. 23-24 (SS - SUSPENDED SOLIDS, MG/L)
RESULTS

STPSTM has been used to model the Detroit Waste Water
Treatment Plant.  The analysis chosen for presenta-
tion is for suspended solids for the 23rd and 24th
of February, 1975.  Suspended solids were measured on
an hourly basis during this period for plant influent
and effluent.  On this date, the activated sludge
system was not in operation which permitted the sedi-
mentation processes to be modeled alone.   If the
activated sludge system had been in operation,
additional information would have been required.  As
seen in Figure 6, there appear to be questionable
data points for the effluent measured at 2:00 P.M. on
the 23rd and at 4:00 A.M. on the 24th.  The approxi-
mate residence times during this period were on the
order of one hour.  Therefore, if peaks in the
effluent did occur, similar peaks should have existed
in the influent earlier, but none were observed.
                                                          Several possible explanations exist.  One is that the
                                                          measured points are in error, which is impossible to
                                                          check at this time.  A second explanation might be
                                                          that the measured effluent points were correct and
                                                          internal operations, such as recycling, were respon-
                                                          sible.   However, this information also was not
                                                          available.   Finally, a third explanation could be
                                                          that the influent peaks did occur prior to the
                                                          effluent peaks, but were missed.  This alternative is
                                                          examined below.

                                                          For this analysis, two influent points were adjusted
                                                          to create 'artificial' influent peaks prior to the
                                                          two effluent peaks in question.  The measured influent
                                                          at 12:00 P.M. on the 23rd was changed from 100 mg/1
                                                          to 270  mg/1 and the measured influent at 3:00 A.M. on
                                                          the 24th was changed from 90 mg/1 to 320 mg/1.  All
                                                          tanks in operation, which were basically sedimentation
                                                          tanks,  were modeled as plug flow reactors.  Initial
                                                          suspended solids concentrations for all reactors
                                                          were assumed to be 10 mg/1.

                                                          It appears, by observing Figure 6, that the initial
                                                          concentrations were set too low and that it took
                                                          approximately three hours before simulated values
                                                          agree with  measured values.  From this time until
                                                          about 12 hours later, simulated values agree reason-
                                                          ably well with observed data.  This period of agree-
                                                          ment was also evidenced in other analyses of this
                                                          same data where adjustments in the influent points
                                                          were not made.  An interesting point is that these
                                                          artificial  influent peaks were still unable to explain
                                                          the 'questioned' effluent peaks, and the correlation
                                                          of observed with simulated concentrations was lower.
                                                          Therefore,  it appears either that these effluent
                                                          peaks did not occur, or that internal operations were
                                                          responsible.

                                                          As additional plant operating data becomes available,
                                                          it should be possible to calibrate STPSIM for other
                                                          pollutants  over a range of flow conditions.  These
                                                          preliminary results indicate that STPSIM can be
                                                          developed to the point where it will be a useful tool
                                                          for operating the plant.
                                                      356

-------
     DETROIT WATER QUALITY INFORMATION SYSTEM

The efficient use of all of these models, especially
if real-time plant operating information is expected
in a short time frame, requires a relatively sophis-
ticated computer management system that provides for
easy access to, and updating of, the input data along
with assignment of I/O devices and proper sequencing
of the programs.  The Detroit Water Quality Informa-
tion System (DWQIS) is a time sharing/batch mode
Fortran program designed to meet these needs.  In
addition to SWMM, STORM, and STPSIM, the DWQIS
manages the execution of the Wayne State University
Air Quality Information System (AQIS) and the Detroit
Metropolitan Area Data System (DMADS).  Efforts are
currently underway to use the air pollution simula-
tion data as input to SWMM-RUNOFF to determine the
effect of air pollution washout on water quality.
The data system, DMADS, provides much of the socio-
economic input data for SWMM and STORM.  The DWQIS
also stores, retrieves and updates data files on disc
and tape, using language familiar to engineers and
technicians.  A program to check for probable errors
in the input data for the various models is also
being implemented as part of the DWQIS.  These
features are very important if a large staff, with
or without strong computer capabilities, is to work
on the project.
              MANAGEMENT AND MODELING

 Based on DWSD's experience with the Oakwood sewer
 district, modeling costs for overflow quantity could
 range from $3 to $5 per acre ($7.40 to $12.40 per hectare),
 depending on the accuracy required.  These costs
 include salary, fringe benefits, overhead, and computer
 time.  Sampling costs are expected to be a maximum of
 $15,000 per outfall for a detailed sampling program
 to obtain data for diurnal and seasonal variations
 in quality for dry and wet weather flow.  Total
 sampling costs will also depend on the desired model
 accuracy, quality variations among outfalls, and
 sampled parameters.  For the wastewater treatment
 plant, a $30,000 to $50,000 sampling program should
 provide sufficient model calibration data.  The
 estimated total modeling cost is about 0.25% of the
 DWSD's estimated overflow abatement and plant expan-
 sion construction costs through the year 1990.

 As more overflow abatement facilities are added to the
 system, a means must be found to operate the system
 in a more coordinated and efficient manner.  Because
 of the large number of potential operational modes,
 computerized analysis is perhaps the only viable
 alternative to optimize waste water collection system
 operations.  System response model output could
 provide the input data to the optimization models.
 Modeling of the treatment plant will allow for
 alternative modes of operation to be simulated.  Con-
 sidering that DWSD treatment plant chemical and
 utility costs for 1975 exceeded $6 million, an
 increase in operational efficiency of as little as 1%
 or 2% could easily justify the cost of modeling.

 Since environmental modeling and simulation is a
 relatively new tool available to assist managers in
 decision making related to the planning, design, and
 operation of their systems, its use is limited.   To
 encourage the expanded use of modeling techniques, it
is recommended that a series of mini-seminars be held
with EPA sponsorship to explain the ramifications of
modeling to managers.  The seminars should stress the
anticipated costs of modeling and the expected
benefits.

It is further recommended that additional EPA funds
be committed to research and demonstration grant
projects related to modeling and to improving the
state of the art in both waste water sampling and
flow measurement.  These funds should be at least
0.5% of the monies committed for construction grants.
In addition, the grants should be 100% federally
funded for projects which will utilize models or
measurement devices within urban watersheds and in
conjunction with the municipalities or agencies
having jurisdiction over the waste water system.

In summary, modeling appears to be a viable means to
analyze waste water collection/treatment systems.
Thus, a major objective of EPA must be to encourage
model development, usage, and calibration.
                    REFERENCES

1.  Andrews, J., "Dynamic Models and Control Strate-
    gies for Wastewater Treatment Process," Water
    Research, 1973.

2.  Chudoba, J., Ottova, V., Madera, V., "Control of
    Activated Sludge Filamentous Bulking - I.  Effect
    of the Hydraulic Regime or Degree of Mixing in an
    Aeration Tank," Water Research, 1973.

3.  DiGiano, F.A., Mangarella, P.A., Eds., Applications
    of Stormwater Management Models, Environmental
    Protection Agency, EPA-670/2-75-065, June  1975.

4.  Huber, W.C. , Heaney, J.P., Medina, M.A., Peltz,
    W.A., Sheikh, H., Smith, G.F., Stormwater Manage-
    ment Model User's Manual Version II, Environ-
    mental Protection Agency, EPA-670/2-75-017,
    March  1975.

5.  Metcalf & Eddy, Inc., Wastewater Engineering,
    McGraw-Hill Book Company, 1972.

6.  Metcalf & Eddy, Inc., University of Florida,
    Water Resources Engineers, Inc., Storm Water
    Management Model-Volumes I-IV, Environmental
    Protection Agency, 1971.

7.  Wisner, P.E., Marsalek, J., Perks, A.R., Belore,
    H.S., "Interfacing Urban Runoff Models", Pre-
    sented at the American Society of Civil Engineers
    Specialty Conference on Environmental Engineering
    Research, Development, and Design, Gainesville,
    Florida, July 20-23, 1975.
                                                       357

-------
                                         GENERALIZED  METHOD FOR EVALUATING

                        URBAN STORM WATER QUALITY MANAGEMENT STORAGE/TREATMENT ALTERNATIVES
  James P. Heaney, Wayne  C.  Huber and Sheikh M. Hasan
   Department of Environmental Engineering Sciences
                 University  of Florida
              Gainesville, Florida  32611
                   Michael P. Murphy
       William M. Bishop, Consulting Engineers
              Tallahassee, Florida  32302
We are nearing completion of an EPA-sponsored study  in
conjunction with the American Public Works Association
to estimate the nationwide cost of controlling pollu-
tion from combined  sewer  overflows and storm sewer
runoff.1  Two models,  the USEPA Storm Water Management
Model (SWMM) and the Corps of Engineers' STORM, were
used extensively in this  study to estimate pollutant
loading rates and evaluate various storage/treatment
alternatives.2,3,4  Detailed modeling studies were
performed in Atlanta,  Denver, Minneapolis, San Fran-
cisco, and Washington,  DC.  This paper describes  the
results of continuous simulation of hourly rainfall
and runoff in these cities for a wide variety of
assumed availabilities of storage and treatment combi-
nations.  Results of these simulation studies are
presented as isoquants showing the technologically
efficient combinations of storage and treatment to
obtain a specified  per cent pollution control.  This
information is combined with cost data developed  in
this study to determine the optimal combination of
storage and treatment  for any desired level of control
for each of the five cities.  The results are presen-
ted in a normalized form  which enables engineers  and
planners to derive  preliminary estimates of control
costs for other cities.   This information is useful
for early phases of 208 planning and for NEEDS sur-
veys.  A more complete description of this procedure
is presented elsewhere.5

                Simplifying Assumptions

In order to devise  such a general procedure, numerous
simplifying assumptions were made.  A constant per
cent BOD removal was assumed for the treatment units.
In actuality, performance would vary widely due to  the
dynamic nature of the  inflows.  No account is taken
of the equalizing effects and treatment which occur  in
storage.  Cost functions  are based on relatively  few
actual installations.   The tradeoff between treatment
plant size and pipelines  is not considered explicitly.
Approximate curves  fit to the results for the five
cities are extrapolated to the other 243 urbanized
areas.

Multipurpose waste  management schemes are not con-
sidered.  Actual costs for a given city could be  quite
different than the  estimates obtained using this
highly simplified procedure.  However,  the methodology
is general.  Thus,  the user needs only  to substitute
more accurate local data to obtain refined estimates.

        Control Technology and Associated Costs

A wide variety of control alternatives  are available
for improving the quality of wet weather flows.6,7,8
Rooftop and parking lot storage, surface and under-
ground tanks and storage in treatment units are  the
flow attenuation control alternatives.  Wet weather
quality control alternatives can be subdivided into
two categories:   primary devices  and secondary devices.
Primary  devices  take advantage  of physical processes
such as  screening, settling and flotation.  Secondary
devices  take advantage of biological processes and
physical-chemical processes.  These control devices
are suitable for treating stormwater runoff as well as
combined sewer overflows.  However, the contact stabi-
lization process is feasible only if the domestic
wastewater  facility is of an activated sludge type.
The quantities of wet weather flows that can be treated
by this  process  are limited by  the amount of excess
activated sludge available from the dry weather plant.
At the present time, there are  several installations
throughout  the country designed to evaluate the
effectiveness of various primary  and secondary devices.
Based on these data, the representative performance of
primary devices is assumed to be 40 percent BOD5
removal efficiency and that of secondary devices to be
85 percent BOD5 removal efficiency.  No treatment is
assumed to occur in storage.  Hasan has synthesized
available information regarding stormwater pollution
control costs.9  The results are shown in Table 1.
      Table 1.  Cost Functions for Wet Weather Control Devices (a,b,i)

                Total Annual Cost, $/yr:  TC = w T^z  or  w S^z

      Control Alternative                         w           z
      ----------------------------------------------------------
      Primary
        Swirl Concentrator (c,d,e)             2,555.0      0.70
        Microstrainer (c,f)                    9,179.8      0.76
        Dissolved Air Flotation (e)           10,198.1      0.84
        Sedimentation                         40,792.5      0.70
          Representative Primary Device Total Annual Cost:
            $3,000 per mgd

      Secondary
        Contact Stabilization (g)             24,480.4      0.85
        Physical-Chemical                     40,792.5      0.85
          Representative Secondary Device Total Annual Cost:
            $15,000 per mgd

      Storage
        High Density (15 per/ac)              51,000.0      1.00
        Low Density (5 per/ac)                10,200.0      1.00
        Parking Lot                           10,200.0      1.00
        Rooftop                                5,000.0      1.00
          Representative Annual Storage Cost (j), $ per ac-in:
            $122 (PD)^0.16

      a  T = wet weather treatment rate in mgd; S = storage volume.
      b  ENR = 2200.  Includes land costs, chlorination, sludge
         handling, engineering and contingencies.  Sludge handling
         costs based on data from Battelle Northwest, 1974.15
      c  Field and Moffa, 1975.10
      d  Benjes, et al., 1975.11
      e  Lager and Smith, 1974.7
      f  Maher, 1974.12
      g  Agnew, et al., 1975.13
      h  Agnew, et al., 1975,13 and Wiswall and Robbins, 1975.14
      i  For T <= 100 mgd.  No economies of scale beyond 100 mgd.
      j  PD = gross population density, persons per acre.
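
As a brief illustration of how the tabulated cost functions are
applied, the sketch below evaluates TC = w T^z for a hypothetical wet
weather treatment rate of 10 mgd using the (w, z) pairs from Table 1;
the flow rate is an assumed example, not a design value.

    # Sketch: total annual cost TC = w * T**z from Table 1 (T in mgd, T <= 100 mgd).

    DEVICES = {                              # (w, z) pairs from Table 1
        "Swirl Concentrator":       (2555.0, 0.70),
        "Microstrainer":            (9179.8, 0.76),
        "Dissolved Air Flotation": (10198.1, 0.84),
        "Sedimentation":           (40792.5, 0.70),
        "Contact Stabilization":   (24480.4, 0.85),
        "Physical-Chemical":       (40792.5, 0.85),
    }

    def annual_cost(device, t_mgd):
        w, z = DEVICES[device]
        return w * t_mgd ** z

    T = 10.0                                 # hypothetical wet weather treatment rate, mgd
    for name in DEVICES:
        print(f"{name:25s} ${annual_cost(name, T):>10,.0f}/yr")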
                                                         358

-------
          Optimal Mix of Storage and Treatment

The evaluation procedure for the nationwide assessment
consisted of relatively detailed studies of five cities:
Atlanta,  Denver,  Minneapolis, San Francisco, and
Washington,  DC.  For each city, a single storm event
for a selected catchment was simulated using the USEPA
Storm Water  Management Model (SWMM).  Also, one year of
hourly precipitation, runoff, and discharge rates were
estimated using the HEC STORM model.4  STORM estimates
the total volume of storm water which is treated for a
specified size of storage unit and treatment rate.
Numerous  combinations were tested for each of the five
cities to derive storage/treatment isoquants as shown
in Figure 1  for Atlanta.  Given the storage/treatment
isoquants and knowing the relative costs of storage and
treatment, one can determine an optimal expansion path
in terms  of  control costs versus percent BOD removal.
The optimal  expansion path is determined by comparing
the costs of the various alternatives as shown in
Figure 1, or
     Figure 1.  Storage/Treatment Isoquants for
                Various BOD Control Levels - Atlanta

          MRS_ST = c_s / c_T                                    (1)

where      c_T  = unit cost of treatment,
           c_s  = unit cost of storage, and
         MRS_ST = marginal rate of substitution of
                  storage for treatment.

The above problem can be expressed in the more compact
mathematical form shown below:

     minimize    Z = c_s(S) + c_T(T)

     subject to  f(R_L; S, T) = 0                               (2)

                 R_L, S, T >= 0

where          Z = total annual control costs per acre,
          c_s(S) = storage costs,
          c_T(T) = treatment costs,
               S = storage volume, inches,
               T = treatment rate, inches per hour,
             R_L = percent pollutant control, and
     f(R_L; S,T) = production function relating the level
                   of pollution control attainable with
                   specified availabilities of storage (S)
                   and treatment (T).

The storage/treatment isoquants are of the form:

     T = T_0 + (T_1 - T_0) e^(-KS)                              (3)

where  T_0 = treatment rate at which the isoquant becomes
             asymptotic to the ordinate, inches per hour,
       T_1 = treatment rate at which the isoquant intersects
             the abscissa, inches per hour, and
       K   = constant, inch^-1

Substituting equation (3) into equation (2) and
assuming linear costs, this constrained optimization
problem can be solved by the method of Lagrange multi-
pliers to yield the optimal mix of storage, S*, and
treatment, T*, or

     S* = max( (1/K) ln[ c_T K (T_1 - T_0) / c_s ], 0 )         (4)

and

     T* = T_0 + (T_1 - T_0) e^(-K S*)                           (5)

Note that T* is expressed as a function of S*, so it is
necessary to find S* first.  Knowing S* and T*, the
optimal solution is

     Z* = c_s S* + c_T T*.                                      (6)
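
Because the solution is available in closed form, the optimal mix is
easy to evaluate directly.  The sketch below computes S*, T*, and Z*
from equations (4) through (6) for illustrative isoquant parameters
and unit costs; the numerical values shown are hypothetical and are
not taken from the Atlanta isoquants.

    import math

    # Sketch: optimal storage/treatment mix from equations (3)-(6).
    # t0 - treatment rate asymptote of the isoquant, in/hr
    # t1 - treatment rate at zero storage, in/hr
    # kk - isoquant constant, 1/inch
    # cs - unit cost of storage,  $ per inch per acre per year (illustrative)
    # ct - unit cost of treatment, $ per (in/hr) per acre per year (illustrative)

    def optimal_mix(t0, t1, kk, cs, ct):
        s_star = max(math.log(ct * kk * (t1 - t0) / cs) / kk, 0.0)   # eq. (4)
        t_star = t0 + (t1 - t0) * math.exp(-kk * s_star)             # eq. (5)
        z_star = cs * s_star + ct * t_star                           # eq. (6)
        return s_star, t_star, z_star

    # Hypothetical parameters:
    s, t, z = optimal_mix(t0=0.002, t1=0.026, kk=8.0, cs=120.0, ct=15000.0)
    print(f"S* = {s:.3f} in, T* = {t:.4f} in/hr, Z* = ${z:.0f}/acre/yr")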
                                             The above optimization procedure was programmed to
                                             generate curves  (e.g., Figure 2) showing percent
                                             pollutant removed versus total annual costs for primary
                                             and secondary treatment in conjunction with storage.
                                             The results  indicated that the secondary treatment/
                                             storage curves could be used to estimate control costs
                                             over  the entire  range of interest.  Note that, for wet
                                             weather control, marginal costs are increasing because
                                             of the disproportionately large sized control units
                                             needed to capture the less frequent larger runoff
                                             volumes.  The curves shown in Figure 2 can be
                                             approximated by  functions of the form:
     Z = k e^(n R_L)                                            (7)

where       Z = total annual cost, dollars per acre per
                year,
         k, n = parameters,
          R_L = percent pollutant removal,
                0 <= R_L <= R_L(max), and
     R_L(max) = maximum percent pollutant removal.

The five secondary cost curves and associated cost
functions are shown on Figure 3.  Note that the control
costs per unit of runoff are much higher for San Fran-
cisco and Denver.  This difference appears to be
                                                       359

-------
Figure 2.  Control Costs for Primary and Secondary Units as a
           Function of Percent BOD Removal, Atlanta
           (abscissa:  percent pollutant control, R_L)

Figure 3.  Control Cost for Secondary Units as a Function of
           Percent BOD Removal for the Five Regions (Preliminary
           Results, see Heaney, Huber, et al., 1976 for Final
           Results)
           Washington, D.C. - 18.49 in/yr (50.0 cm/yr);
           San Francisco - 9.68 in/yr (25.1 cm/yr);
           Atlanta - 16.93 in/yr (43.0 cm/yr);
           Denver - 5.90 in/yr (15.0 cm/yr);
           Minneapolis - 10.99 in/yr (27.9 cm/yr)
                                                   attributable to the different precipitation pattern in
                                                   this part  of the country.  Thus, the  five  cities were
                                                   aggregated into two major categories.  The resulting
                                                   preliminary estimating equations are  shown below:
                    THIRTEEN WESTERN STATES

     Z_ws = (5.6 + 0.18 AR) e^(0.05 R_L),     0 <= R_L <= 85    (8)

                        EASTERN STATES

     Z_es = 1.8 e^(0.09 AR + 0.05 R_L),       0 <= R_L <= 85    (9)

where   Z_es, Z_ws = annual cost using secondary control
                     devices, dollars per acre, in the
                     eastern (e) and western (w) US,
                AR = annual runoff, inches per year, and
               R_L = level of BOD removal.


These equations  are  used  for estimating the control
costs for all of the urbanized areas in the US.  One
only needs to input the annual runoff, AR, and the
desired level of control, R_L.


      Runoff Prediction for Nationwide Assessment

Techniques for predicting runoff quantities vary from
very simple methods  of  the Rational Method type to
sophisticated models of the nature of SWMM.  The tech-
nique used in STORM  is  relatively simple, relying on
weighted average runoff coefficients and a simple loss
function to predict  hourly runoff volumes.  Nonetheless,
because of the nature of  the continuous simulation in-
volved, it is at a considerably higher level, and
therefore more complex, than earlier, desk-top tech-
niques.  Due to  the  complexities and data requirements
of STORM, it was not possible to run the model on all
cities of the nationwide  assessment, or even a majority.
These considerations lead directly to the use of a
simple runoff coefficient method in which runoff  is
merely a fraction of rainfall.  STORM computes a runoff
coefficient, CR, weighted between pervious and imper-
vious areas by:
               CR = 0.15 (1 - I) + 0.90 I
                  = 0.15 + 0.75 I                               (10)
                                                       360

-------
where I is fraction imperviousness and the coefficients
0.15 and 0.90 are the default values used in STORM for
runoff coefficients from pervious and impervious areas,
respectively.  Note that the effect of demographic fac-
tors (e.g.,  land use, population density) is incorpo-
rated into the imperviousness.   An equation developed
by Stankowski16 for New Jersey catchments was used to
determine imperviousness as a function of population
density, i.e. ,
     I = 0.0963 (PD)^(0.573 - 0.039 log10 PD)                   (11)

where    I  = fraction imperviousness, and
         PD = population density, persons per acre.

Thus,  annual runoff,  AR,  from precipitation of P
inches per year  is

     AR = CR (P)                                                (12)
The generalized estimating equation will be applied to
the Cincinnati, Ohio,  urbanized area to illustrate the
procedure.   The requisite data are presented below1:

Demographic Data

     1970 population = 1,110,000

     developed area, A = 125,000 acres

     population density, PD = 8.88 persons per acre.

Annual Runoff, AR

     precipitation,  P  = 34 inches per year.

Using equation (12),

     AR = CR(P)

        = (0.15 + 0.75[0.0963(PD)^(0.573 - 0.039 log10 PD)]) (34)

     AR = 13.0 inches per year.
 Control Costs for 60% BOD Control, R_L = 60%

 From equation (9),

      Z_es = 1.8 e^(0.09 AR + 0.05 R_L)

      Z_es = $116 per acre per year

      Total annual costs = Z_es (A) = $116 (125,000)
                         = $14,500,000 per year.
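
The same arithmetic can be packaged for any urbanized area.  The
sketch below reproduces the Cincinnati estimate from equations (9)
through (12); small differences from the hand calculation are due
only to rounding.

    import math

    # Sketch: eastern-states control cost estimate, equations (9)-(12).

    def fraction_impervious(pd):
        """Stankowski relation, eq. (11); pd = persons per acre."""
        return 0.0963 * pd ** (0.573 - 0.039 * math.log10(pd))

    def annual_runoff(precip_in, pd):
        """Eqs. (10) and (12): AR = CR * P, with CR = 0.15 + 0.75*I."""
        cr = 0.15 + 0.75 * fraction_impervious(pd)
        return cr * precip_in

    def eastern_cost_per_acre(ar, r_l):
        """Eq. (9): secondary control cost, $/acre/yr, valid for 0 <= r_l <= 85."""
        return 1.8 * math.exp(0.09 * ar + 0.05 * r_l)

    # Cincinnati example: 1,110,000 people on 125,000 developed acres, P = 34 in/yr.
    area_ac = 125_000
    pd = 1_110_000 / area_ac                   # about 8.88 persons per acre
    ar = annual_runoff(34.0, pd)               # about 13 in/yr
    z = eastern_cost_per_acre(ar, r_l=60.0)    # about $116 per acre per year
    print(f"AR = {ar:.1f} in/yr, Z = ${z:.0f}/ac/yr, total = ${z * area_ac:,.0f}/yr")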
                      Conclusions
 A simple procedure for evaluating urban stormwater
 quality control costs is presented.  This work is a
 condensation of the methodology used to develop a
 nationwide cost estimate for USEPA.1  A detailed des-
 cription of a more refined desk-top procedure for such
 evaluations will be released later this year.5

 The reader is cautioned that the estimating equations
 (8 and 9) are preliminary.  Final results will be
 presented in Heaney and Huber, et al., 1976.
                   References

 1.  Heaney, J. P., W. C. Huber, et al., Nationwide
     Evaluation of Combined Sewer Overflows and Storm-
     water Discharges:  Vol. III, Cost Assessment and
     Impacts, USEPA, 1976.

 2.  Environmental Protection Agency, "Storm Water
     Management Model," Water Pollution Control Research
     Series, Washington, DC, 1971.
     a.  Volume I, "Final Report," No. 11024DOC07/71
     b.  Volume II, "Verification and Testing,"
         No. 11024DOC08/71
     c.  Volume III, "User's Manual," No. 11024DOC09/71
     d.  Volume IV, "Program Listing," No. 11024DOC10/71

 3.  Heaney, J. P., W. C. Huber, et al., "Urban Storm
     Water Management Modelling and Decision Making,"
     USEPA Contract EPA-670/2-75-022, May 1975.

 4.  Hydrologic Engineering Center, "Urban Stormwater
     Runoff:  STORM," US Army Corps of Engineers,
     Generalized Computer Program 723-58-L2520, 1975.

 5.  Heaney, J. P., W. C. Huber and S. Hasan, "Storm
     Water Management Model - Level I, Desktop Analysis,"
     USEPA, 1976.

 6.  Field, R. I. and E. J. Struzeski, Jr., "Management
     and Control of Combined Sewer Overflows," JWPCF,
     Vol. 44, No. 7, pp. 1393-1415, 1973.

 7.  Lager, J. and W. Smith, "Urban Stormwater Manage-
     ment and Technology:  An Assessment," USEPA Report
     EPA-670/2-74-040, NTIS-PB 240 697/AS, 1974.

 8.  Becker, B. C. et al., Approaches to Stormwater
     Management, Hittman and Assoc., USDI Contract
     14-31-001-9025, 1973.

 9.  Hasan, S., Integrated Strategies for Urban Water
     Quality Management, PhD Dissertation, University of
     Florida, Gainesville, 1976.

10.  Field, R. I. and P. E. Moffa, Treatability Deter-
     minations for a Prototype Swirl Combined Sewer
     Overflow Regulator/Solids Separator, IAWPR Workshop
     on Design-Operator Interactions at Large Wastewater
     Treatment Plants, Vienna, Austria, 1975.

11.  Benjes, H. et al., "Estimating Initial Investment
     Costs and Operation and Maintenance Requirements of
     Stormwater Treatment Processes," USEPA Contract
     EPA-68-03-2186 (unpublished), 1975.

12.  Maher, M. B., Microstraining and Disinfection of
     Combined Sewer Overflows - Phase III, USEPA Report
     No. EPA-670/2-74-049, 1974.

13.  Agnew, R. W. et al., "Biological Treatment of
     Combined Sewer Overflow at Kenosha, Wisconsin,"
     USEPA Report EPA-670/2-75-019, NTIS-PB 242 120/AS,
     1975.

14.  Wiswall, K. C. and J. C. Robbins, Implications of
     On-Site Detention in Urban Watersheds, ASCE Hyd.
     Div. Conf., Seattle, Washington, 1975.

15.  Battelle Northwest, "Evaluation of Municipal
     Sewage Treatment Alternatives," Council on Environ-
     mental Quality, 1974.

16.  Stankowski, S. J., Magnitude and Frequency of Floods
     in New Jersey with Effects of Urbanization, Special
     Report 38, USGS, Water Resources Div., Trenton,
     New Jersey, 1974.
                                                        361

-------
                              MODELING HYDROLOGIC-LAND USE INTERACTIONS IN FLORIDA
                 Philip P.  Bedient
                 Asst.  Professor
                 Envi.  Sci. & Engr.
                 Rice University
                 Houston, Tex. 77001
Wayne C. Huber
Assoc. Professor
Envi. Eng. Sci.
Univ. of Florida
Gainesville, Fla. 32611
James P. Heaney
Assoc. Professor
Envi. Eng. Sci.
Univ. of Florida
Gainesville, Fla. 32611
   A technique is developed to describe and quantify
various hydrologic-land use interactions within a Flor-
ida river basin.  Surface runoff quantity and quality
are estimated as a function of land use and drainage
patterns at several levels of resolution including the
river basin, tributary watersheds,  lake basins, and
marsh areas.  A hydrologic-land use model based on a
daily water balance is applied to each soil-land use
complex in the watershed to estimate soil storage and
total runoff.  The overall basin response seems to be
more sensitive to the land drainage pattern than to
the condition of the narrow river flood plain.
   Potential nutrient loading rates are calculated
using measured concentrations of total P and predicted
runoff volumes.  The drainage density index correlates
with observed concentrations and loading rates for the
tributary watersheds.  The detention time parameter
for various hydrologic components in the basin indi-
cates the potential for control of runoff quantity and
quality through on-site storage in marsh, pond, and
lake areas.  Excessive drainage activities have led to
higher nutrient loads and decreased detention times in
the river basin.

                    Introduction
   Traditional approaches to watershed analysis have
placed little emphasis on linkage mechanisms which re-
late land use and drainage conditions to resulting hy-
drologic and water quality responses in a watershed.
Measured changes in land use and drainage patterns pro-
vide a useful starting point for estimating the impact
of alternative future levels of development.  Environ-
mental responses which can be measured or predicted
include the volume of surface runoff and streamflow,
and associated pollutant concentrations or loadings
which stimulate aquatic plant growth.
   The main objective of this research is to describe
and quantify various hydrologic-land use interactions
which occur within a river basin, in order to estimate
the historical, present, and projected environmental
responses in the basin.  This requires that a techni-
que be devised to characterize surface runoff quantity
and quality as a function of land use and drainage
patterns.  Influences of soil storage, vegetative cov-
er, drainage intensity, land use, topography, and cli-
mate are directly considered in the formulation.
   It is important to consider these interactions at
several levels of resolution or detail in order to
better understand the overall response of the water-
shed.  Various levels which are investigated include
the river basin, lateral tributary systems, lake units,
and marsh drainage areas.  Analyzing the response of
these different components allows quantification of
storage and transport mechanisms through the system.

           Description of the Study Area

   The Kissimmee River Basin, located in central Flor-
ida, is undergoing pressure for both agricultural and
urban expansion, while vast, undeveloped marsh areas in
the basin provide a valuable environmental resource.
This basin provides a convenient study area because of
the quantifiable land use changes  and water quality
responses which have been observed over the recent
past.
   The original river began near the Orlando area and
          passed  through  a  series  of shallow lakes before emerg-
          ing south of Lake Kissimmee as a meandering river.  It
          then  flowed south to Lake Okeechobee through a rela-
          tively  narrow marsh  flood plain (Figure 1).   Presently,
          the upper portion of the basin consists of a chain  of
          large lakes undergoing rapid urbanization from the  sur-
          rounding Orlando  area.   The lower basin is undergoing
          transition from its  undeveloped state as a marsh/swamp
          system  to a regime dominated by improved pasture with
          lateral drainage  canals.  In addition to the land use
          changes, an extensive flood control project has been
          implemented by  the Corps of Engineers with control
          structures at the outlet of the lakes and along the
          channelized main  river.
             Water quality  degradation in the form of high nutri-
          ent loading in  one of the upper lakes and along the
          river channel has been increasing over the last two
          decades.  There is concern for protecting water quality
          since the Kissimmee  River is the main inflow to Lake
          Okeechobee, which provides water supply to all of south
          Florida and the Everglades.  Objections have been
          raised  by ecologists and conservation groups over the
          destruction of  a  unique, natural meandering river and
          its rich marshes,  the decline of fish and waterfowl
          resources, and  the effect of degraded water quality on
          the eutrophication of Lake Okeechobee.1  During the
          past  two years, intensive studies by several groups
          have been underway to examine the environmental re-
          source  problems in Lake  Okeechobee and the Kissimmee
          River Basin.  The development and application of envi-
          ronmental simulation models along with pertinent re-
          sults are discussed  in the following sections.

                               Land Use Analysis

             Land use in  the Kissimmee River Basin has undergone
          rapid and significant changes in the last 15 years.
          Past  activities in the upper part of the basin (1958)
          were  dominated  by urban  interests, especially around
          the Orlando area, and agricultural interests involved
          in citrus on the  eastern ridge, small amounts of im-
          proved  pasture  throughout the remainder of the basin.
          The dominant undeveloped category was freshwater marsh
          and swamp around  the large lakes and adjacent to the
          Kissimmee flood plain.   The 1972 land use patterns  show
          about 40 percent  of  the  land which was formerly unim-
          proved  pasture  has been  improved through diking or
          drainage procedures.  Large areas of marsh and swamp
          have  been converted  to improved or unimproved pasture.
          In addition, urban expansion is evident south of Or-
          lando ,  around lake borders, and in the Disney World
          area  of western Orange County.
             Future patterns of land use in the Kissimmee River
          Basin have been projected using estimates of the Soil
          Conservation Service (SCS) and the U.S. Department  of
          Agriculture in  conjunction with a linear programming
          model developed in the study.  The results of this
          analysis are projections, to the years 1980, 2000 and
          2020, of what land use could be.
             A  more complete description of the land use method-
          ology is available,  and  the results of the analysis
          serve as direct input to the hydrologic-land use model
          discussed below.2 The observed shifts in land use  and
          drainage practices have  already created a series of
          effects which have begun to jeopardize the region's
          ability to  cope with increasing runoff volumes and
          degraded water  quality from waste loads.
                                                       362

-------
         Hydrologic-Land Use Interactions

Introduction

   Relatively  little research has been done on prob-
lems  associated with watersheds dominated by marsh and
lake  storage,  extremely flat slopes, and long-term
seasonal  rainfall and flooding.  These are termed de-
pressional watersheds,  and are most commonly found
along the Coastal Plain of the southeastern United
States.   South Florida watersheds including the Kissim-
mee-Everglades region fall into this category.
   Because the hydrologic response of the drainage
basin is  the  controlling link for land use and water
quality considerations, a hydrologic-land use model
(HLAND)  has been  developed which directly incorporates
land use  changes  and drainage practices.  The model is
based on the  daily water balance technique of Thorn-
thwaite and  the  Soil Conservation Service runoff curve
number method applied to each soil-land use type in
each subwatershed.3,4  The technique places primary
emphasis  on  soil  storage and potential evapotranspira-
tion (PE) dynamics to determine surface and subsurface
runoff volumes on a daily basis.  The approach is
ideally suited for modeling depressional watersheds.

Hydrologic-Land Use Model Description

   The climatic water balance was first developed in
an effort to characterize the moisture condition of an
area based on a balance between precipitation  (P)
which adds moisture to soil storage and evapotranspir-
ation (ET)  which removes it.  Knowledge of the rela-
tionship between P and ET provides information on
periods of moisture surplus (S) and moisture deficit
 (D), which in turn provides data on irrigation require-
ments, surface runoff, groundwater recharge, and soil
moisture storage.
   The various terms and relationships involved in  the
water balance are shown in Figure 2.  The budget can
be run on a monthly, weekly, or daily basis depending
 on the desired accuracy.  Measured values of precipi-
 tation (P) and calculated potential evapotranspiration
 (PE), which can be determined  for a region by  any one
 of the available techniques, provide the  initial value
 of excess precipitation  (P-PE).5  If this value is
 positive, then soil moisture storage  (ST) is increased
 up to the maximum level  (SM),  and actual  evapotranspir-
 ation  (AE) equals PE.  A water surplus  (S) is  gener-
 ated above the ground  surface  if  (P-PE) exceeds  (SM-ST)
 for  a given time increment.  For  this condition,
               S = (P - PE) - (SM - ST)                       (1)
    If  the value of  (P-PE) is negative,  then  a  loss
 occurs from soil moisture storage.  The loss is  not
 linear, because as  the soil dries, plants  are  less
 able to remove water via evapotranspiration  due  to cap-
 illary forces.  Thornthwaite assumes  that  the  actual
 amount of removal is proportional  to  the level of soil
 moisture content.   This condition  can be expressed by
 an exponential relation of the  form
               ST = (SM) e^(-(DWL x AWL))                     (2)
 where DWL = depletion coefficient, and
      AWL = accumulated water loss.
 Resulting curves for various levels of  SM are  plotted
 in Figure 2.  Thus, for the case of curve A, if  the
 accumulated water loss is 50 mm, the  resulting soil
 moisture retained (ST) is 62 mm.  Because ST is  less
 than  SM, the AE term is no longer equal to PE  for  the
 case of negative (P-PE).  Instead,

               AE = P + ΔST                                   (3)

 where  ΔST = available moisture which can be removed
              from the soil over one time step.

    The difference between PE and AE is termed the water
 deficit (D).
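
A single day of the accounting described above follows directly from
equations (1) through (3).  The sketch below is a simplified
illustration that assumes the classical Thornthwaite-Mather depletion
form (DWL = 1/SM) and a single soil storage; it is not the HLAND code
itself.

    import math

    # Sketch: one day of a Thornthwaite-type soil moisture accounting, eqs. (1)-(3).
    # Assumes the depletion form DWL = 1/SM, so ST = SM * exp(-AWL/SM);
    # HLAND's own coefficients and additional components are not reproduced here.

    def water_balance_day(p, pe, st, sm):
        """Return (new soil storage, surplus, actual ET), all in the same units."""
        excess = p - pe
        if excess >= 0.0:
            surplus = max(excess - (sm - st), 0.0)     # eq. (1): S = (P-PE) - (SM-ST)
            return min(st + excess, sm), surplus, pe   # soil fills first, AE = PE
        awl = -sm * math.log(st / sm)                  # accumulated water loss implied by ST
        awl += -excess                                 # add today's deficit (PE - P)
        st_new = sm * math.exp(-awl / sm)              # eq. (2): ST = SM * e^(-DWL*AWL)
        return st_new, 0.0, p + (st - st_new)          # eq. (3): AE = P + moisture removed

    # Illustrative sequence of (P, PE) in mm for a soil with SM = 120 mm:
    st = 100.0
    for p, pe in [(0.0, 5.0), (0.0, 5.0), (30.0, 4.0), (60.0, 4.0)]:
        st, surplus, ae = water_balance_day(p, pe, st, sm=120.0)
        print(f"ST = {st:6.1f}  surplus = {surplus:5.1f}  AE = {ae:4.1f}")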
        The water balance technique is a powerful predictive
     tool for areas undergoing  land use and vegetative
     changes, increased drainage, and/or urbanization.
     Drainage of land generally causes a reduction in soil
     storage, an increase in  surface runoff,  and a decrease
     in groundwater levels, all of  which can be quantified
     using the water balance.   Increases in irrigation re-
     quirements can also be predicted  based on increasing
     moisture deficits  from drainage.
        The HLAND Model computerizes the Thornthwaite water
     balance  for calculation  of daily  runoff using daily
     rainfall values from each  soil-land use type in  the
     study area.  Several additional components have  been
     incorporated to better represent  the hydrologic  re-
     sponse.  A more detailed description of the model and
     input data is available.6
        The SCS runoff  curve  number CN(J,K)  for land  use J
     and soil group K is used to estimate maximum soil mois-
     ture storage  SM(J,K) by the equation
                  SM(J,K) = 1000/CN(J,K) - 10                 (4)
Typical values of SM in the Kissimmee River Basin range
from 2 inches for drained improved pasture to 23 inches
for some of the marsh areas.
   Predicted surplus volumes  do not become runoff in-
stantaneously.  Rather, overland flow is delayed by
specifying that a fraction CDET(J,K) of the available
surplus will remain on the land per day.   These deten-
tion constants are estimates  derived from Thornthwaite
and Mather and the SCS, and vary from 0.60 for drained
improved pasture to 0.90 for marshes and swamps.7,4
These constants allow the surplus to become runoff at
an exponential rate, or in direct proportion to the
amount available.  Average detention time in days can
be derived from the detention coefficients for each
soil-land use type.
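
The two HLAND relationships just described, the curve number
conversion of equation (4) and the daily detention of surplus, can be
sketched as follows; the CN and CDET values shown are examples within
the ranges quoted in the text, not calibrated basin parameters.

    # Sketch: SM from the SCS curve number, eq. (4), and daily release of surplus
    # when a fraction CDET of the detained surplus remains on the land each day.

    def max_soil_storage(cn):
        """SM(J,K) in inches from the runoff curve number CN(J,K), eq. (4)."""
        return 1000.0 / cn - 10.0

    def route_surplus(daily_surplus, cdet):
        """Convert daily surplus (inches) to daily runoff with detention constant CDET."""
        detained, runoff = 0.0, []
        for s in daily_surplus:
            detained += s
            released = (1.0 - cdet) * detained   # fraction leaving the land today
            runoff.append(released)
            detained -= released
        return runoff

    print(round(max_soil_storage(83), 1))        # e.g. CN = 83 gives SM of about 2 inches
    print([round(r, 2) for r in route_surplus([1.0, 0.0, 0.0, 0.0], cdet=0.60)])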
   Base flow contributions as a function of soil mois-
ture storage have been specifically determined for the
Kissimmee River Basin.8  The relation was obtained
through the technique of hydrograph separation for 15
years of streamflow 'data for the Kissimmee River.  Base
flow is incorporated into HLAND by fitting an equation
to Langbein's relation, and partitioning the subsur-
face flows to each soil-land use complex in each plan-
ning unit.  In this way, base flow is calculated as a
function of soil moisture storage on a given day, and
then subtracted from soil storage at the end of the day.
If another type of base flow relation is preferred, it
can easily be incorporated into the model.

Flood Routing and Model Calibration

   Extensive and costly flooding occurred under natural
conditions in the Kissimmee River Basin due to prolong-
ed seasonal rainfall, inadequate secondary drainage,
and limited outlet capacity.  Tropical hurricanes,
which usually occur during the rainy season, also
served to intensify the problems.  The existing flood
control project was implemented in the 1960's and pro-
vided for channelization and control structures on the
Kissimmee River and below the large upper basin lakes.
   A comparison of flood hydrographs with and without
the flood control project indicates significant differ-
ences regarding both the shape and magnitude of the
response.  The 1969 hydrograph is characteristic  of a
developed drainage system with higher peak flows,
shorter lag times, and shorter recession times; compar-
ed to flood events prior to channelization and  upland
drainage.
                                                        363

-------
   The model HLAND was verified  for  the Kissimmee
River Basin using both 1958 and  1972 land use  condi-
tions and a series of historical daily rainfall  pat-
terns over the basin.  HLAND calculates the  contribu-
tion of  total runoff to the river, and a flood routing
procedure is used to simulate either the original mean-
dering river or the present channelized regime.  In
this way, it is possible to determine the relative
effects  of river channelization  vs. upland tributary
drainage on observed outflow hydrographs.
   While sophisticated flood routing methods are avail-
able, the linear Muskingum method is ideally suited
for modeling the daily response  in depressional  water-
sheds where long-term seasonal effects are of  primary
concern.9  Storage and travel time parameters  are ad-
justed to the original river or  channelized regime.
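
For reference, the linear Muskingum method reduces to a simple
recurrence once the storage constant K and weighting factor x are
chosen.  The sketch below uses the standard coefficient form with
illustrative parameters and an invented inflow series rather than the
values calibrated for the Kissimmee reaches.

    # Sketch: linear Muskingum routing, O2 = C0*I2 + C1*I1 + C2*O1, based on the
    # storage relation S = K*[x*I + (1-x)*O].  K, x, and the inflows are illustrative.

    def muskingum_route(inflow, k, x, dt, outflow0):
        denom = 2.0 * k * (1.0 - x) + dt
        c0 = (dt - 2.0 * k * x) / denom
        c1 = (dt + 2.0 * k * x) / denom
        c2 = (2.0 * k * (1.0 - x) - dt) / denom      # c0 + c1 + c2 = 1
        out = [outflow0]
        for i1, i2 in zip(inflow, inflow[1:]):
            out.append(c0 * i2 + c1 * i1 + c2 * out[-1])
        return out

    inflow = [500, 500, 1500, 3000, 2500, 1800, 1200, 800, 600, 500]   # cfs, daily values
    routed = muskingum_route(inflow, k=3.5, x=0.2, dt=1.0, outflow0=500.0)
    print([round(q) for q in routed])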
   A series of calibration years, 1965-1970, was se-
lected based on the availability of data and the fact
that this sequence includes both drought and extreme
flood conditions, which provides a good test of  the
accuracy of the model.  A comparison of measured and
predicted streamflows for 1972 land use conditions is
depicted in Figure 3 at the gaging station near  Lake
Okeechobee (S65-E).  It can be seen that the model
provides a generally accurate representation of  the
basin response during conditions of floods (1969-1970),
droughts (1965-1967), and average flows (1968).
   Based on calibration runs using 1958 land use and
the original flood plain, the basin response seems to
be much  more sensitive to the land drainage character-
istics than to the condition of  the narrow river flood
plain.   Overall travel times in  the system were  slower
under the 1958 regime because upland marsh and slough
detention provided additional storage capacity during
the wet  season.  The present regime induces excess
water into drainage canals at a  faster rate, and HLAND
results  indicate increasing percentages of surface
runoff compared to subsurface flows as upland  drainage
activity increases.  Thus, lateral subwatersheds domi-
nated by drainage canals tend to produce more  surface
runoff than those in a more natural drainage condition,
while subsurface contributions are less under drained
conditions due to decreased soil moisture levels.

            Water Quality Considerations

Monitoring Program

   Water quality data have been  collected in the
Kissimmee River for the past several years, and  in
tributary inflows for the 1973-74 period.   The monitor-
ing program was begun by the U.   S.  Geological Survey,
and has  been continued and expanded by the Central and
South Florida Flood Control District (FCD).2
   An analysis of available water quality data from
the FCD  indicates that total and inorganic phosphorus
levels are the most responsive parameters compared to
nitrogen variation.   Phosphorus   tends to adsorb  to
soil particles and is readily available for surface
transport via runoff and erosion.
   Samples were taken monthly for one year for the
lower river stations and the major upper lakes in the
basin.   A plot of average wet season total P concentra-
tions along the extent of the basin indicates a  rapid
decline  through the lake system and a further increase
along the channelized portion of the river (Figure 4).
The high levels in the upper lakes are primarily due
to nutrient loading from treated sewage effluent, and
it appears that the lakes are serving as nutrient
sinks at the present time.
   The water entering the channelized Kissimmee  River
is of fairly good quality,  but concentrations increase
rapidly below structure S65-C.   Detailed analyses of
tributary inflow quality by the  FCD indicate progres-
sively higher P concentrations,  especially south of
S65-C,  which correlates with the observed trend  in the
river.   There is a need,  then,  to explain the observed
distribution of  surface  runoff and nutrient loading in
the basin as a function  of land use and drainage activ-
ities.

Drainage Density and  Pollutant Loading

   While the hydrologic  model estimates source areas
which contribute runoff  volumes, non-point sources of
nutrients are primarily  a function of land use, with
agricultural lands  contributing relatively high loads
due to fertilization or cattle density.  Loehr has
surveyed the available literature to determine relative
loading rates from various land uses.10  Potential
nutrient loading rates can be calculated for each sub-
watershed in the basin using  measured concentrations of
total phosphorus and  predicted runoff volumes from
HLAND.  Higher loading tends  to be associated with
higher runoff rates in areas  of intense drainage.
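
The loading computation itself is a unit conversion.  A minimal
sketch is given below, assuming the concentration is reported in mg/l
and the predicted runoff volume in acre-inches; the subwatershed
values are hypothetical.

    # Sketch: potential nutrient load = concentration x runoff volume.
    # 1 acre-inch is about 102,790 liters, so:
    #   load (kg) = C (mg/l) * V (ac-in) * 102,790 l/ac-in * 1e-6 kg/mg

    def potential_load_kg(total_p_mg_per_l, runoff_acre_inches):
        return total_p_mg_per_l * runoff_acre_inches * 102_790.0 * 1e-6

    # Hypothetical subwatershed: 0.25 mg/l total P, 12,000 ac-in of predicted runoff.
    print(round(potential_load_kg(0.25, 12_000.0)))   # about 308 kg of P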
   Detailed analyses  of  land  use and drainage patterns
along the lower  river system  indicate the importance of
the drainage density  index, measured in miles of drain-
age network per  square mile of land area.  Drainage
density provides a  useful general indicator of land use
intensity, runoff volumes, and nutrient concentration
associated with  the various tributaries in the Kissim-
mee River Basin.11
   When drainage density measurements are compared with
measured inflow  concentrations of average total P dur-
ing the wet season, positive  correlations are obtained.
Converting to phosphorus loading rates as a function of
tributary drainage  density yields the significant rela-
tionship in Figure 5.  Although only a limited number
of data points are available  for the lower basin, the
results compare  favorably with values reported in the
literature for agricultural loading rates.12  It is
reasonable to expect  this result since drainage density
is inversely related  to  the average length of overland
flow, an indicator of potential runoff and pollutant
transport.

               Storage-Treatment Concepts

   Characteristics of hydrologic and nutrient cycles
can be placed into the general framework of reservoir
storage and control.  Various  hydrologic components in
a river basin system  are distinguished by a set  of
specific inflows, outflows, storages,  and losses which
contribute to the overall response.   The detention time
parameter, T, defined as  the  ratio of storage volume to
outflow rate, can be  used to  characterize various com-
ponents of the hydrologic system,  e.  g.,  soil, marsh,
pasture, lake, subwatershed or river.
   Detention time also plays  a key role in nutrient cy-
cling as it relates to treatment rates for runoff on
the land, in the  soil, and in lakes or streams.  In
general, the longer the  detention time,  the greater the
potential for nutrient uptake  and/or deposition of sed-
iments.  Thus, water  quality  control through the system
can be characterized  by  the length of time available
for physical, biological and  chemical uptake mechanisms.
   Calculated detention  times  from the HLAND results
average about 130 days in the  soil system,  and range
from 1.5 to 9.5  days  for surface runoff from various
land uses.  Subwatersheds characterized by intensive
drainage activity tend to have lower average detention
times, 2.2 days  compared to 4.5 days for naturally
drained areas.
   The flood routing  technique in the river channel
uses a travel time or detention time of 3.5 days com-
pared to a possible upper limit of 9.0 days for  the
original flood plain  condition.   These relatively short
detention times  result in a low potential for nutrient
uptake in the river or flood  plain alone.
   Since nutrient uptake requires long detention times,
the greatest potential would  occur in lakes and marsh
storage areas, which  also provide a measure of flood
storage capacity.  Average detention times in Lake
                                                       364

-------
Tohopekaliga south of Orlando vary from 4.0 to 6.0
months in  wet  and dry seasons, and  can  drop  as low as
1.0 month  during extreme  floods.  Although nutrient
loading is excessive to  the  lake, long  detention  times
allow for  up to 85% uptake by the time  the water
leaves the lake (Figure  4).   If future  developments
around the lake basin should cause  a reduction in de-
tention time from 4.0 to  2.0 months, then uptake  po-
tential could  drop  to 67%.
    The detailed study of  a marsh area above  S65-D re-
veals a definite potential for flood attenuation  and
nutrient uptake.  Results from a marsh  routing model
indicate  that  detention  times from  3.0  to 5.0 days
provide from 30 to  55% uptake of total  P, based also
on field  data.   Thus, marsh  areas provide a  signifi-
cant potential as long as routed runoff volumes
through them do not reduce detention times below  3.0
days.  These concepts are presently being tested  in
the basin.
    In general,  marsh and lake detention times are
comparable on  a per acre basis to  the soil system.
However,  both  the surface runoff and river system are
distinguished  by considerably smaller values of T,
around 5.0 days for the  entire river.   Thus, the
potential for  control of runoff quantity and quality
 in the basin exists through  on-site storage  in marsh,
pond, and lake areas.    Excessive drainage activities
have  led  to higher  nutrient  loads  and decreased
 detention times.
                       LIST OF REFERENCES

    1.  Marshall, A. R., J. H. Hartwell, D. S. Anthony, et al., The Kissimmee-
           Okeechobee Basin, Report to the Florida Cabinet, Tallahassee,
           Florida, 1972.

    2.  Heaney, J. P., W. C. Huber, P. B. Bedient, and J. P. Bowden.  Environ-
           mental Resources Management Studies in the Kissimmee River Basin,
           Final Report to the Central and Southern Florida Flood Control
           District, West Palm Beach, Florida, 1975.

    3.  Thornthwaite, C. W.  "An Approach Toward a Rational Classification of
           Climate," Geogr. Rev., 38, 1948, pp. 55-94.

    4.  Soil Conservation Service, National Engineering Handbook:  Hydrology,
           Sec. 4, U. S. Dept. of Agric., Washington, D. C., 1969.

    5.  Tanner, C. B.  "Measurement of Evapotranspiration," in Irrigation of
           Agricultural Lands, R. M. Hagan et al., eds., Amer. Soc. Agron.,
           Madison, Wis., 1967, pp. 534-574.

    6.  Bedient, P. B., Hydrologic-Land Use Interactions in a Florida River
           Basin, Ph.D. Dissertation, University of Florida, June 1975.

    7.  Thornthwaite, C. W., and J. R. Mather.  The Water Balance, Drexel
           Institute of Technology, Publications in Climatology, 8(3), 1955.

    8.  Langbein, W. B.  "Hydrologic Studies," in U. S. Geol. Survey Water
           Supply Paper 1255, G. G. Parker, G. E. Ferguson, et al., 1955,
           pp. 511-551.

    9.  Linsley, R. K., M. A. Kohler, and J. L. H. Paulhus.  Hydrology for
           Engineers, McGraw-Hill Book Co., New York, 1975.

   10.  Loehr, R. C.  "Characteristics and Comparative Magnitude of Non-Point
           Sources," Water Pollution Control Fed., 46(8), 1974,
           pp. 1849-1872.

   11.  Gregory, K. J., and D. E. Walling.  Drainage Basin Form and Process,
           John Wiley and Sons, New York, 1973.

   12.  Uttormark, P. D., J. D. Chapin, and K. M. Green.  Estimating Nutrient
           Loading of Lakes from Non-Point Sources, Environmental Protection
           Agency, Washington, D. C., 1974.
                    Figure 1.  Location Map (Kissimmee River Basin)

                    Figure 2.  The Water Balance (flow chart for surplus
                               conditions and soil moisture depletion
                               curves vs. accumulated water loss, AWL, mm)
                                                                  365

-------
 Figure 3.  Calibration Curves in the Kissimmee River Basin

 Figure 4.  Observed Phosphorus Concentrations in the
            Kissimmee River Basin (by sampling station)

 Figure 5.  Phosphorus Load vs. Drainage Density (mi/sq mi)
                                                         366

-------
                                 MODELING URBAN RUNOFF  FROM A PLANNED  COMMUNITY

                           Elvidio V. Diniz, Espey-Huston  and Associates, Austin,  TX
                           Don E. Holloway, Espey-Huston and Associates, Houston,  TX
                           William G. Characklis, Rice  University, Houston,  TX
A management strategy for utilization of water
resources in the planned community of The Woodlands,
near Houston, Texas, is being developed by modifica-
tion and application of the EPA Storm Water Management
Model (SWMM).  Selected sites on Panther Branch,
which flows through The Woodlands, and on Hunting
Bayou, a completely developed watershed within the city
limits of Houston, Texas were modeled for testing and
verification of the modifications to the SWMM.

The capacity of the SWMM to model urban runoff quantity
has been improved to include the "natural" drainage
concepts of The Woodlands and the infiltration compu-
tation model in the SWMM is now capable of operating
with a rainfall record which includes periods of zero
rainfall.  Three new subroutines have been written to
operate in conjunction with the SWMM.  The three sub-
routines generate normalized area-discharge curves
for natural sections, model baseflow conditions, and
model the operation of porous pavements, respectively.
Verification of the SWMM with regard to suspended
solids and BOD5 was attempted and modifications to
predict COD, Kjeldahl nitrogen, nitrates and phosphates
were performed.

                    Scope of Study

Increased runoff rates and increased pollutant loads
are two of the major effects of urbanization on the
hydrologic regime of a previously undeveloped water-
shed.  The increase in impermeable areas due to urban-
ization results in high velocity surface flows which
tend to increase the potential  for capture of pollut-
ants by the storm water and reduce natural  infiltration
processes.

The planned new community of The Woodlands is designed
to minimize the detrimental effects of urbanization
upon the runoff characteristics of the watershed in
which it is located.   Several  extensive changes to the
U.S. Environmental  Protection Agency Storm Water
Management Model  (SWMM)  had to  be performed to allow
modeling of storm water runoff in The Woodlands by use
of the SWMM.  The necessary changes to the SWMM
include modified computations for infiltration volumes
and pollutographs and three new subroutines to develop
normalized  area-discharge curves for natural  channel
sections, to model  baseflow conditions,  and to model
runoff from porous  pavements.   This paper discusses the
changes that were performed to  the SWMM,  the new sub-
routines that were  developed,  and the concurrent
modeling effort in  The Woodlands.   An urban Houston
watershed,  Hunting  Bayou,  was also modeled because its
drainage characteristics  are similar to  those of
The Woodlands.

               The  Woodlands Study Area

The Woodlands is  a  planned urban community  being de-
veloped approximately 28  miles  north of Houston,
Texas in a  heavily  forested 17,800 acre  tract in
Montgomery  County.  A total  of  33,000 dwelling units
with a  projected  population of  112,000 is  programmed
at project completion in 1992.  A concern for nature
and convenience for people are two of the major
criteria used in the development of the General Plan
for The Woodlands; consequently, all development in
The Woodlands is based on a comprehensive ecological
inventory conducted from 1971 to 1973.  Approximately
 33  percent  of  the  total area,  including all  flood-
 prone  land,  is  planned  for open space  uses.  Some of
 the open space  will  be  retained in  its original state
 to  provide  wildlife  habitat, while  other areas will
 be maintained  for  park  and similar  recreational uses.

 The Woodlands site is located  in  the Spring  Creek
 Watershed.   Panther  Branch, an intermittent  tributary
 to Spring Creek, is  the major  drainage channel as
 shown  in Figure 1.   Drainage channels  tributary to
 Panther Branch  and Spring Creek are characterized by
 broad  and shallow  swales having very mild slopes.
           FIGURE 1--THE WOODLANDS DEVELOPMENT
           (legend: watershed divide, subcatchment divide,
            The Woodlands boundary)

A detailed soil survey by the U.S. Department of
Agriculture Soil Conservation Service and the Texas
Agricultural Experiment Station determined that the
soils in The Woodlands site are highly leached, acid
in reaction, sandy to loamy in texture and low in
organic content.  The vegetation is typical  for a
mixed woodlands of the Southern Piney Forest charac-
terized by loblolly and short-leaf pines in associa-
tion with hardwood species, including oak, sweet gum,
hickory and magnolia.  The dense vegetation, sandy
soils and mild slopes result in high retention and
infiltration losses from rainfall.

The design of all  drainage channels in The Woodlands
is based on the premise that typically narrow and
deep drainage ditches are undesirable.  Therefore, the
existing drainage channels are utilized to the fullest
extent possible and any new channels are constructed
as wide, shallow swales and lined with native vegeta-
tion to emulate the existing channels.  Storm sewers
and drains are used in high density and activity areas
to conduct the excess runoff to the nearest drainage
channel with sufficient capacity to safely carry the
flow.  To minimize increases in runoff volumes and
peaks, retention ponds are utilized whenever practical.

The net effect of this "natural" drainage system is an
increase in infiltration and storage capacity in the
channels, thereby reducing the impact of urbanization
upon the runoff regime.

             Data Sampling and Sources

There are two stream stage recorders located on
                                                       367

-------
Panther Branch:  Panther Branch near Conroe (P-10) and
Panther Branch near Spring (P-30) as shown in Figure 1.
Station P-10, located below the confluence of Panther
and Bear Branches, measures runoff from 25.1 sq mi of
undeveloped forest land.  Station P-30 has a drainage
area of 33.8 sq mi with the developing areas of The
Woodlands (Phase I) immediately upstream.

The Hunting Bayou study area is located northeast of
downtown Houston and within the metropolitan confines
of the city.  As seen in Figure 2, there are two gaging
stations:  Hunting Bayou at Cavalcade Street (H-10)
and Hunting Bayou at Falls Street (H-20).  The drainage
areas of Stations H-10 and H-20 are 1.03 and 3.08 sq mi,
respectively.  Land use is primarily residential with
some commercial and industrial areas.  There are very
few storm sewers and the major portion of the drainage
system is made up of grass-lined swales comparable to
those of The Woodlands.
          FIGURE 2--THE HUNTING BAYOU WATERSHED
          (legend: drainage divide, drainage subdivide,
           drainage ditch)

During storm events, streamflow quality sampling was
conducted in conjunction with flow gaging.  The
samples were analyzed for a large number of parameters
including suspended solids, COD, nitrates, phosphates
and Kjeldahl nitrogen.  Reconstitution of the observed
hydrographs and pollutographs to calibrate the SWMM
was attempted in the modeling effort.

              Modifications to the SWMM

The SWMM was originally developed to model the hydro-
logic effects of older urban areas where an artificial
drainage system was imposed upon, and in most cases
entirely replaced, the original drainage system.  In
the application of the SWMM to The Woodlands, several
deficiencies in the model were encountered.  The major
modifications are discussed in the following sections.

Modification to Infiltration Volume Computation

Infiltration rates are computed in the RUNOFF Block of
the SWMM by means of Horton's Equation, defined as
follows:

   fc = (fi - fo) e^(-k t*) + fo

where:   fc  =  infiltration rate at time t*
         fi  =  initial infiltration rate
         fo  =  final infiltration rate
         k   =  decay coefficient
         t*  =  time from start of rainfall to the
                midpoint of the time interval Δt, or
                t* = t + 0.5 Δt

The RUNOFF Block of the SWMM was structured such that
Horton's time-dependent infiltration rate decay
equation would become operative from the start of
modeling time.  Consequently, if the time of start of
rainfall did not coincide with the start of modeling
time, the infiltration rate would have decayed to a
lower rate by the time rainfall had begun.  This may
be one reason why early investigators determined that
the starting infiltration rate was not a significantly
sensitive parameter.  A second problem with the com-
putation of infiltration volume resulted from the in-
put of two or more high intensity rainfall events
separated by time periods of zero or low intensity
rainfall that was not capable of satisfying the infil-
tration rate.  The infiltration rate would decay with-
out regard to the availability of rainfall for infil-
tration.  Modeling runoff under these conditions was
difficult and consequently the infiltration computa-
tion method in the SWMM was modified.

The new computation scheme uses an integral form of
Horton's Equation and a time parameter to monitor the
progress of infiltration only.  The integral of
Horton's Equation is:

   M = fo t + [(fi - fo)/k] (1 - e^(-k t))

where:   M  =  accumulated infiltration volume in
               inches at the end of time t
         The other variables are as defined previously.

During each time interval (Δt), the volume of water
capable of infiltrating, M(t+Δt) - M(t), is calculated
and compared to the total volume of water available
for infiltration, determined as:

   Dt = St + Rt Δt

where:   Dt  =  water volume after rainfall during
                time interval Δt
         St  =  water volume remaining from the
                previous Δt
         Rt  =  intensity of rainfall during Δt

When the available volume is greater than the infiltra-
tion volume, the excess is calculated as the volume
of water available for runoff.  The results are com-
parable to those previously computed by the SWMM.

If the infiltration volume is greater than the
available volume, the time increment Δt* < Δt is
computed such that the infiltration volume is equal to
the available volume:

   M(t+Δt*) - M(t) = Dt

where:   M(t+Δt*)  =  volume of infiltration at time t + Δt*
         M(t)      =  volume of infiltration at time t
         Dt        =  volume of water available for
                      infiltration

and no runoff is generated for that Δt.
                                                        368

-------
The infiltration rate at time (t + At*) then becomes
the starting infiltration rate for the next computa-
tional  time interval  beginning at time (t + At).
Therefore,  the elapsed time for infiltration rate
decay by Horton's Equation will  not necessarily coin-
cide with the elapsed runoff computation time.
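
The modified scheme can be summarized in a few lines of code.
The sketch below is an illustration of the logic described above,
not the SWMM source:  the cumulative Horton curve is advanced on
its own infiltration clock, which moves by the full time step when
rainfall can satisfy the infiltration capacity and by a shorter
increment Δt* (found here by bisection) when it cannot.  The
parameter values and rainfall series are assumed, and carryover of
surface water between intervals is simplified.

```python
import math

# A minimal sketch (not the SWMM source) of the modified infiltration
# volume computation.  The cumulative Horton curve
#     M(tau) = fo*tau + ((fi - fo)/k) * (1 - exp(-k*tau))
# is advanced on its own clock tau.  Units are assumed consistent
# (inches, hours); all values below are illustrative.

def cumulative_horton(tau, fi, fo, k):
    """Accumulated infiltration volume at infiltration time tau."""
    return fo * tau + (fi - fo) / k * (1.0 - math.exp(-k * tau))

def run(rain, dt, fi, fo, k):
    """Step through rainfall depths per interval; return runoff per interval."""
    tau = 0.0          # infiltration clock (advances only as water infiltrates)
    runoff = []
    for r in rain:
        available = r                                 # D = R*dt (surface carryover simplified)
        capacity = (cumulative_horton(tau + dt, fi, fo, k)
                    - cumulative_horton(tau, fi, fo, k))
        if available >= capacity:
            runoff.append(available - capacity)       # excess becomes runoff
            tau += dt                                 # full decay step
        else:
            # Find dt_star <= dt such that the infiltrated volume equals the
            # available water; no runoff is generated this interval.
            lo, hi = 0.0, dt
            for _ in range(50):                       # simple bisection
                mid = 0.5 * (lo + hi)
                vol = (cumulative_horton(tau + mid, fi, fo, k)
                       - cumulative_horton(tau, fi, fo, k))
                if vol < available:
                    lo = mid
                else:
                    hi = mid
            runoff.append(0.0)
            tau += 0.5 * (lo + hi)                    # partial decay step
    return runoff

# Illustrative storm: two bursts separated by a dry spell (depths in inches).
rain = [0.5, 0.8, 0.0, 0.0, 0.0, 0.6, 0.3]
print(run(rain, dt=0.5, fi=3.5, fo=0.01, k=1.15))
```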

Subroutine  NATSEC

In the TRANSPORT Block of the SWMM, normalized area-
discharge curves are required for flow routing.
Thirteen uniform channel shapes  (circular, rectangu-
lar, trapezoidal, etc.) have their respective curves
preprogrammed through Block Data, but those for
natural sections have to be independently computed
and input to the model.  Because of the large volume
of work required in preparing these curves for a
"natural" drainage system, Subroutine NATSEC was
written and incorporated into the SWMM.  This sub-
routine generates normalized area-discharge curves
for irregularly shaped cross sections and for cross
sections with varying values of Manning's roughness
coefficient, n.  The cross section is input to the
subroutine by means of a two-dimensional linear coor-
dinate system.  Three Manning's  n values, one for each
overbank and one for the channel, may also be used.
Depth  increments for equal increments of area are
calculated by an iterative process.

When the depth of flow is below bank elevations, a
single application of Manning's equation is sufficient.
If  the channel capacity is exceeded, the flows in
each overbank as well as flows in the channel are
computed by independent applications of Manning's
equation to each flow area.  The total discharge is
equated to the sum of the individual discharges.

The output from subroutine NATSEC is a tabular ver-
sion of the normalized area-discharge curves for
natural channels and is comparable to the other
area-discharge curves in Block Data of the SWMM.
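
The essential calculation can be illustrated with a short sketch.
The version below is a simplification of what NATSEC does, not the
subroutine itself:  it takes a cross section as (station, elevation)
coordinates, computes area and wetted perimeter at a series of
stages, applies Manning's equation with a single roughness value,
and normalizes both area and discharge by their full-depth values.
The section geometry, n, and slope are illustrative, and the
separate overbank computation is omitted.

```python
import math

# Simplified normalized area-discharge computation for an irregular
# cross section defined by (station, elevation) pairs.  Single Manning
# roughness; all geometry and parameter values are illustrative.

def area_and_perimeter(xs, zs, wsel):
    """Flow area and wetted perimeter below water-surface elevation wsel."""
    area = perim = 0.0
    for (x1, z1), (x2, z2) in zip(zip(xs, zs), zip(xs[1:], zs[1:])):
        d1 = max(wsel - z1, 0.0)          # depth at each end of the segment
        d2 = max(wsel - z2, 0.0)
        if d1 == 0.0 and d2 == 0.0:
            continue
        if d1 == 0.0 or d2 == 0.0:        # segment crosses the water surface
            frac = max(d1, d2) / abs(z2 - z1)
            width = abs(x2 - x1) * frac
            depth = max(d1, d2)
            area += 0.5 * width * depth
            perim += math.hypot(width, depth)
        else:                             # fully submerged segment
            width = abs(x2 - x1)
            area += 0.5 * (d1 + d2) * width
            perim += math.hypot(width, z2 - z1)
    return area, perim

def manning_q(area, perim, n, slope):
    if area == 0.0 or perim == 0.0:
        return 0.0
    r = area / perim                      # hydraulic radius
    return 1.49 / n * area * r ** (2.0 / 3.0) * math.sqrt(slope)

# Illustrative swale: stations (ft) and bed elevations (ft).
xs = [0, 10, 30, 50, 70, 90, 100]
zs = [6,  3,  1,  0,  1,  3,   6]
n, slope = 0.06, 0.001
a_full, p_full = area_and_perimeter(xs, zs, 6.0)
q_full = manning_q(a_full, p_full, n, slope)

for wsel in [1, 2, 3, 4, 5, 6]:
    a, p = area_and_perimeter(xs, zs, wsel)
    q = manning_q(a, p, n, slope)
    print(f"stage {wsel}:  A/Afull = {a / a_full:.2f}   Q/Qfull = {q / q_full:.2f}")
```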

Subroutine BASFLO

The SWMM computational scheme considers all infiltra-
tion volume as permanently removed from the runoff
volume.  The volume of rainfall  that soaks into
vegetation debris and surface soils and which drains
out at a very delayed rate is not accounted for
because in most urban areas this interflow volume is
negligible.  But, again, in the "natural" drainage
system of The Woodlands, interflow does become a
significant factor.  In vegetated areas, the volume of
infiltration as computed in the SWMM includes evapo-
transpiration losses and losses to groundwater.
Interception losses may be accounted for in either
the infiltration or surface depression storage.

The portion of the hydrograph beyond the point of in-
flection (where dQ/dt → ∞) is generally considered as
depletion of runoff volume stored in the drainage
system or watershed.  As for most depletions in
nature, the rate of depletion approximates an
exponential decay and is often referred to as baseflow
recessions of the form:

   Q(t+Δt) = Q(t) e^(-k Δt)

where:   Q(t+Δt)  =  flow at end of time interval Δt
         Q(t)     =  flow at start of time interval Δt
         k        =  recession coefficient

 The recession coefficients and their associated flow
 ranges are user-supplied to Subroutine BASFLO.  One
 theory of varying recession coefficients for a single
hydrograph is the concept of drainage of different
storage units in the hydrologic system.  In The Wood-
lands, both stations P-10 and P-30 exhibit two recession
ranges.  The recession coefficients (determined from
observed hydrographs) are plotted against the flow at
start of recession and the corresponding regression
equations are derived.  The coefficients of the regres-
sion equations are input to Subroutine BASFLO.  All
flow rates beyond the point of inflection are determined
by consecutive applications of the recession coefficient
regression equations and the baseflow recession
equations.

Subroutine BASFLO also provides for inclusion of the
groundwater component of runoff.  The groundwater flow
rate may be input as a constant, linearly varying, or
logarithmic function.  All  computed groundwater flow
rates are added to the runoff hydrograph with respect
to time resulting in a corresponding upward shift in
the runoff hydrograph.

The baseflow rates are substituted into the runoff
hydrograph prior to addition of groundwater flow rates.
The specific water quality loading rates are applied to
the new flow rates and the corresponding pollutographs
are computed.
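
A minimal sketch of the recession logic is given below.  It is an
illustration of the approach described above rather than the BASFLO
code:  each flow beyond the point of inflection is obtained from
the previous one with the exponential recession equation, the
recession coefficient is taken from a regression on the flow at the
start of the step, and a groundwater component is added.  The
regression constants, starting flow, and constant groundwater rate
are assumed values.

```python
import math

# Sketch of a BASFLO-style recession extension.  The recession
# coefficient is drawn from a hypothetical regression on the flow at
# the start of each step; all numerical values are illustrative.

def recession_k(flow, a=0.002, b=0.0005):
    """Hypothetical regression of recession coefficient on flow (cfs)."""
    return a + b * math.log(flow)

def baseflow_recession(q_start, dt_hours, n_steps, groundwater=5.0):
    """Extend a hydrograph past its inflection point."""
    flows = []
    q = q_start
    for _ in range(n_steps):
        k = recession_k(q)                 # coefficient for this flow range
        q = q * math.exp(-k * dt_hours)    # exponential recession step
        flows.append(q + groundwater)      # constant groundwater component
    return flows

print(baseflow_recession(q_start=300.0, dt_hours=1.0, n_steps=6))
```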

Subroutine PORPAV

The Woodlands Development Corporation has envisaged an
extensive use of porous pavements in the place of
conventional impermeable pavements.  Subroutine PORPAV
was developed to model the effects of porous pavements
on the runoff volume and peak flows because the SWMM
did not have this capability.

The modeling scheme consists of delineating the porous
pavement and the subgrade as two hydraulically
connected control volumes for which the inflow and out-
flow conditions are established by the equation for
continuity or conservation of mass:

   dS/dt = I - O

where:   dS/dt  =  change in storage during time
                   interval dt
         I      =  time average inflow
         O      =  time average outflow
Inflow to the porous pavement area is determined as the
sum of direct rainfall onto the pavement and the over-
land flow hydrograph as computed by Izzard's method.

The outflow is the sum of vertical seepage losses, hori-
zontal seepage losses, surface runoff when the porous
pavement storage capacity is exceeded, and evaporation
losses.  Vertical seepage losses are computed by the
variable head permeability equation.  A modified Darcy
Equation is used to model the horizontal seepage losses
and Manning's Equation is used to establish the surface
runoff rate.  The instantaneous evaporation loss rate
is computed from a time-lagged sine curve approximation
of diurnal evaporation loss rates.

Unfortunately, no data on existing porous pavements are
available.  Therefore, all testing of Subroutine PORPAV
has been done on a hypothetical area.
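
The control-volume bookkeeping can be sketched as follows.  This is
a simplified illustration of the scheme described above, not the
PORPAV subroutine; the seepage and evaporation rate laws are
placeholder forms, and all capacities, coefficients, and the storm
itself are assumptions.

```python
import math

# Simplified water balance for a porous pavement reservoir draining to
# a subgrade reservoir.  Depths in inches, time steps in hours; every
# numerical value is illustrative.

def evaporation(hour, peak=0.01, lag=2.0):
    """Diurnal evaporation rate (in/hr): a lagged sine, zero overnight."""
    return max(peak * math.sin(math.pi * ((hour - lag) % 24.0) / 12.0), 0.0)

def porous_pavement(rain, runon, dt=1.0,
                    pav_cap=2.0, sub_cap=6.0,      # storage capacities (in)
                    k_vert=0.25, k_horz=0.05):     # seepage coefficients (1/hr)
    pav = sub = 0.0
    overflow = []
    for i, (r, q_on) in enumerate(zip(rain, runon)):
        pav += r + q_on                                   # I: rain plus run-on
        vert = min(k_vert * pav * dt, sub_cap - sub, pav)
        pav -= vert                                       # vertical seepage
        sub += vert
        horz = min(k_horz * pav * dt, pav)
        pav -= horz                                       # horizontal seepage
        evap = min(evaporation(i * dt) * dt, pav)
        pav -= evap                                       # evaporation loss
        spill = max(pav - pav_cap, 0.0)
        pav -= spill                                      # surface runoff
        overflow.append(spill)
    return overflow

rain  = [0.3, 0.6, 0.9, 0.4, 0.1, 0.0, 0.0, 0.0]
runon = [0.1, 0.2, 0.3, 0.1, 0.0, 0.0, 0.0, 0.0]
print(porous_pavement(rain, runon))
```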

             Storm Water Quality Modeling

A thorough analysis of the available water quality data
from The Woodlands was conducted in an attempt to
define a methodology to predict runoff quality, speci-
fically nitrates, phosphates, Kjeldahl nitrogen, and
                                                       369

-------
COD.  The present  version  of the SWMM considers these
as percentages of  the  dust and dirt volume.  Recog-
nizing that in The Woodlands the dust and dirt genera-
tion rates are not typical  of other urban areas, re-
lationships between quantity and quality of flow were
sought.  Plots of  cumulative pounds of pollutant
versus cumulative  volume of flow indicate a strong
relationship as shown  in Figure 3.A for COD.  In some
cases, if availability of  the pollutant is exceeded,
the upper portion  of the straight line will curve up-
wards indicating that  the  rate of pollutant loading is
decreasing with increasing flow.   Also, it was deter-
mined that total pollutant loading in units of pounds
per acre is a function of  total  inches of runoff as
shown in Figure 3.B for COD.  The slopes of the
straight lines tend to increase with urbanization, in-
dicating an increase of pollutant loading for the same
volume of runoff.  The relative magnitude of the
urbanization effects may be determined by the increase
in slope.
           FIGURE 3.A--DOUBLE MASS ANALYSES FOR THE STORM
           OF 12/05/74 AT GAGE P-30 (cumulative pollutant
           mass vs. discharge volume)

           FIGURE 3.B--TOTAL POLLUTANT LOADING RATES
           (pollutant load vs. runoff, in inches, for
            gages P-10 and P-30)

The pollutant loading relationship may be used to de-
termine total pollutant mass  and the  cumulative
pollutant mass relationship can  provide a flow depen-
dent mass transport rate.  A  combination of the two
functions can be used to  develop a pollutograph.

This methodology to determine quality of runoff is
unrelated to the dust and dirt  accumulation approach
as used in the SWMM.  Consequently, the quality of
runoff computations in the SWMM  would have to be com-
pletely rewritten to incorporate the  new methodology.
A simpler modification involves  the input of user
supplied pollutant loading rates.   Initial modeling
attempts using this approach  are now  being conducted.
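
A brief sketch of how the two relationships combine into a
pollutograph is given below.  The loading relation (Figure 3.B)
fixes the total mass from the total runoff depth, and a
straight-line double-mass curve (Figure 3.A) distributes that mass
in proportion to the flow volume in each interval.  The loading
slope, drainage area, and hydrograph in the sketch are illustrative
assumptions.

```python
# Sketch of pollutograph construction from a loading relation and a
# straight-line double-mass curve.  All constants are illustrative.

AREA_ACRES = 21_600          # hypothetical drainage area
LOAD_SLOPE = 12.0            # lb/acre of COD per inch of runoff (assumed)

def pollutograph(flows_cfs, dt_hr):
    """Return pollutant mass (lb) carried in each time interval."""
    # Flow volume per interval (cubic feet) and total runoff depth (in).
    vols = [q * dt_hr * 3600.0 for q in flows_cfs]
    runoff_in = sum(vols) / (AREA_ACRES * 43_560.0) * 12.0
    total_mass = LOAD_SLOPE * runoff_in * AREA_ACRES      # lb, from Fig. 3.B
    total_vol = sum(vols)
    # Straight-line double-mass curve: mass moves in proportion to volume.
    return [total_mass * v / total_vol for v in vols]

flows = [50, 200, 450, 300, 150, 80, 40]     # cfs, illustrative hydrograph
print([round(m, 1) for m in pollutograph(flows, dt_hr=1.0)])
```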

Application of SWMM to The Woodlands  and Hunting Bayou

The 33.8 sq mi of drainage area  upstream of Station
P-30 was divided into 57  subcatchments with an average
size of 380 acres.  Physical  parameters were deter-
mined from topographic maps obtained  from the U.S.
Geological Survey and The Woodlands Development Cor-
poration.  Certain parameters such as width of sub-
catchment and retention depths  cannot be directly
determined for natural watersheds.  Therefore, these
parameters  can  be adjusted within reason  so  that a
good fit exists between observed and computed hydro-
graphs.  Width  of subcatchment values were first esti-
mated using the method described in the SWMM User's
Manual.  These  values had to be reduced by approximately
40 percent  because overland flow will not occur as
sheet flow  over the entire subcatchment.

The parameters  for infiltration modeling  at  Stations
P-10 and P-30 are listed in Table 1.  Table  2 lists
the observed and computed peak flows and  volumes  and
Figures 4 and 5 compare the observed and  computed
hydrographs  and suspended solids pollutographs  for the
storm of 12/05/74 at Stations P-10 and P-30,  respec-
tively.  The SWMM predicts suspended solids  transport
very well at the P-10 gage, but the prediction  at the
P-30 gage is not as successful.  One reason  may be
the transient state of construction and development in
the drainage area between the two gages.  Accounting
for all construction areas and their erodibility prior
to the storm event being modeled proved to be difficult.
Consequently, it is presumed that several construction
areas where  the natural  ground had been disturbed and
stripped of  the protective vegetative cover  contributed
more suspended  solids than the SWMM could predict from
the available input data.

             TABLE 1 -- INFILTRATION PARAMETERS FOR
                        PANTHER BRANCH WATERSHED

                                  INFILTRATION RATES
             STORM               Initial   Final   Decay
              DATE     STATION    in/hr    in/hr    /sec

            10/28/74    P-10       3.5      0.01    .0005
                        P-30       3.5      0.01    .0005

            11/10/74    P-10       0.3      0.01    .00115
                        P-30       0.3      0.01    .00115

            11/24/74    P-10       2.0      0.01    .00115
                        P-30       2.0      0.01    .00115

            12/05/74    P-10       0.5      0.01    .00115
                        P-30       0.5      0.01    .00115

            12/10/74    P-10       0.2      0.01    .00115
                        P-30       0.2      0.01    .00115
              TABLE 2 -- SUMMARY OF MODELING RESULTS
                         FOR PANTHER BRANCH

                            PEAK FLOW         RUNOFF VOLUME
   STORM                  OBS.    COMP.      OBS.        COMP.
    DATE     STATION      cfs      cfs    10^6 cu ft   10^6 cu ft

  10/28/74    P-10        342      360       24.40       29.03
              P-30        376      410       39.34       36.16

  11/10/74    P-10        979      600       64.48       53.44
              P-30        897      705       72.87       73.61

  11/24/74    P-10        680      645       52.24       57.72
              P-30        774      735       73.70       78.97

  12/05/74    P-10        273      315       36.06       32.66
              P-30        329      370       45.52       48.55

  12/10/74    P-10        464      380       44.42       33.61
              P-30        517      425       51.73       43.02
The 3.42 sq mi of  drainage area upstream of Station
H-20 was divided into 24 subcatchments with an average
size of 91 acres.   Physical  parameters were determined
in a manner similar to that for Stations P-10 and P-30.
The infiltration parameters and the observed and com-
puted peak flows and volumes at Stations H-10 and
H-20 are listed in Tables 3 and 4, respectively.  The
observed and  computed hydrograph and suspended solids
pollutograph  for the storm of 5/08/74 at Station H-20
are shown in  Figure 6.  During the initial phases of
the modeling  program it was determined that due to
testing difficulties, BOD5 modeling could not be
verified.

As seen in Figures 4, 5, and 6, the computed hydro-
graphs are reasonably acceptable, but the suspended
solids pollutographs for urban areas are severely de-
ficient.  Other investigators have arrived at a similar
conclusion.
                                                       370

-------
         FIGURE 4--GAGE P-10, STORM OF 12/05/74
         (observed and computed hydrographs and suspended
          solids pollutographs vs. time in hours)

         FIGURE 5--GAGE P-30, STORM OF 12/05/74
         (observed and computed hydrographs and suspended
          solids pollutographs vs. time in hours)

         FIGURE 6--GAGE H-20, STORM OF 5/08/74
         (observed and computed hydrographs and suspended
          solids pollutographs vs. time in hours)

              TABLE 3 -- INFILTRATION PARAMETERS FOR
                         HUNTING BAYOU WATERSHED

                                  INFILTRATION RATES
             STORM               Initial   Final   Decay
              DATE     STATION    in/hr    in/hr    /sec

             9/08/68    H-10       1.00     0.10    .0005
                        H-20       1.00     0.10    .0005

             9/17/68    H-10       0.75     0.10    .0005
                        H-20       0.75     0.10    .0005

            11/09/70    H-10       2.50     0.10    .0005
                        H-20       2.50     0.10    .0005

             3/26/74    H-20       0.10     0.02    .0005

             5/08/75    H-20       0.30     0.10    .0005

               TABLE 4 -- SUMMARY OF MODELING RESULTS
                          FOR HUNTING BAYOU

                            PEAK FLOW         RUNOFF VOLUME
   STORM                  OBS.    COMP.      OBS.        COMP.
    DATE     STATION      cfs      cfs    10^6 cu ft   10^6 cu ft

   9/08/68    H-10        121      160        1.43        1.85
              H-20        325      355        4.48        4.69

   9/17/68    H-10        144      155        2.82        2.24
              H-20        333      365        8.37        6.02

  11/09/70    H-10         85      125        1.50        0.97
              H-20        161      220        3.50        2.65

   3/26/74    H-20         40                 1.46        1.93

   5/08/75    H-20         73                 1.38        1.32

                   Future Directions

The modifications and additions to the SWMM which are
discussed in this paper indicate that the modeling of
storm water runoff quantity by the SWMM has been con-
siderably improved.  The new infiltration and baseflow
models allow a closer parallel to the observed hydro-
graph.

The modeling of smaller subcatchment areas with more
definitive hydrologic regimes will provide a method
of evaluation of the capabilities of the new subroutines
and computational methods in the SWMM.  This type of
data is presently being accumulated in The Woodlands.

Storm water quality modeling in the SWMM is constantly
being improved but the results are still less than
satisfactory as shown in the preceding section.  The
present size and structure of the SWMM limits the
application  of the new methodology described in this
paper.  Therefore, it is expected that  any significant
improvement  in water quality modeling by the SWMM will
necessitate  a  complete revision of the  present method-
ology.

In conclusion, the SWMM has been a valuable tool in
determining  the storm water runoff characteristics of
The Woodlands.  The quantity of flow has been pre-
dicted satisfactorily, and the quality of flow from
undisturbed areas is also predicted satisfactorily.  The modeling
of quality of  flow from disturbed areas  is very complex
and further  detailed data and study are  necessary.

                    Acknowledgements

This study was supported by Grant No. 802433 from the
Storm and Combined Sewer Section, EPA.   Partial funds
for data collection were provided by The Woodlands
Development  Corporation.

                 List of References

1.  Wallace, McHarg, Roberts and Todd, Woodlands New
      Community Phase One:  Progress Report on Land
      Planning and Design Principles, April 1973.

2.  Metcalf and Eddy, Inc., University of Florida, and
      Water Resources Engineers, Inc., Storm Water
      Management Model, Vol. II--Verification and
      Testing, EPA 11024DOC 08/71, August 1971.

3.  Onstad,  C.A. and D.G. Jamieson, "Subsurface Flow
      Regimes  of a Hydrologic Watershed  Model," Pro-
      ceedings Second Seepage Symposium, ARS 41-147,
      Phoenix, Arizona, March 1968.

4.  Holtan,  H.N. and N.C. Lopez, USDAHL-73 Revised
      Model  of Watershed Hydrology, ARS  Plant Physio-
      logy Institute Report No. 1, 1973.

5.  Izzard,  C.F.,  "Hydraulics of Runoff  from Developed
      Surfaces,"   Proceedings Highway Research Board,
      Vol. 26, 1946.

6.  Colston, N.V., Characterization and  Treatment of
      Urban Land Runoff, EPA-620/2-74-096, December
      1974.
                                                         371

-------
                         MODELING IN SOLID WASTE MANAGEMENT:  A STATE-OF-THE-ART REVIEW

                                                 David H. Marks
                                         Professor of Civil Engineering
                                                   Room 1-163
                                     Massachusetts Institute of Technology
                                          Cambridge, Massachusetts 02139
      The application of modeling for the prediction of
cause and effect relationships for policy change and
for the analysis of decisions in choosing policy levels
in solid waste management is discussed in five major
subsystems:  1) the organization of individual collec-
tion systems (routing, scheduling, districting, crew
assignment); 2) the choice of collection technology;
3) the organization of city-wide or regional systems
(facility location); 4) the choice of process and
disposal technology; 5) estimation of waste generation
for short-term and long-term policy changes.  Selected
listings of models in each area and an appraisal of the
transfer of modeling to practice are presented as well.

                     Introduction

      The large scale system by which residential solid
waste is generated, stored, collected, transported,
processed, recovered and disposed of is a complicated
and expensive system to successfully manage.  There are
many points within the system where policy decisions
about the types of technology used, level of service
offered and organization of services provided can be
made which can strongly affect the cost of the system
as well as impacts on other important objectives such
as environmental quality.  However, there has been only
moderate success in gaining an overall perspective of
decision possibilities.  For a detailed discussion of
structure of the solid waste management system and our
level of knowledge about cause and effect relationships
in each component, the recently completed National
Science Foundation study on Solid Waste Management
(Hudson, et al, 1974) is suggested.  The purpose of
this paper is to provide some order and summary of
analytic methods available for gaining insights about
policy decisions to be made in the system.  The word
model is used to imply the representation of a complex
real phenomenon by a conceptual analog which is easier
to manipulate to gain understanding about the real
system.  Generally in application, such models take two
forms:  predictive models and decision models. Predic-
tive models are used to gain insight about the magni-
tude and direction of changes in system performance
measures brought about by changes in policy variables.
While it is possible in theory to derive such models
for Solid Waste Management deductively from assumptions
of the basic underlying mechanisms of the system, most
predictive models are inductive.  That is, based on
observed data taken on system variables and outputs,
relationships are built, usually using statistical
techniques or, for more complex systems, simulation.
Decision models on the other hand assume a knowledge
of the underlying cause and effect relationships be-
tween policy variables and system output and attempt
to address the choice of level of policy variables
based on their impacts on stated objectives.  Such
models are optimization techniques such as linear
programming, dynamic programming, or search.  Through-
out the discussion of models in this paper, the trade-
offs between the detailed nature of the model and the
cost of data and solution, as well as the acceptability
of the assumptions of the theoretical model in real
practice must always be kept in mind.
      Modeling activity  in the field of environmental
activities over  the  last few years  has been extensive,
and solid waste  management has been no exception.  While
the development  of the computer and its introduction as
a routine tool of practice in engineering has been an
important motivating factor in the  design and adoption
of models to aid management,  not all modeling efforts
need to be computer  based as  will be shown later.  Fur-
ther, a complex  model may be  more difficult to transfer
than a simple one.  To present in these few allotted
pages some overview and assessment of the types
of problems in solid waste management where models  have
been developed,  the  paper will be structured by pre-
senting major subsystems and  the modeling work appro-
priate to each.  An  apology is made in advance that
due to space and time limitation, the author has se-
lected only five major topics, where considerable work
has been done, although  there is evidence of work in
others.  Also, with  each major topic,  either because of
lack of space or lack of knowledge  of all possible  con-
tributions, not all modeling work done in that area has
been presented.  The five major areas  discussed are:
The organization of  individual collection systems
(routing, scheduling, crew assignment),  The choice  of
collection technology, The organization of city-wide
or regional systems  (facility location problems),   The
choice and design of process  technology, and Estimates
of waste generation  for  short-term  and long-range plan-
ning.  This is followed  by an appraisal of how well
these models have transferred from  development to ap-
plication.  In general,  the success of transfer has not
been great because of the general difficulty encounter-
ed in several areas.  One is  that a strong enough effort
to transfer these models from academic and governmen-
tal research groups to actual practitioners has not been
made, nor is such a transfer easily designed.  Even
more important,  many of  the models  are not general
enough to transfer easily or  require huge amounts of
data that are expensive  to collect.   Finally,  many
of the decision  models are based on regional cost and
do not realistically address  other  important issues
such as subarea  cost and impact distribution.
     Models for Organizing Local Collection  Systems

      Models for dividing up  communities  into  tasks for
individual crews, and for efficiently  routing  the crews
over the street network have  been  developed  by a number
of researchers.  The work is  applicable to almost any
collection system, and seems  to be both complete and
thorough.

      The routing problem for solid waste collection is
actually a combination of a large  number  of  problems,
each of which can be solved with reasonable  effort.  At
one level, "routing" involves taking the  area  to be col-
lected by a crew on a day, and finding an efficient path
which will enable them to do  the collection  with the least
travelling.  Another side of  "routing" involves deciding
which crew should collect from what set of demands, and
dividing up the work to be done into task assignments.
Shuster and Schur (1974) call the  former  problem
microrouting and the latter districting or route-balancing;
                                                        372

-------
this  seems  to be  a reasonable terminology.  Microrout-
ing generally takes two forms.   A node-routing problem
involves  an attempt to  pick up  waste from a set of fix-
ed points,  while  travelling the least.  In the litera-
ture, this  is often referred to as the "travelling-
salesman  problem" or the "trick-dispatching problem"
when  many trucks  are used.   An  arc-routing problem
involves  travelling down all the streets, collecting
whatever  is there,  and  again attempting to minimize the
amount of travel.  This problem is known as the
"Chinese  Postman's Problem," or the "m-postmen's prob-
lem"  for  the case with  more than one route at a time.

      In  any routing study, the decision of what makes
a reasonable route is the important first stage. Routes
should require  an equal amount  of work, where possible,
and some  method must be available for choosing a fair
assignment. The  choice of  such a fair day's work is
closely related to waste generation (see later section
in this paper), topography  and  demography, productivity,
level of  service, and the trade-off between overtime
and incentive time.  If the estimation of the work re-
quired to collect individual blocks is poor, then it
will be impossible to form  balanced task assignments,
and the routing study will  be useless.  The best com-
pendium of  methods for  estimating work involved in col-
lection is  that by Shuster  (1973).  Shell and Shupe
(1973) also discuss this issue.  Hudson (1975) describes
the use of  census data, which are readily available,
for estimating  waste generation and collection time.
Lofy (1971) has developed a simple model for the task-
balancing problem which attempts to minimize lost time
at the end  of the day.

Districting or  Route Balancing

      Several methods have  been proposed for taking the
data on the work  required in any specific area and con-
verting these work requirements into routes.  These
methods may be  computerized (see Hudson, 1973), but
they may  also be  manual. Shuster (1973) discusses man-
ual procedures  for this method  in detail.

Microrouting

      The problem of microrouting is how to take a set
of districts, and generate  detailed collection routes
from them,  minimizing mileage and left-hand turns, go-
ing the correct way on  one-way  streets and grades, and
meeting other similar objectives.

      Shuster and Schur (1974)  develop a manual method
for reducing the  number of  left-hand turns in a route,
by making clockwise (right-hand) loops whenever pos-
sible.

      More  advanced methods of  minimizing mileage in
collection  have been developed  by Strieker, (1971),
Hudson, et  al (1973) and by Liebman and Male (1973).
The M.I.T.  approach involves manual routing techniques,
preceded  by districting; the basic output is the choice
of which  streets  should be  traveled twice and which only
once, to  minimize travel.  The  approach developed by
Liebman is  somewhat different.   Analysis is done first
on how mileage  in collection can be minimized for the
whole collection  area.   Then the whole collection area
is subdivided into balanced districts and rerouted.

      For all practical applications, the node-routing
problem reduces to an application of the Clarke and
Wright algorithm developed about 12 years ago (1964).
There are other techniques available, but the Clarke
and Wright approach is both easy to understand and
available in packaged form from most computer companies
(e.g., IBM's VSP -- Vehicle Scheduling Program, 1968).
It can also be applied by hand fairly easily.  Beltrami
and Bodin (1974) have extended the basic algorithm to
problems involving containers with different frequencies
of collection, such as schools and restaurants on the
same route, with some success.  The Clarke and Wright
algorithm does not work well if the starting and ending
points of the route are widely separated, but route mod-
ification after the analysis is fairly easy.
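
      For readers unfamiliar with the algorithm, the sketch below
shows the basic savings heuristic in compact form; the depot
location, stop coordinates, demands, and truck capacity are
illustrative, and refinements found in packaged implementations
(route shape constraints, one-way streets, and so on) are omitted.

```python
import math

# Compact sketch of the Clarke and Wright savings heuristic for the
# node-routing (truck-dispatching) case.  Node 0 is the depot; all
# coordinates, demands, and the capacity are illustrative.

points   = {0: (0, 0), 1: (2, 6), 2: (5, 5), 3: (6, 1), 4: (-3, 4), 5: (-5, -2)}
demand   = {1: 3, 2: 4, 3: 2, 4: 5, 5: 3}
CAPACITY = 8

def dist(a, b):
    return math.dist(points[a], points[b])

# Start with one out-and-back route per stop, then merge routes in
# order of decreasing savings s(i, j) = d(0, i) + d(0, j) - d(i, j),
# provided i and j are endpoints of different routes and the merged
# load fits on one truck.
routes = {i: [i] for i in demand}            # route containing each stop
savings = sorted(((dist(0, i) + dist(0, j) - dist(i, j), i, j)
                  for i in demand for j in demand if i < j), reverse=True)

for s, i, j in savings:
    ri, rj = routes[i], routes[j]
    if ri is rj:
        continue
    if sum(demand[k] for k in ri + rj) > CAPACITY:
        continue
    if ri[-1] == i and rj[0] == j:
        merged = ri + rj
    elif rj[-1] == j and ri[0] == i:
        merged = rj + ri
    else:
        continue
    for k in merged:
        routes[k] = merged

for r in {id(r): r for r in routes.values()}.values():
    print("depot ->", " -> ".join(map(str, r)), "-> depot")
```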

      Several companies have developed and marketed
routing packages and have made  claims of  tremendous
savings.   In a well managed moderate sized system,  some
savings can probably be realized by  the implementation
of these methods.  However, claims of large savings from
computerized routing usually include all  savings  from
the major  redesign of an inefficient system.  The  actual
amount saved by the modeling procedures are probably
smaller.

      For  crew scheduling, Bodin (1972), Heller, et al
(1973), and Ignall, et al (1972) have proposed models
for making shift and day assignments.  Lofty  (1971) pro-
poses a model for assigning crews to routes based  on
their productivity.

      Models for Choice of Collection Technology

      Research on the choice of vehicles for  collection
has concentrated on the issues  of truck capacity and age
of replacement.  Techniques have been developed  for
choosing vehicle capacity to minimize cost, and  for esti-
mating the age at which a truck should be  replaced.
Degner (1971) presents a detailed breakdown of vehicle
costs by a variety of measures  such  as capacity, turning
radius, type of loading,  etc.  After listing  and analyzing
all these attributes, his work provides about the  only
available  detailed discussion of a method  for choosing
a particular type of vehicle out of  the available  set  of
packer bodies and chassis manufactured.  The  technique
used is DARE (Decision Alternative Ratio Evaluation),
which is a performance scoring tool:  a number of ob-
jectives are stated, along with their relative impor-
tance to the decision-maker.  Then the available choices
are evaluated according to the objectives, and a gener-
alized score is created,  giving information about which
is best.   See Klee (1970) for a good description of the
DARE technique.  Clark  and Helms (1972) develop a model
for choice of vehicle size using data from Buffalo.  They
relate capacity to cost and residences served per truck
per day and use an optimization technique  to  solve  for
truck size.  A similar model is proposed by Cardile and
Verhoff (1974).  Clark  and Gillean  (1974) developed a
simulation model to test system configurations in terms
of truck capacity and crew size for  Cleveland.  Quon,
et al (1970) presents a model to show the effect of
truck age  on collection efficiency.  Douglas  (1973), and
Degner (1971) also consider models for estimating the
economic life of a collection vehicle.

     The Organization of City-Wide or Regional Systems

      At issue here is an evaluation of where facilities
for transfer and processing should be located and what
form they should take, as well as the assignment of
subareas to facilities.  Most of the techniques provided
for this task are optimization models based on minimiza-
tion of regional cost of facilities  and transportation.
There are numerous examples such as  Hekimian  (1973),
Berman (1974), Weston (1973), Fuertes, et al  (1974),
Helms and Clark (1970), Morse and Roth (1970), Marks
and Liebman (1971), Skelly (1968), Schultz (1968),
Wersan, et al (1971), and Vasan (1974).  The  two papers
on the EPA SWAMP program for regional solid waste
management presented at this conference are based  on
the Skelly model.  Table 1 gives a comparison of the
models according to what factors are included in costs
and the type of optimization techniques used  for solution.
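
      The flavor of these regional models can be conveyed with a
very small fixed-charge example:  choose which candidate facilities
to open and assign each subarea to its cheapest open facility so
that annual facility plus haul cost is minimized.  The sketch below
enumerates the open/closed combinations directly; all costs and
tonnages are invented for illustration, and the published models
use formal optimization rather than enumeration.

```python
# Tiny fixed-charge facility location example solved by enumeration.
# Every cost and tonnage below is an illustrative assumption.

from itertools import combinations

fixed_cost = {"A": 90_000, "B": 120_000, "C": 70_000}     # $/yr per facility
haul_cost = {                                             # $/ton, subarea -> facility
    "east":  {"A": 4.0, "B": 7.5, "C": 9.0},
    "west":  {"A": 8.0, "B": 3.5, "C": 6.0},
    "north": {"A": 6.5, "B": 5.0, "C": 4.5},
    "south": {"A": 5.5, "B": 8.0, "C": 3.0},
}
tons = {"east": 20_000, "west": 15_000, "north": 10_000, "south": 12_000}

best = None
facilities = list(fixed_cost)
for r in range(1, len(facilities) + 1):
    for opened in combinations(facilities, r):
        cost = sum(fixed_cost[f] for f in opened)
        cost += sum(tons[s] * min(haul_cost[s][f] for f in opened)
                    for s in tons)
        if best is None or cost < best[0]:
            best = (cost, opened)

print(f"open {best[1]} at total cost ${best[0]:,.0f}/yr")
```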
                                                       373

-------
Minimization of total cost is not, however, the only
objective in planning solid waste management systems.
Local cost allocation and impacts are also important
but are not considered in these models.

      Currently, a major effort is underway in systems
analysis research on models to include social and poli-
tical objectives with a least cost economic analysis.
Examples are Fuertes, et al (1974), Kuhner, et al (1974)
and Hekimian (1973).  Klee (1971) developed DISCUS, a
solid waste management game which is capable of calcu-
lating total transfer, transportation, processing and
disposal costs for a given system design.  This is used
in an interactive process to help decision makers aware
of the tradeoffs involved in different decisions about
the system.

      Numerous other simulation models of city-wide
organization are available including Truitt, et al
(1969), and Wersan (1965).

   Models for the Choice and Design of Processes

      Methods exist whereby the alternative processing
options may be evaluated so that the least-cost alter-
native can be selected.  Unfortunately, the cost data
available, especially on the relatively new processes,
are often tentative, limited, or unreliable.

      One method for ranking alternatives is DARE
(Decision Alternative Ratio Evaluation) developed by
Klee (1970).  The process requires the user to make a
number of comparisons between pairs of different eval-
uating criteria in order to develop a weighting system
for those criteria.  This provides the ability to es-
tablish a uniform scoring procedure for all alterna-
tives.  It has also been applied for collection tech-
nology as previously noted.
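
      A compact sketch of the weighting and scoring steps is given
below.  It follows the general ratio-comparison idea described
above but is not Klee's published procedure in detail; the
criteria, comparison ratios, and alternative scores are
illustrative assumptions.

```python
# Sketch of a DARE-style ratio-comparison weighting and scoring step.
# Criteria, ratios, and scores are illustrative assumptions.

criteria = ["cost", "reliability", "environmental impact", "flexibility"]
# Ratio of each criterion's importance to the one after it in the list.
pairwise_ratio = [2.0, 1.5, 3.0]

# Chain the ratios: the last criterion gets a raw weight of 1.
raw = [1.0]
for r in reversed(pairwise_ratio):
    raw.insert(0, raw[0] * r)
weights = [w / sum(raw) for w in raw]

# Score each alternative on every criterion (0-10, illustrative).
alternatives = {
    "incineration":      [4, 8, 5, 6],
    "sanitary landfill": [8, 7, 4, 5],
    "resource recovery": [5, 6, 8, 7],
}

for name, scores in alternatives.items():
    total = sum(w * s for w, s in zip(weights, scores))
    print(f"{name:20s} weighted score = {total:.2f}")
```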

      Another approach is to use mathematical program-
ming techniques in order to minimize the net present
cost of establishing a processing facility.  Such a
model has been developed by Clark (1972).  Clark's model
can provide an estimate of what a facility will cost
over time including borrowing costs for facility con-
struction, assuming the model given is a fair representa-
tion of the taxing, borrowing, and spending policies
currently in municipal practice.

      Wenger and Rhyner (1972) and Popovich, et al
(1973) use a cost-effectiveness analysis approach to
evaluate solid waste disposal systems.  Essentially,
this involved first stating a set of objectives for the
system and then ranking the various candidate alterna-
tives according to this set of objectives.

      Models for Estimating Waste Generation

      The prediction of the quantity and composition of
solid waste is important both for short-term planning
such as route design and for long-term facility plan-
ning and technology choice.  The first type of model
is concerned with the detailed local prediction of
quantities expected per collection.  Alpern (1972)
developed a model using Los Angeles data relating waste
production to housing type, income and topography.
Hudson (1975) uses aggregate cross-section data to
investigate waste load changes with changes in system
policies such as frequency of collection and place of
collection, as does Quon (1968).  McFarland (1972) did
similar work for income level, Clark and Toftner (1972)
worked from land use (zoning) data and DeGeare and
Ongerth (1971) describe commercial establishments.
Long range modeling is more difficult.  Stern (1973)
uses an input output methodology for industrial waste
generation as does Steiker (1973).
      Having  described  these models for different manage-
ment subcategories,  it  is  important to consider how
well they have been  transferred to real use in local
and regional  planning.   There are considerable barriers
to model transfer  in environmental planning which must
be carefully  accounted  for and eased before tools such
as these will be widely used.  One important barrier is
that most of  these models  have been developed without
much input from the  users.   Thus, while they may be of
academic interest, they address the wrong problem or do
not give enough information about the most important
aspects to the local users.  Good examples are the
regional least cost  facility location models which
promise to choose  the optimal system configuration for
solid waste facilities  from among feasible sites on the
basis of facility  and transportation costs. To most local
planners, the high external costs (real or imagined)  of
such facilities make no site feasible because of local
opposition.   Thus, they do  not seek optimal location
but any location where  they can build the facility.
Mention has been made of improving optimality criteria
for such models to include  social factors which may be
one way out of this  dilemma.   But a better way might be
through considerable interaction with the local planner
to see what information he  can use to help design plans
that have a broad base  of public support  and,  therefore,
can be implemented.   At a minimum,  this would include an
estimate of disaggregated local impacts.   Such inter-
action is neither easy nor inexpensive.  Considerable time
must be spent in establishing a good dialogue between
theoreticians who  think in  terms of models and computers
and local planners who  are  possibly distrustful of  them.
We have been guilty in the past of overselling the com-
puter as a panacea that could automate planning proce-
dures and solve all  our problems.   However,  the tools
developed just can't and probably won't ever capture all
the nuances of a specific application.  For truck routing
algorithms, it is hard  to input to models that some
crews are better performers  than others or that some
streets are hard to  access  from particular directions.
Thus, we must be more careful to emphasize the use of
models to aid our intuition about the problem and to
educate planners about  system tradeoffs.   With local users
as partners in the development and fine tuning of models
to real use, transfer can take place.   It might well
be necessary to work through  a rather lengthy local case
study with each local area  to which the model is  to be
transferred before such tools will be widely used.

      In summary, I feel at this point that theoretical
development must be focused on the task of producing,
with interaction from local planners, simple-to-use
models which can use readily available data and produce
information focused on specific local situations.  More
elegant algorithms developed without local input are not
needed at this point and are doomed to gather dust on
the shelf and in the journals without much chance of
implementation.  Rather, focusing on issues such as
impact prediction, assessment of the objectives of local
interest groups, and means for showing graphically the
inherent conflict between differing objectives for the
solid waste system will be a more positive step toward
better model application.

                        References
Alper, R. City of Los Angeles Refuse  Sampling Program.
Los Angeles: Bureau of Sanitation, 1972.
Altman, S., E. Beltrami,  S. Rappaport,  and  G. K.
Schoepfle.  "Nonlinear  Programming  Model  of Crew Assign-
ments for Household Refuse Collection."  IEEE Transac-
tions on Systems, Man, and Cybernetics.  SMC-1(3): 289-
291, July 1971.
Beltrami, E. J. and L.  D. Bodin.  "Networks and Vehicle
Routing for Municipal Waste Collection."  Networks 4(1):
65-94, 1974.

-------
Berman, E. B.  A Model for Selecting, Sizing, and
Locating Regional Solid Waste Processing and Disposal
Facilities.  Bedford, Ma.: MITRE Corporation Report
M73-111, 1973.
Berman, E. B. and H. J. Yaffe.  Note on Equation Struc-
ture and Sizings of the EPA and MITRE Solid Waste Al-
location Models.  Bedford, Ma.: MITRE Corporation,
1974.
Bodin, L.  "Towards a General Model for Manpower Sched-
uling, Parts  I and II."  Journal of Urban Analysis 1
(2), April  1972.
Cardile, R. P.  and F.  H. Verhoff.  "Economical Refuse
Truck Size  Determination."  J. Env. Eng. Div., Proc.
ASCE  100(EE3): 679-697, June 1974.
Clark, R. M.  An Investment Decision Model for Control
Technology.  Cincinnati: U.S.E.P.A., NERC, 1972. (PB-
213 482).

Clark, R. M.  and J. L. Gillean.  "Systems Analysis and
Solid Waste Planning." J. Env. Eng. Div., Proc. ASCE
100(EE1): 7-24, February 1974.
Clark, R. M.  and B. P. Helms.  "Fleet Selection for
Solid Waste Collection Systems."  J. San. Eng. Div.,
Proc. ASCE  97(SA1): 71-78, February 1972.
Clark, R. M. and R. O. Toftner.  "Land Use Planning and
Solid Waste Management."  Public Works.  103(3): 79-80,
98, March 1972.
Clarke, G.  and J. W.  Wright.  "Scheduling of Vehicles
from a Central Depot to a Number of Delivery Points."
Operations  Research.   12(4): 568-581, July-August 1964.

DeGeare, T. V. and J.  E. Ongerth.  "Empirical Analysis
of  Commercial Solid Waste Generation."  J. San. Eng.
Div., Proc. ASCE 97(SA6): 843-850, December 1971.
Degner, D.  N. Systems Engineering Applied to Selection
and Replacement of Solid Waste Collection Vehicles for
Lawrence, Kansas.  Lawrence:  University of Kansas,
1971.  (PB-217 775; SW-4tg).
Douglas, J.  "Inflation — Its Effect on Equipment
Policy."  APWA Reporter.  40(3): 14-18, March 1973.
Fuertes, L. A.  Social and Economic Aspects of Solid
Waste Haul  and Disposal.  Unpublished M.S. Thesis,
Department  of Civil Engineering, Massachusetts
Institute of Technology, 1973.  See also: Fuertes,
L. A., J. F. Hudson, and D. H. Marks.  "Solid Waste
Management:  Equity Tradeoff Models."  J. Urban Plan-
ning Div., Proc. ASCE 100(UP2), November 1974 (to be
published).
Hekimian, K.  K.  A Systems Engineering Approach to
Environmental Quality Management with Emphasis on
Solid Waste Management.  Unpublished Ph.D. Thesis,
University  of Southern California, 1973.  (UM 73-7252).
Heller, N.  B., J. T.  McEwen, and W. W. Stenzel.
Computerized Scheduling of Police Manpower.  Volume I.
Methods and Conclusions.  Saint Louis, Mo.:  Police
Department, 1973.  (PB-232 071/1)
Hudson, J. F. (Thesis), D. J. Grossman, L. Fuertes, and
D. Marks.  Analysis of Solid Waste Collection.  Research
Report R73-47, Department of Civil Engineering,
Massachusetts Institute of Technology, Cambridge, Ma.,
1973.
Hudson, J.  F., F. Gross, D. G. Wilson and D. H. Marks.
Evaluation  of Policy  Related Research in the Field of
Municipal Solid Waste Management.  MIT
Civil Engineering Systems Lab Report R74-56, Sept. 1974,
Cambridge,  Ma.
Ignall, E., P. Kolesar, and W. Walker.  "Linear Program-
ming Models for Crew Assignments for Refuse Collection."
IEEE Transactions on Systems, Man, and Cybernetics.
SMC-2(5): 664-666, November 1972.

International Business Machines Corporation.   IBM
Systems 360 Vehicle Scheduling Program  (360A-ST-06X):
Application Description.  White Plains, New York:
IBM Corp. Technical Publications Department,  1968.
Klee, A. J. "DISCUS — A Solid Waste Management Game."
IEEE Transactions on Geoscience Electronics.   GE8(3):
125-129, July 1970.
Klee, A. J. "Let DARE Make Your Solid-Waste Decisions."
The American City 86(2): 100-103, February 1970.
(ATM 101) .

Klee, A. J. "The Role of Decision Models in the Eval-
uation of Competing Environmental Health Alternatives."
Management Science 18(2): B-52 to B-67, October 1971.
Kuhner, J. Centralization and Decentralization for
Regional Solid Waste Management:  Toward Paretian
Environmental Analysis.  Unpublished Ph.D. Thesis,
Environmental Systems Program, Harvard University,
Cambridge, Ma. 1974.
Liebman, J. C. and J. W. Male.  Optimal Routing of Solid
Waste Collection Vehicles.  Department of Civil Engineer-
ing, University of Illinois, Urbana-Champaign, 1973.
(Final report to U.S.E.P.A.)

Lofy, R. J. Techniques for the Optimal Routing and
Scheduling of Solid Waste Collection Vehicles. Un-
published Ph.D. Thesis, University of Wisconsin, 1971.
(UM 71-25485).
Marks, D. H. and J. C. Liebman.  "Location Models: Solid
Waste Collection Example."  J. Urban Planning  and
Development Division, Proc. ASCE 97(UP1): 15-30,
April, 1971.
Marks, D. H. and R. Stricker.  "Routing for Public Ser-
vice Vehicles."  J. Urban Planning and Development Div.,
Proc. ASCE 97(UP2): 165-178, December 1971.
McFarland, J. M., C. R. Glassey, P. H. McGauhey,
D. L. Brink, S. A. Klein, and C. G. Golueke.   Compre-
hensive Studies of Solid Wastes Management, Final
Report.  Report 72-3, Sanitary Engineering Research
Laboratory, College of Engineering and School of Public
Health, University of California, Berkeley, 1972.
Morse, N. and E. W. Roth. Systems Analysis of Regional
Solid Waste Handling.  Washington:  U. S. Government
Printing Office, 1970.  (SW-15c; AIM 136).
Popovich, M. L., L. Duckstein, and C. C. Kisiel.
"Cost-Effectiveness Analysis of Disposal Systems."
J. Env. Eng. Div., Proc. ASCE 99(EE5): 577-591,
October 1973.
Quon, J. E., R. Martens, and M. Tanaka.  "Efficiency of
Refuse Collection Crews."  J. San. Eng. Div.,  Proc.
ASCE 96(SA2): 437-454, April 1970.
Quon, J. E., M. Tanaka, and A. Charnes.  "Refuse
Quantities and Frequency of Service."  J. San. Eng. Div.,
Proc. ASCE 94(SA2): 403-420, April 1968.
Quon, J. E., A. Charnes, and S. J. Wersan.  "Simulation
and Analyses of a Refuse Collection System."   J. San.
Eng. Div., Proc. ASCE 91(SA5): 17-36, October  1965.
Quon, J. E., M. Tanaka, and S. J. Wersan.  "Simulation
Model of Refuse Collection Policies."  J. San. Eng. Div.,
Proc. ASCE 95(SA3): 575-592, June 1969.
Schultz, G. P. Managerial Decision-Making in Local
Government:  Facility Planning for Solid Waste Collec-
tion.  Unpublished Ph.D. Thesis, Cornell University,
1968.  (UM 68-9894).  See also:  Schultz, G. P.
"Facility Planning for a Public Service System:
Domestic Solid Waste Collection."  J. of Regional
Science  9(2):  291-308, 1969.

-------
Shell, R. L. and D. S. Shupe, "Predicting Work Content
for Residential Waste Collection."  Industrial Engineer-
ing 5(2): 38-44, February 1973.
Shell, R. L. and D. S. Shupe.  "Work Standards for Waste
Collection."  Presented at the First Annual Systems
Engineering Conference, American Institute of Indus-
trial Engineers, Inc., New York, November 1973.

Shuster, K.A. Districting and Route Balancing for
Solid Waste Collection.  Cincinnati:  U.S.E.P.A., 1973.
(To be published)

Shuster, K.A.  A Five-Stage Improvement Process for
Solid Waste Collection Systems.  Cincinnati:  U.S.E.P.
A., 1974.
Skelly, M. J. Planning for Regional Refuse Disposal
Systems.  Unpublished Ph.D. Thesis, Cornell University,
Ithaca, New York,  1968.
Steiker, G. Solid  Waste Generation Coefficients:
Manufacturing Sectors.  RSRI Discussion Paper 70,
Regional Science Research Institute, Philadelphia, 1973.

Stern, H. I.  "Regional Interindustry Solid Waste Fore-
casting Model."  J. Env. Eng. Div., Proc. ASCE 99:
572, December 1973.
Truitt, M. M., J. C. Liebman, and C. W. Kruse.  "Simula-
tion Model of Urban Refuse Collection."  J. San. Eng.
Div., Proc. ASCE 95(SA2): 289-298, April 1969.


Vasan, K. S. Optimization Models for Regional Public
Systems. Berkeley: University of California Operations
Research Center, 1974.  (PB-231 309).
Wenger, R. B. and C. R. Rhyner.  "Evaluation of Alterna-
tives for Solid Waste Systems."  J. Env.  Systems  2(2):
89-108, June 1972.

Wersan, S., J. E. Quon, and A. Charnes.  Mathematical
Modeling and Computer Simulation for Designing Munici-
pal Refuse Collection and Haul Services.  Cincinnati:
U.S.E.P.A., 1971.  (SW-6rg; PB-208 154).

Roy F. Weston, Inc., Environmental Scientists and
Engineers.  Development of a Solid Waste Allocation
Model.  West Chester, Pa.: Roy F. Weston, July 1973.

                     TABLE 1:  COMPARISON OF LEAST-COST FACILITY LOCATION TECHNIQUES

     [The body of Table 1 is not legibly reproduced here.  The table compared
     least-cost facility location models, identified by authorship (simulation
     models by many authors, Hekimian, Berman, Weston, Fuertes et al., Morse &
     Roth, Marks & Liebman, Schultz, Vasan, and others), with respect to their
     data requirements (aggregate generation, household generation, haul
     distances, coordinates, linear haul costs, haul cost functions, facility
     capacities, limits on capacity, linear operating costs, operating cost
     functions, given haul routes, reduction factors, fixed costs, growth
     rates), their limitations (local optimum, limited alternatives, linear
     costs, short-term, no capacity limit, well defined system, capital costs,
     extensive data input), their state of development, and their probable
     application.]

                                                      376

-------
                           WRAP:  A MODEL FOR REGIONAL  SOLID WASTE MANAGEMENT PLANNING
                                                 Edward B. Berman
                                               The MITRE Corporation
                                          Bedford, Massachusetts    01730
     An optimizing model called WRAP  (Waste Resources
Allocation Program) has been  developed for the
generation of minimum cost regional solid waste
management plans.

     This model is addressed  to the state, regional,
and local planner who is responsible  for sorting out
the confusing array of alternatives available to him,
and for finding a solution which  is both economically
viable and politically acceptable.

     The model was originally designed in eighteen
alternative modes of operation (nine  static modes and
nine dynamic modes) under a MITRE-sponsored research
project.1  WRAP is a fixed-charge linear programming
model, using an algorithm developed by Dr. Warren
Walker.2

     In 1974, a basic static  mode of  the model was used
for a program of operational  runs in  support of
regional design analysis for  the  Commonwealth of
Massachusetts.  This program  used manually-generated
inputs to the algorithm, and  a manual interpretation
of outputs.3

     The Office of Solid Waste Management Programs,
U.S. Environmental Protection Agency, has supported
the further development of  the model.  The EPA program
includes:

     •  the development of  a  computerized front end and
back end for one static mode  and  one  dynamic mode of
the model;

     •  an operational  test  program on the Greater St.
Louis Region  (the City  of  St. Louis and seven
surrounding counties);

     •  a parametric exercise program on a region of 53
communities of Massachusetts  and  New Hampshire; and

     •  documentation and  dissemination.

     This paper  describes  the model in brief and the
philosophy of its application programs.  The focus of
the discussion is on the use  of the model to
illuminate political and technical issues, using the
original Massachusetts  application as an example.  The
paper concludes with a  description of a model
improvement program now underway  at MITRE.

     The following paper by Ms. Donna M. Krabbe of EPA
describes an analytical evaluation of the St. Louis
operational test program.
                       Background

      Economies  of  scale available from two processes
 from the point  of  view of a potential processing site
 in Haverhill, Massachusetts are illustrated in figures
 1 and 2.   The costs  are based on MITRE preliminary
 haul and processing  costs for the two processes.  Note
 that the decline in  processing costs in Figure 1
 compensates  for rising haul costs as the region is
 enlarged,  and the  minimum cost available is attained
 at the maximum  region size considered, including all
 14 zones  (representing 53 communities and 3600 TPD).
     In Figure 2, fewer economies of scale are
available, so that the minimum cost is obtained with
the inclusion of only 4 zones  (representing 20
communities and 1700 TPD).
     Figure 1.  Economies of Scale in Dried Shredded Fuel/Residue Recovery
     Figure 2.  Economies of Scale in Gas Pyrolysis (MITRE Preliminary Estimates)
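     The trade-off behind Figures 1 and 2 can be sketched numerically:
as zones are added, tonnage grows and the unit processing cost falls,
while the average haul cost rises, so the total unit cost has a
minimum at some region size.  The cost functions and zone tonnages
below are hypothetical, not the MITRE preliminary estimates:

    # Sketch of the haul/processing trade-off behind Figures 1 and 2: adding
    # zones raises tonnage (lowering unit processing cost through economies
    # of scale) but also raises average haul cost.  All numbers are
    # hypothetical, not the MITRE preliminary estimates used in the paper.

    def unit_processing_cost(tpd, fixed=4000.0, variable=6.0):
        """Concave cost per ton: a fixed charge spread over tonnage plus a
        variable cost."""
        return fixed / tpd + variable

    def unit_haul_cost(n_zones, cost_per_zone=0.9):
        """Average haul cost per ton, growing as more distant zones join."""
        return cost_per_zone * n_zones

    def best_region_size(zone_tpd):
        """(number of zones, cost per ton) minimizing total unit cost."""
        best = None
        for n in range(1, len(zone_tpd) + 1):
            tpd = sum(zone_tpd[:n])
            cost = unit_processing_cost(tpd) + unit_haul_cost(n)
            if best is None or cost < best[1]:
                best = (n, round(cost, 2))
        return best

    if __name__ == "__main__":
        zones = [300, 250, 250, 200, 200, 150, 150, 150,
                 100, 100, 100, 100, 75, 75]              # TPD per zone
        print(best_region_size(zones))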
     Economies of scale in processing are a driving
force towards regionalization,  but  from
regionalization two problems  are generated:

     •  a complexity of system  design,  and

     •  a problem of political  consensus.

     WRAP is addressed to both  of these problems.  It
is intended to:

     •  sort out the many alternatives  on siting,
sizing, linking, and process  selection for transfer
stations, primary processing,  secondary processing,
and disposal; and to generate the minimum cost plan
which will meet all requirements; and
                                                         377

-------
     •  illuminate  political issues and  hence help
their resolution.


            Brief Description of the Model

     Figure 3 presents an overview of  the model, its
inputs and its  outputs.   Note that both  fixed and
variable costs  are  input.  The output  is a
comprehensive regional solid waste management plan,
including the selection of sites for transfer,
processing, and disposal, the selection  of  a process
at each site, the  sizing of each site, and  the
selection of links  and flows connecting  sources,
transfer sites, processing sites, and  disposal sites.
The plan is preferred in the sense that  it  is the
minimum cost plan  that meets all requirements, given
the sites, processes, and links that were made
available.
     Figure 3.  Model Overview.  [Diagram not reproduced.  Inputs: zones
     (waste at each centroid location) and candidate processing sites
     (location, possible processes, fixed and variable costs, flow
     coefficients, maximum tonnage, land available).  These feed the
     optimizer, a fixed charge linear programming model.  Output: a
     comprehensive regional solid waste management plan (lowest cost,
     meeting all requirements), specifying site selection, technology
     selection, sizing, and links and flows.  F = fixed, V = variable.]
     A key capability of the model is its ability to
trade off the economies of scale in processing,
obtainable through centralized processing, as against
the haul costs implied by such centralization.
Essential to this trade-off capability is the ability
to represent economies of scale in process costs.
Figure 5 illustrates a concave total cost function,
typical of solid waste processing, as represented by
several linear segments.  Since the model is cost-
minimizing, it will seek out the lowest cost segment
at any level of tonnage.  Thus the capability of
treating cost in two parameters (fixed and variable,
or intercept and slope) permits the model to represent
economies of scale at any level of accuracy desired.
In the actual WRAP applications, three-segment
representations have been used for nearly all
processes.
              Figure 5.  Piecewise Linear Approximation of a Concave Function
                         (Representing Economies of Scale)
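     The segment representation can be made concrete with a small
sketch: each segment carries an intercept (fixed charge) and a slope
(variable cost), and a cost-minimizing model effectively sees the
lowest of the segment lines at any tonnage.  The segment values below
are hypothetical:

    # The three-segment representation in concrete form: each segment has an
    # intercept (fixed charge) and a slope (variable cost), and a cost-
    # minimizing model effectively sees the lowest segment line at any
    # tonnage.  The segment values are hypothetical.

    SEGMENTS = [      # (intercept $/day, slope $/ton)
        (0.0,    14.0),   # small plants: little fixed cost, high unit cost
        (3000.0,  9.0),   # mid-size plants
        (9000.0,  6.0),   # large plants: large fixed charge, low unit cost
    ]

    def total_cost(tons_per_day):
        """Lower envelope of the segment lines, a piecewise linear concave
        approximation of the total cost function."""
        return min(f + v * tons_per_day for f, v in SEGMENTS)

    if __name__ == "__main__":
        for tpd in (100, 500, 1500, 3000):
            print(tpd, "TPD:", total_cost(tpd), "$/day,",
                  round(total_cost(tpd) / tpd, 2), "$/ton")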
      Figure 4 describes  the  five levels in the model
 and allowable linkages among levels.  Note that
 linkage from one A-level process to another A-level
 process is permitted.  In  the St. Louis application,
  this capability was used to allow a packer-to-van
  transfer process to link to a truck-to-rail transfer
  process.  Similarly the dual C-level capability
  permits the model to carry two differentiated residue
 commodities (incinerator residue and air
 classification heavy-end)  into secondary processing
 through dummy secondary  recovery processes in which
 differentiated revenues  are  generated.  This
 capability was used in the Massachusetts/New Hampshire
 exercise program.
        LEVELS:  Source; A. Transfer Station; B. Primary Processing;
                 C. Secondary Processing; D. Sanitary Landfill

                    Figure 4.  Model:  Levels of Processing
     The model has three essential components:

     Structure - which assures that each alternative
considered is feasible, handles all wastes, processes
all residues, and so forth;

     Cost - which assures that each alternative is
properly costed, including economies of scale where
appropriate; and

     Procedure - which is an organized search for the
best solution.

     Figure 6 illustrates the operation of the model.
The basic structure is rectangular, which means that
there are more variables than equations, and hence
that the problem is underdetermined.  Thus there are
many solutions.  Among the many solutions to the
system of equations, only that subset of solutions
which have no negative solution values is considered
to be feasible (for a negative solution value implies
grinding up the outputs of the process and generating
its inputs).  The optimal solution is that particular
feasible solution which is lowest in cost.

     The structure is a system of equations that
assures that each of the solutions examined is
feasible in the sense that (1) all wastes generated
are entered into transportation; (2) all wastes
arriving at a site are processed; (3) all residues
generated are processed at the site or entered into
transportation; and (4) no process exceeds indicated
tonnage maximums.
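     The kind of feasibility these structure equations enforce can be
illustrated with a small check on a candidate set of flows; the
zones, sites, flows, and residue fractions below are hypothetical,
and the sketch is not the WRAP formulation itself:

    # Illustrative check of the feasibility conditions the structure equations
    # enforce: (1) all waste generated is shipped, (2) arrivals at a site are
    # within its tonnage maximum and are processed, and (3) residues generated
    # at a site are all re-shipped.  Flows and capacities are hypothetical;
    # this is not the WRAP formulation itself.

    def feasible(generation, shipments, capacity, residue_fraction,
                 residue_shipments, tol=1e-6):
        """generation[zone] = TPD; shipments[(zone, site)] = TPD shipped;
        capacity[site] = maximum TPD; residue_fraction[site] = residue per
        ton processed; residue_shipments[(site, destination)] = TPD."""
        # (1) every zone's generated waste is entered into transportation
        for zone, gen in generation.items():
            shipped = sum(t for (z, _), t in shipments.items() if z == zone)
            if abs(shipped - gen) > tol:
                return False
        # tally arrivals at each processing site
        arrivals = {}
        for (_, site), t in shipments.items():
            arrivals[site] = arrivals.get(site, 0.0) + t
        for site, tons in arrivals.items():
            # (2) no process exceeds its indicated tonnage maximum
            if tons > capacity[site] + tol:
                return False
            # (3) residues generated at the site are entered into transportation
            residue = residue_fraction[site] * tons
            out = sum(t for (s, _), t in residue_shipments.items() if s == site)
            if abs(out - residue) > tol:
                return False
        return True

    if __name__ == "__main__":
        print(feasible(
            generation={"zone1": 400.0, "zone2": 300.0},
            shipments={("zone1", "shredder"): 400.0,
                       ("zone2", "shredder"): 300.0},
            capacity={"shredder": 1000.0},
            residue_fraction={"shredder": 0.25},
            residue_shipments={("shredder", "landfill"): 175.0},
        ))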
                                                         378

-------
       RECTANGULAR:  UNDETERMINED CASE
            •  FEASIBLE SOLUTION:  NON-NEGATIVE
            •  OPTIMAL SOLUTION:  LOWEST COST
       SEARCH PROCEDURE
            •  WHICH STEPS IMPROVE SOLUTION
            •  KNOWING THAT WE HAVE ARRIVED
       FIXED CHARGE
            •  DOUBLE COST ROW (F + V)
            •  ALTERED SEARCH PROCEDURE

              Figure 6.  Operation of the Model
     The search procedure requires:

     •  that those steps which improve the solution can
be separated from those that  make it worse; and

     •  that the procedure knows when it can go no
further (i.e., when it has arrived at the optimum).

     The "steps" are transitions from one feasible
solution to another.

     In the fixed-charge linear programming procedure,
the algorithm adds the fixed  cost (to the system  cost)
whenever the corresponding solution value goes  from
zero to positive, and subtracts the fixed cost
whenever the corresponding solution value goes  from
positive to zero.  The fixed-charge algorithm
considers both fixed and variable costs in determining
whether a transition is an improvement.

     The fixed-charge algorithm also requires one or
another kind of forcing to make sure that the solution
domain is searched out thoroughly, thus avoiding  a
"local optimum" solution.  This step is unnecessary in
standard linear programming,  in which there are no
local optima.  The operational runs of WRAP have  used
"single forcing," in which each column outside  of the
solution is forced in, and the new solution is
evaluated for improvement over the best previous
solution.  The Walker algorithm also includes a double
forcing procedure in which all possible pairs of
columns outside of the solution are forced in,  and the
new solution evaluated for improvement; but this
procedure is practical only  for very small problems.
A new "group" forcing procedure has been designed, and
is described in the final section of this paper.
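     The flavor of fixed-charge costing and single forcing can be
conveyed with a much-simplified sketch: a configuration is charged
the fixed cost of every facility it actually uses, and sites outside
the current configuration are forced in (or dropped) one at a time,
keeping any change that lowers total cost.  The sites and costs below
are hypothetical, and this is not the Walker algorithm itself:

    # Rough sketch of fixed-charge costing plus single forcing over candidate
    # sites: each site outside the current configuration is forced in (and a
    # site in the configuration may be dropped), and the change is kept only
    # if total cost improves.  Hypothetical data; not the Walker algorithm.

    SITES = {          # site: (fixed cost $/day, variable cost $/ton)
        "north": (2000.0, 8.0),
        "south": (2500.0, 7.0),
        "west":  (1500.0, 10.0),
    }
    DEMAND = 900.0     # tons per day to be processed somewhere

    def system_cost(open_sites):
        """Serve all demand at the cheapest open site; a fixed charge is
        incurred only by the site whose throughput is actually positive."""
        if not open_sites:
            return float("inf")
        site = min(open_sites, key=lambda s: SITES[s][1])
        fixed, variable = SITES[site]
        return fixed + variable * DEMAND

    def single_forcing(start):
        """Toggle one site at a time, keeping any strictly improving move."""
        current, cost = set(start), system_cost(start)
        improved = True
        while improved:
            improved = False
            for site in SITES:
                trial = current ^ {site}       # force the site in or out
                trial_cost = system_cost(trial)
                if trial_cost < cost:
                    current, cost, improved = trial, trial_cost, True
        return current, cost

    if __name__ == "__main__":
        print(single_forcing({"west"}))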
 Applications:   Illuminating Political and Technical
                         Issues

     An application, which is a  set of runs, is
designed  to  illuminate political and technical issues.

     Each run in the set will:

     •  handle all wastes,
     •  meet all environmental standards (since only
        processes which do meet  relevant standards
        are  offered),
     •  provide the lowest cost solution for its case.

     The "case" is a defined state of political/
technical feasibility.  WRAP will generate a plan and
a system cost for each case.  The incremental costs of
moving from case to case are calculated, and in
particular the costs of moving from less political
acceptability to greater political acceptability.
Figure 7 illustrates a hypothetical plan set.
                     Figure 7.  The Plan Set
                     (horizontal axis:  political acceptability)
     Figure 8 summarizes issues which have been
illuminated in the three applications which have been
completed at this writing.
          REGION SIZE
              MASS.:  LARGE REGION:  HOW DOES IT BREAK DOWN
              ST. LOUIS:  REGION VS. STATE BY STATE

          PROCESS AVAILABILITY
              POLITICAL:  LANDFILL IN MASS. & ST. LOUIS
              TECHNICAL:  GAS PYROLYSIS IN MASS.

          SITE AVAILABILITY
              ST. LOUIS:  PROCESSING AT PLANT
              MASS.:  SOUTH ESSEX SITE

          MARKET AVAILABILITY
              ST. LOUIS:  ILLINOIS POWER CO.

          SENSITIVITY
              TONNAGE, MARKET PRICES, PROCESS COSTS

                    Figure 8.  Illumination of Issues
                                                        379

-------
            The Massachusetts Application
     A region of 47  communities  in Northeastern
Massachusetts and  6  in New Hampshire was  evaluated
primarily to determine how the region would  break down
under varying circumstances.  The region  was divided
into 13  zones for  tonnage generation.

     Process options were transfer station,  dried
shredded fuel, gas pyrolysis, sanitary  landfill,  and
residue  recovery.

     The residue recovery process in Lowell  East  was
given a  reduction  in all intercept costs  of  $634.1  per
day to represent the amortized value of an EPA  grant
which was obtainable only if  that process was selected
at that  location.

     The seven basic runs in the Massachusetts
application, runs  E  through K, are described in Figure
9.   (Runs A through  D were experimental.)
     [Figure 9.  Summary of Massachusetts Runs.  The table is not legibly
     reproduced here.  It listed, for the basic runs E, F, and G, the options
     made available (transfer stations, shredded fuel, gas pyrolysis, residue
     recovery, landfill), the structure of each basic-run solution, and its
     cost per ton, together with the modification runs H through K (doubled
     tonnage, doubled pyrolysis intercept, and removal of the South Essex
     shredded fuel option) and the resulting solutions and costs per ton.]
Note  that:

      •  with all  options  (run E) gas pyrolysis was
selected  in two locations;

      •  with gas  pyrolysis removed  (run F) landfill
was selected in six locations for an incremental  $3
per ton (gas pyrolysis was not quite in the state-
of-the-art as of  the time of the analysis);

      •  with landfill removed (run G)  (it is of
questionable political acceptability in Northeastern
Massachusetts) shredded fuel was selected in one
location for an incremental $4 per ton (or an
incremental $3.50 per ton with a Lawrence location
as in run J); and

     •  doubling  the pyrolysis intercept (run K)
reduced the number of processing locations from two
to one, and added a transfer station.

     Ms. Krabbe will present an analytical evaluation
of the St. Louis operational test in the following
paper.

     The Massachusetts exercise program has been
described  in an earlier paper.4
                  Model Improvement Program

     As the EPA-supported model development program
neared completion,  opportunities for improvement of
WRAP were identified.  MITRE internal funds have been
made available  for  the  initiation of two of these, as
follows:

     •  A marketing version of  WRAP, called RAMP
(Recovery and Market  Planning Model) has been designed
and carried through initial development, a stage in
which it can be used  by MITRE in its operational solid
waste planning work.  The  model has been run several
times in support of an ongoing planning study for
the Commonwealth of Massachusetts.  RAMP provides a
marketing capability, with multiple commodities,
multiple market locations, and multiple marketing
segments, with upper  bounds.  The model traces the
effects of market saturation, and determines the
impact on the preferred solution that  results
therefrom.  RAMP generates  specific transportation
activities linking  the  various  processing centers  with
the various markets.

     •  An improved forcing procedure  called "group
forcing" has been designed,  and program specifications
for it have been developed.   Group  forcing will take
advantage of the structural features of WRAP and RAMP
to generate a better  solution (i.e., a reduced
probability of a local  optimum)  in  less running time.
An essential aspect of  group  forcing is the necessity
to define forcing groups of  columns which are adjacent
extreme points relative to  one  another;  and it is  this
aspect of the technique that  is  model-specific (i.e.,
relates to WRAP and RAMP only).   Forcing groups will
be forced both in and out,  whereas  the Walker
Algorithm forces in only, and only  one or two columns
at a time.

     Through use of group  forcing,  it  will be possible
to explore the solution domain  with both greater
effectiveness and less  running  time.

     The two improvements  together  should provide  a
substantial improvement in  the  capability to define
preferred solutions to  real  problems.

                        Notes

1.  This design was reported  in  MITRE  Report M73-111,
Edward B. Berman, A Model for Selecting, Sizing, and
Locating Regional Solid Waste Processing and Disposal
Facilities, October 1973.

2.  Warren Walker,  Adjacent  Extreme Point Algorithms
for the Fixed Charge Problems,  Dept. of  Operations
Research, College of  Engineering, Cornell University,
January 30, 1968.

3.  The runs were reported in Edward B. Berman and
Harold J. Yaffe, MITRE  Report MTR-2945,  Regional
Design Analysis for Regional  Resource  Recovery System
for Northeastern Massachusetts,  November 1974.

4.  Edward B. Berman  and William M.  Stein,  The MITRE
Solid Waste Management  Planning Model:   A Status
Report, Presented at  the Sixth  Annual  Northeastern
Regional Antipollution  Conference,  College of
Engineering, University of  Rhode Island, July 8-9,
1975.  A MITRE report by the author, WRAP - A Model
for Regional Solid  Waste Management Planning:
Documentation of Operational  and Exercise Runs,  is in
final stages of preparation.
                                                       380

-------
                                       ST.  LOUIS:  AN APPLICATION  OF  WRAP

                                                Donna M.  Krabbe
                                         Operations Research Analyst
                                    Office  of  Solid Waste  Management  Programs
                                     U.S.  Environmental Protection Agency
                                            Washington, D.C.   20460
                      ABSTRACT

     A  mathematical model called  WRAP,  Waste  Resources
Allocation  Program, was developed to aid  regional
solid waste planners  in sorting through the myriad  of
problems  and  possible  solutions that they find  con-
fronting  them.  The true  purpose  of the model  is  not
to find the optimum solution,  but to use  it to  find a
structured  series  of  solutions which will  clearly show
the impact  of decisions made concerning major issues.

     This paper discusses the  series of model  runs
done for  the East-West Gateway Coordinating Committee
to illuminate some of the major  issues  confronting
the decision makers in the  St. Louis area.

                      BACKGROUND

     States,  regions,  counties, cities  and towns
across  the country are facing  critical  questions
about what to do  with solid waste. How can we plan
systems that dispose  of these  wastes?   Which  of the
many disposal options is  the best? Which will  meet
environmental objectives  as well  as provide the least
expensive solution?   These  questions are  particularly
difficult to answer when  a  plan must  be developed for
a region  consisting of a  number  of municipalities,  a
large area, and  a complex transportation  network.

     Many options are available  today,  or are rapidly
emerging  for consideration.  In addition  to new tech-
niques in landfilling and incineration, there are
numerous  resource recovery  technologies,  which can,
for example, process  mixed  waste  to produce energy
products  like steam or dry  fuel  as well as recover
additional materials  for  marketing.  Within the next
several years, more technologies  will  become  avail-
able.

     Each of these technologies  has distinct  economic
advantages and disadvantages,  and the  suitability to
a particular area is  dependent upon a  number  of
factors.   Among  these are the  existence and proximity
of markets for the various  reclaimed  products, the
existence of sufficient amounts  of waste  to warrant a
particular processing technology and  to utilize the
economies of scale inherent in that  process,  and the
availability of land.

     The array of options is confusing, yet deci-
sion makers must be informed about the full system
cost of the major options available  to  them.   Most
importantly, decision makers must be  able to  determine
the economic effects  of varying  and changing  elements
of the system, according  to specific  desires  and
needs.

     A computer model  called WRAP, Waste  Resources
Allocation Program, has been developed  in order to
assist decision makers with these and  other compli-
cated considerations.  The  model  enables  its  users
to sort out all  the various options and generate and
cost a number of solid waste management plans.  Plans
are expressed in  terms of location and  capacity of
 sites  and  processes,  and  the  total  flow of waste  in
 the  transportation  network.   Total  annual  cost of the
 system and cost  per ton are computed.   One of the
 most important features of the  model  is that it can
 be used to guide the  decision making  process in the
 selection  of  alternative  systems  and  translate the
 impact of  this selection  into cost  figures.

                    APPLICATION  OF WRAP

     Although WRAP  is an  economic,  optimizing model,
 its  power  lies not  in selecting the solution for  a
 regional area, but  in allowing  a  decision  maker to
 analyze the impact  of his decisions.  This is accomp-
 lished by  structuring and executing a series of runs
 which  will  induce the model to  react  to changes of
 basic  assumptions or  decisions  by generating and
 costing an alternate  set  of plans.  The incremental
 cost of one plan over another is  the  cost  of that
 decision or change  of conditions.

     What  this approach to WRAP offers  is  an  effec-
 tive combination of optimization  and  gaming.   One uses
 the  model  iteratively to  examine  issues  and  decisions
 (gaming) but  at  each  step many  options  can be made
 available,  with  the best  combination  being selected
 (optimization).

                       ST. LOUIS
     Under the sponsorship of the Office of Solid
Waste Management Programs, the model was applied to
identify and illuminate issues in Greater St. Louis,
where the Union Electric Co. is proposing to install
an 8,000 ton per day resource recovery system using
the shredded fuel process developed by them.  The
proposed system included the marketing of the re-
covered fuel to Union Electric's power generating
stations within Greater St. Louis.  A local regional
planning agency, the East-West Gateway Coordinating
Council, requested EPA to fund an application of the
model  to provide further insights into the advan-
tages to the communities of participating in such a
plan.

                   WRAP IN ST. LOUIS

     Preparation of the WRAP application was a joint
effort among the East-West Gateway Coordinating Coun-
cil, the Union Electric Co., the Mitre Corporation
and EPA.

     The WRAP model  was used to analyze the 450 square
mile area of Greater St.  Louis, encompassing 185 muni-
cipalities, and roughly two and one-half million peo-
ple, producing an estimated 8,000 tons per day of
residential, commercial  and industrial waste.  185
landfills and dumps, and two incinerators currently
provide inadequate disposal services to the area,
often  in violation of environmental  regulations.
The Union Electric Company is proposing a large
resource recovery system using the shredded fuel
                                                      381

-------
process (shredding, air classification,  magnetic
separation of ferrous metals)  developed  by them,
including the marketing of the fuel  to Union
Electric's steam generating stations within the
region.

                      THE ISSUES

     The solid waste planners  at the East-West Gate-
way Coordinating Council identified  the  following
primary issues which needed to be investigated:

     1)  Will the Union Electric shredded fuel
         process be competitive with landfill?

     2)  Should processing for shredded  fuel  be at
         the utility sites or  at other locations?

     3)  What would be the impact of restrictions
         upon interstate flow of refuse and
         shredded fuel?

     4)  What would be the effect of the loss of
         the Illinois Power company  as a potential
         market?

     5)  How would a refusal  by some large com-
         mercial haulers to participate  affect the
         system?

              SETTING UP THE APPLICATION

     Data for the application  was drawn  largely from
an earlier report prepared for the East-West Gateway
Coordinating Council.  The data comprised costs of
the proposed Union Electric process, the Bureau
of Mines residue recovery process, transfer sta-
tions, landfills, and truck and rail haul, as well
as revenues from the sale of recovered material and
energy, and waste generation rates.   The region was
divided into 29 districts for  waste  generation and
possible transfer and processing sites were located.

     Structuring of the model  runs was,  of course,
the critical step in obtaining insight about the
issues to be examined.  The number and purpose of
the runs needs to be defined in order to give struc-
ture to the analysis process,  but the modeler
must remain flexible about eliminating and adding
runs dynamically as the runs progress.

     Originally six runs were  anticipated for the
St. Louis problem.  During the course of the analy-
sis, one run was dropped as irrelevant,  but three
more were added to examine collateral issues.  These
runs are listed in Table 1.
          Table 1.  Summary of St. Louis Runs

     Run    Description                              Cost per ton
     A      Base Case (Off-Site)                        $1.253
     A-1    Base Case with rail haul                     1.440
     B      Landfill available                           1.249
     B-1    Landfill available with rail haul            1.610
     C      No Interstate Flow                           1.840
     D      Loss of Illinois Power                      Not Run
     E      Reduced Tonnage                              1.750
     F      On-Site Processing                           1.950
     F-1    On-Site Expanded                             1.590
     For each of the above runs, WRAP generated a
solution consisting of system costs, process and site
selections, and transportation activities.
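     The incremental cost of a decision is then simply the difference
between the per-ton costs of two runs in Table 1, as in the following
short calculation (figures taken from the table above):

    # The incremental cost of a decision is read from Table 1 as the
    # difference between two runs' per-ton system costs (figures from the
    # table above).
    COST_PER_TON = {
        "A": 1.253, "A-1": 1.440, "B": 1.249, "B-1": 1.610,
        "C": 1.840, "E": 1.750, "F": 1.950, "F-1": 1.590,
    }

    def incremental(run, base="A-1"):
        """Cost per ton of moving from the base case to the given run."""
        return round(COST_PER_TON[run] - COST_PER_TON[base], 3)

    if __name__ == "__main__":
        print("on-site processing (F-1 vs. A-1):", incremental("F-1"))   # 0.15
        print("interstate restriction (C vs. A-1):", incremental("C"))   # 0.40
        print("reduced tonnage (E vs. A-1):", incremental("E"))          # 0.31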
                    ANSWERS TO ISSUES

 Is  Shredded  Fuel  Competitive with Landfill?

      This  first question is answered by a comparison
 of  the  A and B runs.   The cost of a total resource
 recovery system (Run  A)  is $1.253 per ton.   When
 landfill was offered  (Run B), it was selected to
 handle  only  .5% of the waste of the entire region
 and  cost was reduced  by only $.004 per ton.

      The systems  selected in A and B were almost
 identical  and  were off-site processing for shredded
 fuel.   Included in these runs was the assumption that
 fuel  produced  at  off-utility sites would be  trucked to
 the  closest  utility location  from each site  that was
 chosen  for processing.   The solutions in A and  B
 resulted in  all of the fuel  being trucked to the same
 utility site (there are  two in the region),  exceeding
 the  capacity of that  site.   This then raised a  col-
 lateral issue  of  the  method of haul  for fuel  pro-
 duced off-site.

 Should  Fuel  Be  Hauled by Truck or Rail?

      Runs A-l and B-l were added to the planned se-
 ries of runs to correct the capacitation problem of
 A and B by changing the  truck  haul  of fuel to rail
 haul.   This  change  did indeed  have the desired effect.
 Rail  haul   caused  the  waste  to  be sent to both utility
 sites.  Comparison  of the  A-l  and  B-l  runs show  that
 the  availability of landfill  does  not benefit the
 region.   We  can safely conclude  that  resource
 recovery is  competitive  with  landfill  for the region.

 Should  Processing  Be  At  The Utility  Sites?

     This  question  is  really answered by the fact that
 the  base case  (A-l) which  offers  both on-site and off-
 site  processing selected off-site  processing.  What,
 then, would  be the  incremental cost of on-site
 processing?  To answer this question  Run  F was used to
 force a selection of  on-site  processing.  Cost in-
 creased by $.51 per ton.  The solution in F, however,
 showed a curious selection  of  only one  utility site
 for shredded fuel processing.  Thus,  another issue
was raised.

 Should Only One Utility  Site Be  Used  for  Shredded Fuel
 Processing?

     The utility site which was  not selected in Run F
was actually closer to many of the waste  centers, but
 it was constrained  to a maximum  input  of  2,000 tons
 per day of raw refuse.  Was this capacitation con-
 straint the cause of  its rejection?   What would
 happen  if  it was expanded?  Run  F-l raised the con-
 straint to slightly more than  4,000 tons  per day.
The result was the  selection of  processing at that
 site at full  capacity with  the remainder  of waste
 going to the second utility site,  and  a  cost reduc-
 tion of $.36  per ton.

     The comparison of F and  F-l tells  us that enough
 transport  cost can  be  saved to justify the capital
expenditure  at the  first utility site  if the capacity
 is at least 4,000  tons per  day,  but not  if the size
 of the  facility is  too greatly restricted (i.e.,
 2,000 tons per day).  The fact that the  model
 selected full utilization of the  facility also
 indicates  that greater savings might  be achieved if it
were expanded further.
                                                     382

-------
     Because  F-l  has  a  more desirable solution than F,
it should  be  used  in  answering the  question of on-
site  vs. off-site  processing.   Comparison of A-l
and F-l shows, of course, that off-site processing is
still more desirable, and that the cost of on-site
processing is $.15 per ton greater than off-site processing.
Figures  1  and 2  show  the  geographical  ramification of
the on-site/off-site  controversy.   Both flows from
generation point to initial  offload point, and the
transfer network of flow  from  initial  offload point
are shown.

     Figures  3 and 4  depict the  comparison of on-site
processing at one  site  only and  the better system of
processing at both sites.

What Impact  Would  Interstate Restriction Have?

     The impact of interstate restrictions on the flow
of refuse  was assessed  by comparing Run C with the
base case  (A-l).   There was no interstate transport
of raw refuse in  the  base case solution, so the
changes in the system and the  incremental cost, $.40
per ton, are the result of the restriction on trans-
port of primary  processing residue  which had to be
shipped to a central  secondary processing site in
Missouri.  The secondary  process is beneficial enough
that the model decided  to construct a facility for
this process in  both  Missouri  and Illinois when resi-
due flow was restricted.

What Would Be The Effect of Only One Market for
Shredded Fuel?

      In the  base case run, the model had available to
it markets both  in Missouri (Union  Electric) and
Illinois (Illinois Power).  The model selected only
use of the Missouri market and,  therefore, this ques-
tion  is irrelevant.

What  Effect  Would A Drastic Change  in Volume of
Raw Refuse Have on the System?

     To examine  the impact of  a reduction of raw
refuse entering  into  the  system, Run E was
designed.   Such  a question is  relevant because
much of the  waste in  the  region is  controlled by large
private collection companies.   It is essential to know
what  impact  their nonparticipation, for whatever rea-
son, would have  upon  the  system.  Comparison of the
Run E solution to the base case indicates that the
location of  primary processing should be shifted fur-
ther  out from the city center if less waste is anti-
cipated.  This is due to  the fact that much of the
waste to be  lost would  be the  commercial waste con-
centrated in the downtown business centers.  Loss
of this waste would increase the system cost $.31
per ton.

                     CONCLUSION

      Perhaps one of the greatest insights obtained
by using WRAP in the  St.  Louis region is that the
incremental  cost of most  decisions  is going to be
small compared to experiences  of other areas in the
country.  The model shows the  decision makers that
they  can have a  great deal of flexibility in re-
ordering the design of  their system when confronted
with unchangeable real-world situations.  For in-
stance, if off-site processing is unacceptable to
the community, on-site  processing  is a viable
alternative for only a $.15 per ton higher cost.
While all of the unfavorable alternatives and con-
straints examined for the region increase the cost of
operation, none has so drastic an effect that it
mandates radical changes or abandonment of the
resource recovery system planned.  This provides
decision makers with the required framework in
which to confidently proceed in the final design
of a workable solid waste management system.

     The work in the St.  Louis area illustrates how
WRAP can be used effectively to sort out the best
solutions from a staggering array of possibilities.
Decisions that would oftentimes be made on political
considerations can be based on solid analytical tech-
niques when using such a sophisticated tool.  A model
such as WRAP can help decision makers discover the
best, minimum cost system, as well  as the cost of
deviating from that system.   Realization of the
true impact of their decisions will  lead decision
makers to wiser choices.
                   BIBLIOGRAPHY
WRAP, Waste Resource Allocation Program.  Washington,
U.S. Environmental Protection Agency Publication (in
preparation).

WRAP, User's Guide.  Washington, U.S. Environmental
Protection Agency Publication (in preparation).

WRAP, Programmer's Manual.   Washington,  U.S. Environ-
mental Protection Agency Publication (in preparation).

WRAP.  Operational and Exercise Runs.  Mitre Corpora-
tion; Bedford, Massachusetts (in preparation).
                                                       383

-------
     [FIGURE 1:  SHREDDED FUEL PROCESS:  OFF UTILITY SITE.  Maps (not
     reproduced here) of flows to the initial offload point and flows from
     the initial offload point, the St. Louis region, Run A-1, $1.44 per ton.
     Legend:  district, district boundary, city/county boundary, district
     centroid, city/county centroid, sanitary landfill, truck transfer,
     truck container transfer, utility site, rail transfer.]

     [FIGURE 2:  SHREDDED FUEL PROCESS (remainder of title illegible).  Maps
     of flows to the initial offload point and flows from the initial offload
     point, the St. Louis region, Run F-1, $1.59 per ton.]
-------
     [FIGURE 3:  CAPACITY CONSTRAINED AT MERAMEC (U.).  Maps of flows to the
     initial offload point and flows from the initial offload point, the
     St. Louis region, Run F, $1.95 per ton.]

     [FIGURE 4:  CAPACITY EXPANDED AT MERAMEC (U.).  Maps of flows to the
     initial offload point and flows from the initial offload point, the
     St. Louis region, Run F-1, $1.59 per ton.]

-------
                                DEVELOPMENT OF A MODEL FOR AN ORGANIC  SOLID WASTE
                                      STABILIZATION PROCESS ON A PILOT PLANT
                       D.S. Whang
               Division of Water Resources
   New Jersey Department of Environmental Protection
                   Trenton, New Jersey
                     G.F.  Meenaghan
             Office  of  Research Services
                 Texas Tech University
                   Lubbock,  Texas
                        Abstract

      The expansion of commercial production of fed
cattle has created a severe source of pollution and a
hazard to health.  The bio-stabilization of organic
solid waste is one of the few disposal processes which
recover the organic fraction of the wastes.  In spite
of the potential value of this process, lack of engine-
ering information has hindered its utilization and ap-
plication.  For this reason, an investigation was in-
stituted to ascertain the kinetic and generalized model
of the system.
      A small pilot plant was used for the purpose of
this study.  The data obtained from this unit was anal-
yzed with a computer model called COMPOST.  The result
of this analysis and modeling indicated that the be-
havior of the bio-stabilization model was consistent with
the reaction model of forming an intermediate organism-
substrate complex under a quasi-equilibrium condition.
A mathematical model of the overall process was devel-
oped, which could be used for optimizing the design
of the process.  The culmination of this research re-
sulted in the development of a system model which pre-
dicts the behavior of the bio-stabilization process.
The kinetic model obtained may be used as a model to
study the bio-stabilization process on an industrial
scale.  Particular attention should be devoted to
scale-up factors for industrial application.

                      Introduction

      The treatment and disposal of waste from the op-
eration of the livestock industry has come to the fore as
a matter of considerable importance in pollution con-
trol.  The problem is attributable to the increasing
concentration and quantity resulting from the mass pro-
duction of fed cattle.  There are two major factors
that must be considered in attempting to solve a pol-
lution problem:  a treatment process must be economic-
ally feasible for operation and it must satisfy the
conservational criteria for the receiving environment.
As emphasized by the Federal legislation concerning
solid waste management, the ideal scheme of a pollution
control process would be ultimate recycling of all
wastes generated.1 The bio-stabilization of the organic
wastes, commonly called composting, is an excellent
example of a pollution control process which recycles
wastes.
      Even though composting has a long history in
its application, there is no published work in the area
of the kinetics and modeling of the process which, from
a chemical engineering point of view, may be a control-
ling factor in the design and optimization of the pro-
cess.  Once the kinetic behavior of the process has
been defined, a mathematical model for the overall
system could be used for the optimization of the entire
process.
      A small composting pilot plant was used to ob-
tain the kinetic data for this study.  The enzymatic
kinetic theory has been applied for the analysis of the
kinetic data obtained.  A mathematical model for the
system has been developed and verified using the results
of the kinetic analysis  to  simulate the bio-stabiliza-
tion process of the organic waste  treatment.   The pur-
pose of this investigation  was  to  ascertain a kinetic
model of an organic solid waste stabilization process
and to develop a generalized  system model in an  attempt
to provide rational information for the optimization of
the process.

              Theoretical Considerations

The Optimum Reaction Conditions

      Approximately half of all urban wastes  and some
industrial wastes can be bio-stabilized to a  sanitary,
humus-like material.2  As the reaction proceeds, the
causative organisms use nutrients  in the organic
wastes and develop all of the protoplasm and  energy
necessary for the metabolism.   Approximately  one-third
of the carbon in the organic  wastes serves as a  source
of energy.  The conversion  of the  substrates  into
energy causes a rise in temperature.  The magnitude of
the temperature rise is an  indication of the  intensity
of the microbial activity,  accelerating the reaction
rate.
      The activity of the micro-organisms is  highly
dependent upon the environmental conditions to which
they are subjected.  Among  these are temperature,
moisture, pH, aeration and  nutrients.3  The optimum
temperature range for most  cases has been found  to be
50 to 70° C.  It has been known that aerobic  assimila-
tion can occur at any moisture  content  between 30 and
100 percent.  Aeration can  be used to reduce  excess
moisture in the decomposing material and at the  same
time provide the required oxygen for the microbial
activity.  Particle size also affects the efficiency
of the aeration.  Among the elemental requirements of
nutrients, carbon and nitrogen  are of major concern,
especially the carbon to nitrogen ratio.4
      The end product of the  bio-stabilization is a
humus material by which the organic wastes are re-
turned to the ecological cycle  in  a productive form.
The organic wastes usually  consist of carbon,  hydrogen,
nitrogen and oxygen.  Despite the  differences in reac-
tion mechanisms, the overall  reaction is similar to
that of a catalyzed oxidation reaction  of organic
elements.

Development of the System Equations

      The enzymatic kinetic theory, developed by
Michaelis and Menten,5 has been applied for the develop-
ment of a system equation, i.e.

                  r = k2 c / (k1 + c)                 (1)

where,
      k1, k2 = kinetic constants
      r = reaction rate
      c = concentration of substrate.
                                                        386

-------
On the other hand, based on an inert material, namely
ash, and for a batch operation of the system under con-
sideration (see Figure 1), a material balance of the
substrate for the system results in the following equa-
tion:

                  -d(wc)/dt = r w                     (2)

where,
      w = weight of ash in the system
      t = time.
Combining Equations (1) and (2), since w is a time-
independent constant in the system, one obtains

                  -dc/dt = k2 c / (k1 + c)            (3)

The initial condition of Equation (3) is c(0) = c0.

Solving Equation (3) using the initial condition,

            t = (1/k2) [ k1 ln(c0/c) + (c0 - c) ]     (4)

Equation (4) completely defines the behavior of the
system under consideration if the kinetic constants,
k1 and k2, are known.

Transformation of the System Equation

      If one defines the dimensionless time and concen-
tration as follows:6

                  Z = k2 t / k1                       (5)
and
                  Y(Z) = c / c0                       (6)

Then the initial condition, c(0) = c0, becomes

                  Y(0) = 1                            (7)

and Equation (4) becomes

            Z = ln(1/Y) + (c0/k1) (1 - Y)             (8)

Equation (8) is a dimensionless system equation with
a dimensionless initial condition, Equation (7).  This
equation completely describes the behavior of the com-
post system under investigation.
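A short numerical sketch may help in following Equations (4) and (8).  The Python fragment below is not
taken from the authors' COMPOST program; it simply evaluates the two equations using the kinetic constants
reported later in Table 2 for Experiment No. 5, with an assumed, purely illustrative initial carbon
concentration c0.

# Sketch (illustrative only): evaluating Eq. (4) and its dimensionless
# form, Eq. (8).  k1 and k2 are the Table 2 values for Experiment No. 5;
# c0 is an assumed initial carbon concentration, not a value from the paper.
import math

k1 = 1.136    # g C/g ash      (Table 2, Experiment No. 5)
k2 = 0.033    # g C/g ash-day  (Table 2, Experiment No. 5)
c0 = 0.45     # assumed initial carbon concentration, g C/g ash

def time_to_reach(c):
    # Eq. (4): time in days for the carbon level to fall from c0 to c
    return (k1 * math.log(c0 / c) + (c0 - c)) / k2

def dimensionless_time(y):
    # Eq. (8): Z as a function of Y = c/c0
    return math.log(1.0 / y) + (c0 / k1) * (1.0 - y)

for y in (0.9, 0.8, 0.7, 0.6):
    c = y * c0
    print(f"Y = {y:.1f}   t = {time_to_reach(c):5.1f} days   Z = {dimensionless_time(y):.3f}")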
                   Experimentation

Description of the Pilot Plant

      A small pilot plant as shown in Figure 1 was
used for the purpose of this investigation.  The pilot
plant consisted of an oxygen supply system, a humidi-
fier, and a reactor.  Humidified air was supplied to
the reactor by a perforated pipe which was attached to
the bottom of the reactor.  The reactor was insulated
with a coating of Eagle Pitcher cement to prevent bio-
logical energy loss.  The reactor was rotated by a
gear-reduction-motor to provide mixing and aeration of
the reactants.
      An opening was made on the top of the reactor to
facilitate manual filling and emptying of the reactant
material.  An exhaust port welded onto this opening pro-
vided access to the inside of the reactor for the
sampling for the elemental analysis of carbon.  The
maximum capacity of the pilot plant was 60 kilograms
per batch.
                      Figure 1.  The Pilot Plant
     (air/oxygen supply with pressure regulator, pressure gauge
      and safety valve; humidifier with vent and water feed;
      rotating reactor supplied with humidified gas)
                                                       387

-------
Raw Material

      Cattle manure from the Texas Tech University Ex-
perimental feedlot was used as the raw material for  this
investigation.  In general, the fresh manure contained
85 percent moisture and 15 percent volatile and fixed
solids.  Grab samples from the surface of the feedlot
were directly loaded to the reactor.  The age of the
manure ranged from three to ten days.
Chemical Analysis

      Grab samples were taken from the reactor to anal-
yze the change of elemental carbon concentration with
respect to time.  A CHN analyzer, Model 185, manufac-
tured by Hewlett Packard, Avondale, Pennsylvania, was
used to analyze carbon content of the samples.  The
content of carbon, measured as grams of carbon per
gram of ash, was used as a system parameter for the
development of the system model.
      The samples for chemical analysis were dried in
a Bleeder-Vacuum chamber under a vacuum of 20" of mer-
cury at room temperature for 72 hours.  The dried
samples were crushed to 48 mesh for the analysis.  Cyclo-
hexanone 2,4-dinitrophenylhydrazone (C6H10:N.NH.C6H3
(NO2)2) was used as a standard reference sample to pro-
vide calibration data.
Operating Conditions

      Samples were loaded manually to the reactor
through the opening.  The sample weight ranged from
34.5 kg to 49.5 kg.  The supply gas for oxygen was
saturated with water.  The pilot plant reactor was
turned one to three times every day to provide proper
mixing and aeration of the reactants.  All experiments
were conducted on a batch basis.  A summary of the
operating conditions of the experimental scheme is
shown in Table 1.
  Table 1.  The Operating Conditions of Reactor for
             the Stabilization Reaction

                                Experiment No.
                              4        5        6
  Sample weight, kg          49.5     34.5     34.5
  Source of oxygen           Air      Oxy.     Oxy.
  Frequency of mixing,
    per day                   1        3        1
  Gas flow rate, l/min        3        6        6
  Moisture, percent          63       52       52
               Results  and  Discussion
      The results of  the  chemical analysis and the
change of carbon concentration with respect to time
are shown in Figure 2.  The  reaction rates were deter-
mined by measuring the tangent of the curve for each
experiment.  The "symmetric  mirror technique" has  been
applied to measure this tangent.  The kinetic con-
stants, k1 and k2 of Equation (1), may be determined by
the application of the Lineweaver-Burk method.  The Line-
weaver-Burk plot involves transforming Equation (1)
into the following form, and the kinetic constants are
determined graphically using the paired values of
1/r vs. 1/c:

            1/r = (k1/k2) (1/c) + 1/k2               (9)
      In this analysis, a regression  technique has been
                  Figure 2.  Experimental Data and Simulated Model
     (measured carbon concentration vs. time in days for Experiments
      No. 4, 5, and 6, with the simulated model curve for Experiment No. 5)
                                                       388

-------
used to preclude human prejudice in the determination of
the values of k1 and k2.  The results of this analysis, which
was obtained from a subroutine of a main computer pro-
gram, are summarized in Table 2.
Table 2.  Kinetic Constants of the Proposed Kinetic Model

     Experiment No.           4        5        6
  k1, g C/g Ash             1.062    1.136    1.224
  k2, g C/g Ash-day         0.011    0.033    0.029
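As a hedged illustration of the Lineweaver-Burk step described above (and not a reproduction of the
authors' regression subroutine), the following Python sketch fits Equation (9) by ordinary least squares.
The paired concentration and rate values are hypothetical placeholders, chosen only to be roughly
consistent with the Experiment No. 5 constants.

# Sketch (hypothetical data): least-squares fit of the Lineweaver-Burk
# form, Eq. (9), 1/r = (k1/k2)(1/c) + 1/k2, to paired rate/concentration
# values.  Real values would come from the tangents of Figure 2.
c_data = [0.45, 0.38, 0.31, 0.26, 0.22]              # g C/g ash (hypothetical)
r_data = [0.0094, 0.0083, 0.0071, 0.0062, 0.0054]    # g C/g ash-day (hypothetical)

x = [1.0 / c for c in c_data]     # 1/c
y = [1.0 / r for r in r_data]     # 1/r
n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
num = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
den = sum((xi - x_bar) ** 2 for xi in x)
slope = num / den                  # = k1/k2
intercept = y_bar - slope * x_bar  # = 1/k2

k2 = 1.0 / intercept
k1 = slope * k2
print(f"k1 = {k1:.3f} g C/g ash,  k2 = {k2:.4f} g C/g ash-day")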
      A computer simulation model of the system, called
COMPOST, has been developed and used for the processing
of all experimental data obtained for this investiga-
tion.7  COMPOST enables prediction of the behavior of
the bio-stabilization system during the decomposition
of the material.  One advantage of the model is its
capability to describe the system behavior in a dimen-
sionless form.  This generalization of the model in
system analysis may be directly applicable to scale-up
of the process.
      In Equation (3), k1 is a dissociation constant of
an enzyme-substrate complex and is a measure of the
affinity of the enzyme for the substrate.  As shown in
Table 2, all values of k1 found in this investigation
have the same order of magnitude within the experimental
error range, while those obtained for k2 vary consider-
ably depending on the experimental conditions.  The
consistency of k1 in magnitude indirectly verifies that
k1 is a characteristic constant of the system.  An in-
crease in the magnitude of k2 would result in a higher
reaction rate.  Its contribution to the reaction rate
is directly proportional to its magnitude.
      It should be noted that an approximately 300 percent
increase was found in the magnitude of k2 when pure
 oxygen was  used as the oxygen source.  This increase
 compares  favorably with the data reported for the
 activated sludge process in which the process was
 found  to  be more effective.  It should be noted,
however, that the magnitude of k1 in this study was in-
 dependent of  the oxygen source.
       The system equation,  Equation (4), was developed
 based  on  the  mass balance and Michaelis-Menten kinetic
 theory.   The  curve in Figure 2, which simulates the be-
 havior of the system during the decomposition reaction,
 is a plot of  Equation (4)  using kinetic constants given
 in Table  2  for Experiment  No.  5.   As can be seen from
 the figure, the mathematical model is consistent with-
 in the experimental error  range.
       Equations (5)  and (6) define dimensionless time
 and concentration,  respectively.   The use of dimension-
 less variables allowed for  the system to be interpreted
 more easily.   Figure 3 shows the  results of these trans-
 formations.   It should be noted that the slope of the
 curve  depends  upon the initial concentration only as
can be seen from Equation (8).  As expected, the shape
of the curve is similar to that of the curve in Figure 2.
       The system model may  provide basic analytical in-
 formation for  control and optimization of the  process
 for commercial application.  However,  the control of  the
 bio-stabilization process is still based on experience
 due to  lack of analytical information  in the design
 procedure.  Further  investigation may  be necessary  for
 optimizing the entire process  in industrial applica-
 tion.   The kinetic data and model herein reported should
be used as a model to  study the bio-stabilization pro-
 cess on an industrial  scale.   Particular attention
 should be devoted to  scale-up  factors  from pilot  scale
 to industrial  operation.  In conclusion,  the proposed
model does predict the  behavior of the  system.
            Figure 3.  Change of Carbon Concentration with Respect to
            Time in Dimensionless Form for Experiment No. 5
            (dimensionless concentration Y plotted against
             dimensionless time Z, 0 to 1.6)
                                                       389

-------
                      References
1.  Breidenbach, A.W. and Floyd, E.P.  Needs for Chemical
    Research in Solid Waste Management.  U.S. Dept. of
    HEW, Washington, D.C.  1970.

2.  "Solid Waste Disposal."  Chemical Engineering
    78, 14:155-59.

3.  Spohn, E.  "Composting by Artificial Aeration."
    Compost Science 11, 3:22-24.

4.  McGauhey, P.H.  American Composting Concept.
    Solid Waste Management Office, Environmental Pro-
    tection Agency, Washington, D.C.  1971.

5.  Rainer, J.W.  Behavior of Enzyme Systems.  Van
    Nostrand Reinhold Co., New York, N.Y.  1969.

6.  Smith, C.L., et al.  Formulation and Optimization
    of Mathematical Models.  International Textbook
    Co., Scranton, Penna.  1970.

7.  Whang, D.S.  A Kinetic Study on an Organic Solid
    Waste Stabilization Process on a Pilot Plant Unit.
    Dept. of Chemical Engineering, Texas Tech Univer-
    sity, Lubbock, Texas.  1972.

8.  Albertsson, J.G., et al.  Investigation of the Use
    of High Purity Oxygen Aeration in the Conventional
    Activated Sludge Process.  Federal Water Quality
    Administration, Washington, D.C.  1970.
                                                      390

-------
                      EVALUATION AND SELECTION OF WATER QUALITY MODELS:  A PLANNER'S  GUIDE

                                     E. John Finnemore and G. Paul Grimsrud
                                             Systems Control, Inc.
                                             Palo Alto, California
     As part of a management guide for planners,  Sys-
tems Control, Inc., recently developed for the Environ-
mental Protection Agency systematic procedures for
evaluating and selecting receiving water quality models.
Using these procedures, each model is evaluated on the
basis of many considerations, which include both the
technical principles and capabilities of the models
and such resource needs and constraints as additional
labor, specialized technical expertise, time and funds,
and computer limitations.  All these considerations
are combined into a single performance index.  A pro-
cedure is also prescribed for combining the various
component costs of applying the model into a single
overall cost.  A comparison of this overall application
cost with the model's performance index may then be
used as a guide to model selection.  The selection
procedure is organized into phases of increasing level
of detail, each of which may or may not be required
depending upon the nature of the planning problem
being confronted.
                     Background

     The priority given to the accomplishment of the
 nation's commitment to the goal of clean water is
 evidenced by the size of the investment being made for
 abatement and prevention of water pollution.  This com-
mitment makes management decisions affecting receiving
 water quality of the utmost importance.  With such major
 decisions being made daily as part of numerous planning
 programs, it is incumbent on planners to assure that the
 expenditures they recommend are justified and that the
 courses adopted will fully achieve the expected results.

     But the selection of water quality planning method-
 ologies, one of the first major decisions facing plan-
 ners and managers, requires a good understanding of  the
 difficult technical problems which may be involved,  be-
 sides the limitations of time and funds.
     Since the number of wastewater management alterna-
 tives which exist and require evaluation may be large,
 the complexity of water quality analysis has stimulated
 the development of a variety of tools to assist plann-
 ing, ranging from simple graphical techniques to sophis-
 ticated computerized models.  While these tools freq-
 uently enable types and numbers of analyses which
 would otherwise be impractical, they can also be
 costly and time consuming.   It is therefore essential
that planners give careful attention to ensuring that
 their use is cost-effective.
     The model evaluation and selection procedures
 described here are specifically oriented to water
 quality and water resources planners and managers.
 They are designed to enable a planner without previous
 experience in water quality modeling to determine
 whether a receiving water quality model could and
 should be used in a particular planning program, and
 which specific model would be most cost-effective.

     The two primary purposes of this work were to
 develop a technique which would assist planners in
 selecting and using water quality analysis methods
 which are cost-effectively matched to their planning
 responsibilities,  and to summarize the technique into
 handbook form.   The handbook is designed to provide
 planners with a sufficient  introduction to water qual-
 ity modeling to enable effective communication with
 systems  analysts and administrators regarding water
quality modeling.  Besides model selection and  evalua-
tion, the handbook1 also provides guidance on the manage-
ment of modeling and the use of contractual services.

                       Method

     A systematic way of evaluating water quality
models was sought.  Clearly, the evaluation procedure
would require answers to many specific questions about
the models, and so a tabular format for presenting
these questions and answers was clearly preferred.
Tables represent the condensation of large quantities
of descriptive text, enable far more rapid information
retrieval, and greatly facilitate the comparison of
different models.  This tabular method of evaluation
was used directly in the procedures developed for both
model cost-effectiveness analysis and for final model
selection.

     Information on the models may be obtained  from
program documentation and user's manuals, published
articles about model development and applications,
and if necessary by direct communication with the
developers and users.  Most available water quality
models can be located at the following institutions:
the Environmental Protection Agency, the Army Corps of
Engineers, the U.S. Geological Survey, state water
quality planning offices, and colleges and universities
active in the water quality area.  The models chosen
to develop and demonstrate this technique are all use-
ful for the prediction of water quality and provide
a wide range of capability and applicability.  At
the time (1974-75), they seemed to represent a large
portion of the models expected to be in use in the
near future.  This does not imply that another model,
not initially included, might not be preferable in a
particular case.  The primary function of the selected
models was to provide a vehicle for development and
demonstration, rather than to draw any conclusions
about them.  The chosen models for these demonstration
purposes are all deterministic simulation models of
varying complexity.  They were arranged into the follow-
ing six groups, in accordance with their areas of
applicability:
     Group I    Steady-state Stream Models
     Group II   Steady-state Estuary Models

     Group III  Quasi-dynamic Stream Models

     Group IV   Dynamic Estuary and Stream Models
     Group V    Dynamic Lake Models
     Group VI   Near-field Models

     The models used to simulate only stream conditions
are least complex due to the one-dimensional character-
istics of flow.  Models for simulating stratified lakes
and reservoirs fall next in line of complexity, followed
by estuarine models.  Estuary models are more complex
because the prototype flow is usually in at least two
dimensions, and the boundary conditions, such as tides,
vary rapidly compared with those in lakes.  The costs
of model application tend to be proportional to their
complexity.

     The "quasi-dynamic" model category (Group III)
has been so named because only the weather (meteorolog-
ical) inputs of these models may be dynamic.  Their
solutions have steady-state hydraulics, but dynamic
water quality.
The near-field category (Group VI), is the only one in
which very localized effects, such as plume entrain-
ment, are simulated.

     There exist many other models different from the
                                                       391

-------
models chosen for this development and demonstration,
as well as other versions of those employed here.
A number of these were treated in a similar but less2
extensive manner in an earlier study by the authors.
Probably much the same questions would be used in the
evaluation of other models, and only different answers
would be obtained in some areas.
     Types of models notably different from those
evaluated herein include:  ecologic modeling of receiv-
ing waters, in which the life forms are of prime inter-
est;  truly two-dimensional flow models, in which
velocity components are determined in two perpendicular
directions over a grid of points covering the water
body:  three-dimensional models which are still more
complex and presently are far from being ready for
wide use in planning;  "continuous" deterministic
models, whose output can be statistically analyzed for
probability studies, but which need much longer records
of water quality data and far longer computer run times;
and stochastic models, which provide an alternative
approach to the question of probabilities.

                      Results

     The resulting model evaluation and selection
technique divides naturally into two stages;  model
evaluation, and cost-effectiveness evaluation.  The
latter depends heavily upon the former.

Model Evaluation

     The questions to be answered in the evaluation of
the models were organized according to their purposes
and contents into the fourteen categories listed in
Table  1.  For each of these categories, a table was
prepared which contained from two to six columns, one
for each of the specific questions in the category.
Table 2 is an example of such a table, in this case for
the thirteenth category of Table 1.  In each row of
Table 2 the answers are entered for each model chosen
for evaluation.  These  tabulations of answers then

                      Table 1

            MODEL EVALUATION CATEGORIES
     MODEL CAPABILITIES

       Applicable Situations
       Constituents Modeled
       Model Factors Accounted for

     DATA REQUIRED

       For Model Inputs
       Additional, for Calibration and Verification

     MODEL COSTS

       Initiation Costs
       Utilization Costs

     MODEL ACCURACY

       Representation
       Numerical Accuracy
       Sensitivity to Input Errors

     EASE OF APPLICATION

       Sufficiency of Available Documentation
       Output Form and Content
       Updateability of Data Decks
       Modification of Source Decks
provide the information which is used to measure  the
suitability of the models for a particular purpose
(its performance index), and the total cost of oper-
ating the models.
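One convenient way to hold such tabulated answers, shown here purely as an illustration (the handbook
itself prescribes paper worksheets, not software), is a nested dictionary keyed by model, category, and
question; the sample entries below echo Table 2.

# Sketch (illustrative only): a nested-dictionary layout for the tabular
# model evaluations.  The category and model names echo Tables 1 and 2;
# the data-structure choice itself is an assumption, not part of the handbook.
evaluation = {
    "DOSAG-I": {
        "Ease of Application": {
            "Card Changes": "Few.",
            "Recomputation Time": "Very small.",
            "Helpfulness of Available Documentation": "Good.",
        },
    },
    "QUAL-I": {
        "Ease of Application": {
            "Card Changes": "Very small, except for changes of weather data.",
            "Recomputation Time": "Small.",
            "Helpfulness of Available Documentation": "Good.",
        },
    },
}

# Comparing models on one question then reduces to a simple lookup:
for model, categories in evaluation.items():
    print(model, "-", categories["Ease of Application"]["Recomputation Time"])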

Cost-effectiveness Evaluation

     A form of cost-effectiveness evaluation was  deter-
mined to provide the most appropriate basis for model
selection.  The procedure developed to evaluate the
cost-effectiveness of each model requires making  a com-
parison of its performance index (PI;  a measure  of
the model's suitability for a particular purpose) with
the total cost of operating it.
     Clearly the process of selecting a water quality
model for any wastewater management planning project
may involve numerous complex considerations.  Therefore
the procedure had first to identify the important
factors which influence the selection of models by
planners, and then to structure the consideration of
these factors in such a manner that confusion is
minimized.  Of great importance, the procedure does not
tell planners what decisions on models to make:   in-
stead it provides them with the essential questions and
thought structure upon which they or their assistants
can make the decisions.
     The model selection process is designed to give
users a choice of several different levels of detail
they may want to consider.  The process is therefore
divided into four phases, each going into progressively
more detail and requiring progressively more effort.
These phases are:

     Phase I:   Model Applicability Tests
     Phase II:  Cost Constraint Tests
     Phase III: Performance Index Rating - Simplified
     Phase IV:  Performance Index Rating - Advanced

The rejection of candidate models in one phase reduces
the number of models to be evaluated in the next phase,
and phases are designed accordingly.  All considerations
in the selection process are based upon the results of
the model evaluations discussed previously.

     After having identified the problem and inventoried
the data available for a particular planning program,
the basic decision must first be made whether any water
quality model at all should be used.  This decision must
take into account whether a model would be helpful in
plan formulation and whether a suitable model and suf-
ficient data are available.  (Any additional data gather-
ing should be postponed until after the models are
selected.)  Generally, water quality models are useful
in any area where the quantitative relationship be-
tween varying wasteloads and resulting water quality
must be known.  However, in "Effluent Limited" areas,
waste treatment alternatives will often be specified by
Federal Effluent Standards, thus eliminating the need
for a water quality model.  Where there is some doubt as
to whether water quality modeling would be inappropriate
or inefficient in a particular planning application, the
model selection process will soon make this fact
apparent.

     A set of candidate models must be identified before
the model selection process can be initiated.  Although
the models chosen for demonstration purposes (see Table
2) could be used as the candidate set, many other models
are available which should be considered.  The planner
should select his set of candidate models using as many
sources as possible.  Since model titles are frequently
descriptive of their capabilities, the planner should
first use the titles to screen out those obviously not
applicable to his particular problem.  For the purposes
of model selection, there is obviously no need to com-
plete the extensive (fourteen category) model evaluation
                                                       392

-------
                                                     Table 2

                          MODEL SUMMARY:  EASE OF APPLICATION, UPDATEABILITY OF DATA DECKS

 Group  Model                       Card Changes (Column 1)              Recomputation      Helpfulness of Available
                                                                         Time (Column 2)    Documentation (Column 3)

  I     DOSAG-I                     Few.                                 Very small.        Good.
        SNOSCI                      Few.                                 Very small.        Good.
        Simplified Stream (SSM)     None.                                Relatively large.  Good.  Needs thorough study and
                                                                                            good understanding before
                                                                                            using charts.
  II    ES001                       Few.                                 Small.             Good.
        Simplified Estuary (SEM)    None.                                Relatively large.  Good.  Needs thorough study and
                                                                                            good understanding before
                                                                                            using charts.
  III   QUAL-I                      Very small, except for changes of    Small.             Good.
                                    weather data, which may involve
                                    many cards.
        QUAL-II                     "                                    Small.             Good.
  IV    (DEM)                       Few in most cases.                   Small.             Good.
        Tidal Temperature (TTM)     Small, except for weather inputs.    Small.             Good.
        RECEIV                      Small, except for transient waste    Small.             Poor.
                                    inputs.
        SKMSCI                      "                                    Small.             Good.
  V     Deep Reservoir (DRM)        Small, except for weather data.      Small.             Generally good.  The three
                                                                                            comprising documents are less
                                                                                            convenient to use.
        LAKSCI                      Small, except for weather and        Small.             Good.
                                    inflow quality data.
  VI    Outfall PLUME               Minor changes, of at most a few      Very small.        Adequate.
                                    cards.
for models which are rejected by applicability or con-
straint tests  (Phase I or II) of the cost-effectiveness
evaluation. Therefore, the model evaluations should be
accomplished concurrently with,  and only to the extent
needed by, the various tests and ratings of the cost-
effectiveness  phases.
For similar reasons, no model should be processed in a
subsequent phase until all aspects of the preceding
phase are complete.
     The detailed procedure for  selecting a water qual-
ity model is discussed in the following subsections,
one for each phase.  A flowchart guide to the procedure
has been prepared for each phase.  At the end of any
phase the user should decide whether to select a model
on the basis of the factors he has considered thus far,
or whether to  refine his analysis further in the next
phase.   Worksheets, similar in form to Table 2 but
containing columns for the entries indicated in Table
3, are required to record the cost-effectiveness evalu-
ations for each phase worked. The more phases the
planner uses,  the more confidence he can have in his
selection.   However, there will  be trade-offs between
selection effort and selection confidence,  and for
many applications adequate confidence in the selection
will be attained after completing only the  first one
or two phases  of the process.
     Applicability Tests (Phase  I).   These  tests ask
questions about  the appropriateness of the  models for
the problem at hand,  and inappropriate models are
rejected from  further  study.   This first phase of the
selection process is therefore very important, because
of its  rapid narrowing down of the field of candidates.
     Results of the applicability tests should be re-
corded as a "yes" or "no" in the appropriate columns of
the worksheet;  the categories to be tested are given
under Phase I of Table 3.  As the various tests proceed,
rejected ("no") models should be deleted from subsequent
worksheets to avoid unneeded further work.
     The ability of a model to simulate the behavior of
the correct type of water body is of prime importance.
The user can usually determine this capability of a
model from the general description in the model documen-
tation.  For a deeper understanding, the scale of
interest and the extent of concentration and/or flow
variability should be analyzed.  The time variability
test determines whether the needed time-varying model
variables are provided by the model.  The time varying
requirements are established by the length of the
simulation period, and by the variability of the flow,
quality and weather inputs.  The discretization test
determines whether a model can simulate the level of
spatial detail required for the proposed application;
special features such as the presence of tidal flats,
flow augmentation sources, and storm loadings may
influence this requirement.  The constituents capability
test simply requires comparing the model capability to
simulate water quality constituents with the user's
needs; some models have specific constituent capa-
bilities, others can be used for whole classes of
constituents which have certain types of kinetic
reaction, such as first order decay.  The driving forces
and boundary factors test involves checking that the
model is capable of simulating all the important driving
forces in the prototype;  some of these may need to be
time varying boundary inputs.  The various tests of
                                                       393

-------
                                                     Table 3

                                   SUMMARY OF COST-EFFECTIVENESS EVALUATION TABULATIONS

 COST-EFFECTIVENESS EVALUATION CATEGORIES                            WORKSHEET ENTRIES*
                                                                     (PHASES I, II, III, IV, ALL)

 APPLICABLE SITUATIONS
   Water Body                                                        A  R  W
   Time Variability                                                  A  R  W
   Discretization, etc.                                              A  R  W
 CONSTITUENTS MODELED
   Constituents Modeled                                              A  R  W
   Driving Forces, Boundary Factors                                  A  R  W
 DATA REQUIREMENTS FOR MODEL INPUTS
   Hydrologic and Geologic                                           A  A  R
   Water Quality                                                     A  A  R
   Effluent                                                          A  A  R
   Other                                                             A  A  R
 DATA REQUIREMENTS FOR CALIBRATION AND VERIFICATION
   Hydrologic and Hydrodynamic                                       A  A  R
   Water Quality                                                     A  A  R
   Overall Data Rating                                               R  W
 INITIATION COSTS
   Model Acquisition                                                 A  A
   Equipment Requirements                                            A  A
   Data Acquisition                                                  A  A
 UTILIZATION COSTS
   Machine Costs                                                     A  A
   Manpower Costs                                                    A  A  R  W
   Total Costs                                                       A  A
   Cost Constraint Tests                                             A  A
 ADVANCED PI RATING
   Internal Factors Accounted For                                    R  W
   Model Representation Accuracy                                     R  W
 PI RATING, STAGE 2
   Numerical Accuracy                                                R  W
   Sufficiency of Available Documentation                            R  W
   Output Form and Content                                           R  W
   Updateability                                                     R  W
   Ease of Modification                                              R  W
 OVERALL PI RATING                                                   R

  A = Answer (yes, no, $, weeks, etc.);  R = Rating on scale 0-10;  W = Weight, normalized about 1.0.
data requirements are designed to ensure that the quan-
tity and quality of the available prototype data are
satisfactory for the needs of the model.  While per-
forming these tests, however, it is important to consid-
er whether additional data can be specifically acquired
during the project, or whether the model will be used
over a number of years during which data collection
will probably improve.  Entries under the "Applicability
Limitations" column of Table 3 should be briefly
descriptive.
    Any models deemed marginally applicable in the
Phase I tests may be maintained for further considera-
tion in Phase III, when their level of applicability
will be given a rating.
    Cost Constraint Tests (Phase II).  In this phase
of the cost-effectiveness evaluation both elapsed
project time and dollar costs are considered as cost
items, which must be compared for each model with user
constraints.  Answers to most of these tests (see Phase
III of Table 3) probably will be either in weeks or in
dollars, obtained from the preceding model evaluation
(similar to Table 2).
    Model acquisition costs will frequently be nominal,
though delivery time may be a factor, and some may
include a surcharge for each run made.  Other models
may require a lease or purchase agreement.  Equipment
requirements for calculators and computers with their
peripheral equipment must be compared with the avail-
                                                       394

-------
able capabilities,  although many services are available
through remote terminals.  Data acquisition costs
summarize the costs of acquiring any additional data,
as discussed under  Phase I.  Machine costs include
charges for computation plus the use of various
peripheral equipment.  Computation time is about pro-
portional to the number of constituents modeled, dis-
crete segments modeled, time steps used, and runs made.
Manpower costs include considerations of the number of
personnel and the level of expertise needed for a model,
possible recruitment and/or training time, model set up
time, run time, time to analyze the results, and, of
course, personnel salaries and overhead costs.
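    As a rough illustration of the proportionality just described, the following Python sketch computes
a machine-cost estimate; every count and rate in it is a hypothetical placeholder, not a figure from the
handbook.

# Sketch (hypothetical numbers throughout): a rough machine-cost estimate
# following the rule of thumb that computation time is about proportional
# to constituents x segments x time steps x runs.
constituents = 4
segments = 30
time_steps = 240
runs = 12
seconds_per_unit = 0.05        # assumed CPU seconds per constituent-segment-step
dollars_per_cpu_hour = 300.0   # assumed computing service rate

cpu_hours = constituents * segments * time_steps * runs * seconds_per_unit / 3600.0
machine_cost = cpu_hours * dollars_per_cpu_hour
print(f"Estimated CPU hours: {cpu_hours:.1f},  machine cost: ${machine_cost:,.0f}")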

    From the above, estimates of the total cost and
time requirements for a model can be obtained.  In the
final cost constraint tests these are compared with the
project resource constraints, and grossly unacceptable
models rejected.  Marginally unacceptable models
probably should not be rejected in this phase because
of  the approximations undoubtedly necessary in making
the many cost and time estimates.
    Simplified Performance Index Rating (Phase III).
This portion of the model selection process gives a
method for estimating the effectiveness of the candi-
date models.  The effectiveness is obtained through
a "Performance Index Rating", which is divided into
two parts.  The first, "simplified" part  (Phase III)
accounts for the more basic, and usually more important
model  attributes which have previously been discussed
in the Phase I and Phase II tests.  The second "advanc-
ed" part of the Performance Index (PI) rating, performed
in Phase IV, involves much more detailed and usually
somewhat less important considerations of the models.
In most cases, Phase III will give the planner a very
good  idea of which model is best for his particular
planning problem.  A brief review of the contents of
 the second part (Phase IV) will then usually indicate
whether those further considerations are necessary.
     The same categories of model attributes treated in
Phase I are now rated more quantitatively, if they have
not been previously rejected.  This is shown in Table
 3;  each category now requires a "rating" and many
 require a "weight" for Phase III.
     The user must  select attribute ratings based upon
 his knowledge of the model capabilities and the appli-
 cation needs for the category considered.  The rating
should fall on the following scale from zero to ten:
10 - excellent, 8 - good, 6 - fair, 4 - poor, 2 - very
poor, and 0 - completely inadequate.
      The  "weights" are used  to adjust  the  impact  of  each
 attribute rating on  the overall Performance Index, based
 upon their  relative  importance.  For example, if  the
 "Time Variability" capability of a model  is much more
 important  than  the "Constituents Modeled"  capability,
 then it  should  have  a  larger weight.  Weights must be
 assigned by the planner based upon his  judgement  of  the
 importance  of  each attribute to his planning problem.
 In assigning weights the most significant  factor  is  the
 relative importance  of the various attributes.  There-
 fore they  are normalized about a value of  1.0;   typical
 weight ranges  are  given  in Ref.  1.  For a particular
 application a single set of  weights should be used  for
 all candidate models.  But  they will probably vary with
  each application to  a  different  prototype situation.
 The user will probably find  it more convenient  to assign
  the ratings and weights  at  the  same time  as he  performs
  the Phase I tests.
       When all Phase III ratings  required in  Table 3 are
  complete,  the planner  should decide whether  or  not  to
  proceed with more  detailed ratings  in  Phase  IV.   If  the
  Phase III ratings  are  deemed adequate  for model selec-
 tion, then the overall performance index of the jth
 model, PI(j), can be computed using the equation:

     PI(j) = [ Σ(i=1 to n) Rating(i,j) x Weight(i) ] / [ Σ(i=1 to n) Weight(i) ]

      where i = the attribute number
            n = number of attributes considered
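     The weighted average above is simple to compute; the Python sketch below shows the calculation for
a single candidate model, using hypothetical attribute ratings and weights (normalized about 1.0) rather
than values from the handbook.

# Sketch (illustrative only): overall Performance Index of one candidate
# model as the weighted average defined above.  Attribute names, ratings,
# and weights are hypothetical examples.
ratings = {              # 0-10 scale, as described in the text
    "Water Body": 10,
    "Time Variability": 8,
    "Constituents Modeled": 6,
    "Discretization": 8,
}
weights = {              # normalized about 1.0
    "Water Body": 1.2,
    "Time Variability": 1.4,
    "Constituents Modeled": 0.8,
    "Discretization": 0.6,
}

pi = sum(ratings[a] * weights[a] for a in ratings) / sum(weights[a] for a in ratings)
print(f"Overall PI rating: {pi:.1f}")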

     Advanced Performance Index Rating  (Phase  IV).
This final phase enables far more intensive probing into
details of various candidate models.  Most of  the rat-
ings in this phase follow the procedures of Phase III
but require extensive user insight and experience in
water quality analysis and modeling.  For this reason
it is included as optional in the overall selection pro-
cess, and it is not discussed in detail here.  The eval-
uation categories for Phase IV are listed in Table 3,
and an overall PI rating for both Phases III and IV
could be obtained in the same manner; full details are
given in Reference 1.

     Final model selection simply requires making a
tabulation of the total dollar costs  (Phase II) and the
overall PI ratings for each model.  Table 4 is the re-
sult of an example application.  The user can make his
selection by either comparing costs and expected per-
formance, or by using the PI/cost ratio.

                        Table 4

             COST EFFECTIVENESS COMPARISON

            PI Rating        Total Cost         PI x 10^4
            (From Phases     of Application     ---------
  Model     III & IV)        (From Phase II)     Dollar      Rank

    A          6.0             $ 50,740           1.18        3
    B          6.1               56,040           1.09        4
    C          6.9               53,650           1.28        2
    D          7.1               52,740           1.36        1
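     The PI x 10^4 per dollar column and the ranking of Table 4 follow directly from the PI ratings and
total costs; the Python sketch below repeats that arithmetic (small rounding differences from the
published column are possible).

# Sketch: computing PI x 10^4 / dollar and ranking the Table 4 candidates.
candidates = {"A": (6.0, 50_740), "B": (6.1, 56_040),
              "C": (6.9, 53_650), "D": (7.1, 52_740)}

ratios = {m: pi * 1e4 / cost for m, (pi, cost) in candidates.items()}
ranked = sorted(ratios.items(), key=lambda kv: kv[1], reverse=True)
for rank, (m, ratio) in enumerate(ranked, 1):
    print(f"Rank {rank}: Model {m}  (PI x 10^4 / dollar = {ratio:.2f})")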
                      References

 1.   Grimsrud,  G.  P.,  E.  J.  Finnemore,  and H.  J.  Owen,
     "Evaluation of Water Quality Models:   A Manage-
     ment Guide for Planners," Systems  Control,  Inc.,
     Palo Alto, California (EPA Contract No.  68-01-
     2641),  177 p., July 1975.

 2.   Systems Control,  Inc.,  "Use of Mathematical Models
     for Water  Quality Planning," State of Washington,
     Department of Ecology,  WRIS Technical Bulletin No.3,
     Olympia, Washington, June 1974.

                    Acknowledgements

      This work was supported by the U.S.  Environmental
 Protection  Agency, Office of Research  and Development,
 and by the  State of Washington Department of Ecology.
 The guidance and contributions of the  EPA Project
 Officer, Mr. Donald H.  Lewis, and the  DOE Project Offic-
 er, Dr. Robert Milhous,  are particularly appreciated.
 The assistance of Dr. Roger D. Shull,  Mr. William P.
 Somers, and Mr.  John  Kingscott, all with the EPA, is
 acknowledged.
                                                        395

-------
                             A LANDSCAPE PLANNING MODEL AS AN AID TO DECISION-MAKING
                                       FOR COMMUNITY GROWTH AND MANAGEMENT
                     J.  Gy.  Fabos
         Department of Landscape Architecture
                 and Regional Planning
              University of Massachusetts
                Amherst, Massachusetts
                                      S.  A.  Joyner,  Jr.
                               Department of Civil Engineering
                                University  of Massachusetts
                                   Amherst,  Massachusetts
                       Abstract

     A landscape planning model for assessing special
resources, hazards and development suitabilities is de-
scribed.  Computer mapping aids in the quantitative and
spatial mapping of resultant assessments.  A framework
for incorporating economic evaluations of resources,
hazards and development suitabilities into land use de-
cisions is proposed.   Application of the model to a
town in the Boston Metropolitan Area showing the results
of 20 years of Metropolitanization is illustrated.
                      Background

     An interdisciplinary landscape research team was
established at the University of Massachusetts in 1971.
Since that time, over 30 people have contributed to the
development of a Metropolitan Landscape Planning Model
(acronym, METLAND).   The team has responded to the per-
ceived problem that the "metropolitanization" of eastern
Massachusetts has caused a needlessly high depletion of
its environmental/landscape resources, has increased
hazards, and development has often occurred on margin-
ally suitable lands.  Furthermore, metropolitanization
has impaired the vital ecological stability of large
landscape units.  If these phenomena could be quanti-
fied, it was argued, an important step would be taken
to placing them on equal footing with other quantified
"values" and thereby integrating them into the decision
making process.1
     It is well recognized that highways and other major
public installations have been the major growth gener-
ators; their planners have seldom taken into account
the factors described above.  The model presented here
is designed to provide a procedure to assess special
resource, hazard and development suitability potentials,
which could complement and benefit existing decision
making.  The model presented here has been applied to
the town of Burlington in the larger Boston Metropolitan
region.  The Boston metropolis has gradually engulfed
2500 square miles (see Figure 1).  Application of the
model demonstrates the consequences of this metropoli-
tanization on the town of Burlington.  Approximately
eighty percent of this town has been developed with
modern industries, shopping centers and low density
housing.
     The rationale of the METLAND team has been that the
attention of decision makers could be better and more
easily brought to focus on these landscape issues if the
magnitude of the negative effects resulting from their
actions were clearly pointed out.  Our research to date
has demonstrated an attempt to place economic values on
several resource variables.  The continuation of this
research, however, will investigate other evaluations
based on energy use analysis and the perception of
various interest groups such as conservationists and
developers.

          Figure 1.  Boston Metropolitan Area
     (pre-suburban growth to 1950, major suburban growth
      during the 1960s, post-1960 suburban growth, and
      the Burlington study area)
                              Framework of  the  METLAND Model

                       To deal quantitatively with  environmental issues
                  of the "metropolitanized" landscape, the  METLAND  study
                  has proposed a three-phase planning model including
                  assessment, evaluation and implementation phases  (see
                  Figure 2).  The assessment phase,  which is the focus of
                  this paper, will be outlined  in detail  below.   The re-
                  maining two phases are in the early stage of develop-
                  ment at the time of this  writing.
                             Figure 2.  METLAND Conceptual Model

                     ASSESSMENT - PHASE 1
                       SPECIAL VALUE RESOURCE COMPONENT.
                       VARIABLES: water, agricultural and wildlife pro-
                       ductivity, earth resources, visual quality
                       ECOLOGICAL STABILITY COMPONENT.
                       VARIABLES: ecological functions, transaction
                       functions, regional closure
                       HAZARD COMPONENT.
                       VARIABLES: air and noise pollution, flooding
                       DEVELOPMENT SUITABILITY COMPONENT.
                       VARIABLES: physical, topoclimate, visual
                       Combined Resource and Hazard values to influence
                       development restriction or resource conservation
                       Analysis of conflicts between restrictive areas
                       and those which are suitable for development
                       Ecological Stability as a function of land uses
                       and use distribution

                     EVALUATION - PHASE 2
                       Trade-off analysis of alternative use types,
                       densities and distributions in regard to VALUES,
                       NEEDS and OBJECTIVES, as expressed by interest
                       groups or measured by professionals

                     IMPLEMENTATION - PHASE 3
                       Identification of existing devices and development
                       of new devices for implementation and application
                       of those devices
                       The assessment phase  consists  of  a  selection of
                  variables analyzing the intrinsic value  of those en-
                  vironmental characteristics which may  produce benefits
                  or result in harm to society.  These several resource
                  and hazard analyses are mapped and  organized into four
                  groups, called components.  While each individual vari-
                  able has a specific value, this  grouping helps to
                  identify complementary relationships and environmental
                  issues and to provide combined values  which are useful
                                                       396

-------
in making decisions.  The four components of the assess-
ment phase are referred to as: (1) special resources,
(2) hazards,  (3)  development suitability, and  (4) ecol-
ogical stability.


             METLAND Assessment Components
                  (Refer to Figure 2)
     The special resource component of the assessment
phase addresses the issue of environmental resources and
specifically deals with three types of resources.  These
are:  (1) renewable physical resources  (e.g., water),
(2) non-renewable physical resources  (e.g., sand and
gravel), and (3)  "aesthetic-cultural" resources  (e.g.,
views).  Both renewable and non-renewable physical re-
sources are critical to the metropolitan region.  Aes-
thetic-cultural resources, while not critical, are
nonetheless important because their presence enhances
the quality of life.  Presently comprising the special
resource component are five individual special resources
or "variables."  Representing the first type of environ-
mental resource are the variables known as agricultural
productivity, wildlife productivity  (including openland,
woodland, and wetland wildlife subvariables), and water
resources  (including water quality and water supply
subvariables) .  Representing the second type of resource
is, at the present time, only the variable of sand and
gravel supply.  Representing the third resource type is
visual landscape quality, which refers to such special
landscape features as quality wetland areas, views and
historical sites, and the visual contrast, diversity
and compatibility of land uses.
      The assessment of this component is based on the
premise that if a portion of a landscape possesses a
high  quality and quantity of one or more of these re-
sources, those areas should receive special planning
consideration and various degrees of protection from
development.  If immediate need for the resources is
not apparent, they should be protected or conserved
much  as capital resources are saved in a bank until
they  are needed.
      Assessment procedures have been developed for each
of the special resource variables and subvariables.  In
addition, a composite resource assessment procedure has
been  developed to evaluate all special resources to-
gether.  Although such a composite assessment can be
based on any one of several resource value sets, as
mentioned earlier, METLAND has to date used only an
economic evaluation of relative resource significance
in compositing individual special resource assessments.
 (This is also the case in the composite assessments
made  for other METLAND model components.)
    '  The hazard component of the assessment phase
focuses on the issue of environmental dangers or unde-
sirabilities.  Presently comprising the hazard com-
ponent are three environmental variables: air pollution,
noise pollution and flooding.  The individual assess-
ment  of these three hazard component variables provides
spatial information on both the type and the magnitude
of hazards.  The composite assessment of hazards com-
bines this information.
      The development suitability component addresses
the issue of environmental opportunities for alterna-
tive  types of development.  These opportunities are
landscape resources which can minimize the cost of
development while increasing human comfort and user
amenities.  Included in the component are three vari-
ables (a physical variable, a topoclimate variable and
a visual variable) which enhance the suitability and
the attractiveness of an area for development.  To date
only  the physical variable is operational.
      The ecological stability component, which is being
developed, will deal with the issue of ecological im-
pact, ecosystem structure and function, and the impli-
cations of such structures and functions in land use
decisions.
     The variables and  subvariables  comprising these
four assessment components  are  the elements  designed
to perform the specific analyses  required  for  applica-
tion of the assessment phase of the  model.   They  are
also the elements which provide the  basic  assessment
procedures.


     METLAND Assessment Procedure and Application
     The essential element  of the METLAND  assessment
procedure is a mapped depiction,  at  a common scale,  of
the results of the variable assessment.  A map, there-
fore, is prepared for each  subvariable and variable.
These variable assessment maps  are then overlaid  to
form composite special resource, hazard or development
suitability maps for the study  area.
     With the expansion of  the  assessment phase of the
METLAND model to include special  resources, environ-
mental hazards, opportunities for development, and
ecological stability, the need  for a tool  to rapidly
digest and manipulate large amounts of data collected
on the regional scale, and  to prepare these individual
and composite variable assessment maps, became impera-
tive.  On the basis of a study  undertaken to select
such a tool, the Computer Mapping for Land Use Planning
(COMLUP) system (developed by Dr. Neil Allen of the
U.S.D.A. Forest Service) was chosen as most appropriate
for the METLAND study.3  This selection by METLAND of a
computerized mapping system reflected the fact that
computerized data banks are today becoming available
for use by metropolitan regions and communities.  It
was assumed that this availability would continue to
increase in the near future.  Below, the capabilities
of the COMLUP mapping system are summarized.

               The COMLUP Mapping System
     The COMLUP system is essentially a computer map-
ping package of some twenty-five programs with provision
for inventory, overlay  (including weighting), and line
plotting of spatially located source data.4  For the
purposes of the METLAND study,  its function has been
expanded, by METLAND computer specialist Dorothy
Grannis, to that of a landscape planning tool.   As such,
it not only provides an inventory of existing environ-
mental values and combinations  of values, but is also
able to estimate or simulate the cause-effect relation-
ship of proposed alternative environmental land use
patterns and decisions.

Capabilities of COMLUP
     COMLUP is a second generation grid system which
followed the first generation or manually applied over-
laying of grids, most often referred to as SYMAP.5  But
because remote sensing and other land use survey infor-
mation is becoming available at a finer scale,  a more
accurate data storing and manipulating system was
needed.  The advantage of the second generation COMLUP
computer geographic technique over the earlier tech-
nology is that any shape and size polygon can be
directly input and stored in the computer, by image
digitizer, without subdividing  the polygon into grid
cells.6  For data manipulation, the digitized area
still must be subdivided into cells, but this is now
done in a second step by the computer automatically,
instead of as a manual first step.7
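
     To make this two-step polygon-to-grid conversion concrete, the
following is a minimal Python sketch of the idea only, not the COMLUP
code itself: a polygon entered as digitized boundary points is
subdivided into grid cells automatically by testing each cell center
against the boundary.  The 500 x 1000 cell grid mentioned in the next
paragraph is used as a default; the example polygon and all function
names are hypothetical.

# Illustrative sketch only: rasterizing a digitized polygon onto grid
# cells in a second, automatic step (not the actual COMLUP programs).

def point_in_polygon(x, y, vertices):
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize(vertices, n_cols=500, n_rows=1000, width=1.0, height=1.0):
    """Mark every grid cell whose center falls inside the polygon."""
    dx, dy = width / n_cols, height / n_rows
    cells = set()
    for row in range(n_rows):
        for col in range(n_cols):
            cx, cy = (col + 0.5) * dx, (row + 0.5) * dy
            if point_in_polygon(cx, cy, vertices):
                cells.add((row, col))
    return cells

# Hypothetical digitized soil-type polygon stored as boundary points.
soil_polygon = [(0.1, 0.1), (0.6, 0.15), (0.55, 0.5), (0.2, 0.45)]
print(len(rasterize(soil_polygon, n_cols=50, n_rows=100)))
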
     Application of the COMLUP  system can be described
in a simple three step procedure, as follows.  In step
1, the COMLUP program takes the digitized  data in line
segment form and overlays the grid of fine granularity
(500 X 1000 cells) on these data.  In step 2, maps are
overlaid on one another one at  a time by the computer.
Step 3 re-converts the grid data back to line  format  so
that it may be plotted on a drum or  flat-bed plotter.
                                                        397

-------
The plotted output constitutes a mapped depiction of
the results of the applied variable assessment proce-
dure.
     Composite assessment maps are prepared in a similar
way.  The source maps in these cases are the already
prepared individual variable assessment maps.  The as-
sessment maps of variables belonging to a single as-
sessment component are internally overlaid and weighted
as desired.  The plotted output is a mapped depiction
of the composited variable assessments belonging to the
component in question.
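
     As a companion sketch (again illustrative only, not the COMLUP
programs), the weighted overlay of already-prepared variable assessment
grids into a composite map can be expressed as a cell-by-cell weighted
sum.  The variable names, cell scores and weights below are hypothetical.

# Illustrative sketch only: forming a composite assessment grid by a
# weighted overlay of individual variable assessment grids.

def composite_overlay(variable_grids, weights):
    """Overlay co-registered assessment grids cell by cell.

    variable_grids: dict mapping variable name -> 2-D list of cell scores
    weights:        dict mapping variable name -> weight (e.g., relative
                    economic significance of the resource)
    """
    names = list(variable_grids)
    rows = len(variable_grids[names[0]])
    cols = len(variable_grids[names[0]][0])
    composite = [[0.0] * cols for _ in range(rows)]
    for name in names:
        grid, w = variable_grids[name], weights[name]
        for r in range(rows):
            for c in range(cols):
                composite[r][c] += w * grid[r][c]
    return composite

# Hypothetical 2 x 3 study area with two special-resource variables.
grids = {
    "wetlands":        [[1, 0, 2], [0, 1, 0]],
    "sand_and_gravel": [[0, 3, 1], [2, 0, 0]],
}
weights = {"wetlands": 1.5, "sand_and_gravel": 1.0}
print(composite_overlay(grids, weights))   # weighted composite cell values
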

     The rest of this paper briefly summarizes the
application of the assessment phase.  The assessment
procedure at the variable level will be shown only by
one example.  An overall assessment is also shown which
is produced by the combination of the first three com-
ponents  (special resource, hazard and suitability) shown
in Figure 2.  The final portion of the paper will sum-
marize the conceptual framework of the evaluation and
implementation phases.


        Application of METLAND Assessment Phase

     In Figure 2, there is an implied difference among
the various components.  Special resource and hazard
components are developed so that development restric-
tion or resource conservation can be achieved.  The
third or development suitability component, however, is
designed to show opportunities for development, from the
point of view of physical and topoclimatic suitability
and visual amenity values.  To describe an assessment
procedure of a variable, the physical development suit-
ability variable is selected for illustration.  The
combined or component assessment incorporates the re-
sults of all variables of each component.  In this ini-
tial study, variables are weighted by economic evalua-
tion.


Physical Development Suitability
Assessment Procedure
     Surficial geologists,8 soil scientists,9 civil
engineers10 and landscape architects11 have studied
for decades aspects of physical development suitability.
The significance of this variable can be supported by
the findings of these researchers.  Our investigation
identified six subvariables of significance supported
by scientific results and estimated added development
costs needed to overcome development constraints, when
suitability is less than ideal.12  These subvariables
are as follows in the order of their importance:  (1)
depth to bedrock, (2) depth to water table,  (3) drain-
age  (this often overlaps with water table characteris-
tics) ,  (4) slope, (5) topsoil, and  (6) bearing capacity.
To assess composite suitability based on these six sub-
variables a simple four step procedure is used:13

Step 1: An interpretation of soil types for each of the
        six physical factor subvariables.

     This interpretation is based on  (1) the three-part
symbols by which the SCS identifies all soil types, and
(2) information provided by the SCS Engineering Tech-
nical Guide which identifies the dimensions of physical
factors  (e.g., 0-2' depth to bedrock).

Step 2: An assignment of estimates of expected added
        development costs to physical subvariable di-
        mensions interpreted from soil types.
     On the basis of METLAND research, it has become
possible to directly assign actual dollar estimates
representing added development costs.  The upper limits
of these added development costs per acre  (assuming
that two houses are built with basements on each acre)
for each subvariable are as follows: depth to bedrock
up to $20,000; depth to water table up to $5,000;
drainage up to $5,000 (it should be noted that correct-
ing high water  table or  poor drainage characteristics
can be  done  together at  little extra cost); slope up to
$1,300  (but  development  on slopes greater than 15% is
prohibitive  for the  average development); topsoil up to
$1,500; and  bearing  capacity up to $1,500.

Step 3: A determination  of the estimated total added
        development  costs  for each soil type.

     This is accomplished  simply by combining  the added
costs per physical subvariable for each soil type, with
the exception of the depth to water table and  drainage
variables, in which  case only the higher cost  is
counted for  both.

Step 4: An aggregation of  total added development costs
        into A-B-C-D classes for physical development
        suitability  (as  shown in Table 1  below).
                         Table 1
 A-B-C-D Classes for Physical Development Suitability

     Total Added Costs          Aggregation Class
     $    0 - $2000             A
     $2001 - $4000              B
     $4001 - $9000              C
     $9001 +                    Undevelopable (or Class D)
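
     A minimal Python sketch of Steps 2 through 4 follows, assuming a
hypothetical soil-type interpretation; only the class thresholds of
Table 1 and the rule that the higher of the water-table and drainage
costs is counted are taken from the text.

# Minimal sketch of Steps 2-4 above.  The cost figures for the example
# soil type are hypothetical; the thresholds come from Table 1.

def total_added_cost(costs):
    """Step 3: sum per-acre added development costs over the six
    subvariables, counting only the higher of water table and drainage."""
    wet = max(costs.get("water_table", 0), costs.get("drainage", 0))
    others = ["bedrock", "slope", "topsoil", "bearing_capacity"]
    return wet + sum(costs.get(k, 0) for k in others)

def suitability_class(total):
    """Step 4: aggregate total added costs into A-B-C-D classes (Table 1)."""
    if total <= 2000:
        return "A"
    if total <= 4000:
        return "B"
    if total <= 9000:
        return "C"
    return "D (undevelopable)"

# Hypothetical interpretation of one soil type (Steps 1-2).
soil_type_costs = {
    "bedrock": 2500,        # shallow depth to bedrock
    "water_table": 3000,    # high water table
    "drainage": 1000,       # counted only if it exceeds the water-table cost
    "slope": 0,
    "topsoil": 500,
    "bearing_capacity": 0,
}
total = total_added_cost(soil_type_costs)
print(total, suitability_class(total))    # -> 6000 C
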
     As in each assessment procedure,  this  type  of
aggregation serves to categorize  soil  types in terms of
high, moderate and low potential  suitability for a typi-
cal housing development.  In this case, A, B and C
classes are based on what total added  development costs
actually mean in terms of housing square  footage.  Also
in this case, a fourth category referred  to as "undevel-
opable" (or Class D) is considered.  The  inclusion of
this fourth category reflects the fact that there is a
significant practical difference  between  sites that
have a low development suitability and those that are
entirely unsuitable for development.

     Once appropriate A, B, C and "undevelopable," or
D classes are assigned to each soil type, the COMLUP
system is used as before to produce the desired  assess-
ment map for physical development suitability.   Figure
3 shows the COMLUP map results of this physical  develop-
ment suitability assessment technique  as  it was  applied
to the town of Burlington.

Combined Assessment of All Components
and a Preliminary Evaluation

     At this writing the majority of assessment  proce-
dures for the variables have been developed.  The five
special resource variables listed in Figure 2 were
applied to the study town of Burlington,  Massachusetts.
The composite assessment of all special value resources
and a composite assessment of two of the three hazard
variables were prepared.  Each of the variables was
weighted using economic evaluation.  The  third map used
in this combined assessment procedure  was the physical
development suitability assessment map.   Topoclimate
and visual suitability assessments were not included
since satisfactory economic evaluations have not yet
been incorporated into these procedures.
     The specific purpose of this combined  assessment
procedure is to show the consequences  of  twenty  years of
suburban development in Burlington with regard to the
three components for which assessment  procedures have
been developed to date.  The town of Burlington  has
undergone large-scale suburbanization  over  the past two
decades which is typical of the growth trends of many
metropolitan communities.  In the case of Burlington,
this growth is undoubtedly a combined  result of  the
town's proximity to Boston and of the  large-scale high-
way construction activity, which  has taken  place in the
town over the last twenty-five years.  For  the sake of
visual clarity at the present scale, a simplified ver-
sion of these combined overlays is shown.   In addition,
since original estimates of resource values, hazard
                                                       398

-------
 [Figure 3.  Burlington Physical Development Suitability Map.  Legend:
    "A" quality areas:  added development costs from $0 - $2000/acre
    "B" quality areas:  added development costs from $2001 - $4000/acre
    "C" quality areas:  added development costs from $4001 - $9000/acre
    "D" quality or undevelopable areas:  prohibitive added costs ($9000+)]
value losses, and added development costs were based on
generalized information unconfirmed by site-specific
investigation, combined landscape values are broadly
expressed by dollar value ranges.  (It should be noted,
however, that the COMLUP system can easily compute
specific combinations of value and value loss for any
desired landscape unit.  Also, it should be noted that
all economic weightings of values have been substan-
tiated by the team's resource economists.  The argu-
ments which support these values are described in the
Part II Research Bulletin of the METLAND Research.14)
     The consequences of twenty years of metropolitan
suburbanization  illustrated in Figure 4 are probably
obvious.  However,  a highlighting of these consequences
as they are seen by the  METLAND research team is thought
to be in order.

     First, it should be pointed out that each of the
three types of areas identified in Figure 4 indicates
the presence of  significant land use constraints.  These
mapped constraints  may represent special resource areas
which are valuable  to people in general, hazard areas
which are harmful or undesirable to people and property,
or other areas which due to their physical landscape
characteristics  are especially costly to develop.  In
Burlington, parcels of land exhibiting one or more of
these constraints actually occupy about half the town,
or five square miles.
     As can be seen,  areas having significant land use
constraints are  concentrated primarily in the south-
western portion  of  the town.   Unfortunately,  when one
inspects the existing land uses of the town,  it becomes
evident that post-war land use decisions resulting in
land use changes  in Burlington were in no way respon-
sive to the presence  of  these land use constraints.
Instead,  residential  development,  particularly single
family housing, has been established fairly evenly in
 [Figure 4.  Composite evaluation map of special resource values, hazard
  potentials, and development constraints.  Legend:
    1962 Composite Special Resource Assessment Value ($18,000 - $150,000)/acre
    1971 Air and Noise Pollution Hazards Assessment ($1280+/acre damages)
    Development Constraint Assessment ($4000/acre added development costs)]
all sectors of the  town  (regardless of the  character of
the landscape), while major shopping  centers and  indus-
trial uses have been located  almost exclusively along
the Burlington stretches of the Route 128 and  Route 3
highways.
     These growth-generating  highways have  themselves
been located in that part of  the  town which has some of
the most valuable natural resources together with some
of the worst natural conditions for development.  As a
matter of fact, about two-thirds  of the  area which is
particularly valuable in terms of special resources is
also particularly unsuitable  for  development,  requiring
average added  development costs of as much  as  $9000 or
more per acre  to overcome natural constraints  for even
low density development.  These constraints are primar-
ily a result of a very high water table  and/or poor
load bearing capacity.  By glancing again at Figure 4,
one is reminded that despite  such drawbacks, most of
this unsuitable area is not only  developed, but is
actually developed with massive commercial  and indus-
trial structures for which the assessed  constraints and
compensatory costs are undoubtedly much  greater.  The
irony of this  whole land use  situation lies in the fact
that there is  a great deal of  land in the town which is
distinguished  neither for its  special resources nor for
its natural constraints to development.

     Despite these facts, development in Burlington over
the past two decades did not  follow what appears  from
all points of  view to be the most rational  course.
There are two  principal reasons for this.   First, most
decision makers during the greater part of  the post-war
era knew very  little about landscape values and con-
straints as they have been assessed here.   Second, even
for decision makers who might have known, there were no
commonly accepted devices available for  implementing
protection and conservation planning decisions while
satisfying the rights of land owners and developers in
an equitable fashion.  As a result, land use changes
followed and were induced by  the  location of major
transportation routes—a phenomenon which is well-
                                                       399

-------
demonstrated here by the Burlington case study.15
     Considering the magnitude of the resource loss,
hazard increase, and development cost to society pro-
ceeding from this pattern of suburban growth, it is
firmly believed by the METLAND team that it is in the
long-term public interest for planners and decision makers
to take as much account as possible of the natural charac-
teristics of the landscape.  Despite the general nature
of the results shown in Figure 4, this type of assess-
ment information is absolutely essential to wise land-
scape planning.  At the very least, development deci-
sions should be postponed until estimates of potential
landscape value have been confirmed at the site level.
     According to the conservative estimates of the
team, over 20 million dollars worth of resources were
destroyed, hazards increased substantially, and unneces-
sary development costs soared.  The team also developed
two alternative growth patterns, one similar to the
existing one and another using a P.U.D. type develop-
ment concept.  It was concluded that each growth pat-
tern could easily have accommodated the 25,000 existing
population without impairing or loosing landscape re-
sources or exposing so many residents to unhealthy air
and noise pollution, or incurring unnecessary construc-
tion and site improvement costs.  Had either of these
plans been adopted originally, the town would today
find itself in significantly better shape.  Instead of
polluted or highly salted local supplies of water, the
town would have numerous sources of clean ground water.
Instead of having to import sand and gravel from distant
sources at ever increasing costs, the town would have
accessible local aggregate supplies for development or
maintenance projects for many years to come.  Instead
of being virtually without wetlands and the visual-
cultural benefits associated with them, the town would
have preserved its major wetland which was filled in to
accommodate the sprawling Burlington Mall.
     It is realized that economic evaluation has limita-
tions.  In broadening the framework of evaluation and
developing specific steps for implementation, it is
thought important to consider other interpretations
based on energy analysis and on the perceptions of
various special interest groups  (such as conservation-
ists and developers).  In addition, the team is attempt-
ing to propose a procedure which would facilitate effec-
tive participation by both decision makers and the
general public.  These activities are planned to be
undertaken in Part III of the research during 1976 and
1977.
     While expanding and improving this landscape plan-
ning process, the team is convinced that the Burlington
case study should provide an impetus for landscape
assessment and planning prior to metropolitan invasions.
We do not have all the answers, but there is sufficient
supporting evidence which shows the utility of this
approach.
                       ENDNOTES

     Credit is given to the following agencies:  This study has
been primarily supported by the Massachusetts Agricultural
Experiment Station, University of Massachusetts at Amherst,
Paper No. 2013; additional support has been received from the
U.S.D.I. Office of Water Resources Research and the U.S.D.A.
Forest Service, Pinchot Institute of Environmental Forestry.

     1.  Fabos, Julius Gy., et al. Model for Landscape Resource
Assessment. Agricultural Experiment Station Bulletin No. 602,
Amherst, Massachusetts, 1973.

     2.  All graphics prepared by Richard Rosenthal.

     3.  Ferris, Kimball, and Fabos, Julius Gy. The Utility of
Computers in Landscape Planning. Agricultural Experiment Station
Bulletin No. 617, Amherst, Massachusetts, 1974.

     4.  Allen, Neil. Computer Mapping for Land Use Planning:
COMLUP. Intermountain Region: U.S. Forest Service, 1973.

     5.  IRIS-Illinois Resource Information System.  Feasibility
Study Final Report, Center for Advanced Computation, University
of Illinois at Urbana, Urbana, Illinois, 1972.

     6.  Ferris and Fabos.

     7.  While METLAND is presently using a "second generation"
technology, computer programmers are rapidly refining third
generation technology designed to overlay polygon areas directly
without reconversion to grid cells for manipulation.  The
Canadian Geographic Information System developed by the Canadian
National Government is an example of a completed third generation
system (see Ferris and Fabos, 1975).

     8.  Flawn, Peter F. Environmental Geology. Harper's
Geoscience Series, Harper and Row, New York, 1970.

     9.  Bartelli, L.J.; Klingebiel, A.A.; Baird, J.V.; and
Heddleson, M. (ed.). Soil Surveys and Land Use Planning. The
Soil Science Society of America and the American Society of
Agronomy, Madison, Wisconsin, 1966.

     10. Lynch, Kevin. Site Planning. The MIT Press, Cambridge,
Massachusetts, 1974.

     11. Way, Douglas. Terrain Analysis: A Guide to Site
Selection Using Aerial Photographic Interpretation. Dowden,
Hutchinson and Ross, Stroudsburg, Pennsylvania, 1973.

     12. Fabos, Julius Gy., and Caswell, Stephanie J. Part II
of the "Metropolitan Landscape Planning Model" (METLAND),
Agricultural Experiment Station Research Bulletin, Amherst,
Massachusetts, 1976 (in press).

     13. The initial physical suitability procedure was
developed by Robert Reiter of the METLAND team.  The assessment
shown here is a revised procedure based on the development cost
estimates of our resource economists Robert Torla and John
Foster.  The adaptation of those costs to this revised procedure
was done by Stephanie Caswell and the authors of this paper.

     14. Fabos and Caswell.

     15. Fabos et al. Model for Landscape Resource Assessment.
                                                       400

-------
                                  A RESOURCE  ALLOCATION MODEL FOR THE EVALUATION

                               OF ALTERNATIVES IN SECTION 208 PLANNING CONSIDERING

                                    ENVIRONMENTAL,  SOCIAL AND ECONOMIC EFFECTS
                                                    Douglas Hill
                                           Grumman  Ecosystems  Corporation
                                                 Bethpage,  New York
Modeling for 208 planning should be designed to fac-
ilitate participation by planners and representatives
of the affected public.   Intangibles and incommensur-
ables must be considered, and the ultimate need for
value judgments to assess the importance of environ-
mental, social, and economic effects must be accommo-
dated without obscuring  the factual analysis.
Ideally, population groups that are affected differ-
ently should be accounted for separately.

Model building should therefore proceed in successive
stages of greater precision.  An initial qualitative
analysis leads to a conceptual model that identifies
the differential impacts of the alternatives, making
only the judgment that the impacts are beneficial or
detrimental.  With land  use decisions among the
alternatives for satisfying water quality goals, a
resource allocation model is useful to account for
other costs and benefits.  This can be solved for the
minimum dollar cost mix of activities as a datum;
alternative plans can then be generated by assigning
additional importance to intangible and incommensur-
able values, with the plan selection informed by knowl-
edge of its incremental dollar cost.
         Criteria For Modeling in Planning

 The nation has embarked on something new in the way of
 planning that makes use of models.  Under Section 208 of
 PL 92-500, 150 local agencies are engaged in two-year
 programs to prepare areawide waste treatment manage-
 ment plans to meet water pollution abatement require-
 ments with "a total resources perspective."  In its
 guidelines for conducting this planning, the EPA(l)
 emphasizes land use considerations, nonpoint pollution
 sources, and environmental, social and economic impact
 evaluation in the comparison of alternatives and
 selection of a plan.

 Almost without exception, the preparation of these
 plans is in the hands of regional planning agencies,
 not the departments of public works who have tradi-
 tionally done water quality planning.  As never before,
 land use planners and water quality engineers are
 being thrown together to combine their talents.  It
 remains to be seen whether, in 208 planning, environmental
 modeling will provide the organizing structure that
 focuses interdisciplinary planning or whether it will
 remain an arcane art practiced by a few professionals.

 To the ordinary planner,  a mathematical model is a
 black box which is mysterious if not frightening.  If
 he is trapped in a situation where he must use the
 results of models, he can no doubt be intimidated into
 doing so by the authority of his consultants or their
 peers, but he will be acting as much on faith as if
someone were reading entrails.  When the contract ends,
he will be in no position to judge how his plans should
change as conditions change.

Moreover, not only professionals are engaged in 208
planning.  The requirement for regular meetings of a
citizens' advisory committee seems to be leading to
genuine public participation.  This may be regarded
as a monumental nuisance or as the opportunity to add
the dimension that has been missing from most environ-
mental decisions in the past:  proper consideration of
value judgments.

Environmental decisions necessarily involve both facts
 and value judgments, and at some philosophical risk,
 emphasized by Walker(2), a distinction can be made
between them.

      o  Fact:  whether and to what extent an effect
         occurs

      o  Value judgment:  whether it is good or bad,
        and whether that matters

Environmental phenomena described by scientific state-
ments that are subject to test are facts; so are the
tangible products of engineering decisions.  Whether
it is important that certain areas of a bay be safe
 for swimming is a value judgment.  Granted, in the
 domain of aesthetics, where one's very perceptions are
 determined by one's tastes, the distinction may not be
 clear.

 A sequence of analytical steps showing the relation-
 ships of facts and values is given in Figure 1.  The
sequence starts with a planned action, shown at the
bottom of the diagram, which may have direct environ-
mental, social,  or economic effects as indicated by
the lower box in each of the respective columns.

The first step in the analysis is to determine whether
such an effect occurs (yes/no).  If so, the next step
is to identify the kind of effect.  Identifying the
kind of environmental effect may lead to the identifi-
cation of a social effect (horizontal arrow) not
previously identified as a direct social effect.
Similarly, identification of the kind of social effect
may lead to the identification of an economic impact
(horizontal arrow) not previously identified as a
direct economic impact.

The next fact that may be determined is the direction
of the effect (more/less).  On the basis of these
facts, a value judgment can be made (horizontal arrows)
as to whether the change is beneficial or detrimental
to the environment or to man, some of the latter con-
 sisting of economic effects.  As we illustrate below,
                                                       401

-------
 [Figure 1 diagram:  columns for ENVIRONMENTAL EFFECTS, SOCIAL EFFECTS and
  ECONOMIC EFFECTS, each divided into FACTS and VALUE JUDGMENTS and linked
  by horizontal and diagonal arrows.]

                           FIG. 1     SEQUENCE OF ANALYTICAL STEPS IN ENVIRONMENTAL MODELING
carrying the analysis only to this point may provide
considerable insight into the implications of alterna-
tive environmental decisions.

The final step in the sequence of analysis of fact is
the quantification of the amount of the change, an
example of which is water quality modeling.  This may
assist in the quantification of social effects on man
(diagonal arrow) which may further lead to the mone-
tization of economic effects (diagonal arrow).  The
monetization of economic effects may be accomplished
by the marketplace in terms of market prices.  In
the case of public goods or bads, however, monetiza-
tion must be accomplished by other analytical means,
i.e., cost-benefit analysis.
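
One way to keep the factual determinations separate from the value
judgments, as the sequence above requires, is to record them in distinct
fields.  The sketch below is purely illustrative; its field names and the
example effect are not taken from the paper.

# Minimal sketch of the Figure 1 sequence as a data record: facts first,
# value judgments kept separate so they are not buried in the analysis.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EffectRecord:
    # Facts, in the order in which they can be determined
    occurs: bool                      # does the effect occur?
    kind: str                         # e.g., "shellfish habitat loss"
    direction: str                    # "more" or "less"
    amount: Optional[float] = None    # quantified change, if known
    dollars: Optional[float] = None   # monetized value, if market or
                                      # cost-benefit data permit
    # Value judgment, recorded separately from the facts
    judgment: Optional[str] = None    # "beneficial" or "detrimental"

# An intangible effect: direction and judgment are known, but it cannot
# be quantified or monetized.
aesthetics = EffectRecord(occurs=True, kind="shoreline aesthetics",
                          direction="less", judgment="detrimental")
print(aesthetics)
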

Two important classes of effects that cannot be mone-
tized are those that are intangible or incommensurable.
Nevertheless, the direction, more or less, of these
effects may be determined, and a value judgment made
as to whether such an effect is beneficial or detri-
mental to man.  There may be some difference of opin-
ion among individuals as to how aesthetic effects, for
example, may become more pleasing; on the other hand,
there is likely to be considerable consensus on what
is displeasing.

The difficulties in treating aesthetics are great
enough that they are usually ignored in environmental
analyses, yet we are told by Kneese and Bower(3) that:

          The limited evidence from the studies
          and analysis...leads to the virtually
          inescapable conclusion that higher
          water quality must be  justified pri-
          marily  on  aesthetic  and recreational
          grounds, if it  is to be justified at
          all.

 Similarly, Ridker(4) observed that psychic costs are
likely to be a large  portion of  the total cost  of air
pollution.  If anything,  the aesthetic  sensibilities
of the public have probably become more acute since
these studies were made.   In the absence of adequate
quantitative methods,  assessing  the importance  of
aesthetic effects and psychic  costs is  largely  a
matter of judgment.

Between the facts of the  matter  and their effect on
public policy, in short,  stands  a screen of value
judgments.  Most properly, these are the concern of
the affected public.   In  environmental  studies, they
are often ignored or  - worse - estimated by technicians
as "importance factors."   Of the numerous objections
to this practice, as Andrews(5)  points  out,  it may in
particular obscure the choices to be made by "burying
usable information about  impacts on specific para-
meters beneath a layer of questionable  value judgments."
The expense that may be justified in raising the
fidelity of models of  the facts  in the  presence of the
noise of value judgments  is another question.

 In summary, if environmental models for planning are
 to be improved, they should meet several criteria:

    o  They should be comprehensible to other
       planning professionals and the lay public
                                                       402

-------
   o  They should identify what is important
      and what is not, preferably early  enough
      to avoid large scale data  collection of
      little relevance

   o  They should distinguish between  facts
      and value judgments

   o  They should provide a mechanism  that
      reflects the uncertainty in value  judg-
      ments and represents the views of  the
      affected public

   o  They should therefore  distinguish  sections
      of the public  that  are affected  in different
      ways

In attempting to meet these criteria, the modeling
procedure proposed here will be described by two
 examples:  (1) a qualitative model to be used near
the beginning of the 208 process to determine what is
important, and (2) a quantitative model to be used at
the end to facilitate making the value judgments
needed to select a plan.

                Qualitative Model

         To discover profound truths about
         man and his relations with the world
         about him, we are well  advised to
         follow two simple rules:

              Rule 1.  Take a simple idea.

              Rule 2.  Take it seriously.
                              Garrett Hardin(6)
The qualitative analysis is based upon the  simple  idea
that a relatively non-controversial judgment  can be
made as to whether the environmental, social,  and
economic effects of a planned action are beneficial
or detrimental.  Taking this idea seriously consists
of organizing the results of this analysis  and pre-
senting them in a way that clearly illustrates the
differential beneficial and detrimental effects of
alternative plans on geographical areas or  interest
 groups.

Qualitative analysis is the process of finding how
many and what elements or ingredients are present,
as in chemistry.  The model for planning appropriate
to this initial stage of analysis is a conceptual
 model which, although a preliminary and tentative
 representation of reality, should nevertheless consist
of the right variables in their correct relationships.
A particular value of a model at this stage is its
heuristic use as an instrument of discovery to explore
the structure of the problem.

 As pointed out by Ackoff and Sasieni(7), differences
in the degree of obscurity of a problem have produced
different patterns of model construction.   Inevitably,
however, the model builder must decide how  to  simplify
reality in the most satisfactory way.  In environ-
mental modeling, this has often led to omitting rele-
 vant variables:  the intangibles and incommensurables.

In simplifying, the model builder is faced with con-
 flicting objectives:  (1) to make the model easy to
solve and (2) to make it accurate.  At this initial
stage,  the emphasis should be on making it  easy to
solve,  if possible easy enough for the nonmathematical
planner.  This can best be accomplished not by dropping
variables but by backing off on the requirement for
accuracy to only the first step in quantification:
 whether it is more or less.  Moreover,  since value
 judgments provide the final weighting of environ-
 mental impacts anyway, the process of "solving" the
 model at this stage can be short-circuited by the
 judgment that a given effect will make  things better
 or worse.

 This methodology has previously been applied to an
 environmental analysis of alternatives  to dredging a
  harbor on Long Island, as reported in Wells and Hill(8).
 Reference will be made to tables originally presented
 in that paper for illustration.


 Procedure  for Qualitative Analysis

 The procedure for qualitative analysis is as follows (an
 illustrative sketch of steps 4 through 6 appears after the list):

      1.  Establish the  comparison of  alternatives
         on  a valid  cost-effectiveness basis,
          e.g., by comparing the environmental,
         social, and economic  costs of alternatives
         of  equal  effectiveness, say, in disposing
         of  equal  amounts of wastewater.

      2.  Determine the  geographical areas or  interest
         groups that are affected differentially
         by  the alternatives considered.

      3.  Identify  comprehensively the environmental,
         social, and economic  parameters possibly
         affected  by each alternative.

      4.  Summarize in tabular form the nature of
         the  impact  on  each parameter of each
         alternative.

      5.  Evaluate  whether each of these  impacts is
         beneficial  (+) or detrimental (-) to each
         geographical area or  interest group, as
          illustrated in Table 2, allowing for counter-
          vailing effects (±) and uncertainty in the
          judgment (/).  At this stage, this judgment
         is made regardless of whether the information
         available is precisely  quantitative  or
         merely subjective.

      6.  Present the pattern of beneficial,  detri-
         mental,  and countervailing impacts in a
         matrix of alternatives vs.  geographical
          areas or interest  groups.   (The summary
          of all  impacts on Port Jefferson,  Table 2,
           is shown as Row 4 of Table 3.)

      7.  Superimpose on this matrix the  identifi-
          cation of the  parameters  affected benefici-
           ally or detrimentally, aggregating those
          that are affected similarly and noting those
          that vary identically with an  arrow as
          shown in Table 3.
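
The following is an illustrative sketch, not the authors' worksheets, of
how steps 4 through 6 can be tabulated; the alternatives, parameters and
judgments shown are hypothetical and do not reproduce Table 2.

# Illustrative sketch of Steps 4-6: coding each impact as beneficial (+),
# detrimental (-), no effect (0), countervailing (±) or uncertain (/),
# then summarizing by area.  Example entries are hypothetical.

impacts = {
    # (alternative, area): {parameter: judgment}
    ("Dredge Channel", "Port Jefferson"): {
        "water quality": "-", "shellfish": "-", "aesthetics": "0/-",
    },
    ("Offshore Platform", "Port Jefferson"): {
        "water quality": "+", "oil pollution": "-", "road traffic": "0",
    },
}

def area_summary(impacts, area):
    """Step 6: collapse parameter-level judgments into one row of the
    alternatives-vs-areas matrix for a given area."""
    row = {}
    for (alt, a), params in impacts.items():
        if a != area:
            continue
        codes = set(params.values())
        if any("+" in c for c in codes) and any("-" in c for c in codes):
            row[alt] = "±"          # countervailing effects
        elif any("-" in c for c in codes):
            row[alt] = "-"
        elif any("+" in c for c in codes):
            row[alt] = "+"
        else:
            row[alt] = "0"
    return row

print(area_summary(impacts, "Port Jefferson"))
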

This final table provides a useful display of the con-
sequences of the choices to be made.   It does not make
the decision, but it clarifies the  decision structure.
The beneficial or detrimental implications of the
hard scientific and engineering facts are exposed.
To choose among the alternatives, further judgments
must then be made as to the environmental, social,
or economic importance of these consequences, a step
best left to the end of the quantitative analysis.

The results of the qualitative analysis  consist of
the following:

      o  Geographical areas (and thus population
         groups) and interest groups affected
         differentially
                                                       403,

-------
                                         TABLE 2 - SUMMARY OF AREA IMPACTS
                                 IMPACT AREA:  PORT JEFFERSON HARBOR AND VICINITY

 [Table 2 is a matrix of alternatives (No Modification; Dredge Channel,
  Basin; Offshore Platform in the Harbor or in Long Island Sound; Transship
  from Northville by Truck, Barge or Pipeline; Pipeline from New York
  Harbor) versus affected parameters (Water Quality, Shellfish, Fin Fish,
  Waterfowl, Other Wildlife, Oil Pollution, Ground Water, Air Quality,
  Spoil Disposal, Pleasure Boating, Road Traffic, Waves and Surges, Boating
  Hazard, Aesthetics, Land Use Conflicts), with each cell coded beneficial
  (+), detrimental (-), no effect (0), or a combination (e.g., 0/-, ±);
  the final column summarizes all impacts on Port Jefferson.]
                 TABLE 3.  CONCEPTUAL MODEL:  ENVIRONMENTAL EFFECTS BY ALTERNATIVE AND AREA

 [Table 3 arrays the alternatives (e.g., platform in Port Jefferson Harbor,
  transshipment by pipeline from Northville) against geographical areas
  (Port Jefferson Harbor and vicinity, Northville and vicinity, other Long
  Island harbors, New York Bight), superimposing on each cell the parameters
  affected beneficially or detrimentally (water quality, waterfowl, oil
  pollution, aesthetics, ground water, land use conflicts, road traffic,
  boating), with arrows noting parameters that vary identically and
  graphic coding distinguishing beneficial from detrimental impacts.]
   o  Direction of the environmental, social, and
      economic impacts:  beneficial or detrimental

   o  Determination of identical impacts among areas

   o  Identification of the environmental, social, and
      economic parameters affected on which data are
      therefore needed.

                                     For  each of  the  impacts thus identified as discrimina-
                                     ting among the subplans and alternatives considered,
                                                           404

-------
a further evaluation  can then be made as to whether
 these impacts are:

      o  Long or  short  term

      o  Avoidable or unavoidable

      o  Reversible or  irreversible

Where these distinctions  are  important in discrimin-
ating among alternatives, the  information presented
in the tabular summaries  can be  suitably coded.   For
example,  irreversible adverse  consequences can be
highlighted in the table.  We  believe that this format
is especially suitable  to display and discuss the
evaluations and methodologies  with a nontechnical
audience.

This qualitative analysis can be performed very early
in the 208 program, since one  does not usually begin
 in a state of complete ignorance of the possibilities.
Forcing a preliminary identification of subplans
early in the plan development  should have the useful
effect of focusing the  main part of the work on real
possibilities.  Moreover, the  results of this quali-
tative analysis of the  impacts will be available  early
enough to assure that information needed for the  final
evaluation is not unknowingly overlooked until the
final months of the planning  program.

                   Quantitative  Model

The qualitative model identifies the trade-offs to  be
made; the quantitative  model  should inform them with
data insofar as possible.  The purpose of the quanti-
tative model is to define rigorously the information
needed to compare subplans, to compile them as mixes
into alternative plan packages,  and to show the conse-
quences of alternative  decisions.
                 Linear programming models have been  used previously
                  to determine efficient degrees of wastewater treatment
                 to achieve  specified levels of receiving water quality.
                 With the emphasis on nonpoint sources  of pollution,
                 nonstructural water quality controls,  and land uses
                 in 208 planning,  however, the problem  is not simply
                 to minimize  dollar costs.  Alternative land uses have
                 varying environmental and social as  well as economic
                 value, and  alternative plans are appropriately cast
                 in the framework of a resource allocation model.  On
                 the basis of the  qualitative analysis, the resource
                 allocation  can be decomposed into geographical areas
                 that can be  analyzed as more or less homogeneous units
                 with the local environmental and social impacts of
                  various activities generally judged to be either bene-
                 ficial or detrimental.  An illustration for one such
                 area, a hypothetical estuary in which  the use  of wet-
                  land is at issue, is shown in Table 4.  Further details
                 of this model are given in Hill(9).

                 The use of  a linear program to describe  in part natural
                 processes which  are  characteristically nonlinear
                 perhaps deserves  some defense.  The  linear model was
                 used here because it is inexpensive  to solve using
                  standard computer software, and because its economic
                 interpretation has been well established.  Certainly
                 it will be  important to verify that  the  use of linear
                 coefficients does not do violence to the  known facts.
                 The more serious  question may be whether enough is
                 known about  the  natural systems to justify modeling
                 at all, however,  not how closely the model simulates
                 reality.  Probably the principal value of the  model
                 is its identification of the data needed to make an
                 informed decision.

                 The model is formulated with an economic  objective
                 function (Row A)  using market values and costs to
                 identify as  a datum the mix of activities meeting the
                 constraints  at minimum monetary cost.   While it is
                 not generally possible to define environmental and
                                TABLE 4.  DATA FOR RESOURCE ALLOCATION IN AN ESTUARY

 [Table 4 (coefficient matrix not legible in this reproduction) lists the
  activities of the estuary as columns and the constraint rows of the
  linear program:  (1) Tall Fringing Spartina alterniflora (000 acres),
  (2) Other Spartina (000 acres), (3) Clam Beds (acres), (4) Mussel Beds
  (acres), (5) Sea Worm Bottom (acres), (6) Detritus in Bay (000 lb per
  year), (7) Waste Water (MGD), (8) D.O. Level (mg/l), (9) Nitrogen Load
  (000 lb per day), (10) Flounders (000 lb), (11) Cod (000 lb), (12) Fish
  Ratio, and (13) Marinas; and three objective functions:  (A) Annual Net
  Income ($000), (B) Environmental Value, and (C) Social Value.  Positive
  coefficients denote production and negative coefficients consumption of
  the row resource by the column activity.]
                                                         405

-------
 social functions quantitatively, the  signs of the
 coefficients can be judged (Rows B and C).  These may
 be viewed  as the direction  of corrections that  should
 be made to market prices to allow for nonmarket exter-
 nal effects.  Preserving salt marsh  (Columns 1  and 2)
 is estimated to have a market value of only $35 per
 acre per year, for example, an amount representing
 rental of hunting rights.   This understates the true
 value of wetlands in their  natural state because of
 their many attributes as a  public good:  providing
 habitat and food for fish and wildlife, serving as a
 nutrient trap for waste water, providing open space
 that offers aesthetic values, etc.  Because of these
 positive environmental and  social values, the amount
 of $35 per acre per year should be increased in deter-
 mining the best use of wetland, but it is not known
 by how much.

 The program is therefore exercised parametrically with
 the value  of wetland increasing to determine how
 environmental and social factors will affect the mix
 of activities as they are assigned additional impor-
 tance.  For each such change in the program mix, the
 incremental monetary cost can be determined.  This
 establishes the break-even  cost at which one alterna-
 tive plan  gives way to another because of its additional
 environmental and social value.  Part of this incre-
 mental cost can be attributed to those benefits that
 can be quantified.  Whether the remainder is sufficient
 to account for unquantifiable intangibles can be decided
 with the participation of representatives of the public
 that will pay the additional dollar cost.
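
 The parametric exercise can be sketched with a small stand-in linear
 program.  This is not the Table 4 model: the net development return of
 $225 per acre per year and the 380-acre preservation floor are
 hypothetical values chosen only so that the toy model reproduces the
 break-even behavior reported below in Table 5; the $35 per acre market
 value comes from the text.

# Simplified stand-in for the parametric analysis described above
# (not the author's Table 4 model).  Decision variables are acres of
# "other" marsh grass preserved or filled.
from scipy.optimize import linprog

TOTAL_OTHER = 1680        # acres of "other" marsh grass
DEV_RETURN = 225.0        # hypothetical net $/acre/year if filled
MARKET_VALUE = 35.0       # $/acre/year rental of hunting rights (from text)

def solve(env_social_value):
    """Maximize annual income with wetland credited its market value plus
    an imputed environmental/social value (exercised parametrically)."""
    wetland_value = MARKET_VALUE + env_social_value
    c = [-wetland_value, -DEV_RETURN]          # minimize negative income
    a_eq = [[1.0, 1.0]]                        # preserve + fill = total
    b_eq = [TOTAL_OTHER]
    bounds = [(380.0, None), (0.0, None)]      # hypothetical preservation floor
    res = linprog(c, A_eq=a_eq, b_eq=b_eq, bounds=bounds, method="highs")
    preserve, fill = res.x
    market_income = MARKET_VALUE * preserve + DEV_RETURN * fill
    return preserve, fill, market_income

base = solve(0.0)                              # minimum monetary cost datum
for env_value in (0.0, 100.0, 200.0):
    preserve, fill, income = solve(env_value)
    print(f"env value ${env_value:>5.0f}: preserve {preserve:6.0f} ac, "
          f"fill {fill:6.0f} ac, incremental cost ${base[2] - income:,.0f}")

 Under these assumed numbers the optimal mix flips once the imputed
 environmental and social value exceeds roughly $190 per acre per year,
 at an incremental monetary cost of $247,000 per year; in the full model
 the same break-even question is posed by varying the wetland
 coefficients of Table 4.
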

 The results of the model are illustrated in Table 5
 in which the allocation of wetland is determined in
 part by its environmental and social value.  The
 minimum monetary cost allocation is described by the
 bottom row of the table where the market value of wet-
 lands is $35 per year and environmental and social
 values of wetlands are taken to be zero.  Under these
 circumstances, the allocation of wetlands consists of
 preserving 120 acres of tall Spartina alterniflora and
 380 acres of other marsh grass, while 1,300 acres of
 the other marsh grass is filled for development.  If
 the positive environmental  and social values of wet-
 lands are recognized, however, an alternative plan
 determined by the model consists of the preservation
 of all marsh grass as shown in the top row of the table.
 This is determined by the program to have an incre-
 mental cost of $247,000 per year which in this case is
 the opportunity cost of not filling marsh land for
 commercial purposes.

 Thus, the decision makers are presented with the
 information that if they choose to preserve al1 wet-
 lands, they are in effect indicating that the community
 is willing to pay an additional quarter million dollars
 per year to preserve its environmental and social
 values.  Notice that this result does not state that
 the environmental and social values of wetland in these
 circumstances are $247,000; it states that the value
 must be worth $247,000 for the decision to be made to
 preserve all  wetlands because of their environmental
 and social values.  Thus it is left to the decision
 makers to decide on behalf of the community whether
 this expenditure is justified.  Although this kind of
 result does not give the decision makers the "answer",
 it narrows the range in which judgment must be exer-
 cised.

 Environmental decision analysis is largely economic
 in nature, but it is ultimately political.  Models for
 environmental planning can therefore be no more deter-
 minate than the political processes.  They will perform
 a service if they provide  the political decision making
processes with information on the consequences of
 alternative decisions.   Whether an ordinal comparison
                      TABLE 5.  WETLAND ALLOCATION vs.
                        ENVIRONMENTAL & SOCIAL VALUE

                      WETLAND VALUE          WETLAND ALLOCATION (ACRES)
                     ($/ACRE/YEAR)          PRESERVE            FILL
                            ENVL. &      TALL              TALL             INCREMENTAL
 PLAN               MARKET  SOCIAL       S. ALT.  OTHER    S. ALT.  OTHER      COST
 ALTERNATIVE PLAN    $35     (190)         120     1680       0        0     +$247,000
 MINIMUM MONETARY
   COST              $35       0           120      380       0     1300         --

that ranks alternatives is  sufficient  for political
 decision making or whether a cardinal measure of
utility is needed has been  a matter  of dispute, as
 debated in Hook(10).  Haefele(11) has pointed out that
it is the importance of the  issue as well as one's
preference as to its outcome  that makes vote trading
possible.  Certainly the price that  we are willing to
pay is a familiar measure of the importance we assign
 to things as well as the way we reveal our preference.
Moreover, recognizing population groups that are
 impacted differentially, as we have proposed, while
not likely to lead to a Pareto optimum in which no one
suffers, at least identifies the gainers and losers.
The equity with which this  trade-off may be made in the
larger political context may depend  upon the model used
for planning.

                     References

 1.  U. S. Environmental Protection  Agency.  1975.
     Guidelines for areawide  waste treatment management
     planning.  Washington.
 2.  Walker, Richard A. 1973. Wetlands preservation
     and management on Chesapeake Bay: the role of
     science in natural resource policy.  Coastal Zone
     Management Journal l(l):  75-101.
 3.  Kneese, Allen V. and Blair T. Bower.  1968.
     Managing water quality:  economics, technology,
     institutions.  The Johns Hopkins  Press, Baltimore.
     328 p.
 4.  Ridker, Ronald G.  1967. Economic costs of air
     pollution.  Frederick A. Praeger, New York.  214 p.
 5.  Andrews, Richard N. L.  1974.  Comments on 'An
     Environmental Evaluation System for Water Resource
     Planning' by Norbert Dee et al.  Water Resources
     Research 19(2):  376-378.
 6.  Hardin, Garrett.  1972.  Preserving quality on
     spaceship Earth.  Transactions  of the 37th North
     American Wildlife and  Natural Resources Confer-
     ence, Wildlife Management Institute, Washington.
     472 p.
 7.  Ackoff, Russell L. and Maurice W. Sasieni.
     Fundamentals of operations research.  John Wiley
     and Sons, New York.  455 p.
 8.  Wells, James and Douglas Hill.  1974.  Environ-
     mental values in decision making: Port Jefferson
     as a case study.  Journal of Environmental Sciences
     12(4):  19-28.
 9.  Hill, Douglas.  1976.   A modeling approach to
     evaluate tidal wetlands.  Transactions of the 41st
     North American Wildlife  and Natural Resources
     Conference, Wildlife Management Institute,
     Washington,  (in press)
10.  Hook, Sidney (Ed.)  1967.  Human  values and
     economic policy.  New  York University Press,
     New York.  268 p.
11.  Haefele, Edwin T.  1973. Representative govern-
     ment and environmental management. The Johns
     Hopkins University Press, Baltimore.  188 p.
                                                       406

-------
                           REGIONAL RESIDUALS-ENVIRONMENTAL QUALITY MANAGEMENT MODELS:
                               APPLICATIONS TO EPA'S REGIONAL MANAGEMENT PROGRAMS
                 Walter  0.  Spofford
         Quality of  the  Environment Program
              Resources  for the  Future
                  Washington,  D.C.
                   Charles  N.  Ehler
            Office  of Air,  Land and Water
          Office  of Research  and Development
         U.S.  Environmental  Protection  Agency
                   Washington,  D.C.
ABSTRACT

The paper describes  the  elements  of  a  regional  inte-
grated  residuals-environmental  quality management model
developed at Resources for  the  Future  to assist govern-
ments in establishing public  policy  on regional environ-
mental quality--air, water and land--through the
explicit analysis  of the linkages  among gaseous, liquid
and solid residuals, and among  the various  environmental
media.  Within an  optimization  framework, the model
evaluates a large  number of residuals  management options
including non-treatment  alternatives,  so that least-cost
ways of achieving  various levels  of  ambient environ-
mental  quality, subject  to  constraints on the geographic
distribution of costs of achieving these levels, can  be
identified.  The overall  management  model is  made up  of
three parts:  a linear programming model  of regional
residuals generation and discharge,  environmental models
(air dispersion and  aquatic ecosystem  models),  and an
environmental evaluation section.  A summary  of results
from a  test application  of  the  model in the Lower Dela-
ware Valley is presented.

Lessons learned from the development of the Delaware
model are related  to the objectives  and analytic
requirements of EPA's current regional  management
programs, Air Quality Maintenance  and  Areawide  Waste
Treatment Management (208)  plans.
INTRODUCTION

While it has long been known  that  the  physical  environ-
ment is an interconnected system,  we have  traditionally
analyzed and regulated it by  each  of its component
media--air, water, and land—ignoring  intermedia  link-
ages and possible interform transformations  of  wastes.
The management of wastes, or  residuals, from society's
production and consumption activities  requires  decisions
regarding environmental quality and economic trade-offs
implied in the discharge of residuals  into one  or more
of the environmental media.   For example,  "clean" air is
more likely to result if we choose to  discharge
residuals to water bodies or  place them on the  land;
 clean" water is highly probable if we choose to
discharge residuals to the atmosphere  and/or land;  and
so on.  However, if we choose to have  high levels of
both air and water quality, as well as high  quality
landfills, then the management decisions become increas-
ingly more complex, requiring information on the
physical and economic effects of achieving specified
levels of environmental quality simultaneously.   The
development of this information requires analytical
methods that are more sophisticated than those  generally
in practice today.  The purpose of this paper is to
discuss regional environmental quality management models
in general, and to describe one model, developed  over
the past four years at Resources for the Future,  that
potentially has great utility in providing this type of
information to decision-makers.
Before discussing the model, however, we will briefly
discuss the needs of one possible group of users of this
type of model--sub-Federal units of government charged
with implementation of two of EPA's regulatory programs,
Air Quality Maintenance and Areawide Waste Treatment
Management (208) plan development.
EPA'S REGIONAL ENVIRONMENTAL QUALITY MANAGEMENT PROGRAMS

Under the 1970 Clean Air Act Amendments, in April 1971,
EPA promulgated primary and secondary National Ambient
Air Quality Standards (NAAQSs) for hydrocarbons, carbon
monoxide, nitrogen dioxide, sulfur oxides and particulate
matter.  In addition, standards have been set for photo-
chemical oxidants, even though they are not directly
emitted to the air, but are a product of atmospheric
reactions between nitrogen oxides and reactive hydro-
carbons.  National primary ambient air quality standards
are specified at a level of air quality necessary to
protect the public health.  National secondary ambient
air quality standards define levels of air quality which
are necessary to protect the public welfare from any
known or anticipated adverse effects.

After the Federal air quality standards were established,
all States were required to submit plans by which they
would insure that the standards would be attained by
1975.  In May 1972 EPA published its approvals and
disapprovals of State Implementation Plans (SIPs) and
shortly thereafter promulgated substitute regulations
for deficient State plans.  However, the Natural
Resources Defense Council, Inc. (NRDC) and various other
petitioners challenged EPA's approvals on several
grounds, including the contention that the plans approved
were not adequate to ensure maintenance of the NAAQSs
over time once they were attained.  No plan was found
that had adequately analyzed the impact of growth on air
quality maintenance for any significant period into the
future.  Subsequently, in March 1973, EPA disapproved
all SIPs with respect to maintenance of standards.  In
June 1973, EPA promulgated regulations requiring States
to develop Air Quality Maintenance (AQM) plans for areas
with the potential for exceeding a NAAQS between 1975
and 1985.  From June 1973 until very recently, EPA and
the States have been designating AQM areas.  EPA guide-
lines specified that, at a minimum, the States should
consider all Standard Metropolitan Statistical Areas
(SMSAs).  To date, 168 AQM areas have been designated
for at least one airborne residual.

The AQM plan is simply an extension of the SIP.  Many of
the strategies in the SIP to insure attainment of the
standards can also serve to insure maintenance.  Tradi-
tional strategies such as the review of new and modified
stationary sources and the prevention of their construc-
tion if air quality standards will be violated, new
source performance standards, and standards for emissions
from new motor vehicles might be sufficient to maintain
standards in many areas of the country.  In most areas,
however, these strategies will not be sufficient to
                                                       407

-------
maintain standards and it will  be necessary to incor-
porate air quality considerations into the overall
context in which decisions regarding metropolitan
development are based.

EPA is currently requiring that the plans be submitted
as soon as possible for areas that will  fail to meet the
NAAQSs in the near future; for areas that will not fail
to maintain standards until  the more distant future, EPA
will usually require the AQM plan to be  submitted at
least 3 to 5 years before measures are actually needed
to maintain the standards.  While EPA is flexible on the
time horizon of the AQM plan, it is encouraging a period
of 10 to 20 years as appropriate for most areas.  The
regulations will require States to reassess, at
intervals of not more than 5 years, all  areas to deter-
mine whether any areas need plan revisions.

Beyond demonstrating that the strategies proposed in the
plan are effective in maintaining NAAQSs, the plan must
also demonstrate that the State or a substate entity has
adequate legal authority to implement the measures con-
tained in the plan.  Where measures are  included that
local governments have traditionally enforced, e.g.,
minimum thermal insulation requirements for new
construction, the AQM plan must demonstrate that a local
government has the legal authority to enforce such
measures.  The plan must also describe the relationships
between air quality management and State, local and
regional programs for land use, transportation, water
quality and solid waste management.  Of particular
interest are the physical, technological, economic and
institutional relationships between AQM  and Areawide
Waste Treatment Management (208) planning.

The Federal Water Pollution Control Act Amendments of
1972 (FWPCAA) call for the development of ambient water
quality standards "...whenever the State revises or
adopts a new standard...such revised or new water
quality standard shall consist of the designated uses of
navigable waters involved and the water quality criteria
for such water based upon such uses.  Such standards
shall be such as to protect the public health or welfare
enhance the quality of water, and serve the purposes of
the Act."  The "purposes" of the Act are defined to
include "....an interim goal of (ambient) water quality
which provides for the protection and propagation of
fish, shellfish, and wildlife and provides for recrea-
tion  in and on the water be achieved by July 1, 1983."

Section 208 of the FWPCAA calls for Areawide Waste
Treatment Management Planning in areas with substantial
water quality management problems due to urban-
industrial concentrations or other factors.  Particular
emphasis is being placed in 208 planning on the "soft-
ware" or implementation aspects of water quality manage-
ment  (e.g., economic incentives, land use management
measures, etc.), in addition to the traditional
technological options.  208 planning also places
particular importance upon the development of management
strategies for non-point sources.

To date, 149 areas have been designated as 208 planning
areas, but it is anticipated that eventually most SMSAs,
approximately 250, will be covered by 208 plans.  In
addition, all 50 States must prepare plans for
non-designated areas of their respective States.  Plans
must  be submitted to EPA two years after the work plan
is declared operational, but not later than November 1,
1978.  The planning time horizon for 208 planning is
20 years.

Specific plan outputs include the identification of
anticipated municipal and industrial treatment works
over  a 20 year period, identification of urban storm-
water management systems, a program for the management
of residuals generated in treatment (secondary
residuals), and a program for non-point  source  manage-
ment.  As in Air Quality Maintenance plans,  the 208 plan
must demonstrate that the strategies proposed are
enforceable and must identify agencies authorized  to
construct, operate and maintain facilities required by
the plan, and otherwise implement the plan.

While AQM and 208 plans have slightly different planning
requirements, both will perform essentially  the same
analytical tasks:

     1. a survey of existing emissions/effluents,
          their sources, ambient environmental  quality,
          and an initial assessment of existing  and
          potential problems;
     2. a projection of population and economic
          activities over the planning period;
     3. a projection of future emission/effluent
          discharges, from 2;
     4. a projection of future ambient environmental
          quality, from 3;
     5. comparison of results of 4 with  standards and
          a determination of the reduction required;
     6. development of alternative strategies to
          achieve and maintain standards;
 and 7. evaluation and selection of a "best" strategy
          for implementation.
STRUCTURING MANAGEMENT MODELS

Regional residuals-environmental quality management
models can be used to analyze the simultaneous impacts
on costs and on environmental quality of alternative
residuals management strategies.  Their basic purpose
is to generate information upon which to base public
decisions regarding the levels of use and/or protection
of the natural environment.

Management models are used primarily to rank sets of
strategies according to a given criterion, such as
least cost to the region.  For this use, the intent is
to locate the optimal, or in some sense "best" strategy.
In addition, these models are used to explore the range
of technologically, economically and politically
feasible alternative strategies for the region.

Basically, there are three approaches to seeking an
"optimal" strategy for any given objective and set of
constraints:  1) response surface sampling using
simulation; 2) optimization (mathematical programming)
techniques; and 3) a combination of 1) and 2).  An
example of the latter is exogenous treatment of various
levels of low flow augmentation in a water quality
optimization model.  However, the costs (and damages,
if they occur) of providing the various augmented flows
are included in the overall ranking of the various
management alternatives.

Each approach has its advantages and disadvantages.
Simulation models, in general, are able to provide a
more realistic representation of real-world conditions,
and their outputs are generally easier to obtain than
optimization models are to solve.  They are conceptually
straightforward, and nonlinearities, discontinuous
functions, non-steady-state (transient) behavior, and
stochastic aspects are much easier to include than with
optimization models.  However, there are two major
disadvantages.  First is the general difficulty of
selecting a priori that combination of raw material
inputs, production processes, recycling and by-product
production opportunities, and residuals modification
activities and levels that optimizes a given objective
function.  Exhaustive sampling of a finite number of
combinations can be used.  But because the total number
of combinations is usually extremely large, random
sampling techniques appear to be a more reasonable approach.
                                                       408

-------
The second major disadvantage with simulation-type
management models is the extreme difficulty of exploring
the economically and politically feasible range of
management strategies, even when least cost strategies
are not being sought.   For regional analyses, economic
and political feasibility becomes relevant when it is
desired to constrain costs, either the regional aggre-
gate, or the distribution of costs among dischargers,
consumers in geographic subregions, and/or among income
groups; or the levels  of ambient environmental quality
at designated locations throughout the region; or both
costs and environmental quality simultaneously.

The two major advantages of optimization models are:
1) the direct determination of the activity levels that
optimize a given objective function; and 2) the ability
to simultaneously satisfy sets of constraints,
especially inequality  constraints (e.g., upper and lower
bounds on activities), and thus the possibility for
exploring the range of technologically, economically
and politically feasible management strategies.  Their
major disadvantages, given the magnitude of the regional
residuals management problem, are that they are
generally difficult to construct and then to solve,
even when formulated as linear programming problems.
Furthermore, they may  not be sufficient representations
of the actual (real world) situation.  For some cases,
a combination of simulation and optimization techniques
provides the logical approach to residuals management
problems.  The use of  one technique or the other or a
combination would depend upon each individual situation.

There are two basic types of programming models:
 1) linear programming  (LP) models, and 2) nonlinear
programming (NLP) models.  Linear programming models
are particularly useful when the environmental models
 (e.g., water quality,  air dispersion, and ecosystem
models) are formulated as a set of linear relationships.
This form of management model is in widespread use
 today, especially for applications involving the
management of regional water quality.1

Unfortunately, linear  programming models cannot always
 be used for analyzing  regional management strategies,
especially if ecosystem models are incorporated within
 the analytical framework.  Ecosystem models are often
 expressed as a set of nonlinear relationships, and for
 these situations, nonlinear programming models are
 necessary.2

The regional model and application described in the next
 section is of this general nonlinear form.  The
 nonlinear programming  algorithm used for the analysis is
 based on the gradient method of nonlinear programming.
 Unlike the more classical linear management models, the
 environmental models are not incorporated in the con-
 straint set, but dealt with in the objective function.
 This modification requires the use of penalty functions
 for exceeding ambient standards.3  All nonlinear
 programming algorithms start from a trial feasible
 solution and using an  iterative search process, select
 increasingly better solutions until the best possible,
 or optimal, solution is found.  In the application
 described in the next  section, a linear program
 is used to select better and better solutions.
 Details of the algorithm and solution procedure may be
 found in [10,12,13 and 17].
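
To make the penalty-function idea concrete, the short sketch below shows one
simple form such a term might take for a single receptor: zero below the
ambient standard and rising quadratically above it, with its derivative serving
as the marginal penalty that, weighted by the transfer coefficients of the
environmental model, becomes a trial effluent charge.  The quadratic form, the
weight w, and the function names are illustrative assumptions, not the
formulation actually used in the RfF model.

    # Illustrative quadratic penalty on an ambient-standard violation
    # (assumed form, not the RfF formulation).  Units are arbitrary.

    def penalty(concentration, standard, w=1.0):
        # Zero below the standard; grows quadratically with the exceedance.
        excess = max(concentration - standard, 0.0)
        return w * excess ** 2

    def marginal_penalty(concentration, standard, w=1.0):
        # Derivative of penalty() with respect to ambient concentration;
        # combined with the environmental model's transfer coefficients it
        # yields a trial effluent charge on each discharge.
        excess = max(concentration - standard, 0.0)
        return 2.0 * w * excess

    # Example: a receptor with a 120 ug/m3 standard and a 135 ug/m3 trial value.
    print(penalty(135.0, 120.0), marginal_penalty(135.0, 120.0))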


A REGIONAL RESIDUALS-ENVIRONMENTAL QUALITY MANAGEMENT
MODEL APPLICATION

 In this section we describe the essential elements of a
 regional integrated residuals management model developed
at Resources for the Future by an interdisciplinary team
representing the fields of political science, economics,
ecology, and engineering.4  This illustrative application
                                                       409
to the eleven-county Lower  Delaware  Valley  region
represents the final phase--a  real world  application--
of a research effort at RfF which  has  concentrated  on
the development of  regional residuals  management models
to aid government in establishing  public  policy for
regional environmental quality management.   Publications
describing various  stages in the development of the
model are available [10-16]. In addition, a  fairly
detailed description of the regional application,
results of analyses using the regional model, policy
and research implications,  and lessons learned from the
application have been prepared [17-19].

The Research Objectives

The major objectives of the RfF research  effort and
regional application can be stated,  briefly,  as follows:

     1. to investigate the  importance  of  including
          within a single analytical structure the
          linkages among gaseous, liquid, and
          solid residuals,  and the three  environmental
          media;
     2. to explore  the feasibility of  incorporating
          within a  regional optimization model a complex,
          nonlinear aquatic ecosystem model.  An
          assessment of this objective is not included
          in this paper, but may be  found in  [19];
 and 3. to explore  the ways of designing  regional
          residuals management models  to provide
          distributional information on costs and
          environmental quality such that these models
          would be  useful in a legislative, as well  as
          executive, setting.  The distribution of  costs
          and environmental quality  is often the central
          issue in  regional environmental quality manage-
          ment, with efficiency considerations (e.g.,
          least cost strategies) of  secondary importance.

The Region

The region that the RfF group selected for their illus-
trative case study  is one of the most densely populated
and heavily industrialized in the U.S.   The region
surrounds the well-studied Delaware  Estuary, covers
portions of Delaware, Pennsylvania,  and New Jersey,  and
includes the major  cities of Philadelphia, Wilmington,
Camden and Trenton.  For the sake of brevity, let it
suffice to note that 5.6 million people resided in  the
region in 1970.  A major concentration of industrial
activity spews out enormous quantities of liquid and
gaseous residuals to the watercourses and atmosphere
in the region, and generates various types and quantities
of solid residuals.

The Regional Model

The model, applied  to the Lower Delaware Valley region,
is designed to provide the minimum cost way of producing
1970 production levels, or "bills of goods," at the
individual industrial plants; of meeting electricity, and
home and commercial space heating, requirements for the
region; of handling, treating, and disposing of specified
quantities of municipal wastewater and solid residuals,
subject to two sets of exogenously imposed constraints:

     1. the distribution of environmental quality--
        i.   in the 22  reaches of the Delaware Estuary:
               minimum levels of dissolved oxygen and
               fish biomass, and maximum concentrations
               of algae;
        ii.  at 57 selected "receptor" locations through-
               out the region:  maximum annual average
               ground level concentrations of sulfur
               dioxide and suspended particulates;

-------
        iii.   at landfill  sites  throughout the  region:
                restrictions  on  types  of  landfill  opera-
                tions  which can  be  employed.

     2.  the distribution  of consumer costs in the  57
          political  "jurisdictions" of the region5--

        i.     increases  in the costs of electricity
                (by  utility service area)  implied  by
                constraints on regional environmental
                quality  (the  first  constraint set);
        ii.   increases  in home  heating costs implied
                by constraints on regional environ-
                mental quality;
        iii.   increases  in municipal sewage disposal
                costs  implied by constraints on
                regional  environmental  quality;
        iv.   increases  in municipal solid residuals
                handling  and  disposal  costs implied
                by constraints on regional environ-
                mental quality.

The regional  management  model is shown schematically
in Figure 1 below.  Because the  aquatic ecosystem  is
nonlinear, an iterative  nonlinear programming algorithm
was employed to search for "optimal" solutions. There
are three main parts of  the overall regional model: a
linear programming model  of regional residuals  genera-
tion and discharge (comprising both production  and
consumption activities);  the  environmental models; and
an environmental evaluation section.   These three  major
components of the regional model are discussed  in  more
detail below.  But first, we  should indicate the
linkages among these major parts, and the iterative
scheme that is used to converge on an "optimal"
solution.

A key output of the LP model is a vector of  residuals
discharges.  This vector is input to the environmental
models.  The outputs of the environmental models are
vectors of ambient environmental quality at  designated
points in the region.  The ambient concentrations
implied by a given solution of the LP model  are then
compared with exogenously imposed environmental "stan-
dards."  Marginal penalities, based on penalties for
exceeding the environmental standards and on the
environmental models, are computed and returned to the
LP model as prices, or effluent charges, on  residuals
discharges.  This iterative process is continued (in
principle) until no better value of the objective
function can be obtained.6
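
As a hedged, self-contained sketch of this iterative scheme (not the RfF code,
and with the transfer matrix, costs, bounds, penalty weight, and the simplified
discharge-adjustment step all invented for illustration), the loop below
alternates between a discharge-choice step driven by trial effluent charges, a
linear environmental model of the form R = AX, and the computation of marginal
penalties on standard violations that become the next round of charges.

    # Minimal sketch of the iterative penalty-function scheme described above.
    # All numbers and the toy "LP step" are hypothetical stand-ins.
    import numpy as np

    A = np.array([[0.8, 0.2],              # transfer coefficients: receptor i, discharger j
                  [0.3, 0.7]])
    standards = np.array([10.0, 12.0])     # ambient standards at the two receptors
    removal_cost = np.array([2.0, 3.0])    # marginal cost of reducing each discharge
    x_max = np.array([20.0, 15.0])         # uncontrolled discharge levels
    w = 5.0                                # penalty weight (assumed)

    x = x_max.copy()
    for iteration in range(50):
        ambient = A @ x                                # environmental model: R = AX
        excess = np.maximum(ambient - standards, 0.0)  # violations of the standards
        charges = A.T @ (2.0 * w * excess)             # marginal penalties as effluent charges
        # Stand-in for the LP step: reduce a discharge while its charge exceeds
        # the marginal cost of further reduction.
        x_new = np.where(charges > removal_cost, np.maximum(x - 1.0, 0.0), x)
        if np.allclose(x_new, x):                      # no better solution found
            break
        x = x_new

    print("discharges:", x, "ambient quality:", A @ x)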

The LP Regional  Activity Model:  The LP residuals gen-
eration and discharge model describing the major
activities in the Lower Delaware Valley region is
quite large.  It consists of roughly 8,000 columns
(variables), over 3,000 rows (constraining relation-
ships), and maintains information on about 800 indi-
vidual residuals discharges.  The following  regional
activities are included in this model:7 7 petroleum
refineries, 5 steel mills, 17 thermal electric power
plants, 57 home heating activities (one for  each
political jurisdiction), 57 commercial heating activi-
ties (one for each political jurisdiction),  75 large
dischargers of gaseous residuals, 36 Delaware Estuary
sewage treatment plants, 10 paper plants, 23 municipal
      Figure 1:  SCHEMATIC DIAGRAM OF THE DELAWARE VALLEY RESIDUALS-ENVIRONMENTAL QUALITY MANAGEMENT MODEL

[Figure 1 links the linear programming model of production, residuals
generation-modification-disposal, recycling alternatives (municipal,
commercial, industrial, waste paper), and direct environmental quality
modification (in-stream aeration) to the environmental models (a regional
atmospheric dispersion model and an aquatic ecosystem model of the Delaware
Estuary) and to the environmental evaluation section (ambient standards with
penalty functions).  Residuals discharges and injected oxygen pass from the
activity model to the environmental models; marginal penalties return to the
activity model as trial "effluent charges."  Constraint types shown include
minimum production levels, mass continuity conditions, minimum residuals
modification levels, and upper limits on consumer costs by type and by
political jurisdiction.]
                                                       410

-------
incinerators, 57 municipal solid residuals handling and
disposal activities (one for each political jurisdic-
tion), 23 Delaware Estuary industrial dischargers, and
22 in-stream aeration activities (one for each Estuary
reach).

The Environmental Models:  Two environmental models are
incorporated in the regional model: a nonlinear
ecosystem model of the Delaware Estuary [14,15,16]
and a linear air dispersion model.   The eleven "com-
partment" ecosystem model, developed at RfF, is based
on the trophic level approach.  The model was calibrated
on conditions  that existed in the Estuary in September
1970.  Inputs  of residuals to the ecosystem model
include:   organics (BOD), nitrogen, phosphorus, toxics
(phenols), suspended solids, and heat.  As reported
above, target outputs include ambient concentrations of
algae, fish, and dissolved oxygen.

The Gaussian plume-type air dispersion model was
adapted from the U.S.E.P.A.'s Implementation Planning
Program (IPP).  Residuals discharges and meteorological
conditions in 1970 were used to calibrate the model.
The model accepts as inputs discharges of sulfur
dioxide and particulates, and provides as output ground
level, annual  average ambient concentrations of sulfur
dioxide and suspended particulates.

Model Studies and Results of Analyses

The regional model described briefly above has been
run under a number of combinations of air and water
quality standards, solid waste disposal restrictions,
and assumptions about the availability of two regional
residuals management alternatives:   in-stream aeration
 (in the Estuary) and regional sewage treatment plants.
For the model runs, the following alternative environ-
mental quality restrictions were imposed:
                          Standard             Standard
                           Set (E)              Set (T)
Ambient
Air Quality

    SO2              ≤ 120 µg/m³           ≤  80 µg/m³
    TSP              ≤ 120 µg/m³           ≤  75 µg/m³

Ambient
Water Quality

    DO               ≥ 3.0  mg/l           ≥ 5.0  mg/l
    Algae            ≤ 3.0  mg/l           ≤ 2.0  mg/l
    Fish             ≥ 0.01 mg/l           ≥ 0.03 mg/l

Landfill Quality
(3 levels)8          High (H)    Medium (M)    Low (L)

 Implications for the Lower Delaware Region:  For the
 production runs completed so far, we have found two
 general implications for the region: 1) the attainment
 of  the national primary air quality standards for
 sulfur dioxide and suspended particulates will be
 costly to the region—over $400 million per year
 (compared with $100 million per year for the less
 restrictive E level air quality set, and $50 million
 per year for the most restrictive T level water quality
 set for the Estuary); and 2) that both the regional
 management alternatives (in-stream aeration and
 regional sewage treatment) appear to hold promise for
 reducing the total regional costs of meeting given
 ambient quality standards.  The total regional costs,
 and savings due to these regional alternatives, are
illustrated in the following table (for T level water
 quality standards, E level air quality standards, and
 H level landfills):

             Total regional costs ($ million per year)

                                    In-stream aeration
                                      no         yes

     Regional treatment     no      $190        $155
     plants                 yes     $170        $143
Notice that employing both in-stream aeration and
regional sewage treatment plants results in a savings
to the region of about $47 million per year.

Research Objectives and Lessons:  In addition to these
general policy implications for the Lower Delaware
Valley region, the model results to date have also shed
light on the three major objectives of the research with
which we started:  the inter-media tradeoff question;
the use of the nonlinear, aquatic ecosystems models in
regional analyses (again, not discussed in this brief
paper); and the generation of information on, and the
ability to constrain, the distribution of costs as well
as physical and biological indicators of environmental
quality.

A.  Inter-media Linkages:  Ample evidence for the impor-
tance of linkages and tradeoffs among the qualities of
the three environmental media is provided by the results
of the analyses, and examples have been provided else-
where [17,18].   Not enough space exists to repeat all
these examples here, but to provide some quantitative
evidence of these linkages, the increase in total costs
(in $ millions) to the region of improving Estuary
quality for specified levels of air quality, given high
quality landfills, is shown in the following table:

                               Air Quality Standards*

                                      0              E

     Water Quality      0          $ 12.3         $ 96.7
     Standards*                   (Δ = 27.5)     (Δ = 35.4)
                        E          $ 39.8         $132.1
                                  (Δ = 13.1)     (Δ = 23.0)
                        T          $ 52.9         $155.1

                                 *For definitions of E and T levels of air
                                  and water quality, see earlier table;
                                  "0" level indicates no ambient standards
                                  applied in the analysis.

                        For the 0 level  air  quality standards,  the increase  in
                        total  regional  cost  of moving from  the  0 to  the  E  level
                        water  quality standards amounts to  $27.5 million per
                        year.   For the E level air quality  standards,  this
                        difference amounts to $35.4 million per year.   If  there
                        had been no inter-media linkages, these differences
                        would  have been the  same.  The  difference in total
                        regional costs between the E and T  level water quality
                        standards are even more pronounced.   For the 0 level
                        air quality standards, the difference amounts  to $13.1
                        million per year; at the  E level air quality standards,
                        the difference jumps to $23 million per year.

                        B.  Distributional Information:  Distributional infor-
                        mation on both environmental quality and certain con-
                        sumer  costs is available  as output  of the Lower  Delaware
                        Valley regional  model, and has  been presented  in detail
                                                        411

-------
elsewhere [17].  For the sake of brevity, we will not
present this information again here.  Rather, we will
take this opportunity to discuss the problems we had in
trying to obtain certain kinds of distributional infor-
mation that we desired.  In the next section, we will
explain in more detail why we think this information
would be particularly useful in the preparation of Air
Quality Maintenance and Areawide Waste Treatment
Management (208) plans.

Distributional information, especially on costs, can be
used in two ways, depending on the analysis and on the
type of management model employed:  1) it can be used in
the unconstrained mode to provide information on the
environmental and cost implications of alternative
residuals management strategies; and 2) it can be used
in the constrained mode to help shape the set of eco-
nomically and politically feasible residuals management
strategies that are selected for consideration.  Both
programming (optimization) and simulation management
models can be used for the former analyses, but only
programming models are useful for the latter analyses.
Since in most regions the distribution of costs and
environmental quality will be a more important issue
than regional efficiency, we feel  it is important to
address the analytical problems associated with
attempting to provide information  based on constrained
costs and environmental quality.

We have had relatively little difficulty using the
Lower Delaware Valley regional  model in constraining
levels of regional  environmental quality and generating
information on the implied costs.   And, of course, it
would have been still easier, and  certainly less expen-
sive, to merely provide information on the implied costs
and implied levels of regional  environmental  quality for
various alternative residuals management strategies
(however selected).  But the real  difficulty arose when
we attempted to constrain both  the levels of regional
environmental quality and the distribution of costs
simultaneously.  There are two  primary reasons for this
difficulty:

     1. when costs and levels of regional environ-
          mental  quality are constrained simul-
          taneously, infeasible solutions are
          commonplace (as one would expect a priori);
     2. the "real  world" tradeoffs among distribu-
          tions of cost, among  levels of regional
          environmental quality and the environ-
          mental  media, and between levels of
          environmental quality and costs, are
          extremely subtle and  many, and occur at
          the very top, and flattest portion, of
          the regional total  cost  response surface.

The first difficulty poses a problem for both linear
and nonlinear  programming formulations.   (Simulation
models, except in  very simple situations, are of very
limited use in this type of analysis.)   If the run
turns out to be infeasible, it may be obvious (from the
dual values) which constraints need to be relaxed, but
it is not at all obvious by how much these constraints
should be relaxed.  Clearly, we need much more opera-
tional experience  here before we perfect this use of the
regional model.

Regarding the second difficulty, current nonlinear
programming algorithms are simply not practical,
especially for the large regional  applications, and may
not be practical for the smaller (less complex) ones.
Most nonlinear programming algorithms become less and
less efficient as the optimum is approached.  Thus, when
the regional efficiency criterion  is employed, it makes
sense to stop these algorithms short of an optimum.
Only modest cost savings (as a percentage of total
regional costs) are at stake anyway.  But in examining
the tradeoffs among the distribution of  costs  and  envi-
ronmental quality, the important information  is  not  only
in the total regional cost dimension, but  in  a variety
of other dimensions as well.  It is the  range  of alterna-
tives (or management strategies) for satisfying  broadly
stated societal objectives, and of course  the  resulting
implications for individuals' costs and  environmental
quality, that become of major importance.  And here  is
where the current crop of nonlinear programming  algo-
rithms lets us down.9  Unfortunately, for  these  very
important kinds of regional environmental  quality man-
agement strategy analyses, our only choice, at this
point in time, is to resort to linear programming
techniques.  And we are currently in the process of
restructuring the Lower Delaware Valley  regional model
as an LP model by removing the nonlinear ecosystem model
of the Delaware Estuary and replacing it with  the Dela-
ware River Basin Commission's linear dissolved oxygen
model.
CONCLUSIONS, ISSUES AND FUTURE RESEARCH

In this paper, we  have addressed  the  use of integrated
regional residuals-environmental  quality management
models in  the development  of  Air  Quality Maintenance
and Areawide Waste Treatment  Management  (208)  plans.
The structure and  usefulness  of different kinds of
regional management models were discussed,  and a con-
siderable  portion  of the paper dealt  with an application
of such a  model  to the Lower  Delaware Valley region.
Admittedly, there  are limitations  to  the use of the RfF
model, but the research clearly shed  light  on  two
important issues associated with AQM and 208 plan
development and strategy evaluations:

     1. linkages among the three  major forms of
           residuals and among the three  environ-
           mental media do  exist,  and  evidence
           suggests that these linkages are  impor-
           tant both in physical and economic terms.
           Since  both of these programs involve
           significant public  investments and imply
           major  effects upon  their implementation,
           to the extent possible,  the tradeoffs
           among  residuals  and media should  be
           explicitly analyzed;
 and 2. ambient  environmental quality standards can
           be met through varying  combinations  of
           strategies which can imply  substantially
           different distributions  of  costs  among
           the public and private  sectors, among
           the residents of the various subregions,
           and among different income  groups.   It
           is clear that at the local  level, at
           least, the distribution  of  the costs of
           improving and/or maintaining environmental
           quality will be  the central  issue in
           determining the  political feasibility of
           different strategies, with  total  regional
           costs  of secondary  importance.  Thus, to
           the extent possible, information  on  cost
           distributions of each strategy should be
           generated and presented.

Finally, a model as sophisticated as  the Delaware
application is beyond the  resources of most environ-
mental quality management agencies.  However, insights
like those described above can guide research
to develop simpler analytic techniques that may be of
more immediate application to AQM and 208 planning
requirements.  An  effort to develop a range of techni-
ques in an operational handbook for regional environ-
mental quality management  is  currently planned at RfF.
                                                        412

-------
NOTES AND REFERENCES
 1. Examples of the linear programming formulation for
    wastewater management purposes can be found in M.J.
    Sobel, "Water Quality Improvement Programming
    Problems," Water Resources Research, 1 (4), 1965,
    pp. 477-487; C.S. Revelle, D.P. Loucks, and W.R.
    Lynn, "Linear Programming Applied to Water Quality
    Management," Water Resources Research, 4 (1), 1968,
    pp. 1-9; and R.V. Thomann, "Systems Analysis and
    Water Quality Management," Environmental Science
    Services Division of Environmental Research and
    Applications, Inc., New York, 1972.

 2. In terms of the complexity involved in incorporating
    steady-state environmental models within an opti-
    mization framework, we find it useful to distin-
    guish among four broad categories: 1) linear
    relationships where ambient concentrations, R, are
    expressed as explicit functions of residuals dis-
    charges, X, i.e., R = AX; 2) linear, implicit
    functions, i.e., X = AR (note that this equation
    set can be rearranged by inverting the matrix of
    coefficients, i.e., R = A⁻¹X); 3) nonlinear,
    explicit functions, R = f(X); and 4) nonlinear,
    implicit functions, i.e., X = f(R).  Classical
    water quality models fall in the first two cate-
    gories and aquatic ecosystem models in the last
    category.  For more details, see [13].

 3. For the use of penalty functions in nonlinear
    programming, see A.V. Fiacco and G.P. McCormick,
    Nonlinear Programming:  Sequential Unconstrained
    Minimization Techniques (New York, John Wiley &
    Sons, Inc., 1968); and/or W.I. Zangwill, Nonlinear
    Programming:  A Unified Approach (Englewood Cliffs,
    N.J., Prentice-Hall, Inc., 1969).

 4. The RfF modelling team consisted of Walter O.
    Spofford, Jr., Clifford S. Russell, Robert A. Kelly,
    and Edwin T. Haefele with the assistance of Louanne
    Sawyer, Pathana Thananart, Blair T. Bower, and James
    W. Sawyer, Jr.  Edwin Haefele developed a legislative
    bargaining and vote-trading model which is not
    described in this paper.  However, the management
    model reported here is designed to operate in con-
    junction with Haefele's political model.

 5. To provide information on the geographic distribution
    of environmental quality and consumer costs through-
    out the region, the Lower Delaware Valley was
    divided into 57 political jurisdictions of roughly
    100,000 people each.  To form these jurisdictions,
    some of the 379 cities, towns, boroughs, and town-
    ships that are located in this region were aggregated,
    and some were subdivided.

 6. For details, see [10,12,13, and 17].

 7. For details, see Appendix B of reference [17].

 8. The three landfill qualities used in the analysis
    include:  low--open dump, but no burning allowed;
    medium--good quality sanitary landfill; high--good
    sanitary landfill with shredder, impervious layer
    to protect groundwater, wastewater treatment of
    leachate, aesthetic considerations such as fences,
    trees, etc.

 9. Regional environmental quality management models
    represent only one example of a large set of
    natural resource allocation problems that would
    benefit greatly from the development of nonlinear
    programming algorithms that could deal efficiently
    with practical, large-scale applications.  We hope
    that these opportunities will one day be recognized
    by applied mathematicians and operations
    researchers.

10. C.S. Russell and W.O. Spofford, Jr., "A Quantita-
    tive Framework for Residuals Management Decisions,"
    in Allen V. Kneese and Blair T. Bower, eds.,
    Environmental Quality Analysis:  Theory and Method
    in the Social Sciences (Baltimore, Md.: The Johns
    Hopkins University Press for Resources for the
    Future, 1972).

11. C.S. Russell, W.O. Spofford, Jr., and Edwin T.
    Haefele, "The Management of the Quality of the
    Environment," in Jerome Rothenberg and Ian G.
    Heggie, eds., The Management of Water Quality and
    the Environment (New York: Halsted Press, 1974).

12. W.O. Spofford, Jr., C.S. Russell, and R.A. Kelly,
    "Operational Problems in Large-Scale Residuals
    Management Models," in Edwin S. Mills, ed.,
    Economic Analysis of Environmental Problems (New
    York: National Bureau of Economic Research, 1975).

13. W.O. Spofford, Jr., "Total Environmental Quality
    Management Models," in Rolf A. Deininger, ed.,
    Models for Environmental Pollution Control (Ann
    Arbor, Mich.: Ann Arbor Science Publishers, Inc.,
    1973).

14. R.A. Kelly, "Conceptual Ecological Model of the
    Delaware Estuary," in Bernard C. Patten, ed.,
    Systems Analysis and Simulation in Ecology, Vol. IV
    (New York: Academic Press, forthcoming).

15. R.A. Kelly and W.O. Spofford, Jr., "Application of
    an Ecosystem Model to Water Quality Management:
    The Delaware Estuary," in Charles A.S. Hall and
    John W. Day, Jr., eds., Models as Ecological Tools:
    Theory and Case Histories (New York: Wiley-
    Interscience, Inc., forthcoming, 1976).

16. R.A. Kelly, "The Delaware Estuary," in C.S. Russell,
    ed., Ecological Modeling in a Resource Management
    Framework (Washington, D.C.: Resources for the
    Future, 1975).

17. W.O. Spofford, Jr., C.S. Russell, and R.A. Kelly,
    Integrated Residuals Management:  A Case Study of
    the Lower Delaware Valley Region (Washington, D.C.:
    Resources for the Future, forthcoming, 1976).

18. C.S. Russell, W.O. Spofford, Jr., and R.A. Kelly,
    "Interdependences Among Gaseous, Liquid, and Solid
    Residuals: The Case of the Lower Delaware Valley,"
    The Northeast Regional Science Review, Vol. 5, 1975.

19. C.S. Russell and W.O. Spofford, Jr., "A Regional
    Environmental Management Model: An Assessment," a
    paper prepared for presentation at the First Annual
    Meeting of the American Association of Environ-
    mental Economists, Dallas, Texas, December 28, 1976.
                                                       413

-------
                      A COMPUTER MODELING STUDY TO ASSESS  THE  EFFECTS  OF A PROPOSED MARINA ON A
                                                  COASTAL  LAGOON
                                    Kuang-Mei Lo,  Ph.D.,  P.E.,  Project  Manager
                                      Thomas G.  King,  P.E.,  Project  Engineer
                                      Arthur S.  Cooper, P.E., Vice President
                                                       of
                                   Connell/Metcalf & Eddy, Coral  Gables,  Florida
ABSTRACT

A water quality and hydrographic study was conducted
to determine the effects of a proposed marina on the
water quality of Old Pass Lagoon, located on the
northwest coast of Florida.

Utilizing field data, the flushing characteristics of
the lagoon were determined using two methods.   An
estimate of pollutants discharged from engines of
boats using the marina was made based on information
in the literature.  Based on the flushing charac-
teristics and the estimate of pollutants, the post-
construction water quality was predicted using a
steady state water quality model.

BACKGROUND

The proposed marina  at the Sandpiper Cove develop-
ment  is located at the eastern end of Old Pass
Lagoon.  Old Pass Lagoon is close to Destin and is at
the western end of a spit causing the enclosure known
as Choctawhatchee Bay.  A map showing the location is
presented as Figure 1.

As may be seen from Figure 1, Old Pass Lagoon is open
at the western end to Choctawhatchee Bay and the Gulf
of Mexico.

The mouth is quite narrow - 150 feet at the narrowest
point - and this causes considerable bottling-up of
the waters of the lagoon.  A further bottling effect
occurs toward the eastern end of the lagoon where the
lagoon narrows to a width of 150 feet at Norreigo
Point  before widening out in the area where it is
proposed that the marina be located.

Tidal motion plays the most important role in water
movement and the flushing action in a lagoon.  In the
Gulf of Mexico the range of tide is uniformly small,
but the type of tide varies considerably at different
locations.  At Pensacola there is usually but one high
and one low water each day, while at Galveston the
inequality is such that the tide is semi-diurnal when
the moon is on the equator and diurnal at times of a
maximum north or south declination of the moon.
Consequently, in the Gulf of Mexico, the principal
variations in the tide are due to the changing de-
clination  of the moon.
FIELD SURVEY

A field survey was conducted in order to determine the
existing quality of the waters of Old Pass Lagoon.  At
each of eight stations samples were taken for the
measurement of dissolved oxygen, BOD, nitrogen, ammonia,
orthophosphate, total phosphate, and oil and grease.
The analyses were performed at the Connell/Metcalf &
Eddy laboratory in Miami, according to procedures in
Standard Methods (1).  The physical characteristics of
the lagoon were determined by measuring nine cross
sections.

A dye test was conducted in the proposed marina area
in order to determine the longitudinal dispersion
coefficient.  The dye material used was a solution of
50 grams of Rhodamine-B.  The solution was diluted in
the field with lagoon water and released at mid-depth.
Approximately 150 feet down-current of the point of
release, water samples were collected at intervals
following the release of the dye.  The water samples
containing the Rhodamine-B were analyzed with an
Aminco-Bowman Spectrophoto-Fluorometer (SPF)  capable
of detecting and differentiating Rhodamine-B concen-
trations as low as 5 ppb.  The dispersion coefficient
was determined to be 4 ft²/sec based on the solution
of the one-dimensional advection and dispersion dif-
ferential equation (2,3,4).
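
One way such a determination might be set up is sketched below: the
one-dimensional instantaneous-source solution is fitted to the
concentration-versus-time record at the sampling point by a simple search over
D.  The "observations" here are generated synthetically (the field readings are
not reproduced in the paper), and the dye mass term and current velocity are
illustrative assumptions.

    # Sketch of estimating the longitudinal dispersion coefficient D from a dye
    # record at a fixed station, using the 1-D instantaneous-source solution.
    # The "observed" concentrations are synthetic; M_over_A and U are assumed.
    import numpy as np

    x = 150.0                    # ft, release point to sampling point (from the paper)
    M_over_A = 1.0e4             # dye mass per unit cross-sectional area (assumed)
    U = 0.08                     # mean current, ft/s (assumed)

    def slug(t, D):
        # Concentration at distance x and time t for an instantaneous release.
        return (M_over_A / np.sqrt(4.0 * np.pi * D * t)) * \
               np.exp(-(x - U * t) ** 2 / (4.0 * D * t))

    t_obs = np.linspace(300.0, 3600.0, 12)       # sampling times, s
    c_obs = slug(t_obs, 4.0)                     # synthetic "observations" at D = 4 ft^2/s

    D_grid = np.linspace(1.0, 10.0, 91)          # candidate dispersion coefficients
    errors = [np.sum((slug(t_obs, D) - c_obs) ** 2) for D in D_grid]
    print("best-fit D:", D_grid[int(np.argmin(errors))], "ft^2/s")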

COMPUTATION OF FLUSHING PERFORMANCE

The flushing performance of the  lagoon was computed by
two methods.  A digital computer model was used to
solve the partial differential equations describing
longitudinal advection and dispersion by numerical
methods and then to  compute the  reduction in concen-
tration of pollutants due to tidal flushing action.
The flushing performance was also computed using  the
tidal prism theory - a simplistic approach which
nonetheless serves as a useful check  on the results
obtained using the computer model.

Flushing of a  tidal  water body is primarily controlled
by mixing and  translation of the tides.  When  a mass  of
dye or pollutant is  introduced into the lagoon, it  is
distributed through  the water body by  the mechanisms  of
advective and  dispersive transport.   Advective transport
is generally determined by the velocity of  the water
due to tides and the addition of fresh water.   Dis-
                                                       414

-------
                                                                              SITE OF PROPOSED
                                                                                     MARINA
                              OLD    PASS    LAGOON
                                                                              _NORREIGO//,
                                                                                 POINT //
                                                                                      CLUBHOUSE-^
   Vi PROJECT
        SITE/
                                                                                     500   0   500  1000 FT
                                                                                              2
                                                                                           SCALE
                                       GULF    OF   MEXICO
LOCATION  MAP
                    Fig. 1  Map of Old Pass Lagoon Showing the Site  of  Sandpiper Cove Marina
 persive transport  is controlled by diffusion which, in
 turn, is affected  by wind, turbulence, and is also
 considered by many investigators to be a function of
 the tidal velocity.  By assuming that the water  is well
 mixed, vertically  and laterally, the flushing of a
 conservative pollutant can be described by the fol-
 lowing partial differential equation:
     ∂c/∂t = (1/A) ∂/∂x (A D ∂c/∂x) - (1/A) ∂/∂x (A U c)          (1)

 where:    c: concentration of the pollutant
           U: mean flow velocity in the cross
              section
           D: longitudinal dispersion coefficient
           A: cross sectional area
           x: longitudinal distance along the canal
           t: time.

 In order to solve this equation by numerical methods,
 the following initial and boundary conditions are
 required:

     c(x,0) = C0
     c(0,t) = 0                                            (2)
Thus, it is assumed that the initial concentration of
lagoon water is C0 at time t=0,  and the concentration
at the tidal source is assumed  to be zero at all times.
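
A minimal numerical sketch of this kind of solution is given below, assuming
(for illustration only) a constant cross section, a constant mean velocity, the
dispersion coefficient of 4 ft²/sec found in the dye test, and an explicit
finite-difference scheme; the grid spacing, time step, channel length, and
velocity value are assumptions of this sketch, not values from the study.

    # Explicit finite-difference sketch of Equation (1) with constant A, U, and D
    # and the initial and boundary conditions of Equation (2).  All grid and flow
    # parameters below are illustrative assumptions.
    import numpy as np

    L = 10_000.0           # channel length, ft (assumed)
    dx, dt = 100.0, 300.0  # grid spacing (ft) and time step (s), chosen for stability
    D = 4.0                # longitudinal dispersion coefficient, ft^2/s (dye test)
    U = 0.05               # mean tidal velocity, ft/s (assumed constant here)
    C0 = 1.0               # initial concentration throughout the lagoon

    n = int(L / dx) + 1
    c = np.full(n, C0)
    c[0] = 0.0             # open (tidal-source) boundary held at zero

    for step in range(int(24 * 3600 / dt)):                       # march one day forward
        diff = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx ** 2     # dispersion term
        adv = -U * (c[2:] - c[:-2]) / (2.0 * dx)                  # advection term
        c[1:-1] += dt * (diff + adv)
        c[0] = 0.0                                                # c(0,t) = 0
        c[-1] = c[-2]                                             # zero gradient at the dead end

    print("fraction remaining after one day:", c.mean() / C0)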

The mean velocity, U, represents the average of the
velocities resulting from tidal  action and fresh water
flow  at a given cross section in the lagoon.  Since
flow  due to the addition of  fresh water in this
instance is minimal,  the mean velocity is the
average tidal  velocity.
To determine this velocity it was assumed that, for a
station in the lagoon, the amount of water flowing
through the station in a time period Δt was equal to
the change in water elevation, ΔH, multiplied by the
surface area, S, between the station and the dead end.
This relationship can be described by the following
equation:

     A U = S (ΔH/Δt)                                      (3)
                                                     In order to use the  above equation to describe  the
                                                     average velocity throughout the lagoon,  it  is
                                                     necessary to assume  that the tide is uniform with
                                                     no significant difference in range or phase.

                                                     If a mathematical formulation known as the  tidal
                                                     function, H(t), is used to describe the average
tidal cycle, then Equation (3) may be rewritten.
                                                   The tidal function, H(t), used in this study  is
                                                   chosen from the report entitled "Storm Water
                                                   Management Model," prepared by the Environmental
                                                   Protection Agency, October 1971 (5) :

     H(t) = A1 + A2 Sin(w) + A3 Sin(2w) + A4 Sin(3w)
            + A5 Cos(w) + A6 Cos(2w) + A7 Cos(3w)

     where w = 2πt/24, with t in hours, and A1, A2, ... A7 are coefficients.
                                                     415

-------
A computer program employing a least squares procedure
was developed to calculate the coefficients in the
above equation.  Using the computed coefficients, the
tidal function for average tidal conditions at Old
Pass Lagoon may be written as:
     H(t) = 0.32 + 0.084 Sin(w) + 0.002 Sin(3w)
            + 0.261 Cos(w) - 0.003 Cos(2w)
            - 0.003 Cos(3w)                                   (6)
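
A sketch of such a least-squares fit is shown below.  The hourly stage
"observations" are synthetic stand-ins (the actual tide records are not
reproduced in the paper), and w is taken as 2πt/24 with t in hours; both are
assumptions for illustration.  Applied to the study's hourly tide data, the same
procedure would yield coefficients of the kind shown in Equation (6).

    # Least-squares fit of the seven harmonic coefficients in the tidal function.
    # The "observed" stages below are synthetic and only stand in for field data.
    import numpy as np

    t = np.arange(0.0, 25.0)                             # hours over one average tidal day
    w = 2.0 * np.pi * t / 24.0                           # assumed definition of w
    H_obs = 0.32 + 0.26 * np.cos(w) + 0.08 * np.sin(w)   # hypothetical tide stages, ft

    # Design matrix with columns [1, sin w, sin 2w, sin 3w, cos w, cos 2w, cos 3w]
    X = np.column_stack([np.ones_like(t),
                         np.sin(w), np.sin(2 * w), np.sin(3 * w),
                         np.cos(w), np.cos(2 * w), np.cos(3 * w)])
    coeffs, *_ = np.linalg.lstsq(X, H_obs, rcond=None)
    print("A1 ... A7:", np.round(coeffs, 3))

    # Once H(t) is known, the mean velocity of Equation (3) follows from its time
    # derivative, U = (S/A) dH/dt, for a given surface area S and cross section A.
    dHdt = np.gradient(H_obs, t * 3600.0)                # ft per second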
The cross section information necessary for determining
the mean tide velocity was obtained from field measure-
ments and from information presented in Nautical Chart
870-SC published by the U.S. Department of Commerce (6).

To solve Equation (1), the effective longitudinal
dispersion coefficient, D, is also required.   This
coefficient was obtained from the dye test con-
ducted as described above.  The value 4 ft²/sec
was used as the average dispersion coefficient
throughout the lagoon.

Tidal prism theory was also used to compute the
flushing performance of the lagoon.  The average
turnover rate (or flushing rate) can be computed
from the following equation assuming that the
concentration, C, of a pollutant in the lagoon is
C0 at time zero, and that the bay is free of the
pollutant.  The concentration of the pollutant
after N tidal cycles is given by:
     C = C0 (1 - aP/(P + V))^N                            (7)
where:
          a: mixing coefficient
          P: volume of water in tidal prism
          V: volume of water in lagoon at low tide.
The volume of water in the lagoon at low tide was
calculated from the measured cross sections to be
79,600,000 cubic feet.

An average value for the tidal fluctuation at East Pass
Destin was computed to be 0.55 feet from tidal predic-
tions published by the U.S. Department of Commerce (7).
By considering this depth of water over the entire area
of Old Pass Lagoon, the volume of water entering the
lagoon during the rising tide was estimated to be
4,200,000 cubic feet.

It is difficult to accurately assess the degree to
which Gulf water is mixed with the lagoon water.  It is
known that the mixing coefficient is dependent on the
tidal and wind velocities as well as the geometric
configuration of the lagoon.

An estimate of the mixing coefficient was made by com-
paring the salinity of Gulf water with that of Old
Pass Lagoon.  Although no obvious source of freshwater
entering Old Pass Lagoon was evident, the salinity in
the lagoon was below that in the Gulf of Mexico.  By
comparison of the salinity in the lagoon before and
after the flood tide an estimate of 0.6 was made for
the mixing coefficient.  With the above information,
it is possible to compute the flushing performance of
the lagoon using Equation (7).

Since C/C0 is the fraction of conservative pollutant
remaining in the lagoon, (1 - C/C0) expressed as a per-
centage indicates the degree to which flushing has been
achieved.
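
Using the volumes and mixing coefficient reported above, a short calculation of
this kind might look as follows.  The exact per-cycle exchange expression
(reconstructed as Equation (7)) and the conversion of roughly one diurnal tidal
cycle per day are assumptions of this sketch, not statements from the paper.

    # Tidal prism flushing estimate using the values reported above.  The form of
    # the per-cycle exchange fraction and the one-cycle-per-day conversion are
    # assumptions of this sketch.
    V = 79_600_000.0    # lagoon volume at low tide, cubic feet
    P = 4_200_000.0     # tidal prism volume, cubic feet
    a = 0.6             # estimated mixing coefficient

    exchange = a * P / (P + V)          # fraction of lagoon water exchanged per cycle
    cycles = 0
    fraction_remaining = 1.0
    while fraction_remaining > 0.5:     # count cycles to reach 50 percent flushing
        fraction_remaining *= (1.0 - exchange)
        cycles += 1

    print(cycles, "tidal cycles, C/C0 =", round(fraction_remaining, 3))

With roughly one tidal cycle per day, this simple estimate is on the order of
three weeks, which is of the same order as the flushing times shown in Figure 2.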

The results  of the flushing calculations  are  pre-
sented in Figure 2.   They indicate that the time for 50
percent flushing performance is  about 30  days for the
area in the  middle of the lagoon.   In other words,
this is the  time required to reduce the concentration
of a conservative material initially distributed uni-
formly over the canal system to 50 percent of the
initial concentration.

     Fig. 2.  Flushing Performance of Old Pass Lagoon as
              Computed by the Tidal Prism Method and the
              Computer Model

[Figure 2 plots percent flushing against time (0 to 30 days), showing the
average flushing in the lagoon by the tidal prism method and the flushing in
mid-lagoon as given by the computer model.]

                                         ESTIMATE OF POLLUTANTS FROM THE PROPOSED MARINA

                                         Little data is available regarding the quantities of
                                         pollutants that are discharged from the engines of
                                         pleasurecraft of the type anticipated to use Sandpiper
                                         Cove Marina.

                                         Jackivicz and Kuzminski (8) have presented information
                                         on the various compounds found in the exhaust from
                                         outboard motors.

                                         The sizes and types of craft expected to use the marina
                                         were reported as follows:
                                          Number of Boats    Boat Length    Engine Type

                                                30               20 ft.       Outboard
                                                18               31 ft.       Inboard
                                                14               40 ft.       Inboard
                                         Also, the following conditions are anticipated to be
                                         enforced:

                                             -The general public will not be allowed to use
                                              the marina.
                                             -An onshore waste disposal system will be available
                                              to receive sanitary and other wastes from boats.

                                         Using the information available and making the following
                                         assumptions, an estimate was made of the amount of oil
                                         that will be deposited in Old Pass Lagoon from out-
                                         board engines.  The assumptions are:

                                              -Average oil:gasoline ratio - 1:50.
                                              -Discharge of both volatile and nonvolatile oil
                                               per boat - 6 gms/liter of fuel consumed.
                                              -Average usage - twice per week.
                                             -Speed - 15 knots.
                                             -Fuel consumption - 3 nautical miles/gallon.
                                                      416

-------
The distance from the marina to the bridge at the mouth
of Old Pass  Lagoon is approximately 10,000 feet.  Thus,
the fuel consumed on two trips in and out of the lagoon
may be computed to be 8.32 liters.  Consequently, the
oil discharged per boat per week is 49.92 (say 50)
grams, and the total oil discharged from the thirty
outboard boats is 1,500 grams/week.

The biochemical oxygen demand (BOD) and chemical oxygen
demand (COD) may also be computed.  Under the above
assumptions, the time spent per boat operating in Old
Pass Lagoon  is 26.4 minutes.  This may be rounded up to
30 minutes to take into account slower speeds used in
the immediate vicinity of the marina.  For boats in
operation for one hour, the rates at which BOD and COD
are generated are 1.05 and 2.50 grams/liter of fuel
consumed respectively.  Since 8.32 liters of fuel are
consumed and there are thirty outboard boats the total
weekly demands are 262 and 624 grams respectively.

The above computations have analyzed the effects from
outboard engines.  Inboard-engined boats do not utilize
an oil:gasoline mixture and no data were located that
analyze the  pollutional aspects of inboard boats.  The
potential problem from inboard boats centers around
discharges from the bilges.  However, such discharges
are illegal  and it must be assumed that stringent
enforcement  of the law will severely limit such dis-
charges.

For the purposes of this analysis, assume that oil is
discharged at half the rate as that from an outboard
motor, and that BOD and COD are created at the same
rates.  Thus, the pollutants from the 32 inboard boats are:
     Oil:   50% x 50 x 32   =    800 grams/week
     BOD:   8.75 x 32       =    280 grams/week
     COD:   20.80 x 32      =    665 grams/week

The total discharges from all boats at the marina are:

     Oil:   2,300 grams/wk  =  0.73 lbs/day
     BOD:     542 grams/wk  =  0.17 lbs/day
     COD:   1,290 grams/wk  =  0.40 lbs/day
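The weekly load arithmetic summarized above can be checked with the
short sketch below; it is an illustrative recomputation based on the
stated assumptions, not the authors' worksheet, and the names used are
hypothetical.

     # Illustrative recomputation of the marina pollutant loads from the
     # assumptions stated in the text (unit rates in grams per liter of fuel).
     G_PER_LB, DAYS_PER_WEEK = 453.6, 7.0
     FUEL_L_PER_WEEK = 8.32            # two round trips per boat per week
     OIL, BOD, COD = 6.0, 1.05, 2.50   # grams per liter of fuel consumed
     OUTBOARDS, INBOARDS = 30, 32

     totals = {
         # outboard boats discharge oil, BOD and COD at the full rates
         "Oil": OUTBOARDS * FUEL_L_PER_WEEK * OIL,
         "BOD": OUTBOARDS * FUEL_L_PER_WEEK * BOD,
         "COD": OUTBOARDS * FUEL_L_PER_WEEK * COD,
     }
     # inboard boats: oil at half the outboard rate, BOD and COD at the same rates
     totals["Oil"] += INBOARDS * 0.5 * FUEL_L_PER_WEEK * OIL
     totals["BOD"] += INBOARDS * FUEL_L_PER_WEEK * BOD
     totals["COD"] += INBOARDS * FUEL_L_PER_WEEK * COD

     for name, grams in totals.items():
         print(f"{name}: {grams:6.0f} g/week = "
               f"{grams / G_PER_LB / DAYS_PER_WEEK:.2f} lb/day")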
                                            WATER QUALITY MODELING ANALYSES

                                            In order  to evaluate  the  effects  of  the  pollutants
                                            generated by boats  from the marina,  a  water  quality
                                            modeling  analysis was conducted using  a  steady  state
                                            estuary model developed by Connell/Metcalf & Eddy,  Inc.
                                            and Hydroscience, Inc.,(9).

                                            The results of the  modeling study indicated  that  the
                                            boats using the marina will cause a  practically un-
                                            detectable increase in BOD loading in  the lagoon.

                                            Since pollution from  oil  and grease  was  the  major cause
                                            for concern a more  detailed analysis of  oil  and grease
                                            pollution was conducted according to the following
                                            steps:
                                                       (1)   The existing oil and grease level in the waters
                                                       of the lagoon was simulated using a computer model.
                                                       The  results of the simulated oil and grease profile
                                                       are  presented in Table 1.

                                                       (2)   In order to provide a conservative analysis it  was
                                                       assumed that the oily waste generated by the 62 boats
                                                       is discharged as a point source at the marina.  Ini-
                                                       tially, the 0.73 lb/day of oil computed above were
                                                       considered to be generated at the marina.  The results
                                                       of this waste input on the lagoon water are also shown
                                                       in Table 1.  The indication is that the effect on the
                                                       overall oil/grease level in the lagoon waters is almost
                                                       undetectable.

                                                       (3)  Finally, 22 lb/day of oil loading were considered
                                                       as a point source in the marina area.  This figure
                                                       represents the loading that is applied during a period
                                                       of 60 days assuming that 50% flushing takes place.  It
                                                       should be noted that this flushing time is much higher
                                                       than the flushing performance of the lagoon as computed
                                                       in the previous section and, therefore, the results  are
                                                       conservative.

                                                       The above calculation was based on the following assump-
                                                       tions:
                 TABLE 1.  RESULTS OF THE WATER QUALITY MODELING STUDY FOR OLD PASS LAGOON
                           (all concentrations in mg/l)

Location from Hwy. 98 Bridge (ft):
     600, 870, 1400, 1945, 2480, 2955, 3555, 4000, 4215, 4825, 5220,
     5300, 5750, 6335, 6670, 7020, 7770, 8190, 8270, 8570, 9195, 9740

Measured Oil/Grease Values:
     9.33 at 600; 3.37 at 4000; 0.42 at 5220; 5.81 at 6670;
     0.11 at 8190; 0.11 at 9195; 0.64 at 9740

(1) Simulated Existing Oil/Grease Condition:
     7.455, 6.770, 6.373, 6.076, 5.894, 5.703, 5.480, 5.252,
     5.085, 4.878, 4.640, 4.309, 3.741, 2.951, 2.280, 1.609

(2) Oil/Grease Profile of the Lagoon after inputting 0.73 lb/day
    Oil/Grease at Proposed Marina:
     7.484, 6.804, 6.409, 6.114, 5.943, 5.744, 5.523, 5.297,
     5.132, 4.927, 4.691, 4.363, 3.802, 3.026, 2.356, 1.159

(3) Oil/Grease Profile of the Lagoon after inputting 22 lb/day
    Oil/Grease at Proposed Marina:
     7.706, 7.106, 6.759, 6.500, 6.341, 6.175, 5.981, 5.783,
     5.639, 5.460, 5.255, 4.969, 4.481, 3.806, 3.222, 2.180

                                                       417

-------
(1)  4 ft²/sec was used as the dispersion coefficient.
This value was determined from the field dye test
conducted near the site of the proposed marina.

(2)  It is assumed that the oily waste will be mixed
only in the top one foot of water in the lagoon.   This
is not unreasonable since oil floats but wind and wave
action will create mixing.

(3)  No freshwater contribution was considered through-
out the lagoon.

(4)  Oil and grease are considered to be conservative
materials.  The above modeling analysis indicates that
oil and grease discharged from the 62 boats at the
marina would result in an increase of the oil/grease
content of the lagoon water by about 1 mg/1 near the
proposed marina site, and about 0.3 mg/1 near the
outlet of Old Pass Lagoon.  The quantity of the  in-
crease is considered insufficient to cause deteriora-
tion of the lagoon water.

Even though the quantity may not be sufficient to
degrade the water quality, oil in such quantity as to
be visibly apparent is aesthetically objectionable.
Oil films of microscopic thickness are responsible for
the bright bands of color that may be observed on the
surface of water in canals or on wet roads.

Based on  the information contained in a manual on
Disposal  of Refinery Wastes published by the American
Petroleum Institute (10), the amount of oil that  will
result in a barely visible film on the surface of the
water near the proposed marina may be determined.

The American Petroleum Institute reports that an
oil film  of thickness 0.0000015 inches is barely
visible under the most favorable light conditions.
Considering the small lagoon area east of Norreigo
Point, which has a surface area of 792,000 ft²,
the amount of oil in the film is:

     (0.0000015 / 12) x 792,000 x 7.48  =  0.74 gal.  =  6.18 lb.
Assume that the oily waste discharged into the water
due  to the marina operations will have disappeared from
the  surface within 24 hours due to mixing induced by
wind, boat traffic and tidal action.  Then the 6.18
lbs/day of oil required to maintain a film of oil is
over eight times larger than the amount of oil and
grease that was estimated to be generated by the 62
boats using the marina.
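The film calculation can be restated compactly as follows.  The oil
density is not given in the text and is inferred here from the
0.74 gal = 6.18 lb equivalence, so the sketch is illustrative only.

     # Illustrative check of the barely-visible-film calculation.
     FILM_IN = 0.0000015          # barely visible film thickness, inches
     AREA_FT2 = 792_000.0         # lagoon surface east of Norreigo Point, sq ft
     GAL_PER_FT3 = 7.48
     LB_PER_GAL = 6.18 / 0.74     # oil density implied by the text, about 8.35 lb/gal

     film_gal = (FILM_IN / 12.0) * AREA_FT2 * GAL_PER_FT3
     film_lb = film_gal * LB_PER_GAL
     print(f"oil needed each day to maintain the film: {film_gal:.2f} gal = {film_lb:.2f} lb")
     print(f"ratio to the 0.73 lb/day boat discharge: {film_lb / 0.73:.1f}")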

The  assumption that the oily waste will have disap-
peared from the surface within 24 hours is a conserva-
tive estimate.  According to the American Petroleum
Institute, experiments have indicated that oil films up
to 0.0000003 inches in thickness generally do not
persist more than 5 hours on an agitated water surface.
Tested at sea, less than 24 hours are required to
dissipate a film of 0.00004 inches thickness.  In
general, the thinner the film, the less time is required
for dispersion.

Furthermore, the 0.73 lb/day oil waste estimated to be
generated from the boats will be spread over the entire
area of Old Pass Lagoon.  In order to be conservative,
we have assumed that this waste is discharged near the
marina site as a point source.

It should be pointed out that the above calculation is
based on the assumption that the oil is evenly distri-
buted on the surface of the water at the vicinity of
the proposed marina.  It is possible that wind may
drive the film to the shore or an inlet and result in a
film thick enough to be visible.  There is a tendency
for winds to come from the north or from the south,
while least wind activity is from  the west.   This  would
indicate that there is less likelihood of wind  action
pushing an oil film toward the deadend of the lagoon
and the vicinity of the proposed marina  as opposed to
driving it out of Old Pass Lagoon.

CONCLUSION

The study indicated that existing  water  quality in the
lagoon is very good.  The computations indicate that
the lagoon has a relatively poor flushing performance,
causing the water to be sensitive  to  the effects of
pollution.  However, based on the  analysis,  it is
concluded that the proposed marina and the 62 boats
berthed there will have little effect on the water
quality of the lagoon.


LITERATURE CITED

1.   Am.  Public Health Assoc.,  Am.  Water Works
     Assoc., Water Pollution Control Fed. Standard
     Methods for the Examination of Water and
     Wastewater, Thirteenth Edition, 1971.

2.   Feurstein, D.  L.  and R.  E. Selleck,  Aug.
     1963.  Fluorescent Tracers for Dispersion
     Measurements.   Journal of the Sanitary
     Division, ASCE, Vol 89,  No. SA4.  pp. 1-21.

3.   Fischer, H. B., Nov. 1967.  The Mechanics of
     Dispersion in Natural Streams.  Journal of  the
     Hydraulics Division, ASCE, Vol. 93.   No. HY6,
     pp.  187-216.

4.   Fischer, H. B. , Oct. 1968.  Dispersion Pre-
     dictions in Natural Streams.   Journal of the
     Sanitary Division, ASCE, Vol.  90, No.  SA6.
     pp.  124-130.

5.   Metcalf & Eddy, Inc. , University of Florida,
     and Water Resources Engineers, Inc., Oct.
     1971.  Storm Water Management Model, Vol. IV
     Program Listing,  Environmental Protection
     Agency.

6.   U.S.  Department of Commerce,  National Oceanic
     and Atmospheric Administration.  Nautical
     Chart 870-SC.

7.   U.S.  Department of Commerce,  National Oceanic
     and Atmospheric Administration.  Tide Tables
     1974   High and Low Water Predictions,  East
     Coast of North and South America, 1973.

8.   Jackivicz, T. P., Jr., and L. N. Kuzminski,
     Aug.  1973.  A Review of Outboard Motor Effects
     on the Aquatic Environment.  Water Pollution
     Control Fed.,  Vol. 45, No. 8, pp. 1759-1770.

9.   Connell/Metcalf & Eddy,  Inc., and Hydroscience,
     Inc., Water Quality Modeling  Study for State
     of Florida Department of Pollution Control,
     1974.

10.  American Petroleum Institute, 1969.   Manual
     on Disposal of Refinery Wastes, Chapter 2.
                                                         418

-------
                      DIGITAL COMPUTER SIMULATION OF  SECONDARY  EFFLUENT  DISPOSAL ON LAND
          Kuang-Mei Lo, Ph.D., P.E., Project Manager, Connell/Metcalf & Eddy, Inc., Miami, Florida
          Donald Dean Adrian, Ph.D., P.E., Professor, Department of Civil Engineering, University of Massachusetts/Amherst
     A computer simulation study was conducted  to
determine the required field area for an  infiltration
type of land disposal system based on weather condi-
tions and soil properties.  The input information  in-
cluded local evaporation data, average monthly  rain-
fall and average number of days during the month on
which it rained, and characteristics of soil's  in-
filtration and drying rates.

     The results of this 20-year simulation were
used to determine the liquid loading rate under
various weather and soil conditions.  The liquid
loading rate was used to determine the necessary field
area for a particular disposal system at  a given lo-
cation.

                  Introduction

     Land disposal of domestic and industrial waste-
waters has received a great deal of publicity in
recent years.  In the Southeast, where climatic con-
ditions are favorable for this type of disposal, wide-
spread interest has been expressed both among
professionals in the wastewater management field and
the general public.
     In general, land disposal is practiced  in  three
ways:  spray irrigation, overland flow, and  infiltra-
tion-percolation.  Large land areas are required
for all three methods.  Over the years, controversy
surrounding the long-term effects of the  land disposal
of effluent, together with the lack of design criteria
to determine the land area required, has  in  many cases
forced engineers or planners to disregard this  treat-
ment method as a feasible method of disposal.   The net
result is either failure to consider a viable solu-
tion to effluent disposal or creation of  additional
controversy because many people or groups consider
land disposal as the most promising solution for
wastewater management.
     This paper is primarily concerned with  the devel-
opment of rational design criteria for land  disposal,
from which an engineer or planner could determine  the
required land area based on the soil infiltration  and
drying rates and local climatic conditions.  Considera-
tion of chemical and biological changes in the  applied
wastewater  is beyond the scope of this paper.   This
paper is confined to the study of the Infiltration-
Percolation type of land disposal.  The best example
of this method is the Flushing Meadows Project near
Phoenix, Arizona (1).

Description of the System
     A hypothetical land disposal system is shown in
Figure 1, in which a series of infiltration basins are
constructed with soil banks surrounding each  basin  to
prevent entry of surface water runoff.  Secondary
effluent from a treatment plant is pumped from a
storage pond into the basin to a predetermined depth;
the basin is kept flooded until a certain amount of
effluent has infiltrated; then the basin is dried out
prior to the next application.  Generally, three major
stages are involved in the disposal process.  The
first stage is called the ponding stage because during
this stage effluent is ponded on the surface of the
basin and is removed from the basin by infiltration
and evaporation.  Also, in this period, the  volume
of the effluent to be disposed of is increased  by the
rainfall  on the surface of the basin.  As the disposal
process progresses, the second stage is reached when
there is no more ponding water in the basin.  In this
                                                       419,
             Figure 1.  Infiltration Basin (showing rainfall on the
                        basin, effluent losses through evaporation
                        or drying, effluent losses through infiltra-
                        tion or redistribution, and the groundwater
                        table)

stage, the soil water beneath the basin undergoes a
redistribution process which percolates the excessive
water from the upper layer to the lower layer of the
soil.  The rate of redistribution normally decreases
rapidly or becomes negligible when  the water content
in the soil reaches the field capacity.  During the
re-distribution process, water in the soil is also
lost through evaporation.  After the soil reaches
field capacity, the disposal  process progresses
to the final stage, drying, during which evaporation
proceeds at a rate lower than evaporativity, and the
actual rate is dictated by the ability of the soil
profile to deliver moisture toward the evaporation
zone.  The drying process is very important for land
disposal because it not only restores the infiltration
rates of the soil, but also provides a necessary
period for the soil mass to renovate the effluent.
During the redistribution and drying periods, the
process is also affected greatly by rainfall, which
increases  the soil  water and prolongs the redistribu-
tion or drying period.  Sometimes when the rainfall  is
high, ponding water results.   Consequently, the process
may return to the first stage.
Approach
     Because of the stochastic nature of rainfall and
its resultant effect on effluent disposal rates, a
simulation approach was used in this study to test how
a particular design would perform under conditions
representative of a given area of the country.  To
achieve this simulation, mathematical equations to
describe the infiltration and drying processes were
adopted from the literature to relate soil characteris-
tics to the amount of water lost by infiltration and
drying.  Input data also included rainfall and evap-
oration information.  Since actual rainfall records are
cumbersome to use, synthetic rainfalls were generated
for this study.  Mean monthly evaporation from a free
water surface published by the U.S. Weather Bureau was
used to represent water loss to the air during the
infiltration and redistribution stages for a given
area.

                 Synthetic Rainfall
     A rainfall probability function was utilized to
sequentially generate rainfall data using the Monte
Carlo Simulation Technique.  The generated rainfall
data could not be distinguished from historical rain-
fall data by means of the statistical tests of
significance.
     The modified Poisson distribution was used by
Bagley (2) to represent the frequency distribution of
daily rainfall for San Francisco, Sacramento and
Spokane.  The modified Poisson distribution contains a
persistence characteristic so the likelihood of rain
occurring on a given day increases if it has rained on
the previous day.  The Poisson and modified Poisson
distributions are compared in Table 1.
-------
  Table 1.  Comparison of the Poisson and Modified
            Poisson Distribution

  Probability of       Poisson               Modified Poisson
  i units of rain      distribution          distribution

       P0              e^(-x)                1/(1+d)^(x/d)

       P1              x e^(-x) / 1!         x / [(1+d)^(x/d+1)]

       Pi              x^i e^(-x) / i!       x(x+d)...(x+(i-1)d) / [i! (1+d)^(x/d+i)]
The modified Poisson distribution is a function of two
parameters, x and d, where d represents the degree of
dependence of one event upon another.  When d is zero,
the modified Poisson distribution approaches the Poisson
distribution as a limit.  The procedure by which to
calculate x and d follows an example by Lo (3) which
uses Amherst, Massachusetts rainfall records.

     From 1961 to 1965 there were 1225 observation days
for the interval March to October of each year.  In
this period, 377 days were considered as having measur-
able rainfall.  The average rainfall in this period was
0.087 inch/day.
     If one lets M = total no. of days in the period,
N = total no. of days with rain, and U = average daily
rainfall for the whole period, then the probability of
no rain was (M-N)/M, which must be equal to P0 as
shown in Table 1:

     (M-N)/M = 1/(1+d)^(x/d)                              (1)
or
     (1225-377)/1225 = 1/(1+d)^(x/d)                      (2)

The expected value of the modified Poisson distribution
must be equal to U, the average daily rainfall for the
entire period.  With the unit increment of rainfall of
0.05 inch, the relationship is:

     U/0.05 = x/(1+d)^(x/d+1) + 2x(x+d)/[2!(1+d)^(x/d+2)] + ...
              + ix(x+d)...(x+(i-1)d)/[i!(1+d)^(x/d+i)] + ...        (4)

Rearranging Equation (1) as:

     x = -[d log((M-N)/M)] / log(1+d)                               (3)

and solving Equations (3) and (4) simultaneously, we get
d = 13.8 and x = 0.094.

     From the probability distribution function, it is
then possible to generate synthetic rainfall to repre-
sent real-world precipitation.  This was done by means
of the Monte Carlo simulation technique, in which a
subroutine available at the University of Massachusetts
Computer Center was used to generate a uniformly distri-
buted random number sequence.  Then, by applying a table
interpolation method for the inverse probability inte-
gral transformation, the random samples of daily rain-
fall were obtained.  Comparison of the generated and
recorded rainfall is shown in Figure 2.
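A sketch of the sampling step is given below.  It is not the
University of Massachusetts subroutine; it draws each day independently
from the fitted distribution using the parameters quoted above
(x = 0.094, d = 13.8) and a 0.05-inch unit, and it does not reproduce
the day-to-day persistence mechanism itself, so it should be read only
as an outline of the inverse-transformation idea.

     # Illustrative sketch: synthetic daily rainfall from the modified
     # Poisson distribution by Monte Carlo inverse transformation.
     import random

     X, D = 0.094, 13.8      # distribution parameters quoted in the text
     UNIT = 0.05             # inches of rain per unit increment

     def modified_poisson_pmf(max_units):
         """P(i) for i = 0..max_units, from the recurrence implied by Table 1."""
         probs = [(1.0 + D) ** (-X / D)]            # P0 = 1/(1+d)^(x/d)
         for i in range(1, max_units + 1):
             # P(i) = P(i-1) * (x + (i-1)d) / (i (1+d))
             probs.append(probs[-1] * (X + (i - 1) * D) / (i * (1.0 + D)))
         return probs

     def daily_rainfall(pmf):
         """Inverse-transform sample of one day's rainfall, inches."""
         u, cum = random.random(), 0.0
         for i, p in enumerate(pmf):
             cum += p
             if u <= cum:
                 return i * UNIT
         return len(pmf) * UNIT      # truncated tail

     if __name__ == "__main__":
         pmf = modified_poisson_pmf(100)
         season = [daily_rainfall(pmf) for _ in range(245)]   # March-October
         print(f"season total: {sum(season):.2f} in, wet days: {sum(r > 0 for r in season)}")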
                                                 Figure 2.  Comparison of the Generated
                                                            and Recorded Rainfall (monthly
                                                            rainfall, synthetic vs. recorded,
                                                            Boston, Mass.)

                                                               Infiltration and  Redistribution  of Soil
                                                                   Moisture Following Infiltration

                                                           Infiltration

                                                                Passage of water from a basin to  the  ground  fre-
                                                           quently occurs under  unsaturated conditions  because of
                                                           the distance between  the groundwater table and  the
                                                           ground surface.  The movement  of water in  the unsatu-
                                                           rated soil can be described as:
     ∂θ/∂t = ∂/∂Z [D(θ) ∂θ/∂Z] - ∂K/∂Z                          (5)

where θ is the volumetric water content, D(θ) is the
soil water diffusivity, K is the hydraulic conductivity,
Z is the distance downward, and t is the time.

Philip (4) developed the above differential equation's
solution, which describes cumulative infiltration as:

     I(t) = St^(1/2) + (A2 + K0)t + A3 t^(3/2) + ...            (6)

subject to t = 0, Z > 0, θ = θi; and t > 0, Z = 0, θ = θ0.
Philip suggested use of the first two terms to describe
approximately the infiltration:

I(t) = St^(1/2) + At                                  (7)

     For a larger t, Hillel (5) stated that the cumulative
infiltration can be expressed as:

     I = St^(1/2) + Kt                                   (8)

which yields the infiltration rate as:

     i = (1/2)St^(-1/2) + K                              (9)
                                                           where K is the hydraulic conductivity of the soil's
                                                           upper layer.  S was defined by Philip as sorptivity;
                                                           it can be determined in the laboratory.

     Equation (9) describes the infiltration rate of the
soil for this study.  The infiltration rates used in this
study are shown in Figure 3.  The rates describe a
loamy soil.
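Equations (8) and (9) can be evaluated directly, as in the sketch
below; the sorptivity and conductivity values shown are placeholders
and are not the loamy-soil parameters behind Figure 3.

     # Illustrative sketch: cumulative infiltration and infiltration rate.
     import math

     S = 2.0   # sorptivity, cm/day**0.5 (placeholder value)
     K = 1.0   # hydraulic conductivity of the upper soil layer, cm/day (placeholder)

     def cumulative_infiltration(t_days):
         """I(t) = S*t**(1/2) + K*t, Equation (8)."""
         return S * math.sqrt(t_days) + K * t_days

     def infiltration_rate(t_days):
         """i(t) = (1/2)*S*t**(-1/2) + K, Equation (9)."""
         return 0.5 * S / math.sqrt(t_days) + K

     if __name__ == "__main__":
         for t in (0.25, 1.0, 5.0, 20.0):
             print(f"t = {t:5.2f} d   I = {cumulative_infiltration(t):6.2f} cm   "
                   f"i = {infiltration_rate(t):5.2f} cm/day")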

                                                           Redistribution

                                                                The infiltration process then comes to an end
                                                           when the applied effluent is depleted by evaporation
                                                           and infiltration.  The movement of water within the
                                                           soil  after end of infiltration is called redistribu-
                                                           tion, since its effect is to redistribute soil water
                                                           from upper layers of soil, wetted to near-saturation, to
                                                           the lower layers.  The rate of distribution depends on
                                                           the soil properties, the groundwater depth, and the
                                                        420

-------
moisture content of the soil.  The redistribution pro-
cess also involves hysteresis, which complicates the
redistribution process and makes it difficult to des-
cribe mathematically.

       Figure 3.  Infiltration Rate as a Function of Time
                  (infiltration rate vs. time in days)

     For this study, the redistribution  process deter-
mines the time required for draining the excessive
water in the soil.  The faster the soil  drains the
excessive water, the earlier it can start  the  drying
process.  Normally, whether the infiltration basin is
ready for next effluent application depends on the
moisture content of the top soil.  As a  result,
only the redistribution process of top soil is con-
sidered for this study.  The depth of the  top  soil  can
be considered as root zone or tillage zone normally
observed in the field.  For this reason, we
assumed that the redistribution in the top soil will
become negligible when the soil water content  reaches
field capacity.  Field capacity is defined as  the
amount of water that a well drained soil retains about
48 hours after being thoroughly wet.  Thus, we con-
sidered that the redistribution process  would  take two
days after infiltration ceases.  We further se-
lected 0.3 volumetric water content as the field ca-
pacity for this study, after which we assumed that the
redistribution process ceases and drying process starts.
           Evaporation and Drying
Evaporation
     The loss of water through evaporation occurs
during the period when effluent is ponded  in the in-
filtration basin.  U.S. Weather Bureau publications
were used for the mean monthly rate of evaporation.
In the simulation, evaporation ceased as  soon as the
ponding water was depleted.  Water losses  to the air
were then considered to be from drying.
Dryi ng

     Drying occurs in two distinct stages.  First,
when there is ample water in the soil, the drying  rate
is constant and is determined by external  and soil-
surface conditions, rather than conductive properties
of the soil profile.  Constant-rate drying generally
occurs while the soil  water undergoes the  redistribu-
tion process.  After the soil  reaches field capacity,
the drying process enters into second stage, which
proceeds at a rate lower than the constant rate and
is controlled by the conductive properties of the  soil
profile.  The second stage is called falling-rate
drying.   A study of evaporation from bare  soil  was

reported by Gardner and Hillel (6), who stated that the
drying rate can be expressed as:

     E = D(θ) W π² / (4L²)                               (10)

where E is the drying rate in cm/day, D(θ) is the diffu-
sivity corresponding to the average water content of the
soil column in cm²/day, and W is the volumetric water
content in cm for a soil column L cm long.

                                                         We confined ourselves to investigating  the  top  30-
                                                    cm layer of soil because, in  practice,  this  is  the
                                                    layer that dictates whether  the  basin  is  dried to a
                                                    degree ready for next application  of effluent.   As
shown in Figure 4, we assumed that the drying process
starts when the soil is at field capacity, after which
the water loss through drying would follow Equation (10)
until a 0.1 volumetric water content is reached for the
top 30 cm of soil.  We assumed that the constant drying
rate is equal to the drying rate determined by Equation
(10) at a volumetric water content of 0.3, or the
evaporation rate, whichever is smaller.

     Figure 4.  Illustration of Drying Process in the Soil
                (volumetric water content on the horizontal axis)
                                                         Diffusivity of the soil  is generally measured
                                                    experimentally.  Testing procedures are available from
                                                    many technical papers and soil and water textbooks.
                                                    Figure 5 presents the diffusivity used in this study
                                                    as a function of water content, while Figure 6 presents
the drying rates determined by Equation (10) for soil
water content ranges pertinent to the drying period.
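A rough illustration of how Equation (10) drives the falling-rate
stage is given below.  The diffusivity function is a crude placeholder
for the curve of Figure 5 and the one-day explicit step is an
assumption, so the day count printed is illustrative only.

     # Illustrative sketch: falling-rate drying of the top 30-cm layer,
     # stepped day by day with Equation (10).
     import math

     L = 30.0                                    # depth of the top soil layer, cm
     PI2_OVER_4L2 = math.pi ** 2 / (4.0 * L * L)

     def diffusivity(theta):
         """Placeholder D(theta), cm**2/day, standing in for Figure 5."""
         return 400.0 * theta ** 3

     def days_to_dry(theta_start=0.3, theta_end=0.1):
         """Days of falling-rate drying from field capacity to the 0.1 end point."""
         theta, days = theta_start, 0
         while theta > theta_end:
             W = theta * L                               # stored water, cm
             E = diffusivity(theta) * W * PI2_OVER_4L2   # Equation (10), cm/day
             theta -= E / L
             days += 1
         return days

     if __name__ == "__main__":
         print("days of falling-rate drying:", days_to_dry())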
       Figure 5.  Diffusivity-Water Content Relationship
                  (diffusivity vs. volumetric water content)

       Figure 6.  Drying Rate-Water Content Relationship
                  (drying rate vs. volumetric water content)
                                                       421

-------
                 Simulation

 Procedure
      Once  the mathematical equations for  the  various
 components  of the  land disposal process were  formula-
 ted,  simulation  of secondary effluent disposal  on  an
 infiltration basin was initiated.   Input  data were
 local  daily rainfall  and  evaporation, and parameters
 to  describe the  infiltration and drying properties of
 the soil,  including  sorptivity, conductivity,  and
 diffusivity.

     After obtaining the necessary input data and para-
meters, the simulation was conducted on a digital
 computer according to the following steps:

  •  The simulation started at the  beginning of  each
    day with the addition of the daily rainfall on
    the surface  of the applied effluent.   If  it was
    not raining, a zero amount of  rainfall was
    added.  The  depth of  effluent  on the  next day  was
    obtained by  subtracting the amount of water lost
     by infiltration and evaporation during the pre-
     vious day.  This procedure was repeated until
     no more ponding effluent remained on the surface
     of the basin.
  •  After infiltration ceased, two days were added for
     redistributing excessive water in the top 30 cm of
     soil to the field capacity.  In case rain occurred
     within the two days, the amount of the total rain-
     fall was compared with the water lost through con-
     stant-rate drying and infiltration.  If it was
     larger than that lost by drying and infiltration,
     the excessive rainwater was considered to be
     ponded on the surface, and the simulation returned
     to Step 1.  If it was smaller, then an extra
     day was added to account for the additional
    infiltration process.  The water lost through  dry-
    ing was determined utilizing the drying equation
    at a volumetric water content  of 0.3.
  •  After  reaching field  capacity, loss of soil  water
    through drying occurred.  If rain occurred  during
    this period, the amount of rainfall was compared
    with the existing water content of the soil to
    determine whether it  would increase the soil
    moisture content, or  return the soil  water  to
    Step 2, or even to Step 1.  The entire process
    stopped when the volumetric water content of the
    top 30 cm of soil reached 0.1.

    After  completion of the above  three steps,  efflu-
ent was applied to  the basin again  and the above steps
were repeated.  This process was repeated  until
the simulation progressed  to 20 years.
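The three steps can be reduced to the skeleton below.  It is a
much-simplified sketch and not the authors' program:  the rain,
evaporation, infiltration and drying callables stand in for the
synthetic-rainfall generator, the Weather Bureau data and Equations (9)
and (10), and the return to Steps 1 and 2 after heavy rain is only
crudely approximated.

     # Much-simplified skeleton of one effluent application (Steps 1-3).
     FIELD_CAPACITY, DRY_ENOUGH, TOPSOIL_CM = 0.3, 0.1, 30.0

     def disposal_time_days(applied_cm, rain, evap, infil, drying):
         day, ponded = 0, applied_cm
         # Step 1: ponding; rainfall adds water, infiltration and evaporation remove it
         while ponded > 0.0:
             day += 1
             ponded += rain(day) - evap(day) - infil(day)
         # Step 2: two days of redistribution down to field capacity
         day += 2
         theta = FIELD_CAPACITY
         # Step 3: falling-rate drying of the top 30 cm until it is dry enough
         while theta > DRY_ENOUGH:
             day += 1
             theta += (rain(day) - drying(theta)) / TOPSOIL_CM
             theta = min(theta, FIELD_CAPACITY)   # crude stand-in for a return to Step 2
         return day

     if __name__ == "__main__":
         # constant placeholder inputs in cm/day, purely for demonstration
         days = disposal_time_days(500.0,
                                   rain=lambda d: 0.05,
                                   evap=lambda d: 0.4,
                                   infil=lambda d: 8.0,
                                   drying=lambda theta: theta)
         print("simulated disposal time:", days, "days")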

    Five locations were chosen to  represent different
 meteorological conditions:  Phoenix, San  Francisco,
 Miami, Boston, and Duluth.  In recognition of the  dif-
 ficulty in  applying effluent on land in freezing
 weather, the normal  freezing period for Boston  and
 Duluth was  excluded  from  simulation.  As  a result,  8
 months out  of a  year were used for disposal in  Boston,
 and 6 months for Duluth.

    The total effluent to be disposed of  for  each
 application was  chosen as 200, 500 and 1,000 cm per
 unit  area  for all  locations.
     The  output  of  this  simulation was a  random vari-
 able,  the  time  required  for  disposing the effluent
 on  the basin,   and its  associated frequency of
 occurrences  for the simulation  period of 20 years.  A
 sample output is shown  in Table 2.   It shows  that if
 applying 500 cm of effluent  on  the  basin in Miami,
 once in  20 years it would be possible to dispose of
 the effluent within 35 days, 6 times it would be
 possible within 42 days, and so on.  The mean period
 between two successive applications is shown to be
64.2 days.  The outputs for the simulations in other
areas are summarized in Table 3.

Table 2.  Simulation Output for Miami for Applied
          Depth = 500 cm

Time reqd.    No. of          Time reqd.    No. of
  (day)     Occurrences         (day)     Occurrences

   35.0          1.0             59.0         80.0
   42.0          6.0             60.0         20.0
   43.0          2.0             71.0          1.0
   46.0          1.0             80.0          3.0
   52.0          1.0             81.0          2.0
   57.0          2.0             82.0
   58.0          4.0             83.0
Table 3.  Mean Disposal Time for Effluent Applied on
          Infiltration Basin

                           Mean Disposal Time (Days)
                            Application Depth (cm)

Location           1000      500      200      100       50
Boston            103.3     56.9     27.0     18.2     16.6
Duluth            109.7     54.6     26.5     18.8     14.2
Miami             136.6     64.2     33.0     24.8     22.4
San Francisco     128.1     71.2     33.2     26.7     22.4
Phoenix            91.5     40.4     18.0       -        -
                Applications

     The major application for the output of this
simulation was to determine the liquid loading rate,
which in turn was used to size the required field area
in which the disposal process actually takes place.
For example, Table 3 shows that in the Miami area it
takes an average of 64.2 days to dispose of 500 cm of
effluent, therefore the liquid loading per year is:
     (365/64.2) x 500 = 2843 cm/yr = 93 ft/yr                   (11)

and the field area required based on the liquid loading
is:

     Field area (acres) = 1120 Q / L                            (12)

where Q = flow rate of plant effluent, MGD; L = annual
liquid loading, ft/yr.  Therefore, for a 1-MGD plant,
the required field area would be 12 acres.  The actual
system may be constructed as a series of infiltration
basins.  The effluent would be pumped into the first
basin to a predetermined depth.  The basin would be
kept flooded until the total effluent reaches 500 cm.
Then the basin would be left to dry out.  The effluent
would be applied to the basin again when the volumetric
water content in the basin  reaches 0.1.  The design
for this particular example seems quite adequate, be-
cause based on the 20-year  simulation, the actual dis-
posal time is larger than the average value of 64.2
days only nine times.

     The liquid loading rates for other areas are
presented in Tables 4 to 6,  which also list the
required field area for a 1-MGD plant.
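For reference, the Miami example of Equations (11) and (12) can be
reproduced as follows; the names are illustrative, and the 1120
constant is simply one MGD for a year expressed in acre-feet.

     # Illustrative reproduction of Equations (11) and (12) for the Miami case.
     CM_PER_FT = 30.48

     def liquid_loading_ft_per_yr(applied_cm, mean_days, operating_days=365.0):
         """Equation (11): (operating days / mean disposal time) * applied depth."""
         return (operating_days / mean_days) * applied_cm / CM_PER_FT

     def field_area_acres(q_mgd, loading_ft_per_yr):
         """Equation (12): 1120 Q / L, where 1120 acre-ft/yr is 1 MGD for a year
         (1,000,000 gal/day x 365 / 7.48 gal per cu ft / 43,560 sq ft per acre)."""
         return 1120.0 * q_mgd / loading_ft_per_yr

     if __name__ == "__main__":
         loading = liquid_loading_ft_per_yr(500.0, 64.2)    # about 93 ft/yr
         print(f"loading = {loading:.1f} ft/yr, "
               f"area for a 1-MGD plant = {field_area_acres(1.0, loading):.1f} acres")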
            Discussion of Results
     The results indicate that the required field area
for an infiltration type of land disposal system
differs depending on the weather conditions and soil
properties.  For example, under the same conditions,
an infiltration basin in the Miami area requires about
50% more area than one located in Phoenix.

     The study also indicates that the required field
area decreases if the basin is kept flooded longer.
For example, the required field areas for a 1-MGD
plant in the Miami area are 12.8, 12.0, and 15.4 acres
                                                      422

-------
Table 4.  Liquid Loading Rates and Required Field
          Areas for 1-MGD Plant at Various Locations

Effluent Depth per Application:  1000 cm (32.8 ft)

                    Liquid
                    Loading      Field        Storage*
                    Rate         Area         Area
Location            (ft/yr)      (Acre)       (AC-ft)
Boston                76.2        14.7           368
Duluth                53.8        20.7           552
Miami                 87.7        12.8        Not required
San Francisco         93.5        12.0        Not required
Phoenix              130.9         8.5        Not required

*Storage area for effluent during the freezing period.

Table 5.  Liquid Loading Rates and Required Field
          Areas for 1-MGD Plant at Various Locations

Effluent Depth per Application:  500 cm (16.4 ft)

                    Liquid
                    Loading      Field        Storage*
                    Rate         Area         Area
Location            (ft/yr)      (Acre)       (AC-ft)
Boston                69.2        16.0           368
Duluth                54.1        20.7           552
Miami                 93.3        12.0        Not required
San Francisco         84.1        13.3        Not required
Phoenix              149.3         7.5        Not required

*Storage area for effluent during the freezing period.

Table 6.  Liquid Loading Rates and Required Field
          Areas for 1-MGD Plant at Various Locations

Effluent Depth per Application:  200 cm (6.6 ft)

                    Liquid
                    Loading      Field        Storage*
                    Rate         Area         Area
Location            (ft/yr)      (Acre)       (AC-ft)
Boston                58.3        19.2           368
Duluth                44.6        25.1           552
Miami                 72.6        15.4        Not required
San Francisco         72.1        15.5        Not required
Phoenix              133.1         8.4        Not required

*Storage area for effluent during the freezing period.

 corresponding to effluent depth per application of
 1000,  500, and 200 cm.   In other words, the  longer the
 basin  is flooded, the more effluent is infiltrated in-
 to the ground, and consequently, smaller field area is
 required.  However, this does not mean that  we should
 select the longest flooding time in order  to have
 the highest  liquid loading rate, because the liquid
 loading rate alone is not sufficient to design the
 system.  The ability of  soil to renovate the effluent
 and to avoid excess nitrogen loadings on the ground-
 water  should be considered.  The determination of
 water  quality loading rates is beyond the  scope of
 this paper.
                   Acknowledgment

     The junior author acknowledges the support of
 Office of Water Research and Technology grant WR-B038-
 MASS,  U.S. Forestry Service Pinchot Institute grant
 USDA-23-591, and the senior author acknowledges sup-
 port of Metcalf & Eddy,  Incorporated, Miami.
                    References

1.  Bouwer, Herman, R. C. Rice, and E. D. Escarcega,
    "High-rate Land Treatment I:  Infiltration and
    Hydraulic Aspects of the Flushing Meadows Project,"
    Journal WPCF, Vol. 46, No. 5, May 1974.

2.  Bagley, J. M., "An Application of Stochastic Pro-
    cess," Technical Report No. 35, Department of Civil
    Engineering, Stanford University, 1964.

3.  Lo, K. M., "Digital Computer Simulation of Water
    and Wastewater Sludge Dewatering on Sand Beds,"
    Civil Engr. Dept. Tech. Rept. EVE 26-71-1
    (July 1971), Univ. of Mass.

4.  Philip, J. R., "The Theory of Infiltration:
    4.  Sorptivity and Algebraic Infiltration
    Equation," Soil Science, 85, 1958.

5.  Hillel, Daniel, Soil and Water:  Physical Principles
    and Processes, pp. 138-140, Academic Press, 1971.

6.  Gardner, W. R., and Hillel, D. I., "The Relation
    of External Evaporation Conditions to the Drying
    of Soils," J. Geophys. Res., 67, 1962.
                                                      423.

-------
                                COMPUTER SIMULATION  OF LONG-TERM SECONDARY IMPACTS
                                        OF WATER AND WASTEWATER PROJECTS
   Gerald A. Guter, Ph.D.
   Director of Environmental Studies
   Boyle Engineering Corporation
   Newport Beach, California
John F. Westermeier
Biologist
Boyle Engineering Corporation
Newport Beach, California
Thomas C. Ryan
Environmental Planner
Boyle Engineering Corporation
Newport Beach, California
 Applications of the KSIM technique were made  in  the
 course  of  environmental studies  for water  and waste-
 water projects.  The National Environmental Policy Act
 mandates a systematic  interdisciplinary approach which
 will insure the integrated use of natural  and social
 sciences and the environmental design arts in planning
 and in  decisionmaking which will have an impact  on
 man's environment.  KSIM is being employed as a  part
 of this interdisciplinary approach.  This  application
 of KSIM requires modification of published techniques
 for water  resource planning and  adaptation to the cases
 discussed.

 The computer simulation, as applied to three water and
 wastewater projects, is discussed herein.  These proj-
 ects include an Areawide Facilities Plan for  the Las
 Virgenes Municipal Water District in Los Angeles and
 Ventura Counties, California; a  Master Plan of Water
 and Reclamation Facilities for Los Alisos  Water  Dis-
 trict,  Orange  County,  California; and an irrigation
 project on the Colorado River Indian Reservation in
 western Arizona.  In considering the application of
 KSIM to the above projects, major advantages,  accept-
 ability to reviewers and agencies, types of projects
 to which KSIM  appears  applicable, and further research
 on the  methods are discussed.

                       Background

 Section 102 of the National Environmental  Policy Act
 sets forth broad and sweeping policies for all agencies
 of the  Federal government to:

 (A)  Utilize a systematic, interdisciplinary  approach
 which will insure the  integrated use of natural  and
 social  sciences and the environmental design  arts in
 planning and decisionmaking which may have an impact
 on man's environment.

 (B)  Identify  and develop methods and procedures,  in
 consultation with the  Council on Environmental Quality
 established by Title II of this  Act, which will  insure
 that presently unquantified environmental  amenities
 and values may be given appropriate consideration in
 decisionmaking along with economic and technical
 considerations.

 (C)  Include in every  recommendation or report on pro-
 posals  for legislation and other major Federal actions
 significantly  affecting the quality of the human envi-
ronment, a detailed statement which is now commonly
 referred to as an Environmental  Impact Statement.

 Litigative activity over past years has centered
 around  implementation  of Section 102 (C).   Case law and
 guidelines prepared by the Council on Environmental
 Quality and by the Federal agencies have left little
 doubt about certain aspects of Environmental  Impact
 Statement (EIS) preparation.  They deal mainly with sub-
 ject matter or required contents, format, and proce-
 dures of review, however, and give little guidance for
 implementing Sections 102(A) or (B).

 Section 102(A) mandates the use  of the interdisciplin-
 ary approach which insures integration of  the disci-
               plines.  This implies that the expertise of the numerous
               specialists must be communicable to an integrator or one
               trained as a generalist, or the experts must learn to
               communicate among themselves.  In either case, communication
              interdisciplinary  in nature is necessary within the
              decisionmaking body.   The use of several disciplines,
              however,  is a  difficult and time-consuming task.

              Section 102 (B) mandates an additional difficult task
              for the agencies preparing environmental documents.
              This section requires that "unquantifiable environ-
              mental amenities and values may be given appropriate
              consideration  in decisionmaking."  The Tenth Circuit's
              decision  in Trout  Unlimited vs.  Morton (7 ERC 1321)
              touched upon the importance of the unquantifiable.
              This court said  that in most projects, "the ultimate
              decision  to proceed ...  is not strictly a mathemati-
              cal determination.   Public affairs defy the control
               that precise quantification of its issues would impose."

              The preparation  of environmental documents pursuant to
              NEPA and  the so-called "little NEPA"  legislation
              adopted by states  requires the development of method-
              ologies to effectively implement the  above NEPA sec-
              tions.  The Environmental Studies Department of Boyle
              Engineering Corporation is pioneering the use of a
              computer  simulation procedure for EIS preparation which
              was developed  for  integration of the  disciplines.   The
              methodology described in this paper allows for a struc-
              tured and sophisticated consideration of the numerous
              complex relationships of a project or alternative and
              its environment, yet is so structured to allow the
              integration of both "hard" and "soft" data.

                          Description of Simulation Model

              Because of their mathematical nature, most simulation
              models tend to be  excessively numerical.  Variables
              which are readily  quantified exclude variables which
              are subjective or  intuitive but which may be just as
              important.  For  example,  wastewater collection network
              parameters, treatment plant capacity, and discharge
              requirements are included in the scope of planning  a
              wastewater treatment system.   However, subjective or
              semi-quantitative  considerations such as local planning
              policy, environmental quality, and stimulus to land
              development can  easily become the controlling factors
              in the choice  of an alternative system.

               The methodology of the simulation, taking into consid-
               eration both quantitative and qualitative variables and
               applied to the cases described below, was developed by
               Kane, Vertinsky, and Thomson (1) and applied to water
               resource planning.  For a detailed account of the mathe-
               matical treatment of the model, the reader is referred to
               the above basic reference, as well as reports by Kruzic (2)
               and Suta (3).  One of the advantages of the Kane Simula-
              tion Model  (KSIM)  is that a detailed mathematical
              knowledge of the model is not required to use the
               model.  Thus, a barrier is removed between the disci-
               plines that jointly participate in the construction of
              the model variables and the simulation modeling by  use
              of a simplified  simulation language.
                                                        424

-------
KSIM mathematics has the following properties:

(1)   System variables are bounded.   It is assumed  that
any variable of human significance cannot increase
indefinitely; there must be distinct limits.  In an
appropriate set of units these can always be  set to one
and zero.

(2)   A variable increases or decreases according to
whether the net impact of the other variables is posi-
tive or negative.

(3)   A variable's response to a given impact  decreases
to zero as that variable approaches its upper or lower
bound.  It is generally found that bounded growth
and decay processes exhibit this sigmoidal character.

(4)   All other things being equal, a variable will
produce greater impact on the system as it grows larger.

(5)  Complex interactions are described by a  looped
network of binary interactions.
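The bounded, sigmoidal behavior implied by properties (1) through (5) can be
illustrated with a minimal sketch that follows the general form of the update
rule published by Kane1.  The variable names, Alpha coefficients, and time
step below are illustrative assumptions only, and the short-term (Beta)
refinement is omitted.

    # Minimal sketch of a KSIM-style bounded cross-impact update, following
    # the general form of the rule published by Kane (reference 1).  All
    # variables, coefficients, and the time step are illustrative assumptions;
    # the short-term (Beta) refinement is omitted.

    def ksim_step(x, alpha, dt=0.1):
        """Advance all state variables (each kept inside (0, 1)) by one step."""
        n = len(x)
        new_x = []
        for i in range(n):
            # alpha[i][j] is the impact of variable j on variable i.
            neg = sum((abs(alpha[i][j]) - alpha[i][j]) * x[j] for j in range(n))
            pos = sum((abs(alpha[i][j]) + alpha[i][j]) * x[j] for j in range(n))
            p = (1.0 + 0.5 * dt * neg) / (1.0 + 0.5 * dt * pos)
            new_x.append(x[i] ** p)      # the exponent keeps the variable bounded
        return new_x

    # Three hypothetical variables: population, pollution, desirability.
    x = [0.15, 0.45, 0.70]
    alpha = [[ 0.0, -0.5,  0.5],         # impacts on population
             [ 1.0,  0.0, -0.2],         # impacts on pollution
             [-0.5, -1.0,  0.0]]         # impacts on desirability
    for _ in range(50):
        x = ksim_step(x, alpha)
    print(x)

A positive net impact drives a variable toward its upper bound of one, a
negative net impact drives it toward zero, and the response flattens as either
bound is approached, which is the sigmoidal character noted above.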

Representative previous applications of KSIM  include
Impact of Canadian Water Sales to the United  States
(Kane), Implications of a U.S. Deep Water Port  Policy
(U.S. Army Corps of Engineers), Sensitivity of  Alter-
native Manpower Policies  (U.S. Office of Naval  Re-
search), and Effects of a "Make or Buy" Research Policy
(Policy and Planning Directorate, Canada).

We use the following procedure to apply KSIM  as a  part
of the environmental impact reports or statements.
The most appropriate portions of a study are  those in
which long-term relationships between the proposed
project, its alternatives, and unquantifiable values
must be analyzed.  The steps are as follows:

Step 1:  Assignment and Preparation of Team Members.
Normally all specialists who participate in the pre-
paration of the draft statement are assigned  to the
KSIM team.  These are staff scientists, engineers, and
planners of varying backgrounds and expertise involved
in the major subject areas of the EIS.  The individ-
uals work together in various studies in support of the
EIS and meet frequently to discuss the results  of  their
studies.  Thus, each is familiarizing himself with the
proposed project, the environment of the project,  and
the alternatives under consideration.  He is  also
learning to communicate with his team members.  These
researchers represent a core group.  Other experts also
familiar with the project area may join the core group
at a later date.  These may be lead agency staff members
or the decisionmakers themselves.

Step 2:  Identification of Variables.  Critical long-
term variables are identified and defined as precisely
as possible.  One variable must represent the proposed
project or alternative.  The definition of the  vari-
able is separated into its quantitative and qualitative
components.

Step 3.  Set Initial Values.  A well defined variable
will lend itself to setting an initial value.   An
estimate is made of the maximum growth level  the vari-
able could achieve.  For example, if the variable  is
population, the initial value is the present  fraction
of the ultimate population.

Step 4.  Cross-Impact Analysis.  KSIM requires  that
two matrices be completed for the simulation.   One of
the matrices represents the long-term (Alpha) relation-
ship between variables and the other represents the
short-term (Beta) relationship.  The matrices are  con-
structed with each variable listed as a row and a  col-
umn heading of a table.  A basic assumption is  that
when one variable changes, it may increase or decrease
each of the other variables or it may have no relation-
ship at all.  Thus, "0," "+," or "-" is assigned to each
square of the matrices.  Numerical values are assigned
as a refinement.
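As a small illustration of this two-pass construction, the sketch below
records hypothetical qualitative entries first and then replaces each sign
with a team-supplied numerical weight; none of the values come from the
studies described in this paper.

    # Hypothetical illustration of the two-pass matrix construction: record
    # the qualitative signs first, then refine them into numerical weights.
    signs = {("POP", "POL"): "+", ("POL", "DES"): "-", ("WSC", "POP"): "0"}
    first_pass = {pair: {"+": 1.0, "-": -1.0, "0": 0.0}[s]
                  for pair, s in signs.items()}
    first_pass[("POP", "POL")] = 2.0   # refined: population impacts pollution strongly
    print(first_pass)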

Step 5.  Computer  Projection of Variables.   Variables,
initial values, and  cross-impact  values  are  typed into
a computer with the  KSIM  program.   The computer  per-
forms the interactive calculations and displays  the
projected changes  in each variable over  time.  Team
members may now modify and refine  their model by  chang-
ing variable definitions, initial  values, and numerical
cross-impact values  or by choosing an alternative proj-
ect.  This refinement is  repeated  until  the  team mem-
bers are satisfied that their projections and inputs
are reasonable.

Step 6.  Interpretation of the Projection.  Major trends
which appear in the projections, and the key factors and
issues which bring about those trends, can now be
discussed using the projections as a
basis for the  discussion.  Differences in the long-term
impacts between the  various  alternatives  can also be
analyzed and discussed.

            Examples of Applications of KSIM
                To Environmental Studies

The application of the KSIM  procedure to preparation of
environmental  studies has been  demonstrated  in three
water and wastewater management related projects  in the
past year.  A  fourth study application is currently
under way.  Described briefly below is the manner in
which KSIM methodologies were applied to the various
studies.

An EIS was initiated to evaluate an Areawide Facilities
Plan of wastewater collection,  treatment, and disposal
facilities for the Las Virgenes Municipal Water Dis-
trict.  The 118,000-acre  service area of the District
included coastal Malibu, portions  of the Santa Monica
Mountains, and inland areas  of  western Los Angeles and
eastern Ventura Counties, California.  The issues of
population growth  and land use presented unusual prob-
lems due to conflicting philosophies within the region.
Population growth  had been rapid and was considered
desirable in the inland urban areas, whereas  the
coastal portions of  the study area had defeated sewer
bond issues (presumably on alleged growth-inducing
impact) three  times  since 1966.  KSIM was employed to
model the long-term  effects  of  implementing  the Area-
wide Facilities Plan.

The KSIM team  consisted of the  staff of the Depart-
ment of Environmental Studies.  Staff members included
specialists trained  and experienced in the fields of
water quality  and  reclamation,  public health, social
services, local and  regional  planning, ecology, and
environmental geology.

Nine variables were defined  and initial values set by
the team according to procedures outlined in the  pre-
vious section.  They are given below:

     Population (POP) represents the number  of people
living in the  study  area.  An initial value  of 15
percent was established.

     Pollution (POL) indicates  the level of  air,  water
and noise pollution  in the study area.  The  initial
value of 45 percent  represents  the ratio of  the pres-
ent measured levels  of the three pollutants  to the
regulatory standards set  for maximum limits.

     Desirability  (DES) denotes the general  attrac-
tiveness of the study area based on such  factors  as

-------
aesthetics, amenity, and quality of life.  Largely a
qualitative determination, desirability was given an
initial value of 70 percent.

     Urban Services (US) measures the quantity
and quality of public services such as schools, police,
fire protection, and utilities provided to area inhabi-
tants.  Services were judged more than adequate as
indicated by an initial value of 20 percent  (5 percent
above population).

     Resource Consumption  (RC) represents the total
amount of energy and water consumed in the study area.
The initial value of 11 percent is the ratio of present
usage to that required by the ultimate population
anticipating the effect of current and future energy
conservation measures.

     Residential Density  (RD) indicates the ratio of
high density urbanization to developed land, initially
estimated at 25 percent.

     Employment Dispersal  (ED) is the relationship of
distance between places of employment and residence.
The initial value of 27 percent is the quantitative
estimate of the ratio between maximum and existing
distance driven to work.

     Wastewater System Capacity (WSC) represents the
proposed project.  The initial value of 9 percent is
the existing fraction of ultimate projected wastewater
treatment capacity.

     Cost of Living  (COL) is probably the most quali-
tative of all variables.  The initial value of 20 per-
cent was based on the assumption that the cost of
living would not exceed five times its present value.

Through cross-impact analysis, the Alpha and Beta
matrices were completed and checked for obvious errors.
The  derived values were tested on the computer and
subsequent refinements were made.  Tables 1 and 2 show
the Alpha and Beta matrices which produced the regional
model depicted in Figure  1.
      Table 1.  Alpha Matrix of Long-Term Impacts
      Table 2.  Beta Matrix of Short-Term Impacts
      [Cross-impact entries relating POP, DES, POL, US, RC, RD, ED, WSC,
      and COL; the individual values are not reproduced here.]

   Figure 1.  Regional Model, Areawide Facilities Plan
      [Projected variable values (scale 0 to 1.0) over the planning period.]
The simulation indicates continued urbanization in the
study area, shown by the increases in  population, urban
services, and residential density.   The fact that urban
services increase slightly  is  an  indication that the
area may be able to retain  much of its rural character.

Resource consumption increases at a rate slightly
greater than the population, indicating a slight in-
crease in per capita consumption.   The continued rise
in pollution can be directly tied to its strong rela-
tionship with both population  and resource consumption.
The steady decrease in employment dispersal indicates
an increase in the number of employment centers in the
project area.  (The area has virtually none at  present.)
Joining employment dispersal in an overall decline over
the planning period is desirability.   The decline is
attributable mainly to increases in population and
pollution.  Cost of living also interacts here, tending
to reduce desirability.  The increases in popu-
lation, resource consumption, and urban services indi-
cate future growth, which is accommodated by in-
creased wastewater system capacity.  The demand for
wastewater treatment facilities for the year 1987 in
the simulation does not rise above the capacity that
would be provided by the proposed project.

KSIM was again employed in  association with an  environ-
mental impact report for the Master Plan of Water and
Reclamation Facilities of the  Los Alisos Water  District,
a 5,400-acre district located  within a rapidly  growing
area of Orange County, California.   Several constraints
on growth exist within the  district, including  land use
restrictions associated with the  flight path of a
military air base and lack  of  regional wastewater treat-
ment and disposal facilities.

Team members participating  in  the KSIM analysis in-
cluded members of the Department  of Environmental
Studies.  District staff and the  Board of Directors
reviewed initial KSIM models and  offered suggestions
for refinement of the models.  Nine variables defined
for this study include the  following:

    Population  (POP) is a quantitative variable of the
number of people residing within  the district.

-------
     Nonrenewable Resources  (NRR) represents consump-
tion levels of nonrenewable resources within the dis-
trict including energy consumption and use of  con-
struction materials.

     Public Services (PUB) reflects levels of  public
services including schools, parks, solid waste col-
lection, police protection, public transportation, and
social services.

     Pollution (POL) is a semi-quantitative variable
reflecting levels of water and air quality within the
district.

     Natural Resources (NAT) reflects both the qualitative
and quantitative ecological, agricultural, mineral,
and archaeological resources of the district.

     Economic Resources (ECO) reflects both quanti-
tative and qualitative aspects of economic activity
within the district including property values, busi-
ness activity, family income, and employment.

     Desirability (DES) is a qualitative variable
reflecting the characteristics which may cause people
to desire to live in an area.  Examples of these charac-
teristics include levels of public services; closeness
to work, shopping, and friends; quality of schools;
and aesthetic qualities.

     Reclamation (REC) is a quantitative variable
representing the amount of wastewater generated within
the district that must be reclaimed.

     Water Consumption (WAT) is a quantitative vari-
able representing the amount of water used within the
district.

Three KSIM models were generated to reflect possible
changes in land use and in wastewater treatment and
disposal.  The first model (Case 1) was the most con-
servative, reflecting current land use restraints
and wastewater treatment and reclamation near  or with-
in district boundaries.  Case 2 reflected the  addition
of some capacity in regional treatment and disposal
facilities.  Case 3 reflected a situation where land
use restraints associated with the flight path of the
military air base were liberalized and regional waste-
water treatment was available.

Alpha and Beta matrix values for Case 1 are in Tables
3 and 4.  Alpha and Beta values for Cases 2 and 3 were
similar; however, the initial value for reclamation
was lowered to reflect capacity in regional wastewater
treatment and disposal in Cases 2 and 3.  The  initial
value for population was also lowered in Case  3 to
reflect a potential higher ultimate population.
          Table 4.  Beta Matrix for Case 1
      [Short-term cross-impact entries relating POP, NRR, PUB, POL, NAT,
      ECO, DES, REC, and WAT; the individual values are not reproduced here.]
Computer projections of each variable  for Case  1  are
shown in Figure 2.  Population rises to the  approxi-
mate levels predicted by current land  use elements.
Nonrenewable resource consumption increases  in  the dis-
trict, reflecting increased urbanization.  Public ser-
vice levels rise slightly. Pollution levels  increase
over time reflecting air quality degradation in the
region.  As urbanization increases, natural  resources
decrease in the district.  Economic resources rise
steadily over time.  Desirability remains at approxi-
mately the same level.  Reclamation and water consump-
tion increase in relation to population increase.
Computer projections of variables for  the other two
cases were similar to Case 1 with higher population,
reclamation, and water consumption, and a sharper
decline in natural resources.

           Figure 2.  Regional Model for Case 1
      [Projected variable values (scale 0 to 1.0) by year.]
         Table 3.  Alpha Matrix for Case 1
      [Long-term cross-impact entries relating POP, NRR, PUB, POL, NAT,
      ECO, DES, REC, and WAT; the individual values are not reproduced here.]
A brief description of KSIM and the results of the
computer simulations were included in the environ-
mental impact report for the district's Master Plan.
Public agencies, organizations and individuals review-
ing the EIR had no adverse comments to offer in regard
to the use of KSIM.

Use of KSIM by the Department of Environmental Studies
has not been limited to wastewater projects.  KSIM  has
provided valuable assistance in analyzing environ-
mental impacts associated with expanded agricultural
development on the Colorado River Indian Reservation  in
western Arizona.  Team members were limited to the
staff of the Department of Environmental Studies.
Variables selected for the analysis included:  Indian
Self-determination (ISD), Agriculture (AG), Resource

-------
Consumption (RC), Scientific Relationships (SR),
Pollution (POL), CRIT System Procedures (CSP), Quality
of Social Services (QSS), Employment (EMP), and Tribal
Income (TI).

Computer projection of these variables is shown in
Figure 3.  Several trends are apparent in this simu-
lation.  Indian self-determination rises, possibly in
response to higher tribal income brought about through
increased agriculture and employment.  Resource con-
sumption also increases in response to increased
agricultural development.  Increased agriculture also
appears to result in increased pollution levels and a
lowering of scientific relationship values.  Quality
of social services is projected to rise even though the
government role (CRIT System Procedures) on the reser-
vation decreases.

Figure 3.  Model for Colorado River Indian Reservation
      [Projected variable values (scale 0 to 1.0) over the planning period.]
The Department of Environmental Studies is currently
applying KSIM to the evaluation of environmental
aspects of a water-oriented recreational development.
Variables including recreation, resource consumption,
environmental quality and jurisdictional framework
have been defined for this simulation.

     Operational Aspects and Implications of Use

The application of KSIM to appropriate projects has
distinct functional advantages.  It allows a multi-
disciplinary interchange as mandated by current
environmental legislation, and provides a sound basis
for decisionmaking through the interactions of the
panel approach.  Its use of cross-impact analysis
encourages disciplined inquiry and allows decision-
makers to appreciate the magnitude and complexity of
factors affecting planning decisions.  As it provides
a model of the future, it may be periodically checked
to determine if adopted policies are having the effects
initially perceived.  Future changes in goals and
policies can be readily integrated into the KSIM model
through ongoing refinement.

Additionally, KSIM has advantages related to EIR/EIS
processes.  Evaluation of long-term, secondary envi-
ronmental impacts as required by NEPA and other acts
is greatly facilitated by the KSIM projection.
Displaying the projections  at work sessions or public
hearings and soliciting  comment  on the models can pro-
vide decisionmakers with a  unique  way of involving the
public in the planning process.

KSIM is attractive to many  reviewers and public agen-
cies as participation in cross-impact analysis does
not require a highly technical background.   Because of
this relative simplicity in use, decisionmakers and
other interested parties can actively participate in
the formation of the simulations.  However, use of
KSIM by many agencies is limited by the avail-
ability of the necessary data processing equipment.  Active
participation by many decisionmakers is also hampered
by the time required for completion of both Alpha and
Beta matrices by the KSIM panel.

We believe KSIM has wide applicability for  environ-
mental studies on many types  of  projects in addition to
those discussed in this paper.  Generally, KSIM can be
applied to projects whose impacts go well
beyond the immediate time frame.  It is particularly
well suited for projects requiring master planning
techniques, as KSIM itself is  a planning exercise.   We
believe KSIM will be more commonly used for projects
which require a decision about implications of future
growth.

We are currently researching  methods to modify,  monitor,
and refine the KSIM procedure so that it can be more
easily implemented.  One  approach  is to reduce the
time required to formulate  and run  a simulation.  Ways
in which this may be accomplished  include:   the com-
posing of checklists of  typical  variables for specific
types of projects, standardizing and refining methods
for setting variable limits and  determining initial
values, and adjusting the program  and procedure to
require only the Alpha cross-impact matrix.

It may be possible to save  much  panel discussion time
by bringing the decisionmakers into the exercise at  a
later stage of the simulation.   Decisionmakers would
then have more of an evaluative  function since a core
group would have already defined variables and set
initial values.  The use  of questionnaires  is also
being investigated as a  way to document the input of
KSIM team members who cannot  participate in the dis-
cussion in person.

Future research on the KSIM procedure is needed to sub-
stantiate the accuracy and validity of the model.  This
can only be done through long-term monitoring and
evaluation of a KSIM model, and subsequent revision of
the procedure.  As with any planning tool, however,
KSIM's real benefit is not in predicting the future
but in facilitating the formulation of policy which
will at least set a rational course of action towards
the future.
                      References

 1.  Kane, J., I. Vertinsky, and W. Thomson, "KSIM: A
     Methodology for Interactive Resource Policy Simulation,"
     Water Resources Research 9(1): 65-79, 1973.

 2.  Kruzic, P. G., "Cross-Impact Simulation in Water
     Resource Planning," U.S. Army Engineers Institute,
     Fort Belvoir, Virginia, 1974.

 3.  Suta, B. E., "KSIM Theoretical Formulation, A Para-
     metric Analysis," Stanford Research Institute, Menlo
     Park, California, 1974.

-------
                                   A CRITICAL APPRAISAL OF MATHEMATICAL MODELS
                                         FOR LAND SUBSIDENCE SIMULATION

                                    E. John Finnemore and Robert W. Atherton
                                              Systems Control, Inc.
                                              Palo Alto, California
     Land subsidence resulting from the withdrawal of
geofluids (oil, gas, groundwater, or geothermal water
and steam) can have a major environmental impact.
While mathematical models for simulating land subsid-
ence caused by pore fluid withdrawal are still in a re-
latively early stage of development and are not yet
very numerous, the subject is drawing increasing atten-
tion.   Documentation is available on two simple and
nine advanced models of this type, and additional mod-
els under  various  stages of development were identified.
The appraisal of such models reported here required an
examination of the physics included, the model equa-
tions, the numerical methods, and the practical applic-
ability of each.  The status of this field and the need
for further work are reviewed.

                       Background

     Examples of major subsidence resulting from
the withdrawal of  petroleum, groundwater, and geo-
thermal fluids are respectively:  8.8m vertical and
3.6m horizontal movement over the Wilmington oil field,
Long Beach, California; 8.8m vertical movement in the
San Joaquin Valley, California; and 4.5m vertical and
0.8m horizontal movement at the Wairakei geothermal
field in New Zealand.  Numerous instances of lesser
subsidence have occurred around the world.1

     The total environmental impact of subsidence ob-
viously depends upon the geographic and economic devel-
opment of  the site.  In populated areas, roads, railroad
tracks, sewers and drains, power lines, pipelines,
wells, airfields,  houses, and buildings may be damaged.
In rural areas, dams  and levees, irrigation chan-
nels, agricultural drains, wells, electric transmission
towers, vegetation  patterns, and crop irrigation pat-
terns may  be affected.  Along coastlines and rivers the
areas subject to flooding may be altered and increased,
and drainage paths may be changed.  In any location,the
incidence  of minor earthquakes may be affected.  Within
the producing field itself, wells and well casings,
pipelines, and plant  (if any) may be damaged, and the
storage and transmissivity of the producing porous med-
ium (fluid reservoir) may be reduced.  It must be noted
that the resulting bowl-shaped surface depressions
which usually form may be significantly offset from the
producing  wellfield, as occurred at Wairakei.  Also,
for a number of the impacts just mentioned, horizontal
ground motion (not predictable by one-dimensional sub-
sidence models) can have far more serious effects than
vertical motion.

     Models are employed for two reasons:  they provide
a mechanism with which to improve our understanding of
the nature and behavior of a producing field, and they
provide a  means for predicting subsidence and for in-
vestigating alternative schemes for subsidence mitiga-
tion.  As  early as 1925, while providing an understand-
ing of the consolidation of soils for the first time,
Terzaghi deemed theories of models to be among the most
important  and indispenslble engineering tools.

     The work reported here was performed as part of a
study of subsidence associated with geothermal develop-
ment, supported by the National Science Foundation.
The models reviewed in that study also included certain
other non-subsidence models, which have frequently been
incorporated into  the subsidence models.  The work was
accomplished by a review and analysis of the published
and unpublished literature on models for subsidence
caused by any type of geofluid withdrawal.  This was
supplemented by discussions and interviews with re-
searchers in the field of subsidence modeling, and by
attendance at numerous seminars and symposia where these
topics were considered.

                     Model Appraisal

     Formulating a mathematical model for land subsid-
ence is no small undertaking.  As usual in modeling geo-
physical processes, the accuracy of the model is limited
by the ability to describe the detailed structures pro-
vided by nature.  Even when the scope of the modeling
effort is reduced, two major problems remain:  the
question of data availability and the problem of com-
putational tractability.  It is not clear whether the
data base for a large-scale model is available at this
time.  With regard to computational tractibllity, a com-
prehensive three-dimensional reservoir subsidence model
would tax even today's large computers.

     Despite the drawbacks listed above, models are in-
valuable as a means of studying a phenomenon, correlat-
ing data, identifying the most crucial data needs, or
extrapolating to unknown conditions.  In studying sub-
sidence, we have been compelled to rely on a hierarchy
of models rather than a single definitive model.  Our
appraisal has consisted of three parts:  (1) a study of
the simplest class of subsidence models, (2) an investi-
gation of the form of an advanced subsidence model which
would provide the minimum predictive capability, and (3)
a study of the state-of-the-art in subsidence modeling.

Simplified Subsidence Models

     Simplified models serve two purposes.  First, they
provide models which can be used independently of large
computers and program decks.  Thus, they provide an in-
expensive means of making a first estimate of subsid-
ence, and they provide a vehicle for the non-expert in
solid mechanics to obtain an understanding of the sub-
sidence process.  Second, the site data required to use
a simple model are limited and more likely to be avail-
able.
     Simplified subsidence models have been developed
by adapting the work of Geertsma.3  These simplified
models will be a key portion of a subsidence handbook
being developed to provide a guide to the analysis of
potential subsidence associated with geothermal develop-
ment.4
     The simplest compaction model can be expressed
as follows:

                      ΔH = H Cm Δp

This simple equation illustrates the fundamentals of
compaction.  The compaction ΔH is the product of the
reservoir thickness H, the reduction in pore pressure
Δp, and the compaction coefficient Cm.  Therefore, as
Geertsma pointed out, compaction can occur in well con-
solidated reservoirs if they are thick and experience
a large decrease in pore pressure.3  Slightly more
sophisticated models have also been presented in which
Cm is calculated using an integral over total effective
stress.
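     As a numerical illustration of this relation, the short calculation
below applies it directly; the reservoir properties used are entirely
hypothetical assumptions, chosen only to show the units involved.

    # Worked example of the simple compaction relation  ΔH = H · Cm · Δp.
    # The reservoir properties below are hypothetical assumptions.
    H_m = 200.0          # reservoir thickness, m (assumed)
    Cm = 1.0e-6          # compaction coefficient, 1/kPa (assumed)
    dp_kPa = 5000.0      # reduction in pore pressure, kPa (assumed)

    dH_m = H_m * Cm * dp_kPa
    print(f"Estimated reservoir compaction: {dH_m:.2f} m")   # 1.00 m here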

-------
     This compaction equation alone does not account
for the response of the overburden to reservoir compaction;
it is the surface deformation that is evident as sub-
sidence.  A simple overburden model is also available.

Description of an Advanced Subsidence Model

     In modeling subsidence, we are interested in de-
scribing the sinking of the earth's surface due to ad-
justments in the subterranean material stimulated by
the withdrawal of geothermal fluids.  Since the fluid
withdrawal is the subsidence stimulus, the first re-
quirement is for a well-bore model and a reservoir mod-
el to relate the pore-pressure distribution in the res-
ervoir to the rate of withdrawal at the surface.
Since reservoir compaction is the precursor of subsid-
ence, a second need is a constitutive equation, a model
for mechanical behavior, for the reservoir material.
These two models must be coupled (interactive), as com-
paction affects the porosity and permeability of the
reservoir.

     Two additional distinct material models will be
required:  one for clays and one for the overburden.
Clays in communication with the reservoir will also
undergo reduction in pore pressure, and consequently
will compact, though usually with considerable time de-
lay.  The overburden, experiencing a change in stress
at the reservoir boundary, will deform; the overburden
itself can be treated as an elastic material.
The all-important reservoir will likely require treat-
ment as a plastic material in order to account for the
well-known irreversible nature of compaction.  The
same argument holds for clays.  The exact form of the
required two plastic constitutive relations is yet to
be determined.

     To summarize, an advanced subsidence model must
include computer subroutines to generate the character-
istic physical properties of the site.  It should con-
tain a well-bore model, a reservoir model, and three
models of material behavior, i.e., for the reservoir,
communicating clays, and the overburden.  All models
should be implemented using the best available numeric-
al methods.

Survey of Subsidence Models

     The recent interest in environmental aspects of
land subsidence, and the increased availability of
large computers, have provided incentives to the devel-
opment of models for subsidence.   A considerable number
of these models are presently under continuing develop-
ment.  In addition, more subsidence models will result
from plans to incorporate deformation models into many
of the reservoir models (fluid flow, or fluid-heat flow
only) that are also presently under development; these
future possibilities are not discussed here.

      The survey reported here identified two simple and
fourteen advanced models designed to simulate subsid-
ence caused by geofluid withdrawal.  The developers of
these models and the applications for which they were
designed are listed in Table 1.   In the miscellan-
eous category, the first model has a general capability
for any type of man-induced subsidence, and the second
simulates natural geological subsidence in a basin ex-
periencing sedimentary deposition.   While much has been
learned about the effects of groundwater withdrawal
since Terzaghi described his consolidation theory in
1925, advanced models for simulating resulting ground
deformation are seen to have appeared only after 1972.
Mathematical models for computing land subsidence
caused by the extraction of oil and/or gas from the
ground were probably first formulated in the 1950's.
The early models were all analytical models; a number
of the more recent numerical subsidence models have
been developed by petroleum company personnel, so that
they remain proprietary.   The recent upsurge of inter-
est in alternate  energy sources has stimulated the de-
velopment of models for geothermal resources.  However,
only two geothermal subsidence models are, as of
February 1976, near first-version completion, and neither
of them has been tested in applications yet.

                         Table 1
         MATHEMATICAL MODELS OF LAND SUBSIDENCE
            CAUSED BY PORE FLUID WITHDRAWAL

     TYPE            DEVELOPER(S)
     Miscellaneous   Sandhu and Wilson (1970)
                     Jacquin and Poulet (1970)

     Groundwater     Gambolati, et al. (1973, 1974)
                     Helm (1974)
                     Narasimhan (1975)

     Oil and Gas    *McCann and Wilts (1951)
                    *Geertsma (1966)
                     Geertsma and van Opstal (1973)
                     Frazier (1973), Archambeau (1974)
                     Finol and Farouq Ali (1975)
                     Paris and Farouq Ali (ongoing)
                     Kosloff and Scott (ongoing)
                     Oil Companies (proprietary)

     Geothermal      Pritchett, Garg, Brownell (1975)
                     Lippmann and Narasimhan (ongoing)
                     Safai and Pinder (ongoing)

    *Simple models (remainder are advanced models)

    Each of the models listed  in  Table  1  is  discussed
in the following paragraphs, in the  same  order.

    Sandhu and Wilson developed a finite  element method
for the general analysis of  land  subsidence.   It per-
mits the consideration in two  or  three  dimensions of
complex geometry, arbitrary  time-varying  boundary con-
ditions, non-homogeneity as well as anisotropy, and
non-linear and time-dependent  material  behavior includ-
ing viscoelasticity, creep,  temperature effects, resid-
ual stresses and plastic behavior.

    Jacquin and Poulet developed  a two-dimensional
(axi-symmetric) computer model to study the  hydrodynam-
ic patterns in a naturally subsiding sedimentary basin;
With time and deposition of  successive  sand  and clay
strata, the depth of the conical-shaped basin  increased
and water was expelled from  the clay.   Fluid flow was
horizontal in the sands and  vertical in the  clays.

    Gambolati et al. developed a  two-step mathematical
model to analyze subsidence  in the complex,  unconsolid-
ated aquifer-aquitard system underlying Venice, Italy.8,9
The hydraulic pressures were calculated in a two-
dimensional vertical cross section in radial coordin-
ates in the first step by a  model based on the diffus-
ion equation, which was solved with  a finite element
technique.  The values of the  hydraulic heads  in the
aquifers were then used in the second step as  time-
dependent boundary conditions  in  a set  of one-
dimensional vertical consolidation models, which were
solved with a finite difference technique.   The com-
paction models are based on  the one-dimensional form
of the classic diffusion equation, written in  terms of
one unknown, the fluid potential, and employing only
one elastic constant, the vertical compressibility, α.
While the compressibility, α, of any layer may be a
non-linear and irreversible function of the pressure
head, at Venice the non-linearity was considered neg-
ligible.  The irreversibility was provided by using
two α values for each layer, one for compaction (pre-
consolidation) and another  (about one-tenth  as large)
for expansion.

-------
     Although model calibration was hampered by the
sparseness of the available data, the study was able to
predict the results of alternative mitigation measures.
The main disadvantage of this model lay in the limita-
tions imposed by the requirement of radial symmetry.
When using the two-α-value method described above, it
is very important that the computer code should keep
track of the past maximum effective stress at every
point, and use the smaller (expansion) value for com-
pression when the effective stress does not exceed the
past maximum (preconsolidation) stress.
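     The bookkeeping just described can be stated compactly in code.  The
sketch below tracks the past maximum effective stress for a single layer and
applies the larger compaction value of α only when that maximum is exceeded;
the coefficient values and stress history are illustrative assumptions, not
values from the Venice model.

    # Sketch of the two-α (compaction/expansion) selection with
    # preconsolidation stress tracking, as described above.  The coefficient
    # values and the stress path are illustrative assumptions.

    def compaction_increment(stress, state,
                             alpha_virgin=1.0e-4, alpha_elastic=1.0e-5):
        """Return the strain increment for a new effective stress (kPa)."""
        if stress > state["max_stress"]:
            # Loading beyond the past maximum: elastic up to the
            # preconsolidation stress, non-recoverable (virgin) beyond it.
            d_strain = (alpha_elastic * (state["max_stress"] - state["stress"])
                        + alpha_virgin * (stress - state["max_stress"]))
            state["max_stress"] = stress
        else:
            # At or below the preconsolidation stress: recoverable behavior.
            d_strain = alpha_elastic * (stress - state["stress"])
        state["stress"] = stress
        return d_strain

    state = {"stress": 100.0, "max_stress": 100.0}
    history = [120.0, 150.0, 130.0, 150.0, 180.0]   # effective stresses, kPa
    print([compaction_increment(s, state) for s in history])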

     Helm has developed two one-dimensional mathemati-
cal subsidence models for groundwater withdrawal.10-12
Cumulative compaction and expansion in a series of
aquifers and aquitards was computed from the known
applied stress history and from two storage coeffi-
cients (compressibility values), one for recoverable
and the other for non-recoverable compression.  By dis-
tinguishing between present effective stress and past
maximum effective stress at any depth, these models em-
ployed the two storage coefficients in a manner very
similar to that used in the model of Gambolati, et al.8,9
Non-recoverable compaction occurred only when the past
maximum effective stress at any point was exceeded.

     Helm's two models are also both based on the one-
dimensional diffusion equation.  One model assumes
linear (not stress-dependent) coefficients, and the
other assumes non-linear (stress-dependent) coeffi-
cients.  Two transformations of applied stress enabled
the non-linear formulation to be represented by an
equivalent linear homogeneous formulation.  Both models
were solved by finite difference techniques.

     The models were applied to a series of 21 aqui-
tards at Pixley, in the San Joaquin Valley, California.
There, continuous records of hydraulic head and com-
paction revealed 3.19 feet of compaction between 1959
and  1971, although there was no long-term decline of
the  groundwater level which experienced seasonal fluct-
uations of about 100 feet.  The maximum error in the
predictions of the non-linear model was 2.9% compared
with 17% for the linear model.
     Narasimhan has developed a subsidence model named
TRUST.13  TRUST will simulate transient groundwater
motion in variably saturated, deformable, heterogen-
eous, isotropic, multidimensional, porous media.  It in-
corporates a one-dimensional subsidence model employing
Terzaghi's consolidation theory into a general three-
dimensional, isothermal, groundwater flow model.  The
non-linear governing equation employs the pore-pressure
head as the dependent variable; it is solved using an
integrated finite difference method.

     TRUST was tested and verified on nine different
problems.  Applications involving deformation included:

      •  One-dimensional, time-varying consolidation of
        clay under a foundation load
      •  One-dimensional shrinkage of an active (benton-
        ite) clay slurry, in which large volume changes
        can occur in very short times
      •  One-dimensional drainage (partially saturated
        flow) of a deformable sand
      •  Two-dimensional draining (both fully and part-
        ially saturated) flow in a deformable sand

      •  Two-dimensional drainage and deformation around
        a fresh excavation in soft clay.
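     For readers unfamiliar with the one-dimensional Terzaghi consolidation
theory mentioned above, the sketch below solves the classical consolidation
(diffusion) equation for excess pore pressure with a simple explicit finite
difference scheme.  It is a generic textbook discretization, not the
integrated finite difference formulation used in TRUST, and the parameter
values are assumptions chosen only for illustration.

    import numpy as np

    # Explicit finite difference solution of Terzaghi's one-dimensional
    # consolidation equation  du/dt = cv * d2u/dz2  for the excess pore
    # pressure u in a clay layer drained at both faces.  Parameter values
    # are illustrative assumptions only.
    cv = 1.0e-7                      # coefficient of consolidation, m^2/s
    H = 2.0                          # layer thickness, m
    nz, nt = 41, 500
    dz = H / (nz - 1)
    dt = 0.4 * dz * dz / cv          # satisfies the explicit stability limit

    u = np.full(nz, 100.0)           # initial excess pore pressure, kPa
    u[0] = u[-1] = 0.0               # drained (zero excess pressure) faces
    for _ in range(nt):
        u[1:-1] += cv * dt / dz**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

    # Average degree of consolidation after nt steps.
    print(1.0 - u[1:-1].mean() / 100.0)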

     McCann and Wilts developed two analytical models
as part of a mathematical study of the oil-field sub-
sidence in the Long Beach area.14  At the time (c.
1950), it was decided that the only physical model
which could logically describe the general known pro-
perties of the soil and be amenable to mathematical
solution was one of a three-dimensional, homogeneous,
isotropic elastic medium of  semi-infinite  (downward)
extent.  Solutions were obtained  for motions  (vertical
and horizontal, as functions of depth)  and  stresses
developed in such a medium under  the action of  general
distribution of the two alternative types of  ideal-
ized subsurface disturbance  forces.  These  forces were
intended to represent the effect  of drops in  the re-
servoir oil pressures, and ways were devised  to obtain
the forces from the pressure drops.

     The first type of disturbance force was  called a
"tension center" (or "tension sphere"), and the second
type consisted of a pair of  equal vertical  forces act-
ing in opposite directions a short distance apart, con-
sequently named a "vertical  pincer."

     Analysis yielded the stresses and  deformations
caused by such a single disturbance force.  Employing
the principle of superposition, arrays  of tension cen-
ters and vertical pincers were sought which would yield
the deformation observed at  that  time.  The assumption
of an elastic material is essential for the use of
superposition, and McCann and Wilts themselves  stated
that the most serious difficulty  with their analysis
was the failure of the earth to behave  like an  elastic
material.  They found that the tension  center model
could be arrayed to fit all  the observed deformation
data to the accuracy with which they could  be measured,
while the vertical pincer model could fit none, and so
they concluded that only the tension center model
should be considered further.

     Geertsma developed a three-dimensional analytical
subsidence model for poro-elastic displacements around
a contracting oil reservoir  in a  semi-infinite, homo-
geneous, isotropic rock medium.    It employed a "nu-
cleus of strain" concept,  and it  integrated the result-
ing displacement function over the volume of a hori-
zontal, disc-shaped producing reservoir.  The same lin-
ear elastic properties were  specified within and out-
side the reservoir, into which no natural recharge was
allowed as the contained pore pressure was  reduced,
causing changes to both internal  and external stresses
and strains.  Results of evaluations with this model
indicated that notable subsidence (as opposed to com-
paction) can be expected only above large reservoirs
consisting of highly compressible sediments and exper-
iencing substantial pore-pressure reductions.

     Geertsma's model (poro-elastic theory) described
here, and McCann and Wilts' model (elastic  theory) de-
scribed previously, were reviewed, compared, related,
and improved by Gambolati.16  In particular, Gambolati
demonstrated that Geertsma's model can be easily ex-
tended to incorporate heterogeneity of  the  reservoir
(reservoir material more easily deformed than its
overburden).
     Geertsma and van Opstal evaluated  conceivable nu-
merical methods for calculating subsidence  above oil
or gas reservoirs of arbitrary three-dimensional shape
and change in pressure distribution.    They concluded,
in 1973, that the simplest method, which still pro-
vided a good overall impression of the  spatial  subsid-
ence distribution,  was one based  on the linear  elastic
theory of nucleus of strain in the half space.  They then
tested a suitable three-dimensional finite element pro-
gram named ASKA for such purposes, and  obtained results
which agreed quite satisfactorily with  their  analysis.
They also developed another  computer program  to help
integrate their nucleus-of-strain theory over a com-
pacting reservoir of arbitrary shape, by dividing it
up into a finite number of small  parts.  This latter
approach they used to predict subsidence and  horizontal
displacement patterns over the Groningen gas  field from
1975 to 2100.
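     The superposition approach just described can be sketched in a few
lines: the reservoir is divided into small blocks and the surface subsidence
contributions of the blocks, each treated as a nucleus of strain, are summed.
The kernel below is the commonly quoted form of the half-space nucleus-of-
strain surface displacement attributed to Geertsma, and it, together with the
geometry, properties, and pressure drop, should be treated as an illustrative
assumption; the original papers3,17 give the exact expressions.

    import numpy as np

    # Superposition of nucleus-of-strain contributions over a discretized,
    # disc-shaped reservoir.  All values are illustrative assumptions.
    cm, nu = 1.0e-6, 0.25            # compaction coefficient (1/kPa), Poisson's ratio
    depth, thickness = 1000.0, 50.0  # reservoir depth and thickness, m
    dp = 5000.0                      # uniform pore-pressure reduction, kPa
    R = 2000.0                       # reservoir radius, m

    def nucleus_uz(r, c, dV):
        """Surface subsidence (m) from one compacting block of volume dV at depth c."""
        return cm * (1.0 - nu) / np.pi * dp * dV * c / (r**2 + c**2) ** 1.5

    # Discretize the disc into 100 m x 100 m blocks.
    xs = np.arange(-R, R, 100.0) + 50.0
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= R**2
    dV = 100.0 * 100.0 * thickness   # block volume, m^3

    # Subsidence at several points along the surface x-axis.
    for x_obs in (0.0, 1000.0, 2000.0, 4000.0):
        r = np.sqrt((X[inside] - x_obs) ** 2 + Y[inside] ** 2)
        print(x_obs, round(float(nucleus_uz(r, depth, dV).sum()), 3))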
     Frazier and Archambeau  developed an elastic re-
servoir model with interactive fluid flow and rock
strain equations, which they applied to the Long Beach

-------
oil field with both production and injection.18,19  It
is an axi-symmetric or planar (two-dimensional) finite
element model, which reduces to an implicit time-step
scheme.
     Finol and Farouq Ali developed a two-phase, two-
dimensional black oil model for simulating reservoir
production behavior and simultaneous ground deforma-
tion.20  Reservoir compaction was described on the ba-
sis of reported experimental data, from which the sur-
face subsidence was calculated using Geertsma's theory
of poro-elasticity and nucleus-of-strain concept.15
Fair results were obtained with a simulation of the pro-
duction and subsidence history of an oil field on the
Bolivar Coast of Western Venezuela.

     Paris and Farouq Ali are presently extending the
work of Finol and Farouq Ali described above, but no
results have been published as yet.
     Kosloff and Scott are developing a deformation
model for the Wilmington oil field, which requires pore
pressure histories as input data.  This procedure has
the advantage of removing uncertainties in the fluid-
flow patterns caused by large variations in permeabil-
ity, but it makes the model more difficult to use
with future production schemes.   They consider that
soils exhibit plastic behavior from the start, and with
stress they strain harden and eventually become elas-
tic.  Accordingly, they have used the plastic cap model
as the basis of their constitutive relations.  A two-
dimensional, axi-symmetric version of this deformation
model appears to have given good results for subsid-
ence at Wilmington, in spite of the block-shaped zones
in the reservoir formed by faults.  They are now at-
tempting to check these results with a three-
dimensional version of the model.

     Oil Companies are known to have various models for
simulating subsidence over oil and gas fields  and in
permafrost.  Because of their proprietary nature, few
details are available.

     Pritchett, Garg, Brownell and others are in the
process of developing probably the first multi-
dimensional, deformable, geothermal reservoir model.21-23
They have constructed and tested separate two-phase
fluid-heat flow and deformation models, and presently
are in the process of coupling these.  The multi-
dimensional deformation model is designed to make
possible the use of a variety of elastic and/or
plastic constitutive relations.  They plan first to
apply the model to the Wairakei geothermal field in
New Zealand, and wish thereafter to apply it to a site
in the Imperial Valley of California.

     Lippmann and Narasimhan are presently working to
incorporate heat drive and temperature dependencies
into the previously described isothermal groundwater
model of Narasimhan.13  The resulting model will be a
one-phase, three-dimensional geothermal reservoir flow
model, combined with a one-dimensional deformation
model based on the Terzaghi consolidation theory.  It
will therefore be unable to simulate horizontal ground
movements.

     Safai and Pinder are presently developing a
single-phase, two-dimensional axi-symmetric geothermal
deformation model.  For this, Biot's three-dimensional
elastic theory is being extended to a three-dimensional
visco-elastic theory in such a way that the elastic
part can be weighted to control the amount of deforma-
tion irreversibility obtained.

              Discussion and Conclusions
     We have discussed subsidence models in three
categories:  simplified models,  advanced subsidence
models, and the state-of-the-art.

     The status of the current modeling efforts may be
 summarized as follows:  at least ten subsidence  models
 have been developed for pore-fluid withdrawal, and
 another five are known to be under development with
 three nearing completion.  Firm plans for at  least four
 more are known to the authors.
      Compared with the requirements of an advanced sub-
 sidence model, the present status of subsidence  model-
 ing is seen to be in its infancy.  With regard to spa-
 tial coverage, one- and two-dimensional capabilities are
 most common in the current modeling efforts;  the few
 three-dimensional capabilities have, as yet,  been little
 tested.  With regard to mechanical properties of reser-
 voir materials, most of the earlier models employed
 elastic deformation or Terzaghi's one-dimensional con-
 solidation theory.  More recently a few have  provided a
 capability for deformation irreversibility by incorpor-
 ating two different elastic coefficients, a larger one
 for compaction and a smaller one for expansion.  Devel-
 opment of models employing plasticity is just beginning.
      With regard to numerical methods, finite differ-
 ence methods have generally been employed in  the flow
 models, and finite element methods  in the deformation
 models.  There are, of course, a few variations upon
 this theme.
      The need for further work is evident from a com-
 parison of the above discussion and the previously
 stated requirements for an advanced model.  The most
 challenging problem lies in the modeling of the rheo-
 logical behavior (deformation properties) of  the geo-
 logical materials.  A second problem lies in deter-
 mining the best way to model the reservoir flow-
 reservoir compaction interaction.  Overall, there are
 also several difficult problems of numerical analysis in
 reducing the run time of multi-dimensional simulators to
 an acceptable level.

                     Acknowledgements
      This work was supported by the National Science
 Foundation-RANN, under NSF Grant No. AER 75-17298; the
 guidance and contributions of the NSF Program Manager,
 Dr.  Ralph Perhac, are particularly appreciated.   The
 authors also wish to express their gratitude to many  of
 the cited modelers who have contributed to the work
 through their helpful discussions.

                        References
 1.   Poland, J.  F. and G.  H. Davis,  "Land Subsidence Due
     to Withdrawal of Fluids," in D. J. Varnes and
     George Kiersch, eds., Reviews in Engineering Geo-
     logy, Vol.  2, Boulder, Colorado, Geol. Soc.  Amer-
     ica, pp. 187-269, 1969.
 2.   Terzaghi, Charles, "Principles of Soil Mechanics:
     VIII - Future Development and Problems," Engineer-
     ing News Record,  Vol. 95, No. 27, pp.  1064-1068,
     December 31, 1925.
 3.   Geertsma, J., "Land Subsidence above Compacting Oil
     and Gas Reservoirs," J. Pet. Tech., pp. 734-744,
     June 1973.
 4.   Systems Control,  Inc., "Handbook for the Analysis
     of Subsidence Associated with Geothermal Develop-
     ment and Its Potential for Environmental Impact,"
     Systems Control,  Inc., Palo Alto, California, in
     preparation.
 5.   Nystrom, G. A., "A Review of Soil Models for Sub-
     sidence Analysis," Technical Memorandum 5139-400-
     10, Systems Control, Inc., Palo Alto, California,
     November 1975.


-------
6.  Sandhu, R. and E. L. Wilson,  "Finite Element
    Analysis of Land Subsidence," in  Land Subsidence,
    Proceedings of the Tokyo  Symposium,  September 1969,
    No. 8 in LASH-Unesco series:   Studies and Reports
    in Hydrology, International Association of Scienti-
    fic Hydrology, Gentbrugge, Belgium,  and Unesco,
    Paris, Vol. 2, pp. 393-400, 1970.

7.  Jacquin, C. and M. T. Poulet,  "Study of the Hydro-
    dynamic Pattern in a Sedimentary  Basin Subject to
    Subsidence," paper No. SPE 2988,  presented at the
    45th Annual Fall Meeting  of the SPE  of AIME,
    Houston, Texas, 1970.

8.  Gambolati, G. and R. A. Freeze, "Mathematical
    Simulation of the Subsidence  of Venice,  1: Theory,"
    Water Resources Research, Vol. 9, No.  3,  pp. 721-
    733, June 1973.

9.  Gambolati, G. , P. Gatto and R. A. Freeze, "Math-
    ematical Simulation of the Subsidence of  Venice, 2:
    Results," Water Resources Research,  Vol.  10, No. 3,
    pp. 563-577, June 1974.

10.  Helm, D. C. , "Evaluation  of Stress-Dependent Aqui-
    tard Parameters by Simulating Observed Compaction
    from Known Stress History," Ph.D. dissertation,
    University of California, Berkeley,  California,
    175 pp., 1974.

11.  Helm, D. C., "One-Dimensional Simulation of Aquifer
    System Compaction Near Pixley, California:  1.
    Constant Parameters," Water Resources Research, Vol.
    11, No. 3, pp. 465-478, June 1975.

12.  Helm, D. C. , "One-Dimensional Simulation of Aquifer
    System Compaction Near Pixley, California:  2.
    Stress-Dependent Parameters," Water  Resources Re-
    search  (in press).
13.  Narasimhan, T. N., "A Unified Numerical Model for
    Saturated - Unsaturated Groundwater  Flow," Ph.D.
    dissertation, Dept. of Civil  Engineering, Univers-
    ity of California, Berkeley,  California,  244 p.,
    February 1975.
14.  McCann, G. D. and C. H. Wilts, "Mathematical Anal-
    ysis of the Subsidence in the Long Beach - San
    Pedro Area," California Institute of Technology,
    Pasadena, California, 117 p.,  November 1951.
15.  Geertsma, J. , "Problems of Rock Mechanics in Petro-
    leum Production Engineering," Proceedings of the
    First Congress of the International  Society of Rock
    Mechanics, Lisbon, Portugal,  Vol. 1,  pp.  585-594,
    September 1966.
16.  Gambolati, G. , "A Three-Dimensional  Model to Com-
    pute Land Subsidence, " Bulletin  of  the Interna-
    tional Association of Hydrological Sciences,  Vol.  17,
    No. 2, pp. 219-226, 1972.
17.  Geertsma, J. and G. van Opstal, "A Numerical Tech-
    nique for Predicting Subsidence above Compacting
    Reservoirs, Based on the Nucleus of Strain Concept,"
    Verhandelingen Kon. Ned. Geol. Mijnbouwk. Gen., Vol.
    28, pp. 63-78, 1973.
18.  Riney, T. D., et al., "Constitutive Models and Com-
    puter Techniques for Ground Motion Predictions,"
    Systems, Science and Software Report SSS-R-73-1490,
    La Jolla, California, March 1973.
19.  Archambeau, C. B., "Final Report, 1971-1973, ARPA-
    USGS Fluid Injection   Waste  Disposal Research Pro-
    gram," California Institute of Technology, Pasa-
    dena, California, 340 p., February 1974.
20.  Finol, A. and S. M. Farouq Ali, "Numerical Simula-
    tion of Oil Production with Simultaneous  Ground
    Subsidence," Journal of the Society  of Petroleum
    Engineers, Vol. 15, No. 5, pp. 411-424,  October
    1975.
21.  Brownell, D. H., S. K. Garg, and J. W. Pritchett,
     "Computer Simulation of Geothermal Reservoirs,"
     paper No. SPE 5381, presented at the 45th Annual
     California Regional Meeting of the SPE of AIME,
     at Ventura, California, April 2-4, 1975.

22.  Pritchett, J. W., et al., "Geohydrological Envi-
     ronmental Effects of Geothermal Power Production,
     Phase I," Systems, Science and Software report
     No. SSS-R-75-2733, La Jolla, California,
     September 1975.

23.  Garg, S. K., et al., "Simulation of Fluid-Rock
     Interactions in a Geothermal Basin," Systems,
     Science and Software report No. SSS-R-76-2734, La
     Jolla, California, September 1975.

-------
               UNSTEADY-STATE, MULTI-DIMENSIONAL ANALYTICAL MODELING OF WATER QUALITY  IN  RIVERS
                                              Robert W. Cleary
                                           Water Resources Program
                                            Princeton University
                                            Princeton, New Jersey
The unsteady-state, two- and three-dimensional, convec-
tive-dispersive mass transport partial  differential
equations which describe the concentration distribu-
tion of a contaminant released as a line, point or
general source have been solved analytically using in-
tegral transform methods.  General two- and three-
dimensional modular solutions are presented in closed-
form which may be used to obtain the exact solution to
a variety of particular boundary conditions and source/
sink formulations.  Several exact solutions for un-
steady-state, two- and three-dimensional water quality
problems subject to finite geometry boundary condi-
tions are derived from these modular solutions and pre-
sented in closed form.

The solutions may be used to model  many water quality
variables including:  BOD, temperature,  chlorides, and
dye tracers.  They are in the form of rapidly-conver-
ging infinite series and error functions and are
easily and inexpensively applied.  In addition to
their value as simulation tools, these closed-form ex-
pressions may be used to verify multi-dimensional,
digital computer numerical models which  in many cases
could not previously be independently checked.

                    Introduction

In recent years the modeling of water quality in rivers
has advanced from simple one-dimensional analyses to
the more accurate and also more complicated two-and
three dimentsional approaches^«2,3,4,5,6,7.   in  gen-
eral, most of the multi-dimensional models presented
in the literature have been solved by numerical tech-
niques.  Analytical solutions in two and three dimen-
sions are notably lacking.  Those closed-form expres-
sions which are available are generally  based on in-
finite geometry systems and have largely been borrowed
from the air pollution literature.   Boundary effects
have either been ignored or in special  cases been
accounted for using the method of images.  The method
of images works well for homogeneous first and second
type boundary conditions but for cases  of a non-homo-
geneous boundary concentration (e.g., concentration
varies as a function of time) or a boundary flux (e.g.,
benthic deposits of phosphorus diffuse into overlying
waters), the method fails.

Despite the wide availability of numerical models for
multi-dimensional water quality problems, it is the
opinion of the author that a need exists for analy-
tical models.  In a given water quality  situation the
selected model should be commensurate with the ques-
tions being asked.  Very often these questions can be
answered by a closed-form analytical  model without re-
sorting to the complexities, computation problems, and
expenses of a large numerical model.  To be sure, the
analytical model often requires coefficients to be
average constants while the numerical model is more
flexible, allowing for coefficients to  vary through-
out time and space.  However, in most field situations
one does not know how these coefficients vary spatially
and the numerical modeler often must use average con-
stants over a given large region.  Numerical models are
also often plagued by maladies inherent  in approxi-
mating derivatives by numerical analogs.  The most
serious of these are numerical dispersion (in cases of
convective flows), stability  and  convergence.   Anyone
who has worked with numerical  models  can  appreciate
the unbelievable frustrations  these digital  computer
maladies can give.  It  is  particularly bothersome when
the digital program is  extremely  large (over a  few
thousand cards) and one  is trying to  track  down the
bug which is causing the program  to blow  up when a
different range of a parameter is used.   Numerical
models may often require large memories which may only
be available on certain  computers.  Or they may re-
quire inordinate amounts of time  to complete a  simu-
lation and are thus expensive  to  operate  over a long
period of real time.  They also require skilled oper-
ators to set up, run, and interpret the output.  This
may preclude their use  by  small consulting  companies
or public agencies with limited budgets.

It is the opinion of the author that in many cases,
if one considers  the questions being  asked, the ex-
pense and complexities of applying a  numerical  model,
and the ease of applying an analytical expression, one
will  opt for the analytical solution  and  will find it
adequate for estimating the expected water  quality
under the given circumstances.

       Multi-Dimensional Analytical Solutions

It has been noted that multi-dimensional  analytical
solutions to the convective-dispersive transport
equation for rivers with finite depths and  widths are
significantly absent in the literature.  There are
many solutions to the straight diffusion equation.8
However, modifying this equation  to account  for  con-
vective fluid motion complicates  its  analytical  solu-
tion.  Additional complications are also  introduced
by the presence of boundaries,  in that the  final analy-
tical expression must satisfy  the operating  boundary
conditions.  If these boundary conditions are non-
homogeneous functions of space and time,  the problem
is immensely complex, when standard methods  are  used.

The purpose of this paper  is to present the  solution
technique and analytical solutions to  the two-and
three-dimensional, unsteady-state, non-homogeneous
convective-dispersive,  general  transport  equations
which describe the spatial and temporal distribution
of a water quality variable in a river.  The solution
technique is a systematic  integral transform approach
which easily handles non-homogeneous  source/sink terms
and non-homogeneous boundary conditions which are
functions of space and/or time.

             The River  Coordinate System

The river is modeled by  rectangular geometry as  shown
in Figure 1.  The average width of the river is W, the
average height is H, and the flow is predominantly in
the longitudinal direction and is described  by  the
cross-sectionally, time-averaged  constant velocity U.

          Water Quality Transport Equation

Multi-dimensional river water  quality  is  mathemati-
cally described by the  convective-dispersive trans-
port equation modified  for general source/sink
activity.  This equation may  be written in  vector nota-
tion as follows:
                                                       434

-------
      Figure 1.  River Coordinate System

    ∂C/∂t + U ∂C/∂X = ∇·(D ∇C) + G                                    (1)
where C represents the concentration  of a  given water
quality variable, U is a constant,  averaged  velocity,
D represents effective dispersion coefficients  in the
appropriate dimensions, t represents time, and G rep-
resents a non-homogeneous source or sink function.
In the case of first order  biological  decay  in  the
river, one would add -KC to the right-hand side of
equation (1).  We will not  carry  such a modification
through our analyses, as the  final  results are  easily
modified to account for such  decay.   It should  be
noted at this point that the  assumptions of  constant
velocity and constant (but  numerically different)
effective  dispersion coefficients place important
limitations on the solutions.  Velocity can  vary
spatially  in a river and there is some evidence that
the vertical dispersion coefficient may have a  para-
bolic distribution.9  Notwithstanding  these  limi-
tations, the solutions represent  a  significant  im-
provement  over present "boundary-less" solutions
(which also assume a constant, averaged velocity and
constant dispersion coefficients).  They also are
commensurate with many of  the questions being asked
by decisionmakers and in cases where  a numerical
model must be used, they can  serve  as an important
and necessary check on the  accuracy of the numerical
scheme, for the particular  case of  constant  coeffi-
cients.

          General Two-Dimensional Solution
In two dimensions, equation (1) reduces to:

    ∂C/∂t + U ∂C/∂X = D_x ∂²C/∂X² + D_z ∂²C/∂Z² + G(X,Z,t)            (2)
To maintain complete  generality,  equation (2) will  be
solved subject to non-homogeneous third type boundary
conditions.   In this  way,  first  type (concentration
specified) and second type (flux  specified)  boundary
conditions are automatically  included in the final
solution.  A  general  non-homogeneous initial condi-
tion as well  as an arbitrary  source/sink function will
also be used.  Under  such  specifications, the final
solution will be modular in form  and can be  used to
solve a host  of non-homogeneous  boundary value prob-
lems of the first, second, or  third type with non-
homogeneous generation or  depletion.   The boundary
and initial conditions are:
    -D_z ∂C/∂Z + h_3C = f_3(X,t)            Z = 0            (2a)

     D_z ∂C/∂Z + h_4C = f_4(X,t)            Z = H            (2b)

     C → 0                                  X → ±∞           (2c)

     C = F(X,Z)                             t = 0            (2d)
                    If C does not approach zero as X approaches  infinity,
                    but instead approaches a predictable  constant  value,
                    one may define a new variable which represents  the
                    concentration in excess of this constant  value.   An
                    example source term, G (X,Z,t), might be  a line source,
                    which would describe the release of a contaminant
                    from a diffuser pipe.  An instantaneous release would
                    be modeled by three Dirac delta functions:

                    G_L = g_L δ(X-X_1) δ(Z-Z_1) δ(t-t_0)                   (3)

                    where g_L represents the instantaneous line source
                    strength (e.g., grams/foot of stream width), X_1 is the
                    longitudinal source location, Z_1 is the vertical
                    source location, and t_0 is the time of release.

                    Equation (2) and associated initial and boundary  con-
                    ditions may be further simplified by introducing  the
                    following dimensionless variables:
                        ε = (X - Ut)U/D_x        ζ = Z/H        τ = U²t/D_x

                        P_2 = U²H²/(D_x D_z)     G = G D_x/U²

                        f_3 = f_3 H/D_z          H_3 = h_3 H/D_z

                        f_4 = f_4 H/D_z          H_4 = h_4 H/D_z            (4)
                    Introducing these variables into equation (2) results
                    in a straight diffusion equation:
                        ∂C/∂τ = ∂²C/∂ε² + (1/P_2) ∂²C/∂ζ² + G(ε,ζ,τ)        (5)
                    subject to the following dimensionless initial and
                    boundary conditions:
                        -∂C/∂ζ + H_3C = f_3(ε,τ)        ζ = 0        (5a)

                         ∂C/∂ζ + H_4C = f_4(ε,τ)        ζ = 1        (5b)

                         C → 0                          ε → ±∞       (5c)

                         C = F(ε,ζ)                     τ = 0        (5d)
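Before outlining the solution method, it may help to note how the dimensionless
groups are computed in practice.  The short helper below is only an illustration:
it assumes the definitions of equation (4) as written above, and all numerical
values passed to it are arbitrary.

    # Illustrative helper (not from the paper): map dimensional inputs to the
    # dimensionless groups of equation (4) as written above.  All default
    # values are arbitrary and carry the units implied by U, Dx, Dz and H.
    def dimensionless_groups(X, Z, t, U=0.3, Dx=5.0, Dz=0.01, H=2.0):
        eps = (X - U * t) * U / Dx          # moving-frame longitudinal coordinate
        zeta = Z / H                        # vertical coordinate scaled by depth
        tau = U ** 2 * t / Dx               # dimensionless time
        P2 = U ** 2 * H ** 2 / (Dx * Dz)    # group weighting vertical dispersion
        return eps, zeta, tau, P2

    print(dimensionless_groups(X=30.0, Z=1.0, t=100.0))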
                                     Method of Solution
If one attempts to solve equation (5) by the common
separation of variables method, severe difficulties
are encountered due to the spatial and temporal non-
homogeneities introduced by the functions G, f_3, and
f_4.  After separation of variables, the resulting
equation in the finite space variable does not meet
Sturm-Liouville requirements for the equation or
boundary conditions.  Such problems may explain the
notable lack of analytical solutions for multi-
dimensional, non-homogeneous, partial differential
equations in the literature.  One of the purposes of
this paper is to illustrate a general integral trans-
form solution technique which may be used to solve prob-
lems like equation (5), regardless of how non-homo-
geneous they are.  The ease with which the technique
handles spatial and temporal non-homogeneities makes
such an approach very powerful and useful in modeling
water quality in rivers.  Indeed,the methods may be
used to solve a host of unresolved problems in many
areas of environmental and water resources engineering.
                                                      435.

-------
 To  avoid  keeping  the  analysis  too  esoteric,  more de-
 tails  of  the  solution  method will  be  presented than
 is  customary.   Essentially, the method is  based on in-
 tegrally  transforming  all  spatial  derivatives  out of
 the equation,  leaving  only an  ordinary differential
 equation  in time.   This  equation is solved directly
 and the transformed water  quality  variable is  then
 inverted  back  by  previously defined inversion  formu-
 las to obtain  the desired  solution.   In Cartesian
 coordinates, the  usual integral transforms will  be
 the Fourier (semi-infinite space variables), complex
 Fourier (infinite)  and Finite  Fourier (finite).   Con-
 sidering  only  first (Dirichlet), second (Neumann) and
 third  (Robin or mixed) type boundary  conditions,  there
 are three possible  kernels for the Fourier transform
 and nine  possible kernels  for  the  Finite  Fourier
 transform.  The appropriate kernel to use  depends, of
 course, on the type of boundary conditions present.
 In  the case of the  Finite  Fourier  transform, there are
 also nine associated eigenvalue relationships.   The
 kernels and associated eigenvalue  expressions  come
 from solving the  homogeneous analogs  of the  original,
 variable-separated  partial  differential equation.
 Since  the kernels  depend only  on the  type  of boundary
 condition present,  once  they have  been tabulated, they
 can be used in a  variety of different water  quality
 problems  for which  the only similarity is  the  type
 (first, second,or third) of boundary  conditions  pre-
 sent (the particular non-homogeneous  function  associ-
 ated with each type boundary condition does  not
 affect the analytical form of  the  kernel:   it  is  only
 the type  itself which is important).
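 As a small aside (not part of the paper), the origin of these kernels is easy to
 check symbolically: for a pair of second-type (no-flux) boundaries the finite
 Fourier kernel is a cosine, and the sketch below simply verifies that cos(Nπζ)
 satisfies the homogeneous, variable-separated equation and both boundary
 conditions.

    # SymPy check (illustrative only): the cosine kernel K = cos(N*pi*zeta)
    # solves K'' + mu**2 K = 0 with no-flux conditions K'(0) = K'(1) = 0,
    # so the eigenvalues for a Neumann-Neumann pair are mu_N = N*pi.
    import sympy as sp

    zeta = sp.symbols('zeta')
    N = sp.symbols('N', integer=True, positive=True)
    mu_N = N * sp.pi
    K = sp.cos(mu_N * zeta)

    ode_residual = sp.simplify(sp.diff(K, zeta, 2) + mu_N**2 * K)   # -> 0
    flux_at_0 = sp.diff(K, zeta).subs(zeta, 0)                      # -> 0
    flux_at_1 = sp.simplify(sp.diff(K, zeta).subs(zeta, 1))         # -> 0 for integer N
    print(ode_residual, flux_at_0, flux_at_1)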

 Equation  (5) has  one finite (vertical)  and one in-
 finite (longitudinal) dimension.  These space  vari-
 ables  may be transformed out of the equation by  a
 Finite Fourier and  a complex Fourier  transform,  re-
 spectively.  The  result will be an ordinary  differen-
 tial equation  in  time, which may be integrated
 directly  for the  transformed concentration variable.
 This transformed  variable  is then inverted twice  by
 previously defined  inversion formulas  to yield the
 solution  to equation (5).

 To  remove the two space variables the  following  double
 integral   transform  and corresponding  double  inver-
  sion formula10,11,12 for the concentration func-
  tion C(ε,ζ,τ) are defined in the ranges  -∞ < ε < +∞  and  0 ≤ ζ ≤ 1.
-------
To use solution (10), one simply substitutes into the
total expression his particular G, F, f_3, f_4 and ap-
propriate kernels and carries out the indicated in-
tegrations.

       General Three-Dimensional Solution

In three dimensions, equation (1) becomes:

    ∂C/∂t + U ∂C/∂X = D_x ∂²C/∂X² + D_y ∂²C/∂Y² + D_z ∂²C/∂Z²
                      + G(X,Y,Z,t)                                    (12)
Once again, to maintain complete generality, only
third type boundary conditions will be used:

    -D_z ∂C/∂Z + h_3C = f_3(X,Y,t)        Z = 0        (12a)

     D_z ∂C/∂Z + h_4C = f_4(X,Y,t)        Z = H        (12b)

    -D_y ∂C/∂Y + h_5C = f_5(Z,X,t)        Y = 0        (12c)

     D_y ∂C/∂Y + h_6C = f_6(Z,X,t)        Y = W        (12d)

     C → 0                                X → ±∞       (12e)

     C = F(X,Y,Z)                         t = 0        (12f)

The dimensionless variables given by equations (4) as
well as the following additional variables will be
incorporated into equation (12) to further generalize
it:

    η = Y/W        P_1 = U²W²/(D_x D_y)

    f_5 = f_5 W/D_y        H_5 = h_5 W/D_y

    f_6 = f_6 W/D_y        H_6 = h_6 W/D_y

Equation (12) then becomes a straight diffusion equa-
tion:

    ∂C/∂τ = ∂²C/∂ε² + (1/P_1) ∂²C/∂η² + (1/P_2) ∂²C/∂ζ²
            + G(ε,η,ζ,τ)                               (14)

subject to:

    -∂C/∂η + H_5C = f_5(ζ,ε,τ)        η = 0            (14a)

     ∂C/∂η + H_6C = f_6(ζ,ε,τ)        η = 1            (14b)

    -∂C/∂ζ + H_3C = f_3(ε,η,τ)        ζ = 0            (14c)

     ∂C/∂ζ + H_4C = f_4(ε,η,τ)        ζ = 1            (14d)

     C → 0                            ε → ±∞           (14e)

     C = F(ε,η,ζ)                     τ = 0            (14f)
To remove all three space variables, we define a
triple integral transform and corresponding triple
inversion formula:

    C̄(ψ,ν_N,β_m,τ) = ∫_{-∞}^{+∞} ∫_0^1 ∫_0^1 exp(iψε) K(ν_N,ζ) K(β_m,η)
                      · C(ε,ζ,η,τ) dη dζ dε                           (15)

    C(ε,ζ,η,τ) = (1/2π) Σ_{m=0}^{∞} Σ_{N=0}^{∞} K(ν_N,ζ) K(β_m,η)
                 · ∫_{-∞}^{+∞} exp(-iψε) C̄(ψ,ν_N,β_m,τ) dψ           (16)

As in the two-dimensional case, the integral transform
defined by equation (15) is applied to equation (14)
and its associated initial and boundary conditions.
The resulting ordinary differential equation is inte-
grated to obtain the triple-transformed concentration
variable which is inverted by equation (16) to yield
the following general modular solution:

    C(ε,ζ,η,τ) = (1/2π) Σ_{m=0}^{∞} Σ_{N=0}^{∞} K(ν_N,ζ) K(β_m,η) ∫_{-∞}^{+∞} exp(-iψε) exp(-λτ)

                 · [ F̄(ψ,ν_N,β_m) + ∫_0^τ exp(λτ') ( Ḡ(ψ,ν_N,β_m,τ') + B̄(ψ,ν_N,β_m,τ') ) dτ' ] dψ        (17)

where

    λ = ψ² + β_m²/P_1 + ν_N²/P_2                                      (17a)

and Ḡ, B̄ and F̄ are, respectively, the triple transforms
of the source function G, of the non-homogeneous bound-
ary functions f_3, f_4, f_5 and f_6 (evaluated with the
kernels at ζ = 0, 1 and η = 0, 1), and of the initial
condition F (equations 17b and 17c).
                                                                              Applications
                                                                                                           (17b)
                                                                                                           (17c)
                                                        To  illustrate the  comprehensive  flexibility  built  into
                                                        the modular  solutions  given  by equations  (10)  and  (17),
                                                        selected  closed-form solutions,  based  on  these general
                                                        equations, will be presented.  Due to space limitations
                                                        typical solution details cannot be presented here but
                                                        may be found elsewhere.3,6,13
                                                          Two-Dimensional, Instantaneous Tracer Line  Source
                                                        For this  case,  the two-dimensional transport equation
                                                        is  subject to a Dirac  delta  instantaneous line source
                                                      437

-------
function and two second type, no-flux boundary con-
ditions at Z = 0 and Z = H.  The details are given in
(6).  Applying modular solution (10), the solution
with dimensions is:

    C(X,Z,t) = [g_L EXP{-(X-X_1-Ut)²/(4D_x t)}] / [H (4πD_x t)^{1/2}]

               · [ 1 + 2 Σ_{N=1}^{∞} EXP{-D_z μ_N² t} cos μ_N Z cos μ_N Z_1 ]        (18)

where μ_N = Nπ/H.
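A short numerical sketch of how such a closed form is applied is given below.  It
assumes the form of equation (18) shown above; every parameter value is illustrative
rather than taken from any field case.

    # Evaluate the instantaneous line-source solution, equation (18): a moving
    # Gaussian in X multiplied by a rapidly converging cosine series in Z.
    # All parameter values below are illustrative.
    import numpy as np

    def c_line_source(X, Z, t, gL=1.0, X1=0.0, Z1=0.5,
                      U=0.3, Dx=5.0, Dz=0.01, H=2.0, n_terms=50):
        gauss = np.exp(-(X - X1 - U * t) ** 2 / (4.0 * Dx * t)) \
                / (H * np.sqrt(4.0 * np.pi * Dx * t))
        series = 1.0
        for N in range(1, n_terms + 1):
            mu_N = N * np.pi / H                       # no-flux eigenvalues
            series += 2.0 * np.exp(-Dz * mu_N ** 2 * t) \
                      * np.cos(mu_N * Z) * np.cos(mu_N * Z1)
        return gL * gauss * series

    print(c_line_source(X=30.0, Z=1.0, t=100.0))

For typical dispersion coefficients the series converges in a handful of terms,
which is what makes these expressions inexpensive to apply.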
Three-Dimensional,  Instantaneous Tracer Point Source

In this case, the three-dimensional  transport equation
is subject to an instantaneous Dirac delta point
source function and four second type no-flux boundary
conditions at Z=0,  Z=H,  Y=0,  and Y=W.   The details may
be found in Reference 6.  Applying modular solution
Equation 17, the solution with dimensions is:
    C(X,Y,Z,t) = [g_P EXP{-(X-X_1-Ut)²/(4D_x t)}] / [W H (4πD_x t)^{1/2}]

                 · [ 1 + 2 Σ_{N=1}^{∞} EXP{-μ_N² D_z t} cos μ_N Z cos μ_N Z_1 ]

                 · [ 1 + 2 Σ_{m=1}^{∞} EXP{-β_m² D_y t} cos β_m Y cos β_m Y_1 ]        (19)

where μ_N = Nπ/H and β_m = mπ/W.

 Three-Dimensional, Biochemical Oxygen Demand Transport

The unsteady-state, three-dimensional, first order de-
cay, transport model for BOD in rivers is given by:

    ∂C/∂t + U ∂C/∂X = D_x ∂²C/∂X² + D_y ∂²C/∂Y² + D_z ∂²C/∂Z² - KC
                      + g_PT δ(X-X_1) δ(Y-Y_1) δ(Z-Z_1)                                (20)

The BOD is released as a continuous  point  source and  is
subject to no-flux conditions on  all  four  boundaries.
Initially there is zero BOD.  The solution may easily
be obtained by using equation (17) to derive  the solu-
tion to the first order decay,  instantaneous  point
source case (similar to equation  (19))  and then inte-
grating this solution over time to obtain  the contin-
uous point source solution with dimensions:
    C(X,Y,Z,t) = [g_PT EXP{XU/2D_x}] / [4 W H (D_x)^{1/2}]

        · { (1/(k_2)^{1/2}) [ EXP{-X(k_2/D_x)^{1/2}} ERFC{X/(4D_x t)^{1/2} - (k_2 t)^{1/2}}
                            + EXP{X(k_2/D_x)^{1/2}} ERFC{(k_2 t)^{1/2} + X/(4D_x t)^{1/2}} ]

        + 2 Σ_{N=1}^{∞} [cos μ_N Z cos μ_N Z_1 / (k_3)^{1/2}]
                        [ EXP{-X(k_3/D_x)^{1/2}} ERFC{X/(4D_x t)^{1/2} - (k_3 t)^{1/2}}
                        + EXP{X(k_3/D_x)^{1/2}} ERFC{(k_3 t)^{1/2} + X/(4D_x t)^{1/2}} ]

        + 2 Σ_{m=1}^{∞} [cos β_m Y cos β_m Y_1 / (k_4)^{1/2}]
                        [ EXP{-X(k_4/D_x)^{1/2}} ERFC{X/(4D_x t)^{1/2} - (k_4 t)^{1/2}}
                        + EXP{X(k_4/D_x)^{1/2}} ERFC{(k_4 t)^{1/2} + X/(4D_x t)^{1/2}} ]

        + 4 Σ_{N=1}^{∞} Σ_{m=1}^{∞} [cos μ_N Z cos μ_N Z_1 cos β_m Y cos β_m Y_1 / (k_5)^{1/2}]
                        [ EXP{-X(k_5/D_x)^{1/2}} ERFC{X/(4D_x t)^{1/2} - (k_5 t)^{1/2}}
                        + EXP{X(k_5/D_x)^{1/2}} ERFC{(k_5 t)^{1/2} + X/(4D_x t)^{1/2}} ] }        (21)

where k_2 = K + U²/(4D_x), k_3 = k_2 + μ_N² D_z, k_4 = k_2 + β_m² D_y, and
k_5 = k_2 + μ_N² D_z + β_m² D_y, for X > 0.
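The construction just described can also be checked numerically: superpose
first-order-decay, instantaneous point-source solutions over the release history.
The sketch below does this by simple quadrature; the instantaneous kernel follows
the form of equation (19) multiplied by exp(-Kt), and every numerical value is
hypothetical.

    # Continuous-source concentration by superposing decayed instantaneous
    # point-source releases (all parameter values are illustrative).
    import numpy as np

    def c_instant(X, Y, Z, tau, gP=1.0, X1=0.0, Y1=5.0, Z1=1.0,
                  U=0.3, Dx=5.0, Dy=0.1, Dz=0.01, K=0.2, W=10.0, H=2.0, n_terms=30):
        gauss = np.exp(-(X - X1 - U * tau) ** 2 / (4.0 * Dx * tau)) \
                / (W * H * np.sqrt(4.0 * np.pi * Dx * tau))
        series_z = 1.0 + sum(2.0 * np.exp(-Dz * (N * np.pi / H) ** 2 * tau)
                             * np.cos(N * np.pi * Z / H) * np.cos(N * np.pi * Z1 / H)
                             for N in range(1, n_terms + 1))
        series_y = 1.0 + sum(2.0 * np.exp(-Dy * (m * np.pi / W) ** 2 * tau)
                             * np.cos(m * np.pi * Y / W) * np.cos(m * np.pi * Y1 / W)
                             for m in range(1, n_terms + 1))
        return gP * np.exp(-K * tau) * gauss * series_z * series_y

    def c_continuous(X, Y, Z, t, n_steps=400):
        taus = np.linspace(1e-3, t, n_steps)    # ages of releases still in the river
        values = np.array([c_instant(X, Y, Z, tau) for tau in taus])
        return np.trapz(values, taus)

    print(c_continuous(X=30.0, Y=5.0, Z=1.0, t=200.0))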
                                                                                References
 1.  Ahlert, R.C., Biguria, G., and Tarbell, J., Water Resources Research,
     Vol. 6, No. 2, 1970, pp. 614-621.
 2.  Bansal, M.K., Journal of the Hyd. Div., ASCE, Vol. 97, No. HY11, 1971,
     pp. 1867-1886.
 3.  Cleary, R.W., T.J. McAvoy and W.L. Short, Water-1972, Amer. Inst. of
     Chem. Eng., Symposium Series, Vol. 69, No. 129, pp. 422-431.
 4.  Ruthven, D.M., Water Research, Vol. 5, 1971, pp. 343-352.
 5.  Wnek, W.J., and Fochtman, E.G., Envir. Sci. and Tech., Vol. 6, No. 4,
     1972, pp. 331-337.
 6.  Cleary, R.W., and Adrian, D.D., Journal of the Envir. Eng. Div., ASCE,
     Vol. 99, No. EE3, 1973, pp. 213-227.
 7.  Leendertse, J.J. and S-K Liu, Symposium on Modeling Techniques, 2nd
     Annual Symposium of the Waterways, Harbors and Coastal Engineering Div.
     of ASCE, Sept. 1975, Vol. 1, pp. 625-642.
 8.  Carslaw, H.S., and Jaeger, J.C., Conduction of Heat in Solids, 2nd ed.,
     Oxford University Press, London, England, 1959.
 9.  Al-Saffar, A.M., thesis presented to the University of California, at
     Berkeley, Calif., in 1964.
10.  Sneddon, I.N., The Use of Integral Transforms, McGraw-Hill Book Company,
     New York, N.Y., 1972, pp. 423-439.
11.  Sneddon, I.N., Fourier Transforms, McGraw-Hill Book Company, New York,
     N.Y., 1951, pp. 71-82, 166-202.
12.  Tranter, C.J., Integral Transforms in Mathematical Physics, 3rd ed.,
     Methuen and Co., Ltd., London, England, 1966, pp. 84-85.
13.  Cleary, R.W., D.D. Adrian and R.J. Kinch, The Journal of the Environ.
     Eng. Div., ASCE, Vol. 100, No. EE1, pp. 187-200.
                                                       438

-------
                  SIMULATION MODELING OF ENVIRONMENTAL INTERACTION EFFECTS


                                       Ethan T.  Smith
                                       Program Analyst
                                   2014 Golf Course  Drive
                                      Reston, Virginia
                    Abstract

The  present  research addresses the air-water-
land problem simultaneously by means of a
series of mathematical  models.  A steady-
state water  quality  model  is used to simulate
the  effect of biochemical  oxygen demanding
wastes on the dissolved oxygen concentrations
in an estuarine  system.  A Gaussian plume air-
quality  model is similarly utilized to relate
the  particulate  and  sulfur oxide emissions of
waste sources to the concentration of these
contaminants in  the  regional airshed.  A ma-
terials  balance  model is formulated which
simulates the impact of pollution control for
a given  medium in exacerbating environmental
pollution in another medium.  A strategy mod-
el is formulated that derives removal per-
centages for air loads  and waste water dis-
charges, while simultaneously minimizing the
flows of material to solid waste disposal.
The  constraints  on the  optimizing strategy
model are given  by equations which require
that ambient quality standards must be at-
tained  for dissolved oxygen, particulates,
and  sulfur oxides.

The  set  of models is applied to an eleven-
county  study area centering on the city of
Philadelphia, Pa. The  results of applying the
models  indicate  that present ambient quality
standards can be achieved.

        Planning for Environmental
        Resource Management

     The management  of  the environment, like
any activity undertaken by man, requires some
method  of approach which can be thought of as
a plan.  On  the  one  hand,  this may refer to
some rather  formal approach involving a se-
quence  of steps  arranged in time to achieve
some objective.   In  the absence of a formal
approach, actions can still be undertaken
which,  by default, will tend to be rather in-
dependent of one another.   In either case,
effort  will  be expended on some mixture of
data collection, analysis, negotiation, and
modification of  the  physical world.

     A  pervasive problem is associated with
the nature of cause-effect linkages, or
causal  chains.   The  name is perhaps somewhat
of a misnomer, since cause-effect interactions
related  to the environment are seldom simple
one or two step processes.  Indeed, a much
more accurate picture would be that of a
multi-branched tree  or network structure with
numerous feedback loops.  Under these circum-
stances  even well-conceived scientific ap-
proaches to  pollution control have encoun-
tered severe problems.

     It  is now possible to discern the essen-
tial characteristics of many environmental
problems.  Attempts to address these problems
often fall into  the  error  of suboptimization.
Under these circumstances, solutions may be
successful to a greater or lesser degree in
dealing with the matter immediately at hand
(e.g., Biochemical Oxygen Demand, or BOD, in
the water).  More relevant however is the fact
that secondary effects resulting from the in-
itial solution act to exacerbate other envir-
onmental problems, or even to cause new pro-
blems.

     The concept of materials balance appears
to offer a promising route toward the resolu-
tion of these kinds of difficulties.1  In
this approach, the principle of conservation
of mass is applied to material goods as they
pass through the system, undergoing of course
many changes in form.  If successful, this
kind of analysis should identify the residual
materials that are byproduct from the process-
ing steps which are designed to yield the
economic products of society.  This makes it
possible at least in principle to specify the
links in the causal network in quantitative
terms.  This study is focused on just a small
part of the general materials balance model,
specifically the routing available to control
some of the major contaminants of air, water,
and land.

     Even in the present rather modest form,
this work significantly transcends tradition-
al concepts of pollution control, and is per-
haps best described as resource management.
This process begins to explore the tradeoffs
possible among (a) the level of technology,
(b) the spatial needs for land use, (c) the
concentration of contaminants in the ambient
environment.  There are also important impli-
cations for the use of natural resources such
as water and fisheries for many purposes
(e.g., recreation), and at a further remove,
the use of mineral and energy resources re-
quired to achieve pollution control by tech-
nology.  Ultimately, this type of research
is capable of contributing to a definition in
quantitative terms of the carrying capacity
of regions for human activity.2

     First, it is important to include se-
condary effects within the set of decisions
variables.  Thus, to take one example, the
increase of treatment which removes BOD from
wastewater must be reflected as some combina-
tion of routing to land and air.  Organic
sludge either is sent to a disposal site or
is incinerated, and in the latter case it
acts to lower air quality.  The incineration
can be accomplished either on site or at a
municipal incinerator.  Simultaneously, con-
trols applied to industrial stacks will re-
sult in either ash to be sent to a landfill
(if dry removed) or solids in the sewerage
system (if wet scrubbed).  At least some part
of these solids will be removed at a  sewage
treatment plant and will cycle through  the
system.  Second, the foregoing description,
when applied to a large number of treatment
                                              439

-------
plants, firms, incinerators, power plants, and
landfills, leads to complicated spatial inter-
action as materials are routed from point to
point.  This is especially true in a relative-
ly large region like the present eleven county
study area encompassing the Trenton-Philadel-
phia-Wilmington SMSA's.  Superimposed on this
interaction is a requirement for segregating
political jurisdictions and accounting for
physical barriers which would preclude par-
ticular paths in a real case.  Next, there is
a requirement for state-of-the-art air and
water quality dispersion models with the best
possible validation procedures.  Without
field-testing, there is small probability that
conclusions reached by these models will be
acceptable.  In order to accomplish these ob-
jectives, the following residuals control and
routing processes have been analyzed:

     a.  Treatment at sewage treatment plants
         (STP's),
     b.  Treatment at firms possessing indi-
         vidual wastewater facilities,
     c.  Control of particulates and S02 at
         individual STP's, firms, power plants,
         and municipal incinerators,
     d.  Control of particulates and SO2 by
         area sources,
     e.  Routing of area source solid waste to
         land disposal,
     f.  Routing of sludge from STP's and
         firms to municipal incinerators,
     g.  Routing of wet scrubbed particulates
         and S02 to sewerage system, STP's and
         river,
     h.  Routing of sludge from STP's and
         firms to land disposal,
     i.  Routing of dry removed particulates
         and S02 to land disposal.

These processes are shown in Table 1.  To fa-
cilitate modeling these processes,  a series
of decision variables is defined, which are
common to each process.  These decision var-
iables are as follows:
     D_w = BOD allocated to the water,

     D_a = BOD allocated (in transformed form)
           to the air, by incineration of
           sludge solids,

     D_s = BOD allocated to solid waste dis-
           posal, as removed sludge solids,

     E_a = Particulates allocated to the air,

     E_w = Particulates allocated to the water,
           by wet scrubbing,

     E_s = Particulates allocated to solid
           waste disposal, by dry removal,

     F_a = SO2 allocated to the air,

     F_w = SO2 allocated to the water, by wet
           scrubbing with limestone,

     F_s = SO2 allocated to solid waste dis-
           posal, by dry removal with lime-
           stone.

These decision variables act to allocate the
various residual streams among three possible
media:  air,  water,  and land.  The variables
are applied in each of the control  and rout-
ing processes, at the completion of which the
total residuals discharged to the media as a
result of these processes are obtained.  All
allocation variables are in terms of percent-
ages of untreated (or raw) waste load, and
hence are dimensionless.  It is essential to
realize that all percentages for a given re-
sidual sum to one, i.e., D_w + D_a + D_s = 1.0.
The allocation variables are readily related
to familiar quantities: for example, the per-
cent removal of BOD at a wastewater treatment
plant is 1.0 - D_w.  Similarly, the percent re-
duction of particulates to the air at a spe-
cific facility is 1.0 - E_a.  The other alloca-
tion variables are defined in parallel manner.
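     A trivial bookkeeping sketch of these definitions is given below; the
numbers are only an example (they happen to match the BOD allocations reported
later in this paper).

    # Each residual's allocations to water, air and land must sum to one, and
    # familiar quantities such as percent removal follow directly (illustrative).
    def percent_removal_from_water(d_w, d_a, d_s):
        assert abs(d_w + d_a + d_s - 1.0) < 1e-9, "allocations must sum to 1.0"
        return 1.0 - d_w

    print(percent_removal_from_water(d_w=0.09, d_a=0.30, d_s=0.61))   # 0.91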

        Results of  Routing Processes

     The results  of the routing processes
shown in Table 1  can be summarized as follows,
with special reference  to mass  discharged to
the air, water, or  land as residuals:

     Sources (k)  which  discharge directly to
surface water  (including  STP's), discharge
BOD as a function of DW,  as  well as solids
resulting from the  scrubbing of stack gases
as a function  of  Ew and Fw.   Particulates and
S02 are emitted to  the  air,  both from sludge
incineration and  industrial  processes.   Land
disposal is provided for  treatment plant
sludge and  solids.

     Controls are used (E_a and F_a) to reduce
the emission of particulates and SO2 at
sources of the industrial (m), municipal in-
cinerator (i), and power plant (j) types.
Solids which are scrubbed out are routed to
collection systems and to land disposal by
E_w, E_s, F_w, and F_s except for sources having
access to surface water, such as power plants.

     Area sources (n),  which cover perhaps
five to 25  square miles each, contain  a het-
erogeneous  assortment of  emissions,  often
with large  contributions  from residential  and
commercial land uses.  The space heating and
refuse incineration aspects  of  these sources
are included,  by allocating  part of the solid
waste to land  disposal.   In  the case of these
sources, it is reasonable to assume that re-
duction of  air emissions  occurs through fuel
switching and  the restriction of local  incin-
eration practices.

     The routing of materials according to the
spatial configuration of  the study  area re-
quires some assumption  for choice among alter-
native destinations.  It  is  assumed that under
long-term average conditions the physically
nearest destination will  be  selected.   In
reality, service areas  are probably better de-
terminants  for selection  of  destinations,
e.g., which landfill will service which source;
however, such information  is  usually not read-
ily available.  In  addition,  destinations  are
required to be in the same political jurisdic-
tion (Del., N.J., Pa.,  or Philadelphia)  as the
source being serviced.  This constraint adds
reality to  decisions under consideration,  and
helps to demonstrate the  consequences  of such
a restriction.

     In the case  of landfills,  the  selected
site is checked to  determine whether the
acreage required  is available;  if not,  sites
progressively  further away are  examined until
all conditions are  met.   If  no  landfill meets
all conditions, the unmet demand is recorded
as a requirement  for further acreage.
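     A schematic version of this routing rule (invented coordinates, jurisdic-
tions and acreages; not the study's code) might look like the following.

    # Choose the nearest landfill in the same political jurisdiction with enough
    # remaining acreage; otherwise record the shortfall as unmet demand.
    import math

    LANDFILLS = [  # (name, x, y, jurisdiction, available_acres) -- hypothetical
        ("A", 2.0, 3.0, "Pa.", 120.0),
        ("B", 8.0, 1.0, "N.J.", 40.0),
        ("C", 5.0, 7.0, "Pa.", 10.0),
    ]

    def route_to_landfill(source_xy, jurisdiction, acres_needed, landfills=LANDFILLS):
        candidates = [lf for lf in landfills if lf[3] == jurisdiction]
        candidates.sort(key=lambda lf: math.dist(source_xy, (lf[1], lf[2])))  # nearest first
        for name, x, y, jur, acres in candidates:
            if acres >= acres_needed:
                return name, 0.0              # site found, no unmet demand
        return None, acres_needed             # recorded as further acreage required

    print(route_to_landfill((4.0, 4.0), "Pa.", 50.0))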
                                             440

-------
        Water Quality Model Formulation

     A one-dimensional, steady-state, finite-
difference model of the Delaware Estuary is
employed for the coupled variables BOD-DO.
Similar formulations may be found in Thomann3
who has worked extensively with the coupled-
system approach.  Figure 1 is a map of the
Delaware Estuary showing the 30 segments or
sections of the mathematical model.  This es-
tuary constitutes the receiving surface water
in this study.  For each of these model sec-
tions a mass-balance equation can be written
for the BOD in the system, and another for
the DO in the system.  This results in linear
differential equations based on the physical,
hydrologic,  and biochemical characteristics
of each section.

     This model, expressed in matrix form, is:

          (c) = (c_a) - (c_J) ± (c_p)                     (1)

          (c_J) = (A)(J)                                  (2)

          (c_p) = (B)(P)                                  (3)

          Δ(c) = (A)(ΔJ)                                  (4)

     The model has been verified by the Dela-
ware Estuary Comprehensive Study, based on
research conducted from 1961 to 1969.4  In
addition, time-series studies by Thomann make
it possible to calculate a variance of about
1.56 mg/1 around the summer mean values of DO
computed by the model.5

     Since (ΔJ) is a function of the alloca-
tion variable D_w, it is possible to evaluate
the DO profile in equation 4 by specifying D_w,
and to compare the vector (c) against water
quality standards.

        Air Quality Model Formulation

     Mathematical models of air quality re-
present a relationship between the sources of
air pollution and the result as measured in
the ambient air.  With the establishment of
air quality standards it becomes necessary to
employ some such relationship to determine
what modification of the source emission
loads is required if the ambient standards
are to be met.

     The atmospheric characteristics are simu-
lated in the mathematical model under speci-
fic assumptions.  The first of these is that
the discharge emitted from each source will
take the form of a Gaussian plume.  This means
that the dispersion of the plume at a distance
downwind is assumed to follow a Gaussian distri-
bution in directions perpendicular to the wind
vector.  The concentration of pollutant can
then be calculated by applying a normal pro-
bability function.  In the form used here
this is usually termed the Martin-Tikvart
model.6

     In this case, the model takes the form

          X_r = Σ_s S_sr Q_s                              (5)

where S_sr is the meteorological transfer coef-
ficient relating source s to receptor r.

     The model has been calibrated for the
study area by EPA using 1968 data for par-
ticulates and SO2.  Regression and correlation
yields r values of 0.87 and 0.88, respectively.
Variation around the annual average predic-
tions of the model is on the order of ± 15 to

     The source emission Q_s is a function of
the allocation variable E_a or F_a, depending on
the pollutant.  Therefore, the concentration at
each receptor point r can be evaluated as a
function of E_a and F_a, i.e., X_r = f(E_a,F_a).
Contour maps of X_r can be compared to ambient
air quality standards.
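     A schematic sketch of this kind of source-receptor bookkeeping follows;
the matrix S simply stands in for precomputed plume transfer coefficients, and
all numbers are invented.

    # Receptor concentrations as a weighted sum of source emissions; lowering
    # the fraction of the load allocated to the air lowers every receptor value.
    import numpy as np

    S = np.array([[2.0e-3, 5.0e-4, 1.0e-4],     # S[r, s]: (ug/m3) per (g/s), hypothetical
                  [8.0e-4, 1.5e-3, 3.0e-4]])    # two receptors, three sources
    Q_raw = np.array([400.0, 250.0, 900.0])     # untreated emission rates, g/s, hypothetical

    def receptor_concentrations(E_a):
        """E_a = fraction of each source's load still allocated to the air."""
        return S @ (E_a * Q_raw)

    print(receptor_concentrations(np.array([1.0, 1.0, 1.0])))    # uncontrolled
    print(receptor_concentrations(np.array([0.25, 0.25, 0.25]))) # 75 percent removal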

            Ambient Quality Data

     The initial conditions for the analysis
are given by the present state of the system.
A part of this state is measured in terms of
the numerical values of ambient environmental
quality.  In the case of water quality, the
dissolved oxygen concentrations are based on
the work of the Delaware Estuary Comprehen-
sive Study.  An initial verification of the
water model was carried out for 1964 data.
Subsequently, the data were updated to 1968
and to 1970, principally by accounting for
the growth of effluent discharges.^,8,9

     The DO standards and initial DO profile
for the Delaware Estuary are given in detail
in Figure 6.  The DO standards are those of
the Delaware River Basin Commission (DRBC).10

     The data base for particulate  and S02
concentration is taken from the EPA Implemen-
tation Planning Program work on the Philadel-
phia Air Quality Control Region.7  The exist-
ing air quality in this reference is based on
measurements from 1968.  Figure 2 shows the
1968 annual average pattern of particulate
concentration in the study area, and Figure 3
shows the pattern for SO2.  This data base
                                              441

-------
has been used by EPA to validate the Gaussian
plume air quality model used in this study.

     The air quality standards sought in this
study are an annual geometric mean of 75
ug/m3 for particulates, and an annual arith-
metic mean of 80 ug/m3 for sulfur oxides.11

      Discharge Loads to the Environment

     In order to carry out analysis of the
study area, numerical measures of the dis-
charges to the environment are necessary.
Ideally, the discharge data should be for the
same year as the ambient quality data (in this
case 1968) so as to permit prediction of im-
provement in environmental quality as a re-
sult of modifying the discharges.

     Discharge data for effluents to the Del-
aware Estuary are available in terms of BOD.8,9
All values are in terms of first stage car-
bonaceous BOD discharged at the outfall, for
1968.  The present research includes 2^ sew-
age treatment plant effluents, which comprise
about 98% of the BOD discharged to the es-
tuary by municipal waste sources.  In addi-
tion, 29 industrial firms are included which
possess wastewater discharges from company-
owned treatment facilities.  These sources
account for about 96% of the BOD discharged
to the estuary by industrial waste sources.

     Discharge data for air emissions to the
study area are available as part of the data
base used by EPA in the Implementation Plan-
ning Program.7  For the year of 1968 both
particulate and S02 emissions exist for each
stack in the source data file for the Phila-
delphia Air Quality Control Region.  The
following set of air emission sources is in-
put to the model:

     a.  58 industrial sources accounting
         for about 85% of the SO2 and Qk%
         of the particulates from all such
         sources,
     b.  16 municipal incinerators account-
         ing for about 97% of the SO2 and
         of the particulates from all such
         sources,
     c.  10 steam-electric generating plants
         accounting for about 82% of the SO2
         and 89% of the particulates from all
         such sources,
     d.  55 area sources accounting for about
         63% of the SO2 and 55% of the par-
         ticulates from all such sources.

     The most recent National Survey of Solid
Waste Practices is for the year 1968, which
makes it possible to obtain landfill data
contemporary with the rest of the research. 12
Forty-nine landfills and available acreage at
each site (located by Cartesian coordinates)
are used in this study.
     There are nine decision variables repre-
sented by the allocation variables which di-
vide residuals among water, air, and land.
The ambient environmental standards have been
specified for both airborne and waterborne
contaminants, and models developed to predict
changes in ambient concentrations as a func-
tion of materials routing.  The remaining
problem is to determine the values of the de-
cision variables that will enable all quality
goals to be satisfied.

     Recognizing  the possible tradeoffs in the
construction of optimization models,  the ap-
proach selected in  this  study relies  on an as-
sumption that the percent  removal of  a given
material must be  equal for all waste  sources
of one type.  This  approach  has the following
characteristics:

     a.  It  is mathematically simpler than a
         linear or  nonlinear programming ap-
         proach ,
     b.  The assumption  of equal  percent re-
         moval is often  administratively fa-
         vored, even though  less  flexible
         than an  approach allowing sources  to
         be  individually adjusted,
     c.  It does  permit  all  ambient standards
         to be achieved  simultaneously,
     d.  Damage (cost) functions  are  not ex-
         plicitly represented in  the  model,
         although such functions  are  implicit
         in the ambient  quality standards,
         since meeting the standards  is  often
         taken as equivalent to minimizing
         damages.

     Mathematically, the strategy model  can be
described as follows:

  MIN  Σ V                                                  (7)

subject to:

          (A)(ΔJ) ≥ (c_g)
where
          ΔJ_k = (D_owk - D_w) T_k
and
          X_r(ΔQ_s) ≤ X_rg        for particulates and SO2
where
          ΔQ_s = (E_oas - E_a) P_s        for particulates
          ΔQ_s = (F_oas - F_a) U_s        for SO2
and
          D_w + D_a + D_s = 1.0,
          E_w + E_a + E_s = 1.0,
          F_w + F_a + F_s = 1.0.

Equation 7 shows the minimization of volumes
V of mass allocated to land disposal for all
source types; this process is constrained by
modification of the ΔJ_k loads so as to meet
the incremental DO goal (c_g).  D_owk is the
initial percentage allocated to the water,
and T_k is the sum of BOD residuals generated.
Similarly, modifications of air source emis-
sions ΔQ_s must meet the ambient air stand-
ards X_rg.  This depends on initial percentage
allocations E_oas and F_oas for each source s,
and on P_s and U_s, the sum of particulate and
SO2 residuals generated (see Table 1).
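     Because the removal percentage is the same for every source of a type, the
search for a feasible strategy reduces to ratcheting a single fraction upward
until every standard is met.  A conceptual sketch, with a toy constraint standing
in for the coupled water and air models, is:

    # Smallest whole-percent uniform removal that satisfies every standard.
    def smallest_uniform_removal(constraint_ok):
        """constraint_ok(removal) -> True when all ambient standards are met."""
        for pct in range(0, 101):
            if constraint_ok(pct / 100.0):
                return pct / 100.0
        return None               # standards unattainable even at 100 percent removal

    # Toy stand-in: suppose standards are met once at least 91 percent of the
    # raw BOD load is removed.
    print(smallest_uniform_removal(lambda r: r >= 0.91))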

              Model Results
               Strategy Model

     The derived values of  the allocation per-
centages indicate that 91% carbonaceous BOD
removal is required for all sources discharg-
ing into the Delaware Estuary.  Similarly,
75% removal of particulates and 12% removal of
SO2 is called for in the case of sources hav-
ing air emissions.  In all  cases  these are
percentages of untreated waste discharges.
At this point it should be  noted  that one re-
sult of recent EPA work on  this river called
                                             442

-------
for a yy$> reduction in Nitrogenous BOD dis-
charges  to be superimposed on these conclu-
sions.9  Other conclusions are that 30% of
the sludge produced should be incinerated,
and 61%  of it consigned to land disposal.

     The effect of the treatment levels can
be seen  in the predicted improvement in the
quality  of the environment.  The resultant
dissolved oxygen profile is shown in Figure
6.  Except in the three sections containing
the "Bristol  Sag" (5 through 7) the DO stand-
ards for summer average conditions are at-
tained everywhere.  The primary annual aver-
age air  quality standards of 75 ug/m3 for
particulates and 80 ug/m3 for SO2 are also
achieved.  Figure 4 shows the resultant par-
ticulate concentration pattern for the study
area.  Figure 5 shows the SO2 pattern re-
sulting  from  the strategy.  These figures
clearly  show  the effect of load reduction in
breaking up the region-wide pattern into a
few peaks of  high concentration which tend to
be centered on dense urban and/or industrial
sources  within the study area.

     The routing of residuals to land ulti-
mately results in the consumption of space at
the sanitary  landfills in the study area.
The model will attempt to utilize all avail-
able space in accordance with the routing and
jurisdiction  rules, and will route any re-
maining  solid waste to another category which
represents the amount by which demand exceeds
supply.   The  demand excess is as follows:
     New Jersey        309 acres
     Pennsylvania      239 acres
     Philadelphia      739 acres
     Delaware          zero
 These figures represent demands generated by
 this study alone, and are in addition to any
 other acreage requirements.

     The significance of this analysis is:

     a.  Numerous air-water-land interactions
 can be simulated so as to produce quantita-
 tive, non-intuitive results.  If desired,
 ambient standards can be easily altered and
 sensitivity analysis performed on these and
 other variables.
     b.  Numerical results show that required
 controls are probably within the range of
 current technology.  This implies that
 changes in land resources; e.g., changes in
 density, can be avoided or at least post-
 poned.
     c.  The attainment of water quality
 standards implies optimum utilization of,
 e.g., recreation and fisheries resources, as
 defined by the standards for the region.
     d.  The emerging problem of solids dis-
 posal is quantitatively defined as it would
 impact land use if land disposal is used.
     e.  The methodology is transferable to
 other study areas where discharge loads, am-
 bient air/water quality data, land disposal
 data, and model parameters are available.
 Assembling such data should become easier as
 a result of recent public laws.
                                 References

                 1.  Kneese, A., Ayres, R., and d'Arge, R., "Economics and the
                     Environment."  Resources for the Future, Washington, D.C.
                     (1970).

                 2.  Smith, E.T., "Mathematical Models for Environmental Quality
                     Management."  Rutgers University (Doctoral dissertation),
                     New Brunswick, N.J. (197^).
                 3.  Thomann, R.V., "Systems Analysis and Water Quality
                     Management."  Environmental Research & Applications, Inc.,
                     New York, N.Y. (1972).

                 4.  "Delaware Estuary Comprehensive Study, Preliminary Report
                     and Findings."  U.S. Dept. of the Interior, Philadelphia,
                     Pa. (1966).

                 5.  Thomann, R.V., "Time-Series Analysis of Water Quality
                     Data."  J. San. Eng. Div., ASCE (1967).

                 6.  "Air Quality Implementation Planning Program."  EPA Office
                     of Air Programs, APTD-0640, Vol. 1, Research Triangle Park,
                     N.C. (1970).

                 7.  "Application of Implementation Planning Program (IPP)
                     Modeling Analysis, Metropolitan Philadelphia Interstate Air
                     Quality Control Region."  Environmental Protection Agency,
                     Research Triangle Park, N.C. (1972).

                 8.  "The Delaware River - Where Man and Water Meet."  U.S.
                     Dept. of the Interior, Philadelphia, Pa. (1969).

                 9.  "Delaware Estuary Water Quality Standards Study."
                     Environmental Protection Agency, New York, N.Y., and
                     Philadelphia, Pa. (1973).

                10.  "Water Quality Standards for the Delaware River Basin."
                     Resolution No. 67-7, Section X, Delaware River Basin
                     Commission (April, 1967).

                11.  "A Citizen's Guide to Clean Air."  Conservation Foundation,
                     Washington, D.C. (Jan., 1972).

                12.  Muhich, A., et al., "National Survey of Community Solid
                     Waste Practices, Region 2, Vol. 1 and 2."  U.S. Dept. of
                     HEW, Cincinnati, Ohio (1969).
                                             443

-------
  FROM \ TO                  DIRECT-           MUNICIPAL        LAND            AIR              SURFACE
                             DISCHARGERS       INCINERATORS     DISPOSAL                         WATER
                             (STPs & IND.) k   i

  DIRECT-DISCHARGERS                           SLUDGE           SLUDGE,         PARTICULATES,    BOD,
  (STPs & IND.) k                                               SOLIDS          SO2              SOLIDS

  MUNICIPAL INCINERATORS i   SOLIDS                             SOLIDS          PARTICULATES,
                                                                                SO2

  INDUSTRIAL SOURCES m       SOLIDS                             SOLIDS          PARTICULATES,
                                                                                SO2

  POWER PLANTS j                                                SOLIDS          PARTICULATES,    SOLIDS
                                                                                SO2

  AREA SOURCES n                                                SOLID WASTE     PARTICULATES,
                                                                                SO2

      Table 1 - Residuals Routing Process
      Figure 1 - Segmented Water Quality Model of the Delaware Estuary
                 (Delaware Estuary Comprehensive Study sections for the mathematical model)

      Figure 2 - 1968 Annual Average Particulate Concentration (ug/m3),
                 Metropolitan Philadelphia Interstate Air Quality Control Region

      Figure 3 - 1968 Annual Average SO2 Concentration (ug/m3),
                 Metropolitan Philadelphia Interstate Air Quality Control Region
                                                 444.

-------
      Figure 4 - Resultant Annual Average Particulate Concentration (ug/m3),
                 Metropolitan Philadelphia Interstate Air Quality Control Region

      Figure 5 - Resultant Annual Average SO2 Concentration (ug/m3),
                 Metropolitan Philadelphia Interstate Air Quality Control Region

      Figure 6 - Summer Average Dissolved Oxygen Profiles by Estuary Model Section (mg/1)
                 (present water quality standards shown; subtract 0.5 to compute
                 minimum daily average)
                                                445

-------
                                TOWARD A DYNAMIC  ECONOMIC MODEL FOR REGULATING

                                            FLUOROCARBON EMISSIONS
                                 Ralph  d'Arge,  Larry Eubanks,  Joseph Harrington
                                            Department  of  Economics
                                             University of Wyoming
                                            Laramie,  Wyoming  82071
Abstract

A sequence of benefit-cost models is examined to deter-
mine economically feasible and optimal regulatory stra-
tegies for the production of chlorofluorocarbons by the
United States.  Estimates of environmental costs and
market losses (consumer surplus) are developed to esti-
mate at the margin where these costs balance each other.
The implications of a dynamic regulatory model are
briefly outlined.

Introduction

During the past decade, there has been a growing recog-
nition that economic decisions might yield major im-
pacts on global commons property resources, including
the oceans, atmosphere, and even the electro-magnetic
spectrum.  Most recently, concern has been expressed as
to the impact of chlorofluorocarbons on the ozone con-
centration in the stratosphere1 and on the impact of
these same compounds on world climate.  According to
the IMOS report,

     "Although the theory of possible ozone reduction
     (in the stratosphere) by fluorocarbons 11 and 12
     (F-ll and F-12) cannot be presently supported by
     direct atmospheric measurements, the matter has
     been carefully studied independently by many
     scientists.  Thus far, the validity of the theory
     and the predicted amounts of ozone reduction have
     not been seriously challenged.  More research is
     required and will be undertaken, but there seems
     to be legitimate cause for serious concern."2

An extremely simplified sketch of this concern might be
as follows: chlorofluorocarbons after or during econom-
ic use escape and ultimately collect in the stratos-
phere, a distinct air layer 11-60 kilometers above the
surface of the earth.  In the stratosphere these chem-
icals interact with ozone and other chemical constitu-
ents, initiating a reduction in ozone and perhaps a
change to plants and animals, including humans.  The
climatic changes are presumed to induce another set of
adjustments to organic life.  The major question is
whether, on balance, these changes in organic life are
beneficial or adverse to humans.  This is also the cen-
tral question addressed, from an economic perspective,
by the researchers authoring this report.  Secondarily,
a set of policy alternatives is examined as to the eco-
nomic feasibility of alternative regulatory strategies.
Typically such examinations are done with the aid of a
cost-benefit analysis of the problem, and the results
of such an analysis will be briefly summarized.  In ad-
dition, possible steady state solutions to the question
of optimal emissions are also examined.  Finally, since
the above two analyses do not indicate the optimal path
to the optimal emission levels, and since there are
differential emission rates for various uses of F-ll
and F-12, a dynamic model of emissions is also outlined
and discussed in an attempt to gain insight into the
nature of the optimal path of emission reduction over
time.
Cost-Benefit Analysis

The strategic point of  the analysis was  to  examine mar-
ket relationships for F-ll and  F-12 and  consumer  prod-
ucts utilizing them and also  to attempt  to  partially
estimate the societal costs and benefits involved in
their production and ultimate emission into the atmos-
phere.

Fluorocarbons for the most part are not  purchased di-
rectly by households but are  utilized as inputs to pro-
duce consumer products or services.  In  consequence,
the observed demand relationships  for fluorocarbons do
not directly relate to consumer valuation but rather
indirectly through demand for products utilizing  fluor-
ocarbons.  Under some specialized  circumstances,  final
product demand and consumer surplus will be exactly re-
presentable by the derived demand  and surplus for flu-
orocarbons,  such that observed  losses in "derived" sur-
plus would be equivalent to loss in consumer surplus in
final goods markets.  Unfortunately, observed data on
prices and quantities sold historically  may not ade-
quately reflect actual dependencies between final prod-
uct demand and surplus and derived demand and surplus.
In order to obtain reasonable, valid bounds on "consumer
surplus," both derived surplus  and consumer surplus
losses had to be estimated.

Measures of "derived surplus1' loss for restrictions in
fluorocarbon production were  developed for  F-ll and
F-12 along with measures of consumer surplus loss for
the major final products using  these fluorocarbons in
their production.  Other fluorocarbons were not exam-
ined in detail by the authors.   Included in the list of
final products were: refrigerators, aerosol deodorants,
auto air conditioners, polyurethane foam mattresses,
and mobile vehicle refrigeration systems.   According to
the IMOS report, these products accounted for about 90%
of utilization of major fluorocarbons and more than 98%
of F-11 and F-12 use in 1972.3

Table 1 summarizes the empirical development of derived
and final product demand relationships.  Because of the
relationship between derived and final product demand,
the demand relationships were believed to be part of a
simultaneous system of equations; the regression proce-
dure known as two-stage least squares was therefore used
instead of ordinary least squares so that consistent
coefficient estimates could be obtained.4  Table 2 pre-
sents the set of willingness to pay estimating equations
which were derived from the estimated derived and final
product demand relationships in Table 1.
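The two-stage procedure can be sketched in a few lines; the following is a
minimal illustration with simulated data and hypothetical variable names (a
cost shifter serving as the instrument), not the authors' actual specification
or data.

    # Minimal two-stage least squares sketch on simulated data (hypothetical
    # names; not the authors' estimation).  Stage 1 regresses the endogenous
    # price on the instruments; stage 2 uses the fitted price in the demand
    # equation so that the coefficient estimates are consistent.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    income = rng.normal(3000.0, 400.0, n)        # exogenous demand shifter
    cost_shifter = rng.normal(10.0, 2.0, n)      # instrument: shifts supply only
    price = 5.0 + 0.8 * cost_shifter + 0.001 * income + rng.normal(0.0, 1.0, n)
    quantity = 100.0 - 3.0 * price + 0.01 * income + rng.normal(0.0, 2.0, n)

    def ols(y, X):
        # ordinary least squares coefficients
        return np.linalg.lstsq(X, y, rcond=None)[0]

    ones = np.ones(n)
    Z = np.column_stack([ones, income, cost_shifter])   # stage 1 regressors
    price_hat = Z @ ols(price, Z)                       # fitted (instrumented) price
    X2 = np.column_stack([ones, price_hat, income])     # stage 2 regressors
    beta_2sls = ols(quantity, X2)
    print("2SLS estimates (intercept, price, income):", beta_2sls)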

In Table 3 are recorded estimates  of the present  value
of derived surplus for F-ll and F-12 and consumer sur-
plus for major consumer products using F-ll and/or F-12
in their production.  Such estimates were derived using
the equations in Table  2.  As is readily apparent from
the estimates, "derived surplus" estimates  amount to
about $3 billions, while consumer  surplus for the major
products using them amount to more than  $84 billion8•
Of course, these estimates would tend to bound the ac-
tual value of consumer  surplus. On one  extreme,  if no
                                                       446

-------
substitutes  existed for producing the final product,
then the appropriate measure of economic loss would be
the sum of consumer surplus losses in the final markets
impacted.  If, on the other hand, there were such sub-
stitution possibilities, it would appear appropriate to
utilize the "derived surplus" estimates.

It has been  hypothesized that F-ll and F-12 emissions
will induce  two global  effects:

     1)  reduction in stratospheric ozone and increase
         in  UV-B  light  at the earth's surface
     2)  a slight rise  in surface temperature due to an
         increased transparency of the stratosphere re-
         sulting  from ozone depletion.

Both of  these  global effects, if they occurred at a
significant  level, would have large scale ramifications
on biological  life and  thereby on the U.S. and other
nations'  economies.  It would seem to be impossible to
empirically  estimate the thousands of interrelated im-
pacts of  changes  in surface microclimates.  In the par-
tial analysis  which was undertaken, costs and benefits
are estimated  for some  major sectors of the U.S. econo-
my from ozone  depletion or enhancement and for slight
long-run increases in surface temperature.  The U.S. sec-
tors and/or  components  of them included:

     1)   Ozone depletion
         1.1 Non-melanoma skin cancer
         1.2 Materials  weathering (polymeric materials)

     2)   Temperature change (induced by ozone depletion)
         2.1 Marine resources  (13 economic species)
         2.2 Forest products
         2.3 Agricultural crops (corn and cotton pro-
             duction)
         2.4 Urban resources (fossil fuel, electricity,
             housing, clothing and government expendi-
             tures).

Estimated environmental costs and benefits by category
for  1973 levels of emissions continuing into perpetuity
are  recorded in Table 4.

The major question is whether, given the evidence, F-ll
and  F-12 should be regulated as to production and/or
emissions and  to  what degree.  It is clear from a sim-
ple  comparison of Tables 3 and 4 that the present value
of  net benefits of a complete ban on fluorocarbons
production is  positive  if "derived surplus" is used as
a measure of social cost and negative if the sum of
"consumer surpluses" is used as the relevant measure.
These conclusions are summarized in Table 5.

From Table 5 several general conclusions can be infer-
entially drawn.  These  are:

     1)   A complete ban on F-ll and F-12 may or may not
         be  economically feasible depending on the
         availability of substitutes.  The benefit-cost
         ratio for a complete ban may range from 0.2 to
         more  than 6.0.

     2)   A partial ban  on F-ll and F-12 use in products
         other than as  a refrigerant appears to be eco-
         nomically feasible, although a major end use,
         hair sprays,5 has not been included in the
         benefit-cost comparisons.

     3)   If  the hypothesis that fluorocarbon emissions
         affect temperature through altering the amount
         of  ozone and thereby light reduction is not
         true, then the economic feasibility of a total
         ban is questionable.
Steady State Analysis

The benefit-cost analysis in the preceding section in-
dicated the circumstances under which a complete or
partial ban on F-ll and F-12 might be economically fea-
sible.  This analysis is supplemented in this section
by an examination of the optimum "steady state" per-
centage reduction in the production of fluorocarbons.

The optimum "steady state1' percentage reduction in the
production of fluorocarbons can be defined as occurring
when total costs, including surplus losses and environ-
mental costs, is at a minimum.  This optimization prob-
lem can be written as:
                 min.  S(Q) + EC(Q)
           (1)
where S(Q) denotes consumer surplus losses which are a
function of the percentage of 1973 production of F-ll
and F-12, Q, and EC(Q) denotes the environmental costs
as a function of Q also.  The necessary and sufficient
conditions for an optimum are given by:
       S'(Q) + EC'(Q) = 0   or   S'(Q) = -EC'(Q)
                 S"(Q) + EC"(Q) > 0                   (2)
where (') and (") denote first and second derivatives
respectively.
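With a quadratic surplus loss function and a linear environmental cost
function of the kinds estimated below, the first-order condition yields a
closed-form interior optimum that must then be restricted to the interval
[0,100].  The following sketch uses hypothetical coefficients, not the
estimates of Tables 6 and 7.

    # Steady-state optimum sketch: minimize S(Q) + EC(Q) on [0, 100] with
    # S(Q) = s0 + s1*Q + s2*Q**2 and EC(Q) = e0 + e1*Q.  All coefficients
    # below are hypothetical.
    def optimal_q(s1, s2, e1, lo=0.0, hi=100.0):
        # first-order condition: s1 + 2*s2*q + e1 = 0 (requires s2 > 0)
        q_interior = -(s1 + e1) / (2.0 * s2)
        return min(max(q_interior, lo), hi)           # corner solution if outside

    s1, s2 = -2.9, 0.015          # hypothetical surplus-loss slope and curvature
    for e1 in (1.9, 7.8, 15.9):   # hypothetical annualized environmental-cost slopes
        q_star = optimal_q(s1, s2, e1)
        print("EC slope %5.1f: Q* = %5.1f (reduction = %.1f%%)"
              % (e1, q_star, 100.0 - q_star))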

For the purposes of this optimization, both S(Q) and
EC(Q) have been estimated.  The best estimator of S(Q)
appears to be a parabolic function.  Three possible
S(Q)'s were estimated for various measures of consumer
surplus.  The estimated S(Q) functions are presented in
Table 6.  The first function uses derived surplus as
the measure of surplus loss, the second function uses
consumer surplus, and the final function uses consumer
surplus utilizing the assumption that substitutes for
refrigerants exist after ten years and substitutes for
propellants and foams are available immediately.

It was observed that the relationship between environ-
mental cost by category in present value terms and U.S.
production of F-ll and F-12 approximated a straight
line over the range of interest.  Thus, the estimation
of the chain of events of production to emissions,
emissions to changes in UV light and temperature,
changes in UV light on materials life and skin cancer,
and temperature on urban plus natural resources, and
finally conversion of these physical-biological impacts
into discounted (at a constant rate) economic costs can
apparently be approximated by a linear relationship.
The approximate linear relationship occurred for both
temperature and UV related environmental costs.  These
estimated linear relationships are presented in Table
7.  In order for the environmental cost functions to be
comparable with the surplus loss functions at given
"steady state" long run reductions in U.S. production,
those occurring in the distant future had to be dis-
counted back to the present then annualized.  Thus the
estimated EC(Q) presented in Table 7 represents annual-
ized environmental costs for the various discount rates
(3%, 5%, and 8%).
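The fitted relationship takes the form y = a + bx, with x the percentage of
1973 production.  A sketch of such a fit, on made-up points rather than the
study's cost estimates, follows.

    # Fit EC = a + b*x, where x is the percentage of 1973 F-11/F-12 production.
    # The sample points are made up for illustration.
    import numpy as np

    x = np.array([0.0, 25.0, 50.0, 75.0, 100.0])        # percent of 1973 production
    ec = np.array([95.0, 290.0, 490.0, 680.0, 880.0])   # hypothetical annualized costs
    b, a = np.polyfit(x, ec, 1)                         # slope first, then intercept
    r2 = np.corrcoef(x, ec)[0, 1] ** 2
    print("EC(x) = %.1f + %.2f x   (r^2 = %.3f)" % (a, b, r2))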

The results of the minimization problem represented by
(1) are summarized in Table 8.  It is important to note
that Q is only defined over the interval  [0,100] since
it represents percentage of 1973 production of F-ll and
F-12.  As such, any solution presented in Table 8 for
which either Q ≤ 0 or Q ≥ 100 represents a "corner solution"
and corresponds to either a 100% optimal reduction in
production or no reduction in production, respectively.
Since Q denotes percentage of 1973 production of fluor-
ocarbons, the optimal reduction in fluorocarbons is
given by 100-Q.  The important conclusion of this
"steady state" optimization analysis is that the deter-
                                                       447

-------
mination of the optimum is dependent both on the pos-
sibility of substitutes for fluorocarbons in final
products as well as whether the environmental impacts
are likely to be associated with UV increase or also
with climatic change.

A possible policy alternative would be to assume
that discretionary power existed which would allow
regulators of fluorocarbons to specify which final
products would be allowed to utilize F-ll and F-12 in
their production.  For purposes of making such policy
decisions, one reasonable strategy might be to elimi-
nate use of F-ll and F-12 first in those products
which provide the smallest loss in consumer surplus per
pound of fluorocarbons used in the production of the
product.  If crude estimates are made of average con-
sumer surplus generated in dollars per pound of fluor-
ocarbon by type of consumer product, one obtains a
range of from 18¢ per pound for deodorants to $272 per
pound for refrigerators.  Figure 1 illustrates the op-
timal "steady state" under this assumption of discre-
tionary power where the loss in consumer surplus func-
tion is derived by assuming complete elimination of
final products utilizing fluorocarbons where the prod-
ucts are eliminated from smallest value of consumer
surplus per pound of fluorocarbon to greatest value.
In this case, a rather narrow optimum range of from 42
to 48% reduction in F-ll and F-12 is estimated as
optimal.  The products that could not be allowed to
use F-ll and F-12 are:  deodorants using propellants;
perhaps hair spray; foam mattresses, and perhaps some
type of refrigeration systems.  It is to be noted that
unlike the previous cases, the optimum strategy is
highly insensitive to the discount rate applied to
environmental costs.
A Prelude to Dynamic Analysis

The analysis of the preceding section has indicated
possible optimal "steady state" reductions in the prod-
uction of F-11 and F-12, but such an analysis does not
indicate the optimum path through time for regulation,
i.e., it might pay to slowly approach the steady state
optimum and only achieve it in two or three decades.
The importance of this possibility is perhaps suggest-
ed by the observation that there are differential
emission rates for each of the possible categories of
final product uses of fluorocarbons, and also by the
fact that at current emission rates an appreciable
(2-4%) ozone depletion may not occur for 10 to 20
years and biological impacts 10 to 30 years after
that.  In addition, time lags in approaching the
"steady state" would permit the development of ade-
quate substitutes for some or all final products now
dependent on F-11 and F-12.  The dynamic model which
is developed below will perhaps aid in an analysis
attempting to discover optimal time paths toward the
optimal steady state.

Development of the model best begins with a discussion
of differential emission rates.  There are, as was
noted earlier, three major categories of fluorocarbon
use:  propellants, foaming agents, and refrigerants.
Each of these uses causes emission of fluorocarbons
into the atmosphere, but the rate of release is dif-
ferent in each case.  Foaming agents release their
emissions immediately upon use, while the assumption
which is usually made with regard to propellants is
that their emissions are released within six months of
their production.6  On the other hand, refrigerant
uses release emissions at a much slower rate that var-
ies from about two percent to 30% annually depending
upon the type of refrigerant use.7  Thus the dynamic
aspect of the fluorocarbon emission problem which is
of particular interest is the rate at which the emis-
sions occur for the various product uses of fluorocar-
bons.  If one denotes emissions by product Zp, Zf, and
Zr for propellants, foams, and refrigerants respective-
ly, structural equations for a model might be of the
following form:

                 Zf(t)    = Bf Qft                    (4)
                 Zp(t+.5) = Bp Qpt                    (5)
                 Zr(t+6)  = Br Qrt                    (6)

where the Q's would represent quantity of fluorocarbons
used as foams, propellants, and refrigerants, and the
B's give the emissions per unit of fluorocarbons in
each use.  Zr(t+6) was calculated as a weighted average
of emission rates for refrigerants, using the assump-
tion that all emissions are released at the end of the
economic lifetime of the refrigeration final product in
question; and the economic lifetime was assumed com-
plete after 70% of the refrigerant had been emitted at
given emission rates for the type of refrigeration
final product being considered.  For example, after a
refrigerator had emitted 70% of its original charge it
was assumed that the refrigerator was "scrapped" and as
a result all remaining refrigerant was assumed emitted.
This is an extremely simplified approach which does not
include a more realistic representation of the depre-
ciation process of the refrigerator nor its consequent
source of emissions when it is no longer useful as a
refrigerator.  Total emissions of F-11 and F-12 can
then be represented by:

         Zt = Zf(t) + Zp(t+.5) + Zr(t+6) + Qlt        (7)

where Qlt represents the quantity of fluorocarbons lost
to the environment during production, transport, and
storage.

Estimated values for the parameters in the above equa-
tions have been derived from the IMOS report and imply
the following relationships:8

                 Zf(t)    = Qft                       (8)
                 Zp(t+.5) = .94Qpt                    (9)
                 Zr(t+6)  = Qrt                      (10)

A representation for the production relationships be-
tween the total quantity of F-11 and F-12 and the
quantities used as foams, propellants and refrigerants
needs to be derived.  A very simple relationship of
this form would be:

                 Qt = Qft + Qpt + Qrt                (11)

Such a relationship essentially represents a fixed
coefficient production function which would not ade-
quately reflect actual production decisions that would
vary according to the relative profitability of apply-
ing fluorocarbons in the production of foams, propel-
lants, and refrigerants.  The following relationship
was estimated using 1973 production data:9

        Qpt = .65Qt,   Qft = .10Qt,   Qrt = .25Qt    (12)

Utilizing (8), (9), (10), and (12), a simple "mass
balance" relationship is derivable:

     Z(t) = .711Q(t) + .25Q(t-6) + .039Q(t-1)        (13)

assuming the remainder of propellants escapes into the
stratosphere in the following year.
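Equation (13) converts a production path into an emission path through fixed
lags.  The following sketch evaluates it for a hypothetical production series;
the series itself is illustrative only.

    # Emission "mass balance" of equation (13):
    #   Z(t) = .711*Q(t) + .25*Q(t-6) + .039*Q(t-1)
    # The production path below is hypothetical.
    def emissions(q, t):
        return (0.711 * q.get(t, 0.0)
                + 0.25 * q.get(t - 6, 0.0)
                + 0.039 * q.get(t - 1, 0.0))

    q = {year: 800.0 for year in range(1967, 1977)}        # constant production
    q.update({year: 400.0 for year in range(1977, 1984)})  # 50% cut from 1977
    for year in range(1975, 1984):
        print(year, round(emissions(q, year), 1))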
                                                       448

-------
The next aspect,  and  perhaps the most difficult to deal
with adequately,  is describing the relationship between
the production and use  of  fluorocarbons and environmen-
tal costs.  Earlier in  the paper it was indicated that
the sort of causal chain involved is from production to
emissions of fluorocarbons,  from emissions to ozone de-
pletion, from ozone depletion to increases in UV light
and changes in climate,  and  from these changes to envi-
ronmental cost impacts.   However, this simple delinea-
tion of a causal  chain  has abstracted from the time
element involved.  For  example,  the impact of an in-
crease in UV light on the  incidence of skin cancer does
not occur immediately with the change in UV light but
rather may reach the new steady state incidence rate
after approximately 80 years.10

The problem is one of relating environmental costs to
ozone depletion because of its consequent impact on UV
light and climate.  Ideally, it would be desirable to
estimate the following sets of relationships:

          O3(t) - O3(t-1) = f(Zt)                    (14)

          EC1(O3, t-z1)  and  EC2(O3, t-z2)          (15)

where O3 denotes ozone concentrations in the stratos-
phere, EC1 and EC2 denote environmental costs associat-
ed with UV changes and climatic changes respectively,
f(Zt) denotes the functional relationship between ozone
depletion and emissions of fluorocarbons, and t-z1 and
t-z2 represent the appropriate time profiles for UV
induced and climatic induced environmental costs.  Un-
fortunately, such complete relationships have not been
adequately estimated.  The problem arises because esti-
mates of ozone depletion resulting from F-11 and F-12
production presume "steady state" production levels
into the indefinite future.

As a first approximation to the problem, environmental
costs can be specified  as a  function of emissions.   Re-
calling  the  discussion  of "steady  state" optimum in
the previous section, it was observed that the rela-
tionship between  production  and annualized environmen-
tal costs was very  nearly linear.   It seems reasonable
to assume that  the  relationship between environmental
costs and emissions would also be  approximately linear.

Net economic benefits from fluorocarbon production can
be estimated as consumer surplus.   The objective func-
tional for this model could  then be represented as  the
maximization of net benefits which is the difference
between  consumer  surplus and environmental costs:
          max  Σ [B(Qt) - EC(Zt)] nt ,   t = 1, ..., y        (16)

where nt is a discount factor and the planning interval
one to y is given.

With these assumptions and solving this simple model
utilizing the previously described relationships, two
results are obtained regardless of the discount rate
employed.  First, if "derived surplus" measures of
consumer surplus are used, it is optimal to immediately
move to the optimum steady state.  Second, if the sum
of final product consumer surpluses is utilized (with
the consequent assumption of no future substitutions),
then it will never pay to alter 1973 U.S. production
levels.  Finally, if reductions in emissions can be
achieved through improved recycling and reduced losses
for refrigerants, then (with this model) production of
U.S. F-11 and F-12 should be reduced through time at a
rate which is directly related to the rate of reduced
emissions from this source.

An estimate of the environmental loss in present value
to the United States of a one year delay (from 1977 to
1978) in achieving a reduction of 90% in production is
$826 millions (at 5%).  Alternatively, a one year delay
would yield a gain of about $89 millions if measured by
derived surplus or $2,471 millions if measured by final
goods markets.  These results further confirm the con-
clusions above.  Continued research will hopefully
identify, given alternative assumptions regarding the
time rate of discovery of substitutes and recycling
possibilities, a feasible dynamic path for regulation.
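The objective functional in (16) can be evaluated for any candidate production
path by combining the mass balance of (13) with discounting.  The sketch below
does so with hypothetical stand-ins for B(Q) and EC(Z); it illustrates the
bookkeeping only and is not the model actually solved above.

    # Discounted net benefits over a planning interval, in the spirit of (16):
    # the sum over t of [B(Q_t) - EC(Z_t)] * (1 + r)**-t, with emissions taken
    # from the mass balance of (13).  B and EC are hypothetical stand-ins.
    def net_benefits(q_path, rate, benefit, env_cost, horizon):
        total = 0.0
        for t in range(1, horizon + 1):
            z_t = (0.711 * q_path.get(t, 0.0)
                   + 0.25 * q_path.get(t - 6, 0.0)
                   + 0.039 * q_path.get(t - 1, 0.0))
            total += (benefit(q_path.get(t, 0.0)) - env_cost(z_t)) * (1.0 + rate) ** -t
        return total

    benefit = lambda q: 40.0 * q - 0.02 * q * q        # hypothetical surplus function
    env_cost = lambda z: 9.0 * z                       # hypothetical linear cost
    constant = {t: 100.0 for t in range(-5, 31)}                       # no reduction
    phased = {t: max(100.0 - 10.0 * t, 10.0) for t in range(-5, 31)}   # gradual cut
    for name, path in (("constant", constant), ("phased cut", phased)):
        print(name, round(net_benefits(path, 0.05, benefit, env_cost, 30), 1))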
                        Footnotes

     1. M.J. Molina and F.S. Rowland, "Stratospheric Sink
for Chlorofluoromethanes: Chlorine Atom-Catalysed
Destruction of Ozone," Nature, Vol. 249 (June 28, 1974).

     2. Fluorocarbons and the Environment, Report of
Federal Task Force on Inadvertent Modification of the
Stratosphere (IMOS), Council on Environmental Quality
and Federal Council for Science and Technology (June
1975).

     3. Fluorocarbons and the Environment (op. cit.),
p. 88.

     4. For a discussion of simultaneous equation bias
and estimation of simultaneous equations, see Jan
Kmenta (1971), Elements of Econometrics, Macmillan
Co.: New York, or Henri Theil (1970), Principles of
Econometrics, John Wiley and Sons, Inc.: New York.

     5. Hair sprays were not incorporated in the analy-
sis due to an unavailability of adequate price data.

     6. Fluorocarbons and the Environment (op. cit.) and
Arthur D. Little, Preliminary Economic Impact Assess-
ment of Possible Regulatory Action to Control Atmos-
pheric Emissions of Selected Halocarbons, Draft Report,
Vol. 1, EPA Contract No. 68-02-1349, Task 8 (July 1975).

     7. Arthur D. Little (ibid.).

     8. Fluorocarbons and the Environment (op. cit.),
p. 91.

     9. Fluorocarbons and the Environment (op. cit.),
p. 88.

     10. Pythagoras Cutchis, Estimates of Increase in
Skin Cancer Incidence with Time Following a Decrease
in Stratospheric Ozone, Paper P-1089, Department of
Transportation, Climatic Impact Assessment Program,
Washington, D.C.
                                                      449

-------
     [Table 1 presents the estimated two-stage least squares demand equations
     for F-11, F-12, and the major final products, with standard errors in
     parentheses beneath each coefficient; t-statistics may be obtained by
     dividing a coefficient by its standard error, and superscripts mark the
     levels of statistical confidence of the estimated coefficients.]

       Table 1.   Summary of Two-Stage Least Squares
                      Regression Analysis
     [Table 2 lists, for F-11, F-12, refrigerators, aerosol deodorants, poly-
     urethane foam mattresses, auto air conditioners, and mobile vehicle
     refrigeration systems, the quadratic willingness-to-pay estimating
     equations derived from the demand relationships in Table 1, together with
     the relevant 1973 quantities and prices and the estimated annual willing-
     ness to pay at 1973 steady state values (millions of 1967 dollars).]

       Table 2.   Willingness to Pay (WTP) Estimation
                      Equations and Estimated Annual Will-
                      ingness to Pay Assuming 1973 as Steady
                      State Values
                  Present Value 1973, 5 Percent Discount Rate,
                            in Millions of Dollars

                                    Estimated 1973       Consumer Surplus
                                    Expenditures         or Derived Surplus
     Commodity Type                 by Commodity**       (present value)

     F-11                                  40                 2,201
     F-12                                  96                   740
     Refrigerators                      1,386                39,727
     Polyurethane foam mattresses          39                 1,007
     Aerosol deodorants                   729                 1,174
     Auto air conditioners                489                16,473***
     Mobile vehicle refrigera-
       tion systems                        97                 3,349

     *Area under the derived demand curve less equilibrium purchases
in 1973.

     **Expenditures are estimated from the demand relationships rather than
actual data since actual price may deviate slightly from predicted price
as given by the estimated demand relationship.

     ***See Chapter II for explanation of the size of this estimate.

       Table 3.   Estimates of Consumer Surplus and
                      Derived Surplus* For Selected Products,
                      United States, 1971 Dollars
                                                                                 450

-------
     Category of Impact                               Cost or Benefit*

     1.  Ozone Depletion
         1.1  Non-melanoma skin cancer**                  52 - 206
         1.2  Materials weathering                            569
         1.3  Biomass productivity                             --

     2.  Temperature Change (ozone induced)
         2.1  Marine resources                               -661***
         2.2  Forest products                             -11,060
         2.3  Agricultural crops
              2.3.1  corn                                     269
              2.3.2  cotton                                   -16
         2.4  Urban resources
              2.4.1  fossil fuel use                       -5,719
              2.4.2  electricity use                       45,617
              2.4.3  housing & clothing
                     expenditures                         -11,377
              2.4.4  public expenditures                     -696

     TOTAL                                                 16,357

     *Costs are expressed as a present value of all future
costs and benefits resulting from the emission of F-11 and F-12
produced in the year 1973 and maintained at that level into
perpetuity.  A five percent rate of discount was utilized to
convert to present values.  Estimated costs applying three and
eight percent discount rates are tabulated in Chapter VI of
this report.

     **Non-melanoma skin cancer costs are estimated at $325
per case and $1,292 per case.  See Chapter IV and Appendix 6
for justification.

     ***Negative sign denotes benefit.

       Table 4.   Estimates of Environmental Costs by
                  Category Due to Current Levels of F-11
                  and F-12 Emissions, United States, 1973
                  into Perpetuity (Millions of 1971 dollars)


     A.  Derived Surplus
         S(Q)  =  147.004 - 2.935Q + .0146Q²
         S'(Q) = -2.935 + .0292Q
         S"(Q) =  .0292

     B.  Consumer Surplus
         S(Q)  =  4086.704 - 81.755Q + .409Q²
         S'(Q) = -81.755 + .818Q
         S"(Q) =  .818

     C.  Consumer Surplus with Substitution Assumption
         S(Q)  =  3977.708 - 79.550Q + .398Q²
         S'(Q) = -79.550 + .796Q
         S"(Q) =  .796

     a) Q = percentage of 1973 production of F-11 and F-12.

     b) S(Q) = surplus loss function; Q is defined only over
        the range [0,100].

     c) The assumption about substitution is that there
are immediate substitutes for foams and propellants
and substitutes for refrigerants after ten years.

       Table 6.   Estimated Surplus Loss Functions
                                        Benefit          Cost        Benefit-
     Measure                            Estimate*        Estimate**  Cost Ratio

     Derived Surplus                   16,978-17,132       2,941        5.8

     Final Product
     Consumer Surplus                  16,978-17,132      81,720         .21

     Derived Surplus
     (plus omission of
     temperature impacts)                  621-775          2,942     .21-.26

     Final Product
     Consumer Surplus
     (Refrigerators, mobile
     refrigeration systems
     and automobile air
     conditioning uses
     excluded)                        9,508-9,594***        2,181        4.4

     *Measured by savings in environmental costs of 1973 level
production of F-11 and F-12 in present value terms.

     **Loss in consumer or derived surplus at 1973 use rates in
present value terms.

     ***Approximation based on a 56% reduction in steady state
emissions obtained from Fluorocarbons and the Environment (op. cit.),
Table VI-12.

       Table 5.  Benefit-Cost Comparisons for a Ban On
                 Production of Fluorocarbons 11 and 12,
                 United States, 5 Percent Discount Rate
                 (Millions of U.S. 1971 dollars)
     Discount Rate          Intercept        Slope         r²

     Total Environmental Cost
          3%                 184.94          15.87         .98
                             (55.36)         (1.04)
          5%                  96.08           7.8          .96
                             (36.57)         (0.68)
          8%                  23.23           1.96         .81
                             (22.77)         (0.43)

     Skin Cancer and
     Materials Weathering Costs
          3%                   6.29           0.46         .99
                              (0.78)         (.015)
          5%                   5.89           0.33         .99
                              (0.46)         (.01)
          8%                   3.88           0.16         .99
                              (0.20)         (.01)

     *The estimated equation was y = a + bx where y equaled annual
costs commencing in 1973 in millions of 1971 U.S. dollars and x
equaled the percentage of 1973 U.S. production of F-11 and F-12.
Standard errors are given in parentheses.

       Table 7.   Equations Used to Approximate the
                  Relationship Between Environmental Costs
                  and U.S. Production Level of Fluoro-
                  carbons 11 and 12
                                                            451

-------
     I.  Optimization Using Derived Surplus

     Discount Rate
     for EC(Q)            UV Related EC(Q)d      Total EC(Q)e
          3%                   Q = 100               Q = 0
          5%                   Q = 100               Q = 0
          8%                   Q = 100               Q = 49

     II.  Optimization Using Consumer Surplus

     Discount Rate
     for EC(Q)            UV Related EC(Q)d      Total EC(Q)e
          3%                   Q = 99                Q = 80
          5%                   Q = 99                Q = 90
          8%                   Q = 99                Q = 97

     III.  Optimization Using Consumer Surplus and Substitution Assumption

     Discount Rate
     for EC(Q)            UV Related EC(Q)d      Total EC(Q)e
          3%                   Q = 99                Q = 99
          5%                   Q = 99                Q = 90
          8%                   Q = 99                Q = 97

     a) Solutions are obtained by solving the necessary condition
S'(Q) = -EC'(Q) for Q.  Note that the sufficient condition, S"(Q) > -EC"(Q),
holds in every case.  Since EC(Q) is linear, EC"(Q) = 0, and from Table 6
it can readily be seen that S"(Q) > 0 in every case.

     b) Q = percentage of 1973 F-11 and F-12 production.

     c) EC(Q) = environmental costs.  Q ≤ 0 or Q ≥ 100 represents a "corner
solution" and corresponds to Q = 0 and Q = 100 respectively.

     d) UV related environmental costs represent skin cancer and materials
weathering costs; the appropriate relationships can be found in Table 7.

     e) Total environmental costs include both UV and climate related
costs.  For appropriate relationships see Table 7.

       Table 8.   Solutions for Optimum "Steady State"
                  Reduction in F-11 and F-12 Production
     [Figure 1 plots annual consumer surplus losses (products removed in rank
     order of loss per pound of fluorocarbons) and annual environmental costs,
     both in millions of 1971 U.S. dollars, as functions of the percent of 1973
     production of F-11 and F-12; curves A, B, and C apply 3, 5, and 8 percent
     social rates of discount, respectively.]

       Figure 1.   Annual Consumer Surplus Losses With
                     Product Ranking and Environmental
                     Costs, United States, 1973
                                   452

-------
                                    ENVIRONMENTAL, FISCAL AND SOCIO-ECONOMIC
                          IMPACT OF LAND USE POLICIES:  TOWARD AN  INTERACTIVE ANALYSIS
         J. Kuhner, M. Shapiro, R.J. deLucia
                 Meta Systems Inc
             Cambridge, Massachusetts
                        W.C. Lienesch
                   Water Planning Division
               Environmental Protection Agency
                       Washington, D.C.
     Effective implementation of recent environmental
quality legislation requires planning tools which give
quantitative estimates of the various impacts of land
use and environmental controls.  A literature review
revealed that no adequate comprehensive tools are
available.   Three models have been developed by Meta
Systems Inc for EPA in a recent study.  They evaluate:
(1) impact of urban nonpoint sources on water quality;
(2) sewer and treatment plant capacity; and  (3) distri-
bution of costs borne by different groups in response
to new development and environmental controls.

                   Introduction

     Federal, state, regional and local environmental
and land use policies have impacts on the physical
environment and on local governments' economic and
fiscal conditions.   The impacts have been recognized
in a qualitative manner, but there are few appropriate
tools to quantify them.  If land use and emission
controls are to be used effectively in implementing
environmental policies, it is necessary to consider
the dynamics of the local environment in which these
controls are being imposed.  The availability of inter-
active computer models would permit local and regional
planners and decision makers to look at the  inter-
actions and impacts described above.  Ideally, such
models should enable planners to consider all re-
ceiving media  (air, water, land) and take into account
the processes which transfer residuals from  one medium
to another.

     Meta Systems recently completed a project for
EPA which emphasized urban land use/environmental
quality relationships.1  Figure 1 is a conceptual
                         INPUT
             Socio-Economic and Demographic; Land
             Uses; Hydrology/Hydraulic; Meteorology;
             etc.

                         OUTPUT
             1) Emissions and Ambient Quality
             2) Costs to Federal and State Government;
                Community, Household; Private Firms; etc.
             3) Cost Incidence
    Figure 1:  Conceptual Model  for Urban Land Use
               Environmental  Quality Relationships
model of the interactions  between land use and environ-
mental quality.  This model provided a general frame-
work for analyzing relationships  and for organizing
a review of literature on  models  and emissions (see
deLucia, et al.1).  However, the project concentrated
on the development of three types of models:
(1) models for assessing the  environmental impact of
urban stormwater runoff,  (2)  a model for evaluating
the capacities of sewers and  wastewater treatment
plants; and (3) a cost distribution  model for  asses-
sing the cost to be covered by different groups in
response to new urban and suburban development.  In
addition, an extensive evaluation of air pollutant
emissions was undertaken.  In this paper we briefly
describe the models developed during the course of the
study and outline their potential applications.

                 Descriptions of  the Models

Urban Runoff and Washoff

     Urban runoff produces about  the same order of
magnitude of pollutant as  secondary  effluent from
separately treated sanitary sewage,  with the exception
that it is somewhat lower  in  total nitrogen and higher
in sediment.2  Thus maintenance and  improvement of
water quality requires methods for predicting  the
impact from runoff associated with urban land  uses.
We have combined a dynamic water  quality model with a
rainfall-runoff-model; the resulting program can be
used for investigating the impact of land uses and
urban land management (such as street sweeping)  on the
quantity and quality of stormwater runoff,  and on the
propagation of waves and pollutants  in the receiving
water body.  After an intensive literature review1,2,
we decided to link together STORM3 and the dynamic
receiving water body model of SWMM4, two publicly avail-
able computer models.  STORM, a continuous model, is
attractive because of its  relative simplicity  of use.
The data required for the model are  not very detailed
and appear to be available in most areas.   These data
include land use categories,  terrain description,
pollutant loading, runoff  coefficients,  antecedent
conditions, available storage and treatment, erosion
potential and precipitation record.   The major draw-
back of the model is its simplified  approach to the
runoff coefficient; it uses an adjusted rational
formula containing a composite runoff coefficient.
STORM is designed to compute  urban as well as  non-
urban runoff and washoff.  For every hour of runoff,
hydrographs as well as pollutographs are generated.
Pollutants included are suspended solids,  settleable
solids, BOD, N, P04, and coliforms (MPNs).  STORM
does not have any routing routines so that the appli-
cation is limited to areas of less than approximately
10 square miles.
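The composite runoff coefficient referred to above is an area-weighted average
of coefficients by land use, applied to the rainfall depth.  A minimal sketch
follows; the land use areas, coefficients, and rainfall depth are hypothetical.

    # Area-weighted composite runoff coefficient applied to a rainfall depth.
    # Land uses, coefficients, and rainfall are hypothetical.
    land_uses = [
        # (label, area in acres, runoff coefficient)
        ("residential", 1200.0, 0.3),
        ("commercial",   300.0, 0.8),
        ("open space",   900.0, 0.1),
    ]

    total_area = sum(area for _, area, _ in land_uses)
    c_composite = sum(area * c for _, area, c in land_uses) / total_area

    rainfall = 0.5                                    # inches over the interval
    runoff_depth = c_composite * rainfall             # inches of runoff
    print("composite C = %.2f, runoff = %.2f in (%.0f acre-inches)"
          % (c_composite, runoff_depth, runoff_depth * total_area))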

     SWMM has two distinct phases (hydrodynamic and
quality)  which may be simulated together or separately.
In the first phase, the equations of motion and
continuity are applied to derive  the hydrodynamics of
the system for each time step, while in the second
phase concentrations of conservative and non-
conservative quality constituents are computed by using
                                                       453

-------
the first phase results and equations for conservation
of mass.   Information requirements are similar to
those of the steady-state models.
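A single fully mixed segment illustrates the kind of mass-conservation update
performed in the quality phase; the sketch below is a stand-in with
hypothetical inflow, volume, and decay values, not the SWMM routines
themselves.

    # Mass-conservation update for one fully mixed receiving-water segment,
    # a stand-in for the quality phase (not the SWMM code).  A conservative
    # constituent uses k = 0; a non-conservative one decays at first order.
    # All numbers are hypothetical.
    import math

    def step(conc, volume, inflow, c_in, outflow, k, dt):
        # conc in mg/l, volume in m^3, flows in m^3/day, dt in days
        mass = conc * volume + (inflow * c_in - outflow * conc) * dt
        volume_new = volume + (inflow - outflow) * dt
        conc_new = mass / volume_new
        return conc_new * math.exp(-k * dt), volume_new

    c, v = 2.0, 5.0e5
    for _ in range(24):                               # 24 hourly steps
        c, v = step(c, v, inflow=1.0e4, c_in=8.0, outflow=1.0e4,
                    k=0.23, dt=1.0 / 24.0)            # k = 0.23/day, e.g. BOD decay
    print("concentration after 24 h: %.2f mg/l" % c)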

     A considerable amount of time and effort has gone
into restructuring STORM and SWMM to link the two pro-
grams and make them compatible.  The major modifi-
cations completed are presented in Table 1.

                      Table 1

         Changes in STORM (A) and SWMM (B)
             for an Efficient Linkage

(A) — GPH* files have been created to pass results
       from STORM to SWMM
    — Rain interval is used instead of rain event for
       file generation
    — Erosion is calculated hourly
    — Erosion is accumulated over the rain interval
    — Eroded material can be added to suspended
       solids
    — Coliforms are included as sixth pollutant
    — Program calculates amount of dust and dirt
       accumulated at the beginning of rain interval,
       and amount left over after the rain event
    — Numerous bugs in the original program have been
       corrected (logic, core, program, files, de-
       fault).

(B) — GPH files are accepted as sequential input
    — One or more of the six pollutants can be
       selected for individual runs
    — Quality phase can be run independently of
       hydrodynamic phase
    — Adjusting factors introduced for GPHs

     *GPH means hydrograph and pollutograph.
The linkage provides two points of interaction for the
planner.  First, he can choose specific rain intervals
from the continuous simulation period of STORM.  Then
he can select those intervals from STORM's output of
pollutographs and hydrographs, to be passed on to
SWMM.  This option significantly reduces the computing
costs, but still allows for simulation of all the
events if computation of a frequency distribution of
conditions is desired.  Water quality computations
for specific pollutants can be done separately from
the hydrodynamic computations by varying pollutograph
inputs generated for each point discharge of runoff.
This permits intensive testing of quality related
parameters and thereby facilitates calibration of the
quality model.5

    The following types of output can be generated by
STORM and SWMM:  (1) hydrographs and pollutographs
as total/year/sub-basin or as total/rain interval/
sub-basin or as total/hour/sub-basin;  (2) the amount
of dust and dirt on impervious areas at the beginning
and end of rain intervals; (3) total erosion for
selected rainfall events and the amount finally
reaching channels  (stream) after application of a
sediment delivery ratio;  (4)  stage of water level
at each selected node of the river system for every
rainfall event;  (5) water level at every node of the
river system for each day; (6) velocity and flow in
every channel of the river system for each day; and
 (7) hourly concentration of selected pollutants at
every selected node.
Capacity Evaluation Model

     The capacity evaluation model enables planners to
predict when and where  new  sanitary sewer and waste-
water treatment plant capacity  will be needed to accom-
modate projected development.   This capability enables
planning to begin on relief sewers before environmen-
tally disruptive back-ups of main sewers occur.   In
addition, the information provided by the capacity
evaluation model provides one of the elements needed
to project the fiscal impact of development.   It is
not the purpose of the  model to replace the detailed
engineering services provided by public works depart-
ments and engineering consultants.  Rather it serves
to warn planners when additional detailed (and expen-
sive) engineering studies are required and indicates
where the most significant  problems are likely to
occur.

     The evaluation model works from two types of input
data.  One set of data  contains land use and planning
information, including  land use projections and  waste-
water generation characteristics.   The data set  is
organized by cells, which are geographic sub-regions
of the planning area.   Cells may be specified from
rectangular grids, census tracts,  or other classifi-
cations for which planning  data are available.

      The  second set of data characterizes the collec-
 tion network  and employs a. classification scheme adop-
 ted  from  a previous Meta Systems study.6  The collec-
 tion system is  divided into a series of arbitrary
 links.  Each  link  is assigned an identification number,
 and  characterized  by link type and by the number of
 the next downstream link.  This characterization
 enables the program to "reconstruct" the sewer network
 and  evaluate  link  flows in a straightforward fashion.
 The  program logic  makes it possible to characterize
 virtually any realistic collection network, and allows
 for  the inclusion  of force mains and relief sewers.
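The link bookkeeping described above can be illustrated as follows; the link
identifiers, design flows, and local inflows are hypothetical, and the input
format differs from the program's actual data set.

    # Each link records its next downstream link, its design flow, and the
    # local (cell) inflow assigned to it; flows are accumulated downstream and
    # compared with capacity.  Link data are hypothetical.
    links = {
        # id: (downstream id or None, design flow in mgd, local inflow in mgd)
        101: (103, 2.0, 0.6),
        102: (103, 1.5, 0.9),
        103: (104, 1.5, 0.4),
        104: (None, 4.0, 0.2),    # outfall or treatment plant
    }

    totals = {lid: 0.0 for lid in links}
    for lid, (_, _, local) in links.items():
        node = lid
        while node is not None:                  # push this flow down the network
            totals[node] += local
            node = links[node][0]

    for lid in sorted(links):
        design = links[lid][1]
        pct = 100.0 * totals[lid] / design
        flag = "  <-- overflow" if totals[lid] > design else ""
        print("link %d: %.1f of %.1f mgd (%.0f%%)%s"
              % (lid, totals[lid], design, pct, flag))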
     The major output of the capacity evaluation model
is a tabulation of maximum  flows for each link in the
network.  This information  is presented for four time
periods:  a base year and 10, 25,  and 50 years from
the base year.  The projected flows are compared with
maximum design flows of the links and the percent
utilization and overflows  (if any)  are indicated.
An additional program option allows similar results
to be computed and listed for treatment plants located
in the system.

     The model uses steady  state hydraulics;  the
ability of upstream links to store back-up from
an overloaded line is not taken into account.  Thus
actual overflows are likely  to be less than those
predicted.  However, the model  is intended only  to
signal possible problems.   More detailed evaluations
should be performed when overflows are indicated.

Cost and Fiscal Impact  Model

     Because of the multiplicity of financing arrange-
ments and the importance of considering the temporal
aspects of financing, it is often difficult for  a
planner to work through the financial aspects of new
development, particularly where many alternative
development patterns and financing arrangements  must
be considered.  The  fiscal  impact model is designed
to help planners trace  how  the  environmental  infra-
structure costs incurred by new development are  reflec-
ted in fiscal demands upon  the  community and in  charges
to the individual consumer.

     The structure of the impact model is depicted  in
Figure 2.  Input data includes  three types of infor-
mation:
                                                       454

-------
    1.   Socio-economic data describing the size and
growth of the residential, commercial, and industrial
land uses served by the facility; and data on the
property values associated with these uses.

    2.   Facility data specifying the type and size of
facility.  Currently the program handles eight types
of facilities:  septic tanks, sanitary sewer laterals,
sanitary sewer house-connections, sanitary sewer mains
and trunks, storm sewer laterals, stormwater detention
basins, storm sewer mains, and sewage treatment plants.

    3.  Cost allocation data specifying the mechanisms
 (e.g., user charges) used to finance the facilities
and the cost shares allocated to each mechanism.

     [Figure 2 traces facility costs through bond issues, the general property
     tax, user charges, and special assessments, splitting them among the
     local government and the developer/resident, and expressing the results
     in $/year, $/$1000 of assessed value, and $/gallon treated, down to the
     final incidence of costs.]

     Figure 2:   Logic of Cost and Fiscal Impact Model
     Cost sub-models compute the capital and O&M costs
 for each facility  type.   These costs  are then alloca-
 ted to  federal  and state  subsidies,  local government,
 and developers  or  residents.  The local government
 expenditures are further  broken  down  into those
 financed through long  term debt  (mostly capital
 expenditures) and  those financed through current
 revenues  (mostly O&M expenditures).   Finally the
 local expenditures are converted to  assessments and
 user charges according to the input  specification.
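
     As a rough illustration of the allocation chain just described, the
fragment below splits one facility's capital and O&M costs among financing
sources according to fractional input shares.  The share values and the helper
function are hypothetical and are not the model's actual input format.

  # Illustrative allocation of a facility's costs to financing sources.
  # The fractional shares are hypothetical inputs and must sum to 1.0.

  def allocate(cost, shares):
      """Split a cost (thousand dollars) among sources by fractional share."""
      assert abs(sum(shares.values()) - 1.0) < 1e-9
      return {source: round(cost * share, 1) for source, share in shares.items()}

  capital_shares = {"federal": 0.0, "state": 0.1, "local_debt": 0.4, "developer": 0.5}
  om_shares      = {"federal": 0.0, "state": 0.1, "local_revenue": 0.9, "developer": 0.0}

  print(allocate(600.0, capital_shares))   # capital cost, mostly long term debt and developer
  print(allocate(12.0, om_shares))         # O&M cost, mostly current local revenues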

     Output  from the  program  includes  several measures
 of fiscal impact:  the time  sequence  of aggregate
 expenditures required  by  federal, state and local
 governments and the  private  sector,  the time sequence
 of property taxes  and  user charges needed to finance
 the local costs, and measures of total costs borne
 by the  consumer.   The  latter are computed by summing
 all charges (including private sector) paid by the
 consumer and converting to a constant base to yield
 an implied  tax  rate  or user  charge,  i.e., the tax
 rate or user charge  which would  be required if all
 local and private  expenditures were  financed by a
 single  mechanism.
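
     The conversion to a constant base can be pictured with the arithmetic
below; the dollar figures and helper functions are hypothetical and serve only
to show how an implied tax rate (dollars per $1,000 of assessed value) or an
implied user charge (dollars per 1,000 gallons treated) would be formed.

  # Hypothetical illustration of the implied-rate calculation described above.

  def implied_tax_rate(total_annual_charges, assessed_value):
      """$ per $1,000 of assessed value if every cost were financed by the property tax."""
      return 1000.0 * total_annual_charges / assessed_value

  def implied_user_charge(total_annual_charges, gallons_treated):
      """$ per 1,000 gallons treated if every cost were financed by user charges."""
      return 1000.0 * total_annual_charges / gallons_treated

  charges = 450_000.0   # all public and private charges paid by consumers, dollars per year
  print(implied_tax_rate(charges, assessed_value=60_000_000.0))       # 7.5 $/1000$ assessed value
  print(implied_user_charge(charges, gallons_treated=900_000_000.0))  # 0.5 $/1000 gal. treated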

     A final desirable element of the model, which has
 not yet been developed, is a sub-model for computing
 final cost  incidence.  This  sub-model would provide
 the tax impacts of federal and state  subsidies and the
 amount  of commercial and  industrial  expenditures which
 are passed  on to consumers inside and outside of the
 service area.
                Examples of the Results

     Results from the three models are not based on
the same case study, but are drawn from different
examples that best demonstrate the models' usefulness.

Stormwater Runoff Model

     Results from the first model  (Figure 3 and Table
2) are based on the analysis of the Mill River Basin,
Hamden, Connecticut.
   Figure 3:  Stage Graph at Dam for 1974 and 1985
   conditions:  3 hour rainfall interval with .51 inches
   of rain; low base flow
   (plot omitted; stage at the dam is plotted against
   time in hours, 18:00 through 9:00, for the 1974 and
   1985 conditions)

                        Table 2

    Ratio of Coliforms from 1985 to 1974 Land Uses
    (ratios tabulated by junction -- 1 (dam), 3, 6, 8, 11,
     and 12 (upstream) -- and by time from 19:00 through
     14:00; individual table values omitted)
The river has a drainage area  (above the dam at Whitney
Lake) of 37.7 square miles and is of triangular shape,
about 13.5 miles long and about  5.5 miles wide at the
upper end.  During the 40-year period  from  1918 to
1957 the average flow at the dam was 42 mgd.  For the
analysis we have divided the basin into 11 sub-basins.
Mixed urban and non-urban land uses exist in the 11
sub-basins; 5,030 acres are considered developed in
1974 and 6,300 developed acres are projected for 1985.
In Figure 3 the peak stages at the dam are higher in
1985 than 1974, indicating the impact of increased
development on runoff in the basin.  In Table 2 the
ratios of 1985 to 1974 coliform concentrations indi-
cate that, at most junctions, additional development
results in higher concentrations.  However, due to
the increased runoff, at some junctions and times
coliform concentrations are actually reduced.

Capacity Evaluation Model

     Table 3 illustrates the output from the capacity
evaluation model for a hypothetical 12 link sewer
network and associated treatment plant.  The Table
indicates that link 6 is likely to back-up under the
proposed development scenario and that the efficiency
of treatment will be somewhat reduced, as the capacity
of the tertiary treatment facilities will be exceeded.
As a next stage in the analysis, the planner could
evaluate proposed relief sewers by adding appropriate
links and re-running the model.

                      Table 3
           Link Capacities and Flows for
            Hypothetical Sewer Network

Full and/or Overcapacity Flows:

  Link 6 flow exceeded the maximum capacity
         by 12.7 percent (1.58 cfs)

                Flow in Links (cfs)

           Max.     Actual    Percent        Cumulative
Link ID    Flow     Flow      Utilization    Overflow
    1       1.20     1.11        92.6           0.0
    2       1.20     0.10         8.4           0.0
    3       1.96     1.53        78.3           0.0
    4       1.20     0.19        16.0           0.0
    5       1.20     1.02        85.0           0.0
    6      12.42    12.42       100.0           1.58
    7       5.77     1.35        23.5           0.0
    8      12.42     3.73        30.0           0.0
    9       1.20     0.51        42.7           0.0
   10       5.77     0.76        13.2           0.0
   11       3.55     0.76        21.4           0.0
   12       1.96     1.50        76.7           0.0

Flow Entering Treatment Plant (cfs) = 12.42

                 Treatment Plant
               Maximum              Percent
Stage          Capacity (cfs)       Utilization
Primary           30.00                41.4
Secondary         20.00                62.1
Tertiary          10.00               124.2






Cost Impact Model

       Table 4 represents one of the summary outputs
available from the cost impact model.  It lists
aggregate expenditures for four infrastructure types
required by a proposed new development of 890 dwelling
units.  The cost allocations employed are hypothetical
and are not intended to correspond to existing prac-
tices.  Costs are assigned to developers and local,
state and federal governments.  Within the local
government category expenditures are further classi-
fied by revenue sources.
                         Table 4

     Selected Costs of New Residential Development(+)
             (590 townhouse units; 10 garden
                  apartments (30 du each))

                                       Local Gov't(+++)       State    U.S.
Cost Type(++)             Developer     A      B      C       Gov't    Gov't

Sanitary Sewer Laterals
  Capital                   303.7      0.0   60.7   182.2      60.7     0.0
  O&M                         0.0      0.0    1.5     0.0       0.4     0.0

Sanitary Sewer Interceptor
  Capital                   278.0    222.4    0.0     0.0       5.6     0.0
  O&M                         0.0      0.0    0.2     0.6       0.2     0.0

Storm Sewer Laterals
  Capital                    26.5     13.2    0.0     0.0      10.6     2.6
  O&M                         0.0      0.0    0.0     0.8       0.2     0.1

Storm Sewer Interceptor
  Capital                   290.2      0.0   90.7   272.1      72.5     0.0
  O&M                         0.0      0.0    0.0     0.8       0.2     0.0

 (+)  All values are in thousand base year dollars.  An entry of zero
indicates the group or mechanism was not chosen for financing the cost type.

 (++)  Capital costs are in thousand dollars per group or mechanism.  O&M
costs are in thousand dollars per year per group or mechanism.

 (+++)  A = Special Assessment;  B = User Charge;  C = Property Tax.
                      Conclusion

     The impact of federal,  state,  regional,  and local
environmental and land use policies on  environmental
quality is receiving increasing attention.  Section 208
of the Federal Water Pollution Control  Act  Amendments
of 1972, for example, requires the  development and
implementation of plans which include regulatory
programs to control both point and  nonpoint sources
of pollution on an areawide  basis.7 Section  208
also mandates that land use  controls be considered
as measures to be included in the regulatory  programs.

    In addition, the 208 plans are  to include a
determination of the cost of the plan and a financial
program to ensure that sufficient funds are available.
It will be necessary, as part of the financial program,
to determine the sources of  funding, such as  federal
grants, user charges, property tax  revenues,  etc.

    There is a need within the 208  and  similar programs
to determine more specifically the  relationship between
land use and environmental quality. Because  many 208
agencies lack a quantitative understanding  of the
land use-water quality relationships for their parti-
cular areas, they may find it useful to follow the
approach presented in this paper.   Approaches such
as this one do not provide definitive answers, but
they do provide quantitative data that  are  more precise
than  the data which are generally available.  This
type  of quantitative analysis will be necessary input
to decisions which affect environmental quality.

                    References

1. Meta Systems  Inc,  "Land Use Environmental Quality
        Relationship," Prepared for the U.S. Environ-
        mental Protection Agency under contract no.
        68-01-2622,  November, 1975.

2. deLucia, R.J., Kühner, J., and Shapiro, M.,
        "Models  for Land Use/Water Quality:  Some
        Observations  on What Exists and What is
        Needed,"  Presented at the 47th National ORSA/
        TIMS Meeting,  Chicago,  April 30, 1975.

3.  U.S. Army Corps of Engineers, "Urban Runoff:
        Storage, Treatment and Overflow Model,
        STORM," Hydrologic Engineering Center,
        Davis, California, Computer Program
        723-S8-L2520, May, 1974.

4.  Metcalf and Eddy,  Inc.; University of Florida,
        Gainesville;  and Water Resources Engineers,
        Inc.;  "Stormwater Management Model, Final
        Report," four volumes, prepared for U.S.
        Environmental Protection Agency, July, 1971.

5.  deLucia,  R.J., and Chi, Tze-wen; "Water Quality
        Management Models:  Specific Cases and
        Some Broader Observations," Paper presented
        at the  joint USSR/USA Symposium on the "Use
        and Limitations of Mathematical Models to
        Optimize Water Quality Management," Kharkov,
        USSR,  December, 1975.

6.  Meta Systems  Inc,  "A Program for Simulation of
        Acid Mine Drainage in a River Basin,"
        Prepared for the Appalachian Regional Com-
        mission, 1969.

7.  Emison, G.A., and Lienesch, W.C., "Areawide
        Water  Quality Management Under Section 208:
         Conflicts in Planning for Implementation,"
         Paper presented at the 1975 American Insti-
         tute of  Planners National Conference, San
        Antonio, Texas, October, 1975.

                 Acknowledgements

     A number of  individuals made significant
contributions to  the work described in this paper.
The authors would like to thank, in particular,
Eric Schwarz, David Magid, Larry Russell and Ingrid
Dichsen.
                                                       457

-------
                              A TOTAL CONCEPT  SYSTEM  FOR MUNICIPAL WASTE DISPOSAL
                                                 Lester  L.  Nagel
                                            Senior  Project Engineer
                                        Facilities  Technology  Division
                                           Federal  Facilities  Office
                              U. S. Environmental Protection Agency   Region II
                                             New York,  New York
ABSTRACT

A  review of the past and present means of waste
disposal practices is given with individual
evaluations of each.

The negative vs. the positive mode of thinking with
respect to waste disposal uses is discussed.

Based on the positive aspect element, two total
concept systems are technically developed.  Existing
and proposed useful end product concepts are examined
and evaluated.

Continuity is maintained by proceeding to examine the
economic aspects of the respective systems proposed
and how each relates, in its resultant cost estimate
and in its ultimate financial impact on the community,
both to the system's capacity and on a per capita
basis.
A summary of the success of the proposed systems in
meeting the basic criteria, as outlined in the "positive
aspect" approach to the disposal problem, is given
for each of the eight conditions initially cited.

BACKGROUND

The body of information presented in this paper is
directed to those public officials and individuals
concerned with the disposal of municipal wastes of
all types and the design, development and economics
of such a system.

The multiple disciplines involved require extensive
knowledge in such divergent areas that individuals
and/or responsible public agencies do not usually
have the combined expertise needed to evaluate the
end result of such a comprehensive system.

It therefore seems appropriate for an individual,
having extensive experience in such areas, to propose
this concept.

The author's approach is therefore directed to these
technical individuals and officials involved in and
responsible for a community's waste disposal problems.

GENERAL CONSIDERATIONS

The problems of today's living are both complicated
and often dangerous to our existence.  None, however,
vexes us more than those problems related to the dispos-
al of the products of our daily  waste.   As we become
more numerous,  this waste problem increases in
magnitude and at the same time becomes restrictive in
the available means of its disposal.

According to our 1970 census  statistics there were
916 communities in the United States  having a popula-
tion of over 25,000.   Sixty-five (65)  percent of them
are concentrated in twelve states.  At present there
are more than 150 with over 100,000 each; 26 of them exceed
500,000.   These figures  do not include any of their
surrounding suburbs.
The need for a modern practical  method of waste  dis-
posal has long been sought.  Numerous  papers  and
articles have been previously presented on the subject.
(5,8,9,11,15,19,20,22,28,29,30,31,32).  In no instance
however, has the solution  encompassed  the entire prob-
lem in terms of the domestic concept requirements.
This paper therefore, presents such means for a
complete system for the disposal of all of a community's
daily waste matter.
Additionally it focuses attention on the  advantages of
combining any existing sewage treatment  and/or munic-
ipal incinerator facilities at a single  location.
Regardless of the type of  wastewater treatment facili-
ties available, the flash evaporation of the wastewater
effluent is not affected by the overall system's
operations nor by the number of treatment stages prior to
the flash evaporator unit.

For simplification only, two wastewater  systems  are
illustrated.  In practical applications  any waste-
water treatment system is  suitable to  the proposed
concepts presented.

BASIC PROBLEMS

The use of fire to dispose of wastes has  long prevail-
ed.  We have designed, built and operated incinerators
that are initially expensive, inefficient in operation,
wasteful in the utilization of the energy produced,
and are one of the major causes  of air pollution in
areas of concentrated population.  (1)

We do not recognize the fact that what  appears to be
an acceptable solution in  foreign concepts  may not be
the most practical answer  to our domestic problems of
waste disposal (5,8,15,19,20 & 22).

Reviewing these waste disposal problems,  their magni-
tude, the present methods  used and our  future require-
ments, we can summarize them as  follows:
Each of us presently creates approximately 5 lbs. of
waste refuse each day (6) (7).  In addition we also
create daily 0.2 lbs. of sewage matter.  Both present
a disposal problem. Means  of disposal  are limited and
can be broadly classified  as follows:
   1.  Landfill (all types)
   2.  Incineration
   3.  Energy and/or Materials Resource Recovery
   4.  Burning or dumping at sea
   5.  Sewage treatment
   6.  Septic systems

Numerous installations have attempted and are util-
izing, to some extent, the heat value in the burning
of refuse. (5,9,11,15,19,20 & 22). Its use, in most
instances, is limited to the production of steam and/
or electrical power for incinerator plant use. EPA
has made demonstration grants for various other uses
such as: refuse derived fuel, methane recovery etc.
                                                      458

-------
Our utility systems' efficient production of electric
power makes the use of this heat energy for such
purposes  expensive and wasteful except for metro-
politan size installations.  Efforts to utilize sewage
waste matter have proved largely unsuccessful  (13)
and economically impractical.

Incinerator designs have changed little since  their
inception.  Today's designs basically consist of a
firebox with intricate grate designs.  They are batch
or continuously fed and, until the early 1960's, were
lacking sophisticated fire combustion controls.  We
have visited and been briefed by others on the success
of foreign installations using low excess air  firing
concepts, apparently without any significant changes
to our own methods on existing installations.  (8,14)
European grate designs (8,15,19) have proven far more
effective than ours but are only lately incorporated
into our incinerator units.

Our past control of stack emissions has been either
non-existent or inadequate in design and/or effective-
ness.  (1)  Efficient APC equipment has been installed
and evaluated only within the past few years.  (21,23,24)

Past and present trends seem more concerned with the
architectural aspects of functional disguise and,
after the initial operational fanfare, settle  into
their normal inefficient, dirty routine operations.
The present status of disposal procedures cannot
continue to prevail; otherwise, the threat of disease
could reach a level detrimental to the public's
health.  Barring further opposition to change, some
hope for a practical solution is possible.

The Total Concept Solution

Consider the problem without the chains of  conven-
tional ways and existing techniques. Let us review
the asset possibilities of the two discarded products
of our daily lives, refuse and sewage wastes.  Both
 consist of organic and inorganic substances which
are useless to us individually and are therefore
 collected and disposed of for us by central means.

At this point the most useful utilization of this
matter must be considered.  Economic evaluation pre-
 sents the following considerations:

 1. Steam and Elec. Power Generation
 2. Composting
 3. Fertilizer Production
 4. Land Reclamation
 5. Pyrolysis
 6. Water Distillation

The production of steam and/or electric power  cannot
be domestically justified economically except  for
 large installations.  Power can be purchased at a
 lower cost than it can be produced in limited capacities.

Composting has been tried and also found not commer-
 cially competitive. (22) Fertilizer production has
 also  failed both economically and because of odor
 problems.  (16)
 Land reclamation is limited by available sites
 creates gas and pestilence problems unless  the fill
 consists of inert matter.
 Pyrolysis has yet to achieve economic success.

 The sixth consideration, that of water distillation,
 presents interesting possibilities.  Few will dispute
 the value of clean water created by the processing of
 waste products.  Water distillation fuel costs range
 from 30 to 40 percent of the total process product
 costs.  (2)  (3)  The development of a low cost
 distillation system utilizing "Industrial Waste Heat"
 has been proposed (4) and other proposals also
 utilizing waste heat have previously been made (5).
 Prime consideration in the design of such systems
 demands that they be functionally self-sufficient
 and at least partially self-supporting from an eco-
 nomic standpoint.

Two versions of the suggested  "Total Concept System
for Municipal Waste Disposal"  are shown in Figures I
and II.

Each fulfills the economic and engineering require-
ments as previously stated.  The ultimate concept
system, given in Figure II however will be shown to
have several advantages over that of the system shown
in Figure I and will also fulfill all of the engin-
eering and economic requirements previously stated.

The common advantages of both systems are reasonably
obvious.

Consolidation of the refuse and sewage services at
the same geographical location results in lower over-
all initial plant site costs, personnel requirements
and plant operating expenses.  Incineration of all
solids increases the potential heat energy available,
helps stabilize the refuse heat value and produces a
reasonably inert fill residue greatly reduced in
volume and weight.  The disposal problem is thereby
diminished to a considerable extent.  The use of the
chemically treated sewage water to produce clean
water for the various system functions shown plus the
surplus available for outright sale, achieves the
objective of the system's partial economic self-
sufficiency.  Auxiliary fuel costs are practically
non-existent and the electric power purchased for
either system costs less than when in-plant produced.

Additionally the system of Figure II will operate at
a higher net economic efficiency by utilization of all
of the resultant heat to produce clean water plus
elimination of all boiler maintenance on the non-
existant steam boiler.

Each system would utilize an incinerator unit of 300
tons per day capacity; however, whereas the Figure I
system unit would utilize a conventional incinerator-
steam boiler arrangement, shown in Figure III, the
Figure II system would consist of a three stage
incinerator having as the final stage a kiln type
rotating barrel (Volund) design direct gas discharge
unit (19,22,26) as shown in Figure IV.  Firing
temperature of both systems would be on the order of
1800-2300°F, which ensures elimination of stack odor
possibilities.

The rotary kiln exit gases of the Figure II system
would be precleaned by a cyclone collector and then
directly utilized in a flash type water distillation
unit (17). The ultimate production of clean water
would be 2.4 x 10^6 GPD for the Figure I system and 2.7
x 10^6 GPD for the Figure II system.  Each system would
provide approximately 15,000 GPD for use in dissolving
sewage treatment chemicals.  Some small additional
amount would be required for makeup for the Figure I
system steam boiler.  The balance in either system
would be available for sale as boiler feed water or to
supplement the community's water needs.
                                                       459

-------
Both systems have the advantage of allowing for wide
variations in the quantity of water produced, depend-
ent upon the seasonal or even daily change in heat
energy value of the incinerated matter.

The systems presented are representative of the size
required for a community or group of communities hav-
ing a total population of 100,000.  Multiples of the
system's units could be installed at a lower unit cost
with a resultant increase in total investment return
in the distilled water product produced for use and/
or sale.

Table I presents the respective capital costs of
equipment and plant investment for both systems.
Annual costs and investment return value for each
system are also included.  All values are predicated
on a single 300 ton per day unit (100,000 population)
system.  Based upon the data presented in Table I-A
the net annual cost per ton of matter incinerated is
$4.00 and $3.84 respectively.

These net costs are derived on the basis of a total
of 300 tons/day or 110,000 tons per year generated on
a 365 day basis and being incinerated on a 345 day
schedule of continuous operation each day at a 90/100%
capacity level.
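
The tonnage figures quoted here follow from simple arithmetic, sketched below
for illustration; the per capita figure at the end assumes that the System I
net annual cost of $63,000 from Table I is spread over the 100,000 persons
served.

  # Illustrative arithmetic behind the plant sizing and the annual tonnage.
  population = 100_000
  refuse_tons_per_day = population * 5.0 / 2000.0    # 5 lbs per person per day
  sewage_tons_per_day = population * 0.2 / 2000.0    # 0.2 lbs per person per day
  print(refuse_tons_per_day, sewage_tons_per_day)    # 250.0 and 10.0 tons per day

  annual_tons = 300 * 365                            # 300 TPD unit, waste generated every day
  print(annual_tons)                                 # 109,500, i.e. roughly 110,000 tons per year

  net_annual_cost_system_1 = 63_000.0                # dollars, Table I (primary treatment)
  print(net_annual_cost_system_1 / population)       # about $0.63 net per capita per year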

Refuse collection costs are not included, however they
would normally be lower reflecting the savings in
shorter hauling distances with the use of such a
centralized system for small communities.  Included
are removal costs of the incinerated residue for use
as adjacent landfill.

Credit for the distillate water produced has been
calculated at the lowest prevailing rate of producing
boiler feed water, $2 per 1000 gals.  Present day
costs of treated boiler feed water are on the order of
$2 to $3 per 1000 gallons.  No additional credit has
been included in the Table I calculations over the
lower cost figure.

For comparison purposes the relative costs for the
same systems shown in Figures I and II are given in
Table I-B where an Activated Sludge type of sewage
process is utilized.

Three additional aspects of the system functions given
in this paper should also be noted.

They are all economic in nature and are therefore of
primary importance when the following conditions may
prevail:

1.  In metropolitan areas having an urban-suburban
population of 1,000,000 or greater, the generation of
electrical energy from the heat output of a 3,000 Ton
per day incinerator-steam boiler unit (System Fig. I)
becomes economically feasible.

In this modification the flash evaporator unit would
be eliminated, however the treated sewage water would
then become the condensing cooling water for  the
steam turbine.

2.  Most communities, having a basic population of
25,000 or more, have installed, on a regional basis,
sewage treatment facilities and in some instances a
municipal incinerator.

    Where either or both of these exist the initial
investment for either of the basic systems proposed
is substantially reduced.
3.  A raw refuse waste  classification system above and
beyond the reclamation  of the  metals  in the ash residue
can be added to any of  the proposed systems discussed.

    The economic advantages  however must be carefully
examined on an individual installation basis before
inclusion as part  of  the  basic systems proposed.

CONCLUSION

As a result of the concepts and economic data as
presented, the following criteria have been established:

1.  Current methods of refuse disposal only indicate
an annual per capita cost range of $3.50 to $8.00 (7,
24).  With reference to Table I, the comparable net
annual per capita costs, after proportionate value
credit for the useful products produced is made,
would be 63¢
-------
6.  The Environment   Gilberston   Spring 1966 Issue
Engineering Joint Council Publication   Engineer.

7.  Refuse Study for the Capitol Region Planning
Agency (Conn.)   February 1963   UPA Project P-27.

8.  An Appraisal of Refuse Incineration in Western
Europe   Rogus, C.A.   ASME National Incinerator
Conference   1966.

9.  Navy to Incinerate Rubbish for Power   Refuse
Removal Journal   1967.

10.  Solid Waste Disposal   Part II   Fleming, R. R.
The American City   1966.

11.  An Incinerator With Power and Other Unusual Fea-
tures   Heeding, Velzy & Landman   December 1964
American Society of Mechanical Engineers.

12.  Brisbane, California   Proposed 2000 Ton Inciner-
ator (Reported Late 1967).

13.  Something in the Air - A Suburb in Michigan Stinks
to High Heaven   Wall Street Journal   May 16, 1968.

14.  Air Pollution from Incinerators   Causes and Cures
Flood, L. P.   December 1965   American Society of
Chemical Engineers.

15.  European Developments in Refuse Incineration
Magazine Public Works   May 1966   Rogus, C.A.

16.  Florida Garbage Plant Makes its Presence Known
too Strongly   Wall Street Journal   February 12, 1968.

17.  Gas Turbines Show Promise in Water Desalting
Plants   Power Engineering   November 1966 and July
1967.

18.  Key West Desalination Plant Goes on Stream
Magazine Public Works   October 1967.

19.  European Practice in Refuse Burning   Stabenow,
G.   National Incinerator Conference   1964.

20.  Survey of European Experiences with High Pressure
Boiler Operation Burning Wastes and Fuel   Stabenow,
G.   National Incinerator Conference   1966.

21.  What Price Incineration Air Pollution Control
Fife & Boyer   National Incinerator Conference   1966.

22.  European Practice in Refuse & Sewage Sludge
Disposal by Incineration   Eberhardt, H.   National
Incinerator Conference   1966.

23.  New Precipitators for Old Incinerators   Refuse
Collection & Disposal   1968.

24.  City of Baltimore   Rehabilitation & Renovation
of Incinerator No. 4 (1975).

25.  Garbage Collection and Incinerator Study   Depart-
ment of Public Works Study Committee   City of Hack-
ensack, New Jersey.

26.  Combustion Profile of a Grate-Rotary Kiln Incin-
erator   Woodruff, P. H. & Larson, G. P.   National
Incinerator Conference   1968.

27.  Municipal Incineration   U.S. Environmental
Protection Agency   AP-79   June 1971.

28.  Energy Report   IEEE Spectrum   Nov. 1975.

29.  Megawatts from Municipal Waste   IEEE Spectrum
Nov. 1975.

30.  Garbage-to-Energy Conversion Fuels Bond-Sector
Interest   Wall Street Journal   Aug. 11, 1975.

31.  Wheelabrator to Sell Refuse-Produced Power to
General Public Unit   Wall Street Journal   May 20, 1975.

32.  Turning Trash into Energy   US News & World Report
October 20, 1975.
                          SUMMARIES   TABLE I, I-A & I-B

                               PLANT CAPITAL COSTS
                                      with
                            PRIMARY TREATMENT PLANT

                                        System I        System II
               Equipment Investment     9,545,000       9,020,000
               Plant Investment         1,570,000       1,535,000
               Total Investment       $11,115,000     $10,555,000
               Net Annual Costs (Credit)   63,000       ($216,000)

                    NET ANNUAL COSTS CHARGEABLE TO EACH FUNCTION

                                                   Unit Costs
               Function                  Percent   System I   System II
               Incineration/Ton            25       $4.00      $3.84
               Distillation/M. gal         35       74.4¢      63.5¢
               Sewage Treatment/10^6 gal   40       19.3¢      18.5¢

                               PLANT CAPITAL COSTS
                                      with
                             ACTIVATED SLUDGE PLANT

                                        System I        System II
               Equipment Investment    11,715,000      11,220,000
               Plant Investment         1,670,000       1,635,000
               Total Investment       $13,385,000     $12,855,000
                                      TABLE I
                                PLANT CAPITAL COSTS
                        EQUIPMENT INVESTMENT (INSTALLED)
                          PRIMARY TREATMENT PLANT TYPE

               Item                                Figure I      Figure II
               1   Incinerator & Boiler           1,700,000          --
                   (300 TPD) (incl. Fans)
               1A  Incinerator (only)                  --          850,000
                   (300 TPD) (incl. Fans)
               2   Mechanical Collector              80,000         80,000
               3   Figure I Distillation          3,000,000          --
                   Plant (2.4 x 10^6 GPD)
               3A  Figure II Distillation              --        3,400,000
                   Plant (2.7 x 10^6 GPD)
               4   Sewage Process Plant           3,500,000      3,500,000
                   (10 x 10^6 GPD)
               5   APC Unit (ESP)                   300,000        300,000
               6   Heating & Air Conditioning        70,000         70,000
                   System
               7   Odor Unit (not required)            --             --
               8   System Coordination              865,000        820,000
                   (10% of Equipment costs
                    and Installation)
               Total Equipment Investment  $9,545,000      $9,020,000

               All figures have been rounded to nearest $1,000.
                              PLANT INVESTMENT
               Item                                         Figure I
               Waste Land (100 acres @ $1,000/acre)          100,000
               Architectural & Engineering Fees              720,000
                 (7% of T.E.I. & Bldg.)
               Office & Operations Building                  750,000
                 for Items #1, 3 or 3A & 4
               Total Plant Investment                     $1,570,000
               Grand Total                               $11,115,000

                               ANNUAL COSTS
               A  Fixed Charges 8.024% (5% - 20 years)       892,000
               B  Electric Power
                  (a) Sewage Plant 300 KW
                  (b) Distillation 110 KW                     90,000
                  (c) Incineration 600 KW
               C  Chemicals
                  (a) Distillation Plant                      50,000
                      (H2SO4 @ 2.50
-------
                                  TABLE I-B
                           ACTIVATED SLUDGE TYPE PLANT

                       (a)  Equipment Investment (Installed)

               Item   Equipment                     Figure I      Figure II
               1   Incinerator & Boiler            1,700,000          --
                   (300 TPD, including Fans)
               1A  Incinerator only                     --          850,000
                   (300 TPD, including Fans)
               2   Mechanical Collector               80,000         80,000
               3   Distillation Plant              3,000,000          --
                   (2.4 x 10^6 GPD)
               3A  Distillation Plant                   --        3,400,000
                   (2.7 x 10^6 GPD)
               4   Sewage Plant (10 x 10^6 GPD)    5,500,000      5,500,000
               5   Precipitator                      300,000        300,000
               6   Heat & Cooling System              70,000         70,000
               8   System Coordination             1,065,000      1,020,000
                   (10% of Equipment &
                    Installation Costs)

               Total Equipment Investment        $11,715,000    $11,220,000

                              PLANT INVESTMENT
                                                    Figure I      Figure II
               Land (100 Acres @ $1,000/Acre)        100,000        100,000
               A & E Fees (7% of T.E.I.              820,000        785,000
                 and Building)
               Building (Items 1, 3, & 4)            750,000        750,000

               Total Plant Investment             $1,670,000     $1,635,000

               Grand Total                       $13,385,000    $12,855,000
                                                     463

-------
SYSTEM I

                                          FIGURE I

                      TOTAL CONCEPT SYSTEM FOR MUNICIPAL WASTE DISPOSAL

                       (INCINERATOR-HEAT RECOVERY BOILER UTILIZATION)

                      (flow diagram omitted)


SYSTEM II

                                          FIGURE II

                      TOTAL CONCEPT SYSTEM FOR MUNICIPAL WASTE DISPOSAL

                          (ROTARY KILN WITH DIRECT GAS UTILIZATION)

                      (flow diagram omitted; labeled components include a
                       300 TPD incinerator unit, mechanical collector, water
                       distillation unit (2.7 x 10^6 GPD), wet ESP tubular
                       collector, stack, and sewage process unit (10 x 10^6 GPD))

                                        464

-------
         Figure 3.  Boiler With Reciprocating Stoker
                    (section drawing omitted)

         Figure 4.  Section Through Incinerator Unit
                    (section drawing omitted; labeled elements include the
                     ignition grates, by-pass, mixing chamber, duct to the
                     waste heat boiler, and scrubber)

                           465

-------
                                       ECONOMIC FORECASTING FOR VIRGINIA'S
                                             WATER RESOURCE PROGRAMS

                                  Charles P. Becker, Allender M. Griffin, Jr.,
                                                Carol S. Lown
ABSTRACT

Water resource and water quality management planning
depend,  to a large degree, on forecasts of industrial
activity and population projections.   A flexible eco-
nomic data base is especially important where planning
follows  varying formats of geographical and industrial
detail.   Records of employment and payroll are collect-
ed in the administration of Unemployment  Insurance
(U.I.) programs and are available from State Employ-
ment Agencies.   These statistics have been collected
over a long period of record (thirty-five years).
Many years of record are available on punched-cards
or magnetic tape and may be arrayed and manipulated by
computer.  This basic approach has been followed in
Virginia.  Historical U.I. payroll and employment re-
cords for the period 1956 through 1970 were procured
on magnetic tape.   This data was arrayed by major
hydrologic area and by regional  planning district.
Projections of manufacturing activity were then gener-
ated by fitting several exponential equations to annu-
al payroll data in two-digit Standard  Industrial
Classifications.  These exponentials were then ex-
trapolated to provide a range of  industrial projec-
tions.  Other parameters of manufacturing activity
were then correlated to the payroll data to generate
projections of  indexes such as employment, value-added,
and gross manufacturing output.   U.I. payroll data is
now being correlated to parameters in non-manufactur-
ing categories.  Projections for  industries such as
trade and services will link extrapolated payroll data
with benchmark correlations of payroll and sales re-
ceipts.

(KEY TERMS:  water resource planning; unemployment
insurance (U.I.) statistics; value-added; exponential
forecasting; population projections)

Economic data has played an important role in water
resource planning and water quality management plan-
ning.1  Parameters such as population, employment and
value-added2 in manufacturing have been correlated to
water-use and waste generated.  Water resource planning
engineers and sanitary engineers  are able  to make pre-
dictive estimates of future water-use  and  waste levels
by making correlations with various population  and  in-
dustrial projections.  Water demand, expressed  in mil-
lions of gallons per day  (MGD)  has  been  related to
value-added in selected manufacturing  categories.
Water-use coefficients are also available for other
heavy water-using industries such as mining.  Domestic
water demand can be predicted  by  applying  per capita
water-use factors to population forecasts.  Parameters
of water quality such as  biological oxygen demand (BOD)
and chemical oxygen demand (COD)  have  been correlated
to economic indexes in major water-using  industries.
Relationships between per capita  population and domes-
tic waste generated have  also  been expressed quantita-
tively  in terms of BOD and COD.

In order to produce valid economic  forecasts for vary-
ing size planning units,  the water  resource economist
must have a flexible and  comprehensive data base.
Traditional data sources, such  as the  Bureau of the
Census — U.S. Department of Commerce, publish data3
which provides a valuable overview  to  the  water re-
source planner.

Often, however, more detailed,  unpublished data is
necessary where planning  units  follow  a  hydrologic
format.  Data by reporting establishment must be sorted
and manipulated to produce a valid  benchmark or fore-
cast base for hydrologic  planning areas  of river basins.
Of course, this same data may  be  sorted  by county or
city and further aggregated into  economic  planning
regions.

The State Employment Security  Agencies  have collected
and stored an impressive  record of  payroll and  employ-
ment data for administering Unemployment  Insurance
programs.  This data has  been  collected  in all  of the
     1 Between 1966 and 1972, the Virginia Division of Water Resources of the Department of Conservation and
Economic Development was responsible for comprehensive water resource planning for the State of Virginia.
On July 1, 1972, the Division of Water Resources was merged with the Virginia State Water Control Board.
Since 1946, the Board has been responsible for water quality management in Virginia.  The combined agency is
now operating as the Virginia State Water Control Board.

     2 "Value-added of an industry consists of labor compensation, proprietors' income, profits, interest,
depreciation, and indirect business taxes."  (U.S. Department of Labor, B.L.S., 1970).

     3 In addition to an every-five-year Census of Manufactures, the U.S. Department of Commerce, Bureau
of the Census also conducts Annual Surveys of Manufacturing during interim years.

     4 The State Employment Security Agencies are affiliated with the Manpower Administration (formerly the
Bureau of Employment Security) of the U.S. Department of Labor.
                                                       466

-------
states, in the territories of Puerto Rico and in the
Virgin Islands.  Unemployment Insurance (U.I.) laws
vary somewhat from  state  to state in such areas  as
program detail and  reporting coverage.   Some states
have, for example,  full coverage in unemployment-in-
sured industries.   Other  states have required  U.I.
reports from firms  with four or more employees.  Sup-
plementary employment data  may be obtained from  the
Federal  Bureau of Old Age and Survivors  Insurance
(B.O.A.S.I.) of the Social  Security Administration to
bring coverage up to a universal  or "100 percent" in
these "partial coverage"  states.

The State of Virginia provides a  good illustration
where U.I. coverage was partial  for years (required
of firms with four  or more  employees) in unemployment
insured industries.  An amendment (effective January
1, 1972) to the Virginia  U.I.  law extended coverage to
firms with one or more employees  in unemployment  in-
sured industries.   Certain  types  of employers  are
still excluded from U.I.  coverage.   Federal  and  local
government, railroads, churches and state government
(except non-teaching staffs of hospitals and institu-
tions of higher learning) remain  exempt from U.I.
coverage.

All states, Puerto  Rico and the Virgin Islands submit
U.I. employment and payroll  data  to the Manpower
Administration under the  report designation Employment
Security  (E.S.) 202.  The E.S. 202 report is forwarded
in the form of a computer print-out.   This record
(E.S. 202) is assembled using individual establishment
reports, i.e., the  Employers Quarterly Contribution
Report (see facsimile —  Figure 1).   The Contribution
Reports are audited for completeness  and accuracy,
and then key-punched.  Each Contribution Report
contains the following identification:

1.  A four-digit Standard Industrial  Classification
    (S.I.C.) Code
2.  A three-digit area code designating the county or
    city in which the reporting  establishment  is
    physically located
3.  A six-digit serial or identification number unique
    to each establishment

As was mentioned, U.I. Contribution Reports are filed
quarterly and contain (in Virginia) the following data
(a sample record layout is sketched below):

1.  Monthly Employment
2.  Gross Quarterly Payroll
3.  Gross Quarterly Payroll subject to Unemployment
    Insurance
4.  Quarterly contribution, i.e., U.I. tax
5.  Quarter and year liability (to U.I.) started
6.  Report date (quarter and year)
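
The sample record layout below is offered purely for illustration; the class
and field names are hypothetical and do not reproduce the Commission's card
or tape formats.

  # Hypothetical record layout for one Employer's Quarterly Contribution Report.
  from dataclasses import dataclass

  @dataclass
  class ContributionReport:
      sic_code: str                     # four-digit Standard Industrial Classification
      area_code: str                    # three-digit county or city code
      serial_number: str                # six-digit establishment identifier
      monthly_employment: tuple         # employment in each month of the quarter
      gross_quarterly_payroll: float
      taxable_quarterly_payroll: float  # payroll subject to U.I.
      quarterly_contribution: float     # the U.I. tax
      liability_started: str            # quarter and year liability to U.I. started
      report_date: str                  # quarter and year of the report

  record = ContributionReport("2611", "041", "123456", (212, 215, 219),
                              910_000.0, 640_000.0, 17_280.0, "1Q1956", "4Q1970")
  print(record.sic_code, record.gross_quarterly_payroll)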
                  MANUFACTURING  DATA

Of these items above, employment (item #1)  and gross
quarterly  payroll  (item #2) are  of  particular impor-
tance to the water resource planner.   Payroll is of
special  relevance, since when cumulated by  quarter to
an annual  figure it is a major component of value-
added.   This index,  value-added,  has  been and is
currently  used extensively as an economic indicator
(past,  present and future) of water-use and waste
generated.   The U.I.  payroll  in  manufacturing is also
an important component of gross  manufacturing output
or value-of-product.
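
The two computations mentioned here are straightforward; the sketch below is
illustrative only, with made-up figures, and the value-added identity
anticipates the S-1 survey items defined later in this paper.

  # Cumulate quarterly U.I. payroll to an annual figure, and compute value-added
  # from S-1 items.  All numbers are hypothetical.

  def annual_payroll(quarterly_payroll):
      """Sum four quarterly gross payroll figures (dollars)."""
      return sum(quarterly_payroll)

  def value_added(net_selling_value, cost_of_materials, contract_work):
      """Value-added = net selling value of products - cost of materials - contract work."""
      return net_selling_value - cost_of_materials - contract_work

  print(annual_payroll([210_000, 225_000, 230_000, 245_000]))   # 910000
  print(value_added(2_500_000, 1_100_000, 150_000))             # 1250000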

As a prerequisite for access to the E.S. 202 — U.I.
data, it is necessary that the requesting agency be
aware of the publication restraints and data non-
disclosure requirements.
              Figure 1.  Contribution Report
     (facsimile of the Virginia Employment Commission
      Employers Quarterly Contribution Report omitted)
In Virginia the publication criteria are as follows:

1.  The  industry group must include at least three
     independent  reporting firms (i.e., companies  —
    not establishments).
2.  The  industry's  employment must be sufficiently
    dispersed  so that  the combined employment of  the
    two  largest  firms  does not exceed 80 per cent of
    the group  total.
3.   Individual firm data  may not be published or  dis-
    closed verbally under any circumstances.
4.  E.S. 202 data may not be used for law enforcement
    purposes,  except  in the administration of the U.I.
    law under  which the data is required.

In most  states,  other  detailed economic  data germane
to water  resource planning is available  in both  pub-
lished and unpublished form.   In many instances,  the
unpublished data by firm or reporting establishment  is
an extremely  flexible  planning tool.  The data usually
has been  collected  by  reporting unit and contains
identification which is similar  to and  compatible
with  the  U.I.  reports  discussed above.   In Virginia,
an Annual Survey of Manufacturers  is conducted by the
State  Department of Labor and  Industry.  This survey
is based  on a  selected sample and  represents about  75
per cent  of all  manufacturing activity  in the State.
Firms  which participate in the survey are assigned  the
following identification data:
1.   A four-digit Standard Industrial  Classification
    code
2.   A three-digit county or city code
3.   A five-digit serial  or identification number

The Annual  Survey of Manufacturing is conducted by a
mailed questionnaire referred to as the S-l form.
Questionnaire data items include:

1.   Total employment
2.   Production worker employment
3.   Salaries and wages (total payroll)
4.   Wages paid to production workers
5.   Net selling value-of-product
6.   Cost of materials
7.   Contract work
8.   Physical volume-of-product
9.   Capital expenditures
10. Anticipated capital  expenditures
11. Cost and quantity (KWH) of electric power consumed

Value-added is not surveyed directly as a question-
naire item.  It can be easily computed, however, as
follows:

     Value-added = (Net selling value of products)
         - (Cost of materials) - (Contract Work)

The same publication and disclosure restrictions as
outlined regarding the E.S. 202—U.I. data apply to
the Annual Survey of Manufacturing records, (i.e.,
S-l data).

In Virginia, extensive water resource and water quality
management plans are being developed for the nine major
river basins (see River Basin Map—Figure 2).  These
studies were begun by the Virginia Division of Water
Resources in 1966 and are being completed by the
Virginia State Water Control Board (see footnote 1).
This planning is being approached in a six volume for-
mat.5  Within Volume II — Economic Base Study, con-
siderable emphasis is placed on the analysis of manu-
facturing data.  This priority reflects the signifi-
cance of high water-use and related high waste poten-
tial of many manufacturing categories.

Much water resource planning is conducted on a hydro-
logic format.  In order to express benchmark manufac-
turing data on a hydrologic basis, a major rearrange-
ment of E.S. 202 — U.I. data and S-l data (Annual
Survey of Manufacturing) was necessary.  This realign-
ment of the data went beyond the normal county and
city format.  The county and city codes were useful,
however, as a broad hydrologic sort routine.  As a pre-
liminary step, the punched cards for both the E.S.
202 file and the S-l file were interpreted and sorted
by county and city.  Obviously, many counties and
cities are completely within the major hydrologic
areas.  In those counties or cities which are situated
in two or more hydrologic areas, however, detailed
address determinations of individual firms had to be
made.  It was necessary, therefore, to have address
data for each reporting firm or establishment which
was as specific as possible regarding physical location.
Usually the firm's mailing address coincided closely
with the firm's physical location.  Based on this ad-
dress, a valid hydrologic address determination could
be made.  This task was especially easy when the firm
had an address indicating a physical location well
within a particular river basin.  The more difficult
hydrologic address determinations were those where a
firm was located near a ridge line.  In these in-
stances, a good deal of map detail was necessary.
These "ridge-line" address (hydrologic) determinations
could be accomplished by field trip or by correspon-
dence with knowledgeable people within the "ridge-
line" locality itself.  This latter, less expensive
alternative was chosen.

A number of forecasting methodologies6 (or combinations
thereof) are compatible with the data base discussed
above.  The behavior of price-adjusted, annual payroll
data was quite encouraging when subjected to several
exponential growth curves.  This experience, coupled
with the availability and continuity of U.I. payroll
data, indicated that growth curve fitting and extra-
polation would be fairly valid as a general forecasting
technique.  Asymptotic growth curves describe an in-
dustry passing through the following stages:

1.  Period of initial industrial development and
    limited production — a phase characterized by
    slow growth
2.  Stage of accelerated industrial development, in-
    creasing production and rapid expansion
3.  Period of relative stability where the growth rate
    levels off with the main emphasis on operating
    efficiency and cost minimization

Curves fitted using the Gompertz equation adhered
closely to most historical payroll data in the study
areas (River Basins and Planning Districts).  A
Gompertz curve has the shape of a nonsymmetrical "S"
when graphed on arithmetic paper.  Its nonsymmetrical
nature results from a difference in behavior on
opposite sides of the points of inflection.  The
Gompertz equation generates a curve in which the growth
increments of the logarithms are declining by a con-
stant percentage.  The general equation of the Gompertz
curve is:

                    Yc = K a^(b^x)

where:
     x = Time interval
     K = Asymptote or limit which the trend value
         approaches as x approaches infinity
     a = The distance from the asymptote to the Y-inter-
         cept
     b = The base of the exponential, equal to the
         constant ratio between successive first
         differences of the log Y

Two other growth trends which are useful as forecasting
equations are the Modified Exponential and the Pearl-
Reed (logistic).  These equations may be categorized
with the Gompertz trend in the broad family of ex-
ponential curves.  The general equations for the Pearl-
Reed and the Modified Exponential may be written as
follows:

     Pearl-Reed:              Yc = K / (1 + 10^(a+bx))

     Modified Exponential:    Yc = K + a b^x

     5 Volume I - Introduction; Volume II - Economic Base Study; Volume III - Hydrologic Analysis; Volume IV -
Water Resource Problems and Requirements; Volume V - Engineering Development Alternatives; Volume VI - Implemen-
tation of Development Alternatives.

     6 Other forecasting techniques have utilized standard growth rate tables such as those based on the com-
pound interest rate formula.  Industrial projections have also been inferred from predictions of population trends.
                                                        468

-------
                                  Figure 2.  Major River Basins in Virginia.

             RIVER BASINS IN VIRGINIA:  1  Potomac-Shenandoah;  2  James;
             3  Rappahannock;  4  Roanoke;  5  Chowan and Dismal Swamp;
             6  Tennessee and Big Sandy;  7  Small Coastal Basins and
             Chesapeake Bay;  8  York.
             (map omitted)
The Pearl-Reed  curve  traces  a pattern in which the
first differences  of  the reciprocals of the Yc values
are declining by a constant  percentage.  The Modified
Exponential  curve  describes  a trend where the amount
of growth declines by a constant percentage.

Figure 3  provides  an  illustration of a typical expo-
nential growth  curve.   As  is evident, the trend line
(TT') increases, but at a decreasing rate on the right
of the point of inflection.  The horizontal line (KK')
marks the upper limit of growth or the horizontal
asymptote.

Asymptotic growth  curves approaching horizontal limits
were fitted  to  the price adjusted U.I. payroll data.
Whenever  a valid "data fit"  was established, an equa-
tion resulted.  An extension of the curve marked a
trend of  possible  growth.  Several growth curves fit-
ted to various  intervals of  data in the same histori-
cal series were used  to create a range of projections.
Value-added, gross manufacturing output and employment
were correlated to payroll data for the forecast re-
ference points.
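
For illustration only, the three curve forms can be fitted to a
fifteen-year series in a few lines of Python; the data and starting
values below are synthetic and do not reproduce the Bureau's FORTRAN IV
routines:

    # Illustrative sketch only (synthetic data, arbitrary starting values);
    # the study itself used FORTRAN IV routines on an IBM 360.
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(x, K, a, b):               # log Yc = log K + (log a) * b**x
        return K * a ** (b ** x)

    def modified_exponential(x, K, a, b):   # Yc = K + a * b**x
        return K + a * b ** x

    def pearl_reed(x, K, a, b):             # Yc = K / (1 + 10**(a + b*x))
        return K / (1.0 + 10.0 ** (a + b * x))

    rng = np.random.default_rng(1)
    x = np.arange(15)                       # e.g., the fifteen years 1956-1970
    payroll = 200.0 / (1.0 + 10.0 ** (0.5 - 0.12 * x)) + rng.normal(0.0, 2.0, 15)

    for name, f, p0 in [("Gompertz", gompertz, (250.0, 0.3, 0.8)),
                        ("Modified Exponential", modified_exponential, (250.0, -150.0, 0.8)),
                        ("Pearl-Reed", pearl_reed, (250.0, 0.5, -0.1))]:
        try:
            params, _ = curve_fit(f, x, payroll, p0=p0, maxfev=10000)
            print(name, np.round(params, 3))
        except Exception:                   # analogous to "no fit for that series"
            print(name, "no fit")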

Prior to  growth curve fitting,  it is well  to look
critically at several  aspects of the data and the
study area:

1.  An appraisal should be made to determine if
    historical  growth experience by the industry
    under study is actually  a valid trend.
2.  Is the available data record of sufficient length
    to present  a representative trend in the area
    and industry under study?
3.  Is the historical record of sufficient magnitude to
    represent a data base wide enough to portend
    future industrial development?

Our experience indicates that price adjusting U.I.
Payroll data is an absolute necessity prior to
growth curve fitting.  Price adjusting, of course,
eliminates the fluctuations of inflation or deflation,
leaving "real" changes.  Unfortunately, there is no
"ideal" price index for price adjusting payroll  or
labor costs.

The Wholesale Price Index7 (published by the Bureau of
Labor Statistics (B.L.S.), U.S. Department of Labor)
has proven quite satisfactory when applied to manu-
facturing payroll data.  Most applications have been
on the two-digit S.I.C. level using the 1967 base
converted to 1970.

The Bureau of Water Resources of the Virginia State
Water Control Board now has available 15 years  (1956-
1970) of U.I. employment and payroll data.  An  IBM 360
computer is currently being used for the exponential
curve fitting routines.  Previously, an Olivetti Pro-
gramma 101  (a programmable calculator) and an IBM 1130
computer were used.  The IBM 360 has, of course, great-
ly expedited curve fitting and extrapolation routines.

Utilizing the "360" program, fifteen years of histori-
cal payroll data were fitted to three exponential
curves — Gompertz, Modified Exponential and Logistic
(Pearl-Reed).  The data was analyzed  in 6, 9, 12 and
     7The Wholesale Price Index "...is an index of the prices at the primary market levels where the first
important commercial transaction for each commodity occurs." (Tuttle, 1957).  "Wholesale," as used in the title
of the index, refers to sales in large quantities, not to prices received by wholesalers, jobbers, or
distributors."  (U. S. Department of Labor, B.L.S. Handbook, 1971).
                                                      469

-------
                       Figure 3.
        U.I. Payroll estimates and projections in
  transportation, communications and public utilities
    for the Southeastern Virginia Planning District
      (data expressed in constant 1970 dollars).
  [Plot of payroll against years, 1960 to 2020.]
 15-year  intervals.  For fifteen consecutive years of
 data, this method  resulted  in twenty-two possible
 curve fits for each two-digit S.I.C.  Since there were
 several  different  forms which the exponential curves
 could take, constraints were built  into the program to
 eliminate the curves which  did not  fit a pattern of
 normal growth.  The desired shape of the growth curve
 was  that which sloped upward to the right, approaching
 some horizontal  limit, while increasing at a decreas-
 ing  rate  (see figure 3).

 Fifteen  years of annual payroll data (1956-1970) were
 read in  for each  industry.  The first six-year period
 (1956-1961) was  analyzed, and the exponential equation
 was  developed.   If the equation did not violate the
 built-in  constraints, then  the program extrapolated
 the  historical data from  the initial year of the fit
 period  (in this  case 1956)  to the year 2020.   If the
 equation  violated  the constraints,  a message was
 printed  out  indicating that there was no fit for that
 series.   The  second group of consecutive years of
 payroll  data  (1957-1962)  was then analyzed.  This con-
 tinued  through the twenty-two possible combinations
 until the final series (1956-1970) had been analyzed.
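
 For illustration, the twenty-two fit periods implied by 6-, 9-, 12- and
 15-year windows within 1956-1970 can be enumerated directly (a sketch
 only, not the original program):

    # Sketch only: enumerating the twenty-two fit periods the "360" program
    # cycled through (6-, 9-, 12- and 15-year windows within 1956-1970).
    years = list(range(1956, 1971))                    # fifteen years of data
    windows = [(start, start + length - 1)
               for length in (6, 9, 12, 15)
               for start in years
               if start + length - 1 <= years[-1]]
    print(len(windows))                                # 22
    print(windows[0], windows[-1])                     # (1956, 1961) ... (1956, 1970)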

 The extrapolated universe payroll values, payroll-per-
 employee, S-l value-added, S-l gross manufacturing
 output and S-l payroll were used in a Programming
 Language 1 (PL1) program which generated a table of
projections for value-added, gross manufacturing out-
put, payroll and employment.  The tables were structur-
ed for photographic reproduction directly from the
printout, thus eliminating virtually  all  typing and
proofing.  The value-added projections  were developed
by computing the ratio of S-l payroll and S-l  value-
added for the benchmark year.  This ratio was applied
to the extrapolated payroll figures to  develop the
universe value-added projections.   The  gross manufac-
turing output projections were developed  in much the
same way.  The ratio of S-l payroll to  S-l  gross
manufacturing output was computed for the benchmark
year, and this was applied to the extrapolated payroll
values to give gross manufacturing  output projections.
The extrapolated payroll values were  divided by the
extrapolated payroll-per-employee figure to develop
employment projections for each S.I.C.  group.
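
A minimal sketch of the ratio arithmetic described above, with made-up
benchmark figures standing in for the S-1 and U.I. values:

    # Illustration only; the benchmark figures are invented and merely show
    # the ratio arithmetic, not actual S-1 or U.I. values.
    benchmark_payroll = 120.0           # S-1 payroll, benchmark year (millions of 1970 dollars)
    benchmark_value_added = 300.0       # S-1 value-added, benchmark year
    benchmark_gmo = 550.0               # S-1 gross manufacturing output

    va_per_payroll = benchmark_value_added / benchmark_payroll
    gmo_per_payroll = benchmark_gmo / benchmark_payroll

    extrapolated_payroll = {1980: 150.0, 1990: 180.0, 2000: 205.0}
    payroll_per_employee = {1980: 0.0075, 1990: 0.0090, 2000: 0.0105}   # millions per employee

    for year, pay in extrapolated_payroll.items():
        value_added = pay * va_per_payroll               # value-added projection
        gmo = pay * gmo_per_payroll                      # gross manufacturing output projection
        employment = pay / payroll_per_employee[year]    # employment projection
        print(year, round(value_added, 1), round(gmo, 1), round(employment))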


Because of the large volume of output from  the exponen-
tials, another method of analysis has been  devised
which expedites the evaluation of the extrapolations.
A curve plotting routine has been added to  the expo-
nential programs so that each curve that  extrapolates
is also graphed.  This enables the  analyst  to  pick
the best fit from the plots without having  to  analyze
reams of computer print-outs.  A FORTRAN  IV8 program
has been written to utilize the plotter capability of
the IBM 360.  This plotter routine  will graph  the
price-adjusted historical payroll data  and  all  possible
extrapolations.  By employing transparent  plotting
paper and a uniform scaling factor, an  overlay effect
is created for the graphic extrapolations within each
S.I.C.  The three exponentials -- Gompertz, Pearl-Reed
and Modified Exponential -- are thereby grouped and
the trend selection process is greatly  facilitated.
A clustering effect is a "reasonable" indication of a
medium range projection.

A COBOL9 routine is used at this point to expand the
extrapolated U.I. payroll to a universe.  This universe
payroll figure will include U.I. payroll, B.O.A.S.I.
payroll and non-covered payroll.  Universe employment
data can then be estimated by dividing the universe
payroll projections by extrapolations of payroll-per-
employee.  Value-added and gross manufacturing  output
(Value-of-product) can be projected through correla-
tion of benchmark payroll to value-added  and payroll
to gross manufacturing output  (G.M.O.).

                POPULATION STATISTICS

In Virginia, the Division of State  Planning and
Community Affairs  (D.S.P.C.A.) has  been designated as
the agency  responsible for the State's  population
projections.  This Division  (D.S.P.C.A.)  has recently
published population forecasts for  all  counties and
cities in Virginia.  These projections  are  on  an every
ten year basis to the year 2020.

The planning guidelines of the Virginia Division of
Water Resources required a range of population fore-
casts.  The range of projections  (high, medium and
low) reflects varying demographic assumptions.  The
low projections assume a very  subdued rate of in-
dustrial development and continued  out-migration of
the resident population.  The medium  forecast  is  based
on a  rather vigorous  industrial development program.
     "FORTRAN  IV  is a computer  language which  is used most  frequently  in  scientific and engineering applications.
The  term FORTRAN  relates to  the primary use of  the  language:   FORmula  TRANslating.

     -"COBOL  is a  computer  language which  is used extensively  in  business  and  commercial data processing.  The
term COBOL  is derived from the  expression COmmon Business Oriented  Language.
                                                       470

-------
An extremely  accelerated rate of economic growth is
implicit  in  the  high projection.  High and low pro-
jections  were generated by fitting the compound in-
terest rate formula above and below the D.S.P.C.A.
forecast  (medium).   This trend fitting was accomplish-
ed using  a FORTRAN  IV program on an IBM 1130 computer.
County  and city  population projections developed by
the Virginia Division of State Planning and Community
Affairs were used by the Division of Water Resources
as the  medium range on which the high and low pro-
jections  were based.

The following high  and low control totals (in thou-
sands)  were  assumed for the entire state:
                 Virginia Population (x 1,000)

                 1970     1980     1990     2020
     High        4,648    5,632    6,919   12,100
     (Medium)    4,648    5,415    6,284    9,340
     Low         4,648    5,198    5,629    7,100
 The average annual rate of change was computed for
 each ten-year period using the compound interest rate
 formula:

     Average annual rate of change = R = (Xt / X1)^(1/n) - 1

     e.g.    Xt  =  5,629,000   (1990 State low)
             X1  =  5,198,000   (1980 State low)
             n   =  10 (years)
             R   =  0.00799
 A set of ten constants was then computed, five high
 (Hi) and five low (Li).  These can be defined for
 each ten-year period as the differences between R
 for the high projection (RHi) and R for the medium
 (RMi), and the difference between RMi and R for the
 low (RLi):

          Hi  =  RHi - RMi        i = 1,...,5
          Li  =  RMi - RLi        i = 1,...,5
 These constants were then applied to each county and
 city in developing the high and low projections.

 RCi was computed for each county and city for each
 ten-year period, using D.S.P.C.A.'s medium projections
 in the compound interest rate formula above.

 For a given county or city, then, the high projections
 were computed as follows:

     Hi(1980) = (1970 Popl'n) (1.0 + H1 + RC1)^10
     Hi(1990) = Hi(1980) (1.0 + H2 + RC2)^10, etc.

 and the low:

     Lo(1980) = (1970 Popl'n) (1.0 - L1 + RC1)^10
     Lo(1990) = Lo(1980) (1.0 - L2 + RC2)^10, etc.
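
 Using the relations reconstructed above together with the statewide
 control totals, a single-decade illustration might look like the
 sketch below; the county series and 1980 figures are hypothetical:

    # Sketch of the high/low construction; statewide totals are from the
    # table above, the county series is hypothetical.
    def rate(later, earlier, n=10):
        """Average annual rate of change, compound interest formula."""
        return (later / earlier) ** (1.0 / n) - 1.0

    # statewide rates for the 1980-1990 decade (populations in thousands)
    RH = rate(6919, 5632)        # high
    RM = rate(6284, 5415)        # medium
    RL = rate(5629, 5198)        # low (about 0.0080, cf. 0.00799 in the text)
    H2 = RH - RM                 # high adjustment constant for this decade
    L2 = RM - RL                 # low adjustment constant for this decade

    # hypothetical county: D.S.P.C.A. medium projections for 1980 and 1990
    RC2 = rate(58.0, 52.0)
    county_high_1980, county_low_1980 = 54.0, 50.0   # carried from the prior decade's step
    high_1990 = county_high_1980 * (1.0 + H2 + RC2) ** 10
    low_1990 = county_low_1980 * (1.0 - L2 + RC2) ** 10
    print(round(high_1990, 1), round(low_1990, 1))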
               BITUMINOUS COAL MINING

An analysis of Virginia's bituminous coal mining  in-
dustry was made  in Volume II —Economic  Base  Study  of
the Tennessee and Big Sandy River Basins.  Three  basic
economic indicators — production, employment  and  pro-
ductivity — were presented.  Production  in the coal
industry is measured  in mine tonnage and  has  experienc-
ed an increasing trend  in Virginia since  the  late 19th
century.  Record keeping has been quite good  in this
industry and a comprehensive set of historical data10 is
available from the Virginia Department of Labor and
Industry.  Based on the availability and  continuity
of this data, growth  curve fitting and extrapolation
were selected as reasonable forecasting  techniques.

Asymptotic growth curves describe a mineral industry
passing through  the following stages:

1.  Period of initial exploration, market development
    and limited  production, a phase characterized by
    slow growth
2.  Stage of sharply increasing production and rapid
    expansion
3.  Period of relative stability where the growth
    rate levels off with the main emphasis on operat-
    ing efficiency and cost minimization

The three exponential curves (Gompertz, Pearl-Reed and
Modified Exponential) discussed on the above  pages
were used in this analysis.

The medium range projection represented the rates of
growth believed  to be the most probable.  High and  low
projections were also developed.  These three forecasts
provided a range of data wherein certain water re-
source  planning alternatives could be tested.

Basically the same approach (asymptotic growth curves)
was used to project the future low employment trend
in the coal industry.  Because of the historically
declining employment  series, a low range curve with a
negative trend and a  lower limit was fitted and ex-
trapolated.

                  CURRENT PROJECTS

Recent emphasis in Virginia has been on Metropolitan/
Regional Plans to the State's Water Quality Manage-
ment Plan.  The Metropolitan/Regional Plans are being
developed for Virginia's twenty-two planning  districts.
Since the planning districts are aggregations of  entire
counties and cities,  the data base, described above,
was arrayed and  manipulated using the three-digit
county or city codes.  The basic economic parameters
developed in the river basin plans, discussed above,
were also generated for the Metropolitan/Regional
Plans.

Data has recently been developed for a special water
quality management study for the lower James  River
Basin.  The project (The Lower James River Basin  Com-
prehensive Management Study), often referred  to as  the
"3c" Study, was authorized under Section 3(c) of the
1965 Federal Water Pollution Control Act.  The purpose
of the 3(c) Study is  to develop a viable water quality
management plan  for one of the most  intensively
developed sections of Virginia's largest  river basin.
     10Annual Reports, Virginia Department of Labor and Industry, 1951-69
                                                       471

-------
The data assembled for the "3c" Study includes stan-
dard parameters for all industries and follows a county
and city format.  Economic data for the "3c" Study
was generated in terms of a 1970 benchmark and ten-
year projections to the year 2020.  Those indexes
requiring price adjustments were expressed in constant
1970 dollars.

The forecasting methodology for the "3c" Study data
generally paralleled  the techniques discussed re-
garding the manufacturing data.  Again, extrapolations
of "growth curves" fitted to price adjusted payroll
data were correlated to other parameters such as
employment and sales.  The major exception was in a
shift from the B.L.S. Wholesale Price Index to the
B.L.S. Consumer Price Index for price adjusting his-
torical payroll data.  The "3c" Study places consider-
able emphasis on "real" income of the Study area in
relation to the proposed expenditures for water quali-
ty management.  Payroll data (for all industries)
price adjusted with the Consumer Price Index should
produce a fairly realistic indication of how local
income can meet expenditure recommendations.  Certain
non-payroll data, however, such as manufacturing
value-added, gross manufacturing output and wholesale
trade receipts was adjusted with the Wholesale Price
Index.

                      CONCLUSIONS

Economic data adds an important dimension to water
resource and water quality management planning.
Payroll and employment statistics collected to ad-
minister State Unemployment Insurance programs have a
multitude of applications in economic analysis and
forecasting.  U.I. data is a continuous, carefully
maintained and relatively extensive set of historical
records.   It has been accumulated under national
guidelines of the U. S. Department of Labor, Manpower
Administration and  is quite uniform in format.  U.I.
records have, for years, been structured for data
processing applications.  Further manipulation of
this data such as price adjusting and trend fitting
are thus facilitated.  Most of the standard economic
parameters of water resource and water quality manage-
ment planning such as value-added and gross manufac-
turing output have been correlated to U.I. payroll
and employment benchmarks.  The Annual Survey of
Manufacturing  (Virginia Department of Labor and
Industry) provides value-added and gross manufactur-
ing output data.  County and city detail and a data
processing format is an important feature of the
Annual Survey  (S-l).  Both the U.I. and S-l data have
been further formatted by hydrologic area in Virginia.
On balance, the U.I. and S-l data have become valuable
tools for water resource and water quality management
planning in Virginia.


                                               LITERATURE  CITED

Tuttle, Alva M. 1957.  Elementary Business and Economic Statistics.  New York, N.Y.:  McGraw-Hill
     Book Company, Inc.

U. S. Department of Labor, Bureau of Labor Statistics, 1971.  Handbook of Labor Statistics.  Bulletin 1705.
     Washington, D. C.

U. S. Department of Labor, Bureau of Labor Statistics, 1970.  Patterns of U. S. Economic Growth.  Bulletin
     1672.  Washington, D. C.

Virginia Department of Labor and Industry.  1951-68.  Annual Reports.  Richmond, Virginia.
                                                       472

-------
                   MODELING RADIATIVE TRANSFER IN THE PLANETARY BOUNDARY LAYER:  PRELIMINARY RESULTS
                                          Francis S. Binkowski, NOAA*

                                     Meteorology and Assessment Division
                                 Environmental Sciences Research Laboratory
                                        Environmental  Research Center
                                      Research Triangle Park, N.C.  27711
                       Abstract

     Long wave radiative fluxes and cooling rates
are calculated using a statistical band model.  The
vertical  quadratures are sums  of analytic integrals
of the transmission functions.   The calculated cool-
ing rates and fluxes compare very favorably with ob-
servations.

                    Introduction

     The  prediction of pollutant dispersion conditions
in the lower troposphere for use in air quality simu-
lation models requires information about  the tempera-
ture profile, since atmospheric thermal stability
strongly  influences the character of dispersion.
Thus, the problem becomes to predict the temperature
profile in a realistic way.  One of the major influ-
ences on the temperature profile, especially at night
but also in the late afternoon, is the rate of change
of the net long wave (infrared) flux with height (or
pressure).

     This paper will discuss a method for calculating
the upward, downward and net fluxes for clear skies
(no clouds) and clean air (no aerosol) cases.  When
methods for treating clouds and aerosol are introduced
into the  calculation method, comparisons with the
benchmark case (clear skies, clean air) as well as
with real atmospheric data will be possible.

     Many methods have been proposed for calculating
these fluxes.  The most flexible methods are those
which divide the terrestrial spectrum into finite
intervals and calculate the transmission of infrared
radiation within each spectral interval and sum over
the contributions of each interval.  Atwater (1966),
Rodgers and Walshaw (1966), Ellingson (1972) and Fels
and Schwartzkopf (1975) have all used this method.
All these authors also used a random band model for
the transmission function.  That is, the transmission
for each  spectral interval was obtained by consider-
ing the absorption lines within the spectral interval
to be randomly spaced and to have intensity specified
by some probability distribution.  The random band
model approach is used here as well as a transmission
function  model which has a more realistic probability
distribution for water vapor (Malkmus, 1967) and has
a simpler mathematical form than previous transmis-
sion models.  This transmission function has not been
used for  this purpose in the open literature before.

     Since our interest is in the lower troposphere,
we shall  consider only those absorbers which are most
important in this part of the atmosphere.  Thus, wa-
ter vapor and carbon dioxide are the only absorbers
considered.  Ozone, which has a strong absorption band
at 9.6 μ and is in the water vapor "window" of 8 to
13 μ, was not considered in the present work.  The
quantitative effects of this omission are unknown
* On assignment from the National  Oceanic  and  Atmos-
  pheric Administration, U.S.  Department of Commerce
since  real-time  tropospheric  ozone  profiles  were  un-
available  for  the  comparisons  discussed  below.  Quali-
tatively,  the  most important  effect of ignoring ozone
is that the  downward  flux  at  the  surface is  smaller
than it would  be if ozone  were  considered.   Ozone will
be added in  future calculations.  All  other  gases pres-
ent in trace amounts  are also  ignored.   As mentioned
previously, the band model  approach  used  here is flexi-
ble enough to  permit  additional absorbers to be treat-
ed relatively  easily.  Other  absorbers will  be consid-
ered when  it becomes  apparent  from  comparison with
measurements that  it's necessary  to include  them.

                Method of  Calculation

     We start with the equation of radiative transfer
in integrated form as given by Rodgers and Walshaw
(1966), with slight modification:

     ↓F(z) = B(z) + {↓F(H) - B(H)} T(U_t) + ∫[0 to U_t] (dB/du') T(u') du'    (1a)

     ↑F(z) = B(z) + {↑F(g) - B(0)} T(U_b) + ∫[0 to U_b] (dB/du') T(u') du'    (1b)
where for each spectral interval ↓F(z), ↑F(z) and
B(z) are respectively the downward, upward and black-
body fluxes at the reference level z, and ↓F(H), ↑F(g)
and B(0) are respectively the downward flux at the
top of the computational domain H, the upward flux at
the ground, g, and the blackbody flux at the bottom of
the atmosphere.  The definite integrals are taken over
the amount of absorber (water vapor) u' between z and
levels above z in (1a) and between z and levels be-
neath z in (1b).  The total absorber amounts from z
to H and from z to the ground are U_t and U_b respec-
tively.  The transmission function T(u') is defined
such that T(0) = 1 and T → 0 as u' → ∞.  The integral

     ∫[0 to U] (dB/du') T(u') du'

may be written without approximation as the sum

     Σ(i=1 to n) ∫[u_(i-1) to u_i] (dB/du') T(u') du'

where U, which may represent U_t or U_b, has been di-
vided into n increments Δu = u_i - u_(i-1).  If we rep-
resent dB(u')/du' by an average value for each in-
crement Δu such that

     dB/du' ≈ [B(u_i) - B(u_(i-1))] / Δu = ΔB/Δu

then

     ∫[0 to U] (dB/du') T(u') du' ≈ Σ(i=1 to n) (ΔB_i/Δu_i) ∫[u_(i-1) to u_i] T(u') du'    (2)
     Both of the definite integrals in (1) may be ap-
proximated in this way.  This approximation of dB/du'
by AB/Au is strictly valid only when B varies linearly
over u'.  In practice only discrete values of B and u'
are available, either from radiosonde measurements in
the atmosphere or from grid point calculations.  The
                                                      473

-------
approximation in (2) says that between these discrete
values B varies linearly with u'.  This is the sim-
plest approximation which allows B to vary between
levels.  This may be viewed as a next step from con-
sidering an isothermal slab between the discrete
points.  The approximation is invoked in order to ob-
tain the form (2) which contains the integral of the
transmission function, T, over absorber amount u'.  If
the precise form of T specified is analytically inte-
grable,  then the definite integrals in (1a,b)  may be
given by simple sums.
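
     As a toy illustration of this layered quadrature, the sketch below
approximates the definite integral in (1a) with ΔB/Δu increments and a
stand-in transmission T(u) = exp(-ku), which is not the band model of
this paper but is likewise analytically integrable; all profile values
are invented:

    # Toy illustration of the layered sum for the integral in (1a):
    # dB/du' is replaced by deltaB/deltau on each increment, and a stand-in
    # transmission T(u) = exp(-k*u) supplies an analytic layer integral.
    import numpy as np

    k = 2.0
    T = lambda u: np.exp(-k * u)
    T_int = lambda u0, u1: (np.exp(-k * u0) - np.exp(-k * u1)) / k  # integral of T over [u0, u1]

    u = np.array([0.0, 0.1, 0.25, 0.45, 0.7])            # cumulative absorber above level z
    B = np.array([450.0, 430.0, 405.0, 375.0, 340.0])    # blackbody flux at the same levels

    layer_sum = sum((B[i + 1] - B[i]) / (u[i + 1] - u[i]) * T_int(u[i], u[i + 1])
                    for i in range(len(u) - 1))

    F_down_top = 0.01 * B[-1]      # emissivity 0.01 at the top, as described later in the paper
    F_down_z = B[0] + (F_down_top - B[-1]) * T(u[-1]) + layer_sum
    print(F_down_z)                # downward flux at level z, W m-2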

     We shall use the transmission function of Malkmus
(1967) discussed by Rodgers (1967):

     T(u') = exp{-a(1 + bu')^(1/2) + a}                                       (3a)

which may be integrated as

     ∫[u_(i-1) to u_i] T(u') du' = 2a⁻²b⁻¹ {(Y_i - a - 1) exp(Y_i)
                                            - (Y_(i-1) - a - 1) exp(Y_(i-1))}  (3b)

where Y_i = -a(1 + bu_i)^(1/2) + a, and the parameters a and b,
to be discussed below, are held constant.  Thus the
approximation of dB/du' by ΔB/Δu, combined with (3b), in-
dicates a simple, efficient computation scheme.
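
     Because (3b) follows from (3a) by an exact substitution, the pair is
easy to check numerically.  A short sketch with arbitrary a and b values
(not the band parameters discussed below):

    # Quick numerical check of (3a)/(3b); a and b here are arbitrary.
    import numpy as np
    from scipy.integrate import quad

    def malkmus_T(u, a, b):                  # eq. (3a)
        return np.exp(-a * np.sqrt(1.0 + b * u) + a)

    def layer_integral(u_lo, u_hi, a, b):    # eq. (3b)
        Y = lambda u: -a * np.sqrt(1.0 + b * u) + a
        term = lambda u: (Y(u) - a - 1.0) * np.exp(Y(u))
        return 2.0 / (a * a * b) * (term(u_hi) - term(u_lo))

    a, b = 1.3, 0.8
    analytic = layer_integral(0.0, 2.0, a, b)
    numeric, _ = quad(lambda u: malkmus_T(u, a, b), 0.0, 2.0)
    print(analytic, numeric)                 # the two values agree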

     The band parameters a and b are defined by (Rod-
gers, 1967) as

     a = π α P̄ / (2 δ) ,     b = 4 k / (π α P̄)

or, for notational convenience,

     a = ã P̄ ,     b = b̃ / P̄

where k, α, and δ are respectively the average line in-
tensity, half width (at one atmosphere) and line spac-
ing which are representative of the spectral interval
considered; and P̄ is an average pressure (discussed
below) for the layer defined by Δu.  The expressions
a, b may be obtained from the wave length, half width
and intensity of individual spectral lines within each
interval.  Rodgers and Walshaw (1966) discuss the use
of line data for determining a, b, and also recommend
the use of the Curtis-Godson approximation to correct
the a, b for changes in temperature and pressure
along an atmospheric path.  This approximation defines
a corrected absorber amount u and a mean pressure P̄
between two levels, say i and j, as:

     u = Σ(ℓ=i to j) ∫ 1.66 Φ(θ) r dp/g                               (5a)

     P̄ = (1/u) Σ(ℓ=i to j) ∫ 1.66 Ψ(θ) (P/P₀) r dp/g                  (5b)
where r is the mixing ratio, P is pressure, P₀ is
101.325 kPa and 1.66 is the diffusivity factor for cal-
culating the diffuse fluxes from beam transmission
functions (Armstrong, 1969; Ellingson, 1972).  The
Φ and Ψ functions are obtained by calculating values
of a, b at three temperature values and fitting empir-
ical functions of the form (Rodgers and Walshaw,
1966):

     Φ(θ) = exp{A(θ - θ₀) + B(θ - θ₀)²}                               (6a)

     Ψ(θ) = exp{A'(θ - θ₀) + B'(θ - θ₀)²}                             (6b)
     This representation recovers values of a, b with-
        in the temperature range considered with a relative
        error of 1% or less.  The line data of McClatchey et
        al. (1973) were used to calculate A, B, A', B' with a
        reference temperature θ₀ of 275 K.  With values of A,
        B, A', B', (6) may be used in (5) to evaluate the cor-
        rected absorber amount u and mean pressure P̄.  Then,
        if the corrected absorber amount is used as the vari-
        able of integration in (3b), with P̄ constant for each
        layer, a and b are constant for the integration.  The
        temperature and pressure effects are included in the
        ΔB/Δu term through the factor 2a⁻²b⁻¹ in (3b); thus
        Δu is an increment of corrected absorber amount.  The
        six calculated band parameters were favorably compared
        with Rodgers and Walshaw (1966) for consistency where
        possible.  The line data of McClatchey et al., how-
        ever, include information which was not available to
        Rodgers and Walshaw (1966).  This procedure was used
        to calculate the transmission function for water vapor
        for intervals from 0-760 cm⁻¹ and 1200-2500 cm⁻¹.

             The 8-13 μ window was treated differently.  The
        absorption in this spectral region is of somewhat dif-
        ferent character than that treated previously.  The
        absorption appears as a "continuum" (Bignell, 1970),
        that is, a region of nearly constant absorption.  The
        transmission function may be given as

             T = exp{-K_s u}

        where K_s is a constant and u is a corrected absorber
        amount defined as

             u = (1/P₀) ∫ 1.66 (0.005 P + 0.995 e) Φ r dp/g
        where the pressure, P, term represents the broadening
        of the individual lines by collision with other species,
        and the water vapor pressure, e, term represents self
        broadening by water vapor (Bignell, 1970; McClatchey
        et al., 1972).  The temperature correction term

             Φ = exp{1745/θ - 5.90}

        is an empirical correction term due to Lee (1973).
        When the corrected absorber amount u is used as a variable
        of integration, this transmission function may be
        analytically integrated.  This transmission function
        and its integral were used for the two intervals 760-
        1000 cm⁻¹ and 1000-1200 cm⁻¹.

             Carbon dioxide is treated in a highly parameter-
        ized way.  For one spectral interval, 560-760 cm⁻¹,
        which includes the entire 15 μ CO₂ band, an empirical
        transmission function due to Rodgers and Walshaw
        (1966) is used.  This transmission function, T(c), is
        also integrable in terms of the carbon dioxide amount
        c.  The overlap of CO₂ and water vapor for the one
        spectral interval 560-760 cm⁻¹ is treated by calculat-
        ing a mean transmission T̄ given by

             T̄ = (1/C_t) ∫[0 to C_t] T(c) dc

where C_t is the carbon dioxide amount at the same lev-
el as U_t.  The water vapor transmission functions in
(3) are then multiplied by T̄ in the integrals and by
T(C) where C is the total CO₂ above or beneath z as
appropriate.  This parameterization is used for two
reasons: first, it seems to yield good results in com-
parisons with cases when no CO₂ is used; second, it is
simple, efficient, and economical compared with a more
comprehensive scheme such as that used by Fels and Schwartz-
kopf (1975), in which a detailed line-by-line integra-
tion is used.  A band model approach for CO₂ such as
used by Ellingson (1972) is quite costly, since he used
15 intervals for the CO₂ alone and the present model
                                                       474

-------
only considers 12 intervals in total.

     Planck functions, B, were calculated by a Gauss-
Laguerre numerical  quadrature scheme due to Johnson
and Branstetter (1974).   A simple 5 point formulation
gave 6 decimal place accuracy, which is certainly suf-
ficiently accurate  for the present needs.  This is to
be contrasted with  the 96 point Gaussian quadrature
of Ellingson (1972) which also gave 6 place accuracy.

     As mentioned above, 12 intervals were used. This
is to be compared with 20 intervals used by Rodgers
and Walshaw (1966); 125  intervals used by Atwater
(1966) and 100 intervals used by Ellingson (1972).
Calculations with as many as 18 intervals were indis-
tinguishable from calculations with 12 intervals, but
calculations with 7 intervals were markedly poorer,
especially in the calculated cooling rates as com-
pared with observed cooling rates.  Only results from
the 12 interval model will be presented here.

     Equations (la,b) together with approximations
(2a,b) and the appropriate transmission functions are
used to calculate the upward and downward fluxes.
The downward flux at the top is taken to be blackbody
flux times an emissivity of 0.01 which gives the right
order of magnitude for the downward flux at 20 kPa.
The bottom surface, at present, is taken to have an
emissivity of 1.0.   The cooling rates are obtained
from the divergence of the net flux, that is

     ∂T/∂t = -(1/(ρ c_p)) ∂F_n/∂z = (g/c_p) ∂F_n/∂p

where F_n is the net flux (F_n = ↑F - ↓F).  At present,
the vertical p-derivative is approximated by a finite
difference.
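
A minimal sketch of that finite-difference evaluation in pressure
coordinates, using invented level values:

    # Sketch of the cooling-rate evaluation, dT/dt = (g/c_p) dF_n/dp,
    # with a finite difference in pressure; the profile values are invented.
    import numpy as np

    g, c_p = 9.81, 1004.0                        # m s-2, J kg-1 K-1 (dry air)
    p = np.array([100e3, 90e3, 80e3, 70e3])      # Pa, from the bottom level upward
    f_net = np.array([60.0, 72.0, 83.0, 92.0])   # net upward flux, W m-2, at those levels

    dT_dt = (g / c_p) * np.diff(f_net) / np.diff(p) * 86400.0   # K per day, layer midpoints
    print(dT_dt)                                 # negative values: radiative cooling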

         Results of Validation Calculations
     To assess the validity of the method of calcula-
tion, the clear skies measurements discussed by Gille
and Kuhn (1973) were used for comparison with calcula-
tions presented here.  Temperature and mixing ratio
at 5 kPa (50 mb) increments from 100 to 20 kPa were
used as input data.  Upward, downward and net fluxes
were calculated at the input levels and compared with
the in situ measurements of Gille and Kuhn.  Results
of this comparison are shown in Figure 1.  The in situ
data shown are from over-water ascents of the U.S.
radiometer-sonde in the vicinity of Panama on four
clear skies evenings (0000 GMT ± 20 minutes).  Figure
1 displays the means of the measurements and the means
of the calculations for the upward, downward and net
fluxes, while Figure 2 displays the means of the
measured and calculated cooling rates.  Visual inspec-
tion of the figures indicates a good correspondence
between observations and calculations.  Table 1 shows
"Student t" values (Weatherburn, 1961) for the com-
parison of these means.  The table indicates that for
a 5% significance level with 3 degrees of freedom
(t = 3.18) the means do not differ for any of the fluxes or
cooling rates.  The interpretation is that a set of
sample calculations is indistinguishable from a set
of measurements.  This is consistent with the findings
of Gille and Kuhn (1973) who used results of Elling-
son's (1972) model for calculations.

     A somewhat more demanding test of model perform-
ance is a comparison of case by case predictions for
the clear skies cases.  The relative error of the
calculation compared with the observation, (OBSERVED -
CALCULATED)/OBSERVED, for each case was calculated and
averaged.  A perfect model would have this average
relative error equal to zero.  The "t" values for this
comparison are displayed in Table 2, where we see that
four values are significant at the 5% level: the low-
est three altitudes of downward flux and the lowest
value of the net flux.  The relative errors for the
downward flux are of the order of 8%, while the rela-
tive error of the net flux is quite large because of
the small value of the flux.

     We may compare these errors to the differences
that exist among simultaneous measurements.  Gille and
Kuhn (1973) show comparisons of the U.S. radiometer-
sonde with a German and a Japanese instrument.  For
the 90 kPa altitude there was a 2% difference in down-
ward flux between the U.S. measurement and the mean of
all three (U.S., German, Japanese), which is signifi-
cant at the 5% level.  Further, the model calculations
of Ellingson used by Gille and Kuhn showed an under-
estimate of the order of 5% for the same altitudes.
These differences are also significant at the 5% level
(t ≈ 4.0).  Gille and Kuhn also give some tentative evi-
dence of a systematic overestimate of downward flux
and systematic underestimate of net flux by the U.S.
radiometer-sonde when compared with surface measure-
ments from a Linke-Feussner instrument.  Thus the er-
rors, that is, differences between the observed and
calculated fluxes, taken on a case by case basis are
real and significant, but of the same order of magni-
tude as differences among radiometer-sondes, and of
the same order of magnitude as other calculations.

     The cooling rate differences show no significant
difference at the 5% level between calculations and
measurements.  This is encouraging, since the cooling
rate is the dynamical variable needed for prediction
of the temperature profile.

                     Conclusions

     The 12 interval model presented here is capable
of calculating useful estimates of the upward, down-
ward and net fluxes and cooling rates in the lower
troposphere for the clear skies, clean air case.

                   Acknowledgements

     The author wishes to thank his colleagues Dr. J.
T. Peterson, Dr. J. H. Shreffler, and Dr. J. K. S.
Ching for their comments, criticisms, and suggestions.
He also wishes to thank his supervisor Dr. K. L.
Demerjian for his encouragement and support.

                      References
Armstrong, B.H., 1969:  The Radiative Diffusivity Fac-
tor for the Random Malkmus Band.  J. Atmos. Sci., 26,
741-743.

Atwater, M.A., 1966:  Comparison of Numerical Methods
for Computing Radiative Temperature Changes in the
Atmospheric Boundary Layer.  J. Appl. Meteor., 5, 824-
831.

Bignell, K.J., 1970:  The Water-Vapour Infra-Red Con-
tinuum.  Quart. J. R. Meteor. Soc., 96, 390-403.

Ellingson, R.G., 1972:  A New Longwave Radiative Trans-
fer Model: Calibration and Application to the Tropi-
cal Atmosphere.  Dept. of Meteorology, Florida State
University, Report 72-4, June 1972.  348 pp.

Fels, S.B., and M.D. Schwartzkopf, 1975:  The Simpli-
fied Exchange Approximation:  A New Method for Radia-
tive Transfer Calculations.  J. Atmos. Sci., 32, 1474-
1488.

Gille, J.C. and P.M. Kuhn, 1973:  The International
Radiometersonde Intercomparison Programme (1970-1971).
                                                      475

-------
WMO Tech. Note No. 128, Geneva.

Goldman, A. and T.G. Kyle, 1968:  A Comparison Be-
tween Statistical Model and Line Calculation With Ap-
plication to the 9.6μ Ozone and 2.7μ Water Vapor
Bands.  Appl. Optics, 7, 1167-1177.

Johnson, R.B. and E.E. Branstetter, 1974:  Integration
of Planck's Equation by the Laguerre-Gauss Quadrature
Method.  J. Optical Soc. of America, 64, 1445-1449.

Lee, A.C.L., 1973:  A Study of the Continuum Absorp-
tion Within the 8-13μ Atmospheric Window.  Quart. J. R.
Meteor. Soc., 99, 490-505.

Malkmus, W., 1967:  Random Lorentz Band Model with
Exponential-Tailed S⁻¹ Line Intensity Distribution
Function.  J. Optical Soc. America, 57, 323-329.

McClatchey, R.A., W.S. Benedict, S.A. Clough, D.E.
Burch, R.F. Calfee, K. Fox, L.S. Rothman, J.S. Garing,
1973:  AFCRL Atmospheric Absorption Line Parameter
Compilation.  AFCRL-TR-72-0096, 23 January 1973,
Environmental Research Paper No. 434.  Air Force Cam-
bridge Research Laboratories, L.G. Hanscom Field, Bed-
ford, Massachusetts.  78 pp.

Rodgers, C.D., 1968:  Some Extensions and Applications
of the New Random Model for Molecular Band Transmis-
sion.  Quart. J. R. Meteor. Soc., 94, 99-102.

Rodgers, C.D. and C.D. Walshaw, 1966:  The Computation
of Infrared Cooling Rate in Planetary Atmospheres.
Quart. J. R. Meteor. Soc., 92, 67-92.

Weatherburn, C.E., 1961:  A First Course in Mathemati-
cal Statistics.  Cambridge University Press.  277 pp.
                                TABLE 1

          The t values for comparison of mean of calculations
          with mean of measurements for upward, downward, and
          net fluxes and the cooling rates.

          Pressure (kPa)   Upward   Downward    Net    Cooling rate
               100          0.35      2.21     -3.04       1.52
                90          2.47      1.84     -1.29       0.06
                80          1.81      1.27     -0.99       0.28
                70          1.73      1.36     -0.97       0.71
                60          1.82      0.47     -0.66       0.91
                50          1.65      0.68     -0.26       0.62
                40          1.25      0.55      0.05       1.13
                30          0.58      0.22     -0.27       0.95
                20          0.35      0.29      0.61       1.89

          Negative values indicate an over-estimate in the calculations.
                                TABLE 2

          The t values for relative error of calculation compared
          with measurements on a case by case basis for upward,
          downward, and net fluxes and cooling rates.

          Pressure (kPa)   Upward   Downward    Net    Cooling rate
               100          1.09      6.55*    -3.83*      1.25
                90          3.00      5.56*    -2.35      -0.44
                80          3.02      3.96*    -1.69       0.05
                70          1.56      2.69     -1.81       0.47
                60          1.86      1.91     -1.16       0.29
                50          1.81      0.84     -0.68      -1.17
                40          1.39      0.88     -0.08       1.33
                30          0.81      0.20      0.35       1.11
                20          0.60     -1.78      0.93      -0.09

          * Significant at the 5% confidence level for 3 degrees of
            freedom, t = 3.18.
                                                         476

-------
Figure 1 :  Comparison of mean measured with mean calculated fluxes
            for upward, downward, and net fluxes.
            [Three panels: downward flux (W m⁻²), upward flux (W m⁻²),
            and net flux (W m⁻²), each plotted against pressure from
            100 to 30 kPa; X = measurements, • = calculations.]
Figure 2 :  Comparison of mean measured and mean calculated cooling
            rates.
            [Cooling rate (K day⁻¹, 0 to 5) plotted against pressure
            from 100 to 20 kPa; X = measurements, • = calculations.]
                                                              477

-------
    ADAPTIVE FORECASTING OF BACKGROUND CONCENTRATIONS USING FEEDBACK  CONTROL AND PATTERN RECOGNITION TECHNIQUES
                       R. Carbone
         Academic Faculty of Management Sciences
                The Ohio State University
                     Columbus, Ohio
                        W.L. Gorr
             School of Public Administration
                The Ohio State University
                     Columbus, Ohio
                        Abstract

      This paper develops and tests an empirically-based
 model for tracking and forecasting background concen-
 trations of air pollutants.  A set of variables (e.g.,
 meteorological, locational, economic activity levels,
 etc.) and a functional form relating the variables con-
 stitute the model.  The technique for estimating the
 parameters of the model is derived from recent develop-
 ments in adaptive feedback and pattern recognition.
 Time-varying characteristics of the parameters are
 "tracted"; and thus, automatically adapted to structural
 changes in the air pollution system.  The model and
 estimation technique are applied to total suspended
 particulates background in Allegheny County,
 Pennsylvania, as a pilot test.

                    1.   Introduction

      The clean air program for attainment and mainte-
 nance of air quality standards (AQS's)  has given rise
 to a number of modeling requirements for the time  se-
 ries of pollutant concentrations.   (See Rhoads [1]  for
 a concise review of programs.)  Program planning for
 regulation of pollutant emissions,  capacity expansion,
 facilities location etc.  requires  models relating  con-
 trol variables to the  controllable  component of air
 quality.  These models are common;  see  for example [2],
 [3], and [4].   In addition, it is  necessary to model
 the uncontrollable component in order to obtain total
 or ambient concentrations.   Planning tends to involve
 special, "once only" studies in a  decision process  that
 yields a regulated time series of  concentrations de-
 signed to fall within  target levels.

      Program evaluation of plans and their implementa-
 tion requires,  on the  other hand,  routine modeling  of
 the time series for tracking,  detection of disparities
 with targeted levels,  diagnosis of  cause,  and correc-
 tion.   Large numbers of variables and massive quanti-
 ties of data are involved  in the clean  air program  so
 that a staged modeling approach is  desirable  with  the
 early stages being automated and suggestive of the  lat-
 er,  specialized stages.

      Tracking  models should provide robust and up-to-
 date estimates  of ambient  concentrations  so that sig-
 nificant changes  can be detected.   Such models may  also
 form the basis  for simple  forecasts  through extrapola-
 tions  of the tracked time  series.   If there is a dis-
 parity between  current  or  forecasted  concentrations  and
 targeted levels,  then  it is  required  to  advance to  a
 second stage of  automated modeling  for  preliminary  di-
 agnostics.

     First,  tracking and forecasting should be extended
 to separate  the  controllable and uncontrollable compo-
 nents  of  the time  series to  determine which is the
 problem.  Second,  a  sufficient number of explanatory
 variables  (e.g., meteorological, economic  activity lev-
 el,  regulatory activity level etc.) should  be  correlat-
 ed to  concentrations through multivariate  statistical
 models so that analysts can theorize as to  the causes
 of the problem.  From this basis, analysts  could pro-
 ceed if necessary  to a third stage of analysis which
 would be non-routine and involve further measurements,
  experiments, or other special studies.  The results of
 diagnostic  work would serve the decision process for
 modifications of plans or targets.

      This paper presents an approach, the adaptive
 statistical diffusion model (ASDM), which is promising
 for many of the program requirements outlined above.
  The ASDM is a multivariate time series model based on
  a "time-varying parameters" principle and feedback es-
 timation procedures  leading to a highly flexible and
 automated modeling capability.   Section 2 formulates
 the ASDM and  its estimation procedure, and Section 3
  provides program applications.  Sections 4 and 5 give
 a  specific  application to total suspended particulates
 (TSP) background in  Allegheny County (Pittsburgh),
 Pennsylvania.   Finally,  Section 6 outlines future  work.

 2.   Adaptive  Estimation and Forecasting of Pollutant
    Concentration Over Time:  a General Approach

 2.1   Model  Formulation

      The ASDM consists of a set of explanatory vari-
 ables and time-varying parameters combined in a func-
 tional,  form for pollutant concentrations.  An impor-
 tant  aspect of the ASDM is that all parameters are  as-
 sumed to be time-varying,  and in the estimation proce-
 dure  discussed below,  parameter estimates are updated
 sequentially  as each observation of the explanatory
 variables occurs.  This  leads to the potential for au-
 tomatically capturing the effects on concentration of
 missing variables  and system structural changes.  For
 example, suppose an  air  pollution source to the west
 of a  concentration monitor has  major impacts on the
 monitor, but  pollutant emission strength is a missing
 variable.   If  wind direction (from which the wind
 blows) sectors  are represented  by a set of indicator
 variables,  then the  time-varying parameter for the
 variable "west" might account for effects on concen-
 tration of  trends  or cycles in  emissions.

     The explanatory variables  can be arbitrarily
 classified  as  "quantitative" which refers  to dimen-
 sioned qualities such as wind speed, or "qualitative"
which leads to  0/1 indicators  for nominal  classes of
 the qualitative  variable;  e.g.,  north,  east,  south,
 and west for wind  direction;  no,  mild,  and heavy for
 precipitation.   A general first-order equation pre-
 sented in [6]  portrays a useful  interaction phenomenon
between quantitative  and  qualitative  variables  in a
 time-varying framework.   Applied to  the air pollution
problem, this becomes  the ASDM:
     y(t) = [ Π(i=1 to n) a_i(t)^z_i(t) ] [ Σ_j β_j(t) x_j(t) ] + u(t),   t = 1,2,...   (1)

     where a specific averaging time, pollutant spe-
     cies, and point p in geographic region R are as-
     sumed; and
          y(t) = pollutant concentration at time t,
          z_i(t) = the i-th qualitative indicator which
                   takes values of 0 or 1 depending on
                                                        478

-------
                   whether or not the i-th nominal class
                   occurs at t,
          a_i(t) = the parameter for the t-th observa-
                   tion associated with the i-th quali-
                   tative indicator,
          x_j(t) = the t-th observation for the j-th
                   quantitative variable,
          β_j(t) = the parameter for the t-th observa-
                   tion associated with the j-th quanti-
                   tative variable, and
          u(t) = an undefined error term.
     A normalization process is employed for the quali-
tative variables:  one nominal class, a "z_i(t)" indica-
tor, is suppressed in the ASDM for each qualitative
variable; e.g., "north" for wind direction and "no pre-
cipitation" for precipitation.  The collection of sup-
pressed nominal classes for all qualitative variables
becomes the "standard" qualitative condition.  An ob-
servation of this standard would then result in the
value of 0 for all remaining, non-standard z_i(t)'s in
the product portion of the ASDM.  Thus the product
would take the value 1 and the ASDM for the standard
condition would simply be:

     y(t) = Σ_j β_j(t) x_j(t) + u(t)                                (2)
Observations with non-standard conditions result in a
product greater or less than 1 reflecting the relative
effect on the standard concentration model as in (2).

2.2  Estimation Algorithm

     Our purpose is to obtain updated estimates of the
a(t) and β(t) parameters for each observation in order
to capture and track the changing effects of the vari-
ables on concentration.  Due to the nature of our prob-
lem, the algorithm used should possess certain desir-
able properties:

     a.  It should be designed to minimize over-reac-
         tions to pollutant measurement and other tran-
         sient errors, and thus, provide robust estima-
         tors.

     b.  Estimates of the parameters should be updated
         without requiring any a priori knowledge of
         the kind of processes which may govern their
         time variation.  This aspect is crucial since
         changes in uncontrollable conditions that may
         occur in the future are seldom known or accu-
         rately predicted in advance.

     c.  From an operational point of view, the high
         rate of observations occurring over time re-
         quires the use of an algorithm which involves
         sequential processing of information as op-
         posed to batch processing.  It should be com-
         putationally tractable and implementable on
         widely available computers.

     An approximation method for estimating time-vary-
ing parameters, Adaptive Estimation Procedure (AEP),
recently developed by Carbone and Longini, see [5] and
[6], lends itself to the nature of our formulation and
problem.  In this methodology, robust time-varying es-
timators are generated without using any a priori know-
ledge by recursively updating values via the following
two formulas:
     β_j(t) = β_j(t-1) + |β_j(t-1)| (1/D) [(y(t) - ŷ(t))/ŷ(t)] [x_j(t)/x̄_j(t)]
                                                       for all j             (3)

     a_i(t) = a_i(t-1) + |a_i(t-1)| (1/(S₁D)) [(y(t) - ŷ(t))/ŷ(t)] z_i(t)
                                                       for all i = 1,...,n   (4)
where
     ŷ(t) = predicted concentration for time t
            based on the β(t-1) and a(t-1) estimators,
     D = damping parameter > 1,
     S₁ = the number of qualitative variables (or
          groups of nominal indicator variables), and
     x̄_j(t) = updated average for the j-th quanti-
              tative variable.

An exponential smoothing scheme is used to calculate
this latter average as follows:

     x̄_j(t) = S₀ x_j(t) + (1 - S₀) x̄_j(t-1),   where 0 < S₀ < 1
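
     The recursive scheme can be sketched as a short loop.  The example
below follows the update relations (3) and (4) as reconstructed above,
so it should be read as an illustration of the general AEP idea rather
than the authors' exact algorithm; the simulated data, damping constant,
and wind-direction classes are all invented:

    # Sketch of a sequential AEP-style loop using relations (3)-(4) as
    # reconstructed above; everything here is illustrative only.
    import numpy as np

    D, S0, n_qual = 4.0, 0.1, 1.0          # damping (> 1), smoothing constant, no. of qualitative vars
    beta = np.array([10.0, 1.0])           # parameters: constant term, wind speed
    alpha = np.array([1.0, 1.0, 1.0])      # multipliers for the non-standard wind classes
    xbar = np.array([1.0, 3.0])            # smoothed averages of the quantitative variables

    rng = np.random.default_rng(0)
    for t in range(500):
        x = np.array([1.0, rng.uniform(1.0, 6.0)])       # observation: constant, wind speed
        k = rng.integers(0, 4)                           # wind-direction class; 0 = standard ("north")
        z = np.array([1.0 if i == k else 0.0 for i in (1, 2, 3)])

        # synthetic "truth": class 2 carries an unmodeled upwind source (x 1.4)
        y = (1.4 if k == 2 else 1.0) * (20.0 + 2.0 * x[1]) + rng.normal(0.0, 1.0)

        y_hat = np.prod(alpha ** z) * (beta @ x)         # ASDM prediction, eq. (1)
        rel_err = (y - y_hat) / y_hat

        beta = beta + np.abs(beta) * (rel_err / D) * (x / xbar)          # eq. (3), reconstructed
        alpha = alpha + np.abs(alpha) * (rel_err / (n_qual * D)) * z     # eq. (4), reconstructed
        xbar = S0 * x + (1.0 - S0) * xbar                                # exponential smoothing

    print(np.round(beta, 2), np.round(alpha, 2))   # the class-2 multiplier should drift toward about 1.4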
                                                                3.  Applications

     The problem of attainment and maintenance of AQS's
is to determine values of control variables (e.g., fuel
quality, emission-control-device efficiencies, stack
heights, etc.) affecting the controllable component of
concentration, C_p(t), so that

     A_p(t) = C_p(t) + B_p(t) ≤ δ_p(t)   for all p in R             (5)

     where A_p(t) = total or ambient concentration aver-
                    aged over a period of specified
                    length (e.g., a year) ending at t,
           C_p(t) = controllable component,
           B_p(t) = background component, and
           δ_p(t) = the AQS.
In order to calculate estimates or forecasts of A_p(t)
and its components, it is necessary to average the ASDM
estimate, y(t), over the relevant period by calculating
simple averages, or by developing a joint probability
distribution of all explanatory variables for calculat-
ing expected values as done in [2] and [3].  Several
approaches have been used or proposed for constraints
(5); from maximum technically feasible controls [7], to
satisficing with some tradeoffs in cost versus air
quality achievement [8], and to cost-effectiveness [9],
[10].

     An estimate, A_p(t), of ambient concentration is
obtainable through samples at any p through the ASDM.
For remote regions, R_r, with no significant controll-
able sources of pollutants, it is reasonable to assume
that concentration is uniform over p but not t; i.e.,

     A_p(t) = A_p*(t)   for all p in R_r                            (6)

where p* is the location of any properly mounted moni-
                                                       479

-------
tor in R_r.  It is also common to assume that model (6) provides an estimate for background concentration occurring in developed or urban regions R_u (see [11]):

     B_q(t) = A_{p*}(t)   for all q in R_u                                        (7)
Thus the ASDM has direct use for tracking,  forecasting,
and diagnosing concentrations through models  (6)  and
(7) using concentration samples from p*.

     For attainment, the federal AQS's are a constant over p and t; i.e., \delta_p(t) = \delta.  However, proposals for maintenance of air quality have the following definition [12]:

     \delta_p(t) = A_p(1974) + \gamma_p(t)                                        (8)

where \gamma_p(t) is a specified series of allowable increments of degradation of air quality.  If the region is rural, then model (6) suffices for \delta_p(t).  If the set of urban monitoring sites is extensive or representative of air quality, then A_p(t) as modeled by the ASDM for all available monitoring sites may be sufficient for \delta_p(t).  Interpolation may be required between monitoring sites.
     Background concentration is often estimated as the concentration advected into R_u as sampled by "background monitors" located at the boundaries of R_u; see, for example, Pooler [13], Rubin and Bloom [14], and Samson et al. [15].  Here the ASDM extends the concept of a pollution rose (see Munn [16]), where background is identified by the wind directions corresponding to advection into R_u.  Sections 4 and 5 develop the ASDM background model in detail; for example, one model is

     B_p(t) = A_{d*}(t)   for all p in R_u                                        (9)

where d* represents a dummy site.  This "site" is a composite of all background monitor sites such that only samples from wind directions corresponding to advection are utilized.

     For program evaluation purposes, it is possible to estimate the controllable, or more accurately, the local component of concentration by difference:

     C_p(t) = A_p(t) - B_p(t)                                                    (10)

where A_p(t) and B_p(t) are estimated by the ASDM approach.  In this way, tracking and forecasting may reveal the effects of controls by separating out background.
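
A sketch of the bookkeeping implied by models (9) and (10) follows; the sector limits and concentration samples are hypothetical, and the composite dummy site d* is formed here simply by pooling background-sector samples from two boundary monitors.

    # Hedged sketch of models (9) and (10): form a composite background series
    # from samples whose wind direction corresponds to advection into the
    # region, then estimate the local component by difference.
    # Sector limits and data are illustrative only.

    def is_background_sector(wind_dir, sectors):
        """True if wind_dir (degrees) falls in any (lo, hi) background sector."""
        return any(lo <= wind_dir < hi for lo, hi in sectors)

    # (concentration, wind direction) samples at two boundary monitors
    monitor_1 = [(70.0, 350.0), (90.0, 200.0), (65.0, 40.0)]
    monitor_2 = [(60.0, 180.0), (85.0, 300.0), (72.0, 250.0)]

    # hypothetical advection sectors for each monitor
    sectors_1 = [(315.0, 360.0), (0.0, 135.0)]
    sectors_2 = [(135.0, 315.0)]

    # model (9): composite dummy site d* pools only background-sector samples
    pooled = [c for c, wd in monitor_1 if is_background_sector(wd, sectors_1)]
    pooled += [c for c, wd in monitor_2 if is_background_sector(wd, sectors_2)]
    b_estimate = sum(pooled) / len(pooled)

    # model (10): local (controllable) component by difference
    a_estimate = 88.0        # hypothetical ambient estimate at an urban site
    c_estimate = a_estimate - b_estimate
    print(f"B = {b_estimate:.1f}, C = A - B = {c_estimate:.1f} ug/m3")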

                  4.  A Demonstration

     We now focus our attention on an empirical  study
to demonstrate and test out the ASDM approach.   The
study is mainly directed at forecasting  and  tracking
changes in total suspended particulate (TSP) background
concentration over time in Allegheny County, Pennsyl-
vania.  It involves the analysis of some  800 observa-
tions of daily average particulate concentration re-
corded from 1970 to 1975 at two monitors  located at op-
posite boundaries of the county.  The location of the
two "background" monitors chosen and surrounding major
point sources is found in a map of the county presented
        in Figure 1.
             In addition to daily average particulate concen-
        tration monitored (by hi-vol) by the Allegheny County
        Bureau of Air Pollution Control, six variables were
        used  in the study to reflect meteorological  and struc-
        tural conditions.  They are the 24 hour resultant wind
        direction, average wind speed, precipitation,  inver-
        sion; and finally, two factors reflecting  general ac-
        tivity level, weekend or weekday and cooling or heating
        day.   Information on these various conditions  for each
        observation were obtained from the U.S. Department of
        Commerce,  Local"GliTuatblogl'cal data for the  Greater
        Pittsburgh Airport and the Denardo & McFarland Weather
        Services,  West Mifflin, Pennsylvania.

             The next task was to determine how to enter the
        variables into the ASDM structure.  Both inversion and wind speed were defined as quantitative aspects,
        whereas the remaining variables were considered qualitative.  Here, only a ground-level inversion of strength
        greater than 2°C determined an inversion, which corresponds to air-pollution-emergency "watch" conditions in
        Allegheny County.  Also, the inverse of average wind speed was the measure utilized.  Wind direction was
        broken down into eight classes (0-45 degrees, 45-90, and so on) and precipitation into four (no precipitation;
        low, 0-.10 inches; mild, .10-.35 inches; and heavy, .35 inches and over).
        Having identified the predictors and how to  incorporate
        them, the study then proceeded according to  the follow-
        ing steps.

        step 1:  A third input file, herein referred to as Mon-
                 itor III, containing only observations with
                 background wind directions (315-360 and 0-135
                 degrees for Monitor I and 135-315 degrees for
                 Monitor II) was created.

        step  2:  AEP was applied to the three sets of  input da-
                 ta in the following way.

                 a.  The initial value for all the parameters
                     was set equal to 1.
                 b.  A standard qualitative condition  was de-
                     fined as weekend, cooling day,  315-360 de-
                     grees resultant wind direction,  and no
                     precipitation.  This implies that the pa-
                     rameters associated with these  character-
                     istics were held equal to 1 when  running
                     the procedure.
                 c.  S_0 = .04 was used for updating mean values
                     for the quantitative aspects.
                 d.  A damping parameter of D = 50 was ini-
                     tially assumed and subsequently  readjusted
                                                                [Figure 1.  Major Particulate Sources and Background
                                                                Monitors in Allegheny County.  Legend:  fuel combustion;
                                                                industrial process; mass emission rate of large point
                                                                sources, Q tons/day.]
                                                        480

-------
             via a recycling procedure (see [6]).  The
             recycling of the observations was neces-
             sary here because of the small  number of
             observations over the years covered  (370
             for Monitor I, 410 for Monitor II, and 411
             for Monitor III), so as to converge  to ex-
             perienced patterns of change.
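
The recycling idea in step 2.d, passing a small data set through a damped update several times so that the estimate settles near the experienced level, can be illustrated with a deliberately simplified one-parameter example; the update rule below is a generic damped stochastic-approximation step and is not the AEP algorithm of [6].

    # Simplified illustration of recycling with a damped update: a single
    # level parameter is nudged toward each observation, and the small data
    # set is passed through many times ("recycled") so the estimate can
    # approach the experienced level.  This is a toy stand-in, not the AEP.

    observations = [70.0, 66.0, 74.0, 69.0, 71.0]   # hypothetical values
    estimate = 1.0                                  # initial parameter value
    D = 50.0                                        # damping parameter

    step = 0
    for cycle in range(200):                        # recycle the data 200 times
        for obs in observations:
            step += 1
            gain = 1.0 / (D + step)                 # damped gain shrinks over time
            estimate += gain * (obs - estimate)     # move toward the observation

    mean_obs = sum(observations) / len(observations)
    print(f"data mean = {mean_obs:.1f}, estimate after recycling = {estimate:.1f}")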

                      5.  Results

     Some descriptive measures of predictive perform-
ance of ASDM are presented in Table 1.  The measures

contained in the table are, first, average actual (AC)
and predicted (PC) concentration, and also their stan-
dard deviations, (SAC) and (SPC).  The table further
contains the root mean square prediction error (RMSPE),
the simple correlation coefficient (r) between pre-
dicted and observed values, the mean absolute percent-
age deviation (MAPD), and the serial correlation coef-
ficient (\rho) of a first-order autoregressive scheme.
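
For concreteness, these measures could be computed from paired predicted and observed series along the following lines; the short series is fabricated purely to exercise the formulas, and the serial correlation is taken here as the lag-one autocorrelation of the prediction errors, which is an assumption about the authors' definition.

    # Hedged sketch of the descriptive measures in Table 1, applied to a tiny
    # fabricated example.
    import math

    observed  = [86.0, 92.0, 75.0, 110.0, 60.0, 88.0]
    predicted = [90.0, 85.0, 80.0, 100.0, 66.0, 84.0]

    n = len(observed)
    errors = [o - p for o, p in zip(observed, predicted)]

    # root mean square prediction error
    rmspe = math.sqrt(sum(e * e for e in errors) / n)

    # simple correlation coefficient between predicted and observed values
    mean_o = sum(observed) / n
    mean_p = sum(predicted) / n
    cov   = sum((o - mean_o) * (p - mean_p) for o, p in zip(observed, predicted))
    var_o = sum((o - mean_o) ** 2 for o in observed)
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    r = cov / math.sqrt(var_o * var_p)

    # mean absolute percentage deviation
    mapd = 100.0 * sum(abs(e) / o for e, o in zip(errors, observed)) / n

    # lag-one autocorrelation of the prediction errors (assumed definition of rho)
    mean_e = sum(errors) / n
    num = sum((errors[t] - mean_e) * (errors[t - 1] - mean_e) for t in range(1, n))
    den = sum((e - mean_e) ** 2 for e in errors)
    rho = num / den

    print(f"RMSPE={rmspe:.2f}  r={r:.3f}  MAPD={mapd:.1f}%  rho={rho:.3f}")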

     By carefully examining Table 1, we observe that
the results are promising in terms of the RMSPE, r, and
MAPD;  and comparable to diffusion modeling results of
others (see for example, Slade [17]  p. 142, McCollister
and Wilson [18], and Bankoff and  Hanzevack [19]).   The
ASDM results show little or no detectable error in
central tendency; also, we note no evidence of first-or-
der serial correlation in our results for the three
monitors.   What may appear somewhat  disturbing is  that
the standard deviation of the predicted concentrations

                         Table 1

          Some Descriptive Measures of ASDM Performances

                             Monitors
                      I          II         III
      AC            86.81      67.35      65.74
      PC            88.10      69.64      66.95
      SAC           43.34      36.26      35.68
      SPC           32.98      21.94      18.83
      RMSPE         33.49      32.65      32.64
      r              .6458      .4622      .4194
      MAPD          34.33      33.97      35.77
      \rho          -.124       .020      -.001
 is consistently smaller than for the observed values.
 However, it can easily be argued that because of the
 expected large measurement errors present in TSP data,
 the spread of true concentration should be smaller than
 that of observed concentration.  The AEP is specifical-
 ly designed to track true measurements (see [6]) rather
 than observed values.

      Table 2 presents ASDM parameter estimates for Mon-
 itor I at two points in time—the start and end of the
 five year study period.  Attention is focused on this
 monitor since it has had the greatest degree of time-
 variation in the parameter estimates.  The values in
 Table 2 give results as theoretically expected;  for ex-
 ample,  the results reveal that concentration on a week-
 day is  about 24% greater than on weekends at the begin-
 ning of the period and 12% at the end which provides
 some measure of the general  effectiveness of control
 policies;  that a low precipitation level  reduces con-
 centration by approximately  12% in contrast to no pre-
 cipitation over the period;  that a resultant wind di-
 rection from the major point sources  (225-270  degrees)
 located next to the monitor  (see Figure 1)  leads to
 about 79% greater concentration than the standard di-
 rection assumed, which is of a background nature; and
 that total concentration given the standard qualitative
 condition assumed for no-inversion-high-wind-speed days
 decreased from 68 µg/m3 to 50 µg/m3.

      A value of 35 µg/m3 annual geometric mean for TSP
 background was assumed for Allegheny County in the 1971
 state implementation plan.  More recently, the U.S. EPA
 assumed a 1972 value of 30 µg/m3 falling to 20 µg/m3
 between 1975 and 1980; and Rubin and Bloom [14] esti-
 mated 45 to 50 µg/m3 based on a pollution rose of
 Monitor II (1971-72).  Thus, TSP background has been
 found to be a significantly large component of ambient
 TSP air quality in Allegheny County.  Our estimates
 show, however, that the background component is even
 larger than in the previous estimates.

     Figure  2 presents  two plots of expected total con-
 centration using Monitor III weights  (background moni-
tor) .  The plots presented illustrate the acute TSP
background problem in Allegheny  County:  under the con-
                                                      Table 2
                               ASDM Parameter Estimates for Monitor I:  Start and End of Period

                               Percent or       Parameter Estimate
  Variables                    Mean Value        Start        End
  Degree Day
    Heating                      28.31           0.859       0.842
    Cooling                      71.89           1.000       1.000
  Day
    Weekend                      26.22           1.000       1.000
    Weekday                      73.78           1.249       1.127
  Precipitation
    None                         43.51           1.000       1.000
    Low                          20.27           0.881       0.855
    Mild                         15.95           0.841       0.813
    Heavy                        10.27           0.926       0.907
  Wind Direction
    0 - 45                        4.05           0.760       0.744
    45 - 90                       5.41           0.931       0.931
    90 - 135                      8.65           0.845       0.853
    135 - 180                     7.57           1.342       1.259
    180 - 225                    10.27           1.539       1.576
    225 - 270                    34.32           1.796       1.773
    270 - 315                    17.30           1.410       1.328
    315 - 360                    12.43           1.000       1.000
  Wind Speed^-1                   0.12          53.08       46.79
  Inversion                      30.00           7.98       18.57
  Constant                        1.00          67.59       49.91
                                                      481

-------
          [Figure 2.  ASDM time series of TSP concentration for Monitor III and the
           conditions:  cooling, weekday, no precipitation, no inversion, and
           10 miles/hour wind speed.  Panels 2(a) and 2(b) correspond to different
           resultant wind directions; 2(b) is for wind direction from 225-270 degrees.
           (Horizontal axis:  observation serial number.)]
ditions specified, there is a nearly constant trend of
high background concentration.  This observation is
consistent with the fact that the updated mean TSP
background over all conditions, computed via an exponen-
tial smoothing scheme, varied from 75 µg/m3 at the be-
ginning of the period to 68 µg/m3 at the end.  It ap-
pears, from current evidence, that attainment of the
federal secondary annual TSP standard of 60 µg/m3 is
impossible regardless of local control policies.
                    6.  Future Work

     It is desirable to model a pollutant sampled more
frequently and with a shorter averaging time than the
case in this paper.  It appears that a 24-hour averag-
ing time provides overly aggregated data with respect
to variation in the underlying phenomenon.  In future
work, it is also desirable to further investigate the
functional form used in the ASDM, e.g., as to which
variables should be multiplicative, transformed, etc.
Finally, we wish to pursue the relationship of adaptive
modeling to the decision framework of  the  clean air
program, and to determine the requirements of the de-
cision maker.
                      References

 1.  Rhoads, R.G., "The Nationwide Program for Mainte-
     nance of Air Quality," JAPCA, Vol. 25 (1975), pp.
     1203-1206.

 2.  National Air Pollution Control Administration, Air
     Quality Display Model, U.S. Public Health Service,
     PH-22-68-60, 1969.

 3.  Busse, A.D. and J.R. Zimmerman, User's Guide for
     the Climatological Dispersion Model, NTIS Report
     No. EPA-R4-73-024, Wash., D.C., 1973.

 4.  Gifford, F.A. and S.R. Hanna, "Modeling Urban Air
     Pollution," Atmos. Envir., Vol. 8, pp. 555-561.
 5.  Carbone, R.,  "The Design of an Automated Mass-Ap-
     praisal  System Using Feedback," unpublished Ph.D.
     dissertation,  Carnegie-Mellon University,  1975.

 6.  Carbone, R. and R.L. Longini, "An Adaptive Sto-
     chastic  Approximation Algorithm for Estimating
     Time-Varying  Parameters," Administrative Science,
     Ohio State  Univ.  Paper WPS 75-56 (1975).

 7.  Sussman, V.H.,  "A Critique:  New Priorities in Air
     Pollution Control," JAPCA, Vol. 21 (1971), pp.
     201-203.

 8.  Dunlap,  R.W.,  W.L. Gorr, and M.J. Massey,  "Desul-
     furization  of Coke Oven Gas:  Technology,  Econo-
     mics and Regulatory Activity," in J. Szekely [ed.]
     The Steel Industry and the Environment, Marcel
     Dekker,  N.Y.,  1973.

 9.  Kohn, R.E., "Application of Linear Programming to
     a Controversy on Air Pollution Control," Manage-
     ment Science, Vol. 17 (1971), pp. 609-621.

10.  Gorr, W.L., S.A.  Gustafson, and K.O. Kortanek,
     "Optimal Control  Strategies for Air Quality Stand-
     ards and Regulatory Policy," Environment and Plan-
     ning, Vol. 4 (1972), pp. 183-192.

11.  Environmental  Protection Agency, "Guidelines for
     Air Quality Maintenance Planning and Analysis Vol-
     ume 12," EPA-450/4-74-013, Research Triangle Park,
     N.C., 1974.

12.  Federal  Register,  "Maintenance of National Ambient
     Air Quality Standards," Vol. 40 (203):  49048
     (Oct. 20, 1975).

13.  Pooler,  F.  Jr.,  "Network Requirements for the St.
     Louis Regional  Air Pollution Study," JAPCA, Vol.
     24 (1974),  pp.  228-231.

14.  Rubin, E.S. and H.T. Bloom, "Maintenance of Ambi-
     ent Particulate Standards in an Industrialized Re-
     gion," 68th Annual Meeting of the Air Pollution
     Control  Association, Boston, Mass.   (June,  1975).

15.  Samson,  P.J.,  G.  Neighmond, and A.J. Yencha, "The
     Transport of  Suspended Particulates as a Function
     of Wind  Direction and Atmospheric Conditions,"
     JAPCA, Vol. 25 (1975), pp. 1232-1237.

16.  Munn, R.E., Biometeorological Methods, Academic
     Press, N.Y.,  1970.

17.  Islitzer, N.F.  and D.H. Slade, "Diffusion and
     Transport Experiments," in D.H. Slade [ed.] Meteo-
     rology and Atomic Energy 1968, U.S. Atomic Energy
     Commission, TID-24190, Springfield, Va., 1968.

18.  McCollister,  G.M.  and K.R. Wilson,  "Linear Sto-
     chastic  Models  for Forecasting Daily Maxima and
     Hourly Concentrations of Air Pollutants," Atmos.
     Envir., Vol.  9  (1975), pp. 417-423.

19.  Bankoff, S.G.  and E.L. Hanzevack, "The Adaptive-
     Filtering Transport Model for Prediction and Con-
     trol of  Pollutant Concentration in an Urban Air-
     shed," Atmos. Envir., Vol. 9 (1975), pp. 793-808.
                                                        482

-------
                                   SOURCE-ORIENTED EMPIRICAL AIR QUALITY MODELS
                Kenneth  L.  Calder
          Environmental  Protection Agency
     Environmental  Sciences Research Laboratory
           Research Triangle Park, N.C.
                   William  S. Meisel
            Technology Service Corporation
                  Santa Monica,  CA. 90403
ABSTRACT

Meteorological  dispersion functions in multiple-source
simulation  models  for urban air quality are usually
specified on  the basis of the analysis of data from
special field experiments, usually involving isolated
sources.   In  the urban environment, individual sources
cannot be isolated.   One may, however, ask for a source-
receptor  relationship which, when summed (or integrated)
over all  the  sources, would minimize the average squared
error in  prediction  of measured values.  The feasibility
of this approach is  demonstrated by application to model-
generated data, where the source-receptor relationship
is known.

INTRODUCTION

In commenting on the lack of acceptance of empirical/
statistical models in air quality modeling in 1973, one
of the authors called attention to "the historical be-
lief that air quality models based on statistical re-
gression  type of analysis are not source-oriented and,
therefore,  are largely useless for control strategy in
terms of  the  contribution of individual sources to the
degradation of air quality" [1].  He went on to ask
"whether, with an  appropriate analysis, a source-
oriented  statistical-type of air quality model could
be developed  which did not involve prior specification
of meteorological  dispersion functions per se and in-
corporation of these as in present air quality models.
My thought  here is that for given 'meteorological con-
ditions'  these dispersion functions play the role of
transfer  functions between the air quality distribution
and the distribution of pollutant emissions, and if one
were smart  enough  might, therefore, conceivably be ob-
tained empirically by a mathematical inversion tech-
nique (as,  for example, by numerical solution of sets
of integral equations) utilizing accumulated data on
the distributions  of air quality and emissions.  If this
could be  accomplished then maybe a major shortcoming of
the current statistical models could be removed and we
should then in effect have an alternative to the custo-
mary meteorological-dispersion type of modeling."  These
comments  suggest the motivation for the study reported[2].

 The  difficulties  in  developing a source-oriented  empi-
 rical model can be  stated  from a statistical  point  of
 view.  The spatial  distribution of  pollutant  concentra-
 tions over a region  is determined by  emissions and
 meteorological conditions.  The number of variables
 determining the concentration at a  given point is
 tremendous, particularly since emissions arise from a
 large number of point  sources and area sources.   Con-
 sequently  the number of emission variables alone  can
 easily be  in  the hundreds.  If an empirical model were
 to be developed in  the most obvious manner, there
 should be an  attempt to relate the  pollutant  concentra-
 tion at  a given point  to all the possible emission
 variables and meteorological variables affecting  the
 *
  This work was supported in part by Contract No.
  68-02-1704 with the Environmental Protection Agency.
 concentration at that point.  Since the determination
 of the relationship between emission/meteorological
 variables and concentration requires examples of that
 relationship over a very wide range of emission and
 meteorological variables, a tremendous amount of data
 would be required to adequately determine this relation-
 ship.

 If we could, however, isolate a given emissions source
 and we had a number of receptor locations scattered
 about the source, the variation in wind speed and di-
 rection would cause a wide variation in measured con-
 centration at the receptor locations.  With enough
 examples of the source-receptor relationship, the varia-
 tion of the concentration with distance from the point
 source could be determined empirically.

 In the urban environment, of course, individual sources
 cannot be isolated.  Measurements are the result of
 contributions from a number of sources.  However, be-
 cause of the wide diversity of meteorological conditions,
 the concentration will vary widely at a given point,
 and the sources which contribute to the concentration
 at that point will similarly vary.  One may then ask for
 a consistent source-receptor relationship which, when
 summed (or integrated) over all the sources, would
 explain best on the average the observed concentrations.
 More specifically, one could choose the source-receptor
 function which minimized the average squared error in
 prediction of the measured values.  This concept is the
 core of the ideas tested.

 The data used to test these ideas are model-created
 data.  Model data were chosen for three major reasons:

 1.  With model data, the source-receptor function is
 known and can be compared with the function extracted
 from the data.  With measurement data, "truth" is un-
 known.

 2.  Area sources and point sources can be isolated and
 studied separately as well as jointly.

 3.  The cost of verifying and organizing measurement
 data would have been beyond the scope of the present
 study.

 MATHEMATICAL FORMULATION

  We work with  a  rectangular  coordinate system with  x-axis
  along  the mean  horizontal wind direction,  with  y-axis
  crosswind,  and  with  the  z-axis vertical.   Then  in  urban
  air quality models it is customary to consider the pollutant emissions in terms
  of a limited number (say L) of elevated point-sources together with horizontal
  area-sources, the latter being possibly located at a few distinct heights \zeta_s
  (say, for example, for s = 1, 2, 3).  The total concentration \chi(x,y,0) at
  ground level at the receptor location (x,y,0) will be the sum of the concentration
  contribution from the point-source distribution, say \chi_P(x,y,0), and that from
  the area-source distribution \chi_A(x,y,0), i.e.,
                                                        483

-------
     \chi(x,y,0) = \chi_P(x,y,0) + \chi_A(x,y,0)                                                    (1)

where

     \chi_P(x,y,0) = \sum_{\ell=1}^{L} Q_P(\ell) K(x - \xi_\ell, y - \eta_\ell; 0, \zeta_\ell)       (2)

     \chi_A(x,y,0) = \sum_{s=1}^{3} \iint_A Q_A(\xi, \eta, \zeta_s) K(x - \xi, y - \eta; 0, \zeta_s) d\xi d\eta    (3)

and

     Q_P(\ell)             = emission rate of the \ell-th elevated point-source,
                             located at position (\xi_\ell, \eta_\ell, \zeta_\ell),

     Q_A(\xi, \eta, \zeta_s) = emission rate of the horizontal area-source
                             distribution located at height \zeta_s, with A denoting
                             the total integration domain of the area-source
                             distribution, and

     K(x-\xi, y-\eta; 0, \zeta) = source-receptor function; it gives the ground-level
                             concentration at the receptor location (x,y,0)
                             resulting from a point-source of unit strength
                             at (\xi, \eta, \zeta).

Note that this formulation includes the assumption of
horizontal homogeneity, namely, that the impact  of a
given source upon a given receptor depends only  upon
their relative and not absolute coordinates.  This
assumption is true for an urban environment only in
an average sense.  A single wind direction is simi-
larly valid only in an average sense.  Finally,  it
should be noted that the above formulation assumes
steady-state conditions and is thus only applicable
for relatively short time periods (of the order  of one
hour), when this may be an adequate approximation
providing the emissions and meteorological conditions
are not rapidly changing.

In equations (2) and (3) above it is convenient to use "source-oriented" position
coordinates, and to consider a typical ground-level receptor location, say
(x_i, y_i, 0).  Let

     x' = x_i - \xi ,     dx' = -d\xi ,
     y' = y_i - \eta ,    dy' = -d\eta .                                                            (4)

Then

     \chi_P(x_i, y_i, 0) = \sum_{\ell=1}^{L} Q_P(\ell) K(x_i - \xi_\ell, y_i - \eta_\ell; 0, \zeta_\ell)     (5)

     \chi_A(x_i, y_i, 0) = \sum_{s=1}^{3} \iint Q_A(x_i - x', y_i - y', \zeta_s) K(x', y'; 0, \zeta_s) dx' dy'    (6)
In the following, several different source-receptor functions [K(x', y'; 0, \zeta)]
will be considered, including the classical Gaussian form that is the basis for the
RAM-model [3].  For the latter, and with the meteorological condition of infinite
mixing depth,

     K(x', y'; 0, \zeta) = [1/(\pi U \sigma_y(x') \sigma_z(x'))] \exp[-y'^2/(2\sigma_y^2(x'))] \exp[-\zeta^2/(2\sigma_z^2(x'))]     (7a)

where U denotes the mean wind speed, and we assume simple power-law dependencies
for the standard deviation functions, say

     \sigma_y(x') = a_y (x')^{b_y}                                                                  (7b)

     \sigma_z(x') = a_z (x')^{b_z} .                                                                (7c)
                                                      Also, as in the RAM-model we will assume that the
                                                      narrow-plume hypothesis may be employed in order to
                                                      reduce the double integral of equation (6) to a one-
                                                      dimensional integral.  Thus, under this hypothesis, if
     \int_{-\infty}^{\infty} K(x', y'; 0, \zeta_s) dy' = G(x', \zeta_s),                            (8)

then in place of equation (6) we have

     \chi_A(x_i, y_i, 0) = \sum_{s=1}^{3} \int Q_A(x_i - x', y_i, \zeta_s) G(x', \zeta_s) dx'       (9)

which only involves values of the area-source emission rates in the vertical plane
through the wind direction and the receptor location.

For the special case of a Gaussian plume,

     G(x', \zeta_s) = (2/\pi)^{1/2} [1/(U \sigma_z(x'))] \exp[-\zeta_s^2/(2\sigma_z^2(x'))] .       (10)
The basic equations (5) and (6) (or (5) and (9)), with the Gaussian forms for
K(x', y'; 0, \zeta) and G(x', \zeta), involve four unspecified parameters through
the equations (7b) and (7c), namely, a_y, b_y, a_z, and b_z.  More generally, any
functional form chosen for K (and therefore G) may have unspecified parameters; we
will denote the set of unspecified parameters by the vector \alpha.  Thus for the
special Gaussian form

     \alpha = (a_y, b_y, a_z, b_z) .                                                                (11)

The explicit dependence of the calculated concentration values on these parameters
could be indicated by the notation \chi(x_i, y_i, 0; \alpha).

The basic method employed in this study is that of
choosing \alpha to minimize the error between calculated

-------
and observed values of concentrations.  In order to express this statement
formally, we must elaborate our notation to indicate explicitly the dependence on
wind direction; thus \chi(x_i, y_i, 0; \theta; \alpha).  For each wind direction
\theta_j (j = 1, 2, ..., R) there is a concentration observation for each receptor
location (monitoring station).  The receptor locations are denoted (x_i, y_i) for
i = 1, 2, ..., N, and are assumed to be at ground level so that we may omit the
symbol 0 in the \chi-notation.  Then the mean square error over all observations is

     e^2(\alpha) = (1/NR) \sum_{i=1}^{N} \sum_{j=1}^{R} [\chi_{obs}(x_i, y_i; \theta_j) - \chi_{calc}(x_i, y_i; \theta_j; \alpha)]^2     (12)

where the \chi_{calc} values are given by Eqs. (5) and (6) (or (5) and (9)).
 The problem of minimizing e^2 with respect to \alpha is a
 standard optimization problem.  Chambers provides a
 good recent survey of available techniques[4].  The
 particular technique we employed was "structured ran-
 dom search" [5]; this is a rather inefficient technique,
 but one which does not require calculation of deriva-
 tives and which converges under difficult conditions
 (given enough time).  This technique's main advantage
 was that we could modify the form of the source-
 receptor function without modifying the search tech-
 nique.  The results of applying this methodology to the
 best data are discussed following; however, we first
 turn to a description of the test data.
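
The overall fitting loop can be sketched as follows; a plain keep-if-better random perturbation search stands in here for the structured random search of [5], and the synthetic one-source test problem is invented solely to make the sketch self-contained.

    # Hedged sketch of the fitting idea: generate synthetic "observed" values
    # from known (a_z, b_z), then recover them by randomly perturbing parameter
    # guesses and keeping any change that lowers the mean-square error.
    import math, random

    def G(xp, zeta, U, az, bz):
        sz = az * xp ** bz
        return math.sqrt(2.0 / math.pi) * math.exp(-zeta**2 / (2.0 * sz**2)) / (U * sz)

    # Synthetic single-source problem: receptors at several downwind distances.
    U, zeta = 5.0, 10.0
    xs = [200.0, 500.0, 1000.0, 2000.0, 4000.0]
    true = (0.038, 0.76)
    observed = [G(x, zeta, U, *true) for x in xs]

    def mse(params):
        return sum((G(x, zeta, U, *params) - obs) ** 2
                   for x, obs in zip(xs, observed)) / len(xs)

    random.seed(0)
    best = (0.1, 1.0)                       # initial guess
    best_err = mse(best)
    for _ in range(5000):
        trial = (best[0] * (1.0 + random.uniform(-0.05, 0.05)),
                 best[1] * (1.0 + random.uniform(-0.05, 0.05)))
        err = mse(trial)
        if err < best_err:                  # keep only improving moves
            best, best_err = trial, err
    print("best a_z, b_z found:", tuple(round(v, 3) for v in best), "  true:", true)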

 TEST DATA

 For a realistic distribution of point-sources, area-
 sources and receptor locations, use was made of unpub-
 lished information from a 1968 air pollution study
 conducted in St. Louis, Missouri [6].  The area sources
 were gridded into over 600 square regions; there were
 60 point sources and errors were calculated at 40 re-
 ceptors for the 16 wind directions.  (See Reference [1]
 for more details.)  The corresponding concentration
 data were generated by the EPA-developed RAM algo-
 rithm[3], which is a specific implementation of the
 classical Gaussian plume formulation, that considers
 both point- and area-sources, with three possible
 heights for the latter, and which uses the "narrow-
 plume" hypothesis (i.e., Eq. (9)) to calculate the
 area-source concentration contribution x/\.  A constant
 wind speed U of 5 meters per second was employed, and
 sixteen wind directions at the points of the compass
 were simulated.  Infinite mixing depth and a neutral
 atmospheric stability category were assumed.  For the
 latter, in Eqs. (7b) and (7c), we have

     a_y = 0.072,   b_y = 0.90,   a_z = 0.038,   b_z = 0.76 .                     (13)
  "Observed" in the present case is model-created test
 data; the technique is, of course, intended for prac-
 tical use on measured data.
                                                           For this  data,  these values and the indicated equations
                                                           are optimal  and would produce zero mean-square error.
                                                           It is  this  result we hope to be able to recover from
                                                           the data  by the optimization procedure.

                                                           OPTIMIZING  PARAMETERS FOR THE GAUSSIAN FORM OF THE
                                                           SOURCE-RECEPTOR FUNCTION

                                                           The data  base described earlier contains concentration
                                                           values at forty receptors and sixteen wind directions,
                                                           a total  of  640  values (referred to as "actual" values).
                                                           The contribution to the concentration from point and
                                                           area sources was available separately, as well as in
                                                           toto.

                                                           Equations (5) and (7a) provide a prediction of the
                                                           point-source pollutant concentration at any given re-
                                                           ceptor location once the four parameters are specified.
                                                           A comparison of values predicted by these equations ver-
                                                           sus actual  values allows calculation of the root-mean-
                                                           square value of the error with a given choice of param-
                                                           eter values.  (See Eq. (12), with area sources at zero.)

With initial guesses of a_y = a_z = 0.1 and b_y = b_z = 1.0, the
search routine described arrived at values of

     a_y = 0.074,   b_y = 0.92,   a_z = 0.039,   b_z = 0.77

when the "true" values (those used to create the data) were

     a_y = 0.072,   b_y = 0.90,   a_z = 0.038,   b_z = 0.76 .
The root-mean-square (RMS) error initially was 157 µg/m3
and the maximum error over the 640 values was 1205 µg/m3;
the parameter values after 100 iterations yielded an
RMS error of 14 µg/m3 and a maximum error of 175 µg/m3.
(Table 1 summarizes these results.)  To place the size
of the final error in perspective, we note that the
actual values (due to point sources alone) were as high
as 1545 µg/m3.

Employing Eq. (9) for area sources and using only the
area-source contribution in the "actual" data, we get
similarly promising results (Table 2).  Actual values
of concentrations due to area sources reach maximums of
over 800 µg/m3.

The results of treating point and area sources simul-
taneously, representative of the case which would be
encountered with measurement data, are listed in
Table 3; the algorithm once again closely approaches
the optimum values in 100 iterations.  Actual values
of the total concentrations from both point and area
sources go above 1600 µg/m3.

While the initial parameter values we chose in these
cases converged toward the values used in creating the
data, experimentation indicated that this was not al-
ways the case.  Small RMS errors could be achieved with
combinations of parameters significantly different in
value from those used in creating the data.  As indi-
cated in Figure 1, rather different combinations of a
and b yield very similar values of ax° over the range
of x in which we are interested.  It is clear that an
essentially equivalent combination of values should
not be deemed erroneous, since they yield an accurate
empirical model.  We regard this a characteristic of
the formulation chosen for calculating a and do not re-
gard it a difficulty of the methodology proposed.  Fur-
ther, in practice, initial values for the parameters
would be chosen from the literature, and the solution
obtained would be a set of values similar to the ini-
tial values, but which minimized the prediction error.
                                                       485

-------
Table 1.  Point sources only; parameter values at initial, mid, and final iteration
          during search.  (Windspeed is fixed at 5.0 m/sec.)

                                                        RMS Error    Max. Error
  Iteration        a_y       b_y       a_z      b_z      (µg/m3)      (µg/m3)
  0 (initial)     0.100      1.00     0.100     1.00        157         1205
  50 (mid)        0.049      0.85     0.050     0.71         85          711
  100 (final)     0.074      0.92     0.039     0.77         14          175
  ACTUAL
  VALUES:        (0.072)    (0.90)   (0.038)   (0.76)        (0)          (0)
Table 2.  Area sources only; parameter values at initial, mid, and final iteration
          during search.  (Windspeed is fixed at 5.0 m/sec.  Values a_y and b_y do
          not affect area source values.)

                                         RMS Error    Max. Error
  Iteration        a_z       b_z          (µg/m3)      (µg/m3)
  0 (initial)     0.100      1.00           157          1205
  50 (mid)         .028      0.89            15            69
  100 (final)      .037      0.79             6            24
  ACTUAL
  VALUES:        (0.038)    (0.76)           (0)           (0)
Table 3.  Point and area sources together; parameter values at initial, mid, and final
          iteration during search.  (Windspeed is fixed at 5.0 m/sec.)

                                                        Both Point and Area Sources
                                                         RMS Error    Max. Error
  Iteration        a_y       b_y       a_z      b_z       (µg/m3)      (µg/m3)
  0 (initial)     0.100      1.00     0.100     1.00         157         1205
  50 (mid)        0.055      0.79     0.044     0.67          79          583
  100 (final)     0.074      0.89     0.036     0.74          24          194
  ACTUAL
  VALUES:        (0.072)    (0.90)   (0.038)   (0.76)         (0)          (0)


            Figure 1.  Plot of a x^b for several values of a and b.  (The variable x
                       is plotted on a log scale.  Curves shown include
                       \sigma = .038 x^0.76, \sigma = .028 x^0.89, and \sigma = .044 x^0.67.)

                                           486

-------
This aspect of implementation also suggests that a
good initial guess would be employed and, thus, that
convergence to an "optimum" solution would be rapid.

Forty receptors (i.e., air quality monitoring stations)
are more than are available in many monitoring systems.
How many stations are required for this methodology to
be effective?  The answer to this question is heavily
dependent on the number and distribution of sources,
but the indications from experiments with our test data
suggest that a considerably smaller number of stations
may suffice.  Table 4 indicates errors due to changes
in parameter values, one at a time, from the optimum
values, for a selection of the individual stations.
The errors are sufficiently large that one would expect
that optimum parameter values could be extracted from
a small number of stations at well-chosen locations.
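
A sensitivity check of the kind summarized in Table 4 could be scripted along these lines; the single point source and the receptor positions are invented for illustration and do not reproduce the St. Louis configuration, although the parameter perturbations mirror the From/To values of Table 4.

    # Hedged sketch of a Table-4-style sensitivity check: perturb one parameter
    # at a time from its nominal value and report RMS and maximum changes in the
    # predicted concentrations (per unit emission rate) at a few receptors.
    import math

    def point_conc(x, y, zeta, U, ay, by, az, bz):
        """Gaussian ground-level kernel (eq. 7a) for a unit-strength source."""
        sy, sz = ay * x ** by, az * x ** bz
        return (math.exp(-y**2 / (2*sy**2)) * math.exp(-zeta**2 / (2*sz**2))
                / (math.pi * U * sy * sz))

    receptors = [(500.0, 0.0), (1000.0, 50.0), (2000.0, 100.0), (4000.0, 0.0)]
    nominal = dict(zeta=20.0, U=5.0, ay=0.072, by=0.90, az=0.038, bz=0.76)
    changes = {"ay": 0.079, "by": 0.99, "az": 0.042, "bz": 0.836, "U": 5.2}

    base = [point_conc(x, y, **nominal) for x, y in receptors]
    for name, new_value in changes.items():
        perturbed = dict(nominal, **{name: new_value})
        diffs = [abs(point_conc(x, y, **perturbed) - b)
                 for (x, y), b in zip(receptors, base)]
        rms = math.sqrt(sum(d*d for d in diffs) / len(diffs))
        print(f"{name}: RMS change = {rms:.2e}, max change = {max(diffs):.2e}")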


MORE GENERAL SOURCE-RECEPTOR FUNCTIONS

More complex source-receptor functions (such as multi-
variate polynomials and piecewise quadratic functions)
were tested with success [1], but broad conclusions
about alternative forms will not be forthcoming through
the analysis of the present test data.  Analysis of
measurement data may allow meaningful comparison of
the Gaussian and more general parameterized forms.

CONCLUSION

A methodology for empirically testing alternative forms
 and extracting optimal parameters for source-receptor
 dispersion functions has been described.  Feasibility
was demonstrated on data for which the "true" source-
 receptor function was known; the methodology recovered
 parameter values very close to true values.  This
 approach shows promise as a means for calibrating
 Gaussian-form models for particular urban environments
 and in  testing alternative forms.
REFERENCES

1. Calder, Kenneth L., Quoted by Niels Busch in the
   proceedings of the fourth meeting of the NATO/CCMS
   Panel on Air Pollution Modeling, from a letter
   written in March 1973.
2. Calder, Kenneth L., W. S. Meisel, and M. D. Teener,
   "Feasibility Study of a Source-Oriented Empirical
   Air Quality Model", (Part II of "Empirical Tech-
   niques for Analyzing Air Quality and Meteorological
   Data"), Final Report on EPA Contract No. 68-02-1704,
   December 1975.

3. Hrenko, Joan M. and D. B. Turner, "An Efficient
   Gaussian-Plume Multiple Source Air Quality
   Algorithm", Paper 75-04.3, 68th Annual  APC Meeting,
   Boston, June 1975.

4. Chambers, John M., "Fitting Nonlinear Models:
   Numerical Techniques," Biometrika, Vol. 60, No. 1,
   1973, pp. 1-13.
5. Meisel, W. S., Computer-Oriented Approaches to
   Pattern Recognition, Academic Press, New York,
   1972, pp. 51-53.
6. Turner, D. B. and N. G. Edmisten, Unpublished manu-
   script of National Air Pollution Control Adminis-
   tration, "St. Louis SO2 Dispersion Model Study,"
   November 1968.
       Table 4.  Sensitivity Analysis.  Root-mean-square and maximum error due to change in each parameter
                 from nominal values at selected receptors.  Concentrations are from both point and area
                 sources.

                                      Error (in µg/m3) at Selected Receptors                              Error for All
              Change           1         10         13         21         22         24         35        40 Receptors
 Parameter   From   To      RMS MAX    RMS MAX    RMS MAX    RMS MAX    RMS MAX    RMS MAX    RMS MAX       RMS MAX
 a_y         .072  .079      9   32    67  260    17   46    10   32    11   33    14   52    44  175       16  175
 b_y         .90   .99      15   59    23   63    22   69    20   62    20   66    10   31    46  175       24  234
 a_z         .038  .042     24   49    32  102    24   45    27   51    16   52    33  109    47  177       27  151
 b_z         .76   .836     32   66    44  111    29   62    37   80    23   69    27   65    52  178       35  230
 U           5.0   5.2      15   25    18   30    19   44    18   33    16   39    14   37    45  177       18  177
                                                        487

-------
                                         EPA FLUID MODELING FACILITY
                                   Roger S.  Thompson and William H. Snyder*
                                      Meteorology and Assessment Division
                                  Environmental  Sciences Research Laboratory
                                     U.S.  Environmental  Protection Agency
                                         Research Triangle Park, N.C.
     A meteorological  wind tunnel  and a  water channel-
towing tank as  used by EPA scientists for laboratory
studies of air  pollution  dispersion  in the vicinity
of buildings and over  complex terrain are described.
In these fluid  modeling studies,  simulated atmospheric
boundary layer  flow patterns  are  created over special-
ly constructed scale models of structures or geographic
areas.  Modeling theory provides  similarity criteria
to ensure that  flow behavior  in the  model simulates
real  processes  in the  atmosphere.  Visualization  of
dispersion from modeled emission  sources is obtained
by releasing an oil fog in the wind  tunnel and dye
in the water channel.   Simulated  pollutant concentra-
tion  levels are determined by emitting a tracer,
which is sampled and measured at  locations of interest
around the model site.  Neutral atmospheric conditions
are modeled in  the wind tunnel and in the water
channel-towing  tank in the recirculating mode of
operation.  Dispersion under  thermally stratified
atmospheric conditions is modeled by filling the
water channel-towing tank with stratified layers  of
salt  water and  towing  models  through the motionless
fluid.  Results of some recent projects  are presented
as examples of  the types  of information  gained at the
Fluid Modeling  Facility.

*On assignment from the National Oceanic and Atmospheric Administration.
                    Introduction

     In-house research in fluid modeling of atmos-
pheric dispersion is conducted at the Environmental
Protection Agency's Fluid Modeling Facility (FMF)
located in Research Triangle Park, N.C.  The  FMF,
which is a part of the Meteorology and Assessment
Division of the Environmental Sciences Research
Laboratory, opened in June 1974 with the installa-
tion - including instrumentation, shop equipment, and
a minicomputer - of the meteorological wind tunnel
(Figure 1).  In a major expansion of the facility,
the installation of a water channel-towing tank
(Figure 2) was completed in December 1975.
     Fluid modeling involves placing a scale model
of a topographic region or an urban area, for example,
in a moving fluid to simulate meteorological effects
at the site.  In a towing tank, the model is moved
through motionless fluid.  By following scaling laws,
full scale atmospheric flow can be accurately simu-
lated in the laboratory.  In addition, quantitative
measurement of the concentrations of a tracer at
various points in the diffusion field over the model
can be used to provide estimates of pollutant con-
centrations in real (full scale) situations.
                                  Figure 1.  EPA meteorological wind tunnel.
                                                      488

-------
                                  Figure  2.   EPA water channel-towing tank.
      Theoretical  and  Practical  Considerations

     Discussions of  similarity  considerations appli-
cable to fluid modeling  have  been  presented in  the

literature    and  will not  be repeated  here.  Compro-
mises often must be  made because all  similarity cri-
teria can not be satisfied  simultaneously.   It  is
evident that each  laboratory  has its  own  set of cri-
teria, which may differ  or  even  conflict  with those
of another laboratory.   Also, other aspects of  model-
ing,  such as the minimum Reynolds  number  limit  for
similarity of plume  rise, have  not been completely
established.  A primary  goal  of the FMF,  therefore,
is to test the limits, determine the  proper similar-
ity criteria, and  set  the standards for fluid model-
ing of atmospheric dispersion.
     Both air and water are suitable fluids to use
as media for modeling atmospheric dispersion.  In
principle, a factor  of 15 in  the Reynolds number may
be gained by modeling  with  water as the medium.  How-
ever, because of structural and  pumping requirements,
water facilities are normally much smaller  and  run
at much lower speeds than wind  tunnels.   Thus,  the
full  potential for obtaining  larger Reynolds numbers
using water flows  is seldom realized.
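
The factor of about 15 quoted above follows from the ratio of kinematic viscosities, since Re = UL/ν for a given model size and speed; the viscosity values in the short check below are typical room-temperature figures and are assumptions rather than numbers from this paper.

    # Reynolds number Re = U * L / nu, so for the same model size and speed the
    # gain from using water instead of air is nu_air / nu_water.  Viscosities
    # below are typical room-temperature values, not values from the paper.
    nu_air   = 1.5e-5   # kinematic viscosity of air, m^2/s
    nu_water = 1.0e-6   # kinematic viscosity of water, m^2/s
    print(f"Re gain, water vs. air: {nu_air / nu_water:.0f}x")   # about 15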
     Water has some  advantages  over air as  the  model-
ing medium.  Flow visualization - using different
colors of dyes, hydrogen  bubbles,  and neutrally buoy-
ant particles - is generally  much  easier  in water.
Salts, acids, and  dyes are  used  as tracers  to deter-
mine  pollutant concentrations when modeling dispersion
characteristics in water.   Water is also  rather easily
stratified using salt  water layers of varying density;
whereas, stratification  in  a  wind  tunnel  requires
rather elaborate heating  and  cooling systems.
     Wind tunnels, however, have been used  in many
applications to simulate atmospheric motions.   Flow
visualization, velocity  measurement, and  concentration
detection techniques have been developed  and advanced
to a level of high reliability and accuracy.  Models
used in wind tunnels need not be as solidly construct-
ed or as firmly supported as those for use in water
channels, where pressure forces are generally much
larger.  In addition, connections to probes do not
require as much care with electrical insulation in air
as in water, and there is less corrosion of metal.
     Because both a wind tunnel and water channel-
towing tank are available at the FMF, EPA scientists
can use the one most suited to a particular research
project.
             Meteorological Wind Tunnel

     The meteorological wind tunnel is an open-
circuit, low-speed wind tunnel designed for simulating
neutral atmospheric flows.  The test section -
18 meters (m) long, 3.7 m wide, and 2.0 m high - has
an adjustable ceiling to compensate for blockage when
large models are used.  Five subsections with inter-
changeable windows and floor units comprise the test
section.  A removable, 3.4 m diameter turntable can
be placed in any of the five sections.  The portion
of a model installed on the turntable can be easily
rotated to change the effective wind direction.
     A 75 kilowatt (kW) a.c. motor driving a 1.8 m
diameter fan through a speed controller (eddy current
coupler) produces a top air speed of 10 meters per
second (m/sec.).  A carefully designed entrance-
contraction section contains a honeycomb and four
screens to produce a low turbulence flow at the
entrance to the test section.  The fan is downstream
of the test section and is housed within a sound-
deadening enclosure.  Acoustic silencers in the flow
both upstream and downstream of the fan provide for
quiet operation.
     An instrument carriage mounted on rails can
position a probe anywhere in the test section.
Controls to move the carriage in three dimensions are
                                                      489

-------
located on an operator's console near the tunnel.
Digital readout indicates the position of the probe
to the nearest millimeter.
     Many methods have been devised for developing
simulated atmospheric boundary layer air flow  in wind
tunnel test sections.  The FMF has slightly modified

a method devised by Counihan  at the Central Elec-
tricity Research Laboratories, England, to create
power law wind profiles.  Elliptical vortex-generating
fins are placed just downwind of a castellated  barrier
at the entrance to the test section (Figure 3).  The
fins initiate a boundary layer with a thickness equal
to their height.  Two-dimensional roughness elements
are placed on the tunnel floor downstream of the
vortex generators to maintain the boundary layer char-
acteristics over the  length of the test section.  Mean
velocity and turbulence intensity profiles occurring
4 1/2 heights downwind of 1.8 m fins are shown  in
Figure 4.  The mean velocity profile generated  by this
fin/roughness combination is close to a l/5th  power
law, which is typical of flat country with low  shrub-
bery.  The turbulence intensity profile compares

favorably with atmospheric data reported by Harris  ,
which are shown in the figure for comparison.   The
FMF has 5 sets of these fins ranging from 15 to 180
centimeters (cm) in height.  The fins may be easily
inserted in the wind tunnel to obtain a boundary layer
appropriate to the scale and characteristics of the
model under test.
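
For reference, the 1/5th power-law profile mentioned above can be written u(z)/U = (z/δ)^(1/5), with U the free-stream speed; the short sketch below evaluates it at a few heights, with the boundary-layer depth set, for illustration, to the 1.8 m fin height.

    # Sketch of a 1/5th power-law mean velocity profile, u/U = (z/delta)**0.2.
    # The boundary-layer depth and evaluation heights are illustrative values.
    delta = 1.8          # simulated boundary-layer depth, m (fin height scale)
    exponent = 1.0 / 5.0
    for z in [0.1, 0.5, 1.0, 1.8]:
        ratio = (z / delta) ** exponent
        print(f"z = {z:4.1f} m   u/U = {ratio:.2f}")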
Figure 3.  Elliptical vortex generating fins for
developing simulated atmospheric boundary  layers  in
meteorological wind tunnel test section.
each layer of different  density.   Atmospheric temper-
ature gradients are modeled by the density gradients
of the salt water.  Models  are affixed to a turntable,
suspended from a  towing  carriage  into the fluid, and
towed the length  of the  test section, making possible
the study of flow and  dispersion  around buildings and
over complex terrain and urban areas, under stably-
stratified atmospheric conditions.  The carriage speed
is continuously variable from 0 to 0.5 m/sec.  A
filling system, consisting of a brinemaker and five
large tanks, provides the capability of filling the
test section with a desired stably-stratified salt-
water mixture in  approximately 4  hours.
     In the water channel mode of operation, the
facility is used  in a  manner similar to the wind
tunnel  procedure, with models fastened to the floor
of the test section.   A  1.5 m diameter pump, driven
by a 75 kW a.c. motor through a speed controller
(eddy current coupler),  produces  a top speed of
1.0 m/sec.  The channel  is  supported on jacks that
can be adjusted to tilt  the entire unit to compensate
for the pressure  drop  through the test section.
  Figure 4.  Vertical profiles of mean velocity (A) and local turbulence
  intensity (B) downwind of vortex generating fins (H = fin height =
  1.83 m).  Atmospheric data [6] are included for comparison.
             Water Channel-Towing Tank

     The water channel-towing tank was added  to  the
FMF to make possible the study of dispersion  under
stably-stratified atmospheric conditions.  As the
name implies, it is a dual-purpose facility.  Figure 2
shows its closed-circuit design, with the pump in the
return leg on the bottom and the test section (free
surface) on the top.  The test section - 2.4 m wide,
1.2 m deep, and 25 m long - is constructed with floor
and sidewalls of acrylic plastic in an aluminum
framework.
     In the towing tank mode of operation, the ends
of the test section are blocked with gates and the
test section is filled layer by layer with salt  water,
                   Instrumentation

     The FMF has a Digital Equipment Corporation
PDP 11/40 minicomputer located within the facility
to process all laboratory data.  The system includes
3 magnetic tape drives, 3 disk drives, an 80K memory
bank, a 16-channel analog-to-digital converter, a
refresh-graphics terminal, and an electrostatic
printer/plotter.  It operates under RSX-11D, which is
a multi-task, multi-user operating system.  Real-time
analysis of the outputs of electronic data gathering
instruments provides instant feedback to the experi-
menter on the results of data being taken.  The
magnetic tape drives provide for economical storage
of digitized data for future analysis.
                                                       490

-------
     Velocity measurements in the wind tunnel and
water channel-towing tank are made with Thermo-
Systems  Inc.  constant temperature hot-film anemometers.
The outputs  of the anemometers are digitized, linear-
ized, and  analyzed on the computer according to
previously determined calibrations for each hot-film
probe.  Mean velocity and turbulence intensity values
are printed  out immediately after the last sample  has
been digitized for the scrutiny of the technician.
Fast Fourier Transform techniques are used to obtain
spectra  and  correlations of turbulence from signals
recorded on  magnetic tape.  Programs have been
written  to calculate Reynolds stresses and velocity
fluctuations in two coordinate directions from the
output of cross-film probes.
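     As a rough illustration of this kind of data reduction (not the facility's
actual software, which is described only in outline here), a minimal sketch in
Python with NumPy might proceed as follows; the sampling rate and record length
are assumed values:

    import numpy as np

    def reduce_velocity_record(u, sample_rate_hz):
        # Mean velocity and local turbulence intensity from a digitized,
        # linearized hot-film record u (m/sec).
        u = np.asarray(u, dtype=float)
        u_mean = u.mean()
        intensity = u.std() / u_mean
        # Raw one-sided spectrum of the fluctuating component via FFT.
        fluct = u - u_mean
        spectrum = np.abs(np.fft.rfft(fluct)) ** 2 / (len(u) * sample_rate_hz)
        freqs = np.fft.rfftfreq(len(u), d=1.0 / sample_rate_hz)
        return u_mean, intensity, freqs, spectrum

    # Example with a synthetic 10-second record sampled at 1000 Hz (assumed values).
    rng = np.random.default_rng(0)
    record = 2.0 + 0.2 * rng.standard_normal(10000)
    mean_u, ti, f, s = reduce_velocity_record(record, 1000.0)
    print(mean_u, ti)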
     In  the  wind tunnel, pollutant dispersion is
studied  by releasing a dilute tracer-gas-in-air mix-
ture from the model source, collecting samples through
a  sampling tube, and measuring the concentration of
tracer in the sample.  Two Beckman model 400 Hydro-
carbon Analyzers (flame ionization detectors) are
used to  determine the concentration of tracer in the
sample.   The output of the analyzer is also processed
by the minicomputer.  The response time of this
system is too long to obtain statistics on concentra-
tion fluctuations, however.
     Qualitative evaluations of dispersion patterns
are made using flow visualization techniques.  An  oil
fog generator produces a paraffin oil mist that is
released from model sources in the wind tunnel;
organic dyes are released in the water channel.  By
observing plume behavior and touchdown points, sam-
pling locations for tracer measurements are deter-
mined.  Still and motion picture cameras are used  to
photograph the experiments for a permanent record.
     A metal and woodworking shop located within the
FMF contains equipment and tools for constructing
detailed models from metal, wood, and plastics.
Minor modifications and additions to the facility
are also performed in-house.
      The FMF also houses an electronics shop for the
 repair and maintenance of instruments and other
electronic equipment and for the development of new
 instrumentation.
                    Applications

     Because the water channel-towing tank  has only
 recently been installed, specific applications are
 in the planning stages.  Basic characteristics of
 the system are being studied, and modeling  and
 measurement techniques are being developed  for use in
 future projects.  Some applications of the  meteor-
 ological wind tunnel will be discussed.
     The first study completed in the meteorological
 wind tunnel was an analysis of the flow behind a
 two-dimensional mountain ridge to determine "rules
 of thumb" for the placement of smoke stacks.  A
 cavity of recirculating flow was found in the lee of
 the ridge with a height equal to approximately twice
 the ridge height and a length equal to ten  ridge

 heights.  A summary paper  on this work will be
 presented at this conference.
     Wind tunnels are often used to evaluate aero-
 dynamic influences of buildings on smokestack plumes.
 Unique characteristics of specific emission  sites or
 building shapes may require exact modeling  of individ-
 ual cases.   The FMF is more involved with the analysis
 of  general  cases from which general conclusions or
 "rules of thumb" can be obtained.
 Figure 5.  Plume visualization for a stack 1 1/2 times as high as a
 nearby building:  (a) building width twice its height; (b) building
 width one-third its height; (c) stack without building.  Building
 heights, stack heights, and boundary layer and effluent character-
 istics are identical in all three photographs.

      One such study8 was performed to test the notion
 that a stack must be 2 1/2 times as high as the
 tallest nearby building to avoid plume downwash
 resulting from building effects.  Two buildings were
 used; one with its width twice its height and one with
 its width one-third its height.  The 2 1/2 times rule
 was found to be unnecessarily conservative for the thin
 building.  Smoke visualization photographs are present-
 ed in Figure 5 for a stack that is 1.5 times the
                                                       491

-------
building height for the wide building, the thin
building, and no building.  Comparison of the photo-
graphs shows that, even though the stack heights and
building heights are identical, the wide building
produces strong plume downwash, whereas the thin
building has essentially no influence on the plume.
Quantitative concentration measurements verified the
visualization results.
     The influence of buildings on stack emissions
was more thoroughly investigated in a cooperative

study9 performed in response to a request by the EPA
Office of Air Quality Planning and Standards (see
Figure 6).  One building with width equal to twice its
height, HB, was used.  The stack was located at the

center of the downwind  side of the building.  Emphasis
was placed  on quantifying the building influence on
ground level concentrations as a function of stack
height, HS, stack diameter, D, exit velocity, W, and

buoyancy.  Many combinations of these parameters were
examined both with and  without the building in place.
Figure 6 presents ground level concentrations meas-
ured under  the plume centerline for a stack that is
1  1/2 times as high as  the building.  Measured con-
centrations for a ground release in the building wake
are also presented.  A  mathematical model based upon
a Gaussian  plume formulation with a correction for
increased dispersion behind the building has been
found to approximate the data reasonably well.
 Figure 6.  Nondimensionalized ground level concentration for neutral-
 ly buoyant plume.  D/HB = 0.063, W/U = 0.7.
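     The mathematical model referred to above is described here only in general
terms.  Purely as an illustration of the form such a formulation can take, and
not as the model actually fitted to these data, a Gaussian plume ground-level
estimate with an ad hoc wake enhancement of the dispersion coefficients can be
sketched as follows; all numerical values are hypothetical:

    import math

    def ground_level_centerline(q, u, h_s, sigma_y, sigma_z):
        # Ground-level centerline concentration of a Gaussian plume
        # from an elevated source of strength q in wind speed u.
        return (q / (math.pi * sigma_y * sigma_z * u)) * math.exp(-0.5 * (h_s / sigma_z) ** 2)

    def wake_enhanced(sigma, building_scale, x, wake_length):
        # Illustrative correction only: add a building-scale contribution
        # that decays over an assumed wake length.
        extra = 0.5 * building_scale * max(0.0, 1.0 - x / wake_length)
        return math.hypot(sigma, extra)

    h_b = 1.0                      # building height (arbitrary units)
    h_s = 1.5 * h_b                # stack 1.5 times the building height
    q, u = 1.0, 1.0                # hypothetical source strength and wind speed
    for x in (2.0, 5.0, 10.0, 20.0):
        sy, sz = 0.20 * x, 0.12 * x                    # assumed flat-terrain coefficients
        syw = wake_enhanced(sy, 2.0 * h_b, x, 10.0 * h_b)
        szw = wake_enhanced(sz, h_b, x, 10.0 * h_b)
        print(x, ground_level_centerline(q, u, h_s, syw, szw))

The point of the wake term is simply that enhanced dispersion coefficients
raise near-building ground level concentrations relative to the isolated-stack
case, which is the qualitative effect quantified in Figure 6.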
     Another model constructed for study in the
meteorological wind tunnel involved moving 1:32-scale-
model vehicles to simulate highway traffic (Figure 7).
The vehicles are pulled across the test section by a
chain imbedded in the floor of the turntable.   Char-
acteristics of the turbulent wake region downwind of
moving vehicles are to be determined.  Preliminary
results show that the vehicle-induced mechanical  tur-
bulence has a strong influence on the dispersion  of
vehicle exhaust close to the highway.
     Through studies of this type, a general  under-
standing of the mechanisms of atmospheric dispersion
can be gained.  EPA Fluid Modeling Facility scientists
intend to concentrate efforts on this  type of project
as opposed to evaluating specific case studies involv-
ing circumstances peculiar to a given topographical
location or building design.  In conjunction with
these studies, modeling techniques and similarity
theories will be evaluated to establish guidelines for
proper modeling methods.

Figure 7.  Highway model with moving vehicles on
meteorological wind tunnel turntable.
                     References
1. Snyder, W.H., R.S. Thompson, and  R.E.  Lawson, Jr.,
   The EPA Meteorological Wind Tunnel:  Design,  Con-
   struction, and Operating Details.   Environmental
   Protection Agency, Research Triangle Park, N.C.
   (In preparation.)

2. Snyder, W.H., "Similarity Criteria  for the
   Application of Fluid Models to the  Study of  Air
   Pollution Meteorology", Boundary-Layer Meteorology,
   v. 3, no. 2, 1972, pp. 113-34.

3. Cermak, J.E., "Laboratory Simulation of the
   Atmospheric Boundary Layer", AIAA Journal, v. 9,
   no. 9, Sept. 1971, pp. 1746-1754.

4. Sundaram, T.R., G.R. Ludwig, and G.T. Skinner,
   "Modeling of the Turbulence Structure of the
   Atmospheric Surface Layer", AIAA Journal, v. 10,
   no. 6, June 1972, pp. 743-750.

5. Counihan, J., "An Improved Method of Simulating
   an Atmospheric Boundary Layer in a Wind Tunnel",
   Atm. Env., v. 3, 1969, pp. 197-214.

6. Harris, R.I., "Measurement of Wind Structure at
   Heights up to 598 ft. above Ground Level", Symp. Wind
   Effects on Buildings and Structures, Loughborough
   Univ. Tech. (Dept. of Transport Technology), 1969.

7. Huber, A.H., W.H. Snyder, R.S. Thompson, and
   R.E. Lawson, Jr., "Plume Behavior in the Lee of a
   Mountain Ridge -- A Wind Tunnel Study", Presented at
   the EPA Conference on Modeling and Simulation,
   Cincinnati, Ohio, April 1976.

8. Snyder, W.H. and R.E. Lawson, Jr., "Determination of
   a Necessary Height for a Stack Close to a Building --
   A Wind Tunnel Study".  Atm. Env.  (In press.)

9. Huber, A.H. and W.H. Snyder, "Building Wake Effects
   on Short Stack Effluents".  (To be presented at the
   Third Symposium on Atmospheric Turbulence, Diffusion
   and Air Quality, Raleigh, N.C., October, 1976.)
     Mention of trade names or commercial products does
not constitute endorsement or recommendation for use
by the Environmental Protection Agency.
                                                      492

-------
                    PLUME BEHAVIOR IN THE LEE OF A MOUNTAIN RIDGE  —  A  WIND  TUNNEL STUDY

                                                 Alan H. Huber
                                     Monitoring and Data Analysis  Division
                                 Office of Air Quality Planning and Standards

                                              William H. Snyder*
                                              Roger S. Thompson
                                     Meteorology and Assessment Division
                                  Environmental Sciences Research  Laboratory

                                                      and

                                           Robert E. Lawson, Jr.
                                          Northrop Services, Inc.

                                              Work Performed In
                                          Fluid Modeling Facility
                                    Meteorology and Assessment Division
                                   U.S. Environmental Protection Agency
                                   Research Triangle Park, N. C.   27711
    A wind tunnel  study  of the  concentration field
resulting from a  stack  placed  in the highly turbulent
region downwind of  a  two-dimensional mountain ridge
is presented.  This highly  turbulent region, often
referred to as the  "cavity," was found to consist
of a large semipermanent  eddy.   The general cir-
culation was in the main  flow  direction along the
upper edge, opposite  the  main  flow direction
along the ground  surface, and  up the slope along
the leeward ridge surface.   The  eddy is a result of
the main flow separating  at the  apex of the ridge.
A stack was positioned  to emit an air-methane mix-
ture into the cavity  in the lee  of the ridge.
Longitudinal, lateral,  and  vertical  concentration
profiles showed that  a  tall  stack placed near the
upper boundary of the cavity resulted in higher
ground level concentrations near the downwind end
of the cavity than  did  a  short stack.  The high-
est ground level  concentrations, however, were
found to occur near the base of  the short stack.
Application of the  "2.5 times  rule"  with respect
to the ridge height was found  to be sufficient
for avoidance of  the  highly turbulent region.

                     Introduction
     Aerodynamic effects  induced  by local  terrain
features can have a major influence upon the
dispersion of locally emitted  effluents.  Even with
the best demonstrated control  technology applied,
most plume concentrations at the  stack exit are at
levels far in excess of ambient air quality standards.
The plume may frequently  be entrained  in the tur-
bulent eddies created by  air flow over local terrain
features and be brought to the ground  before the
concentrations are sufficiently reduced to levels
below ambient air quality standards.   These effects
are often referred to as  "plume downwashing."

     This paper discusses  the  results  of a set of
experiments designed to examine the highly turbulent
region that can be found  on the leeward side of a
steeply sloping mountain  ridge.   The study was con-
ducted in the meteorological wind tunnel at the Fluid
Modeling Facility of the  U. S. Environmental Protec-
tion Agency (EPA).  Fluid modeling is  ideally suited

* On assignment from the  National  Oceanic  and
  Atmospheric Administration.
 for investigations of the complex plume behaviors
 that result from aerodynamic effects because essential
 variables  can be controlled and examined at will.

      For neutral (neutrally stable) atmospheric
 flows,  aerodynamic effects evolve from interacting
 frictional  forces and pressure gradients induced
 by local  surface roughness and terrain features.
 Adverse effects  exist when surface friction and
 pressure  gradients combine to retard the surface
 layer flow  enough to produce separation of the
 boundary  layer.   Separation in a neutral flow
 generally occurs near the apex of the terrain feature
 resulting in  development of a stagnation region on
 the leeward side, often  referred to as a "cavity."
 At the  point  of  separation, the main stream of flow
 is vertically raised, resulting in the development of
 a  stagnation  region (cavity) below, where mean
 velocities  are reduced and the flow is highly
 turbulent.  The  flow reattaches itself somewhere
 downstream  of the obstacle.  In the region of re-
 attachment, a portion of the flow is deflected up-
 stream, forming  a zone of recirculation.  The
 dividing  "streamline" that separates the recircula-
 ting flow from the main  stream encloses the cavity,
 as shown in Figure 1.  The "wake region," which is de-
 fined as that region of  the flow field that is dis-
 turbed  by the obstacle,  can extend far downwind.  The
 "envelope"  is the upper  boundary of the wake region.
 Far enough  downwind, of  course, the flow readjusts
 itself  to a boundary layer appropriate to local
 surface roughness.
Figure 1. Diagrammatic sketch of envelope and cavity regions behind
a two-dimensional ridge.
                                                      493

-------
Literature Review
     A review of published field studies1-11 supports
the assertion that, on the leeward side
of a mountain ridge, a recirculating flow region with
strong downwash and enhanced dispersion exists.
Consistent information that could define the point
of separation and the size and extent of the cavity
was not found, however.  The point of separation
appears to be very much a function of mean flow speed
and direction, atmospheric stability, both the down-
slope and upslope angle of the ridge sides, and the
location of the ridge with respect to surrounding
terrain.

     For a particular situation, the cavity will be
largest when separation occurs at the ridge apex.  Ob-
structions with sharp edges should exhibit definite
separation at those edges under all atmospheric condi-
tions.  The size of the cavity region is greatest for
isolated ridges with steep sloping sides.  Stable at-
mospheric conditions act to restrict the size and ex-
tent of the cavity region.

Experimental Plan

     Terrain features that most adversely affect
the flow are two-dimensional in nature.  Lateral
air motion around a hill results in a smaller cavity
size than would be observed for a two-dimensional
ridge.  A study of neutral flow with separation
occurring at the apex of a two-dimensional mountain
ridge best demonstrates the extent to which stagna-
tion regions can influence the dispersion of locally
emitted effluents.

     There were three major phases to this study.
Phase I involved a Gaussian ridge and a triangular
ridge.  Velocity and cavity size measurements were
made in order to demonstrate that the basic flow
structure is independent of the detailed shape of
the ridge.  Phase II involved mapping of the concen-
tration field resulting from a source placed within
the cavity.  Phase III examined the effect of
variations in the approach flow on the cavity size
and shape.  This presentation emphasizes the results
of the second phase.  The complete description of
these studies is given in Huber, et al.12

              Experimental Details

Similarity Criteria

     In order to ensure that the flow in the model
simulates that in the atmosphere, it is necessary
to meet certain similarity criteria.  Various
nondimensional parameters characterizing the flow
in the atmosphere must be matched in the model.
Because this study is concerned only with neutral
atmospheric flows, nonbuoyant effluents, and rela-
tively small scales, the Richardson, Froude, and
Rossby numbers may be ignored (Snyder13).  The
remaining parameters of significance are as follows:

     L/H, HS/H, D/H, δ/H, U/U∞, (u²)1/2/U, WS/US,
     U∞H/ν, and WSD/ν,

Where:  L        = characteristic width of ridge
        H        = height of ridge
        HS       = height of stack
        D        = inside diameter of stack
        δ        = boundary layer thickness
        U        = mean wind speed (a function of
                   elevation)
        (u²)1/2  = root-mean-square of longitudinal
                   velocity fluctuations
        WS       = stack effluent speed
        US       = wind speed at top of stack
        ν        = kinematic viscosity of air
     The first four of these  parameters  (length
ratios) are easily matched by constructing  a
scale model, but because no particular field  situa-
tion was modeled, idealized ridge  shapes  and  re-
presentative values were chosen.   The fifth and
sixth parameters characterize the  boundary  layer
approaching the ridge.  Two different boundary
layers were used.  A vortex generator-roughness
element combination similar to that of Counihan14
was used to provide a 60-cm atmosphere-like
boundary layer.  The other was the natural (δ=15-cm)
boundary layer developed over the  smooth  wind
tunnel floor.

     The effluent speed to wind speed ratio was
maintained at 3:2 in all tests.  This value is the
minimum necessary to avoid downwash in the immediate

lee of the stack itself (Sherlock  and Stalker15).

Plume rise or downwash from the model stack placed
in the lee of the ridge are,  therefore, associated
with disturbances induced by  the ridge.   The diameter
to ridge height ratio was kept at  a value equal to
0.03 in reference to the 30 cm model ridge.

     The last two parameters are the ridge
Reynolds number (ReH = U∞H/ν) and the effluent
Reynolds number (ReD = WSD/ν).  For exact

similarity, the model  ridge Reynolds number must
equal the actual  ridge Reynolds number.   This was
not possible for the model scales used and, for-
tunately, is not necessary (Snyder13) because the flow
fields become independent at  sufficiently large
Reynolds number.   For a mountain ridge with a sa-
lient edge near the peak, the boundary layer may be

expected to separate at the edge (Scorer2).  If the
point of separation on the model occurs at its apex,
similarity of the two flow patterns should result.
The plume behavior should be  independent  of the ef-
fluent Reynolds number provided the flow  is fully
turbulent at the stack exit.   Internally  serrated
washers were placed inside the stack to ensure fully
turbulent flow at the exit.
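
     As a numerical aside, the two Reynolds numbers defined above can be
estimated directly from the model dimensions and tunnel speeds quoted in the
text.  The sketch below (Python) uses a standard value for the kinematic
viscosity of air and takes the wind speed at stack top to be roughly the
free-stream speed; both are assumptions made only for illustration:

    NU_AIR = 1.5e-5                 # kinematic viscosity of air, m^2/sec (assumed)
    H = 0.305                       # model ridge height, m
    D = 0.03 * H                    # stack diameter from the D/H = 0.03 ratio

    for u_inf in (0.3, 3.0, 10.0):  # wind tunnel speed range, m/sec
        w_s = 1.5 * u_inf           # effluent speed from the 3:2 ratio (U_S ~ U_inf assumed)
        re_h = u_inf * H / NU_AIR   # ridge Reynolds number
        re_d = w_s * D / NU_AIR     # effluent Reynolds number
        print(f"U = {u_inf:4.1f} m/sec   Re_H = {re_h:8.0f}   Re_D = {re_d:6.0f}")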

Equipment

     The wind tunnel test section  measures 3.7 m x
2.1 m x 18.3 m.  The flow speed within the wind tun-
nel can be controlled between 0.3  and 10  meters per
second (m/sec).  The ceiling of the wind tunnel can
be adjusted to compensate for  blockage effects of
the models.  In this study, the ceiling was adjusted
to obtain a nonaccelerating free-stream flow above
the mountain ridge.  Further  details of the wind

tunnel may be obtained from Snyder, et al.16

     Three model  ridges were  constructed. One
ridge was triangular in shape with a 30.5 cm high
                                                      494

-------
apex and sloping sides  of  30°;  the other two had
sides with idealized  shapes.  The ridges were sym-
metrical about a center line, each side of which
could be divided into three  sections.  The center
section had a constant  slope  of 30°.  The upper and
lower sections were respectively convex and concave
outward.  This model  shape appears to be very close
to a Gaussian probability  distribution and is re-
ferred to as the Gaussian  model ridge in order to
distinguish it from the triangular model ridge.
The two similar Gaussian ridges had apexes of
30.5 cm and 15.2 cm,  respectively.  The three
models will be referred to in the remainder of this
report as the 30-cm triangular ridge, the 30-cm
Gaussian ridge, and the 15-cm Gaussian ridge.

     For mean velocity  and turbulence intensity
profiles, a Thermo-Systems Inc. Model 1054 A
anemometer was used in  conjunction with their
Model 1210-20 hot-film  probes.   Smoke visualization
studies made use of an  oil-fog generator.

     An air-methane mixture  was ejected from the
stack as a tracer  gas.   This  effluent simulated a
neutrally buoyant  plume because the amount of methane
in the gas mixture was  only  1 percent and the stack
gas temperature was equal  to  the ambient air tempera-
ture.

     Concentration profiles  were obtained by sampling
a stream through a Beckman Model 400 Hydrocarbon
Analyzer, which is a  flame ionization detector.
Its response time  of  0.5 second is too long to
examine any dispersion  micro-structure.  Time averages
can be related to  steady-state averages occurring
in similar full scale situations, however.  A 2.5
minute averaging time was  found to yield stable
values of concentration.

Experimental Concentrations

     Concentrations measured  in the model may be
related to steady-state averages that would be
measured in the field.  The stack gas concentration,
CS, is related to the emission rate by

     Q = CS[(π/4)(D²)]WS.                            (1)

The field concentration, CF, is linearly related to
the model concentration, CM, to the emission rates,
QF and QM, and to the dilution ratio, UMHM²/UFHF².
These basic relations result in the expression

     CF = (CM)(QF/QM)(UM/UF)(HM/HF)², or             (2)

     (C/CS)F = (C/CS)M[(WS/US)F/(WS/US)M][(DS/H)F²/(DS/H)M²].

 With identical effluent  speed to wind speed ratios
 and  stack diameter  to ridge height ratios, the model
 concentration ratios are  equal to the field concen-
 tration ratios,

 (C/CS)M = (C/CS)F.                                  (3)
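
     Equation (2) lends itself to a direct calculation when model data are
applied to a proposed field situation.  The following minimal sketch (Python,
with all numerical inputs hypothetical rather than taken from this study)
carries out that scaling:

    def field_concentration(c_m, q_f, q_m, u_m, u_f, h_m, h_f):
        # Equation (2): scale a model concentration C_M to the field value C_F.
        return c_m * (q_f / q_m) * (u_m / u_f) * (h_m / h_f) ** 2

    # Hypothetical 1:500 scale example.
    h_m, h_f = 0.305, 0.305 * 500.0      # model and field ridge heights, m
    u_m, u_f = 3.0, 6.0                  # model and field wind speeds, m/sec
    q_m, q_f = 1.0e-5, 10.0              # model and field emission rates (consistent units)
    c_m = 5.0e-4                         # hypothetical measured model concentration

    print(field_concentration(c_m, q_f, q_m, u_m, u_f, h_m, h_f))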

                Experimental Results

 Phase I

     Smoke Visualization.   For flow separation to
 occur at the apex of the  30.5-cm Gaussian ridge,
 a tripping device was required.  A small square rod
 was  placed along the ridge apex to induce flow
 separation.   With flow separation at the apex, the
cavity depth and length were two times larger than
for the cases without the trip and were independent of the
        mean flow speed.  Flow visualization  measurements
        behind the 30-cm triangular  ridge  resulted in similar
        size and shape of both the cavity  and envelope as
        those for the tripped 30-cm  Gaussian  ridge (Figure 2).

             Mean Velocity and Turbulence  Intensity.

             The mean velocity and turbulence intensity pro-
        files for both upstream and  downstream positions
        from the tripped 30-cm Gaussian  ridge are present-
        ed in Figure 3.  The 60-cm atmosphere-like boundary
        layer was used as the approach flow.   The mean velo-
        city, U, is defined as the 1-minute average flow
        speed in the longitudinal direction,  x.   The  turbu-
        lence intensity is defined as the  standard deviation
        of the velocity fluctuation  in the longitudinal
        direction, normalized with the local  mean velocity.
        Measurements in regions having mean flow reversals are
        not quantitatively valid because the  hot-film cannot
        distinguish flow direction.  Smoke visualization
        quite clearly revealed upstream  flow  near the
        surface level.  The data presented, however,  should
        permit valid qualitative comparisons.

             The degree of disruption of the  approach flow
        was found to be small in extent  for the  untripped
        ridge.  The profiles around  the  30-cm triangular
        ridge were found to compare  quite  well with those
        of the tripped 30-cm Gaussian ridge.   For those cases
        in which flow separation occurred  at  the apex of the
        model, the profiles were found to  be  independent of
        the free-stream velocity.
          [Figure 2 legend: envelope and cavity extents at U∞ = 7.62 and
           3.05 m/sec, with and without forced separation; H = 30.5 cm.]
          Figure 2. Cavity and envelope size in lee of 30.5-cm Gaussian ridge.
          Figure 3. Mean velocity and turbulence intensity profiles for the
          30.5-cm, tripped Gaussian ridge, U∞ = 3.05 m/sec.
                                                       495

-------
Phase II

       Concentration measurements  are  presented in the
figures in order to assist  in describing  plume be-
havior within the cavity region  in  the  lee  of the
30-cm tripped Gaussian ridge.  To  present the data
in a form for easy comparison, the  measured concen-
trations have been nondimensionalized with  the stack
gas concentration, CS.  Thus, the value "C percent"
in the figures is the measured concentration expressed
as the percentage of stack  gas concentration.
This value can be directly  related  to average field
concentrations as discussed earlier.  A field situa-
tion with the stack emission concentration  equal
to 2000 ppm would result in the  air quality measure-
ment of 1 ppm under circumstances  similar to those
of the model where C percent is  equal to  0.05.
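
     The conversion used in this example is a one-line calculation; in Python,
with the 2000 ppm value taken from the text:

    c_percent = 0.05                       # measured model value, percent of stack gas concentration
    stack_concentration_ppm = 2000.0       # field stack emission concentration, ppm
    field_ppm = stack_concentration_ppm * c_percent / 100.0
    print(field_ppm)                       # 1.0 ppm, as stated above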

       Concentration Measurements.  The concentration
values in Figure 4 were detected by a ground level
probe located near the base of the  30-cm  tripped
Gaussian ridge.  The stack  was positioned at four
different downwind locations and raised to  various
heights while the probe sampled  at  the fixed position.
The stack height where no ground concentrations were
detected defined the limit  of the  cavity.   The  fact
that significant ground level concentrations are
found at a distance of 5 H  upwind of the  stack
demonstrates the existence  of a  strong recirculating
flow within the cavity region.   The size  of the
cavity determined from the  tracer  gas measurements
(Figure 4) showed a maximum depth  and horizontal
extent of 2 H and 8.5 H, respectively.  The horizon-
tal extent was only slightly smaller than that  found
by smoke visualization (10  H) for  the same  situation
(Figure 2).

       Figure 5 gives the resulting ground  level  con-
centrations downwind from a stack  placed  at the
leeward base of the tripped ridge  for three stack
heights.  Maximum concentrations are found  very
close to the stack base.  The peak  concentration
decreases and is shifted downwind as the  stack
height increases.  This pattern  is  typically found
to occur over flat terrain.  Beyond two ridge  heights
downwind from the stack, however, the concentrations
are highest for the tallest stack,  which  is not at
all typical of plume behavior over  flat terrain.
    Figure 5.  Longitudinal ground level concentration profiles
    downwind from stack placed at base (x/H = 2.7 from cen-
    ter) of 30.5-cm tripped Gaussian ridge.
Figure 4. Ground level concentration measurements with sampling
probe fixed near base (x/H = 3 from ridge center, z/H = 0) of 30.5-
cm, tripped Gaussian ridge (stack was placed at four downwind
locations and its height was varied).

      Figure  6a presents some  vertical  concentration
profiles  in  the lee of the tripped ridge.   The  stack
was  fixed at the ridge base with the height of  the
stack measuring one-half the  ridge height.  Because
of nearly stagnant mean flow  in the lee of  the  ridge
and  the general upward flow in the cavity region
along the ridge surface, a substantial plume rise
occurs -- as is indicated by  the vertical con-
centration profiles.  The concentrations on the
leeward side of the cavity are more uniform than
those farther upstream, but little spreading occurs
into the  region above the cavity (z/H>2).  The
lateral  concentration profiles in Figure 7a show
that the  lateral plume width  changes only slightly
in the downwind direction.

      Additional concentration measurements  for the
same stack and location, but with stack height 1.5
times the ridge height, were  also made.  Figure 6b
presents  a few vertical concentration  profiles for
the  elevated stack and shows  the plume rise to be
essentially  zero.  Zero plume rise now occurs because
the  elevated stack is above the region of mean flow
stagnation.   The elevation of the point of maximum
concentration decreases with  downwind  distance,
showing evidence of the recirculation within the
cavity.   The highly uniform concentrations below
z/H  = 1  are  also evidence of  the recirculation.
Even for  this elevated stack, little dispersion
into the  region above the cavity occurs.  The
lateral  ground level concentration profiles in
Figure 7b show essentially identical spread with
only minor differences in their values near the
center (y/H  = 0).  From these results, it is evident
that instead of the strong immediate downwash, which
occurred  for the shorter stack, the taller stack
emissions are caught in the outer recirculation
region within the cavity.  The direction of the re-
circulation  is down the leeward side of the cavity
and  upstream along the ground.  Concentrations
measured  near the reattachment point were highest
for  the 1.5  H stack.  This occurs because emissions
near the  upper boundary of the cavity  region are
caught in the general recirculation that downwashes
the  plume towards the reattachment point.   Because
emissions from a shorter stack at the  same  location
as above are more rapidly downwashed and dispersed,
a  lower concentration will occur at the reattachment
point.  This is one instance  in which  increasing
the  stack height does not decrease ground level con-
centration.
                                                      496

-------
Phase III
     The goal was to  determine the effect  of  the
approach boundary layer  conditions  on the size and
shape of the leeward  cavity region.  The cavity
size and envelope for the 60-cm atmosphere-like
boundary layer approaching the 15-cm Gaussian
ridge were found to be similar to those for the
15-cm natural boundary layer.  The turbulence
levels of the approaching natural boundary layer
may be expected to be much lower than those of the
simulated atmospheric boundary layer.

     Changes in the size of the cavity and the
envelope were found when the mean flow approached
the ridge along a plane  elevated to the height of
the ridge apex.  Both the wake and envelope size
were reduced in comparison with the cases  for an
isolated ridge.  The effect of changing the boundary
layer conditions appears to be minor in comparison
with effects of upwind terrain changes.
                      Conclusions

     For the 30-cm Gaussian ridge with separation
occurring at  the apex, the maximum depth  and hori-
zontal extent of the cavity region were found to be
2 H and 10  H, respectively.  The flow patterns in
the lee of  ridges that exhibited separation at their
apex were found not to be sensitive  to  the detailed
shape of  the  slopes.  The cavity sizes  and shapes
were found  to be only slightly affected by the
thickness and turbulence intensity of the approach
boundary  layer, but were dependent upon the upwind
slope of  the  terrain.

     Near the downwind end of the cavity  (that is,
near the  reattachment point), the flow  is difficult
to characterize.  Part of the main flow recirculates
within the  cavity, and the rest continues downwind
forming a newly developing boundary  layer.   This
region can  best be characterized by  the increased
vertical  and  lateral spreading of a  plume over that
occurring for a flow without the mountain ridge
disruption.   The above assertions are generalizations
drawn from  a  limited amount of smoke visualization
and mean  velocity and turbulence intensity data taken
downwind  of the reattachment point.  Further studies
of the behavior of plumes from stacks placed down-
wind of the reattachment point are needed to
characterize  dispersion there.
                                                                 [Figure legends: concentration profiles at several x/H distances
                                                                  from the stack; H = 30.5 cm.]
  Figure 6. Vertical concentration profiles for stack placed at base
  (x/H = 2.7 from center) of 30.5-cm tripped Gaussian ridge.
  A - HS/H = 0.5; B - HS/H = 1.5.
 Figure 7.  Lateral concentration profiles from stack placed at base
 (x/H = 2.7 from center) of 30.5-cm, tripped Gaussian ridge.
 A- HS/H = 0.5; B - HS/H = 1.5.
                                                        497

-------
     The cavity region leeward of the model  ridge was
found to be highly turbulent with significant plume
downwash.  The plume downwash results in ground
level concentrations within the cavity region that
are greater than 0.05 percent of the stack effluent
concentration.  These concentrations are undoubtedly
significantly higher than would occur in the absence
of the mountain lee effects examined in this study.
For similar actual situations, it would be good
engineering practice to avoid placement of any
significant source within the expected cavity region.

     The general engineering "rule of thumb," as
found repeatedly throughout the literature,  for
avoiding plume downwash in the lee of an obstruction
is to keep the height of the stack "2.5 times" the

height of the obstruction.  According to Sutton,17
the rule was probably derived by Sir David Brunt
from a study on the height of disturbances over a
long ridge in connection with a British airship
disaster investigation.  It, therefore, is not sur-
prising that the general rule is applicable  to the
results of this study.  Although the maximum depth
of the cavity was found to be 2 H, some margin of
safety is well advised, because strong downwashing
occurs in the upper regions of the cavity.  The
maximum horizontal extent of the cavity was  found
to be 10 H.  Part of a plume emitted above a cavity
can, in this distance, spread downward and thus be-
come entrained within the cavity.
                     References

1.   Scorer, R.S.  Theory of Airflow Over Mountains.
     IV-Separation of Flow from the Mountain Surface
     Quart. Journal Royal Met. Society.  81:340-350,
     1955.

2.   Scorer, R.S.  Air Pollution.   Oxford, England,
     Pergamon Press, 1968

3.   Buettner, K.J.K.  Orographic  Deformation of
     Wind Flow.  Final Report USAEDRL Project Number
     1AO-11001-B-021-01.   University of  Washington,
     Seattle, WA.  Contract Number DA.36-039-SC-89118.
     May, 1964. 70 p.

4.   Start, G.E., N.R. Ricks, and C.R. Dickson.
     Effluent Dilutions Over Mountainous Terrain.
     Air Resources Laboratory.   Idaho Falls, Idaho.
     NOAA Technical Memo ERL-ARL-51. 1975. p. 168.

5.   Gloyne, R.W.  Some Characteristics of the Nat-
     ural Wind and Their Modification by Natural and
     Artificial Obstruction.  Scientific Horticulture.
     XVII:7-19, 1964-1965.

6.   Pooler, F.,Jr. and L.E. Niemeyer.  Dispersion
     from Tall Stacks:An Evaluation.  Paper No.ME-14D
     (presented at 2nd International Clean Air Con-
     gress.  Washington, D.C. December 6-11, 1970)

7.   Corby, G.A.  Airflow Over Mountains:A Review of
     Current Literature.   Quart. Journal Royal
     Met. Society.  80:491-521, 1954.

8.   The Airflow Over Mountains.   World  Meteorologi-
     cal Organization. Geneva,  Switzerland.  WMO No.98
     1967. 43 p.

9.   Davidson, B.  Some Turbulence and Wind Variabil-
     ity Observations in the Lee of Mountain Ridges.
     J. Applied Meteorology.  2(4):463-472, 1963.
 10.   Halitsky, J., G.A. Magony, and P. Halpern.
      Turbulence Due to Topographical Effects.
      New York Univ., N.Y.,  Geophysical Sciences
      Laboratory Report TR66-5, 1965, 75 p.

 11.   Orgill, M.M., J.E. Cermak, and L.O. Grant.
      Laboratory Simulation and Field Estimates of
      Atmospheric Transport - Dispersion Over
      Mountainous Terrain.  Colorado State Univer-
      sity, Fort Collins, CO.  Technical Report.
      CER 70-71 MMO-JEC-LOG40.

 12.   Huber,  A.H., W.H. Snyder, R.S. Thompson, and R.E.
      Lawson, Jr.  Stack Placement in the Lee of a
      Mountain Ridge.  Environmental Protection
      Agency, Research Triangle Park, NC. (To be pub-
      lished).

 13.   Snyder, W.H.  Similarity Criteria for the
      Application of Fluid Models to the Study
      of Air  Pollution Meteorology.   Boundary-Layer
      Meteorology.  3(2):113-134, 1972.

 14.   Counihan, J.  An Improved Method  of Simulating
      an Atmospheric Boundary Layer  in  a Wind Tunnel.
      Atmospheric Environment.  3:197-214, 1969.

 15.   Sherlock, R.H.  and  E.A.  Stalker.   The  Control of
      Gases in the Wake of Smoke Stacks.  Mechanical
      Engineering.  62:455-458, 1940.

 16.   Snyder,  W.H., R.S.  Thompson, and  R.E.  Lawson, Jr.
      The EPA Meteorological Wind Tunnel: Design,
      Construction, and Operating Details.   Environ-
      mental  Protection Agency,  Research  Triangle Park,
      NC.  (In  preparation).

 17.   Sutton,  O.G.   Discussion  before Institute of
      Fuels.   Journal  Institute  Fuel(London).
      33:495,  1960.
      Mention of trade names or commercial products
does not constitute endorsement or recommendation  for
use by the Environmental  Protection Agency.
                                                      498

-------
                                       A  NOTE  ON THE  SEA BREEZE REGIME
                 S. Trivikrama Rao
              Division of Air Resources
New York State Department of Environmental  Conservation
                  Albany, New York
                  Perry J.  Samson
             Department of  Meteorology
              University of Wisconsin
                 Madison, Wisconsin
 ABSTRACT

 Rotary  spectrum analysis  is  applied  to  the  wind field
 over Long Island, New York,  to  delineate  the  clock-
 wise and counterclockwise turning  associated  with
 various spectral  frequencies.   On  the  south shore and
 at higher elevations elsewhere  over  the island, the
 diurnal oscillation is  found to be a predominantly
 clockwise rotation derived from sea  breeze.  At lower
 levels, mid-island and  on the north  shore,  there is  no
 dominant rotation in the  diurnal oscillation.  This
 may be  attributed to the  interaction between  the
 sound breeze  from Long  Island Sound  and the sea breeze
 from the Atlantic Ocean.
 INTRODUCTION

 The sea  breeze regime  is  mainly driven by the
 difference  between  the air  temperature over  land and
 that over water.  The  diurnal  variation of solar
 radiation sets up a  temperature contrast between the
 two surfaces because of their  different heat  capaci-
 ties.  Because of the  influence of  the Coriolis  accel-
 eration, the rotation  of  the sea breeze is predomi-
 nantly clockwise.

 The Long Island Sound  to  the north  and the Atlantic
 Ocean to the south  of  Long  Island produce sound
 breezes  and sea breezes,  respectively, resulting in
 complex  circulation patterns over the  island.  Rotary
 spectrum analysis provides  a means  of  distinguishing
 the presence of clockwise and  counterclockwise rota-
 tion components in  the horizontal wind field  and is
an excellent tool to study such interactions.

 This paper  reports  the results of rotary spectrum
 analysis of the wind data collected on the north and
 south shores and in the center of Long Island, New
 York.
 SOURCE OF DATA

 Data from wind direction  and  speed  sensors were
 obtained from towers  located  on  the south shore  of
 Long Island at Tiana  Beach, on the  north shore at
 Shoreham and Jamesport, and in the  center at  Brook-
 haven National Laboratory (BNL).  These  locations  are
 shown in Fig. 1.  Hourly  values  were collected from
 sensors at 75 ft. at  Tiana Beach, 355  ft. at  BNL,
 33 ft. and 150 ft. at the Shoreham  tower and  400 ft.
 at Jamesport tower.   Data for the period July-August
 1975, representative  of summer,  are used in this
 study.  Data for December 1974-January 1975 at 355 ft.
 level at BNL, representative  of  winter, are also
 analyzed to explore the seasonal variation of the
 different spectral components.
METHOD OF ANALYSIS

Gonnella1 used the rotary spectrum to study internal
waves in the ocean.  This type of analysis has since
been applied to the study of the sea breeze regime
along the coast of Oregon by O'Brien and Pillsbury2.
Basically, the method hinges on the fact that the
spectral decomposition gives at each frequency (f) a
sinusoidal wave for each horizontal velocity component.
The east and north component sinusoids together form
an ellipse at each frequency.  Four quantities emerge
in the rotary spectrum analysis:  the mean kinetic
energy or the total spectrum, the rotary coefficient,
and the orientation and stability of the ellipse.  All
these quantities, except the orientation of the
ellipse, are invariant under coordinate rotation.
If Puu and Pvv are the auto-spectra and Puv and Quv are the
cross and quadrature spectra of the east and north
components, u(t) and v(t), of the horizontal velocity,
the following mathematical relationships exist:

The total spectrum:

     Sf = Cf + Af = 1/4 (Puu + Pvv)

where the clockwise spectrum:

     Cf = 1/8 (Puu + Pvv - 2 Quv)

and the anticlockwise spectrum:

     Af = 1/8 (Puu + Pvv + 2 Quv)

The rotary coefficient:

     CR = (Cf - Af)/Sf = -2 Quv/(Puu + Pvv)

The orientation of the ellipse:

     tan 2θ = 2 Puv/(Puu - Pvv)

and the stability of the ellipse:

     |P|² = [(Puu + Pvv)² - 4(Puu Pvv - P²uv)] / [(Puu + Pvv)² - 4 Q²uv]
The rotary coefficient, CR, varies as (1 - ε) where ε
is the eccentricity of the ellipse.  Its numerical
value ranges from 0, indicating unidirectional motion,
to unity, a pure rotary motion.  Further descriptions
of the spectrum may be found in Gonnella1.

For the statistical analysis presented here, a fast
Fourier transform algorithm is used to partition the
sample variance among frequency bands.  The total
length of the data is divided into 5 non-overlapping
segments each consisting of 256 data points (N = 256),
                                                       499

-------
so that the spectral estimates have 10 degrees of
freedom.  This technique is discussed by Hinich and
Clay3.  To avoid end effects, the N points are
smoothed by use of a weighting function of the form
1/2 (1 - cos 2πn/N), n = 1, 2, ..., N.  In addition, the
trend is removed by subtracting a linear function from
the data values.  These data are Fourier-transformed to
derive the complex Fourier coefficients from which the
clockwise and anticlockwise spectra are computed for
each segment, and then averaged over all segments.
Furthermore, these spectral estimates are subjected to
a "double-hanning" smoothing procedure (Jenkins and
Watts4), which increases the reliability of the spec-
tral estimates.
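
A compact way to see how these quantities fit together is the sketch below
(Python with NumPy); the segmenting, detrending, and cosine weighting follow
the description above, while function and variable names are illustrative
rather than those of the original analysis:

    import numpy as np

    def rotary_spectra(u, v, n_seg=256):
        # Clockwise/anticlockwise spectra and rotary coefficient from east (u)
        # and north (v) wind components, following the relations given above.
        u, v = np.asarray(u, float), np.asarray(v, float)
        n_blocks = len(u) // n_seg
        t = np.arange(n_seg)
        window = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(1, n_seg + 1) / n_seg))
        puu = pvv = puv = quv = 0.0
        for k in range(n_blocks):
            us = u[k * n_seg:(k + 1) * n_seg]
            vs = v[k * n_seg:(k + 1) * n_seg]
            # Remove a linear trend, then apply the weighting function.
            us = (us - np.polyval(np.polyfit(t, us, 1), t)) * window
            vs = (vs - np.polyval(np.polyfit(t, vs, 1), t)) * window
            fu, fv = np.fft.rfft(us), np.fft.rfft(vs)
            puu += np.abs(fu) ** 2
            pvv += np.abs(fv) ** 2
            cross = fu * np.conj(fv)
            puv += cross.real                  # co-spectrum P_uv
            quv += cross.imag                  # quadrature spectrum Q_uv
        puu, pvv, puv, quv = (x / n_blocks for x in (puu, pvv, puv, quv))
        sf = 0.25 * (puu + pvv)                # total spectrum
        cf = 0.125 * (puu + pvv - 2.0 * quv)   # clockwise spectrum
        af = 0.125 * (puu + pvv + 2.0 * quv)   # anticlockwise spectrum
        cr = np.divide(cf - af, sf, out=np.zeros_like(sf), where=sf > 0)
        theta = 0.5 * np.degrees(np.arctan2(2.0 * puv, puu - pvv))   # ellipse orientation
        freqs = np.fft.rfftfreq(n_seg, d=1.0)  # cycles per hour for hourly data
        return freqs, sf, cf, af, cr, theta

For hourly data the diurnal band then corresponds to the estimate nearest
f = 1/24 ≈ 0.042 cycles/hr, consistent with the peak discussed below.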
RESULTS

The variance spectra for the wind field at Tiana Beach,
Fig. 2a, show the dominance of the clockwise rotation
at the diurnal and semi-diurnal frequencies.  The
spectra at BNL and Jamesport, Fig. 2b and 2d respec-
tively, do not reveal any anticlockwise rotation at
the diurnal frequency.  The peak corresponding to
f = 0.04 cycles/hr at these elevated sensors is pre-
dominantly a clockwise rotating oscillation, a sea-
breeze phenomenon.  The spectra at the 33 ft. level at
Shoreham (Fig. 2e) show no clear dominance of rotation.
However, at the 150 ft. level (Fig. 2f) the dominance
of clockwise rotation is again seen.  Also included
is the spectrum for winter season at BNL 355 ft. level
(Fig. 2c).  Note the complete absence of diurnal and
semi-diurnal peaks suggesting that the influence of
the large-scale weather features, dominant in the
winter season, overshadows the sea-breeze circulation
generated by the land and water temperature difference.

The rotary spectral statistics are presented in Table
1.  The lower bound on values of the ellipse stability
at the 1% confidence level is 0.63 (Panofsky and
Brier6).  This implies that the chances are 1 in 100
that a stability coefficient of 0.63 or more will be
found by accident.  It is evident from the table that
the ellipse stabilities at Shoreham 150 ft., Tiana
Beach 75 ft., BNL (summer) 355 ft., and Jamesport
400 ft. exceed this limiting value.  Also, CR values
of 0.73 at the BNL (summer) and 0.66 at Jamesport
indicate a strong rotary motion at the upper levels.
At the Shoreham 33 ft. level a CR value close to zero
and insignificant ellipse stability coefficient imply
no dominance of any particular direction in the
oscillation.  At the upper levels, an average value
of the orientation of the major axis of the ellipse
is found to be 76° (from true north) which corresponds
to the orientation of the Long Island coastline.  This
indicates that on eastern Long Island, local circu-
lation patterns are generally aligned in axes parallel
to the shoreline.
                                  TABLE 1

          Rotary Spectral Statistics for the Diurnal Oscillation

Station         Height of      Orientation     Rotary         Ellipse      Cf/Af
                Observa-       of the          Coeffi-        Stabil-
                tion (ft)      Ellipse (θ)     cient (CR)     ity
Shoreham            33            275°            0.07          0.31        1.25
Shoreham           150             83°            0.32          0.88        1.81
Jamesport          400             70°            0.66          0.77        3.88
BNL (Summer)       355             75°            0.73          0.72        4.77
Tiana Beach         75             78°            0.42          0.80        2.45
BNL (Winter)       355            347°            0.24          0.20        1.68
The  effect of  friction on the  sea breeze hodograph
tends  to render the ellipse smaller and considerably
more eccentric.  Large eccentricity values  for the
diurnal oscillation  found  at  lower levels can be
attributed  to  the  influence of  friction at the lower
boundary.   It  should be  recalled that  the rotary co-
efficient values vary as (1 - ε) where ε is the eccen-
tricity.  The  decreasing values  of eccentricity (in-
creasing CR) with  height reflect the decrease in
frictional  influence.  The eccentricity found at Tiana
Beach is less  than that  at the  Shoreham 150 ft.  level.
This may be explained by the  interaction of sound
breeze and  sea breeze on the  north shore.

Haurwitz7 described the relationship between the max-
imum temperature difference between land and water as
a function of  coefficient  of  friction.   He suggested
that without friction the  time difference between max-
imum sea-breeze velocity and maximum land and water
temperature contrast is  6  hours.   With increasing fric-
tion, the time difference  between the  maximum of  the
sea breeze velocity  and  that  of  temperature difference
decreases.  The average shape of the diurnal wave
can be obtained through superposition of  several  diurnal
cycles in the  record,  and  then averaging the specific
value for each phase of  the wave.   When  this is done,
it is found that the wind  speed  maximum  occurs around
17:00-18:00 EST at Tiana Beach and at  the  Shoreham
33 ft. level;  and around  20:00  EST at Shoreham 150ft.,
BNL 355 ft., and Jamesport 400 ft.  levels.   With such
values of friction as  have been  determined  from
observations on land,  the  maximum sea breeze should
occur about three hours after the temperature differ-
ence between land  and water has  reached  a maximum.
Usually, the maximum temperature difference  occurs
about 14:00-15:00 EST.  The decrease of  frictional
effects with height  could  explain the  larger time
difference between maximum wind  speed at Shoreham
150 ft., BNL 355 ft.,  and  Jamesport 400  ft.  levels
and maximum land and water temperature difference,
consistent with the  rotary coefficient values.
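
The superposed-epoch averaging described in this paragraph amounts to grouping
the hourly record by hour of day and averaging; a minimal sketch (Python with
NumPy, with names that are illustrative only) is:

    import numpy as np

    def composite_diurnal_cycle(hourly_speed, start_hour=0):
        # Average an hourly wind-speed record by hour of day to obtain the
        # mean shape of the diurnal wave.
        s = np.asarray(hourly_speed, dtype=float)
        hour_of_day = (start_hour + np.arange(len(s))) % 24
        return np.array([s[hour_of_day == h].mean() for h in range(24)])

    # The hour of the composite maximum is the quantity compared above with the
    # 14:00-15:00 EST maximum of the land-water temperature contrast, e.g.:
    #     peak_hour = int(np.argmax(composite_diurnal_cycle(speed_record)))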

A study of the characteristics of  the Atlantic sea
breeze and Long Island sound breeze (LILCO8) concluded
that the sound breezes are generally morning phenomena
and have onset times between  09:00 EST and  12:00 EST.
The average duration of the sound  breezes was about
three hours after which they were  generally  destroyed
by the strengthening of the Atlantic sea breeze.   They
also found that the  stronger Atlantic sea breeze is
capable of overrunning the sound  breeze completely,
resulting in a flow reversal from on-shore to off-shore
near Shoreham.   Dominance  of clockwise rotating os-
cillation in the rotary spectra  at  upper levels lends
support to their observation.   The  dynamics  of the in-
teraction of the sea and sound breezes at Shoreham
probably account for the lack of  an easily discernable
rotation pattern.
SUMMARY AND CONCLUSIONS

A relatively new descriptive technique to analyze
vector time series, the so-called rotary spectrum, has
been applied to study the sea-breeze circulation over
eastern Long Island, New York.  The results indicate
a clear dominance of the sea breeze at levels above
150 ft.  The hodographs show a pronounced eccentricity
at lower levels, a manifestation of friction.  The
observed phase differences between occurrence of max-
imum wind speed over land and maximum land and water
temperature difference are consistent with the sea-
breeze dynamic theories.  Furthermore, as to be ex-
pected, during the winter season the local sea-breeze
pattern set up by the land and water temperature con-
trast has been completely overshadowed by large-scale
weather features.  In future studies, it would be in-
teresting to compute the trajectories of the wind
field over Long Island to obtain more quantitative
                                                       500

-------
information on the nature of sound and sea breezes.
ACKNOWLEDGEMENTS

Thanks are due to Mr. George Martin of the Long Island
Lighting Company and Ms.  Constance Nagle of the Brook-
haven National Laboratory for providing us with the
necessary data.
REFERENCES

1.  Gonnella, J. (1972)  A rotary-component method
    for analyzing meteorological and oceanographic
    vector time series.  Deep Sea Res. 19, 833-846.

2.  O'Brien, J. J. and Pillsbury, R. D. (1974)
    Rotary wind spectra in a sea breeze regime.
    J. App. Meteor., 13, 820-825.

3.  Hinich, M. J. and Clay, C. S. (1968)  The appli-
    cation of the discrete Fourier transform in the
    estimation of power spectra, coherence and bi-
    spectra of geophysical data.  Rev. Geophys. 6,
    347-363.

4.  Jenkins, G. M. and Watts, D. G. (1968)  Spectral
    analysis and its applications.  Holden-Day,
    San Francisco.

5.  Gossard, E. E. and Noonkester, V. R. (1968)  A
    guide to digital computation and use of power
    spectra and cross-power spectra.  Naval Electron-
    ics Laboratory Center for command control and
    communications, NELC Technical Document 20,
    San Diego.

6.  Panofsky, H. A. and Brier, G. W. (1965)  Some
    applications of statistics to meteorology.  First
    ed., the Pennsylvania State University, University
    Park, Pennsylvania.  224 pp.

7.  Haurwitz, B.  (1947)  Comments on the sea-breeze
    circulation.  J. Meteor., 4, 1-8.

8.  LILCO (1975)  Jamesport Nuclear Power Station -
    Applicant's Environmental Report.
                               Fig.  1  Location of the  specific measurement sites
                                       used  for this  study on Long Island, New York.

                                                      501

-------
Fig. 2  Rotary Spectral Density Estimates:
        a)  Spectrum of the horizontal wind  during
            summer at Tiana Beach at the 75  ft.  level.
        b)  Spectrum of the horizontal wind  during
            summer at Brookhaven National Laboratory
            at the 355 ft. level.
        c)  Spectrum of the horizontal wind  during
            winter at Brookhaven National Laboratory
            at the 355 ft. level.
        d)  Spectrum of the horizontal wind  during
            summer at Jamesport at the 400 ft.  level.
        e)  Spectrum of the horizontal wind  during
            summer at Shoreham at the 33 ft.  level.
        f)  Spectrum of the horizontal wind  during
            summer at Shoreham at the 150 ft.  level.
The solid lines represent the anticlockwise  spectrum
and the dashed lines the clockwise spectrum.   If  S(f)
is the true spectral estimate and Sf is the  estimated
value, then there is a 90% confidence (for 10  degrees
of freedom set by a chi-squared distribution)  that
0.55 Sf
-------
                      A NUMERICAL MODEL FOR STABLY STRATIFIED FLOW AROUND COMPLEX TERRAIN*


                                                  James J.  Riley
                                                   Hsien-Ta Liu
                                                 Edward W.  Geller
                                               Flow Research, Inc.
                                                Kent, Washington
A computer program has  been developed,  based on an ex-
pansion suggested by  Drazin (1961)1 and Lilly (1973)2
to compute three-dimensional stratified flow around
complex terrain  for the case of very strong stratifica-
tion (small  internal  Froude number).  Also, laboratory
experiments  were performed for strongly stratified flow
past three different  terrain models.  Preliminary com-
parisons  of  the  results of the computer program and the
laboratory modeling indicate that the computed results
are in fair  agreement with the experiments.  Discre-
pancies are  probably  attributable mainly to the separa-
ted wake  in  the  lee of  the models.  Other possible
sources of error are  discussed in some detail.

                 1.  Introduction

Assessment of  the environmental impact of the release
of pollutants  into the  atmosphere involves the estima-
tion of diffusion patterns under atmospheric conditions
ranging from average  to extreme.  A detailed knowledge
of the wind  field is  important in the estimation of dif-
fusion patterns, especially if the region of release is
characterized  by complex terrain.  Thus, in the assess-
ment of pollution effects, the understanding and pre-
diction of local wind fields is often very important.

One approach to  understanding and predicting local wind
fields is numerical simulation.  However, the numerical
simulation of  three-dimensional stratified flow over
complex terrain  is a  very difficult task.  This diffi-
culty is  a result of  numerical complications associated
with stratification effects and complex boundaries, and
also of the  limitations imposed by the core size and
cycle time of  present day computers.  Thus, exploration
of certain limiting conditions under which the physical,
mathematical and numerical problems can be simplified
is useful.   One  such  limit is that of very large inter-
nal Froude number, i.e.,  weak stratification, where the
tools of  three-dimensional potential-flow theory are
often available.  Another limit is very small internal
Froude number, or strong stratification.

When fluid is strongly stably stratified, vertical mo-
tions are heavily constrained and fluid elements tend
to remain in their horizontal planes.  The degree to
which they do remain is measured by the ratio of their
initial kinetic energy, (1/2)ρcU², to the potential energy
required to lift the fluid element over or around the
obstacle, (1/2)g(dρ/dx3)h².  Here, h is the characteristic
vertical scale of the obstacle, g is the acceleration
due to gravity, and dρ/dx3 is a characteristic ambient
stratification.  The ratio is

     ρcU²/[g(dρ/dx3)h²] = (U/Nh)²,

the square of the internal Froude number, where

     N = [(g/ρc)(dρ/dx3)]^(1/2)

is a characteristic Brunt-Vaisala frequency of the am-
bient fluid.  For strongly stratified flows (F → 0),
Drazin (1961)1 and, later, Lilly (1973)2 have proposed a
formal expansion in F², which predicts that to the low-
est order, the flow resembles two-dimensional (hori-
zontal) flow around contours of terrain at a given
level.  The deviations from this two-dimensional flow
can be determined from the higher order terms in a
power series.

*This work is supported by the Environmental Protection
 Agency, Research Triangle Park, North Carolina, Con-
 tract No. 68-02-1293.

                                                           In the work discussed in this presentation, we have
                                                           developed computer programs to solve the equations re-
                                                           sulting from the expansion suggested by Drazin and
                                                           Lilly.  We have also performed laboratory studies of
                                                           stratified flows past simple terrain configurations to
                                                           validate the numerical programs.  Finally, we have made
                                                           preliminary comparisons of theory and experiment.
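As a point of orientation (this illustration is ours, not part of the
original paper), the internal Froude number for a given flow and terrain
feature follows directly from the definitions in the preceding paragraphs;
the short Python sketch below evaluates N and F for assumed values of the
free-stream speed, obstacle height, and ambient density gradient.

    import math

    def brunt_vaisala(g, rho_c, drho_dx3):
        """Characteristic Brunt-Vaisala frequency N = sqrt((g/rho_c)|d rho_c/dx_3|)."""
        return math.sqrt((g / rho_c) * abs(drho_dx3))

    def internal_froude(U, N, h):
        """Internal Froude number F = U/(N h)."""
        return U / (N * h)

    # Illustrative values only (not from the paper): a 300 m hill in a
    # stably stratified atmospheric layer.
    g = 9.81            # m/s^2, acceleration due to gravity
    rho_c = 1.2         # kg/m^3, ambient density
    drho_dx3 = -1.2e-4  # kg/m^4, ambient density gradient (stable)
    U = 3.0             # m/s, free-stream speed
    h = 300.0           # m, vertical scale of the terrain feature

    N = brunt_vaisala(g, rho_c, drho_dx3)
    F = internal_froude(U, N, h)
    print(f"N = {N:.4f} 1/s, F = {F:.3f}")  # small F indicates strong stratification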

                                                                   2. Brief Description of the Theory

                                                           Consider the steady-state flow past a three-dimensional
                                                           terrain feature with a typical vertical scale, h, and
                                                           horizontal scale, L.  We assume the oncoming (free-
                                                           stream) flow has characteristic velocity, U, and char-
                                                           acteristic stratification, dp /dx_, which is a constant.
                                                           (The coordinate system is oriented with positive x.j
                                                           vertically upwards).  We will also make the Boussinesq
                                                           approximation and will neglect viscous (turbulent) ef-
                                                           fects.  The equations of motion are (see Phillips,
                                                           1966)3:
                                                                                             g   (0,0,-g)
                                                           and
                                                                                u-  =  0
                                                                        u •  Vp + u, -4P-  =  0.
                                                                                  3 dx.
                                                                                                             (2.1)
                                                                                                             (2.2)
                                                                                                             (2.3)
                                                           Here, u is the velocity vector, p is the density fluc-
                                                           tuation about the ambient p, and p is the pressure per-
                                                           turbation about the ambient.  The vertical displacement
                                                           of a fluid element, 
-------
balance in the vertical momentum equation.  Then,

         ρ  ~  ρ_c U² / (g h) .                           (2.5e)

To scale u_3, we assume that u_3 dρ_c/dx_3 and either
u_1 ∂ρ/∂x_1 or u_2 ∂ρ/∂x_2 (or both) are in approximate
balance in equation (2.3), which results in

         u_3  ~  U F² ,                                   (2.5f)

where
         F  =  U / (N h)                                  (2.6a)

is the Froude number,

         N  =  [ (g/ρ_c) |dρ_c/dx_3| ]^(1/2)              (2.6b)

is the characteristic Brunt-Vaisala frequency, and where
we scale the ambient stratification (assumed horizontally
uniform) by its characteristic value,

         dρ_c/dx_3 .                                      (2.6c)

From (2.4), the scaling for ψ is

         ψ  ~  h F² .                                     (2.6d)

Substituting (2.5) into the horizontal component of
(2.1), we find

   u_H·∇u_H + F² u_3 ∂u_H/∂x_3  =  -∇_H p .               (2.7)

The vertical component of (2.1) becomes

   F² ( u_H·∇u_3 + F² u_3 ∂u_3/∂x_3 )  =  -∂p/∂x_3 - ρ .  (2.8)

Continuity is now expressed as

   ∇_H·u_H + F² ∂u_3/∂x_3  =  0 ,                         (2.9)

and the incompressibility condition is

   u_H·∇ρ + F² u_3 ∂ρ/∂x_3  =  u_3 .                      (2.10)

Finally, the equation for ψ, the displacement, becomes

   u_H·∇ψ + F² u_3 ∂ψ/∂x_3  =  u_3 .                      (2.11)

Next, we expand the dependent variables in powers of
the small parameter, ε = F², i.e.,

   u  =  Σ_{n=0}^{∞} ε^n u^(n) ,  and similarly
         for p, ρ, and ψ.                                 (2.12)

The resulting equations to the lowest order are

   u_H^(0)·∇u_H^(0)  =  -∇_H p^(0) ,                      (2.13)

   ∇_H·u_H^(0)  =  0 ,                                    (2.14)

   ∂p^(0)/∂x_3  =  -ρ^(0) ,                               (2.15)

   u_H^(0)·∇ρ^(0)  =  u_3^(0) ,                           (2.16)

and

   u_H^(0)·∇ψ^(0)  =  u_3^(0) .                           (2.17)

Combining (2.16) and (2.17) gives

   u_H^(0)·∇( ψ^(0) - ρ^(0) )  =  0 .

Assuming ρ^(0) and ψ^(0) are zero in the free stream,

   ψ^(0)  =  ρ^(0) ,

or, with (2.15),

   ψ^(0)  =  -∂p^(0)/∂x_3 .                               (2.18)
Note that, from equations (2.13) and (2.14), the equa-
tions for u_H^(0) and p^(0) are those for an inviscid,
two-dimensional flow in  a  horizontal plane.   In parti-
cular, if the incoming  (horizontal)  flow is  irrotational,
then the entire horizontal flow field is irrotational.
Thus, in this case the tools  of the  potential-flow
theory can be employed to  compute the flow.   In a given
horizontal plane, the resulting solution would be that
of a two-dimensional flow  about an obstacle  defined by
the contour of the terrain at that vertical level.  The
vertical displacement can be com-
puted from (2.18), where it is  a  result of the pressure
difference in the flow between  two adjacent  horizontal
layers.  A calculation of  the vertical displacement can
also be used to estimate the  region  of validity of the
results.

Equations (2.13), (2.14) and  (2.18)  were programmed on
the computer for flow past somewhat  arbitrarily shaped
terrain features.  The free-stream flow was  assumed
irrotational, and standard numerical procedures for
computing the two-dimensional potential flow past ar-
bitrarily shaped bodies  were  used.
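As an illustration of the lowest-order calculation (a sketch of ours, not
the authors' program), consider the special case in which every terrain
contour is a circle, so that the horizontal flow in each plane is the
classical potential flow past a circular cylinder.  The fragment below uses
a cone-shaped hill (the conical model of section 3) for the contour radius,
evaluates the Bernoulli pressure in two adjacent horizontal planes, and
estimates the vertical displacement from (2.18) by a finite difference; the
level spacing and sample point are assumed for illustration only.

    U = 1.0  # nondimensional free-stream speed

    def contour_radius(x3):
        """Radius (cm) of the conical model at height x3: 15*(1 - x3/30)."""
        return max(15.0 * (1.0 - x3 / 30.0), 0.0)

    def complex_velocity(z, a):
        """u - i*v for two-dimensional potential flow past a circle of radius a."""
        return U * (1.0 - (a / z) ** 2)

    def pressure(z, a):
        """Bernoulli pressure perturbation (unit density): p = (U^2 - |u|^2) / 2."""
        return 0.5 * (U ** 2 - abs(complex_velocity(z, a)) ** 2)

    def displacement(z, x3, dx3=1.0):
        """psi^(0) = -dp^(0)/dx_3, estimated between two adjacent horizontal planes."""
        p_below = pressure(z, contour_radius(x3 - 0.5 * dx3))
        p_above = pressure(z, contour_radius(x3 + 0.5 * dx3))
        return -(p_above - p_below) / dx3

    # A point a short distance upstream of the model, on the mid-level x3 = 15 cm:
    z = complex(-20.0, 5.0)  # cm, outside the contour at both neighboring levels
    print(f"psi^(0) = {displacement(z, 15.0):+.4f}")  # positive: the element rises

In a full calculation the circular contour would be replaced by the
numerically computed potential flow past the arbitrary contour at each
level, as described above.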

         3.  Description  of the  Experiment

The experimental setup was basically the same as that
discussed in Flow Research Report No.  57 (Liu and Lin,
1975)4.  In addition to the idealized terrain model,
which has been used for detailed studies in the past
(see Flow Research Report No. 29)5 and is defined by

   x_3  =  17 exp[ -.0008513(x_1 - 61)² - .01197(x_2 - 18.82)² ]
         + 17 exp[ -.0008513(x_1 - 61)² - .01197(x_2 + 18.82)² ]  +  16 ,   (3.1)

we used a conically shaped model,

   x_3  =  30 (1 - r/15) ,    r ≤ 15
   x_3  =  0 ,                r > 15 ,                                      (3.2)
                                                        504

-------
and a Gaussian shaped model,

   x_3  =  30 exp[ -r²/(2a²) ] .                                            (3.3)

Here, r = (x_1² + x_2²)^(1/2) and all distances are measured
in centimeters.

The two latter (new) models were designed to be inter-
changeable with the idealized model.  Neutrally buoyant
dyes, each of a different color, were released through
small stainless-steel tubes (.3 mm I.D.) at three levels
upstream of the model.  Three plumes spaced in the hori-
zontal were released in each level.  The plume trajec-
tories were photographed, and then analyzed for later
comparisons with analytical results.  Figure 1 shows a
typical side view of the streak-line patterns for flow
past the Gaussian peak.
     Figure 1.  The Flow Patterns Traced by Neutrally
                Buoyant Dye in the Vicinity of a
                Three-Dimensional Gaussian-Shaped
                Model.  N = .135 Hz, U = 4 cm/s,
                F_f = .97.
Note that in each horizontal plane,  three streak lines
were released.   However,  in the side view, the three
are difficult to distinguish, especially in the lowest
Froude number cases.   This difficulty is somewhat com-
pounded by the slight vertical spread of each streak
line.  However, in a given horizontal plane, the inner-
most streak line is displaced more than the others, so
that its displacement is  easily detectable in the photo-
graphs.  Thus,  when comparisons were made, we used the
innermost streak line.

The choice of the model conditions was based on the
following criterion.   The expansion can only remain
valid as long as streamlines do not  cross in the verti-
cal.  This crossing would occur, for^example, if
$(x->)> J (x +Ax )+Ax .  Thus, -Ax. >  i^(x,+Ax-)-iJ)(x-), or
   -'333       '    J      j->     -5
in the limit, as Ax, -* 0,
                         3x, —

 When one considers both  upward  and  downward  displace-
 ments,  this condition  generalizes to
                        dip
                              >  1.
 Thus, in nondimensional  terms,  a  necessary  condition for
 the validity of the expansion is
                               <  1.
                                                   (3.4)
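Once ψ^(0) has been evaluated on a set of levels, criterion (3.4) can be
checked pointwise; the short sketch below (ours, with placeholder values
rather than data from the experiments) flags the portion of a vertical
column where the expansion is expected to remain valid.

    import numpy as np

    def expansion_valid(psi_column, dx3, froude):
        """True where F^2 |d psi/d x_3| < 1 along a column of equally spaced levels."""
        dpsi_dx3 = np.gradient(psi_column, dx3)
        return froude ** 2 * np.abs(dpsi_dx3) < 1.0

    psi = np.array([0.00, 0.08, 0.20, 0.45, 0.90])  # illustrative psi^(0) values
    print(expansion_valid(psi, dx3=0.25, froude=0.3))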
 We selected the various parameters  in  the  experiments  so
 that  (3.4) was satisfied  over  a  large  portion  of  the
 vertical region of interest.  The Froude number range of the experiments was roughly from .05 to .5.
                                                          4.  Comparisons of Experimental and Numerical Results

                                                           The general behavior of the streak lines can be seen
                                                           by examination of figure 1 .   In the middle horizontal
                                                           plane, consider the inner streak line, which exhibits
                                                           the largest vertical displacement.  As a fluid element
                                                           approaches the model, it slows down in a manner similar
                                                           to an element in a two-dimensional flow about a
                                                           cylinder.   Simultaneously, the element experiences an
                                                           upward pressure force, causing it to rise upward (see
                                                           equation 2.18).  As the element starts around the
                                                           mountain,  it accelerates, the vertical pressure force
                                                           changes direction, and the element is displaced down-
                                                           ward.  The flow separates near the point of maximum
                                                           lateral extension of the model.  Past the midpoint of
                                                           the mountain, the elements return to their equilibrium
                                                           levels, and are entrained into the wake of the model.
                                                           For the Froude number range in the experiments, the
                                                           wake flow in the lee of the model appeared to consist
                                                           of turbulent, quasi-horizontal eddies, whose vertical
                                                           velocity fluctuations were rapidly decaying with down-
                                                           stream distance.

                                                           The incoming flow was slightly unsteady.  We observed
                                                           that the inner streak lines slowly oscillated from one
                                                           side of the mountain to the other.  This oscillation
                                                           often produced an inner streak-line pattern as sketched
                                                           in figure  2.
                                                               Figure 2.  Sketch of the Instantaneous  Streak-
                                                                         Line Pattern
                                                            One  possible  explanation  of  this  phenomenon  is  the
                                                            following.   For  two-dimensional  flow  past  a  cylinder,
                                                            turbulent  vortex streets  are observed in the Reynolds
                                                            number range of about 60 ≤ R ≤ 5000, where the Reynolds
                                                            number  R is  UD/v,  D  is the diameter of  the cylinder,
                                                            and  v is the  kinematic viscosity.  For  our case,  R  is
                                                            typically

                                                                 R  ≈  (4 cm/sec)(10 cm) / (.01 cm²/sec)  ≈  4000 ,
                                                           which is in this range.   The vortex motion is accom-
                                                           panied by movement of the stagnation points,  which in
                                                           turn causes the incoming flow to oscillate slightly.
                                                           For R in this range, the Strouhal number,  defined by
                                                           S = nD/U, where n is the vortex shedding frequency in
                                                           radians/sec,  is approximately .21.  Thus,
                                                                 n/2π  ≈  (.21 × 4 cm/sec) / (2π × 10 cm)  ≈  .013 cycles/sec.
                                                           This value corresponds roughly to the frequency of the
                                                           oscillations noted in the experiments.
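The order-of-magnitude estimates above are easily reproduced; the snippet
below (ours) uses the nominal values quoted in the text and follows the
paper's convention that the shedding frequency n in S = nD/U is expressed
in radians per second.

    import math

    U = 4.0    # cm/s, free-stream (towing) speed
    D = 10.0   # cm, effective diameter of the model contour
    nu = 0.01  # cm^2/s, kinematic viscosity of water
    S = 0.21   # Strouhal number

    R = U * D / nu            # Reynolds number, ~4000 (vortex-street range)
    n = S * U / D             # shedding frequency, radians/s
    f = n / (2.0 * math.pi)   # cycles/s, roughly the observed oscillation frequency
    print(f"R = {R:.0f}, f = {f:.3f} cycles/sec")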

                                                           Figure 3 shows streak lines in the horizontal plane,
                                                            x_3 = 13.2 cm (the middle plane) for the Gaussian model
                                                        505

-------
with F = .152.  Also shown are  the  numerical predic-
tions.  In addition, the contour  of the  model at the
free-stream level of the plumes is  displayed.  Note that
the numerical calculation tends to  underpredict streak-
line displacement.  This discrepancy is  probably the re-
sult of mainly two effects.   The  first is the displace-
ment effect of the boundary  layer,  which is not taken
into account in the inviscid numerical model.  The
second and more important effect  is the  displacement
effect of the separated wake.   These two effects to-
gether produce an "apparent" body,  as sketched in
figure 4a.

   ------  NUMERICAL RESULTS
   -·-·-·  NUMERICAL (WITH SIMPLIFIED WAKE MODEL)
   ~~~~~~  EXPERIMENTAL RESULTS

        Figure 3.  Comparison of Experimental & Numerically Computed
                   Streak Lines in the Horizontal Plane
                   (Gaussian Model, F_f = .97)
           Figure 4a. Apparent Body Shape

When viscous terms are added to the scaling arguments
presented  in  section 2, one finds that the lowest order
solution is no longer two-dimensional.  Thus,  the con-
clusions drawn above could be modified somewhat  because
of the three-dimensional nature of the boundary  layer.

The displacement  effect of the separated wake  was crude-
ly modeled by extending the body, as shown in  figure 4b.
Figure 3 also shows the results of a calculation using
this body  shape instead of the circular shape.   The mod-
ification  of  the  streak lines, especially the  outermost
ones, is noticeable.  However, from the discussion
above, the model  suggested in figure 4b is probably more
adequate for  the  full-scale case than the laboratory
case.  Also,  the  effect of the boundary layer  may have
to be taken into  account to obtain close agreement be-
tween the  numerical and experimental results.

The width  of  the  tank is approximately 120 cm, so  it is
possible that the sidewalls influence the flow field
near the models.   Sidewalls were not included  in  the
numerical  calculations for the Gaussian and conical
models.  However,  the horizontal displacement of  the
streamline whose  free-stream position is at the side-
walls was  computed.   For the Gaussian and conical models
the displacement  of this streamline was negligible for
the cases  computed.   Thus,  for these cases,  we can as-
sume that  the effect of sidewalls is unimportant.

Figure 5 shows  the experimentally determined vertical
displacement  for  the conical model with F = .109.  Also
shown in this plot is  the numerical prediction for the
innermost  plume in the middle level.   The numerical
calculation predicts a very slight rise in the plume as
it approaches  the  mountain,  which is  also discernible in
the experiments.   The  distance that the plume drops is
also predicted  fairly  well.   However,  the asymmetry of
the experimental  results is  missed entirely.   This dis-
crepancy is again probably  attributable to the neglect
of the boundary layer  and wake effects.
                                                                 SOLID LINES - EXPERIMENTAL RESULTS
                                                                 DOTTED LINE - NUMERICAL RESULTS
       Figure 4b.  Model for the Separated Wake
 Note that for a two-dimensional flow past  a  circular
 cylinder, if the Reynolds number is subcritical  (i.e.,

 below approximately 3 x 10^5), the boundary layer is
 laminar, and it separates at about 80° from  the  front
 stagnation point (Schlichting, 1960)6.  Since the Rey-
 nolds numbers for the experiments were an  order  of mag-
 nitude less than the critical value, it  is reasonable
 to assume that the boundary layer was laminar  and sepa-
 rated before the point of maximum lateral  extension  of
 the body.  For two-dimensional flow past a circular  cy-
 linder, if the Reynolds number is supercritical,  the
 boundary layer is turbulent, and separation  probably
 occurs just past the point where the cross section
 starts to converge.  Thus, in the full-scale case, where
 Reynolds numbers will usually be several orders of mag-
 nitude larger than the critical value, separation is
 likely to occur just past the point of maximum lateral
 extent.
     Figure 5.  Comparison of Experimental & Numerically Computed Results
                for Vertical Displacement (Conical Model, F_f = .7)
Figure 6 shows  similar  experimental plots for the Gaus-
sian model at F = .152.  Also shown are the numerical
predictions for the  innermost  plume for each of the
three levels.   The agreement and explanation of the re-
sults are very  similar  to the  previous case.

Finally, figure 7 shows  comparisons of experimentally
observed and numerically computed streak lines for the
idealized terrain case,  with F = .218.   The dye was
released at approximately x_3 = 11.5 cm.  Note that the
                                                         506

-------
       SHADED AREAS - EXPERIMENTAL RESULTS
       ----------  NUMERICAL RESULTS

       Figure 6.  Comparison of Experimental & Numerically Computed
                  Results for Vertical Displacement
                  (Gaussian Model, F_f = .97)

       ----------  NUMERICAL RESULTS
       ----------  EXPERIMENTAL RESULTS

       Figure 7.  Comparison of Experimental & Numerically Computed Streak
                  Lines in the Horizontal Plane Defined by x_3 = 11.5 cm
                  (Idealized Terrain Model, F_f = 1.37)

calculation included separation  in  the  wake,  as  discus-
sed above.   Comparisons for this  case are  much better
than for the previous cases because the crude wake
modeling for the potential flow  calculation closely
matched the real flow.  In the calculations for  the pre-
vious cases, the assumption that  the separation  stream
line is straight and parallel  to  the upstream flow di-
rection was not supported by  the  experimental results
which indicate a diverging wake  region.

An examination of the side view  shows sizeable vertical
displacement as the stream lines  traverse  the ridge.
This is accompanied by some motion  towards the ridge,
resulting in the inner plume  appearing  to  cut through
the terrain in the top view.   As  the fluid comes over
the ridge,  it tends to fall below its ambient level,
thus forcing the fluid laterally  away from the ridge,
and producing the slight bulge seen in  the figure above
the ridge.   This bulge may also be  attributable  to a
boundary layer separation bubble.   According to  the
potential flow calculation, the boundary layer is sub-
jected to an adverse pressure  gradient  near the  most
upstream point on the model.   Therefore, separation of
the laminar boundary layer is  likely.

Finally, calculations of the  streamline at the location
of the tank sidewall show that its  lateral displacement
is significant.   Thus sidewall effects, which were
neglected in the calculation,  could be  of  some impor-
tance in this case.

  5. Conclusions and Suggestions  for Future Studies

This study  was intended to be  a very preliminary examin-
ation of the use of the scaling suggested  by  Drazin

(1961)1 and Lilly (1973)2 for the case of very low
Froude-number,  three-dimensional flow over complex ter-
rain.  Preliminary  results  indicate the following:
(i) For the  Froude  number regime studied, the basic
scaling suggested was  appropriate,  and the flow did
resemble two-dimensional  flow around contours of the
terrain model at the appropriate level.
(ii) To improve the numerical model, the inclusion of
at least two effects is of  primary  importance.  They
are: (a) the displacement effect of the boundary layer,
and, more importantly,  (b)  the displacement effect of
the separated wake.
(iii) The prediction of vertical displacement was
roughly valid for the  case  computed, considering that
the effects  discussed  in  (ii) were  not modeled.
(iv)  The accuracy  of  the lowest order solution depends
strongly on  the type of terrain feature considered, as
well as the vertical level,  the lateral distance from
the terrain  feature, and,  of course, the Froude number.
For a given  Froude  number,  the agreement between the
numerical model and experiment was  much better for the
idealized complex peak than for the Gaussian and coni-
cal models.
(v)  The computed vertical displacement can be used to
estimate regions of applicability of the lowest order
solution.
(vi) Slight unsteadiness  in the oncoming flow may be a
result of turbulent vortex  shedding in the lee of the
models.

One obvious improvement can be made in the numerical
model.  The separated wake  can be crudely modeled by
standard techniques used  in aerodynamics to compute
flow past two-dimensional bodies.   (Note,  however,
that one may have to take into account the three-
dimensional nature  of the boundary  layer).

Several other improvements  are also possible.   First,
one could include the vertical and  horizontal shearing
of the free-stream  flow.  Second, the atmospheric
boundary layer could be modeled.  This modeling can be
accomplished most simply  by using the computed invis-
cid flow to  derive  a turbulent boundary layer model.  A
more sophisticated  approach would allow the computed
boundary layer to react back on the inviscid flow.
Third, the scaling  analysis  could include the effect of
atmospheric compressibility,  although this effect
shouldn't be too important  because  the vertical motions
are weak.  Fourth,  Coriolis  forces  could be included.
Fifth, as discussed by Drazin and Lilly,  the scaling
breaks down near the model  peaks  (because the local
scale height is very small,  and,  therefore,  the local
Froude number is very large).   An investigation of  the
coupling of  the present numerical model with some other
model near the mountain peaks could be performed.
Sixth, turbulent diffusion  could  be modeled in the
plume dispersion process  with the turbulent diffusivity
related to the local Richardson number.  Finally, it is
implicit in the scaling analysis that dρ_c/dx_3 character-
ize the complete density profile.  For example, in re-
gions where dρ/dx_3 is very small compared to dρ_c/dx_3,
the expansion will  probably  break down.  So the case of
a two-layer fluid (each layer having a different constant
density) cannot be  treated with the present scaling.
Thus, rescaling the equations to  include these more
general cases would be useful.

                      References

1. Drazin, P. G. (1961), Tellus 13, 239-251.
2. Lilly, D.K. (1973) Flow  Research Note No.  40.
3. Phillips, O.M. (1966), The Dynamics of the Upper
   Ocean, Cambridge University Press.
4. Liu,  H.T. and Lin, J.T.  (1975),  Flow Research
   Report No. 57.
5. Lin,  J.T., Liu, H.T. and  Pao,  Y.H.  (1974),  Flow
   Research Report No. 29.
6. Schlichting, H. (1960), Boundary Layer Theory, McGraw-Hill.
                                                        507

-------
                           HYDRODYNAMIC AND WATER QUALITY MODELING

                         IN THE OPEN OCEAN USING MULTIPLE GRID SIZES
                           Priya J. Wickramaratne, James W. Demenkow,
                        Stanley G. Chamberlain and Janice D. Callahan
                                 Environmental Systems Analysis
                            Oceanographic & Environmental Services
                                       Raytheon Company
                                   Portsmouth, Rhode Island
                  ABSTRACT

This paper discusses the problems and limita-
tions of the application of hydrodynamic and
water quality models to open ocean areas in
which varying degrees of resolution are re-
quired.  In applications involving power plant
or wastewater treatment discharge into large
bodies of water, a higher degree of spatial
resolution is often required near the dis-
charge location.  The models have been execu-
ted using two different grid sizes in order
to obtain the desired resolution while re-
ducing overall computer costs.  The grids were
intermeshed using a technique in which the
length scale of the larger grid was an even
multiple of the smaller grid.  The results
obtained using the larger grid were used to
generate boundary conditions for the smaller
grid.  This was easily accomplished since the
length scale of the larger grid was taken
to be an even multiple of the smaller grid.

                INTRODUCTION

The modeling of environmental problems per-
taining to water pollution frequently requires
a greater level of detail in certain geograp-
hical regions,  while not requiring as much in
the rest of the study area.   For example, a
model of a discharge plume would require con-
siderably more detail in the vicinity of the
outfall, where the concentration gradients are
the steepest, and less detail as one moves
away from the outfall, where the gradients are
considerably less steep.

Many of these water quality models use a
finite difference technique, where the study
area is overlaid by a grid of constant cell
size.  Computer costs and core storage size
often make the modeling of the entire area
with a grid fine enough to obtain the desired
details both costly and impractical.  Hence,
the need exists to mesh together a fine and
coarse grid structure which will provide both
the desired detail of the fine grid and the
economic advantages of the coarse grid.

This paper describes a technique to accomplish
the intermeshing of coarse and fine grid
structures in far-field models for any well-
mixed waterway.  An application of the tech-
nique is given  to illustrate how it can be
performed for an open ocean situation.

                 TECHNIQUES

Circulation Model

A circulation model used extensively by
Raytheon is a two-dimensional long wave pro-
pagation model based on Leendertse1.
 The model  consists  of a digital computer
 algorithm  which yields a numerical solution
 to  the  vertically   averaged hydrodynamic
 equations  of  motion.   The equations of motion
 describe water  currents which are driven by
 horizontal pressure gradients produced by
 tidally induced changes in surface elevation.

They include a "continuity equation" and the momentum
equations

  ∂u/∂t + u ∂u/∂x + v ∂u/∂y + g ∂η/∂x
               + g u (u² + v²)^(1/2) / [C² (h + η)]  =  0

  ∂v/∂t + u ∂v/∂x + v ∂v/∂y + g ∂η/∂y
               + g v (u² + v²)^(1/2) / [C² (h + η)]  =  0

where:

  u = vertically-averaged velocity in x direction
  v = vertically-averaged velocity in y direction
  η = incremental tide height about mean value
  h = water depth to reference plane (mlw)
  C = Chezy coefficient
  g = acceleration of gravity
  t = time

These equations are solved numerically by the
multi-operation method, which possesses the
good stability  attributes of  a pure implicit
scheme and the computational efficiency of an
explicit scheme.
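Each implicit sweep of an alternating-direction scheme of this kind reduces
to a set of tridiagonal systems, one per grid row or column.  The sketch
below is an illustration of that linear-algebra kernel only (the standard
Thomas algorithm), not a reproduction of the authors' algorithm.

    def solve_tridiagonal(a, b, c, d):
        """Thomas algorithm for a tridiagonal system with sub-diagonal a, diagonal b,
        super-diagonal c, and right-hand side d (lists of length n; a[0], c[-1] unused)."""
        n = len(d)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Small check with known solution x = [1, 2, 3]:
    print(solve_tridiagonal([0, 1, 1], [4, 4, 4], [1, 1, 0], [6, 12, 14]))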

     Boundary Conditions .  A key to running
the circulation model is the generation of the
appropriate driving forces on  the boundaries.
This paper will concentrate on the generation
of boundary conditions at the boundaries
between the coarse and fine grid regions.
(The outer boundary conditions for the coarse
grid region are specified in the usual way) .

If  the fine grid region is  entirely within the
coarse grid region, as in Figure  1, then  the
fine grid boundary  conditions  can be  found
from the coarse grid  simulation.  Boundary
condition data  for  the fine grid  is obtained
by  first executing  the coarse  grid model  and
saving the computed water levels  in the  cells
which form the  boundaries of  the  fine grid,
at  every time step  over the desired simula-
tion time period.   The values  thus obtained,
are linearly interpolated between spatial
points (at every time step) ,  as shown  in
Figure 1, to give the desired  boundary
                                              508

-------
conditions.   The time step is determined from
the grid size and water depth, with computer
costs  being  used as constraints, to satisfy
the accuracy criterion,

     Δt  <  2 ΔL (gh)^(-1/2)

where  At is  the time step, and AL is the
length of a  grid square.
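A minimal sketch of the spatial interpolation step is given below (our
illustration, not Raytheon's code): water levels saved at the coarse-grid
cells that coincide with the fine-grid boundary are linearly interpolated
onto the intermediate fine-grid boundary points at a single time step,
assuming the coarse spacing is an exact multiple (here 3) of the fine
spacing.

    import numpy as np

    def fine_boundary_from_coarse(coarse_levels, ratio=3):
        """Linearly interpolate water levels saved at the coarse-grid cells along the
        fine-grid boundary onto the fine-grid boundary points for one time step."""
        coarse_levels = np.asarray(coarse_levels, dtype=float)
        coarse_positions = np.arange(len(coarse_levels)) * ratio
        fine_positions = np.arange((len(coarse_levels) - 1) * ratio + 1)
        return np.interp(fine_positions, coarse_positions, coarse_levels)

    # Water levels (ft) at four coarse cells along one side of the fine-grid
    # boundary at a single time step (illustrative numbers only):
    print(fine_boundary_from_coarse([1.20, 1.35, 1.32, 1.28]))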
The basic equation which describes the de-
pendence of the concentration  (C) on the dis-
tance variables (x,y),  time (t), depth (h),
currents (U,V), constituent decay coefficient
(k), diffusion coefficient (E_x, E_y) and dis-
charge rate (S) is:

   ∂(hC)/∂t + ∂(hUC)/∂x + ∂(hVC)/∂y
        =  ∂/∂x( h E_x ∂C/∂x ) + ∂/∂y( h E_y ∂C/∂y )  -  k h C  +  S .
-------
increase the accuracy of the boundary values
for both grids.
	 Fine Grid Outer Boundary

-•-•- Coarse Grid Inner Boundary

  • Boundary Values of Fine Grid From
     Coarse Grid

  X  Boundary Values of Coarse Grid from
      Fine Grid

Figure 2.  Interfacial Boundaries Within
           Fine and Coarse Grid Regions
           for Water Quality Models.

This procedure is repeated for the desired
length of the simulated time, with the
boundary values of the fine and coarse grid
being interchanged at contiguous intervals
of time of length ΔT.


Since mass transfer occurs at the boundaries
between the coarse and fine grid regions,  the
values of the concentrations in the coarse
grid must be allowed to affect the values
of the concentration in the fine grid and
vice versa.  This is the primary motivation
for intermeshing the coarse grid and fine
grid in the manner described.

                APPLICATION

The techniques described in the preceding
section are now illustrated by the applica-
tion to the dispersion of a water quality
constituent discharged into a coastal region.

Circulation Model

    Grid Size  The grid size was selected
to give the desired spatial resolution of
currents and realistic representation of
boundary contours commensurate with computer
costs and capacity.  A coarse grid with a
length of 1/3 nautical mile (2025 feet)
between points was used to model the entire
survey area.  A fine grid with a  length of
1/9 nautical mile  (675 feet) was  used  to ob-
tain the necessary detail in the  immediate
vicinity of the discharge point  (see Figure 3).
Figure 3 .   Fine and Coarse Grid Regions for
           Open Ocean Example

    Time Step  The time step for the coarse
grid was chosen to be 60 seconds.  Based on
a sensitivity analysis (the fine grid was
originally run with a time step of 20 sec-
onds) the time step for the fine grid was
also selected to be 60 seconds to minimize
computer costs.

    Boundary Conditions - Coarse Grid  Tide
heights were specified on all the open
boundaries .   The tide heights on the two
boundaries perpendicular to the coast were
determined from tidal measurements and tide
heights from the nearest National Ocean Sur-
vey tide reference station.  The tide heights
along the boundary parallel to the coast
were determined by linearly interpolating the
values at the two end boundaries .

    Boundary Conditions - Fine Grid  The
boundary conditions for the fine grid were
computed by the coarse grid model.  As shown
in Figure 3, the outside boundary of the fine
grid corresponds to a set of interior grid
points in the coarse grid.  The numbers com-
puted along this interfacial boundary in the
coarse grid were then used as input to the
fine grid model.

Water Quality Model

The grid layout for the water quality model
is the same as that for the circulation
model (shown in Figure 3) for the coarse and
fine grids.   The ambient tidal currents
(U,V) are obtained from the circulation model.
The time step was selected as 15 minutes and
5 minutes for the coarse grid and fine grid,
respectively.

These values were based on the stability
criterion:
          (Δt / ΔL) · max [ |U| + |V| ]  <  1
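Both time-step constraints are simple to evaluate; the sketch below (ours,
with the stability criterion written as reconstructed above and with an
assumed depth and assumed maximum currents) checks the 60-second
circulation time step and the 15-minute water quality time step against the
coarse-grid spacing.

    import math

    def circulation_dt_ok(dt, dL, g, h):
        """Accuracy criterion for the circulation model: dt < 2*dL*(g*h)**-0.5."""
        return dt < 2.0 * dL / math.sqrt(g * h)

    def transport_dt_ok(dt, dL, u_max, v_max):
        """Advective stability criterion (as reconstructed above):
        (dt/dL) * max(|U| + |V|) < 1."""
        return dt * (abs(u_max) + abs(v_max)) / dL < 1.0

    dL = 2025.0  # ft, coarse-grid spacing (1/3 nautical mile)
    g = 32.2     # ft/s^2
    h = 60.0     # ft, representative depth (assumed)
    print(circulation_dt_ok(60.0, dL, g, h))         # 60 s circulation step
    print(transport_dt_ok(15 * 60.0, dL, 1.0, 0.5))  # 15 min transport step, assumed currents (ft/s)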
RESULTS

Circulation Model

The results of  the  application  of  the  coarse
grid / fine grid techniques to the open ocean
example  are best  exemplified in Figures 4
and 5.   Instantaneous  velocities for both
                                             510

-------
grid sizes are displayed in Figure 4 for a
time of left-flowing tidal currents.  The use
of the fine grid allows better delineation
of the coastline with the resulting improve-
ment of resolution near the shore where the
currents are somewhat less (lead in phase)
due to frictional effects.  The fine grid
displays the slower velocities quite nicely
whereas the coarse grid does not.
 Figure 4.
Instantaneous Coarse Grid (above)
and Fine Grid (below) Currents
for Time of Left Flowing Currents
 Water Quality Model

 Figure 5 displays the results of  the water
 quality model.  The need for improved reso-
 lution near the discharge can be  seen by
 noting that the contour lines are less  than
 three grid lines apart  (the coarse grid
 resolution) over a large portion  of the fine
 grid study area.  Also, as noted  from the
 left most portion of the 0.3 contour, the
 plume has sufficiently dispersed  so that
 1/9 mile resolution is no longer  necessary
 beyond the fine grid region.
        ------ 1.0 Contour
        ------ 0.5 Contour
        ...... 0.3 Contour

Figure 5.   Far Field Water Quality
            Distribution for Time of
            Maximum Leftward Plume


                 REFERENCES

1.  Leendertse, J.J., "Aspects of a Compu-
    tational Model for Long-Period Water-
    Wave Propagation, "  Rand Corp.
    Report No. RM-5294-PR, May 1967.

2.  Leendertse, J.J., "A Water Quality Simu-
    lation Model for Well-Mixed Estuaries
    and Coastal Seas:  Volume I, Principles
    of Computation", Rand Corp.  Report No.
    RM-6230-RC, February 1970.

3.  Masch, F.D., N. Narayanan and R.J.
    Brandes, U. Texas, Report Hyd. 12-7104,
    August 1971.

4.  Laevastu, T. and Staff, "A Vertically
    Integrated Hydrodynamical Numerical
    Model, Parts 1-4," U.S. Naval Post-
    graduate School, January 1974.


5.  Chamberlain, S.G., and G.P. Grimsrud,
    "Extraction of Water Quality Information
    from Field Data using Mathematical
    Models", Proceedings Ocean '72 IEEE
    International Conference on Engineering
    in the Ocean Environment, Page 397-401,
    September 1972.

6.  Chamberlain, S.G., and G.P. Grimsrud,
    "Numerical Modeling of Water Circulation
    and Effluent Dispersion,"  Raytheon
    Company Report, December 1971.
                                              511

-------
                                           BLACK RIVER THERMAL ANALYSIS
                  Donald R. Schregardus
          U. S. Environmental Protection Agency
                       Region V
             Michigan-Ohio District Office
                  Fairview Park, Ohio

                   Gary A. Amendela
          U. S. Environmental Protection Agency
                       Region V
             Michigan-Ohio District Office
                  Fairview Park, Ohio
 ABSTRACT

 One-dimensional temperature modeling techniques were
 employed to determine allowable thermal loads from
 United States Steel Corporation Lorain Works for dis-
 charge to the Black River.   The Black River from river
 mile 5.0 to its mouth in Lake Erie at Lorain, Ohio is
 a complex system affected by dilution from Lake Erie,
 recirculation resulting from the location of U. S.
 Steel  water intakes and outfalls,  and abrupt changes in
 the physical dimensions of the river system caused by
 maintenance dredging by the U.  S.  Army Corps of Engi-
 neers.  Based upon physical and hydrologic characteris-
 tics,  the Black River below river mile 5.0 was divided
 into three segments.  A one-dimensional thermal model-
 ing technique was developed and applied to each segment
 taking into account river flow, lake dilution, meteoro-
 logical data, and heat loadings from U. S. Steel.   The
 equations were verified with measured data at critical
 stream flows and used to compute thermal  loads which
 can be discharged by U.  S.  Steel  and still meet appli-
 cable temperature standards.  Practical considerations
 were employed in arriving at the final recommended
 effluent limitations.

 BACKGROUND

 The Black River is a system of natural drains flowing
 23 miles in a generally northerly  direction and empty-
 ing into Lake Erie at Lorain,  Ohio.   The State of  Ohio
 has classified the river as a warm water fishery and
 correspondingly sets temperature standards to maintain
 this classification.  The water quality standards  for
 temperature applicable to the Black River limit the
 temperature increase of the river  attributable to  human
 activity to 5°F (2.8°C)  and set maximum monthly water
 temperatures not to be exceeded.   The above criteria
 are to be achieved at the water quality design flow of
 the stream which has been defined  as the  annual  mini-
 mum seven-day consecutive flow with a recurrence inter-
 val of once in ten years.  In the  lower portion of  the
 Black  River the water quality design flow has been  de-
 termined to be about 21  cfs.
The temperature regime  in  the  lower  portion  of  the
river from mile 5-0 to  its mouth  at  Lake  Erie  is  af-
fected by discharges from  United  States Steel Cor-
poration's Lorain Works, a fully  integrated  steel
plant.  On July 23-26, 1974, Region V - Michigan-Ohio
District Office conducted  a comprehensive field survey
on the Black River in an effort to determine the
effect of U. S. Steel's discharges on  the quality of
water and to obtain data for verification of predic-
tive mathematical models being applied to the  river.
Figure 1  is a  location  map of  the survey  area showing
the U. S. Steel Lorain  Works five river outfalls  and
two intakes and 8 of the 13 stream stations  where
water quality was monitored.   The temperature data
obtained during the survey  indicate  that  thermal  dis-
charges from the U. S.  Steel Lorain  Works result  in
violations of the water quality standards presented
above.  Stream temperatures were  increased by as much
as 15°F by U. S. Steel  outfalls with surface tempera-
tures generally 12°F above ambient temperatures (tem-
peratures measured above U. S.  Steel).  However,
monthly allowable maximum  temperatures were  not ex-
ceeded.

GENERAL CONDITIONS

As the Black River approaches  Lake Erie,  stream level
and water quality become affected by backwaters from
the lake.  Velocity profiles as well as sodium and
chloride data obtained during the July 1974 survey and
another U. S.  EPA survey conducted on  May 2, 1974, in-
dicate that a wedge of  cool lake water flows upstream
along the bottom of the river  beneath  the warmer river
and effluent water (see Figure 2).  Further analysis
of the data reveals that the intruding lake water
causes dilution of river concentrations of sodium and
chloride as far upstream as U. S. Steel intake Wl-3
(RM 3.88).  This lake water intrusion  is  a major com-
ponent in the cooling of heated waters discharged  to
the lower portions of the stream.

Estimates of lake intrusion flow during the July sur-
vey were made using sodium and chloride data.  Twenty-
                       FIGURE 1
                BLACK RIVER LOCATION MAP
     (U. S. Steel Lorain Works outfalls and intakes,
      stream survey stations;
      Segment 1:  RM 5.00 to 3.88
      Segment 2:  RM 3.88 to 2.85
      Segment 3:  RM 2.85 to 2.40)

           FIGURE 2 - STREAM VELOCITY PROFILE
     (River Mile 2.4; velocity profile showing upstream
      flow along the bottom and downstream flow near the
      surface)
                                                        512

-------
four  hour equal  volume composite samples for sodium
and chloride were obtained at top,  middle, and bottom
depths  at five survey stations located between Outfall
001 and the downstream end of the turning basin (Fig-
ure 1).  Because of the conservative nature of these
elements and the large concentration differences be-
tween the river  upstream of U. S. Steel and Lake Erie
these data can be used in a mass equation to estimate
the intrusion flow at each survey station.  The compu-
tations are based upon the assumption that the instream
quantities of both sodium and chloride are affected
only  by French Creek, discharges from U. S. Steel, and
mixing with a known concentration of intrusion water.
This  assumption  is reasonable in that the July survey
was conducted during dry weather so that the runoff was
negligible.  In  addition, stream and discharge flows
and concentrations were relatively  constant during the
survey allowing  the assumption of steady state condi-
tions to be made.   The resulting equation used to esti-
mate the intrusion flow is:

             Q_I  =  Q_R (C_E - C_m) / (C_m - C_I)                  (1)

Where:

     Q_R = river flow assuming no intrusion, cfs

     C_E = expected river concentration assuming no
           intrusion flow, mg/l

     C_m = measured concentration of the river, mg/l

     C_I = concentration of the intruding water, mg/l

The daily intrusion flow at Stations 4, 5, 6, and 7 was
determined by averaging the flow values computed using
the sodium data and the chloride data.  This procedure
was completed twice,  once assuming the sodium and
chloride concentration of the intruding water to be
equal  to the lake concentration and once assuming the
concentration to be equal to the river concentration
measured at the bottom of the next downstream survey
station.  The results of the computations are presented
in Table 1.   The computed intrusion flow at Station 4
using Station 3 bottom concentrations appears unreason-
ably high when compared with other data.  Further re-
view showed that Stations 3 and 4 have nearly the same
concentrations causing the ratio of concentrations  in
Equation 1 to become very large.   When concentrations
are very close, small variations in the data can sig-
nificantly alter the computed flow.
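Equation 1 and the averaging procedure can be illustrated with a short
calculation (ours; the concentrations are placeholders, not survey data):
the intrusion flow is computed separately from the sodium and the chloride
data and the two values are then averaged.

    def intrusion_flow(q_river, c_expected, c_measured, c_intrusion):
        """Equation 1:  Q_I = Q_R * (C_E - C_m) / (C_m - C_I)."""
        return q_river * (c_expected - c_measured) / (c_measured - c_intrusion)

    q_river = 22.6  # cfs, July survey flow upstream of U. S. Steel
    # Placeholder concentrations (mg/l) for a single station:
    q_na = intrusion_flow(q_river, c_expected=60.0, c_measured=40.0, c_intrusion=11.0)
    q_cl = intrusion_flow(q_river, c_expected=150.0, c_measured=95.0, c_intrusion=25.0)
    print(f"average intrusion flow = {(q_na + q_cl) / 2.0:.1f} cfs")

When C_m approaches C_I the denominator goes to zero, which is the
sensitivity noted above for Station 4.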

Based upon physical  and hydrologic characteristics, the
Black River below French Creek was divided into three
segments,  each with relatively uniform thermal proper-
ties (Figure 1).   The upstream segment from Outfall 001
to Intake WI-3 (RM 5.0-3.88) was considered to be a one-
dimensional  stream not yet affected by the intruding
lake water.   Segment  1  averages about 10 feet deep and
160 feet wide.   Water withdrawn at  Intake WI-3 is dis-
charged at Outfalls 001  and 005.   Heated water dis-
charged at Outfall  001  cools  as it  flows downstream.

The second segment  is located between the turning basin
and Intake WI-3 (RM 3.88-2.9).   This  segment  averages
about  15 feet  deep  and 250 feet wide.   Temperatures
were  relatively constant along the  length of  Segment 2
but some horizontal  stratification  did exist.   The tem-
peratures  are  affected  by lake intrusion but  not  to the
same extent  as  the  turning basin.   Outfall  002 dis-
charges to this portion  of the river  and heated river
water enters from  upstream.

The Black  River turning  basin (RM  2.9-2.4)  is the third
    segment.  The turning basin is dredged periodically by
    the U. S. Army Corps of Engineers to a depth of about
    30 feet and averages about 600 feet wide.  Large quant-
    ities of water flow upstream from the lake and mix with
    the heated water discharged from Outfalls 003 and 004
    and the heated water entering from upstream.   Intake
    WI-2, located in Segment 3, supplies the water dis-
    charged at Outfalls 002, 003,  and 004.  Temperatures
    were relatively uniform across the surface; however,
    vertical temperature stratification existed throughout
    the basin during the July 1974 survey.

    MATHEMATICAL MODEL

    Each of the segments described above was analyzed sep-
    arately to determine the allowable thermal loads which
    can be discharged to the river.   Daily steady-state
    conditions were assumed throughout the analysis.  This
    assumption proves reasonable for the Black River be-
    cause diurnal variations of the flows, heat loads and
    upstream river temperature were not significant.  Com-
    plete mixing of the heated discharge with the receiving
    water was also assumed.  The large discharge flow at
    Outfall  001 in relation to the upstream river flow re-
    sults in complete mixing just  below the outfall.  Large
    discharge flows at Outfalls 002,  003,  and 004 resulted
    in a relatively constant horizontal  temperature distri-
    bution a short distance from the respective outfalls.
    Complete vertical mixing was also assumed despite ver-
    tical temperature stratification that  occurred during
the July survey in Segments 2 and 3.  This assumption
    affects only the surface heat  exchange term in the
    energy budget.  Based upon the July 1974 data,  this
    assumption introduces an error of less than 1% to the
    temperature computations.

    The Edinger and Geyer one-dimensional  formulation
    was applied to Segment 1.   The formulation is  based
    upon the concept that a raised temperature resulting
    from a heated discharge will approach  the natural
    stream temperature by the exchange of  heat at  the air-
    water interface.   Assuming the heat added to the water
    body to be thoroughly mixed, the rate  at which the
    temperature changes in the downstream  direction is con-
    sidered proportional  to the product  of an exchange co-
    efficient and the temperature  excess.   Under steady-
    state conditions the equation  used for estimating tem-
    perature downstream of a heated discharge is:

             T  =  E + (T_m - E) e^(-KA/(ρ C_p Q_R))                (2)
    Where:

         Q_R = river flow rate, ft³/day
                           TABLE 1
           Average Computed Intrusion Flow (cfs)

                  River       Lake (1)          Bottom (2)
     Station      Mile      Concentration     Concentration
        7         3.88           45                 69
        6         3.35           95                194
        5         2.85          127                192
        4         2.40          510               1342
    (1)  Intrusion flow was computed by setting the concen-
         tration of sodium and chloride in the intrusion
         flow equal to the measured values in the lake.

    (2)  Intrusion flow was computed by setting the concen-
         tration of sodium and chloride in the intrusion
         flow equal to the measured value at the bottom of
         the next downstream station.
513

-------
      E   =  equilibrium temperature, °F

      K   =  exchange coefficient, BTU/ft²-°F-day

      A   =  surface area of the stream to the point
             where T is determined, ft²

      ρ   =  density of water, 62.4 lbs/ft³

      C_p =  heat capacity of water, 1 BTU/lb-°F

      T_m =  mixed temperature of the stream and the heat-
             ed effluent at the outfall

 The  equilibrium temperature (E)  used in  Equation 2 is
 defined as the temperature at  which the  net  exchange of
 heat at the air water  interface  is zero.
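A worked illustration of Equation 2 is given below (ours; the flow, surface
area, exchange coefficient, and temperatures are assumptions, not the
survey data): the mixed temperature at the outfall decays exponentially
toward the equilibrium temperature as surface area accumulates downstream.

    import math

    RHO = 62.4  # lb/ft^3, density of water
    CP = 1.0    # BTU/lb-F, heat capacity of water

    def downstream_temperature(T_mixed, E, K, area, Q):
        """Equation 2:  T = E + (T_m - E) * exp(-K*A / (rho*Cp*Q_R)).
        K in BTU/ft^2-F-day, area in ft^2, Q in ft^3/day."""
        return E + (T_mixed - E) * math.exp(-K * area / (RHO * CP * Q))

    # Assumed values: river plus discharge flow below the outfall, a reach of
    # surface area 4000 ft x 160 ft, and a typical summer exchange coefficient.
    Q = 130.0 * 86400.0   # ft^3/day
    A = 4000.0 * 160.0    # ft^2
    print(f"{downstream_temperature(T_mixed=95.0, E=78.0, K=150.0, area=A, Q=Q):.1f} F")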

 A  different formulation was employed for Segments  2 and
 3.   The low stream  velocities  and the uniform surface
 temperature distributions of these two segments  indi-
 cate that  a cooling pond formulation would more accur-
 ately represent actual conditions.  In this  formulation,
 a  heat budget equation was constructed.  Under steady-
 state conditions the total heat  content of the segment
 remains constant and the heat  budget equation can  be
solved for the cooling pond temperature.
 The heat budget which applies to both segments is:

          H_O + H_U + H_L  =  H_I + H_D + H_S                       (3)

Where:

     H_O = heat added at the outfalls

     H_I = heat removed by the intakes

     H_U = heat entering at the upstream end of the
           reach

     H_L = heat entering from lake intrusion flow

     H_D = heat leaving at lower end of the reach

     H_S = heat lost at the water surface

The expression used to estimate the surface heat ex-
change, H_S, is:

          H_S  =  K A (T_S - E)                                     (4)

with T_S being the water surface temperature and the
other variables as defined previously.

All heat terms except H_S in Equation 3 represent advec-
tive heat transfer resulting from the transport of
water into or out of the segment.  The heat contained
in the flowing water is given by the general expression:

          H  =  ρ C_p Q T .                                         (5)
In the analysis of Segment 2, H  in Equation 3 is the

heat added at Outfall 002.  There are no industrial
water intakes in Segment 2 therefore H. is zero.   Sub-

stituting Equation k as well as the appropriate advec-
tive heat transfer terms  into Equation 3 and solving
for the Segment 2 temperature results in the following:
                     Vu+0-2T2+0_l Tl
                    	U U  L LL
(6)
            S2     KA+pCptQ.,+0,^0^1


 Subscripts denote where the water came from prior to
 complete mixing in Segment 2 (i.e., upstream, U; Outfall
 002, 2; lake intrusion, L).
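
The arithmetic behind Equation 6 is easy to follow with a short
numerical sketch.  The flows, temperatures, and exchange parameters
below are hypothetical illustrations, not values from the survey;
the function simply evaluates Equation 6, converting the flows from
cfs to ft³/day so that the units match the exchange term.

    RHO = 62.4            # density of water, lb/ft3
    CP = 1.0              # heat capacity of water, BTU/lb-degF
    SEC_PER_DAY = 86400.0

    def segment2_temperature(K, A, E, QU, TU, Q2, T2, QL, TL):
        """Equation 6: steady-state mixed temperature of Segment 2, deg F.
        K in BTU/ft2-degF-day, A in ft2, flows in cfs, temperatures in deg F."""
        qU, q2, qL = QU * SEC_PER_DAY, Q2 * SEC_PER_DAY, QL * SEC_PER_DAY
        numerator = K * A * E + RHO * CP * (qU * TU + q2 * T2 + qL * TL)
        denominator = K * A + RHO * CP * (qU + q2 + qL)
        return numerator / denominator

    # Hypothetical inputs: 21 cfs upstream at 72 F, 30 cfs at 95 F from
    # Outfall 002, 95 cfs of lake intrusion at 68 F, K = 150, A = 1.5e6 ft2,
    # equilibrium temperature E = 72 F.
    T2_mixed = segment2_temperature(150.0, 1.5e6, 72.0,
                                    21.0, 72.0, 30.0, 95.0, 95.0, 68.0)
    print(f"Segment 2 temperature: {T2_mixed:.1f} deg F")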

 A similar analysis was used to derive the temperature
 equation for Segment 3, the turning basin.  Process
 water at the basin temperature is withdrawn at Intake
 WI-2 and discharged at Outfalls 002, 003, and 004.
 Substituting Equation 4 and the advective heat terms
 into Equation 3 and solving for the basin temperature
 yields the expression:
         TS3  =  ...                                        (7)
Where:

      QL = lake flow entering the basin at the down-
           stream end, cfs

      QB = lake water flowing upstream along the bottom
           to the mid-section, cfs

The above equation takes into account that not all lake
intrusion water entering the basin is mixed within this
section.  A portion, QB, flows upstream to Segment 2.

VERIFICATION

Before  Equations 2, 6,  and  7 were used to  determine
allowable thermal  loads at  the  water quality design
flow of the river,  the models were tested  on the Black
River using July data with  the  resulting computed tem-
peratures compared  to measured  values.  The July data
base provided an excellent  test for the thermal  models
because the average measured flow upstream of U.  S.
Steel (22.6 cfs) was  very close to the water quality
design flow for the river (21 cfs), and U. S. Steel was
at maximum production.

For the July verification,  the  equilibrium temperature
(E) for all segments was set equal to the flow-weighted
average of the values measured  at French Creek and Sta-
tion 10 because  there are no significant thermal  dis-
charges upstream of U. S. Steel.   In addition, equilib-
rium temperatures computed  using  procedures  outlined in
Reference 2 and  average daily meteorological  conditions
recorded at Cleveland, Ohio, agreed very well  with the
assumed values.  The  value  for  the exchange  coefficient
(K) was obtained from the report "Effect of Geographi-
cal Location on Cooling Pond Requirements and Perform-
ance" (2).

For computing Segment  1 temperatures,  Equation 2 was
applied first from Outfall  001  to Outfall  005
(RM 5.0-3.92) and then from Outfall 005 to Intake WI-3
(RM 3.92-3.88).  Mixed river temperatures  (Tm) at Out-
falls 001 and 005 were calculated  from measured  efflu-
ent temperature and computed river temperatures  imme-
diately upstream of the outfalls.  The three-day aver-
age computed temperature along with the maximum, mini-
mum, and average measured temperatures are shown in
Figure 3.

Segment 2 temperatures were computed on a  daily  basis,
the same period  for which the lake intrusion  flows were
determined.  In applying  Equation  6 to the Black River
the lake intrusion flow computed  at Station  6  using
Station 5 bottom concentration  was  used.   Correspond-
ingly,  the temperature measured  at  the bottom  of survey
Station 5 was used as  the intrusion water  temperature.
Station 6 intrusion flow was chosen instead  of that
determined at Station 5 because  Station 6  is  located
close to the center of Segment  2  and flows there would
more likely represent the average  intrusion  flow
throughout the segment.  Daily  average temperatures
                                                       514

-------
recorded at Station 7 and for Outfall 002 were also  in-
put to Equation 6.  The average of the three daily
values is plotted in Figure 3.

Black River turning basin temperatures were computed
daily using Equation 7.  Lake intrusion flows calcula-
ted with lake concentrations of sodium and chloride
were used in the computations since flows computed with
bottom concentrations appeared unreasonably high.
Daily average temperatures and flows measured at Out-
falls 003 and 004 were input to Equation 7.  The aver-
age of the top, middle, and bottom temperatures meas-
ured at Station 6 was considered  representative of the
upstream water temperature entering the basin.  The
average computed basin  temperature is presented  in
Figure 3.

The computed temperatures along the entire river agree
well with the measured  values, Figure 3.   In Segment  1,
computed temperatures are generally within the range
of measured values  recorded at each station located
within the segment.  Average computed values at the
lower end of the segment (RM 3.88) agree within 1°F
of the average value measured at Intake WI-3.  The com-
puted Segment 2 temperature was also within 1°F of the
average temperature measured at Station 6 mid-depth.
Computed Segment 3  temperatures agree well with aver-
age measured temperatures throughout the basin.  The
calculated value is about 3°F above the total average
temperature measured throughout the basin and about
2°F below the average temperature recorded in the top
9  feet of the basin where most of the heat is dis-
charged.  This  result was expected in that a portion
of the cool lake water  was not mixing with water  in
the basin but  instead was flowing upstream and mixing
 in Segment 2.

Based upon the  ability  of the computational procedures
to  replicate measured temperatures during  low flow
conditions within  reasonable  limits, the equations de-
veloped  in the  previous section were employed to com-
pute  allowable  thermal  loads  from the U. S. Steel Lor-
ain Works.
PROPOSED THERMAL LOADINGS

In computing allowable thermal loads, the mixed river
temperatures in the segments were set equal to the nat-
ural river temperature plus the maximum temperature
increase permitted in the water quality standards, 5°F.
The discharge temperatures and the associated heat
loads were then determined using the equations devel-
oped and verified above.  The water quality design
flow of the river upstream of U. S. Steel was assumed
in the computations (21 cfs).  To simplify the analy-
sis the temperature of the river upstream of U. S.
Steel, the lake temperature, and the equilibrium tem-
perature were set equal.  This assumption is reason-
able in that there are no significant heat discharges
above U. S. Steel.  The natural water temperature used
in the analysis was obtained from Reference 2 and is
based upon average meteorological conditions recorded
at Cleveland, Ohio during September, the month when
low flows generally occur.  The exact equilibrium tem-
perature is relatively insignificant in computing the
temperature profile due to the fact that the water
quality standards are based upon increases above nat-
ural temperature and maximum temperatures are currently
not exceeded with existing loads.

To compute the proposed heat loads at Outfall 001, the
allowable temperature increase from Intake WI-3 to
Outfall 001 was determined.  This was achieved by
first determining the discharge temperature which
would increase the mixed river temperature at the out-
fall by 5°F and then simulating river temperatures
down to Intake WI-3, using Equation 2.  At Outfall 005
the maximum daily heat load discharged to the river
was assumed to be 15 x 10^6 BTU/hour, slightly greater
than the maximum value recorded during the July survey.
The computed temperature and the river flow at Intake
WI-3 were assumed to mix with the average daily lake
intrusion flow at the intake.  Intrusion water temper-
ature was assumed equal to the lake temperature, there-
fore allowing the maximum heat load to be determined.
                                  FIGURE 3
                   BLACK RIVER TEMPERATURE VERIFICATION
     (Computed average temperatures compared with maximum, average, and
      minimum measured temperatures, in °F, plotted against river mile.
      Station averages are shown at the surface, mid-depth, and bottom;
      basin averages at the surface, top 9 feet, all depths, and bottom.
      U. S. Steel Outfalls 005, 002, 003, and 004, the sampling station
      numbers, and the Segment 2 and Segment 3 boundaries are indicated
      along the river-mile axis.)
                                                       515

-------
The results indicate that a temperature increase of
3.4°F may be imparted to the flow discharged at Outfall
001.  Using the following equation, the allowable heat
load at 001 was determined to be 58 million BTUs per
hour:

                     H = ρCpQ1ΔT1                       (8)

Where:

     Q1  = flow discharged at Outfall 001, cfs

     ΔT1 = allowable discharge temperature increase
           above the intake temperature, °F
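
The conversion from cfs to an hourly heat load in Equation 8 is the
only step where units can be misplaced; the short check below uses a
hypothetical Outfall 001 flow (the discharge rate is not listed in
the text; a flow of about 76 cfs reproduces the 58 x 10^6 BTU/hour
figure together with the 3.4°F rise).

    RHO = 62.4            # lb/ft3
    CP = 1.0              # BTU/lb-degF
    SEC_PER_HOUR = 3600.0

    def heat_load_btu_per_hr(flow_cfs, delta_t_f):
        """Equation 8: H = rho * Cp * Q * dT, with Q converted to ft3/hr."""
        return RHO * CP * (flow_cfs * SEC_PER_HOUR) * delta_t_f

    # Hypothetical 001 discharge of 76 cfs with the allowable 3.4 F rise:
    print(f"{heat_load_btu_per_hr(76.0, 3.4) / 1e6:.0f} million BTU/hr")
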
Equation 6 was used to determine the allowable thermal
loads discharged at Outfall 002.  Setting the tempera-
ture of Segment 2 to 5°F above natural temperature,
Equation 6 was solved for the temperature at Outfall
002.  The intrusion flow determined assuming lake con-
centration was used in the computation to be consist-
ent with the assumption that the intruding water was
at lake temperature.  The results indicate that Out-
fall 002 can discharge at a temperature 16.9°F above
equilibrium temperature.   Assuming the water with-
drawn from the basin at Intake WI-2 to be 5°F above
equilibrium, Outfall 002 can discharge at a tempera-
ture 11.9°F above the intake value or using Equation
8, 123 million BTUs per hour.

A  similar procedure was used to determine the combined
thermal  load discharged from Outfalls 003 and 004  into
the turning basin.  The average temperature of the
turning  basin was set equal to 5°F above equilibrium
and Equation 7 was solved for the net heat discharged
to Outfalls 003 and 004.  Lake  intrusion flow was
assumed equal to the average daily value determined
using lake concentrations, with the intrusion tempera-
ture equal to the lake temperature.  The flow en-
tering from upstream, which was composed of lake flow,
the flow from Outfall 002, French Creek, and the river
above U. S. Steel, was assumed to be 5°F above equili-
brium as set in the previous computation.  The com-
bined heat load for Outfalls 003 and 004 was deter-
mined to be 515 million BTUs per hour.  This corre-
sponds to a ΔT of about 16°F for the total flow of
Outfalls 003 and 004.

Prior to the July 1974 survey, the Government had es-
timated thermal loadings from U. S. Steel to be accep-
table in terms of achieving the 5°F ΔT standard.  These
loadings were proposed as effluent limitations pursu-
ant to settlement of Civil Action C71-445 (N.D. Ohio)
against U. S. Steel (Table 2).

Equations 6 and 7 were employed to determine the re-
sultant temperature of Segments 2 and 3 using the pro-
posed settlement loadings.  All other inputs and flows
were kept the same.  Segment 2 temperature, computed
assuming a thermal load of 67 million BTUs per hour at
Outfall 002, was found to be 3.7°F above the ambient
stream temperature.  With this computed temperature and
600 million BTUs per hour from Outfalls 003 and 004,
Segment 3 was computed to be 5.3°F above ambient.  An
allowable thermal loading of 600 million BTUs per hour
from Outfalls 003 and 004 appears reasonable consider-
ing the sensitivity of the basin temperature to lake
intrusion flow and the uncertainty involved in estima-
ting intrusion flow.  Reduction of thermal loads to 600
million BTUs per hour can be accomplished without sub-
stantial additional capital expenditures, whereas reduc-
tion below that value will require costly additional
cooling towers.

Based upon this analysis, Ohio Water Quality Standards
for temperature can be maintained on the Black River
by reducing U. S. Steel Lorain Works heat loadings to
the values presented herein or those proposed for
settlement of Civil Action C71-445.

REFERENCES

1.  Edinger, J.E., and Geyer, J.C., "Heat Exchange in the
    Environment", Edison Electric Institute, New York, June 1965.

2.  Thackston, E.L., and Parker, Frank L., "Effect of Geographical
    Location on Cooling Pond Requirements and Performance", EPA
    Pub. No. 16130 FDQ 03/71, March 1971.
                         TABLE 2

        U. S. STEEL LORAIN WORKS THERMAL LOADINGS

                      (10^6 BTU/hr)

                 July 1974     Loadings from     Proposed Settlement
   Outfall       Discharge     this Analysis     Civil Action C71-445

   001              177              58                   60
   005               13              15                   10
   002              303             123                   67
   003,004          694             515                  600

   Total           1187             711                  737
                                                      516

-------
                                          WATER MODELING IN OHIO EPA
                  Robert G. Duffy
     Water  Quality Surveillance Section Chief
                      Ohio EPA
                   Columbus, Ohio
                    A. Ben Clymer
                 Consulting Engineer
                    Columbus, Ohio
     Steady-State Stream Modeling in Ohio

Earliest Modeling Efforts
    In 1965 the Ohio Department of Health developed
the Garrett-McAnaney computer program for modeling
steady-state stream quality for a river mainstem.
The equations comprising the model were the Streeter-
Phelps equations plus the equations for mixing at a
node point.  That is, the only parameters modeled
were dissolved oxygen and BOD.

    In 1972-73 the Ohio EPA developed the Clymer-
Duffy computer program which enables a designer to
determine various combinations of allowable loads from
a single sewage treatment plant, such that water
quality standards would not be violated anywhere
downstream.
Features of the Garrett-Clymer-Duffy Models

    Two advanced stream water quality models were
developed by Ohio EPA in 1973 to project allowable
loads for non-conservative parameters (i.e., D.O.,
BOD5, NH3-N).  The two models were a "mainstem" model
and a "mini-basin" model.  The equations used in both
models were obtained by closed-form integration of the
linear constant-coefficient differential equations for
first order kinetics.  Mixing equations were written
at each node point.
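
    The kind of closed-form solution involved is typified by the
classical Streeter-Phelps D.O. deficit equation.  The sketch below
evaluates that textbook solution; the rate constants and reach
parameters are hypothetical and are not taken from the Ohio EPA
programs.

    import math

    def streeter_phelps_deficit(L0, D0, kd, ka, t):
        """Classical Streeter-Phelps D.O. deficit (mg/L) after travel time t (days).
        L0: ultimate BOD at the head of the reach, mg/L
        D0: D.O. deficit at the head of the reach, mg/L
        kd: deoxygenation rate, 1/day;  ka: reaeration rate, 1/day"""
        if abs(ka - kd) < 1e-9:                    # degenerate case ka == kd
            return (kd * t * L0 + D0) * math.exp(-kd * t)
        return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
               + D0 * math.exp(-ka * t)

    # Hypothetical reach: 1.5 days travel time, ultimate BOD 12 mg/L at the
    # upstream node after mixing, saturation D.O. assumed to be 9.1 mg/L.
    deficit = streeter_phelps_deficit(L0=12.0, D0=1.0, kd=0.35, ka=0.60, t=1.5)
    print(f"D.O. at the end of the reach: {9.1 - deficit:.2f} mg/L")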

    A feature unique to the mini-basin model is a
set of equations to calculate the average stream
velocity and depth when cross-section data are un-
available.  These equations are based on the Chezy-
Manning formula for channel flow.
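
    The exact form of those equations is not given in this paper;
the sketch below illustrates the general idea, assuming a rectangular
channel, Manning's equation in U.S. units (the 1.49 factor), and
hypothetical values for flow, width, roughness, and slope.

    def depth_and_velocity(Q, W, n, S, tol=1e-6):
        """Estimate average depth H (ft) and velocity U (ft/s) from Manning's
        equation, Q = (1.49/n)(W*H)R^(2/3)S^(1/2) with R = W*H/(W + 2H),
        using a simple fixed-point iteration for H."""
        H = 1.0                                    # initial guess, ft
        for _ in range(200):
            R = W * H / (W + 2.0 * H)              # hydraulic radius, ft
            H_new = Q * n / (1.49 * W * R ** (2.0 / 3.0) * S ** 0.5)
            converged = abs(H_new - H) < tol
            H = H_new
            if converged:
                break
        U = Q / (W * H)                            # continuity: Q = W*H*U
        return H, U

    # Hypothetical reach: Q = 50 cfs, width 40 ft, n = 0.035, slope 0.0008
    H, U = depth_and_velocity(Q=50.0, W=40.0, n=0.035, S=0.0008)
    print(f"depth = {H:.2f} ft, velocity = {U:.2f} ft/s")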

    In the mainstem model the change in D.O. as water
flows over a dam is taken into account.  The re-
aeration equation over a dam is described by Klein (1).
The mainstem model allows for the borrowing of water
from a river for once-through cooling purposes.  A
reduction in D.O. concentration is calculated, if the
water is heated beyond saturation.  A special canal
subroutine was developed for the Cuyahoga River
whereby a certain volume of water is diverted from the
river to a canal running parallel to the river.  Over-
flows from the canal divert water back into the river
at several points.  Phenol and cyanide decay and the
hydrolysis of organic nitrogen to ammonia are also
modeled in the mainstem model.

    The effect of benthal deposits on D.O. is address-
ed in both models.  All suspended solids are assumed
to settle out behind a dam pool or elsewhere only if
the stream velocity is less than 0.6 ft./sec.
Osman-Clymer-Kim Model and Program

    The Garrett-Clymer-Duffy programs lacked some
features which became desirable in 1975 in connection
with a multibasin planning project, denoted herein
as the Water Quality Planning Model Project, or
"WQPM Project".  These desired features were:

    1.  Ability to model a branching configuration of
    tributaries along with a mainstem, instead of
    just one chain of reaches;

    2.  Inclusion of algebraic formulas for the costs
    of waste-treatment and waste-conveying facilities;

    3.  Provision of an automatic load-allocation
    "loop" to insure that dissolved oxygen would
    meet water quality standards everywhere;

    4.  Addition of models for certain nonconservative
    parameters, such as the count of bacteria per
    unit volume;

    5.  Consideration of time of travel as a given
    value for each reach, from which velocity would
    be calculated as reach length divided by time of
    travel;

    6.  Provision for conservative substances as
    parameters;

    7.  Incorporation of a pre-existing in-house
    stream temperature program as a subroutine in the
    WQPM program;

    8.  Refinement of the formula for the oxidation
    rate "constant" for phenol to incorporate non-
    linear dependence upon temperature and concentra-
    tion;

    9.  Inclusion of pH as a known input for each
    reach.

    In all other respects the WQPM builds upon the
Garrett-Clymer-Duffy mainstem model.
Validation Check of Ohio EPA Model

    The Garrett-Clymer-Duffy mainstem model output
was checked against field data collected by the
U.S.E.P.A. Michigan-Ohio District Office on Feb. 12-13,
1975.  The comparison for dissolved oxygen (D.O.) is
shown in Fig. 1.  Fig. 2 compares modeled D.O. with
field data collected by Ohio EPA in the Scioto River in
August 1974.  In both cases the difference is of the
order of 1 ppm of D.O., which is typical for water quality
models (2).
                                                      517

-------
Past Applications

    An early version of the Garrett-Clymer-Duffy
mainstem model was applied to the Little Miami River.
This model was unable to simulate reaeration over a
dam, benthal demand, or thermal withdrawals.
However, the model applied to the Scioto, Mahoning,
and Cuyahoga Rivers did simulate these phenomena.
When modeling the Cuyahoga River, the canal sub-
routine was used.  The mini-basin model was applied
to problem areas in the Maumee, Hocking, Rocky Fork,
Licking and Wabash Basins, including Findlay, Lima,
Lancaster, Mansfield, and Newark.  All of the forego-
ing studies constitute the modeling work that has
been done to date by this agency to comply with the
requirements of Section 303(e) basin planning
studies.
Present Applications

    The WQPM project is scheduled to prepare and
analyze long-range wastewater treatment plans for
major portions of the following basins by September,
1976:  Little Miami River, Great Miami River, Mad
River, Stillwater River, Alum Creek, Darby Creek,
Tuscarawas River, Chippewa Creek, Sandy Creek, and
Nimishillen Creek.  These ten segments include
representatives of most of the water quality problems
which occur in Ohio.

    The studies being performed have been designed
to make maximum use of the Ohio EPA Water Quality
Planning Model.  Appropriate regionalization and
economical processes for all new treatment plants
will be found which will meet all present water
quality standards until the year 2000.
 Near  Future Applications

    The  "Title X" project will utilize models
 developed by Ohio EPA to assess the impact of point
 source discharges in the Scioto, Muskingum, and
 Little Beaver Creek Basins.  Cost-benefit assessment
 of  different waste treatment schemes, including
 regional waste treatment plants, is planned.  Plans
 are being formulated to model the impact on water
 quality  of a 1"  rainfall following a prolonged dry
 period.  Modeling of conservative and non-conservative
 elements to insure compliance with water quality
 standards will be emphasized in the Title X program.

    Another application of  the models will be made by
 the Water Quality Standards Section of the Ohio EPA
 for the  purpose  of revising water quality standards.

    Presumably these models will find application
 also  in  future Section 303(e) basin studies and
 208 studies.
 Limitations  of  the  Stream Water  Quality Models

     All  models  mentioned thus far  have definite
 limitations,  which  prevent  them  from being  useful  for
 some purposes of  the  Ohio EPA:

     1.   These existing  models assume steady flow,
     which  is  an invalid assumption for the  study of
     urban  or  rural  runoff from a storm, or  any other
     transient flow  problem;
    2.   The models assume one-dimensional flow, which
    is  not applicable to a wide and/or stratified
    reservoir or estuary;

    3.   The programs are not designed to be economical
    tools for Monte Carlo studies of the stochastic
    effects of stream flow, sewage treatment plant
    performance, influent flow and composition, etc.,
    upon stream water quality;

    4.   Because instantaneous and perfect mixing are
    assumed at each node point throughout the cross-
    section, the models cannot evaluate water quality
    distribution in a mixing zone;

    5.   Since models fail to treat the biota
    explicitly (except for Coliforms), they cannot be
    used to draw conclusions about the ecosystem in
    a stream;

    6.   The programs do not relieve the user of all
    tedious tasks in connection with the preparation
    of input data; hence, additional labor-saving
    features might be desirable;

    7.   The models do not deal with a thermal plume in
    a river or lake.

    Accordingly, there is clear need for research and
development of other types of model, as discussed
further below.
              Research and Models Needed

Unsteady Stream Flow and Water Quality Model

    The appropriate equations for transient (unsteady)
stream velocity, depth, and concentrations of a
pollutant having first-order kinetics are:
     (1)  ∂U/∂t = g tan α + g ∂H/∂x + U ∂U/∂x + μU/(ρH²)

               + (UH/2W) ∂W/∂x
     (2)  ∂H/∂t = -(1/W) ∂Q/∂x

     (3)  Q = WHU

     (4)  ∂c/∂t = (c/W²)(dW/dH) ∂Q/∂x - U ∂c/∂x

               + (1/HW) ∂/∂x(KHW ∂c/∂x) - K1 c + F(x,t)

     (5)  W = W(H,x)

where U is section-average velocity, t is time, g is
the acceleration of gravity, tan α is the slope of the
stream bed (+ up), H is section-average depth, x is
the space independent variable along the stream
centerline, μ is the equivalent viscosity, ρ is water
density, W is section width, Q is stream flow rate,
c is pollutant concentration, K is the longitudinal
diffusion coefficient, K1 is the kinetic rate
constant of "decay" of c, and F(x,t) is a varying
and distributed source.  The partial differential
equations (1), (2), and (4) can be converted to ordinary
differential equations by spatial finite differencing
of the dependent variables.  They can then be solved
by numerical integration in time.
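
    The sketch below shows what that procedure looks like for a
simplified, constant-geometry form of Equation (4) (uniform width
and depth, so the dW/dH term drops out).  The choice of upwind
differencing for advection and explicit Euler integration is ours,
made for brevity; the grid, coefficients, and boundary values are
hypothetical.

    import numpy as np

    L_reach = 20000.0      # reach length, ft
    N = 200                # number of grid cells
    dx = L_reach / N
    U = 1.0                # velocity, ft/s
    K = 50.0               # longitudinal diffusion, ft2/s
    K1 = 2.0e-6            # first-order decay, 1/s
    dt = 0.4 * min(dx / U, dx * dx / (2.0 * K))   # stable explicit step, s

    c = np.zeros(N)        # concentration, mg/L
    c_in = 5.0             # upstream boundary concentration, mg/L

    for step in range(1000):                      # 1000 steps of 80 s (~22 hours)
        c_up = np.concatenate(([c_in], c[:-1]))   # upwind (upstream) neighbor
        c_dn = np.concatenate((c[1:], [c[-1]]))   # downstream neighbor
        advection = -U * (c - c_up) / dx
        diffusion = K * (c_dn - 2.0 * c + c_up) / dx ** 2
        c = c + dt * (advection + diffusion - K1 * c)

    print(f"concentration at the downstream end: {c[-1]:.3f} mg/L")
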
                                                       518

-------
    The desired model should have the following
features:

    1.   Ability to model depositing and resuspending
    of at  least one size-and-density class of
    suspended solids;

    2.   Inclusion of nitrate and at least one form
    of phosphorus from rural runoff as parameters;

    3.   A lumped model of any sewer system and
    treatment plant in the area;

    4.   Inclusion of urban and/or rural runoff during
    and shortly after a storm.
Reservoir and Lake Models
    Ohio has numerous reservoirs and lakes in which
water quality is of concern.  Ohio also has many
rivers which discharge into Lake Erie.  It is desir-
able, therefore, that the Ohio EPA have computer
programs capable of modeling pollution, photo-
synthesis, wind-driven currents, etc., in a lake well
enough for water quality management purposes.

    The simplest case of a reservoir is one which
results from a dammed up stream and which is re-
latively narrow.  When the water is not stratified, it
can be modeled as a deep stream.  When the water is
stratified, it might be dealt with as two streams, one
on top of the other, with a minor amount of mutual
coupling.  However, most reservoirs and lakes are so
wide that the mixed-stream assumption would be
invalid, necessitating a treatment in at least two
dimensions (lateral and longitudinal).  A wide dam
pool in a stream should be modeled as a lake,
especially if it is used as a source of cooling water.

    The parameters of lake models which are most
important include the velocity and temperature fields,
D.O., BOD, benthic demand, and nutrients.  Most of
the problems can be considered in seasonal steady-
state, although some are progressive during a season,
as in the case of a lake bottom going anaerobic in
summer as a result of organic matter.

    There seem to be available a number of models
meeting most of the foregoing requirements. (3-5)
 "Estuary" Models

    The rivers in Ohio which flow north have
 "estuaries" where they enter Lake Erie.   In an
 estuary there are changing gradients of temperature
 and concentrations of salts.  A city with substantial
 industrial and municipal discharges to an estuary
 might require special water quality standards.  An
 estuary model is needed for this purpose.

    Many phenomena complicate the modeling of such
 an estuary (6):  a thermocline at some times of year,
 currents and circulations due to the wind vector, a
 wedge of cold water at the bottom from lake or river,
 and sloshing back and forth of water (at much
 greater flow rates than the river flow) due to lake
 level fluctuations.  These problems are at the
 frontier of the modeling art.  However, simpler
 estuary models are available. (7-9)
Stochastic Stream Model

    Recognition of the essentially-stochastic nature
of water quality  is  implied  in the practice  of ex-
pressing standards in terms  of 7-day  10-year low
flow.  It would be desirable to push  the stochastic
approach to water quality further by  development and
use of a stochastic model of stream minimum  D.O. re-
sulting from rainfall episodes.  Because of  the cost
of the large number of long  computer  runs, it is not
economically feasible to enclose a transient  stream
segment, sewer system, and treatment  plant model in
a Monte Carlo (repeated trials) loop.  Accordingly,
it is necessary to shorten the time for each  itera-
tion run.  One approach is to replace the transient
model with a nonlinear algebraic stochastic model
containing the principal phenomena, component
frequency distributions, and other building blocks.

    One way to get the algebraic model is to do
regression analysis on the results from a transient
model in a variety of cases.  A complementary way is
to assemble the stochastic model  from theoretical
building blocks, which include empirical parameters
having unknown values to be determined from model-
fitting studies with the transient model.
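
    A minimal sketch of the repeated-trials loop is given below.
The algebraic surrogate and the probability distributions are
invented placeholders standing in for a fitted regression and the
actual frequency data; only the structure of the calculation is
intended to be illustrative.

    import random

    def surrogate_min_do(flow_cfs, effluent_bod_mgl, temperature_c):
        """Hypothetical nonlinear algebraic surrogate for minimum D.O. (mg/L);
        the coefficients are invented for illustration only."""
        do_sat = 14.6 - 0.25 * temperature_c       # rough saturation D.O., mg/L
        sag = 2.8 * effluent_bod_mgl / (flow_cfs + 10.0)
        return max(0.0, do_sat - sag)

    random.seed(1)
    trials = []
    for _ in range(10000):                         # Monte Carlo (repeated trials)
        flow = random.lognormvariate(3.0, 0.5)     # stream flow, cfs
        bod = max(random.gauss(25.0, 8.0), 0.0)    # plant effluent BOD, mg/L
        temp = random.gauss(26.0, 2.0)             # water temperature, deg C
        trials.append(surrogate_min_do(flow, bod, temp))

    trials.sort()
    print(f"10th-percentile minimum D.O.: {trials[len(trials) // 10]:.2f} mg/L")
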
Mixing Zone Models

    Many water quality standards are expressed in
terms of a mixing zone downstream of a discharge.
However, stream models have customarily been based
upon the assumption of instantaneous uniform mixing
across the entire cross-section at the discharge.
Thus there is need for a two-dimensional model capable
of describing the steady-state plan view concentration
field in the mixing zone.  The required inputs are
the stream flow, width, average depth, bottom rough-
ness, characteristic height or other determinant of
transverse diffusion (10), and discharge parameters
(flow, concentration, location, and direction of
discharge).  Ideal vertical mixing would be assumed in
shallow streams.
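
    The sketch below is not the two-dimensional numerical model
called for here; it evaluates the standard analytical solution for a
continuous point source in a uniform, vertically mixed stream, which
indicates the kind of inputs and output such a model would handle.
The transverse mixing coefficient is estimated with the commonly
used 0.6*H*u* rule of thumb, and all of the numbers are hypothetical.

    import math

    def mixing_zone_concentration(x, y, load, U, H, slope):
        """Plan-view concentration (mg/L) downstream of a continuous point
        discharge at y = 0, assuming complete vertical mixing, a uniform
        stream, and no bank reflections (valid only before the plume
        reaches the banks).
        x, y in ft; load in mg/s; U in ft/s; H in ft; slope dimensionless."""
        u_star = math.sqrt(32.2 * H * slope)       # shear velocity, ft/s
        Ey = 0.6 * H * u_star                      # transverse mixing, ft2/s
        c = (load / (H * math.sqrt(4.0 * math.pi * Ey * x * U))) \
            * math.exp(-U * y * y / (4.0 * Ey * x))
        return c / 28.32                           # mg/ft3 -> mg/L

    # Hypothetical discharge: 500 mg/s, 1000 ft downstream, 20 ft off-center
    print(f"{mixing_zone_concentration(1000.0, 20.0, 500.0, 1.0, 3.0, 0.0005):.4f} mg/L")
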
Biochemical Kinetics Models

    There is much to be learned about the empirical
functions which describe the rate constants of bio-
chemical "decay" of pollutants in a stream.  One of  us
(A.B.C.) has studied the rate constant for the
oxidation of phenol as a function of temperature and
concentration.  The data base used was for the 3-mile
reach on the Mahoning River from Struthers to
Lowellville.  At both ends of this reach the Ohio EPA
had determined phenol concentration and temperature
30 times "from 1973 to 1975.

    The usual practice of making the logarithm of the
rate constant linear in the temperature was found to
be grossly inadequate by not showing a peak; a
quadratic term is necessary, and a cubic term is
desirable.  The logarithm of the rate constant should
contain also a term proportional to the logarithm of
concentration.  It is hoped that similar regression
studies of the rate constants of this and other
reactions will be performed in Ohio EPA and elsewhere.
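
    The regression form just described can be fitted by ordinary
least squares; the sketch below uses invented observations (not the
Mahoning River data) purely to show the functional form, a cubic in
temperature plus a log-concentration term.

    import numpy as np

    # ln k = a0 + a1*T + a2*T^2 + a3*T^3 + a4*ln(c); observations are invented.
    T = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 28.0, 30.0])    # deg C
    c = np.array([40.0, 35.0, 30.0, 25.0, 20.0, 18.0, 15.0, 12.0])  # phenol, ug/L
    k = np.array([0.05, 0.09, 0.16, 0.27, 0.38, 0.44, 0.41, 0.35])  # 1/day

    X = np.column_stack([np.ones_like(T), T, T**2, T**3, np.log(c)])
    coeffs, *_ = np.linalg.lstsq(X, np.log(k), rcond=None)
    a0, a1, a2, a3, a4 = coeffs

    # Predicted rate constant at a hypothetical 22 deg C and 20 ug/L phenol:
    k_hat = np.exp(a0 + a1*22 + a2*22**2 + a3*22**3 + a4*np.log(20.0))
    print(f"predicted k at 22 deg C, 20 ug/L: {k_hat:.3f} 1/day")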

    Due to the poor reproducibility of the BOD5 test,
an in-house research project was initiated to deter-
mine if more reliable tests such as COD, TOC, or TOD
could be used to model or estimate BOD5.  The
carbonaceous component of BOD was studied by using a
chemical to inhibit the nitrogenous component.
Analysis of the data is incomplete, but preliminary
results indicate that a strong correlation between
BOD5 & COD, BOD5 & TOC, and BOD5 & TOD does not exist.
                                                       519

-------
Stream Hydrology Models
    Ohio water quality standards are expressed in
terms of the 7-day 10-year critical low flow.
However, the stream hydrological data are available
ordinarily only at much higher flow conditions.   It
is necessary, therefore, to find a means to ex-
trapolate the hydrologic data to critical low flow
from the flow value which existed.  Thus a model  of
stream hydrologic parameters as functions of flow is
required.

    The most commonly used model for this purpose is
a power function, namely,
          W = aQ^b;   H = cQ^f;   U = kQ^m
where the lower case letters are empirical constants.

    The exponents must satisfy the constraint b+f+m=l,
in order that the empirical formulas yield values of
W, H, and U, whose product WHU equals Q.  Points
(b, f, m) can be plotted in an equilateral triangle
as shown in Figure 4.  The points representing the
exponents for different cross-sections of a given
stream tend to cluster or form a streak in the tri-
angle.  Thus the triangular plot is useful in
studying data for the exponents.
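
    Fitting the exponents and checking the constraint is a small
log-log regression exercise; the paired measurements below are
invented for illustration.

    import numpy as np

    Q = np.array([30.0, 60.0, 120.0, 250.0, 500.0, 900.0])   # cfs
    W = np.array([38.0, 42.0, 47.0, 52.0, 58.0, 63.0])       # ft
    H = np.array([0.9, 1.3, 1.9, 2.8, 4.0, 5.6])             # ft
    U = Q / (W * H)                                           # ft/s, so WHU = Q

    def exponent(Q, y):
        """Least-squares slope of log y on log Q (the exponent in y = const*Q^e)."""
        slope, _ = np.polyfit(np.log(Q), np.log(y), 1)
        return slope

    b, f, m = exponent(Q, W), exponent(Q, H), exponent(Q, U)
    print(f"b = {b:.3f}, f = {f:.3f}, m = {m:.3f}, b + f + m = {b + f + m:.3f}")
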
Other Models
    Ohio streams and lakes constitute ecosystems which
are important to the Ohio EPA.  The importance arises
from, for example, the commercial and sports fish-
eries, on the one hand, and the difficulties associat-
ed with eutrophication and algae, on the other hand.
However, the Ohio EPA has not yet done any aquatic
ecosystem simulations.  Nevertheless, as modeling
evolves in the agency, such studies will be done in
the next few years.  Good progress in the development
of freshwater ecosystem models has been made by the
Deciduous Forest Biome project in the International
Biological Program.

    A benthic community analysis for the Scioto Basin
was undertaken by the aquatic biology staff of Ohio
EPA.  Their findings clearly demonstrate that a good
correlation exists between species diversity and D.O.
This is illustrated graphically in Figure 3.

    Dr. F.S. Bagi of the Ohio EPA has been applying
least-squares regression to actual cost data from
Ohio waste treatment facilities and interceptor
sewers, in order to build up a cost model for a
complete drainage basin.  A preliminary version of
such a cost model is currently being added onto the
WQPM program, in order that alternative treatment
plans for a basin may be compared on a cost basis.  It
is expected that the basin cost model will reveal
opportunities for savings through regionalization.

    In the near future several alternative treatment
processes will be costed when a percent removal of BOD
is specified.

    Many of the foregoing types of models would be
useful for long-range planning of treatment facilities.
In all such applications, a need exists for a pre-
program to generate projections of future flows and
loads from all industrial and municipal plants in the
basin.  The user would supply data such as assumed
constant growth rates of familiar parameters like
daily per capita water usage, BOD and ammonia
production, industrial production, and population.
Conclusion

    Progress has been made by the Ohio EPA in the
development and utilization of water quality models
and computer programs, as described herein.  However,
the problems facing this agency will require the
development and application of various advanced models.
                   Acknowledgements

    The authors wish to acknowledge generous help from
George Garrett, Paul Flanigan, Ed Armstrong,
Dr. Tom Birch, and Pat Abrams, all of the Ohio EPA,
in providing information and data for this paper.  We
are indebted to George also for having conveyed to us
over a period of years much of his deep concern for
and some of his extensive knowledge of water quality
problems.
References

1.   Klein, "River Pollution", Butterworths, London, Vol. 2, 1962.

2.   Harper, M.E., "Assessment of Mathematical Models Used in
     Analysis of Water Quality in Streams and Estuaries", Washington
     State Water Research Center, Pullman, Wash., June 30, 1971, 81 pp.

3.   Gordon, John A., and Babb, Malcolm C., "Problems Associated with
     the Validation and Use of Reservoir Water Quality Models",
     presented at 5th Annual Environmental Engineering and Science
     Conference, Louisville, Ky., March 3-4, 1975.  Contains 31-item
     bibliography.

4.   Baca, Robert G., Waddel, William W., Cole, Charles R.,
     Brandstetter, Albin, and Cearlock, Dennis B., "Explore I - A
     River Basin Water Quality Model", Battelle Memorial Institute
     Pacific Northwest Laboratories, Richland, Wash. 99352, August,
     1973.

5.   Sheng, Yea-Yi Peter, "The Wind-Driven Currents and Contaminant
     Dispersion in the Near-Shore of Large Lakes", Case Western
     Reserve Univ., Oct., 1975.

6.   Horowitz, J., Adams, J.R., and Bazel, L.A., "Water Pollution
     Investigation:  Maumee River and Toledo Area", U.S.E.P.A.
     Publication #905/9-74-018, January, 1975.

7.   Thomann, Robert V., "Systems Analysis and Water Quality
     Management", Environmental Science Services Division,
     Environmental Research and Applications, Inc., New York, 1972.

8.   Eco-Labs Inc., "Water Quality Study of the Cuyahoga River
     (Literature Survey and Simulation Model)", Prepared for
     U.S.E.P.A., Region V, pp. 108-157, 1974.

9.   Amendola, G.A., Schregardus, D., and Delos, C., "Technical
     Support Document for Proposed NPDES Permit - United States Steel
     Corporation Lorain Works", U.S.E.P.A., Region V, Michigan-Ohio
     District Office, Appendix VI, July, 1975.

10.  Yotsukura, N., and Cobb, Ernest D., "Transverse Diffusion of
     Solutes in Natural Streams", U.S. Geological Survey Professional
     Paper 582-C, U.S. Government Printing Office, Washington, D.C.,
     1972.
                                                       520

-------
                                 FIGURE 1
              OEPA MODEL VERIFICATION DATA FOR MAHONING RIVER
     (D.O. versus miles downstream from the Leavittsburg gage;
      + = OEPA model output, O = MODO USEPA field data,
      February 12-13, 1975)


                                 FIGURE 2
                  ACTUAL vs MODELED D.O. IN SCIOTO RIVER
     (Projected and measured D.O. versus miles downstream of the
      Olentangy River)


                                 FIGURE 3
         D.O. vs SPECIES DIVERSITY IN SCIOTO RIVER FOR AUGUST, 1974
     (Projected D.O. and species diversity value versus miles
      downstream of the Olentangy River)


                                 FIGURE 4
                        WATER MODELING IN OHIO EPA

-------
          THREE-DIMENSIONAL MODEL DEVELOPMENT FOR THERMAL POLLUTION STUDIES
                                   Subrata Sengupta and Samuel Lee
                               School of Engineering, University of Miami
                                        Coral Gables,  Florida

                                                and

                                            Roy A.  Bland
                             National Aeronautics and Space Administration
                                    Kennedy Space Center, Florida
               BACKGROUND

Because of the growing importance of thermal pol-
lution and because of the possibility of detecting it by
means of remote sensing,  the National Aeronautics
and Space Administration (NASA)-Kennedy Space
Center (KSC) has sponsored a study on this many-
faceted problem.  A team of researchers at the
University of Miami has been under KSC contract to
develop a universally applicable three-dimensional
thermal pollution math model. When completed this
model will predict the three-dimensional motion
and temperature of thermal plumes within waters to
which they are discharged.

This mathematical model can include effects of winds,
ocean currents,  cyclic tidal flushings in bays and
estuaries, variable winds and realistic bottom topog-
raphy.  Remotely sensed data and in situ measure-
ments are used for model calibration and verification.
An airborne scanner system backed up by satellite
infrared remote sensing systems is used to measure
water surface temperatures.

The airborne scanner system used is a Daedalus
DS-1250 multispectral line scanner system and is
installed in a C-45H Tri-Beechcraft (NASA-6).  An
Hg:Cd:Te detector is used for sensing 8-14 micron
IR radiation. The system  is owned and operated by
the NASA Kennedy Space Center.  Two kinds of
satellite data are used to supplement the aircraft-
derived IR data.  These are the NOAA-2 and NOAA-3
satellites, which operate in the 10.5 to 12.5 micron
region with 0.5 nautical mile resolution.  Also used
in the study are data from  the Air Force DMSP
satellite,  which operates in the  8-13 micron range
with 0.3 nautical mile resolution.

This mathematical model will serve a dual purpose.
It can be used in surveillance studies and also it
will enable environmental planners to predict the
behavior of hot discharges in a given region and,
therefore, to determine whether such discharges
exceed allowable temperature limits at depth and on
the surface.  In other words,  this model can be used
in the location of future nuclear power plants in
order to select the most advantageous sites.
         MATHEMATICAL MODEL
           DEVELOPMENT PLAN

After a feasibility study was completed in January
1974, it was concluded that the mathematical model
development would take four years to complete.  As
an end product, the model will be well documented,
readily transferable, and initial and boundary con-
ditions easily altered so that the model can be uni-
versally applicable.  This four-year period will be
divided into four phases.

In the first phase,  the basic concepts of the model
were established and the optimum numerical scheme
(finite differences) for  the solution of the math
model's governing equations was selected.  In the
second phase,  completed in December 1975, the
model was developed and applied to Biscayne Bay,
Florida, using the Cutler Ridge power plant thermal
discharges as a testing case. Both rigid lid and free
surface models of Biscayne Bay were written during
this phase, but only the rigid lid far field version
has been completely verified to date.  Velocity and
temperature fields have been computed for different
atmospheric conditions and for different boundary
currents produced by tidal effects.  The computations
have been carried out for different time periods
between one and six hours of real-time. Four air-
craft infrared data runs,  roughly one each quarter,
were made over Biscayne Bay during this phase to
supply data for the model.

In the third phase, which began in January 1976,  a
twelve-month period will be devoted to three tasks.
The modeling of Biscayne Bay will be completed and
computer results verified against remote sensing
and in situ measurements.  In the second task, the
mathematical model will be revised as needed and
applied to the St.  Lucie, Florida, nuclear power
plant, which discharges onto the off-shore continental
shelf.  This will be an  interesting contrast to
Biscayne Bay, which is a shallow lagoonal estuary.
The third major task results from a recommendation
by several EPA and NRC/ERDA officials.  Their
recommendation was to initially apply the rigid lid
version of the model to a deep lake reservoir. In
consultation with the progressive Duke Power
Company, it was concluded that the rigid lid model
                                                  522

-------
should first be tested on a thoroughly understood lake
where abundant real-time measurements are taken.
The lake selected and approved was Lake Belews,
North Carolina,  which serves as a cooling reservoir
for a  large fossil fuel plant.  Thus, the third task in
this phase of the implementation plan will be to model
Lake  Belews,  North Carolina, in cooperation with and
under the sponsorship of the Duke Power Company.

In the fourth and final phase of the development plan,
the mathematical model will be further generalized
so that it will be readily applicable to any geograph-
ical discharge area site.  The computer program will
be finally documented so as to afford a user comput-
ing facility minimum difficulty in making this program
operational.  In cooperation with EPA and NRC/ERDA
officials as well as the electrical power generating
companies,  the NASA-KSC will work toward utilizing
this model as an industrial standard.  As needed,
further applications of the model will be carried out
in this final stage.

          MATHEMATICAL MODEL

A considerable amount of work has been done in
modeling thermal discharges.  A three-dimensional
model including the effects of buoyancy, topography,
and other parameters has not been developed yet.
Akers discussed some of the models that are in
existence.  Policastro(2,3,4) in a series of review
papers has compared the existing plume models with
field data.  He considered a range of models from
analytical to quasi-three-dimensional numerical
models.  Harleman's(5,6,7) pioneering work led to a
numerical model with Stolzenbach, which has been
widely used in plume analysis.  The only existing
complete three-dimensional models are by Waldrop
and Farmer(8) and Paul.(9)  However, Paul's model
assumes symmetry, thereby eliminating the possi-
bility of including wind or current effects.  Both
models are for constant depth basins.  The objective
of the present study is to develop a comprehensive
three-dimensional model including the effects men-
tioned above.

Details of formulation, solution,  and development
have been discussed in reports by Lee et al.(10,11)
The need for remotely sensed data in model develop-
ment and verification has been discussed by Sengupta
et al.(12)  A general description of the various models
comprising the thermal pollution mathematical model
package will be presented here.

Thermal anomalies caused by a heated discharge
usually affect  areas of a few miles in  extent. Initially,
the discharge is dominated by a jet-like behavior.
Then, turbulent entrainment and buoyancy influence
the trajectory and spreading.  Finally,  the flow is
governed by the far field conditions and the ambient
meteorological state.  The domain of interest can be
classified into a near field, where effects of the
discharge are significant, and a far field,  which
affects the plume but is not appreciably affected  by the
plume.  The numerical characteristics of these two
domains are quite different.  The procedure,
therefore, is to obtain a far field solution with a
coarse finite difference grid and to use this to obtain
the near field (plume) solution using a finer mesh
size.

The governing equations describing the state at a
point in the flow field are a system of coupled, non-
linear,  second-order, three-dimensional partial
differential equations which satisfy local conservation
laws for total mass,  species mass,  momentum  and
energy. The constitutive equations complete the
system  of equations.  In laminar flows the molecular
transport properties for heat and mass transfer may
be used. Most environmental situations  are,
however, turbulent.  The time averaged transport
equations are therefore used.  The turbulent closure
condition is  specified by approximating the Reynolds
stress terms by eddy transport coefficients.  For
studies  where salinity variations are important,  a
salt conservation equation similar in form to the
energy equation can be added.

The surface waves can be eliminated by imposing a
rigid lid condition whereby the vertical velocity at
the surface is equated to zero.  The transients are
somewhat distorted but the steady-state general
circulation is not significantly affected.  The elimi-
nation of surface  gravity waves allows larger integra-
tion time steps, thereby reducing computation time.
However,  in  cases where surface elevation changes
are significant and transients are to be investigated,
a free surface model has to be used.  The rigid lid
formulation for a variable depth basin has been
developed by Sengupta and Lick.(13)  A free surface
model has been used by Freeman et al.(14) to study the
circulation and periods of oscillation of Lake Huron.

The computer program package that is being develop-
ed is to be applied in a wide variety of geophysical
situations, so both the free surface and rigid lid
models  are being developed.  Both these models are
further specialized to be applied to near and far fields.
Therefore, there are four separate programs.

    Rigid lid model

        (i)    Far field version

        (ii)   Near field version

    Free surface model

        (i)    Far field version

        (ii)   Near field version
The rigid lid model has been used to obtain general
circulation and temperature distributions in Biscayne
Bay.  The free  surface model is in its final stages of
development and application.  The  rigid lid model
will be described here together with the results and
the verification based on ground truth and remote
sensing data.  Figure 1 shows the program package
with applicable geographical locations.
                                                   523

-------
Rigid Lid Model

    The programming difficulties for a three-
dimensional basin suggest a stretching of the vertical
coordinate with respect to the local depth.  This
converts the basin to constant depth.  The same num-
ber of grid points in the vertical direction can be used
at the shallow or deep parts of the  basin without
using variable grid sizes.  The stretching introduces
extra terms in the momentum equations.  The details
of the derivation are presented by Sengupta and
Lick.(13)
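
    A minimal illustration of the stretching is given below; the
grid routine is ours, written only to show how a depth-proportional
coordinate gives every water column the same number of points.

    import numpy as np

    def sigma_levels(depth, n_levels):
        """Physical depths (ft, measured downward from the lid) of n_levels
        grid points in a column of the given local depth, using the
        stretched coordinate sigma = z / h(x, y) with sigma = 0 at the lid
        and sigma = 1 at the bottom."""
        sigma = np.linspace(0.0, 1.0, n_levels)
        return sigma * depth

    # A shallow and a deep column resolved by the same six levels:
    print(sigma_levels(depth=4.0, n_levels=6))     # e.g. a shoal area
    print(sigma_levels(depth=12.0, n_levels=6))    # e.g. a dredged channel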

    The governing equations consist of the continuity
equation, the three momentum equations, the energy
equation, and the equation of state.  The vertical
momentum equation is  simplified using the hydro-
static approximation.  The Boussinesq approximation
is made.  Constant though different eddy transport
coefficients are chosen for vertical and horizontal
diffusion.  The equation of state is an empirical
relation between density and temperature.  The
vertical velocity at the lid is zero.  This causes the
surface pressure to be different from atmospheric
pressure.  The x and y momentum equations are
integrated over depth and combined after differentia-
tion with respect to x and y, respectively.  The
resulting Poisson equation is the predictive equation
for surface pressure.  The above set of equations
with appropriate boundary and initial conditions
constitute the mathematical model.

Initial and Boundary Conditions

    The nature of the equations requires initial and
boundary conditions to  be specified.  The velocities,
temperature, and density are given throughout the
domain  as initial conditions. Boundary conditions are
specified at the air-water interface,  horizontal bound-
aries of the domain,  the bottom of the basin and efflux
points.  At the air-water interface, wind stress and
heat transfer coefficients are specified. The condi-
tions on the lateral walls allow no slip and no normal
velocity for the momentum equations.  These walls
are assumed to be adiabatic. At the floor of the
basin, the conditions of no slip and no normal velocity
are also applicable.  The energy equation has a heat
flux boundary condition, considered adiabatic for the
present study.  At points of efflux, the velocities are
specified and the gradient of temperature normal to
the domain boundary is considered zero.  These open
boundary conditions are most difficult to specify.

Method of Solution

    An explicit finite difference scheme is used to
integrate the transport equation. The general finite
difference form is:
       u^(n+1) - u^(n-1)
      ------------------- = (convection)^n + (pressure)^n + (viscous)^(n-1, n, n+1)
             2Δt

    Here u may be replaced by v or T (for the T
equation the pressure term is not used).  The spatial
derivatives are centrally differenced using a modified
 DuFort-Frankel scheme to avoid time-splitting in
 long term integration.  Its advantages have been
 demonstrated by Sengupta and Lick.(13)  The pressure
 equation is approximated by a five-point scheme and
 solved by the Liebmann relaxation procedure.  The
 algorithm is as follows:

    a.   Using values at time step n,  calculate the
         forcing term for the pressure equation

    b.   Solve  the pressure equation iteratively

    c.   Calculate u and v from the momentum
         equations

    d.   Calculate w from the continuity equation
         using  u and v at n+1

    e.   Calculate T from the energy equation

    f.   Calculate ρ from the equation of state

    The procedure is repeated for each time step.
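
    A sketch of step (b), the Liebmann (Gauss-Seidel) relaxation of
the surface-pressure Poisson equation on a five-point stencil, is
given below.  The grid, forcing term, and zero boundary values are
placeholders; in the actual model the forcing comes from step (a)
and the lateral boundary conditions described earlier apply.

    import numpy as np

    def liebmann_poisson(forcing, dx, dy, tol=1e-5, max_sweeps=5000):
        """Solve d2p/dx2 + d2p/dy2 = forcing with the five-point stencil by
        Liebmann (Gauss-Seidel) relaxation; boundary values of p are held
        at zero in this sketch."""
        ny, nx = forcing.shape
        p = np.zeros((ny, nx))
        beta2 = (dx / dy) ** 2
        denom = 2.0 * (1.0 + beta2)
        for _ in range(max_sweeps):
            max_change = 0.0
            for j in range(1, ny - 1):
                for i in range(1, nx - 1):
                    new = (p[j, i - 1] + p[j, i + 1]
                           + beta2 * (p[j - 1, i] + p[j + 1, i])
                           - dx * dx * forcing[j, i]) / denom
                    max_change = max(max_change, abs(new - p[j, i]))
                    p[j, i] = new              # updated values used at once
            if max_change < tol:
                break
        return p

    # Placeholder forcing on a small grid (in the model it comes from step (a)):
    rhs = np.zeros((20, 30))
    rhs[10, 15] = 1.0e-6
    pressure = liebmann_poisson(rhs, dx=1000.0, dy=1000.0)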

      APPLICATION AND VERIFICATION

Figure 2 shows a map of Biscayne Bay.  The bay is
open to  the ocean on the eastern side through a  shoal
region and some creeks.  The northern  end is
partially obstructed by a causeway.  At  the southern
end a shallow region effectively separates the bay
from  Barnes Sound.  There are two power plants
located  on the bay; one at Cutler Ridge and the other
at Turkey Point.  The far field conditions affect the
thermal plume  from the Cutler Ridge plant. The
Turkey  Point plant uses a cooling canal system.

The rigid lid model has been applied to Biscayne Bay.
A wide range of meteorological conditions have  been
modeled and detailed results and evaluations present-
ed in a report by Lee et al.    In this manner the
program was calibrated and the parameters were
chosen.  For verification, the model results were
compared with  data gathered from a field experiment
on April 15, 1975.  NASA-6 thermal scanner runs
were flown in the north-south direction.   Ground
truth  data was used to correct for atmospheric effects
and also to record the vertical variation of temper-
ature  in the bay.  The average wind was from the
southeast at 10 mph and the air temperature was 30°C.
The tide was incoming.   Figure 3 shows the interpo-
lated isotherms drawn from the thermal IR data.
There is a hot spot near  Featherbed Banks.  There
are warm regions wherever the depth is shallow.
Because the near shore regions are warmer, the
central parts of basins of Card Sound are seen to have
closer isotherms.   There are warm spots near  the
group of islands at Caesar Creek and also near the
island two miles south of Turkey Point.  The verifi-
cation for the model involved comparison of this
thermal IR map with computed surface isotherms.

The numerically predicted circulation for the bay in
the case described above is shown in Figure 4.  The
incoming tide is primarily diverted to the south. The
effect of the wind is to turn the flow toward the north-
                                                  524

-------
west, though the tidal effect predominates.  The cur-
rent toward Rickenbacker Causeway is minimal.  The
velocities near the shoals are incoming.  The mass
flux through the creeks has very localized effects.
The Featherbed Banks reduce the current magnitude,
so the velocity increases in the deeper regions adja-
cent to the banks.   The velocities also increase as the
bay narrows to the south.

The measured quantities, namely wind speed (10 mph
southeast), incoming tide,  and ambient temperature
(30°C), were used to obtain the temperature field.
To minimize the effect of initial temperature condi-
tions, the run was executed with as early a starting
time as possible.  At 9 AM near Cutler Ridge (I=11,
J=3),  the measured temperature was 25.6°C with no
vertical stratification.  Since this is a near shore
location, the average temperature in the bay can be
assumed to be lower.  Since the detailed temperature
field was not known as an initial condition, it was
assumed that the bay was isothermal with a temper-
ature of 24.5°C at 8 AM.  The model predicted the
conditions  six hours later, which were compared with
mid-afternoon in situ and remotely sensed data.

Figure  5 shows the surface isotherms predicted by the
model.  The surface isotherms show a hot spot over
Featherbed Banks. The warmer near shore regions
are quite clearly seen.  The warm area near the is-
lands in the southern part of the bay is also evident.
The comparison with thermal IR data in Figure 3 is
excellent.  It  should be noted that the IR data are not
synoptic.   There is a time lag of almost three hours
between near  shore flights and flights over  the keys.
Considering this time  lag and the approximate speci-
fication of initial conditions, the model may be con-
sidered quite  satisfactory for ecological studies.

The ground truth data were obtained only in the morn-
ing of the April 15, 1975, field experiment.  Results
for the  model after three hours of heating can be
compared with ground truth measurements  at location
I=11, J=7 at 11 AM.  Figure 6 shows a transect of the
bay along J=7.  The predicted isotherms are shown,
with ground truth  data at locations marked by aster-
isks.  It can be seen that the agreement is within
0.7°C.  This  is satisfactory for most environmental
applications.

Conclusions
     As part of a generalized model development
 program, a three-dimensional rigid lid model has
 been developed.  Remote sensing and ground truth
 data have been used to calibrate and verify the model.
 The model has been found satisfactory in predicting
 the general circulation and temperature field in
 Biscayne Bay.

 Acknowledgments

     The authors wish to express their gratitude to
 Drs. N. Weinberg, H. Hiser,  and Mr. James Byrne
 for their effort in processing the remote sensing and
 ground truth data.  The efforts of the NASA-KSC data
 analysis personnel and flight crews were an integral
 part of the investigation.
   Figure 1.  Application Chart for Numerical Models
              (University of Miami—NASA-KSC Project)
   (The chart indicates, for each model (rigid-lid near field model, free
    surface model, far field model, and free surface far field model), the
    applications addressed: thermal discharge, general circulation and
    temperature fields, and coastal boundary layers in lakes, rivers, bays,
    and the ocean, with notes on which models ignore level changes.)
Figure 2.  Map Showing the General Area of Biscayne Bay
                                  Figure 3.  Biscayne Bay and Card Sound, Florida
                                             (NASA-6 IR Scanner Data, 4/15/75;
                                              Interpretation Aided by Ground Truth)
                                                    525

-------
             Figure 4. Surface Velocities
                            WIND:  10 MPH, Southeast
                            TIDE:  Incoming (10 cm/sec)
                            AIR TEMP:  30°C
                            INITIAL TEMP:  24.5°C
                            HEAT TRANSFER COEFFICIENT:
                                   750 BTU/Day°F-FT²
                            TIME ELAPSED:  6 HRS
 Figure 5. Surface Isotherms for Biscayne Bay (Rigid-Lid Model)
         (Temperature in Degrees Centigrade)
                            (Kilometers horizontal scale; 3 ft. vertical scale;
                             • denotes location of ground truth data)
                 WIND:  10 MPH, Southeast
                 TIDE:  Incoming (10 cm/sec)
                 AIR TEMP:  30°C
                 INITIAL TEMP:  24.5°C
                 HEAT TRANSFER COEFFICIENT:
                   750 BTU/Day°F-FT²
                 TIME ELAPSED:  3 HRS
                 (TEMPERATURE IN DEGREES CENTIGRADE)

                 SECTION J=7
 Figure 6. Comparison of Calculated Isotherms for Vertical
         Section J-7 With Ground Truth Data
                 REFERENCES

1.  Akers,  P.,  "Modeling of Heated Discharges,"
Engineering Aspects of Thermal Pollution, Krenkel
and Parker (Eds.),  Vanderbilt University Press,  1969.

2.  Policastro,  A. J., "Heated Effluent Dispersion in
Large Lakes," Presented at the Topical Conference,
Water Quality Considerations in Siting and Operating
of Nuclear  Power Plants, Atomic Industrial Forum
Inc.,  1972.

3.  Policastro,  A. J.,  and Tokar, J. V., "Heated
Effluent Dispersion in Large Lakes," Report No.
ANL/ES-11,  Argonne National Laboratory,
Argonne, Illinois, 1972.

4.  Policastro, A. J., and Paddock, R.A., "Analyti-
cal Modeling of Heated Surface Discharges with
Comparisons to Experimental Data," Interim Re-
port No. 1, Presented at the 1972 Annual Meeting
of the A.I.Ch.E.

5.  Stolzenbach,  K. D., and Harleman, D. R. F.,  "An
Analytical and  Experimental Investigation of Surface
Discharges of Heated Waters," R.M. Parsons
Laboratory for Water Resources and Hydrodynamics,
M. I. T., Cambridge, Massachusetts, Tech.  Report
No.  135, 1971.

6.  Stolzenbach,  K. D., and Harleman, D. R. F.,
"Three Dimensional Heated Surface Jets., "  Water
Resources Research,  Vol. 9,  No. I,  1973.

7.  Jirka, G. H. and Harleman, D. R. F., "The
Mechanics of Submerged and Multiport Diffusers for
Buoyant Discharges in Shallow Water, " R. M. Par-
sons Laboratory  for Water Resources and Hydro-
dynamics,  M. I.T., Cambridge,  Massachusetts,
Tech.  Report No. 169, 1973.

8.  Waldrop, W. R., and Farmer, W. J., "Three
Dimensional Computation of Buoyant Plumes,"
J. G. R.  Vol 79, No. 9,  1974.

9.  Paul, J. F. and Lick, W. J. , "A Numerical Model
for a Three-dimensional, Variable-Density Jet,"
FTAS/TR 7392, School of Engineering, Case West-
ern Reserve University, 1972.

10.  Lee, S. S., Veziroglu,  T. N., Weinberg, N. L.,
Hiser, H.  and Sengupta, S.,  "Application of Remote
Sensing for Prediction and Detection of Thermal
Pollution," NASA-CR-139182,  1974.

11.  Lee, S. S., Veziroglu,  T. N., Weinberg, N. L.,
Hiser, H.  and Sengupta, S.,  "Application of Remote
Sensing for Prediction and Detection of Thermal
Pollution," NASA-CR-139188,  1975.

12.  Sengupta,  S.,  Lee, S. S., Veziroglu,  T. N.,
Bland,  R., "Application of Remote Sensing to
Numerical Modeling, " Remote Sensing Energy Re-
lated Studies, T.  N.  Veziroglu (Ed),  John Wiley  &
Sons, 1975.

13.  Sengupta,  S.,  and Lick, W., "A Numerical
Model for Wind Driven Circulation and Temperature
Fields in Lakes and Ponds," FTAS/TR-74-99.
Case Western Reserve University, 1974.

14.  Freeman,  N. G.,  Hale, A.M.,  Danard,  M. B.,
"A Modified Sigma Equations Approach to the
Numerical Modeling of Great Lakes  Hydrodynamics,"
J. G. R., Vol 77,  No.  6, 1972.
                                                   526

-------
                            SOME OBSERVATIONS ON MODELLING DISPERSION OF POLLUTANTS

                                     IN NEAR-SHORE WATERS OF LAKE MICHIGAN
                                                Richard H. Snow
                                              Engineering Advisor
                                             IIT Research Institute
                                            Chicago, Illinois 60616
ABSTRACT

Results of an investigation of the effect of effluents
on water quality in the Calumet area of Lake Michigan
are reviewed.  The study showed that part of the
effects could be directly traced to the plume of the
largest effluent source, the Indiana Harbor Canal.
Since the number of measurements in the plume was
limited, modelling of the behavior of the plume was
useful to show that the measured conditions were
typical, and could be expected to occur during a large
part of the time.

However, focussing attention on the plume ignored the
more general pollution of near-shore water, which in
one other Great Lakes location was shown to have a
residence time of 40 days.  The only existing model of
the long-term dispersion of this pollution does not
take into account the known and suspected behavior of
the near-shore water movement.  It is recommended that
measurements of the behavior of the near-shore water
be carried out to form the basis for dispersion
modelling.

INTRODUCTION

It has long been noticed that the near-shore waters of
the Calumet area of Lake Michigan contain higher con-
centrations of pollutants such as NH3-N and phosphorus
than other areas of the Lake which are farther from
large population centers.2  The Calumet region is in
the southwestern portion of the Lake, extending from
Chicago to Burns Harbor, Indiana.  Measurements of the
water quality at various water intakes could not be
directly correlated with known effluents, because the
transport and dispersion of the water masses was not
simultaneously measured or predicted.

A first step in understanding the effect of effluents
on water quality is to correlate the measurements with
the motions and dispersion of effluent plumes in the
area.  An attempt to do this was reported by Snow.10
The effort met with some success and is briefly re-
viewed below.  The purpose of the present paper is to
discuss limitations in the approach based on the study
of plume behavior, and to suggest an approach based on
the movement of near-shore water masses covering a
wider area and a longer time span than is observed in
a plume.

STUDY OF IHC PLUME

The largest source of effluents in the Calumet region
is the Indiana Harbor Canal (IHC).   Figure 1 is a
Skylab photo showing the IHC plume.   In a recent study
for the EPAiO measurements of the water quality were
correlated with motion and dispersion of the plume
                        Figure 1
              Skylab Photo of Calumet Area
             of Lake Michigan, Showing Plume
               From IHC, Sept. 13, 1973
from the IHC over a distance of up to 19 km.

The following parameters were measured, and this combi-
nation of measurements provided a fingerprint to iden-
tify and track the effluents from the IHC:  NH3-N,
total Fe, temperature, conductivity, chloride, fluo-
ride, coliform bacteria, and others.  Other agencies
have measured such parameters as phenols, cyanide, oil,
taste and odor, heavy metals, toxic bacteria and
viruses.I"  Other pollutants may be expected, such as
PCB's.


GRAVITY SPREADING AND MIXING

The water of the IHC is almost always warmer and less
dense than the Lake water, and this gives rise to a
typical estuary effect at the mouth of the Canal.  The
IHC is dredged to a depth of about 10 m; the warmer
canal water flows out in the top 3-5 m of this depth,
                                                       527

-------
and colder Lake water intrudes in the bottom portion.
This behavior is similar to that observed in a salt-
water estuary, and described by Ippen.4  The rate of
gravity spreading is given by Parker and Krenkel8 in
terms of the wave velocity

          U = [(Δρ/ρ) g H]^1/2                     (1)
where
          Δρ is the density difference between the outflowing
             and underlying water, and ρ is density of water,
             consistent units

          g is acceleration of gravity, 9.80 m/sec²

          H is depth of heated water, m

Measurements of temperature and depth of heated water
were taken at the mouth of IHC, Station CAL06, on
three boat-sampling days, and are given in Table 1.
Velocities were measured with a current meter, and
density differences were computed from the measured
temperature gradient.  From these data and Equation 1
we can calculate the outflow velocity based on the
spreading mechanism.  It is compared with the measured
outflow velocity in the last two columns of the table.
The agreement is so good that this confirms the
mechanism of outflow.
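
The arithmetic behind the calculated column of Table 1 can be reproduced directly from
Equation (1); the short fragment below is only an illustration, using the tabulated Δρ and a
3.5 m layer depth, and small differences from the published values reflect rounding and the
uncertain December 7 layer depth.

```python
import math

g = 9.80                       # acceleration of gravity, m/sec^2
H = 3.5                        # depth of the outflowing layer, m
# (date, delta-rho in g/ml relative to rho of about 1 g/ml, measured velocity in m/sec)
rows = [("November 14", 0.00076, 0.15),
        ("November 19", 0.00045, 0.13),
        ("December 7",  0.00034, 0.13)]

for date, drho_over_rho, measured in rows:
    u = math.sqrt(drho_over_rho * g * H)   # Eq. (1): U = [(delta-rho/rho) g H]^(1/2)
    print(f"{date}: calculated {u:.2f} m/sec, measured {measured:.2f} m/sec")
```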

The colder Lake water which intrudes under the canal
water mixes with the IHC water in the lower part of the
canal.  As a result of this inflow and mixing, the IHC
water is already diluted 20 to 50% at the mouth of the
canal, according to our measurements of inflow in
Table 2 (Snow10).

Inflow is produced because the warmer canal water
flows out of the harbor faster than it is supplied
from upstream.  The Lake water is drawn in to make up
the deficit, and to conserve mass.

A calculation shows that gravity spreading is more
important than inertial jet flow of  effluents out of
the IHC mouth.  The ratio of these two effects is
measured by the Froude number, F.
          F = Uo / [(Δρ/ρ) g H]^1/2                (2)
         where
          Uo is centerline velocity of jet, m/sec
         From data given in Table 1 we calculate  F  = 1.1.   A
         value of F < 2 means that the gravity  effect is more
         important than the inertia of the jet  (Cederwall3).
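
          The densimetric Froude number of Equation (2) can be checked the same way.
          The paper does not state which centerline velocity and layer depth were used
          to obtain F = 1.1, so the snippet below simply assumes the November 14
          measured outflow velocity; the value it prints therefore differs somewhat
          from 1.1, but it remains well below the threshold of 2.

```python
import math

g, H = 9.80, 3.5               # m/sec^2, m
drho_over_rho = 0.00076        # November 14, Table 1
u0 = 0.15                      # centerline velocity of jet, m/sec (assumed equal to the
                               # measured outflow velocity; the paper's exact input is not given)

F = u0 / math.sqrt(drho_over_rho * g * H)      # Eq. (2)
print(f"F = {F:.2f}")          # F < 2: gravity spreading dominates the inertia of the jet
```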

         Once the plume passes outside the mouth  of  the IHC, it
         continues to spread over the colder Lake water, just
         as oil spreads on water, because of the  gravity differ-
         ence.  It seeks to flow out and become thinner,
         decreasing its gravitational potential.  Such a wave-
         front is vertical and sharply defined  (Cederwall3).
         Although the wave front is moving with respect to  the
         bulk of the water, it may appear stationary  if the
         water has the same velocity in the opposite  direction.
         Parker and Krenkel8 describe the phenomenon  in some
         detail and review mathematical descriptions  of it.

         The behavior of the plume just outside the  IHC mouth
         appears to depend on the interaction of gravity
         spreading with Lake currents that usually run parallel
         to the shore.  This pattern is clearly shown by the
         Skylab photo, Figure 1.  This photo suggests  that
         there was a fairly strong general current in the main
         Lake water flowing from north to south, dragging the
         plume around the landfill.  The plume gives  the
         appearance of gravity-spreading behavior on  the north
         boundary.  The width of the plume is determined by
         competition between the spreading rate and the Lake
         current speed.  The rate of spreading was predicted
         by a dimensionless correlation from the literature
         (Sharp9).  The actual dilution factors were measured
         in the plume within a few km of the IHC mouth on 3
         days.  The model for spreading was used to show that
         the measured dilutions were typical, since the spread-
         ing depends on the known temperature difference and on
         the along-shore lake current speeds.  The measurement
                                                        Table 1

                                           GRAVITY FLOW AT IHC MOUTH (CAL06)

      Date,       Depth of outflowing     Temperature, °C        Δρ,       Outflow velocity, m/sec
      1973            layer, m            Top      Bottom       g/mℓ      Calculated      Measured

      November 14       3.5               15.9      10.0       0.00076       0.15            0.15
      November 19       3.5               15.0      11.5       0.00045       0.12            0.13
      December 7        3.5, 5*           13.5      10.5       0.00034     0.10, 0.12        0.13

    *Depth of outflow was uncertain because a large eddy passed during one of two measurements.
                                                     Table 2

                                          MEASURED FLOWS AT MOUTH OF IHC

             Date                       Total Outflow                  Lake Inflow
                                     m³/sec         cfs            m³/sec         cfs

         November 14,  1973            89           3150              44          1546
         November 19,  1973           102           3590             (14)         (495)
         December 7,  1973            120           4226             (22)         (778)
                                                       528

-------
of Lake currents is described below.

Temperature data from Storet indicate  that  the  IHC  is
usually 5°C warmer than the Lake, and  this  is due to
the fact that IHC water consists of Lake water  that
has been pumped through industrial processes and used
for cooling.  The only exception is when the Lake
temperature is close to 0°C.  In this  case  the  IHC
water may be near 4°C, the temperature of maximum
density, and the plume will tend to sink.   Such a
sinking plume has been observed in January  (Snow10,
p. 135).

After the plume has moved a few km, its subsequent
vertical and lateral mixing is expected to  depend on
turbulent diffusivity in the Lake.  We made an  attempt
to apply the diffusion model of Wnek and Fochtman12
and found that with assumed constant diffusivities  the
calculated plume concentrations fell off more rapidly
with distance than the measured concentrations.
Wnek12 had previously obtained more reasonable results
using a diffusivity that varied with distance to the
4/3 power.  Further effort would be needed  to resolve
this question.

COMPARISON OF PLUME AND NEAR-SHORE WATER POLLUTION

Based on the measured dilution factors in the plume,
the study (Snow10) recommended reductions of pollu-
tants such as NH3-N to meet water quality limitations
in the harbor within 1-2 km of the mouth of the IHC.
                                                         Figure 2

                                                     Ammonia-Nitrogen
                                                     Annual Average &
                                                      Maximum,  mg/Jl
                                                   Chicago Water Dept.
                                                    South Shore Lake
                                                           Survey
                                                       529
At this stage 
-------
                                                      Figure 3
                                             A Sample of Current Meter
                                              Data From Calumet Area
                                      (Current meter at 68th St. crib, Lake Michigan
                                       Chicago water intake; meter 17 ft. below surface)
Further evidence of motion of near-shore waters  is
obtained by observing the turbidity patterns  of  the
water.  Surface waves stir the bottom  to a  depth of
about  half the wavelength, or about 2-22 ft  (Verber11).
Figure 4 is a Landsat photo showing sediment  patterns
that indicate the movement of near-shore waters
toward the south.
Figure 4.  Landsat Photo of Lake Michigan, Aug. 21,  1973.
Turbidity patterns indicate motion of near-shore waters.
Lake bottom cannot be seen.
                                                       530
MODELLING ATTEMPTS

Current action in Lake Michigan  is more  complex than
in the shallower Great Lakes.  For this  reason  hydro-
dynamic models developed for other Great Lakes  appear
not to be applicable.  The main  current  patterns in
deep water of Lake Michigan have been  identified and
        correlated with physical processes  that cause
        the currents  (FWPCA1, and other references
        given by Snow10).  Most of the current measure-
        ments were done in deep  water, because  the
        early current meters were subject to  interfer-
        ence by waves in shallow water.   It has been
        found (Verber11, FWPCA1) that the near-shore
        currents may  follow a different  pattern from
        currents in deeper water.  Figure 3 represents
        some of the few data in  near-shore waters, and
        these data indicate that the near-shore currents
        follow the wind more directly  than  the  deep-
        water currents.

        Katz and Schwab5 attempted to model dispersion
        of pollutants by applying a hydrodynamic model
        previously developed by Kizlauskas and Katz6.
        They predicted currents  near shore  that follow
        the wind direction; but  the predicted currents
        in deeper water sometimes go in  the wrong
        direction, when compared with data from FWPCA1.
        Katz and Schwab5 combined this hydrodynamic
        model with a  dispersion  model by dividing the
        Lake into 10-km square cells.  The  near-shore
        water was given no special treatment.   Calcula-
        tions for the dispersion of effluents from the
        IHC during a period of alternating wind direc-
        tions showed  that the pollutants tended to
        remain in the general area, although  they were
        not confined  to the near-shore region.   Whether
        the method of calculation would  give  more de-
        tailed results in the near-shore area if the
        grid size were finer, has not been  determined.

-------
Some general hypotheses concerning the behavior of the
near-shore waters can be gleaned from observations of
the current data, the turbidities, and temperatures,
and the pattern of pollutants.  It appears that a
demarcation often occurs as seen in Figure 4 between
the near-shore and deep water at a distance of 5 to 10
km from shore.  This corresponds to a depth of 10 to
20 m, the location where the summer thermocline inter-
sects the bottom (FWPCA2, p. 125).  The demarcation
can be sharp or diffuse.  Water in this region drifts
up and down the shore with a speed of 5 to 10 cm/sec
with a reversal frequency of 12 hrs to 4 days (Snow10,
pp. 120-125), during which time it can travel a dis-
tance of 2 to 35 km.  Mixing can be expected at the
outer boundary of the near-shore water, and this might
be modelled like a boundary layer.  However, since the
depth is only 10 to 20 m at the outer boundary, while
the width is 5 to 10 km, it will take a long time for
mixing to penetrate to the shore.  A more plausible
mechanism for eventual dispersion of pollutants is the
intermittent flow of near-shore waters to the southern
tip of the Lake, where they may meet a current from
the eastern shore, and hence mix out into the Lake.
This is only a hypothesis, since details of such a
flow pattern have not been established.

Sudden replacement of the near-shore water by cold,
clear, and much purer deep-lake water is occasionally
observed.  Such incidents may result from upwelling,
caused by unusually strong off-shore winds.  The fact
that this occasionally happens is evidence that during
the rest of the time the near-shore residence time is
long.


RECOMMENDED INVESTIGATION

It  is the thesis of this paper that measurements of the
behavior of the near-shore waters are needed to form
the basis for dispersion modelling.  The objective
should be to follow the motion of the water masses,
and  to determine their residence time and trajectory
in  the area.

To obtain data needed to form the basis for dispersion
modelling, the following simultaneous measurements are
needed over a period of a few weeks:  1) Current meters
installed 1 and 3 km from shore at 3 locations in the
area, plus a few deep-water locations.  2) Analysis of
satellite photos showing turbidity effects.  Landsat
passes occur so infrequently that few photos will be
obtained during any measurement period.  Earlier
photos can be studied, to correlate evidence of shore
current patterns with wind records.  3) Aerial sur-
veillance can provide photos of the near-shore area
twice daily from an altitude of 20,000 to 30,000 ft,
and motion of plumes and sediment can be observed and
measured.  4) Periodic launching into the IHC plume of
markers that can be tracked from aerial photos and
perhaps by radio.  Markers should be designed to float
just below the surface so that they follow the water
mass and are not directly affected by winds.  5) Measure-
ment of a few water quality parameters, such as NH3-N,
Fe, O2, temperature, and turbidity.  Samples can be
taken at 5 water intakes twice daily, and from a boat
at a few additional sites, preferably near the floating
markers, to follow the movement and dispersion of water
masses.  6) Recording of wind conditions at existing
Lake stations.

Expected results of such a study will be measurement
of residence time and dispersion of effluent water
masses in the near-shore area, and measurement of the
mixing with deeper water under measured wind conditions.
These data would make it possible to develop a model
for the dispersion in terms of a boundary layer or
other model.
A field measurement program such as  this would  require
cooperative efforts of several organizations, to
provide current meters, aerial surveillance, water
quality measurements, and data analysis and modelling.
Several agencies have an interest in such measurements
on Lake Michigan, and a program such as this could
provide a framework for a cooperative investigation.

                      REFERENCES

1.  FWPCA, Lake currents, a technical report.   364 pp.
    1967.

2.  FWPCA, Physical and chemical quality conditions,
    Lake Michigan basin.  81 pp.  1968.

3.  Cederwall, Klas, Dispersion phenomena in coastal
    environments, J. Boston Soc. Civil Engrs.   57 (1),
    34-70, 1970.

4.  Ippen, A. T., Estuary and coastline hydrodynamics,
    McGraw-Hill, New York, 1966.

5.  Katz, P. L., and Schwab, G. M. , Modelling episodes
    in pollutant dispersion in Lake Michigan, Research
    report No. UILU-WRC-75-0097, University of  Illinois,
    Chicago Circle.  69 pp.  1975.

6.  Kizlauskas, A. G., and Katz, P. L., A 2-layer
    finite-difference model for flows in thermally-
    stratified Lake Michigan,  Proc. 16th Conf.  Great
    Lakes Research, pp. 743-753,  1973.

7.  Palmer, M. D., Coastal region residence time
    estimates from concentration gradients.  17th Great
    Lakes Research Conference.  1974.

8.  Parker, F. L., and Krenkel, P. A.,  Thermal  pollu-
    tion status of the art, Report No.  3 prepared for
    FWPCA, Vanderbilt University,  Nashville,  Tenn.
    1969.

9.  Sharp, J. J., Spread of buoyant jets at the free
    surface, J. Hydraulics Div. ASCE 95 (HY3)
    811-825, 1969.

10.  Snow, R. H. , Water pollution investigation,
    Calumet area of Lake Michigan, Report  EPA-905/9-
    74-011-A, Region V EPA, Chicago,  Vol.  1,  307 pp.
    1974.

11.  Verber, James, Currents and dilution factors for
    lower Lake Michigan, unpublished report of
    Technical Committee, Calumet Area Enforcement
    Conference, June 1965.

12.  Wnek, W. J., and Fochtman, E.  G., Mathematical
    model for fate of pollutants in near-shore  waters,
    Environ. Sci. & Tech.  6 (4),  331-7,  1972.
ACKNOWLEDGEMENT

This work was supported in part by U.S. Environmental
Protection Agency, Region V Enforcement Branch, under
the Great Lakes Initiative Program, Contract No.
68-01-1576.  Howard Zar was project officer.
                                                       531

-------
                                       A RIVER BASIN PLANNING METHODOLOGY
                                     FOR STREAMS WITH DISSOLVED OXYGEN AND
                                           EUTROPHICATION CONSTRAINTS
                                                        by
                                         Thomas M. Walski, Sanitary Engineer
                        Army Corps of Engineers, Environmental Effects Laboratory
                                            Vicksburg, MS 39180
                                                       and
                                          Robert G. Curran, President
                                Curran Associates, Inc., Engineers and Planners
                                            Northampton, MA 01060
Optimal Waste Load Allocation Program 2 (OWLAP2) is a
user-oriented optimization model which selects waste-
water treatment levels to meet water quality con-
straints at least cost.  This program solves the pro-
blem faced by river basin planners, who by merely
insuring that each wastewater discharge meets effluent
standards, will still be unable to meet water quality
standards.  This program selects the least cost com-
bination of additional treatment in the basin that
will insure that water quality standards will be met.

OWLAP2 first simulates water quality in the reaches of
the river under consideration.  It then perturbs the
initial conditions to determine the sensitivity of
water quality to changes in effluent.   It uses these
sensitivities as inputs to an algorithm for lineariz-
ing non-linear systems so that they can be optimized
utilizing linear programming.

                   Scope of Problem

A flowing stream provides many valuable services to
those who live along its shores as it can be utilized
as a source of drinking water, a means of transpor-
tation, a location for recreation and a route for re-
moving waste material from a population center.   With
increasing population it had become obvious that if
the stream was utilized excessively for waste removal,
this function would interfere with other uses.   This
situation eventually reached the point where govern-
ment intervention was necessary to reduce the waste
load to acceptable levels.

Unfortunately, governmental boundaries were generally
not established along those of the river basins and
it was difficult to efficiently reduce the waste
loads to acceptable levels.  Realizing this, various
governmental organizations attempted working together
to effectively manage the nation's rivers, and it was
found that sound planning can decrease waste reduction
costs.

With passage of the 1972 Amendments to the Federal
Water Pollution Control Act (PL 92-500), the United
States became committed to "encouraging and facili-
tating the development and implementation of areawide
waste treatment management plans."  With this law,
wastewater treatment is emphasized as  the method of
choice for achieving water quality goals.   Once one
has established wastewater treatment as the method
for achieving water quality goals in a river basin
system, some tools are required for analysis of the
system to best employ the treatment efficiently.  The
systems are so complex that a plan based on experi-
enced guesswork will not be sufficient.  Instead, the
tools of operations research appear to supply the most
useful method for analyzing such complex systems and
arriving at rational engineering designs for waste-
water treatment facilities.  Granted,  there are a
multitude of engineering judgements which must be con-
sidered in the basin plan,  but the techniques of oper-
ations  research should provide a comprehensive frame-
work for making these judgements.

It is fortunate that river basin planning has come
to the fore at a time when digital computer tech-
nology is available to solve a broad spectrum
of  large problems.   This  allows  the planner to handle
a wider range  of problems than could be attempted
without computer assistance.
The most powerful  of operations  research tools is lin-
ear programming.   It allows  one  to  handle a wide vari-
ety of  very  large  problems in a  reasonably small
amount  of  computer time.   Linear programming will
therefore  be employed in  this study to find the "opti-
mal" plan  for  wastewater  treatment  plant construction
and improvement for  a given  river basin.

The word "optimal" in the previous  paragraph is of
great importance in  this  discussion as it is necessary
to translate this  concept into a linear mathematical
function which can be optimized.  This function is
known as the "objective function."   The form of this
objective  function is  of  crucial  importance as  it is
a mathematical manifestation  of  the goals of those in
the river  basin.   The  objective  function,  which econo-
mists refer to as  the  "social welfare function,"  can
be best described  as:
                       max (B - C)
                where  B = benefits
                       C = costs

In the  cases when  benefits from  a public  good cannot
be estimated,  benefit  cost analysis  must  be abandoned
in favor of cost effectiveness analysis.   An  example
of this is national  defense,  where  there  is very
little  hope for evaluating benefits.   In  such a case
cost effectiveness analysis,  which  involves minimizing
costs for  a given  output, must be employed.   This
approach is attractive in a linear  programming  context
in which the water quality goals  can be included  in
the constraint equations  without  the need  to  formu-
late them  in terms of  cost.

                     Program Outline
OWLAP2  is  a river  basin planning program which  has as
its objective  function minimization  of wastewater
treatment  costs while maintaining at  least  a  minimum
level of water quality.   In addition  to considering
BOD-DO which exhibit  essentially  linear behavior,
OWLAP2  can optimize  such  parameters  as nutrients which
exhibit complex dynamics.   OWLAP2 determines  the opti-
mal treatment  level  for BOD,  organic  nitrogen,  ammonia-
nitrogen,  nitrate-nitrogen and phosphate-phosphorus.
The program methodology,  though,  can  be applied to
other interactive  systems fairly  easily.

OWLAP2 uses two criteria  for  judging  water  quality
levels:  dissolved oxygen and algal biomass.  While
dissolved  oxygen is  the water quality parameter of
most interest, eutrophication may be  a problem  in slow
moving streams.  In  addition  to the well-known  aesthe-
tic and taste  and odor problems,  a  large  algal popu-
lation may have a  serious negative  effect  on  dissolved
oxygen levels.  Because OWLAP2 is a steady state model,
it cannot represent the diurnal behavior of a large
algal population, which produces oxygen during the day
and consumes it at night; the resulting oxygen fluctua-
tions mean that even though the steady state goals may
be reached, the standards may be frequently violated.
Constraining
the size of the algal population  can  circumvent this
problem.
As noted earlier, OWLAP2  handles  the  problem  of linear-
                                                      532

-------
izing and optimizing wastewater treatment costs for a
river system in which not only dissolved oxygen but
also nutrient related water quality goals must be met.
The OWLAP2 package consists of a simulation program
(SIMU)  and the optimization routine (OWLAP2).

Since the passage of PL 92-500, effluent standards are
very often so stringent that water quality constraints
are not binding.  There is virtually no reason to run
an optimization program in this case since the degree
of treatment is determined.  The user can check to see
if this is the case by running SIMU, the water qual-
ity model of OWLAP2, to determine if water quality
constraints will be binding.  Only then need the user
run OWLAP2.

This two-stage approach to determining an optimal sol-
ution serves another purpose.  Since the water quality
model employed in OWLAP2 is quite complex and requires
knowledge of a large number of rate constants, it is
desirable to test the model to see if it fits river
data before attempting to run it.   SIMU allows the
user to do this testing at a much lower cost than
OWLAP2.
The stream standards for OWLAP2 are enforced in terms
of meeting a given minimum dissolved oxygen concentra-
tion and a maximum algal biomass concentration (algal
biomass is expressed in terms of chlorophyll-a).   The
water quality parameters considered in SIMU and OWLAP2
are BOD, ammonia-N, nitrate-N, phosphate-P, organic
nitrogen, dissolved oxygen and algal biomass.   The
water quality goals are met by reducing the amount of
BOD, ammonia, nitrate or phosphate discharged in acc-
ord with the cost minimization objective.
The water quality model employed in SIMU and OWLAP2 is
based on several sources including the work of
O'Connor, Thomann and DiToro,1 and Chen and Orlob.2
The overall model is shown in Figure 1.  If the user
finds that the model does not accurately describe
the system under consideration, he is encouraged to
modify the model, during the running of SIMU, so that
it does.

Since the differential equations describing the algal
kinetics are non-linear, it was virtually impossible
to directly convert the water quality dynamics into a
linear program form.  Griffith and Stewart3 presented
an algorithm for linearizing problems of this type.
The linearization involves doing a Taylor expansion
about the previous solution (based on the effluent
standards initially).  The method also checks to in-
sure that the solution is not greatly different from
the initial solution since the equations may be linear
only over a small range.

The methodology of utilizing the OWLAP2 and SIMU pro-
grams is given in Figure 2.  The OWLAP2 program con-
sists of a main calling (OWLAP2) program and three sub-
routines:
         BUILD:  constructs the water quality con-
                 straints for the linear programming
                 problem;

         COST:   sets up cost vector for optimization;

         LP :     determines the optimal solution to
                 the linear programming problem.

                 Optimization Routine

The optimal solution to the problem is determined
using a linear programming formulation.  The general
form of the linear programming problem is:

         max Z = c x                                 (1)
subject to:
         A x ≤ b,   x ≥ 0                            (2)
Equation (1) is known as the objective function (i.e.
the mathematical expression to be optimized).  Equa-
tions (2) are the constraint equations (i.e. the
mathematical expression of the water quality, effluent,
or other standards to be met).
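
As a concrete (and purely illustrative) example of the standard form in Equations (1) and
(2), the small problem below is solved with SciPy's general-purpose routine; it has nothing
to do with OWLAP2's actual cost and constraint data, and OWLAP2 itself uses the MINIT
routine described later.

```python
# Illustrative only: a tiny LP in the standard form of Eqs. (1)-(2),
# solved with SciPy rather than the MINIT routine used in OWLAP2.
from scipy.optimize import linprog

c = [3.0, 5.0]                 # objective coefficients (maximize c.x)
A = [[1.0, 2.0],               # constraint matrix
     [3.0, 1.0]]
b = [14.0, 18.0]               # right-hand sides

# linprog minimizes, so negate c to maximize c.x subject to A x <= b, x >= 0.
res = linprog(c=[-ci for ci in c], A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)         # optimal x and the maximized objective value
```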

There exist numerous programming packages which solve
the linear programming problem.  The most commonly
employed programs are those which utilize the "sim-
plex" or "revised simplex" method.  These methods are
reliable but somewhat slow.
Another approach is a primal-dual algorithm which can
                                        Figure 1.  Water Quality Model
                      (Sinks shown in the figure include loss to sediment
                       and higher levels of the food web)

                                                       533

-------
                 Figure 2:  OWLAP2 Outline

arrive at an optimal solution in a shorter time than a
simplex or revised simplex code can.   For OWLAP2,  the
MINIT (minimum iterations) algorithm developed by
Salazar and Sen4 and translated from ALGOL to FORTRAN
by this investigator was employed.   It combines the
features of being easy to use and not requiring a
great deal of computer time or payment of rental fees.
OWLAP2 does not fit precisely into the form of the
classical linear programming problem.  OWLAP2, though,
models the behavior of nutrients which cannot be accu-
rately described using linear equations.   Therefore,
the nonlinear nutrient models must be linearized.

If the simulation portion of the program indicates the
necessity to further improve treatment beyond the
effluent limits, the cost minimization for the addi-
tional treatment is performed as follows:
   Maximize G = c_1 x_1 + c_2 x_2 + ... + c_k x_k               (3)

subject to:

   g_i(x_1, x_2, ..., x_k) ≤ b_i     i = 1,2,...,2m             (4)

   The x's are the decision variables (amount of pollu-
   tant discharged).

   The c's are cost constants.

   The b_i's are the requirements constants.

   2m is the number of constraints and m is the num-
   ber of possible critical points, as there is a dis-
   solved oxygen and an algal biomass constraint at each
   critical point.

   k is the number of decision variables.

   The superscript ° denotes values of the variables
   at the initial solution (i.e., only effluent con-
   straints utilized).

The problem may be linearized by using a Taylor expan-
sion around the vector x° = (x_1°, x_2°, ..., x_k°) for the non-
linear constraints.  This gives:

   Maximize G = Σ_r [c_r x_r° - c_r(x_r° - x_r)]                (5)

subject to:

   g_i(x°) + Σ_r (∂g_i/∂x_r)(x_r° - x_r) ≤ b_i    i = 1,2,...,2m    (6)

If one makes the Δx_r = (x_r° - x_r) terms small enough, the

   w_ir = ∂g_i/∂x_r  (evaluated at x°)                          (7)

are constant.  Equations (5) and (6) can now be
rewritten as:

   Maximize (G - v_0) = Σ_r (-c_r Δx_r)                         (8)

subject to:

   Σ_r w_ir Δx_r ≤ (b_i - v_i)     i = 1,2,...,2m               (9)

where

   v_0 = Σ_r c_r x_r°     and     v_i = g_i(x°)

Since the g_i's are nonlinear, the w_ir terms are con-
stant only for small values of Δx.  To guarantee that
these are indeed small, it is necessary to include
constraints of the form:

   Δx_r ≤ (a specified limit on the change in each x_r)
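
A toy version of this successive-linearization loop may make the procedure easier to follow.
The sketch below is not OWLAP2: the nonlinear constraint (a circle), the starting point, the
move-limit rule, and the use of SciPy's linprog in place of MINIT are all assumptions made
only for illustration.

```python
# A minimal sketch of successive linearization in the spirit of Griffith and
# Stewart: the nonlinear constraint is replaced by its Taylor expansion about
# the current point, the resulting LP is solved, and move limits keep each
# step inside the region where the linearization is trustworthy.
import numpy as np
from scipy.optimize import linprog

def g(x):                      # nonlinear constraint g(x) <= b (a toy example)
    return x[0] ** 2 + x[1] ** 2

def grad_g(x):                 # its gradient, used for the linearization
    return np.array([2.0 * x[0], 2.0 * x[1]])

c = np.array([1.0, 1.0])       # objective coefficients
b = 4.0                        # right-hand side of the constraint
x = np.array([0.5, 0.5])       # initial (feasible) solution
move = 0.5                     # move limit on each delta-x

for _ in range(50):
    w = grad_g(x)              # w_r = dg/dx_r at the current point (Eq. 7)
    A_ub = [w]                 # linearized constraint: g(x0) + w.(x - x0) <= b
    b_ub = [b - g(x) + w @ x]
    bounds = [(max(0.0, xi - move), xi + move) for xi in x]   # move limits
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)    # maximize c.x
    candidate = res.x
    if g(candidate) > b + 1e-6:    # linearization overshot: shrink the step
        move *= 0.5
        continue
    if np.linalg.norm(candidate - x) < 1e-6:
        break
    x = candidate

print(x, c @ x)                # approaches (sqrt(2), sqrt(2)), objective ~2.83
```

In OWLAP2 the same idea is applied with the water quality constraints of Equation (9), and
the linearity constraints play the role of the move limits.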
-------
         Figure 3:  Form of Linear Programming Input
                    (k = number of discharges)
                    (m = number of critical points)
         (The input array comprises the cost vector, the D.O. constraints,
          the algal biomass constraints, the linearity constraints, and the
          value of the objective function.)

     GROWTH = RATE × AB                                           (12)

The RATE of algal growth can then be given by the
Monod relation:

     RATE = K [(NO3 + NH3)/(SN + NO3 + NH3)] [PO4/(SP + PO4)]
               [SA/(SA + AB)]                                     (13)

     NO3 = nitrate-nitrogen concentration
     NH3 = ammonia-nitrogen concentration
     PO4 = phosphate-phosphorus concentration
     SN  = half saturation constant for nitrogen
     SP  = half saturation constant for phosphorus
     SA  = constant for algal self-shading
 The algal death rate is a  linear  function of algae
 present.  This model can easily be  altered for systems
 in which algal die-off follows different patterns.
 Algal loss differs from death in that loss represents
 the algae and nutrients being  removed  from the system
 (usually up the food chain).   Death, on the other hand,
 implies that the nutrients  are fed  back into the
 system.
 Biochemical Oxygen Demand
 BOD can be removed from the system  by  oxidation at
 rate K1, or sedimentation at rate K3.  Aside from dis-
 charges at wastewater treatment plants and the BOD
 carried in by tributaries,  the only BOD source is
 death of algae.  The BOD considered is ultimate car-
 bonaceous BOD.  The following  formula  is employed:
 (BOD contribution from algae) = (death of algae as chlorophyll-a)

      × 78.4 gr BOD / gr dead chlorophyll-a                       (14)

 BOD kinetics can be described by:

   d(BOD)/dt = -(K1 + K3)(BOD) + 78.4(DEATH)                      (15)

   DEATH = KD(AB)
   KD    = rate constant for algal death
Ammonia Nitrogen

 Ammonia nitrogen is formed as organic nitrogen is
 broken down, and can either be oxidized to nitrate
 and nitrite or used as a nutrient by algae.  This
 model does not explicitly  consider  the oxidation of
 ammonia to nitrite but rather  considers the overall
 reaction as given below:
   2NH3 + 4O2  --(bacteria)-->  2NO3⁻ + 2H⁺ + 2H2O                (16)

The kinetics of ammonia are given below as:

   d(NH3)/dt = Kon(ON) - Kn(NH3) - 7.2 GROWTH (NH3)/(NH3 + NO3)   (17)

   ON  = organic nitrogen concentration
   Kon = rate constant for oxidation of organic nitrogen
                                                          Nitrate  Nitrogen

                                                          Nitrate  nitrogen is formed by oxidation of ammonia at
                                                          rate  Kn  and  is  used as a nutrient for algal growth.
                                                          The kinetics of nitrate are given by:
   d(NO3)/dt = Kn(NH3) - 7.2 GROWTH (NO3)/(NH3 + NO3)             (18)
Nitrate reduction to gaseous  nitrogen is  not considered
to be significant but  can be  included in  the model if
low dissolved oxygen levels are  to  be expected in a
reach.
Phosphate Phosphorus

Phosphate-phosphorus has as its  source wastewater dis-
charges and release of phosphorus from dying algae.
The kinetics of phosphate are given below:

   d(PO4)/dt = 1.66 (DEATH - GROWTH)                              (19)

The constant 1.66 is derived from the conversion by
Megard.5

   1.66 gr P / gr chlorophyll-a                                   (20)
                                                          If removal of phosphorus by sediments  is  significant,
                                                          it can be easily included in  the model.
                                                          Organic Nitrogen
Organic nitrogen, as referred to in this problem, is
organic nitrogen not present as living biomass (e.g., amino
acids and polypeptides released from decaying biomass).
                                                          Organic nitrogen decays to ammonia  and is  formed by
                                                          decaying living matter.  The  kinetics  of  organic nitro-
                                                          gen are given below:
   d(ON)/dt = -Kon(ON) + 7.2(DEATH)                               (21)
                                                          The constant 7.2 is a  conversion  factor,  since  there
                                                          are 7.2 mg organic nitrogen per every mg  of  chloro-
                                                          phyll-a, and DEATH is  expressed as mg algae  (as chlo-
                                                          rophyll-a) that die per day.   It  is  also  assumed that
                                                          nitrogen fixation is not  significant.

                                                          Dissolved Oxygen
Dissolved oxygen is consumed in oxidation of carbon-
aceous BOD, organic nitrogen, and ammonia-nitrogen.
It is replaced through atmospheric reaeration and algal
respiration.  The rates are respectively K1, Kn, K2,
and Kr.  The kinetics of dissolved oxygen are given by:

   d(DO)/dt = -K1(BOD) - 4.57 Kn(NH3) + K2(Cs - DO)
                + Kr(GROWTH) - SR                                 (22)

   AB = mg/l chlorophyll-a
   K1 = rate constant for oxidation of BOD
   K2 = rate constant for atmospheric reaeration
   Kn = rate constant for nitrification
   Kr = rate constant for algal respiration
   SR = other oxygen sources and sinks
   Cs = saturation concentration of dissolved oxygen
The K's have units of 1/day except for Kr, which is in
mg O2/mg chlorophyll-a/day.  The 4.57 in Equation (22)
represents the grams of oxygen consumed per gram of
ammonia nitrogen oxidized to nitrate, stoichiometrically.
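
Read together, Equations (12) through (22) define a coupled set of reaction terms.  The
function below is one compact reading of them, offered only as an illustration: every rate
constant is a placeholder (the Special Note below stresses that real applications require
calibration), the SR term is carried as a user-supplied constant, the algal loss pathway is
omitted, and the transport along each reach that SIMU performs is not shown.

```python
# A compact reading of the kinetic terms in Eqs. (12)-(22).  All rate constants
# are arbitrary placeholders, not calibrated values.
def kinetics(state, k):
    bod, nh3, no3, po4, on, do_, ab = (state[s] for s in
                                       ("BOD", "NH3", "NO3", "PO4", "ON", "DO", "AB"))

    # Eq. (13): Monod limitation by nitrogen, phosphorus, and self-shading
    rate = (k["K"]
            * (no3 + nh3) / (k["SN"] + no3 + nh3)
            * po4 / (k["SP"] + po4)
            * k["SA"] / (k["SA"] + ab))
    growth = rate * ab                      # Eq. (12)
    death = k["KD"] * ab                    # linear algal death

    frac_nh3 = nh3 / (nh3 + no3) if (nh3 + no3) > 0 else 0.0

    return {
        "AB":  growth - death,                                             # algae (loss term omitted)
        "BOD": -(k["K1"] + k["K3"]) * bod + 78.4 * death,                  # Eq. (15)
        "NH3": k["Kon"] * on - k["Kn"] * nh3 - 7.2 * growth * frac_nh3,    # Eq. (17)
        "NO3": k["Kn"] * nh3 - 7.2 * growth * (1.0 - frac_nh3),            # Eq. (18)
        "PO4": 1.66 * (death - growth),                                    # Eq. (19)
        "ON":  -k["Kon"] * on + 7.2 * death,                               # Eq. (21)
        "DO":  (-k["K1"] * bod - 4.57 * k["Kn"] * nh3                      # Eq. (22)
                + k["K2"] * (k["Cs"] - do_) + k["Kr"] * growth - k["SR"]),
    }

if __name__ == "__main__":
    state = {"BOD": 5.0, "NH3": 0.5, "NO3": 1.0, "PO4": 0.10,
             "ON": 0.8, "DO": 7.0, "AB": 10.0}              # mg/l (AB as chlorophyll-a)
    k = {"K": 2.0, "SN": 0.3, "SP": 0.03, "SA": 20.0, "KD": 0.2,
         "K1": 0.3, "K3": 0.1, "Kon": 0.1, "Kn": 0.25,
         "K2": 2.0, "Kr": 1.6, "Cs": 9.0, "SR": 0.0}        # placeholder rates
    print(kinetics(state, k))
```
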
                     Special Note

While OWLAP2 provides the framework for  optimally
allocating waste loads,  the user must realize that the
results of the model will only be  as accurate as the
model, model calibration and cost  inputs allow.   The
program is constructed so that the model can  be easily
adjusted to fit virtually any  one-dimensional, steady-
state system.  Since the model is  so flexible, great
care must be exercised in adjusting the  model so that
                                                       535.

-------
it will be specific for each system.  This requires
considerable data collection.  Similarly, while typi-
cal cost data is included in the model, the user must
be aware that costs will be site specific and must in-
clude accurate costs as the model is quite sensitive
to these inputs.

Acknowledgements:   This program was developed under
EPA Contract 68-01-2916, William P. Somers, Project
Officer.
REFERENCES:
1.  O'Connor, D.J., R.V. Thomann and D.M. DiToro,
    Dynamic Water Quality Forecasting and Management,
    U.S. EPA, Office of Research and Development,
    EPA 660/3-73-009, Aug.  1973.
2.  Chen, C.W. and G.T. Orlob, Ecologic Simulation
    for Aquatic Environments, NTIS PB-218 828, Dec. 1972.
3.  Griffith, R.E.  and R.A.  Stewart, "A Nonlinear
    Programming Technique for the Optimization of
    Continuous Processing Systems," Management Science,
    Vol. 7, p. 379-392, 1961.
4.  Salazar, R.C.,  and S.K.  Sen, "MINIT Algorithm for
    Linear Programming", CACM, Vol. 11, No.  6, June'76.
5.  Megard, R.O.,  Rates of Photosynthesis and Phyto-
    plankton Growth in Shagawa Lake, Minnesota, EPA-
    R3-73-039, July 1973.
                                                       536

-------
                             MODELING POLLUTANT MIGRATION IN SUBSURFACE ENVIRONMENTS
                                            Amir A.  Metry,  Ph.D.,  P.E.
                                                  Project Manager

                                                Roy  F.  Weston,  Inc.
                                            West Chester, Pennsylvania
                       SECTION I
                   ACKNOWLEDGEMENTS

This study is  funded by a  Research and Development
Grant from Roy F.  Weston,  Inc.  The members of the
research team  would like to express their appre-
ciation for the financial  support and encourage-
ment of Weston Research and Development Committee
members.

Numerical analysis and computer simulation was con-
ducted by  Dr.  Arun K.  Deb, Principal  Environmental
Systems Engineer of Roy F. Weston, Inc.

The technical  support of Dr. James Davidson,
of the University of Florida, and his co-workers
Dr. P.S.C. Rao and Dr. H.E. Selim is greatly
appreciated.
                       SECTION 2
                     INTRODUCTION
 Need for Proper Assessment of
 Potential Groundwater Pollution

 Groundwater is one of the earth's most widely distri-
 buted and most important resources.  Its quantity is
 estimated to be six times that of the fresh water
 flowing  in all the streams on earth.  Groundwater
 accounts for 20 percent of the total amount of water
 withdrawn from all sources.  Groundwater re-
 sources  have many advantages over surface water
 resources because they are more widely and easily
 available than surface water supplies.  The physical
 and chemical quality of groundwater is fairly uniform
 throughout the year.  It is rarely, if ever, necessary
 to consider removal of sediment from groundwater.

 The existing demand for groundwater as a source of
 conventional water supply will continue to grow;
 furthermore, aquifers will be considered as storage
 media for flood water in place of dams and reservoirs
 as the cost of these facilities grows progressively
 higher.  With the effort to clean up streams
 under water pollution control acts, aquifers will be in
 demand as alternative means for direct and indirect
 disposal of both liquid and solid wastes from indus-
 trial and domestic activities.

 Although not as dramatic and apparent as surface water
 pollution, degradation of the quality of subsurface
 waters is widespread.  Several sources of groundwater
 pollution have been identified, including leachate
 from sanitary landfills; industrial waste seepage
 from storage basins; industrial waste introduced
 through groundwater recharge; domestic waste from
 septic tanks; fertilizer, pesticides, and irrigation
 salts leached from soils in agricultural areas, and
 leachate from raw materials and waste stockpiles, etc.

 The successful location and operation of a waste  dis-
 posal site require quantitative knowledge of how leach-
 ing fluids will migrate through an aquifer.   This will
 depend on hydrogeologic parameters of the leachate/
 aquifer system, type of waste, and climatic conditions
 at the site area.   Experimental methods are required
 to quantify the different parameters involved in  the
 mass exchange between leachate and the aquifer.  Among
 the  different  types  of models  suitable  for  dispersion
 patterns,  mathematical models  can  be  conveniently  de-
 vised.

 This mathematical model describes  the hydrogeologic
 relations  within a  leachate/aquifer system.   It  is
 usually  in the form  of a  second-order partial differ-
 ential equation together  with  a set of  auxiliary con-
 ditions  describing  the system's variables and con-
 stants.   If such equations are sufficiently simplified,
 exact solutions may be possible.  However, these
 simplifications are often physically unrealistic.
 Numerical solutions obtained with the aid
 of high-speed digital computers offer a great help for
 solving  such equations under physically realistic as-
 sumptions.

 Problem  Definition

 Rainfall  over a waste disposal processing or storage
 area causes infiltration, and  therefore leachate gener-
 ation.   Most of the  states in  the U. S. have net infil-
 tration;  the potential for leachate generation there-
 fore exists in these humid areas.

 Leachate containing  several polluting substances leaves
 the waste  material and travels through unsaturated
 media.   During their travel, many of these pollutants
 are subject to assimilation by soils, because of the
 adsorption and ion exchange capacity of such materials.
Attenuated  leachate  then  leaves the subsaturated zones
 and enters the aquifer, where  it is subject to dis-
 persion  in groundwater and to chemical reaction with
 earth materials.

 In order to determine the impact of a waste handling or
 disposal  facility on subsurface waters the following
 steps must be taken:

     1.   Determine quantities and  characteristics of
         the leachate.  This can be accomplished by
          laboratory and/or field investigations.

     2.   Determine the degree of leachate assimilation
         as it travels through unsaturated media.

     3.   Determine the pattern of leachate dispersion
         and its  eventual  concentrations and chemical
         reactions in the aquifer.

Scope and Objectives

Modeling leachate dispersion and assimilation in sub-
saturated and saturated media is an essential step in
determining the impact of waste disposal facilities on
subsurface water quality.

 The main objective of this study is to develop mathe-
 matical and computer models to predict leachate-pol-
 lutant migration and fate in subsurface environments.
 Because  of  the hydraulic  discontinuity between sub-
 saturated  earth layers (soils) and saturated  layers
 (aquifers), two models were developed:  1) a one-
 dimensional model to predict pollutant attenuation
 in subsaturated media (soils), and 2) a two-
 dimensional model to predict pollutant migration
                                                       537

-------
and fate  in saturated media  (aquifers).  Four basic
criteria  were considered  in  developing these
models.   The models should be--
        • Representative  of  the  physical  conditions  of
         both  the saturated and unsaturated  media.

        • Based on sound mathematical principles.

        • Easy to understand and usable by engineers
          and scientists.

        • Practical and economical to run on commonly
          used computers.
Summary of Literature Review

The literature is rich in theoretical background for
dispersion of soluble matter in porous media.  More
work has been done in the area of dispersion in
saturated media than in unsaturated ones.  There is
apparent agreement in the literature on the validity
of partial differential equations to model pollutant
dispersion and fate in porous media (soils and
aquifers); however, there is still a tremendous gap
between the degree of sophistication of research in
this field and the actual technology used in day-to-
day applications.  In spite of the great research
effort, almost all solid and liquid waste disposal
sites are located on land without any modeling
activity to determine their impact on subsurface
water quality and to determine the need for, and
degree of environmental controls.  One reason
for this great gap is the difficulty that engineers
and scientists working in the field have in under-
standing and applying much of the existing research;
unfortunately, most of the published research work
ended up as research for sake of research, rather than
application.  Therefore, the need for easy-to-under-
stand and easy-to-apply models utilizing easy-to-
quantify field conditions (hydrogeologic parameters)
is badly needed to predict pollutant fate and dis-
persion in soils and aquifers.
                       SECTION 3
             MODELING POLLUTANT MIGRATION
                 IN SUBSATURATED MEDIA


Large quantities of solid waste and hazardous waste are
being disposed of by placing the material on the soil
surface or by burying it in large landfills.  These
practices are often conducted without proper consider-
ation of how the waste material will behave in a given
soil or under specific climatic conditions.  Procedures
are needed to assist in site selection and to define
proper management schemes for applying waste onto or
below the soil surface.  The procedure needs to be
descriptive of how various constituents in a waste
behave in a soil-water system, but simple enough to
be of practical use.

Mathematical Model

This section presents an approach to these problems
which may be of value.  The procedure, because of
insufficient research data, has not been validated
and should not be applied without first comparing it
to field data and experience.  The physical system
assumes a constant application of a leachate con-
stituent (e.g., Cd, Pb, Hg, As, and Se) of concen-
tration c0 (µg/cm³ or mg/L) to the soil surface, or
a large source of waste in a landfill that releases a
given constituent to the soil-water system at a con-
centration c0.  The following conceptual model was used
for this investigation:

     ∂c/∂t = D ∂²c/∂z² - w ∂c/∂z - (ρ/θ) ∂S/∂t - Kc          (3.1)

where:
     c = constituent concentration (µg/cm³ or mg/L)
         in soil solution
     S = adsorbed constituent concentration (µg/g or
         mg/kg)
     t = time (yr)
     D = hydrodynamic dispersion coefficient (m²/yr)
     z = distance (m)
     w = average pore-water velocity (m/yr)
     ρ = bulk density of dry soil (g/cm³)
     θ = soil water-content fraction (cm³/cm³)
     K = transformation rate constant (yr⁻¹)

The third term on the right-hand side of equation (3.1)
represents adsorption.  An equilibrium adsorption state
will be assumed, with a linear relationship between
solution and adsorbed solute phases.  This is expressed
as:

     S = Kd c                                                (3.2)

where Kd is the distribution coefficient (cm³/g).  Dif-
ferentiating (3.2) with respect to time gives:

     ∂S/∂t = Kd ∂c/∂t                                        (3.3)

Substituting (3.3) into (3.1) and rearranging gives:

     (1 + ρKd/θ) ∂c/∂t = D ∂²c/∂z² - w ∂c/∂z - Kc            (3.4)

or

     ∂c/∂t = (D/R) ∂²c/∂z² - (w/R) ∂c/∂z - (K/R) c           (3.5)

where R = (1 + ρKd/θ), which is frequently referred to
as the retardation factor.  When no adsorption occurs
(Kd = 0), the retardation factor is unity.  Note that
as the retardation factor is increased above unity, the
effective values for hydrodynamic dispersion, average
pore-water velocity, and transformation rate are re-
duced.  The net effect is to reduce the mobility and
chemical transformation parameters of the constituents
being described.

Equation (3.5) was solved for the following initial
and boundary conditions:

     c = 0       z > 0      t = 0
     c = c0      z = 0      t > 0

The physical meaning of the boundary conditions corres-
ponds to a situation where a soluble constituent in
leachate (e.g., Cd, Pb, Hg, As, or Se) is continually
supplied to a soil surface which did not contain the
material initially.  The chemical transformation pro-
cess represents irreversible adsorption, precipitation,
and/or changes in the chemical state of the constituent
being described.
                                                       538

-------
Mathematical Solution

The solution to (3.5), subject to the  initial and
boundary conditions, is
     c/c0 = 1/2 exp[z(w' - √(w'² + 4D'K'))/(2D')] erfc[(z - t√(w'² + 4D'K'))/√(4D't)]
          + 1/2 exp[z(w' + √(w'² + 4D'K'))/(2D')] erfc[(z + t√(w'² + 4D'K'))/√(4D't)]     (3.6)

where:
     w' = w/R
     D' = D/R
     K' = K/R

and erfc(z) is the complementary error function.

Model Parameters

The average pore-water velocities (w) used in this
section to illustrate the general utility of equation
(3.6) are 1.75, 0.876, 0.438, and 0.088 m/yr.  The
corresponding fluxes or Darcy velocities depend upon
soil-water content (θ), since average pore-water
velocity (w) is equal to flux or Darcy velocity divided
by soil-water content by volume (θ).  Therefore, the
annual groundwater recharge represented by these average
pore-water velocities will also depend upon the average
soil-water content in the water transmission zone.  The
dispersion coefficient is a function of average pore-
water velocity and was obtained using the following re-
lationship from Biggar and Nielsen (to appear):

     D = 0.022 + 0.17 w^1.14                                 (3.7)

The transformation rate coefficients selected are
0.00876, 0.0876, and 0.876 yr⁻¹.  These are, in
general, small values, and may or may not be realistic
parameters for leachate containing heavy metals (Cd, Pb,
Hg, As, or Se).
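
For readers who want to exercise equation (3.6) numerically, a minimal
sketch is given below.  It assumes Python with NumPy and SciPy; the
function name and the soil properties used in the example (rho, Kd,
theta) are illustrative assumptions, not values taken from this study.

# Illustrative sketch of equation (3.6); not the authors' program.
# The soil properties (rho, Kd, theta) below are assumed example values.
import numpy as np
from scipy.special import erfc

def relative_concentration(z, t, w, D, K, rho=1.5, Kd=1.0, theta=0.3):
    """Return c/c0 at depth z (m) and time t (yr) from equation (3.6)."""
    R = 1.0 + rho * Kd / theta            # retardation factor, equation (3.5)
    wp, Dp, Kp = w / R, D / R, K / R      # w', D', K'
    U = np.sqrt(wp**2 + 4.0 * Dp * Kp)
    denom = np.sqrt(4.0 * Dp * t)
    return 0.5 * (np.exp(z * (wp - U) / (2.0 * Dp)) * erfc((z - U * t) / denom)
                  + np.exp(z * (wp + U) / (2.0 * Dp)) * erfc((z + U * t) / denom))

# Example: dispersion coefficient from equation (3.7) for w = 0.876 m/yr
w = 0.876
D = 0.022 + 0.17 * w**1.14
print(relative_concentration(z=1.0, t=5.0, w=w, D=D, K=0.0876))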

A literature search was conducted using the references in
Copenhauer and Wilkinson (1974), but transformation rate
constants (K) and distribution coefficients (Kd) are
generally not measured or reported.  The values used in
this report are thought to represent the range that
might be encountered in a natural soil environment.  If
a Kd value is not available for a given soil and leachate
constituent, it can be measured by shaking leachate con-
taining a known concentration of the solute with soil
(e.g., at a 1:1 ratio) until equilibrium is reached, and
then measuring the constituent concentration in the
supernatant solution.  This procedure gives the quantity
of the constituent adsorbed.  The ratio of equilibrium
adsorbed concentration (S) to solution concentration (c)
is equal to Kd.  This procedure assumes that the adsorp-
tion isotherm is linear over the concentration range in
question.
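
As a hedged illustration of the batch procedure just described, the
arithmetic can be written out as follows; the function name and the
numbers in the example are hypothetical, not data from this study.

# Illustrative arithmetic for a batch Kd estimate (hypothetical values).
def batch_kd(c_initial, c_equilibrium, leachate_volume_ml, soil_mass_g):
    """Kd (cm^3/g) = S/c for a linear adsorption isotherm; c in mg/L."""
    adsorbed_mg = (c_initial - c_equilibrium) * leachate_volume_ml / 1000.0
    S = adsorbed_mg / soil_mass_g          # mg adsorbed per g of soil
    c = c_equilibrium / 1000.0             # mg per cm^3 of solution
    return S / c

# Example: 100 mL of 10 mg/L leachate shaken with 100 g of soil until the
# supernatant reads 6 mg/L gives a Kd of about 0.67 cm^3/g.
print(batch_kd(10.0, 6.0, 100.0, 100.0))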
                       SECTION 4
          MATHEMATICAL MODELING OF POLLUTANT
                MIGRATION IN AQUIFERS

In this section different mathematical models will be
formulated to represent migration of pollutants gener-
ated from a waste-disposal  site into saturated porous
earth materials.  Three major mass-transport mechanisms
are included separately or simultaneously  in each model:

       • Molecular diffusion -- the transport of pol-
         lutants in their ionic state because of the
         difference in concentration levels of a given
         species in the aquifer.

       • Convective dispersion -- the mixing of pollut-
         ants in the aquifer caused by the variation
         in the microscopic-pore velocities within each
         channel of flow, or from one channel to
         another.

        • Chemical reaction -- the transfer of the pol-
         luting substances from their liquid carrier to
         the solid matrix of the aquifer.  In this study
         the transfer of pollutants is considered in the
         adsorptive rather than the desorptive sense.

Simultaneous Diffusion, Convective Dispersion and
Chemical Reaction Model

The rate of change of concentration, ∂c/∂t, can be
mathematically defined for the diffusion and convective
dispersion models together with a chemical reaction
term, which can be defined as the function f(c):

     ∂c/∂t = div(Dm grad c) + div(D' grad c) - div(vc) - f(c)     (4.1)

Including the coefficients Dm and D' in one term D, the
effective diffusivity, Equation (4.1) can be rewritten
as

     ∂c/∂t = div(D grad c) - div(vc) - f(c)                       (4.2)

where:
     c = pollutant concentration (ML⁻³)
     D = the effective diffusivity (L²T⁻¹)
     v = groundwater velocity vector (LT⁻¹)

f(c) is a function of concentration,

     f(c) = b(c - Ms)ⁿ

where:
     s       = concentration of polluting substance
               per unit weight of the solid matrix
     b and M = constants
     n       = exponent ≥ 1
                              SECTION  5
            NUMERICAL SOLUTION OF TWO-DIMENSIONAL MODEL
                OF POLLUTANT  MIGRATION IN  AN  AQUIFER

        In order to use operational  methods  for solving math-
        ematical models, many oversimplifications have to be
        imposed in order to solve the  second-degree partial
        differential  equations describing  the system:   col-
        lapsing the model  into a one-dimensional  state,
        assuming homogeneity  of the  medium,  and considering
        only one mechanism or two of the three mechanisms
        involved in the mass  transport in  every solution.
        These simplifications and assumptions, which are
        necessary but physically unrealistic, reduce the value
        and applicability of the operational solutions to real
        physical problems.  In this section, a numerical tech-
        nique is developed to solve  the  mathematical equation
        describing simultaneous diffusion, convective dis-
        persion, and chemical reaction of  pollutants into an
        unconfined aquifer in two dimensions  under transient
        and steady-state conditions.
                                                       539,

-------
Formulation of the Finite Difference Scheme

As shown in Section 4, the dispersion equation in the
x-z domain can be rewritten as:

     ∂c/∂t = Dx ∂²c/∂x² + Dz ∂²c/∂z² - u ∂c/∂x - w ∂c/∂z - Kc        (5.1)

In Equation (5.1) the different variables and para-
meters are defined as follows:

     x and z:    Cartesian coordinates in the direc-
                 tion of groundwater flow and the vertical
                 direction, respectively (units:  L)

     Dx and Dz:  the effective diffusivities in the
                 x and z directions, respectively (L²T⁻¹)

     u, w:       the components of pore velocities
                 in the x and z directions, respective-
                 ly (LT⁻¹)

     K:          the coefficient of chemical re-
                 action of the polluting substance with
                 the porous medium (T⁻¹)

Using the backward difference equation for time,

     (∂c/∂t)(m,n,s) = [c(m,n,s+1) - c(m,n,s)] / τ + O(τ)             (5.2)

Using the central difference equations for x and z,

     (∂c/∂x)(m,n,s) = [c(m+1,n,s) - c(m-1,n,s)] / 2h + O(h²)         (5.3)

     (∂c/∂z)(m,n,s) = [c(m,n+1,s) - c(m,n-1,s)] / 2k + O(k²)         (5.4)

     (∂²c/∂x²)(m,n,s) = [c(m-1,n,s) - 2c(m,n,s) + c(m+1,n,s)] / h²
                        + O(h²)                                      (5.5)

     (∂²c/∂z²)(m,n,s) = [c(m,n-1,s) - 2c(m,n,s) + c(m,n+1,s)] / k²
                        + O(k²)                                      (5.6)

where:
     h, k, τ:  numerical increments of x, z and t, re-
               spectively
     m, n, s:  nonnegative integers corresponding to
               the x, z and t coordinates respectively,
               so that c(m,n,s) is the concentration at
               the grid point (x = mh, z = nk, t = sτ)

Substituting Equations (5.2) - (5.6) into Equation
(5.1):

     [c(m,n,s+1) - c(m,n,s)] / τ
          = (Dx/h²)[c(m-1,n,s) - 2c(m,n,s) + c(m+1,n,s)]
          + (Dz/k²)[c(m,n-1,s) - 2c(m,n,s) + c(m,n+1,s)]
          - (u/2h)[c(m+1,n,s) - c(m-1,n,s)]
          - (w/2k)[c(m,n+1,s) - c(m,n-1,s)]
          - K c(m,n,s)                                               (5.7)

Rearranging Equation (5.7), the explicit form of the dif-
ference equation can be written as follows:

     c(m,n,s+1) = c(m,n,s)
          + (τDx/h²)[c(m-1,n,s) - 2c(m,n,s) + c(m+1,n,s)]
          + (τDz/k²)[c(m,n-1,s) - 2c(m,n,s) + c(m,n+1,s)]
          - (τu/2h)[c(m+1,n,s) - c(m-1,n,s)]
          - (τw/2k)[c(m,n+1,s) - c(m,n-1,s)]
          - τK c(m,n,s)                                              (5.8)

Introducing the non-dimensional parameters αx, αz,
βx, βz, and K̄ as:

     αx = τDx/h²,   αz = τDz/k²                                      (5.9)

     βx = τu/h,     βz = τw/k                                        (5.10)

     K̄  = τK                                                         (5.11)

Substituting Equations (5.9), (5.10) and (5.11) into Equa-
tion (5.8) and rearranging:

     c(m,n,s+1) = c(m,n,s)[1 - 2αx - 2αz - K̄]
          + c(m-1,n,s)[αx + 0.5βx] + c(m+1,n,s)[αx - 0.5βx]
          + c(m,n-1,s)[αz + 0.5βz] + c(m,n+1,s)[αz - 0.5βz]          (5.12)

The computational molecule for the x-z domain can be
written as:

     c(m,n,s+1) = c(m,n,s)[1 - 2αx - 2αz - K̄]
          + Left [αx + 0.5βx] + Right [αx - 0.5βx]
          + Up [αz + 0.5βz] + Down [αz - 0.5βz]                      (5.13)

Left, Right, Up, and Down represent the concentra-
tions of the pollutants at the grid points adjacent to
the specific grid point c(m,n,s) in the finite difference
scheme.

Conditions for Stability of the Numerical Solution

The necessary conditions for stability of the numer-
ical scheme can be written as

     τDx/h² + τDz/k²  ≤  1/2                                         (5.14)

and

     τK  ≤  1                                                        (5.15)
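
A compact sketch of the explicit scheme (5.12)-(5.13), with the
stability checks (5.14)-(5.15), is shown below.  It assumes Python
with NumPy and a simplified boundary treatment; it is not the
FORTRAN IV program described in Section 7, and the function and
argument names are illustrative.

# Illustrative NumPy sketch of the explicit scheme (5.12)-(5.13); not the
# FORTRAN IV program of Section 7.  All inputs must be in consistent units
# (e.g., ft and days).  Boundary handling is simplified: c[:, 0] is the
# leachate inflow at z = 0 and all other boundaries stay at their initial
# (background) values.
import numpy as np

def simulate(c0, c_inflow, Dx, Dz, u, w, K, h, k, tau, nsteps):
    """March equation (5.13) forward nsteps time increments."""
    ax, az = tau * Dx / h**2, tau * Dz / k**2      # alpha_x, alpha_z  (5.9)
    bx, bz = tau * u / h, tau * w / k              # beta_x,  beta_z   (5.10)
    Kbar = tau * K                                 # tau*K             (5.11)
    # Necessary stability conditions (5.14) and (5.15)
    assert ax + az <= 0.5 and Kbar <= 1.0, "grid increments violate stability"

    c = c0.copy()
    for _ in range(nsteps):
        new = c.copy()
        new[1:-1, 1:-1] = (c[1:-1, 1:-1] * (1.0 - 2*ax - 2*az - Kbar)
                           + c[:-2, 1:-1] * (ax + 0.5*bx)    # Left  (x - h)
                           + c[2:, 1:-1]  * (ax - 0.5*bx)    # Right (x + h)
                           + c[1:-1, :-2] * (az + 0.5*bz)    # Up    (z - k)
                           + c[1:-1, 2:]  * (az - 0.5*bz))   # Down  (z + k)
        new[:, 0] = c_inflow        # concentration entering at z = 0
        c = new
    return c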
                       SECTION 6
                DEFINITION OF PARAMETERS

Unsaturated Model

The parameters that appear in the mathematical model
for subsaturated media are those found in the fol-
lowing equation:

     ∂c/∂t = (D/R) ∂²c/∂z² - (w/R) ∂c/∂z - (K/R) c           (6.1)

c, t, z, D and K are similar to the parameters for the
saturated flow media and are discussed in the follow-
ing section.

     R is known as the retardation factor and is equal
to (1 + ρKd/θ), where
                                                       540

-------
     ρ  is the bulk density of dry soil or earth mate-
        rials (g/cm³) and can be estimated for each
        type of earth material, quantified through
        laboratory testing of undisturbed core samples,
        or correlated with the degree of soil
        saturation

     Kd is the distribution coefficient (cm³/g).  The
        value of Kd is leachate/soil specific and
        could be measured in the laboratory by shaking
        leachate containing a known concentration of
        the solute with soil (e.g., 1:1 or 1:10 ratio)
        until equilibrium is reached and then measur-
        ing the constituent concentration in the
        supernatant.  The ratio of equilibrium
        adsorbed concentration (S) to solute concen-
        tration (c) is equal to Kd for linear
        adsorption isotherms.

     θ  is the water content, by volume (mL of water
        per cm³ of soil)

     S  is the concentration of the adsorbed phase
        (mg/g soil)
Saturated Model

The parameters that appear in the mathematical  model for
saturated media are those found  in  the following
equation:
     ∂c/∂t = Dx ∂²c/∂x² + Dz ∂²c/∂z² - u ∂c/∂x - w ∂c/∂z - Kc    (6.2)
The concentration,  co,  of each polluting sub-
stance in  the  leachate  as  it  enters  the  aquifer  is
determined by  applying  the  mathematical  equations
3.1 and 3.6 presented  in  Section  3.   The saturated
media model  predicts concentrations  (c)  at  various
locations  in the  aquifer  (identified by  x,z coordinates)
for different values of time, t.

Time  (t):   Time is  the  duration of travel of
polluting  substances  in the aquifer.  Time can be
described  as:   a) a buildup period,  in which the
concentration  of  a  certain contaminant increases;
b) a steady state period, in which the concentration
remains constant; or  c) recovery  period, in which
the concentration starts  to decline  with the passage
of time.  The  representative time for a  computer run
will vary, depending  on aquifer hydrogeologic
character, from several months to ten years.  Time
increments vary from  several  days to several months.

Space Coordinates (x  and  z) :   These  are taken in the
direction  of groundwater  flow and perpendicular to  it,
respectively.   In the computer runs, the spatial domain
in the direction  of flow  extends  from a  few hundred feet
to a few miles, and in  the vertical  direction it extends
from a few feet to  a  few  hundred  feet, depending on the
characteristics of  the  aquifer.  The increment varies
from  10-1000 feet in  the  direction of flow, and 1-10
feet perpendicular  to it.

Effective Diffusion Coefficients (Dx and Dz):  These are a
function of molecular diffusion coefficients and pore
velocities (u and w).  Their numerical values vary from
a fraction to several sq ft/day, depending on the type
of aquifer materials and the pore velocities.

Chemical Reaction Coefficient (K):  This value is
expressed as a linear function of the concentration.  In
this study, K is considered in the sense of pollutant
removal from solution into the solid matrix.  It is dif-
ficult, however, to develop a single value for this
function since it is dependent on geophysical and geo-
chemical properties of the aquifer and on groundwater/
leachate chemical interaction.  The value of K is usually
on the order of a small fraction of day⁻¹ for most aqui-
fers, and it increases with increases in the reactive
materials (e.g., clay or salts) content of the aquifer
materials.  Chemical reaction coefficients can be
determined using laboratory columns (lysimeters).
                       SECTION 7
                  COMPUTER SIMULATION

General Considerations

Pollutant migration through subsaturated soil and
saturated aquifers has been dealt with separately, using
Equations (3.1) and (5.1), respectively.  Equation
(3.1) has been solved analytically and the solution is
given by Equation (3.6).  For saturated media, Equation
(5.1) has been solved numerically using a finite
difference technique.  The finite difference form of
Equation (5.1) is given by Equation (5.6).  The con-
ditions of convergence of the solution of finite difference
Equation (5.6) to differential equation (5.1) are given
by Equations (5.14) and (5.15).  A program has been
developed to solve Equation (5.6) numerically on
a high-speed digital computer with proper initial
and boundary conditions.

The differential equation for subsaturated soil layers
(Eq. 3.1) will be solved first to find the concentration
of a pollutant as it enters the  interface of the sub-
saturated and saturated layers.  This would constitute
one of the boundary conditions (at z = 0)  for the
solution of the saturated model.  The concentration
of the pollutant at other boundary conditions is assumed
to be equal to background condition.  The initial con-
dition of pollutant concentration at all the x - z grid
points at time t = 0 has been assumed to be equal to the
background concentration.

The computer  simulation is expressed as two-dimensional
concentration profiles which can be oriented in either
the vertical  or the horizontal domain, under transient or
steady-state  conditions.

Program Logic

When the initial and boundary condition values of pol-
lutant concentration in the saturated aquifer are known,
the concentration of pollutants at all  the x - z grid
points at t = 0 are known.  The computation of concen-
tration for all x-z grid points for the next time in-
terval, Δt, would be done using the finite difference
equation (5.6).  In a similar manner, a step-by-step
solution for  the pollutant concentration at all  x-z
grid points for various time intervals may be obtained.

The procedure of "marching solutions" is repeated for
progressing time intervals up to the required time
period for which the results are desired.   The selection
of finite difference grid intervals should be made in
such a way that the stability conditions (Equations
5.14 and 5.15) are satisfied.  The program has been
written in FORTRAN IV and is installed  in  the Weston
computer system.

Input Requirements

The input data required for computation consist of grid
characteristics, initial and boundary concentrations of
pollutants, program operation and control  data, and
aquifer characteristics such as velocities and
diffusion coefficients in the x and z directions and
the coefficient of adsorption.  The output of the
program consists of the ratio of concentration of pol-
lutant in the two-dimensional domain for various time
periods.
                                                       541

-------
Typical Run

Figures 1 through 3 are graphical presentations of a
computer simulation of a typical run based on the
following parameters:

Number of Finite Space Increments in X Direction    =     20
Number of Finite Space Increments in Y Direction    =     20
Number of Finite Time Increments                    =     52
Background Concentration of Pollutant               =      0.0100
Grid Space Size in X Direction in Feet              =    200.00
Grid Space Size in Z Direction in Feet              =      5.00
Grid Size in Time Direction in Days                 =      7.00
Coeff. of Chemical Reaction of Adsorp. of Poll.     =      0.001
Effective Diffusion Coeff. in X-Direction, ft²/day  =      4.000
Effective Diffusion Coeff. in Z-Direction, ft²/day  =      0.50
Ground Water Flow Velocity in ft/day                =      5.00
Vertical Velocity of the Leachate in ft/day         =      0.20

Figure 1 is a plot of pollutant isopleths in the
aquifer.  The leachate plume dips in the aquifer, which
is attributed to the vertical-component velocity vector
(recharge velocity).

Figures 2 and 3 are plots of concentration profiles for
various depths below the ground water table and for
various distances downstream of the source, respectively.

FIGURE 1  POLLUTANT ISOPLETHS (RUN 1)

FIGURE 2  CONCENTRATION PROFILES FOR VARIOUS DEPTHS BELOW GWT (RUN 1)

FIGURE 3  CONCENTRATION PROFILES FOR VARIOUS DISTANCES DOWNSTREAM
          FROM SOURCE (RUN 1)
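
Under the same assumptions as the sketch following Section 5, the Run 1
parameters above could be supplied as in the fragment below; the inflow
strip at z = 0 is a placeholder, since the actual leachate entry profile
for Run 1 is not listed here.

# Hypothetical driver using the Run 1 parameters with the simulate()
# sketch given after Section 5; the inflow profile is a placeholder.
import numpy as np

nx, nz = 20, 20                  # finite space increments in each direction
h, k, tau = 200.0, 5.0, 7.0      # grid sizes (ft, ft, days)
background = 0.0100

c0 = np.full((nx + 1, nz + 1), background)
c_inflow = np.full(nx + 1, background)
c_inflow[:3] = 1.0               # assumed leachate entry strip (not from the report)

c = simulate(c0, c_inflow, Dx=4.0, Dz=0.5, u=5.0, w=0.2,
             K=0.001, h=h, k=k, tau=tau, nsteps=52)
print(c.round(3))

With these values, αx + αz is about 0.14 and τK = 0.007, so the
stability conditions (5.14) and (5.15) are satisfied.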
            SECTION 8
CONCLUSIONS AND RECOMMENDATIONS
      FOR FUTURE RESEARCH

           Conclusions

              • The models  developed  in  this  study:   a)  have a
                sound mathematical  basis;  b)  account for major  mass
                transport mechanisms;  c)  are  flexible and prac-
                tical;  and  d)  are  accurate.

              • The models  can be  used to simulate subsurface
                contamination  from area  sources,  such as solid
                waste disposal  sites,  wastewater  holding basins,
                wastewater  or  sludge application  sites,  raw
                material stockpiles and  recharge  of  contaminated
                waters.

              • The models  can be  used as  a  tool  for water quality
                studies, disposal  site selection  and design, eval-
                uation  of environmental  impact,  recovery of ground-
                water contaminants, and  planning  related to sub-
                surface water  resources.

              • The accuracy of predicted  contamination  by the
                models  depends on  the  accuracy  of hydrogeologic
                parameters  used in the simulation.  It is rea-
                lized that  quantifying such  parameters is one of
                the most difficult tasks  in  simulating subsurface
                water contamination.

           Recommendations  for Future  Research

              • The "first  generation  models" developed  in this
                study should be further  tested,  using field data.

              • More effort is needed  in  defining and quantifying
                various hydrogeologic  parameters  used in the models.

              • The "second generation" models should be upgraded
                to account for:  a) multiple sources; b) hetero-
                geneity of soils and aquifers; c) non-linearity
                of chemical reactions; and d) pumping and recharge
                in an aquifer.

              • It would be of great value to engineers and
                scientists in the field if the computerized
                solution of the models were expressed in a series
                of nomographs that related pollutant concentration
                to hydrogeological parameters of a disposal site.
                                                       542

-------
                               A MODEL OF TIDAL FLUSHING FOR SMALL COASTAL BASINS

                                                 Albert Y. Kuo
                                      Virginia Institute of Marine Science
                                        Gloucester Point, Virginia 23062
                      Abstract

     An empirical theory is proposed to model the
flushing of a small coastal basin by tidal exchange.
The theory is adapted from Ketchum's  tidal prism con-
cept with modification.   The application of the method
requires that a water body be divided into segments
such that complete mixing at high tide within each
segment may be assumed.   Starting from the mouth, each
segment is defined such that its volume at low tide
equals the total tidal prism landward from the inner
boundary of the segment.  Therefore, each segment has
a length equal to the local tidal excursion.

     The flushing capability of a segment is defined
as the fraction of dissolved substance removed per
tidal cycle, i.e. the flushing rate, which was derived
from the principle of mass-balance.  The concentration
distribution of an introduced pollutant was expressed
in terms of discharge rate, volume, flushing rate,
and decay rate.  A model has been set up for the Cock-
rell Creek of Virginia to study a proposed 0.2 MGD
STP.  The model was used to project the distribution
of fecal coliform bacteria and biochemical oxygen
demand.

                    Introduction

     Estuaries and coastal waters are being used more
and more frequently as dumping grounds for pollutants
resulting from human activities.  If properly balanced
with the assimilative capacity, this may be a practi-
cal use of these water bodies.  However, careful plan-
ning must be executed such that the introduced pollu-
tants will not upset the ecological balance and pre-
clude other usage of the water bodies.

     The application of water quality models has proven
to be a powerful technique in water resource manage-
ment.  The primary results of the model are the pre-
diction of the distribution and concentration result-
ing from discharge of a new pollutant, or an increase
or decrease of an existing pollutant discharged to a
water body.  The fundamental goal of a water quality
model is to represent the complex interaction of the
prototype in a simplified form which not only simulates
the existing conditions with accuracy but also can
predict the likely consequence of a proposed change of
pollutant discharge.

     The majority of recent developments in the field
of water quality modeling pertain to numerical mathe-
matical modeling (1).  These models used advanced com-
puter techniques to find solutions to the governing
equations of motion and mass balance.  An important
feature of these models is the requirement of substan-
tial data from prototype, either for input data or for
calibration and verification of the model.  The appli-
cation of these models to a particular water body often
involves a large investment of time and effort.  In
the case of small coastal basins (e.g., a coastal creek
on the order of 10 km long and 100 meters wide), it is
usually impractical to use this kind of model to study
a proposed small waste discharge.  A simple tidal
flushing model for small coastal basins which requires
only the data of tidal range and basin topography is
described in this paper.
   Theoretical Consideration and Basic Assumptions

      The tidal prism concept has been used to evaluate
the ability of an estuary to disperse pollutants (2,3).
The tidal prism is equal to the difference between
water volumes at high and low tides.  In an estuary,
part of this volume is contributed by river flow, part
by water which enters through the seaward boundary on
the flooding tide.  In a small coastal basin, the con-
tribution of river water may be so small at times that
the tidal prism consists wholly of the water brought
in by the tide.  This inter-tidal volume of water
serves to dilute the introduced pollutants and event-
ually flushes them out of the estuary or coastal basin.

      The objective of this model is to calculate the
equilibrium distribution of introduced pollutants.
During each tidal cycle, the pollutant concentration
at any location varies with the stage of the tide, but
on successively similar tidal stages, the pollutant
concentration returns to the same value.  For the
equilibrium condition to exist, the pollutant discharge
rates and river flow, if any, must be kept constant for
a period much longer than the flushing time of the
water body.

      Ketchum's  assumption of the tidal prism concept
is adapted with modification.  Ketchum assumed complete
mixing of the water entering on flood tide with all of
the water present throughout the estuary at low tide.
He further assumed that the maximum length of the
estuary over which complete mixing is possible is de-
termined by the average excursion of a water particle
on the flood tide.  In the present model, Ketchum's
second assumption is retained and the water body is
divided into segments of length equal to the local
tidal excursion.  Instead of complete mixing with all
water present at low tide, the water entering on flood
tide is assumed to mix completely with the water
present in the most seaward segment at low tide.  Some
portion of this mixture, in turn, enters the next
landward segment and mixes completely with the water
present there at low tide.  This process progresses
landward until the limit of the estuary or coastal
basin.  On the ebb tide, the part of water making up
the local inter-tidal volume of each segment escapes
to the adjacent seaward segment.  The flushing is thus
accomplished by a series of tidal exchanges with the
pollutants moving progressively seaward.

                    Segmentation

      For the purpose of model construction, a water
body is divided into segments each having a length
equal to the local tidal excursion.  In a small coastal
basin, the critical time for water quality is usually
at the period when the freshwater input is at a mini-
mum, or zero.  The method of segmentation employed by
Ketchum cannot be applied because it requires the
river flow as a non-zero parameter to start the segmen-
tation process from the head of the estuary.  In the
present model, the segmentation process starts from
the mouth of the water body and the length of each
segment is chosen to equal the tidal excursion with
zero freshwater inflow.

      Figure 1 shows the plan view of a coastal basin
with its volume V(x) and tidal prism P(x) plotted as
function of distance x from the mouth.  V(x) is defined
                                                      543

-------
as the accumulated low-tide volume along the main
stem from the mouth to the transect at x.  P(x) is
the inter-tidal volume landward from the transect at
x, including those of branches.  The most seaward
segment (segment no. 1) is defined between transects
1 and 2 such that its low-tide volume V1 equals the
tidal prism landward of transect 2, i.e., P2.  In
general,
           n+1
           n+2
(1)
        •
             l
                  n+1
where V  is the low-tide volume of the nth segment
       n
which is between the nth and (n+l)th transects, P  ,
                                                 n+1
is the tidal prism landward from the (n+l)th transect
and p +1 is the local tidal prism, or inter-tidal
volume of the (n+l)th segment.  Therefore, the low-
tide volume of a segment equals the tidal prism land-
ward from it and also it is equal to the high-tide
volume of its adjacent landward segment.

     The low-tide volume of a segment decreases mono-
tonically landward as the tidal prism decreases.  If
the basin has a vertical shoreline, then in principle,
Vn → 0 as n → ∞ and there will be an infinite number of
segments.  Complete mixing is never achieved at the
landward end of the basin because of the diminishing
tidal excursion.  If the basin has a sloping beach,
the volume of the most landward segment may be chosen
as the tidal prism of those areas which are exposed at
low tide and submerged at high tide.

     Each of the branches of the basin may be segmented
in the same way as that of the main stem.
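
The segmentation rule can be sketched numerically as below, assuming
Python with SciPy and that the accumulated low-tide volume V(x) and the
landward tidal prism P(x) are available as callable functions of distance
from the mouth; the function name is illustrative.

# Illustrative sketch of the segmentation rule (equation (1)); V_of_x and
# P_of_x are assumed callables giving the accumulated low-tide volume and
# the landward tidal prism at a distance x from the mouth.
from scipy.optimize import brentq

def segment_boundaries(V_of_x, P_of_x, basin_length, max_segments=30):
    """Return transect locations x1, x2, ... measured from the mouth (x = 0)."""
    transects = [0.0]
    for _ in range(max_segments):
        x_n = transects[-1]
        if P_of_x(x_n) <= 0.0:             # no tidal prism left landward
            break
        # Segment n runs from x_n to x_{n+1}, chosen so that its low-tide
        # volume V(x_{n+1}) - V(x_n) equals the tidal prism P(x_{n+1}).
        f = lambda x, x_n=x_n: (V_of_x(x) - V_of_x(x_n)) - P_of_x(x)
        if f(basin_length) < 0.0:          # criterion cannot be met before the head
            break
        x_next = brentq(f, x_n, basin_length)
        if x_next <= x_n:
            break
        transects.append(x_next)
    return transects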

      Distribution of Conservative Pollutants

     If one assumes that a conservative pollutant is
discharged into the mth segment at a rate of Q per
tide, the pollutant concentration in each segment may
be calculated by considering the mass balance.

Segments Seaward of Outfall

     Under equilibrium conditions, the net amount of
the pollutant 'flushed' across a transect seaward of
the outfall, i.e. n ≤ m, must be equal to Q.  A volume
of water Pn is transported seaward and landward on ebb
and flood tides respectively.  Let Cn be the concentra-
tion of the nth segment at high tide, then the total
mass transported seaward during ebb tide is PnCn.
Since the flooding water is assumed to mix completely
with the water present in the (n-1)th segment at low
tide before it is transported across the nth transect,
that water transported landward will have concentra-
tion Cn-1, the concentration of the (n-1)th segment
at high tide.  Therefore

     PnCn - PnCn-1 = Q,  or

     Cn = Cn-1 + Q/Pn                                        (2)
Equation (2) requires that C0 be specified before the
concentration distribution may be calculated.  This is
equivalent to the boundary condition requirement for
solution of an advection-diffusion equation.  Assuming
that a fraction a of the water entering the basin
through transect 1 on flood tide is water that escaped
from the basin during the previous ebb tide, then

     C0 = aC1                                                (3)

and equation (2) becomes

     C1 = Q / [(1 - a)P1]        for n = 1

In general, equation (2) becomes

     Cn = C0 + Σ(i=1 to n) Q/Pi                              (4)
     If a flushing rate γn is defined as the portion
of the pollutant removed from the nth segment per
tidal cycle, mass balance requires that

     γn Cn (Vn + pn) = Q

where Cn(Vn + pn) is the total mass of the pollutant
in the nth segment at high tide.  Then

     γn = Q / [Cn (Vn + pn)]                                 (5)
Segment Landward of Outfall

     If the nth segment is located landward from the
outfall, calculation of the pollutant concentration
may be considered an intrusion problem.  Under equilib-
rium conditions, there should be no net transport of
the pollutant across the nth transect, thus

     PnCn - PnCn-1 = 0,  or  Cn = Cn-1

In general,

     Cn = Cm        if n > m                                 (6)
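
Equations (2) through (6) translate into a few lines of code; the sketch
below assumes Python, 1-based segment numbering carried in ordinary
lists, and illustrative function names.

# Illustrative sketch of equations (3)-(6) for a conservative pollutant;
# P is the list of tidal prisms P_1..P_N (same units as Q per tidal cycle),
# m is the (1-based) index of the outfall segment and a the returning ratio.
def conservative_concentrations(Q, P, m, a):
    N = len(P)
    C = [0.0] * (N + 1)                    # C[1..N]; C[0] unused
    C[1] = Q / ((1.0 - a) * P[0])          # equations (2) and (3) for n = 1
    for n in range(2, m + 1):
        C[n] = C[n - 1] + Q / P[n - 1]     # equation (2)/(4), seaward of the outfall
    for n in range(m + 1, N + 1):
        C[n] = C[m]                        # equation (6), landward of the outfall
    return C[1:]

# Flushing rate of segment n, equation (5):  gamma_n = Q / (C_n (V_n + p_n))
def flushing_rate(Q, C_n, V_n, p_n):
    return Q / (C_n * (V_n + p_n))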
     Distribution of Nonconservative Pollutants

     In addition to flushing by tidal action, a non-
conservative pollutant will undergo a decaying process
which will further reduce the concentration distribu-
tion in a water body.  The mechanisms of tidal flush-
ing and decay may be assumed to work independently
and their combined effect may be studied through the
principle of mass-balance in a segment of the basin.

     If Wn is the total mass of the pollutant in the
nth segment, then the amount of the pollutant removed
per tidal cycle by tidal flushing is γnWn, where γn
is the flushing rate defined previously.  The remaining
mass of the pollutant (1 - γn)Wn will undergo decay.
Assuming that the pollutant decays linearly with a
decay rate of k per tide, the amount of the pollutant
decaying in one tidal cycle will be (1 - γn)Wn(1 - e^-k).
Therefore, the total loss of the pollutant per tidal
cycle is
                                                       544

-------
     γnWn + (1 - γn)Wn(1 - e^-k)  =  Wn[1 - (1 - γn)e^-k]
      Under equilibrium conditions, the same amount of
the pollutant has to be supplied by the adjacent seg-
ment closer to the pollutant source, thus,

     Wn = γn+1 Wn+1 / [1 - (1 - γn)e^-k]                     (7)

If the pollutant were not decaying during the time it
is transported from the (n+1)th segment to the nth
segment, equation (7) might be reduced to

     (Wn)o = γn+1 Wn+1 / γn                                  (8)

where (Wn)o is the total mass of the pollutant in the
nth segment with no decay in the segment.  By combin-
ing equations (7) and (8), the following is obtained

     Wn = γn (Wn)o / [1 - (1 - γn)e^-k]                      (9)

     Equation (9) states that the factor for pollutant
reduction due to decay within the nth segment is

     γn / [1 - (1 - γn)e^-k]

which also has been shown independently by Ketchum,
et al. (4).  Equations (4) and (6) give the concentration
distribution due to the flushing by tidal action alone.
After applying the decaying factor, the concentration
distribution of a nonconservative pollutant may be
summarized as follows:

     Cn = { Π(i=n to m) γi / [1 - (1 - γi)e^-k] } (Cn)o      if n < m    (10)

and

     Cn = { Π(i=m to n) γi / [1 - (1 - γi)e^-k] } (Cn)o      if n > m    (11)

where (Cn)o is the concentration of a conservative
pollutant and is given by equation (4) or (6).

      If the decay rate is zero, equations (10) and
(11) reduce to Cn = (Cn)o.  It is apparent that for
 any given flushing rate, any increase in the decay
 rate  will decrease the concentration.  However for a
 pollutant  with a given decay rate, a larger flushing
 rate  will result in higher relative concentrations
 compared  to those of the conservative pollutant, since
 the pollutant remains within the segment of the water
 body  for a  shorter time.
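
Continuing the previous sketch, equations (9) through (11) only multiply
the conservative result by the reduction factor of each segment between
the outfall segment m and segment n; gamma holds the flushing rates
γ1..γN and k is the decay rate per tide (all names are illustrative).

# Illustrative sketch of equations (10) and (11): the conservative result
# is multiplied by the reduction factor of every segment between the
# outfall (segment m) and segment n, inclusive.
from math import exp

def nonconservative_concentrations(C_conservative, gamma, m, k):
    """Apply the decay factors of equations (10)-(11); index 0 holds segment 1."""
    def factor(i):                                   # gamma_i / [1 - (1 - gamma_i) e^-k]
        g = gamma[i - 1]
        return g / (1.0 - (1.0 - g) * exp(-k))
    result = []
    for n in range(1, len(C_conservative) + 1):
        lo, hi = (n, m) if n <= m else (m, n)
        reduction = 1.0
        for i in range(lo, hi + 1):
            reduction *= factor(i)
        result.append(reduction * C_conservative[n - 1])
    return result

Setting k = 0 makes every factor unity and recovers Cn = (Cn)o, as noted
above.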

                 Model Application

      The model has been applied to the Cockrell Creek,
 Virginia ,  a small coastal basin located near the mouth
 of the  Great Wicomico River,  which itself is a tribu-
 tary  of the Chesapeake Bay.   The creek is about 3.5
 miles  (5.63 km)  long with a width ranging from 300 ft.
 (91.5m) to  1300 ft.  (396m).   The total drainage area is
        4.6 mi² (11.9 km²).  Examination of the salinity dis-
       tribution (figure 2)  reveals that at times in the
       summer the creek is  well-mixed and the freshwater in-
       flow is almost zero.   This condition makes the tidal
       prism concept most applicable.

              A 0.2 MGD (0.0088 m³/s) sewage treatment plant
       was proposed for the treatment of the sewage from the
       town of Reedville and two nearby fish processing
       plants.  The primary environmental concern is the
       effect of the proposed waste discharge on the shellfish
       due to the possible  increase of coliform bacteria and
       depletion of dissolved oxygen.  The Food and Drug Ad-
       ministration has a water quality standard of 14 MPN/
       100 ml of fecal coliform for shellfish harvesting.

             Figure 3 shows the segmentation of Cockrell
       Creek according to the tidal prism concept.   The tidal
       prisms, low-tide volumes and flushing rates  are listed
       in Table 1.  Two sets of flushing rates were calcula-
        ted, with a = 0 and 0.5, respectively, where a is the
       fraction of the water entering the creek at  flood tide
       which left the creek during the previous ebb tide.   It
       is preferable that a be determined by a tracer experi-
       ment in the prototype.  However, the data in the table
       show that the dependence of flushing rates on the value
       of a diminishes rapidly in the landward direction.   For
       example, a change of a from 0.0 to 0.5 changes the
       flushing rate at segment M5 by only 11%.

             The model was  used to calculate the fecal coli-
       form and BOD (biochemical oxygen demand)  concentrations
       in the creek.  The proposed outfall is located in
       segment M5.  The effluent is secondary treated sewage
        with 24 mg/l of BOD5 and 200/100 ml of fecal coliform.
        Assuming a BOD decay rate of 0.1/tide and a coliform
       die-off rate of 0.5/tide, the  following concentrations
       for segments adjacent to the outfall were obtained  with
       the flushing rates corresponding to a = 0.5:
                                   Segment       BOD       Fecal Coliform
                                                 mg/l       MPN/100 ml

                                     M4         0.048          0.09
                                     M5         0.078          0.25
                                     M6         0.065          0.14
                                     B1         0.070          0.17
                                     C1         0.074          0.21
                                   The above table shows that the increase in the
                              fecal  coliform count at segment M5 will be 0.25/100 ml
                              if  a is assumed to be 0.5.  For a more conservative
                               estimate, a is assumed to be 0.9; then the flushing
                              rate and fecal coliform count at segment M5 will be
                              0.113/tide and 0.28/100 ml respectively.  The low sensi-
                              tivity of the coliform concentration in response to the
                              value of a lies in the fact that the lower flushing rate
                              allows a longer time for bacteria to die off in any
                              particular segment.

                                                  Discussion

                                   The tidal prism method of Ketchum has been modi-
                              fied and applied to small coastal basins with negligible
                              freshwater runoff.   Ketchum's method was designed for
                              use in the estuaries where the freshwater may be treated
                              as a tracer.  In his method, the estuary is segmented
                              from its head to mouth using both the river flow and
                              tidal prism as parameters.  The segmentation process
                              fails in the singular case of zero freshwater inflow.
                              The modified method proposed here uses the tidal prism
                              as the sole parameter to segment a coastal basin from
                              its mouth to head.   The flushing rate of each segment
                                                        545

-------
Table 1.  Values of Pn, Vn, and γn for the segments
          in the Cockrell Creek.

Segment or        Pn          Vn         γn (1/tide)
Transect      (10⁴ ft³)   (10⁴ ft³)    a=0.0    a=0.5

   M1            3810        3355       1.00     0.5
   M2            3355        2920       0.53     0.36
   A1             215         170       0.89     0.85
   A2             170         120       0.53     0.52
   A3             120         357       0.11     0.11
   M3            2705        2317       0.40     0.31
   M4            2317        2105       0.32     0.27
   M5            2105        1799       0.26     0.23
   B1             603         500       0.47     0.44
   B11            136         105       0.68     0.66
   B12            105         255       0.14     0.14
   B2             364         296       0.44     0.42
   B21             60         156       0.25     0.25
   B3             236         180       0.40     0.39
   B31             84         192       0.16     0.16
   B4              96         359       0.11     0.11
   C1             246         200       0.69     0.66
   C2             200         155       0.46     0.45
   C3             155         122       0.37     0.37
   C4             122         313       0.09     0.09
   M6             950         787       0.37     0.33
   D1              62         138       0.26     0.26
   M7             725         600       0.32     0.31
   M8             600         493       0.28     0.27
   E1              93         196       0.21     0.21
   M9             400         280       0.30     0.29
   M10            280         200       0.30     0.29
   M11            200         150       0.29     0.29
   M12            150         100       0.28     0.28
   M13            100          60       0.30     0.30
   M14             60          60       0.17     0.17
was derived from a more rigorous mass-balance principle
instead of intuitively defining it as the ratio of
intertidal volume to segment volume.

     The proposed model is most practical for environ-
mental studies for a small project in a small coastal
basin.  In the absence of freshwater  runoff,  the small
coastal basins tend to be well-mixed  and the  tidal
exchange is the sole mechanism to flush out the pollu-
tant.  The method requires a minimum  amount of data:
the tidal range and the volume of the basin.   The only
parameter which needs to be calibrated is the returning
 ratio a, the fraction of water entering the  basin at
 flood tide which left the basin during the previous
 ebb tide.  This parameter may be determined by con-
 ducting tracer experiments in the prototype.   However
 the  dependence of the flushing rates on the  value of
 a decreases rapidly as the segments  proceed  landwards.
 The  predicted concentration distribution of  a noncon-
 servative  pollutant is rather insensitive to the
 change  in the value of a.

                  Acknowledgements

     I wish to thank Mr. G. Parker for his help with
 the  numerical calculation.  This model was developed
 under the Cooperative State Agencies Program; the
 continuing  support of the Virginia State Water Control
 Board is appreciated.

     This is Contribution No. 746 from Virginia
 Institute of Marine Science.
                References

1.  Tracor, Inc., 1971, Estuarine Modeling:  An
    Assessment.  Water Pollution Control Research
    Series, 16070 D2V 02/71, Environmental Protection
    Agency, Washington, D. C.

2.  Ketchum, B. H., 1951, "The Exchanges of Fresh and
    Salt Waters in Tidal Estuaries."  J. of Marine
    Res., Vol. 10, No. 1.
                                                           3.  Ketchum, B. H., 1951, "The Flushing of Tidal
                                                               Estuaries."  Sewage and Industrial Wastes,
                                                               Vol.  23, No. 2.

                                                           4.  Ketchum, B. H., J. C. Ayers and R. F. Vaccaro,
                                                               1952, "Processes Contributing to the Decrease of
                                                               Coliform Bacteria in a Tidal Estuary."  Ecology,
                                                               Vol.  33, No. 2.
                                                        546

-------
      Segmentation Criterion:  Vn = Pn+1 + Pb
      Vn = low-tide volume of the nth segment
      Pn+1 = tidal prism landward from the (n+1)th transect
      Pb = tidal prism of the branch connecting to the segment

 Figure 1.  Segmentation of a Coastal Basin.

 Figure 2.  Longitudinal Salinity Distribution in the Cockrell Creek,
            Virginia.  (Surface and bottom salinity and dissolved
            oxygen plotted against distance upstream in statute miles;
            1 statute mile = 1.609 km; LWS, July 19.)

 Figure 3.  Segmentation of the Cockrell Creek.
                                                        547

-------
                                        EVALUATION OF MATHEMATICAL MODELS

                           FOR THE SIMULATION OF TIME-VARYING RUNOFF AND WATER QUALITY

                                     IN STORM AND COMBINED SEWERAGE SYSTEMS
                                               Albin Brandstetter
                                               Research Associate
                                       Water and Land Resources Department
                                    Battelle - Pacific Northwest Laboratories
                                              Richland, Washington
                     Richard Field
                         Chief
            Storm and Combined Sewer Section
      Municipal Environmental Research Laboratory
         U. S. Environmental Protection Agency
                  Edison, New Jersey
                   Harry C. Torno
                   Staff Engineer
          Media Quality Management Division
         Office of Research and Development
        U. S. Environmental Protection Agency
                  Washington, D. C.
ABSTRACT

The use of mathematical models  for  the assessment,
planning,  design,  and control of  storm and  combined
sewerage systems is becoming widespread in order to
develop more cost-effective wastewater management
schemes than are possible with  conventional steady-
state analysis techniques.  The U.S.  Environmental
Protection Agency has sponsored an  assessment  of
simulation models to provide a  readily available
reference guide for selecting models  best suited for
specific purposes.  Most models reviewed  include the
computation of the time-varying runoff from rainfall
and flow routing in sewerage networks.  Some models
simulate the time-varying wastewater  quality,  and  a
few models include mathematical optimization tech-
niques for the least-cost design  of new sewer  system
components or for optimal real-time operation  of
combined sewer overflow structures.   The  assessment
summarized the principal features,  assumptions and
limitations of each model and compared numerical test
results and computer running costs  for seven models.
Additional model features were  recommended  which would
enhance or extend model simulation  capabilities and
use.

INTRODUCTION

Mathematical models are being used  more frequently for
the assessment of existing sewerage system  performance,
the planning and design of new  facilities,  and the con-
trol of untreated overflows during  rainstorms.  For
some purposes, primarily the design of sanitary sewer-
age systems, steady-state models  are  adequate  to
compute the least-cost combinations of sewer pipes and
slopes for specified inflows.  Nonsteady-state models
are required, however, to adequately analyze complex
storm and combined sewerage systems under dynamic
runoff conditions.

A considerable number of steady-state and nonsteady-
state models have been developed  in the last few years
for the analysis of sewerage systems.  It is conse-
quently becoming increasingly confusing for the user
to select the model most suited for a particular
application.  A review of the more  comprehen-
sive nonsteady-state urban hydrologic models was  there-
fore conducted to develop a brief summary of most
available models and to provide a reference of model
features and their strengths and  limitations.1
MODEL COMPARISONS

The following models were reviewed  (models marked
with * were also tested with computer runs):

     *1.  Battelle Urban Wastewater Management
          Model2,3
      2.  British Road Research Laboratory Model4,5
     *3.  Chicago Flow Simulation Program6,7
      4.  Chicago Hydrograph Method8,9 and Runoff
          and Pollution Models10
      5.  CH2M-Hill Wastewater Collection System
          Analysis Model11
      6.  Colorado State University Urban Runoff
          Models12,13,14
      7.  Corps of Engineers STORM Model15,16
     *8.  Dorsch Consult Hydrograph-Volume Method17 and
          Quantity-Quality Simulation Program18
     *9.  Environmental Protection Agency Storm Water
          Management Model19,20
     10.  Hydrocomp Simulation Program21,22
     11.  Illinois State Water Survey Urban Drainage
          Simulator23
    *12.  Massachusetts Institute of Technology
          Urban Watershed Model24,25
     13.  Minneapolis-St. Paul Urban Runoff Model26
     14.  Norwegian Institute for Water Research
          Sewerage System Models27
     15.  Queen's University Urban Runoff Model28
     16.  Seattle Computer Augmented Treatment and
          Disposal System29
    *17.  SOGREAH Looped Sewer Model30
     18.  University of Cincinnati Urban Runoff Model31
     19.  University of Illinois Storm Sewer System
          Simulation Model32,33
     20.  University of Massachusetts Combined Sewer
          Control Simulation Model34
     21.  University of Nebraska Urban Hydrologic
          Simulator35
     22.  Watermation Cleveland Sewer Model1
    *23.  Water Resources Engineers Storm Water
          Management Model36
     24.  Wilsey and Ham Urban Watershed Model37

In general, the reviewed models combine  the runoff
from several  catchments and  route  the wastewaters
within  the  sewer networks.   Most  of them consider  the
spatial nonuniformity of  rainfall;  the  time-varying
runoff  resulting from rainstorms  of different  inten-
sities  and  durations; spatial  and  temporal  variations
in dry-weather flows; the attenuation  of flows  during
overland, gutter,  and sewer  conduit flow routing;  and
                                                       548

-------
                                 Table 1.  COMPARISON OF MAJOR MODEL CATEGORIES

     (The original table is a large matrix indicating, with a dot, which of the features
     listed below each of the 24 reviewed models includes.  The individual entries could
     not be recovered from this copy; the recoverable row and column headings follow, and
     a few column headings were not legible.)

     Models (origin and abbreviation):  Battelle Northwest (BNW); British Road Research
     Lab (RRL); Chicago Sanitary District (FSP); CH2M-Hill (SAM); City of Chicago
     (CHM-RPM); Colorado State University; Corps of Engineers (STORM); Dorsch Consult
     (HVM-QQS); Environmental Protection Agency (SWMM); Hydrocomp (HSP); Illinois State
     (ILLUDAS); MIT-Resource Analysis (MITCAT); Minneapolis-St. Paul (UROM-9); Norwegian
     Water Res. (NIVA); Queen's University (QUURM); Seattle Metro (CATAD); SOGREAH
     (CAREDAS); University of Cincinnati (UCUR); University of Illinois (ISS); University
     of Massachusetts; University of Nebraska (HYDRA); Watermation (CSM); Water Resources
     Engineers (STORMSEWER); Wilsey and Ham (WH-1).  Model years range from 1969 to 1975.

     Feature categories:
       Catchment hydrology - multiple catchment inflows; dry-weather flow; input of
         several hyetographs; runoff from impervious areas; runoff from pervious areas;
         water balance between storms
       Sewer hydraulics - flow routing in sewers; upstream and downstream flow control;
         surcharging and pressure flow; pumping station; storage; prints stage; prints
         velocities
       Wastewater quality - dry-weather quality; quality routing; sedimentation and
         scour; quality reactions; wastewater treatment; quality balance between storms
       Receiving water - flow simulation; quality simulation
       Miscellaneous - continuous simulation; can choose time interval; design
         computations; applied to real problems; computer program available

the operation of flow diversion structures and stor-
age facilities under dynamic wastewater flow condi-
tions.  Only a few models exist, however,  which also
compute the water quality of the urban runoff and route
the pollutants through the sewerage networks.  Some
models include options for dimensioning sewer pipes
and two of them use mathematical optimization schemes
for least-cost design of new sewerage system compon-
ents.  Three models have provisions for the real-time
control of overflows during rainstorms.

Table 1 lists the principal features of the models.
Detailed model descriptions and comparison tables,
results of numerical testing for 7 models, and
recommendations for future model improvements  are
contained in the project report submitted to the U.S.
Environmental Protection Agency.1

A brief review of these models indicates a tremendous
diversity in scope and purpose, mathematical detail,
system elements and hydrologic phenomena being
modeled, size of the system that can be handled, data
input requirements, and computer output.  This
diversity, of course, is a result of the varying
conditions and objectives which govern the design and
evaluation of individual sewerage systems, limitations
in the available computer hardware, and progress in
the state-of-the-art of modeling specific phenomena.
For some applications, models are available with con-
siderable simplifications in their mathematical detail
to reduce input data requirements, computer storage
requirements, and computer running time.  Some models
include unnecessary approximations considering the
present state-of-the-art of hydrologic modeling and
computer capability.  Some of the simplifications,
however, are needed for applications to real-time
control of overflows which require repeated simulations
within fixed time constraints on a small process
computer.

Usually the simplest model which simulates the desired
phenomena with adequately accurate mathematical formu-
lations should be selected.  Input data requirements
and computer running times generally decrease with
decreasing complexity of the model.  Some models in-
clude options to suppress portions of the simulation
if only selected phenomena are of interest.  Although
this feature is not listed, it should be considered in
the model selection.  Some proprietary models have
features which appear superior to the publicly avail-
able models, but a user may prefer to run his own
model that does not exactly meet his requirements.

The simulation of water quality adds considerable
complexity to a model, even if it routes only conser-
vative substances.  The complexity increases substan-
tially if both storm and dry-weather water quality is
computed from land use characteristics.  Additional
                                                       549

-------
complexities are added if wastewater treatment and
receiving water flow and quality are being modeled.

The testing and review of the models indicated also
that the routing of flow and conservative pollutants,
although complex for looping and converging and
diverging branch systems with special structures, are
the best understood phenomena.  The selection of par-
ticular mathematical formulations and numerical solu-
tion techniques is governed only by the preference
and needs of the model developer and user.  Research
is required, however, to provide a better understand-
ing of sedimentation and scour, and of reactions and
interactions between various pollutants in the sewers.

Considerable uncertainties exist in the modeling of
catchment phenomena, both the flow and water quality
of storm and dry-weather runoff.  The definition of
adequate formulations for soil infiltration, the fill-
ing of depression storage, evapotranspiration, ground-
water seepage, and soil moisture is extremely diffi-
cult considering the heterogeneity of catchment land
uses, geometry, vegetation, and soils.   The adequacy
of catchment water quality computations from catchment
land use and runoff has not been sufficiently demon-
strated.  Although various models have shown good
agreement between measured and computed catchment
runoff water quality, the comparisons have been too
limited to assign confidence limits to predictions
for catchments without measurements.   The models are
still useful, however, for the evaluation of relative
merits of alternative wastewater management schemes.

In general, a direct relationship between model
complexity and its cost of implementation and appli-
cation exists with respect to the number of major
phenomena which are modeled.  Efficient solution
algorithms, however, may reduce this difference
significantly.  This is true particularly for propri-
etary models due to their need to stay competitive.
A model which simulates many special sewerage system
facilities will be more complex in structure and
require more data and computer storage than a model
that computes only runoff from a single catchment
without routing flows or which routes only flow in
a simple converging network without computing run-
off from precipitation and land use.

Model testing with hypothetical data showed that
computer running time of models simulating the same
phenomena is governed more by efficient formulations
of the overall model logic than by the basic equations
used for specific phenomena.  For instance, no consis-
tent pattern in computer running time was evident be-
tween the use of the kinematic and dynamic wave
equation.  Consequently, since the dynamic wave
equation can be solved to simulate downstream flow
control, backwater, flow reversal,  surcharging, and
pressure flow (none of which can be simulated by the
kinematic wave equation) the application of models
using the dynamic wave equation is recommended,
provided the selected model includes an efficient
numerical algorithm for its solution.
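
For reference, the relations being compared are the
standard one-dimensional unsteady open channel flow
equations, written here in conventional notation (the
individual models use various discretizations of
these):

    \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = q
        (continuity)

    \frac{\partial Q}{\partial t}
      + \frac{\partial}{\partial x}\left(\frac{Q^{2}}{A}\right)
      + g A \frac{\partial y}{\partial x}
      = g A (S_{0} - S_{f})
        (dynamic wave momentum)

    S_{0} = S_{f}  \Rightarrow  Q = \alpha A^{m}
        (kinematic wave approximation)

where A is the flow area, Q the discharge, q the lateral
inflow per unit length, y the depth, S0 the bed slope,
and Sf the friction slope (e.g., from Manning's equa-
tion).  Because the kinematic form retains only the
S0 = Sf balance, it cannot represent backwater, down-
stream control, flow reversal, or pressurized flow.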

Some models require only the input of typical sub-
catchment elements and perform hydrologic computations
only for these typical subcatchments, but then con-
sider the actual locations of all subcatchments for
the overland and sewer flow routing computations.
This can save considerable input preparation and
computer running time.

RECOMMENDATIONS

Various models stand out due to their completeness of
hydrologic and hydraulic formulations,  the ease of
input data preparation,  the  efficiency of  computa-
tional algorithms, and  the adequacy  of the program
output.  Other models,  although  deficient  in some of
these respects, merit consideration  due to special
features which are not  included  in the more compre-
hensive models but may  be required for specific
applications.

The following models are consequently  recommended  for
routine applications:

     1.   Battelle Urban Wastewater Management Model
         for real-time  control and/or  design optimiza-
         tion considering hydraulic, water  quality and
         cost constraints, provided  the hydrologic and
         hydraulic model assumptions are adequate for
         particular applications (lumping of many
         small subcatchments into few  large catchments,
         neglect of downstream flow  control, backwater,
         flow reversal,  surcharging,  and pressure
         flow).

     2.   Corps of Engineers STORM Model for preliminary
         planning of required storage and treatment
         capacity for storm runoff from single major
         catchments,  considering both  the quantity and
         quality of the  surface runoff and untreated
         overflows.

     3.   Dorsch  Consult  Hydrograph Volume Method for
         single-event flow analysis considering most
         important hydraulic phenomena (except flow
         reversal).  A Quantity-Quality Simulation
         Program for continuous wastewater flow and
         quality analysis is now available, but the
         model was completed too late for evaluation.

     4.   Environmental  Protection Agency Stormwater
         Management Model for single-event waste-
         water flow and quality analysis provided
         the hydraulic  limitations of  the model
         are acceptable (neglect of  downstream  flow
         control and flow reversal,  inadequate  back-
         water,  surcharging, and pressure  flow
         formulation).  A new version  patterned after
         the Corps of Engineers STORM  Model is  now
         available for  continuous simulation, but
         this version was completed  too late for
         evaluation.

     5.  Hydrocomp Simulation Program  for  single-
         event and continuous wastewater flow and
         quality analysis provided the hydraulic
         limitations of the model are  acceptable
         (approximate backwater and  downstream  flow
         control formulation, neglect  of flow reversal,
         surcharging, and pressure flow).

     6.  Massachusetts  Institute of  Technology  Urban
         Watershed Model for single-event  flow
          analysis provided the hydraulic limitations
          of the model are acceptable (neglect of
          backwater, downstream flow control, flow
          reversal, surcharging, and pressure flow) or
          a separate model is used for these phenomena.

     7.  Seattle Computer Augmented  Treatment and
         Disposal  System as  an example of  an operat-
         ing real-time  control system  to reduce un-
         treated overflows.  A more  comprehensive
         computer model simulating both wastewater
         flow and  quality and including mathematical
         optimization should be  considered,  however,
         for new  systems.
                                                      550

-------
     8.  SOGREAH Looped Sewer Model for single-event
         wastewater flow and quality analysis con-
         sidering all important hydraulic phenomena.

     9.  Water Resources Engineers Stormwater Manage-
         ment Model for single-event wastewater flow
         and quality analysis considering all
         important hydraulic phenomena.

The remaining reviewed models do not appear to provide
sufficient special features which are not included in
the models mentioned above.  Their use may be advan-
tageous, nevertheless, for certain applications where
model assumptions are adequate,  and especially where
assistance from the model developers is easily avail-
able.

SUMMARY

In general, the reviewed models  provide useful tools
to the engineer and planner for  assessing, designing,
planning and controlling storm and combined sewerage
systems.  It is extremely important, however, that
the potential model user study the formulations of the
models, their limitations and approximations, if he
is to use the models in an appropriate manner.  In
addition, discussions with both  the original model
developers and other model users can provide signifi-
cant information with respect to new model features
and use experience not found in  published reports.

ACKNOWLEDGMENTS

The work performed for this study was conducted under
contract No. 68-03-0251 of the U.S. Environmental
Protection Agency.  Work was completed in August
1975.

REFERENCES

 1.  Brandstetter, A.  Assessment of Mathematical
     Models for Storm and Combined Sewer Management.
     Battelle-Pacific Northwest  Laboratories, Richland,
     Washington, for U.S. Environmental Protection
     Agency (at press).

 2.  Pew, K. A., R. L. Gallery,  A. Brandstetter, and
     J. J. Anderson.  The Design and Operation of a
     Real-Time Data Acquisition  System and Combined
     Sewer Controls in the City  of Cleveland, Ohio.
     Journal of Water Pollution  Control Federation,
     Vol. 45, No. 11, pp. 2276-2289, November 1973.

 3.  Brandstetter, A., R. L. Engel, and D. B. Cearlock.
     A Mathematical Model for Optimum Design and
     Control of Metropolitan Wastewater Management
     Systems.  Water Resources Bulletin, 9(6):1188-
     1200, December 1973.

 4.  Terstriep, M. L., and J. B. Stall.  Urban Runoff
     by Road Research Laboratory Method.  Journal of
     the Hydraulics Division, American Society of
     Civil Engineers, 95(HY6):1809-1834, Proc. Paper
     6878, November 1969.  Discussions:  96(HY4):
     1100-1102, April 1970; 96(HY7):1625-1631, July
     1970; 96(HY9):1879-1880, September 1970.  Closure:
     97(HY4):574-579, April 1971.

 5.  Stall, J. B., and M. L. Terstriep.  Storm Sewer
     Design—An Evaluation of the RRL Method.  U.S.
     Environmental Protection Agency Report EPA-R2-
     068, October 1972.
 6.  Lanyon, R. F., and J. P. Jackson.  A Streamflow
     Model for Metropolitan Planning and Design.
     American Society of Civil Engineers, Urban Water
     Resources Research Program, Technical Memorandum
     No.  20, January 1974.

 7.  Lanyon, R. F., and J. P. Jackson.  Flow Simula-
     tion System.   Journal of the Hydraulics Division,
     American Society of Civil Engineers, 100(HY8):
     1089-1105, Proc. Paper 10743, August 1974.

 8.  Tholin, A. L., and C. J. Keifer.  The Hydrology
     of Urban Runoff.  Journal of the Sanitary
     Engineering Division, American Society of
     Civil Engineers, 85(SA2):47-106, Proc. Paper
     1984, March 1959.  Discussions:  85(HY8):119,
     August 1959;  85(SA5):37-51, September 1959.
     Closure:  86(SA2):112, March 1960.

 9.  Keifer, C. J., J. P. Harrison, and T. O. Hixson.
     Chicago Hydrograph Method, Network Analysis of
     Runoff Computations.  City of Chicago Department
     of Public Works, Preliminary Report, July 1970.

10.  City of Chicago.  Development of a Flood Control
     and Pollution Control Plan for the Chicagoland
     Area.  Bureau of Engineering, Department of
     Public Works,  City of Chicago, with Metropolitan
     Sanitary District of Greater Chicago and
     Illinois Institute for Environmental Quality,
     Computer Simulation Programs, Technical Report
     Part 2, December 1972.

11.  O'Neel, W. G., A. L. Davis, and K. W. Van Dusen.
     SAM-Wastewater Collection System Analysis Model -
     User's Manual.  CH2M-Hill, Corvallis, Oregon,
     to the City of Portland, Oregon, March 1974.

12.  Yevjevich, V., and A. H. Barnes.  Flood Routing
     through Storm Drains.  Colorado State University,
     Fort Collins,  Hydrology Papers 43,  44, 45,  and
     46,  November  1970.

13.  Smith, G. L.,  N. S. Grigg, L. S. Tucker,  and
     D. W. Hill.  Metropolitan Water Intelligence
     System.  Colorado State University,  Fort
     Collins, Department of Civil Engineering,
     Completion Report—Phase I, for Office of
     Water Resources Research, June 1972.

14.  Grigg, N. S.,  J. W. Labadie, G. L.  Smith, D.  W.
     Hill and B. W. Bradford.  Metropolitan Water
     Intelligence  Systems.  Colorado State University,
     Fort Collins,  Department of Civil Engineering,
     Completion Report—Phase II, for Office of
     Water Resources Research, June 1973.

15.  U.S. Corps of Engineers.  Urban Runoff:  Storage,
     Treatment and Overflow Model "STORM".  U.S.
     Army, Davis,  California, Hydrologic Engineering
     Center Computer Program 723-S8-L2520, May 1974.

16.  Roesner, L. A., H. M. Nichandros, R. P.
     Shubinski, A.  D. Feldman, J. W. Abbott, and
     A. 0. Friedland.  A Model for Evaluating
     Runoff-Quality in Metropolitan Master Planning.
     American Society of Civil Engineers Urban Water
     Resources Research Program, Technical Memorandum
     No.  23, April 1974.

17.  Mevius, F.  Analysis of Urban Sewer Systems by
     Hydrograph-Volume Method.  Paper Presented at
     the National  Conference on Urban Engineering
     Terrain Problems, Montreal, Canada, May 1973.
                                                       551

-------
18.  Geiger, F. W.  Urban Runoff Pollution Derived
     from Long-Time Simulation.  Paper Presented at
     the National Symposium on Urban Hydrology and
     Sediment Control, Lexington, Kentucky, July
     28-31, 1975.

19.  Metcalf & Eddy, Inc., University of Florida, and
     Water Resources Engineers, Inc.  Storm Water
     Management Model.  U.S. Environmental Protection
     Agency Reports No. 11024 DOC 07/71, 11024 DOC
     08/71, 11024 DOC 09/71, and 11024 DOC 10/71
     (4 Volumes), October 1971.

20.  Huber, W. C., J. P. Heaney, M. A. Medina, W. A.
     Peltz, H. Sheikh, and G. F. Smith.  Storm Water
     Management Model - User's Manual - Version 2.
     University of Florida, Gainesville, Department
     of Environmental Engineering Sciences, and U.S.
     Environmental Protection Agency Report No.
     EPA-670/2-75-017, March 1975.

21.  Hydrocomp International, Inc.  Hydrocomp
     Simulation Programming—Operations Manual.
     Palo Alto, California, February 1972.

22.  Hydrocomp International, Inc.  Hydrocomp
     Simulation Programming—Mathematical Model of
     Water Quality Indices in Rivers and Impoundments.
     Palo Alto, California, undated.

23.  Terstriep, M. L., and J. B. Stall.  The Illinois
     Urban Drainage Area Simulator.  Illinois State
     Water Survey, Bulletin 58, 1974.
24.  Schaake, J. C., Jr., G. LeClerc, and B. M. Harley.
     Evaluation and Control of Urban Runoff.  American
     Society of Civil Engineers Annual and National
     Environmental Engineering Meeting, Preprint
     2103, New York, N.Y., October/November 1973.

25.  Resource Analysis, Inc.   Analysis of Hypothetical
     Catchments and  Pipes  with the M.I.T. Catchment
     Model.  Resource Analysis, Inc.,  Cambridge,
     Massachusetts,  for Battelle-Pacific Northwest
     Laboratories, 2 Volumes,  October 1974.

26.  Minneapolis-St.  Paul  Sanitary District.   Dis-
     patching System for Control of Combined  Sewer
     Losses.  U.S. Environmental Protection Agency
     Report 11020 FAQ 03/71,  March 1971.

27.  Lindholm, O. G.  Factors Affecting the Perform-
     ance of Combined Versus  Separate Sewer Systems.
     Pergamon Press  Ltd.,  Proceedings of 7th
     International Conference on Water Pollution
     Research, Paris, France,  September 9-13, 1974.

28.  Watt, W. E.  QUURM -  Queen's University  Urban
     Runoff Model.  Queen's University at Kingston,
     Canada, Department of Civil Engineering, Un-
     published Manuscript, May 1975.

29.  Leiser, C. P.  Computer  Management of a  Combined
     Sewer System.  Municipality of Metropolitan
     Seattle for U.S. Environmental Protection
     Agency Report No.  EPA-670/2-74-022, July 1974.

30.  SOGREAH.  Mathematical Flow Simulation Model
     for Urban Sewerage Systems, Caredas Program.
     Societe Grenobloise d'Etudes et d'Applications
     Hydrauliques, Grenoble,  France,  Partial  Draft
     Report, April 1973.  Translated from French
     by David Vetrano,  December 1973.
31.  Papadakis, C., and H. C. Preul.  University of
     Cincinnati Urban Runoff Model.  Journal of the
     Hydraulics Division, American Society of Civil
     Engineers, 98(HY10):1789-1804, Proc. Paper
     9298, October 1972.  Discussion:  99(HY7):
     1194-1196, July 1973.  Closure:  100(HY4):608-
     611, April 1974.

32.  Chow, V. T., and A. Ben-Zvi.  The Illinois
     Hydrodynamic Watershed Model III (IHW Model III).
     University of Illinois at Urbana-Champaign,
     Department of Civil Engineering, Hydraulic
     Engineering Series No. 26, September 1973.

33.  Sevuk, A. S., B. C. Yen, and G. E. Peterson.
     Illinois Storm Sewer System Simulation Model:
     User's Manual.  University of Illinois,
     Urbana-Champaign, Water Resources Center,
     Research Report No. 73, October 1973.

34.  Ray, D. L.  Simulation of Control Alternatives
     for Combined Sewer Overflows.  University of
     Massachusetts, Amherst, Department of Civil
     Engineering, Report No. EVE 33-73-4, April 1973.

35.  Surkan, A. J., and P. Kelton.  Binary Tree
     Model Simulation of the Behavior of Urban
     Hydrologic Systems.  International Journal of
     Systems Science Preprint, 1974.

36.  Shubinski, R. P., and L. A. Roesner.  Linked
     Process Routing Models.  Paper Presented at
     American Geophysical Union Annual Spring
     Meeting, Washington, D.C., April 1973.

37.  Amorocho, J.  The WH-1 Urban Watershed System for
     Hydrologic Simulation and Sewer Design.  Wilsey &
     Ham, Consulting Engineers, Foster City, California,
     Unpublished Manuscript, 1972.
                                                      552

-------
                             USE OF MATHEMATICAL MODELS FOR HYDROLOGIC FORECASTING

                                        IN THE NATIONAL WEATHER SERVICE
                                             John C. Schaake, Jr.
                                              Assistant Director
                                        Hydrologic Research Laboratory
                                           National Weather Service
                                National Oceanic and Atmospheric Administration
                                            Silver Spring, Maryland
ABSTRACT

The National Weather Service is implementing a new
system of mathematical models to aid river fore-
casters throughout the United States.  Forecasts
of stages and discharges a few days ahead are pro-
duced routinely on a daily basis and at six-hour
intervals during floods.  Also, extended streamflow
predictions of high, low, and expected discharges for
periods up to several months into the future are made
at routine intervals.

This system of models, known as the "National Weather
Service River Forecast System" (NWSRFS), was
initiated in 1971⁹ and is now being improved and
expanded.  It includes conceptual hydrologic models
of snow, soil moisture, and streamflow routing; it
includes models of unsteady open channel flow; it
has provisions for reservoir operations models; and
it will include stochastic hydrometeorologic models
to account for uncertainty in streamflow forecasts.
NWSRFS also includes programs and procedures for
model calibration and verification with the histor-
ical data.  Studies of the validity and accuracy of
the models are reviewed, and some modeling issues
in need of further study are summarized.

Information generated by these models could con-
tribute to EPA's overall environmental mission.
Hydrologic information is readily available in NWS
forecast data files for use with convection and
dispersion models to forecast the fate of pollutants
suddenly released to the hydrologic environment or to
forecast the day to day variations in pollutant
transport properties of selected streams.  Currently
under development is a water temperature forecast
model utilizing hydrological and meteorological data
readily available in real time in NWS data files.

Problems faced by NWS managers in understanding and
utilizing NWSRFS are discussed.  NWSRFS is being
installed on an IBM 360/195* in Suitland, Md., and
is being operated from remote terminals by field
offices.  NWSRFS is developed and supported by the
Hydrologic Research Laboratory, Hydrologic Services
Division, and the field offices.

HISTORY OF MODEL USE IN NWS

For many years, river forecasts in the U.S.  have been
made using an Antecedent Precipitation Index (API)
type of rainfall-runoff relation to convert rainfall
into rainfall excess or runoff.⁷  Unit hydrographs
or time delay histograms have been widely used to
translate runoff through catchments to forecast

"Trade names are mentioned solely for purposes of
 identification.   No endorsement  by the NWS, NOAA,
 or Department of Commerce, either implicitly or
 explicitly, is implied.
points.  These techniques historically have worked
well and are still in use.

In 1966 a project was initiated in NWS to evaluate
newly developed hydrologic models.   Models were com-
pared for a group of seven carefully selected basins
throughout the country.  No single numerical scoring
factor seemed adequately to represent model accuracy
because important differences between models seemed
to be evident only in one or two aspects of the
simulation or only in certain hydrologic situations.
Several statistical measures based on observed and
simulated discharge were used to evaluate model
performance.  Two models showed an accuracy advantage
over API.  One was essentially the same as the

Stanford Watershed Model IV; the other was the
initial version of the Sacramento River Forecast
Center Hydrologic Model.²

The most notable accuracy advantage of these con-
ceptual models over the API model is during and after
a long dry spell.  The more complete moisture
accounting techniques give the conceptual models
enough "memory" to handle situations where large
amounts of rain give little or no streamflow response.

In 1971 a modified version of the Stanford IV model
was incorporated with other data processing programs
into the NWSRFS.    A snow accumulation and ablation
model was added to NWSRFS in 1973.   This snow model
accounts in detail for the energy balance of the
snow cover by using air temperature to estimate
energy exchanges.

The Hydrologic Research Laboratory in 1974 compared
an improved Sacramento Model with the NWSRFS Stanford
Watershed Model IV.  Data from four catchments were
used to test model performance.  This was part of a
WMO project on intercomparison of conceptual models.
In general we concluded:  (1) there is no significant
difference in model performance in  very humid areas;
(2) there seems to be little difference in ability to
simulate large flood events; (3) the Sacramento Model
does simulate monthly volumes and small runoff events
significantly better in semi-arid and moderately humid
areas; and (4) improvements through research seemed
easier to make to the Sacramento Model because of its
modular structure.  Following these model tests,
components of the soil moisture accounting of the
Sacramento Model replaced the original Stanford IV
components in NWSRFS.

Summary of NWSRFS

NWSRFS includes techniques and programs for developing
operational river forecasts from initial processing of
historical data during procedure development to the
preparation of forecasts in real time.  The programs
are generalized for use on any river system including
headwater catchments and downstream river networks.
                                                      553

-------
Programs and example data sets for the initial
version are available to the public through the
National Technical Information Service (NTIS).
Information to purchase these from NTIS can be ob-
tained from the Hydrologic Research Laboratory (W23),
National Weather Service, Silver Spring, Maryland
20910.

The following techniques and models are included in
NWSRFS :

  . Mathematical model of the accumulation and
    ablation of Snow [Anderson, 1973]

  . A catchment model including both (a) a soil
    moisture model to account for flow through and
    above the soil mantle and for evapotranspiration
    and (b) time delay models to move runoff from the
    soil moisture model through the catchment to the
    catchment outlet

  . Channel routing models to account for movement of
    water in a channel system

  . Techniques for modeling the areal distribution of
    precipitation

  . Techniques for estimating mean areal temperature

  . Methods to estimate model parameters using
    historical hydrometeorological data

CRITERIA FOR MODEL SELECTION

Some of the criteria we used for model selection  are:

  .  Input Data Sampling Interval - Operational rain-
    fall data are available from a 6-hour reporting
    network and a 24-hour reporting network.  With
    this 6-hour reporting interval there  is  a lower
    limit to the size of catchment that can
    adequately be modeled.

  .  Computational Efficiency - Models  are operated
    for most of the country.   Each day, computations
    are made for the next few days using 6-hour time
    steps.  During flood periods, computations are
    repeated every 6 hours.

  .  Data Availability - Historical hydrometeorological
    data are available  in digital form for model
    calibration (i.e.,  model parameter estimation).
    Four types of data  are  available:   (a) hourly
    precipitation data  from the National  Climate
    Center (NCC), Asheville,  North Carolina  (card deck
    488); (b) daily observation data (NCC card deck
    486); (c) synoptic meteorological data for esti-
    mating potential evaporation (NCC card decks 144,
    345, and 480); (d) USGS daily streamflow data.
    All of these data for the period of digital record
    are available to NWSRFS users from a tape library
    of about 500 tapes  at the NOAA computer  center in
    Suitland, Maryland.   Each of the tapes except
    streamflow is in a  special format  (0/H format)
    developed for the NWS Office of Hydrology (copies
    of tapes in this format  are available to the
    public from NCC).  Another main source of data is
    USGS topographic maps.  (We generally use
    1:250,000 scale maps.)

  .  Validity - Within constraints imposed by
    computational efficiency and data availability,
    models should have a physical basis for their
    structure  and  should simulate observed behavior
    reasonably well.  Although models are usually
    compared by looking at differences between models,
    it  is of interest to notice many models have some
     elements of common structure.  This occurs because
     (a) water is held in storage as it flows through
     the hydrologic cycle and (b) rates of flow depend
     upon amounts of water in storage and possible
     other factors such as temperature, humidity, etc.
     Flow into and out of storage is governed by
     (a) a continuity relation and (b) a dynamic
     relation.  Models differ in terms of spatial and
     temporal resolution of these relations and in
     terms of the factors accounted for in the dynamic
     relations.

   .  Building Block Structure - Models of individual
     processes (precipitation, evaporation, snow cover,
     soil moisture, channel routing, etc.) have been
     organized as building blocks.  This offers flex-
     ibility to  represent particular situations with
     varying degrees of physical detail, and it makes
     it  possible for research on one phase of the
     hydrologic  cycle to be evaluated in an environment
     that considers other phases.

          Benefits Gained from these Criteria

 Some of the benefits that accrue from these criteria,
 particularly the requirement for a strong physical
 base, are:

   .  Enhanced likelihood of adequately predicting
     future  events especially during unexperienced
     hydrologic  situations

   .  Potential to derive initial parameter values  from
     streamflow  records  and from observable basin
     characteristics

   .  Parameters  related  to basin characteristics  may
     possibly be adjusted without waiting for a new
     data base if basin  characteristics change.

   .  Conceptual  hydrologic models offer potential  for
     application other than for  forecasting river  stage
     and discharge such  as movement  of pollutants
     through the environment, water  temperature pre-
     diction, and prediction  of  soil moisture levels
     for agricultural purposes.

                   MODEL APPLICATIONS

 Operational River Forecast Preparation

 Daily river forecasts are prepared in 12 River Forecast
 Centers (RFC's) throughout the U.S.  These RFC's
 transmit forecast information to Weather Service
 forecast offices (WSFO's) for dissemination to the
 public.  The WSFO's gather precipitation and other
 data and transmit these to the  RFC's.

 There currently are about 6700 precipitation gages  in
 our  operational network.  River stage data are
 gathered at least daily at 3100 locations.  These data
 are  used to prepare forecasts of river stage (and
 possibly discharge) at 2500 locations.  Conceptual
 hydrologic models are now used at less than 10 percent
 of these forecast points.

 Although the actual forecasts are made by profes-
 sionals, not by computers, the computer is an essential
 tool in generating forecast information.  A new
 operational forecast computer program currently is
 being developed under contract.  This will be a disk-
 oriented system incorporating all of the NWSRFS
 hydrologic models and will be used from remote
 terminals by our RFC's.  It will reside at the NOAA
 computer center in Suitland, Md.  NOAA has 3 IBM
 360/195 computers and these are used by NWS's National
 Meteorological  Center (NMC) to operate its atmospheric
                                                      554

-------
simulation and forecast models and by the National
Environmental Satellite Service (NESS) to operate two
Geostationary Orbit Environmental Satellites (GOES).
Additional current hydrologic and meteorological data
from NWS and NESS operations are available or
potentially available in various data files to our
RFC's through this new operational forecast program.

The general configuration of our new operational
program appears in Figure 1.  Forecasters enter data
as they become available from cards into time series
files through a time series input routine.  When a
forecast is to be made, a preprocessing routine checks
available data, estimates missing values, converts
stages to discharges and computes mean areal precip-
itation, temperature, and potential evaporation.
Then, the forecast routine reads the new mean areal
time series data, the carry-over files from the
previous forecast, and the model parameter data file.
The forecast routine produces river forecasts and
updates the carry-over files.
     Figure  1.   General  Configuration  of the NWS
                 Operational Forecast Program

 When new  forecast points are added, model parameter
 values must be  entered  in the parameter data  files
 and initial state variables must be entered in the
 carry-over  files.  The  main problem,  however, is to
 estimate  the model parameters by analysis of
 historical  data.

 Parameter Estimation

 To reduce the manpower costs of extending NWSRFS to
 the entire U.S. it would be nice to completely
 automate  the parameter  estimation process.  However,
 it seems  essential in mathematical optimization of
 parameters to start with good initial values  and to
constrain the domain of variation to avoid unrealistic
estimates.  This means some method other than
automatic optimization is needed to analyze available
information to find good initial values.

Our present approach is first to analyze historical
precipitation and streamflow data to make initial
estimates.    These are then used to simulate the
system and results are analyzed to find possible
 adjustments.  Finally, a pattern search⁸ automatic
optimization is used to "tune" the parameter
estimates.
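
 The "tuning" step can be pictured with the following
 sketch of a simple direct (pattern) search; the routine
 and its arguments are illustrative only and are not the
 NWSRFS optimization program.

    # Sketch (Python) of direct-search parameter tuning.  "simulate"
    # stands for any rainfall-runoff model returning a discharge series;
    # it is a placeholder, not NWSRFS code.

    def sum_sq_error(simulated, observed):
        return sum((s - o) ** 2 for s, o in zip(simulated, observed))

    def pattern_search(simulate, observed, params, bounds,
                       step=0.1, shrink=0.5, tol=1e-3):
        """Coordinate-wise direct search within bounds; returns tuned params."""
        params = list(params)
        best = sum_sq_error(simulate(params), observed)
        while step > tol:
            improved = False
            for i in range(len(params)):
                for delta in (+step, -step):
                    trial = list(params)
                    move = delta * (bounds[i][1] - bounds[i][0])
                    trial[i] = min(max(trial[i] + move, bounds[i][0]),
                                   bounds[i][1])
                    err = sum_sq_error(simulate(trial), observed)
                    if err < best:
                        best, params, improved = err, trial, True
            if not improved:
                step *= shrink      # no gain at this step size; refine it
        return params

    # Toy example: fit a two-parameter linear "model" to synthetic data.
    observed = [0.7 * x + 2.0 for x in range(10)]
    toy_model = lambda p: [p[0] * x + p[1] for x in range(10)]
    print(pattern_search(toy_model, observed, params=[0.1, 0.0],
                         bounds=[(0.0, 2.0), (0.0, 5.0)]))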

The most difficult part of our estimation procedure
is to know how to make manual adjustments.   Not
only must one understand physically the dynamics of
the natural process, but one must also understand
mathematically the dynamics of the model of the
process.  There seems to be extremely strong ten-
dencies for most professionals to rely only on their
understanding of the physical process.   We  tend to
assume how parameters should change rather  than deduce
this from our knowledge of the mathematics.

Historical Data Processing

Before parameter estimates can be made, historical
data must be organized.  We begin with a library of
 about 500 data tapes containing 4 different types of
hydrometeorological data.   We hope to add SCS snow
course data in the near future to aid parameter
estimation for our snow model.  We also hope to add
some USGS bi-hourly stage or discharge data.   Data
tapes are immediately available to our RFC's  and we
have programs to inventory individual tapes.   We also
have programs to strip selected time series  and enter
these into permanently mounted disk data files for
future analysis.  These disk files are part  of our
NWSRFS data file system.  All of our data analysis and
parameter estimation programs read and write  time
series using these files.

The initial version of NWSRFS was tape-oriented.  All
time series data, both measured and computed, were
processed with magnetic tapes.  This was extremely
cumbersome because many intermediate tapes were
required in preparation for model calibration.  The
direct access disk files in our current version
greatly simplified our data handling problems.

Figure 2 illustrates the data processing options
available to our RFC's to estimate parameters in our
models.  The inventory programs and preliminary
processing programs strip data from tape to  disk.
The program MAP is used to convert raw precipitation
data at hourly and daily stations, into 6-hour mean
areal values.  Consistency checks are made  via double
mass plots of one station vs. any combination of other
stations.  Adjustments can be made in inconsistent
data and missing data are estimated.  Programs MAT
and MAPE perform similar functions to produce mean
areal temperature and potential evaporation data.  Our
manual calibration program, MCP, is used to  simulate
historical events using given parameter estimates.
Our automatic optimization program uses direct search
to find better parameter estimates.
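
The two MAP operations described above can be pictured
with the following sketch (hypothetical helper func-
tions, not the NWSRFS code):  a weighted combination of
point gage records into a mean areal series, and a
double-mass accumulation of one station against the
average of the others, whose plot should be a straight
line when the station record is consistent.

    # Sketch (Python) of mean areal precipitation and a double-mass check.

    def mean_areal_precip(station_series, weights):
        """station_series: equal-length 6-hour precipitation lists;
        weights: areal weights (e.g., Thiessen) summing to 1."""
        n = len(station_series[0])
        return [sum(w * s[t] for w, s in zip(weights, station_series))
                for t in range(n)]

    def double_mass(station, others):
        """Cumulative sums (mean of others vs. station) for plotting;
        a break in slope suggests an inconsistent station record."""
        cum_s, cum_o, pairs = 0.0, 0.0, []
        for t, value in enumerate(station):
            cum_s += value
            cum_o += sum(o[t] for o in others) / len(others)
            pairs.append((cum_o, cum_s))
        return pairs

    # Example with three hypothetical gage records and assumed weights:
    gages = [[0.0, 2.0, 5.0, 1.0], [0.0, 3.0, 4.0, 0.0], [1.0, 2.0, 6.0, 2.0]]
    print(mean_areal_precip(gages, weights=[0.5, 0.3, 0.2]))
    print(double_mass(gages[0], gages[1:]))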

Forecast Updating

Updating is needed in river forecasting because
computed river stages up to the present time do not
agree exactly with observed stages.  Differences are
due to errors in estimation of mean areal precipitation
(our average precipitation gage density is  only one
gage per 450 square miles) and to modeling errors.
In general improved forecasts can be made if
                                                       555

-------
     Figure 2.
Data Processing for Parameter
  Estimation in NWSRFS
differences between observed and computed stages  are
used to adjust forecast stages.

This can be done in several ways.  One is to "blend"
computed and observed stages directly by adding a
proportion of the latest difference to the forecast.
This proportion would decrease to zero into the future
and the computed forecast would eventually prevail.
A physically more attractive approach would be  to
adjust precipitation input data or unit hydrograph
ordinates until observed and computed values agree
within acceptable limits.  Such adjustment procedures
are now being studied by our Hydrologic Research
Laboratory.
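
The first (blending) option can be written compactly as
below; the linear taper is an assumption made here for
illustration and is not necessarily the weighting used
operationally.

    # Sketch (Python) of blending observed and computed stages:  add a
    # proportion of the latest observed-minus-computed difference to the
    # forecast, with the proportion decreasing to zero over n_taper periods.

    def blend_forecast(computed_future, latest_observed, latest_computed,
                       n_taper):
        diff = latest_observed - latest_computed
        adjusted = []
        for k, stage in enumerate(computed_future, start=1):
            weight = max(0.0, 1.0 - k / float(n_taper))   # 1 -> 0 over n_taper
            adjusted.append(stage + weight * diff)
        return adjusted

    # Example:  three future 6-hour stages, latest difference of 0.5 ft.
    print(blend_forecast([10.2, 10.8, 11.1], latest_observed=10.6,
                         latest_computed=10.1, n_taper=4))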

Mathematically, this updating problem arises whenever
observations can be made of computed state variables.
For example, we can observe snow water equivalent,
extent of snow cover, soil moisture content, and
ground water levels.  Each is related in some way to
model state variables.  Unfortunately there is  no
general and practical way to use these additional
data as input to conventional deterministic models.
Perhaps a theoretical or conceptual framework can be
derived from the Kalman filter in estimation theory.
But this remains a difficult area of hydrologic
research not only in river forecasting but wherever
measurements of some output state variables are to be
used to improve the estimates of other state
variables.

              POTENTIAL INTEREST TO EPA

Water is an important vehicle for transporting
pollutants  from point and non-point sources in  the
environment.  Information on the current and forecast
states of motion of water throughout the United States
is continuously available in NWS data files.
Streamflow Routing

Potentially the streamflow routing models  in  NWSRFS
could be of particular interest to EPA.  We use
several types of routing models ranging from  unit
hydrographs and time delay histograms to dynamic
routing models based on the St. Venant partial dif-
ferential equations for unsteady flow in open
channels.

Unit hydrographs and time delay histograms are used
currently to route runoff in headwater basins and
local inflows to a downstream forecast point.  Most
widely used to route flow in streams and rivers is a
"variable lag and K" method of accounting  for the
attenuation and delay of flood waves moving down-
stream.  We currently are investigating possible use
of Kinematic Wave and Diffusion Wave models in
addition to these other models.
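
A constant lag-and-K routing step can be sketched as
follows (the operational method uses a variable lag and
K; the storage relation S = K*O and the finite-differ-
ence continuity equation below are standard, but the
function itself is only illustrative).

    # Sketch (Python) of simple lag-and-K channel routing:  the inflow is
    # first lagged, then attenuated through linear storage S = K * O using
    # continuity over a time step dt:
    #   (I_mean - (O_old + O_new)/2) * dt = K * (O_new - O_old)

    def lag_and_k_route(inflow, lag_steps, K, dt, initial_outflow=0.0):
        lagged = [inflow[0]] * lag_steps + list(inflow)   # delay by the lag
        out, o_prev = [], initial_outflow
        for i in range(1, len(lagged)):
            i_mean = 0.5 * (lagged[i - 1] + lagged[i])
            o_new = (i_mean * dt + o_prev * (K - 0.5 * dt)) / (K + 0.5 * dt)
            out.append(o_new)
            o_prev = o_new
        return out

    # Example:  route a small flood wave with a 2-interval lag and K = 3 dt.
    inflow = [10, 10, 40, 120, 80, 40, 20, 10, 10]
    print([round(q, 1) for q in lag_and_k_route(inflow, lag_steps=2,
                                                K=3.0, dt=1.0)])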

We have spent the last few years developing a dynamic
routing model that would be computationally efficient
and sufficiently accurate for operational
forecasting.⁴,⁵  We have a project underway to apply
this model to the Mississippi and Ohio Rivers,
including their junction.

Pollutant Transport Models

The potential exists for NWS or EPA to operate con-
vection, dispersion, or other water quality models in
conjunction with NWS models for such purposes as to
forecast the fate of pollutants suddenly released
into the environment, to aid in estimating the
quantities of pollutants present (as opposed to
concentrations), to forecast the day to day pollutant
transport properties of selected streams, or to
forecast quality changes in reservoirs.

                   ACKNOWLEDGMENTS

Many thoughtful suggestions on the organization and
detailed presentation of these ideas from Dr.
Eugene L. Peck, Hydrologic Research Laboratory
Director, are gratefully appreciated.

                     REFERENCES

 1. Anderson, E.A., "National Weather Service River
    Forecast System - Snow Accumulation and Ablation
    Model," NOAA TM NWS HYDRO-17, 1973

 2. Burnash, R.J.C., R.L. Ferral and R.A.  McGuire,
    "A Generalized Streamflow Simulation System:
    Conceptual Modeling for Digital Computers," U.S.
    Dept. of Commerce, National Weather Service and
    State of California Department of Water Resources,
    Sacramento, California, 1973

 3. Crawford, N.H. and R.K. Linsley, "Digital
    Simulation in Hydrology:  Stanford Watershed
    Model IV," Dept. of Civil Engrg Tech Rept 39,
    Stanford Univ., 1966

 4. Fread, D.L., "Technique for Implicit Dynamic
    Routing in Rivers with Tributaries," Water
    Resources Research, 9(4), 1973, pp 918-926

 5. Fread, D.L., "Numerical Properties of  Implicit
    Four-point Finite Difference Equations of Unsteady
    Flow," NOAA TM NWS HYDRO-18,

 6. Linsley, R.K., M.A. Kohler and J.L.H.  Paulhus,
    Applied Hydrology, McGraw-Hill,
                                                      556

-------
 7.  Linsley,  R.K.,  M.A.  Kohler  and J.L.H.  Paulhus,
    Hydrology for Engineers,  McGraw-Hill,  1975

 8.  Monro,  J.C.,  "Direct Search Optimization  in
    Mathematical Modeling and a Watershed Model
    Application," NOAA TM NWS HYDRO-12, 1971

 9.  Monro,  J.C. and E.A.  Anderson, "National  Weather
    Service River Forecasting System," Journal of the
    Hydraulics Division, ASCE, Vol. 100, No. HY5,
    May 1974

10.  Peck,  E.L., "Catchment Modeling and  Initial
    Parameter Estimation for  the National  Weather
    Service River Forecast System," NOAA Technical
    Memorandum in Press, 1976

11.  Staff, Hydrologic Research Laboratory, "National
    Weather Service River Forecast Procedures,"
    NOAA TM NWS HYDRO-14, 1972
                                                      557

-------
                              TESTING OF THE STORM WATER MANAGEMENT MODEL OF US EPA
                                                  Jiri Marsalek
                                               Research Scientist
                                          Hydraulics Research Division
                                         Canada Centre for Inland Waters
                                           Burlington, Ontario, Canada
                       SUMMARY

The results of testing the Storm Water Management
Model (SWMM) on a number of urban test catchments are
presented.  The runoff quantity subroutine was tested
and good results were obtained on eight catchments.
The SWMM runoff quality subroutine was tested on
three catchments only.  The lack of data allowed only
a qualitative discussion of the quality results ob-
tained.

                     INTRODUCTION

Rapid advances in urban hydrology led to the develop-
ment of a large number of urban runoff models in rec-
ent years, but only in the last three years have sever-
al comparative studies of various urban runoff models
been undertaken to assist model users in model selec-
tion.  Among these studies, the most notable were
those sponsored by the Environmental Protection
Agency1 and the Canadian Department of the Environment.
As a result of these studies, the Canadian Urban
Drainage  Subcommittee decided to adopt the SWMM model
of US EPA for further study, modification and applica-
tion in urban runoff studies in Ontario.  Some of the
questions raised during this process were those of
reliability of the SWMM model, the conditions under
which the model could fail, and the accuracy of the
SWMM simulations.  All these questions are of utmost
importance in planning and design of urban drainage
systems.

When the  SWMM model was developed, very little urban
runoff data was available for model testing and veri-
fication.  Consequently, only a limited testing of the
model was carried out on four catchments and the
limited data available allowed only a qualitative
evaluation of the SWMM simulations1,3.  Since then,
several more extensive studies have been carried out
on urban test catchments and the results were re-
ported by Heeps and Mein, Jewell et al., Marsalek
et al., Preul and Papadakis9, and Shubinski and
Roesner.  In all these cases, the number of test
catchments was limited.

In this paper, the results of the SWMM model testing
on a number of new test catchments are reported and
a correlation between the accuracy of field observa-
tions and the accuracy of model simulations is demon-
strated for runoff quantity.

         METHODOLOGY FOR TESTING RUNOFF MODELS

When testing conceptual runoff models, the model
tested is used to simulate the observed phenomena
and the goodness of fit of the simulations to the
observations is then evaluated.   A set of criteria
for evaluating the goodness of fit has to be devised
and applied.
Modelling Errors

There is a number of sources of error causing  the
differences between the observations and simulations.
These error sources include the following:
     1.  Bias in the simulated output (i.e. flows and
their quality) because of incomplete or biased model
structure.
     2.  Bias in the simulated output because of random
or systematic errors in the input data (e.g. precipita-
tion, catchment characteristics).
     3.  Random and systematic errors in the observed
output (flows and their quality) used for comparisons
with the simulated output.
     4.  Bias in the simulated output because of an
incorrect application of the model (e.g. poor catch-
ment discretization, selection of time steps, etc.).
     5.  Errors in the simulated output caused by an
erroneous model calibration.

When testing conceptual models and their accuracy, it
becomes extremely difficult to separate the effects
of individual sources of error and to determine their
contribution to the overall error.  The last two
errors, i.e. those caused by incorrect model applica-
tion and calibration, can be significantly reduced and
are eliminated here from further consideration.  The
errors due to uncertain input and output data  (observa-
tions) are grouped here together and their effect on
the accuracy of model simulations will be studied by
statistical methods.

Selection of goodness of fit criteria

     Runoff quantity.  Numerous criteria of goodness
of fit have been proposed for runoff models.   For a
review of some of these criteria, a reference  is made
to Fleming's work2.  Fleming concluded that no re-
search has been undertaken to compare the various
criteria available, and therefore, one cannot define
the best criteria for hydrologic modelling.  He also
suggested that the criteria should evaluate the
following three parameters of a runoff hydrograph:
the total runoff volume, the peak flow and the time to
peak.  Consequently, the following three rather simple
criteria were selected for use in this study:
a) Runoff volumes - the ratio of the volume observed to the
volume simulated
b) Runoff peaks - the ratio of the peak observed to the peak
simulated
c) The time to peak - the ratio of the time-to-peak
observed to the time-to-peak simulated.

     Runoff quality.  The assessment of runoff quality
simulations is even less developed than that  of
quantity simulations.  From  the  runoff management
point of view, the criteria can be defined for each
constituent similarly to those used for the quantity,
i.e. describing the constituent pollutograph by the
following three parameters:
                                                       558

-------
a) The total constituent emission
b) The peak constituent concentration
c) The time to peak concentration.

These goodness of fit criteria for runoff quantity
and quality were then used on the test catchments
studied.

                URBAN TEST CATCHMENTS

Description of Data Collection Projects

The Urban Drainage Subcommittee has obtained urban
runoff data from a number of test catchments.  These
catchments and their basic characteristics  are
listed in Table 1.
Catchment      Location           Sewer      Phenomena monitored           Area size   Refer-
name                              system     Precip.   Runoff   Quality    (acres)     ence

Bannatyne      Winnipeg, Man.     Combined   X(a)      X(a)     X(a)       542         14
Brucewood      Toronto, Ont.      Separate   X         X        X          48          14
Calvin Park    Kingston, Ont.     Separate   X         X                   89          10
East York      Toronto, Ont.      Separate   X         X        X          40          16
Halifax        Nova Scotia        Combined   X         X                   168         15
Hamilton       Ontario            Combined   X(b)      X(b)     X(b)       176         3
Malvern        Burlington, Ont.   Separate   X         X        X          58          7
Toronto-West   Ontario            Combined   X(a)      X(a)                2330        14
Toronto-East   Ontario            Combined   X(b)      X(b)     X(b)       338         8

(a) limited number of events
(b) projects started recently, no data available as yet

Table 1. Urban Test Catchments.
The test catchments cover a wide range of catchment
sizes (40 acres to 2300 acres) as well as of resid-
ential developments.  Brucewood, Calvin Park and
Malvern represent modern residential areas served
by separate sewers.  Bannatyne, Halifax, Toronto-
West and Toronto-East are older residential areas
served by combined sewers.  East York is an older
area on which the sewers were separated only re-
cently.  The storm sewers receive runoff mostly
from roads and side-walks.  The roof drains are
connected to the old combined sewer.

On all the areas, precipitation and runoff were
monitored.  Quality data were collected with
varying degrees of success on all the areas except
for Calvin Park and Toronto-West.

Not all of the projects are at the same stage.  The
Brucewood and Bannatyne projects have been dis-
continued.  The remaining data collection projects
are continuing to various extents, although the
data collected in East York have not yet been fully
analyzed, and the Hamilton and Toronto-East projects
which started only recently have as yet no signifi-
cant data.

Some results from a previous study10 with the SWMM
model on two additional urban catchments (Oakdale,
Chicago and Gray Haven, Baltimore) were also in-
cluded.  Thus for runoff quantity simulations, the
data for the following eight areas were available
for the testing of the SWMM model: Bannatyne,
Brucewood, Calvin Park, Halifax, Malvern, Oakdale,
Gray Haven and Toronto-West.

The runoff quality data are much less plentiful.
In fact, only limited data and quality simulations
were available for the Bannatyne, Brucewood and
Malvern catchments.

Uncertainty in the collected data

A quantitative evaluation of uncertainties in the
collected data was not possible due to the lack of
information.  Therefore, only a qualitative evalua-
tion was made here: the uncertainty in the data was
ranked, and this ranking was then used in a later
part of this study.  The ranking of the data from
the eight areas under consideration is shown in
Table 2, where a low rank number indicates the
better data set.
Area             Rank
Bannatyne        7
Brucewood        5
Calvin Park      1-3 (assigned average rank = 2)
Halifax          6
Gray Haven       1-3 (2)
Oakdale          4
Malvern          1-3 (2)
Toronto-West     8

Table 2. Ranking of data uncertainties for the studied
urban areas.
                                                       559

-------
The Calvin Park, Gray Haven and Malvern data were
given the highest rank.  In all these cases, the
catchments were well defined and surveyed, the
precipitation was measured on the catchment and
checked against another gauge, the flows were measured
by calibrated constriction flow meters, and a good
synchronization of precipitation and runoff records
was evident.  The measured data were checked for
correctness.

The Oakdale and Brucewood data were rated slightly
lower.  It would appear that the flow meters were
not calibrated and there is no evidence that the
collected data were checked.  It was expected
that the data from the smaller Oakdale catchment
were better defined (more accurate) than those from
the Brucewood catchment.

The next-ranked data are the Halifax data, collected
on an older area with some uncertainties in the
catchment imperviousness.  Otherwise, the instru-
mentation system is fairly good; a raingauge is
located within the catchment and flows are measured
by a critical flow meter.

The lowest rated data were those collected on the
Bannatyne and Toronto-West catchments.  There were
no rain data collected directly on the Bannatyne
catchment.  Consequently, the data from some
nearby rain gauges had to be used.  In the case of
Toronto-West, the flow rates were only inferred
from the depth of flow measurements and the Manning
equation.  Only one raingauge was used to measure
the precipitation.

                DISCUSSION OF RESULTS

Runoff quantity

The results of runoff quantity simulations with the
SWMM model are given in Table 3.  For runoff
volumes, peak flows, and times to peak, the ratios of
observed to simulated values were computed.  The
results were described by the mean value of these
ratios, standard deviation about mean and the
percentage of simulations for which the simulated
values were within ±20% of the observed ones
(see Table 3).
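The summary statistics just described are simple to compute.  The
following is a minimal sketch in Python, using hypothetical event
values rather than the data of this study, of how the mean ratio,
its standard deviation and the percentage of simulations within
±20% of the observations might be obtained for one catchment.

     # Minimal sketch (hypothetical event data): goodness-of-fit summary for one
     # catchment.  For each event the criterion is the ratio of the observed to the
     # simulated value; the summary is the mean ratio, its standard deviation, and
     # the percentage of events whose simulation is within +/-20% of the observation.
     import statistics

     observed  = [12.5, 8.3, 20.1, 5.6, 15.0]     # e.g. observed peak flows
     simulated = [11.0, 9.1, 18.5, 6.5, 13.2]     # corresponding simulated values

     ratios = [obs / sim for obs, sim in zip(observed, simulated)]
     mean_ratio = statistics.mean(ratios)
     std_ratio = statistics.stdev(ratios)

     # Count simulations falling within +/-20% of the observed value.
     within = sum(1 for obs, sim in zip(observed, simulated)
                  if abs(sim - obs) <= 0.20 * obs)
     pct_within = 100.0 * within / len(observed)

     print(f"mean ratio {mean_ratio:.2f}, std dev {std_ratio:.2f}, "
           f"{pct_within:.0f}% within +/-20%")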

                    Runoff volumes - ratio Vol. obs./Vol. sim.
                    average   standard    % of simulations within
                              deviation   ±20% of observations
Bannatyne           1.40      0.34        24%
Brucewood           0.91      0.19        66%
Calvin Park         1.03      0.17        75%
Gray Haven          --        --          --
Halifax             1.01      0.14        85%
Oakdale             --        --          --
Malvern             1.01      0.12        89%
Toronto-West        0.87      0.26        50%

                    Runoff peak flows - ratio Qp obs./Qp sim.
                    average   standard    % of simulations within
                              deviation   ±20% of observations
Bannatyne           1.12      0.09        81%
Brucewood           1.22      0.26        42%
Calvin Park         1.09      0.16        72%
Gray Haven          0.98      0.24        61%
Halifax             0.78      0.22        44%
Oakdale             1.04      0.19        70%
Malvern             1.05      0.16        77%
Toronto-West        1.12      0.14        70%

                    Times to peak - ratio Tp obs./Tp sim.
                    average   standard    % of simulations within
                              deviation   ±20% of observations
Bannatyne           0.98      0.12        90%
Brucewood           0.91      0.10        87%
Calvin Park         0.93      0.09        92%
Gray Haven          1.02      0.05        100%
Halifax             1.11      0.21        60%
Oakdale             0.92      0.13        81%
Malvern             0.96      0.07        99%
Toronto-West        1.13      0.22        55%

Table 3. SWMM runoff quantity simulations - goodness of fit.

For runoff volumes, the best goodness of fit was ob-
tained for the Malvern catchment - nearly 90% of all
the simulated volumes were within the ±20% limits.
For peak flows, the best fit was found for the
Bannatyne catchment, where 81% of all simulations were with-
in the above accuracy limits.  Finally, for the
times to peak, the best fit was found for the Gray
Haven catchment, where practically all the simulations were
within the above accuracy limits.  The overall good-
ness of fit was also evaluated.  The Malvern catch-
ment ranked the highest and the Toronto-West data
ranked the lowest.

A large variation in the goodness of fit of the SWMM
simulations on the test catchments led to a question
of whether there is a correlation between the un-
certainty in the input data and the goodness of fit.
Since the data on hand did not allow the use of
parametric statistics, this question was studied
using non-parametric statistical methods.  The null
hypothesis was defined as follows: There is no
correlation between the uncertainty in the input data
and the goodness of fit of simulated to observed
data.  This would imply that the errors in the simula-
tions are caused by a biased model structure.

The above null hypothesis was tested using the Spear-
man rank correlation coefficient.  The calculation
is given in Table 4.
                                                        560

-------
Test catchment    Input data      Goodness of        Differ-
                  uncertainty     fit rank           ence
                  rank            (after Table 3)
Bannatyne         7               5                  2
Brucewood         5               6                  1
Calvin Park       2               3                  1
Gray Haven        2               2                  0
Halifax           6               7                  1
Oakdale           4               4                  0
Malvern           2               1                  1
Toronto-West      8               8                  0

     r_s = (Σx² + Σy² - Σd²) / (2√(Σx²·Σy²)) = 73.0/80.9 = 0.90

Table 4. Ranking of input data uncertainty and the
goodness of fit.

For eight observations, the value of Spearman rank
correlation coefficient of 0.90 is significant at the
0.01 significance level12 and the null hypothesis
has to be rejected.  Thus there is a correlation
between the uncertainty in the input data and the
goodness of fit of the SWMM runoff quantity simula-
tions.  This indicates that lower simulation
accuracies obtained with the SWMM model on some
areas, e.g. Toronto-West, are not necessarily caused
by the modelling bias, but rather by inaccurate
input data.  A rigorous evaluation of the input data
errors could not be done for any of the studied
areas, since this would require much more extensive
data records than those available (e.g. several
precipitation records, etc.).  Only on a thoroughly
instrumented area could one directly separate the
modelling bias errors from those caused by the input
data errors.
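For reference, the rank correlation of Table 4 can be reproduced
with standard statistical software.  The short sketch below (Python
with scipy, shown only as an illustration and not a tool used in
this study) applies the usual average-rank treatment of ties to the
ranks listed in Table 4 and returns a coefficient of about 0.90;
the value may differ slightly from the hand calculation in Table 4
because of the tie correction.

     # Sketch: Spearman rank correlation between input data uncertainty and
     # goodness of fit, using the ranks listed in Table 4 (the three catchments
     # tied for ranks 1-3 carry the average rank of 2).
     from scipy.stats import spearmanr

     uncertainty_rank = [7, 5, 2, 2, 6, 4, 2, 8]   # Bannatyne ... Toronto-West
     fit_rank         = [5, 6, 3, 2, 7, 4, 1, 8]

     r_s, p_value = spearmanr(uncertainty_rank, fit_rank)
     print(f"Spearman r_s = {r_s:.2f}  (p = {p_value:.3f})")   # about 0.90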

One condition under which the SWMM model fails is
surcharged flow in sewers.  A technique in
which the sewer surcharging was avoided by arbitrarily
increasing the sewer pipe capacity was used by
Waller15 in conjunction with the SWMM model on the
Halifax catchment.  As one would expect, it led to
an overestimate of peak flows and a shortening of
times to peak.  These results, however, were more
realistic than the truncated hydrographs produced
by the normal SWMM runoff subroutine.

Runoff quality

Only limited runoff quality data have been collected
on the studied areas so far and not all of these
data have been processed to this date.  In fact,
quality data were available only for the following
three catchments:  Brucewood,  Bannatyne, and Malvern.
These data do not allow proper statistical analysis
as was done for the quantity data.   Consequently,
only a qualitative discussion of the processed data
follows.

The runoff quality data and the SWMM simulations are
given in Table 5.   The ratios of observed to
simulated values were calculated for the total
pollutant emissions and peak concentrations.  For
individual catchments,  these ratios were character-
ized by the mean values.

                                     Bannatyne          Brucewood          Malvern
                                     ISS=0    ISS=1     ISS=0    ISS=1     ISS=0
(a) Total BOD obs./Total BOD sim.
    Total SS obs./Total SS sim.
    Total COD obs./Total COD sim.
    Total N obs./Total N sim.
    Total P obs./Total P sim.
(b) Peak BOD obs./Peak BOD sim.      2.90     6.43
    Peak SS obs./Peak SS sim.
    Peak COD obs./Peak COD sim.
    Peak N obs./Peak N sim.
    Peak P obs./Peak P sim.
Reference                            14                 14                 7
                                     Table 5.SWMM model runoff quality simulations des-
                                     cribed by mean values of the ratios (a) Total con-
                                     stituent emission observed to that simulated (b) The
                                     peak constituent concentration observed to that
                                     simulated.

                                     The Brucewood and Malvern catchments are relatively
                                     clean areas, served by separate sewers.  The observed
                                     Biochemical Oxygen Demands (BOD)  for minor storms did
                                     not exceed 25mg/litre, the observed Suspended Solids
                                     (SS) concentrations did not exceed the value of
                                     500 mg/litre.  A large scatter in the observed and
                                     simulated data comparisons was evident.  No conclusions
                                     can be drawn regarding the use of the options to cal-
                                     culate the suspended solids.   The exponential decay
                                     option (code ISS=0) yielded simulated concentrations
                                     that were too high; the other option (an empirical
                                     relationship, code ISS=1) yielded simulated concentra-
                                     tions that were too low.  On average,  the calculated
                                     BOD concentrations were underestimated.  The estimate
                                     of the suspended solids concentrations depended on the
                                     selection of the calculation option.

                                     The concentrations of Nitrogen and Phosphates were on
                                     average underestimated in the SWMM simulations.  On
                                     the other hand, the Chemical  Oxygen Demands (COD)
                                     were consistently overestimated in the simulations.
                                     It is expected that these runoff  quality data will be
                                     further analyzed and attempts will be made to explain
                                     the lack of goodness of fit.

                                     The Bannatyne catchment is served by combined sewers.
                                     Unusually high values of BOD  and  SS concentrations
                                     were observed on this area.   As indicated in Table 5,
                                     the SWMM simulations underestimated the total BOD
                                     and SS emissions as well as the peak concentrations
                                     of both BOD and SS.

                                     Uncertainties in the collected runoff quality data
                                     cannot be estimated and, in fact, they could be fairly
                                     high.  Consequently, one cannot conclude whether the
                                     errors are due to modelling bias or to errors in
                                     the quality data.  It may take another one or two
                                                      561

-------
years before a sufficient volume of runoff quality
data is accumulated under the present program and
a full evaluation of the SWMM quality subroutine
is possible.  In the meantime, the runoff quality data
obtained with the SWMM should be accepted and used
only with great caution.

                   CONCLUSIONS

The runoff quantity subroutine of the Storm Water
Management Model was tested with good success
on a number of new urban test catchments.  The
goodness of fit of the simulated to the observed
data was found to be dependent on the uncertainty
in the input data.  No presently instrumented
catchment allows separation of the errors due to
the modelling bias from those due to the uncertainty
in the input data.  On the best instrumented
catchment, fairly accurate results were obtained
with the SWMM model.  In fact, up to 90% of runoff
volumes, 77% of runoff peak flows and 100% of times
to peak were simulated with an accuracy better than
±20% of the observed values.

The SWMM model runoff quality simulations were found
to be less satisfactory.  Though insufficient
data prevent drawing any firm conclusions, it
appears that the quality subroutine is not readily
applicable to all urban catchments.  The SWMM
quality simulations should be treated with great
caution, particularly if used for a selection of
urban runoff control alternatives, or policy en-
forcement.  It may require another one or two years
of data collection before the SWMM quality subroutine
can be fully evaluated for the feasibility of
application on Canadian urban catchments.

                 REFERENCES

1.  Brandstetter, A., "Assessment of Mathematical
    Models for Storm and Combined Sewer Management",
    Preliminary Report, Battelle Pacific Northwest
    Laboratories, Richland, Washington, Aug. 1975.
2.  Fleming, G., "Computer Simulation Techniques in
    Hydrology", American Elsevier Publishing Co.
    Inc., New York, 1975.
3.  Gore & Storrie, Ltd., "Interim Report on the
    Hamilton Test Catchment", (Unpublished), sub-
    mitted to the Canada Centre for Inland Waters,
    Burlington, Ont., March 1975.
4.  Heeps, D. P., and Mein, R. G., "An Independent
    Evaluation of Three Urban Runoff Models",Civil
    Engineering Research Report No. 4, Monash Univer-
    sity, Victoria, Australia, 1973.
5.  Jewell, T. K., et al., "Application and Testing
    of the EPA Storm Water Management Model to
    Greenfield, Massachusetts",  In: Short Course on
    Applications of Storm Water Management Models,
    University of Massachusetts, Amherst, Mass.,
    Aug. 1974.
6.  Marsalek, J., Dick, T. M., Wisner, P. E.,  and
    Clarke, W. G., "Comparative Evaluation of Three
    Urban Runoff Models", Water Resources Bulletin,
    Vol. 11, No.  2, pp. 306-328, April 1975.
7.  Marsalek, J., "Burlington Urban Test Catchment -
    Progress Report No. 1", Techn. Report, Hydraulics
    Res. Div., Canada Centre for Inland Waters,
    Burlington, Ont., March 1976.
8.  M.  M. Dillon, Ltd., Communication re the Toronto-
    East Test Catchment, 1975.
9.  Papadakis, C.  N., and Preul, H. C., "Testing of
    Methods for Determination of Urban Runoff",
    Journal of the Hydraulics Div., ASCE, Vol.99,
    No.HY9,  pp.  1319-1335, Sept. 1973.
10.  "Review of Canadian Design Practice and Compari-
     son of Urban Hydrologic Models",  Research
     Report No. 26, Canada-Ontario Agreement Re-
     search Program, October 1975.   Available from
     Training and Technology Transfer  Division,
     Environment Canada, Ottawa, Ontario,  K1A OH3
11.  Shubinski, R. P., and Roesner,  L.A.,  "Linked
     Process Routing Models", Symposium on Models
     in Urban Hydrology, AGU, Washington,  D.C.,
     April 16-20, 1973.
12.  Siegel, S., "Nonparametric Statistics for the
     Behavioral Sciences", McGraw-Hill Book Co.,
     New York, 1956.
13.  "Storm Water Management Model, Volumes I-IV",
      Environmental Protection Agency,  Water Quality
     Office, Water Pollution Control Research
     Series, Washington, D.C., July  1971.
14.  "Storm Water Management Study", (Unpublished),
     A Draft Report on a Study Commissioned by the
     Canadian Urban Drainage Subcommittee  to
     Proctor & Redfern, Ltd.,and J.  F.  MacLaren,
     Ltd., Jan. 1976.
15.  Waller, D. H.,  Coulter, W. A.,  Carson,  W. M.,
     and Bishop, D.  G., "A Comparative  Evaluation
     of Two Urban Runoff Models", A  Report  submitted
     by the Nova Scotia Technical College  to En-
     vironment Canada, April 1974.
16.  Mills, W. G., Personal Communication,  1974.
                                                       562

-------
                                  APPLICATION OF STORM AND SWMM FOR ASSESSMENT
                                    OF URBAN DRAINAGE ALTERNATIVES IN CANADA
            Paul E.  Wisner
    Manager,  Water Resources Group
       James F.  MacLaren Limited
     Andrew F. Roake
 Environmental Engineer
James F. MacLaren Limited
       Adel F. Ashamalla
Senior Water Resources Engineer
   James F. MacLaren Limited
                                  Environmental Consultants, 435 McNicoll Avenue
                                          Willowdale, Ontario, M2H 2R8
                       Abstract

     A limited programme of research and several app-
lications of urban runoff models indicate that there
is no unique pattern for model application in drainage
and pollution control studies.  Use of the simplest
model compatible with the requirements of planners and
decision makers helps to minimize unnecessary data
collection and avoid communication problems.   More
sophisticated models will be required as a study pro-
gresses from screening and initial planning phases to
the final planning and design phases.  STORM is con-
sidered as primarily a screening model for comparison
of alternatives, identification of critical events and
problem definition.  For predominantly urban areas a
lumped SWMM and a recently developed Generalized Qual-
ity Model are considered as planning models for the
analysis of critical events.  A detailed SWMM and the
WREM are considered as tools for final planning and
design work.  A computerized unit hydrograph approach
is preferred for planning in areas with low percentage
imperviousness, while the comprehensive analysis of
Stanford-type models is recognized as necessary for
major projects in large watersheds.

Background

     At present there is no specific urban runoff con-
trol legislation in Canada.  A 1973 inquiry into urban

drainage practice in Canada revealed that at that
time all major municipalities (with the exception of
Toronto) employed the Rational Formula2 exclusively in
urban drainage planning and design.  Within the last
few years the Canada/Ontario Urban Drainage Programme
has sponsored several studies related to the calibra-
tion/validation and development of urban runoff models1,3,4
and the advantages of a modelling approach to
drainage problems have been recognized by several large
urban centres which have instigated programmes of model
implementation.  The principal models currently used
in drainage studies in Canada are STORM, SWMM, WREM
and computerized unit hydrograph models.

     The Water Resources Division of James F. MacLaren
Limited has been extensively involved in the model de-
velopment and verification studies for the Canada/
Ontario Urban Drainage Programme and in a considerable
number of practical model applications for cities and
municipalities in Canada.  It is the intent in this
Paper to present our views, as consultants on the rela-
tive priorities for model improvement and the imple-
mentation of modelling in drainage planning and design,
and the role of some existing models in these fields.

Model Application and Improvement

     Most urban runoff models encountered in the pro-
grammes of model testing1,4 have required a significant
effort in "de-bugging" before being  rendered fully
operational.  Additional de-bugging  is  often required
upon the release of later versions of an existing model.
The correction of programme errors can  be a lengthy and
frustrating process and is apt to deter potential users
and cause mistrust of models in planners and decision
makers.  Some operational models have proved to be well
              suited to certain applications but inaccurate in others.
              Table 1 summarizes the modifications and routines dev-
              eloped to enable existing models to be applied in Cana-
              dian conditions.

                                      Table 1
                    Some Recent Improvements4 to SWMM and STORM

              1.  Snowmelt Quantity and Quality model integrated with
                  the SWMM RUNOFF block for Canadian conditions

              2.  Recommendations for lumping SWMM in simplified simu-
                  lations

              3.  Modification of STORAGE and TREATMENT blocks reflec-
                  ting new data

              4.  Modification of the TRCOST routine for Canadian
                  Cost estimates

              5.  Development of a Data Analysis Model for processing
                  of Canadian Atmospheric Environment Service data
                  for direct input to STORM.

                    Models, such as STORM, are readily accepted by
              non-modellers because of their simple formulation and
              statistical interpretation of the model output.  How-
              ever, traditional hydrologists appear sceptical of
              these 'oversimplified' models.  Because of the large
              investments involved in major watershed projects, the
              use of a much more sophisticated approach such as that
              offered by Stanford-type models8 has been advocated.
              The dilemma appears to be that as more models are for-
              mulated, the chance of each of these being accepted by
              planners or decision makers at a municipal level becomes
              more remote.  In the interim, outmoded empirical meth-
              ods continue to be used for design purposes in some
              costly storm sewer projects and modelling is not used
              to its full extent in the examination of alternative
              solutions to such problems as  sewer separation, and
              the need  for upgraded treatment plants.   We consid-
              er that at the present status of model implementation
              in Canada, further refinement of models and the cons-
              truction of new models should be carefully weighed
              against the hidden constraints involved, i.e. addit-
              ional de-bugging, time required for familiarization
              and potential reluctance to implement new and untested
              models.  If currently applied models can be used in a
              creative and problem oriented manner and be demonstrat-
              ed to be a means to novel design and economic benefits
              then at present these efforts will be more effective in
              promoting the widespread acceptance of modelling than
              further model refinement.

              No Single Model

                    The designation of STORM as a "planning model" or
              SWMM as a "design model" may  cause a user to be model
              oriented rather than problem  oriented.  Few models,  if
              any, are completely universal  and, therefore,  some  cau-
              tion should be exercised when an existing model  is  app-
              lied in a new and untested  role.  The deficiencies  of
              some well accepted models noted  in Table  2  indicate
              some of the  situations  in which  these models do  not
              perform well.  For instance, STORM does not simulate
              peak flows accurately because of its  long  time  step (1
                                                       563

-------
 hour)  and  lack of  flow routing routines.   SWMM does
 not  accurately simulate hydrographs under  surcharged
 conditions.  Conversely the highly sophisticated WREM
 is unsuitable for  initial planning applications be-
 cause  of the extensive data preparation and consider-
 able computer time required.  While SWMM and WREM have
been widely tested and verified on urban watersheds9,10,4,
little evidence of their suitability in predom-
 inantly rural situations has been published.  Conse-
quently we have used a computerized unit hydrograph
approach in predicting rural runoff flows.

                        Table 2
              Some General Model Deficiencies

Model     Comment

STORM     - peak flows not accurate due to 1 hour time
            step and no flow routing
          - simplistic storage and treatment routines
          - antecedent conditions have usually to be
            assumed

SWMM      - poor simulation of hydrographs in surchar-
            ged systems
          - not well validated for predominantly rural
            areas
          - quality model hard to calibrate, somewhat
            oversophisticated for most applications
          - receiving water model does not account for
            pollutant transport by diffusion

WREM      - very short time steps required to avoid
            instability
          - extensive data requirements
     Many models offer significant advantages if used
sensibly within their proven limitations.  Table 3
summarizes 12 practical modelling studies in which the
authors participated.  The model applications involved
in these studies may be broadly categorized as: screen-
ing (1)-(3); planning (4)-(8); final planning/des-
ign (9)-(12).  This work has emphasized the advan-
tages of choosing the model most appropriate to the

task in hand and of using several interfaced   models
during a single study.

A Hierarchy of Models

     The early planning stages of some urban drainage
projects are typified by limited amounts of relevant
data and several alternative patterns for development
or solution of existing problems.  The use of a simple
model, such as STORM, at this stage represents an econ-
omic approach to screening alternative policies at a
level of sophistication compatible with available data
and acceptance by non-technical planners and decision
makers.  If the limitations of STORM are recognized it
may also be used to screen long meteorological records
for the identification of conditions antecedent to
critical events.  For instance,  the relative import-
ance of snowmelt compared with summer storm runoff may
be assessed and important sequences of meteorological
events identified.   STORM can be used to determine the
events to be simulated in more detail later in the pro-
ject.  This facilitates the selection of a historical
"design" storm with known antecedent conditions, rath-
er than the somewhat hypothetical synthetic design
 storm, for subsequent modelling with SWMM or WREM
 (see Figure 1).

      Three recent studies conducted by James F. MacLar-
 en incorporated this screening approach.

 (1)   Screening  a Long-Term Meteorological Record

      An assessment of frequency of flooding and prob-
 lems associated with soil erosion during floods is be-
 ing investigated for a watershed of about 9 square
 miles in Eastern Ontario.  Urban encroachment onto the
 lower regions of the flood plain has aggravated flood-
 ing problems.   The relative significance of spring
 snowmelt  flows  compared with the standard 1:25, 1:50
 and 1:100 year  design flows is required for the selec-
 tion of the appropriate control measures.  STORM was
 used to simulate all snowmelt events in a meteorologi-
 cal  record of precipitation and temperature of 100 yrs.
 duration.   Critical events can be extracted from the
 summary output  and detailed event hydrographs computed.
 The  model can then be used for initial estimates of the
 effects of storage reservoirs on critical flows.

 (2)   Screening  a Number of Development Alternatives

      Twelve alternative development concepts were  pro-
 posed in  the initial planning stage for the new North
 Pickering Townsite,  east of Toronto.   STORM was used
 to provide an initial assessment of the probable  annual
 changes in urban runoff volume and quality associated
 with each of the alternatives in the  three main water-
 sheds affected (West Duffin Creek - 58 square miles,
 East Duffin Creek - 46 square miles, Petticoat Creek -
 10 square miles).  The proposed land uses were supplied
 as input to the model for each case and the annual poll-
 utant load (B.O.D., S.S., Settleable Solids, N, PO4)
 and  the total annual runoff was computed.   Similar com-
 putations  were performed for the existing land use patt-
 ern,  which provided  a base case.   A simple ranking mod-
 el was  developed to  facilitate comparison and ranking
 of the  development alternatives on the basis  of their
 overall water quality impact.   It  was  demonstrated, us-
 ing  STORM,  that  for  any  alternative, a storage-treatment
 relationship  (i.e. a set  of storage-treatment  combina-
 tions)  exists for which  the overall water  quality  imp-
 act  of  that  alternative  can usually be reduced to  a pre-
 determined  level  (existing condition or a  pre-defined
 'allowable loading').  The effect of low frequency
 floods was  investigated  in  the  same study.  The unit
 hydrograph method is  generally  applied for  flood syn-
 thesis  in Ontario.   A computerized version  of this
method  (FROUT) was developed  to  predict the effects of
 low  frequency floods  and  associated erosion before  and
after urbanization.   The results of this  study formed
part of an overall planning matrix involving considera-
tions other than water quality  impacts of urbanization,
with a view to selecting the preferred alternative.

 (3)   Identification  of Critical Areas

 Sewer  system inadequacies  and changing land use patt-
 erns  often  result  in a considerably higher number  of
 combined  sewer overflows  from some areas in a city than
 from other  areas  in  the  same city.  The most  cost-effec-
 tive  approach to  limiting  the pollution of receiving
 waters  due  to combined sewer overflows is  to  limit over-
 flows  from the critical  areas.   The first  stage in such
 an effort  is obviously to  define the critical  areas.
 STORM was  used for this  purpose for the City  of Winni-
 peg.  Five years of meteorological records were processed
 for  the simulation of the  quality  of runoff and snow-
 melt  in 35  combined  sewer  areas.   Calibrations to  City
 records enabled period total  emissions of  B.O.D. and  SS
 to be reproduced to within ± 10 percent of the measured
 totals.  A  number of critical  areas were identified on
                                                      564

-------
the basis of average annual pollutant discharge and
mass discharge per overflow event and critical events
were subsequently simulated in more detail with SWMM
in order to evaluate various control alternatives.

      It has been shown that fairly sophisticated single
event models, such as SWMM, can be applied in a lumped5,14
manner (i.e. the input characteristics describing
the subcatchments and transport system can be aggrega-
ted) for simplified simulation.  This implies that in
situations where only basin outlet hydrographs from
design events are required, such as in the initial pla-
nning stages, data preparation time may be minimized.
At this stage, it is considered that the sophisticated
pollutant routing routines of SWMM are not justified.
Consequently, the use of a Generalized Quality model
(Appendix 1) in conjunction with lumped SWMM, or unit-
graph models is advocated.

      In the final planning stages of the project, more
detailed information and monitoring results become
available and the number of alternatives is reduced.
At this stage a detailed simulation involving fine dis-
cretization and calibration for accurate water quality
prediction is warranted.  At this juncture, the effects
of untreated and treated discharges to the receiving
water would be assessed.  A more sophisticated one-
dimensional or two-dimensional receiving water model
than that in SWMM (i.e. RECEIV) might be required in
some cases15.

      The WREM model is sometimes employed in final
planning for an analysis of the benefits of surcharged
design.    According to our experience in studies con-
ducted in Winnipeg,  Port Credit and Edmonton, the in-
tentional use of surcharge in the design of relief sew-
ers or in the analysis of interceptor capacity
can lead to considerable reduction in peak stormwater
flows.  Consequently, it appears reasonable to employ
only a model with sophisticated routing, such as the
WREM, capable of simulating surcharged flow in the des-
ign of relief sewers or in the final planning phase of
new projects where in-line storage by surcharge is feas-
ible.

      During the course of a project from screening to
design, model sophistication, data requirements and
computer costs will all increase.  The results of each
model should lead logically to the next, more sophis-
ticated application and good communications with the
decision makers should ensure a shared objective.  The
involvement of non-technical planners and decision
makers in regular consultations is essential in this
regard.

      Our experience in studies for the City of Winni-
peg indicates that once flow simulation techniques are
understood and the potential benefits appreciated, there
is a natural tendency of planners and decision makers
to become interested in quality simulation as a part of
pollution control policy planning.

Conclusions

      One of the primary goals of those involved in
urban runoff modelling in Canada should be the replace-
ment of outmoded empirical design formulae currently
widely used in Canadian urban drainage planning and
design by more accurate and reliable methodologies. At
present this goal will be best served by the implement-
ation in engineering practice of existing well valida-
ted models.  The experience in a number of research
and practical urban drainage and pollution control stu-
dies confirms that modelling is a dynamic process.  No
single model or unique pattern of application can be
 recommended.   Best results are likely to be achieved
 with a series of interfaced models applied within pro-
 ven limitations.   This approach may logically result
 in  the ultimate acceptance of highly sophisticated con-
 tinuous simulation models in the final planning phase
 of  most major watershed studies.

                        Table 3
   Examples of Practical Model Application and Interface

(1) STUDY:  Effects of urbanization.  Screening development
    alternatives for North Pickering Community (Phase 1)
    SCOPE:  Comparison of low frequency floods & water quality
    impacts of 12 development alternatives
    MODELS AND SIMULATIONS:
      FROUT - 1:10, 1:25, 1:50 year floods and associated
              solids erosions
      STORM - Modification of total annual runoff & runoff
              pollutant loads & effects of storage & treatment
              (1 year)

(2) STUDY:  Comparison of Flood Control Alternatives on the
    partially urbanized Graham Creek watershed
    SCOPE:  Relative magnitude of snowmelt runoff compared to
    summer storm runoff
    MODELS AND SIMULATIONS:
      STORM - Screening of 100 yrs meteorological data
      SWMM (snowmelt) - snowmelt, urban areas
      FROUT - non-urban area runoff, flow routing

(3) STUDY:  Evaluation of combined overflow pollution in
    Winnipeg
    SCOPE:  Problem identification.  Critical areas and major
    events.  Preliminary analysis of control alternatives
    MODELS AND SIMULATIONS:
      STORM - Comparison of annual pollutant loads in overflow
              over 5 years in 34 districts
      SWMM, RECEIV - Simulation of critical events for
              different control policies

(4) STUDY:  Master Drainage Plan for Thornhill-Vaughan
    development, Ontario
    SCOPE:  Effects of development on peak flows.  Outline for
    main drainage lines and storage facilities
    MODELS AND SIMULATIONS:
      FROUT - 1:25, 1:100 year floods
      SWMM (lumped) - Sizing of main trunk storm sewers and
              runoff detention ponds

(5) STUDY:  P.A.C.E. Study of Runoff from Oil Distribution
    Terminals
    SCOPE:  Simulation of overflows.  Quantity and Quality of
    Terminal Runoff
    MODELS AND SIMULATIONS:
      STORM - Simulation of annual overflows - reduced time
              step employed for small areas
      SWMM - Runoff hydrographs
      GQM - peak concentrations and event total pollutant load
                                                       565

-------
                 Table 3 (cont'd)

(6) STUDY:  Toronto International Airport Runoff Study
    SCOPE:  Demonstration of relationships between airport
    operations and runoff pollution.  Comparison of control
    alternatives
    MODELS AND SIMULATIONS:
      STORM - Snowmelt events
      SWMM - Storm runoff simulation
      GQM - Event total pollutant loads

(7) STUDY:  Humber River Outfall, Toronto
    SCOPE:  Planning study for the optimum location of
    nearshore landfill developments
    MODELS AND SIMULATIONS:
      RECEIV - Simulation of critical water quality conditions
              for different landfill configurations

(8) STUDY:  Winnipeg Drainage Criteria Manual
    SCOPE:  Development of drainage policies and procedures
    involving small ponds and roof storage
    MODELS AND SIMULATIONS:
      SWMM - Examples of sewer and storage design methodologies
              (under 200 acres)

(9) STUDY:  Port Credit Storm Relief Study
    SCOPE:  Analysis of existing storm sewer system.
    Requirements for upgrading system performance
    MODELS AND SIMULATIONS:
      SWMM - Runoff and non-surcharged flows
      WREM - Analysis of surcharged conditions, testing of
              relief alternatives

(10) STUDY:  Relief Sewers in Jessie, Winnipeg
     SCOPE:  Analysis of existing combined system in Jessie
     district.  Design of relief lines
     MODELS AND SIMULATIONS:
       SWMM - Runoff and non-surcharged flows
       WREM - Analysis of surcharged conditions, design with
              surcharge

(11) STUDY:  Edmonton Interceptor Study
     SCOPE:  Analysis of existing system & relief requirements.
     Investigation of regulators and overflows
     MODELS AND SIMULATIONS:
       SWMM - Inlet hydrograph from detailed areas
       WREM - Analysis of surcharge and overflows during design
              and historical storms

(12) STUDY:  Winnipeg Stormwater Pumping Station Study
     SCOPE:  Analysis of pumping station performance and effect
     of in-line storage
     MODELS AND SIMULATIONS:
       WREM - Simulation of flows during critical historical
              storms, surcharged conditions
References
1.  James F.  MacLaren Ltd.,  1974,  "Review of Canadian
   Storm Sewer Design Practice and Comparison of  Urban
   Hydrologic Models",  a report to the Canadian Depart-
   ment of the Environment,  Canada Centre for Inland
   Waters
3. James F. MacLaren Ltd., 1974, 1975, 1976 "Report on
   the Brucewood Monitoring Program" unpublished reports
   obtainable via Canada Centre for Inland Waters,
   Burlington, Ontario

4. Proctor and Redfern Ltd. and James F. MacLaren Ltd.,
   1976, "Storm Water Management Model Study", a draft
   report prepared for Environment Canada and the Ont-
   ario Ministry of the Environment

5. U.S.  Army Corps of Engineers H.E.C., 1975, "Urban
   Storm Water Runoff, STORM", A Generalized Computer
   Program, Davis, California

6.  U.S. Environmental Protection Agency, 1971, "Storm
   Water Management Model", Vol. I-IV, Water Pollution
   Control Res. Series. No. 11024 DOC09/71, Washington
   D.C.

7. "San Francisco Storm Water Model User's Manual and
   Program Documentation" prepared by Water Resources
   Engineers for the City and County of San Francisco,
   Dept. of Public Works, 1975

8. Crawford, N.H., and R. K. Linsley, 1966, "Digital
   Simulation in Hydrology; Stanford Watershed Model
   IV",  Tech. Report No. 39, Dept. Civ. Eng., Stan-
   ford University

9.  Brandstetter, A.B., 1975, "Assessment of Mathemat-
   ical Models for Storm and Combined Sewer Management"
   a preliminary report, Battelle Pacific Northwest
   Laboratories

10. Heeps, D.P. and R. G. Mein, 1973, "An Independent
   Evaluation of Three Urban Storm Water Models",  Mon-
   ash University Civil Engineering Res. Report No. 4

11. Soil Conservation Service, U.S. Dept. of Agricul-
   ture "Hydrology" Section 4, SCS National Engineer-
   ing Handbook

12. Wisner, P.E. et al, 1975, "Interfacing Urban Run-
   off Models", a paper presented at ASCE Environmental
   Eng.  Div. Specialty Conference on Environmental Eng.
   Research and Design, Gainesville, Florida

13. Kiefer, C.J., and H.H. Chu, 1957, "Synthetic Storm
   Pattern for Drainage Design", Journal of Hydraulic
   Division ASCE, August 1957

14. Wisner, P.E., and Perks, A., 1975, "Lumping the
   SWMM Model", a paper presented at the SWMM User's
   Meeting, Gainesville, Florida

15. Barnwell, T.O., Cavinder, T.R., 1975, "Application
   of Water Quality Models to Finger Fill Canals", Pro.
   2nd Annual Symposium of the Waterways, Harbours &
   Coastal Eng. Div., ASCE

16. Clarke, W.G. et al, "Hydrograph Methods in Relief
   Sewer Design - A Case Study", a paper presented at
   the SWMM User's Meeting, Gainesville, Florida

17. James F. MacLaren Ltd., 1975,  "Report on Storm Wat-
   er Outlets in Port Credit", an unpublished report
   prepared  for  the City of Mississauga, Ontario.

18. James F. MacLaren Ltd., 1976,  "Edmonton Master Dra-
   inage Study", an unpublished report prepared for the
   City of Edmonton, Alberta
2.  Koplyay, T.M., 1975, "Urban Drainage Studies in Can-
   ada", a paper presented at the American Public Works
   Association Congress in New Orleans
                                                      566

-------
                     Append ix 1

           Generalized Quality Model  (GQM)

     The  Generalized Quality Model computes surface
runoff quality in basically the same manner as the SWMM.
Some of the important aspects of the new model are
summarized below:

  (a)   Aggregated single catchment model
  (b)   Five land uses possible
  (c)   Separate input hydrograph forms basis for qual-
       ity computations.  (This may be generated by
       any model or originate from measurements.  Any
       time interval can be used for hydrograph input)
  (d)   User supplied dust and dirt composition and
       loading rates for each land use
  (e)   No quality routing or pollutant decay computa-
       tions
  (f)   No erosion or deposition of sediments computed
  (g)   Two methods for Suspended Solids computation
  (h)   Catchbasin contributions modelled
  (i)   Quality calculations may be based on either
       flow from the total area, or only on flow from
       the impervious area
  (j)   Ten pollutants may be simulated (BOD, COD, Sus-
       pended Solids, Settleable Solids,  Coliforms, N,
       PO , Cl, lead, oil and grease)

  (k)   No default values supplied
  (1)   Pollutographs, mass curves and surface load
       statistics are available for each pollutant

     The  principal advantage of the Generalized Quali-
ty Model  is the reduced cost of computations and re-
duced data preparation time achieved by aggregating
the properties of the entire study area and modelling
it as a single catchment.  The model is extremely flex-
ible and  may be used for the study of many aspects of
stormwater pollution.
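     As a rough illustration of the type of computation such a quality
model performs - this is a generic SWMM-style exponential washoff
sketch with hypothetical loading and decay values, not the GQM code
itself - a pollutograph can be generated from a separately supplied
hydrograph as follows:

     # Generic exponential-washoff sketch (hypothetical parameters, not the GQM):
     # the pollutant load remaining on the catchment decays in proportion to the
     # runoff depth in each time step, and the concentration is the washed-off
     # mass divided by the runoff volume.
     import math

     DT_MIN = 5.0                   # time step of the supplied hydrograph, minutes
     K_WASHOFF = 4.6                # washoff coefficient per inch of runoff (assumed)
     AREA_ACRES = 50.0              # catchment area (assumed)
     load_lb = 120.0                # initial dust-and-dirt pollutant load, lb (assumed)

     # Separately supplied runoff hydrograph, inches per hour over the catchment.
     runoff_in_per_hr = [0.00, 0.05, 0.20, 0.35, 0.25, 0.10, 0.02, 0.00]

     for step, rate in enumerate(runoff_in_per_hr):
         depth_in = rate * DT_MIN / 60.0                 # runoff depth this step
         washed_lb = load_lb * (1.0 - math.exp(-K_WASHOFF * depth_in))
         load_lb -= washed_lb
         volume_cf = depth_in * AREA_ACRES * 3630.0      # 1 acre-inch = 3630 cu ft
         conc_mg_l = (washed_lb / volume_cf) * 16018.5 if volume_cf > 0.0 else 0.0
         print(f"step {step}: washoff {washed_lb:6.2f} lb, conc {conc_mg_l:7.1f} mg/l")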
                                                        FIGURE 1

                  THE ROLE OF MODELS IN DIFFERENT PHASES OF DRAINAGE STUDIES

     (Flow chart along a time axis from planning to design:  STORM - critical events,
     annual runoff changes, effect of storage;  unit hydrograph models - rural and
     semi-rural basins;  SWMM - lumped for urban areas, detailed for non-surcharged
     conduits;  WREM - surcharged conduits only;  STORM and GQM - critical events,
     comparison of alternatives, estimates of storage and treatment capacities, peak
     concentrations, event mass emissions;  SWMM and a sophisticated receiving water
     model - effect in the receiving water, refinement of storage and treatment
     options, calibrated for final analysis, 1-D or 2-D for diffusion.)
                                                       567

-------
        ON THE VERIFICATION OF A THREE-DIMENSIONAL PHYTOPLANKTON MODEL OF LAKE  ONTARIO
               Robert V. Thomann
 Environmental Engineering & Science Program
               Manhattan College
               Bronx, N.Y. 10471
               Richard P. Winfield
  Environmental Engineering  &  Science Program
               Manhattan College
               Bronx, N.Y. 10471
INTRODUCTION

The purpose of this paper is to highlight
the growing need for detailed and quantita-
tive verification of water quality models
that goes well beyond model computation and
determines measures of model adequacy for the
decision maker.  In particular, this paper
is focused on the need for verification of
phytoplankton-nutrient models, the number of
which has increased significantly in recent
years.  These models all make use of a simi-
lar underlying deterministic framework of
coupled interactive non-linear differential
equations which are solved numerically in
discrete space and time.

Indeed, the state of the computing art of
such frameworks is advancing rapidly and to-
day it is no longer of great moment if hun-
dreds of sets of non-linear equations are
successfully solved on a large computer.  What
is of significance however, is whether the
numerical computations are "reasonable" rep-
resentations of the real world.  It is at
this point that considerable confusion re-
sults both in the realm of the model builder
and in the mind of the decision maker.  What
is "reasonable"?  Is it sufficient to gener-
ate computed values that "look" like what
we are observing?  For example, is it suffi-
cient that a phytoplankton model simply
generate a spring pulse which has been ob-
served or is there a certain quantitative
measure that must be introduced to determine
not only that a spring pulse is calculated
but that its magnitude is correct in some
sense?  The question addressed in this paper
therefore is "What criteria might one use to
determine the adequacy of the model?" It is
strongly believed that unless a detailed
examination of the comparison of the model
to observed data is carried out, there is no
way of judging the adequacy of the computa-
tion.  There may, of course, be situations
where this is not possible; as for example in
projecting phytoplankton conditions in a res-
ervoir that is not yet in existence.  Such a
problem context is not considered here.  The
thrust of this paper is aimed at detailed
verification, where possible, so that the
credibility and utility of a modeling frame-
work  are  established.

THE LAKE ONTARIO MODEL

A three dimensional model of the phytoplank-
ton of Lake Ontario is used as an illustra-
tion of the kind of problem that one faces in
attempting a detailed verification analysis.
The basis of this model, called Lake 3, has
been discussed previously (1).  The kinetics
of the model include linear and non-linear
interactions between 1) phytoplankton chloro-
phyll, 2)  herbivorous  zooplankton, 3) car-
nivorous zooplankton, 4) detrital nitrogen,
5) ammonia nitrogen, 6) nitrate nitrogen,
7) detrital phosphorus and  8) orthophosphate
phosphorus.  The details of the  kinetics  are
given in  (1) where a lakewide model  (using
only vertical definition) was used to  verify,
by judgment, open lake behavior.

Fig. 1 shows the spatial configuration of the
computational grid used for Lake  3;  67 seg-
ments are used and for the  eight  dependent
variables, 536 non-linear equations  are inte-
grated in time for a maximum period  of 14
months.
Fig. 1.  Lake 3 Model Grid
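The following toy sketch (illustrative only; the two-variable
kinetics and coefficients shown are not those of Lake 3) indicates
the kind of explicit time-stepping calculation that is repeated
for every segment and dependent variable when such a set of
coupled non-linear equations is integrated:

     # Toy sketch (not the Lake 3 kinetics): forward-Euler integration of one
     # coupled, non-linear phytoplankton-nutrient pair in discrete time.
     DT = 0.08                 # time step, days
     DAYS = 14 * 30            # roughly a 14-month period

     growth_max = 2.0          # 1/day (hypothetical)
     half_sat = 10.0           # nutrient half-saturation, ug/l (hypothetical)
     loss_rate = 0.1           # 1/day (hypothetical)
     yield_n = 0.5             # nutrient used per unit of phytoplankton produced

     p, n = 1.0, 50.0          # initial phytoplankton and nutrient concentrations
     t = 0.0
     while t < DAYS:
         growth = growth_max * n / (half_sat + n) * p    # Michaelis-Menten uptake
         dp = growth - loss_rate * p
         dn = yield_n * (loss_rate * p - growth)         # uptake less simple recycling
         p += DT * dp
         n += DT * dn
         t += DT

     print(f"phytoplankton {p:.2f}, nutrient {n:.2f} after {DAYS} days")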

A time step of .08 days is used throughout and
solution is accomplished on a CDC  6600 and re-
quires some 63K of storage and about 1 hour
of equivalent main frame computing time.  The
model is relatively large and for  any one run
generates some 100,000 numbers.  The analyst
attempting to absorb the behavior  of such a
model faces a formidable, indeed almost im-
possible task since attention can  only be
directed towards certain portions  of the
model (either in variable or physical space).
Furthermore, since the various portions of
the model are so interactive, "adjustments"
to improve the model in one region may result
in an undesirable change in another part of the
model.  Therefore, a strategy for  determining
the behavior of the model and its  verifica-
tion status must be developed.  Such a
strategy must of necessity include the utili-
zation of an available data base such as the
results from the International Field Year on
the Great Lakes  (IFYGL) for Lake Ontario. Fig.
2 shows the flow diagram adopted for the anal-
ysis of the Lake 3 model.

REDUCTION OF IFYGL DATA BASE

The IFYGL data base is resident in STORET and
contains approximately 200,000 observations,
encompassing 75 water quality parameters.
This data base is the most complete set of
observations obtained to date on Lake Ontario
and contains a wealth of information on the
                                              568

-------
     (Flow diagram boxes:  STORET Data Base; EDIT; Summary Statistics, IFYGL Data;
     Summary Plots;  LAKE 3 Model Output; EDIT; Summary Statistics, Model Output;
     Summary Plots.)

                      Fig. 2. Flow diagram for model verification analysis
dynamics of the Lake.  The task as shown in
Fig. 2 is to utilize the data set to produce
summary statistics of the IFYGL data, which
would be used to analyze the results of the
Lake 3 model.  These statistics are generated
for volumes of the Lake corresponding to the
segmentation of the Lake 3 model.  Given the
IFYGL cruises, monthly mean and variance stat-
istics are used.    After the segment statis-
tics are generated for the various water qual-
ity parameters of interest, a display package
is accessed to generate microfilm or paper
plots of the parameter statistics versus time.

The STORET data base is accessible to the user
through program packages for standard re-
trievals and manipulations of the data with
fixed output format.  Since the data set is
large (2 x 10  observations) a methodology
had to be formulated that would facilitate the
sizable reduction task.  Recognizing that the
reduced statistical data set would be used on
an entirely different computer system (CDC
6600) than EPA's 370-155 and the need to ac-
complish the data reduction in the shortest
time possible, such a methodology is a neces-
sity.

The scheme was carried out for each of the 67
segments and a total of over 200 runs were
made.  Each segment required three reduction
runs since a maximum of eight parameters per
run could be made and twenty variables were
reduced per segment.

The first step was to prepare decks which des-
cribed the segment volumes.  Each volume was
defined using a latitude/longitude polygon
with depth constraints.  The STORET program
Mean was used to generate the segment statis-
tics; monthly mean,  standard deviation,  num-
ber of observations, maximum and minimum.
Since the output from Mean is fixed and the
results were to be transported to the CDC-
6600 via data cards, manipulation of the out-
put file was necessitated.  OSI's 370-155
operating system contains an online interac-
tive text editor named Wylbur.  Using Wylbur
and its limited macro capability, text edit-
ing module programs  were developed that re-
duced the output from 140 to 80 characters
per line and eliminated all extraneous lines
of information.  This compressed data set was
then punched and therefore, was in a form
processable by the CDC-6600.  A Fortran pro-
gram was written to  manipulate this data in-
to the format required by the verification
analysis and graphic display programs.   The
result of this effort was an IFYGL data set
of monthly statistics for twenty variables
for 67 segments for the period May,  1972
through June, 1973.
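As a present-day illustration only - the original reduction relied
on STORET retrievals, Wylbur text editing and Fortran reformatting,
not on the tools shown here, and the file name and layout below are
assumptions - the same kind of segment-by-month summary could be
produced from a flat observation file as follows:

     # Sketch (illustrative only): monthly mean, standard deviation, count, minimum
     # and maximum by model segment and water quality parameter, from a flat file of
     # observations with columns segment, parameter, date, value (assumed layout).
     import pandas as pd

     obs = pd.read_csv("ifygl_observations.csv", parse_dates=["date"])
     obs["month"] = obs["date"].dt.to_period("M")

     summary = (obs.groupby(["segment", "parameter", "month"])["value"]
                   .agg(["mean", "std", "count", "min", "max"])
                   .reset_index())

     summary.to_csv("segment_monthly_statistics.csv", index=False)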

A graphical display program was also written
in Fortran to display the temporal variation
of the parameters.  Monthly means plus or
minus one standard deviation are displayed.
The graphical output of this program can be
routed either to paper or microfilm.  The
use of microfilm for both graphical and
printed output has proven to be of immense
utility when dealing with large scale prob-
lems such as Lake Ontario and is to be recom-
mended.

VERIFICATION ANALYSIS

In the lower path of Fig. 2, the continuous
Lake 3 model output is also processed to gen-
erate monthly mean values by segment, month
and variable.  A merge of data and model out-
put is then accomplished, computer generated
plots of theory and data are prepared for the
analyst and a verification program is then
accessed for testing the behavior of the
model and for preparation of verification
scores and summaries.  Fig. 3 shows a typical
plot(redrawn)as generated from one of the
runs for segment #21 and shows the overplot
of the theory and the observed data.  The
Fig. 3.  Typical merge of model output (solid
         line) and data, segment 21

amount of effort to reach the stage of Fig.3
is significant and should not be underesti-
mated.

Several simple tests comparing model output
to observed data have been constructed and in
this paper, emphasis is placed on testing the
difference of means.  A standard "t" test is
used.
Thus, let x_ijk = the observed mean for vari-
able i, segment j and month k, and c_ijk = the
comparable computed mean.  Then d = c - x is
the difference of means, assumed to be distri-
buted as a Student's "t" probability density
function.  If the variance of the model is
assumed equal to the observed variance, then

     t = (d - δ)/s_d                                  (1)

where δ is the true difference between the
model and the data and s_d is the standard
deviation of the difference given by the
pooled variance, or

     s_d² = 2s²/N                                     (2)

for s² as the data variance for a specific
month, segment and variable.  Under the null
hypothesis δ = 0, there is a "critical" d_c
which delineates the region of rejection of
the hypothesis and is given by

     d_c = ± t_c s_d                                  (3)

and for a 95% confidence range (5% chance of
making a Type I error),

     d_c = ± 2.83 s/√N                                (4)

The distribution of d and the critical regions
are shown in Fig. 4.  As indicated, the veri-
fication score V is

     V = 0           for  -d_c < d < d_c              (5)

     V = d - d_c     for  d > d_c                     (6)

     V = d + d_c     for  d < -d_c                    (7)
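
For a single segment, month and variable this test reduces to a few lines of arithmetic.  The sketch below is a modern illustration, not the original Fortran verification program; it assumes the observed mean, the model mean, the observed standard deviation s and the number of observations N are at hand, and it uses t_c of about 2 for the 95% level, as in Eq. (4).

# Sketch of the V score of Eqs. (2)-(7); input values are illustrative only.
from math import sqrt

def v_score(obs_mean, model_mean, s, n, t_crit=2.0):
    """V = 0 inside the critical region, signed excess outside it."""
    d = model_mean - obs_mean            # difference of means, d = c - x
    s_d = sqrt(2.0 * s * s / n)          # pooled standard deviation, Eq. (2)
    d_c = t_crit * s_d                   # critical difference, Eq. (3); about 2.83*s/sqrt(N)
    if -d_c < d < d_c:
        return 0.0                       # Eq. (5): no significant difference
    return d - d_c if d > d_c else d + d_c   # Eqs. (6) and (7)

print(v_score(obs_mean=4.2, model_mean=7.1, s=1.5, n=8))   # V of about 1.4 here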
Another simple measure that may  be  used is  the
number of segments in a given month that have
a V score equal to zero.  Therefore,  let
     K_ijk = 1  for  V_ijk = 0.

A score defined as the S score for variable
i and month k is therefore given by

     S_ik = ( Σ_j K_ijk ) / n                         (8)
where n is the total number of segments where
a V score can be computed, either  for  the
entire lake or for just certain vertical
layers or regions of the lake  (as  for  example,
near shore vs. open lake).  The score  then
simply represents the fraction  (or percent) of
segments that "passed" the verification test
of V=0.  Since up to perhaps 10 variables are
analyzed in this verification analysis, an
overall aggregated S score can also be com-
puted.  Verification of all variables  may not
be of equal concern.  For example, one may be
willing to accept a lack of verification of
ammonia nitrogen for the Lake but  may  be par-
ticularly concerned about say, total phosphor-
us and chlorophyll.  Therefore, a  series of
weights, w., can be assigned to each variable
i representing the relative importance of each
variable.  The aggregated score for month k
is then given by

     S_k = ( Σ_i Σ_j w_i K_ijk ) / ( n Σ_i w_i )      (9)

where r is the number of variables that are
in the aggregated score.  S,  therefore repre-
sents the weighted fraction of the total num-
ber of segment variables that passed a "t"
test of V=0 for month k.  It should be noted
that not all segments and variables can be
tested at each month, so that r and n  are
functions of the data availability for month k.
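
A compact sketch of Eqs. (8) and (9) is given below.  It assumes the V scores for one month are held in a list of lists (one row per variable, one entry per segment, None where no score could be computed) and that each variable's count is taken from its own available segments, which is one reading of the statement that r and n depend on data availability; the weights and scores shown are illustrative.

# Sketch of the S score (Eq. 8) and the weighted aggregated score (Eq. 9).
def s_score(v_row):
    scored = [v for v in v_row if v is not None]
    if not scored:
        return None
    return sum(1 for v in scored if v == 0.0) / len(scored)   # fraction passing V = 0

def aggregated_s(v, w):
    num = den = 0.0
    for weight, row in zip(w, v):
        scored = [x for x in row if x is not None]
        num += weight * sum(1 for x in scored if x == 0.0)
        den += weight * len(scored)
    return num / den if den else None

v = [[0.0, 1.4, None, 0.0],      # e.g. chlorophyll V scores by segment
     [0.0, 0.0, 0.7, None]]      # e.g. total phosphorus
w = [2.0, 1.0]                   # relative importance weights
print([s_score(row) for row in v], aggregated_s(v, w))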

RESULTS FROM LAKE 3 MODEL RUNS

For this paper, three runs of the  Lake 3 model
were available for verification analysis with
the IFYGL data.  These runs emphasized the
sensitivity of the verification to the initial
conditions for each variable and segment.  For
all runs, the average temperature  variation,
solar radiation, flow transport and horizontal
and vertical dispersion were used.  This
is in contrast to using the actual conditions
during the 1972-1973 IFYGL year.   The  kinetic
structure used for the homogeneous Lake 1
model was also employed.  The runs are: 1) Run
#1, which used initial conditions equal
throughout the Lake for January 1, such con-
ditions being chosen from Lake 1 runs; 2) Run
#2, which incorporates some spatial changes
in initial conditions for chlorophyll, ortho-
phosphate and nitrate based on IFYGL data,
with computation also beginning on January 1; 3) Run

-------
#3,  which begins  computation on May 1,  1972
and uses as initial conditions the observed
segment averages  for May, 1972 as given by
the IFYGL data.   Run #3 presumably then repre-
sents a "better"  run in the sense that  the
initial conditions  are chosen from the  ob-
served data.  Not all segments had equal
amounts of data and in some cases, signifi-
cant data gaps existed for various months.
Fig. 5 shows some typical results of the veri-
fication analysis for phytoplankton for seg-
ment #16, Run #1.

Fig. 5.  Typical statistical comparison of
         model and data, Run #1, Segment No. 16:
         model mean minus observed mean, with
         95% confidence limits marking the region
         of no significant difference between
         means, May 1972 through June 1973.

The gaps in the record can be seen, as well as
the region of no statistically significant
difference between model and observed mean
(Eq. (4)) and the monthly differences between
the model mean and the observed mean.  "In-
sufficient data" indicates
 that  the variance of the sample mean could
 not be computed, implying  that only one sam-
 ple was  available.  The range  of  the no
 difference region is significant  and as shown
can be as much as ±4 μg chlor/l.  The appli-
 cation of Eqs.  (5) to  (7)  would therefore
 lead  to  V scores of zero for months such as
July to a maximum overestimation of 3.7 μg/l
 in  June.

 Computations such as represented  in Fig.6
 are carried out for each segment  so that the
 analyst  can also view the  V  score spatially
 by  month.  A typical result  for Run #2 and
 June  1972 conditions is shown  in  Fig. 6.
Fig. 6.  Distribution of V score (contours: phyto-
         plankton chlorophyll verification score,
         μg/l), Run #1, June 1972 conditions.
                                             As shown, a  significant region of the  Lake is
                                             verified for this  run although there are  cer-
                                             tain sectors where the model overestimated
                                             the mean.

                                             In order to  provide further insight into  the
                                             behavior of  the  model compared to the  observ-
                                             ed data, lake wide averages of each of the
                                             segment statistics were computed for two
                                             layers.  Figs. 7 and 8 show some of these
                                             results for  phytoplankton and Run #1.
                                  
-------
 across  each  of  the  segment  V scores  for each
 month as opposed  to Fig.  7  and  8  where the
 average was  first taken of  all  of the means
and a single score computed.  All three runs
are compared in Fig. 10 in terms of the per-
cent of the total number of segment-variables
that had individual V scores of zero.

Fig. 10.  Overall segment-variable score for
          Runs #1-#3: percent of segment-variables
          verified ("perfect" verification = 100%)
          vs. time of year, 1972-1973.
During the 1972 period of verification approx-
imately 50% of the segment-variables verified
while during the 1973 period only 30-40% veri-
fied.   (It should be noted however, that the
data available in 1973 is significantly less
than that in 1972.)  Further, Run #3 which was
constructed to further improve model perform-
ance did not really improve the overall veri-
fication; in fact it decreased the S score.
None of the runs provided a substantial change
in the S score.  The S score therefore, repre-
sents a measure  that incorporates the be-
havior of  each of  the key variables and pro-
vides a basis for  determining whether the
model verification is improving or deterior-
ating under  different model input.  It does
not however  indicate quantitatively the de-
gree to which each segment variable failed
to verify.   The  quantitative V score provides
such an estimate.   Plots such as Figs. 9 and
10 therefore complement each other in terms
of displaying the  overall verification of the
model.

CONCLUSIONS

This paper has not addressed directly the
question of  whether the present Lake 3 model
of Lake Ontario  is a "good" model, but has
rather concentrated on highlighting the need
for development  of measures of model verifi-
cation.  The analysis of the verification
statistics for Lake Ontario does, however, pro-
vide an illustration of this very important
need.  It is no longer sufficient to simply
develop numerical  solutions to the complex
interactive  equations of phytoplankton models.
Rather, a substantial effort must be expended
to utilize available data bases together with
statistical  measures of verification in order
to determine the overall credibility and
adequacy of  the model.   The illustrative re-
sults presented here for Lake Ontario indi-
cate how these measures of verification per-
formance behave under different assumptions
on model initial conditions.  Overall, phyto-
plankton chlorophyll was verified to about
1 μg/l chlorophyll outside of the limit of no
difference between model mean and observed
mean, and approximately 40-50% of the segment-
variables were verified regardless of the
initial conditions.

REFERENCES

1. Thomann, R.V., et al.  Mathematical Model-
   ing of Phytoplankton in Lake Ontario.  1.
   Model Development and Verification, EPA
   660/3-75-005, ORD, Corvallis, Oregon,
   March 1975, 177 pp.

ACKNOWLEDGEMENTS

The assistance of  William Beach,  Jan-Tai  Kuo
and John Segna of  Manhattan College  is  ac-
knowledged together  with the insights of  our
colleagues Drs.  Donald  O'Connor and  Dominic
Di Toro.   This work  was  carried out  under
EPA Research Grant  No.  R803680-01.

-------
                             MATHEMATICAL MODEL FOR THE EXCRETION OF 14CO2 DURING
                                          RADIORESPIROMETRIC STUDIES
                                                  Rumult  Iltis
                                     U. S. Environmental  Protection Agency
                                      Health  Effects Research  Laboratory
                                                Cincinnati,  Ohio
                       ABSTRACT
     "Mathematical  model"  described in this  paper
applies  to  a  biological  process  that can  be  expressed
 in a well  defined  analytical  form.  Specifically,
it pertains to  the  rate  of excretion of 14r  (injected
i.v.) from  the  lungs  of  rats  during the radiorespiro-
metric investigations.    In the  experiment,  the rate
of excretion  of l^cc^ from the lungs is changed by a
toxic agent (methyl mercury)  ingested 24  hours prior
to the experiment.   In this study,  the model  for '^£$2
excretion  is  presented in  the form  of a solution to
four first  order differential  equations reduced to a
fourth order  differential  equation.   The  integral of
the model  for the controls and the  exposed animals at
a selected  time ti  is used to measure the severity of
toxicity by taking  the difference of the  two integrals.
The model  has 8 constants, thus  it  is possible to
take 8  independent measurements, at the early stages
of the experiment,  and obtain eight independent
equations  to  yield  a  solution (i.e. the distribution
of the excretion with time).   In this way a  predic-
tion of the effect, if any, can  be  made.  Using
heuristic  approach, however,  the model can be
simplified  to yield a skewed  distribution that can be
fitted to  data up to  the selected time tj .  The
heuristic  distribution contains  only two parameters
(unknowns), thus only two measurements at the beginn-
ing of the  study are  sufficient  to  predict the
effects at  any other  point in time.
                      INTRODUCTION

     Biological  investigations including radio-
respirometry require collection of large amounts of
data in order to arrive at a statistically signifi-
cant conclusion.  This statement implies that
experiments must be lengthy, with a large number of
animals -- a costly exercise that is not always
possible.   Mathematical modeling serves to circumvent
this difficulty  by predicting effects based upon
small sets of data.  As to its validity in an "absolute
sense," it can only be stated that, as long as it
represents a simplification of reality, its "utility"
-- i.e., the extent to which it helps the user -- is the
only fruitful criterion on which it can be judged.
The modeling of radiorespirometry provides a con-
venient and economical method of screening a large
number of toxicants by reducing the time required for each
experiment and the number of experiments required to
make proper assessment.

     In the experiment proper, investigators have been
able to successfully explain the effects of metabolic
conversion of 14C-labeled substrates to respiratory
14CO2 and the influence of various factors on the
metabolism (Wang,  1967; Dost et al., 1973).   The
 development  of this  method  contributed  significantly
 towards being able to observe chemical  reactions that
 take place in experimental  subjects without sacri-
 ficing the subjects.  An animal can be  used for both
 control and experimental purposes and,  furthermore,
 the cumulative effect of repeated administration of
 an agent or recovery from a certain effect can be
 observed.

 Materials and Methods

      Forty-eight male rats  (Charles River Laboratory)
 were used in this series of experiments.   The radio-
 labeled substrate (14C-1-glucose) used in this study
 was obtained from New England Nuclear Corporation.

      The theoretical consideration, design of experi-
 ments, detailed methodology, and other  background
 information have been published by this laboratory
 (Lee et al., 1972) and others (Wang, 1967; Tolbert
 et al., 1956).
      Exhaled 14CO2 was monitored continuously using
 Cary vibrating reed electrometers in conjunction with
 ionization chambers.  The analog output of the
 electrometers was fed into a data acquisition system
 that printed the data in digital form on paper tape,
 which was then decoded on a PDP-8/I computer.  The
 block diagram of the flow system and the instrumenta-
 tion is shown in Figure 1.  The decoded data were
 later used for modeling and curve fitting on analog
 and digital computers.  The derivation of the model is
 based on biological processes in the course of
 excretion of 14CO2 from the lungs of experimental
 animals.  In the experiment, 14C is introduced into
 the animal by i.v. injection.  The rate of 14CO2
 excretion from the lungs is then modified by a toxic
 agent (methyl mercury in this case) via ingestion, 24
 hours prior to i.v. injection of 14C.

      In the model it is assumed that there exists a
 "two-pool open system" (Shipley et al., 1972) in which
 one pool is the central compartment (blood pool) and
 the other is the conglomerate of all peripheral pools:
 liver, kidney, lung, etc.  Any communication between
 peripheral pools occurs only through the central
 compartment, as shown in Fig. 2.  On solving the pro-
 blem, a system of 4 first-order differential equations
 is obtained, leading to a solution for a skewed
 distribution that has eight constant coefficients as
 unknowns.  To predict the excretion rate and the
 severity of toxicity at a later time, eight measure-
 ments of excretion rate data at the beginning of the
 study are made.  Eight independent equations are set,
 yielding the eight constant coefficients.  The distri-
 bution (hence the solution) for any desired time is

-------
then obtained.   The integrals  of the distribution at
a selected time t  for the control  and the exposed
animals are compared.   The difference, if any,
represents the  severity of the effect.

     The validity of the model is proven by testing
it on an analog computer and fitting data from  four
experiments to  the model on a  digital  computer.

     Another advantage of the  method is the possibil-
ity of predicting effects for  a given concentration
of administered toxic agent, provided the controls
and some intermediate curves for a particular animal
are available.   At this point  it can be stated  that
the model is valid for any agent and for any animal
that has the process behaving  the way it is presented
in this paper.  Only the rate coefficients could be
different for different animals or agents.

     A classical compartment follows linear kinetics,
in which the rate of flow from a compartment is
proportional to the partial pressure of an agent (CO2
in this case) within the compartment (Piotrowski,
1971; Atkins, 1969).  The output from each compartment
is a solution to the first order differential equation
of the form:

                                                 (1)
     where y is the output or the amount of an agent
     excreted as a percent of total  pollutant and
     t is the running time.
     The lung is not a single compartment (Riley, R.
L, 1920), but is composed of three classical  ones.
The first one consists of the alveoli -- the gas
exchange compartment [blood releases and takes in
gases via the capillaries surrounding the alveoli].
The second one is the anatomical "dead space" in the
alveoli.  The third one is the "dead space" in the
respiratory tracts.  It is quite reasonable to con-
sider the last two compartments as one, thus
simplifying the analysis via a two compartmental model.
Figure 3 shows graphically the two compartments.  It
should be noted that CO2 from the blood is transferred
to the alveoli with a rate coefficient k1, but also (not
necessarily at the same time) some of the agent in the
alveoli is fed back into blood.  This feedback is an
important process in the analysis.  Blood (the rapid
exchange compartment) is a complex one, exchanging
its content with other compartments  leading to an
assumption that blood is a vehicle by which the effect
of ingested CH3HgCl is superimposed on all other
peripherals, thus influencing the 14C excretion
pattern. The excretion from the blood pool is not
linear in nature, thus the solution  to the kinetics
is not known (Piotrowski, 1971, pp 9-22), but it is
probable that at some point in time  there will be an
equilibrium during which the rate of excretion of 14C
from the blood is proportional to the concentration
of 14C in the blood.  Thus, within this span of time
blood can also be represented as a first order process
(R. Aris, 1966).  To make it right,  however, the
effect of other storage organs (liver, kidney, etc.)
on the blood must be taken into account.  If one
considers the effect of all the storage organs
collectively as one compartment,  one obtains a fairly
good model as shown in Fig. 4.

     From Figure 4 we observe that the total amount of
14C must be accounted for (preservation of matter) at
every instant of time:

     B + L + A + ΣO + F = Constant                              (2)

where B, L, A, ΣO and F are the concentrations of 14C in
blood, lung, expired air, storage organs and excretion
via feces and urine, respectively.

     A system of differential equations that will
satisfy this condition is as shown:

     dB/dt  = -(k1 + k4) B + k6 L + k8 ΣO

     dL/dt  = k7 A + k1 B - (k6 + k2) L
            = k1 B - (k6 + k2) L,   since k7 = 0                (3)

     dA/dt  = k2 L - k3 A

     dΣO/dt = k4 B - (k8 + k5) ΣO

At t = 0, 14C is injected into the blood, thus
B(0) = 100% = constant C.

     The four first order differential equations are
reduced to one fourth order equation of the form:

     d⁴A/dt⁴ + μ d³A/dt³ + θ d²A/dt² + λ dA/dt + Λ A = 0        (4)

The solution to the model is:

     A = C1 EXP(-λ1 t) + C2 EXP(-λ2 t) + C3 EXP(-λ3 t)
         + C4 EXP(-λ4 t)                                        (5)

where A = amount of total injected 14C excreted in unit
time, and μ, θ, λ, Λ, λ1, λ2, λ3, λ4 are constant coeffi-
cients that depend on the animal and the toxic agent.

     The initial conditions for the model require that
at t = 0, A(0) = 0; therefore C1 + C2 + C3 + C4 = 0.

      The model  was  tested against data from four
experiments  on an analog  and  digital  computer.   The
system  of data acquisition and curve  fitting  of model
to  the  data  is shown  in Figure 5.
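
The severity measure itself is a simple calculation once the coefficients of Eq. (5) are known.  The sketch below uses illustrative coefficients rather than fitted experimental values; it integrates the four-exponential solution in closed form up to a selected time t1 for a control and an exposed parameter set and takes the difference of the two cumulative values, as described above.

# Sketch of the severity-of-effect comparison: integrate A(t) of Eq. (5)
# from 0 to t1 for control and exposed coefficient sets and difference them.
# The coefficient values below are illustrative only, not measured data.
from math import exp

def cumulative(c, lam, t1):
    # closed-form integral of sum(c_i * exp(-lam_i * t)) over [0, t1]
    return sum(ci / li * (1.0 - exp(-li * t1)) for ci, li in zip(c, lam))

control = ([-1.2, 0.4, 0.5, 0.3], [0.30, 0.10, 0.05, 0.02])   # C1..C4 sum to 0, so A(0) = 0
exposed = ([-1.2, 0.5, 0.4, 0.3], [0.30, 0.12, 0.06, 0.03])

t1 = 145.0   # minutes, the comparison time used in the paper
cum_control = cumulative(*control, t1)
cum_exposed = cumulative(*exposed, t1)
print(cum_control, cum_exposed, cum_control - cum_exposed)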

     The analog computer program is shown in Fig. 6.
The output of amplifier 10 is the solution to equation
5.  The output from integrator 5 is the total cumula-
tive value of 14C excreted, and its value at time t1
is used to measure the severity of effect by comparing
the values of the integral at t1 between the control
and the exposed animals.

     In modeling the process on the analog computer,
data mean values are plotted vs. time on graph paper
and the output from amplifier 10 is plotted (using an
x-y recorder) on the same graph.  Rate coefficients,
represented by potentiometers (numbers in circles; see
Fig. 6), are varied until a fit is obtained.  Typical
output is shown in Fig. 7.

     The total cumulative value of excreted 14C is
obtained from integrator 5 (Fig. 6).  The result is an
S-shaped curve.  By selecting a point in time t1 (just
before the saturation region) as a reference, the value
of the integral at this point is used to compare
control and exposed animals, yielding a "yardstick of
severity" of effect.  An example is shown in Fig. 8.

     The actual computation of effects for the four
experiments has been done on a digital computer.

-------
The results are tabulated in Table 1 and the closeness
between the actual data and the simulated one is clear.
Typical fit between data and the model is shown in
Fig. 9 (control) and Fig. 10 (exposed), while the
cumulative value computed is shown in Fig. 11.

Simplified Model

     Although taking 8 measurements reduces the time
required for the experiment, it still makes the method
of prediction a tedious one.  Real simplification is
obtained if some approximation of the model is accepted.
Models having only two unknown coefficients can be
developed that will provide the same answers at time t1
as the actual model.  The simplified model is developed
as follows:

     As mentioned before, blood is a  central  compart-
ment where mixing of a number of effects  occurs.
Mathematically  this can  be  expressed  as  a multiplica-
tion.  The proposed model is shown  in  Figure  12.

     The decrease of 14C in the blood (following
injection) has a distribution that decreases non-
linearly with time.  By trial and error, an empirical
function is suggested that offers a solution to the
problem.
     Assume the decrease with time of 14C in the blood
follows a distribution of the form EXP(-B1 t²/2), where
B1 is a constant and t is the running time.  The output
from the blood compartment is then equal to:

     y'' = -B1 t [1 + 2/(1 - B1 t²)] y'                         (6)

where y'  = 1st derivative of the 14CO2 excretion
      y'' = 2nd derivative of the 14CO2 excretion

Let y' = p.  Then

     ln p = -(1/2) B1 t² + ln(1 - B1 t²) + ln a1                (7)

where a1 = integration constant, and

     p = a1 (1 - B1 t²) EXP(-B1 t²/2) = y'

Integrating equation 7, a distribution with time for
the exhaled 14CO2 is:

     y = a1 t EXP(-B1 t²/2)                                     (8)

with a1 and B1 being constant coefficients that are
used in prediction of effects.  Equation (8) satisfies
all boundary conditions:

     t = 0         y = 0

     t = ∞         y = 0

     t = 1/√B1     y = max
     The model in eqn. 8 has only two unknown co-
efficients, a1 and B1; therefore, by taking two separate
measurements at the beginning of the study, a prediction
can be made as to the effects at a later time.  Using
the model of eqn. 8, the four studies have been
analyzed on the digital computer with results shown
in Table 2.  The closeness between the model of eqn. 3
and the model in eqn. 8 is self evident.
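
Because ln(y/t) in Eq. (8) is linear in t², the two coefficients follow in closed form from any two early measurements, which is the essence of the two-measurement prediction.  The sketch below uses invented times and excretion rates purely for illustration; it is not the digital-computer analysis of the four studies.

# Sketch of the two-measurement prediction with the simplified model,
# y(t) = a1 * t * exp(-B1 * t**2 / 2).  Input points are illustrative.
from math import exp, log

def fit_two_points(t_a, y_a, t_b, y_b):
    # ln(y/t) = ln a1 - B1*t**2/2 is linear in t**2, so two points suffice
    b1 = 2.0 * (log(y_a / t_a) - log(y_b / t_b)) / (t_b ** 2 - t_a ** 2)
    a1 = (y_a / t_a) * exp(b1 * t_a ** 2 / 2.0)
    return a1, b1

a1, b1 = fit_two_points(5.0, 0.20, 15.0, 0.45)          # (min, % of dose per min)
predicted = a1 * 60.0 * exp(-b1 * 60.0 ** 2 / 2.0)      # predicted rate at t = 60 min
print(a1, b1, predicted)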

                                                                                DISCUSSION

     A mathematical model has been described that
simulates the distribution with time of the 14CO2
excretion from the lungs during the radiorespirometric
experiment.  The model described by eqn. 3 calls for
8 measurements during the early stages of the experi-
ment.  To simplify the matter, a simple model with
only two coefficients is described in eqn. 8.  To
simulate the models, analog and digital computers have
been used.

     Although the analog method is sufficient to
solve the problem, and the values of the a1 and B1
coefficients for each animal can be read directly from
the settings of the coefficient potentiometers, a
solution on a digital computer has been sought.  A
digital computer provides the direct numerical values
needed to assess the effects of a pollutant.  The
                                                         analog computer, however, permits a quick determination
                                                         of the on-going process and also permits an
                                                         instantaneous change of coefficients making it a
                                                         flexible tool in the hands of a researcher.

     In Tables 1 and 2, results from four experiments
compared to the two models are presented.  In
particular, it can be seen that, up to time t = t1,
                                                         just before the saturation, the values for total
                                                         excretion indicate a substantial decrease in the
                                                         excretion in the exposed animals and that the correla-
                                                         tion between the calculated values and the models is a
                                                         readily acceptable one.  These facts suggest that it
                                                         is possible to predict biological effects of a
                                                         pollutant or toxicant under specific conditions by
                                                         simulation and interpolation on a computer.
TABLE 1.  Comparison of calculated cumulative 14C excretion (percent) at
          t1 = 145 min, using data and using the full model (eqn. 3), for
          control and exposed animals, Groups A-D (4 animals per group).

TABLE 2.  Comparison of calculated cumulative 14C excretion (percent) at
          t1 = 145 min, using data and using the simplified model (eqn. 8),
          for control and exposed animals, Groups A-D (4 animals per group).

                     Calculated cumulative value       Cumulative value using
                     using data, t1 = 145 min          model, t1 = 145 min
 Group   Animals     Control    σ     Exposed    σ     Control    σ     Exposed    σ
   A        4         65.08    3.43    64.64    5.20    62.7     3.2     59.5     5.10
   B        4         71.04   22.46    48.577  31.6     71.34   24.29    47.05   34.00
   C        4         61.56    5.6     55.97    9.01    61.64    5.50    56.145   8.90
   D        4         66.17    5.9     60.27    8.92    66.175   7.30    58.87   11.03

-------
                      REFERENCES

1.   Atkins, R. L.  Multicompartment Models for
     Biological Systems.  Methuen and Co., Ltd.,
     London, England, 1969.

2.   Aris, R.  "Compartmental Analysis and the Theory
     of Residence Time Distribution in Intracellular
     Transport", Ed. K. P. Warren, N.Y., Academic
     Press, 1966.

3.   Dost, F. M., Johnson, D. E. and Wang, C. H.
     Metabolic Effects of Monomethyl Hydrazine.
     Aerospace Medical Research Laboratory, Aero-
     space Medical Division, Air Force Systems Command,
     Wright-Patterson Air Force Base, Ohio.  AMRL-TR-
     73-33, June 1973.

4.   Lee, S. D., Butler, K. C., Danner, P. M.,
     McMillan, L., Moore, H. and Stara, J. F.
     Radiorespirometry in the Study of Biological
     Effects of Environmental Pollutants.  Amer.
     Laboratory, December 1972.

5.   Piotrowski, J.  The Application of Metabolic
     and Excretion Kinetics to Problems of Industrial
     Toxicology.  U.S. DHEW, 1971.

6.   Riley, R. L.  Gas Exchange and Transportation.
     In "Physiology and Biophysics", T. C. Ruch and
     H. D. Patton, Eds., W. B. Saunders Co., Phila-
     delphia and London, pp. 771-77.

7.   Shipley, R. A. and Clark, R. E.  Tracer Methods
     for In Vivo Kinetics.  Academic Press, 1972.

8.   Tolbert, B. M., Kirk, M. and Baker, E. M.
     Continuous C14O2 and CO2 Excretion Studies in
     Experimental Animals.  Amer. J. Physiol.  185:
     269-274, 1956.

9.   Wang, C. H.  Radiorespirometry.  Methods of
     Biochemical Analysis, XV: 312-368, 1967.
Figure 1.  Flow System and Instrumentation Block Diagram
           for Radiorespirometry (metabolism cages, Drierite,
           ionization chambers, flow meter).

Figure 2.  Generalized Kinetic Model for Transfer Process
           of an Agent from Blood to Other Parts of the
           Animal's Body

           B = Agent in the blood
           C = Agent in fast exchange organs
           D = Agent in slow exchange organs
           I = Agent removed "irreversibly" from lungs
           E = Agent excreted by urine and feces
           k2, k3, k5, k6 = rate coefficients; and,
              depending on the agent, k4 and k5 ≥ 0

Figure 3.  Lung System Represented by Two Compartments
           (exchange of air to and from the blood compartment)

           L = Alveoli compartment
           A = "Dead space" compartment
           k1, k2, k3, k6, k7 = rate coefficients of agent
              excreted to and from compartments

-------
Figure 4.  Model for Blood-Lung Interaction (excretion
           into air)

           B  = Agent level in the blood
           L  = Agent level in the alveoli
           A  = Agent level in the "dead space"
           ΣO = Agent in other organs (liver, kidney, etc.)
           k1, k2, k3, k4 = rate coefficients of excretion
              from a compartment
           k6, k7, k8 = rate coefficients of excretion
              returned to a compartment
           k7 = coefficient of return rate from "dead space"
              to alveoli = 0
           k5 = rate coefficient of excretion of irreversibly
              removed agent by urine and feces

Figure 5.  System Flow Diagram for Data Acquisition and
           Curve Fitting of Model to Data (simulating and
           curve fitting system)

Figure 6.  Analog Computer Program for the Model of 14CO2
           Excreted from the Lungs

           Integrator B = Blood compartment
           Integrator L = Alveoli compartment
           ΣO = Compartment representing other body organs
           A  = "Dead space" compartments
           6, 7, 8, 10 = Amplifiers
           5  = Integrator producing total cumulative value
                of excreted 14C
           Numbers in circles represent rate coefficients
           (2) = Initial condition (I.V. injection)

-------
                                                                                                     o MODEL
                                                                                                     XDATA
                                                                                                     . PERFECT
                                                    TIME IN
                                                    MINUTES
                                                                           49.9       97.7       146
                                                                                 TIME SCALE: .25"=10 MIN.
                                                                                                      193
                                                                                                       TIME IN
                                                                                                  "^"'MINUTES
                                                             Figure 10.  Curve Fitting Between Experimental Data  and
                                                                          the Model of Group B - Exposed
Figure  7.   Comparison Between the Curve Generated by the
            Analog Computer and the Experimental Data
                               CONTROL



                               EXPOSED


                                = MEASURE OF EFFECT
                                                                                              ^SELECTED TIME
                                                                                                FOR COMPARATIVE
                                                                                                MEASUREMENT.
                 97.7      146       193
                TIME SCALE: .25"=10 MIN.
                                                                                          i    i
                                                                                                   i ,_ ( TIME IN
                                                                                                  241   MINUTES
                                                             Figure 11.  Total  Cumulative  Recovery for Control and
                                                                          Exposed Group A
                                             • t TIME IN MINUTES
Figure
Exairple of a Total Cumulative Excretion Curve
for a Control and Exposed Aniiral
                                        X MODEL
                                        ° DATA
                                        . PERFECT
              49.9
                       97.7       146
                    TIME SCALE: .25"=10 KIN.
                                                      TIME IN
                                                      MINUTES
                                                                                                  EFFECT OF
                                                                                                OTHER ORGANS
                                                              Figure 12.  Simplified Kinetic Model for  Transfer Process
                                                                           of an Agent  from Blood to Lungs
                                                                             = Percent of ^C excreted in unit time
y
y1
wii
                                                                 = First derivation of y

                                                                 = Second derivation of y
 Figure 9.
 Curve Fitting Between Experimental Data and
 the l>todel  of Group B - Control
                                                           578
k, d      = Constant rate coefficients that depend
             on the process
I.V.  14C  = I.V.  injection  of 14C into blood

B          = Blood compartment (multiplier)

L          = Alveoli compartment

A          = "Dead space1' compartment

-------
                           ESTIMATION OF THE OPTIMAL SAMPLING INTERVAL  IN ASSESSING

                                           WATER QUALITY OF STREAMS
                               Leo J. Hetling, G.A. Carlson and J.A. Bloomfield
                                     Environmental Quality Research Unit
                           New York State Department of Environmental Conservation
                                    50 Wolf Road, Albany, New York  12233
     The problem of estimating the sampling resources
necessary to characterize water quality is one that
has received little attention.  Until recently, the
problem was unimportant because the resources avail-
able for monitoring were limited and the thrust of
most programs was not so much to characterize the
system as to detect specific violations of a set
water quality standard.  Recently, however, the
resources available for monitoring of the environ-
ment have been increasing and a determination of
optimum utilization of these resources becomes of
practical significance.  Additionally, the progress
made in point source pollution control has led to
greater emphasis on  non-point source pollution
problems.  Non-point source problems require a more
detailed knowledge of annual and seasonal loadings
rather than measurement of deviation from a standard.

     The problem most often encountered is to deve-
lop a sampling strategy for statistical characteri-
zation of the concentration and annual loading of a
pollutant from a watershed.  Most existing monitoring
programs are set up such that chemical samples of a
stream are collected at fixed sampling intervals;
i.e., weekly, bi-weekly, monthly, etc.  Stream
flow at the point and time of sampling is noted.
Average annual concentrations and loadings are then
calculated from this data base.  Although the
limitations of this approach have been recognized,
little analytical or experimental work has been
done to document errors involved with the strategy.

     We first attempted to approach the problem ana-
lytically. However, it was found that since neither
stream flows nor concentrations are normally dis-
tributed, attempts at analytical solutions were
frustrating.  As a result, the following more ex-
perimental approach was pursued.  A continuous
stream flow gage was installed on a small stream
and grab samples for water quality were collected
daily.  Using equally-spaced subsets of the resulting
data set, average concentrations and loadings were
calculated and compared to the average obtained by
utilizing the entire data set.  This approach was
first suggested by Treunert, et al. (1); however,
their analysis was limited in that the concentration
measurements available to them were spaced at three-
day intervals.

     Mill Creek, the stream utilized for this study,
is a small stream draining 2,454 ha in Rensselaer
County near Albany, New York (Figure 1).  The land
use in the watershed is predominantly forest (54%)
and agriculture (43%).   The stream is unregulated
and has no known point sources of pollution.  A
complete description of this watershed is available
(2).  Stream flow was measured via a standard stage
height recording station installed, rated and main-
tained by the United States Geological Survey.
Chemical samples were collected, preserved and
delivered to the New York State Department of
Health's Division of Laboratories and Research for
analysis.   The actual chemical analyses performed
are described in Krishnamurty and Reddy (3).
This paper utilizes 275 daily  samples  taken  from
March 1, 1975 through November 30,  1975.

     Twenty water quality constituents were
analyzed including major ions  and the various  forms
of carbon, nitrogen and phosphorus.  This paper will
concentrate on elucidating the effect  of sampling
interval on the estimation of  the average daily con-
centration and average instantaneous load of total
suspended solids (retained on a 0.45 μ filter),
chloride and particulate and dissolved phosphorus.

     The average daily concentrations  (C) and  the
average daily loadings (L) were calculated for sub-
sets of the data taken at fixed intervals ranging
from one to 60 days.  Sampling frequencies of longer
than two months were considered to be random rather
than fixed-interval sampling and hence ignored.
Ten sample populations of 265 days length were
withdrawn from the general population by beginning
fixed interval sampling on each of the first ten
days of the sample space (Sample - March 1, 1975)
(Figure 2).   The average concentration is defined
as follows:
     C_j,m = (1/n)   Σ   C_i        (i = j, j+m, j+2m, ..., j+S-r)       (1)

and

     n = (S - r + m)/m                                                   (2)

where:

     n     = number of samples in each discrete
             population
     j     = the initial sample chosen (March 1 = 1,
             1 ≤ j ≤ 10)
     m     = the sampling interval in days
     C_i   = the concentration of the ith daily sample
     C_j,m = the arithmetic average daily concentration
             calculated beginning on the jth day at an
             interval of m days
-------
     The equation for average instantaneous daily
loading is:

     L_j,m = (1/n)   Σ   C_i Q_i    (i = j, j+m, j+2m, ..., j+S-r)       (3)

where:

     Q_i   = instantaneous flow at the time of collec-
             tion of the ith sample (m3/sec)
     L_j,m = the arithmetic average daily loading
             calculated beginning on the jth day at
             an interval of m days (kg/day)
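
The subsampling experiment described by Eqs. (1) through (3) is easy to reproduce in outline.  The sketch below uses a synthetic daily record in place of the Mill Creek data, applies a nominal 86.4 factor to convert concentration (mg/l) times flow (m3/sec) to kg/day, and compares fixed-interval subset averages (ten starting days) with the full-record averages; everything in it is illustrative rather than the authors' actual computation.

# Illustrative sketch of the fixed-interval subsampling comparison; the daily
# record is synthetic and the 86.4 g/s-to-kg/day conversion is a stand-in.
import random

random.seed(1)
days = 275
conc = [5.0 + 3.0 * random.random() for _ in range(days)]     # mg/l (synthetic)
flow = [0.2 + 2.0 * random.random() for _ in range(days)]     # m3/sec (synthetic)
load = [86.4 * c * q for c, q in zip(conc, flow)]             # kg/day

full_c = sum(conc) / days
full_l = sum(load) / days

for m in (1, 2, 7, 14, 30):                                   # sampling interval, days
    c_means, l_means = [], []
    for j in range(10):                                       # ten starting days
        idx = list(range(j, days, m))
        c_means.append(sum(conc[i] for i in idx) / len(idx))
        l_means.append(sum(load[i] for i in idx) / len(idx))
    print("m=%2d  conc %5.2f-%5.2f (full %5.2f)   load %6.1f-%6.1f (full %6.1f)"
          % (m, min(c_means), max(c_means), full_c,
             min(l_means), max(l_means), full_l))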

     The results of the calculations are shown in
Figures 3 through 6.  The abscissa of these plots is
the imposed sampling interval; i.e., daily, every
other day, every third day, weekly, etc.  On the
ordinate, the average of the ten data sets obtained
for each imposed sampling interval is plotted, along
with the range of values observed.  As a reference,
the average value obtained utilizing the entire data
set is shown, along with lines showing 25, 50 and
75% deviations from this value.

     All of the plots have several characteristics
in common.  The range of values spreads rather
rapidly as the sampling interval is increased.  This
rapid deviation is to be expected since the number
of samples within a data set decreased rapidly as
the sampling interval is increased.  This is il-
lustrated in Figure 7 where the number of samples
from the 275 sample data sets obtained is plotted
against the sampling frequency.

     The rate at which the range increases varies
with the parameter being studied.  It is least
rapid for chloride (a dissolved, conservative
ion), more rapid for dissolved phosphorus, which
takes part in a variety of chemical sorption and
exchange reactions with the stream bed and surrounding
soils, and most rapid for the particulate-related
parameters of suspended solids and particulate
phosphorus.

     A wider band of errors can be noted for load-
ing than for the concentration values.

     It is apparent from these plots that unless
one samples very frequently (at least every other
day), average values calculated from the data can
depart considerably from the actual daily average.
Frequent sampling is of even more importance for
the particulate-related materials, where order-of-
magnitude errors could be encountered with less than
a three-day sampling frequency.

     A tendency for the average value to decrease
as sampling frequency is increased is also noted
for the particulate parameters.  Apparently as the
sampling frequency decreases, the likelihood of
sampling the major stream flow events also decreases.
Since extreme events have a profound effect on
averages, a value somewhat less than the average
obtained by utilizing all of the data is obtained.
This indicates that as sampling frequency decreases,
there will be a tendency to underestimate the actual
average concentrations and loadings for particulate
material.
Conclusions

      As a result of this analysis,  it  can  be  con-
cluded that for small streams similar to  Mill  Creek,
fixed interval sampling to obtain  average concen-
trations is of little value unless the  sampling
interval is less than two or three days.  Attempts
to obtain annual loads are even less productive
unless daily or every other day samples are taken.

      Apparently this occurs because of the im-
portance of a relatively few major hydrological
events on the annual average.  Hopefully, a sam-
pling strategy centered around these events can
be devised to obtain reasonable estimates of
average concentrations and yield without  the ex-
treme investment needed for daily  sampling.

      Future plans of this study are to repeat
the above and similar analyses utilizing  a  full
year's worth of the Mill Creek data.  A greater
number of parameters will be studied and  the effect
of utilizing the continuous stream flow record with
fixed interval chemical quality sampling  will be
investigated.  The effectiveness of various  event
sampling strategies will also be tested.

References

1.  Treunert, E., A. Wilhelms and H. Bernhardt,
     "Effect of the Sampling Frequency on the
     Determination of the Annual Phosphorus Load
     of the Average Streams", Hydrochem. hydro-
     geol. Mitt., Vol. 1, pp 175-198, March 1974.

2.  El-Baroudi, H., D.A. James and K.J. Walter,
     "Inventory of Forms of Nutrients Stored in a
     Watershed", Rensselaer Polytechnic Institute,
     Troy, N.Y., August 1975.

3.  Krishnamurty, K.V. and M.M. Reddy, "The
     Chemical Analyses of Water and Sediments in
     the Genesee River Watershed Study -- Procedure
     Manual", Environmental Health Center, Division
     of Laboratories and Research, New York State
     Department of Health, Albany, N.Y., September 1975.

-------
Figure 1.  Map of the Mill Creek watershed, Rensselaer
           County, State of New York (scale 1 inch = 5 miles).

Figure 2.  Available sample space (S) = 275 days,
           March-November 1975.

Figure 3.  Chloride loads, chloride concentrations and
           flow vs. sampling interval.

Figure 4.  Dissolved phosphorus loads and dissolved
           phosphorus concentrations vs. sampling interval.

Figure 5.  Particulate phosphorus loads and particulate
           phosphorus concentrations vs. sampling interval.

Figure 6.  Suspended solids loads and suspended solids
           concentrations vs. sampling interval.

Figure 7.  Number of samples in each data set vs.
           sampling interval.

-------
                         FIELD DATA  FOR ENVIRONMENTAL  MODELING—ADJUNCT OR  INTEGRAL?
                                            Philip  E.  Shelley,  PhD
                                 Director,  Energy  and Environmental  Systems
                              EG&G Washington Analytical  Services  Center,  Inc.
                        Summary

Two types of field data are required for virtually all
environmental prediction models:  calibration  data and
verification data.  The former type is used within the
model itself to ready  it for specific application,
whereas the latter is used to establish the validity
and probable accuracy  of the results obtained  from the
model.  The  paper points out that,  historically,  too
little attention has been given  to  the collection of
field data for use with environmental models and
that the quality of the current  modeling state-of-the-
art generally far exceeds the quality of the supporting
data base.   An incredible lack of good data faces the
model developer, evaluator, and  user alike, and this
often unrecognized fact quite seriously impacts on the
activities of each.  Using urban hydrology models as
an example,  the severity of the  problem is demonstra-
ted.  The need for determining estimates of data
quality prior to their use in modeling  is  noted,  and
the dangers  associated with the  "blind" use of existing
data are indicated.

                     Introduction

adjunct (aj'ungkt)  1. Something added to another thing
but not essentially a part of it.

integral (in'te gral)  1. Of, pertaining to, or be-
longing as an essential part of the whole.

Environmental modeling is performed in order to obtain
a  picture of probabilistic events likely to occur
given a set  of input conditions.  The "portrait"
obtained will be of the right "person", but even  a
caricature can be useful in many instances, despite
its exaggerations, as  long as it is recognizable.
Regardless of whether  the application arises from the
needs of planning, design, facility operation, alter-
native assessment, or  other needs,  the simulation
process involved is intended to  duplicate  the  essence
of a system  without actually attaining reality itself;
the model is simply a  device used to carry out the
simulation.  The validity of the results,  i.e.,
their agreement with reality, depends upon two primary
factors:  how well the model represents the actual
processes involved, and how representative the set
of input conditions are.   In general, both of these
factors involve the use of field data and, for any
given model, the better (i.e., more realistic/
representative) the field data, the closer the simula-
tion will be to reality.  Although  the following  is
drawn from the field of urban hydrology, most  of  what
is stated essentially  applies to other environmental
modeling areas as well.

Regardless  of whether the  model  of interest is  sto-
chastic  versus  deterministic   on the one hand,  or ana-
lytic versus  synthetic   on the other,  there are two
uses  that  are generally made  of field data.  To em-
phasize  the  importance  of  distinguishing between the
two,  they  will  be  referred to  here as data  types.
Because  of  the  complexity  of  the processes  being mod-
eled,  most  of the  models that  are popular  today re-
quire field  data  both for  estimating empirical
parameters  in their  structure  and for fitting other
application-specific  parameters   (calibration).   For
example, one version of the Stanford Watershed Model
has twenty parameters:  two are based on meteorological
data, four are based on hydrograph separation, five
are computed from physical measures, three are estima-
ted from empirical tables, and six are fitted.  All
field data that are used within any model structure,
i.e., to ready the model for specific application,
will be referred to here as calibration data.

The chief concern of the model-user is how well the
model outputs (which are its sole reason for being)
compare to reality.  This comparison forms a measure
of the predictive capability of the model.  Here,
also, input and output field data are required, but
their fundamental use is quite different from that of
calibration data.  They will be referred to here as
verification data, since they are used to verify the
results of a particular model exercise.

This distinction between field data types is not made
to suggest that different gathering techniques are re-
quired for calibration versus verification data; in
fact, they are the same.  The reason for making the
distinction is simply that calibration data must never
be used for model verification.   The importance of this
simple statement cannot be overestimated.
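
A trivial illustration of how such a separation can be enforced in practice is sketched below; the event identifiers and the split fraction are hypothetical and stand in for whatever monitored storms or records are actually available.

# Sketch: keep calibration and verification data strictly separate, as urged
# above.  Event names and the 60/40 split are illustrative only.
import random

events = [f"storm_{i:02d}" for i in range(1, 21)]   # hypothetical monitored events

random.seed(7)
random.shuffle(events)
n_cal = int(0.6 * len(events))
calibration = sorted(events[:n_cal])       # used only to fit/adjust model parameters
verification = sorted(events[n_cal:])      # held back; used only to score predictions

assert not set(calibration) & set(verification)   # never reuse calibration data
print("calibrate on:", calibration)
print("verify on:  ", verification)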
                Data in Model Selection

The various stormwater management models that are
available today require data on the catchment, precip-
itation, and runoff quantity and quality.  Such data
might include historical and current records plus re-
lated information such as present and projected land
use parameters, demographic projections, remote (sat-
ellite or aerial) imagery, treatment plant records,
and the like.

The purpose of the simulation, i.e., the use to which
the model results are put, must be kept in mind in
selecting the model to be employed, but it is also
very important to carefully review and inventory the
existing data base.  As Lager has noted, "Rather than
selecting a model and then seeing if you can fill its
data requirements, it is preferable to analyze your
available data and then choose the model that can use
these data most effectively to achieve study objec-
tives."1

The various models not only have different basic data
requirements; they also vary widely in the detail
of temporal and spatial distribution of data required.
For example, some models require time steps of less
than one minute to satisfy numerical stability condi-
tions, while others can be run with hourly, daily, and
even up to semimonthly data.  These considerations,
with their attendant field data gathering cost implica-
tions, bear heavily on model selections.

Considerations other than the model structure are also
involved in determining the cost of a field data gath-
ering program.  For example, both the quantity and
quality of stormwater runoff are highly variable and
transient in nature, being dependent upon meteorologi-
cal and climatological factors, topography, hydraulic
characteristics of the surface and subsurface conduits,
the nature of the antecedent period, and the land use
                                                       583

-------
activities and housekeeping practices employed.   It is
this highly variable and transient nature of storm-
water flows that makes their characterization so dif-
ficult and, hence, expensive.   In addition to
tremendous dynamic ranges, the poor quality of storm-
water draining from the urban environment has a signif-
icant effect on the choice of suitable sampling and
flow measurement equipment and methods as well as an
impact in the analytical laboratory.

                     Data Quality

In addition to assessing the quantity of data on hand
and to be gathered, some assessment of data quality is
required if we are to be sure that our "portrait" is
of the "right person."  We are not yet overwhelmed
with existing data on stormwater characteristics.  As
stated by Torno, "One of the serious problems that
faces either a new model developer or one who must
evaluate several models is the incredible lack of good
data..."2   To be effectively utilized, we need
more than simple values as data products.  We need to
know something about the data quality, i.e., about the
"goodness" or truthfulness of the data.  Two terms
that relate to the data-gathering process are conven-
tionally used to describe data quality: accuracy and
precision.  Accuracy refers to the agreement between
the measurement and the true value of the measurand,
with the discrepancy normally referred to as error;
precision refers to the reproducibility (repeatability)
of a measurement when repeated on a homogeneous time-
stationary measurand, regardless of the displacement of
the observed values from the true value, and thus,
indicates the number of significant digits in the
result.  We are, therefore, interested in establishing
the best estimate of a measured quantity and the
degree of precision of this estimate from a series of
repeated measurements.  Calibration, whether it be of a
piece of flow measurement equipment, a chemical  method
for wastewater analysis, or a stormwater management
model, is simply the process of determining estimates of
accuracy and precision.
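
As a small illustration of these definitions (a sketch with
hypothetical readings, not data from any study), the bias of
a set of repeated measurements against a calibration standard
estimates the systematic error, while their standard deviation
estimates the precision:

# Illustrative sketch (hypothetical values): estimating accuracy and
# precision from repeated measurements of a known calibration standard.
import statistics

true_value = 100.0                                # known magnitude of the calibration standard
readings = [103.1, 102.4, 104.0, 102.8, 103.5]    # hypothetical repeated observations

best_estimate = statistics.mean(readings)
bias = best_estimate - true_value        # systematic error -> bears on accuracy
spread = statistics.stdev(readings)      # repeatability    -> bears on precision

print(f"best estimate      : {best_estimate:.2f}")
print(f"systematic error   : {bias:+.2f}  (accuracy)")
print(f"standard deviation : {spread:.2f}  (precision)")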

Discrepancies between the results of repeated observa-
tions, or errors, are inherent in any measurement
process,since it is recognized that the true value of
an object of measurement can never be exactly estab-
lished.  These errors are customarily classified in
two main groups:  systematic and random (or accidental)
errors.  Systematic errors usually enter into records
with the same sign and frequently with either the same
magnitude (e.g., a zero offset) or with an establish-
able relationship between the magnitude of the measure-
ment and the error.  The methods of symmetry and
substitution are frequently used to detect and quantify
systematic errors.  In the method of symmetry, the
test is repeated in a symmetrical or reversed manner
with respect to the particular condition that is
suspect.  In the method of substitution, the object of
measurement is replaced by one of known magnitude (a
calibration standard), an instrument with a known
calibration curve is substituted for the measuring in-
strument in question, and so on.  Thus, systematic
errors bear heavily on the accuracy of the measurement.

Random errors, on the other hand, are due to irregular
causes, too many in number and too complex in nature
to allow their origin to be determined.  One of their
chief characteristics is that they are normally as
likely to be positive as negative and, therefore, are
not likely to have a great effect on the mean of a set
of measurements.  The chief aim of a data quality as-
surance effort is to account for systematic errors and
thereby reduce errors to the random class, which can
be treated by simple probability theory in order to
determine the most probable value of the object of
observation and a measure of the confidence placed  in
this determination.

The statistical measures of location or central  ten-
dency (e.g., the various averages, mean, median, mode)
are related to accuracy.  The statistical measures  of
dispersion or variability (e.g., variance and  standard
deviation, coefficient of variation, and other measures
derived from central moments of the probability  density
function) are related to precision.
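
The same measures can be written out directly; the sketch below
(again with hypothetical data) simply evaluates the statistics
named above:

# Location measures (relate to accuracy) and dispersion measures
# (relate to precision) for a hypothetical set of repeated measurements.
import statistics

x = [11.8, 12.1, 12.4, 11.9, 12.1, 12.6, 12.0]    # hypothetical data

mean   = statistics.mean(x)
median = statistics.median(x)
mode   = statistics.mode(x)

variance = statistics.variance(x)          # sample variance (second central moment)
std_dev  = statistics.stdev(x)
coeff_of_variation = std_dev / mean

print("location  :", mean, median, mode)
print("dispersion:", variance, std_dev, coeff_of_variation)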

There are also some annotations that the data  gatherer
can make to increase the usefulness of the data.  For
example, inspection of equipment and records may indi-
cate periods of instrument malfunction or failure
(e.g., power interruptions).  These facts are  important
and should form a part of the total record.  There may
be circumstances discovered during site visits that
would have had an effect on preceding data that cannot
be readily determined, e.g., a partially blocked sampler
intake or a rag caught in the notch of a weir.  These
facts should also be noted and, where at all feasible,
some qualitative notation as to expected data quality
(e.g., poor or very good) should be made.

The importance of notations of data quality results
from the ultimate use of the data.  For example, at
the risk of seeming ridiculous, ±50-percent data
should not be used to calibrate a model whose outputs
are desired within ±20 percent, nor should strong
model verification judgments be made based upon a very
small sample of data with a high variability.   The
levels of data quality desired vary with the intended
use of model outputs.  The needs for overall basin
planning, treatment plant design, plant operation, and
research are all quite different, and this must be
kept in mind in designing the data-gathering program
(or system).

                    Instrumentation

The ability of available instrumentation and techniques
to gather reliable wastewater characterization data
varies widely with design and implementation factors.
Shelley has reviewed the sampling problem3,4 and
has collected comparative data using various samplers.5
Shelley and Kirkpatrick have recently provided
in-depth monographs on instrumentation for flow meas-
urement6 and sampling.7  A summary of the use of
instrumentation for collecting field data for storm-
water model calibration has been given by Shelley,8
who has also examined the use of remote sensor data to
measure water quality, especially sediment.9

To summarize the foregoing as it pertains to stormwater
flow measurement, it can be stated that, although ac-
curacies on the order of ±5 percent can be achieved
with the proper site, instrumentation, and care, in-
strument readings that differ from spot checks by 25 to
50 percent or more are much more typical.  There are a
number of factors involved, but the greatest con-
tributor seems to be the use of slope-area methods
such as the Manning formula in uncalibrated reaches or
inappropriate instances.
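
As an illustration of this sensitivity (a sketch only; the
channel geometry below is hypothetical), discharge computed
with the Manning formula varies inversely with the assumed
roughness coefficient n, so an uncalibrated n is carried
straight through into the flow estimate:

# Why uncalibrated slope-area (Manning) estimates drift: computed flow
# is inversely proportional to the assumed roughness coefficient n, so
# an n that is off by 25-50 percent shifts Q by the same amount.
# The reach geometry below is hypothetical.

def manning_flow(area_m2, hydraulic_radius_m, slope, n):
    """Manning formula (SI units): Q = (1/n) * A * R**(2/3) * S**(1/2)."""
    return (1.0 / n) * area_m2 * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

A, R, S = 1.2, 0.35, 0.004           # hypothetical storm sewer reach geometry
for n in (0.010, 0.013, 0.017):      # plausible range for an uncalibrated reach
    print(f"n = {n:.3f}  ->  Q = {manning_flow(A, R, S, n):.3f} m3/s")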

Reviews of project experience have revealed cases where
individual meters have been in error by over 200 per-
cent, due to lack of proper maintenance, installation
errors, or misapplication.

Insofar as sampling is concerned, the lack of  a manual
sampling protocol has resulted in a situation where
differences of as much as 150 percent have been ob-
served between samples taken manually from the same
source at the same time, the differences being attri-
butable to equipment and technique.  In a recent
                                                       584

-------
side-by-side test of four different automatic sampler
designs,  a controlled synthetic waste stream was em-
ployed.5   The results of laboratory analyses of
samples taken with this equipment ranged from
understatements of pollutant concentrations by 25 per-
cent or more to overstatements by as much as 200 per-
cent and  higher.  Again, a number of factors are
involved, but equipment design characteristics (espe-
cially intake and sample-gathering subsystems) appear to
account for much of the observed performance variation.

As regards chemical methods for the analysis of water
and wastes, the picture is relatively brighter.  Even
here, the lack of well-accepted baseline standards or
alternate techniques results in an inability to speak
of accuracies for a number of tests.  Furthermore,
precision expressed as standard deviation is not out-
standing  for some tests.  As an example, USEPA10
quotes the results of 86 analysts in 58 laboratories
who analyzed natural water samples plus an exact in-
crement of biodegradable organic compounds.  At a mean
value of 175 mg/l BOD, the standard deviation was
±26 mg/l (±15 percent).  For other tests, especially
at low levels, even larger variances may be
encountered.

When the foregoing sources of error are combined, for
instance, as is necessary if mass discharges are to be
computed/predicted, the picture is not very optimistic.
Straightforward calculation shows that results can
vary by over an order of magnitude; hardly a comforting
situation for model calibration or verification.
Along the same line, Harris and Keffer,11 as a
result of extensive comparative testing in the field,
have noted that apparent treatment plant efficiencies
can be varied by a factor of 2 or 3 or even higher,
depending upon site selection and equipment used.
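
The "straightforward calculation" can be sketched as follows,
simply compounding the worst-case factors quoted above; the
combination rule is a crude bounding exercise, not part of the
cited work:

# Rough, hedged illustration of the order-of-magnitude claim: combining
# the error ranges quoted above for flow measurement, automatic sampling,
# and laboratory analysis when a mass discharge (flow x concentration)
# is computed.  Factor ranges are taken from the text; the worst-case
# multiplication is only a bounding exercise.

flow_factor    = (0.50, 1.50)   # instrument readings off by 25-50 percent or more
sampler_factor = (0.75, 3.00)   # -25 percent understatement to +200 percent overstatement
lab_factor     = (0.85, 1.15)   # roughly +/-15 percent (e.g., the BOD precision cited above)

low  = flow_factor[0] * sampler_factor[0] * lab_factor[0]
high = flow_factor[1] * sampler_factor[1] * lab_factor[1]

print(f"computed mass discharge could range from {low:.2f}x to {high:.2f}x")
print(f"spread: roughly a factor of {high / low:.0f}")   # > 10, i.e., over an order of magnitude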

                      Conclusion

Although it was strongly implied earlier that field
data should be viewed as integral to environmental
modeling, the present state of affairs suggests that
such data have been treated as adjunct in terms of the
effort and resources that have been applied to the
development and refinement of computer models vis-a-
vis that devoted to the collection of good field data.
This observation is not meant to suggest that all work
on model  refinement and use should be abandoned for a
massive data-gathering expedition, but rather that we
must bring our application of resources for environ-
mental characterization/prediction into better balance.
Obviously, much remains to be done in terms of refine-
ment of equipment and techniques before our field data
are up to the sophistication of some of our models,
and it is past time that this fact be more widely
recognized.

                      References

1.  Lager, J.A., "Criteria for Selection of Stormwater
Management Models," in Application of Stormwater Man-
agement Models -1975, University of Massachusetts
Short Course Handbook, Amherst, MA, 1975, 47 p.

2.  Torno, H.C., "Stormwater Management Models," in
Urban Runoff - Quantity and Quality, W. Whipple, Jr.,
ed., American Society of Civil Engineers, New York, NY,
1975, pp 82-89.

3.  Shelley, P.E., "A Review of Automatic Sewer Sam-
plers," in Urban Runoff - Quantity and Quality,
W. Whipple, Jr., ed., American Society of Civil Engi-
neers, New York, NY, 1975, pp 183-191.
4.  Shelley, P.E. and G.A. Kirkpatrick,  "An Assess-
ment of Automatic Sewer Flow Samplers," in Water
Pollution Assessment, ASTM Publication STP-582, 1975,
pp 19-36.

5.  Shelley, P.E., Design and Testing of a Prototype
Sewer Sampling System, USEPA Environmental  Protection
Technology Series No. EPA-600/2-76-006, 1976, ix and 96 p.

6.  Shelley, P.E. and G.A. Kirkpatrick, Sewer Flow
Measurement - A State-of-Art Assessment, USEPA
Environmental Protection Technology Series  No. EPA-
600/2-75-027, 1975, xi and 424 p.

7.  Shelley, P.E. and G.A. Kirkpatrick, An  Assess-
ment of Automatic Sewer Flow Samplers - 1975, USEPA
Environmental Protection Technology Series No. EPA-600/
2-75-065, 1975, xiv and 336 p.

8.  Shelley, P.E., "Collection of  Field Data for
Stormwater Model  Calibration," in  Application of
Stormwater Management Models - 1975, University of
Massachusetts Short Course Handbook, Amherst, MA,
1975, iii and 179 p.

9.  Shelley, P.E., Sediment Measurement in Estuarine
and Coastal Areas, WASC TR-7115-001, 1975, to appear
as a NASA publication, vi and 97 p.

10.  United States Environmental Protection Agency,
Methods for Chemical Analysis of Water and Wastes,
Environmental Monitoring Support Laboratory (formerly
Methods Development and Quality Assurance Research
Laboratory) and Office of Technology Transfer,
Cincinnati, OH, 1974, xvii and 298 p.

11.  Harris, D.J. and W.J. Keffer, Wastewater Sam-
pling Methodologies and Flow Measurement Techniques,
Report No. EPA 907/9-74-005, Surveillance and Analysis
Division, USEPA Region VII, Kansas City, KS, 1974,
ix and 117 p.
                                                       585

-------
                                DATA DEFICIENCIES IN ACID MINE DRAINAGE MODELING
                                             Vincent T. Ricca, Ph.D.
                                                    Professor
                                         Department of Civil Engineering
                                            The Ohio State University
                                                 Columbus, Ohio
                       ABSTRACT

Recently developed digital computer models for Acid
Mine Drainage Quality and Quantity for discharges from
deep mines, strip mines, and refuse piles are currently
being validated by application to field sites.  A task
in this current research was to select suitable test
sites with extensive hydrologic and acid mine drainage
data.  Experience has shown that data for the some 200
modeling parameters as well as the length and consis-
tency of the records have not been satisfactorily col-
lected in the past on even the most highly investigated
study sites.  The object of this paper is not to criti-
cize these prior collection efforts, for some were
quite extensive indeed; but rather, to indicate what
data should be collected if acid mine drainage modeling
is to be advanced.

                      BACKGROUND

Over the past five years, researchers at The Ohio State
University have been developing computer models to de-
scribe the quantity and quality generation of coal mine
drainage.  A two-year, EPA-sponsored research project
titled, "Resource Allocation Model to Optimize Mine
Pollution Abatement Programs"1 was completed in 1974.
A major component of the work in that project was the
development of unit source models.  These models pre-
dict the mine drainage flow and its associated acid
load for deep or drift mines, strip mines, and refuse
piles.  They were created by combining highly sophisti-
cated hydrologic simulation models and mine acid pro-
duction models developed by the acid mine drainage task
group at The Ohio State University.  The outcome of
this initial work on these unit source models is deemed
quite successful and encouraging by the researchers in-
volved.  An objective of this original work was to pro-
duce models as detailed and sophisticated as possible,
using the highest level currently available in the
fields of hydrologic simulation and mine acid pro-
duction.  This approach was taken with the belief that
it is a more feasible future task to simplify these
highly detailed models than to upgrade simplistic
models as field applications disclose the nature and
availability of data involved in the phenomenon of acid
production and mine discharges.  Details of the basic
unit source models are discussed in the final report of
the model development project.1

A follow-up EPA project2 is currently nearing comple-
tion.  An objective of this latter project is to apply
the previously developed models to field situations to
evaluate their validity.  These applications provide
information for another project objective:   "Identi-
fying Data Deficiencies and Formulate Data Acquisition
Guidelines".  This last objective will be the subject
of this paper.

      DISCUSSION OF THE DATA NEEDS OF THE MODELS

The unit source models are considered as highly sophis-
ticated in their structure and performance.  They are
capable of producing continuous time outputs of gener-
ated mine site discharges and attendant acid quality
of the flows as well as receiving stream or basin
outlet flows.  In order to accomplish this continuous
time trace throughout the modeling period it  is  neces-
sary to have compatible detail and consistency on  the
climatic input data.  Also,much detail  is needed on  the
physical and chemical aspects of the mines and spoils
along  with the site watershed.  Some 200 parameters in
total may be involved in a modeling endeavor  depending
upon degree of detail desired.

The listing below is a category description of the
major information items, or input, required to operate
the models.  Explicit details on the input data  and
model parameters are given in the technical discussion
of the model found in the project report. 1
Basin Information.  Watershed drainage, Land  use and
distribution, Flow capacity of main channel,  Mean  over-
land flow path length, Retardance coefficient for  sur-
face flows, Average ground surface slopes, Interflow
and baseflow recession constants, Channel routing  para-
meters, and Index parameters reflecting interception,
depression storage, infiltration, soil moisture  storage,
interflow movement, groundwater movement, etc.
Climatic Data.  Precipitation records,  Streamflow
records, Evaporation rates and coefficients,  and Mete-
orological information for snowmelt.
Deep Mine Information.  Mine area; Coal seam  descrip-
tion, materials, thickness; Pyrite oxidation  rate  para-
meters reflecting diffusion, reaction, and temperature;
Acid transport parameters reflecting gravity  diffusion,
inundation, and leaching; Initial acid storages; and
Alkalinity conversion factors.
Refuse Pile-Strip Mine Information.  Strip mine  and
refuse pile areas; Representative soil profiles  of acid
producing areas; Pyrite oxidation rate parameters  re-
flecting diffusion, reaction, and temperature; Initial
acid storages; and Acid transport mechanism parameters
reflecting depth leached by direct runoff, leaching
parameters, effective acid solubilities.
Discharge Data.  Drainage  flow records; and Drainage
quality records.
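
Purely as an organizational sketch (the names below are
illustrative and are not taken from the models themselves),
the input categories just listed might be grouped as follows:

# Hypothetical sketch of how the input categories listed above might be
# organized for a single test site; field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class UnitSourceModelInputs:
    basin:        dict = field(default_factory=dict)   # drainage area, land use, slopes, recession constants, ...
    climatic:     dict = field(default_factory=dict)   # precipitation, streamflow, evaporation, snowmelt data
    deep_mine:    dict = field(default_factory=dict)   # mine area, coal seam, pyrite oxidation, acid transport, ...
    refuse_strip: dict = field(default_factory=dict)   # areas, soil profiles, oxidation and leaching parameters
    discharge:    dict = field(default_factory=dict)   # drainage flow and quality records

site = UnitSourceModelInputs()
site.climatic["precipitation_record"] = "daily gage in watershed (preferred)"
site.basin["drainage_area_km2"] = 7.41                  # e.g., the Grassy Run basin
print(site.basin)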

                SELECTION  OF TEST SITES

In  order  to  validate  the unit  source models,  field or
"test  sites'1  were  sought.   Those working  in  the  area
of  acid mine drainage and  related  coal  mining problems
are aware  that  many reports  on demonstration  projects
and mining operations are  available  in  the  literature.
Project researchers at  OSU were  aware  of  such litera-
ture and  expected  to  find  numerous  "test  sites"  upon
which  to  apply  the models.   However,  once  into this
task, a wealth of information was found, but consistency
and completeness immediately surfaced as major problems.
To  properly  assess the  possible  test  sites  a  systematic
evaluation or selection methodology was developed.  A
discussion of this methodology follows.

The Nature of the  Literature Review for Site Selection.
Since  this project is concerned  with testing of  the
models, it  is important to find  the best  possible
watersheds,  that  is  to  say,  the  watersheds  with the
largest amount  of  available field data to use in the
models.   This requires  the best  Streamflow records,
stream quality  records,  meteorological and physical
                                                       586

-------
data that can be obtained, thus eliminating as many
sources of error as possible.  The researchers re-
viewed 40 separate reports concerning 33 watersheds.
The majority of these reports were prepared by private
consulting firms as a part of "Operation Scarlift"
which was funded by the Department of Mines and Mineral
Industries, Pennsylvania.  Other sources of literature
were EPA demonstration projects, U. S. Geological Sur-
vey Professional papers and those from other federal
and state agencies.

A typical Scarlift-type report contains, first, a
description of the watershed, including information
such as area, population, land use, geology, and the
mining history of the region.  Next, the definitions of
key terms  (i.e., pH, acidity), and the procedures and
the results of the watershed study are given.  This
section describes the general quality and  quantity of
the water  in the basin during the study period.
Finally, sections giving  the conclusions drawn from  the
study and  recommendations for treatment or abatement
measures are included.  Most of the reports conclude
with  several appendices which include all  data from  the
study, various maps of the watershed  (i.e., topo-
graphic, extent of mining, land use), drawings of
recommended treatment measures, and any other appro-
priate, supporting information.

General Test Site Evaluation.   Since  there was an ex-
tensive amount of literature to search  for appropriate
test  sites, which was impossible for one person to ac-
complish  in a reasonable  amount of time, a joint
effort was undertaken by five project graduate research
associates.  The input data needed for  the models was
categorized into general  information  topics of
physical,  climatological, streamflow, deep mine,  strip
mine,  spoil and refuse pile, pyrite reactivity, acid
solubility, and cost  for  treatment.   Each  topic is
further  subdivided, but  only into  general  areas.  Any
possible watershed was to be first rated along these
general  guidelines, using a  scale  from  0 to 10, with
0 being  unacceptable  or  totally absent  (missing)  data
and 10 being excellent data  available.  This  is the
initial  evaluation of possible  watersheds; first,
allowing  elimination  of  totally unsuitable watersheds
due to gross deficiencies of data, and  second,indi-
 cating a  beginning priority  of  watersheds  to  study.   If
only one  watershed is being  considered,  the first  eval-
uation gives an indication of whether  to continue with
 the watershed  or to dismiss  it  as  an  unsuitable basin.
Biemel's  Master of Science thesis  discusses these
evaluations in detail.4

Of the  33 mined basin investigations,  the  use of  the
 general  evaluation worksheets  indicated that  only  these
 eight were worthy  of  further  investigation:  Alder  Run,
Penna.; Beaver Creek, Ky.; Big Scrub Grass Creek,
 Penna.;  Cherry  Creek  and Casselman River,  Md.; Elkins
 Demonstration  Project, W. Va.;  Hillman  State  Park,
 Penna.;  Moraine State Park,  Penna.; and Two Lick  Creek,
 Penna.

Detailed Evaluation Worksheets.  An intensive study  was
 undertaken next to evaluate  the eight  chosen  watersheds
more closely.  After  a thorough study of the  acid mine
 drainage models, detailed evaluation  worksheets were
 developed which not only account for  the nine cate-
 gories mentioned above,  but  also for  the availability
 of the data  and importance of  each data set.   Follow-
 ing is a brief  discussion of  the detailed  worksheets
used in the evaluation.  Biemel's thesis4 contains an
 expanded  version.

 The detailed evaluation  worksheets list a  certain para-
meter or an  aid in computing  that  parameter.   For
example,  the watershed area was input to the  models
and a topographic map aided  in  determining the area.
All the required model input information was  listed by
category on worksheets.  Each parameter  (or aid) was
assigned an importance factor (IF) ranging from '1',
data unnecessary, to '4', most necessary.  Some exam-
ples are:  average daily dewpoint temperature '1',
total daily solar radiation '2', average ground slope
'3', and precipitation data '4'.  Next, as a test site
was evaluated, a rating from '0', worst, to '10', best,
was assigned to each parameter to reflect the goodness
of its data.  Guidelines were developed to increase
consistency, among the different researchers evaluating
test sites, for each parameter evaluated.  A weighted
adequacy number was formed by multiplying its impor-
tance factor by its goodness value.  These individual
weighted parameter values were then tallied as a grade
to be assigned to the data group.  For example, cli-
matic data might have a score of 34/40.  This score can
then be used to compare test sites by category data
group.  This indicates the strength and weakness in
certain areas of the reports and permits fairly easy
comparison among test sites.  These evaluation sheets
include reviewer's comments to qualify the data con-
ditions or add other information that might be useful
in assembling the data.
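
The worksheet arithmetic can be sketched as follows; the
ratings are hypothetical, the importance factors follow the
examples in the text, and the treatment of the group
denominator is one plausible reading of the "34/40" style of
score:

# Minimal sketch of the worksheet arithmetic described above: each
# parameter carries an importance factor (IF, 1-4) and receives a
# goodness rating (0-10); their product is the weighted adequacy number,
# and the products are tallied into a group score.  Ratings are
# hypothetical; IF values follow the examples given in the text.

worksheet = [
    # (parameter,                          importance_factor, goodness_rating)
    ("average daily dewpoint temperature", 1,                 6),
    ("total daily solar radiation",        2,                 5),
    ("average ground slope",               3,                 7),
    ("precipitation data",                 4,                 9),
]

score    = sum(imp * good for _, imp, good in worksheet)
possible = sum(imp * 10   for _, imp, _   in worksheet)
print(f"group score: {score}/{possible}")     # here 6 + 10 + 21 + 36 = 73 out of 100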

A detailed evaluation of a watershed report requires a
considerable amount of time.  While a reviewer reads
the watershed report, he can also complete many
sections of the worksheets.  However,  the report may
indicate other sources of data for which the reviewer
must search before deciding on the suitability of a
particular watershed for modeling.

The detailed evaluation worksheets, being fourteen
pages long, were found to be unwieldy for comparing
watersheds.  The results of these analyses were com-
piled onto summary sheets to allow easier comparison
among several watersheds.

Evaluation of the Data Availability.   The previously
described analyses were applied to the reports avail-
able to this research project.   Of the original 33
basins under study, eight were found to contain enough
information to merit a detailed study.  All of the
watersheds were given this analysis,  but only the
eight indicated earlier were considered for computer
modeling.  The remaining 25 basins received analyses
in an effort to evaluate the detailed worksheets and to
provide insight as to data deficiencies.  Seven basic
topics were analyzed with respect to availability of
data and suitability of data included in the reports.

The following chart illustrates the frequencies of
occurrence of specific rankings given to several gener-
al data divisions required for operating the models.
These are based upon about 25 site evaluations.  The
last column gives a weighted value of the data avail-
ability.  Generally, scores less than 5 indicate poor
status.  Following the chart is a discussion of the
problems found in the major data categories.

Water Quality Records for the streams are usually avail-
able, frequently being monitored at the mine sources or,
in some instances,by the USGS in the receiving streams.
Groundwater quality records are more frequently not
contained in the reports.  If they have been considered,
they are usually satisfactory.   Water quality data is
used to check the output from the acid generation
portion of the model.  Therefore, a '4' would probably
result in much difficulty in making an accurate  check
on the modeling output.

Deep Mine Parameters are determined from mine maps.  If
maps are not contained within the report, they can
sometimes be obtained from the mining companies.
Several reports stated that maps were not available for
the watershed.  Thus, difficulty would arise  in  assign-
                                                        587

-------
Chart 1.  Data availability rankings by category (about 25 site
evaluations).  The last column is the weighted average ranking;
the frequency counts for the individual ranking values (0 to 10)
are not reproduced here.

Categories                                      Avg.
Climatological Data                              7.1
   Precipitation Data                            8.9
   Evapotranspiration                            6.2
   Snow Melt Parameters                          6.0
Physical Data                                    5.8
   Watershed Parameters                          7.1
   Interception Parameters                       5.5
   Overland Flow                                 7.1
   Soil Moisture Parameters                      3.4
Streamflow and Routing Parameters                4.6
   Streamflow Parameters                         5.0
   Routing Parameters                            3.2
Water Quality and Groundwater                    4.1
   Quality Parameters                            5.8
   Groundwater Parameters                        2.4
Deep Mine Parameters                             5.1
   Physical Parameters                           5.4
   Complexity of Deep Mine System                6.9
   Acid Removal Parameters                       5.5
   Acid Production and Mine Conditions           5.4
Refuse Pile Inputs                               3.5
Combined Refuse Pile-Strip Mine Model            4.1
   Physical Data                                 4.7
   Acid Producing Parameters                     3.5

 ing values to the parameters.   Soil borings and geo-
 logic profiles are needed for  a portion of the para-
 meters.   Most acid production  parameters and mine con-
dition parameters will have to be found by trial and
 adjustment.   Since not much information is available
 on these parameters,  their initial values must be esti-
 mated.   As more information is found or as problems
 occur in acid mine drainage simulation, these para-
 meters  may be adjusted to improve simulation.  Due to
 the lack of mine maps,  deep mine parameters are some-
 times difficult to obtain.  This is often a weak part
 of the  reports.   A minimum rank of '5'  is acceptable
 here.

 Refuse  Pile Parameters.   These are usually found
 by trial.   Such data  as bulk porosity or pyritic con-
 tent can be used to determine  the acid production rate
 and solubility of acid product.   This information is
 obtained from pile borings which frequently are missing.
 A  minimum rank of '3'  is acceptable because something
 must be known about the piles  in order to make an
 initial  estimate of the parameter values.

 Climatological Data is  usually obtained from the
 National Weather Service.   When the gaging stations
 are not  in the watershed,  particularly in mountainous
 regions,  problems with  precipitation records  can be
 substantial.   Sometimes watershed precipitation can be
 synthesized  from nearby,  outside the basin records.
 Evaporation  data has  similar problems plus it may be
 missing  for  the winter  season  at some stations.   Of the
three subtopics for climatological data (precipitation,
 evapotranspiration, and snowmelt),  precipitation data
 is usually the most complete,while availability of
 snowmelt parameters is  the most  uncertain.  The hydro-
 logic model  can be run  without  snowmelt data,  but simu-
 lation  improves  with  snowmelt.   In order to run the
 model,  the overall ranking for  climatological data
 should be  at  least '5'.   This value is  frequently
 attained without  much difficulty.

 Physical Data  Parameters  are partially  obtained from
 USGS  topographic  maps,  aerial photographs, and land use
 maps, which are  usually adequately included in the
 reports.   Soil borings,  geologic  profiles, and well
 logs needed  to  assess soil moisture parameters are
 frequently not  included.   Watershed parameters and
 overland flow  parameters  are the most complete, while
 soil moisture  parameters  are the  least  complete.  On
 the whole, physical data  is usually good enough  for
 the models.  A value of  '5' is  the minimum acceptable
 for  physical data.

 Streamflow and Routing  Parameters  are obtained from
 USGS records and  rating  curves  for  the  watershed.  If
 a  USGS station  is  not located on  the stream,  then
 Streamflow data will probably be  incomplete.   However,
 having a USGS  station does not necessarily guarantee
 that there will be sufficient data.   The frequency
 chart above shows  that  in  most cases, either  Streamflow
 data is  satisfactory  ('5'  or more),  or  it  is  totally
 unacceptable  ('0'  ranking).  Streamflow records  are a
 high-priority  data set  for the hydrologic  simulation
 portion  of the models.

 Refuse Pile and Strip Mine Parameter information can be
 found from borings, topographic maps, and  aerial photo-
 graphs.  From  the  soil  borings  the coal type  can be
 determined which will be used to determine the pyritic
 content.   This  then can be used  to  find the acid solu-
 bility and pyrite  reactivity.  Some knowledge  of the
 chemistry  involved  here  is  desirable.   Discussions by
 Clark et al.1 will  give  a  good background  in  this acid
 chemistry.  The parameters  for refuse pile and strip
mine modeling are  often a  weak part of  the reports.
 Since many of  the values for refuse pile and  strip mine
                               588

-------
parameters must be found by trial and adjustment, a
ranking of '4' is required.

Evaluation Conclusions.  Many differences occurred in
the evaluating and ranking of the modeling parameters
for the different watersheds.  What one evaluator con-
sidered good data on a subject, another considered to
be poor data.  Even when the evaluators agreed on the
quality of the data available, often the ranking
values were different.  It was therefore necessary to
specify the requirements needed for the ranking by
compiling a list of definitions for the model para-
meters and the means and considerations made in each
evaluation.  These definitions are in Biemel's thesis.^
Differences, undoubtedly, will continue to exist due
to personal biases; however, these definitions should
minimize evaluation discrepancies.

The minimum rankings assigned to the various topics are
intended to indicate the lowest level at which infor-
mation pertaining to the particular topic is accept-
able for computer modeling.  In most cases the
rankings should and will be higher than these minimums.
Also, if all subtopic rankings are satisfactory ex-
cept for one, then probably enough work can be done on
the unsatisfactory subtopic so that a run can be made.

Throughout the evaluation process,comments concerning
where specific pieces of data can be found are neces-
sary.  These comments are based on information in the
report about what agency collected certain data and
where the data is stored.  Undoubtedly, all the data
needed for both hydrologic and acid mine drainage
simulation will not be included in the report, thus
these comments serve to remind the researcher where
the data can be found.  Experience shows that tele-
phone calls and trips to the data holding agency will
probably be necessary.

As previously mentioned, eight of the 33 watersheds
were deemed worthy of consideration beyond the initial,
general evaluation.  The evaluations were weighted
toward hydrologic and acid mine drainage simulation.
If the report of a studied watershed indicated that a
particular data set required for hydrologic or acid
production simulation was unsuitable or absent, then
that report was immediately rejected as having un-
suitable data.  For example, daily recorded average
streamflow is required for hydrologic simulation.  In
many cases, streamflow measurements were made once per
month, or even with less frequency and this lack of
acceptable data was cause for immediate rejection of
a given basin.

Another area of interest in the watershed review pro-
cedure was the amount of reclamation, treatment, or
abatement cost information contained in the reports.
This information, used in the Basin Optimization Model
(another model developed in this research to aid in
pollution abatement) for determining optimal acid re-
duction decisions, must be kept up to date as costs of
construction, operation, and maintenance increase.
While cost information was not emphasized throughout
the reviewing, space for comments on the availability
and amount of cost data in each report was provided
on the worksheets.  This research group found that
approximately 75% of the reports contained cost infor-
mation which the reviewer felt was detailed enough to
aid in assessing values for acid controlling measures.

Chosen Study Watersheds.  From the evaluation de-
scribed  above, three of the possible eight watersheds
appeared most suitable for hydrologic and acid mine
drainage simulation.  Two of these watersheds are the
Roaring Creek basin and the Grassy Run basin near the
town of Elkins, West Virginia.  These basins border
each other and were studied by the U. S. EPA as well as
other governmental agencies.  The Roaring Creek basin
is large (75.63 km2, 29.2 mi2) while the Grassy Run
basin is comparatively small  (7.41 km2, 2.86 mi2).  The
watersheds have the three acid producing mechanisms;
that is, strip mines, deep mines, and refuse piles.  A
complication with these two watersheds is an under-
ground mine that transfers about 12,000 m3/d (3,500 ac-
ft/yr) of water from the Roaring Creek basin to the
Grassy Run basin.

The third basin chosen is the Cane Branch sub-basin of
Beaver Creek in south-central Kentucky.  This sub-
basin, along with two others on Beaver Creek, were the
subjects of an extensive study from 1956 through 1961
under the United States Geological Survey and other
state and federal agencies.  This is a small sub-basin
(1.74 km2,  0.67 mi2) containing mainly strip mines and
refuse piles with two drift mines.

Intensive effort is now underway on applying the unit
source models to these basins.  Even those selected as
the most promising of the group have major data de-
ficiencies.  However, data synthesis techniques are
being developed in attempt to salvage these sites for
modeling.

       SUMMARY, CONCLUSIONS AND RECOMMENDATIONS

Summary.  Application of Acid Mine Drainage Unit
Source Models requires large amounts of input data.
This research group developed and tested a method of
systematic watershed evaluation when considering
applying these computer models.   Many government
sponsored mined watershed studies were reviewed using
this evaluation procedure and from these reviews an
analysis was made of general data deficiencies in the
reports.  Of the 33 watershed reports reviewed,  three
were chosen for modeling; Roaring Creek basin and
Grassy Run basin near Elkins, West  Virginia, and the
Cane Branch basin of Beaver Creek,  Kentucky.

Conclusions.  The major conclusion possible from this
study is that the majority of Scarlift-type reports
performed on mined watersheds do not result in suf-
ficient collection and publication of data to permit
straight-forward hydrologic and acid mine drainage
modeling.  In general,  the reports  are strong in water
quality data and weak in streamflow data.   The models
need daily average streamflow as input data, but
streamflow measurements were often  made no more than
twice monthly, far too infrequent for hydrologic
modeling.  Water quality data, because it is only used
for comparing simulated acid loads  to recorded acid
loads, need not be collected daily, although daily
quality data would be ideal.

Daily precipitation and evaporation data are crucial
for operating the model. Some of the studies recorded
precipitation in the watershed while others did not.
Evaporation data was not collected on the site in any
case.  For accurate simulation using the models, the
precipitation data should be collected on the study
site, and,  if possible, the evaporation should be
measured on the site using Class A pans.

A major weakness found in the reports is the lack of
usable data for mining parameters;  that is, deep mine
parameters, refuse pile inputs, and combined refuse
pile-strip mine parameters.  Information such as void
ratio of the strata, minimum flow rate for acid removal
by flooding, oxygen concentration in the mines, tem-
perature in the mines,  gas diffusivity, acid production
rates, solubility of acid products and other parameters
are rarely included in coal mine drainage basin re-
ports.  Some of these parameters can be evaluated from
other data (i.e., pyrite reactivity, soil borings);
however, even these ancillary pieces of data are often
                                                      589

-------
absent, forcing the reviewer to either search further
for the data, go to the watershed and directly collect
the data, synthesize or estimate the data, or complete-
ly abandon the watershed for acid production simula-
tion.

Recommendations.  Six areas of large data deficiencies
were found to be common to the majority of mined water-
shed reports which were reviewed.  The first three
recommendations given below will aid with the applica-
tion of the hydrologic model by supplying required in-
put data for adjusting the model to a particular study
watershed.  The final three recommendations, if fol-
lowed, are designed to collect the data necessary to
run the acid production model and to verify its output.
The six recommended steps for data collection are:
(1)  Record at least three years of daily streamflow,
(2)  Install a recording precipitation gage in the
     watershed during the period of streamflow gaging,
(3)  Install a Class A pan for evaporation in the
     watershed during the streamflow monitoring period,
(4)  Make accurately documented soil borings in spoil
     piles, and unmined areas within the watershed,
(5)  Determine pyrite content of refuse piles and strip
     mines, and
(6)  Record water quality at the watershed outlet and
     at major acid pollution sources.

The hydrologic portion of the model requires a minimum
of three years of daily, average streamflow data in
order to self-adjust to conditions of the watersheds.
This data is essential, first, to compare simulated
streamflow to recorded streamflow for testing the
accuracy of simulated flow, and secondly, to internally
adjust certain parameters to improve streamflow simu-
lation when a comparison of simulated and recorded
streamflow indicates unacceptable differences between
them.  Thus, daily streamflow gaging is recommended for
any future watershed monitoring projects.

Due to the dependence of the hydrologic model on large
quantities of input data, and because the accuracy of
this data is of utmost importance, a recording, year-
round precipitation gage in the watershed is of prime
importance.  This gage, when properly maintained and
used for the entire stream monitoring period, will pro-
vide an accurate account of all water entering the
basin as precipitation, thereby removing the problems
of heavy localized rainfall at a precipitation gage
outside the watershed while the watershed receives no
precipitation, or vice versa.

Class A pan evaporation data, another essential data
group, should be collected on the watershed throughout
the duration of streamflow gaging.  Daily, year-round
evaporation data on the watershed removes the need to
search for suitable nearby evaporation data stations
and it assures that the evaporation data is representa-
tive of that in the watershed.

Accurately documented soil borings are important so as
to assure precise description of certain hydrologic in-
puts as well as to aid in determining acid production
parameters.  These borings should record the types of
soil encountered, the depths to and thicknesses of the
different soil types, the level at which groundwater is
reached,  and also other data, such as permeability,
porosity, or void ratio, which will assist in esta-
blishing input parameters.

Pyrite content of refuse piles and strip mines provides
the means of determining several acid—production-re-
lated parameters.  Actual on-site pyritic content data,
as opposed to estimated values made in the absence of
field data, gives the most accurate acid mine drainage
simulation, thus making collection of pyritic content
data important.
 Finally,  a record of the quantity and quality of acid
 mine discharge from major acid pollution sources into
 the receiving stream is of value for comparing the
 simulated acid loads with actual field data.  The deep
 mine discharges, while not subject to the large fluc-
 tuations  encountered with surface phenomena, do have
 long-term variations which the computer simulation
 must reflect.  The quantity and quality of strip mine
 and refuse pile drainage varies much more quickly than
 that of deep mines because strip mine and refuse pile
 drainage  depends largely on surface runoff, a response
 occurring during and immediately after precipitation.
 The water quantity and quality from major acid pro-
 duction sources, as well as the water quality at the
 basin outlet, should be monitored at a minimum of once
 every second week, more frequently when feasible.
 Several times during the period of data collection,
 the quantity and quality of runoff from strip mines
 and refuse piles should be monitored immediately
 following precipitation in order to check the simu-
 lated values against the actual data.

In closing, it may be said that any good model of acid
mine drainage, present or future, is going to need the
aforementioned types of data for calibration and veri-
fication.  It is therefore highly recommended that
future demonstration projects or monitoring endeavors
consider the parameters discussed herein in their data
collection schemes before data collection is too far
along to permit proper acquisition of the crucial items.

                      REFERENCES

 1.   Clark, G. M., Ricca, V.  T. , Shumate,  K.  S.,  and
     Smith, E. E., Resource Allocation to  Optimize
     Mining Pollution Control,  Office of Research and
     Monitoring, United States Environmental Pro-
     tection Agency, Contract #68-01-0724,  final
     report,  June, 1975.

2.   Clark, G. M., Ricca, V. T., and Smith, E. E.,
     "Predictive and Pollution Abatement Model for Mine
     Drainage",  United  States Environmental Protection
     Agency,  Mining Pollution Control Branch,  IWTRL,
     NERC,  Cincinnati,  Ohio,  Eugene Harris,  Project
     Officer, Contract  #68-03-2008,  1974-76.

 3.   Ricca, V. T., The  Ohio State University Version
     of  the Stanford Streamflow Simulation Model,
     Office of Water Resources Research, U.  S. Depart-
     ment  of  the Interior, Project #B-005-OHIO and B-
     019-OHIO, 1972.

 4.   Biemel,  G.  C., Watershed Evaluation and Data Needs
     for Hydrologic and Acid Mine Drainage Modeling,
     Master of Science  Thesis,  Department  of Civil
     Engineering, The Ohio State University,  1975.

                  ACKNOWLEDGEMENTS

The author would like to  express his appreciation of the
efforts of the many individuals who have assisted in
this project.  Acknowledgement  is given to Ron Hill and
Gene Harris, project officers, Mining Pollution Control
Branch, IWTRL, NERC, U.S.EPA, Cincinnati, Ohio, for their
cooperation  in  furnishing the  numerous reports and con-
sultations needed  to accomplish  this study.  Gratitude
is expressed  to  the U.S.EPA for  their support through
the above  two referenced  projects.  Most of the work
accomplished herein was  done by  OSU, Dept. of Civil
Engineering Graduate Research Associates George Biemel,
James Bonta, Michael Hemmerich,  Gary Lockwood, Ronald
Schultz, and William Wallace.   Special thanks  is given
to project co-researchers, Professors Gordon  Clark and
Edwin Smith.  Special  thanks are extended  to  Joan A.
Orosz for her splendid job in typing this manuscript.
                                                       590

-------
                                   A MODELING TECHNIQUE FOR OPEN DUMP BURNING
     Marvin Rosenstein
     Systems Analysis Branch
     U.S. Environmental Protection Agency  (EPA)
     Region I, Boston, Massachusetts
       Valentine J. Descamps
       Regional Meteorologist*
       U.S. Environmental Protection Agency (EPA)
       Region I, Boston, Massachusetts
                    ABSTRACT

This paper describes a modeling technique for estimat-
ing the impact of open dump burning upon ambient par-
ticulate concentrations.  The technique is based upon
two major assumptions.  First, the entire area of the
dump is not fired at the same time.  Rather, it is
assumed that the dump burns "progressively" with a con-
stant rate at which the fire spreads through the dump.
Second, once a portion of the dump is fired, the total
burn time for that portion consists of an initial "hot
phase" followed by a longer "smolder phase".  Plume
rise is based upon Briggs' formulations and diffusion
calculations are performed by the EPA point source
model PTDIS.  The technique is crude and no  attempt
has been made to validate it with field data.  Limita-
tions of the technique and suggestions for improving
it are discussed.

                    BACKGROUND

Primarily  for economic reasons, several New England
states are permitting towns, particularly small rural
communities, to use open dump burning for the disposal
of community solid waste under variance procedures
until more environmentally acceptable dis-
posal methods are implemented.  Because of their con-
cern for the maintenance of adequate air quality, the
Department of Environmental Protection of the State of
Maine requested assistance from us in developing a
technique for estimating the impact of open dump burn-
ing upon ambient particulate levels. (Note that EPA
regards open dump burning as hazardous to air quality,
public health, and water quality and opposes it as a
permanent solution to the disposal of solid waste.)
Discussion with other EPA personnel in the fields of
diffusion modeling and solid waste disposal, as well as
a literature search, revealed scant information upon
which we could base the development of a technique.
Also, the press of operational duties, which leave
little time for development activities, dictated that
the technique should utilize existing diffusion and
plume rise formulations that could be rapidly and con-
veniently applied.  It was therefore recognized at the
outset that the technique would of necessity be crude
and most likely conservative in nature, such that pre-
dicted concentrations could be regarded as upper bounds.

            THE NON-PROGRESSIVE TECHNIQUE

Our first attempt at a modeling technique involved what
we call a "non-progressive" burn.  Although we later
discarded this technique, we will describe it in some
detail because many of the features and assumptions of
this technique carry over to our final "progressive"
model.   Also,  we feel that the idea of a progressive
burn is of paramount importance and can only be fully
appreciated when viewed against the background of a
non-progressive burn.

In the  non-progressive approach,  we assume that the
entire  area of the dump is set on fire at the same
time.  Gerstle and Kemnitz1, based on laboratory sim-
ulations of the open burning of municipal refuse,
suggest a total burn time of 12 hours consisting of
an initial "hot phase" of duration 1 to 1-1/2 hours fol-
lowed by a "smolder phase" for the remainder of the
12 hours.  (In future sections, subscript A will always
refer to the hot phase and subscript B will always re-
fer to the smolder phase).  Approximately 90% of the
refuse burns rapidly during the hot phase, and the re-
mainder burns slowly during the smolder phase.  After
consultation with regional personnel in the field of
solid waste disposal (hereafter referred to as regional
personnel) we settled on a (1 hour - 75%) hot phase and
an (11 hour - 25%) smolder phase.  We assumed that the
refuse burns fairly evenly with respect to time during
each of the two phases.  Gerstle and Kemnitz also
suggest an emission factor of 16 pounds of particulates
per ton of refuse burned.  The State of Maine sugges-
ted that each person generates 20 pounds of refuse per
week.  With these figures one can easily calculate par-
ticulate emission rates for the first hour and for each
of the next eleven hours. Populations of 1000, 2000 and
3000 were considered.
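
For illustration, that emission-rate arithmetic might be
carried out as below.  The per-capita refuse rate, emission
factor, and 75%/25% phase split are those given above; the
assumption that one week's accumulation is burned in a single
12-hour burn is ours, made only to give concrete numbers.

# Hedged sketch of the emission-rate arithmetic described above.
# Figures used: 20 lb of refuse per person per week, 16 lb of
# particulate per ton of refuse burned, 75% of the refuse consumed in a
# 1-hour hot phase and 25% over an 11-hour smolder phase.  The one-week
# accumulation per burn is an assumption made here for illustration.

def particulate_emission_rates(population):
    refuse_lb  = population * 20.0            # one week's refuse, lb (assumed burn lot)
    part_lb    = (refuse_lb / 2000.0) * 16.0  # total particulate emitted, lb
    hot_lb_hr     = 0.75 * part_lb / 1.0      # hot phase: 75% in 1 hour
    smolder_lb_hr = 0.25 * part_lb / 11.0     # smolder phase: 25% over 11 hours
    return hot_lb_hr, smolder_lb_hr

for pop in (1000, 2000, 3000):
    hot, smolder = particulate_emission_rates(pop)
    print(f"population {pop}: hot {hot:.1f} lb/hr, smolder {smolder:.1f} lb/hr")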

The hourly concentrations for each of the twelve hours
were calculated as follows.  First, plume rise was
calculated using Briggs'  (2,3,4) formulations.  The
buoyancy flux, F, was calculated via the equation
below.

F (m4 sec-3) = K QH,   K = 3.7 x 10-5 m4 cal-1 sec-2          (1)

The heat emission rate QH (cal sec-1) was easily cal-
culated for each of the twelve hours  from the computed
pounds of refuse burned per hour and  the heat factor
of 4675 Btu per pound of refuse burned.   This heat
factor was extracted from Lowe5 who states that it is
for municipal waste that has had some separation treat-
ment for use in a power boiler.   A lower value seems
more appropriate to unsegregated residential/commercial
waste that may be dampened by exposure to a generally
moist climate.   After consultation with regional per-
sonnel we decided to run the technique for an addi-
tional case that used a heat factor of 2000 Btu.
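
Continuing the same illustration (and the same hypothetical
one-week-burn assumption), the heat emission rate and the
buoyancy flux of equation (1) follow directly:

# Sketch of the heat-release and buoyancy-flux calculation in equation (1),
# continuing the hypothetical one-week-burn assumption introduced above.

BTU_TO_CAL = 252.0           # 1 Btu is approximately 252 cal
K = 3.7e-5                   # m4 cal-1 sec-2, the constant in equation (1)

def buoyancy_flux(refuse_lb_per_hr, heat_factor_btu_per_lb):
    q_h = refuse_lb_per_hr * heat_factor_btu_per_lb * BTU_TO_CAL / 3600.0  # cal/sec
    return K * q_h                                                          # m4 sec-3

# Example: hot-phase burn rate for a population of 1000 (75% of 10 tons in 1 hour)
hot_refuse_lb_hr = 0.75 * 1000 * 20.0
for heat_factor in (4675.0, 2000.0):
    print(f"{heat_factor:.0f} Btu/lb -> F = {buoyancy_flux(hot_refuse_lb_hr, heat_factor):.1f} m4/s3")
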
Diffusion calculations, using the emission rates and
plume rises as determined above, were performed by the
EPA point source model PTDIS developed by Turner6 and
based on the steady-state Gaussian formulations as
detailed in Turner7.  This model computes hourly center-
line concentrations at specified (by the user) dis-
tances downwind of an isolated point source in rela-
tively flat and open terrain and for a rural atmos-
phere.  Meteorological conditions consisting of sta-
bility class, wind speed and mixing height are input
by the user.  We used seven receptor distances from
0.1 to 10.0 km and four meteorological conditions of
stability classes B and F combined with wind speeds of
1.0 and 5.0 m sec-1.  Constraints on vertical disper-
sion via a mixing height were not included.  For each
combination of meteorology, population and heat factor
only two diffusion calculations are necessary, i.e.,
one for the hot burn hour that yields an hourly con-
centration of XA and one for the smolder burn type

*On assignment from ARL/NOAA, Department of Commerce
                                                       591

-------
of hour that yields an hourly concentration of XB (the
same value for each of the eleven smolder hours).  The
24-hour concentration is therefore calculated through
the relationship presented below.
          X = (XA + 11 XB) / 24                            (2)
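
Equation (2) is simply the 24-hour average of one hot-phase
hour and eleven smolder hours; in sketch form (with
hypothetical hourly values):

# Direct transcription of equation (2): one hot-phase hour at
# concentration XA and eleven smolder hours at XB, averaged over 24 hours.

def daily_average(x_hot, x_smolder):
    return (x_hot + 11.0 * x_smolder) / 24.0

print(daily_average(x_hot=120.0, x_smolder=15.0))   # hypothetical hourly values (ug/m3)
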
Before presenting some results for the non-progressive
technique, several conservative assumptions should be
pointed out since they will also apply to the progres-
sive model developed in the next section.  First, a
given stability-wind speed condition persists for the
entire burn time.  Second, only centerline concentra-
tions are calculated.   Therefore, a constant wind
direction over the entire burn time is assumed.   Third,
the dimensions of the dump are small enough such that
the dump can be considered a point source instead of
an area source.  A line source model was considered
for the technique but was discarded after trial  calcu-
lations which invoked assumptions concerning the di-
mensions of the dump produced results essentially iden-
tical to those obtained with the point source model.

Tables 1 and 2 present some results for the non-pro-
gressive technique for the wind speed of 1.0 m sec-1.
The effect of reducing the heat factor from 4675 Btu to
2000 Btu is dramatic with maximum concentrations in-
creased by a factor of two or more due to the smaller
plume rises.  For the 5.0 m sec-1 case, the overall
maximum concentrations were 47 and 93 ug m-3 for the
4675 and 2000 Btu heat factors respectively.  The
large increase in maximum concentration from the 1.0
m sec-1 case to the 5.0 m sec-1 case indicates the dom-
inance of the effect of wind speed on plume rise over
that of the wind speed on dilution.  There is some
tendency for maximum concentration to increase with
increasing population (particularly for the 5.0 m sec-1
case which is not shown) although this is not always
the case because of the competing effects of increas-
ing emissions versus increasing plume rises.
                     Table 1
Final Plume Rise (m) during (hot phase/smolder phase)
for non-progressive technique and wind speed of 1.0 m sec-1

Population         1000        2000        3000

Stability B
   4675 Btu      896/78    1359/131    1735/178
   2000 Btu      528/41     799/68     1020/93

Stability F
   4675 Btu      129/40     162/50      185/58
   2000 Btu       97/30     122/38      140/44
                                 Table 2

      Maximum 24-Hour Concentrations (µg m⁻³) for the non-progressive
      technique and wind speed of 1.0 m sec⁻¹

                                        Population
                            1000          2000          3000
      Stability B
         4675 Btu             5             4            --
         2000 Btu            11            13            --
      Stability F
         4675 Btu             9             9            10
         2000 Btu            17            21            21
                  THE PROGRESSIVE TECHNIQUE

Discussion of the non-progressive technique with the
State of Maine and regional personnel indicated several
improvements could be made.  Particularly alarming were
some of the very large plume rises as given in Table 1.
Field observations by Maine indicated that such large
plume rises do not occur.  These same field observations
also indicated that the total burn time is more likely to
be on the order of twenty-four hours and that an entire
dump is not normally fired at once.  Instead, the fire
normally spreads through the dump at a gradual rate such
that it takes hours for the entire dump to be fired.
This consideration led to the development of the
progressive model.

                                          In the progressive technique, it is assumed  that  the
                                          dump is fired progressively at a constant rate (which
                                          can be varied from case to case) such  that the fire
                                          gradually spreads through the dump.  The rate  of
                                          spreading should depend on prevailing  meteorological
                                          conditions but this  effect is ignored in the  cases to
                                          follow. Once a portion of the dump has  been  fired,  the
                                          assumption of a hot phase-smolder phase regime applies
                                          in the same way as for the non-progressive technique.
                                          As an example, refer to Table 3.  Here we have assumed
                                          a total burn time of twelve hours.  In the first  hour,
                                          1/6 of the dump is fired and part of this portion of
                                          the dump is consumed in a hot phase burn.  The remain-
                                          ing part of this portion of the dump smolders  over  the
                                          following six hours (hours 2 through 7).  In the  second
                                          hour, another 1/6 of the dump is fired.  The smolder
                                          phase now lasts for hours 3 through 8.  And  on it
                                          goes.  It takes a total of six hours for the entire
                                          dump to be fired.  Each block in Table  3 gives the
                                          partial contributions of each 1/6 portion of the  dump
                                          to the total hourly concentrations listed at the bot-
                                          tom of the table.  As with the non-progressive tech-
                                          nique, only two basic diffusion calculations need to
                                          be done.  In contrast to equation  (2)  for the  non-
                                          progressive twelve hour burn, the 24-hour average con-
                                          centration is calculated via the equation below.
          X = [6XA + (6)(6)XB] / 24 = (XA + 6XB) / 4                 (3)
                               The technique can be generalized (for total burn times
                               of 24 hours or less) as follows.  Let Y equal the num-
                               ber of hours it takes for the fire to spread through
                               the entire dump, i.e., 1/Y of the dump is fired in
                               each of the first Y hours.  Let S equal the number of
                               hours that each 1/Y portion of the dump smolders. Then
                               the 24-hour average concentration is calculated by the
                               equation below.
          X = [(Y)XA + (Y)(S)XB] / 24                                (4)
Therefore, for a 24-hour burn with a hot phase duration
of 12 hours and a 12-hour smolder duration, we have

          X = [12XA + (12)(12)XB] / 24                               (5)
                               The progressive technique can also be used for total
                               burn times of greater than 24 hours, but  the general
                               relationship expressed in equation (4) will not hold.
                               But equations similar to equations (3) and (5) can
                               still be derived from tables constructed  like Table 3.
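As an illustration of the bookkeeping in equations (2) through (5) and in Table 3, the hour-by-hour schedule can be sketched as follows.  This is not the authors' code; in particular, treating burns longer than 24 hours by taking the highest 24-hour running average of the schedule is our reading of the preceding paragraph rather than a formula given in the paper.

```python
# Sketch of the hour-by-hour bookkeeping of Table 3 and equations (2)-(5).  XA is
# the hourly centerline concentration contributed by one 1/Y portion of the dump
# during its hot-burn hour and XB the contribution during each of its S smolder
# hours, both taken from the diffusion model.
def hourly_schedule(xa, xb, y_hours, s_hours):
    """Total concentration in each hour of a progressive burn lasting Y + S hours."""
    totals = [0.0] * (y_hours + s_hours)
    for fired in range(y_hours):                      # 1/Y of the dump fired each hour
        totals[fired] += xa                           # hot-phase hour for this portion
        for h in range(fired + 1, fired + 1 + s_hours):
            totals[h] += xb                           # its S smolder hours
    return totals

def max_24_hour_average(xa, xb, y_hours, s_hours):
    totals = hourly_schedule(xa, xb, y_hours, s_hours)
    if len(totals) <= 24:                             # equation (4): (Y*XA + Y*S*XB)/24
        return sum(totals) / 24.0
    return max(sum(totals[i:i + 24]) / 24.0 for i in range(len(totals) - 23))

# Twelve-hour burn of Table 3 (Y = 6, S = 6) reproduces equation (3)
assert abs(max_24_hour_average(1.0, 1.0, 6, 6) - (6 + 6 * 6) / 24.0) < 1e-12
```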
                                                      592

-------
                                 Table 3

      Twelve Hour Progressive Burn Sequence, Hot Phase Duration = 6 hours,
      Smolder Phase Duration = 6 hours.  See discussion in text for
      explanation.

      HOT BURN:                         XA in each of hours 1-6

      SLOW BURN DUE TO FIRE IN HOUR i:
           i = 1                        XB in each of hours 2-7
           i = 2                        XB in each of hours 3-8
           i = 3                        XB in each of hours 4-9
           i = 4                        XB in each of hours 5-10
           i = 5                        XB in each of hours 6-11
           i = 6                        XB in each of hours 7-12

      TOTAL HOURLY CONCENTRATION Xi, i = 1,...,12:

      Hour:  1     2       3        4        5        6        7    8    9    10   11   12
      Xi:    XA  XA+XB  XA+2XB  XA+3XB  XA+4XB  XA+5XB   6XB  5XB  4XB  3XB  2XB   XB
The particulate and heat emission rates, Qp and QH,
that are required for the diffusion and plume rise cal-
culations can be calculated from the following equations:

          QpB = .25 (Qp) / [(Y)(S)]                                  (6)

          QHB = .25 (QH) / [(Y)(S)]                                  (7)
          Qp = (population) (lbs refuse / person) (lbs particulates / lb of refuse)   (8)

          QH = (population) (lbs refuse / person) (Btu / lb of refuse)                (9)
The plume rise calculations and diffusion calculations
are performed in the same way that they were for the
non-progressive technique.  The same conservative
assumptions that were discussed in that section
apply here as well.  Each portion of the
dump that is fired is considered a point source and
separation distances between the various portions are
ignored.   Meteorology is constant for the duration of
the burn.

Three cases using the progressive technique were in-
vestigated.  Table 4 lists the assumed values for the
operating parameters for each case.   Equation (3)
applies in case 1, while equation (5) applies in cases
          2 and 3.  Receptor distances of 0.1, 0.25, 0.4, 0.7, 1.0,
          1.5, 2.0, 2.5 and 3.0 km were used.  A representative
          range of up to 22 stability-wind speed combinations
          was investigated.

          Table 5 lists some of the calculated plume rises for
          the three cases for a wind speed of 1.0 m sec⁻¹. Table
          5 is constructed in the same fashion as Table  1 to
          enhance comparisons between the non-progressive and
          progressive techniques.  The plume rises are purposely
          listed for the 1.0 m sec⁻¹ case for two reasons. First,
          the inverse relationship between plume rise and wind
          speed implies that the listed plume rises should be
          representative of the maximum rises that can be expec-
          ted.  Second, high wind speeds are not relevant because
          of the danger of spreading fire to areas surrounding
          the dump.  The progressive cases yield plume rises that
          are much less than those of the non-progressive cases.
          The most realistic plume rises appear to be those for
          progressive case number 2 which yields plume rises
          generally less than 200 m.
                                 Table 5

      Final Plume Rise (m) during (hot phase/smolder phase) for the
      progressive technique and wind speed of 1.0 m sec⁻¹

                                        Population
                            1000          2000          3000
      Stabilities (A-D)
         Case 1            223/26        374/43        487/58
         Case 2             87/6         146/10        198/14
         Case 3            164/11        277/19        375/26
      Stability E
         Case 1             78/30         98/37        112/43
         Case 2             51/16         64/20         74/22
         Case 3             68/21         86/26         98/30
Table 6 presents a selection of predicted 24-hour con-
centrations under neutral (D) stability and wind speeds
of 1.0 and 5.0 m sec⁻¹ for the three progressive cases.
Results for these meteorological conditions were chosen
for presentation because of the prevalence of these
meteorological conditions in many sections of New England
and because only neutral stability can persist for 24
hours.  Neutral stability also generally yielded the
highest predicted concentrations.  As previously
discussed, observations in the field by the State of
Maine personnel indicate cases 2 and 3 to have the most
realistic operating parameters.  It is for these cases
that the technique indicates a potentially serious threat
to the National Ambient Air Quality Standards of 260
µg m⁻³ (primary) and 150 µg m⁻³ (secondary) for 24-hour
average concentrations of particulates.  Case 2 yields
the most realistic estimates of plume rise and the
highest predicted concentrations.

       DISCUSSION OF PROGRESSIVE TECHNIQUE

Thus far, we have not been able to obtain sufficient
data with which we could attempt to validate the
technique.  However, it is encouraging that the tech-
nique is able to produce plume heights that seem
                                                       593

-------
                                       Table 4

               Assumed Operating Parameters for Three Progressive Burns

                                                          Heating     Waste Per      Emission Factor
              Total Burn     Hot Phase     Smolder Phase  Value       Person         (lbs of particulate
              Time (Hours)   Duration (hrs) Duration (hrs) (Btu/lb)   (lbs/person/wk)  per ton of waste)
     Case 1        12             6              6          3500           20                16
     Case 2        24            12             12          2000           20                16
     Case 3        24            12             12          4675           20                16
                                       Table 6

      Predicted 24-Hour Average Concentrations (µg m⁻³) for the progressive
      technique, neutral (D) stability, and wind speeds of 1.0 and 5.0 m sec⁻¹
      (X = concentration less than State of Maine Standard of 100 µg m⁻³).

      [Concentrations are tabulated by case (1, 2, 3) and wind speed for town
      populations of 1000, 2000 and 3000 at receptor distances of 0.1, 0.25,
      0.4, 0.7 and 1.0 km; the individual entries are not reproducible here.]
 reasonable  on  the basis of rural sightings of dump
 plumes.  As for predicted concentrations, without
 sufficient  field data to compare them with, about all
 we  can  say  is  that  they do not seem implausible.

 The technique  has two important advantages.  First, it
 is  relatively  easy  to use, being based upon familiar
 and readily available diffusion and plume rise form-
 ulations. Secondly,  the technique can be easily tuned
 via the  values that  must be assumed for the following
 operating parameters:  total burning time, hot phase
 duration, smolder phase duration, fraction of refuse
 consumed in each phase  (values other than 75% for the
 hot phase and  25% for the smolder phase may be approp-
 riate),  pounds of refuse generated per person, pounds
 of  particulates generated per ton of refuse, and the
 heating  factor (Btu  per pound of refuse).

 As  previously  discussed, the technique as it presently
 exists  has  several  limitations that may introduce a
 very high degree of  conservatism into the predictions.
 The very bad assumption of constant meteorology as
 well as  the lesser  evil of using point sources instead
 of area sources could both be remedied through suit-
 able computer  programming.  The virtual point source
 concept  could  be introduced to handle area source
 configurations. The  technique could also be changed
 so  as to allow meteorological input  (stability, wind
 speed and wind direction) that varies from hour to
 hour and receptor locations that vary in both horizon-
 tal directions.  Unfortunately, as discussed in the
 background  section,  we did not, nor are we likely to
 ever, have  the time  to carry out the above suggestions.
In conclusion, we believe that we have developed a
technique that is potentially very useful for estimating
the impact of open dump burning upon ambient particulate
levels.  Furthermore, we believe it is worthy of future
efforts to both improve it and establish its validity.
                      REFERENCES

(1) Gerstle, R.W. and D.A. Kemnitz, 1967:  Atmospheric Emissions from Open
    Burning, J. Air Pollution Control Association, 17, p. 324.

(2) Briggs, G.A., 1969:  Plume Rise, USAEC Critical Review Series TID-25075,
    National Technical Information Service, Springfield, Virginia  22151.

(3) Briggs, G.A., 1971:  Some Recent Analyses of Plume Rise Observation,
    pp. 1029-1032, in Proceedings of the Second International Clean Air
    Congress, edited by H.M. Englund and W.T. Berry, Academic Press, New York.

(4) Briggs, G.A., 1972:  Discussion on Chimney Plumes in Neutral and Stable
    Surroundings, Atmospheric Environment, 6, p. 507.

(5) Lowe, R.A., 1973:  Energy Recovery from Waste, EPA Publication SW-36d.ii,
    24 pp., available from Superintendent of Documents, U.S. Government
    Printing Office.

(6) Turner, D.B.:  Draft Copy, PTDIS User's Manual, EPA, Research Triangle
    Park, North Carolina.

(7) Turner, D.B., 1970:  Workbook of Atmospheric Dispersion Estimates, EPA
    Publication AP-26, 84 pp.
                                                       594

-------
                           DEVELOPMENT AND USE OF A FIXED CHARGE PROGRAMMING MODEL
                                      FOR REGIONAL SOLID WASTE PLANNING*

                                                Warren Walker
                                       The New York City-Rand Institute

                                               Michael Aquilina
                                          Champlin Petroleum Company

                                                 Dennis Schur
                                     U.S. Environmental Protection Agency
                      Abstract
     The problem of deciding on the number, type, size
and location of the solid waste disposal facilities to
operate in a region, and allocating the region's
wastes to these facilities is formulated as a fixed
charge problem.  A system for solving this problem,
developed for the U.S. Environmental Protection Agency
is described.  The system includes a heuristic al-
gorithm for the fixed charge problem.  A description
is given of a hypothetical application of the system
in the Seattle-King County region of the State of
Washington.

                    Introduction

     Solid waste management is a crucial problem
facing every municipality.  The average person gen-
erates over five pounds of material per day for which
he no longer has any use and, therefore, discards.
Among the services supplied by most municipal gov-
ernments is the collection, transport, and disposal
of such solid wastes.
     The collection operation consists of the removal
of solid waste  (usually in a truck) from its point of
generation.  It is then transported to an intermediate
facility or an ultimate disposal site.  At an inter-
mediate facility it may undergo some processing  (such
as incineration, resource recovery, biochemical ox-
idation, or compaction) that leaves a residue of
waste that must still be disposed.  Most ultimate dis-
posal sites are sanitary landfills.
     Prior to their explosive growth in the 1950's and
1960's, most cities had no problem disposing of all
their solid waste within their own borders.  However,
as available land begins to disappear and sites begin
to be used up, cities are beginning to look outside
their own boundaries for disposal sites.  In addition
technological processes for waste reduction and re-
cycling are too expensive and inefficient for single
municipalities to consider using.  But, by taking ad-
vantage of economies of scale, such facilities can be
built and operated for use by several municipalities
with a net cost saving to the region as a whole.
     Therefore, an increasing number of cities, towns,
and villages are joining together to perform solid
waste planning for an entire region.  For example, in
New York State, 35 percent of the State's 1500 muni-
cipalities use solid waste disposal facilities op-
erated in cooperation with other units of local gov-
ernment.
     Several recent studies have dealt with the mini-
mization of transportation and disposal costs in re-
gional solid waste management systems.
*This paper was prepared for presentation at the 46th
joint meeting of the Operations Research Society of
America and the Institute of Management Sciences held
in San Juan, Puerto Rico, October 16-18, 1974.
Anderson² developed an algorithm for determining the
optimum solution to a regional disposal problem, but
his model assumes that all costs are linear.  Marks
and Liebman⁵ treat the more limited problem of deter-
mining the locations of transfer stations** and include
the fixed costs of building and operating such facil-
ities in their formulation.  The special structure
of the resulting problem (a capacitated transshipment
problem) allows them to obtain optimal solutions using
a minimum cost-maximum flow network algorithm.
   In⁶, Morse and Roth formulate the problem without
capacity constraints and attempt to solve it by com-
plete enumeration, comparing the costs resulting from
each of the 2ʲ-1 solutions for a given set of j pos-
sible disposal facility locations.  Kuhner and
Harrington⁴ solve a similar problem using the branch-
and-bound algorithm that is part of IBM's MPSX-MIP
program.
   The formulation that Skelly⁸ developed provides the
basis for the model described in this paper.  To solve
his problem he used an early version of the computer
program we describe.
   In this paper we develop an integer programming
model for selecting from potential and existing sites
those that should be developed, at what capacity and
how the wastes should be routed through them so that
the total transport, processing, and disposal costs
for the entire region are minimized.  Fixed charge
cost functions are included for all facilities, as is
a consideration of the time-staging of the construction
of facilities.
   The output from the model provides information on:
           What types of disposal facilities should be
           built?
           When should they be built?
           What should be their capacity?
           How much of its solid waste should each
           community ship to each of the disposal
           facilities?
           What will be the cost of the solution (both
            fixed cost and cost per ton)?
   It is what Marks and Liebman⁵ refer to as a capaci-
tated transshipment facility location problem, with
fixed and variable costs associated with the use of
each facility and variable haul costs.  A heuristic
*  Figures in superscript refer to bibliographical
references listed at the end of this paper.

** Transfer stations are intermediate facilities where
collection vehicles transfer their loads to larger
vehicles, more suited for long-haul transportation.
The larger vehicles transport the waste to the disposal
site.
                                                      595

-------
algorithm has been developed that produces solutions
that are almost always optimal.  This algorithm has
been embedded in a computer program that includes a
routine for generating the problem matrix and a re-
port generator for displaying the results.  The pro-
gram has been given the name "SWAM" for Solid Waste
Allocation Model.
     In the following sections we describe the math-
ematical model, the heuristic algorithms used to
solve it, and the data requirements of the SWAM model.
A sample application of the SWAM model using data
from the Seattle region, where the model was used in
a regional planning effort during 1973, will be pre-
sented at the Conference but cannot be included with
the text of this paper due to the excessive space re-
quirements.

The Mathematical Model

     In order to simplify the problem and make it
computationally feasible to solve, we assume that all
the refuse of a community is generated at a single
point, its center of mass, and, therefore, that all
waste transported from the community travels from its
center of mass.  This permits us to separate the
transport and disposal subsystems from the collection
subsystem.
     We assume that there are K potential disposal
sites being considered, J intermediate facilities,
K-J final disposal sites, and that there are N com-
munities.
     The general problem formulation for the one-
period (say, one week) static model is:

Minimize
          K   N               K     J               K
          Σ   Σ  C_ik x_ik  + Σ     Σ  C_jk x_jk  + Σ  F_k δ_k                     (2.1)
         k=1 i=1            k=J+1  j=1             k=1

Subject to

          Σ(k=1..K) x_ik  =  W_i                             (i=1,...,N)           (2.2)

          Σ(i=1..N) x_ij  <=  A_j                            (j=1,...,J)           (2.3)

          Σ(i=1..N) x_ik + Σ(j=1..J) x_jk  <=  B_k           (k=J+1,...,K)         (2.4)

          P_j Σ(i=1..N) x_ij - Σ(k=J+1..K) x_jk  =  0        (j=1,...,J)           (2.5)

          Σ(i=1..N) x_ik + Σ(j=1..J) x_jk - y_k  =  0        (k=1,...,K)           (2.6)
          (the second sum is empty for the intermediate facilities k=1,...,J)

          δ_k = 0 if y_k = 0                                 (k=1,...,K)           (2.7)

          δ_k = 1 if y_k > 0                                 (k=1,...,K)           (2.8)

          x_ik >= 0,  x_jk >= 0                              (all i, j, k)         (2.9)

          y_k >= 0                                           (k=1,...,K)           (2.10)

where:

     C_ik  is the cost of transporting and processing one ton of waste from
           source i at disposal facility k ($/ton);

     C_jk  is the cost of transporting and processing one ton of waste from
           intermediate facility j at final facility k ($/ton);

     F_k   is the fixed cost associated with opening and operating disposal
           facility k ($/week);

     W_i   is the quantity of waste generated at source i (tons/week);

     A_j   is the capacity of intermediate site j (tons/week);

     B_k   is the weekly capacity of final disposal site k (B_k depends upon
           the number of trucks the site can handle and, since landfill sites
           get filled up, the length of time it is desired to operate the
           site); and

     P_j   is the proportion of the weight of the waste that remains after
           being processed at intermediate site j.

The decision variables are:

     x_ik  (i=1,2,...,N; k=1,2,...,K):  the amount of community i's waste
           that is to be sent to disposal facility k; and

     x_jk  (j=1,2,...,J; k=J+1,...,K):  the amount of waste that is to be
           transported from intermediate disposal facility j to final
           disposal facility k.

The auxiliary variable y_k is the total quantity of waste received at
facility k, and δ_k is a zero-one indicator that equals 1 when facility k
is used.
                          The Constraints

     We discuss each of the constraining equations in turn.

     • Constraint (2.2) requires that all of the solid waste generated at a
       source during one week be transported to some disposal facility during
       the same week.

     • Constraint (2.3) recognizes the limited processing capacities of
       incinerators and transfer stations.  If this constraint is omitted,
       the model can be used to determine a desirable capacity for a proposed
       facility.

     • Constraint (2.4) recognizes the limited capacity of a landfill site.
       Its weekly capacity depends on its ultimate capacity (say, T_k tons)
       and its targeted useful life (say, Y_k years).  Then B_k is given by
       B_k = T_k / (52 Y_k).  The weekly capacity may also be affected by the
       capacity of access roads to handle traffic without congestion, and the
       unloading rate at the landfill site.
                                                      596

-------
     • Constraint (2.5) is a balance equation for intermediate sites.  It
       specifies that whatever waste is received at an intermediate site must
       be shipped from that site to a final disposal site, adjusted for the
       weight reduction produced by intermediate processing.

     • Constraints (2.6), (2.7), and (2.8) insure that the fixed costs of
       building and operating a disposal facility are included in the
       objective function if the site is to be utilized at a positive level.
       If site k is utilized, y_k will be greater than zero and δ_k will
       assume the value "1".  Thus, the fixed cost F_k associated with
       facility k would be added to the value of the objective function.  If
       site k is not utilized, y_k will be zero, δ_k will be zero, and F_k
       will not be added to the objective function.

     • Constraints (2.9) and (2.10) are the non-negativity constraints on the
       x_ik's and y_k's.
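For readers who wish to experiment with the formulation, a minimal sketch of (2.1)-(2.10) written in the open-source PuLP modeler is given below.  This is not the SWAM program (a FORTRAN system solved heuristically, as described later); the data are invented, and the y_k/δ_k coupling of constraints (2.6)-(2.8) is imposed in the usual capacity (big-M) form.

```python
# Minimal sketch of formulation (2.1)-(2.10) using the PuLP modeler.  This is NOT
# the SWAM program; the data below are invented, and the fixed-charge coupling of
# (2.6)-(2.8) is imposed through the capacity (big-M) constraints.
import pulp

N, J, K = 3, 1, 3                         # 3 sources; facility 0 is intermediate, 1-2 are final
W = [100.0, 80.0, 60.0]                   # W_i, tons/week generated at each source
A = [150.0]                               # A_j, weekly capacity of the intermediate facility
B = {1: 120.0, 2: 200.0}                  # B_k, weekly capacity of the final sites
P = [0.3]                                 # P_j, weight fraction remaining after processing
F = [400.0, 250.0, 300.0]                 # F_k, fixed weekly cost of each facility
C_src = [[6, 4, 9], [5, 7, 8], [7, 6, 5]] # C_ik, $/ton from source i to facility k
C_int = {(0, 1): 3.0, (0, 2): 4.0}        # C_jk, $/ton from intermediate j to final k

prob = pulp.LpProblem("fixed_charge_swm", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(i, k) for i in range(N) for k in range(K)], lowBound=0)
z = pulp.LpVariable.dicts("z", list(C_int), lowBound=0)
d = pulp.LpVariable.dicts("d", range(K), cat=pulp.LpBinary)        # delta_k indicators

prob += (pulp.lpSum(C_src[i][k] * x[i, k] for i in range(N) for k in range(K))
         + pulp.lpSum(C_int[jk] * z[jk] for jk in C_int)
         + pulp.lpSum(F[k] * d[k] for k in range(K)))              # objective (2.1)

for i in range(N):                                                 # (2.2) all waste disposed of
    prob += pulp.lpSum(x[i, k] for k in range(K)) == W[i]
for j in range(J):
    inflow = pulp.lpSum(x[i, j] for i in range(N))
    prob += inflow <= A[j] * d[j]                                  # (2.3) plus fixed-charge link
    prob += P[j] * inflow == pulp.lpSum(z[jk] for jk in C_int if jk[0] == j)   # (2.5)
for k in range(J, K):                                              # (2.4) plus fixed-charge link
    prob += (pulp.lpSum(x[i, k] for i in range(N))
             + pulp.lpSum(z[jk] for jk in C_int if jk[1] == k)) <= B[k] * d[k]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```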

The Objective Function
     The objective of this model is the minimization of the total regional
costs of solid waste disposal.  These costs include transport costs (from a
community to an intermediate site, from a community to a landfill site, and
from an intermediate site to a landfill site), and operating and capital
costs associated with a disposal facility.
     We assume that each of the individual components of total cost can be
represented by one of the two types of cost functions shown in Figs. 1 and 2.
     Typically, the cost of transporting waste from a site i to site k can be
represented by a linear function (Fig. 1) of the amount of waste shipped.  We
will let H_ik be the unit transportation cost.
     The cost function associated with each disposal facility, k, is of the
form shown in Fig. 2.  There is a fixed initial construction and/or overhead
cost, F_k, as well as variable operating costs, V_k, which are dependent on
the amount of waste processed.
     Each cost factor C_ik or C_jk, defined above, is therefore the sum of a
unit transportation cost and a unit operating cost.  That is:

          C_ik = H_ik + V_k        (i=1,...,N; k=1,2,...,K)
          C_jk = H_jk + V_k        (j=1,...,J; k=J+1,...,K)
     In many cases the operating cost curves for facilities such as
incinerators and transfer stations are not linear, but exhibit economies of
scale.  We will assume that any such curve can be represented by a piece-wise
linear concave cost function such as that shown in Fig. 3.  In this
illustration there are three different operating ranges.  If less than h_2
tons are shipped to the facility, the operating cost is c_1 dollars per ton;
for between h_2 and h_3 tons, it is c_2 dollars per ton; and for above h_3
tons, it is c_3 dollars per ton.  The three segments of the cost curve, when
extended, intercept the y-axis at points f_1, f_2, and f_3.
     Such a function can be easily represented as a fixed charge cost
function and added to the general objective function derived above.  For the
example shown in Fig. 3, define new decision variables λ_1, λ_2, and λ_3,
corresponding to the three segments of the cost curve (in general, the curve
can have any number of segments).  With each λ_j, associate the variable cost
c_j and the fixed cost f_j, so that the cost function for each λ_j is of the
form shown in Fig. 2.  Then the general formulation is modified as follows:

     • One constraint is added to the problem, requiring the amount shipped
       to the facility to equal the sum of the segment variables:

               y_k - Σ(j) λ_j  =  0

     • The following terms are added to the objective function:

               Σ(j) (c_j λ_j + f_j δ_j)

     • A fixed charge constraint is added for each segment variable:

               δ_j = 0 if λ_j = 0
               δ_j = 1 if λ_j > 0.
Note that no upper or lower bound constraints need be put on the λ_j's.  It
is shown in⁹ that, if λ_j is positive in an optimal solution to the problem,

     (1) all other λ's associated with that cost function will be zero, and

     (2) h_j <= λ_j <= h_(j+1).
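As a small worked illustration of this segment bookkeeping, the sketch below computes the (c_j, f_j) pairs from the breakpoints and per-ton rates of a continuous concave curve; the function name and the numbers are ours, not SWAM's.

```python
# Sketch: convert a piece-wise linear concave operating-cost curve into the
# (variable cost c_j, fixed charge f_j) pairs used above.  'breaks' are the
# tonnages h_1 = 0 < h_2 < ... at which the slope changes and 'slopes' the $/ton
# rate on each segment; f_j is the y-intercept of segment j extended back to zero.
def fixed_charge_segments(breaks, slopes):
    segments = []
    cost_at_break = 0.0                    # cumulative cost at the current breakpoint
    for j, (h, c) in enumerate(zip(breaks, slopes)):
        f = cost_at_break - c * h          # intercept of the extended segment
        segments.append((c, f))
        if j + 1 < len(breaks):            # accumulate cost up to the next breakpoint
            cost_at_break += c * (breaks[j + 1] - h)
    return segments

# Three-segment example: cheaper per-ton rates at higher volumes (concave curve)
print(fixed_charge_segments([0.0, 200.0, 500.0], [10.0, 8.0, 6.0]))
# -> [(10.0, 0.0), (8.0, 400.0), (6.0, 1400.0)]
```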
-------
Multi-Year Planning Considerations
     The mathematical model described above assumes
that every week or year in the planning period looks
the same as every other week or year (in terms of
costs and waste production) and that any site selected
is available at the start of the planning period and
lasts for the entire period.  The optimization of this
system is a gross simplification of real-world pro-
blems.
     In the Solid Waste Allocation Model, the concept
of "staging," breaking the planning period into sev-
eral smaller periods, is used to provide a more re-
alistic solution.  One fixed charge problem is solved
for each period.  Although each period is solved sep-
arately and independently from all other periods, re-
maining ultimate capacities of facilities such as
landfills are transferred and updated from stage to
stage.  At the end of the planning period the results
for each of the stages are summarized and totaled to
determine the overall cost for the planning period.
The number and duration of the periods are determined
partly by the user and partly by the program.  The
closing of a facility, the opening of a new facility,
and the creation of a new waste source will all cause
a new stage to be started  (i.e., a new problem to be
solved).  The user supplies all opening and closing
dates as well as ultimate capacities.  In addition,
if a facility reaches its ultimate capacity during
one of these user-defined stages, the program will
create a new stage, and a fixed charge problem with
this facility eliminated will be solved to determine
where the waste that had been going to that facility
should now be transported.
     Although there is no real interdependency be-
tween stages, the concept of "staging" does allow
for a great deal of flexibility in creating a real-
istic solid waste plan.
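The staging bookkeeping just described can be outlined schematically as follows; the function and field names are hypothetical and do not reflect SWAM's internal structure.

```python
# Schematic of the "staging" bookkeeping described above.  Names are hypothetical,
# not SWAM's; solve_fixed_charge_problem stands for one solution of the static
# model for a single period and returns (cost, tons sent to each active facility).
def plan_by_stages(stages, facilities, solve_fixed_charge_problem):
    """stages: list of (start, end) periods from the user-supplied opening/closing dates.
    facilities: dicts carrying 'remaining_capacity' (tons) for landfills, None otherwise.
    Returns the total cost summed over all stages."""
    total_cost = 0.0
    for start, end in stages:
        active = [f for f in facilities
                  if f.get("remaining_capacity") is None or f["remaining_capacity"] > 0]
        cost, tons_used = solve_fixed_charge_problem(active, start, end)
        total_cost += cost
        for facility, tons in zip(active, tons_used):          # carry capacities forward
            if facility.get("remaining_capacity") is not None:
                facility["remaining_capacity"] -= tons
                # If a landfill reaches ultimate capacity mid-stage, SWAM opens a new
                # stage with that facility removed; that re-solve is omitted here.
    return total_cost
```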

          The Solid Waste Allocation Model

     The Solid Waste Allocation Model (SWAM) is a
system of FORTRAN programs that has been developed
by the U.S. Environmental Protection Agency's Office
of Solid Waste Management Programs under contract to
Roy F. Weston, Inc.³ to solve the fixed charge in-
teger program  (2.1)-(2.10) .  It accepts as input some
simple data on the disposal facilities and the com-
munities in the region under consideration, and cal-
culates the coefficients for the integer program.  It
then solves the problem for one or more stages, and
prints output reports summarizing the solution.
     The following sections briefly discuss the model's
data requirements and the heuristic algorithm used in
the solution of the problem.  The output reports are
described as part of the discussion of the application
of the model to be covered in the verbal presentation
of this paper.

Data Requirements
     In order to properly model the transportation,
processing, and disposal components of solid waste
planning, some specific and detailed data are re-
quired.  These data are divided into three categories:
(1) source data;  (2) facility data; and (3) transpor-
tation data.
     The "source" in SWAM is a point of waste gener-
ation that represents a reasonably large residential
area, such as a census tract, transportation zone, or
planning district.  The quantities of waste generated
at these sources, specified in tons per week, will be
allocated by the model to various processing and dis-
posal facilities.  Associated with each waste source,
in addition to the generated waste, is a haul cost
(in dollars per ton-hour).  This cost is used to con-
vert the transportation time from the source to each
facility into a dollar cost.  The haul cost is a func-
tion of the collection vehicles and crews associated
with the specific source.
   The Solid Waste Allocation Model considers both in-
termediate and ultimate disposal facilities.  The
basic data necessary  to define  an intermediate fac-
ility are the maximum operating capacity in terms of
tons per week, an  operating  cost curve,  the capital
cost of  the  facility, associated useful  life, and the
transfer coefficient  indicating the percentage weight
of incoming  waste  that  will  remain after processing.
In addition, a unit haul cost (in dollars per ton-
hour) is required  in  order to convert the transpor-
tation times from  intermediate  facilities to ultimate
disposal sites into dollar costs.   To define an ul-
timate disposal  site  the following pieces of data are
needed:   the maximum  operating  capacity  in tons per
week, the ultimate capacity  in  tons,  the capital cost
of the facility, an operating cost curve,  and a use-
ful life.  The operating cost curve associated with a
disposal facility  can be one of three types:  (1)  st-
raight line  of the form y=ax+b;  (2)  semi-log of the
form log y=ax+b; and  (3)  log-log of the  form y=a log
x+b.  Curve  types  (2) and  (3) are  approximated by
piece-wise linear  curves using  the linear regression
procedure described in  Section  II.
   In order  to determine allocations  of  waste from
sources  to facilities,  the model must be supplied with
the set  of paths along  which the waste can be trans-
ported.   These paths are defined by pairs  of  locations
which can indicate  paths from sources to intermediate
facilities,  from sources to  ultimate  disposal sites,
or from  intermediate facilities  to ultimate sites.
The transportation  time in minutes plus  the turn-
around time  at the  facility  must be supplied  for  each
pair of  locations.
   The data  described above  are  all that is necessary
to run the model for one time period.  There  are other
options  in the model, such as multi-year planning  and
automatic path generation, each  with  its own  data  re-
quirements, which will  not be described  here.
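The three data categories can be pictured with the illustrative containers below; the field names are ours for exposition and do not reflect SWAM's actual input format.

```python
# Illustrative containers for the three data categories; field names are ours and
# do not reflect SWAM's actual input format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Source:                            # a census tract, transportation zone, etc.
    tons_per_week: float
    haul_cost_per_ton_hour: float        # converts travel time into dollars for this source

@dataclass
class Facility:
    is_intermediate: bool
    max_tons_per_week: float
    capital_cost: float
    useful_life_years: float
    operating_cost_curve: object                    # straight-line, semi-log, or log-log form
    transfer_coefficient: Optional[float] = None    # weight fraction remaining (intermediates)
    ultimate_capacity_tons: Optional[float] = None  # landfills only
    haul_cost_per_ton_hour: Optional[float] = None  # intermediate-to-final transfer haul

@dataclass
class Path:                              # an allowed source-to-facility or facility-to-facility link
    origin: str
    destination: str
    travel_plus_turnaround_minutes: float
```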
Solving  the Fixed Charge Problem
   The problem formulated in Section II is a fixed charge problem, which is a
special type of integer programming problem.  It can be solved exactly by any
mixed integer programming algorithm.  Unfortunately, these algorithms are
generally too slow to solve the large problems constructed for practical
applications in a reasonable amount of time (although Kuhner and Harrington
report some success using IBM's MPSX-MIP system on large problems⁴).
   The United States Environmental Protection Agency's Office of Solid Waste
Management Programs, therefore, decided to use a heuristic algorithm developed
by Warren Walker to solve the problem.  The algorithm is described completely
in⁹, where its speed of execution and optimality of solutions are compared to
other solution techniques.  It was found to be computationally efficient and
successful in producing the optimum solution a high percentage of the time.
   The algorithm consists of two phases.   The first
phase is  identical  to the standard simplex method  of
linear programming, except that  the method of choosing
the vector to bring into the basis is modified to
take the  fixed charges  into  account.   In the  second
phase, vectors are  forced into  the basis even though
they increase the total cost, in the  hope  that, by
resuming  simplex iterations  from a new extreme point,
a better  solution can be found.
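The adjacent extreme point method of reference 9 is not reproduced here.  To illustrate the general kind of shortcut that heuristics provide, the sketch below implements a much simpler "drop" heuristic for an uncapacitated special case of the problem; it is emphatically not the Walker algorithm, and capacities, intermediate facilities, and staging are all ignored.

```python
# NOT the Walker adjacent-extreme-point algorithm of reference 9: a simple "drop"
# heuristic for the uncapacitated special case, shown only to illustrate the kind
# of shortcut used when exact mixed integer programming is too slow.
def drop_heuristic(haul_cost, fixed_cost, waste):
    """haul_cost[i][k]: $/ton from source i to facility k; fixed_cost[k]: $/week of
    fixed charge; waste[i]: tons/week.  Returns (set of open facilities, weekly cost)."""
    def total_cost(open_set):
        haul = sum(w * min(haul_cost[i][k] for k in open_set)
                   for i, w in enumerate(waste))
        return haul + sum(fixed_cost[k] for k in open_set)

    open_set = set(range(len(fixed_cost)))        # start with every facility open
    best = total_cost(open_set)
    improved = True
    while improved:
        improved = False
        for k in sorted(open_set):                # try closing each open facility in turn
            if k not in open_set or len(open_set) == 1:
                continue
            trial = open_set - {k}
            cost = total_cost(trial)
            if cost < best:                       # keep the drop if it lowers total cost
                open_set, best, improved = trial, cost, True
    return open_set, best

# Tiny invented example: three sources, three candidate facilities
print(drop_heuristic([[6, 4, 9], [5, 7, 8], [7, 6, 5]], [400, 250, 300], [100, 80, 60]))
```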
                                                      598

-------
                   References
1.    "A Mathematical Model  to  Plan and Evaluate Re-
      gional  Solid  Waste  Systems," a report prepared
      for  the New York  State Department of Environ-
      mental  Conservation by Roy F.  Weston, Inc.,
      West Chester, Pennsylvania, June 1971.

2.     Anderson,  L.E.,  "A  Mathematical Model for the
      Optimization  of  a Waste  Management System,"
      University of California Sanitary Engineering
      Research Laboratory,  Report No. 68-1, February
      1968.

3.    "Development of a  Solid Waste Allocation Model,"
      a  report prepared for the U.S. Environmental
      Protection Agency by  Roy F. Weston,  Inc., West
      Chester, Pennsylvania, July 1973.

4.     Kuhner, Jochen and  Joseph J. Harrington, "Large-
      Scale Mixed Integer Programming for Investigat-
      ing  Multi-Party  Public  Investment Decisions:
      Application to a  Regional Solid Waste Management
      Problem," presented at  the 45th National ORSA/
      TIMS Meeting, April 23,  1974,  Boston, Mass.

5.     Marks,  David  H.  and Jon  C. Liebman,  "Mathemati-
      cal  Analysis  of  Solid Waste Collection,"  Public
      Health  Service Publication No. 2065, U.S. Dep-
      artment of Health,  Education and Welfare, 1970.

6.     Morse,  Norman and Edwin  W. Roth, "Systems Analy-
      sis  of  Regional  Solid Waste Handling,"  Public
      Health  Service Publication No. 2065, U.S. De-
      partment of Health, Education and Welfare, 1970.

7.     Midwest Research Institute, "Resource Recovery-
      The  State of  Technology,"  February 1973.

8.     Skelly, Michael  J.,  "Planning for Regional Re-
      fuse Disposal Systems,"   Ph.D. thesis, Cornell
      University, September 1968.

9.    Walker, Warren E.,   "A  Heuristic Adjacent Ex-
      treme Point Algorithm for the Fixed Charge
      Problem,"  P-5042,  The  New York City-Rand
      Institute, June  1973.
                                                      599

-------
                 INCENTIVES FOR  WASTE COLLECTION BASED  ON  WORK CONTENT MODELING
          Richard L. Shell, P.E.
    Professor of Industrial Engineering
              Dean S. Shupe, P.E.
Associate  Professor of Mechanical  Engineering
                                     UNIVERSITY OF CINCINNATI
                                      Cincinnati, Ohio   45221
                  ABSTRACT

  A successful incentive system for solid waste
  personnel must satisfy both technical and politi-
  cal requirements.  The technical requirement is
  that each route assignment must contain a known
  collection work time.  This paper describes the
  use of computerized modeling to develop waste
  collection route areas for either time or wage
  incentives.  A recommended wage incentive program
  for solid waste workers is included with a simple
  example to illustrate application.  The recommended
  program features an  "Elective Incentive Contract"
  that combines three basic concepts:  incentive
  teams, time and wage incentives, and elective work
  loads, i.e., teams choose their level of incentive
  work load.

             INTRODUCTION

Service cut-backs,  layoffs, reduction in
capital investments,  and  possible financial
default -- these headlines evidence the
growing economic plight  of many cities
across the nation.   The  typical municipal-
ity is being hard pressed to maintain its
service-oriented, labor  intensive functions
in the face of  continuing inflation while
on a relatively fixed income base.  "Over
the past two decades, state and local
spending, now running at  $221.5 billion,
has grown faster than any other sector  of
the economy.  State and  local  expenditures,
exclusive of federal  aid,  rose from 7.4
percent of gross national product in 1954
to 11.6 percent last year" [1].

The revenues of local government have not
kept pace with  the  spiraling expenditures.
The bottom line result to date has been
deeper budget deficits.   Figure 1 illus-
trates the worsening  financial condition
since 1973.
PRODUCTIVITY AND  SOLID WASTE COLLECTION

In most cities, solid waste collection
ranks third in total  cost,  behind educa-
tion and roads.   Collection has been
traditionally noted  for its intensive la-
bor requirements.  Typically over 70
percent of collection/disposal costs are
required for collection manpower.
    Collection productivity has  remained es-
    sentially  unchanged since the  introduction
    of the  compactor truck nearly  four decades
    ago.  One  approach to increasing producti-
    vity  is the application of worker incen-
     tives.
        [FIGURE 1.  STATE AND LOCAL GOVERNMENT BUDGET SURPLUS/DEFICITS,
         1970-75.  Source:  U.S. Commerce Department]
               WORKER INCENTIVES

    Any compensation offered for improved
    performance or  behavior is an incentive.
    Incentive plans can be divided into three
    broad categories:   direct monetary, indirect
    monetary, and non-monetary [5].  Under  a
    direct monetary plan,  each employee is  com-
    pensated directly for  his or her output or
    increased output.   Direct plans can be
    either individual or group.  Under the  group
    plan, each member of the group is compensated
    an equal percent of bonus for the group's in-
    creased output.
                                              600

-------
A study of over 400 companies of all sizes
to determine the effect on productivity of
work measurement and wage incentives
indicated that productivity in plants with
wage incentive plans was 42.9 percent
higher than plants with measured day work
alone, and 63.8 percent higher than plants
with no measurement [3,4].  These surveys
indicate that a key to increased producti-
vity is work measurement.  "Without mea-
surement, we don't know where we are or
where we're going"[6].

Although monetary incentives have been
utilized in manufacturing industries for
several decades, their application to the
service-oriented sector of the economy
has traditionally been limited.  The re-
cent  shift in the economy from manufac-
turing to services has been accompanied by
a growing interest in the use of incentive
programs for service-oriented public
employees as recently reported by the
National Commission on Productivity and
Work  Quality [2].
     SOLID WASTE  SYSTEM REQUIREMENTS

 A wage  incentive program for  collection
 personnel must satisfy two major  require-
 ments:  technical and political.

 A sound technical  base is fundamental  to
 an incentive program.  Each route assign-
 ment must contain  a known collection work
 content, i.e., the work  time  required  to
 accomplish  a given task  at a  normal work
 pace.   In waste  collection, work  content
 is the  standard  time required for com-
 pleting the collection assignment, proper-
 ly allowing for  variations in tonnage,
 haul distance, equipment, crew-size, and
 other influencing  variables.   The work
 content must be  accurately calculated
 utilizing work measurement techniques  [10].

 The second  major requirement  is political
 in nature.  All  involved parties  -- elected
 officials,  citizens, public works manage-
 ment, and workers/union  -- must be amenable
 to the  concept of  time and wage incentives.
 On an on-going basis, management  must  con-
 tinue to support the program  fairly, e.g.,
 defend  the  program to the citizens and to
 municipal employees not  included  in the
 incentive plan,  and maintain  competitive
 base earnings.   In addition,  all  parties
 must be willing  to share the  savings re-
 sulting from increased productivity.
          TEAM TIME  INCENTIVES

 Several constraints are inherent in solid
 waste collection system design and
 modeling.  These include 1) the daily col-
 lection  route is fixed,  2)  the  daily work
 content  is  variable,  and 3)  the work is
 typically accomplished by crews.

 If  each  collection  route is  to  be picked up
 consistently on its  scheduled day, provi-
 sion must be made for handling  the fluctu-
 ations that inevitably occur in the  work
 content  of  a collection  area from day  to
 day.   Management has  only two alternatives
 for  dealing  with  these  variations:   either
 make changes in the  resources  assigned to
 the  task,  or allow the  time  required for
 the  task to  vary  into overtime or
 into 'undertime'  (in which case the  men
 would be idle a portion of the day).

 Generally, municipalities tend to avoid
 overtime,  electing instead to  permit the
 crews to complete their assignments  early.
 If the workers are allowed to  leave  on
 completion of their  assignment and are paid
 for  a complete day,  the program provides a
 time incentive (the  'task system').

 Although time incentive programs  are  fre-
 quently used and  offer  advantages to  both
 management and workers,  they require  bal-
 anced work assignments  for cost effective-
 ness and fairness to individual employees
 [12].   On a  day-to-day  basis,  balanced
 work assignments  are difficult to maintain
 between individual crews due to uncontrol-
 lable factors, especially sudden  equipment
 failure [11].

 An approach  used  by  some municipalities to
 reduce the inequities between  individual
 crews is to  group several crews together
 into a team  under the field  supervision of
 a foreman  [7].  The  team approach not only
 provides a mechanism for dealing  with daily
 fluctuations in the  work loads of its mem-
 ber  crews but also tends to  average  out
 inevitable changes in waste  generation
 patterns occurring in a dynamic city,  there-
 by requiring somewhat less frequent  route
 revision [9].
  WASTE COLLECTION ROUTE DEVELOPMENT

In the development of waste collection routes,
it is recommended that each route assignment
be specified by a bounded geographic area.
Detail sequential routing within the
assigned area is best accomplished by public
works management and collection personnel
utilizing their experience and knowledge.
Typically, a collection area consists of
only a few square blocks, thus permitting
heuristic design by collection personnel.  In
addition, most motivation and job performance
investigations indicate that workers perform
at higher overall productivity levels if
they have been involved in the design and
planning of their work activities.  There-
fore, detailed sequential modeling of waste
collection, e.g., the Chinese Postman,
Eulerian Tour, and Traveling Salesman algo-
rithms, has little practical value.

Route area development requires partition-
ing of the city into team and crew col-
lection areas each defined so as to provide
the desired work day.  The total work con-
tent for each area must account for all work
time elements:  the sum of the pickup times
as computed for each block, disposal time,
refueling time, allowance for unavoidable
delays, and worker rest breaks.
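As a rough illustration of this route area development (not the simulation programs described below), blocks taken in collection order can be grouped greedily into crew areas whose total work content approximates the desired work day; the per-day elements are lumped into one overhead figure here for simplicity.

```python
# Illustrative sketch only: greedily group blocks, taken in collection order, into
# crew route areas whose total work content approximates the desired work day.
# Block pickup times would come from the predictive equations; disposal, refueling,
# allowances, and rest breaks are lumped into a single per-day overhead figure.
def build_route_areas(block_pickup_hours, per_day_overhead_hours, target_day_hours):
    areas, current, work = [], [], per_day_overhead_hours
    for block, hours in enumerate(block_pickup_hours):
        if current and work + hours > target_day_hours:
            areas.append((current, round(work, 2)))     # close out this crew's area
            current, work = [], per_day_overhead_hours
        current.append(block)
        work += hours
    if current:
        areas.append((current, round(work, 2)))
    return areas                                         # number of areas = crews required

# Example: eight blocks of 0.3-0.6 h each, 1.5 h/day of fixed elements, 6.5-h target day
print(build_route_areas([0.4, 0.5, 0.3, 0.6, 0.5, 0.4, 0.5, 0.3], 1.5, 6.5))
```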

The development of route areas through
modeling requires a knowledge of all work
content time elements.  Fundamental to the
determination of pickup times for each
block is detailed block-by-block field data
                                            601

-------
on all variables influencing collection
times and tonnages.   This data includes set-
out containerization, street width and
street length, and may be obtained by field
survey teams that accompany crews during
collection.

Utilizing the route area concept based on
work content modeling, new collection routes
were developed and implemented for Coving-
ton, Kentucky.  Covington has a population
of over 50,000 and is located in the Great-
er Cincinnati metropolitan area.

Based on the block-by-block data for Coving-
ton, together with work measurement results,
predictive equations were developed for cal-
culating the standard collection time for
every block in the City.  Times for the
other work content elements were obtained
by direct field timing and work sampling.

A system of simulation programs incorpor-
ating all of the work time elements was
utilized to design collection route areas
for the City.  The programs were user-ori-
ented with remote terminals linked to an
IBM 370/168.  Fixed inputs included collec-
tion frequency, crew size, and length of
work day.  Variable inputs included individ-
ual block identification numbers and map
distances.  Detailed outputs included all
clock times and load tonnages as each truck
progressed through its work day   (reference
Appendix A for example printout).  Final
output defined the number of truck crews re-
quired for collection of the entire City.
In addition, balanced route area assignments
were determined for individual truck crews,
as shown in Figure 2.  This type of route
development provides the technical basis
for a wage incentive program.
     A RECOMMENDED WAGE INCENTIVE
        PROGRAM FOR SW WORKERS

If the technical and political prerequi-
sites can be satisfied, an effective wage
incentive program may be developed that
offers high probability of long term suc-
cess.  A wage incentive program that lends
itself particularly well to solid waste
collection in medium and larger cities
combines three basic concepts:  incentive
teams, a time/wage incentive, and elective
work loads.   It will be referred to as an
  Elective Incentive Contract (EIC) program [8].

Incentive Teams

Under EIC, collection is accomplished by
incentive groups or teams consisting of
approximately three to nine trucks and
their assigned crews supervised by a
field foreman.  All supportive SW personnel
including maintenance and disposal workers
as well as the superintendent receive an
incentive bonus based on the average of
all teams.

Time Incentive

 [FIGURE 2.  AREA ROUTE ASSIGNMENTS FOR TEAM II (TRUCKS 7 THROUGH 12).]

 Each team is assigned a collection route
 area consisting of a standard work day
 (e.g., 6.5 hours average) that is less than
 the normal 8-hour day.  Team members are
 paid for the full day and are permitted to
 leave work after their respective team area
 has been collected to the satisfaction of
 their field foreman.  The time incentive
 encourages workers to reduce their collec-
 tion time by increasing the .work pace,
 reducing break time, or through worker
 ingenuity.

 Elective Participation

 In addition to the 6.5-hour standard work
 day, individual teams may elect to contract
 additional work content in return for a
 wage incentive bonus.  The bonus is based
 on an equal sharing by workers and city
 of the resulting savings.
                                            602

-------
             EIC EXAMPLE

To illustrate the EIC program, consider a
small solid waste collection/disposal
system with two collection teams, both
teams initially consisting of five rear-
loading packers and drivers, five laborers,
and a field foreman.  Team I contracts the
standard 6.5-hour work day and receives
base salaries without a wage incentive
bonus.  Team II contracts for a 7.5-hour
work day and receives base salaries plus a
wage incentive calculated by the simple
equation below:

  Wage Incentive   (Contract - Standard Day)
     (percent)   = ------------------------- x 100
                            8 Hours

For a 7.5-hour contract day, the computa-
tion is:

  Wage Incentive   (7.5 - 6.5)
     (percent)   = ----------- x 100 = 12.5%
                       8.0

In the case of a Team II worker earning a
base salary of $200 weekly, the resulting
wage incentive bonus would be $25 weekly.

A 7.5-hour contract work day for the 10-
worker team reduces the equivalent manpow-
er requirements for the total system by:

  Manpower    (7.5 - 6.5 hours/worker) x 10
  Reduction = ----------------------------- = 1.54 workers
                   6.5 hours/worker

Assuming a 30 percent overhead (including
fringe benefits), the manpower reduction
savings resulting from the 7.5-hour con-
tract work day are:

  Manpower Reduction
      Savings        = ($200/week)(1.54)(1.30) = $400/week

Net labor savings to the city is equal to
the manpower reduction savings less bonus
payments to team workers, foreman, and
supportive personnel.  Bonus payments to
the workers of Team II would amount to
$250 (10 workers @ $25 each).  Assuming
that bonus payments to the foreman and
support personnel amount to $50, the net
labor savings to the city would be:

  Net Labor
   Savings  = $400 - $250 - $50 = $100/week
In addition to labor savings, the city
would realize equipment savings associated
with the reduction in manpower.  For each
crew reduced (2 workers), there is a reduc-
tion of one truck.  Assuming a weekly truck
cost of $200, the equipment savings would
be:

  Equipment   1.54
   Savings  = ---- ($200/week) = $154/week
                2
In this example, total weekly savings to
the city equals $254 while the total weekly
bonus payments to workers (including fore-
man and support personnel) equal $300.
Obviously truck and manpower reductions can
occur only in integer units.
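
The arithmetic of this example can be reproduced in a few lines; the sketch below simply restates the assumed values given above.

```python
# Worked EIC example (all inputs are the assumed values from the text).
standard_day = 6.5          # hours
contract_day = 7.5          # hours
team_size = 10              # workers on Team II
base_salary = 200.0         # $/week per worker
overhead = 0.30             # fringe benefits, etc.
truck_cost = 200.0          # $/week per truck
other_bonus = 50.0          # foreman and support personnel

incentive_pct = (contract_day - standard_day) / 8.0 * 100            # 12.5 percent
worker_bonus = base_salary * incentive_pct / 100                      # $25/week
manpower_reduction = (contract_day - standard_day) * team_size / standard_day  # 1.54
labor_savings = base_salary * manpower_reduction * (1 + overhead)     # about $400/week
net_labor_savings = labor_savings - worker_bonus * team_size - other_bonus     # $100/week
equipment_savings = manpower_reduction / 2 * truck_cost               # one truck per 2-worker crew
print(incentive_pct, round(net_labor_savings), round(equipment_savings))
```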

Since under the EIC program, the work
groups contract to complete their elected
collection assignment, overtime pay is
avoided except for scheduled holidays.
Care must be exercised during implementa-
tion of collection improvements to avoid
employee layoffs if at all possible.
Consequently, actual savings may lag
implementation until manpower levels are
adjusted through attrition or reassignment.
              CONCLUSION

The findings from this project  indicate
that a wage incentive program for  municipal
solid waste personnel is  feasible  and
technically possible, but that  political
problems more complex than  found in  private
industry must be dealt with.

The recommended Elective  Incentive Contract
(EIC) program is based on carefully  de-
fined truck route areas with  work  content
determined from computerized  modeling based
on detailed field data and  work measure-
ment techniques.  Incentives  are paid to
small groups electing work  content levels
above standard.  Savings  resulting from
increased productivity are  shared  equally
between worker groups and management, and
the municipality.
             ACKNOWLEDGEMENTS

This research was in part supported by Environ-
mental Protection Agency Grant R801617.   The
authors wish to extend their appreciation to the
City Manager, the Department of Public Works, and
the City Commissioners in Covington, Kentucky;
and to the officials of District Council No.
51, The American Federation of State, County, and
Municipal Employees, AFL-CIO, and the men of its
Local No. 237.
              REFERENCES

1.  "Borrowing Too Much To Keep Running",
    Business Week, September  22,  1975.

2.  Employee Incentives To Improve  State
    And Local Government Productivity,
    National Commission on Productivity
    and Work Quality, March 1975.

3.  Fein, M., Rational Approaches To
    Raising Productivity,  Work  Measure-
    ment and Methods Engineering  Division,
    Monograph No.  5, American Institute of
    Industrial  Engineers,  1974.

4.  Fein, M., "Work Measurement and Wage
    Incentives",  Industrial Engineering,
    Vol. 5, No.  9, September  1973.
                                              603

-------
  5.  Niebel, B.W., Motion and Time Study,
      Fifth Edition, Irwin, 1972.

  6.  Rice, R.S., "Work Measurement: An
      Upward Trend," Industrial Engineer-
      ing, Vol.  7, No.  9, September 1975.

7.  Shebanek, R.B., Shell, R.L., and Shupe,
    D.S., "Increase Productivity Through
    Crew Assignment", Proceedings, Twenty-
    Fifth Annual Institute Conference,
    American Institute of Industrial
    Engineers, 1974.

8.  Shell, R.L., Shupe, D.S., and Albrecht,
    O.W., "The Use of Incentives in Solid
    Waste Collection/Disposal", News of
    Environmental Research in Cincinnati,
    U.S. Environmental Protection Agency,
    March 1976.

  9.  Shell, R.L., and  Shupe,  D.S., "A Study
      of The Problems of Predicting Future
      Volume of Wastes", Solid Waste Manage-
      ment Refuse Removal Journal, Vol.  15,
      No.  3, March 1972.

 10.  Shell, R.L., and  Shupe,  D.S., "Pre-
      dicting Work Content For Residential
      Waste Collection", Industrial Engineer-
      ing, Vol.  5, No.  2, February 1973.

 11.  Shell, R.L., and  Shupe,  D.S., "Work
      Standards For Waste Collection",
      Proceedings, Annual Systems Engineering
      Conference, American Institute of
      Industrial Engineers, 1973.

 12.  Shupe, D.S., and  Shell,  R.L., "Balanc-
      ing  Waste Collection Routes", J. En-
      viron. Sys., Vol.  1(4),  December,  1971.
 APPENDIX A.  SAMPLE COMPUTER OUTPUT OF ROUTE
              DESIGN SIMULATION, TEAM I.

 [Interactive terminal session (program NEWMOD1), largely illegible in this
 reproduction.  The program prompts the user for block ID numbers and map
 distances, prints cumulative collection time, load weight, and clock time
 after each block, and issues messages for the refueling allowance (8.0
 minutes), duplicate ID entries, a full load, travel to the land-fill,
 route totals (total weight, collection time, travel time, time at the
 land-fill), and a 30-minute lunch-break clock advance.]
                                             604

-------
                               PLANNING FOR VARIATIONS IN SOLID WASTE GENERATION

                                        Donald Grossman,  Graduate Student
                                       Civil Engineering  Systems Laboratory
                                      Massachusetts Institute of Technology
                                         Cambridge, Massachusetts 02139
     A methodology  developed for analysis of municipal
solid waste  collection  explicitly plans for variations
in waste  generation.  Using the method, a variety of
district  size,  truck  size,  and crew size alternatives
can be evaluated  in a multiobjective framework,  based
upon a probabilistic  description of overtime require-
ments. The  methodology is  applied in a case study for
Warwick,  Rhode  Island.

     Objectives considered  in the case study included
economic  efficiency and the fraction of days requiring
overtime. The  model  uses the historical distribution
of district  waste generation to forecast variations in
waste generation  as a function of district size.  Col-
lection productivity  for a  variety of truck and crew
combinations is also  forecast.  System alternatives are
evaluated analytically.  The output provides a basis
for choice by explicit  representation of the tradeoffs
between objectives.

             Resource Allocation Decisions

     Selection of truck, crew, and district size is a
key element  in municipal solid waste management.  This
research  investigates these resource allocation deci-
sions and develops a  model to forecast the tradeoffs
between objectives for alternative planning policies.
The resource allocation decision variables are isolated
as the primary controls on collection available to
local system managers.   A variety of other controls are
possible, but these are typically constrained by envir-
onmental  factors  or decisions taken over a longer time
horizon.

     The choice of truck size is one principal resource
decision.  Capacity,  given district sizes, crew sizes,
and a processing  site configuration, determines the
expected  frequency and total duration of haul. As capac-
ity increases,  there  are increased costs due to large
truck sizes, but these trade off against savings in crew
and vehicle costs due to decreased frequency of haul.
Secondary effects from choosing large vehicles are a
limited  turning ability and less queuing at processing
sites.

     The choice of crew size is the second principal
resource decision.  Increased crew size, given truck
size, district size,  and a processing site configura-
tion, results in  expected decreases in overtime; these
trade off against expected increases in fixed hour
labor costs and in nonproductive haul time.  Secondary
considerations are crew safety and crew comfort; also,
labor relations or political considerations may act to
constrain the available crew sizes.  Note that recent
practice  supports one to three member crews.

     The choice of district size, and therefore the
number of trucks  and crews required for collection
(assuming one truck and crew per district), is the
third principal resource decision.  A priori, the number
of districts, and therefore the number of trucks and
crews, likely has the greatest impact upon system per-
formance.  Larger district sizes imply a smaller number
of districts, and therefore fewer trucks and crews to
service a given town.  For a fixed truck and crew size,
larger districts result  in  an  expected  decrease  in  cap-
ital costs because fewer resources are  required,  but
these trade off against  increases in  truck and crew
operating costs due to longer  expected  work days.

     The above exemplifies  some of the  cost tradeoffs
in resource allocation.  The problem  is framed as a
supply problem with trucks  and crews  as inputs.   One
objective to consider is cost.  In addition, there  is
likely some disutility for  excessively  long collection
days, which suggests that the system manager might
want to consider multiple planning objectives.

     Waste generation is not deterministic.  Assuming
a constant rate of collection, the length of the  collec-
tion day varies, and this induces  a  variation in the
cost and other objectives.  In addition, costs associat-
ed to resources are not well behaved, but are realized
in discrete increments.  Resources themselves are also
discrete.  For example, crews  are typically paid  for 8,
9, 10, or 11 hour days, and not for continuous time-
steps.  Both the stochastic character of waste genera-
tion and the discontinuities in input cost can be
modeled using the methodology  of this research.

                     Methodology

     This section presents a procedure  for choosing a
resource allocation alternative, readily adaptable  for
use by a local system manager.  First,  possible collec-
tion system objectives are proposed,  including appropri-
ate measures of effectiveness.  Second, the set of
alternatives to be considered  is identified.   Third, a
structure is developed for the evaluation of proposed
resource allocation alternatives.   Fourth,  a multi-
attributed utility framework is presented as  a means of
choosing between alternative collection system  config-
urations.

     The analysis assumes a single decision maker, and
choice within a maximum  utility framework.  The  most
obvious objective is financial, measured in present
value dollars.  The case study has shown that dollar
costs provide insufficient  basis for  choice.  Using only
dollar costs, the manager would always  choose to  operate
very few small trucks, and  prefer to  pay large amounts
of overtime.  Instead, a second objective is to  limit
the expected length of overtime, expected number  of days
on which overtime is paid,  or  the expected number of
days with one, two, three,  or  more hours of overtime.
Modeling this objective  affords the manager control of
distribution and frequency  of  overtime  hours.

    The set of alternatives may be characterized  as
points in a three dimensional  space.  Truck size  altern-
atives are based upon industry standards (e.g. 13,  16,
18, 20, 25 cubic yards)  and limited only by technical
considerations such as highway weight regulations.  Crew
sizes generally range from  one to three members  and may
be constrained by union  or  similar considerations.  Dis-
trict size alternatives  depend upon the number of trucks
and a design number of households.  Generally, the  number
of districts will be a uniform multiple of the number
of trucks  (e.g. 1,2,3... trucks imply 5,10,15...  daily
collection districts).
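
As a small illustration, the alternative space can be enumerated directly; the candidate sizes below are the example values quoted in the text, and the five-districts-per-truck multiple is likewise only illustrative.

```python
from itertools import product

# Candidate levels for each decision variable (illustrative values from the text).
truck_sizes_cy = [13, 16, 18, 20, 25]        # cubic yards
crew_sizes = [1, 2, 3]                       # members
n_trucks = [1, 2, 3, 4, 5]                   # each implying 5x daily districts

alternatives = [
    {"truck_cy": t, "crew": c, "trucks": k, "districts": 5 * k}
    for t, c, k in product(truck_sizes_cy, crew_sizes, n_trucks)
]
print(len(alternatives), "resource allocation alternatives to evaluate")
```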
                                                       605

-------
      The  framework  for  evaluation of  alternatives  is  an
 accounting  scheme.  The proposed method will yield ana-
 lytic results.   District  size  determines  the number of
 trucks and  crews.   This,  in  conjunction with truck
 capacity  and  crew size, determines  the regular  costs
 paid  on a fixed  length  workday basis, and the capital
 costs associated to owning the trucks and ancillary
 support facilities.  This component of total system
 cost  is deterministic and easily calculated.  The  other
 important costs  are those for  truck operation or over-
 time.   The  evaluation of  these requires a description
 of  the demand for collection services including the
 time  variation of district waste generation.  The
 evaluation  also  requires  a   model of  the  supply side,
 that  is a model  of  the  productivity of truck and crew
 combinations.

      The  waste collected  from  a residential waste  col-
 lection district will normally vary greatly over time.
 The data  from Warwick showed the magnitude of these
 excursions  to be greater  than  50 percent  of the mean
 weekly wasteload measured in pounds.  Three sources are
 expected  to explain these variations.  First, even
 after correcting for changes in the numbers of house-
 holds,  the  data  may exhibit  a  long range  trend in waste
 generation  rates.   This suggests changing  consumption/
 disposal  patterns in the population as a whole.  If the
 analyst has a sufficiently long record of weights,  it
 is  possible to remove the trend and later  reintroduce
 it  for an appropriate design year.  Second, variations
 will  normally be manifest within a single year due  to
 seasonal  consumption and disposal patterns.  These
 arise because of sociological  factors including vaca-
 tions,  habits, and  economic  trends, and because of
 natural factors  including weather and climate.   Again,
 if  the analyst has  a sufficiently long, trend free
 record, the seasonal variations could be removed by a
 technique such as Fourier analysis.  Third, it is
 postulated  that  there is an  underlying random component
 in  household waste  generation.  The process need not
 necessarily be known, but the  analyst must test whether,
 at  the district  level of aggregation, an arbitrarily
 chosen set  of households exhibits the same waste gener-
 ation  character  as any other district containing an
 equal  number of  households.
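
One plausible way to carry out this preprocessing is sketched below; it assumes a linear least-squares trend and a Fourier filter retaining a few annual harmonics, which is only one of several reasonable techniques for the decomposition described here.

```python
import numpy as np

def detrend_and_deseasonalize(weekly_loads, n_harmonics=2):
    """Remove a linear trend and the lowest seasonal harmonics (hypothetical helper).

    weekly_loads: one value per week for an integer number of years.
    Returns (residual, trend, seasonal) so the trend can be reintroduced
    for an appropriate design year if desired.
    """
    y = np.asarray(weekly_loads, dtype=float)
    t = np.arange(len(y))

    # Linear trend by least squares.
    slope, intercept = np.polyfit(t, y, 1)
    trend = slope * t + intercept
    detrended = y - trend

    # Keep only the first few annual harmonics as the "seasonal" component.
    spectrum = np.fft.rfft(detrended)
    seasonal_spectrum = np.zeros_like(spectrum)
    weeks_per_year = 52
    for h in range(1, n_harmonics + 1):
        idx = h * len(y) // weeks_per_year      # bin of the h-th annual harmonic
        seasonal_spectrum[idx] = spectrum[idx]
    seasonal = np.fft.irfft(seasonal_spectrum, n=len(y))

    residual = detrended - seasonal             # candidate "random" component
    return residual, trend, seasonal
```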

      The  following discussion  assumes that the decision
 maker's purpose  is to choose a single resource alloca-
 tion.   The  importance of considering only a single
 allocation  is that it enables  the analyst to model the
 distribution of  the quantity of waste generated for
 collection,  without the need to model the relationship
 of  different waste quantities over time.   Specifically,
 the preservation of autocorrelation need not be con-
 sidered.

     The minimum data requirement to model time varia-
 tion are weekly wasteloads from sample daily collection
districts for a year, and the numbers of households for
 each observed district.   If  the wastes aggregated to
 the district level can be shown to be normal,  it is
 possible to  model the district wasteload as the sum of
 independent, identically distributed waste generation
 distributions.  The household distribution independence
and form cannot be tested with available data.   It is
possible,  however,  to  arbitrarily choose to model indi-
vidual households as normal,  and,  as long as a  reason-
able number  are aggregated into districts, the  normal
distribution in district level waste,  if substantiated
by the data, can be  preserved.   For the case where ob-
served districts have  an average of k households
              m  = n u                                       (1)

              σ² = k n s²                                    (2)
where  n   is  the number of households in the district
size alternative to  be forecast,  and  u  and s  are the
estimated  parameters assumed to characterize the normal
distribution  on wastes generated for collection by in-
dividual households.  Using the model of equations (1)
and  (2), the  demand  for waste collection services for
a  range of district  size alternatives may be forecast.
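
A sketch of the demand forecast implied by equations (1) and (2) follows; u and s are the per-household estimates, while the value of k (average households per observed district) shown in the example is an assumed placeholder rather than a reported figure.

```python
from math import sqrt

def forecast_district_demand(n_households, u, s, k):
    """Forecast the normal distribution of weekly district wasteload.

    n_households: households in the district size alternative (n in eq. 1)
    u, s: estimated per-household mean and standard deviation, from data
          observed in districts averaging k households.
    Returns (mean, standard deviation) of district wasteload in pounds.
    """
    mean = n_households * u                      # equation (1)
    std = sqrt(k * n_households * s**2)          # equation (2)
    return mean, std

# Hypothetical use; k = 400 is an assumed value for illustration only.
for n in (270, 360, 480, 600):
    m, sd = forecast_district_demand(n, u=61.63, s=18.42, k=400)
    print(n, round(m), round(sd))
```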

    The second  concern is to describe the productivity
of various crew and  truck size alternatives for the
set of proposed district size alternatives.  To relate
these to system objectives,  total collection time must
be predicted  for forecast wasteloads, and, in turn,
these collection times can be translated into expected
cost and other  systems objectives.   For a particular
district size, consider the examination of t1, t2, t3,
...tk length workdays (for example 8, 9, 10, and 11
hour days).  The discretization should correspond to
cost increments.  Corresponding to these times are
maximum collectable wasteloads, call these w1, w2, w3,
...wk, for each truck and crew alternative.


    The total workday may be characterized as the sum
of on route collection, travel (from garage to and from
route, and from route to processing or disposal site),
and nonproductive time (breaks, breakdowns, and main-
tenance).  Typical values for nonproductive time may
be estimated.  The number of hauls and on route col-
lection time depend on wasteload, truck size, and crew
productivity.  To complicate this analysis, trucks are
volume constrained.  Therefore, an expected density of
waste is required to convert volumetric capacity of
each truck size alternative to a corresponding weight
capacity.  The procedure for estimating density is to
use observed data points having more than a single haul
to the processing site, and assuming the first haul is
full.  Then the analyst can estimate density using
known truck weights and volume.  In the Warwick case
study, a point estimate of density was justifiable.
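
The density estimate itself is a simple calculation; the sketch below assumes, as stated, that the first haul of each multi-haul day is at full volumetric capacity, and the scale weights shown are hypothetical.

```python
def estimate_waste_density(first_haul_net_lbs, truck_volume_cy):
    """Average density (lb per cubic yard) over days with more than one haul,
    assuming the first haul of each such day fills the truck."""
    return sum(w / truck_volume_cy for w in first_haul_net_lbs) / len(first_haul_net_lbs)

# Hypothetical scale weights (lb) for full 20 cubic yard loads.
print(round(estimate_waste_density([13100, 13500, 13350], 20)))   # about 666 lb/cy
```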

    Shuster4 has developed and calibrated a model to fore-
cast Y, the service time per household, in minutes.
The regression equation, estimated for a curbside, once
weekly, incentive system is

     Y = .0088 x1 + .0570 x2 - .0010 x3 - .0423 x4 + .770
                                                      (3)

where x1 is the pounds of waste per service per week,
x2 is the crew size, x3 is the percent one-way items,
and x4 is the collection miles per day.  Shuster esti-
mates similar models for other levels of service and
work rules.  Therefore, for each resource allocation
alternative, the on route collection time may be fore-
cast for all levels of wk.  Under an assumed waste
density, the number of hauls and the corresponding
travel times may also be determined.  Then, for each
resource allocation alternative, the analyst has de-
termined each wk associated to each workday length tk.

It is important to note that productivity is assumed
to be deterministic,  and more research is required to
test the validity of this assumption.
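
Under these assumptions, the translation of a forecast wasteload into a collection day length might be sketched as follows; the regression coefficients are those of equation (3), while the haul time, nonproductive allowance, and example inputs are illustrative placeholders rather than measured values.

```python
import math

def collection_day_hours(wasteload_lb, households, crew_size, pct_oneway,
                         miles_per_day, truck_capacity_lb,
                         haul_minutes=40.0, nonproductive_minutes=45.0):
    """Forecast the total workday length (hours) for one district on one day.

    Per-household service time follows equation (3); the haul time and
    nonproductive allowance are assumed values, not measured ones.
    """
    lb_per_service = wasteload_lb / households
    minutes_per_household = (0.0088 * lb_per_service + 0.0570 * crew_size
                             - 0.0010 * pct_oneway - 0.0423 * miles_per_day
                             + 0.770)
    on_route = minutes_per_household * households
    hauls = max(1, math.ceil(wasteload_lb / truck_capacity_lb))
    return (on_route + hauls * haul_minutes + nonproductive_minutes) / 60.0

# Hypothetical district: 400 households, 25,000 lb, 3-member crew,
# 13,320 lb assumed weight capacity -- roughly an 8-hour day.
print(round(collection_day_hours(25000, 400, 3, 72, 12.0, 13320), 1))
```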

    Therefore,  for each resource  allocation alternative,
the distribution  on  wasteloads yields a distribution
on the total  length  of collection  day.   Given estimates
of cost for varying length collection days, the expect-
ed cost of  collection and values  for other objectives
may be developed.  The  levels  of attainment of  objectives
can be represented by  points  on a transformation  curve.
The decision maker's problem is to  choose between
alternative operating  points  along  the  transformation
curve.   The chosen point  is  the desired  level of
                                                       606

-------
tradeoff between system objectives.  It should repre-
sent the alternative with the greatest utility to the
system manager.  Choice can be made subjectively,
from a graphical representation, or can be made in a
multiobjective framework as described by Keeney3, and
others.

           Case Study - Warwick, R.I.

    The resource allocation methodology was applied
in a case study for Warwick, R.I.  First, Warwick and
the available data sources are described.  Second, the
key steps and results of the evaluation process are
presented.  Third, output under a variety of assump-
tions and system objectives is presented.  Finally,
tentative resource allocation policy recommendations
are drawn.

    Warwick is a moderately large community with a
1970 census count of 83,694 people.  The municipal
Public Works Department provides collection from resi-
dences:  there are nearly 24,000 dwelling units ser-
viced  in public collection.  At the time of data col-
lection, eleven 20-yard  loadmaster compactors were
the town's primary collection vehicles.  The vehicles
were operated by three member crews paid on a 40-hour
incentive system, with overtime paid at time and one-
half.  Data were collected by ACT Systems1 for 20 of
Warwick's collection districts  for the entire period
from November, 1972 to November, 1973.  Data collected
included total weekly wasteloads, and the respective
numbers of households in each of the districts.  Dis-
aggregation to weight per haul  enabled estimation of
waste  density.  The data used for the Warwick case
study  represent a minimum set necessary for analysis.

    The 20 timestreams of 52 observations in weekly
district waste generation contained only 996 non-zero
observations.  From these, average weights per household
per week were calculated.  A weak test suggested that
all producers behaved as if drawn from a single popu-
lation.  Using the 996 observations as independent
samples from a distribution on average household
wastes, the hypothesis that the underlying dis-
tribution is normal may be tested.  Independence of
samples was screened by checking correlation statis-
tics across districts.  The estimated sample para-
meters for the distribution were a mean of 61.63
pounds and a standard deviation of 18.42 pounds.  In
addition, a minimum-correlation sample of 52 observa-
tions produced essentially the same distribution para-
meters.  Using a Chi-squared goodness of fit test, the
hypothesis that the samples came from a normal dis-
tribution with the estimated parameters could not be
rejected.  Using the sample parameters, distributions
in waste generated were forecast for district size
alternatives ranging from 270 to 600 households per
district.

    The modeling of waste collection time used Shuster's
regression equation.  In addition, three-quarters of an
hour of official breaks was assumed.  The garage and pro-
cessing site are at the same location, and were model-
ed as equidistant from all districts.  Only the three
member crew size alternative was tested.  ACT Systems
data reported 72 percent one-way items and also re-
ported data which were converted to collection miles
per household.  Using the model of equation (3), the
total collected wasteload wk, for each level of time,
tk, could be forecast.

    Five commercially available truck size alterna-
tives were selected.  Capacities ranged from 13 to 25
cubic yards.  The mean density of waste used to con-
vert volumetric capacity to weight capacity was 666
pounds per cubic yard.  The assumed discretization of
costs was that associated with 8, 9, 10, and 11 hour
long days.  These were converted to probabilistic
equivalents.  Table 1 shows the model output for a 20
cubic yard truck, with a three member crew, operating
in a district of 480 households.

  P1 = Prob (t ≤ 8 hours)    = .73

  P2 = Prob (8 < t < 9)      = .20

  P3 = Prob (9 < t < 10)     = .04

  P4 = Prob (t > 10 hours)   = .03

  Table 1:  Sample Distribution on Collection
            Day Length

The model forecasts that roughly 73 percent of the
days have collection times less than 8 hours.  Similar
information may be inferred for all other resource
allocation and day length alternatives.
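
Probabilities such as those in Table 1 follow directly from the forecast (normal) wasteload distribution and the thresholds w1 ... wk; a sketch of that calculation, with hypothetical numbers, is given below.

```python
from statistics import NormalDist

def day_length_distribution(mean_lb, std_lb, thresholds_lb):
    """Probabilities that the collection day falls in each length class.

    thresholds_lb: maximum collectable wasteloads w1 < w2 < ... < wk for
    workday lengths t1 < t2 < ... < tk (e.g. 8, 9, 10 hours).
    Returns k+1 probabilities; the last is the chance of exceeding tk.
    """
    demand = NormalDist(mean_lb, std_lb)
    cdf = [demand.cdf(w) for w in thresholds_lb]
    probs = [cdf[0]]
    probs += [cdf[i] - cdf[i - 1] for i in range(1, len(cdf))]
    probs.append(1.0 - cdf[-1])
    return probs

# Hypothetical wasteload mean/std (lb) and thresholds for 8, 9, 10 hour days.
print([round(p, 2) for p in day_length_distribution(29000, 4500, [31500, 35500, 39000])])
```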

    Two objectives were chosen for the Warwick case
study.  One proposed objective is the cost for each
system alternative, measured in dollars; costs include
regular, capital, operating, and overtime components.
In a supply model framework the dollar benefits should
be the same for all supply alternatives.  The second
proposed objective, the fraction of days exceeding 8
hours in length, provides some measure of the dis-
utility of overtime incidents not captured by the
dollar costs.  Resource allocations having both lower
cost and lower fraction overtime requirements are pre-
ferred.

    The expected fraction of workdays exhibiting over-
time is easily developed from information such as that
in Table 1.   Standard cost data collected by ACT
Systems were  used in the assessment of system costs.
For the Warwick case study, a single year planning
horizon was modeled in order to assure comparability
with the historical data.  Costs were normalized to
dollars per ton.  Figure 1 shows the tradeoffs between
the attributes of the system objectives.
[Figure 1 plots collection cost (dollars per ton, roughly 11 to 18) against
fraction overtime (0 to .5) for the forecast alternatives.]

     Figure 1:  Attribute Transformation Array
                                                       607

-------
Each point represents a distinct resource allocation
and shows the levels of attributes forecast for that
alternative.  A southwest corner rule identifies non-
dominated alternatives.  These are the circled points
in the figure.

     The actual choice of alternative requires prefer-
ences for tradeoffs between objectives.  The past War-
wick resource allocation shows implicitly the prefer-
ences between objectives.  In 1973, Warwick used eleven
20-yard trucks with three member crews.  The model
forecast for Warwick's chosen alternative was collec-
tion costs of $12.75 per ton, and overtime paid on 15
percent of the workdays.  The actual Warwick system,
using the historical records  from November, 1972 to
November, 1973 exhibited collection costs of $12.82 per
ton.  The model forecast falls within one percent of
the actual system cost.  No historical data on the act-
ual distribution of overtime were available.

     Actual validation of the forecasts is difficult.
A plausible technique is testing the sensitivity of the
chosen alternative to the input assumptions.  To illus-
trate, this research tested changes in overtime and
wage rates.  The effect of different overtime rates
could provide information for a negotiation process.
For Warwick, double, and even triple normal wage rates
for overtime resulted in only minor cost advantages for
larger trucks.  Higher overtime rates had greatest
effect for alternatives with a greater expected fraction
of overtime, but overall fixed hour wages and base
truck costs dominated total system costs.  Similarly,
doubling regular wage rates had little effect on the
least cost system alternative; this suggests that wages
already dominated the total costs.

     In general, the Warwick Case Study shows that the
actual 1973 collection system had evolved to represent
a fairly reasonable tradeoff between the investigated
alternatives.  Observe that low overtime alternatives
result in significant cost increases.  For the total
system, the model forecast approximately a one-quarter
million dollar increase in costs by changing from a 15
percent to a no overtime alternative.  Due to the vari-
ability in the district waste generation, planning for
even moderate amounts of overtime is likely advantage-
ous.  Whenever possible within the context of overtime
considerations, fewer and larger districts are prefer-
able.  Total system costs seem relatively insensitive
to truck costs:  labor costs dominate.  This suggests
the choice of larger trucks, but, beyond a certain
size, additional capacity affords little cost advantage
because the expected number of hauls does not decrease
further.  In any case, the resource allocation method-
ology applied to Warwick provides a wealth of informa-
tion conveniently represented and useful for decision
making.

                Conclusions

     The resource allocation methodology proposed in this
research incorporates multiobjectives, uncertainty, and non-
linearity.  It is not an optimization procedure:  there
is no directed search towards most preferred alterna-
tives.  Instead, all alternatives are evaluated analytic-
ally, and exhaustive search is used to choose the
resource allocation.  A more detailed simulation is
possible,  but this would require more extensive data
and might be a useful tool for testing the method.   Non-
linear vector optimization using mathematical program-
ming is also possible, but the formulation would be
difficult, and integer constraints might make the
solution impossible.  In light of the relative ease of
application of the method, and the importance of certain
modeled system characteristics, the resource allocation
procedure of this research seems effective for the
 planning of municipal  collection  services.

     Several extensions are  possible.   The analyst
might model changes  in  the underlying waste generation
process  through trend extrapolation, or alternatively
through  causal modeling of the waste generation process.
The latter is currently beyond the state of the art.
Alternatively, seasonal allocations of  resources in-
stead of a single level of resource allocation might  be
developed.  Different seasons appear to have different
waste characteristics.  Therefore, seasonal allocations
facilitate better matching of supply to the demand for
collection services.  Tests  with Warwick data indicate
that each season can be modeled as  described in this
paper, and a dynamic programming formulation used to
coordinate the seasonal allocations to  maximize overall
system objectives.   Another  extension,  requiring more
research, is modeling several populations of waste pro-
ducers.  One possible approach is to partition the popu-
lations into separate subsystems.

     The potential gains due to the use of an analysis
model of the type developed  are significant.   The method
is easily applied to local collection system planning.
Analysis helps identify and  clarify the framework and
assumptions of the decision  process, and provides a
basis for testing and comparison of alternative decision
strategies.  The method, using easily available data,
can model a variety of important process characteris-
tics, test a variety of input assumptions, and lend
valuable insight into the tradeoffs between
system objectives under alternative resource  allocations.

                  Acknowledgements

     This work was supported by a  grant from the Sloan
Research Foundation.  Data for the  case study were
collected by ACT Systems for the Office of Solid Waste
Management Programs  of the Environmental Protection
Agency.  The author  appreciates the insights  and guidance
from Professor David Marks and colleague James  Hudson.

                     References

1.  ACT Systems, Inc., Residential  Collection Systems -
    Final Report, Cincinnati:  U.S.  Environmental
    Protection Agency, 1974.

2.  Grossman, Donald, Resource Allocation for Solid
    Waste Collection, Cambridge:  Unpublished S.M.
    Thesis, Massachusetts Institute of  Technology,
    Department of Civil Engineering, June, 1975.

3.  Keeney, R.L. Multidimensional Utility Functions:
    Theory, Assessment, and  Application.  Cambridge:
    Massachusetts Institute  of Technology, Operations
    Research Center, TR 43,  October, 1969.

4.  Shuster, K.A., Districting and  Route Balancing for
    Solid Waste Collection,  Washington:   U.S.  Environ-
    mental Protection Agency, unpublished mimeograph,
    1973.

5.  Stone, R., A Study of Solid Waste Collection Systems
    Comparing One Man With Multi-man Crews, Washington:
    U.S.  Department  of Health, Education,  and Welfare,
    1969.
                                                        608

-------
                              MODEL OF THE MOVEMENT OF HAZARDOUS WASTE CHEMICALS FOR
                                              SANITARY LANDFILL SITES
                      Eugene Elzy
          Department of Chemical Engineering
                Oregon State University
                Corvallis,  Oregon 97331
                   F. Tom Lindstrom
      Departments of Statistics and Mathematics
                Oregon State University
                Corvallis, Oregon 97331
     A simple mathematical model has been developed to
aid in the  management of hazardous chemical disposal in
sanitary landfill sites.  The model is based upon a chem-
ical mass balance and incorporates the important physi-
cal-chemical parameters:  1)  hydrodynamic flow velocity
based upon  the porosity and hydrodynamic gradient of
the porous  medium; 2) variable water table; 3) variable
rainfall; 4) reversible adsorption-desorption phenomena;
5) first-order irreversible sorption, if any; 6) first-
order chemical reaction; and 7) first-order microbial
degradation kinetics.  The chemical, which is deposited
into the landfill in any time pattern desired, is routed
vertically  by rainfall infiltration to the water table
where movement in the horizontal direction occurs.  The
simplicity  of the model and the resulting computer sim-
ulation program permits a ten year run to be computed
and plotted automatically for approximately sixty dol-
lars.  The  application of the model for a typical sani-
tary landfill is demonstrated.

                     Introduction

     In determining whether a specific material is en-
vironmentally hazardous under a given disposal situa-
tion, a number of factors must be considered.  Import-
ant material properties or characteristics include tox-
icity, solubility, biodegradation rate, vapor pressure,
adsorption  on soil, amount, concentration, and others.
Other important factors include containment and geologic
or hydrologic conditions of disposal.

      In only a few instances can the environmental haz-
ard  of  disposal  of a certain material be defined on
the  basis of only one or two of the factors mentioned
above.  In most cases it appears necessary to consider
many factors and consequently hazard evaluation may be-
come quite complicated.  The major  threat to the envir-
onment presented by  the disposal of hazardous or toxic
chemicals in sanitary landfill disposal sites is con-
tamination of ground water or surface water.  In order
to predict  potential ground water or surface water con-
tamination it would be necessary to consider all impor-
tant physical and chemical characteristics and environ-
mental conditions  (geologic and hydrologic) at the same
time by a mathematical approach.

      A review of the literature reveals numerous papers
dealing with the mathematical aspects of water and chem-
ical movement in both unsaturated and saturated porous
media.  Extensive mathematical modeling and computer
simulation studies of regional ground water flow have
been performed by Freeze.  He considers the inter-
action between a pollutant source and the soil-moisture
and  groundwater flow systems.  The Freeze model can pre-
dict both transient and steady state subsurface flow
patterns in two or three dimensions and includes con-
sideration  of both saturated and unsaturated zones.
Quantitative interpretation of Freeze's results provides
predictive  values of the rate of entry of pollutants
into  the flow system, lengths of flow paths, travel
times of pollutants, discharge rates to surface water,
water table movements, and pressure field development.
These results do not consider dispersion or hydrochemi-
cal  interactions between pollutants and soils.
     Pinder and coworkers have also developed so-
phisticated two and three dimensional models of ground
water flow systems, including mass transport in flowing
ground water.  Schwartz considered the simulation of
hydrochemical patterns in regional ground water flow.

     In this sanitary landfill modeling project, con-
straints of time and funds virtually eliminated con-
sideration of modeling using the techniques of the
above workers.  For example, Freeze^ reported that
transient two-dimensional hydrodynamic models required
from 10 to 30 minutes of computer time (IBM 360/91) for
100 time step solutions.  Since the current project re-
quires a simulation of the landfill behavior over a
time period of years, it is obvious that computer
charges would be prohibitive for routine use of the
model.

     For practical reasons,  a simple approach was taken
using the vertical moisture routing procedure of Remson
et al.8, Fungaroli5, and Bredehoeft et al.2, coupled
with a simple model of the chemical transport in the
horizontal direction.  The hydrodynamics are not com-
puted.  Constant horizontal water velocities in the
landfill and soil are estimated from soil or landfill
permeability and porosity and local hydraulic gradients.
The water table variations are entered as input data and
are obtained from measurements taken near the landfill
site.

     While this approach of using greatly simplified
hydrodynamics has obvious inadequacies, the simple mod-
el should be useful for management of chemical disposal
in sanitary landfills.  The assumptions and simplifica-
tions utilized to construct the simple model result in
higher predicted concentrations than are expected in
actual disposal situations.   Thus the model is con-
servative with respect to potential health hazard, a
desirable approach to waste disposal management.  Fu-
ture comparisons of model predictions with actual sani-
tary landfill behavior will enable the model accuracy
to be determined.

                   The SLM-1 Model

     The objective of this project is to develop a com-
puter model of a sanitary landfill which is as simple
as possible and yet still includes the principal factors
affecting the underground transport of the contaminant.

     At a minimum, the model must account for the fol-
lowing factors:

     1.  both vertical and horizontal movement of the
contaminant (i.e. two dimensional distribution of chem-
ical),

     2.  adsorption on the porous media,

     3.  biodegradation of the contaminants,

     4.  variable water table which may rise to any
height in the landfill (possibly even completely flood-
ing the landfill) or drop to a depth below the landfill,

     5.  permeability, porosity, hydraulic gradient and
moisture bearing characteristics of the soil and landfill.
                                                       609

-------
    The sanitary landfill model SLM-1, developed in
this study, is based on a vertical routing of contamin-
ant by a method similar to Remson et al.8 and a hori-
zontal routing corresponding nearly to flow through a
series of stirred tanks.  The model is greatly simpli-
fied by performing a mass balance on the contaminant
only.  No water balance is performed.  The horizontal
velocity of the groundwater is assumed constant in the
landfill and soil and is estimated from the permeabili-
ty, porosity, and the hydraulic gradient of each media.
Both the landfill and the soil are assumed to be homo-
geneous with uniform permeability, porosity, hydraulic
gradient, biodegradation, and adsorption characteris-
tics within each porous medium.

Two-Dimensional Structure of SLM-1

    The landfill and soil region is divided into a
grid, each compartment having dimensions of length
DELX, depth DELZ = 2 feet, and width WIDTH sufficient
to encompass the contaminated zone of the landfill.
SLM-1 is considered to be a two-dimensional model since
calculations account for distribution of chemical in
two directions only; i.e., vertical and horizontal.
Since dispersion of chemical in a lateral direction
is ignored, the model tends to calculate a higher con-
centration at a point downstream from the landfill than
would exist if the three-dimensional dispersion char-
acter were modeled.
[Figure 1 shows the grid of two-foot layers and columns, with rainfall
falling on landfill columns 1 through KL=3 and soil columns through KT=7.]

    Figure 1.  Two-Dimensional Structure of SLM-1

    The elevation of the top of each landfill and soil
column and the elevation of the bottom of each land-
fill column are specified as input data.  It is assumed
that columns 1 to KL are landfill columns followed by
KL+1 to KT columns of soil where KL and KT are speci-
fied as input data.  KL=0 means that the entire region
is soil.
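
A minimal sketch of the compartment bookkeeping implied by this structure is given below; the variable names echo the program's inputs (M(I,K), KL, KT, DELZ), but the array layout and example values are only assumptions.

```python
import numpy as np

# Hypothetical grid set-up mirroring the SLM-1 compartment structure:
# layers of depth DELZ stacked into KT columns, of which the first KL
# are landfill and the rest are soil.
DELZ_FT = 2.0
KL, KT = 3, 7                      # landfill columns, total columns
N_LAYERS = 15                      # deep enough to span landfill, soil, and water table

# M[i, k] = grams of contaminant in layer i of column k (input data at time zero).
M = np.zeros((N_LAYERS, KT))
M[2, 0] = 5000.0                   # e.g. a deposit in layer 3 of landfill column 1

def medium(k):
    """Return which property set (landfill or soil) applies to column k."""
    return "landfill" if k < KL else "soil"
```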

Water Movement
    The horizontal groundwater flow below the water
table is assumed to be unidirectional with a velocity
V(l) ft/day in the landfill and V(2) ft/day in the
soil.  Movement of the chemical in the lateral direc-
tion is neglected.

    Rainfall at an arbitrary rate R(J) falls on the
landfill and soil region and a fraction XINFL is as-
sumed to infiltrate into the porous media.  This water
moves downward in the columns according to the simple
mechanism suggested by Remson et al.8

    Each two foot layer above the water table has an
initial moisture volume fraction of YI(1) for landfill
and YI(2) for soil.  Water entering the top layer in a
column is retained until a moisture volume fraction
corresponding to field capacity is reached, i.e.,
YF(1) for landfill and YF(2) for soil.  Additional water
entering a layer at field capacity freely drains to the
next layer below and so on.  Eventually, all layers
above the water table will reach field capacity.  Ad-
ditional water into the top layer will then move down-
ward to the water table carrying the chemical contamin-
ant into the groundwater.  Each calculational time per-
iod is two days; thus, it is assumed that the porous
media above the water table can drain from saturation
to field capacity within this time.
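
The vertical routing just described reduces to a short loop; the sketch below is a simplified illustration in which moisture volumes and field capacities are supplied per layer, and it is not the program's actual code.

```python
def route_infiltration(layer_moisture, field_capacity, infiltration):
    """Route one time period's infiltration down a column of layers.

    layer_moisture: current water volume (liters) in each layer above the
    water table, top first.  field_capacity: volume at field capacity for
    each layer.  Returns the updated moisture list and the volume of water
    (liters) that reaches the water table this period.
    """
    water = infiltration
    moisture = list(layer_moisture)
    for i in range(len(moisture)):
        room = field_capacity[i] - moisture[i]
        retained = min(water, max(room, 0.0))
        moisture[i] += retained
        water -= retained            # excess freely drains to the layer below
    return moisture, water           # leftover water joins the groundwater

# Example: three layers, the top two already at field capacity.
print(route_infiltration([800.0, 800.0, 500.0], [800.0, 800.0, 800.0], 400.0))
```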
Chemical Source

     At time zero, the chemical contaminant distributed
in any compartment of the landfill or soil columns is
specified as M(I,K) grams (entered as input data) where
I is the layer number and K is the column number.  An
arbitrary source S(I,K) of chemical can be specified
for any layer I,K as a function of time period J.
Groundwater flowing below the water table into column 1
and the precipitation entering the top layer of each
column are assumed to contain no chemical contaminant.

Adsorption Characteristics

     Reversible adsorption of the contaminant onto the
soil and/or landfill material is assumed to be describ-
ed by the Freundlich equation:

     MA = K · C · SOLID                                (1)

where:  MA = chemical adsorbed (grams),
        C = concentration of chemical in free solution
            (mg chemical/liter or ppm),
        SOLID = grams of porous solid material,
        K = adsorption constant; may be different for
            soil and landfill material (liter/gm solid).

Biodegradation of Contaminant

     Biodegradation of the contaminant is assumed to
be first order:

     MC = k · C · W · Δt · 10⁻³                        (2)

where:  MC = chemical degraded by reaction (grams),
        k = rate constant (hr⁻¹); may be different
            for soil and landfill material,
        W = volume of solution under consideration
            (liters),
        Δt = time period (hours).

Chemical Mass Balance

     Layer Above the Water Table.  Each layer receives
leachate from the layer immediately above and dis-
charges leachate of different concentration to the lay-
er immediately below.  The mechanism proposed for mass
balance calculations for a two-day period is as follows:

     The volume of leachate from above, Qin liters, is
added to the volume of liquid in the layer from the
previous time period.

     W = Wold + Qin   with W and Qin measured in liters.  (3)

The total grams of contaminant is computed.

     Mtotal = Mold + Qin · Cin · 10⁻³ + S              (4)

where S is the source function, i.e., grams of contam-
inant added during this two-day period.  The total grams
of chemical now is considered to adsorb on the porous
surface, to degrade by reaction or to remain in free
solution.

     Mtotal = MA + MF + MC                             (5)

where:  MA = adsorbed chemical (grams),
        MF = contaminant in the free solution (grams),
        MC = chemical degraded (grams).

Since C = (MF/W)(1000), where C is the concentration in
ppm, the equations can be combined to yield:
                                                      610

-------
    MF =
                total
         1 + 77 •  SOLID •  10  + kAt
                                                   (6)
                                                          justed  accordingly, M        =  M  MC.
                                                          Variable Water  Table
The total grams of chemical in free solution MF, the
free concentration C, and the grams of chemical de-
graded MC can now be calculated.  If the volume of liq-
uid in the layer exceeds that corresponding to field
capacity, VFC liters, the layer is drained to field
capacity,  i.e., if W > VFC, Q

Q01

Q,,.
                                   W - VFC, otherwise
       0.  The loss of contaminant to the layer below
        out
           10
             -3
                grams, is computed next.  The total
liquid in the layer is now reset to W - Q    and the
total grams of chemical adjusted to M = M      - MC -
Q    .  C •  10~3.
 out

    The layer collects the leachate from above, mixes,
adsorbs, reacts, and then drains to field capacity to
supply leachate to the layer below.  Thus, the process
proceeds.

    Layer Below the Water Table.  A layer below the
water table has a horizontal flow input and output due
to groundwater flow.  It is assumed that the layer im-
mediately below the water table receives all the chem-
ical in the leachate which is routed vertically due to
rainfall infiltration.  This assumption implies that
the landfill is located in a groundwater discharge area.
Layers further below the water table do not distribute
the chemical vertically.
     Rising Water Table.  If the water table has risen
since the last time period, it is assumed that the lay-
ers now saturated which were previously at field capa-
city (or lower) are brought to saturation with water
having no contaminant.  That is, bringing these layers
to saturation has resulted in no movement of chemical.
Then, calculations are performed to distribute the
chemical vertically by infiltration and horizontally
by groundwater flow as described earlier for the con-
stant water table case.

     Falling Water Table.  When the water table drops,
the layers at saturation capacity above the new water
table must drain to field capacity which causes a verti-
cal routing of chemical in a manner similar to the us-
ual case for layers above the water table.

     For calculational simplicity, the water due to
rainfall infiltration and this excess water (VSAT-VFC)
are routed vertically at the same time.

            Validation of the SLM-1 Model

     In the SLM-1 model, there are three major calcula-
tional procedures which should be validated.

1.   Vertical rSuting of the chemical from the landfill
     or soil media to the water table, an unsaturated
     flow mechanism,
    A layer below the water table is saturated, i.e.,
W   VSAT.  Horizontal routing is assumed to occur in
the following way.  QH liters of liquid flows from a_,
compartment at concentration C ppm, thus QH • C • 10
grams are transferred downstream to the next column in
the same layer.  QH = volume of liquid into the layer
in a two-day period.
    QH = V • DELZ  • WIDTH
                                28.32  • YS
                              (7)
where:  V = groundwater velocity  (ft/day),

    28.32 = conversion factor  (ft  to liters),

       YS = saturated volume fraction for porous
            media = porosity.

Source chemical (S grams), chemical in the groundwater
from the layer immediately upstream (MTX) and chemical
from the layer above (only for the first layer below
the water table) are added to the layer.
M = M,      - QH
    last J  x
10 3 + MTX + Q.   . C.
              in   in
                                            10
                                              -3
            (down-          (up-         (above)
             stream)      stream)
    + S.
      (source)
                              (8)
QH litet
MTX grams
               Kn. Cin      I
           Source S grams

                 QH liters,  C
             Free Concentra-
              tion, C
2.   Horizontal distribution of chemical by groundwater
     flow beneath the water table, a saturated trans-
     port mechanism,

3,   And routing of the chemical near the water table
     interface as the water table rises or falls.

Vertical Routing of Chemical Above the Water Table

     The SLM-1 model uses the method of Remson et al.8
to route moisture downward in the unsaturated media to
the water table.  These investigators have shown that
this simple procedure satisfactorily agrees with exper-
imental results in a laboratory landfill.  The SLM-1
model extends the Remson procedure to chemical routing
by assuming that each two foot layer of porous media
acts as a well-mixed vessel in transporting the chemi-
cal downward.  Although untested with experimental
data, this procedure is expected to satisfactorily pre-
dict chemical movement above the water table, provided
the soil can drain freely to field capacity in a two
day time period.
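
     The well-mixed-vessel routing described above can be sketched as
follows.  This is a hedged illustration of the idea only (uniform
layers, drainage to field capacity in one step, adsorption and decay
omitted); the dictionary keys and function name are invented for the
example and are not part of SLM-1.

def route_vertically(layers, infiltration_liters, c_rain=0.0):
    # Each layer is a dict with invented keys:
    #   'water' - liters currently held, 'fc' - liters at field capacity,
    #   'mass'  - grams of chemical in the layer.
    q, c = infiltration_liters, c_rain          # leachate entering the top layer
    for layer in layers:                        # top to bottom
        layer['water'] += q
        layer['mass'] += q * c * 1e-3           # grams added from above
        excess = max(layer['water'] - layer['fc'], 0.0)
        conc = 1e3 * layer['mass'] / layer['water'] if layer['water'] > 0 else 0.0
        layer['mass'] -= excess * conc * 1e-3   # grams draining with the excess water
        layer['water'] -= excess
        q, c = excess, conc                     # becomes input to the layer below
    return q, c                                 # leachate delivered to the water table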

Horizontal Distribution of Chemical in Groundwater Flow

     To determine the validity of the SLM-1 model pre-
dictions of chemical movement beneath the water table,
two auxiliary models were developed in this study.  A
continuous one-dimensional model with an exponential
source function and a multi-tank approximation of the
continuous model were compared with SLM-1.  SLM-1 pre-
dicts essentially the same horizontal distribution as
the multi-tank model.  Results from the multi-tank model
approach the continuous model behavior as the tank size
is decreased.  It was concluded that the SLM-1 model
agrees reasonably well with the classical model of one-
dimensional species movement in a saturated porous medium.
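
     A minimal tanks-in-series sketch of the kind of multi-tank
approximation described above is given below; the discretization,
time stepping, and stability limit are illustrative assumptions, not
the auxiliary models actually used in the study.

def tanks_in_series(n_tanks, reach_length, velocity, dt, n_steps, c_in=1.0):
    # Each well-mixed tank of length reach_length/n_tanks adds numerical
    # dispersion, so refining n_tanks approaches the continuous
    # one-dimensional behavior.  Forward Euler; requires dt * velocity / dx <= 1.
    dx = reach_length / n_tanks
    k = velocity / dx                           # exchange rate between tanks (1/day)
    c = [0.0] * n_tanks
    for _ in range(n_steps):
        prev = c[:]
        c[0] += dt * k * (c_in - prev[0])
        for i in range(1, n_tanks):
            c[i] += dt * k * (prev[i - 1] - prev[i])
    return c

Halving the tank size (doubling n_tanks) sharpens the computed
concentration front toward the continuous-model profile, which is the
behavior noted above.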

Variable Water Table Effect on Chemical Distribution

     The model calculations for the case of a rising
or falling water table have not been validated due to
                                                       611

-------
the lack of a satisfactory standard for comparison.
Future studies should attempt to validate the assumed
distribution mechanism.

                      Case Study

Brown's Island Landfill, Salem, Oregon

     A general description of the Brown's Island area
is included in the report by Balster and Parsons1.  A
complete report by Sweet 10 concerning the hydrogeology
of the landfill site is on file with the Oregon State
Engineer and the Department of Environmental Quality.

     The Brown's Island landfill is located between the
Willamette River and a meander channel of the river.
It occupies the lowest geomorphic unit in the valley,
the flood plain, and is subject to surface water inun-
dation.  Both the soils and the immediate subsurface
deposits at the site have relatively high hydraulic
conductivity.

     Infiltrating precipitation and a water table which
regularly saturates the putrescible material deposited
at the site result in the generation of leachate at
the site.  The down-gradient flow of the leachate is
sub-parallel to the flow direction of the adjacent sur-
face water bodies.  This results in the degradation of
the shallow ground waters in the local system and the
eventual drainage of some contaminants into the local
surface water bodies, i.e., the sloughs, the ponds in
the borrow pit bottoms, and the Willamette River.

     A ground water monitoring system has recently been
installed at the site.   In the future it will be pos-
sible to monitor the quality of the groundwater in the
vicinity of the landfill and to compare the observed
leachate concentrations with those predicted by the
model.

SLM-1 Model Calculations - Hypothetical Source

     Figure 2 gives all the input data used in this
case study.  A typical annual water table elevation,
in feet above mean sea level, of the Brown's Island
area is shown.   It was assumed that 45,400 grams of a
chemical were initially distributed in an area 4 ft by
40 ft by 20 ft at the top of the landfill with no
source added thereafter.  The simplified rainfall and
soil characteristics correspond to conditions typical
of the Brown's Island area.

     Figure 3 shows concentration distributions as a
function of time at 400 feet down the hydraulic grad-
ient from the landfill site.  Observe that in all the
cases shown in this figure it takes at least three
years before any appreciable concentration amplitude is
obtained at the 400 foot distance from the landfill
site.  That peaks (pulses) of chemical concentration
are generated and then dispersed while translating
down gradient is a very real physical phenomenon and
reflects, among other things, the physical interplay of
a pulse-type annual rainfall and the variable elevation
of the water table beneath both the landfill site
(chemical source) and the soil conduits.  The ex-
planation of the peak(s) formation is as follows.   A
portion of the rain that falls upon the surface of the
landfill site penetrates the surface creating the po-
tential for moving some of the chemical vertically
downward according to the rules of moisture routing.
Simultaneously,  the water table is moving up and down.
When enough water moving downward from the top of the
landfill  site  (carrying some but not all of the chem-
ical with it)  meets the water table,  then chemical
moves horizontally and  eventually out into the various
soil conduits.   Only four layers of soil conduits are
shown in Figure 2.   However, this is  adequate to demon-
strate the model.  Once the chemical pulse reaches one
of the soil conduits, it can continue to  be distributed
by convection and dispersion down  gradient  so long as
the water table covers that conduit.   When  the water
table drops below the level of  that conduit then hori-
zontal motion ceases and vertical  motion  is allowed to
proceed according to the previously mentioned rules of
the model.  The net result, as  might  be observed in a
monitoring well (impervious casing) bored through the
top three conduit (layers) and  into the fourth at the
400 feet down gradient point, is the  concentration dis-
tribution curves shown in Figure 3.   This figure demon-
strates the effects that reversible linear  adsorption
and irreversible microbial degradation and/or first or-
der chemical reaction would have on the concentration
distributions.

     Numerous computer simulation  runs have been made
for various source functions and for  a wide range of
adsorption-degradation conditions.  All parameters of
the model have been studied to  demonstrate  model sensi-
tivity.  Adsorption and degradation are the key para-
meters for predictive model calculations.

     Each computer run of ten year duration costs  ap-
proximately $60 using the Oregon State University  CDC
3300 computer, including the plotting of  all results.
The model is usually run from a remote location by
timesharing with the results plotted  on a graphics ter-
minal for ease of interpretation.

                    Recommendations
     It is believed that this rather general predictive
model for the movement of hazardous waste  chemicals in
both the landfill and the surrounding porous medium is
valid enough to be used as a decision-making tool in
the management of hazardous waste disposal.  It clearly
sets the upper limits on the expected concentrations
for a real field situation.  However, the  complete mod-
el should be given a long-term field test.  This field
test might be carried out by incorporating a sufficient
number of monitoring wells together with known charges
(geometric position and actual chemical mass known at
the time of introduction) of certain industrially and
agriculturally important chemicals, which  may typically
be dumped into a landfill site.  While the model is com-
posed of generally field-tested components (vertical
routing techniques worked out at Drexel University by
Remson et al., and horizontal saturated flow tech-
niques well-known in chemical engineering), this par-
ticular model which combines both the vertical and hor-
izontal techniques has never been field tested.

                    Acknowledgement

     This research work was supported by the National
Institute of Environmental Health Sciences (Grant ES-
00210) and the Oregon Department of Environmental Qual-
ity's grant with the Environmental Protection Agency
(#2-G05-EC-0014-04).  Special thanks are also due to
Professor L. Boersma, OSU Department of Soil Science;
Randy Sweet, State of Oregon Engineer's Office; and
Pat Wicks, Oregon Department of Environmental Quality
for their active participation on the OSU  Environmental
Sciences Center Task Force on Environmentally Hazardous
Wastes.

                      References

1.  Balster, C.A. and R.B. Parsons (1968), Geomorphol-
        ogy and Soils, Willamette Valley, Oregon, Agri-
        cultural Experiment Station, Oregon State Uni-
        versity, Special Report 265, 9-15.

2.  Bredehoeft, J.D. and G.F. Pinder (1973), Mass Trans-
        port in Flowing Groundwater, Water Resour. Res.,
        9, 194.
                                                       612

-------
3.  Freeze, R.A. (1971), Three-Dimensional, Transient,
        Saturated-Unsaturated Flow in a Groundwater
        Basin, Water Resour. Res., 7, 346.

4.  Freeze, R.A. (1972), Subsurface Hydrology at
        Waste Disposal Sites, IBM J. Res. Develop.,
        16, 117.

5.  Fungaroli, A.A. (1971), Pollution of Subsurface
        Water by Sanitary Landfills, Interim Report
        SW-12rg to U.S. Environmental Protection
        Agency, Vol. 1.

6.  Pinder, G.F. and J.D. Bredehoeft (1968), Applica-
        tion of the Digital Computer for Aquifer Eval-
        uation, Water Resour. Res., 4, 1069.

7.  Pinder, G.F. and H.H. Cooper, Jr. (1970), A Num-
        erical Technique for Calculating the Transient
        Position of the Saltwater Front, Water Resour.
        Res., 6, 875.

8.  Remson, I., A.A. Fungaroli, and A.W. Lawrence
        (1968), Water Movement in an Unsaturated Sani-
        tary Landfill, J. Sanitary Engineering Divi-
        sion, ASCE, SA2, April, 307.

9.  Schwartz, F.W. and P.A. Domenico (1973), Simula-
        tion of Hydrochemical Patterns in Regional
        Groundwater Flow, Water Resour. Res., 9, 707.

10. Sweet, H.R. (1972), Brown's Island Landfill, Re-
        port to Department of Environmental Quality,
        September 6, 1972, Oregon State Engineer's
        Office, 7.

11. Wang, C.H. et al. (1974), Disposal of Environ-
        mentally Hazardous Wastes, Task Force Report
        for the Environmental Health Science Center,
        Oregon State University.

                            Figure 2.   A Case Study of Brown's Island Sanitary Landfill

                           Figure 3.  Typical Concentration Plots at 400 Feet From the Landfill
                                                          613

-------
                  PHYTOPLANKTON BIOMASS MODEL OF LAKE HURON AND SAGINAW BAY
              Dominic M.  DiToro
Environmental Engineering and Science Program
              Manhattan College
            Bronx,  New York 10471
           Walter F. Matystik, Jr.
Environmental Engineering and Science Program
              Manhattan College
            Bronx, New York 10471
                   Summary

 The basis for this analysis and projection of
 Lake Huron and Saginaw Bay phytoplankton bio-
 mass is a dynamic mathematical model which
 relates the growth and death of phytoplank-
 ton biomass to the nutrient concentrations on
 the one hand and zooplankton biomass on the
 other, as well as the effects of mass trans-
 port due to the water motions and the exo-
 genous variables, water temperature and inci-
 dent solar radiation.  The particulars rele-
 vant to the application of such a model to a
 setting such as this where there exist large
 differences in concentrations of biomass and
 nutrients between Saginaw Bay and Lake Huron
 proper are discussed.  The model is shown to
 agree reasonably well for both regions
 simultaneously which provides strong evidence
 that it is a reasonable representation of the
 situation in the lake and bay.

                  Background

 Lake Huron, the second largest of the Great
 Lakes and fifth largest lake in the world,
 has a water surface area of 23,000 square
 miles.1  Saginaw Bay is an inland extension
 of the western shore of Lake Huron pro-
 jecting southwesterly midway into the south-
 ern peninsula of Michigan.  It receives
 drainage from a basin seven times bigger than
 the bay itself, or over 8,000 square miles.2
 Major inflows to Lake Huron proper are from
 the St. Mary's River draining Lake Superior
 and across the Straits of Mackinac from Lake
 Michigan.  Other tributary flows enter the
 lake in Georgian Bay and the North Channel
 from the Canadian basin and along the State
 of Michigan shoreline on the U.S. side.  Out-
 flow is via the St. Clair River.

 The Saginaw River is the major tributary to
 Saginaw Bay and enters the bay at its south-
 western end.  It receives both municipal and
 industrial discharges and its total tribu-
 tary system drains an area of approximately
 6,200 square miles.1  Significant loading
 to Saginaw Bay results from the input of the
 Saginaw River.  Since the bay is shallow
 relative to Lake Huron most of the effect of
 Saginaw River input is felt within the bay
 itself.  The resultant situation, then, is an
 essentially oligotrophic Lake Huron with eu-
 trophic conditions in Saginaw Bay.

 It is this complex problem setting which the
 model described herein specifically addresses.

      Kinetics of Phytoplankton Biomass

 Application to other problem settings3,4
 and in particular a detailed exposition and
 application to Lake Ontario5,6 provide the
 background for the discussion below.  The
 basic structure, assumptions, and compilation
 of relevant coefficients are also available.7
 The phytoplankton biomass that develops in a
 body of water depends on the interactions of
 the transport to which it is subjected and
 the kinetics of growth, death, and recycling.
 The structure of the model is shown in Figure
 1.  Phytoplankton biomass growth kinetics are
 a function of water temperature, incident-
 available solar radiation, and nutrient con-
 centrations, specifically inorganic nitrogen
 and phosphorus.  Phytoplankton also endogen-
 ously respire and are predated by herbivorous
 zooplankton which grow as a consequence.  They,
 in turn, are predated by carnivorous zooplank-
 ton whose biomass increases as a result.  Zoo-
 plankton grazing and assimilation rates are a
 function of temperature and, for the herbiv-
 orous zooplankton, the phytoplankton biomass
 as well.  Zooplankton respiration is tempera-
 ture-dependent.  The nutrients, which result
 from phytoplankton and zooplankton respira-
 tion and excretion, recycle from unavailable
 particulate and soluble organic forms to in-
 organic forms, ammonia and orthophosphate for
 nitrogen and phosphorus respectively.  The
 recycle kinetics are temperature-dependent.
 In addition, they are a linear function
 of the phytoplankton biomass present. The
 latter assumption is a modification introduced
 for the Lake Huron model and is based on the
 following reasoning: the recycling is accom-
 plished either by the phytoplankton them-
 selves, which break down the soluble organic
 material prior to assimilation, or by the
 bacteria present as a consequence of their
 metabolizing the detrital material.  For the
 former mechanism phytoplankton biomass de-
 pendence is expected.  For the latter situ-
 ation, if the rate is dependent on bacterial
 biomass, and if phytoplankton primary pro-
 duction is the major source of organic
 carbon for the bacteria, it is reasonable to
 suppose that bacterial biomass is proportion-
 al to phytoplankton biomass, which results
 in the same dependence of recycle rate on
 phytoplankton biomass.
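
 The growth and recycle dependencies described above can be
 illustrated with a small sketch.  The functional forms (a
 theta-type temperature correction, a 0-1 light reduction, Monod
 nutrient limitation taking the more limiting of N and P, and a
 recycle rate linear in phytoplankton biomass) and all constants are
 conventional placeholders, not the calibrated Lake Huron kinetics.

 def phyto_growth_rate(temp_c, light_factor, din, dip,
                       k_growth_20=2.0, theta=1.068, km_n=0.025, km_p=0.001):
     # Illustrative specific growth rate (1/day); constants are
     # literature-style placeholders, not the Lake Huron calibration.
     temp_factor = theta ** (temp_c - 20.0)
     nutrient_factor = min(din / (km_n + din), dip / (km_p + dip))
     return k_growth_20 * temp_factor * light_factor * nutrient_factor

 def recycle_rate(temp_c, phyto_biomass, k_recycle_20=0.03, theta=1.08):
     # Recycle of organic to inorganic forms: temperature dependent and,
     # as described in the text, a linear function of phytoplankton
     # biomass (per day per unit of biomass, times biomass present).
     return k_recycle_20 * theta ** (temp_c - 20.0) * phyto_biomass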

 In addition to the kinetics described above,
 the mass balance equations which comprise
 the model account for the transport of
 material between Lake Huron and Saginaw Bay,
 the inputs into Saginaw Bay from the Sag-
 inaw River and other sources, and the loss
 of phytoplankton and particulate detritus
 via sedimentation.  The magnitude of the
 rate of regeneration of the nutrients
 associated with the sedimented material
 is an issue yet to be resolved.
                                              614

-------
                               Figure 1.  Kinetic Interactions
The range of the magnitude of the rate con-
stants for the kinetic terms in the mass
balance equations is obtained in the first
instance from the literature.   The ac-
tual values used are obtained by a calibra-
tion of the model using a set of observed
data which includes observations for every
variable computed.  For the case of the Sag-
inaw Bay - Lake Huron model, the constants
are chosen so that the model reproduces the
observed behavior of all variables in both
the bay and the lake proper.  This is a
stringent test of such a model, since nutri-
ent concentrations, primary production rates,
phytoplankton and zooplankton biomass differ
by an order of magnitude.
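
The calibration idea described here can be illustrated with a simple
search over candidate constants, scoring each candidate against
observations from both the bay and the main lake; this is a schematic
sketch with an invented interface, not the authors' procedure.

def calibrate_constant(candidates, run_model, observations):
    # Choose the rate constant whose simulation best matches the
    # observations in BOTH the bay and main-lake segments
    # (sum of squared errors).  run_model and the observation keys
    # are invented for the illustration.
    best, best_error = None, float('inf')
    for k in candidates:
        predicted = run_model(k)            # {(segment, time): value}
        error = sum((predicted[key] - observed) ** 2
                    for key, observed in observations.items())
        if error < best_error:
            best, best_error = k, error
    return best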

         Segmentation of  the System

The model constructed for Lake Huron is a
large-spatial-scale, seasonal-time-scale
model comprising five volumes representing
Northern Lake Huron epilimnion and hypo-
limnion, southern lake epilimnion and hypo-
limnion, and Saginaw Bay.  Figure 2 illus-
trates this model segmentation.

This structure reflects the three character-
istic regions of the lake:  Saginaw Bay;
Southern Lake Huron which is influenced to
some degree by the bay due to circulation
patterns; and the open waters of the Northern
Lake which appears to act as a large receiv-
ing body for the inputs from Lakes Michigan
and Superior.

The top layer ranges from the surface to a
depth of 15 meters which is the depth of
stratification.  The second layer extends
from 15 meters to the lake bottom.  The verti-
cal layers are necessary for the incorpora-
tion of effects such as biomass sinking with
associated nutrient loss from the epilimnion
as well as the effects of stratification on
nutrient availability.

              Data Sources

The verification of a complex eutrophication
model requires a large amount of comprehen-
sive, detailed, physical, chemical, and biol-
ogical data.  The credibility of a model is
judged, in large measure, by its agreement
with observations.  Thus a detailed review
of available data was made.  The historical
data for Lake Huron and Saginaw Bay was inad-
equate in many ways so that a coordinated sur-
vey effort was mounted in 1974.  The agencies
involved were: the Canada Centre for Inland
Waters (CCIW), University of Michigan, Great
Lakes Research Division  (GLRD), and Cranbrook
Institute of Science  (CIS) with both GLRD
and CIS under the direction of the Environ-
mental Protection Agency, Grosse Ile Labor-
atory.  CCIW concentrated in the Northern
Lake, GLRD in the Southern Lake, and CIS in
Saginaw Bay.
                                 Figure 2.   Model Segmentation
                                             615

-------
The verification data base which resulted
from aggregation of these sources is quite
large.  Altogether, a total of 35 cruises
and about 225 individual sampling stations
measured data over a range of depths.  The
processing of these data for use in model
verification requires that means and stan-
dard deviations for all stations of each
survey within a segment for each cruise be
computed.  The values for each survey are
then overplotted and the result is a set of
data for each model segment for all para-
meters to be verified.  The utility of this
comprehensive data set cannot be over-empha-
sized since the historical data prior to
these surveys were not adequate for the veri-
fication of a lake model of the type pre-
sented herein.
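
The aggregation step described above (segment means and standard
deviations for each cruise) amounts to a simple grouping calculation.
A hedged sketch follows; the tuple layout of the observations is
invented for the illustration.

from collections import defaultdict
from statistics import mean, pstdev

def segment_cruise_statistics(observations):
    # observations: iterable of (cruise_id, segment_id, parameter, value).
    # Returns {(cruise, segment, parameter): (mean, standard deviation, n)}
    # for overplotting against the computed profile of each model segment.
    groups = defaultdict(list)
    for cruise, segment, parameter, value in observations:
        groups[(cruise, segment, parameter)].append(value)
    return {key: (mean(values), pstdev(values), len(values))
            for key, values in groups.items()}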

  Transportation Structure and Verification

The major external influences on Saginaw Bay
are the incoming flow of the Saginaw River
and a circulating flow from Lake Huron which
enters along the northwestern shore and exits
along the southeastern shore.  The flow has
 been characterized by several investiga-
 tors,8,9,10 all of whom have postulated this
 west-to-east circulatory flow in Saginaw Bay
 at least under one generalized type of wind
 pattern.  This Saginaw Bay - main lake flow
 exchange is incorporated into the model.
One of the aims of the transport verifica-
tion is to estimate its magnitude, since it
determines the flushing rate of the bay.

Another feature of the model transport struc-
ture is a north to south main lake circula-
tory flow.  Values used are consistent
with observed surface velocities, although
no strong gradients exist, making it
difficult to verify.

The other major aim of the transport verifi-
cation exercise is to estimate the magnitude
of vertical mixing between northern and
southern main lake epilimnion and hypolim-
nion.  This is vital to a vertically struc-
tured model since it determines the degree
of nutrient availability in the open lake
epilimnion during stratification.

The verification procedure for transport in-
volves calculating the distribution of a
suitable tracer and comparing it to observa-
tions.  In Saginaw Bay, the horizontal trans-
port regime was verified using the large
gradients which exist between the bay and
the main lake for temperature, chlorides,
and total phosphorus.  In the main lake the
vertical transport was verified using ver-
tical temperature gradients.  Figure 3 shows
the observed versus computed profiles for
temperature and total phosphorus in the Sag-
inaw Bay model segment and for temperature
in the main lake epilimnion and hypolimnion
segments.  The agreement achieved indicates
that the Saginaw Bay transport and the ver-
tical exchange rates are consistent with
observation.

       Estimates of Nutrient Inputs

Having verified a transport regime and incor-
porated this into a phytoplankton modeling
framework, the only remaining  exogenous  var-
iables to be specified are  the waste  load
inputs for the parameters to  be modelled.

Recently, much new data for  Lake Huron waste
loading has been made available.11,12
This comprehensive data base  includes Sagi-
naw River loading as well as  municipal  and
industrial inputs for both  the Province of
Ontario and the State of Michigan,  tributary
inputs for these same sources including esti-
mates of load contributed by  ungaged  drain-
age basins, atmospheric inputs and  inputs
from the St. Mary's River and the Straits of
Mackinac.

Utilizing the best available  information,
these loads were structured for input to the
model.  Some significant results of this in-
formation are that inputs to  Saginaw  Bay of
approximately 7,000 lbs total phosphorus/day
make up about one third of  total phosphorus
input to the entire lake, that atmospheric
sources contribute a significant portion of
the total nitrogen  (31%), and  that most  of the
impact of Province of Ontario tributary  load-
ings to North Channel and Georgian Bay  are
felt within those localized areas and do not
have a great impact on Lake Huron proper.
This formulation is used for  model
verifications.

    Phytoplankton Biomass Model Verification

Figure 4 illustrates computed versus  observed
profiles for phytoplankton  chlorophyll,  zoo-
plankton, ammonia and reactive phosphorus,
for the Saginaw Bay and northern and  southern
main lake epilimnion model  segments.  Other
parameters which were equally well verified
were total phosphorus, nitrate, and primary
productivity.

Phytoplankton chlorophyll in  the northern
lake epilimnion segment increases during the
spring to an early summer maximum limited by
available phosphorus. Herbivorous zooplank-
ton grazing then lowers the concentration
 substantially; it then recovers as carniv-
 orous zooplankton prey on the herbivores.
The secondary recovery utilizes phosphorus
provided by the recycle mechanisms. The
spring increase of phytoplankton chlorophyll
is more pronounced in the southern lake
epilimnion segment.  Zooplankton predation
again causes a decrease with  a secondary
bloom occurring in the fall.  Nutrient pat-
terns are similar to the northern lake epil-
imnion.  Total zooplankton  biomass is both
calculated and observed to  remain substantial
throughout the fall months.

It is important to note the order of  magni-
tude difference in concentrations of  phyto-
plankton biomass and nutrients for the Sagi-
naw Bay model segment versus  the northern
and southern epilimnion model segments.
Agreement between calculations and observa-
tions indicates that the model is capable  of
reproducing behavior in both  situations.  It
should be emphasized that the kinetic con-
stants used are the same for  both the main
lake and the bay.
                                             616

-------
  Figure 3.  Computed Versus  Observed Profiles for Temperature  (°C) in All  Model  Segments
                        and Total Phosphorus in the Saginaw Bay Model Segment
                 Conclusions

 It  is common  to require as a criterion for
 verification  of a model that it reproduces
 observed phenomena over a range of envir-
 onmental conditions.   It is especially use-
 ful if the conditions  cover the region for
 which model projections are desired.  Al-
 though this is  seldom  possible, the Saginaw
 Bay - Lake Huron  model does reproduce obser-
 vations which vary over a wide range of
 nutrient concentrations.  Hence it appears
 reasonable to claim that the model is veri-
 fied at least to that extent.  The signifi-
 cance, then, of this verified model is that
 it becomes a useful planning tool to esti-
mate the effects  of nutrient reduction
policies.   These  policies  will tend to lower
the concentrations  in  Saginaw Bay and thereby
bring values closer to  Lake Huron values  for
which the model is  also verified.

                References

 1. Great Lakes Basin Framework Study,
    Appendix 7, Great Lakes Basin Commission.

 2. Freedman, Paul L.  EPA-905/9-74-003, 1974.

 3. Di Toro, D.M.; D.J. O'Connor & R.V.
    Thomann; Simulation in Ecology, Vol. III,
    1975.

 4. Thomann, R.V.; D.M. Di Toro & D.J.
    O'Connor; Proc. Amer. Soc. Civil Eng.
    100 (EE2), June 1974.

 5. Thomann, R.V.; R.P. Winfield & D.M.
    Di Toro; Proc. 17th Conf. Great Lakes
    Res., 1974.

 6. Thomann, R.V.; D.M. Di Toro; R.P.
    Winfield & D.J. O'Connor; EPA-660/3-75-
    005, 1975.

 7. Di Toro, D.M.; D.J. O'Connor & R.V.
    Thomann; "Nonequilibrium Systems in
    Natural Water Chemistry," Advan. Chem. Ser.
    106, 131-180.  Amer. Chem. Soc.,
    Washington, D.C., 1971.

 8. Ayers, J.C.; D.V. Anderson; D.C. Handler
    & G.H. Lauff; University of Michigan
    Great Lakes Res. Div. Pub. 1, 1956.

 9. Beeton, Alfred M.  Trans. Amer. Fish.
    Soc. 87, 1958.

10. Johnson, James H.  Spec. Sci. Rept.
    Fish. No. 267, U.S. Dept. Int. Fish
    & Wildlife Service, 1958, 1956.

11. Richardson, William L. & Victor J. Bierman,
    Jr.; to appear in EPA Ecological Res. Series.

12. International Joint Commission, Vol. II,
    Ch. 2, Upper Lakes Reference Study Report.
                Acknowledgements

      The insight of our colleagues, Drs. Robert
      V. Thomann and Donald J. O'Connor, is grate-
      fully acknowledged, as well as the assistance
      of Suwan Numprasong and William Beach.  This
      research was  sponsored by  the U.S.  EPA under
      Grant No. R803030.
                                              617

-------
Figure 4.  Computed versus observed profiles for phytoplankton chlorophyll, herbivorous and
           total zooplankton, ammonia nitrogen, and soluble reactive phosphorus in the
           northern lake epilimnion (segment 1), southern lake epilimnion (segment 2),
           and Saginaw Bay (segment 3) model segments.

                                                      618

-------
                        COMPARISON OF PROCESSES DETERMINING THE FATE
                               OF MERCURY IN AQUATIC SYSTEMS


                       Lassiter, R.R., Malanchuk, J.L., Baughman, G.L.
                              Environmental Research Laboratory
                                       Athens, Georgia
Summary

    An analysis of factors affecting the fate
of mercury in aquatic systems is made using a
mathematical model.  Three forms  of  mercury
(mercuric,   elemental,   and   methyl)   are
represented.  All forms are considered to  be
present   in  both  the  water  and  sediment
portions   of    the    system.     Processes
influencing the behavior of mercury forms are
assumed    to    be   oxidation,   reduction,
methylation,     demethylation,     sorption,
sediment/water  exchange, volatilization, and
longitudinal    transport.      Environmental
factors  of  importance are pH, concentration
of suspended particulates,  depth  of  water,
and  depth  of  sediment.   Three dimensional
graphs  (concentration vs. time and  distance)
are  used to portray the temporal behavior of
the mercury forms along a stretch  of  slowly
moving   stream.    Mercuric   mercury  flows
through  the  reach,  partitioning  into  the
sediment  as  it  flows.  The spatio-temporal
pattern of methyl and elemental forms in both
water and sediment is controlled  largely  by
the mercuric mercury sorbed to the sediments.
This  effect  and  the sensitivity of all the
forms to a  range  of  values  used  for  the
sediment/water   partition   coefficient  for
mercuric ion, lead  to  the  conclusion  that
sorption  is the single most important factor
influencing  the  behavior  of   mercury   in
aquatic  systems.   There  is  a slow loss of
total mercury from the system by
volatilization.         Predicting        the
concentrations of mercuric mercury species in
a system  accounts  for  most  of  the  total
mercury.  However, the model, directed toward
environmental   pollution  predictions,  must
also account for the fate  of  low-level  but
hazardous forms such as methyl mercury.
Introduction

    The  fate  of  mercury  in  environmental
systems   results   from    the    concurrent
functioning of many processes.  Some of these
processes  respond in complex ways to several
environmental  factors.   For  example,   one
pathway of oxidation of elemental mercury   is
a  function  of  the oxygen concentration and
hydrogen ion  concentration;  the  extent   of
sorption of mercuric ion is a function of the
concentration  and  nature of the particulate
material, the hydrogen ion concentration, and
other factors.

    Simultaneous  (parallel)  processes   are
competitive,  yet any one of them may operate
in  a  chain  of  serial   processes,   e.g.,
sorption  of  mercuric  ion could  act as the
concentrator   for    microbially    mediated
methylation.    The   web  of  interconnected
serial  and  parallel  processes  forms   the
biogeochemical cycle of mercury.  A necessary
condition  for predicting the fate of mercury
is to  have  a  basic  understanding  of   its
biogeochemistry.  That condition, however,  is
not  sufficient  to permit one to predict  its
fate, because its biogeochemistry is  complex
and  resources  are  finite.  Thus predicting
the fate of mercury becomes  an  exercise   in
judicious  selection of the chemical forms  to
consider, the important processes  that  link
these  forms,  and  the  resources to use  for
making the predictions.

    Because of the complexities introduced  by
environmental   variables    driving    these
processes,  and  by the multiplicity of these
processes occurring serially and in parallel,
a computer model was  the  means  chosen   for
making  the  prediction.   The forms selected
and the processes linking these forms into  a
biogeochemical    cycle    were   chosen    to
ultimately  permit   prediction   of   methyl
mercury   and  its  biological  consequences.
Before attempting  the  detailed  predictions
for   methyl   mercury,  the  biogeochemistry
linking it and the other two forms,  mercuric
and    elemental   mercury,   needs   to    be
satisfactorily represented in the model.  The
biogeochemical cycle linking these  forms   of
mercury   in   a   water/sediment  system   is
represented in Figure 1.
                      Air
      l-Oxidation     4-Methylation
      2-Reduction     5- Demethylation
      3-Volatilization  6- Sediment-Water Exchange
                 7-Flow


 Figure 1.   Schematic Representation of the
 Components, Transformations,  Exchanges  and
 Transport Pathways Represented  in  the Model.
                                             619

-------
    One process, sorption, is  not  indicated
explicitly  in  Figure 1.   However, sorption
is represented in the model  by  partitioning
mercuric  and  methyl mercury into sorbed and
dissolved fractions.
Model Description

    A set of differential equations  is  used
to  describe  the  dynamics  of  the forms of
mercury in the system.  In Figure 2 a set  of
equations  for a system without hydrodynamics
is given, using  mnemonic  abbreviations  for
the term descriptions.
      d[Hg(+2)]w/dt  =  oxid - swx - red - meth

      d[CH3HgX]w/dt  =  meth - swx - demeth

      d[Hg°]w/dt     =  red + demeth - swx - oxid - volat

      d[Hg(+2)]s/dt  =  oxid + swx - red - meth

      d[CH3HgX]s/dt  =  meth + swx - demeth

      d[Hg°]s/dt     =  red + demeth + swx - oxid
 Figure 2.   Differential  Equations  Indicating
 Source/Sink Terms.

       swx  = sed/water exchange
       meth = methylation
       volat = volatilization
       demeth = demethylation
       oxid = oxidation
       red  = reduction
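
    The balances of Figure 2 can be transcribed directly as a
right-hand-side function.  The sketch below is bookkeeping only; the
process rates themselves must be supplied by the caller, since their
functional forms are discussed later in the text, and the key names
are invented for the illustration.

def mercury_rhs(rates):
    # Source/sink balances of Figure 2 (no hydrodynamics).  rates is a
    # dict of process rates, e.g. {'oxid_w': ..., 'swx_hg2': ..., ...};
    # positive sediment/water exchange is taken here as water -> sediment.
    r = rates
    return {
        'hg2_w':  r['oxid_w'] - r['swx_hg2'] - r['red_w'] - r['meth_w'],
        'mehg_w': r['meth_w'] - r['swx_mehg'] - r['demeth_w'],
        'hg0_w':  r['red_w'] + r['demeth_w'] - r['swx_hg0'] - r['oxid_w'] - r['volat'],
        'hg2_s':  r['oxid_s'] + r['swx_hg2'] - r['red_s'] - r['meth_s'],
        'mehg_s': r['meth_s'] + r['swx_mehg'] - r['demeth_s'],
        'hg0_s':  r['red_s'] + r['demeth_s'] + r['swx_hg0'] - r['oxid_s'],
    }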

    The aquatic system described by the model
consists of a body of moving water in contact
with the atmosphere and underlying sediments.
The  hydrodynamics  of  the  moving  water is
represented by advection and dispersion terms
in one dimension.  Each equation  is  of  the
form
          ∂[Hg]/∂t  =  D ∂²[Hg]/∂x²  -  V ∂[Hg]/∂x  +  S([Hg])
in  which  Hg represents any one of the forms
mentioned above and listed in Figure  2,  the
first    term   on   the   right   represents
dispersion, the  second  advection,  and  the
third  a  set  of  terms from the appropriate
equation of Figure 2, which represent
sources and sinks.  The source-sink terms, S,
are  written  as  functions  of environmental
descriptors,  e.g.,  pH,   concentration   of
suspended  particulate  material,  or depth of
sediment and water.  Although  a  great deal is
known about the chemistry of mercury, none of
the source  or  sink  terms  can  be  written
without conjecture.  Nevertheless,  an attempt
was made to structure each term  to reflect as
much  as is understood about the chemistry of
the process.
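
    One common way to integrate an equation of this form is an
explicit finite-difference step with upwind advection.  The sketch
below illustrates that generic scheme for a single mercury form; it
is not the numerical method actually used for the mercury model.

def step_advection_dispersion(c, dx, dt, D, V, source_sink):
    # One explicit step of  d[Hg]/dt = D d2[Hg]/dx2 - V d[Hg]/dx + S([Hg]).
    # Upwind differencing for advection (flow in the +x direction);
    # stable only for small dt (dt <= dx*dx/(2*D) and dt <= dx/V).
    # Boundary cells are held fixed for simplicity.  Illustrative only.
    new_c = c[:]
    for i in range(1, len(c) - 1):
        dispersion = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
        advection = -V * (c[i] - c[i - 1]) / dx
        new_c[i] = c[i] + dt * (dispersion + advection + source_sink(c[i]))
    return new_c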

    An example from the  mercury  model   will
illustrate    some   of   the    uncertainties
accompanying the writing of  source  or   sink
terms.   Oxidation and reduction of elemental
mercury and mercuric  ion,  respectively,   is
described   by   the   following  equilibrium
expression (Parks, G.A., pers. comm.)
          [Hg°] / [Hg2+]
However, observations from several   types  of
experiments  show  that  the reactions do not
proceed   rapidly   to   completion,    i.e.,
solutions containing mercuric  ion continue to
produce  elemental  mercury for days  (1,2,3).
Because of these observations  the   reactions
are  better  represented by rate terms rather
than by  algebraic  equilibrium  expressions.
Equilibrium  expressions  do   not necessarily
reflect  mechanisms,  and  rate   expressions
cannot  properly  be  obtained from  them.  In
spite of this fact, the following  reactions,
given  by  Parks   (4), were used to  represent
oxidation and reduction pathways:
     Hg°  ⇌  Hg2+

The term for oxidation was taken to be

     k [Hg°] [H+]²

and reduction was taken to be a constant proportion of
the mercuric ion concentration.
    For the environment reduction  cannot  be
as  simply  represented  as this term.  It is
complicated in most  natural  waters  by  the
presence  of  suspended  particulate material
which is  probably  encased  by  a  layer  of
organic material  (5).  Mercuric ion complexes
strongly    (6)   with   sulfhydryl-containing
compounds and is expected to bind strongly to
suspended particulates.  The question  facing
the  modeler is whether reduction of mercuric
ion bound to particulates can be  represented
by  a  simple  proportion  as assumed for the
dissolved  fraction.   If  not,   the   bound
mercuric  ion concentration must be computed.
The  following  equations  are  used  as   an
approximation   for   computing   the   bound
mercuric  ion,  also  taking   into   account
hydrolysis.
                                              620

-------
        Hg2+ + H2O  ⇌  HgOH+ + H+

        Hg2+ + 2H2O  ⇌  Hg(OH)2 + 2H+

        Hg2+ + P  ⇌  HgP

Here, P is a symbol for suspended particles
treated as though they are a dissolved
constituent.  The three equilibria are

        [HgOH+][H+] / [Hg2+]  =  K1

        [Hg(OH)2][H+]² / [Hg2+]  =  K1K2

        [HgP] / [Hg2+][P]  =  Kp

and the free mercuric ion fraction, φ, is

        φ  =  [ 1 + K1/[H+] + K1K2/[H+]² + Kp[P] ]⁻¹

Depending upon whether reduction of only free
mercuric ion or of both bound and free can be
described by the constant proportion, the
reduction term is applied either to the free
fraction of mercuric ion, φ[Hg2+], or to the
total.

    Each term in the model was subject to
similar uncertainties.  The question of how
to express accepted chemical, biochemical or
other equations in the context of a complex
natural system is the basic difficulty facing
the "environmental modeler".

             Results and Discussion

    Figures 3-8 show a pulse input of
mercuric ion to the system and its fate in
the system, i.e., its transformation to other
forms, its transport to the sediment, and its
loss from the system.  The simulated time is
20 days.  Mercuric mercury is introduced and
quickly flows through the system (Figure 3)
leaving mercuric mercury bound to the
sediments (Figure 4) and leaving elemental
and methyl mercury transformation products
(Figures 5-8).
-------
Figure 5.  Concentration of Elemental Mercury
in Water Versus Time (Right Axis,  20 Days)  and
Distance (Left Axis, 1 km).
 Figure 7.   Concentration of Methyl Mercury in
 Water Versus Time (Right Axis, 20 Days) and
 Distance (Left Axis,  1 km).
 Figure 6.   Concentration  of  Elemental Mercury
 in Sediment Versus  Time  (Right Axis, 20  Days)
 and Distance (Left  Axis,  1 km).
 Figure 8.   Concentration of Methyl Mercury in
 Sediment Versus Time (Right Axis, 20 Days) and
 Distance (Left Axis, 1 km).
     Sorption  holds mercuric  mercury   in   the
 sediments   so  that,  unlike   the   pattern in
 water,  it  remains at  a  point along  the stream
 with loss  essentially by  transformation only.
 This gives the  graph   (Figure   4)   its solid
 appearance relative to  that of Figure  3.

     In   the  sediments  there  is  continual
 transformation   of mercuric  mercury   to   the
 elemental   and  methyl forms.   Graphs of these
  forms (Figures 5-7) more or less resemble
 that of  mercuric  mercury  (Figure 4). It is
 apparent from these figures that sorption  is
 a  major  controlling  factor  of the temporal
 behavior of mercury in  aquatic systems.
    Only    two    permanent    losses    are
represented,  outflow and volatilization. The
effect of outflow is shown in Figure 3.
The   effect   of   volatilization   can   be
visualized by observing that all  the  slopes
of   the    concentrations   with   time  are
negative.  These negative slopes result  from
volatilization  of elemental mercury from the
water and from  recycling  of  the  forms  to
mercuric  with  subsequent  outflow.  Another
loss, not explicitly considered, which may be
nearly permanent is  conversion  to  mercuric
sulfide.  However, it can be considered to be
implicitly   contained   in   the   partition
coefficient.
                                              622

-------
    In evaluating the importance  of  sorption,
the  partition  coefficient  for  binding   of
mercuric  ion  to solids, Kp, was varied over
five orders of magnitude, and the behavior  of
the forms of mercury was observed.   Figures 9
and 10 show the behavior of methyl mercury  as
a result of varying degrees  of   sorption   of
mercuric     mercury.      Methyl    mercury
concentration was  higher  in  the   sediments
for  smaller values of Kp  (Figure 9), whereas
the opposite is true of water  (Figure 10).
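
    The sensitivity exercise described above amounts to re-running
the model over a logarithmic range of Kp and recording a summary
statistic for methyl mercury.  A hedged sketch of such a sweep is
shown below; the run_model interface is invented for the example.

def sweep_partition_coefficient(run_model, kp_values=(1e-2, 1e-1, 1e0, 1e1, 1e2)):
    # Vary the mercuric-ion partition coefficient Kp over several orders
    # of magnitude and record the peak methyl mercury computed in water
    # and in sediment for each value.
    results = {}
    for kp in kp_values:
        water_series, sediment_series = run_model(kp)
        results[kp] = (max(water_series), max(sediment_series))
    return results
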
 Figure 9.  CH3HGX IN SEDIMENT AS A FUNCTION OF Kp

    The bulk of total mercury in a system   is
undoubtedly  some  form  of mercuric mercury.
The elemental form has rarely, if ever,  been
measured  in  an  environmental  sample,  and
methyl mercury is seldom  measured  in  water
samples   (7,8).    One   could  delete  both
elemental and methyl mercury from  the  model
and  still  account for an overridingly large
portion of the total mercury.  But  elemental
mercury  is important to the ultimate fate  of
mercury.  It is  formed  at  different  rates
under  differing  conditions, and is the only
loss  from  total  water   systems.    Methyl
mercury, though a minute portion, is important
because  of   its   health   and   ecological
implications.   In  general,  predicting  the
fate  of  a   pollutant   will   consist    of
predicting the fate of the bulk, so that  the
fate  of  small   but   hazardous    portions
can  be  predicted.
Conclusions

    Using    a    skeletal   model   of   the
environmental chemistry of mercury, a few
processes  emerge  as  dominant  in its fate.
Sorption is the most prominent feature in the
temporal behavior of mercury.  It affects the
pattern of loss of mercury from the system by
both outflow and volatilization.   Predicting
the   fate   of  mercury  involves  not  only
predicting the fate of the major portion, but
also predicting the fate of minute  fractions
whose health and ecological effects make them
important.

Figure 10.  CH3HGX IN WATER AS A FUNCTION OF Kp

          Literature Cited

1.  Bisogni, J.J., Lawrence, A.W., Kinetics
    of Microbially Mediated Methylation of
    Mercury in Aerobic and Anaerobic Aquatic
    Environments.  Technical Report 63.
    Cornell University Water Resources and
    Marine Sciences Center.  Ithaca, New
    York.  180p.  (1973).

2.  Holm, H.W., Cox, M.F., Mercury in Aquatic
    Systems:  Methylation, Oxidation-
    Reduction and Bioaccumulation.  Report
    No. EPA-660/3-74-021.  U.S. Environmental
    Protection Agency.  Corvallis, Oregon.
    38p.  (1974).

3.  Baier, R.W., Wojnowich, L., Petrie, L.,
    Mercury Loss From Culture Media.  Anal.
    Chem. 44, 2464-2467  (1975).

4.  Parks, G.A., Dickson, F.W., Leckie, J.O.,
    McCarty, P.L., Trace Elements in Water:
    Origin, Fate and Control:  I. Mercury.
    Report of Progress, Mar. 1, 1972 to Feb.
    1, 1973.  Submitted to National Science
    Foundation.  Stanford University.
    Stanford, California.  247p.  (1973).

5.  Neihof, R.A., Loeb, G.W., The Surface
    Charge of Particulate Matter in Seawater.
    Limnol. Oceanogr. 17, 7-16  (1972).

6.  Baughman, G.L., Gordon, J.A., Wolfe,
    N.L., Zepp, R.G., Chemistry of
    Organomercurials in Aquatic Systems.
    Report No. EPA-660/3-73-012.  U.S.
    Environmental Protection Agency.
    Corvallis, Oregon.  97p.  (1973).

7.  Andren, A., and Harriss, R.C.,
    Methylmercury in Estuarine Sediments.
    Nature 245, 256-257  (1973).

8.  Chau, Y.K., and Saitoh, H., Determination
    of Methylmercury in Lake Water.  Int. J.
    Environ. Anal. Chem. 3, 133-139  (1973).
                                              623

-------
                             ASPECTS OF MATHEMATICAL MODELS AND MICROCOSM RESEARCH

                                               James W.  Haefner
                                        Department of Wildlife Science
                                             Utah State  University
                                              Logan, Utah  84322

                                               James W.  Gillett
                                        Environmental  Protection Agency
                                              200 SW 35th Street
                                           Corvallis,  Oregon  97330
                     ABSTRACT

     Some features  of the conceptual  structure of
a potential  mathematical  model  are presented to
illustrate the important  role mathematical  models
can and should play in future microcosm research.
Mathematical models can help microcosms achieve their
goals of screening  exogenous substances, understand-
ing the fate and effects  of exogenous substances, and
designing suitable  management strategies.   The con-
ceptual structure reported here emphasizes  the role
of biotic components in controlling pesticide flows.

                   INTRODUCTION

     As the experimental  manipulation of microcosms
becomes an increasingly important tool  for  both pure
research and management purposes, it is necessary
that the role of mathematical models be clearly
delineated.   It is  our position that mathematical
models are extremely useful for research programs
embodying microcosm studies. We hope,  in this
paper, to document this position by showing how the
conceptual structure of a potential mathematical
model can influence the measurements and experiments
of a terrestrial microcosm.  The mathematical model
itself is still in early stages of formulation.

A.  The Terrestrial Microcosm

     Since 1974 the Corvallis Environmental Research
Laboratory has been developing  and testing  a terres-
trial laboratory microcosm system for screening the
disposition and effects of pesticides.   This system
was derived from the work of Metcalf et al., and
from a conceptual model of pesticide fate (Gillett
et al., 1974).  The basic objective of the program
is to develop a tool for  screening potentially ad-
verse environmental behavior of pesticides  by
assessing the fate  of radio-labeled chemicals and
quantifying observable effects  associated with the
introduction of the chemical into the system.  A
secondary objective is to correlate these disposi-
tions and effects with the physical properties of
the chemicals and environmental components  and thus
synthesize an understanding of  the relationships of
classes of compounds to ecological effects.  It
should be apparent  that,  whereas pesticides are
discussed in relation to  the studies, one could
apply this approach to any potentially hazardous or
toxic substance.
     The details of this system have been presented
elsewhere.  It is comprised of (a) a Terrestrial
Microcosm Chamber (TMC) and associated biota and
support systems; (b) an operational format or pro-
tocol; and (c) an analytical scheme for determining
the distribution and identification of 14C-residues.
The TMC is a 101 x  75 cm glass  box with plastic lid
enclosing about 40  cm of head space over 20 cm of an
artificial soil containing endemic micro-flora and
fauna.  The unit is semi-closed,  in  that  it  is open
to energy (heat and radiation  supplied  by overhead
fluorescent and incandescent lamps), while purified
air and water are circulated through the  TMC and
exit to sampling systems  (but  can  be arranged to be
closed or recirculating).
     Normal operation includes addition of certain
terrestrial macrofauna  (nematodes, earthworms, soil
insect larvae) prior to planting selected crops and
weeds (alfalfa, rye grass, corn, soybean).   Sub-
sequently higher organisms are added after plant
growth has reached the desired level (Collembola,
crickets, pill bugs, snails and then a gravid vole --
Microtus canicaudus).
     Experimentation and  sampling  in the  chambers
involves the application and monitoring of 14C-
labeled pesticides.  Variations in these  experiments
can be achieved by altering the biotic  constituents
(especially the plant species and  planting pattern),
the abiotic operating parameters,  and the chemical
(nature, formulation, quantities,  timing,  and mode
of application).
     Monitoring during an experiment includes
periodic measurements of  the pesticide  and its
degradation products in the water, air, at specified
soil depths, and in selected plant species.  Soil
moisture and temperature  are monitored.   The amount
of pesticide adsorbed onto the inside surfaces of
the box can also be measured frequently.  At the
end of an experiment (6-8 weeks) the vertebrates are
sacrificed to determine the pesticide levels in their
tissues.  Appropriate concomitant  controls are
utilized.

B.  Model Objectives

     The primary objective of the model of the
microcosm is to determine the dynamic behavior of
14C-isotope residues when the radioactive label is
attached to specified classes of chemicals that
are physically applied to the microcosm in known
amounts and by known methods.  The model  is to be
constructed so as to permit a mass balance analysis
of the residues as they are distributed between
diverse components and compartments of  that ecosystem.
To accomplish this the mathematical model must pro-
vide under the biotic and abiotic  conditions pre-
vailing at the time of the experiment a dynamic des-
cription of:

     (1)  the amounts of  pesticide (exemplified by
          "dieldrin" and  "parathion") and  its pro-
          ducts that are  incorporated into plant
          and animal tissues, bound to  biotic com-
          ponents, or removed from the microcosm via
          water and air;

     (2)  the amounts of  pesticide transformed into
          major derivative by-products  (metabolites,
          conjugates, or  bound residues);
                                                       624

-------
    (3)  the effects of the  pesticide  and  by-pro-
         ducts on the feeding,  growth,  and repro-
         ductive behavior of the  organisms in  the
         microcosm;

    (4)  the effects of the  feeding, defecation,
         movement and related behavior of  the
         organisms  in the microcosm on the dispo-
         sition and movement of the pesticide;

     (5)  the amount of CO2 and organic carbon pre-
         sent  in (a) the living tissue of  the
         organisms, (b) non-living particulates,
         and (c) both the atmospheric  and  water
         portions of the microcosm; and

    (6)  the effects of abiotic conditions (e.g.,
         temperature, pH, and soil moisture) on the
         processes  governing the  movement  and
         transformation of the pesticide.
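
     The mass-balance requirement in these objectives
can be made concrete with a few lines of bookkeeping.
The sketch below is not the authors' model:  the
compartments, transfer fractions, and time step are
invented, and the point is only that every unit of
applied 14C label remains accounted for among tissues,
soil, and the air and water outlets.

    # Minimal mass-balance bookkeeping for a 14C-labeled application.
    # Compartments and daily transfer fractions are hypothetical, not the paper's values.

    compartments = ["soil_I", "soil_II", "plant", "animal", "air_out", "water_out"]

    # transfers[src][dst] = fraction of the residue in src moved to dst per day
    transfers = {
        "soil_I":    {"soil_II": 0.05, "plant": 0.02, "air_out": 0.01},
        "soil_II":   {"water_out": 0.01},
        "plant":     {"animal": 0.005},
        "animal":    {},
        "air_out":   {},      # external sinks: nothing leaves them
        "water_out": {},
    }

    state = dict.fromkeys(compartments, 0.0)
    state["soil_I"] = 100.0   # label applied to the soil surface (arbitrary 14C units)
    applied = sum(state.values())

    for day in range(56):                    # a 6-8 week experiment
        change = dict.fromkeys(compartments, 0.0)
        for src, outflows in transfers.items():
            for dst, frac in outflows.items():
                moved = frac * state[src]
                change[src] -= moved
                change[dst] += moved
        for c in compartments:
            state[c] += change[c]

    print({c: round(v, 2) for c, v in state.items()})
    print("mass balance error:", round(sum(state.values()) - applied, 10))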

                MODELING APPROACH

    The  primary concern of this model  is the move-
ment of a pesticide  among the components of a
terrestrial  microcosm.  These components include
both biotic  and abiotic elements and a  model must
incorporate  both.  This is accomplished in  this
modeling  approach by explicitly modeling the role
that biota play in regulating pesticide movement.
    The  concept of  biotic control of abiotic
processes can be given a very simple structural
description, illustrated in Figure 1.   The  Pesticide
Transport Process  (PTP) is a  system that receives in-
puts from external sources and from the Biotic  Control
Processes (BCP).   In addition, PTP delivers outputs  to
external  sinks  and to BCP.  PTP also possesses  internal
pathways  and feedbacks that are symbolized  by the
curved, self-directed arrow.   The  BCP component is
structured in much the same way; it has external
sources and  sinks, internal feedbacks,  and  a coupling
with PTP. In later  discussion, the coupling of PTP
and BCP will be referred to as the "control system."
Figure 1.   Diagrammatic representation of the control
     system.   Shown are its two basic components:
     pesticide transport and biological control.
     The control  system approach to modeling pesti-
cide disposition  has  several  advantages that are
important enough  to note.   First, this approach is
more realistic since  it specifically includes the
mutual  actions and effects of organisms and pesti-
cides.   At the same time,  by  uncoupling pesticides
and biological  flows  except for control effects, the
model  structure takes cognizance of the minor effect
the biota have on the mass flow of pesticides in
natural  or constructed ecosystems.   Second, the
control system approach permits simulation experimen-
tation on the counter-intuitive effects that pesti-
cides have on a particular biota  (including economi-
cally important crops) through actions on other
pathways or components in the food web.  Third, the
uncoupling aspect of the control  system is important
because the dimensions of state variables describing
pesticide disposition (molar equivalents of 14C)
differ from those of state variables descri-
bing the biota (grams of C).  Models with consistent
equations require state variables with consistent
units.
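
     A minimal sketch of this uncoupling, with invented
numbers, is given below:  the pesticide state is carried
in molar equivalents of 14C, the biotic state in grams of
carbon, and the only connection is a control term (a
hypothetical "tunneling factor") through which the biota
rescale a pesticide transport coefficient without re-
ceiving any pesticide mass.

    # Schematic control-system coupling (illustrative values only).
    # Pesticide state in molar equivalents of 14C; biotic state in grams of C.
    # The biota never receive pesticide mass here; they only modify a transport rate.

    pesticide_soil = 1.0e-3      # mol 14C-equivalents dissolved in soil water
    biota_carbon = 5.0           # g C of burrowing fauna ("vertical grazers")

    dt = 1.0                     # day
    base_percolation = 0.02      # fraction of dissolved pesticide moving down per day

    for day in range(30):
        # hypothetical control term: more burrowing carbon -> faster vertical water movement
        tunneling_factor = 1.0 + 0.05 * biota_carbon
        k = base_percolation * tunneling_factor

        leached = k * pesticide_soil * dt
        pesticide_soil -= leached          # pesticide mass balance (14C units)

        biota_carbon *= (1.0 + 0.01 * dt)  # carbon dynamics evolve in their own units

    print(round(pesticide_soil, 6), round(biota_carbon, 2))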

A.  Pesticide Transport Processes

     Figure 2 provides more detail of the control
system approach.   For the purpose of the PTP the
microcosm is composed of four layers:  an above-
ground subsystem, soil layer I, soil layer II, and
soil layer III (the number of layers is not fixed
and can easily be either increased or decreased).
Each layer is composed of a "center" and an "edge".
Within the three soil layers are two different
classes of soil types:  Isolated  (I) and Contiguous
(C).  Isolated soil is soil that has no connection
with the atmosphere by means of its interstitial
connections.   Contiguous soil  has a direct, if not
straight, connection with the atmosphere.   It is
assumed, further, that the vapor phase in contact
with contiguous soil is in equilibrium with the
above-ground atmosphere, although there may be indirect
connections with the atmosphere by means of soil  water.
This distinction is important in describing volatili-
zation of pesticides from soil particles.  Volatiliza-
tion can occur and have an input to the above-ground
atmosphere only if the surface of the soil  particle
is connected, in the gaseous phase, to the above-
ground atmosphere.  It seems clear, also,  that the
fraction of isolated soil  will  increase with increasing
depth from the surface and increasing percent soil
moisture.  This provides a good operational definition
of soil layers.  The "surface" (soil  layer I)  can  be
defined as that set of depth intervals for which the
soil possesses, for example, 5% or less isolated soil.
One can also define soil layer III to be that set  of
depth intervals for which the soil possesses 5% or
less contiguous soil.  Other layers can be defined in
a similar way.
     The total volume of soil  comprising the surface
will vary in time.  This variation will be caused
by the dynamics of such processes as moisture fluc-
tuations, tunneling, and root growth.  As  a result,
for computational purposes the terrarium will  be
divided up into many thin "layers" each of which can
be described by its percent isolation.  Since the
total amount of pesticide volatilizing in any time
step will be a function of the total  volume of
"surface" soil present,  it will be necessary to sum
the volumes of all layers meeting the criterion for
soil layer I.
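
     As an illustration of this operational definition,
the sketch below slices a 20-cm soil column into thin
layers, assigns each a hypothetical percent of isolated
soil that increases with depth and moisture, and sums
the volume of the layers meeting the 5% criterion for
soil layer I; the isolation function itself is invented
for the example.

    # Operational layer-definition sketch (hypothetical isolation profile).
    # A layer belongs to "soil layer I" if <= 5% of its soil is isolated from the atmosphere.

    def percent_isolated(depth_cm, soil_moisture):
        """Invented monotone function: isolation rises with depth and moisture."""
        return min(100.0, 2.0 * depth_cm + 50.0 * soil_moisture)

    layer_thickness_cm = 0.5
    area_cm2 = 100.0 * 100.0          # 1 m^2 of terrarium floor, for example
    soil_moisture = 0.05              # volumetric fraction

    surface_volume_cm3 = 0.0
    for i in range(40):               # 40 layers x 0.5 cm = 20 cm soil column
        depth = (i + 0.5) * layer_thickness_cm
        if percent_isolated(depth, soil_moisture) <= 5.0:
            surface_volume_cm3 += layer_thickness_cm * area_cm2

    # Volatilization in a time step would then be scaled by this "surface" volume.
    print("soil layer I volume (cm^3):", surface_volume_cm3)
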
     Within each of these compartments (layers; edge,
center; isolated and contiguous) certain processes
of pesticide movement and transformation occur.   Those
that will be discussed in this report (after Gillett
et al.) are movement by mass flow in water, adsorp-
tion, volatilization, and transformation.   Figure  2
shows the basic control  system of biota and pesticide
movement plus some detail  in the pesticide component.
Most pesticide movement in PTP is associated with
physical transport by water flow.   Within the soil,
gravity, diffusion, and capillary action are forces
and processes for transferring pesticides dissolved
in water.  A certain fraction of pesticides volatilizes
from contiguous soil and may be lost from the system
through the exhaust system or may adsorb onto above-
ground surfaces.   Because of water condensation on
                                                       625

-------
the walls and roof some atmospheric pesticide may
find its way back to the soil.  Within the soil a
portion of the pesticide may adsorb more or less
severely to soil particles.  The water moving
through the soil may leach some pesticide to ground-
water.
     Besides the direct control effects of the BCP
on these pesticide processes (described below), there
are a number of important abiotic factors that
influence their rates:  temperature, moisture, pH,
and ionic content.  These not only directly alter
the rates of pesticide movement, but they influence,
and are indirectly influenced by, the BCP as well.
Thus, the biota have both direct and indirect effects
on the PTP.
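
     One common way to represent such influences, used
here purely for illustration (the paper does not commit
to these forms), is to multiply a base rate constant by
temperature and moisture factors:

    # Illustrative abiotic modifiers on a pesticide process rate (not the paper's formulations).

    def temperature_factor(temp_c, ref_c=20.0, q10=2.0):
        """Classic Q10 scaling: the rate doubles per 10 degrees C when q10 = 2."""
        return q10 ** ((temp_c - ref_c) / 10.0)

    def moisture_factor(theta, theta_opt=0.25):
        """Simple linear ramp up to an optimum volumetric moisture, flat beyond it."""
        return min(theta / theta_opt, 1.0)

    base_degradation = 0.03      # per day at 20 C and optimal moisture (hypothetical)

    k = base_degradation * temperature_factor(26.0) * moisture_factor(0.15)
    print(round(k, 4))           # effective first-order degradation rate for the step
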
Figure 2.   The processes of pesticide transport in
     the microcosm.   Processes are denoted by number.
     (1) = loss from atmosphere and unrecycled water,
     (2) = loss to biota plus effects on Biota, (3)
     = recycled water, (4) = water input, (5) = pest-
     icide application, (6) = input from Biota plus
     control by Biota, (7) = volatilization from
     contiguous (C) soil, (8) = runoff from walls
     and roof, (9) = adsorption and desorption onto
     walls and roof, (10) = lateral movement in soil,
     (11) = gravity flow of water and pesticide, (12)
     = upward diffusion and capillary action, (13) =
     adsorption onto soil surface, (14) = degrada-
     tion.  Unit of flow within PTP is molar equi-
     valents of 14C.
      In  addition  to  physical  removal  from the micro-
cosm  by  the  air and  water evacuation  systems a mole-
cule  of  pesticide may  also disappear  because of de-
gradation or transformation.   This PTP process (via
photolysis and hydrolysis) is accelerated by the
BCP (via metabolic transformation).

B.  Biotic Control Processes

      In  this  section we  illustrate the role of the
BCP in the control system by  describing those por-
tions of the  BCP  that  interact with the PTP as shown
in Figure 2.  As  stated  in the model  objectives the
substance of  flow in the  BCP  is  carbon.   The general
pattern of this flow is  shown in Figure 3.   The below-
ground ecosystem  is  further elaborated in Figure  4;
the "higher trophic  levels" compartment is  elaborated
in Figure 5.  These  relationships  permit us to enun-
ciate some of the mechanisms  by  which the PTP and  BCP
interact in this  model.
Figure 3.  The basic structure of the biotic compon-
     ent of the microcosm control system.  Unit of
     flow is carbon.
Figure 4.   Generalized trophic processes of the below-
     ground ecosystem.  Unit of flow is carbon.
                                                      626

-------
Figure  5.   Inter-relations  of the  higher trophic level
     subsystem.
     An important effect of the biota on the eco-
system is the alteration of the physical structure
of the soil.   By tunneling, compaction, and root
growth the biota alter the effects of gravity, diffu-
sion, and capillary action on patterns of movement of
water and therefore, pesticide.  Conversely, a major
effect of pesticidal toxication on the biota is to
alter gross behavior such as "tunneling" (Figure 6).
Vertical  grazers influence vertical flows between
layers directly by their tunneling activity (as repre-
sented by the dotted information arrow from "vertical
grazers"  to "tunneling factor").  The effects of the
pesticide can alter this behavior without signifi-
cantly affecting the amount of carbon contained within
a state variable (e.g., "vertical grazer"), as repre-
sented by an  informational arrow from PTP to the
"tunneling factor".
     The  biota also influence the physical  structure of
the soil  by the production of Particulate Organic Car-
bon (POC) that has great adsorptive potential.  Such
particles include solid feces, exoskeletons, root
parts from sloughing, and so on.  Moreover, organisms
that move horizontally alter mechanically the flow
rates between the Edge and Center compartments (Fig-
ure 6).
Figure 6.   Illustrative interactions between BCP and
     PTP.
     The Dissolved  Organic Carbon (DOC) compartment is
important in  determining carbon fixation.   Bacterial
metabolism is predominantly internal  so that organic
material must be in dissolved form to be utilized by
bacteria.  Because different organic substances  (cellu-
lose, hemi-cellulose, lignins, sugars, etc.) have dif-
ferent decomposition rates (Edwards et al.; Dickin-
son and Pugh) both the quantity and quality of DOC
must be considered.  Mayberry et al. present experi-
mental data which suggests how this can be done
efficiently.  They have shown that pure strains of
bacteria yield 3 gm of cells per equivalent "available
electron" in the substrate.  An "available electron"
is one which is "not involved in a molecular orbit
with oxygen" in the structure of the substrate (refer-
ence 7).  This interesting approach should be pursued,
since co-metabolism is the driving force permitting
bacterial biotransformation of pesticides.
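
     That observation reduces to simple arithmetic.  The
sketch below applies the reported 3 g of cells per
available-electron equivalent to glucose, which yields
24 available electrons per mole on complete oxidation;
the numbers illustrate the calculation and are not
model parameters.

    # Bacterial yield from "available electrons" (arithmetic illustration).

    CELLS_PER_ELECTRON_EQUIV = 3.0   # g cells per available-electron equivalent (Mayberry et al.)

    def yield_per_gram(substrate_molar_mass, electron_equiv_per_mol):
        """Grams of cells produced per gram of substrate."""
        return CELLS_PER_ELECTRON_EQUIV * electron_equiv_per_mol / substrate_molar_mass

    # Glucose: C6H12O6, 180 g/mol, 24 available electrons per mole on complete oxidation.
    print(round(yield_per_gram(180.0, 24), 2), "g cells per g glucose")
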
     In these few figures we have indicated some
sections of a model designed to study the complex
interactions between biotic and abiotic components
of microcosms.  In what follows we articulate how
such models may guide microcosm research.

    RELEVANCE OF MODELING TO MICROCOSM RESEARCH

     The conceptual structure of this model and the
processes of elaboration and documentation that will
provide a functional mathematical  statement for sim-
ulation experimentation are intimately involved in
the overall achievement of the several  objectives
of the microcosm research program, i.e., screening,
understanding of ecosystem processes, and develop-
ment of effective management strategies.  Current
studies on pesticides in microcosms rely on empiri-
cal efforts that need a sound, theoretical  basis.
Outputs for screening decisions are based on a
relatively small number of observations (in compari-
son to that number which might be used); a success-
ful model would help assuage criticism of those
empirical measurements by revealing the necessary
and sufficient data points most crucial  to the
disposition of a given chemical.   Further,  the inte-
grating process within the model  and the requirement
of the mathematical model for explicitness should
reveal features of the disposition or effects not
visible in the empirical approach.
     The conceptual model has already enunciated
several aspects of the microcosm system that need
theoretical or pragmatic definition:  the distinc-
tion between "edge" and "center",  the "isolated" vs
"contiguous" soil, the "tunneling factor" and other
behavioral variables, and the interaction between
pesticides and abiotic factors in their effects on
carbon flow in BCP.  Explicit hypotheses have sug-
gested particular experiments, and eventually the
mathematical model should reach a state where mathe-
matical simulations can be performed.  Then parameter
sensitivity analysis and other modeling techniques
should lead to further improvements in critical
analysis of microcosm tests.
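
     Once simulations are possible, a first screening of
parameter sensitivity can be as simple as the one-at-a-
time perturbation sketched below; the stand-in model and
its rate constants are placeholders, not the microcosm
model.

    # One-at-a-time parameter sensitivity sketch (placeholder model).

    def model_output(params):
        """Stand-in for a full microcosm simulation returning one summary output,
        e.g. 14C residue remaining in soil after 8 weeks."""
        k_volat, k_degrade, k_uptake = params["k_volat"], params["k_degrade"], params["k_uptake"]
        residue = 100.0
        for _ in range(56):
            residue -= residue * (k_volat + k_degrade + k_uptake)
        return residue

    base = {"k_volat": 0.005, "k_degrade": 0.010, "k_uptake": 0.002}
    y0 = model_output(base)

    for name in base:
        perturbed = dict(base)
        perturbed[name] *= 1.10                      # +10% perturbation
        y1 = model_output(perturbed)
        sensitivity = ((y1 - y0) / y0) / 0.10        # normalized sensitivity coefficient
        print(name, round(sensitivity, 3))
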
     Mathematical modeling is also directing micro-
cosm research toward better understanding of the
processes of disposition and effect of classes of
chemicals.  A dynamic, simulation model  places
emphasis on elucidation of rates of flows and their
controls, rather than simply on the measurement of state
variables.  The interaction between the microcosm
research program and the Benchmark Chemical Program
becomes clearer and more explicit as one considers
the need for high quality physical chemical data to
parameterize the model processes.   As the relation-
ships between the benchmarks and the parameters are
better understood, simplification of process
diagnostic analysis should result, especially with
the aid of computer model simulation.
     Finally, a mathematical  model based on the pre-
sent conceptual structure can also influence manage-
                                                      627

-------
ment strategies based on microcosm research.  Simula-
tion experiments can explore the requisite control
characteristics needed to effect a particular pesti-
cide disposition, by indicating the restricted
classes of natural ecosystem to which the pesticide
might be applied without disrupting the ecosystem
or the pesticide disposition goals (i.e., delivery
to target for effective time).  Alternatively, the
model might indicate management action required to
counteract adverse disruption of the system or
disposition goals when the target area is not in
one of the above classes.

                    CONCLUSIONS

     In this paper we have reviewed sections of the
conceptual structure of a mathematical model as it
pertains to the improvement of microcosm research.
We have used this discussion to show how models of
microcosms may aid the achievement of the microcosm
objectives of screening, understanding, and manage-
ment strategies.  Our emphasis has been on complex and
subtle interactions between pesticides and the
biotic component of terrestrial microcosms.   This
was formalized by the "control systems" approach
which constitutes the core of the conceptual struc-
ture.  Our recommendations for future microcosm
research emphasize investigations of the distinction
between soil sectors and their properties, the be-
havioral effects of pesticides on biotic components
of the microcosm, and the interaction of abiotic
inputs with pesticide effects.

                    REFERENCES

1.   Bailey, G. W. and J. L. White.  1970.  Factors
     influencing the adsorption, desorption  and
     movement of pesticides in soil.   Res. Rev.
     32:29-92.

2.   Cole, L.  K., R. L.  Metcalf, and  J. R. Sandborn.
     1975.  Environmental Fate of Insecticides in
     Terrestrial Model  Ecosystems.   Presented at
     Ecological Society of America  meeting,  Cor-
     vallis, Oregon.  August 20, 1975.

3.   Dickinson, C. H.  and G. J. F.  Pugh (eds).  1974.
     Biology of plant litter decomposition.   Vols. I
     and II.  NY, Academic Press.   775p.

4.   Edwards,  C. A., D.  E. Reichle, and D. A. Cross-
     ley.   1970.  The role of soil  invertebrates in
     turnover of organic matter and nutrients.  In:
     Analysis  of temperate forest ecosystems.  D.  E.
     Reichle (ed) Berlin, Springer-Verlag Publishers.
     p.  147-172.

5.   Gillett,  J. W.  and J. D.  Gile.  1975.  Progress
     and Status Report on Terrestrial  In-House
     System.  In:  Substitute Chemicals Program   The
     First Year of Progress.  Proceedings of meeting
     at Fredericksburg,  VA.   July 30,  1975.

6.   Gillett,  J. W., J.  Hill,  IV,  A.  W. Jarvinen,  and
     W.  P. Schoor.  1974.  A conceptual model for
     the movement of pesticides through the  environ-
     ment.  Environmental Protection  Agency, Wash-
     ington, DC.  USEPA Rep. No. EPA-660/3-74-024.
     79p.

7.   Mayberry, W.  R.,  G.  J.  Prochazka, and W. J.
     Payne.   1967.  Growth yields  of  bacteria on
     selected  organic  compounds.  Appl. Microbiol.
     15:1332-1338.
8.   Metcalf, R. L., G. K. Sangha, and I. P. Kapoor.
     1971.  Model ecosystem for the evaluation of
     pesticide biodegradability and ecological mag-
     nification.  Environ. Sci. Technol. 5:709-713.

9.   Stotzky, G.  1974.  Activity, ecology, and
     population dynamics of microorganisms in soil.
     In:  Microbial ecology.  Laskin, A. I. and
     H. Lechevalier (eds) Cleveland, Ohio, CRC
     Press, pp 57-135.
                                                      628

-------
                                    AN ECOLOGICAL MODEL FOR THE GREAT LAKES
                                                       by
                              Donald Scavia,  Brian J.  Eadie, and Andrew Robertson
                               National Oceanic and Atmospheric Administration
                                Great Lakes Environmental Research Laboratory
                                            2300 Washtenaw Avenue
                                          Ann Arbor, Michigan 48104
                      Summary

A one  dimensional  ecological model,  developed and cali-
brated for Lake  Ontario,  has been applied to the other
Great  Lakes  to test  its  generality.   The model, physi-
cally  segmented  into two layers,  simulates concentra-
tions  of phytoplankton,  zooplankton, detritus, phos-
phorus,  nitrogen,  and total inorganic carbon.  Driving
the ecological model with physical data from the other
Great  Lakes  results  in accurate simulations for the
upper  lakes  after minor recalibration of the kinetics;
however, the western and central basins of Lake Erie
could  not be simulated due primarily to the effects of
physical phenomena which are not considered in the
model  (e.g., sedimentary regeneration of nutrients,
resuspension).

The kinetic recalibration involved the adjustment of
two coefficients,  one representing algal phosphorus
requirements (the half-saturation constant for growth
on phosphorus) and the other determining the regulation
by food of zooplankton growth.   The adjustment of these
coefficients was based on the theory of competitive
succession.

As a result of the verification tests the following
conclusions were reached:
 (1)  For an ecological model to be able to predict
ecological changes occurring during eutrophication, it
must include at  least several compartments in each
level  of the food chain to allow natural selection to
be simulated.  In this way "recalibration" will take
place  automatically as succession and adaptation would
in nature.
 (2)  Further investigations are needed to gain insight
into the mechanisms that allow phytoplankton and zoo-
plankton to exist in varying environments.  The mecha-
nisms  governing  the succession and adaptation of spe-
cies will have to be studied in order to develop
mathematical relations that describe these mechanisms.
 (3)  In some lakes,  especially shallow ones, the ef-
fects  of physical and chemical interactions between
sediment and the water column can greatly influence
the seasonal dynamics of the lake biota.  Models
should account for these processes when they are
important.
 (4)  The seasonal effects of allochthonous loads
should be included in a eutrophication model for lakes
with short residence times.
(5)  Once the model discussed in this paper has been
parameterized to include coefficients of the individ-
ual phytoplankton and zooplankton groups, it should be
broad  enough for rather general application.

                    Introduction
There  presently  exist two broad classes of aquatic
models: (1)  general, relatively simple models, based
on large diverse data bases and usually addressing
a single water quality variable1,2,3 and, (2) more
site-specific, complex ecological models, developed
for a  particular system or system-type and usually
simulating many ecologically significant variables4,5,6,7.
Models of the first group are generally designed
to transform simple input into useful output.  While
these models provide accurate predictions of their
respective parameters, they do not address many of the
questions pertinent in water resource management.  The
second category of models attempts to address more
specific water quality parameters and processes (i.e.,
concentrations of phytoplankton groups; seasonal
dynamics of the model components; and phosphorus,
nitrogen, light, and temperature limitation) but have
been indicted as losing their generality during the
development of complex process formulations and the
evaluation of coefficients.

 Contribution No. 64 of the Great Lakes Environmental
 Research Laboratory, NOAA, Ann Arbor, Michigan.

The generality of a model is most critical to its
applicability; that is, if a model is very specific,
its use beyond simulating historical data is disput-
able.  Very specific models may be useful for testing
certain hypotheses, but extrapolations into the future
and to other bodies of water are of questionable
validity.

The purpose of this paper is to investigate the poten-
tial generality of the Lake Ontario ecological model
developed by Scavia et al.4 by applying it to all
five of the Laurentian Great Lakes.

             Lake Ontario Calibration

This model was developed, parameterized, and calibrated
for Lake Ontario and has undergone extensive testing
for ecological realism.4  It includes calculations of
the concentrations of available phosphorus, dissolved
organic nitrogen, ammonia, nitrate, non-living parti-
culate organic carbon, inorganic carbon, four groups
of phytoplankton, six groups of zooplankton, and
benthic macroinvertebrates.  Solution of the equations
describing the biological processes is accompanied by
calculation of diffusion and sedimentation between
three vertical segments, as well as of the concentra-
tions of the components of the carbonate equilibrium
system.  The ecological model, driven by temperature
and diffusion calculated by a physical model based on
the work of Sundaram and Rehm8 and by solar radiation,
represents the open-water zone of Lake Ontario.

Certain modifications were made to the existing model
to aid in its application to the other lakes.  The
physical segmentation of the model was reduced from
three to two segments: the epilimnion and hypolimnion,
and the solar radiation and driving data for the
physical model were obtained from climatological stud-
ies9,10,11,12.  Also, the method of numerical inte-
gration was changed from a fourth order, variable-
step Runge-Kutta-Merson algorithm to a simple, for-
ward step Euler procedure.
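
The substitution amounts to replacing a multi-stage step
with the single forward difference sketched below; the
two-box exchange example and its coefficients are place-
holders, not the Lake Ontario formulation.

    # Forward (explicit) Euler update for a system dY/dt = f(t, Y), the scheme that
    # replaced the variable-step Runge-Kutta-Merson integrator.

    def euler_step(f, t, y, dt):
        dydt = f(t, y)
        return [yi + dt * di for yi, di in zip(y, dydt)]

    def rates(t, y):
        epi, hypo = y                        # concentrations in the two segments
        exchange = 0.05 * (hypo - epi)       # diffusive exchange, 1/day (illustrative)
        return [exchange, -exchange - 0.01 * hypo]   # settling loss from the hypolimnion

    y = [1.0, 0.0]                           # arbitrary initial concentrations
    dt = 0.1                                 # days; explicit Euler needs a modest step
    for step in range(int(180 / dt)):
        y = euler_step(rates, step * dt, y, dt)
    print([round(v, 3) for v in y])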

None of the alterations seriously affected the results
obtained for Lake Ontario.  For a complete list of
coefficient values and initial conditions for Lake
Ontario consult Scavia et al.4

 Verification: Application to the other Great Lakes

Data for verification were obtained for all the Great
Lakes from two sources --a broad survey of the
literature and unpublished measurements35 made by
                                                      629

-------
GLERL.
With few exceptions, the available phytoplankton data
are reported as chlorophyll or total biomass.  For
this reason, the values for concentrations of carbon
obtained for the four phytoplankton groups in the model
have been combined.  This total has then been compared
to the actual measurements after these were converted
to carbon by assuming a carbon to chlorophyll ratio of
50:1 or a fresh weight biomass to carbon ratio of 10:1.
Further, the literature values are usually only repre-
sentative of surface (0-5m) conditions, whereas the
model simulates average epilimnetic conditions.  This
difference is critical during late summer-early fall
when simulated epilimnion depths reach over 30 m in
most of the Great Lakes.  Predictions of average
concentration of the phytoplankton in the epilimnion
segment will usually underestimate the observed sur-
face (0-5 m) concentrations.
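
The conversions are fixed ratios and can be written
directly; the observation values in the example below
are placeholders.

    # Convert reported phytoplankton observations to carbon for comparison with the model.
    # Fixed ratios from the text: C:chlorophyll = 50:1, fresh-weight biomass:C = 10:1.

    def chlorophyll_to_carbon(chla_ug_per_l):
        """ugChla/l -> mgC/l using a 50:1 carbon-to-chlorophyll ratio."""
        return 50.0 * chla_ug_per_l / 1000.0

    def biomass_to_carbon(fresh_wt_mg_per_l):
        """mg fresh weight/l -> mgC/l using a 10:1 biomass-to-carbon ratio."""
        return fresh_wt_mg_per_l / 10.0

    print(chlorophyll_to_carbon(1.8))   # e.g. a 1.8 ugChla/l observation -> 0.09 mgC/l
    print(biomass_to_carbon(1.2))       # e.g. 1.2 mg fresh wt/l -> 0.12 mgC/l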

The total zooplankton carbon prediction is the summa-
tion of the five epilimnetic zooplankton groups of the
model.   These five groups do not necessarily represent
all of the zooplankton present in the lakes, and the
literature generally provides  only values for crus-
tacean carbon.  Because the predictions and the
observed data do not exactly correspond, we are limit-
ed to comparing the general dynamics and approximate
concentrations of the zooplankton.  Data from the 50 m
vertical hauls are used for comparison to the average
epilimnetic concentrations calculated by the model.

The seasonal nutrient values35 from the model are
average epilimnion concentrations based on the predict-
ed segment thickness, while most of the ranges in the
literature are for surface conditions.

The data used for comparisons and to drive the model
are limited at this point, and results from the follow-
ing verification give only an indication of the model's
validity for each lake.   While caution is urged in
interpreting the results, one can consider the poten-
tial generality (and limitations) of such a model.

Initial Simulations

The initial verification test consisted of using the
coefficients determined by the Lake Ontario calibra-
tion and the physical driving data and initial condi-
tions for each new lake or basin.  The results of each
simulation were then compared to the observed data.
The simulations of phytoplankton for the upper lakes
(Superior, Huron, and Michigan) resulted in generally
lower predictions than corresponding measurements.
Also, in all three lakes the predicted zooplankton
concentration decreased sharply throughout the year.
Lake Erie was considered as three separate lakes: the
shallow western, larger central, and deeper eastern
basins.   The simulations of all three basins failed in
a fashion similar to the upper lakes, but additional
constraints were also evident and will be discussed
later.

In an attempt to explain the poor verification of our
model for lakes other than Ontario,  we considered the
differences in trophic status among the lakes.
Differences in nutrient levels are most certainly an
important factor in determining the dominant species
in the  phytoplankton community of a lake.   Taking
this argument one step further, one can also hypothe-
size that differences in types and abundance of phyto-
plankton in lakes will result in shifts in dominance
and in  intraspecific adaptations within the zooplankton
community.   Since the original model had been calibrat-
ed to the indigenous biota of Lake Ontario, we con-
cluded  that,  with these coefficient  values, accurate
predictions would only be possible for lakes with
trophic status similar to that of Lake Ontario.

The structure of the model, however,  is such that the
coefficients describe the  state  variables on a species-
type level whereas the  formulations  are based on func-
tional relationships and should  be  consistent for all
species within the modeled group.   With this in mind
one would expect the model could be  recalibrated for
each new lake.

There are procedures available for  efficient curve
fitting, and the use of trial-and-error could also lead
to recalibrated simulations of the  lakes; however, such
simulations would not be very interesting or useful,
for they would not produce any additional information
about the lakes or even the model itself.  Instead,
we have varied a minimum number  of  coefficients in an
ecologically prescribed manner to determine if simple
recalibrations can lead to simulations  that agree to  a
substantial degree with the available data.

The selection of these  coefficients was based on two
assumptions: (1) the Great Lakes phytoplankton are, for
the most part, phosphorus  limited, and  (2)  the relative
dominance of specific phytoplankton and zooplankton
groups has been determined by the specific  environments
of the lakes.  The model equations involving these two
assumptions are the nutrient-regulated  growth equation
for phytoplankton and the  food-dependent grazing
equation for zooplankton.  The coefficients  critical
to these mechanisms are the half saturation  constants
of phytoplankton growth on phosphorus (XKP)  and of
zooplankton grazing on  total food (XKG).  A complete
description of these constructs  can be  found else-
where.4
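
Both constructs are saturating, half-saturation
(Michaelis-Menten type) terms, sketched below with
illustrative maximum rates (the maxima are not the
model's coefficients):

    # Half-saturation (Monod-type) limitation terms adjusted in the recalibration.

    def phyto_growth_rate(phosphorus_ugP_l, xkp, mu_max=2.0):
        """Nutrient-regulated growth: approaches mu_max as P >> XKP (mu_max is illustrative)."""
        return mu_max * phosphorus_ugP_l / (xkp + phosphorus_ugP_l)

    def zoo_grazing_rate(food_mgC_l, xkg, g_max=0.5):
        """Food-dependent grazing: approaches g_max as food >> XKG (g_max is illustrative)."""
        return g_max * food_mgC_l / (xkg + food_mgC_l)

    # At a Lake Superior-like available phosphorus level (about 0.5 ugP/l), a group with
    # the higher requirement used in the Lake Superior test (XKP = 9) grows far slower
    # than a low-XKP group:
    for xkp in (1.0, 9.0):
        print(xkp, round(phyto_growth_rate(0.5, xkp), 2))

    print(round(zoo_grazing_rate(0.05, 0.02), 2), round(zoo_grazing_rate(0.05, 0.16), 2))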

Low nutrient levels in  Lake Superior  probably have
favored the dominance of algal species  with  the ability
to grow successfully at these concentrations.   This is
represented mathematically by a  low half-saturation
constant.   XKP has generally been reported between 1
and 10 µgP/l13,14,15, so we used 1.0 µgP/l for oli-
gotrophic Lake Superior.  The constants  for  the other
lakes were selected (Table 1) following the  trophic
schemes  suggested for the  Great  Lakes by Vollenweider
et al.16 and Dobson et  al.17

 Table 1.  Predicted trophic status variables and recalibration coefficients.
           (Lakes are arranged in order of increasing trophic state.)

 Variable                        Units        Superior   Huron   Michigan   Erie      Erie    Ontario   Erie
                                                                           (east)  (central)          (west)

 Maximum phytoplankton           mgC/l           .05      .12      .15      .28      .31      .56      .83
 Sedimentation                   gC/m2/yr        5.6      7.8      8.3     11.6     10.3     21.6      8.7
                                 gC/m2/yr        .04      .13      .10      .36      .43      .26     1.09
 Maximum production              mgC/m2/day      110      438      411      775      706     1648     1026
 Available phosphorus  minimum   µgP/l           0.1      0.2      0.3      0.6      0.7      0.6      0.5
                       maximum   µgP/l           1.0      3.0      4.0      8.0      9.0      15.      25.
 Inorganic nitrogen    minimum   mgN/l           .26      .21      .24      .09      .05      .06      .23
                       maximum   mgN/l           .29      .26      .32      .19      .16      .27      .64
 XKP                             µgP/l           1.0      2.0      5.0      6.0      8.0      9.0     10.0
 XKG                             mgC/l           .02      .04      .08      .16      .16      .16      .16
The coefficient relating zooplankton grazing to food
concentration (XKG) has not been as thoroughly studied
as XKP.  Data from Richman18 imply a value of 1.3 x 10^4
cells/ml or, in terms of carbon, 0.16 mgC/1 for this
constant.  This value worked rather well in the Lake
Ontario calibration and is therefore considered to be
in the higher end of the range for XKG.  The nutrient-
poor waters of Lake Superior support a  sparse food
supply for the zooplankters, and species adapted to
such conditions have most likely been favored.  Math-
ematically this would be represented by a  lower value
for XKG.  The lower extreme of the range for XKG was
set at 0.02 mgC/1 based on relative phytoplankton
concentrations in Lake Superior and Ontario17,19, and
                                                      630

-------
the  values  for  the remaining lakes were determined in
the  same  way as those for XKP (Table 1).

With only these two changes, the model was rerun for
the  various lakes.  Selected aspects of the results
are  compared to reported measurements in Table 2.  The
seasonal  dynamics of certain model parameters are
presented and compared to actual results in Figure 1.

             Table 2.  Predicted and observed variables.

                       Predicted    Observed     Units         Ref.    Note
 Lake Superior
   Max. phyto.           0.05          --        mgC/l          --     --
   Max. phyto.           1.0         0.4-1.8     µgChla/l       17     obs. range
   Max. phyto.            --         <1.0        µgChla/l       16     obs. max.
   Max. zoo.             0.01        0.018       mgC/l          21     from #/m3
   Max. prod.            110         183         mgC/m2/day     20     obs. max.
   Max. prod.             --         76-507      mgC/m2/day     20     obs. range
   NO3+NH3               .26-.29     .22-.28     mgN/l          17     obs. range
   Avail. P              0.1-1.0     0.5         µgP/l          17     obs. const.

 Lake Huron
   Max. phyto.           .12         .03-.18     mgC/l          16     from wet wt.
   Max. phyto.           2.4         1.2-2.4     µgChla/l       17     obs. range
   Zoo. range            .002-.012   .006-.058   mgC/l          24     obs. range
   Max. prod.            438         121-358     mgC/m2/day     23     from hr
   NO3+NH3               .21-.26     .18-.26     mgN/l          17     obs. range
   Avail. P              .15-3.0     .5          µgP/l          19     obs. mean
   Avail. P               --         2.0         µgP/l          19     obs. max.
   Sed. rate             7.8         5.6         gC/m2/year     22     north
   Sed. rate              --         15.1        gC/m2/year     22     south

 Lake Michigan
   Phyto. peaks          .15, .06    0.2, .04    mgC/l          27     Gr. Trav. Bay
   Max. phyto.           3.0         0.6-3.7     µgChla/l       16,26  obs. range
   Max. zoo.             .012        .037        mgC/l          28     from ml/m3
   Max. zoo.              --         .2          mgC/l          28     nearshore
   Max. prod.            411         67-1030     mgC/m2/day     16,26  obs. range
   NH3                   .01-.028    .006-.024   mgN/l          27     Gr. Trav. Bay
   NO3                   .22-.30     .1-.21      mgN/l          27     Gr. Trav. Bay
   Avail. P              0.3-4.0     1.0-3.5     µgP/l          26     --
   Sed. rate             8.3         11.12       gC/m2/year     29     assume 4% C (see ref. 22)

 Lake Erie - eastern basin
   Max. phyto.           .28         0.1-0.4     mgC/l          16     from wet wt.
   Max. phyto.           5.6         1.4-5.4     µgChla/l       30     obs. range
   Max. zoo.             .02         .06-.27     mgC/l          24     from dry wt.
   Max. prod.            775         140-1440    mgC/m2/day     16     obs. range
   NO3+NH3               .09-.19     .02-.18     mgN/l          17     obs. range
   Avail. P              0.6-8.0     1.0-7.0     µgP/l          17     obs. range
   Sed. rate             11.6        160         gC/m2/year     22     see text

 Lake Erie - central basin
   Max. phyto.           .31         .06-.60     mgC/l          16     from wet wt.
   Max. phyto.           6.2         2.-10.      µgChla/l       17     obs. range
   Max. zoo.             .03         .06-.27     mgC/l          24     from dry wt.
   Max. prod.            706         120-1590    mgC/m2/day     32     obs. range
   NO3+NH3               .05-.16     .02-.14     mgN/l          17     obs. range
   Avail. P              0.7-9.0     1.-8.       µgP/l          17     obs. range

 Lake Erie - western basin
   Max. phyto.           .83         .1-1.3      mgC/l          16     obs. range
   Max. phyto.           16.6        4.9-25.9    µgChla/l       17     obs. range
   Max. zoo.             .07         .06-.27     mgC/l          24     from dry wt.
   Max. prod.            1026        110-1900    mgC/m2/day     32     obs. range
   NO3+NH3               0.3-.64     .08-.64     mgN/l          17     obs. range
   Avail. P              0.5-25.     5.0-23.     µgP/l          17     obs. range
 With a few exceptions the results of these simula-
 tions match the measurements quite well.  The sedi-
 mentation rate for the eastern basin of Lake Erie
 (Table 2) is considerably underestimated.  Burns
 suggests that much of the eastern basin sediment
 originates in the western basin.  The model prediction
 is based solely on material derived from the over-
lying water and does not consider horizontal
transport; therefore, if Burns is correct our under-
estimate is to be expected.  The prediction of sedi-
mentation in the central basin is also considerably
lower than that observed and the same mechanism may
be operating.  The most obvious failing of the model
is in its inability to predict accurately the seasonal
dynamics of the properties of the central and western
basins of Lake Erie; this problem will be discussed
in more detail below.

Lake Comparisons

In Table 1 the lakes are arranged in order of increas-
ing trophic state according to predicted maximum
phytoplankton concentration, sedimentation, and
primary production.  The ordering of the lakes based
on our results agrees with trophic schemes reported
in the literature16,17.

The trend is also observed in predicted available
phosphorus, especially the maximum values.  Since
phosphorus is generally accepted as the most common
limiting nutrient in the Great Lakes, the increase in
the maximum values is an accurate portrayal of the
trend in trophic status.  All of the lakes seem to
have sufficient nitrogen available to the plankton;
however, the trend in the minimum predicted value
indicates that, as we look toward the more eutrophic
lakes, the minimum inorganic nitrogen value decreases
and the difference between the minimum and maximum
values increases.  The indication that the supply of
nitrogen in the lower lakes is approaching levels
critical to phytoplankton growth is also observed in
the data compiled and reviewed by Dobson et al.17 The
effect of this nitrogen depletion is emphasized when
one realizes that the average nitrogen half-saturation
constant for phytoplankton growth is approximately
0.027 mgN/l14,33.

                     Discussion

The results of the above simulations allow the lakes
to be categorized in two groups: (1) those simulated
quite well after minor recalibration of the Lake
Ontario model and (2) those simulated less well and
requiring further work.  The upper lakes and possibly
eastern Lake Erie fall into the first category, while
the western and central basins of Lake Erie are in the
second group.  Examination of the lakes that fall in
the two categories provides information on the gener-
ality and limitations of this specific model, as well
as some implication of the requirements for models in
general.

Upper Lakes

Although these lakes do have similarities (e.g., great
size, climate, origin), they differ in trophic status
and in aspects of ecological dynamics.  Modeling a
series of lakes of varying trophic status is analogous
to modeling the eutrophication process in one lake.
The ability of the model to predict accurately the
levels of nutrients and biota in the upper Great Lakes
lends credence to its potential generality; however,
the fact that recalibration was necessary to obtain
these predictions suggests that the model would not be
able to predict accurately eutrophication within a
lake without such recalibration.  A model that was
parameterized for phytoplankton adapted to the low
nutrient levels in an oligotrophic lake would over-
predict the outcome of enriching the lake.  Upon in-
creasing the nutrient levels, the algae as parame-
terized would grow excessively.  In nature, algae out-
competed at low nutrient levels would succeed at high-
er levels because of other competitive advantages
(e.g., size selective predation) and replace the
oligotrophic forms.  Multispecies models are required
                                                      631

-------
                   Lake  Superior
Figure 1.   Comparisons of seasonal changes in observed and predicted values for phytoplankton carbon (C) and
           available phosphorus (P) for the Great Lakes with separate comparisons for the western, central, and
           eastern basins of Lake Erie.  The predicted values for both C and P are represented by the lines.  The
           observed values of C16,25,30 (mgC/l) are represented by dots and the observed P values35 (µgP/l) by
           the white areas, which include the mean ± one standard deviation.  The abscissa represents time from
           day 60 to 330.
to predict this sequence since single-species models
cannot produce such succession.
To test this hypothesis,  a simulation was run for Lake
Superior with two of the  four phytoplankton groups
given a XKP value of 1 ygP/1 and the other two a value
of 9 ygP/1.  This simulation resulted in almost exact-
ly the same prediction for total phytoplankton as be-
fore; however, the algal  groups with low nutrient re-
quirements dominated and  the ones requiring higher
concentrations (XKP=9) were lost.  This recalibrated
model is not necessarily  a general model, however,
since it only accounts for one side of the competitive
interactions at various trophic levels.  We assume
that in lakes with higher nutrient levels some
counteracting competitive relationship favors the
algal types that have higher nutrient requirements.
The nature of this relation (or relations) is not
clear at present, however, and so it cannot be added
to the model.  Even in this incomplete state of the
model, however, our test  indicates that multispecies
models can mimic natural  selection and, if the model
is detailed enough to allow this succession to occur,
can accurately predict the total phytoplankton bio-
mass, as well as succession.
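
A stripped-down version of that numerical experiment is
sketched below:  two groups share one phosphorus pool and
differ only in XKP, and at a Lake Superior-like concen-
tration the low-XKP group displaces the other.  All rates
except the two XKP values are invented for the illus-
tration.

    # Minimal competition sketch: two phytoplankton groups differing only in XKP (ugP/l).
    # Illustrative rates; not the Lake Ontario model.

    mu_max, loss, dt = 1.0, 0.15, 0.1          # 1/day, 1/day, day
    groups = {"low_XKP": {"xkp": 1.0, "biomass": 0.005},
              "high_XKP": {"xkp": 9.0, "biomass": 0.005}}   # mgC/l
    phosphorus = 1.0                            # ugP/l, an oligotrophic level
    p_per_c = 1.0                               # ugP consumed per mgC grown (illustrative)

    for _ in range(int(120 / dt)):
        for g in groups.values():
            mu = mu_max * phosphorus / (g["xkp"] + phosphorus)
            growth = mu * g["biomass"] * dt
            g["biomass"] += growth - loss * g["biomass"] * dt
            phosphorus = max(phosphorus - p_per_c * growth, 0.0)
        phosphorus += 0.05 * dt                 # steady external phosphorus supply

    for name, g in groups.items():
        print(name, round(g["biomass"], 4))
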
The need to change the second coefficient, XKG, im-
plies the necessity for multispecies models of zoo-
plankton.  It also directs attention to the fact that
very little is known about this coefficient and re-
search should be directed towards its evaluation and
determining what other processes are operating to
affect succession during eutrophication.

The development of multispecies models may not be the
only alternative available to simulate eutrophication.
Another method could be to make the important coef-
ficients functions of the environment.  Allowing XKP,
for example, to change as a function of phosphorus
concentration is one way to simulate adaptations of
the phytoplankton community.  Although this method
may be more easily implemented, we feel multispecies
models will be more realistic.

                                                           Lake Erie

                                                           The Lake Erie simulations were generally  less success-
                                                           ful than those for the upper  lakes.  This  limitation
                                                           of the model suggests that ecological models will
                                                           often fail when applied outside the physical realm
                                                           for which they were developed.
                                                      632

-------
The failure in the central basin was attributable mainly
to not  accounting for the special processes associated
with anoxic hypolimnetic conditions.  Under anaerobic
conditions in Lake Erie, the regeneration of phospho-
rus from the sediment is approximately 11 times great-
er than under aerobic conditions.  As a result of
ignoring this process, the seasonal dynamics of the
biota of this basin were not simulated accurately; and
the phytoplankton concentrations were grossly under-
estimated during late summer and early fall, when the
sedimentary regeneration occurs.

The western basin is also greatly influenced by physi-
cal properties not considered in the model.  Alloch-
thonous loadings to the large, deep upper lakes
probably do not affect the seasonal dynamics, whereas
the episodic inputs to western Lake Erie, which has a
short residence time, certainly affect the biota.
Also, the shallow basin would be perturbed by any
relatively high intensity winds, resulting in resus-
pension of sediment.  This process and anoxic release
of sedimentary phosphorus are not included in this
version of the model, and we feel these omissions have
resulted in the poor predictions for this basin.

                     Conclusion

We feel that this analysis points out that models
built to describe the ecology of lakes can be general
enough for use in a range of situations; however, they
will have to include multispecies compartments in the
food web in order to simulate natural succession.  To
accomplish this end, substantial efforts are needed
to describe and document the processes critical to
the ecological models at the "functional species"
level.

This study also emphasizes the need to analyze criti-
cally a body of water before developing a new model.
In many cases only the physical processes of a model
(depth, stratification, sediment/water exchange) need
to be altered to adapt a preexisting model.

                     Acknowledgements

The authors are grateful to J. Manor for compiling
the seasonal nutrient data for this study.  The
drawing was done by R. James, and the manuscript was
typed by J. Grasso and R. Hill.

                   References Cited

1.  Vollenweider, R. A. 1968.  Tech. Rept. DAS/DSI/68,
    1-182.  Organ. Econ. Coop. Dev., Paris.
2.  Vollenweider, R. A. 1975.  Schweiz. Z. Hydrol. 37:
    53-84.
3.  Dillon, D. J. and Rigler, F. H. 1974.  Limnol.
    Oceanog. 19: 767-773.
4.  Scavia, D., Eadie, B. J. and Robertson, A. (in
    review).  An Ecological Model for Lake Ontario
    formulation and preliminary evaluation.
5.  Thomann, R. V., DiToro, D. M. , Winfield, R.P., and
    O'Connor, D. J. 1975.  Ecological Research Series.
    Environ. Prot. Agency Rept. No.  EPA-660/3-75-005.
    177 pp.
6.  Park, R. A., Scavia, D., and Clesceri, N. L. 1975.
    In:  (C.S. Russell, ed) Ecological Modeling in a
    Resource Management Framework.  Resources for the
    Future, Inc., Wash., D. C. , 49-82.
7.  Chen, C.  W.  and Orlob, G. T. 1975.   In.:  (B. C.
    Patton, ed)  Systems Analysis and Simulation in
    Ecology, Vol. 3.  Academic Press, New York, 476-
    488.
8.  Sundaram, T.  R. and Rehm, R. G.  1973.  Tellus. 25:
    157-167.
9.  Phillips, D.  W. and McCullock, J.  A.  W. 1972.
    Climatological Studies No. 20.  Environment
     Canada, Atmospheric Environment Serv., Toronto,
     100 pp.
10. Irbe, J. G. 1972.  Climatological Studies No. 19.
    Canada Department of Transport, Met. Branch.
11. Richards, T. L., Irbe, J. G., and Massey, D. G.
    1969.  Climatological Studies No. 14.  Canada
    Department of Transport, Met. Branch.
12. Anonymous. 1972-73.  IFYGL weather data.
    Atmospheric Environment Serv., Dept. Environment,
    Toronto, Ontario.
13. Hallmann, M. and Stiller, M. 1974.  Limnol.
    Oceanog. 19: 774-783.
14. DiToro, D. M., Thomann, R. V., and O'Connor, D. J.
    1971.  In:  (R. F. Gould, ed) Non-equilibrium
    Systems in Natural Water Chemistry.  Adv. in
    Chem. Ser. 106: 131-180.
15. Fuhs, W. G., Demmerle, S. D., Canelli, E., and
    Chen, M. 1972.  Limnol. Oceanog. Spec. Symp.
    Vol. 1, 113-132.
16. Vollenweider, R. A., Munawar, M., and Stadelmann,
    P. 1974.  J. Fish. Res. Bd. Can. 31: 739-762.
17. Dobson, H. F. H., Gilbertson, M., and Sly, P. G.
    1974.  J. Fish. Res. Bd. Can. 31: 731-738.
18. Richman, S. 1966.  Verh. Int. Verein. Limnol. 16:
    392-398.
19. Great Lakes Water Quality Board. 1973.  Ann. Rept.
    to the International Joint Commission, Windsor,
    Ontario.
20. Olson, T. A. and Odlaug, T. O. 1966.  In:  Proc.
    9th Conf. on Great Lakes Res., Univ. of Mich.,
    109-118.
21. Watson, N. H. F. 1974.  J. Fish. Res. Bd. Can.
    31: 783-794.
22. Kemp, A. L. W., Anderson, T. W., Thomas, R. L.,
    and Mudrochova, A. 1974.  J. Sed. Petrol. 44:
    207-218.
23. Glooschenko, W. A., Moore, J. E., and Vollenweider,
    R. A. 1973.  In:  Proc. 16th Conf. on Great Lakes
    Res., Internat. Assoc. Great Lakes Res., 40-49.
24. Watson, N. H. F. and Carpenter. 1974.  J. Fish.
    Res. Bd. Can. 31: 309-317.
25. Robertson, A., Powers, C. F., and Rose, J. 1971.
    In:  Proc. 14th Conf. Great Lakes Res., Internat.
    Assoc. Great Lakes Res., 90-101.
26. Fee, E. J. 1971.  A numerical model for the es-
    timation of integral primary production and its
    application to Lake Michigan.  Ph.D. Thesis,
    Univ. of Wisc., 169 pp.
27. Canale, R. P., DePalma, L. M., and Vogel, A. H.
    1975.  Univ. of Mich. Sea Grant Tech. Rept. No.
    43, 150 pp.
28. Schelske, C. L. and Roth, J. C. 1973.  Great
    Lakes Res. Div., Univ. of Mich., No. 17,
    108 pp.
29. Robbins, J. A. and Edgington, D. N. 1975.
    Geochim. Cosmochim. Acta. 39: 285-304.
30. Glooschenko, W. A., Moore, J. E., and Vollenweider,
    R. A. 1974.  J. Fish. Res. Bd. Can. 31: 265-274.
31. Burns, N. M. 1976.  Epilimnion nutrient budget for
    Lake Erie.  Presented at Great Lakes Research Div.
    Seminar, Univ. of Mich., Ann Arbor, Mich.  January.
32. Glooschenko, W. A., Moore, J. E., Munawar, M.,
    and Vollenweider, R. A. 1974.  J. Fish. Res. Bd.
    Can. 31: 253-263.
33. Epply, R. W., Rogers, J. N., and McCarthy, J. J.
    1969.  Limnol. Oceanog. 14: 912-920.
34. Burns, N. M. and Ross, C. 1972.  In:  (N. M. Burns
    and C. Ross, eds) Project Hypo.  U.S.E.P.A. Tech.
    Rept. No. TS-05-71-208-24, pp. 85-119.
35. Anonymous (in preparation).  Great Lakes Chemical
    Data Reports.  GLERL, NOAA, Ann Arbor, Mich.
36. Orlob, G. T. 1975.  In:  (C. S. Russell, ed)
    Ecological Modeling in a Resource Management
    Framework.  Resources for the Future, Inc.,
    Washington, D. C., 286-312.
                                                      633

-------
        Simulation and Mathematical Modeling
      of Water Supply Systems - State-of-the-Art

                  Rolf A. Deininger
                School of Public Health
              The University of Michigan
                Ann Arbor, Mich.  48109


                        SUMMARY

Mathematical Modeling and Simulation Techniques have
been used extensively in many parts of the overall
water supply system ranging from the actual abstrac-
tion of water from ground and surface water sources,
to the primary collection and conveyance system, the
water treatment plant and the final distribution sys-
tem.  The following areas have found attention:
      Population projections and forecasts of demand
      Design and Operation of Wellfields
      Regional Water Supply Networks
      Design and Operation of Treatment Plants
      Design and Operation of the Distribution System

Population Projections and Forecasts of Demand

This very important aspect of water supply planning
is covered elsewhere at this conference and will not
be treated here.

Design and Operation of Well Fields

In the design and operation of well fields several
aspects are of interest.  The first one is the
following: given an areally extensive aquifer, how
many wells of what size should be placed where and
how should they be pumped to obtain the required yield
from the field at minimum total cost?  The second
level of the problem is: given an existing well field,
how should it be operated for maximum yield, or given
the required yield, how should it be operated for
minimum cost of production.

The first of the above problems is the more difficult
one, since it involves the process of selecting the
number, type, and location of wells.  An approach in
this direction has been shown in a paper by Aguado
involving an optimal plan of dewatering a construction
site.  The flow of water in the aquifer is described
by finite-difference approximations of the governing
differential equations.  The problem is then for-
mulated as a linear programming problem which deter-
mines the necessary amount of pumpage from the wells
which are arranged on a grid line.  Both steady state and
transient conditions were investigated.
The second type of problem,  namely the operation of a
well field, has been investigated by Deininger and
also Aguado et al.  Linear programming techniques
were used to determine the optimal pumping from a
given set of wells,  subject  to pump limitations,
boundary drawdown limits,  and aquifer characteristics.
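
The flavor of such a formulation can be shown with a
small linear program; the sketch below uses the scipy
solver and entirely invented response coefficients,
costs, capacities, and demand, so it is a schematic of
the approach rather than any of the cited models.

    # Schematic LP for operating an existing well field at minimum pumping cost
    # (in the spirit of the studies cited; all coefficients are invented for illustration).
    from scipy.optimize import linprog

    costs = [1.0, 1.2, 0.9]                  # cost per unit pumpage at wells 1-3

    # Each row: drawdown produced at a control point per unit pumpage at each well.
    response = [[0.30, 0.10, 0.05],
                [0.10, 0.25, 0.10],
                [0.05, 0.10, 0.35]]
    max_drawdown = [3.0, 3.0, 3.0]           # allowable drawdown at the control points

    # Meet a total demand of 15 units: -(q1 + q2 + q3) <= -15 in "<=" form.
    A_ub = response + [[-1.0, -1.0, -1.0]]
    b_ub = max_drawdown + [-15.0]
    bounds = [(0, 10)] * 3                   # pump capacity limits

    result = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(result.x, round(result.fun, 2))    # optimal pumpages and total cost
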
Regional Water Supply Networks
The major problem addressed in these studies is the
development of sources of water supply and the
delivery of the water to the points of consumption.
Typical of these studies is for example the one by
Carey7 who studied the water transfer in the New York
Metropolitan area, and the one by Deininger who described
algorithms for the optimization of regional water
supply networks.  The basic problem can be stated as
follows:   Given a number of surface or ground water
sources,  which of these sources should be developed
at what time and to what extent and how should that
water be treated, stored and transported such that
 the needed quantity and quality of water will be avai-
 lable  at  the demand points and the total costs are re-
 duced  to  a minimum.

 The major design variables are the number, size and
 location  of surface reservoirs, including their ele-
 vations;  the size,  number and location of wells in a
 well field;  the routing,  number and sizes of the
 transmission mains including the pumping stations;
 the treatment of the water to make it potable; and
 finally the rules of operation of the system.

 The formulation of  the above problems leads to ty-
 pical  network problems and a variety of algorithms,
 mostly linear and nonlinear programming have been
 used to analyze the problem.

 Two further studies which are worth mentioning are
 those by Young and Pisano and a study by Weddle.
 The study of Young  focuses on the James River and
 estuary and the water supply to the cities of Richmond
 and Hampton Roads.   A variety of sources of water is
 considered,  such as surface water,  ground water,
 brackish  water,  sea water, and renovated waste water.
 The major decision  variables are the surface water
 reservoir,  the location and number  of wells for
 tapping the ground  water,  the location of a possible
 electro-dialysis plant and a desalination plant,  the
 location and type of a waste water renovation plant,
 and the necessary pipe lines for transferring the
 waters.   The entire quantity and quality problem  was
 formulated  as  a nonlinear  programming problem,  and
 was solved  under different assumptions for costs  and
 technology  to  identify the most promising solutions.
 The study by Weddle focuses on an unspecified coastal
 situation,  and again the  elements of supply consider-
 ed  are conventional surface water,  seawater desalina-
 tion and  wastewater renovation.   The demands are
 municipal,  industrial and  agricultural.   The resulting
 mathematical programming model was  a mixed integer
 programming  model,  and several good solutions were
 generated.

Design and Operation  of Treatment Plants

At  the treatment plant  level  the  questions  arise as
 to what type of  treatment  is  necessary, what  combi-
nation of units  is  required,  and  how a minimum cost
 treatment plant  can be  designed.  There are very few
attempts  in  the  literature to formulate this  as an
 optimization problem,  and  the few existing  studies
appear to be more on  an academic  level.   On  the other
hand the  capacity expansion of  a  plant has  found
greater attention,  as  shown by  other papers here at
 this conference.

Design and Operation  of the Distribution  System

This area of a water  supply system  has attracted by
far  the most studies.   While  the  earlier  work has
been mostly  in  trying  to balance  the flow in  the
network,  the newer  ones attempt  to  design a  least
cost network which  satisfies  the  demand and pressure
requirements.   Several  good methods  and algorithms
exist,  although  from  a  strictly mathematical  point of
view we still  lack  a method for  designing a  least
cost network.   The  basic questions  of network
connectivity and reliability  should  also  warrant
more attention.

For  all the  above-mentioned areas of modeling the
dynamic and  time aspects must be  taken into  consider-
ation,  which means  a  study of the capacity necessary
at  a given time, looking over a  finite planning
period.  This gives rise to the  typical capacity
expansion problems  and  the sequencing of  the  construc-
tion of individual  parts of the  system.

-------
Towards a Drinking Water Quality Index

The Public Health Service Drinking Water Standards
were first adopted about 60 years ago to protect the
health of the traveling public.  In 1946, 1956, 1962,
and again in 1975 amendments and revisions were made
to reflect the changing environment and new knowledge
about what substances to expect in water and what
concentrations are thought to be allowable.  And thus
the "Safe Drinking Water Act"  (PL 93-523) calls for
new standards on the maximum allowable concentrations
of substances in drinking water.

Any water supplies not meeting the new standards will
be subject to the provisions for correcting the
situation, but the interest should be on those water
supplies which meet the standards.  Among these,
there must be some which are better than others.
The basic question is how to rank them, and if such
a ranking would be undertaken, whether or not one
would  see a difference in the  quality of the supplies.

  In an attempt to see if a ranking is possible, a
  printout of the data of the Interstate Carrier
  program was obtained which lists values for 24
  parameters of water quality.  To stratify the
  sample, one city from each of the 52 states was
  selected, guided only by the principle that as
  many parameters as possible should be available.

  Each of the parameters has a single numerical
  standard.   Thus, for example, the concen-
  trations for lead and mercury are .05 and
  .002 mg/l, respectively.  It can be argued
  that none of these substances are needed by
  the human body, and that the most desirable
  value would be zero. In other cases, for
  example, sulfates, it was felt that while the
  standard was 250 mg/l, a desirable value would
  be about 35 mg/l.  In other words, for each
  parameter there exists a standard and the most
  desirable value, the latter being usually zero
  or lower in concentration than the standard.

  An average index can then be calculated which
  measures the degree by which a particular
  water is close to the desirable concentration
  levels.  Such an index formulation was applied
  to the data, and a ranking of the supplies
  was possible and showed the wide differences
  in quality.
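
  The index formulation itself is not reproduced in this summary.  The
  following Python sketch is therefore only a minimal illustration of one
  plausible form of such an average index: each parameter is scored between
  zero (at or beyond the standard) and one (at the desirable value), and the
  scores are averaged.  The city measurements shown are hypothetical; the
  desirable values and standards for lead, mercury, and sulfates follow the
  text above.

    # Minimal sketch of an "average closeness" index; the exact formulation
    # used in the paper is not given here, so this form is an assumption.
    def parameter_score(value, desirable, standard):
        """Score one parameter: 1.0 at the desirable value, 0.0 at the standard."""
        if standard == desirable:
            return 1.0 if value <= standard else 0.0
        score = (standard - value) / (standard - desirable)
        return max(0.0, min(1.0, score))

    def water_quality_index(measurements, limits):
        """Average the per-parameter scores over the parameters reported."""
        scores = [parameter_score(measurements[p], d, s)
                  for p, (d, s) in limits.items() if p in measurements]
        return sum(scores) / len(scores)

    # (desirable value, standard) in mg/l, following the text above
    limits = {"lead": (0.0, 0.05), "mercury": (0.0, 0.002), "sulfate": (35.0, 250.0)}
    city_a = {"lead": 0.01, "mercury": 0.0005, "sulfate": 90.0}   # hypothetical data
    city_b = {"lead": 0.04, "mercury": 0.0018, "sulfate": 240.0}  # hypothetical data
    print(water_quality_index(city_a, limits))  # nearer the desirable levels
    print(water_quality_index(city_b, limits))  # near the standards, lower index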

  Conclusions

  Mathematical Modeling and Simulation techniques have
  been  used in practically every aspect of a water
  supply system.  The number of studies and applica-
  tions in the design of well fields appears to be
  rather limited, and some further concentrated
  effort appears to be in order.  In the area of
  regional water supply systems several models exist,
  and carefully evaluating alternatives seems to be
  standard practice.  Direct optimization of the
  design of treatment plants and their operation
  is rather limited and seems to be confined to
  academic exercises.

  The design of a distribution system has attracted
  many  studies, and at the present time good algorithms
  exist, although from a strict mathematical point
  of view none of them guarantees the global optimum.
  Basically, the use of mathematical models and
  simulation techniques such as linear programming,
  dynamic programming, etc., aids in the analysis
  of water supply systems in four major ways:
     1.   It allows  the  analysis  of  more alternatives
          at every level of  decision-making;

     2.   It allows  a better testing of  the assump-
          tions  and  estimation  of the influence of
          economic,  political and environmental un-
          certainties;

     3.   It provides a  mechanism whereby all  assump-
          tions  and  judgements  are made  explicit
          and  are clearly laid  out,  and

     4.   It serves as a communication tool for all
          the  professionals  involved in  water  supply
          systems planning.
An  extensive  literature  exists,  as  shown  on  the  pages
following.  It  is up  to  the profession  to put  it into
practice.
 Note:  Due  to  space  limitations  only  a brief  summary
       of the  paper  is  presented here.   A complete
       paper is  available  from the  author.

                      REFERENCES

                General Water Supply Network

 1.  Aron,  G.  (1969), "Optimization of Conjunctively
     Managed Surface and Groundwater Resource by
     Dynamic Programming," Water  Resources Center,
     Contrib.  129, Univ. of Calif.,  Berkeley, Calif.

 2.  Becker, L. and W. Yeh (1974),  "Optimal Timing,
     Sequencing,  and Sizing of Multiple Reservoir
     Surface Water Supply Facilities," Water Resources
     Research,  10, No. 1, 57.

 3.  Bugliarello, G., W.D. McNally, J.T. Gormley,
     and J.T. Onstott (1966), "Hydro, A Computer
     Language for Water Resources Engineering,"
     Civil Eng., 36 (11), 69.

 4.  Buras, N., and Z. Schweig (1969), "Aqueduct Route
     Optimization by Dynamic Programming," Jour.
     Hydraul. Div., Proc. Amer. Soc. Civil Engr., 95
     (HY5), 1615.

 5.  Burt, O.R. (1970), "On Optimization Methods for
     Branching Multistage Water Resource Systems,"
     Water Resources Res., 6 (1).

 6.  Butcher, W.S., et al. (1969), "Dynamic Programming
     for the Optimal Sequencing of Water Supply
     Projects," Water Resources Res., 5, 1196.

 7.  Carey, G.W., and L. Zobler (1968), "Linear
     Programming of Water Transfers in the New York
     Metropolitan Region," Proc. 4th Amer. Water
     Resources Conf., New York, N.Y., 658.

 8.  Clyde, C.G., and R.N. DeVries (1971), "Optimiza-
     tion in Municipal Water Supply System Design,"
     Water Resources Bulletin, 7 (5), 1048.

 9.  Clyde, C.G., B.C. Jensen, and J.H. Milligan
     (1967), "Optimizing Conjunctive Use of Surface
     Water and Groundwater," Proc. Symposium Ground-
     water Development in Arid Regions, Utah State
     Univ., Logan, Utah, p. 59.

10.  Cochran, G.F., and W.S. Butcher (1970), "Dynamic
     Programming for Optimum Conjunctive Use,"
     Water Resources Bulletin, 6 (3), 311-322.
                                                       635

-------
11.  Deininger, R.A. (1969), "Systems Analysis for
     Water Supply and Pollution Control," in
     Natural Resource Systems Models in Decision
     Making, Proc. 1969 Water Resources Seminar,
     Water Resources Res. Center, Purdue Univ.,
     Lafayette, Ind.

12.  Deininger, R.A. (1970), "Systems Analysis of
     Water Supply Systems," Water Resources Bulletin,
     6 (4), 573-579.

13.  Delucia, R.J., and P. Rogers (1972), "North
     Atlantic Regional Supply Model," Water Resources
     Research, 8 (3), 760-765.

14.  deNeufville, R. (1970), "Cost Effectiveness
     Analysis of Civil Engineering Systems: New York
     City's Primary Water Supply as an Example,"
     M.I.T. Dept. of Civil Engineering, Cambridge,
     Mass.

15.  deNeufville, R., et al. (1971), "Systems Analy-
     sis of Water Distribution Networks," Proc. Am.
     Soc. of Civil Engineers, 97 (SA6), 825.

16.  DeVries, R.N., and C.G. Clyde (1971), "Optimiza-
     tion in Municipal Water Supply System Design,"
     Water Resources Bull., 7, 1048.

 17.  Doody, J.J. (1969), "Conjunctive Use of Ground
      and Surface Waters," Jour.  Amer. Water Works
      Assn., 61, 345.

18.  Dracup, J.A., and Y.Y. Haimes (1968), "On
     Multilevel Optimization in Ground Water Systems,"
     Proc. Nat'l. Symp. on Anal. Water Res. Systems,
     Amer. Water Resources Assn., Denver, Colo.,
     p. 122.

19.  Gupta, I. (1969), "Linear Programming Analysis
     of a Water Supply System," AIIE Trans., 1 (1).

20.  Heaney, J.P. (1968), "Mathematical Programming
     Analysis of Regional Water Resource Systems,"
     Proc. Nat'l. Symp. on Anal. Water Resources
     Systems, Amer. Water Resources Assn., Denver,
     Colo., p. 231.

21.  Hoppel, S.K., and W. Viessman, Jr. (1972),
     "A Linear Analysis of an Urban Water Supply
     System," Water Resources Bulletin, 8 (2),
     304-311.

22.  Howson, L.R. (1957), "Factors Affecting Long
     Distance Transmission of Water," Jour. Amer.
     Water Works Assn., 49 (10), 1359-1368.

 23.  Hufschmidt, M.M.  (1966), "Water Resources
      Systems Analysis," Proc. Amer.  Water Resources
      Assn., p.  460.

24.  Hufschmidt, M.M., and M.B. Fiering (1966),
      Simulation Techniques for Design of Water
      Resource Systems, Harvard University  Press,
      Cambridge,  Mass.

 25.  Hughes, Trevor C., and Calvin G. Clyde  (1969),
      "Municipal Water Planning - Mixed  Integer
      Approach," Journal of the Hydraulics  Division,
      ASCE.

26.  Kally, E. (1969), "Pipeline Planning by Dynamic
     Computer Programming," Jour. Amer. Water Works
     Assn., 61 (3), 114.

27.  Lauria, D.T. (1973), "Water Supply Planning by
     Mixed Integer Programming," E.S.E. Publication
     No. 298, Dept. of Environmental Sciences and
     Engineering, University of N.C., Chapel Hill,
     N.C.

28.  Linaweaver, F.P., Jr., and C. Scott Clark (1964),
     "Costs of Water Transmission," Jour. Amer. Water
     Works Assn., 56 (12), 1549-60.

29.  Loucks, D.P. (1968), "Comment on Optimization
     Methods for Branching Multistage Water Resource
     Systems," Water Resources Res., 4, 447.

30.  Meier, W.L., Jr., and C.S. Beightler (1967), "An
     Optimization Method for Branching Multistage
     Water Resource Systems," Water Resources Res.,
     3, 645.

31.  Meier, W.L., Jr., and C.S. Beightler (1968),
     "Optimization of Branching Multistage Systems:
     A Reply to a Comment by D.P. Loucks," Water
     Resources Res., 4, 1385.

 32.  Meier,  W.L., R.W.  Lawless, and C.S. Beightler
      (1968)  "Geometric Programming:  New Optimization
     Technique  for  Water Resource Analysis," Proc.
     4th Amer.  Water Resource Conf., New York, N.Y.,
     p. 524.

33.  Meier, W.L., A.O. Weiss, C.D. Puentes, and J.C.
     Moseley (June 1971), "Sensitivity Analysis:  A
     Necessity in Water Planning," Water Resources
     Bulletin, 7 (3), 529-41.

34.  Mobasheri, F., V.S. Budhraja, and F.E. Mack
     (Aug. 1971), "Conjunctive Use of Sea and Fresh
     Water Resources:  An Integer Programming Approach,"
     Water Res. Bull., 7 (4), 823-830.

35.  Morin, T.L., and A.M. Esogbue (1971), "Some Effi-
     cient Dynamic Programming Algorithms for the
     Optimal Sequencing and Scheduling of Water
     Supply Projects," Water Resources Res., 7 (3).

36.  Russell, C.S.  (July  1-3, 1968),  "An Application
     of Non-Linear Programming  to the  Planning of
     Municipal Water Supply Systems: Some Results,"
     Proc.  of the National Symposium on  the Analysis
     of Water Resource  Systems,  Denver,  Colo.

37.  Singh, K.P. (1971), "Economic Design of Central
     Water Supply Systems for Medium Sized Towns,"
     Water Resources Bull., 7, 79.

38.  Shipman, H.R. (1967), "Water Supply Problems in
     Developing Countries," Jour. American Water Works
     Assn., 59 (7), 767.

39.  Sturman, G.M. (1971), "Systems Analysis for Urban
     Water Supply and Distribution," Jour. Environ.
     Systems, 1, 67.

40.  Weddle,  C.L., S.K. Mukherjee, J.W. Porter, and
     H.P.  Skarhein (1970), "Mathematical Model for
     Water-Wastewater Systems," Jour. Amer. Water
     Works Assn.

41.  Young, G.K., and M.A. Pisano (1970), "Nonlinear
     Programming Applied to Regional Water Resource
     Planning," Water Resources Research, 6 (1),
     32-42.

                    Design of Well Fields

42.  Aguado, E., and I. Remson (1974), "Ground-Water
     Hydraulics in Aquifer Management," Proc. Am. Soc.
     of Civil Engrs., 100 (HY1), 103.
                                                      636

-------
43.  Aguado, E., et al. (1974), "Optimal Pumping for
     Aquifer Dewatering," Proc. Am. Soc. of Civil
     Engrs., 100 (HY7), 869.

44.  Bear, J., and O. Levin (1966), "An Approach to
     Management and Optimal Utilization of Aquifers,"
     Proc. Amer. Water Resources Conference,
     Chicago, p. 200.

45.  Brown, G., and R. Deacon (1972), "Economic
     Optimization of a Single-Cell Aquifer," Water
     Resources Research, 8 (3), 557-64.

46.  Deininger, R.A. (1970), "Systems Analysis of
     Water Supply Systems," Water Resources Bulletin,
     6 (4), 573-79.

47.  Young, R.A., and J.D. Bredehoeft (1972), "Digital
     Computer Simulations for Solving Management
     Problems of Conjunctive Ground Water and Surface
     Water Systems," Water Resources Research, 8 (3),
     533-56.

                Water Distribution Systems

48.  Adams, R.W. (1961), "Distribution Analysis by
     Electronic Computer," Jour. of the Inst. of
     Water Eng., London, 15, p. 415.

49.  Camp, T.R. (1939), "Economic Pipe  Sizes for
     Water Distribution Systems," Trans. ASCE, 104,
     190-213.

50.  Camp, T.R., and H.L. Hazen (1934), "Hydraulic
     Analysis of Water Distribution by Means of an
     Electric Network Analyzer," Jour. of NEWWA,
     48, 383.

51.  Cembrowicz,  R.G. (1971),  "Mathematical Model of
     a Water Supply System Under Fluctuating Demand,"
     dissertation, Harvard University

52.  Cembrowicz,  R.G. (1973),  "Models of Water Supply
     Systems," in Models for Environmental Pollution
     Control, R.A. Deininger,  ed., Ann Arbor Science
     Publishers.

53.  Cembrowicz, R.G., and T.T. Harrington (1973),
     "Capital Cost Minimization of Hydraulic Network,"
     Proc. Am. Soc. of Civil Engrs., 99 (HY3), 431.

54.  Cembrowicz,  R.G.,  (1975)  "Least Cost Design of
     Water Distribution Networks," Proceedings
     Second World Congress  on  Water Resources, New
     Delhi, India, Dec. 1975.

55.  Chenoweth, H. and C. Crawford (1974), "Pipe
     Network Analysis," Jour. Am.  Water Works Assn.,
     p. 55.

56.  Cross, Hardy (1936), "Analysis of Flow in Networks
     of Conduits or Conductors," Bulletin No. 286, Univ.
     of Ill. Engineering Expt. Station, Urbana, Ill.

57.  Deb, A.K., and A.K. Sarkar (1971), "Optimization
     in Design of Hydraulic Networks," Proc. Am. Soc.
     of Civil Engrs., 97 (SA2), p. 141.

58.  Deb, A.K. (1974), "Least  Cost Design of Branched
     Pipe Network System," Proc. Am.  Soc.  of Civil
     Engrs.,  100 (EE4), 821.
59.  deNeufville, R., and J. Hester (1969), "Discuss-
     ion of 'Water Distribution System Analysis,'" by
     Shamir, U., and Howard, C.D.D., ASCE, Hydraulics
     Div. J., 95 (HY1).

60.  deNeufville, R., et al. (1971), "Systems
     Analysis of Water Distribution Networks,"
     Jour. San. Engr. Div., Proc. Amer. Soc. Civil
     Engr., 97 (SA6), 825.

61.  Demand, J., Bobee B., and J.P. Villeneuve,  (1975)
     "Analysis and Management of Water Distribution
     Systems," Journal, Urban Planning and Develop-
     ment,  ASCE, Vol. 101, No. UP2, p. 167.

62.  Dillingham, J.H. (1967), "Computer Analysis of
     Water Distribution Systems, Part II," Water and
     Sewage Works,  pp. 43-54.

63.  Dillingham, J.H., "Computer Methods for Water
     Distribution Systems Design," 16th Annual ASCE
     Conf., Hydraulics Div.

64.  Donachie, R.P. (1974), "Digital Program for Water
     Network Analysis," Proc. Am.  Soc. of Civil Engi-
     neers, 100 (HY3), 393.

65.  Duffin, R.J. (1947), "Non-Linear Networks,"
     Bulletin of Amer. Math. Soc., 53, pp. 963-971.

66.  Epp,  R.,  and A.G. Fowler (1970), "Efficient Code
     for Steady-State Flows in Networks," Jour. Hyd.
     Div.,  Proc. ASCE.

67.  Graves, Q.G., and D. Branscome (1958), "Digital
     Computers for Pipeline Network Analysis," Jour.
     San. Eng. Div., Proc. ASCE, Paper #1608, and
     Discussion by Hamblen, J., in Jour. San. Eng.
     Div., Proc. ASCE.

68.  Gupta, Ishwar (1969), "Linear Programming Analysis
     of a Water Supply System," AIIE Transactions,
     1 (1).

69.  Gupta, I., et al. (1972), "Linear Programming
     Analysis of a Water Supply System with Multiple
     Supply Points," Trans. Amer. Inst. Ind. Eng.,
     Vol. 4, No. 3, pp. 200-204.

70.  Hoag,  L.N., and G. Weinberg (1957),  "Pipeline
     Network Analysis by Electronic Digital Computer,"
     Jour.  Amer. Water Works Assn., 49,  5, p.  517-24.

71.  Howson, L.R. (1957), "Factors Affecting Long
     Distance  Transmission of Water," Jour.  Amer.
     Water Works Assn., 49, 10,  1359-68.

72.  Hughes, T.C., and C.G. Clyde (1973), "Municipal
     Water Planning - Mixed Integer Approach," Proc.
     Am. Soc. of Civil Engineers, 99 (HY11), 2079.

73.  Jacoby, S. (1968), "Design of Optimal Hydraulic
     Networks," Jour. Hyd. Div., ASCE, 94 (HY3), 641-
     660.

74.  Jacoby, S.L.S., and L.W. Twigg (1968), "Computer
     Solutions to Distribution Network Problems,"
     Proc. Natl. Symp. on Anal. Water Resources Systems,
     Amer. Water Resources Assn., Denver, Colo., p. 167.
                                                      637

-------
75.  Karmeli, D., Y. Gadish, and S. Meyers (1968),
     "Design of Optimal Water Distribution Networks,"
     ASCE J. of Pipeline Div., 94 (PL1).

76.  Kohlhaas, C.A., and D.E. Mattern (1973), "An
     Algorithm for Obtaining Optimal Looped Pipe
     Distribution Networks," Computers and Urban
     Society, ACM, pp. 138-151.

77.  Lemieux, P.P. (1972), "Efficient Algorithm for
     Distribution Networks," Jour. Hyd. Div., Proc.
     Am. Soc. of Civil Engrs., (HY11), 1911.

78.  Liang, T. (1971), "Design Conduit System by
     Dynamic Programming," Proc. Am. Soc. of Civil
     Engrs., 97 (HY3), 383.

79.  Linaweaver, F.P., Jr., and C. Scott Clark (1964),
     "Costs of Water Transmission," Jour. Amer. Water
     Works Assn., 56 (12), 1549-60.

80.   Lischer, V.C. (1948), "Determination of Econo-
     mical Pipe Diameters in Distribution Systems,"
     Jour. Amer. Water Wks. Assn., 40  (8), 849-67.

81.  Manne, A.S. (1961), "Capacity Extension and
     Probabilistic Growth," Econometrica, 29 (4),
     632-49.

82.  Martin, D.W., and G. Peters (1963), "The Appli-
     cation of Newton's Method to Network Analysis
     by Digital Computer," Jour. of Inst. of Water
     Engr., 17 (2), 115.

83.  McIlroy, M.S. (1950), "Direct Reading Electrical
     Analyzer for Pipeline Networks," Jour. Amer.
     Water Works Assn., 42, p. 347.

84.  McPherson, M.B. (1962), "Application of System
     Analyzer," Water and Sewage Works, Ref. and Data
     Number, pp. R-53.

85.  McPherson, M.B., and M. Heidari (1966), "Power
     Consumption with Elevated Storage Compared to
     Direct and Booster Pumping," Jour. Amer. Water
     Works Assn., 58 (12), 1585.

86.  Neigut,  E.G.  (1964),  "Distribution  Designed by
     New  Method,"  Waterworks and  Wastes  Engineering,
     pp.  46-49.

87.  Pitchai, R.  (1966),  "A  Model for  Designing
     Water Distribution Pipe Networks,"  Ph.D.  Thesis,
     Harvard  University.

88.  Rasmussen, J. (1975), "Simplified Optimization of
     Water Supply Systems," Proc. Am. Soc. of Civil
     Engineers, 102 (EE2), 313-327.

89.  Schaake,  C.,  and D.  Lai (1969),  "Linear Program-
     ming and Dynamic Programming Application to
     Water Distribution Network Design," Mass.  Inst.
     of Technology,  Hydrodynamics Laboratory,  Report
     No.  116.

90.  Shamir, U., and C.D. Howard (1968), "Water
     Distribution Systems Analysis," Jour. Hyd. Div.,
     ASCE, 94 (HY1), 219-33.

91.  Shamir, U. (1974), "Optimal Design and Operation
     of Water Distribution Systems," Water Resources
     Research, 10 (1), p. 27.

 92.   Smith,  D.V.  (1966),  "Minimum Cost Design of
      Linearly Restrained Water Distribution Networks"
      Master  Thesis,  Dept. of Civil Engineering, M.I.T.

93.  Sturman, G.M. (1971), "Systems Analysis for Urban
     Water Supply and Distribution," Jour. Environ.
     Systems, 1, 67.

94.  Tong, A.L., F.T. O'Connor, D.E. Stearns, and
     W.O. Lynch (1961), "Analysis of Distribution
     Networks by Balancing Equivalent Pipe Lengths,"
     Jour. Amer. Water Works Assn., 2, 192.

 95.   Urban Water  Resources Research (1968), A Study
      for OWRR, ASCE,  New York, N.Y.

96.  Warga, J. (1954), "Determination of Steady-State
     Flows and Currents in a Network," Paper #54-43-4,
     Proc. of the Inst. Soc. of Amer., 9, Part 5.

97.  Watanatada, T. (1973), "Least Cost Design of
     Water Distribution Systems," Proc. Am. Soc. of
     Civil Engr., 99 (HY9), 1497.

 98.   Whittington, R.B.  (1960), "Pipe Lines with
      Constant Draw-off  by Side Branches," Jour.  Inst.
      of Water Engineers,  London,  Vol.  146,  p.  459.

99.  Wood, D.J. (1971), "Analog Analysis of Water
     Distribution Networks," Proc. Am. Soc. of Civil
     Engrs., Transport Eng. Jour., TE2, p. 281.

 100.  Zarghamee, M.S.  (1971),  "Mathematical  Model  for
      Water Distribution Systems," Jour. Hyd. Div.,
      ASCE, pp. 1-14.

         Models of Treatment  Plants (Staging)

101. Dostal, K.A., et al. (1966), "Development of
     Optimization Models for Carbon Bed Design,"
     Jour., American Water Works Assn., 58 (9), 1170.

 102.  Harrington, J.J. (1967),  "The Role of  Computers
      in  Planning  and Managing  Water  Utilities,"
     Jour., New England Water  Works  Assn.,  81  (3),
      231.

 103. Lauria,   D.T. (1973),  "Water  Supply Planning in
     Developing Countries," Jour. Amer. Water Works
     Assn., 4£ (9), 583.

 104. Muhich,   A.J.  (1966),  "Capacity  Expansion of Water
     Treatment Facilities,"  unpublished Ph.D. disser-
      tation,   Harvard University,  Cambridge, Mass.

105. Riordan, C. (1969), "Toward the Optimization of
     Investment-Pricing Decisions:  A Model for Urban
     Water Supply Treatment Facilities," Ph.D. Thesis,
     Cornell Univ., Ithaca, N.Y.

106. Singh, Krishan P., and C.G. Lonnquist (1972),
     "Water Treatment Plant Staging Policy," Water
     Resources Bulletin, 8 (2), 239-49.
                                                        638

-------
                        CAPACITY EXPANSION FOR MUNICIPAL WATER AND WASTEWATER SERVICES:

                                         INCORPORATION OF UNCERTAINTY
   Robert G. Curran
   President, Curran Associates, Inc.
   Northampton, Massachusetts

   David H. Marks
   Resource Analysis, Inc.
   Cambridge, Massachusetts

   Donald S. Grossman
   Resource Analysis, Inc.
   Cambridge, Massachusetts
      Methods  for management of local water and waste-
water investments are outlined.  The strategy is to
choose the least  cost supply alternative that services
a forecast but uncertain equilibrium demand.  Careful
attention is paid to the overall usability of the
method by local planners.

      The research defines water and wastewater ser-
vice demands,  and identifies some controls available
to local decision makers for modification of these
demands.  The level of future requirements for planning
is uncertain;  the form and magnitude of the uncertain-
ty is explicitly  included.  Supply alternatives and
forecast costs of supply are also presented.  Fore-
casts are developed for local relevance and to best
utilize available information.

      A detailed  analysis of time phasing and scale of
capacity expansion requires forecasts of the impact of
supply shortage.   Short term alternatives in shortage
can either act to limit demand or to increase supply.
Recommended cost  assessment for these strategies is
empirical.  The criterion for expansion planning is to
choose the alternative which minimizes total costs,
where costs include construction, operating, and short-
age penalty fees.  The recommended expansions explicit-
ly incorporate forecast uncertainty in the evaluation
of alternative investment patterns.

              Problem Description

      Municipal water and wastewater investments repre-
sent a large and  important segment of the capital ex-
penditures made by local governments.  The traditional
response of designers to the problem of sizing incre-
ments of capacity has been to build for arbitrarily
long planning  periods, that is, to overbuild in order
to assure safe and adequate supplies.  This is due, at
least in part, to the relatively naive approach pre-
sented by traditional engineering textbooks and requir-
ed by Federal  funding guidelines.  A growing body of
evidence indicates that overbuilding is not the best
response to the uncertainties inherent in future demands
for service.  Two basic reasons may be cited.  First,
oversized system  elements are not economically efficient.
Second, water  resource related investments may have
impacts upon whether or not land is used, and for what
purposes.

      Public water supply and wastewater disposal is
undertaken for a  variety of reasons.  Commonly accepted
considerations for municipal provision of water supply
and wastewater disposal include public health, public
safety, resource  regulation, land use regulation, and
economic efficiency.   Given the basic premise of pro-
viding water related public services, the local de-
cision maker still has a range of options that define
the extent and quality of service.  The character of
service depends also upon the sorts of demands, and
the consequences of not meeting those demands.

    The immediate decisions available include size and
location of water distribution or sewer collection
mains, the size and location of treatment or supply
facilities, pricing or metering policies, and the
types of users and uses allowed.  Some of these, espe-
cially those related to sizing capital facilities, are
generally made in the long term in order to take advan-
tage of the economies of larger developments.  Others,
such as changes in pricing or allowable uses, are
easily changed on a short term basis.

    Demand has important effects upon the quality of
the services offered.  In order to size the supply,
the factors which influence demand for service, and
the magnitude of the influence, must be ascertained.
Only relatively recently has an understanding of the
elasticity of the demand for water been established.
Also, the consequences of not meeting demand must
be thought of in realistic and unemotional terms.
Shortage of supply need not result in unsanitary con-
ditions or shortage of water for drinking.  Rather than
requiring that supply always meet demand,
planning should exhibit some sensitivity to the costs
of not meeting demands.

    The traditional engineering approach may be briefly
characterized as supply oriented.  The steps are to
project demand, and then to find the least cost supply
to satisfy demand.  There are several shortcomings with
this approach.  First, methods for demand projection
including curve fitting or graphical analysis are naive
in that they only preserve past trends in the data.
Second, the method limits the range of study to struct-
ural rather than nonstructural alternatives (e.g., pric-
ing).  Third, demand is assumed as a given, no matter
how much it costs to satisfy that demand:  that is,
planned inadequacies or shortages are ruled out.  Fourth,
arbitrary design horizons are often set, which ignore
the tradeoff between economies of scale and the cost of
capital.

    More generally, a large number of inputs necessary
for decision making are uncertain.  These include
future demands, cost of addition to supply, costs of
shortage, and interest rates.  In order to effectively
plan in an environment of uncertainty, the analyst must
understand the sensitivity of system objectives to
variations in policy.  If the output does not show
much response to input assumptions or policies, then
despite uncertainties or policy changes, the un-
certainties are of little concern.  If the outputs are
sensitive, then the analyst may collect more data, re-
model the planning process, or try to explicitly model
the uncertainties.
                                                       639

-------
                    Model Framework

      The framework proposed for planning in this
paper is basically a least cost supply model.   The
simplest case to plan for is the expansion decisions
for a single facility,  for example, a single link in
a pipe network,  or a single treatment plant.  A key
assumption is that there is an identifiable service
area.  The steps proposed for planning in such a case
are to:

      (a)  Forecast  demand

      (b)  Estimate costs of expansion of supply

      (c)  Estimate costs of shortages

      (d)  Decide on the increment of plant
           capacity, and the timing of these
           increments,  based upon forecast
           demands, and the costs of meeting
           or failing to meet forecast demand.

Rather than ignore the  uncertainties that are known to
exist, and are universally agreed upon, an effort is
made to focus upon the  forms and degree of uncertain-
ties.  The philosophy is that this enables the analyst
to use more of the information available for planning.

      Unfortunately, it is difficult to check the
validity of this method, except through in depth
studies of field experience with the planning tool.
These have not been possible within the limited time
horizon of this research.   Rather, the proposed
approach is to see how  the model results compare with
models using differing  input assumptions.  For example,
one test would be to compare total system costs and
investment strategies assuming certainty, and then
assuming uncertainty in demand, all other things being
equal.  The output from tests such as this should
better enable the analyst to choose a method which
matches  his  understanding of the problems in planning
water and wastewater investments.

         Demand Analysis and Forecasting

      The traditional methods of demand forecasting by
naive trend extrapolation are reviewed in several
references (4, 32, 36,  37), but McJunkin's article (32)
is the classic study for civil engineers.  More sophis-
ticated models require  some understanding of the
causality of demand processes.  The view of demand for
service taken in this research is that of a hierarchi-
cal process of development, inhabitation, and consump-
tion.
      Models for understanding the development process
are numerous, and readily found in the transportation
or urban planning literature.  It is anticipated that
developers look at sites from the same viewpoint as
households or businesses that seek to locate.   The
developers seek sites consistent with the preferences
of the groups to whom they are trying to market.  A
theoretical basis for understanding residential loca-
tion is well developed.  Alonso (3) postulated that
budget constrained households choose a site that maxi-
mizes their utility, where utility is a function of the
amount of land, commuting costs, and a composite measure
of all household goods.  A number of models have been
developed for forecasting on this basis (17,23,42).
The theory of location  of the firm is less well devel-
oped.

      Although this describes the relative attractive-
ness of locations within a region, it in no way pro-
vides a rationale for the driving force behind growth
in population, or differences in growth between
regions.  Therefore,  the  usual  approach is to forecast
population  and  employment on  a  regional level, and to
allocate  the  forecast within  the region.

    As early  as 1963, research  (39)  showed that sewer
service was significant in inducing  conversion of
vacant to developed  land.   Later, it was shown (24)
that an index of utility  availability explained inter-
regional  differences  in growth.   More recent studies
(13, 14,  44,  45)  have attempted to quantitatively
model the impact of wastewater  infrastructure invest-
ments. The  state-of-the-art ability  to model the
magnitude of  development  changes is  limited. Key draw-
backs of  the  models  include strong data dependence, and
the unavailability of accurate  projections for the para-
meters that force or drive the output.  Rather, what is
important for the analyst or planner is careful recog-
nition of water or wastewater policy relationships to
development.

    Given an  equilibrium  developed stock of residences,
stores, offices,  and  industrial sites,  consumption
depends directly upon the degree to  which the stock is
utilized.  Normal vacancy rates for residences are
approximately 3% (21); in some urban core areas the
vacancy rate can be dramatically above 10%.  The
important observation is that even if developed stock
can be accurately inventoried, this need not be a good
indicator for assessing consumption.

    Conditioned upon  development, and thereupon  inhabi-
tation of an  area, the total  water and wastewater
supply required is determined by both level of service
offered and by  individual consumer demand character-
istics.   The  ensuing  discussion emphasizes that  water
and sewer services can and should be treated as  economic
goods.  Level of service  changes can cause shifts in
demand, and,  similarly, the preferences of consumers
can change over time.

    Different elements of the system must be designed
to satisfy  different  components  of demand.  Demand for
water exhibits  daily, weekly, and annual cycle varia-
tions.  Annual  cycles are important  for planning basic
source; demands on the maximum  day are important for
planning transmission facilities, treatment facilities,
distribution  pumping  stations,  and major feeder  mains;
peak hour demands or  maximum  fire flow are important
for planning  local distribution mains,  connections, and
local storage.   Wastewater demands typically exhibit a
close relationship to observed water demands.  A number
of crude  rule-of-thumb multipliers are available for
relating  demand components.
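
    As a concrete illustration of relating demand components by rule-of-thumb
multipliers, the short Python sketch below scales an assumed average-day use
up to maximum-day and peak-hour demands and down to a wastewater flow.  All
of the multiplier values are hypothetical placeholders, not figures from this
paper; locally calibrated values should be substituted wherever they exist.

    # Illustrative only: hypothetical rule-of-thumb multipliers for relating
    # the demand components named in the text; substitute local data.
    AVERAGE_DAY_GPCD = 150.0    # assumed average-day use, gallons/capita/day
    MAX_DAY_FACTOR = 1.8        # assumed maximum-day / average-day ratio
    PEAK_HOUR_FACTOR = 3.0      # assumed peak-hour / average-day ratio
    SEWAGE_RETURN_FACTOR = 0.8  # assumed fraction of water use reaching sewers

    def demand_components(population):
        """Translate a population estimate into the components used to size
        different system elements (source, treatment, distribution)."""
        average_day = population * AVERAGE_DAY_GPCD
        return {
            "average_day_gpd": average_day,                   # basic source
            "max_day_gpd": average_day * MAX_DAY_FACTOR,      # transmission, treatment
            "peak_hour_gpd": average_day * PEAK_HOUR_FACTOR,  # local mains, storage
            "wastewater_gpd": average_day * SEWAGE_RETURN_FACTOR,
        }

    print(demand_components(25000))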

      Four distinct classes of user generally cited in
water and wastewater planning are residential, com-
mercial, industrial, and public and unaccounted uses.  Re-
search on residential water usage has been extensive
(15, 16,  19,  20,  25,  42,  44,  46). The basis for most
of the studies  cited  is,  at least indirectly,  a  project
to study  residential  water use  conducted at Johns
Hopkins University.   A summarization of the project
results is  presented  in a paper  by Howe and Linaweaver
(20).  The  Howe and Linaweaver  study concludes that
residential users respond to  price as a quality  of
service indicator.  The importance of this sort  of re-
search is in  quantification of  the effects that  changes
in level  of service can have  on  demand.   It provides a
basis for analysis that is  readily applicable  and easily
reproduced, and helps guide in  the formulation of level
of service  policy changes.

      Unfortunately, due to a lack of transferable
models and data, the most convenient way to forecast de-
mand for service is to project population, and then to
apply population-to-use multipliers.  Two points must be
                                                      640

-------
emphasized.   First, the focus need not be on the devel-
opment of better point estimates for future population
levels,  but  instead on a quantification of the uncer-
tainty in population forecasts.  Second, the analyst
should,  as much as possible, utilize locally-based con-
sumption data in calibrating use multipliers.

      The traditional engineering texts have paid pain-
fully little attention to the subject of demand fore-
casting.  A  study by James, Matalas, and Bower (22)
shows, for one particular system, the economic develop-
ment projection to be the most important variable in
water resources planning, yet many current texts still
propose graphical extrapolations or simple regressions
as the basis for demand forecasts.  A method for dealing
with this dilemma is to make several projections, and
perform an ad hoc sensitivity analysis. If, in fact,
investment planning  decisions  are shown to be sensitive
to economic  development projections, as expected, then
the analyst  must in some way combine the information
from several projections in order to formulate an in-
vestment strategy.  Rather than choose between projec-
tions in some arbitrary fashion, the analyst might con-
sider trying to model the likelihood of the projections.

      Several models of this type for modeling popula-
tion growth  as a stochastic process are available and
easily applied (5, 28, 33, 34, 35).  Limited evidence
shows that the stochastic model performs better in
capturing the variance of the underlying population
growth (34).  Often, this is severely underestimated by
regional planners.  A common formulation is to model
birth, death, and migration rates as stochastic pro-
cesses.  First, a form for the process is chosen.
Second, process parameters are estimated using histor-
ical data.  Third, the observed parameters and chosen
form are used to simulate future population growth.
Fourth, a distribution form is chosen, and statistics
gathered on  the uncertainty in future population levels.
This modeling approach uses data available more fre-
quently than census data, so process parameters may be
estimated with greater confidence.  Also, the method
has causal structure which allows for improvements
beyond those possible with aggregate population models.
One important area for future research is extension of
the basic model structure to use subjective or regional
information  in a Bayesian fashion.  Another is the re-
finement of  models for use in regions with dependent
subareas.  The use of stochastic population models seems
to be a promising area for additional research.

      Population is only a surrogate for the desired
metric of demand for water or wastewater services.  The
proposed method is to convert stochastic population
forecasts to demand forecasts through the use of con-
sumption multipliers.  Standard multipliers are avail-
able in a number of sources (19).  These do not account
for consumption habits, pricing effects, climate, and
other factors which can cause variations in water or
wastewater production.  Therefore, the analyst should
endeavor to  gather local data to estimate consumption
multipliers.  Care should be taken to include the com-
ponent of demand due to infiltration, inflow, or leak-
age (4, 19).  A preferred method, to model the variation
in observed  use, is not possible due to the lack of data
and lack of  theoretical framework.
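
    A minimal Monte Carlo sketch of the four-step procedure described above
is given below in Python.  It assumes normally distributed annual birth,
death, and net migration rates, an assumption made here only for illustration;
the text does not prescribe a process form, and every parameter value shown
is hypothetical.  The final step converts the simulated populations to
average-day water demand with an assumed per-capita consumption multiplier.

    import random
    import statistics

    # Steps: (1) assume a process form, (2) supply estimated rate parameters,
    # (3) simulate future population, (4) summarize the forecast uncertainty.
    # Normally distributed annual rates and all numbers are assumptions.
    def simulate_population(p0, years, rng,
                            birth=(0.016, 0.002),      # (mean, std dev) per year
                            death=(0.009, 0.001),
                            migration=(0.005, 0.004)):
        pop = p0
        for _ in range(years):
            growth = rng.gauss(*birth) - rng.gauss(*death) + rng.gauss(*migration)
            pop *= 1.0 + growth
        return pop

    def demand_forecast(p0, years, trials=2000, gpcd=150.0, seed=1):
        """Convert stochastic population forecasts to average-day demand (gpd)
        using an assumed per-capita consumption multiplier."""
        rng = random.Random(seed)
        demands = [simulate_population(p0, years, rng) * gpcd for _ in range(trials)]
        return statistics.mean(demands), statistics.stdev(demands)

    mean_gpd, sd_gpd = demand_forecast(p0=25000, years=20)
    print("20-year average-day demand: %.0f +/- %.0f gpd" % (mean_gpd, sd_gpd))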

                   Supply Analysis

      The choice of supply  alternative depends upon the
type of demands being planned for, and upon the site
characteristics of climate, topology, geology, and
existing or  planned development.   The choice of supply
will also depend upon the relative costs and upon the
availability of supply alternatives to satisfy demand.
A key assumption taken in this analysis is that the
cost function for the current project is independent
of the number and sizing of projects  preceding  the cur-
rent decision.  In other words,  the cost  function  for
system expansion appears identical at  all  points in
time.

      Within the analysis of water or  wastewater facil-
ities, the planner can choose the depth of analysis.  The
simplest approach for facility analysis is to determine
a functional form for the costs  of expansion, and  then
to utilize standard parameters to determine  the exact
scaling of the function.  A more sophisticated approach
is to use observed or synthetically generated points  to
calibrate the cost function parameters.  Both of these
methods will be reviewed in this section.  It is im-
portant to note that due to the  strong dependence  on  site
characteristics, local calibration of  parameters is the
preferred alternative.

      In general, over a wide range of sizes, water and
wastewater system elements exhibit economies of scale:
it is possible, however, to get out of the range of
economies.  The most commonly used representation  for a
cost relationship of this sort is
                 C = kQ^m                              (1)
where C is the total cost (usually in dollars), k is a
scale parameter, Q is the total capacity, and m is the
economy of scale parameter.  This relationship exhibits economies
of scale for values of m between zero and one.  Observe
that the function is continuous, suggesting that the
equipment is available in any sizing.  Although in
practice, this is not possible due to standardization
of components or to site irregularities, it seems to be
a reasonable assumption that helps improve analytic
tractability.
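
    A brief numerical sketch of equation (1) follows; the values of k and m
are hypothetical, since the text stresses that both parameters should be
calibrated locally from observed or synthetically generated cost data.

    # Cost relationship of equation (1): C = k * Q**m.  Parameter values are
    # hypothetical; with 0 < m < 1 the unit cost falls as capacity grows.
    def capacity_cost(q, k=100000.0, m=0.7):
        """Total construction cost for an installed capacity q."""
        return k * q ** m

    for q in (1.0, 2.0, 4.0):
        c = capacity_cost(q)
        print("Q = %3.0f  C = %10.0f  unit cost = %8.0f" % (q, c, c / q))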

      The two parameters have been estimated for a number
of system elements.  These estimates have been taken,
in general, from two sources of information.  One way to
estimate the parameters is to use a broad base of observed
costs and installed capacities, and use regression or
some other curve fitting technique.  A second way to
develop a basis for fitting the cost function is to
synthetically cost a number of alternative installation
sizes, and to fit a function to the synthetic data.  The
latter method is useful in trying to develop cost func-
tions with local significance.  Examples of parameters
in the literature are numerous (1, 2, 4, 5, 7, 8, 9, 10,
18, 27, 36, 40).  The variation in reported parameters
reflects a number of differences in data or underlying
assumptions.  The comparisons may be misleading due to
site specific differences.   Also, units may be incommen-
surate due to misadjustment for exchange level or tech-
nology.

      Site specificity is of primary importance.  Clearly,
the costs of capacity depend heavily upon the relative
suitability of sites for development.  The above model
assumes the only difference between projects to be size.
Variation in cost could also be due to hydrology, top-
ology, geology, existing development, planned develop-
ment, durability of installation, site acquisition, legal,
construction materials, site preparation, and so forth.
Any number of these might be included as explanatory
variables if the data were available and if they could be
forecast for future planning:  neither is the case.  A
suggested approach, therefore, is to try and qualitatively
control for sources of variation other than size.  This
would require definition of categories exhibiting
significantly different cost parameters, and estimation
of those parameters for use in a look-up table.  Neither
the data nor the theoretical basis exist to accomplish
this.  In the interim, generalized scale factors are the
only alternative to site specific cost assessment.
                                                       641

-------
      Differences in costing assumptions or accounting
stance may also explain variations in scale factors.
The parameters are based solely on primary construction
costs, and no secondary environmental or socioeconomic
impacts are included.  Other accounting issues include
transformations between currencies and intertemporal
comparisons.  These require choice of a suitable ex-
change rate and discount rate for transformation to a
common datum.  Another accounting issue is the problem
of inflation of water and sewer prices differing from
the general rate of inflation.  In the case of water and
sewer plant, the ENR index has been increasing at a rate
of 5.5 percent per year, while the consumer price index
has increased at a rate of 2.8 percent per year.  A cor-
rection must be made to reduce the opportunity costs of
capital by the rate of relative price increase.

      Thus far, little attention has been paid to the
definition of the quality supplied.  Assuming a design
configuration (and operating policy, if applicable), the
level of supply is variable due to climatic variations
or reliability problems, and there is typically some
probability that the source installation chosen will not
satisfy demand.  The usual method for treating this is
to consider component reliability for a certain confi-
dence level.  If possible, sensitivity analysis should
be employed to test the validity of the reliability level
chosen.

      Finally, a simple analytic form was chosen for the
cost function.  The capacity expansion model, in its
most general form, does not require an analytic cost
function.  The models developed in this research may be
readily generalized for use with any monotonic cost
function.  However, this has not been fully implemented
in this version.

                    Costs of Shortage

      Estimation of the costs of shortage is a new and an
important area for research (40).   The key concept is
that the costs of shortage are not infinite.  Shortage
may merely imply inconvenience or it may mean a much more
serious condition.  Adjustments can sometimes be made by
consumers in order to lessen the impact of shortage. This
section discusses the types of adjustments possible, and
considerations for estimation of the costs of these ad-
justments.

      Most engineers and utility managers accept the
premise that systems should be designed to accommodate
demand at all times.  Shortfalls are not acceptable, and
should, therefore, be avoided with accurate forecasting,
planning, and (over) design of facilities.  Implicit in
this approach is the assumption of an infinite cost of
shortfalls in supply capacity or in delivery capacity.
One suspects the true costs to relate to the adjustments
that are possible.

      Adjustments in the case of shortage may be categor-
ized as either acting to increase supply or to reduce
demand.  For water, measures that increase supply in-
clude emergency storage and interconnections with other
systems.   Measures that reduce demand include changes in
pricing,  changes in the pricing mechanism (e.g. the
installation of meters), restrictions on uses (e.g. lawn
watering or car washing prohibitions), and restrictions
requiring reuse (e.g.  recirculating air conditioning
equipment).  Note that adjustments that reduce water de-
mand affect both distribution and source shortage.  For
sewer, measures that increase supply include inline
storage and flow regulating devices.  Measures that re-
duce demand are expected to be similar to those used for
water supply.
      The major study of supply shortage is that of
Russell, Avery, and Kates (40), who have documented
productivity  losses  for  several Massachusetts commun-
ities during  the  Northeast  drought from 1961-1966.
Their general methodology defined water-shortage losses
as  "gross  annual  benefits lost by disappointed users
less costs avoided by  the supplier."  The authors
assumed  full  employment  and assigned costs to the
resources  diverted to  meet  a drought crisis.   The cal-
culation of actual losses was corrected in several
ways to  reflect different interest rates and  accounting
stances.   In  cases where firms deprived of normal
quantities of water  undertook investments in  water con-
serving  technology,  it was  counted as a benefit.   In a
number of  cases,  the net result from a national point
of  view  of the effect  of drought on commerce  and in-
dustry was that the  investment in water saving technology
produced a benefit rather than a cost.

      Observed losses  were  related to the percent
shortage,  which was  in turn related to the measure of
system inadequacy.   Two  difficulties arose.   First,
system managers'  anticipation of shortage caused costs
to  be incurred without shortage having actually occurred.
Second, existing safety factors were not known.  None-
theless, an exponential cost function was esti-
mated.
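
    An illustrative form of such an exponential loss relationship is sketched
below; the functional form is only one possibility, and the coefficients are
placeholders rather than the values estimated by Russell, Avery, and Kates.

    import math

    # Illustrative annual loss as an exponential function of percent shortage.
    # Both coefficients are assumed values, not estimates from the study cited.
    def shortage_loss(percent_shortage, a=50000.0, b=0.08):
        if percent_shortage <= 0.0:
            return 0.0
        return a * (math.exp(b * percent_shortage) - 1.0)

    for s in (5, 10, 20, 40):
        print("%2d%% shortage -> loss of about $%.0f" % (s, shortage_loss(s)))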

      There is strong  reason to believe that  the  loss
relationships will differ for different geographical
sections of the country and for areas with differing
levels of  usage in industrial,  commercial,  domestic,
and municipal sectors.   For example,  cost of  drought
could be significantly higher in more arid areas.

      Overall, the work  of  Russell,  Avery,  and Kates  is
an  excellent  effort  in quantifying an elusive relation-
ship.  More work  needs to be done to verify the  results,
and their transferability.  A significant effort must
be made to obtain similar results for the costs of waste-
water service shortfalls.   Analysis  of  capacity expan-
sion investments  should  help quantify the sensitivity of
total system  costs to  the costs of shortage.

         Capacity Expansion Decision Making

      Given demand forecasts,  costs  of  expansion, and
costs of shortage, the final planning step  is  the choice
of  timing  and sizing of  increments for  supply  expansion.
This problem  is closely  related to the  inventory  con-
trol problem, and a number  of models have been proposed
in  the literature.  This  section discusses the available
models, including several new applications  to  water and
wastewater planning.   A  number of models  are  presented
to allow the user a considerable degree of  flexibility
in  the depth of analysis  undertaken.   This also allows
for the added ability  to  incorporate uncertainty  in
demand, or not.

      There have  been  two classes of  research  in  this
problem.   One is  the choice of  a finite set of projects
with known costs  and supply potential,  and simple
sequencing of these projects to meet  forecast  demand at
least cost  (11, 30, 31).  This  approach requires  the
solution of a combinatorial problem,  and  the  feasibility
of  solution depends heavily upon the number of projects
under consideration.   The advantage  of  this method is
that it takes account  of  the exclusivity  of projects.
The disadvantage, which  is  overriding,  is that the choice
of projects, and their sizes, is made independently of,
rather than simultaneously with, the sequencing problem.
This hierarchical decision  process can  be suboptimal.
The second approach  (5,  6,  7,  12,  18,  26,  28,  29, 38, 40,
41) is to  choose  the timing and sizing  of capacity
increments to satisfy  demand at least cost, where de-
mand may be uncertain, and  where the costs may be  due
to  construction,  operation,  maintenance,  and  to  short-
age.
                                                        642

-------
         Eight  models are proposed for a planning pack-
 age.  These  include varying assumptions about linearity
 or nonlinearity  and certainty or uncertainty in demand.
 They  also  include  varying assumptions about allowabil-
 ity of  shortages.   Table 1 shows a categorization of
 the models.
                     Certainty                  Uncertainty

 Linear        Shortfalls                 Shortfalls
               no Shortfalls              no Shortfalls

 Non-Linear    Shortfalls                 Shortfalls
               no Shortfalls              no Shortfalls

                 Table 1:  Model Typology

 The  reason  for  considering this  broad a set  is  that
 the  simpler models,  for example  certain linear  demand
 with no  shortfalls,  have  closed  form or very efficient
 solution procedures.   The more complex models,  for
 example  nonlinear uncertain demands  with shortfalls,
 require  elaborate solution procedures such as stochastic
 dynamic  programming.   The analyst  interested in exten-
 sive sensitivity analyses could, therefore,  sacrifice
 the  greater realism  of the more  complex models  for the
 efficient solvability of the simpler models.
 jects, expansion costs for the form C   kQ  , shortage
 costs quadratic (or greater) in the magnitude of shortage,
 no budget constraints, and no operating or maintenance costs.

       Two uses were made of the model.  It was applied,
 ex poste, to judge the quality of water supply investments
 for several New England towns.  This is somewhat unfair
 because it uses information on population not available
 at the time the actual decisions were made.  The model was
 also applied to develop prototype rule of thumb guidelines
 for investment planning by local decision makers.  Results
 were represented in terms of a dimensionless level of in-
 adequacy, defined as the ratio of 'current'  use to safe
 yield. Based upon input assumptions about the discount rate,
 shortage costs, and economies of scale, the  tabulated re-
 sults indicate the level of inadequacy at which to build,
 and also the length of planning period for which to build.
 Higher discount rate,  higher economies of scale,  or higher
 shortage costs all lead to the recommendation to build for
 shorter planning periods,  and vice versa.  The research does
 not, however,  provide  insight into the value of modeling
 nonlinear demand growth as compared to using a  linear ap-
 proximation to the growth in demand.  It does show  that if
 demands are uncertain  then total  system cost is  sensitive
 to the costs of shortage.

       The solution procedure was  a nonlinear programming
 algorithm based on the method of  Zoutendijk.  A  number of
 combinations of loss  function parameter,  scale  parameters,
 discount rate,  population  growth  rate,  and per  capita con-
 sumption growth rate were  studied.  Total  costs were fairly
 insensitive to  the loss  function  parameter,  but  highly
 sensitive to the discount  rate. Not  only  did costs  change
      All  four cases of model  assuming  linear  demand  have wlth  differin§  discount rates, but capacity increment
                                                          sizes also  changed.   The  total costs were sensitive to
                                                          economy  of  scale parameter but the sizing of increments
                                                          was less  sensitive. Overall, the model provides a basis
                                                          for analyzing nonlinear growth in demand with shortage.
been studied extensively by Manne (28, 29), and several
have been applied in planning water or wastewater in-
vestments (5, 6, 26, 38).  The assumptions that go into
these models are all similar.  The assumptions include
infinite economic lifetimes for projects, an infinite
planning horizon, and no budget constraints.  For the
cases of uncertain future demands, a Bachelier-Wiener
diffusion process in continuous time is used as the
model.  The form of expansion costs is C = kQm.  Opera-
ting and maintenance costs cannot be included in these
models.  Costs of shortfalls are linear in the magnitude
of shortage.   Although these assumptions result in a
highly simplified model of the capacity expansion plan-
ning process, the advantage is that the models have
closed form solutions or require simple one or two di-
mensional searches.
     Using these models, the sizing of capacity incre-
ments has been found to be sensitive to discount rates,
economy of scale parameters, demand, and demand uncer-
tainties.  The indication in research by Manne is that
lower discount rates, or higher economy of scale para-
meters, lead the analyst to recommend larger investments
in capacity installments.  The exact nature of the re-
lationship between the time that the investment should
supply, economy of scale, and the discount rate is
available in several published sources, including Manne.
When the penalty cost is assumed to be infinite, that is,
no shortages are allowed, and in the case of demand
uncertainty, Manne indicates that one should overbuild,
as compared to the case of certainty.  When the shortage
penalty cost is less than infinite, no definite results
have been drawn.
     The four cases of models assuming nonlinear growth
in demand have not been dealt with as extensively.  The
only related study in the water and wastewater planning
literature is that of Russell, Avery, and Kates (40),
which reviewed costs of shortage for the mid-1960's
New England drought.  The model used to optimize invest-
ment strategy for deterministic demands was a nonlinear
programming algorithm.  The assumptions included deter-
ministic demands, infinite economic lifetimes for pro-
     It can be shown that capacity expansion decisions
for an arbitrary monotonic demand path can be modeled as
a dynamic programming problem.  For the cases with
shortage, this simply requires a suboptimization step.
The dynamic programming problem formulation can be
readily solved using any of a number of efficient
shortest path algorithms.  To expand the nonlinear
demand model to include cases of uncertainty in demand,
a stochastic dynamic programming formulation can be
established.  This is related to previously studied
stochastic inventory control models.  The distribution
on demands is generally discretized.  Models of this
type have yet to be applied to planning water or waste-
water investments.
     For the nonlinear demand models, the additional
complexity of a budget constraint may be introduced.
Similarly, restrictions on available plant sizes and
maximum allowable shortages may be included.  The form
of cost and shortage cost functions may be varied, and
operating and maintenance costs added.  All of these are
relatively straightforward but as yet untried extensions.
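     The dynamic programming formulation described above can be illustrated as a shortest-path computation over candidate expansion years.  The Python fragment below is only a schematic under simplifying assumptions added here (discrete decision years, the cost form C = kQ^m, a fixed discount rate, and no shortages permitted); it is not the algorithm of any of the cited studies.

    # Minimal shortest-path sketch for sizing capacity increments along a known,
    # monotonically growing demand path (all numbers are hypothetical).
    def plan_expansions(demand, k=1.0, m=0.7, r=0.07):
        """demand[t] = capacity required in year t; returns (PV cost, expansion years)."""
        n = len(demand)
        best = [float("inf")] * (n + 1)   # best[j] = min PV cost of covering years 0..j-1
        pred = [0] * (n + 1)
        best[0] = 0.0
        for i in range(n):                 # expand in year i ...
            for j in range(i + 1, n + 1):  # ... sized to cover demand through year j-1
                size = demand[j - 1] - (demand[i - 1] if i > 0 else 0.0)
                cost = best[i] + (k * size ** m) / (1.0 + r) ** i
                if cost < best[j]:
                    best[j], pred[j] = cost, i
        years, j = [], n
        while j > 0:
            years.append(pred[j])
            j = pred[j]
        return best[n], sorted(years)

    print(plan_expansions([10, 12, 15, 19, 24, 30]))

Each arc of the implicit network corresponds to one expansion decision, so any standard shortest-path routine recovers the least-cost schedule for a deterministic, monotonic demand path.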
                                                       643
                    Conclusions
There is no general availability of a set of models for fore-
casting demand, estimating expansion costs, estimating short-
age costs, and planning capacity expansion for water and waste-
water investments. Such models in a usable package would pro-
vide municipal decision makers with the flexibility to model
in a simple sketch planning fashion, or the ability to model
more complex assumptions about the underlying system. Rather
than develop a single plan for capacity expansion, a range of
plans might be developed.  Sensitivity to input assumptions,
or the underlying distributions of input variables would aid
in incorporation of the large uncertainties inherent in plan-
ning social systems.  Ultimately, these might be abstracted
to provide simple rules of thumb for planning.

-------
1.  Ackermann, William, "Costs of Water Resource Deve-
    lopment," W & SW: R-74-R-80,  1968.

2.  Ahern, P., and J. F.  Brotchie, "Estimation of
    Economies of Scale of Sewerage Systems for Cities
    of Different Size Using a Cost Model, Sydney,
    Australia," unpublished mimeograph, May, 1974.

3.  Alonso, W., Location and Land Use,  Cambridge,
    Harvard University Press, 1964.

4.  American Society of Civil Engineers and the Water
    Pollution Control Federation, Joint Committee. De-
    sign and Construction of Sanitary and Storm Sewers,
    New York: ASCE Manuals and Reports  on Engineering
    Practice Number 27, 1970.

5.  Berthouex, Paul M. and Lawrence B.  Polkowski,
    "Design Capacities to Accommodate Forecast Uncer-
    tainties," JSED, ASCE, NY: 7633 (SA5): 1183-1210,
    October, 1970.

6.  Berthouex, Paul M., "Accommodating  Uncertain Fore-
    casts in Selecting Plant Design Capacity," JAWWA,
    Washington,: 63 (1):  14-20,  January, 1971.

7.  Berthouex, Paul M., "Evaluating Economy of Scale,"
    JWPCF, Washington: 44(11): 2111-2119, November,
    1972.

8.  Burley, M., and P. Mawer, "The Conjunctive Use of
    Desalination and Conventional Impounding Reservoirs,
    Water and Water Engineering,  London; (7): 275-277.
    July, 1968.

9.  Burley, M., and P. Mawer, "Water Supply Costs,"
    Water and Water Engineering,  London: (8): 327-329,
    August, 1968.

10. Burley, M. and P. Mawer, "The Present State of
    Desalination," Water and Water Engineering, London:
    (9): 368-370, September, 1968.

11. Butcher, William S., Yacov Y. Haimes, and Warren A.
    Hall, "Dynamic Programming for the  Optimal Sequen-
    cing of Water Supply Projects," Water Resources

12. Douillez, P. J. and M. R. Rao, Optimal Network
    Capacity Planning:  A Shortest Path Scheme, Opera-
    tions Research, Baltimore: 23 (4):  810-818, July -
    August, 1975.

13. Environmental Impact Center,  Inc.,  Secondary Im-
    pacts of Highways and Sewers  on Environmental Qual-
    ity, Washington : Council on  Environmental Quality,
    1974.

14. Environmental Impact Center,  Inc.,  Secondary Im-
    pacts of Transportation and Wastewater Investments:
    Review and Bibliography, Washington: Council on
    Environmental Quality, 1974.

15. Gysi, Marshall, The Long Run  Effects of Water
    Pricing Policies, Ithaca: Cornell University Water
    Resources and Marine Sciences Center, V.I (NTIS
    PB-198 440) TR 25, March, 1971.

16. Hanke, Steve H.,  "Demand for  Water  Under Dynamic
    Conditions," WRR, Washington: 6(5): 1253-1261,
    October, 1970.

17. Herbert, J. D.  and B.  H. Stevens, "A Model for the
    Distribution of Residential Activity in Urban
    Areas," Journal of Regional Science: 2(2): 21-36,
    1960.
18. Himmelblau, D. M., The Optimal Expansion of Water
    Resources System, Austin, Texas: University of
    Texas at Austin, Department of  Chemical  Engineering.

19. Hittman Associates, Inc., Main  I, A System of
    Computerized Models for Calculating and  Evaluating
    Municipal Water Requirements, Columbia,  Maryland:
    V, 1., (NTIS PB-182-SSS), June, 1968.

20. Howe, Charles W. F. P. Linaweaver,  Jr.,  "The Impact
    of Price on Residential Water Demand and Its
    Relation to System Design and Price Structure,"
    WRR, Washington: 3(1): 13-32, First Quarter,  1967.

21. Hudson, James F., Demand for Municipal Services:
    Measuring the Effect of Service Quality,  Cambridge;
    Massachusetts Institute of Technology, Department
    of Civil Engineering, R75-21, June,  1975.

22. James I., Bower, B. and N. Matalas,  "Relative
    Importance of Variables in Water Resource  Planning,"
    Water Resource Research, Washington:  5(6):  1165-1173,
    December, 1969.

23. Kain, J.  F., The National Bureau of Economic  Research
    Urban Simulation Model: Supporting  Empirical
    Research (V. II), New York: National Bureau of
    Economic Research, 1971.

24. Kaiser, E. J., A Producer Model for Residential
    Growth, Chapel Hill, North Carolina: University of
    North Carolina at Chapel Hill, Center for Urban and
    Regional Studies, Institute for Research in Social
    Sciences, 1968.

25. Linaweaver, F. P., Jr., James C. Beebe, and Frank
    A. Skrivan, Data Report of the Residential Water
    Use Project, Baltimore: The Johns Hopkins University,
    Department of Environmental Engineering Science,
    June, 1966.
26. Lauria, Donald T., "Water Supply Planning for Develop-
    ing Countries," Water Resources Research, 65 (9):
    583-589, September, 1973.

27. Lauria, Donald, et al, "Models for the Optimal Tim-
    ing and Scale of Water Systems," AWWA Annual
    Conference, Minneapolis: unpublished mimeograph,
    June, 1975.

28. Manne, Alan S., "Capacity Expansion and Probabilistic
    Growth," Econometrica: 29(4): 632-649, October, 1961.

29. Manne, Alan S.,  Investments for Capacity Expansion:
    Size, Location and Time Phasing, Cambridge: The
    M.I.T. Press, 1967.

30. Morin, Thomas, "Optimal Sequencing of Capacity Ex-
    pansion Projects," Journal of the Hydraulics Division,
    ASCE, NY: 9972 (HY 9): 1605-

31. Morin, Thomas L., and Roy E. Marsten, A Hybrid
    Dynamic Programming/Branch and Bound Approach to a
    Class of Sequencing Problems, Evanston, Illinois:
    Technological Institute, Northwestern University,
    March, 1975.

32. McJunkin, Frederick E., "Population Forecasting by
    Sanitary Engineers," JSED, ASCE, N.Y.: 3993
    (SA4): 31-58, August, 1964.

33. Meier, Peter, Stochastic Population Dynamics for
    Regional Water Supply and Waste Management Decision
    Making, Amherst, Ma.: University of Massachusetts,
    Department of Civil Engineering, EVE 25-70-S,
    August, 1970.
                                                      644

-------
34.  Meier, Peter, "Stochastic Population Projection at
    Design Level," JSED, ASCE, N.Y.: 9436 (SA6):
    883-896, December, 1972.

35.  Meier, Peter M.,  "A Search Algorithm for the Estim-
    ation of Interregional Migration in Local Areas,"
    Environment and Planning: 5: 45-59, 1973.

36.  Metcalf and Eddy, Inc., Wastewater Engineering:
    Collection, Treatment, Disposal, New York: McGraw
    Hill Book Company, 1972.

37.  Morrison, Peter A., Demographic Information for
    Cities: A Manual for Estimating and Projecting Local
    Population Characteristics, Santa Monica:  The Rand
    Corporation, R-618-HUD, June, 1971.

38.  Rachford, Thomas, Russell Scarato, and George
    Tchobanoglous, "Time Capacity Expansion of Waste
    Treatment Systems," JSED, ASCE, NY: 6957 (SA6):
    1063-1077, December, 1969.

39.  Rogers, A., The Time Lag of Factors Influencing Land
    Development. Chapel Hill, North Carolina: University
    of North Carolina at Chapel Hill, Center for Urban
    and Regional Studies, Institute for Research in
    Social Science, 1963.

40.  Russell, Clifford S., David G. Avery, and Robert W.
    Kates, Drought and Water Supply:  Implications of
    the Massachusetts Experience for Municipal Planning,
    Baltimore: The Johns Hopkins Press, 1970.

41.  Russell, Clifford S., "Uncertainty and Choice of
    Plant Capacity," JAWWA (Notes), Washington: 63 (6):
    390-391, June, 1971.

42. Thompson, Russell G., et al, Forecasting Water
    Demands, Arlington, Virginia:  National Water Com-
    mission, (NTIS PB-206 491), November, 1971.

43.  Traffic Research Corporation, Reliability Test
    Report;  EMPIRIC Land Use Forecasting Model, New
    York:  Report to Boston Regional Planning Project,
    1964.

44. Turnovsky, Steven J., "The Demand for Water: Some
    Empirical Evidence on Consumer's Response to a
    Commodity Uncertain in Supply," Water Resources
    Research, Washington: 5(2): 350-361, April, 1969.

45.  Urban Systems Research and Engineering, Interceptor
    Sewers and Suburban Sprawl:  The Impact of Con-
    struction Grants on Residential Land Use, Cambridge:
    Volume I, Analysis (NTIS PB-236 477), Volume II,
    Case Studies (NTIS PB-236 871), July, 1974.

46.  Whitford, Peter William, Forecasting Demand for
    Urban Water Supply, Stanford:  unpublished Ph.D.
    Thesis, Stanford University, Department of Civil
    Engineering, 1971  (also University Microfilms
    71-13, DOS; NTIS PB 195 664).
                                                      645

-------
                                  ADAPTIVE SHORT-TERM WATER DEMAND FORECASTING

                                               David H. Budenaers
                                             Systems Control, Inc.
                                             Palo Alto, California
    A dual set of short-term water demand models is de-
scribed.  These models have the feature of adaptability
to changing data.  That is to say, given changes in the
data sequence, the models' parameters will self-adjust
to provide a better model.  The models also have the
property of being real-time computer-implementable.

     The two models are a stochastic (or time-series)
model and a weather component of demand model.  The
stochastic model is an extension of the Box-Jenkins
type of modeling for time series.  The weather model
uses the method of principal components to identify the
effective weather variables.

    The results of the application of these models to
data from the San Jose, California, Water Works are
presented.

                       Background

    Demand forecasting is of central importance for the
development and implementation of a methodology for de-
sign and operation of water distribution systems be-
cause it forms the basis for developing operational
policies for  [1,2]:

    •     Storage management

    •     Scheduling of sources of water production.

    The methodologies selected for implementation in
the demand forecasting algorithm are of two types:

    •     Stochastic models

    •     Weather model methods .

    Stochastic methods make use of the historical
empirical demand time series to predict or extrapolate
the future.   Stochastic methods attempt to explain the
demand time series by using the series' internal cor-
relation structure without use of any external or ex-
planatory variables.  In order to implement a stochast-
ic model, a detailed statistical analysis of the time
series' correlation structure must be performed.

    For example, it must be determined if, as a rule,
today's water demand is highly correlated with yester-
day's demand.

    On the other hand, weather models try to explain
the demand time series by use of external variables.
Intuitively,  the most predominant external variables
to affect water demand are weather variables.  Thus,
in weather models, a detailed study of the relationship
between weather variables and water demand  must be
implemented.

    Once the  relationship between weather variables
and water demand has been identified, then it becomes
possible to forecast water demand based upon weather
predictions.

                 The Stochastic Model

    An examination of typical demand data for water
demand  (Fig.  1) suggests  that a high-gain stochastic
model could model the demand time series.  The  term
"high gain" means that the  internal correlation,  de-
scribed in the previous section,  is highly adaptive
to new data.  In light of this  fact and  the fact  that
stochastic modeling has proven  to be successful in
other similar applications,  the following  model is
proposed  [3] :
     D(t) = B(t) + X(t),        t = 0, 1, 2,...                 (1)

     X(t) = a X(t-1) + u(t),    t = 1, 2, 3,...                 (2)

where

     D(t)     is the demand at time t

     B(t)     is the base effect at time t

     X(t)     is the autoregressive term given by (2)
              (autoregressive lag one)

     u(t)     is a sequence of independent random
              variables where E[u(t)] = 0, and
              Var[u(t)] = σ².

     Both a and σ² are unknown parameters.  In ad-
dition, B(t) is an unknown quantity.

     Observe that

          E[D(t)]  =  B(t)                                      (3)

          Var[D(t)]  =  σ² / (1 - a²)                           (4)
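     As an illustration only, the following Python sketch simulates a series with the structure of Equations (1) and (2), a constant base plus a lag-one autoregressive component; the numerical parameter values are assumptions made here, not values estimated from the San Jose data.

    import random

    # Sketch of D(t) = B + X(t), X(t) = a*X(t-1) + u(t), with illustrative parameters.
    def simulate_demand(n, base=560.0, a=0.8, sigma=2.0, seed=1):
        rng = random.Random(seed)
        x, series = 0.0, []
        for _ in range(n):
            x = a * x + rng.gauss(0.0, sigma)   # autoregressive component, Eq. (2)
            series.append(base + x)             # demand = base effect + AR term, Eq. (1)
        return series

    d = simulate_demand(30)
    print(f"sample mean ~ {sum(d) / len(d):.1f}, consistent with E[D(t)] = B in Eq. (3)")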
[Plot: daily water demand (10^6 gallons) against day index, July-September 1974]
Figure 1.  Example  of Water  Demand Time Series for  To-
           tal  System Demand for San Jose Water Works
                                                      646

-------
     It can also be assumed that

          D(t) ~ N(B(t), σ² / (1 - a²))                         (5)

where N(B(t), σ²/(1 - a²)) is a normal distribution with
mean B(t) and variance σ²/(1 - a²).

Estimating and Updating the Base Element

     The base effect can be modified by using short-run
averages.  This can best be formulated by introducing
the notion of discounted estimation (discounted least
squares).  The problem is as follows:

     Given D(t), t = 0,..., T, it is required to
     estimate B(t)

where

          E[D(t)]  =  B(t)  =  B                                (6)

          Var[D(t)]  =  σ² / β^(T-t),   0 < β < 1,
                        β is known,  t = 0, 1,..., T            (7)

where B is the true short-run average at t.

     Equation (6) has replaced the time variability of
the base by a constant and Equation (7) indicates that
the variance on the demand gets larger the further back
in time one goes (T is the present).

     Using standard methods of estimating B, the result
is

          B  =  [ Σ_{i=0}^{T} β^(T-i) D(i) ] / [ Σ_{i=0}^{T} β^(T-i) ]      (8)

Since

          Σ_{i=0}^{T} β^(T-i)  =  (1 - β^(T+1)) / (1 - β)       (9)

Equation (8) can be written in limiting form (|β| < 1,
T large, and putting 1 - β = α) as:

          B  =  Σ_{i=0}^{∞} α β^i D(T-i)                        (10)

     Writing B in Equation (10) as a function of T,

          B(T)  =  α D(T) + Σ_{i=1}^{∞} α β^i D(T-i)            (11)

Equation (11) can be rewritten as

          B(T)  =  α D(T) + β B(T-1)                            (12)

     Equation (12) is the key equation for estimating
the base effect.  The α is known as the "gain" of
the estimates.

     Equation (10) can be used to compute the expectation
and the variance of the estimator B(T):

          E[B(T)]  =  Σ_{i=0}^{∞} α β^i E[D(T-i)]
                   =  Σ_{i=0}^{∞} α β^i B  =  B                 (13)

and

          Var[B(T)]  =  α σ²                                    (14)

     In summary, Equation (12) gives a procedure for
updating the base.  This equation is known as an ex-
ponential discount scheme.  The gain is α and the
discounting is 1 - α.  To compute the effective memory
of Equation (12), it is required to compare (1-α)^K
with D(T) for various values of K.  When (1-α)^K D(T)
is relatively small, the effective memory is K time
periods.  The procedure is initialized by using

          B(T)  =  (1/T) Σ_{i=1}^{T} D(i)                       (15)

over an appropriate time period.

Estimating and Updating the Autoregressive Element

     The autoregressive element of the model is esti-
mated by forming

          R(t)  =  D(t) - B(t)                                  (16)

over an appropriate part of the historical data base.

     Using the Yule-Walker equations for an auto-
regressive process, the following result is obtained
[4]:

          Γ_0  =  E[R(t) R(t)]    =  σ² / (1 - a²)              (17)

          Γ_1  =  E[R(t) R(t-1)]  =  a σ² / (1 - a²)            (18)

     Γ_0 and Γ_1 can be estimated by using either the
maximum likelihood method or the method of moments
[4,5].  The estimates are:

          Γ_0  =  (1/N) Σ_{i=1}^{N} R(i)²                       (19)

          Γ_1  =  (1/(N-1)) Σ_{i=2}^{N} R(i) R(i-1)             (20)

     The same analysis that was carried out in the
previous subsection can be implemented for the esti-
mates of Γ_0 and Γ_1.
                                                       647

-------
     Letting γ be the gain, the result is:

          Γ_0(N)  =  γ R(N)²       + (1-γ) Γ_0(N-1)             (21)

          Γ_1(N)  =  γ R(N) R(N-1) + (1-γ) Γ_1(N-1)             (22)

The remarks in the previous subsection concerning the
gain γ or the discounting factor (1-γ) can be made
here.

     As in the previous subsection, the values of Γ_0 and
Γ_1 are initialized by using Equations (19) and (20)
over part of the historical data base.

     Having estimates of Γ_0 and Γ_1, the estimates of a
and σ² for the demand model are easily obtained using
Equations (17) and (18) as follows:

          a(N)   =  Γ_1(N) / Γ_0(N)                             (23)

          σ²(N)  =  Γ_0(N) (1 - a(N)²)                          (24)

                     The Weather Model

     An examination of a weather plot relating tempera-
ture to demand indicates that there exists a high de-
gree of correlation between demand and incident temp-
eratures.  Figure 2 illustrates an example of such a
scatter plot.  The correlation coefficient for the data
illustrated in Figure 2 is 0.85.  (Water demand
increases with temperature.)

          [Scatter plot: water demand (10^6 gallons) versus incident temperature]

             Figure 2.  Temperature vs. Demand

     Further examination of scatter plots for demand
vs. temperature on the previous day indicates again
that a strong correlation exists.  This lagged correla-
tion of course makes sense, since water demand is an
effect that has "memory."  Figure 3 illustrates the
lagged temperature vs. demand correlation effect for
the test data.

     The fact that the lagged temperature vs. demand
relationship is so strong, as well as the fact that
when operating with a pure stochastic model the fore-
cast "outliers" are highly weather-correlated, leads to
the consideration of a structured weather model.

     Before proceeding to a description of the model, a
few remarks on the correlation of other weather vari-
ables with demand are in order.  Scatter plots of other
variables and demand are available in "Identification
of Water Demand Models" [6].

     An overview of the correlations of demand to
lagged weather variables is presented in Figure 4 for
the test data.  From Figure 4, as well as the individ-
ual scatter plots, it becomes clear that the only seri-
ous candidates for a set of exogenous variables are the
lagged average temperatures.

          [Plot: correlation of demand with average temperature lagged 1 to 10 days]

Figure 3.  Lagged Days Average Temperature Correlated
           to Demand, 1973-1974, San Jose Water Works

          [Plot: correlation of demand with other lagged weather variables (dew point, wind speed, etc.)]

Figure 4.  Lagged Days Other Weather Correlated
           to Demand, 1973, San Jose Water Works

     In light of these introductory remarks on weather
variables, the weather demand model is defined as fol-
lows:

          D(t)  =  B(t) + W(t) + X(t),   t = 0, 1, 2,...        (25)

          X(t)  =  b X(t-1) + u(t),      t = 1, 2,...           (26)

          W(t)  =  c f_1(t),             t = 1, 2,...           (27)

where

          D(t)   is the water demand at time t

          B(t)   is the base effect at time t

          W(t)   is the weather component of demand given
                 by Equation (27), where c is an unknown
                 constant and f_1(t) is the effective weather
                 variable at time t (defined later)

          X(t)   is the autoregressive term given by Equa-
                 tion (26), where b is unknown and u(t) is
                 a sequence of independent random variables
                 with E[u(t)] = 0 and Var[u(t)] = σ².
                                                      648

-------
     An examination of Equation (25) yields that the
following parameters are to be estimated from  the  data
base:

     B(t)  =  the base effect

     c     =  the weather effect loading constant

     b     =  the autoregressive coefficient

     σ²    =  the model variance.

     Observe that

          E[D(t)]  =  B(t) + W(t)                               (28)

and that

          Var[D(t) | W(t), B(t)]  =  σ² / (1 - b²)              (29)

Equations (28) and (29) will be used for generating a
weather demand forecast.  Equation (29) means the con-
ditional variance of D(t) with respect to known W(t)
and B(t).

     As in the stochastic model, the parameters are
estimated using step-wise regression.  The mathematical
details for the estimation of B(t),  b and a  , are
similar to the details presented in the above section.
The estimation  c  uses the same principles  [6].
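     A minimal sketch of how the adaptive updates fit together is given below, combining the exponential-discount base update of Equation (12), the residual of Equation (16), the recursive covariance updates of Equations (21) and (22), and the parameter estimates of Equations (23) and (24).  The class name, the starting base value, and the initialization of Γ_0 are illustrative assumptions made here; the gains of 0.8 merely echo the values cited for the test runs reported later.  This is a sketch, not the authors' implementation.

    # Sketch of the adaptive parameter updates: base (Eq. 12), residual (Eq. 16),
    # covariances (Eqs. 21-22), AR coefficient and variance (Eqs. 23-24).
    class AdaptiveDemandModel:
        def __init__(self, b0, alpha=0.8, gamma=0.8):
            self.b, self.alpha, self.gamma = b0, alpha, gamma   # base estimate and gains
            self.g0, self.g1 = 1.0, 0.0                         # Gamma_0, Gamma_1 (assumed start)
            self.r_prev = 0.0                                   # previous residual R(t-1)

        def update(self, d):
            self.b = self.alpha * d + (1.0 - self.alpha) * self.b                   # Eq. (12)
            r = d - self.b                                                          # Eq. (16)
            self.g0 = self.gamma * r * r + (1.0 - self.gamma) * self.g0             # Eq. (21)
            self.g1 = self.gamma * r * self.r_prev + (1.0 - self.gamma) * self.g1   # Eq. (22)
            self.r_prev = r
            a = self.g1 / self.g0 if self.g0 else 0.0                               # Eq. (23)
            sigma2 = self.g0 * (1.0 - a * a)                                        # Eq. (24)
            return a, sigma2

        def forecast(self):
            a = self.g1 / self.g0 if self.g0 else 0.0
            return self.b + a * self.r_prev   # one-step-ahead forecast of demand

    m = AdaptiveDemandModel(b0=560.0)
    for demand in (561.0, 563.5, 562.0, 565.0):
        a_hat, var_hat = m.update(demand)
    print(round(m.forecast(), 2), round(a_hat, 3))

Because every update is a constant-gain recursion, the scheme is the kind of computation that can run in real time as each new demand observation arrives.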

Identification of the Effective Weather Variable

     The method of principal components (PC)  is used to
define an effective temperature [3,4].  The PC method
works as follows:
     Let T_i = (t_0, t_1,..., t_k)' be a vector of
lagged temperature variables observed at time i (t_0 is
the incident temperature, t_j is the temperature lagged
to day j, etc.).  Then the sample covariance matrix of
the T_i is defined as follows:

          S  =  (1/n) Σ_{i=1}^{n} (T_i - T̄)(T_i - T̄)'           (30)

where

          T̄  =  (1/n) Σ_{i=1}^{n} T_i                           (31)

Now suppose it is possible for an arbitrary T vector
to choose a set of vectors C_1', C_2',..., C_k' such that:

          Var(C_1'T)  ≥  Var(C_2'T)  ≥ ... ≥  Var(C_k'T)        (32)

and

          C_i' C_j  =  0   for i ≠ j  (orthogonal)
          C_i' C_i  =  1   (unit length)                        (33)

     Then the quantities in (32) are called the princi-
pal components and the C_i, i = 1,..., k, are called the
principal component transformations.  If it turns out
that a small number of the Var(C_i'T) explains most of the
variance of S, then this small number of scalar
quantities can be used as effective (or substitute) in-
dependent variables in a regression problem.  Further,
it can be demonstrated that regression problems per-
formed using principal components have smaller variance
of the estimated coefficients than if the regression
had been performed on the original untransformed
variables [7].

     For the test data of lagged temperatures (San Jose
Water Works service area weather information, 1973-1974),
it turns out that 80% of the variability is explained
by one variable.

     The results for the principal component analysis
are presented in Table 1.  This table shows that one
component is sufficient to define an effective weather
variable.

                          Table 1
          CORRELATION OF EFFECTIVE TEMPERATURE WITH
             DEMAND FOR 1973 AND 1974 SJWW DATA

  Year    First Component               Second Component

  1973    Explains 79% of lagged        Explains 8% of lagged
          temperature variance.         temperature variance.
          Correlation = 0.87            Correlation = -0.10
          95% Confidence =              95% Confidence =
          (0.84, 0.89)                  (-0.20, 0.00)

  1974    Explains 83% of lagged        Explains 7% of lagged
          temperature variance.         temperature variance.
          Correlation = 0.86            Correlation = -0.14
          95% Confidence =              95% Confidence =
          (0.83, 0.88)                  (-0.23, 0.00)
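     The principal component construction of Equations (30) through (33) can be sketched with a standard eigen-decomposition of the lagged-temperature covariance matrix.  The NumPy fragment below uses synthetic temperature data invented for illustration and simply treats the leading component score as the effective weather variable f_1(t); it is a sketch, not the authors' code.

    import numpy as np

    # Sketch of Eqs. (30)-(33): build lagged-temperature vectors, take the covariance
    # matrix, and use the leading eigenvector as the effective-temperature transform.
    rng = np.random.default_rng(0)
    temps = 20 + 8 * np.sin(np.arange(120) / 10.0) + rng.normal(0, 1.5, 120)  # fake daily temps

    k = 5                                                    # number of lags retained (assumed)
    T = np.column_stack([temps[i:len(temps) - k + i] for i in range(k + 1)])  # rows = T_i vectors
    S = np.cov(T, rowvar=False)                              # Eq. (30): sample covariance
    eigvals, eigvecs = np.linalg.eigh(S)                     # orthonormal C_i, Eq. (33)
    order = np.argsort(eigvals)[::-1]                        # ranked by explained variance, Eq. (32)
    c1 = eigvecs[:, order[0]]                                # first principal component transform
    f1 = (T - T.mean(axis=0)) @ c1                           # effective weather variable f_1(t)

    print("share of variance explained by first component:",
          round(eigvals[order[0]] / eigvals.sum(), 2))

With real lagged-temperature data, the share printed at the end corresponds to the roughly 80% figure reported in Table 1, which is why a single effective temperature suffices in the weather model.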
                                                                  Performance of Models for San Jose Data

                                                               A summary of the data analysis is as follows:

                                                               1.  The weather model and the stochastic model are
                                                                   completely correlated for one-step-ahead
                                                                   forecasting.

                                                               2.  The best stochastic model performance is given
                                                                   in Table 2 for a data base of 682 days through
                                                                   all seasons of a year.

                                                                                   Table 2
                                                               OVERVIEW OF PERFORMANCE OF STOCHASTIC MODEL
                                                                FOR APPROXIMATELY TWO YEARS OF FORECASTING
  R = Relative Error (%)        % of Days With Error Less Than R

           3.5                                40
           7.0                                62
          10.5                                78
                                                          Details  on the performance will be provided in Figures
                                                          5-8.   Complete details  are available in [6].

     Figure 5 illustrates the distribution of the rela-
tive errors for the choice of a priori parameters
α = 0.8 and γ = 0.8.  From this figure it is clear that
most of the relative errors are less than 7%.

     Figure 6 presents a histogram of the forecast
standard deviations for α = 0.8 and γ = 0.8.  Figure 7
illustrates the number of estimated standard deviations
by which the actual value of demand differs from the fore-
cast demand.
                                                               The utility of Figures 6 and 7 becomes apparent
                                                          when  probabilistic forecasts are made using the
                                                       649

-------
forecasted standard deviation.  For example, suppose
the forecasted demand is D and that the forecasted
standard deviation is σ.  Then Figure 7 is used to con-
struct the probability associated with the confidence
interval for the true demand as follows:

          D ± 2.27 σ  =  55% confidence for true demand
and
          D ± 6.80 σ  =  78% confidence for true demand

That is to say, D ± 2.27 σ contains the true demand
with 55% probability.

     From Figure 6 it can be observed that a value of
σ = 1 x 10^6 gallons is "typical."  Therefore, in a
"typical" case, demand can be forecasted to within roughly
2.3 x 10^6 gallons.
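     A small worked illustration of these interval statements, using hypothetical forecast values rather than figures from the paper, is:

    # Hypothetical one-step forecast and its standard deviation (units: 10^6 gallons).
    d_hat, sigma_hat = 30.0, 1.0
    low, high = d_hat - 2.27 * sigma_hat, d_hat + 2.27 * sigma_hat
    print(f"55% confidence band for true demand: {low:.2f} to {high:.2f} million gallons")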
          [Histogram of percent relative forecast errors: X = 7, S = 7, 682 events]

Figure 5.  Distribution of Stochastic Residuals
           (α = .8, γ = .8)
          [Histogram of forecast standard deviations σ, in 10^6 gallons: X = 1.53, S = 1.34, 682 events]

Figure 6.  Distribution of σ for Stochastic Model
           (α = .8, γ = .8)
          [Histogram of the number of σ units by which actual demand deviates from forecast demand: X = 6.81, S = 9.08, 632 events]

Figure 7.  Number of σ Units Forecast Deviates from
           Actual (α = .8, γ = .8)
      It  is  important to note that the  75% confidence
interval can be used to "flag" outliers  or anomalies in
the  forecast.

      Figure 8 shows the distribution of the estimated
parameter  a  for the stochastic model.   Recall that  a
is the autoregressive loading for the  model.   Values of
a >  1 imply that the model is becoming unstable.
          [Histogram: X = .61, S = .40, 662 events; horizontal axis: value of a]
Figure 8.  Distribution of the Estimated Parameter  a

Acknowledgement

     The author wishes to express his thanks to W. Wink-
ler for programming support for this project.  Also the
author wishes  to  thank T. G.  Roefs, who was the contract
monitor for  the project which supported this research.
The research for  this  project was conducted while work-
ing on Contract 14-B1-0001-5221 for the Office of Water
Research and Technology, U.S. Department of Interior.

References

[1]  R. DeMoyer,  "A Statistical Approach to Dynamic
     Modeling  and  Control," Ph.D. Thesis 1973, Poly-
     technic Institute of Brooklyn, ESS.

[2]  H. S. Rao, et al., "Studies on Operations Planning
     and Control  of Water Distribution Systems," SCI
     Final Report, Contract Number 14-31-0001-4242, pre-
     pared for Office  of Water Research and Technology.

[3]  D. H. Budenaers,  "Sequential Short-Term Electric
     Power Demand  Forecasting," Proceedings ASA Business
     and Econometrics,  Atlanta Meeting, August 1975.
[4]  T. W. Anderson, "The Statistical Analysis of Time
     Series,"  John Wiley & Sons, Inc., New York, 1971.
[5]  M. G. Kendall and A. Stuart, "Advanced Theory of
     Statistics,"  Vol.  2, Hafner Publishing Co., New
     York, 1960.
[6]  D. H. Budenaers,  "Water Demand Forecasting," SCI
     Final Report, March 1976.
[7]  H. Theil, "Principles of Econometrics," John Wiley
     and Sons, Inc., New York, 1971.
                                                       650

-------
                             HYDROLOGIC IMPACT STUDIES OF ALTERNATIVES TO MEET

                                 WATER NEEDS IN SOUTH-CENTRAL PENNSYLVANIA
              David  H.  Marks,  Guillermo J.  Vicens,  Brendan M.  Harley, and John C.  Schaake, Jr.
              Resource  Analysis,  Inc., 1050 Massachusetts Avenue, Cambridge, Massachusetts 02138
                     ABSTRACT

This  paper presents  the use of several  interrelated
models  to investigate the potential  hydrologic impacts
of several  proposed  water supply alternatives for the
South Central  Pennsylvania area.  The area contains
major demand centers in Harrisburg, York, Lancaster,
Lebanon,  Manheim,  Elizabethtown, Ephrata, New Holland,
Lititz, Carlisle,  and Mechanicsburg, which for the
most  part depend on  local surface waters for their
water supply  with  supplemental withdrawals from the
Susquehanna River  and from groundwater.  Withdrawals
from  all  of these  sources could have an impact on the
flows in  the  Susquehanna itself.  Since this river is
the main  source of freshwater to Chesapeake Bay, it
was important to assess the relative impact of each of
the proposed  alternatives on the outflow distribution
to the Bay.  The scope of the study was limited to the
hydrologic aspects of the problem. The models used to
evaluate  the  impacts of the alternatives were:

1.  A synthetic streamflow augmentation and generation
   model to  first augment the existing records up to
    a full 80 years, and second to generate a set of 200-
   year synthetic records which resembled the histor-
    ical  records in  their statistics.

2.  A linear  regression model relating monthly rain-
    fall  and  evapotranspiration to streamflow in the
    tributaries used to evaluate the impact of
    groundwater withdrawals on surface water flows.

3.  A simulation model used as an accounting device
    to show the impact of the alternatives on the
    monthly flows  at several locations in the area
    including the  outflow of the Susquehanna to
    Chesapeake Bay.

                   INTRODUCTION

The objective of this paper is to describe the
methodology used in  hydrologic investigations
carried out on a series of water supply alternatives
for the South Central Pennsylvania area (Resource
Analysis, Inc., 1974b).  The area contains major demand
centers in Harrisburg, York, Lancaster, Lebanon, Man-
heim, Elizabethtown, Ephrata, New Holland, Lititz,
Carlisle  and  Mechanicsburg, Pennsylvania as shown in
Figure 1.  In general, these communities depend on
local surface water  for their water supplies, with
additional  supplies  coming from the Susquehanna River
and from  groundwater.  With continuing  increases in
population in the  area,  major capital investment in
new facilities  and water sources will be necessary.
Withdrawals from groundwater or local surface water
storage may have a different impact on  flows in the
Susquehanna and its  tributaries than withdrawals from
the Susquehanna itself.   While the area is itself
relatively water rich, different withdrawal patterns
will  lead to  changes in  the flow characteristics of
the local tributaries and to different  distributions
of outflows from the Susquehanna to Chesapeake Bay.
Since a change  in  the outflow distribution for the
main  freshwater input to the Bay could  have major

1  Now at  Hydrologic  Research Laboratory,  National
  Weather Service, Silver Spring,  Maryland 20910
ecological impacts, this outflow is of significant
interest.

Objectives of Study

The primary objective of the study was to assess the
relative impact of the proposed alternatives for water
supply development on  the distribution of monthly
outflows from the study area.  In addition, estimates
of the impacts on low flows in local  tributaries were
made.  Other parts of the study conducted by other
contractors dealt with institutional, ecologic, and
engineering feasibility considerations.  Our study
dealt only with hydrologic considerations, i.e., the
distribution of outflows from the system and on the
tributaries as they are affected by the different
alternatives.

Study Area

The study area is shown in Figure 1 and contains all
or parts of Cumberland, Adams, York,  Dauphin, Lebanon,
and Lancaster Counties in the south central part of
the Commonwealth of Pennsylvania.  Major tributaries
to the Susquehanna River in the study area are Swatara,
East Conewago, Chickies, and Conestoga Creeks on the
east side; and Conodoquinet, Yellow Breeches, West
Conewago, and Codorus Creeks on the west side.  Present
water supply usage and future water demand (year 2020)
are shown for each major municipal area in Table 1.
The major demand areas include surrounding water com-
panies as well as the new municipalities.  In general,
Harrisburg (East) presently depends on Clark, Stony,
and Swatara Creek sources;  Harrisburg (West) on
Yellow Breeches, and Conodoquinet Creeks; Mechanics-
burg on Yellow Breeches; Elizabethtown and Manheim on
Chickies Creek; Lebanon on the Swatara; Lititz and New
Holland on groundwater;  Ephrata on Conestoga Creek;
and York on the Codorus.  Only Lancaster presently
draws major supplies from the Susquehanna.

Alternatives for Water Supply

A variety of different water supply options exists for
the area ranging from all ground and  local surface
water to all Susquehanna water, as well as combinations
of the two.  As there is a relatively large amount of
water available in the area, the question of importance
is which sources should be developed  rather than
whether it is possible to find the water.  For example,
Harrisburg (East and West), Mechanicsburg, Lebanon,
Elizabethtown, York, Lancaster, and New Holland could
go directly to the Susquehanna for additional sources.
Alternatively, new or improved impoundments on the
Conodoquinet, Swatara, E. Conewago, and W. Conewago
Creeks; and the South Branch of Codorus Creek could
also be used to supply future water needs.  Ground-
water in areas like York, Lebanon, Elizabethtown,
Ephrata and New Holland could serve their new water
needs.  To consider the options available, a series of
alternatives was conceived by the U.S. Army Corps of
Engineers and the Commonwealth of Pennsylvania, and
developed by the Anderson-Nichols Co. to consider
                                                     651

-------
combinations of these possibilities.  A brief discus-
sion of the alternatives is shown in Appendix A.

                              Table 1
     Water Supply Service Areas and Demands for 1970 and 2020

                                                      Demand (MGD)
  Major Demand Area    Municipalities Included        1970    2020

  Carlisle             Carlisle Boro and Suburban      3.7     6.9

  Mechanicsburg        Millsburg, Grantham,            1.8     6.2
                       Mechanicsburg W.C.

  Harrisburg (West)    Riverton W.C.                   7.7    19.8

  Harrisburg (East)    Harrisburg W.C., Dauphin,      22.4    32.7
                       Hershey, Middletown,
                       Steelton

  Lebanon              Lebanon City, Keystone,         8.2    16.8
                       Cornwall, Meyerstown,
                       Heidelburg

  York                 Red Lion, Dover Boro,          21.0    40.2
                       Dover Twp., West Manchester,
                       York W.C.

  Elizabethtown        Rheems, Elizabethtown,          0.8     3.6
                       Mount Joy

  Manheim              Manheim                         0.5     0.7

  Lancaster            Columbia, Mountville,          17.4    40.1
                       E. Hempstead, E. Petersburg,
                       Lancaster, Millers

  Lititz               Lititz                          1.0     3.0

  Ephrata              Akron, W. Earlham, Ephrata      1.1     2.2

  New Holland          Leola, New Holland,             0.7     2.7
                       Blue Bell

  Total                                               86.3   174.9
Outline of Methodology

The information available for this study was  the
following:

1.  Estimates of future demands from municipal  and
    industrial (M&I), agricultural, and consumptive
    power cooling users.

2.  Monthly gauging records for several  locations in
    the area including the Susquehanna River,  Codorus,
    Conodoquinet, Swatara, W. Conewago, and Conestoga
    Creeks.

3.  Monthly precipitation records  at York, Harrisburg,
    Lancaster, and Lebanon.

4.  Configurations for each water  supply alternative
    including reservoir capacities and allocation of
    demands to sources.
Given this data base,  the objective was  to  assess  the
hydrologic impacts of  each alternative through  a  simu-
lation study.  The following tasks were  carried out to
evaluate the alternatives:

1.  Process the rainfall and streamflow  data  into  the
    RAI Hydrologic Data Management System  (Resource
    Analysis, Inc., 1974a).

2.  Augment the streamflow records to produce a "full"
    set of records of  consistent length  to  be used for
    parameter  estimation purposes.

3.  Estimate the statistical parameters  of  these
    records, and generate a set of 200-year synthetic
    records,

4.  Develop a  linear regression model relating  the
    effect of  groundwater withdrawal on  future  stream-
    flows.  This relation was to be used to assess
    impacts of groundwater development on local
    surface water flows.

5.  Simulate the operation of the system under  both
    the historical and synthetic streamflow records
    for each alternative plan in order to assess its
    reliability and the resulting hydrologic impacts.

The following sections briefly describe  each of the
above steps.  A full  discussion of the methodology
and results is contained in the final project report,
Resource Analysis, Inc. (1974b).

     GENERATION OF SYNTHETIC STREAMFLOW  RECORDS

Available Streamflow Data
Historical records at eleven gauging sites in or
near the study area were available.  The length of
these records is shown on Figure 2.  All stations
had at least 40 years of observations except for
Station 5755 which had 32 years, and Stations 5745
and 5765 which had some small gaps.

An improvement in the parameter estimates was obtained
by extending or "filling-in" the shorter records by
correlation with nearby stations.  Regression analysis
has been frequently used to carry out the augmentation
of records.  The theory on which these procedures are
based has been discussed by Fiering (1962), Matalas
and Jacobs (1964), and Gilroy (1970), and will only be
briefly summarized here.

The streamflow data at the gauging station with the
shorter record, y_t, are related to the data at other
sites x_1t, x_2t,..., x_pt, through a linear regression
model given by:

     y_t  =  a + b_1 x_1t + b_2 x_2t + ... + b_p x_pt + e_t          (1)

where e_t is a standardized normal random deviate.
The parameters of this model, a, b_1, b_2,..., and b_p,
are computed from the available data through standard
least square procedures for regression analysis.
These values are then used in the model to estimate
the streamflow at station y where these values are
missing.  Similarly, in the case of shorter records,
the record at station y is extended by this same
procedure from the longer observed nearby or related
records.
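A minimal sketch of this record-extension step, assuming NumPy, a single nearby long-record station, and variable names chosen here, is given below; the added noise term follows the standardized-deviate idea of Equation (1), but the details are illustrative rather than those of the study.

    import numpy as np

    # Extend a short streamflow record y from a longer concurrent record x (Eq. 1 style).
    def extend_record(y_short, x_concurrent, x_missing, seed=0):
        """Fit y = a + b*x on the overlap period, then fill the missing years of y."""
        rng = np.random.default_rng(seed)
        A = np.column_stack([np.ones_like(x_concurrent), x_concurrent])
        (a, b), _, _, _ = np.linalg.lstsq(A, y_short, rcond=None)
        resid_std = np.std(y_short - A @ np.array([a, b]))
        # Add a normal deviate scaled by the residual spread so the extended
        # record does not understate the natural variability of the flows.
        return a + b * x_missing + rng.normal(0.0, resid_std, size=len(x_missing))

    y = np.array([120., 95., 140., 110., 130.])     # short record (overlap years, hypothetical)
    x = np.array([200., 160., 230., 185., 215.])    # long record, same years
    x_new = np.array([250., 170., 205.])            # long-record years where y is missing
    print(np.round(extend_record(y, x, x_new), 1))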
                                                      652

-------
Three data augmentation runs were carried out.  These
are described in Table 2.  The objective of the first
run was to obtain a full forty years.  Run No. 2
extended the 8 stations to 73 years.  Finally, in
Run No. 3, the nine shorter records were extended an
additional seven years by regression from the longest
record.  The final output was a set of 80-year records
at all eleven stations.

                      Table 2

                 Data Augmentation

Run   Stations       Other         Period        Period of
No.   Augmented      Stations      Augmented     Estimation

1     5745, 5755,    5730, 5705,   10/1932       *10/1932
      5765           5700, 5750,   to 9/1972     to 9/1972
                     5740

2     5700, 5730,    5670, 5705    10/1899       10/1932
      5745, 5750,                  to 9/1972     to 9/1968
      5760, 5765,
      5740, 5755

3     5670, 5700,    5705          10/1891       10/1932
      5745, 5750,                  to 9/1972     to 9/1968
      5760, 5765,
      5730, 5755

*Includes only extension of record length, not
 monthly gaps.

Synthetic Streamflow Generation

A 200-year synthetic streamflow record was generated
based on the procedures described in Valencia and
Schaake (1972, 1973).  Briefly, the procedure is first
to generate a series of annual flows at the selected
stations.  These annual flows are then disaggregated
into seasonal flows.  Finally, a similar procedure
disaggregates seasonal flows into monthly flows.
This scheme preserves the means and variances of the
seasonal and monthly flows, the correlation between
monthly flows at the same site or different sites, and
the correlation between any monthly flow and any
seasonal flow, and between the seasonal flow and the
annual flow.  The generated monthly values at any site
will add up to the corresponding annual value, which
guarantees the preservation of annual statistics.

                GROUNDWATER MODEL

A simple model of the impacts of groundwater with-
drawals on surface water flows was developed.  This
model was based on a theoretical analysis of the
range of potential impacts to be expected, as well as
a statistical analysis of rainfall and streamflow data
to evaluate the dynamic properties of the aquifers in
the study region.

The historical rainfall and streamflow records avail-
able for this region were used to determine the time
delay characteristics of the natural groundwater
system.  A mathematical description of this system
was created, based on the following assumptions.
First, the average streamflow in each month consists of
groundwater and direct runoff components.  Second, the
amount of direct runoff is assumed to depend upon the
current month's precipitation.  Finally, the amount of
groundwater is assumed to depend upon the current and
previous months' precipitation in excess of evapotrans-
piration.  An equation representing this is:

   Q_t  =  a_0 + a_1 P_t + b_0 (P_t - E_t) + b_1 (P_{t-1} - E_{t-1})
           + ... + b_m (P_{t-m} - E_{t-m}) + V_t                    (2)

where Q_t represents streamflow in month t, P_t and E_t
denote the rainfall and evapotranspiration in month t,
and the values of a_0, a_1, b_0,..., b_m are to be eval-
uated for each sub-basin.  The precipitation variables
P_t should be basin average values which can only be
estimated from point values.  Likewise, the evapotrans-
piration variable, E_t, should be the basin average
value.  The disturbance term V_t accounts for the
errors introduced by using point measurements instead
of the "true" basin average values.

The effects of groundwater withdrawals on surface
flows was then assumed to be similar in response to
the rainfall-runoff relations derived above.  Thus the
streamflow depletion in month t, denoted as D_t, was
related to the groundwater withdrawals for the six
previous months, W_t through W_{t-6}, by:

   D_t  =  Σ_{i=0}^{6} c_i W_{t-i}                                  (3)

where the coefficients c_i are computed from:

   c_i  =  b_i / Σ_{j=0}^{m} b_j                                    (4)

A similar formulation was used by Nieswand and
Granstrom (1971) to model the Mullica River Basin in
New Jersey.

The coefficients obtained from the analysis of the
Susquehanna data are shown in Table 3.

                     Table 3

        Groundwater Withdrawal Impacts on
               Local Streamflows

BASIN      Streamflow responses in various months due
           to a unit groundwater withdrawal in month t

              t     t+1    t+2    t+3    t+4    t+5    t+6

Codorus     .147   .188   .237   .212   .151   .051   .014
Creek

Conodo-     .178   .171   .225   .201   .153   .018   .054
quinet
Creek

Swatara     .456   .207   .105   .087   .115   .030   0.00
Creek

Conestoga   .165   .139   .175   .206   .165   .102   .048
Creek
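As an illustration of how such response coefficients convert a pumping schedule into monthly streamflow depletions via Equation (3), the short sketch below applies the Codorus Creek row of Table 3 to a hypothetical withdrawal series; it is a schematic, not part of the study's models.

    # Sketch of Eq. (3): streamflow depletion as a weighted sum of current and
    # lagged groundwater withdrawals, using the Codorus Creek row of Table 3.
    CODORUS_C = [0.147, 0.188, 0.237, 0.212, 0.151, 0.051, 0.014]  # c_0 ... c_6

    def depletion(withdrawals, c=CODORUS_C):
        """withdrawals[t] = pumping in month t; returns depletion D_t for each month."""
        out = []
        for t in range(len(withdrawals)):
            d_t = sum(c[i] * withdrawals[t - i] for i in range(len(c)) if t - i >= 0)
            out.append(round(d_t, 3))
        return out

    # Hypothetical schedules: one unit of pumping in month 0, then steady pumping.
    print(depletion([1, 0, 0, 0, 0, 0, 0]))   # echoes the Table 3 response pattern
    print(depletion([1, 1, 1, 1, 1, 1, 1]))   # builds toward the sum of the coefficients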
                                                          SIMULATION MODEL

                                        To assess the impacts  of each of the alternatives on
                                        the distribution of flows in the Susquehanna and the
                                        low flows in the tributaries, and to evaluate the
                                        reliability of the proposed alternatives, a simulation
                                        study of the operation of the system was carried out.
                                        A modified version of  the MIT River Basin SIMulation
                                                       653

-------
 Model (MITSIM) was used for this purpose (Schaake, et
 al., 1974).

 MITSIM was  designed to generate and  display  both eco-
 nomic  and physical  information to aid  in  evaluating
 system response.  The model  is an accounting procedure
 that takes  the  synthetic or  historical  data  developed,
 seasonal water  demands and consumptive  use,  the ground-
 water  response  functions, the operating rules  for the
 various reservoirs, pipelines, and groundwater
 systems, and  operates them to find the  monthly system
 flows  at specified  locations.  The structure of the
 model  is of nodes connected  by branches with all water
 entering or leaving the  system at the  nodes.   Typical
 nodes  are:

 1.  Start nodes - nodes at which historic or synthetic
     streamflow data are input to the system.  For the
     case study, a start node was used for all streams,
     including non-gauged streams and major overland
     flow areas draining to the Susquehanna.  A special
     program was written to disaggregate data available
     at gauging stations (both historic and synthetic
     records) into input data for the start nodes.

 2.  Confluence nodes - the joining of two branches of
     the system, used to show the connectivity of the
     activities.

 3.  Reservoir nodes - for each reservoir node, a capac-
     ity, seasonal target, and seasonal release sche-
     dule are given.  Water may be removed from a reser-
     voir node to meet demands provided enough water is
     in storage and release requirements are met.

 4.  Groundwater nodes - represent the pumping of
     groundwater to meet a specified demand.  A ground-
     water response function relates seasonal withdrawal
     to impacts on local surface water in present and
     future seasons.  A seasonal consumptive use
     coefficient shows how much of the groundwater is
     released to the surface water after use.

 5.  Irrigation nodes - for each irrigation area, seasonal
     demands and consumptive use coefficients are com-
     bined to compute the portion of the specified
     demand in a season that is returned to local surface
     waters in the present and future seasons.

 6.  M&I nodes - a municipal and industrial demand and
     consumptive use coefficient are specified for each
     surface water demand in each season to calculate
     withdrawals and returns to streams.

     A  typical  schematic  for  a system is shown in
     Figure  3 and Appendix B  describes the function  of
     each of the nodes shown.
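
The node-and-branch bookkeeping can be pictured with a minimal sketch. The fragment below is not the MITSIM code; it only illustrates, for a start node and an M&I node chained together, how each node adjusts the flow passed downstream. All class names, demands and flows are illustrative assumptions.

    # Minimal sketch of the node-and-branch accounting idea described above.
    # This is NOT MITSIM; node types and numerical values are illustrative.

    class Node:
        def __init__(self, name):
            self.name = name

        def monthly_outflow(self, inflow):
            return inflow                            # default: pass water through

    class StartNode(Node):
        def __init__(self, name, monthly_flow):
            super().__init__(name)
            self.monthly_flow = monthly_flow         # historic or synthetic input, cfs

        def monthly_outflow(self, inflow):
            return inflow + self.monthly_flow

    class MandINode(Node):
        def __init__(self, name, demand, consumptive_use):
            super().__init__(name)
            self.demand = demand                     # cfs withdrawn from the stream
            self.consumptive_use = consumptive_use   # fraction of withdrawal not returned

        def monthly_outflow(self, inflow):
            withdrawal = min(self.demand, inflow)
            returned = withdrawal * (1.0 - self.consumptive_use)
            return inflow - withdrawal + returned

    def route(chain, upstream_inflow=0.0):
        """Route flow through a simple chain of nodes (branches implied by order)."""
        flow = upstream_inflow
        for node in chain:
            flow = node.monthly_outflow(flow)
        return flow

    if __name__ == "__main__":
        chain = [StartNode("gauge", 3500.0), MandINode("city", 60.0, 0.25)]
        print("system outflow (cfs):", route(chain))
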

                      RESULTS

 All  of the  alternatives described in Appendix A were
 simulated with the 200-year  synthetic record.  In
 addition, Alternatives 1, 2, and 3 were simulated with
 the  80-year augmented historical  record as inputs.
 The  first question to be investigated was  which of  the
 records was  more stringent or conservative.   Compari-
 son  of the  simulation results of Alternatives 1 through
 3 for both the historic and synthetic records showed
 that the 200-year synthetic record produced,  on the
 average, lower monthly outflows  from the study area
 even though  two  extensive drought periods  were obser-
 ved  in  the  historic  record.   Since the  relative
 impact  of the  alternatives on the  distribution of the
outflows from  the system was  of  utmost  importance in
 this  study,  the  synthetic  record  was  selected for
 detailed comparison of  alternatives.

 Monthly outflows were calculated  at the  lower boun-
 dary  of the  study area  which was  the  intersection of
 the boundary of  Lancaster  County, Pennsylvania with
 the Susquehanna  River.   This line is  slightly above
 the Conowingo Pool, and thus our calculation of system
 outflow represents runoff  from  a  slightly smaller
 drainage area than that supporting inflows  to the
 Conowingo Pool, which other studies have focused  on.

 Table 4 presents a summary of the results obtained
 from  the simulation runs.  This table  shows  estimates
 of the annual and monthly  30-day  low flow which occurs,
 on the average, once every twenty years  (Q30-20)  at
 the outflow  of the study area.  The results  for each
 alternative  plan with Year 2020 demands are  shown, as
 well as a "Present" case for comparison purposes.

                   Table 4
 Estimates of Q30-20 at the System Outflow: 200-Year
 Synthetic Streamflow Trace and 2020 Demands.
 (Present Run Uses 200-Year Synthetic Trace and
 1970 Data)

                      Q30-20 (cfs)

 Altern.        Aug.     Sept.    Oct.     Annual

 Present        3622     3266     3055     3075
 1              3412     3051     2943     2840
 2              3410     3050     2945     2820
 3              3405     3048     2944     2818
 4              3411     3052     2944     2840
 5A             3411     3052     2944     2840
 5B             3411     3052     2944     2840
 5C             3405     3048     2944     2818
 6A             3415     3056     2944     2853
 6B             3425     3044     2944     2873
 6C             3411     3052     2944     2840
 7A             3422     3062     2946     2858
 7B             3427     3065     2945     2879
 7C             3407     3050     2945     2821
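
The paper does not spell out the estimator used for Q30-20, but a common approach is to form the series of annual minimum 30-day (here, monthly) flows and read off the value with a 20-year recurrence interval from a plotting-position formula. The sketch below follows that common approach with a randomly generated stand-in trace; it illustrates the statistic, not the study's actual procedure.

    # Hedged sketch: estimate a Q30-20 low-flow statistic from a monthly trace by
    # taking annual minima and a Weibull plotting position.  The trace here is a
    # random stand-in, not the study's 200-year synthetic record.

    import random

    def q30_T(monthly_flows, years, T=20):
        """Annual minimum 30-day flow with a T-year recurrence interval (cfs)."""
        annual_minima = sorted(
            min(monthly_flows[12 * y: 12 * y + 12]) for y in range(years)
        )
        # Weibull plotting position: the m-th driest year has recurrence interval
        # (years + 1) / m, so the T-year low flow sits near rank (years + 1) / T.
        rank = max(1, int(round((years + 1) / T)))
        return annual_minima[rank - 1]

    if __name__ == "__main__":
        random.seed(1)
        years = 200
        trace = [random.gauss(4000.0, 600.0) for _ in range(12 * years)]
        print("Q30-20 estimate (cfs):", round(q30_T(trace, years, T=20), 1))
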
The overall  results of the study were:

1.  There is very little difference between alterna-
    tives in terms of Q30-20 or monthly average flows
    at the outflow of the study area.  The values of
    annual Q30-20 for the alternatives range from
    2818 cfs to 2879 cfs.  The present case produces
    a value of 3075 cfs.  Estimates of Q30-20 for
     August, September and October show similar results.
    Most of this decrease is due to a consumptive use
    increase in power cooling through 2020, which peaks
    at 345 cfs.  The lower values are also due to
    increased groundwater and Susquehanna water usage,
    while the higher values are a result of reservoir
    storage in the tributaries.

2.  From the viewpoint of flows in the tributaries, the
    alternatives with large groundwater usage decrease
    tributary flows slightly.  However, all other alter-
    natives tend to substantially increase tributary
    flows due either to diversions of Susquehanna
    water or larger reservoir impoundments.

3.  Any reliability problems for M&I water availabil-
    ity are due to overestimates of present source
    capabilities and can be easily improved by
    increasing reliance on new sources.  All alterna-
    tives are equally reliable.
                                                      654

-------
                     SUMMARY
The use of several interrelated models to investigate
the potential hydrologic impacts of several proposed
water supply alternatives for South Central Pennsyl-
vania has been presented.  The objective of the study
was to assess the relative impacts of the alternatives
on the distribution of the outflows to Chesapeake Bay.
The methodology developed for this study consisted of
several hydrologic models which processed the availa-
ble hydrologic and water demand data to evaluate the
impacts of the alternatives.

                ACKNOWLEDGEMENTS

This study was performed by Resource Analysis, Incor-
porated under contract to the U.S. Army Corps of
Engineers (North Atlantic Division, Special Studies
Group) for their Northeastern United States Water
Supply (NEWS) Study.

                    REFERENCES
 Fiering, M.B., "On the Use of Correlation to Augment
  Data," Journal of the American Statistical Associa-
  tion, 57 (297), pp. 20-32, 1962.

 Gilroy, E.J., "Reliability of a Variance Estimate
  Obtained from a Sample Augmented by Multivariate
  Regression," Water Resources Research, 6 (6),
  pp. 1595-1600, 1970.

 Matalas, N.C., and B. Jacobs, "A Correlation Proce-
  dure for Augmenting Hydrologic Data," U.S. Geologi-
  cal Survey Professional Paper 434-E, 1964.

 Nieswand, G.H., and M.L. Granstrom, "A Chance-Con-
  strained Approach to the Conjunctive Use of Surface
  Waters and Groundwaters," Water Resources Research,
  7 (6), pp. 1425-1436, December, 1971.

 Resource Analysis, Incorporated, Hydrologic Data
  Storage System:  Users' Manual, Report to the Depart-
  ment of Natural Resources, Commonwealth of Puerto
  Rico, January, 1974a.

 Resource Analysis, Incorporated, Hydrologic Impact
  Studies of Alternatives to Meet Water Needs in South
  Central Pennsylvania, Report to North Atlantic Divi-
  sion of U.S. Army Corps of Engineers, September,
  1974b.

 Schaake, J.C., et al., "Systematic Approach to Water
  Resources Plan Formulation," M.I.T. Ralph M. Parsons
  Laboratory for Water Resources and Hydrodynamics
  Report No. 187, July, 1974.

 Valencia, Dario, and J.C. Schaake, Jr., "A Disaggre-
  gation Model for Time Series Analysis and Synthesis,"
  M.I.T. Ralph M. Parsons Laboratory for Water Resources
  and Hydrodynamics Report No. 149, June, 1972.

 Valencia, D., and J.C. Schaake, Jr., "Disaggregation
  Processes in Stochastic Hydrology," Water Resources
  Research, 9 (3), pp. 580-585, June, 1973.

                    Appendix A

           DESCRIPTION OF ALTERNATIVES
 Alterna-
 tive #   Developments

    1    Considers development each municipality
         would undertake without outside assistance.
         Harrisburg, Mechanicsburg, and Carlisle would
         draw from present sources with some additional
         groundwater development.
         Lebanon would develop new storage and improve
         existing storage on the Swatara, with some new
         groundwater development.
         York would go to the Susquehanna as a source
         and develop groundwater.
         Elizabethtown, Ephrata, New Holland, and Lititz
         would develop additional groundwater sources.
         Lancaster and Manheim would continue with pres-
         ent sources.

    2    Considers major use of groundwater in the
         future, especially for York and Lebanon.  No
         new impoundments or Susquehanna sources.

    3    Considers Susquehanna as the major source of
         new water demands for Lebanon, York, and Eliz-
         abethtown, with no new impoundments built.

    4    Considers new impoundment on Swatara Creek for
         Lebanon, and York water supply
         from Susquehanna.

    5A   Considers impoundment on Swatara Creek for
         Lebanon and Elizabethtown, and York supply
         from Susquehanna.

    5B   Same as 5A except reservoir development on
         E. Conewago Creek is considered for Eliza-
         bethtown.

    5C   Susquehanna is used for Lebanon, Harrisburg
         (East and West), York and for some additional
         needs in Carlisle and Mechanicsburg.  Reser-
         voir on E. Conewago Creek is used for Eliza-
         bethtown.

    6A   Considers a new reservoir on the Swatara and
         groundwater for Lebanon.  New reservoir on
         S. Branch of the Codorus for York.

    6B   Lebanon impoundment retained, but Elizabeth-
         town switched to Susquehanna and York to a
         W. Conewago reservoir.

    6C   York, Lancaster, New Holland and Elizabethtown
         use Susquehanna, and new impoundments are dev-
         eloped on Conodoquinet Creek and Swatara Creek.

    7A   Large groundwater development, Lebanon uses
         Susquehanna and York uses impoundment on W.
         Conewago Creek.

    7B   Same as 7A except Lebanon uses a reservoir on
         the Swatara.

    7C   Combines 5C and 6C and includes a reservoir on
         the Conodoquinet.

            Appendix B   Node Descriptions

Reservoir   - CODRRES
M&I         - CARLILMI, HARRWCDQ, MECHBGMI, HARRWYBC, YORKCODR,
              YORKCBC, YORKSUSQ, LANCSTSQ, EPHRTAMI, ELIZCHK,
              MANHMMI, LEBSWT, HARRESWT, HESUSQ, HARRESCL,
              SUSQMI
Groundwater - CARLILGW, MECHBGGW, YORKGW, LEBANGW,
              ELIZGW, LITITZGW, NEWHOLGW, LANCSTGW,
              EPHRTAGW
Irrigation  - CDQIRR, WCONIRR, CODRIRR, CSTIRR, CONWIRR,
              SWTIRR

All others are start or confluence nodes.
                                                     655

-------
                                        Figure 1:  Study Area

                                        Figure 2:  Available Streamflow Records (periods of
                                                   record, 1900-1960, for the gauging stations
                                                   in the study area)

                                        Figure 3:  Typical Schematic
                                                                                                               656

-------
                                        THE OPERATIONAL WATER QUANTITY MODEL

        Ashok N. Shahane                    Paul Berger               Robert L. Hamrick
  Environmental Systems Engineer       Scientific Programmer           Division Director

                               Water Planning Division, Resource Planning Department
                                Central and Southern Florida Flood Control District
                                           West Palm Beach, Florida  33402
                        SUMMARY

A recently completed operational  water quantity model
based on hydrologic-hydraulic simulations is presented
in this paper.   Using the rainfall  input, initial  state
conditions and  basin parameters,  the model  estimates,
among many hydrologic entities, the streamflows contri-
buted by the watersheds.   An iterative type routing
model is then developed to distribute the simulated
streamflows through the primary conveyance systems of
lakes, canals,  and channelized river controlled by the
gate operations at the controlling  structures.   The
designed methodology is demonstrated for the Kissimmee
River basin of  Florida for the year 1970 by considering
21 canals, 14 lakes and 14 controlling structures.  The
outcome of the  model relates to simulated lake stages,
water levels at tailwater and headwater sides of the
controlling structures and simulated discharges through
controlling structures every 3 hours for the full  year
of 1970.  The comparison of simulated values with the
corresponding historical  data indicates clearly the
"working" of all  the individual pieces of the opera-
tional water quantity model, although a few critical
links are currently being refined to obtain better simu-
lated lake stages.
                     INTRODUCTION

Although the conventional  watershed models are devel-
oped with different purposes, methodologies, tools and
settings, they are usually valid for natural hydrologic
drainage systems.   Therefore, it seems that these
models have to be  modified in some fashion to analyze
the typical water  system with a chain of lakes and
channels managed by several  controlling structures.
Thus, a basic characteristic of the operational  water
quantity model  is  related to its capability of including
operational functions of the water system with adequate
theoretical and experimental data for formulating basic
hydrologic processes. Secondly, based on the analytical
principles, simulation and optimization techniques with
stochastic and deterministic inputs are currently being
used in planning and design of water systems.  Con-
sidering the necessary assumptions and speculated condi-
tions required for reaching a mathematical solution,
these design models give general answers to the over-
all problem and do not generate the most desired prod-
uct for operational needs.4  As further pointed out
by Lindahl  and Hamrick4, operationally oriented models
should give specific answers to very specific questions
and circumstances.   Thirdly, the operational models  are
usually designed to function as a short-term and long-
term decision-making aid within an operational set-up
and within existing peripheral monitoring capabilities
for a typical  system.   In  other words, using hydrologic
and hydraulic characteristics of the river basin, the
operational  models  can provide valuable assistance in
operating the gates manually or automatically to main-
tain water  levels  or adequate flow of water in normal
as well  as  unusual  circumstances.
            The specific water system for which the presented
            operational water quantity model was developed is the
            Kissimmee River system as depicted in Figure 1.  As
            shown in Figure 1, the Kissimmee water system consists
            of 14 lakes, 25 canals and 14 controlling structures.
            As shown in Figure 2, the Kissimmee basin is further
            divided into 19 drainage basins (also called planning
            units) that drain into the primary conveyance system
            of lakes, canals and controlling structures of Figure 1.
             It is to be emphasized that the procedure of the oper-
             ational water quantity model and the related computer
             programs are developed for the specific configuration of
             the water system shown in Figures 1 and 2.
              COMPONENTS OF THE OPERATIONAL WATER QUANTITY MODEL

            Since past attempts have been made in three distinct
            stages to bring the model to its current form, its
            developmental procedure is broken down into three
            component parts:  1. Sub-basin model, 2. Routing pro-
            cedure, and 3. Routing methodology to combine the
            routing technique with the sub-basin model.  Basic
            computational steps of the model are outlined in
            Figure 3.
            Description of the sub-basin model:

            The basic foundation on which the sub-basin model  was
            developed and modified is essentially a parametric
            approach for formulating the physical system of the
            Kissimmee basin in terms of hydrologic
            simulation.4>5,8,9,10  it can be seen from Figure 4
            that the major computational steps  are related to:
            (a) processing of input rainfall values, (b) formu-
            lations of the infiltration phenomenon, (c) surface storage
            and overland flow equations, (d) estimation of water
            losses, and (e) quantification and  routing of sub-
            surface flow through a multi-layer  soil system.

            Since the detailed descriptions and discussions of
            rationale behind these formulations  were previously
            reported by Hoi tan, Lopez, Lindaljl.  Sinha. Hamrick,
            Khanal, and Shahane, et. al,] ,2,4,5,8,9,10 tnese
            formulations are briefly discussed  in the following
            section.
                 Processing of input rainfall  values:  Using the
            available network of raingaging stations over the
            entire Kissimmee River basin, daily rainfall  values
            are obtained for each of the 19 planning units from
            the daily rainfall values of surrounding representative
            raingaging stations.  These recorded daily rainfall
            values are further synthesized to generate hourly
            values using a linear stochastic model  for the consecu-
            tive hourly rainfall record as reported in reference 10.
                                                      657

-------
     Formulations for infiltration phenomenon:  Among
various formulations and concepts proposed by many soil
scientists, a modified form of the empirical equations
originally developed by Holtan is used in quantifying
the infiltration phenomenon.1,2  Such equations are:

     f = A(SA)^1.4             for SA ≥ G             (1)
and
     f = A(SA)^1.4 + FC        for SA < G             (2)

where
     f = capacity rate of infiltration, A = surface
     penetration index, SA = storage currently available
     in the soil reservoir, FC = constant rate of infil-
     tration between consecutive layers, G = total
     amount of free or gravitational water in a soil
     profile of selected depth.1,2,8,9
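
A small sketch of Equations (1) and (2) as reconstructed above may make the switching between the two forms clearer; the parameter values are illustrative, not calibrated values from the study.

    # Sketch of the Holtan-type infiltration capacity, Equations (1)-(2) above.
    # A, FC and G below are illustrative, not calibrated basin parameters.

    def infiltration_capacity(SA, G, A, FC):
        """Capacity rate of infiltration given available soil storage SA."""
        f = A * SA ** 1.4              # Equation (1)
        if SA < G:                     # Equation (2): add FC when SA < G
            f += FC
        return f

    if __name__ == "__main__":
        for SA in (0.5, 1.0, 2.0, 4.0):                 # inches of available storage
            print(SA, round(infiltration_capacity(SA, G=1.5, A=0.8, FC=0.05), 3))
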

     Surface storage and overland flow:  Besides
infiltration, a part of precipitation is contributed to
the storage in surface depressions.  Such surface
storage is computed as

     VD = (P - f) DT                                  (3)

     After a part of precipitation input percolated
into the ground and after a part filled the maximum
volume of surface depressions, precipitation excess is
contributed to overland flow.  Mathematically, it is
computed from simple subtraction as

     Overland flow = P - f  when VD = VDM and P > f   (4)

where
     P = precipitation input, f = infiltration rate,
     VD = amount of water currently stored in surface
     depressions, VDM = maximum volume of surface
     storage.8,9
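
The partitioning of a precipitation increment among infiltration, depression storage and overland flow, as described by Equations (3) and (4), can be sketched as below. The time-step bookkeeping and the numbers are illustrative assumptions.

    # Sketch of the partitioning described above: precipitation infiltrates up to
    # capacity f, the excess fills surface depressions up to VDM, and the
    # remainder becomes overland flow.  Values are illustrative.

    def partition_step(P, f, VD, VDM, DT=1.0):
        """Return (infiltrated, new_VD, overland_flow) for one time step DT."""
        infiltrated = min(P, f) * DT
        excess = max(P - f, 0.0) * DT            # water not taken by infiltration
        to_depressions = min(excess, VDM - VD)   # fill remaining depression storage
        overland = excess - to_depressions       # nonzero only once VD reaches VDM
        return infiltrated, VD + to_depressions, overland

    if __name__ == "__main__":
        VD = 0.0
        for P in (0.2, 0.6, 0.9, 0.4):           # hypothetical rainfall rates, in/hr
            infil, VD, q = partition_step(P, f=0.3, VD=VD, VDM=0.5)
            print(round(infil, 2), round(VD, 2), round(q, 2))
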
     Estimation of water losses:  In the sub-basin
model, water losses are considered as the part of preci-
pitation input that reaches the ground surface but never
appears at the watershed outlet.8,9,10  With this
definition, water loss can occur in different categories;
i.e., water loss due to direct soil evaporation, evapo-
transpiration by existing vegetation and water loss due
to deep percolation.  These losses are in turn functions
of various factors as shown in the following formula-
tions:

    1.  Water loss due to direct soil evaporation,

        Loss 1 = C1 [1 - (DWT/DWTM)] (EP)(DT)          (5)

    2.  Portion of water that is lost due primarily to
        the existing vegetation,

        Loss 2 = C2 (GI)(C1)(EP)(DT)                   (6)

    3.  Water loss due to deep percolation,

        Loss 3 = (FC)(DT)                              (7)

where

    C1 = ratio of maximum evapotranspiration to maximum
    pan evaporation value,
    DWT = water table depth = (SA)(D)/G,               (8)
    D = total depth of soil profile, G = total amount of
    free gravity water that could exist in a soil
    profile, DWTM = maximum depth to water table at
    which DWT will have a negligible contribution toward
    Loss 1, EP = pan evaporation, NW = number of weeks,
    DT = time increment, C2 = constant = 0.78, GI = an
    overall growth index for the existing vegetation,
    FC = constant rate of infiltration between consec-
    utive layers, SA = storage currently available in
    the soil reservoir.

         Adding these three losses together gives the  total loss
         of water from a given soil profile.  This value  of
         total water loss is accounted for in estimating  the
         recovery of water from the soil reservoir to  the main
         channel.
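
Because Equations (5) and (6) are only partly legible in this copy, the following sketch should be read as an interpretation of the loss bookkeeping rather than as the published formulation; all parameter values are illustrative.

    # Hedged sketch of the three loss terms and their sum, following the
    # reconstructed Equations (5)-(8).  Parameter values are illustrative.

    def total_water_loss(SA, D, G, DWTM, EP, GI, FC, DT, C1, C2=0.78):
        DWT = SA * D / G                                     # Equation (8)
        loss1 = C1 * max(1.0 - DWT / DWTM, 0.0) * EP * DT    # direct soil evaporation
        loss2 = C2 * GI * C1 * EP * DT                       # evapotranspiration by vegetation
        loss3 = FC * DT                                      # deep percolation
        return loss1 + loss2 + loss3

    if __name__ == "__main__":
        total = total_water_loss(SA=1.0, D=30.0, G=3.0, DWTM=48.0,
                                 EP=0.02, GI=0.6, FC=0.01, DT=3.0, C1=0.9)
        print("total loss (in):", round(total, 4))
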
     Quantification and routing of sub-surface flow:
The basic purpose of this computational step is to
estimate the spatial and time contribution of the sub-
basin flow from different soil reservoirs to the main
channel.  Thus, the first task is to determine the
number of reservoirs.  This is done by reverse integra-
tion of the runoff hydrograph by establishing storage-
flow relationships for a simple recession curve.  Using
this technique it is established that for our 19 plan-
ning units, the soil profile can be represented by not
more than three soil reservoirs.  After determining
the number of soil reservoirs, the basic continuity
equation and a storage-outflow curve are combined to
provide contributions of each soil reservoir to the
stream channel and also the total storage available
in these reservoirs at the end of each time step.
These computations, reported by Lindahl, take into
account (1) the volume of water that is infiltrated
during time DT, (2) initial available storage in a soil
reservoir, (3) the sum of water losses, (4) the volume of sub-
surface drainable water, (5) the time interval for the vol-
ume of the subsurface drainable water, and (6) the up-
dated available storage.  At the end of these compu-
tations, the discharges contributed by each soil  layer
and overland flow are obtained for each time interval.
In the next step, these discharges are multiplied  by
the routing coefficients (which are estimated from
Nash's routing equation) and resulting values are
added together to obtain time distribution of stream-
flows at the watershed outlet.8,9
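
The continuity-plus-storage-outflow step can be sketched with a linear storage-outflow curve standing in for the relationships actually fitted from the recession analysis; the sketch below is only that stand-in, not the F.C.D. computations reported by Lindahl.

    # Sketch: step one soil reservoir forward with continuity and a linear
    # storage-outflow curve (Q = S / K).  K and the inputs are illustrative.

    def route_soil_reservoir(infiltrated, losses, S0, K, DT=3.0):
        """Continuity:  S(t+1) = S(t) + inflow - loss - Q*DT, with Q = S / K."""
        S, outflows = S0, []
        for inflow, loss in zip(infiltrated, losses):
            Q = S / K                              # storage-outflow relationship
            S = max(S + inflow - loss - Q * DT, 0.0)
            outflows.append(Q)
        return outflows, S

    if __name__ == "__main__":
        inflow = [0.3, 0.5, 0.2, 0.0, 0.0, 0.0]    # hypothetical infiltrated depths
        loss = [0.02] * 6
        q, S_end = route_soil_reservoir(inflow, loss, S0=1.0, K=24.0)
        print([round(x, 4) for x in q], round(S_end, 3))
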

Input Data Requirements:

To carry out these computational steps for the 19  plan-
ning units of the Kissimmee basin, the parameters  of
the formulations should be known.  Since these param-
eters represent the agricultural-related water charac-
teristics of the basin, they are estimated based on
the available research publications of the ARS and
many reports delineating the regional  character-
istics. 4,5,8,9,10

To compute infiltration characteristics, the appropri-
ate basin parameters are:  (a) total  available storage
in three soil reservoirs, i.e., TAS(l), TAS(2) and
TAS(3); (b) constant rates of infiltration in three
layered soil systems from one layer to another desig-
nated as F(1), F(2) and F(3); (c) total amount of
gravitational water in these three layers, i.e., G(1),
G(2), and G(3); (d) portion of G that can be drawn
into surface water, i.e., GD(1), GD(2), and GD(3), and
(e) total depth of the soil  profile (D) in inches.

In addition, for estimating three types of water
losses, overland flow, and sub-surface flow, the fol-
lowing parameters are required:  (a) depth of water
table at which evaporative water loss is considered
significant, (b) maximum volume of surface storage
(VDM), (c) ratio of evapotranspiration and maximum pan
evaporation value (PPAN), (d) sub-surface discharges
through three soil layers Q(l), Q(2), and Q(3), and
(e) corresponding storages in these three soil reser-
voirs SG(1), SG(2), and SG(3).
                                                       658

-------
Finally, routing coefficients to combine flows from the
three sub-surface layers with the overland flow, i.e.,
TK(1), TK(2), TK(3), TK(4) for representative loca-
tions in the Kissimmee basin, are also necessary, along
with the assumed number of cascades in each layer (CNR).
                    ROUTING MODEL
In our specific investigations, the basic purposes of
developing routing methodology are: (1) to distribute
sub-basin model output through the system of the lakes,
channels and controlling structures, (2) to combine
stage-storage fluctuations of the lake with the stage-
discharge characteristics of the channel sections for
developing a simple joint methodology of reservoir and
channel routing, (3) to include operational character-
istics of the controlling gates coupled with the
routed simulated stages for estimating discharges
through various controlling structures, (4) to improve
sub-basin model output by including the key process
(if any) of the lake or channel which might be excluded
from the assumed conceptual physical system, and (5) to
provide the basis  for examining the effects of changing
operational parameters on the hydrologic characteristics
of the Kissimmee water system with complete independ-
ence from the analysis of the historical data.

Input  Information  and Essential Formulations:

     Input information:  While trying to demonstrate
the routing model  for a one year period of 1970, it is
essential to obtain hydrologic base line information
just before this period for all the lakes, channels,
and controlling structures.  Such information (also
known  as initial conditions) includes:   (1) the re-
corded stages at 14 lakes of the upper and lower
Kissimmee, (2) recorded tailwater and headwater eleva-
tions  at 14 controlling structures, (3) proportioning
factors for distributing sub-basin model output in
corresponding lakes of a particular planning unit,
 (4) various constants to convert monthly pan evapora-
tions  to 3 hour lake evaporation values.

Essential Formulations:

As an essential part of the simulation procedure, our
methodology also depends heavily on the formulations of
various water systems.  Basic forms of the equations
which are used in  our analysis are summarized in Table
1.  As shown in this table, formulations are classified
according to the type of system (i.e., lake, channel
or controlling structures).  They are described below.

    Formulations for lake system:  Essentially, the
parameters which are useful in the simulation are
stages, storages,  inflows and outflows for various
lakes.  The first  two equations of the lake system given
in Table 1 tie together, change in storage (AS) and
changes in stage to the characteristics of inflow, out-
flow and initial stages.   These equations are simple
forms of mass-balance equations.   In addition, it is
also necessary to  know the stage-storage relationships
for all the lakes  of the upper Kissimmee.   These rela-
tionships can  be in either tabular form or in math-
ematical  form.
     Formulations for controlling structures:  Opera-
tional characteristics of the Kissimmee water systems
are reflected in the formulations of the controlling
structures.  Variables considered in these formulations
are gate openings (GO), headwater elevation (HWE), and
tailwater elevation (TWE) with discharge as a depend-
ent variable as shown by Equation 1 for structure oper-
ations in Table 1.  In the routing methodology these
equations are used to compute the discharge through the
structure knowing the simulated tailwater and headwater
stages for a given set of gate openings.

     Channel formulations:  The development of the
channel formulations and using them in a convenient
fashion in routing methodology are some of the steps
that make our procedure different than previously
attempted techniques.   Essentially, the hydraulic
formulations given in Table 1 for the channel  system
relate to:  (1) a differential equation representing
gradually varied flow with slope of energy line, channel
bottom slope, discharges, cross-sectional area, top
width of the channel and velocity head coefficient as
variables and rate of change of depth (with distance)
as a dependent variable (Equation 1 of Table 1),
(2) Manning's equation combining hydraulic character-
istics of the flow (i.e., velocity, Manning's coeffi-
cient, slope of energy line) with the physical charact-
eristics of the channel cross-sections such as cross-
sectional area (A) and perimeter (P) (Equation 2 of
Table 1 of the channel system), (3) an iterative equation
based on the numerical integration technique of the trape-
zoidal rule applied by Prasad6 to estimate the water
depth (and then water surface elevation) at the end of
the channel section (Equation 3 of Table 1 of the
channel system).  Using these formulations, the exist-
ing FCD backwater program is run to perform backwater
computations for all the 25 channel sections of the
Kissimmee with the available channel  cross-sectional
data.  For a given channel section, the program gener-
ates a set of upstream, downstream stages  along with
discharges and storages.  Using this  data  set, empirical
relationships based on statistical principles  are de-
rived for these variables.  These established  mathe-
matical relationships (also known as  backwater func-
tions) are then used in the program to replace directly
the backwater computational steps.  Among  many develop-
ed equations, the selected formulations are given in
Tables 2, 3, and 4.  For better accuracy,  these
formulations are used in conjunction  with  corresponding
correction factors.  The rationale and different points
for developing these equations are discussed in detail
by Shahane, et al.7
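
Once fitted, a backwater function of the kind listed in Table 2 replaces the step-by-step backwater computation with a single evaluation. The sketch below uses the coefficients listed for section C-29 and reads the relationship as Q = (US-DS)^A (DS)^B, which is how the heading of Table 2 appears in this copy; the stage values are hypothetical.

    # Sketch: evaluate a fitted backwater function of the Table 2 form,
    # Q = (US - DS)**A * DS**B, for channel section C-29.  Stages are hypothetical.

    COEFFS = {"C-29": (0.23189354, 1.53817995)}    # (A, B) from Table 2

    def channel_discharge(section, US, DS):
        """Mean discharge from upstream (US) and downstream (DS) stages."""
        A, B = COEFFS[section]
        head = max(US - DS, 0.0)
        return (head ** A) * (DS ** B)

    if __name__ == "__main__":
        print("Q (cfs):", round(channel_discharge("C-29", US=52.3, DS=51.8), 1))
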
               COMPUTATIONAL METHODOLOGY

After developing various pieces presented earlier, the
next important step is to link them together to distri-
bute the sub-basin flows through the lake, channel, and
controlling structures of the Kissimmee basin.  Among
other possible procedures, the selected method is de-
scribed briefly in the following section.

At the outset, the three lakes system is considered
with emphasis on the middle lake and the associated
two channel sections (i.e., one on each side of the
middle lake).  Using the recorded initial stage of the
                                                       659

-------
middle lake, its initial storage is computed from the
stage-storage values.  From the initial recorded stages
of the three lakes, the initial discharges are esti-
mated by the channel formulations given in Table 2.  If a
controlling structure is located in one or both chan-
nel sections, the initial  discharges are computed
from the discharge rating curves for controlling
structures knowing the recorded tailwater, headwater
elevations (TWE and HWE) and the 3 hour gate opening
data.  Using these initial  estimates of discharges
flowing into or away from the middle lake and the
local inflow generated by the subbasin model, the
change in storage (AS) in the middle lake is esti-
mated from the simple mass-balance equation.  Know-
ing the initial storage and the computed change
in storage, a new storage and new stage is obtained
for a prescribed time step.  If the new discharges
corresponding to the new stage make the change in
storage (ΔS) in the middle lake significantly dif-
ferent from the previously estimated ΔS, then ΔS is
again computed using the average values of the new and
previous discharges through the two channel sections.
This iterative procedure is continued until the dif-
ference between previous and new estimates of ΔS is
within the prescribed limit.  At the end of the itera-
tion, final estimates of discharges through the chan-
nels, and the lake stage of the middle lake are ob-
tained.  These steps are repeated for the next three
lake systems and continued for each lake system start-
ing from Alligator Lake to Kissimmee Lake and for five
channel systems of the lower Kissimmee basin using all
the formulations shown in Tables 2, 3, and 4.
Other details of the computational  methodology are
discussed by Shahane, et al.7
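
A compressed sketch of the iteration just described is given below for a single "middle" lake. The stage-storage curve, the channel rating and all numerical values are placeholders, not the F.C.D. formulations; the sketch is meant only to show the averaging-and-reconvergence loop on the change in storage.

    # Sketch of the middle-lake iteration described above.  All relationships
    # and numbers are placeholders; volumes are in acre-ft, DT in days.

    def stage_to_storage(stage):                 # placeholder stage-storage curve
        return 2000.0 * (stage - 48.0)

    def storage_to_stage(storage):
        return 48.0 + storage / 2000.0

    def channel_q(upstream, downstream):         # placeholder rating, acre-ft/day
        return 300.0 * max(upstream - downstream, 0.0)

    def route_middle_lake(up_stage, mid_stage, down_stage, local_inflow,
                          DT=0.125, tol=1.0, max_iter=50):
        S = stage_to_storage(mid_stage)
        q_in = channel_q(up_stage, mid_stage)
        q_out = channel_q(mid_stage, down_stage)
        dS = (q_in - q_out + local_inflow) * DT
        for _ in range(max_iter):
            new_stage = storage_to_stage(S + dS)
            q_in2 = channel_q(up_stage, new_stage)
            q_out2 = channel_q(new_stage, down_stage)
            # average previous and new discharge estimates, as in the text
            dS_new = (0.5 * (q_in + q_in2) - 0.5 * (q_out + q_out2)
                      + local_inflow) * DT
            if abs(dS_new - dS) < tol:           # convergence on the storage change
                break
            dS = dS_new
        return storage_to_stage(S + dS), dS

    if __name__ == "__main__":
        stage, dS = route_middle_lake(54.0, 52.0, 50.5, local_inflow=400.0)
        print("middle-lake stage (ft):", round(stage, 3), " dS (acre-ft):", round(dS, 1))
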
               RESULTS AND VERIFICATIONS
Results:
After putting together the pieces of the water quan-
tity model as shown in Figure 3, the output is essen-
tially the net result of the interactions of various
sub-components of such hydraulic simulation procedure.
The primary output from such methodology consists of
(1) simulated hydrologic parameters such as sub-surface
flow, total losses, deep seepage, available storage in
the soil, storage in depression and mean streamflows
for 19 planning units on a 3 hour basis, (2) 3 hours
simulated discharges through all the channel  sections
of the upper and lower Kissimmee for the year 1970,
(3) 3 hours simulated mean discharges through all the
control structures for the full year of 1970, (4) 3
hours simulated stages for 14 lakes of the upper
Kissimmee basin, (5) 3 hours simulated tailwater and
headwater stages at all  the control structures of the
upper and lower Kissimmee basins, (6)  storages in all
the major lakes and storages for five sections of the
lower Kissimmee at the end of every 3 hours for the
entire year of 1970.

Verifications:

The methodology of the sub-basin model was first
applied by previous investigators to the Taylor Creek
drainage basin of 100 square miles located on the
north side of Lake Okeechobee in Florida.  Since the
hydraulic, hydrologic and agricultural characteristics
of the Taylor Creek watershed are well monitored by
the ARS of the U. S. Department of Agriculture, and
since this drainage area was in its natural form with
no control structures to change its natural drainage
characteristics during the test period, it was an
ideal  place to verify and test the FCD sub-basin model.
The typical result of such an effort is depicted in
Figure 5 which indicates clearly the adequacy of the
sub-basin model  and  suggests  the  appropriate choice of
coefficients  covering  the  key hydrologic processes.8»9

When the same sub-basin model  is  applied to  the 19
drainage basins  (also  known as  planning  units)  of the
Kissimmee, the streamflows at the mouth  of these drain-
age areas are generated.   These values are compared
with the available yearly  historical  data (with wet
and dry period values) compiled by the Hydrology Divi-
sion of the FCD.  The  typical  graphical  comparison
is shown in Figure 6.  Based  on this  comparison,  it
appears that  the sub-basin model  simulates hydrologic
components which are in general agreement with  the
recorded values.

The success of the routing methodology can be viewed
in terms of the  various comparisons of simulated
stages and discharges with the  corresponding recorded
values.  The results of our routing methodology for
the upper Kissimmee  basin were  compared  with the  histor-
ical values using a  particular  set of state  conditions,
basin parameters of  sub-basin models  coupled with  a
specific set of  proportioning factors, tabular  values
and mathematical formulations of  the  routing model.
A typical comparison is shown in  Figure  7. Although
the correlations depicted in  Figure 7 for discharges
are excellent, the comparative  graphs of  simulated
and recorded stages  of some of  the lakes  of the
upper Kissimmee show significant differences.  To
illustrate this  point, typical  results in  the form of
graphical comparison for Lake Tohopekaliga are
depicted in Figure 8.  These comparisons  indicate
clearly, (1) the capability of our overall framework
of the operational watershed model to combine the sub-basin
model with the routing methodology and to generate
the desired simulated information, (2) the relative
importance of gate openings as  against the head
difference across the structure in the discharge
rating formulations  for the control structures,
and (3) the adequacy of the developed operational water
quantity model for considering  the interactions of
stage-storage and discharge characteristics of  lakes,
canals and controlling structures, making  it possible
to further examine the effects  of  changed  conditions
on the different parameters under  investigation.
                     CONCLUSIONS

1.   After designing, formulating, modifying and re-
fining various component parts of the operational water
quantity model as shown in Figure 3, it is demonstrat-
ed that the hydrologic and hydraulic performance of the
controlled water system for a given set of rainfall
distribution and gate operations can be adequately
simulated.
2.   With a realistic framework of assumptions, simpli-
fications and approximations the developed computer
program (which takes about 5 hours of computer time for
one year of simulation on the CDC 3100 computer) is
shown to be successful in performing the following
operations:  (a) simulating hydrologic parameters (such
as sub-surface flow, surface flow, evaporation losses,
deep seepage loss, available soil storage, storage in
depression and finally streamflows) on a 3 hour basis
for 19 planning units using rainfall, state conditions
and basin parameters as input data for the year of
1970, (b) routing these generated streamflows of 19
planning units through the controlled system of lakes,
channels and operating structures, (c) simulating 3
hour lake stages, headwater and tailwater elevations
at structures and discharges through the structures of
the upper and lower Kissimmee, (d) comparing the simu-
lated values with recorded values in terms of plotted
graphs and tables, and (e) performing parametric sensi-
tivity analysis by changing the key parameters of the
                                                      660

-------
sub-basin model  and the routing model.
3.   It appears that the developed model can be di-
rectly or indirectly useful in  (a) examining the
effects of certain physical parameters on the final
outcome of discharges and stages, (b) providing opera-
tional information regarding the required set of gate
openings to maintain water levels and discharges at  a
particular level  at specific locations, and (c) uti-
lizing the generated hydrologic information in the
other practical  aspects of water management.
                    ACKNOWLEDGEMENTS

The authors wish to acknowledge Mr. W. V. Storch,
Director of the Resource Planning Department,  Central
and Southern Florida Flood Control District, for en-
couraging the authors to present their modeling method-
ology in this national conference on  Environmental
Modeling and Simulation.

                      REFERENCES

1.  Holtan, H. N., Stiltner, G. J., Henson, W. H.,
    Lopez, N. C., "USDAHL-74 Revised  Model of  Water-
    shed Hydrology", Technical Bulletin  No. 1518, ARS,
    United States Department of Agriculture, Dec. 1975.

2.  Holtan, H. N., "A Concept for Infiltration Esti-
    mates in Watershed Engineering",  ARS 41-51, Oct.
    1961, p. 25.

3.  "Hydrology and Hydraulics Section",  Soil Conserva-
    tion Service, National Engineering Handbook, Aug.
    1972.

4.  Lindahl, L. E., "Review of Techniques Pertaining
    to Basin Models," a Memorandum Report to W. V.
    Storch, Director of Engineering, Central and
    Southern Florida Flood Control District, Dec. 1967.

5.  Lindahl, L. E. and Hamrick, R. L., "The Potential
    and Practicality of Watershed Models in Operation-
    al Water Management", a paper presented at ASCE
    National Water Resources Engineering meeting at
    Memphis, Tenn., Jan. 26-30, 1970.

6.  Prasad, R., "Numerical Method of  Computing Flow
    Profiles", ASCE Hydraulic Division,  Vol. 96, No.
    HY1, Jan. 1970.

7.  Shahane, A. N., Berger, P. and Hamrick, R. L., "A
    Framework for the Operational Water Quantity Model,"
    an interim report of FCD submitted to the Florida
    State Department of Administration, July 1975,
    p. 71.

8.  Sinha, L. K., "An Operational Model:  Step 1-b,
    Regulation of Water Levels in the Kissimmee River
    Basin", American Water Resources  Association Con-
    ference, Oct. 27-30, 1969.

9.  Sinha, L. K. and Lindahl, L. E.,  "An Operational
    Watershed Model:  General Considerations,  Purposes
    and Progress", Transactions of ASAE,  Vol.  14, No.
    4, 1971, pp. 688-691.

10. "Water Yield of Kissimmee River Basin by the Use
    of the FCD Model," an in-house report of Central
    and Southern Florida Flood Control District, 1973.
                      NOTATIONS

ΔS          change in storage,
WSE         water surface elevation,
S           storage,
SO          bottom bed slope,
SE          slope of the energy line,
n           Manning's coefficient,
V           velocity,
HR          hydraulic radius,
Q           discharge,
A           cross sectional area,
Y           depth,
GO          gate opening,
EH          headwater elevation (HWE)  - tailwater  ele-
            vation (TWE),
DX          distance between reaches i+1 and  i,
α           velocity head coefficient,
T           top width of the channel,
g           gravitational acceleration,
a,b,p,r,s   constants
      Figure 1:  Schematic representation of the chain of upper Kissimmee
                 lakes and lower Kissimmee five pools (upper Kissimmee
                 basin and lower Kissimmee basin; not to scale)
                                                      661

-------
         Figure 2:  Map showing the locations of the 19 planning units
                    of the Kissimmee basin

         Figure 3:  Flow chart of major computational steps involved
                    in F.C.D. water quantity model

         Figure 4:  F.C.D. sub-basin model (depression storage; soil
                    moisture layers II and III)

         Figure 5:  Comparison of simulated and recorded discharge
                    for Taylor Creek (8,9)

         Figure 6:  Comparisons of annual values of rainfall-runoff
                    estimates of sub-basin model with historical data
                    for the period 1960-70 for the entire Kissimmee
                    basin (runoff, inches)
                                                           662

-------
         Figure 7:  Comparison of simulated and recorded discharges
                    (East Lake Tohopekaliga, 1970)

         Figure 8:  Comparison of simulated and recorded stages for
                    East Lake Tohopekaliga for the year 1970
Table 1.   Basic forms of equations useful in the model.*

System                  Formulations

Lake System             1.  (stage)t+1 = (stage)t + (ΔS)t+1                  (A)
                        2.  (ΔS)t+1 = It+1 - Ot+1                            (B)
                        3.  WSE = a(S)^b                                     (C)
                        4.  polynomial equations                             (D)

Channel System          1.  dY/DX = (SO - SE) / [1 - (αQ²T)/(gA³)]           (E)
                        2.  SE = n²V²/[2.22 (HR)^(4/3)]
                               = n²Q²P^(4/3)/[2.22 A^(10/3)]                 (F)
                        3.  iterative (trapezoidal-rule) integration of
                            Equation 1 for the depth at the end of the
                            channel section (Prasad6)                        (G)

Structure Operations    1.  Q(H) = p(GO)^r (EH)^s                            (H)

   *Notations are explained at the end of the paper.
                                                                     Table 2.   Nonlinear formulations of discharges for the
                                                                                 typical seven channel sections of the upper
                                                                                 Kissimmee basin.

                                                                                 Nonlinear relationship:  Q = (US-DS)^A (DS)^B

                                                                     Channel
                                                                     Section        A             B             r²

                                                                     C-32G      0.19562817    1.37327452    0.99036109
                                                                     C-32B      0.12933563    1.31192007    0.99028838
                                                                     C-32D      0.11312801    1.28232715    0.98835559
                                                                     C-32F      0.02812316    1.14778715    0.98488793
                                                                     C-29       0.23189354    1.53817995    0.99206125
                                                                     C-37       0.44362025    2.25302023    0.99901088
                                                                     C-36       0.40565648    2.17679705    0.99862609

                                                                      r = correlation coefficient,
                                                                      Q = mean discharge,
                                                                     US = upstream stage,
                                                                     DS = downstream stage
                                                                    Table 3.   Stage-storage-discharge relationships for the
                                                                                lower Kissimmee basin.*

                                                                                Nonlinear relationship:  US = (DS)^A (log Q)^B

                                                                    Channel
                                                                    Section        A             B             r²

                                                                    C-38A      0.93525909    0.12357836    0.99999427
                                                                    C-38B      0.80300638    0.34915258    0.99997801
                                                                    C-38C      0.72539726    0.45337676    0.99993335
                                                                    C-38D      0.72979747    0.42254163    0.99995117
                                                                    C-38E      0.84436366    0.22342889    0.99995183

                                                                     r = correlation coefficient,
                                                                    US = upstream stage,
                                                                    DS = downstream stage,
                                                                     Q = mean discharge,
                                                                     Q > 0

                                                                    * C-38A = channel section of C-38 between structures S-65  and S-65A
                                                                      C-38B =    "       "    "   "      "       "        S-65A and S-65B
                                                                      C-38C =    "       "    "   "      "       "        S-65B and S-65C
                                                                      C-38D =    "       "    "   "      "       "        S-65C and S-65D
                                                                      C-38E =    "       "    "   "      "       "        S-65D and S-65E
                                                                    Table 4.   Stage-storage-discharge relationships for the
                                                                                lower Kissimmee basin.

                                                                                Nonlinear relationship:  DS = (log Q)^A (log ST)^B

                                                                    Channel
                                                                    Section        A              B             r²

                                                                    C-38A      0.46661167    1.29225620    0.9974083
                                                                    C-38B      0.06123664    1.59817418    0.99994493
                                                                    C-38C     -0.11387716    1.70097296    0.99985066
                                                                    C-38D     -0.32464046    1.79672855    0.99987939
                                                                    C-38E     -0.31844141    1.68094907    0.99921370

                                                                     r = correlation coefficient,
                                                                    DS = downstream stage,
                                                                     Q = discharge,
                                                                    ST = storage in acre-ft,
                                                                     Q > 0
                                                                663

-------
                                    HOW TO MAKE SIMULATIONS MORE EFFECTIVE
                                               George S. Fishman
                                         University of North Carolina
                                                 Chapel Hill
      Discrete event simulation on a digital computer
has been with us as a tool of analysis for over two
decades.  Among the attractions it offers are the
ability to model detail (where analytical methods
fear to tread) and the ability to control variation
(which real world experimentation cannot).  The first
of these abilities, modeling detail, seems to have
been oversold, and the second, control of variation,
has been undersold.  The presentation concentrates on
these two topics, offering examples and guidance as
to how thoughtful reflection on these two areas, be-
fore simulation modeling and programming begin, can
lead to more effective use of the simulation method.

                 1.  Introduction

      In today's presentation my remarks are intended
to put the problems that arise in a simulation on a
digital computer in perspective and to offer direc-
tion in solving some of the statistical ones.  Hope-
fully, this perspective and direction will facilitate
our discussion here on just what can be done to
increase user satisfaction with the building blocks
of simulation.

      2.  Features of the Simulation Method

      As a tool for studying complex systems, simula-
tion offers many attractions.  These include:
          1.  compression of time
          2.  expansion of time
          3.  model detail
          4.  selection of outputs
          5.  control of measurement errors
          6.  control of variation.
      A properly constructed simulation model can
compress time so that several years of system activ-
ity can be simulated in minutes or, in some cases,
seconds.  This ability enables one to run through a
variety of operational designs of interest in a frac-
tion of the time required to try each on the real
system.
      The ability to expand time also has its bene-
fits.  By arranging for statistics of interest to be
produced over small intervals of simulated time, one
can study the detailed structure of system change
that cannot be observed in real time.  This figura-
tive time dilation is especially helpful when little
data exist on change in the real system.
      Model detail is often cited as the most notable
feature of computer simulation.  Although all model-
ing involves some abstraction from reality, the
ostensible reason for using simulation in the minds
of many analysts is that it allows them to model de-
tail that other methods would have to omit in order
to admit a solution.  This ability to include detail
has occasionally led to a euphoria about what simula-
tion can do.  Unfortunately the dark side of the pic-
ture is seldom mentioned in advance and inevitably a
user who exploits this ability to include detail
learns that all is not well at a later stage in his
use of simulation.  Section 3 discusses the subject
of detail with regard to its dark side in depth.
      The ability to select output and reports of
varying degrees of detail also contributes to the ap-
peal of simulation.  However, it should be remembered
that the computation of output statistics takes time.
Therefore, a judicious simulation user devotes prior
thought to what the relative importance of different
outputs is and to the ways in which he can manipulate
a small internal data base to produce many outputs of
interest.  For example, in a queueing system the iden-
tity L = λW, where L denotes mean queue length; λ,
the arrival rate; and W, the mean waiting time, holds
under fairly general conditions.  Therefore, one need
collect data to estimate only L or W, since the
other can be obtained by division or multipli-
cation by the known arrival rate λ.
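
A small illustration of that point: if a simulation estimates W directly, L follows from the known arrival rate without any extra data collection. The sketch below simulates a simple M/M/1 queue to make the identity concrete; the queueing assumptions are made only for illustration and are not part of this presentation.

    # Sketch illustrating L = lambda * W: estimate the mean time in system W
    # from a simple M/M/1 simulation, then recover L from the known arrival rate.

    import random

    def mm1_mean_wait(lam, mu, n_customers=200_000, seed=1):
        random.seed(seed)
        t_arrival, t_free, total_wait = 0.0, 0.0, 0.0
        for _ in range(n_customers):
            t_arrival += random.expovariate(lam)     # next arrival time
            start = max(t_arrival, t_free)           # service begins when server free
            t_free = start + random.expovariate(mu)  # departure time
            total_wait += t_free - t_arrival         # time in system for this customer
        return total_wait / n_customers

    if __name__ == "__main__":
        lam, mu = 0.8, 1.0
        W = mm1_mean_wait(lam, mu)
        print("estimated W:", round(W, 3), "  L = lam*W:", round(lam * W, 3))
        print("theoretical W for M/M/1:", round(1.0 / (mu - lam), 3))
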
   Control of measurement errors offers a great com-
fort to simulation users.  Presumably  the automatic
fashion in which data are collected in a  computer
simulation together with  the  fact that machine errors
are virtually absent has, until recently,  led to
complacency about the possibility of error.  With the
advent of the simulation of computer systems in which
the time between events is of the order of micro-
seconds but run lengths are of the order  of hours, it
has become clear that the accumulated  simulation  time
which is generally computed by adding  the  times be-
tween events is subject to substantial error.  Whether
or not this is a serious  issue for a simulation
study depends on the nature of interevent times
relative to run length times.
   Control of variation is the least appreciated
feature of computer simulation.  This  may  be a result
of the fact that some knowledge of statistics is
necessary to exploit this feature.  In particular,
application of this control of variation  enables one
to obtain results with a specified accuracy at lower
cost than if one ignored the potential for control.
Section 4 offers a number of examples  to  illustrate
how easily this exploitation can be made  to work.

                     3.  Detail
   There exists a general presumption among analysts
that if they were just able to make their models con-
form more closely to the observed behavior, then they
would increase chances of having a successful study.
Simulation, being a descriptive tool, allows one in
theory to make a model resemble reality as closely
as one likes.  However, in order to
close the gap between model and reality, one has to
have a definitive picture of the behavior to be
modeled.
   To study detail we use a simulation of a fire sup-
port system as an example.  Fire support involves a
host of microphenomena; they include:
              A.  target acquisition
                  1.  detection
                  2.  identification
                  3.  location
              B.  target engagement
                  1.  priority rules
                  2.  weapons availability
                  3.  weapons selection rules
              C.  fire support performance
                  1.  target characteristics
                  2.  weapons characteristics
                  3.  measures of effectiveness.
   Each calls for detail which hopefully would arise
from actual battlefield experience.  If the knowledge
needed to derive a more adequate representation of
target detection, location and identification exists,
then one has to decide whether its inclusion in the
simulation will improve representational accuracy to
an extent that makes the extra modeling effort worth-
while.  However, this improvement can only be mea-
sured after the fact.  In particular, inclusion of
known detail in a comprehensive fire support descrip-
tion would have to be preceded by extensive testing of
alternative mathematical and logical representations.
                                                      664

-------
To do  this  one needs data.
    Every extension of a simulation's detail intro-
duces  new parameters.  These require estimation which
relies on data,  whether it be sample observations or
expert judgment.  Naturally the more detail that is
desired, the more data that are required.  This poses
a dilemma for the analyst.  While he may be able to
describe a  phenomenon conceptually, he may not have
the data needed  to fit the parameters of the corres-
ponding mathematical representation.  If he does have
the data, he must then face the issue as to how rep-
resentative the  parameter estimates are when this
particular  micromodel is used in a variety of alter-
native settings.  That is, parameter values may be a
function of the  setting in which the model is used
and, therefore,  an analyst may need several sets of
data to estimate the values that parameters assume in
different settings.
    The third dark issue that more detail induces is
increased bookkeeping and computation in a simulation
computer  program.  More detail implies more events or
state changes per unit time in the model.  From a
programming viewpoint this requires additional data
structures  and logical structures.  This requirement
adds to the cost of putting the program together.
Although  it is true that languages such as GPSS and
SIMSCRIPT  II make these supplements relatively easy
to  introduce representationally, an analyst is still
faced with  the problem of fitting his program into
the computer on which he plans to do his work.
    If FORTRAN is used for modeling then a serious
additional  problem arises.  Fire support simulation
involves  relatively intricate time sequencing of many
diverse events.   Whereas specialized simulation pro-
gramming languages all contain timing routines that
perform this time sequencing automatically, the user
of  FORTRAN  must build his own timing routine.  This
effort alone can be so cost consuming as to defeat
the purpose of using FORTRAN for its computational
efficiency.  In particular, FORTRAN lacks a list pro-
cessing capability, a principal feature of all simu-
lation programming languages.  For this reason alone
one has to  question the flexibility and versatility
of  a fire support model programmed in a language
other than  a simulation programming language.
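   As a minimal sketch of what such a timing routine does (and therefore of what the FORTRAN user must build by hand), the following Python fragment keeps the list of scheduled events as a priority queue; the event routine and its argument are invented purely for illustration and are not taken from any particular fire support model.

    import heapq

    # A next-event timing routine of the kind simulation languages supply
    # automatically; the event name and its argument are hypothetical.
    class EventList:
        def __init__(self):
            self._heap = []   # entries are (event_time, sequence_no, handler, args)
            self._seq = 0     # tie-breaker keeps simultaneous events in FIFO order

        def schedule(self, time, handler, *args):
            heapq.heappush(self._heap, (time, self._seq, handler, args))
            self._seq += 1

        def run(self, until):
            clock = 0.0
            while self._heap and self._heap[0][0] <= until:
                clock, _, handler, args = heapq.heappop(self._heap)
                handler(clock, *args)        # execute the event routine
            return clock

    def target_detected(now, target_id):
        print("t = %.2f  target %d detected" % (now, target_id))

    events = EventList()
    events.schedule(3.5, target_detected, 17)
    events.schedule(1.2, target_detected, 4)
    events.run(until=10.0)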
    The effect of detail on program development rep-
resents only one issue in this area.  Detail seriously
affects program execution also.  In addition to
creating more data and logical structures, more de-
tail causes more events to occur per unit time in a
simulation.  This implies that the list of scheduled
events on which  the timing routine relies for direc-
tion is longer.  This means that when a new event is
to be scheduled the timing routine takes more CPU
time to find the correct position for the correspond-
ing event notice in the list of scheduled events.
    Unfortunately, the current state of development of
most simulation languages has contributed to the
seriousness of this problem in practice.  In order to
retain a  simplicity in list structures and processing
for general simulation, these languages search, add
and delete  from these lists using algorithms that in
no way exploit the nature of the event list for par-
ticular problem settings.  Moreover, many simulation
users do  not recognize that alternative ways exist to
process the list of scheduled events as well as other
lists that  materialize during the course of a simula-
tion.
    By now, many people recognize that the generality
of simulation programming languages may represent an
impediment  to computational efficiency in the fire
support area.  This recognition has led to proposals
for more tailoring to the needs of this kind of
simulation.  This idea deserves encouragement.  How-
ever, one hopes the tailoring will not restrict the
resulting simulation programs' use for alternative
fire support studies.  Using a simulation
language to formalize concepts and structures would
help to insure this generality.
   Few, if any, tailored simulations have been re-
ported in the literature.  What has been reported are
ways to speed up list processing in general.  One
suggestion which most experienced simulation users
follow, regardless of the problem, is to create a
single event notice for two diverse events that
always occur simultaneously.  Then a subroutine call
within the executable code of one of the events en-
ables the other event to be executed.  A second sug-
gestion concerns conditionality.  Occasionally one
event occurs only after another type of event has
occurred.  However, the second event does not always
occur.  In this case an event notice for the neces-
sary event is generated in the simulation and within
the executable code for this event a test is made to
see if execution of the other type of event has to
occur.  The effect of these two suggestions is to re-
duce the number of event notices in the list of
scheduled events, thereby reducing the processing
time for this list.  Unfortunately the very emphasis
on events in a language such as SIMSCRIPT encourages
a user to overlook the fact that simple suggestions
such as these two can considerably shorten execution
time.
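   A minimal sketch of the two devices just described is given below; the event routines and the pending-impact flag are hypothetical stand-ins, chosen only to show the bookkeeping pattern.

    # Two devices for thinning the list of scheduled events.
    impact_pending = False

    def round_fired(now):
        # One event notice covers two events that always occur together: the
        # ammunition-accounting event is executed by a direct subroutine call,
        # so no second event notice is ever filed.
        print("t =", now, ": round fired")
        ammunition_decremented(now)

    def ammunition_decremented(now):
        print("t =", now, ": ammunition stock reduced")

    def target_engaged(now):
        # The "necessary" event; the conditional companion event is executed
        # only if the test inside this routine says it must occur.
        print("t =", now, ": target engaged")
        if impact_pending:
            impact_assessed(now)

    def impact_assessed(now):
        print("t =", now, ": impact assessed")

    impact_pending = True
    round_fired(0.0)
    target_engaged(1.0)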
   Recently, other suggestions have appeared in the
literature.  The papers by Vaucher and Duval [9] and
Wyman [10] in the Communications of the ACM relate
experience with alternative search procedures aimed
at reducing list search time.  In GPSS, the judicious
use of a user chain to shorten the length of the
current events chain offers dramatic savings.
   Improved processing of other lists can also induce
efficiencies in large scale simulation.  For example,
suppose that available resources in a fire support
simulation are all kept on a single available resource
list.  Presumably the type of resource is distin-
guished by a value assigned to its attribute that
designates type.  Every time a resource is required,
a search of the resource list occurs.  If there are
many available resources of many different types the
search is time consuming.  Alternatively, if one ju-
diciously constructs several lists based on type then
the simulation needs only to search the selected
shorter list.  The price paid for search efficiency is
the increased number of list structures defined in the
simulation.  The exact balance between the cost of
having more lists and the saving in search time de-
pends on the particular system under study.
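   A sketch of the two bookkeeping schemes for available resources follows; the resource types and counts are invented, and the point is only the difference between scanning one long list and popping from a short per-type list.

    import random
    from collections import defaultdict

    random.seed(1)
    TYPES = ["howitzer", "mortar", "rocket"]          # hypothetical resource types
    resources = [(random.choice(TYPES), unit_id) for unit_id in range(3000)]

    # Scheme 1: one undifferentiated list -- every request scans it.
    def take_from_single_list(pool, wanted_type):
        for i, (rtype, uid) in enumerate(pool):
            if rtype == wanted_type:
                return pool.pop(i)
        return None

    # Scheme 2: one short list per type -- the search collapses to a pop.
    by_type = defaultdict(list)
    for rtype, uid in resources:
        by_type[rtype].append((rtype, uid))

    def take_from_typed_lists(pools, wanted_type):
        pool = pools[wanted_type]
        return pool.pop() if pool else None

    print(take_from_single_list(list(resources), "mortar"))
    print(take_from_typed_lists(by_type, "mortar"))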

              4.  Control of Variation

   Although control  of variation seldom receives seri-
ous attention in large scale simulation, it is in
this writer's mind one of the most attractive fea-
tures of the simulation method.  Control of variation
includes the ability to control the pattern of varia-
tion in the streams  of random numbers that serve as
input to an ongoing  simulation.  Thoughtful use of
this ability enables a user to attain a desired sta-
tistical accuracy with less computer time than ne-
glect of the option  would require.  This benefit can
accrue when running  replications of an experiment in
which all input parameters are the same.  It can also
occur when comparing runs of an experiment in which
at least one of the  input parameters assumes different
values.  An example  illustrates the point.
   Consider an airline reservation office with m
reservationists.  If at least one reservationist is
idle when a call occurs the call immediately receives
service.  If all reservationists are busy the caller
listens to a 9 second recorded message excusing the
delay.  At the end of the message the caller receives

-------
service, if a reservationist is available.  Otherwise,
he is put into a queue with first-come-first-served
discipline.  Intercall times follow an exponential
distribution with mean 1/λ.  Each caller makes a one-
way reservation with probability 1-p and a round-trip
reservation with probability p.  Service times for one-
way trips are exponential with mean 1/μ.  Round-trip
service times are Erlang with shape parameter 2 and
mean 2/μ.  Times are in minutes.
     Consider the case in which λ = 1, μ = 0.5,
m = 6, and p = 0.75.  Suppose one wishes to estimate
mean waiting time to within ±0.025 minutes or, equiva-
lently, ±1.5 seconds.  Let Y_i denote sample mean wait-
ing time on replication i.  Let

(1)   Ȳ_k = k⁻¹ Σ_{i=1}^{k} Y_i ,     s_k²(Y) = (k-1)⁻¹ Σ_{i=1}^{k} (Y_i - Ȳ_k)² .

Suppose we adopt the following design for our experi-
ment:  continue to collect independent replications
until [1]

          s_k²(Y) ≤ k(0.025)²/t²_{k-1} ,

where t_{k-1} is the .975 significance point of the t dis-
tribution with k-1 degrees of freedom.  Then if
Y_1, ..., Y_k are normally distributed the probability
that Ȳ_k is within ±0.025 of the true mean waiting time is
approximately† 0.95.  Table 1 shows the results using
independent replications.
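   A short sketch of this sequential design follows, assuming SciPy for the .975 point of the t distribution; run_replication is a hypothetical stand-in for one replication of the reservation-office simulation and simply returns a plausible sample mean waiting time.

    import math
    import random
    from scipy.stats import t      # .975 significance point of Student's t

    def run_replication(i):
        # Hypothetical stand-in: in practice this would run one replication of
        # the reservation-office simulation and return its mean waiting time.
        rng = random.Random(i)
        return 0.19 + rng.gauss(0.0, 0.03)

    def sequential_estimate(d=0.025, alpha=0.05, k_min=2, k_max=10000):
        ys = []
        k = 0
        while k < k_max:
            k += 1
            ys.append(run_replication(k))
            if k < k_min:
                continue
            ybar = sum(ys) / k
            s2 = sum((y - ybar) ** 2 for y in ys) / (k - 1)
            t_point = t.ppf(1.0 - alpha / 2.0, k - 1)
            if s2 <= k * d * d / t_point ** 2:     # the stopping rule of Eq. (1)
                return ybar, t_point * math.sqrt(s2 / k), k
        raise RuntimeError("accuracy not reached within k_max replications")

    print(sequential_estimate())   # (estimate, half-width, replications used)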
     This particular simulation was run in SIMSCRIPT
II.5 with intercall times generated on stream 1, ser-
vice times on stream 2 and type of call (one way or
two way) on stream 3.  In a simulation of a single
server queueing system Page [7] has shown that revers-
ing the  streams of random numbers for interarrival and
service  times on a second replication can induce siza-
ble variance reductions.  Presumably, low interarrival
times and high service times produce high congestion
on the first run whereas reversal of streams produces
high interarrival times and low service times and,
therefore, low activity on a second run.  Therefore,
average  sample output over the two runs should have a
smaller  variance than in the case of independent rep-
lications.
     Table 2 presents the results of reversing seeds
on streams 1 and 2 on pairs of replications.  In order
to allow comparison with Table 1, the experiment here
was designed to have about half as many completions
per run  as in Table 1.  The results in Table 2 indi-
cate that only 12354 completions were required to
obtain the same statistical accuracy as in Table 1,
which required 25543.  In terms of variance one has

(2)   s_5²(X) = 7.55 × 10⁻⁴ ,    s_5²(Z) = 23.34 × 10⁻⁴ .

Then one way to measure variance reduction is to ex-
amine the simple ratio

(3)   [s_5²(X) + s_5²(Z)]/4s_5²(Y) = 2.03 ,

which indicates that seed switching has cut the vari-
ance by about one half.
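   The following sketch mimics the seed-switching design and the comparison of Equation (3); run_pair is a toy stand-in for a pair of replications in which the interarrival and service random-number streams are exchanged, built so that the two outputs X and Z are negatively correlated in the way stream reversal is intended to make them.

    import random
    import statistics

    def run_pair(seed):
        rng = random.Random(seed)
        congestion = rng.random()       # proxy for how congested this run is
        x = 0.15 + 0.10 * congestion + rng.gauss(0.0, 0.01)
        z = 0.15 + 0.10 * (1.0 - congestion) + rng.gauss(0.0, 0.01)  # reversed streams
        return x, z

    pairs = [run_pair(s) for s in range(25)]
    xs = [x for x, _ in pairs]
    zs = [z for _, z in pairs]
    ys = [(x + z) / 2.0 for x, z in pairs]

    # Analogue of Eq. (3): the ratio exceeds 1 when the paired averages vary
    # less than independent replications would.
    ratio = (statistics.variance(xs) + statistics.variance(zs)) / (4.0 * statistics.variance(ys))
    print("estimated variance-reduction ratio:", round(ratio, 2))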
     Other methods of controlling variation are also
available.  Let X and Y have means μ_x and μ_y, respec-
tively.  Suppose that μ_x is known but μ_y is to be
estimated.  One estimate is Y; another is Z = Y + c(X - μ_x),
for which var(Z) ≤ var(Y) if

(4)   c ≤ -2 cov(X,Y)/var(X) .

     Consider the airline reservation problem again
and let X denote the sample intercall time, μ_x = 1/λ
and c = 1.  The choice of c is based on the observa-
tion that if X - μ_x is positive the intercall times in
a replication are above average and, therefore, con-
gestion and waiting time are below average.
     Table 3 presents the results of using intercall
time as a control variate.  The extent of the variance
reduction is evident.
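   A sketch of the control-variate estimator Z = Y + c(X - μ_x) is given below; the replication function is again a toy stand-in, constructed so that X and Y are negatively correlated, as the text argues they are for intercall time and waiting time.

    import random
    import statistics

    def run_replication(seed, mu_x=1.0):
        # X is the sample mean intercall time with known mean mu_x;
        # Y is the response (mean waiting time), negatively correlated with X.
        rng = random.Random(seed)
        x = mu_x + rng.gauss(0.0, 0.05)
        y = 0.19 - 0.8 * (x - mu_x) + rng.gauss(0.0, 0.02)
        return x, y

    mu_x = 1.0
    c = 1.0                                   # the choice of c used in the text
    data = [run_replication(s) for s in range(50)]
    ys = [y for _, y in data]
    zs = [y + c * (x - mu_x) for x, y in data]    # control-variate estimator Z

    print("var(Y):", statistics.variance(ys))
    print("var(Z):", statistics.variance(zs))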
     When comparing results on experiments with dif-
ferent inputs, variance reduction is again possible.
These range from using common seeds for corresponding
streams to varying the number of observations col-
lected on each run [3].  For example, suppose that one
wants to measure the reduction in mean waiting time
that accrues when the number of reservationists in-
creases from 6 to 7.  Moreover, the accuracy required
is d = 1/60 minutes, or 1 second.
     Table 4 shows the results when common seeds are
used for corresponding streams on corresponding runs.
Since

(5)   s_3²(X) = 4.4 × 10⁻⁴ ,    s_3²(Z) = 2.10 × 10⁻⁴ ,

the variance reduction is estimated to be

(6)   [s_3²(X) + s_3²(Z)]/s_3²(Y) = 15.5 ,

impressive by most standards.
     In some simulation settings it is not possible
to match seeds or to induce the necessary correlation
between runs to effect a variance reduction.  This is
especially true when comparing the results of radi-
cally different experiments.  Here one may have to
settle for independent replications; however, variance
reduction can still occur.  Consider two experiments
with outputs X and Z and sample sizes per replication
of n_x and n_z.  Let var(X) = σ_x²/n_x and var(Z) = σ_z²/n_z
under the assumption that one is able to create inde-
pendent observations within each replication [2, 4].
Let c_x and c_z denote the unit costs of collecting and
processing observations in each replication.  If one
wants to achieve a specified variance V = σ_x²/n_x + σ_z²/n_z
for Y = X - Z on each replication then n_x and n_z
should be selected so that

(7)   r = n_x/n_z = r_1/r_2 ,   where  r_1² = σ_x²/σ_z²  and  r_2² = c_x/c_z .

Using (7) with n_x + n_z = n, instead of n_x = n_z = n/2, leads
to a saving in computing cost of

      (r_1 - r_2)²/[(1 + r_1²)(1 + r_2²)] × 100 percent.

     In preliminary runs of the simulation for m = 6
and 7 we estimated σ_x²/σ_z² = 5.5 and c_x/c_z = 0.95, so
that r = 2.41.  Ten replications of each experiment
were run with n_x = 600 and n_z = 250.  Upon computation
of the appropriate terms the estimated saving in com-
puter time needed to achieve the resulting variance
for Ȳ_k = X̄_k - Z̄_k was about one third.  From this one
has to deduct the cost of the two preliminary runs;
but that cost was incidental.
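   The allocation rule and the cost-saving expression of Equation (7) can be evaluated directly; the sketch below uses the preliminary-run estimates quoted in the text.

    import math

    var_ratio = 5.5        # sigma_x**2 / sigma_z**2, from the preliminary runs
    cost_ratio = 0.95      # c_x / c_z, from the preliminary runs

    r1 = math.sqrt(var_ratio)
    r2 = math.sqrt(cost_ratio)
    r = r1 / r2                                           # n_x / n_z from Eq. (7)
    # The saving implied by the formula at these preliminary estimates,
    # relative to an equal split n_x = n_z.
    saving = (r1 - r2) ** 2 / ((1.0 + r1 ** 2) * (1.0 + r2 ** 2))

    print("r = n_x/n_z =", round(r, 2))                   # about 2.41
    print("cost saving over an equal split:", round(100.0 * saving, 1), "percent")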
     The methods of variance reduction discussed here
represent a few among many techniques.  All exploit
the structure of the individual problems to a marginal
extent only.  However, methods do exist that exploit
the properties of individual problems in such a way
that substantial variance reductions are possible.
These are discussed in [3, Sections 11.2 and 11.3].

†See [8] for details.

-------
                    5.  References

1.  Chow, Y.S. and H. Robbins, "On the Asymptotic Theory of Fixed Width
    Sequential Confidence Intervals for the Mean," Ann. Math. Stat., Vol. 36,
    1965, pp. 457-462.

2.  Crane, M.A. and D.L. Iglehart, "Simulating Stable Stochastic Systems I:
    General Multiserver Queues," J. ACM, Vol. 21, No. 1, January 1974,
    pp. 103-113.

3.  Fishman, G.S., Concepts and Methods in Discrete Event Digital Simulation,
    Wiley, 1973.

4.  Fishman, G.S., "Statistical Analyses of Queueing Simulations," Man. Sci.,
    Vol. 20, No. 3, November 1973, pp. 363-369.

5.  Fishman, G.S., "Achieving Specified Accuracy in Simulation Output
    Analysis," Technical Report No. 74-4, Curriculum in Operations Research
    and Systems Analysis, University of North Carolina at Chapel Hill, 1974.

6.  I.B.M., General Purpose Simulation System V User's Manual, 5734-XS2(OS),
    5736-X53(DOS), White Plains, New York, 1971.

7.  Page, E.S., "On Monte Carlo Methods in Congestion Problems II: Simulation
    of Queueing Systems," Oper. Res., Vol. 13, No. 2, 1965, pp. 300-305.

8.  Starr, N., "The Performance of a Sequential Procedure for the Fixed Width
    Interval Estimation of the Mean," Ann. Math. Stat., Vol. 37, 1966,
    pp. 36-50.

9.  Vaucher, J.G. and P. Duval, "A Comparison of Simulation Event List
    Algorithms," Comm. ACM, Vol. 18, No. 4, April 1975, pp. 223-230.

10. Wyman, F.P., "Improved Event-Scanning Mechanisms for Discrete Event
    Simulations," Comm. ACM, Vol. 18, No. 6, June 1975, pp. 350-353.
                              Table 2
           Sequential Estimation of Mean Waiting Time
                      Using Seed Switching
         d = 0.025 minutes, significance level = 0.05

  k     Xk       Zk     Yk = (Xk+Zk)/2    Ȳk     s_k²(Y)   kd²/t²_{k-1}   No. of
                                                 (10⁻⁴)      (10⁻⁴)       Completions
  1   0.1649   0.1699      0.1674       0.1674     -           -          1262 + 1335
  2    .2240    .2338       .1739        .1707    2.11        0.08        1255 + 1113
  3    .2250    .1679       .1965        .1893    2.33        1.01        1165 + 1220
  4    .1755    .2483       .2119        .1874    4.22        2.47        1352 + 1251
  5    .2618    .1373       .1695        .1838    3.81        4.06        1178 + 1223
                                                                   Total:       12354
                              Table 3
           Sequential Estimation of Mean Waiting Time
                     Using a Control Variate
             d = 0.025, significance level = 0.05

  k     Zk      Z̄k      s_k²(Z)    kd²/t²_{k-1}    No. of
                        (10⁻⁴)        (10⁻⁴)       Completions
  1   0.2100   0.2100      -             -            2651
  2    .1648    .1874    10.22          0.08          2497
  3    .1760    .1836     5.54          1.01          2781
  4    .1528    .1759     6.07          2.47          2500
  5    .1624    .1732     4.91          4.06          2629
  6    .2079    .1790     5.94          5.64          2550
  7    .2134    .1839     6.64          7.31          2595
                                               Total: 18203
                              Table 1
           Sequential Estimation of Mean Waiting Time
         d = 0.025 minutes, significance level = 0.05

  k     Yk      Ȳk      s_k²(Y)    kd²/t²_{k-1}    No. of
                        (10⁻⁴)        (10⁻⁴)       Completions
  1   0.2243   0.2243      -             -            2651
  2    .1705    .1974    14.47          0.08          2497
  3    .1721    .1890     9.37          1.01          2781
  4    .1619    .1822     8.08          2.47          2500
  5    .1583    .1774     7.20          4.06          2629
  6    .2275    .1858     9.94          5.67          2550
  7    .2222    .1910    10.18          7.31          2595
  8    .1576    .1868    10.12          8.94          2422
  9    .2362    .1923    11.56         10.58          2440
 10    .2138    .1944    10.74         12.22          2478
                                               Total: 25543
                              Table 4
   Sequential Estimation of Mean Waiting Time Difference
         d = 1/60 minutes, significance level = 0.05

  k   Xk (m=6)   Zk (m=7)   Yk = Xk - Zk    Ȳk      s_k²(Y)   kd²/t²_{k-1}
                                                    (10⁻⁴)      (10⁻⁴)
  1    0.1884     0.0628       0.1256      0.1256      -           -
  2     .1575      .0411        .1164       .1210     0.42        0.03
  3     .1976      .0687        .1289       .1236     0.42         .45

-------
                                  THE FACTUAL BACKGROUND OF ECOLOGICAL MODELS:
                                         TAPPING SOME UNUSED RESOURCES

                                                  E. C. Pielou

                                   Biology Department, Dalhousie University,
                                         Halifax, Nova Scotia, Canada
     Successful ecosystem modeling on a large scale re-
quires knowledge of the relative performance of many
species in response to changes in numerous environmen-
tal variables.  Without such knowledge it is impossible
to estimate the relative magnitudes, or even the signs,
of the numerical constants in prediction equations.
Acquiring this knowledge is difficult, expensive, and
time-consuming.  If the information now stored in the
ecological literature and data banks can contribute,
it should be used.
     Much information exists on biogeographic and eco-
logical zonation patterns, and it constitutes a partic-
ularly rich source for deriving new ecological insights
from old data.  This paper describes two new ways of
analyzing such data.  One entails determining the
overlap score of a group of species in order to judge
whether the species are competing.  The other (incom-
pletely developed) method entails comparing coeffi-
cients of concordance in order to judge the relative
importance of different environmental factors in con-
trolling community composition.

                     Introduction

     Most theoretical ecologists, and hence the applied
ecologists who consult them, are aware of a growing
split in their subject.  It has two separate, diverging
areas.  One is "mathematical" ecology and the other
"statistical" ecology.  However, ecology per se is
still one subject and the unfortunate divergence of its
parts is merely because of the styles of mathematical
argument employed and the kinds of theoreticians who
practice them.
     For the most part, theoretical modeling in ecology
has been the work of mathematical, as opposed to sta-
tistical, ecologists.  Models of many kinds all have
two indispensable ingredients, or sets of ingredients:
processes and parameters.   The processes are the changes
in size and age structure of interacting living popula-
tions, and the accompanying flows of energy and mate-
rials, as modeled by equations.  Whether these are sim-
ple linear regression equations or esoteric non-linear
integro-differential equations, they are still "forms"
with (temporarily, at least) no numerical content.  The
parameters are the numerical coefficients (or, for
studies of qualitative system stability, the signs of
the coefficients) that must be entered in the process
equations before any concrete predictions can emerge.
     Now consider from where the numbers are to come.
There are various possibilities.  Educated guesses are
one source, to see how the model (i.e., the process)
will behave in a wide range of conditions.
     A second source is experiment.  For example, if
processes in such microcosms as Paramecium species in
vials of water or Tribolium species in vials of flour
are to be modeled, the birth and death rates and the
growth and feeding rates of the animals, and the way
these rates vary in response to changing abiotic con-
ditions can, with persistence and patience, be discov-
ered by experiment.
     A third source of numerical parameters is observa-
tion of the system to be modeled, itself.  This is the
customary procedure when a "statistical model" (a
hypothesized statistical distribution) is to be fitted
to an empirical frequency distribution.  The data are
 first made to yield the desired parameters, with the
 familiar loss of degrees of freedom and, of course,
 generality.
      A fourth source of numbers is the vast accumula-
 tion of miscellaneous ecological data, reposing in the
 literature and in various data banks, gathered for pur-
 poses of every conceivable kind.  A body of data gath-
 ered for one purpose is available, if it has been suit-
 ably stored, for another purpose; not to use hard-won
 data in as many ways as possible is wasteful in the
 extreme.  Admittedly, to expect data collected for one
 purpose to yield the precise coefficients required for
 an unrelated ecological model to be "run" is probably
 to expect too much.   However, some kinds of data can
 certainly yield useful information.  For example, the
 data on the zonation patterns of plant and animal spe-
 cies, both "regional" (along short environmental gra-
 dients a few kilometers long) and "geographical" (along
 long gradients, typically latitudinal gradients, of
 perhaps thousands of kilometers) could yield useful in-
 formation.
      Much published data exist on regional (or ecolog-
 ical) and geographical (or biogeographical) zonation.
 They obviously tell something about the tolerance
 ranges of different species in response to different
 abiotic environmental factors, and about the relative
 importance of the various factors.  They also tell
 something about the way species interact, and the ways
 in which their interactions vary from place to place.
 Thus, perhaps, one can learn whether groups of related
 species do in fact compete, instead of postulating they
 compete and inferring a result that is merely con-
 ditional on the correctness of the postulate.  It is
 obviously worthwhile to devise ways of ransacking exist-
 ing data on zonation for useful information that can  be
 fed into, or at any rate can inspire,  models.   This
 paper describes two rather tentative approaches to the
 task.

         Competing Species and Overlapping Zones

      Everyone knows that related species may occupy
 somewhat different zones on an environmental gradient.
 The gradient performs a natural sorting experiment (a
 laboratory analog is paper chromatography)  and each
 species comes to occupy its characteristic zone.  This
 raises the following question:  Do related species, for
 example, congeneric species, tend to occupy zones whose
 amount of overlap is slight because of competitive ex-
 clusion, either over the short term, or evolving over
 the long term?  Or,  alternatively, do  their zones tend
 to coincide because, owing to their common ancestry,
 their tolerance ranges for the chief factor and its
 associated factors that vary along the gradient are all
 fairly similar?  To discriminate between these con-
 trasted possibilities, one must set up a null hypothesis:
 What would be observed if the spatial  extents and ar-
 rangements of a set of zones were mutually independent?
      Consider two species, A and B, of sessile organisms
 living on a gradient.  Suppose their zones are discern-
 ible.  Label the species' upgradient boundaries A1 and
 B1 and their downgradient boundaries A2 and B2.  Assume
 that (in the diagram below) the gradient descends from
 left to right.  Then we must have A1 to the left of A2
 and B1 to the left of B2 but, under the null hypothesis
 of zone independence, all permutations of A1, A2, B1, and B2

-------
consistent with these constraints are equiprobable.
There are only three such permutations, namely:

          A1  A2  B1  B2        overlap score 0

          A1  B1  A2  B2        overlap score 1

          A1  B1  B2  A2        overlap score 2

[Figure: the three arrangements drawn graphically, with the zone of A shown
as a solid line and the zone of B as a broken line.]

     In the graphic representation the zones are assumed to stretch up and
down the page and the lines (solid for A, broken for B) show the widths of
the zones.  Scores to be assigned for the three degrees of overlap are shown
on the right.
     Now suppose there are several, say k, species.  We shall assign to their
zone pattern a total overlap score, L, which is the sum of the k(k-1)/2 pair-
wise scores reached by taking the species two at a time.  For example, for
the pattern below, in which k = 4, the total score is easily found to be
L = 7.

[Figure: a pattern of four zones, drawn as before, whose pairwise scores sum
to L = 7.]

     Observe that the lengths of the lines (representing the widths of the
species' zones) are immaterial; it is only the relative arrangement of their
boundaries that concerns us.
     Now derive the probability distribution, and the mean and variance, of
L given the null hypothesis.
     Let f_k(L) be the number of ways in which k zones can give a score of L.
As shown above, when k = 2, f_k(L) = 1 for L = 0, 1, 2.  Now suppose that to
A and B a third species, C, is added.  Its upper boundary, C1, is assumed to
be to the left of A1 and B1; this assumption does not reduce the number of
possible zone arrangements since the species can always be labeled so that
their upgradient boundaries are in the order C1, A1, B1.  Then, whatever the
pattern (and hence the score) of the pair of zones A + B, there are five
possible positions for C2 relative to the four existing boundaries A1, A2,
B1 and B2.  Thus, if the pair A + B has the pattern shown by the solid and
broken lines in the diagram below, addition of species C, whose possible
zones are the dotted lines, can lead to five distinguishably different
patterns labeled Z1, ..., Z5.  Then, depending on the position of C2 relative
to A1, A2, B1 and B2, the total score for all three species together is the
sum of the score L = 1 that pertains to the A + B pair already, and the
"added score" (from C + A and C + B) shown on the right.  Moreover, these
five equiprobable values for the added score are the same regardless of the
score already possessed by the A + B pair.

[Figure: the five possible zones Z1, ..., Z5 for species C, whose boundary C1
lies to the left of A1, B1, A2 and B2; the added scores are 0, 1, 2, 3 and 4.]

     Hence it will be found that

          f_3(L) = 1   for  L = 0 and 6 ;
          f_3(L) = 2   for  L = 1 and 5 ;
          f_3(L) = 3   for  L = 2, 3 and 4 ;

and Σ f_3(L) = 15.
     Similar arguments lead straightforwardly to the following recurrence
relation for f_k(L):

          f_k(L) = Σ_{j = L-2k+2}^{L} f_{k-1}(j)   for  L = 0, 1, ..., k(k-1),

with f_{k-1}(j) = 0 for j < 0 and j > (k-1)(k-2).
     The maximum value of L, which is k(k-1), occurs when all k(k-1)/2 scores
of the zones taken in pairs are equal to 2.  From symmetry, it is seen that
the mean of L must fall halfway between its extremes.  Therefore

          E(L|k) = k(k-1)/2 .

Also put T_k = Σ_L f_k(L) = (2k)!/(2^k k!).  Then the probability of obtaining
a specified score, L, for given k, is

          P_k(L) = f_k(L)/T_k .

The variance of L for given k, namely Var(L|k), is found as follows.  First,
put

          V_x(L|k) = Σ_L (L - x)² P_k(L) .

That is, V_x(L|k) is the second moment of L about an arbitrary constant x,
given that the number of species is k.  Then, since Σ_L L P_k(L) = E(L|k) =
k(k-1)/2,

          V_x(L|k) = V_0(L|k) - x k(k-1) + x² .

Now, from the way in which f_{k+1}(L) is constructed, it is seen that

          V_0(L|k+1) = (2k+1)⁻¹ Σ_{x = -2k}^{0} V_x(L|k)
                     = (2k+1)⁻¹ [ (2k+1) V_0(L|k) - k(k-1) Σ x + Σ x² ]
                     = V_0(L|k) + k²(k-1) + k(4k+1)/3 .

Therefore Var(L|k+1) = Var(L|k) + k(k+1)/3.  Repeated use of this recurrence
relation now shows that

          Var(L|k) = Var(L|k-1) + (k-1)k/3
                   = Var(L|k-2) + (k-2)(k-1)/3 + (k-1)k/3
                   = ... ,

and hence Var(L|k) = k(k-1)(k+1)/9 .
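   As a quick numerical check of the recurrence and of the moment formulas just derived, the following short Python sketch (not part of the original paper; the function name is ours) builds f_k(L) by convolution and verifies E(L|k) and Var(L|k).

    from math import isclose

    def overlap_counts(k):
        # f_k(L) for L = 0, ..., k(k-1): going from m-1 to m zones adds an
        # equiprobable score of 0, ..., 2(m-1) to the running total.
        counts = [1, 1, 1]             # k = 2: one arrangement each for L = 0, 1, 2
        for m in range(3, k + 1):
            span = 2 * (m - 1)
            prev = counts
            counts = [sum(prev[j] for j in range(max(0, L - span), min(L, len(prev) - 1) + 1))
                      for L in range(m * (m - 1) + 1)]
        return counts

    for k in range(2, 9):
        f = overlap_counts(k)
        total = sum(f)                 # equals (2k)!/(2**k * k!)
        mean = sum(L * c for L, c in enumerate(f)) / total
        var = sum((L - mean) ** 2 * c for L, c in enumerate(f)) / total
        assert isclose(mean, k * (k - 1) / 2)
        assert isclose(var, k * (k - 1) * (k + 1) / 9)
        print(k, total)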

-------
     The distribution tends to normality with increas-
ing k, and for k > 20 the discrepancy between the
true cumulative distribution of L and that of a nor-
mal distribution with the same mean and variance no-
where exceeds 1%.  For low values of k, the distrib-
ution of L is platykurtic.  To test the null hypo-
thesis (that the boundaries of the zones of k spec-
ies occurring on a gradient are independent of one
another) when k < 20, the critical values C_k tab-
ulated below may be used.  These values are for a 5%
two-tailed test.  That is, if C_k ≤ L ≤ k(k-1) - C_k
the null hypothesis should be accepted (note that C_k
itself, and k(k-1) - C_k, are included in the accept-
ance region).
Table 1.  Critical Values C_k of L for k = 2 to 19

      k    C_k        k    C_k        k    C_k
      2     0          8    16        14    62
      3     0          9    21        15    73
      4     2         10    28        16    85
      5     4         11    35        17    98
      6     7         12    43        18   111
      7    11         13    52        19   126
    Concordances Among Different Groups of Samples

     In this section an entirely different way is described of extracting,
from data on species zonation, information that may be useful to ecosystem
modelers.  The method is "work in progress," which means work incomplete; a
lot remains to be done both to the theory and the practice.
     Every species of organism has tolerance limits with respect to numerous
environmental factors.  For the sake of concreteness, think of several
related species of the marine benthos.  Each species has its own specific
tolerance limits for temperature and light intensity, to name only two of
the (presumably) many environmental factors that control its survival.
Considering the effect of a gradient of one of these factors on a mixed
community of these species, the relative proportions of the species in a
sample would be expected to depend on the level of the gradient at which the
sample was taken.  And if a batch of samples were taken from various levels,
the within-level variation in species composition would be less than the
between-level variation.
     Next suppose that we wish to compare the relative importance of two
environmental factors.  As before, let these factors be temperature and
light intensity, and imagine an idealized north-south coastline where the
following conditions obtain: there is a strong latitudinal gradient in
temperature and, at right angles to it, a steep east-west depth gradient
and, hence, a strong gradient in light intensity at the bottom.  We can now
ask which of the two factors, temperature or light intensity, dominates in
controlling community composition (this is a conceptual experiment and we
choose to disregard all the other varying factors).  By sampling at an array
of sampling stations so that we have samples from a sequence of depths in
each latitude and from a sequence of latitudes at each depth, the necessary
data for a straightforward multivariate analysis of variance could (in
theory) be obtained.
     However, ecological sampling rarely is straightforward, and ambitious
plans often turn out to be impracticable.  In particular, measurements of
species quantities seldom inspire confidence, and tests using ranks are
therefore often more appropriate than those depending on absolute
measurements.  A "natural" way to perform the comparison described above
(natural in that it would occur to anyone with a taste for nonparametric
statistics) would therefore be to use as data the ranks of the species in
each of the samples.  Suppose a total of nd regularly arrayed stations have
been sampled, at n latitudes and d depths, and the quantities of the same k
species are ranked for each sample.  Then the data can be displayed as an
n × d table in which each cell contains a list of the ranks of the k species.
For instance, if k = 3, the table would appear thus:

[Table: an n × d array (latitudes by depths) in which each cell lists the
ranks of the three species in the corresponding sample.]

An example of the use of the test is given in the section, Examples.
     The coefficients of concordance of the rankings within each of the rows,
say W_1., ..., W_n., are now computed; likewise, the concordances of the
rankings in each of the columns, W_.1, ..., W_.d.  (For computational details
see, for example, References 1 and 5.)  We do not enquire whether any of
these concordances differ significantly from zero.  Almost certainly all of
them will.  But we do enquire whether the within-latitude concordances tend
to differ from the within-depth concordances.  If, for example, the within-
depth concordances tended to be the greater, we should infer that light
intensity influenced community composition (and hence the success of the
component species) more strongly than temperature.
     The test one is tempted to use to compare the two sets of concordances
is the Mann-Whitney test.  But, strictly, it is a test for two independent
samples, and obviously the values of W_i. (i = 1, ..., n) and W_.j
(j = 1, ..., d) are not independent, since each set is based on all nd
rankings in the table.  I do not know whether the Mann-Whitney test is robust
enough for this not to matter.  The desired comparison is, indeed, difficult
to make.  It is difficult enough to compare two coefficients of concordance,
that is, two individual values of W, let alone two interdependent sets of
W's.  Li and Schucany [2] (and see Schucany and Frawley [4]) described a test
to compare two independent values of W.  Their test statistic, W, takes
values in the range [-1, +1].  If concordance is perfect within each set and
also the two sets concord exactly with each other, then W = +1; if
concordance is perfect within each set and the two sets discord totally with
each other (so that the ranking in one set of lists is the exact opposite of
the ranking in the other) then W = -1.  However, values of W close to zero
are ambiguous; they imply either that one (or both) of the sets has poor
internal concordance, or else that, though concordance is good within each
set, the two sets are somewhat (not totally) discordant.  This latter state
of affairs is the one most likely to crop up in the ecological context we are
considering.  And, as the foregoing discussion suggests, it may prove
difficult to diagnose.
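   For concreteness, here is a minimal Python sketch of the concordance bookkeeping described above, using Kendall's coefficient of concordance (as treated in References 1 and 5) without the correction for tied ranks; the small 2 × 3 table of rankings is invented purely for illustration and does not come from Phleger's data.

    def kendall_w(rankings):
        # Kendall's coefficient of concordance for a list of rankings,
        # each ranking being the ranks given to the same k objects.
        m = len(rankings)              # number of samples (judges)
        k = len(rankings[0])           # number of species (objects)
        totals = [sum(r[j] for r in rankings) for j in range(k)]
        mean_total = m * (k + 1) / 2
        s = sum((t - mean_total) ** 2 for t in totals)
        return 12 * s / (m ** 2 * (k ** 3 - k))

    # Hypothetical 2 x 3 table (2 latitudes x 3 depths), k = 3 species per cell.
    table = [
        [[1, 2, 3], [1, 3, 2], [2, 1, 3]],    # latitude 1
        [[3, 2, 1], [3, 1, 2], [2, 3, 1]],    # latitude 2
    ]

    row_w = [kendall_w(row) for row in table]                      # W_i.
    col_w = [kendall_w([table[i][j] for i in range(len(table))])   # W_.j
             for j in range(len(table[0]))]
    print("within-latitude concordances:", row_w)
    print("within-depth concordances:   ", col_w)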
    Another, and perhaps more important,  difficulty
that arises is that when  a community occupying a

-------
gradient is examined it is usually found that as one
goes down the gradient, "upslope" species successively
disappear and at the same time a succession of "down-
slope" species are encountered for the first time.  If
the abundances of the same k species have to be ranked
at each station to enable a test to be done, then only
a short segment of the gradient can be studied.  The
same set of k species will not be found in distant
samples.
     This raises in acute form a conundrum often faced
by ecologists concerned with presences and absences of
species.  Present species can be ranked but not absent
ones, though one is often justified in feeling that
some are more absent than others.  At any point on a
gradient a species whose zone starts nearby is "less
absent" than one whose zone starts farther away.  And
besides having magnitude, an absent species' degree of
absence from a point should have a sign that depends
on whether its presence (its zone) is upslope or down-
slope from the point.
     At this stage, speculation must (temporarily) stop.
The analysis of ecological data from environmental
gradients obviously has much to offer, both to statis-
ticians devising methods and to ecological modelers
searching for information on how species in nature do,
in fact, react to environmental factors and to one
another.
                       Examples

     To exemplify the methods described in the two pre-
vious sections, data from Phleger [3] are used.  He gives
lists of the percentages of different foraminifera
species (living and dead) in bottom samples collected
at stations at different depths along 12 traverses
across the continental shelf in the Gulf of Mexico.
The number of stations per traverse ranged from 25 to
55.
 Overlap Scores Within Genera

      For benthic organisms, zones are not visible, of
 course.  Therefore the zone of any one species was es-
 timated to begin at the shallowest station where it was
 found and end at the deepest station.  It was hoped
 that because of the large number of traverses errors of
 estimation in individual traverses would have negligi-
 ble effect.  Small overlaps might chance to go unde-
 tected but such errors would tend to be offset by "ac-
 cidental" specimens occurring outside their zones. Since
 observations were necessarily discrete, "ties" were
 possible and were scored thus:
[Figure: the three possible tie configurations, scored 0.5, 1.5 and 1.5
respectively.]
 (The method of portrayal corresponds with  that in the
 section, Competing Species and Overlapping Zones.)
      All genera with three or more species in at least
 ten of the traverses were tested to ascertain whether
 there was any reason to reject the null hypothesis that
 their zone boundaries were randomly and independently
 located.  The two alternatives were that the zones
 might show excessively low, or excessively high, over-
 lap.  Results for three genera are tabulated below.
 The two columns for each genus show k, the number of
 species of the genus in the traverse named (by a Roman
 numeral) on the left, and the standardized overlap
 score L* = {L - E(L|k)}/√Var(L|k).  With data from 12
 traverses available, values of  L  need not be tested
 individually.  It is clear that the species in the
 genera Cassidulina and Elphidium  show too much over-
 lap for the null hypothesis to be acceptable, whereas
 for  Cibicides  it  is  acceptable.   It should be noticed
 that (disregarding type II  errors)  the null hypothesis
 may  be  found acceptable either because departures
 from it are insignificant,  or because they are inde-
 terminate.  The latter  will happen  if the traverses
 are  too short  to  go  beyond  the shallowest and deepest
 zone boundaries of many of  the species;  their apparent
 boundaries will then be randomly  ordered.
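   As a small illustration of the standardization used in Table 2, the sketch below computes L* from a total overlap score and k, using the moments derived in the previous section; the particular values of L and k are hypothetical.

    from math import sqrt

    def standardized_overlap(L, k):
        # L* = {L - E(L|k)} / sqrt(Var(L|k)), with the moments derived earlier.
        mean = k * (k - 1) / 2
        var = k * (k - 1) * (k + 1) / 9
        return (L - mean) / sqrt(var)

    print(round(standardized_overlap(L=7, k=4), 2))   # a pattern like the k = 4 example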
      Table 2.  Values of k and L* for Three Genera in Twelve Traverses

                 Cassidulina        Elphidium          Cibicides
   Traverse       k      L*          k      L*          k      L*
      I           4     0.61         3     0.92         6     0
      II          5     1.92         3     0.92         5     0.41
      III         5     1.92         3     0.61         8     0.53
      IV          5     2.05         4     1.74         8    -0.47
      V           5     1.64         4     1.36         9    -0.06
      VI          2     1.23         4     1.55         6    -0.21
      VII         5     2.33         4     1.36         9    -0.78
      VIII        5     1.92         4     0.97         8     0
      IX          5     2.60         4     1.16         8    -0.73
      X           5     1.23         4     1.36         8     0.07
      XI          5     0.68         2     0.61         8    -0.60
      XII         4    -0.41         2     0.61         6    -1.04
 Concordances of Ranked Species Abundance Lists

      The relative abundances of eight common species†
 (chosen to serve as "community indicators") in samples
 from three traverses at four depths were listed.  The
 table below shows the traverse numbers (Roman) as row
 labels, the depth ranges as column labels, and values
 of W_i. and W_.j to the right of and below the re-
 levant rows and columns.  The three entries in each
 cell of the table are, from top to bottom, the station
 number where the sample was collected, the depth at
 that station in meters, and the number of foram tests
 in the sample.  As may be seen from the values of W_i.
 and W_.j, there is no reason to suppose that community

 composition  varied more with depth than with the hori-
 zontal distance between traverses.  The traverses did
 not sample different latitudes; they extended roughly
 southwards from the Gulf coast of Louisiana and Texas.
 Even so, the concordances among species rankings from
 the same depth did not, so far as this small sample
 shows, tend  to exceed  the concordances among rankings
 from different depths on one traverse.  With only three
 traverses and four depths (and these covering only a
 small depth  range) a significant difference would be
 unlikely to  appear in any case; the example is given
 here merely  for illustration.  Consideration of only a
 small range  of depths was necessary to ensure that
 never fewer  than six of the eight species chosen as
 "community indicators" were present in a sample.  For
 the method to be applied over a larger range of depths,
 a means of scoring absent species objectively must be
 devised.  This is the  direction that the work will take
 next.
†The species: Bolivina lowmani, B. simplex, Cibicides
concentricus, Elphidium discoidale, E. gunteri var gal-
vestonense, Proteonina comprima, P. difflugiformis,
Rotalia beccarii var parkinsoniana, Virgulina pontoni.
The names given here are those in Phleger's memoir [3].
Name changes resulting from taxonomic revisions have
been ignored to facilitate consultation of the original
data.

-------
Table 3.  Concordances of Species' Ranks Among Samples
          Grouped by Depths and by Traverses

                                     Depths
  Traverse     22-27m        28-32m        33-37m        38-42m        W_i.
    VI        #97,  22m,   #101, 29m,    #105, 33m,    #111, 38m,
                   350           300          4300          1550
    VIII      #380, 27m,   #378, 31m,    #375, 35m,    #373, 40m,
                  2000           400          2300          4800
    X         #411, 26m,   #418, 29m,    #420, 35m,    #424, 42m,
                  1300          3200          5400           250
    W_.j        0.887         0.425         0.860         0.590       0.665
                   Acknowledgements

     I thank Eric Robinson, University of Sydney,
Australia, for help with computer programming, and
Anita Williams, Halifax, for help with data collation.
The work was funded by a grant from the National
Research Council of Canada.

Added Note:  Dr. F. B. Phleger of the Scripps Institu-
tion of Oceanography revised names of the foram spe-
cies listed in the footnote in the last section.  A
complete list of names is, in the same order as in the
footnote:  Bolivina lowmani, B. ordinaria, Hanzawaia
strattani, Cellanthus discoidale, Elphidium gunteri,
Nouria polymorphinoides, Reophax difflugiformis,
Ammonia beccarii var parkinsoniana, Fursenkoina
pontoni.
                      References

1.  Conover, W. J.,  Practical Nonparametric Statistics,
    Wiley, New York, 1971.

2.  Li, L. and W. R. Schucany, "Some Properties of a
    Test for Concordance of Two Groups of Rankings,"
    Biometrika 62:417-423, 1975.

3.  Phleger, F. B.,  "Ecology of Foraminifera, North-
    west Gulf of Mexico.  I Foraminifera Distribution,"
    Geol. Soc. Amer. Mem.  46, 1951.

4.  Schucany, W. R.  and W. H. Frawley, "A Rank Test for
    Two Group Concordance," Psychometrika 38:249-258,
    1973.

5.  Siegel, S., Nonparametric Statistics for the
    Behavioral Sciences, McGraw Hill, New York, 1956.

-------
                             TIME SERIES ANALYSIS AND FORECASTING FOR AIR POLLUTION
                                    CONCENTRATIONS WITH SEASONAL VARIATIONS
                                        Der-Ann Hsu and J. Stuart Hunter
                                        Department of Civil Engineering
                                  School of Engineering and Applied Science
                                             Princeton University
                                            Princeton, New Jersey
ABSTRACT

Annual time series records of daily averages of hourly
sulfur dioxide concentrations recorded over several
major cities exhibit strong seasonal patterns in both
level and variation.  To construct models useful for
prediction and analysis, a Box-Cox transformation is
first employed to stabilize the data variability.  The
transformed data provide a dramatic improvement in the
data plots.  Further, the transformed data are readily
modeled using simple seasonal plus stochastic com-
ponents following Box-Jenkins time series methods.  The
fitted models, forecasts and confidence limits are then
constructed.

New statistical comparison techniques to compare the
stochastic structure of the time series in periods be-
fore and after changes in pollution regulations are
briefly discussed.  These procedures should prove use-
ful in the evaluation of environmental policies.

INTRODUCTION

In this paper, as an illustration of some useful
statistical techniques, we analyze twelve annual
series of air pollution SO2 concentrations, collected
over four cities: Chicago, Philadelphia, St. Louis and
Washington, D.C.  All of the series exhibit strong
seasonal patterns in both the level and variance.  A
Box-Cox transformation [1] is first used to stabilize
the variances.  The transformed data are then fitted
by a cosine curve to model the seasonal influences and,
following the techniques suggested by Box and Jenkins
[2], a stochastic model fitted to account for the day
to day time dependent character of the data.  Parameter
estimates for each of the twelve annual series are
provided.

An illustration of forecasting for Chicago S02 con-

centrations using the fitted model is further demon-
strated.

In the later part of this paper some techniques useful
for comparison of time series are briefly sketched.
Potential applications of these methods in the eval-
uation of environment policies are suggested, and
further references furnished.

THE DATA AND THEIR SEASONAL STRUCTURE

Data on SO2 concentrations were collected from four

major cities in the U.S. during the three year period
1969-71.  The data consist of hourly readings of the
SO2 concentration for each of the twenty-four hours of
a day, measured in units of 0.01 ppm.   Although instru-
ment breakdowns, failures of the measuring process, and
the negligible levels of concentration during the
summer months caused approximately 15% of the readings
to be missing, daily averages were computed based upon
the available readings within days.   A time series plot
of the daily averages of SO2 concentrations in Chicago

during 1969,  typical of all the series obtained, is
displayed in Figure 1.
As can be seen from Figure 1, both the level and the
variation are large during the winter and small during
the summer.  Reasons for these changes are the vast
amounts of SO2 released from household heating
systems in the winter, and the different diffusion
characteristics due to seasonal temperatures.  Over
short periods of time, the data also demonstrate
skewness towards high SO2 values.

The failure of the observations to possess a Normal
distribution, and/or a homogeneous variance,
seriously impedes the ability of the engineer and
statistician to postulate models, and to estimate the
parameters in these models.  For this reason, the Box-
Cox transformation [1] was applied to the SO2 data to
enhance Normality, to stabilize the variance, and thus
improve the modelling and estimation procedures.
For a variable y, the transformation is expressed as

(1)   z^(λ) = [(y + λ₂)^λ₁ - 1] / {λ₁ [gm(y + λ₂)]^(λ₁-1)} ,     λ₁ ≠ 0
            = gm(y + λ₂) log(y + λ₂) ,                            λ₁ = 0

and

      gm(y + λ₂) = [ Π_{i=1}^{n} (y_i + λ₂) ]^{1/n} ,
where  n  is the number of observations of the vari-
able y and gm is the geometric mean.  It has been
shown by Box and Cox that the estimates of X  and X-,
the parameters required in the transformation, can be
obtained by minimizing the sum of squares of the
residuals after fitting a model.  In the present case,
the model is a cosine curve, illustrative of seasonal
changes in the response level and given by

(2)    c_t = β₀ + β₁ cos(2πt/365 + α) ,     t = 1, 2, ..., 365,

where t is the ordinal number of the days within the
year and α is the phase angle indicating the starting
location of the cosine curve.  The model can thus be
expressed as

(3)    z_t^(λ) = c_t + e_t ,

where z_t^(λ) is the Normal variance-stabilized series
of the observed SO2 concentrations.  The series of
residuals {e_t} may be serially correlated, as dis-
cussed in a later section.

The estimation of the various parameters in Equations
(1) and (2), i.e. λ₁, λ₂, α, β₀, β₁, requires a set of
values which minimize Σ_{t=1}^{365} e_t².  Many published
computer programs are useful for determining this
minimum point (e.g. the subroutine ZXPOWL in the

-------
[FIGURE 1.  OBSERVED DAILY SO2 CONCENTRATIONS IN CHICAGO FOR THE YEAR 1969.
 (Horizontal axis: time in days.)]

[FIGURE 2.  THE TRANSFORMED DAILY SO2 CONCENTRATIONS IN CHICAGO FOR THE YEAR
 1969 AND A FITTED COSINE CURVE.  (• : transformed observed data;
 - : fitted cosine curve.  Horizontal axis: time in days.)]
IBM's IMSL package).  Again, as an illustration, the
estimated parameters for the 1969 Chicago data are as
follows:

      λ₁ = 0.29,  λ₂ = 0.00,  α = -.29,  β₀ = 4.43,  β₁ = 5.63.
Using this set of estimates, the transformed observed
data and fitted cosine curve are displayed in Figure 2.
The remarkable performance of the transformation in
both stabilizing the variance and in elucidating the
model is readily seen from the plot.
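   As an illustration of the estimation step (not the authors' program, which used the IMSL routine ZXPOWL), the sketch below applies the shifted Box-Cox transform of Equation (1) and fits the cosine model of Equation (2) by direct minimization of the residual sum of squares, assuming NumPy and SciPy; the synthetic series y merely stands in for a year of daily average SO2 concentrations.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    t = np.arange(1, 366)
    # Synthetic stand-in for one year of daily average SO2 concentrations.
    y = np.exp(1.0 + 1.2 * np.cos(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(365))

    def boxcox_shifted(y, lam1, lam2):
        # The shifted, geometric-mean-normalized Box-Cox transform of Eq. (1).
        g = np.exp(np.mean(np.log(y + lam2)))
        if abs(lam1) < 1e-8:
            return g * np.log(y + lam2)
        return ((y + lam2) ** lam1 - 1.0) / (lam1 * g ** (lam1 - 1.0))

    def rss(params):
        lam1, lam2, alpha, b0, b1 = params
        if np.any(y + lam2 <= 0):
            return 1e12                       # keep the shift admissible
        z = boxcox_shifted(y, lam1, lam2)
        c = b0 + b1 * np.cos(2 * np.pi * t / 365 + alpha)   # cosine model, Eq. (2)
        return np.sum((z - c) ** 2)

    z0 = boxcox_shifted(y, 0.3, 0.0)
    x0 = [0.3, 0.0, 0.0, float(np.mean(z0)), float(np.std(z0))]
    fit = minimize(rss, x0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
    print(fit.x)    # estimates of lambda_1, lambda_2, alpha, beta_0, beta_1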

Similar parameter estimates were obtained for the re-
maining annual series for the other cities and years.
The results are reported in Table 1.  Some features of
the results are worth commentary.  First, all but one
estimate of α (the phase angle) falls within the range
-.64 to .33, indicating the association between the
winter and the high level of SO2 concentration.  Second,
all but one of the estimates of the transformation
parameter λ₂ are negligible, revealing that a trans-
formation with λ₂ = 0 is adequate in general.  Third,
the estimates of λ₁ fall between 0.00 (the logarithm
transform) and 0.33 (a cube root transform).

THE MODEL FOR THE RESIDUALS

No modelling is complete without an investigation of
the residuals.  As part of this investigation the
sample lagged autocorrelation coefficients, r_k, for each
of the twelve series were computed using the residuals
from the fitted model, where

      r_k = [ Σ_{t=k+1}^{365} (e_t - ē)(e_{t-k} - ē) / m ] / [ Σ_{t=1}^{365} (e_t - ē)² / n ] ,

and where m and n are the number of available residuals
involved in the calculations, and ē is the average of
the series {e_t}.  Here ē equals zero.  The first ten
values of r_k, i.e. r_1, ..., r_10, for the 1969 Chicago
data are: (.26, -.03, -.02, .08, .07, .04, .11, .10,
                                                      674

-------
                  TABLE 1.  ESTIMATES OF THE PARAMETERS IN THE MODEL FOR DAILY SO2 CONCENTRATIONS

  City          Year   # Obs.*     α         λ1       λ2       β0       β1     gm(y+λ2)    SSR**       ρ1        θ       σ_a²

  Chicago       1969    272     -0.2911   0.2857   0.0000   4.4295   5.6257    3.883    2304.4158   0.2605   -0.2811   7.8518
                1970    273     -0.5791   0.0882   0.0000   1.6039   3.8816    2.263     815.7686   0.2641   -0.2856   2.7627
                1971    196     -0.5703   0.1952   0.0000   1.3642   3.9261    2.783    1000.3269   0.4626   -0.6707   3.5202

  Phila-        1969    226      0.1902   0.0208   0.0000   2.4258   1.0056    2.627     865.5786   0.1651   -0.1699   3.7226
  delphia       1970    313     -0.6367   0.1928   0.0021   3.8179   1.4788    3.340    1998.3765   0.3030   -0.3375   5.7317
                1971    284     -0.1887   0.2587   0.0000   1.9199   0.1773    2.303     869.4617   0.3749   -0.4512   2.5436

  St. Louis     1969    289     -0.3646   0.3168   0.0000   2.8203   0.2137    2.822    1521.6904   0.4375   -0.5896   3.9072
                1970    236      0.3229   0.2571   0.0000   1.7968   0.2833    2.281     779.0459   0.2797   -0.3059   3.0186
                1971    314     -2.0339   0.0563   0.0000   0.7458   0.4240    1.616     477.5051   0.3528   -0.4130   1.2992

  Washington,   1969    283     -0.3215   0.2764   0.0000   1.0519   1.6009    1.609     183.3711   0.3038   -0.3386   0.5813
  D.C.          1970    270     -0.3170   0.2597   0.0000   0.4850   1.0057    1.320     232.9175   0.3790   -0.4588   0.7127
                1971    326     -0.3914   0.1106   0.7438   3.2043   1.5680    3.019     255.6146   0.2721   -0.2959   0.7210

  *The number of observations used in estimation.
 **The sum of squares of residuals, Σ e_t².
-------
 transformations were applied  to  each  series.
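Before turning to forecasting, the lagged autocorrelation r_k defined in the residuals section above can be computed directly; the sketch below follows that definition (numerator averaged over the m available products, denominator over the n squared deviations), with the caveat that the authors' program may have handled the divisors slightly differently.

    import numpy as np

    def lagged_autocorrelation(e, k):
        """Sample lag-k autocorrelation r_k of a residual series e,
        following the definition in the text (e-bar is the series mean)."""
        e = np.asarray(e, dtype=float)
        ebar = e.mean()
        num_terms = (e[k:] - ebar) * (e[:-k] - ebar)   # (e_t - ebar)(e_{t-k} - ebar)
        den_terms = (e - ebar) ** 2
        return (num_terms.sum() / len(num_terms)) / (den_terms.sum() / len(den_terms))

    # e.g. [lagged_autocorrelation(e, k) for k in range(1, 11)] gives r_1, ..., r_10
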
FORECASTING OF THE CONCENTRATIONS

Forecasting is important in all time series modeling,
both as a check on the adequacy of the model as a
description of a system, and for the purpose of control
of the system.  From the models fitted in previous
sections, forecasts of the SO2 concentrations for a
moderate length of time ahead can be obtained.  The
first stage of determining forecasts is to predict the
values of e_t at time T+1, T+2, ..., assuming that we
stand at time T, the time origin of prediction.  Again,
as suggested by Box and Jenkins, the best forecasts of
e_{T+k}, k = 1, 2, ..., are

    ê_{T+1} = −θ a_T    and    ê_{T+k} = 0    for k ≥ 2                (7)

where a_T can be obtained by fitting the model ex-
pressed by Equation (4) to the observed series of e_t.
The value of ē in this case is equal to zero.  The
upper and lower bounds, at the .95 confidence level,
of the forecasts can be computed following the
equations displayed below:
    ê_{T+k} ± 1.96 σ_a                   for k = 1
    ê_{T+k} ± 1.96 σ_a √(1 + θ²)         for k ≥ 2

where the ε's indicate half the length of the confidence
intervals.

                 FIGURE 3.  FORECASTING SO2 CONCENTRATIONS IN CHICAGO
                            FOR THE FIRST HUNDRED DAYS OF 1970.
                            (• : observed concentrations;  — : forecasts;
                             – : 95% confidence limits of forecasts)
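The θ and ρ1 estimates in Table 1 are consistent with a first-order moving-average form for Equation (4), e_t = a_t − θ a_{t−1}, and that form is assumed in the sketch below, which produces the forecasts of Equation (7) together with approximate 95 percent limits (standard Box-Jenkins MA(1) forecast variances assumed, since the printed variance expressions are not fully legible here).

    import numpy as np

    def ma1_forecasts(a_T, theta, sigma_a, horizon=10):
        """Forecasts of e_{T+k} from an assumed MA(1) model e_t = a_t - theta*a_{t-1},
        with approximate 95 percent limits."""
        point = np.zeros(horizon)
        point[0] = -theta * a_T                       # Equation (7): nonzero only one step ahead
        sd = np.full(horizon, sigma_a * np.sqrt(1.0 + theta**2))
        sd[0] = sigma_a                               # one-step-ahead forecast standard deviation
        eps = 1.96 * sd                               # the half-widths epsilon_{T+k} of the text
        return point, point - eps, point + eps

    # Chicago 1969 values from Table 1 (a_T is an arbitrary placeholder for the last shock):
    fc, lo, hi = ma1_forecasts(a_T=0.5, theta=-0.28, sigma_a=np.sqrt(7.85))
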

To express the forecasts in the original form and
scale, a re-seasonalization and a reverse variance-
stabilizing transformation are made (Equations (8),
(9) and (10)).
The confidence limits of y_{T+k} are secured by adding or
subtracting from Equation (9) an ε_{T+k} already defined
and then going through the procedure expressed in
Equation (10).  Using the Chicago example, the daily
averages of SO2 concentrations for the first hundred
days of 1970 are displayed in Figure 3.  The forecast,
which is the expected median of future realizations of
the SO2 concentration at time T+k, and its associated upper
and lower 2.5% probability limits are also displayed in
Figure 3.  The observed concentrations fall within the
forecast confidence band in a fashion very consistent
with theoretical expectation.
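As an illustration of the re-seasonalization and reverse transformation referred to above, the sketch below adds the fitted cosine term back to a transformed-scale forecast and inverts an assumed plain Box-Cox transformation; the authors' Equations (8)-(10) may normalize the transformation differently (Table 1 lists a geometric-mean factor gm(y+λ2)), so this is only a schematic version.

    import numpy as np

    def back_transform(e_hat, k, T, lam1, lam2, alpha, beta0, beta1):
        """Convert a forecast e_hat of the deseasonalized, transformed series at
        time T+k back to the original concentration scale.  A plain Box-Cox
        transform z = ((y + lam2)**lam1 - 1)/lam1 is assumed (log when lam1 = 0)."""
        z = e_hat + beta0 + beta1 * np.cos(2 * np.pi * (T + k) / 365.0 + alpha)
        if abs(lam1) < 1e-6:
            return np.exp(z) - lam2
        return (lam1 * z + 1.0) ** (1.0 / lam1) - lam2

    # Applying the same mapping to e_hat plus or minus eps_{T+k} gives limits of the
    # kind shown in Figure 3.
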

In viewing Figure 3, it is important to remember that
the probability density in the original metric of the
observations appears skewed to the upper side and
further, that the variance is  not independent of level.
In addition, it is interesting to note the tendency of
the observations to gradually  drift below the forecast
line near the end of the 100 day series.  We have here
an indication of the possible  inadequacy of the fore-
cast function and are led naturally to the question
of comparing the 1969 and 1970 models.
COMPARISON OF TIME SERIES

Besides forecasting, the comparison of time series is
an important problem in applied statistics.
Situations which require comparison of time series may
arise when the effects of, say, changes in pollution
regulations are to be assessed.  The time series of
pollution concentrations before and after the regu-
lation changes, for instance, may be investigated and
compared with respect to such essential features as
their overall level, autocorrelation structure,
variation and the probability of exceeding some regula-
tory standard (or the frequency of occurrence and
duration of exceedances of a regulatory standard).

Comparison through the use of forecasts.  A simple
method to test whether two time series have identical
structure is to perform forecasting for one series
(series B), using the model constructed from  the  other
(series A).  The differences between the forecasted and
observed values for series B can be tested for
potential model discrepancy.  Tiao, Box and Hamming[5]
proposed that the errors of one-step-ahead forecasts,
i.e. a_{T+k}, k = 1, 2, 3, ..., be squared and summed and then
checked against the significance point of a χ² distri-
bution, given that Normality may be assumed for the
errors.
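A minimal sketch of that comparison test follows, assuming the one-step-ahead forecast errors are standardized by the residual standard deviation of the series-A model before squaring; the precise standardization used by Tiao, Box and Hamming may differ.

    import numpy as np
    from scipy.stats import chi2

    def forecast_error_test(errors, sigma_a, alpha=0.05):
        """Sum the squared standardized one-step-ahead forecast errors and compare
        the total against a chi-square significance point with one degree of
        freedom per error (Normal errors assumed)."""
        errors = np.asarray(errors, dtype=float)
        statistic = np.sum((errors / sigma_a) ** 2)
        critical = chi2.ppf(1.0 - alpha, df=len(errors))
        return statistic, critical, statistic > critical
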
In the case of SO2 concentrations, the fitted model,
including the estimates of λ1, λ2, α, β0, β1 and θ, for
the A series can be used to obtain forecasts for the B
series.  If the parameters are different between the
two series the corresponding  forecast errors will ex-
hibit a systematic bias, or a structured stochastic
pattern.  An illustration of  this comparison technique
is not shown in this paper, but is under current study.

Comparison using a test statistic for Normal stationary
models.   A test statistic for comparing two autore-
gressive time series has been developed by Hsu[3] and
illustrated in a practical example by Hsu and Hunter[4].
This technique is useful for  comparing two series of
residuals with respect to the parameters θ and σ_a.

Comparison using a complete test.  Ideally, a test
                                                      676

-------
statistic or scheme can be developed to examine all the
parameters of two series.  Such a test, which promises
to be complicated and to require much computation is
left for future research.
CONCLUDING REMARKS

Series of daily averages of SO2 concentrations have
been analyzed to demonstrate a profitable use of some
statistical techniques.  Data which originally appeared
to be lacking a simple structure were variance-stabil-
ized, deseasonalized and fitted by a simple stochastic
model.  Forecasts of future levels of pollution con-
centrations were easily obtained using the fitted
model.  Further, series observed from different
locales could be compared based on the information
gathered from their analyses and models.  Activities
such as evaluation of environmental policies,
selection of alternative regulations, etc., may thus
benefit from such studies.  Further, analyses of
hourly data within a day are also possible and may be
expected to provide valuable information.  Geographi-
cal comparison, combinations of statistical and
physical models, schemes adaptive to pollution control,
etc., are all subjects for further investigation.   The
authors hope the statistical techniques described here
may prove useful in these future research activities.
REFERENCES

[1]  Box, G.E.P., and Cox, D.R. (1964), "An Analysis
     of Transformations," Journal of the Royal
     Statistical Society, Series B, 26, 211-252.

[2]  Box, G.E.P., and Jenkins, G.M. (1970), "Time
     Series Analysis, Forecasting and Control," Holden-
     Day, San Francisco.

[3]  Hsu, D.-A. (1973),  Stochastic Instability and the
     Behavior of Stock Prices. Ph.D.  Dissertation,
     Dept. of Statistics, University of Wisconsin-
     Madison.

[4]  Hsu, D.-A. and Hunter,  J.S. (1975), "Analysis of
     Simulation-Generated Dynamic Responses," Civil
     Engineering Research Report #75-TR-2,  Civil
     Engineering Department,  Princeton University,
     March 1975.

[5]  Tiao, G.C., Box, G.E.P. and Hamming, W.J. (1975),
     "Analysis of Los Angeles Photochemical Smog Data:
     A Statistical Overview," Journal of the Air Pollution
     Control Association, 25, March 1975.
                                                      677

-------
                         METEOROLOGICAL ADJUSTMENT OF YEARLY MEAN VALUES FOR
                                  AIR POLLUTANT CONCENTRATION COMPARISONS
                     Steven M. Sidik
              NASA-Lewis Research Center
                     Cleveland, Ohio
                   Harold E.  Neustadter
               NASA-Lewis Research Center
                     Cleveland, Ohio
                        Summary

    The results of some linear regression analyses relating
pollutant concentrations to certain specified meteorologic
and economic variables are presented.  The resulting models
provide about a 20 percent improvement in predicting con-
centrations.  An outline of the use of the predictive equations
in adjusting for meteorological effects is then presented.

                       Introduction

    This paper presents an approach to interpretation of 24-
hour averaged air pollutant measurements taken in compli-
ance with U. S.  Environmental Protection Agency guidelines
when analyzed in  conjunction with such meteorological data
as may be readily obtained from the National Weather Ser-
vice.   The  specific example considered is Total Suspended
Particulates in Cleveland, Ohio, for which some monitoring
has been performed by the municipality since 1967; initially
every 6th day and currently every 3rd day. The meteoro-
logical data are also for the same 24-hour periods and were
obtained from the National Oceanic and Atmospheric Administration as
decks of punched  cards.  The information  is ground level in-
formation and devoid of such things as inversion heights.

    We fit  linear regression models to pollutant concentra-
tions using the following combinations of meteorologic vari-
ables as predictors:  daily delta temperature (defined as the
maximum temperature minus the minimum) and its first dif-
ference; daily minimum temperature and its first and second
differences; daily average barometric pressure; daily total
precipitation (water equivalent in inches);  and daily resultant
wind velocity.  We included two rough indicators of economic
activity and allowed for the existence  of both a linear '' drift''
in time and a seasonal component with a period of 1 year.

    The overall results are that the mean TSP concentration
(1) increases as delta temperature increases and as its first
difference decreases;  (2) increases as minimum tempera-
ture increases and as the first and second differences in-
crease; (3) increases as pressure increases; (4) generally
decreases initially with increasing wind velocity except when
there is a source upwind; and (5) significantly decreased
over the period of the study with a clear indication of sea-
sonal fluctuation.

    The goodness of fit of the estimated models to the data
is partially reflected by the squared coefficient of multiple
correlation, indicating that at the various  sampling stations
the models accounted for about 23 to 47 percent of the total
variance of observed TSP concentrations.  However, there
is still a large variability unaccounted for so that predic-
tions of individual values are not very helpful.
    About a 20 percent improvement when using these equa-
tions in place of simple mean observed values is obtained
when (1) predicting mean concentrations for specified meteo-
rological conditions or (2) comparing yearly averages after
being adjusted so as to remove meteorological effects.

                Pollutant Concentration Data

    The Cleveland Division of Air Pollution Control has
taken 24-hour averaged air quality samplings of TSP since
January 1967.  There are currently 21 sampling stations
around the city which sample TSP.  A more complete analy-
sis of all these stations (including analysis of SO2 and NO2
data) is presented elsewhere.   Only a summary of the re-
sults for TSP is included herein and illustrated by the re-
sults from a typical station.

    Summaries of the air pollution data used for this study,
including tabulations of means, standard deviations, and
goodness of fit to lognormality on an annual basis, have been
reported earlier (Ref. 2).

    The sampling method for TSP is high volume air sam-
pling using glass fiber filters.  A previously published study
(Ref. 3) showed that, for such HiVol air sampling of TSP in
Cleveland, approximate 95 percent confidence limits on the
errors introduced by filters and samplers were about 12
percent high to 11 percent low.

                   Regression Analysis

Models and Method

    The method chosen for data analysis was multiple linear
regression analysis which is explained in  such texts as
Searle,  Draper and Smith,   and Daniel and Wood.
    We assume models of the general form

        y_i = β_0 + Σ_j β_j x_ij + e_i                        (1)

where

y_i   = the ith observed pollutant concentration, or some
        transformed value of that concentration.  In this
        paper we use y = log(TSP)
x_ij  = the observed value of the jth predictor variable (i.e.,
        meteorologic or economic) for the ith observation.
        The particular predictor variables (such as baro-
        metric pressure) used are presented in Table 1
β_0   = the unknown intercept value
                                                        678

-------
β_j   = unknown coefficients (slopes) which are to be estimated.
        Multiple linear regression as used here estimates
        these unknown coefficients by the method of least
        squares.  (Estimated values are denoted by β̂_j.)
e_i   = an unobserved random error component.  This random
        error is assumed to follow a normal distribution with
        mean of zero and a standard deviation of σ which is
        unknown.  We further assume that the e_i are uncor-
        related with each other

    The random error e_i will include, among other things,
errors of measurement of the concentrations, inherent vari-
ability of concentration because of varying emission rates
and/or atmospheric instability, inadequacies in the model,
and to some extent the errors of measurement of the predic-
tor variables.  Our data base consists primarily of 24-hour
averaged concentrations at 3-day intervals.  A previous
study found that concentrations observed every 3 days have
a very low correlation.  Thus the assumption that the e_i
are uncorrelated is reasonable.
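The least-squares fit just described can be reproduced with any standard regression routine; the sketch below uses an ordinary least-squares solution via numpy and returns the quantities reported in Table 2 (coefficient estimates, R², and the residual variance).  The assumed column layout of the predictor matrix follows the derived variables of Table 1.

    import numpy as np

    def fit_station(X, y):
        """Ordinary least squares for y = beta0 + X*beta + e at one station.
        X is an (N x 29) array of the derived predictors of Table 1 and
        y the vector of log(TSP) values."""
        X = np.column_stack([np.ones(len(y)), X])        # prepend the intercept column
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares estimates
        resid = y - X @ beta
        dof = len(y) - X.shape[1]
        s2 = resid @ resid / dof                         # estimate of the error variance
        r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
        return beta, r2, s2
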

       Derived Variables and Estimated Coefficients

     Pollutant concentrations  at a given time and location are
the result of emissions from  various sources which have
undergone transport and dispersion processes in the atmo-
 sphere.   In general, for a fixed rate of emission from all
 sources,  pollutant concentrations are inversely proportional
to atmospheric mixing.  The  factors generally considered to
 control the degree of mixing are the effective mixing height,
wind velocity, and wind stability.  In most locations,  how-
 ever,  the NWS does not routinely monitor mixing heights.
 Thus,  this information has not been incorporated even
though such measurements were made locally by the NWS
for a period of 1 year.

     To construct model equations which can predict pollu-
 tant concentrations for known meteorological conditions,  we
 defined new predictor variables derived from those basic
 variables known or suspected to be related to atmospheric
 mixing.   In constructing derived variables we were guided
 primarily by Holzworth's (Ref. 9) qualitative account of large scale
 weather influences on air pollution concentrations.

     Table 1 presents  the 29 derived variables used in the
predictive models.  These variables,  the  rationale for their
 inclusion and the results are  discussed in depth in Ref. 1.
 This model was fitted separately at each station.  Due to
 space limitations,  we present the results  of the regression
 analysis at only one of the sampling stations.  Full analysis
is presented in Ref. 1.  The chosen station is typical in the
 sense that it falls approximately in the middle of the range
 of how well the equations fit the data.

     Table 2 presents (1) the estimated coefficients for each
predictor variable, (2) the value of the square of the multiple cor-
relation coefficient (R²), (3) the number of observations
available for fitting, (4) the estimate of the error variance
(σ̂²) and error standard deviation (σ̂), and (5) the mean of
the observed concentrations (ȳ).  The meaning and use of
each of these quantities are discussed in the following sec-
tions.

             Goodness of Fit and Error Estimate
     Table 2 of Ref. 1 shows that for TSP the R² values
range from a low of 0.23 to a high of 0.47 with most of the
values near 0.40.  In other words, the models account for
from 23 to 47 percent of the total variance of the log(TSP)
values.

     Table 2 of Ref. 1 also shows that, for log(TSP), σ̂
ranges from 0.140 to 0.233 with most values being around
0.160.  The importance of σ̂ to the problem of using the
models to predict concentrations will be covered in the
following section.

                     Applications

Predictions from Fitted Models

     The primary motivation of this work was to develop a
method for making predictions.  Actually,  two different pre-
dictions are of interest.  The first is the prediction (or esti-
mate) of the mean pollutant concentration as a function of the
predictor variables and the second is the prediction of a
single further pollutant concentration.  Both predictions re-
sult from inserting the specified values of the predictor
variables (i.e., the x_j) into the estimating equation, yielding
the predicted value ŷ.  However, the uncertainties (standard
deviations) associated with each application are different.

    The uncertainty in the prediction of the mean of the y's
for specified x_j is a function only of the actual x_j and the
uncertainty of the estimates β̂_j.  (See Draper and Smith
(Ref. 5) and Hahn (Ref. 10) for details.  We consider predictions
only at the means of the x_j for notational and conceptual
simplicity.)  The estimated standard deviation of ŷ when the
x_j are all equal to the means of the x_j is σ̂/√N.  For the
TSP data of station 1 we obtain a standard deviation of
0.176/√364 = 0.0092.  Thus an approximate 95 percent
confidence limit on y is

      ŷ − (1.96)(0.0092) ≤ log(TSP) ≤ ŷ + (1.96)(0.0092)

In terms of TSP directly this results in proportional limits
of roughly ±4 percent.  Thus the regression equation itself
is pretty well estimated.  These confidence limits change as
the x_j change.

    The uncertainty in a further predicted value includes not
only the uncertainty in the regression equation but also the
uncertainty involved in a single  observation.  The standard
deviation of a further predicted  value at the mean of the
                                                         679

-------
predictor variables is thus σ̂·√(1 + 1/N).  At station 1 for
log(TSP) we thus obtain 0.1762.

    Approximate 95 percent confidence limits (in terms of
proportional limits) thus become about 0.45 to 2.22.
That is, we can predict single values with a 95 percent con-
fidence of being within 55 percent low to 122 percent high.
Thus although the regression function is well estimated,  it
is obvious that it is practically useless for prediction of  spe-
cific single day concentrations because of the large residual
error.  We will now consider a situation where the regres-
sion equation can be used to advantage.
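Both uncertainty calculations above can be summarized in a few lines of code.  The √(1 + 1/N) factor for a single further observation follows the reconstruction above (it matches the quoted value 0.1762), and log(TSP) is taken to be a base-10 logarithm, as the quoted ±4 percent and 55/122 percent limits imply; both are assumptions of this sketch.

    import numpy as np

    def proportional_limits(sigma, N):
        """95 percent limits, as multiplicative factors on TSP, for (a) the estimated
        mean response and (b) a single further observation, both at the mean of the
        predictor variables.  Base-10 logs are assumed for y = log(TSP)."""
        sd_mean = sigma / np.sqrt(N)                   # uncertainty of the fitted mean
        sd_single = sigma * np.sqrt(1.0 + 1.0 / N)     # uncertainty of one new observation
        lim = lambda sd: (10.0 ** (-1.96 * sd), 10.0 ** (1.96 * sd))
        return lim(sd_mean), lim(sd_single)

    # Station 1: proportional_limits(0.176, 364) gives roughly (0.96, 1.04) and (0.45, 2.22)
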

              Use in Meteorological Adjustment

    The previous section showed that the large residual
variability precluded meaningful individual predictions of
concentrations.  Nevertheless,  if many concentrations are
predicted and then averaged, the average concentration can
be estimated with dramatically improved reliability.

    Suppose we use the current predictive models for a pe-
riod of say 1 year and that during this year we accumulate
N = 100 further observations.   Among other differences  be-
tween this year and previous years are the differences in
meteorological conditions on the days for which data was ob-
tained.  If we assume that measured concentrations are re-
lated to emission rates and we want to compare the emission
rates of this year with those of previous years based on the
changes in measured concentrations, then it is necessary to
first remove (adjust for) these meteorological differences.
This is accomplished by computing the estimated deviations
of the predicted concentrations (ŷ_i) from the observed (y_i)
values.  If there have been no changes in the processes gen-
erating the pollutants then the predicted and the observed
concentrations should be the same on the average.  That is,
the e_i would ideally have a distribution with a mean of zero
and we expect the computed mean, ē, to be near zero.
(Note that the e_i will not have the same variance.  See
Ref. 5 for details.)

    If there has been an increase in emissions, ē would
tend to be greater than zero, while the opposite would be
true if there were a decrease in emissions.  Thus a statis-
tical test of the hypothesis of unchanged conditions is equiv-
alent to a test of the significance of the difference of ē
from zero.  The standard deviation of ē is related to the
standard deviation of e_i by the factor 1/√N.  Thus,
using the log(TSP) data of Table 2 and assuming N = 100
observations we obtain an estimated standard deviation of
0.01762 for ē, which translates to approximately ±8 percent
for a 95 percent confidence interval on ē.
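A sketch of the adjustment test just described follows: the mean deviation of observed from predicted log(TSP) is compared with its approximate standard error σ/√N.  The normal approximation and the use of a common σ for all deviations are simplifying assumptions (as noted above, the individual e_i do not in fact share exactly the same variance).

    import numpy as np

    def emission_change_test(y_obs, y_pred, sigma):
        """Test whether the mean deviation e-bar of observed from predicted
        log(TSP) differs from zero, i.e. whether emissions appear to have changed."""
        e = np.asarray(y_obs) - np.asarray(y_pred)
        e_bar = e.mean()
        se = sigma / np.sqrt(len(e))                 # approximate standard error of e-bar
        z = e_bar / se
        return e_bar, se, abs(z) > 1.96              # True suggests a change at the 5% level
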
     Obviously, as more data become available (e.g., each
year) updating of the data base should be done and the models
refitted.  At these times the models could also be improved
by the inclusion of other variables found to be of significance.

                  Degree of Improvement

     We have discussed how well the models fit the data and
the use of the models for prediction purposes.  We now
consider the question of how much improvement we have
achieved by using the estimated regressions as opposed to
using the mean of the observed concentrations without any
adjustment.  The quantity D = 1 − √(1 − R²), where R² is
as defined previously (i.e., the square of the multiple cor-
relation coefficient), expresses the proportional decrease in
the standard deviation of a predicted concentration when the
regression equation is used as opposed to simply using the
mean of the observed values.  (Duncan, Ref. 11, pp. 696-699.)
From the R² values of Table 2 of Ref. 1 we find that
D = 1 − √(1 − R²) ranges from a low of 0.123 to a high of
0.272.  Most of the R² values are near R² = 0.40, which
gives a value of D = 0.225.  We thus find a percent im-
provement of from 12.3 to 27.2 percent with most values
near 20 percent.
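In code form, the improvement measure is one line of arithmetic; the example value reproduces the D = 0.225 quoted above.

    # Proportional reduction in prediction standard deviation (Duncan, pp. 696-699)
    def improvement(r2):
        return 1.0 - (1.0 - r2) ** 0.5     # e.g. improvement(0.40) = 0.2254
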

                        References

1. Sidik, S. M.; and Neustadter, H. E.:  Meteorological
    Adjustment of Yearly Mean Values for Air Pollution
    Concentration Comparisons.  NASA TN in process.

2. Neustadter,  Harold E.; et al.;  Air Quality Aerometric
    Data for the City of Cleveland from 1967-1970 for
    Sulfur Dioxide, Suspended Particulates,  and Nitrogen
    Dioxide. NASA TM X-2496,  1972.

3. Neustadter,  H.  E.; et al.: The Use of Whatman-41
    Filters for High Volume Air Sampling.  Atmos.
    Environ., vol. 9,  no.  1,  Jan. 1975, pp. 101-109.

4. Searle,  Shayle R.:  Linear Models.   John Wiley  & Sons,
    Inc.,  1971.

5. Draper,  Norman R.; and Smith, H.: Applied Regression
    Analysis.  John Wiley & Sons,  Inc. , 1966.

6. Daniel,  Cuthbert; Wood, Fred S.:   Fitting Equations to
    Data.  Wiley-Interscience, 1971.

7. Neustadter, Harold E.; and Sidik, Steven M.:  On Evalu-
    ating Compliance with Air Pollution Levels  "Not to be
    Exceeded more than once a Year. " Air Pollution Con-
    trol Assoc.  J., vol.  24, no.  6, June  1974,  pp.  559-563.

8. Stern, Arthur C., ed.:  Proceedings of Symposium on
    Multiple-Source Urban Diffusion Models.  AP-86,
    Environmental Protection Agency  (PB-198400),  1970.
                                                        680

-------
 9. Holzworth, G. C.:  Large-Scale Weather Influences on
      Community Air Pollution Potential in the United States.
     Air Pollution Control Assoc.  J., vol.  19, no. 4, Apr.
     1969, pp.  248-254.

10. Hahn,  Gerald J.; Simultaneous Prediction Intervals for
     a Regression Model.  Technometrics,  vol. 14, no. 1,
     Feb. 1972, pp. 203-214.

11. Duncan,  Acheson J.:  Quality Control and Industrial
     Statistics.  3rd ed.,  Richard D.  Irwin, Inc.,  1965.
                 TABLE 1. - DERIVED PREDICTOR VARIABLES USED IN THE REGRESSION MODELS

 Variable   Symbol      Definition

 X1         ΔT          Tmax − Tmin; maximum temperature minus minimum temperature, °F
 X2         ΔT'         3ΔT(i) − 4ΔT(i−1) + ΔT(i−2); related to noncentral first difference of ΔT on day i
 X3         min         Tmin; minimum temperature, °F
 X4         min'        3 min(i) − 4 min(i−1) + min(i−2); related to noncentral first difference of min on day i
 X5         min''       −2 min(i) + 5 min(i−1) − 4 min(i−2) + min(i−3); related to noncentral second difference
                        of min at day i
 X6         B.P.        Daily average barometric pressure in inches of mercury
 X7         Pr          Total water equivalent of precipitation in inches
 X8         (Pr)²       The square of X7
 X9         Work        Indicator of workdays vs nonworkdays:  Work = 0 on Saturdays, Sundays and Federal
                        holidays; 1 otherwise
 X10        Steel       Weekly regional steel tonnage index
 X11        VN          Resultant velocity when the wind is out of the North octant; 0.0 otherwise
 X12        VN²         The square of X11
 X13, X14   VNE, VNE²   Similar to X11, X12 when wind is from NE
 X15, X16   VE, VE²     Similar to X11, X12 when wind is from E
 X17, X18   VSE, VSE²   Similar to X11, X12 when wind is from SE
 X19, X20   VS, VS²     Similar to X11, X12 when wind is from S
 X21, X22   VSW, VSW²   Similar to X11, X12 when wind is from SW
 X23, X24   VW, VW²     Similar to X11, X12 when wind is from W
 X25, X26   VNW, VNW²   Similar to X11, X12 when wind is from NW
 X27        t           Number of days from Jan. 1, 1967 divided by 100 (Jan. 1, 1967 is nominal beginning
                        of sampling program)
 X28        sin(θ)      sin(2πt/3.6525)
 X29        cos(θ)      cos(2πt/3.6525)
                                                       681

-------
   TABLE 2. - RESULTS OF REGRESSION ANALYSIS AT A TYPICAL STATION FOR
                        TOTAL SUSPENDED PARTICULATES

 Coefficient  Variable    Estimate        Coefficient  Variable   Estimate        Coefficient  Variable   Estimate

 β0           Intercept   -2.91           β10          Steel      -0.00040        β20          VS²         0.00042
 β1           ΔT           0.0080 (a)     β11          VN         -0.040 (a)      β21          VSW        -0.011
 β2           ΔT'         -0.00034        β12          VN²         0.0019         β22          VSW²        0.00008
 β3           min          0.0033 (a)     β13          VNE        -0.045 (a)      β23          VW         -0.024 (a)
 β4           min'         0.0022 (a)     β14          VNE²        0.0021 (a)     β24          VW²         0.00099
 β5           min''        0.0013 (a)     β15          VE         -0.070 (a)      β25          VNW        -0.044 (a)
 β6           B.P.         0.17 (a)       β16          VE²         0.0044 (a)     β26          VNW²        0.0027 (a)
 β7           Pr          -0.30 (a)       β17          VSE        -0.027          β27          t          -0.0081 (a)
 β8           (Pr)²        0.16 (a)       β18          VSE²        0.00065        β28          sin θ       0.11 (a)
 β9           Work         0.016          β19          VS         -0.020 (a)      β29          cos θ       0.090 (a)

 (a) Denotes the coefficient is significantly different than zero at approximately the 10 percent
     significance level.

 N = 364         ȳ = 2.09        R² = 0.38
 σ̂² = 0.0310     σ̂ = 0.176       D = 0.21
                                       682

-------
                                    THE APPLICATION OF CLUSTER ANALYSIS TO

                                           STREAM WATER QUALITY DATA
                                               Jay A.  Bloomfield
                                       Environmental Quality Research Unit
                             New York State  Department of Environmental Conservation
                                      50 Wolf  Road,  Albany,  New York  12233
                      Abstract

Cluster analysis, a multivariate classification
technique, was used to examine spatial and temporal
heterogeneity in stream water quality data.  To
examine spatial patterns, stream water quality data
from 44 watersheds in the Genesee River basin in
western New York State were analyzed.  Nine groups
of watersheds and  seven groups of water quality vari-
ables were identified.  To examine temporal trends,
data collected daily at a small rural watershed,
Mill Creek, were examined.  Three clusters of state
variables were produced, based upon the stability
of each variable during runoff events.  Cluster
analysis by sample yielded subsets representing
runoff events, recessional periods and base flows.

                    Introduction

One of the more challenging problems in the en-
vironmental sciences today is the interpretation
of data collected  from field studies.  These data
often consist of many state variables measured at
several locations  over a period of time.  The sheer
volume of data obtained in this manner often over-
whelms even the most thorough observer, intent upon
discerning meaningful patterns in the data.  Several
multivariate techniques have been used to find
patterns in environmental data, primarily in the
fields of geology, taxonomy, and terrestrial ecology.
These techniques include multiple components analysis
(McCammon, 1968),  ordination (Bray and Curtis, 1957)
and cluster analysis (Fortier and Solomon, 1966).
This paper will be concerned solely with the latter,
a hierarchical classification technique used to
determine subsets  of samples or state variables.
Two data sets will be analyzed using cluster analy-
sis:  one to examine spatial heterogeneity, and the
other to examine temporal variability in stream water
quality.

Sources of Heterogeneity in Stream Water Quality
Data

Heterogeneity in stream quality is due to two fac-
tors,  time and location.  Temporal variability in
water quality is induced by seasonal and short-term
changes in climatic factors, primarily precipitation,
but to a lesser extent, solar radiation and the
movement of air masses.  Spatial variability is
often explained by differences in soil or bedrock
type,  topography,  vegetation or the influence of
the activities of  man.
Stream quality surveillance networks are rarely
designed to evaluate both spatial and temporal
variations in water quality, either because of
limited resources or lack of insight.  Little effort
is made in determining  1) the proper sampling
interval,  2) the placement of stations, or  3)
redundancy in the state variables (primarily
chemical constitutents) measured.  Cluster analysis
can be used to examine each of these three problem
areas.

Cluster Analysis

Cluster analysis is simply a classification technique
which graphically describes a similarity matrix with a
dendrogram (Sokal and Sneath, 1963).  The similarity
matrix can be constructed by comparing samples (Q-mode)
or state variables (R-mode). For each pair of samples
or variables, a similarity coefficient is calculated.
Although the correlation coefficient is often used,
it has the disadvantage of marked sensitivity to the
nature of the frequency distribution of the state
populations (variable or sample)  considered (Park,
1968; Gevirtz, Park and Friedman, 1971).

A distance coefficient proposed by Sorensen (1948)
tends to be less sensitive to the form of the fre-
quency distribution of the data (Park, 1968) and
therefore, has been used in this paper.

Sorensen's coefficient (S) for multistate data is
defined as

    S_jk  =  2 Σ(i=1 to n) min(X_ij, X_ik)  /  Σ(i=1 to n) (X_ij + X_ik)

For Q-mode analysis, n is the number of samples, X_ij
the value of the jth state variable in the ith sample,
and X_ik the value of the kth state variable in the ith
sample.  For R-mode analysis, the logic is transposed.
It can be seen that the coefficient represents the
relative amount a pair of samples or variables has in
common.
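A minimal sketch of Sorensen's coefficient as reconstructed above (the denominator, which is not fully legible in the original, is assumed to be the usual sum of the two state vectors):

    import numpy as np

    def sorensen(x_j, x_k):
        """Sorensen (Bray-Curtis type) similarity between two states (samples in
        Q-mode, variables in R-mode); inputs are the n-vectors for the pair."""
        x_j, x_k = np.asarray(x_j, float), np.asarray(x_k, float)
        return 2.0 * np.minimum(x_j, x_k).sum() / (x_j.sum() + x_k.sum())
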

The generation of the dendrogram is accomplished by
pair-wise calculation of elements of the similarity
matrix.  From this matrix, the pair of states with
the maximum value of S are temporarily deleted from
                                                     683

-------
the analysis, and the program recalculates the simi-
larity matrix with an additional state obtained from
a combination of the pair of states deleted.  Thus,
n-1 iterations are required to produce the dendrogram.
This step-wise procedure allows one to examine the
hierarchy among subsets (King, 1967).
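The step-wise pairing procedure just described corresponds to agglomerative hierarchical clustering on the distance 1 − S.  The sketch below uses SciPy's average-linkage routine as a stand-in for the authors' FORTRAN program, whose exact merge rule is not stated here, so the linkage choice is an assumption.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.spatial.distance import squareform

    def cluster_states(data):
        """Q-mode clustering: data is a samples-by-variables array of nonnegative
        values.  Build the Sorensen similarity matrix, convert to distances, and
        produce a dendrogram (average linkage assumed)."""
        n = data.shape[0]
        S = np.array([[2.0 * np.minimum(data[j], data[k]).sum()
                       / (data[j].sum() + data[k].sum()) for k in range(n)]
                      for j in range(n)])
        D = squareform(1.0 - S, checks=False)        # condensed distance matrix
        Z = linkage(D, method="average")
        return dendrogram(Z, no_plot=True)           # leaf order and merge levels
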

An example of a dendrogram is shown in Figure 2.  The
horizontal scale represents percent similarity
(100. * S).  The states are listed vertically and are
connected with parallel lines.  If states A and B are
similar at a 90% level, parallel lines are drawn hori-
zontally to this level and connected.  If state C is
similar to this subset at a 75% level, parallel lines
are extended from the A-B subset (90%) to 75% and are
connected to a horizontal line from state C.  The
simplest procedure for defining a cluster is to com-
pare the relative similarity within a group to the
similarity of the group to the remainder  of the states.

Gevirtz et al (1971) cite a disadvantage of clus-
ter analysis:  the technique produces subsets of
states, and hence obscures gradients among states.
Additionally, McCammon (1968) has warned that low-
level clusters (S < .2) should be ignored, and Park
(1974) has suggested scaling each state variable
(either by the maximum value of each variable or the
range).  Both
of these suggestions have been followed in this
paper.  Also, the author believes the results ob-
tained from cluster analysis should be used judi-
ciously, taking into account the limitations of the
data and should function as one facet of  a rigorous
analytical strategy.
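Park's scaling suggestion, followed in this paper, amounts to dividing each state variable by its maximum or its range before the similarity matrix is formed; a short version (range scaling assumed) is:

    import numpy as np

    def scale_by_range(data):
        """Scale each column (state variable) of a samples-by-variables array
        to the 0-1 interval by its observed range, as suggested by Park (1974)."""
        lo, hi = data.min(axis=0), data.max(axis=0)
        return (data - lo) / np.where(hi > lo, hi - lo, 1.0)
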

Cluster analysis has been used in geology (Harbaugh
and Merriam, 1968), marine biology (Gevirtz et al,
1971), paleoecology (Del Prete, 1972; Bloomfield,
1972) and limnology (Cairns, Kaesler and  Patrick,
1970).  The technique has application to  a wide range
of multistate environmental data where grouping of
similar states is desired.
Results

Two sets of multivariate stream water quality data
were examined using a cluster analysis program written
in FORTRAN and described by Gevirtz et al (1971).
Each data set was chosen so as to examine the problems
of spatial and temporal heterogeneity.

The Genesee River Watershed, 1972

During June 1972, the United States Geological Survey
sampled 44 streams in the Genesee River Basin in
western New York (Figure l).  Twenty-one water quality
parameters including water temperature and stream dis-
charge were determined for each stream (United States
Department of the Interior, 1973).   On June 23, 1972,
Hurricane Agnes, one of the most severe storms ever to
affect New York State, caused vastly increased stream
discharges.  Of the 48 total sampling sites, six were
sampled after the storm.  Five of these post-storm
sites were on the main stem of the Genesee River and
one on a major tributary.  Cluster analysis was used
to group both sampling sites (Q-mode) and state vari-
ables (R-mode).  The Q-mode dendrogram is shown in
Figure 2 and the R-mode dendrogram in Figure 3.

The Q-mode analysis generated nine distinct clusters,
four intermediate samples, and one sample (Wolf Creek)
that was unique from the other 44 samples.  Four
samples were not included in the analysis because of
missing values for one or more state variables.  Two
of the clusters (C and D) and two intermediate sta-
tions (Genesee River at Wellsville and Avon) account
for the six post-storm sampling sites.  The remaining
 clusters (A1, A2, A3, A4, B, E and F) represent pre-
 storm sites.   The geographical distribution of these
 clusters is  shown in  Figure  4.  It is apparent that
 the  pre-storm clusters are areally compact, repre-
 senting  regions with  similar environmental conditions.
 Table 1  displays  land use, soil quality and bedrock
 geology  for  the seven pre-storm clusters.

 The  A clusters represent  a forested region (Hardy and
 Shelton,  1970) in slightly-coarse textured, moderate
 to somewhat poorly-drained acidic soils (Cline, 1961)
 underlain by  Upper Devonian  shales and siltstones
 (New York State Education Department, 1970).  Cluster
 B has bedrock geology and soils similar to the A
 clusters  but  there are some  agricultural areas in the
 watersheds of this cluster.   Clusters E and F repre-
 sent  agricultural  watersheds in lime-rich well-
 drained  soils.  The watersheds of cluster E are un-
 derlain  by Devonian shales and limestones while the
 watersheds of the  F cluster  are underlain by Upper
 Silurian  shales, dolostones  and evaporites.

 Examination of the geographic distribution of the
 Q-mode clusters (Figure 4) would dictate that the
 unique Wolf Creek  sample  be  a member of cluster B.
 Inspection of the  data from  Wolf Creek shows rela-
 tively high sodium (600 mg/l)  and chloride (1200
 mg/l).   The apparent  reason  for the uniqueness of
 this  site  is  the  existence of a large salt mining
 operation  in  the Wolf Creek  watershed (New York
 State  Department of Health,  1961).
Table 2 summarizes the average value and range of
three water quality parameters, chloride, nitrate-
nitrogen and alkalinity by Q-mode cluster.  Alka-
linity, chloride and nitrate are decidedly higher in
the agricultural clusters (E and F).  Figures 5
through 8 show the geographic distribution of chlo-
ride, nitrate-nitrogen, alkalinity and electrical
conductivity.  What causes these trends is most
probably a combination of land use, soil type and
bedrock geology.  An ongoing study  (New York State
Department of Environmental Conservation, 1974) is
attempting to determine which of these three factors
is most directly responsible for trends in stream
water quality in the Genesee River  Basin.

The R-mode analysis resulted in seven groups of water
quality variables (Figure 3):

     1)  Ca++, hardness
     2)  SO4, non-carbonate hardness
     3)  Dissolved solids, electrical conductivity
     4)  pH, water temperature, SiO3
     5)  HCO3, alkalinity
     6)  Stream discharge, total Fe, total Mn
     7)  Na+, Cl−

The composition of several of the R-mode clusters are
readily explainable, showing redundancy between state
variables.  For example, cluster 5  contains alka-
linity and HCO3, which is the major component of
alkalinity encountered at neutral pH.  Cluster 3 con-
tains electrical conductivity and dissolved solids,
the concentration of the latter being the major
factor in determining the electrical conductivity of
an aqueous solution.

Cluster 1 contains hardness and its major constituent
calcium.  Cluster 6 relates stream  discharge, iron
and manganese.  This cluster results from high iron
and manganese in post-storm samples only, possibly
related to increased levels of suspended sediment.
Cluster 7 probably results from the influence of
salt deposits (NaCl) in the watershed.  Cluster 2
                                                      684

-------
relates sulfate ion and non-carbonate hardness
(polyvalent dissolved cations  such  as strontium  and
zinc).  Cluster 4 consists of  pH, stream  temperature
and silicate.  The author can  offer no  simple ex-
planation for clusters 2 and 4.

Thus, cluster analysis can be  used  to examine spatial
variation in stream water quality data  from  the
Genesee watershed and has allowed classification of
seven groups of water quality  parameters.

The Mill Creek Watershed. 1975

Eighteen water quality parameters were  measured  at
Mill Creek, New York, for one  year  beginning March 1,
1975 (Hetling, Carlson and Bloomfield,  1976).  The
watershed has an area of about 25 km2 and land use
is evenly divided between agriculture and forest
(El-Baroudi, James and Walter,  1975).   Stream dis-
charge was gaged continuously  and one water  sample
was collected each morning.  Additional samples  were
collected during major runoff  events.

Cluster analysis was performed on a data  set con-
sisting of the morning samples collected  between
April 1, 1975 and October 31,  1975  (214 samples).
The Q-mode dendrogram is summarized in  Figure 9.   In
general, the samples representing the three  major
runoff events (April 4, August 8 and October 18)
form one cluster (J).  Each of these events repre-
sents a prolonged stream discharge of over 2.0 m3/sec.
Other clusters represent smaller runoff events
(D, H), recessional periods immediately following
events (A, B, C, E, G, I) or prolonged  periods of
drought (F).  These twelve unique Q-mode  clusters
account for slightly over one-third of  the total
number of samples.  The remaining samples form a
very diffuse cluster of similarity  greater than  95%.
These samples represent base flow conditions and
yield water quality information that is extremely
redundant.

The results of the Q-mode analysis  suggest more
frequent sampling during runoff events  with  fewer
samples during base flow.  Anomalous extended dry
periods should be sampled more frequently when
possible.


The R-mode dendrogram (Figure 10) shows three major
clusters of water quality variables:

1)  Dissolved organic carbon, dissolved kjeldahl
    nitrogen, chloride,  dissolved orthophosphate
    phosphorus,  total dissolved phosphorus,  ammonia
    nitrogen and nitrite nitrogen.

2)  Alkalinity,  electrical conductivity,  sulfate,
    pH,  dissolved oxygen and nitrate nitrogen.

3)  Particulate  organic carbon, particulate phosphorus,
    suspended solids,  particulate kjeldahl nitrogen
    and  stream discharge.

Cluster 1 represents dissolved  constituents which
exhibit  changes  in concentration during runoff events.
Cluster 2 also represents dissolved  constituents.
However,  these variables tend not to be as influenced
by runoff events.   The constituents  in cluster 3 are
extremely influenced by runoff  events.   Cluster 3  in-
cludes stream discharge and  four particulate (retained
on a 0.45 μm membrane filter) variables.  In summary,
the R-mode  analysis  indicates that  three subsets of
water quality variables can  be  defined based upon
their relationship to  runoff events.
                     Conclusions

Cluster analysis  can be used  to examine  spatial  and
temporal heterogeneity in  stream  water quality data.
It can also be used to check  for  redundancy among
water quality variables.

Analysis of water quality  data from  the  Genesee  River
Basin indicates that land  use and  geology  tend to  be
the most significant factors  in determining the
actual grouping of subwatersheds  within  the basin.
The use of cluster analysis on time-course data  from
Mill Creek, New York, reveals three  subsets of vari-
ables:  stable dissolved, variable dissolved and ex-
tremely variable particulate  constituents.   Q-mode
analysis of the Mill Creek data resulted in grouping
samples into runoff event, hydrograph recession  and
low flow groups.  This result tends  to agree with  the
results of Bouldin, Johnson and Lauer (1975) and
Hetling, Carlson  and Bloomfield (1976) that runoff
event-oriented and not fixed-interval sampling yields
more reliable data on stream  water quality.

                  Acknowledgements

The author wishes to acknowledge Dr. L.J.  Hetling,
Mr. G.A. Carlson, and personnel from the New York
State Department of Environmental  Conservation, who
helped in review  and preparation of  this manuscript.
Dr. R.A. Park of Rensselaer Polytechnic  Institute
supplied the computer programs used  in this paper.
This work was in part supported by the Environmental
Protection Agency Grant No. R005144-01 in cooperation
with the International Joint  Commission.


                    Bibliography

Bloomfield, J.A., 1972, Diatom Death Assemblages as
  Indicators of Environmental Quality in Lake  George,
  New York.  Unpublished Masters Thesis,  Rensselaer
  Polytechnic Institute, Troy, N.Y., 86 pp.

Bouldin, D.R., A.H. Johnson and D.A. Lauer, 1975, The
  Influence of Human Activity on the Export of Phos-
  phorus and Nitrate from  Fall Creek.  In  "Nitrogen
  and Phosphorus, Food Production, Waste and the
  Environment", K.S.  Porter (Editor), Ann  Arbor
  Science Publishers,  Inc., Ann Arbor, Mich.,
  pp 61-120.

Bray, J.R. and J.T. Curtis, 1957,   An Ordination of
  the Upland Forest Communities of Northern Wisconsin,
  Ecol. Mon., (27), No. 4, pp 325-349.

Cairns, J., R.L. Kaesler and R.  Patrick,  1970,
  Occurrence and Distribution of Diatoms and Other
  Algae in the Upper Potomac River, Notulae Naturae,
  (436), pp 1-12.

Cline,  M.G., 1961, Soil Association  Map of  New York
  State, Cornell University,  Ithaca, N.Y.

Del Prete, A., 1972,  Postglacial Diatom Changes in
  Lake George, N.Y., Unpublished Doctoral Dissertation,
  Rensselaer Polytechnic Institute,  Troy,  N.Y.,110 pp.

El-Baroudi, H., D.A.  James and K.J. Walter,  1975,
  Inventory of Forms of Nutrients  Stored in  a  Water-
  shed, Rensselaer Polytechnic Institute,  Troy, N.Y.,
  209 pp.

Fortier, J.J. and H.  Solomon, 1966,  Clustering
  Procedures in Multivariate  Analysis, Academic Press,
  New York, N.Y., pp 493-506.
                                                      685

-------
Gevirtz, J.L., R.A. Park and G.H. Friedman, 1971,
  Paraecology of Benthonic Forminifera and Associated
  Microorganisms of the Continental Shelf off Long
  Island, New York.  Jour. Paleontology, (45), No. 2,
  pp 153-177.

Harbaugh, J.W. and D.F. Merriam, 1968, Computer
  Applications in Stratigraphic Analysis, John Wiley
  and Sons, New York, N.Y., 282 pp.

Hardy, E.E. and R.L. Shelton, 1970, Inventorying
  New York's Land Use and Natural Resources, New York
  Food and Life Sciences, (3), October-December,
  Cornell University, Ithaca, N.Y.

Hetling, L.J., G.A. Carlson and J.A. Bloomfield,
  1976, Estimation of the Optimal Sampling Interval
  in Assessing Water Quality of Streams, in prepa-
  ration.

King, B., 1967, Stepwise Clustering Procedures, Jour.
  Am. Stat. Assoc., (62), pp 79-85.

McCammon, R.B., 1968, Multiple Component Analysis
  and its Application in Classification of Environ-
  ments, Am. Assoc. Petroleum Geologists Bull., (52)
  No. 11, pp 2178-2196.

New York State Department of Environmental Conserva-
  tion, 1974, Genesee River Watershed Study, Des-
  cription and Detailed Work Plan, 36 pp.

New York State Department of Health, 1961, Upper
  Genesee Drainage Basin, Genesee River Drainage
  Basin Survey Series, Report Number 2, 219 pp.

New York State Education Department, 1970, Geologic
  Map of New York State, four sheets.

Park, R.A., 1968, Paleoecology of Venericardia sensu
  lato (Pelecypoda) in the Atlantic and Gulf Coastal
  Provinces:  An Application of Paleosynecological
  Methods, Jour. Paleontology, (42), No. 4,
  pp 955-986.

Park, R.A., 1974, A Multivariate Analytical Strategy
  for Classifying Paleoenvironments, Int. Assoc.  for
  Math. Geo. Jour., (6), No. 4

Sokal, R.R. and P.M.A. Sneath, 1963, Principles of
  Numerical Taxonomy, W.H. Freeman and Co.,
  San Francisco, Ca.,  359 pp.

Sorensen, T., 1948, A Method of Establishing Groups
  of Equal Amplitude in Plant Sociology Based on
  Similarity of Species Content and its Application
  to Analyses of the Vegetation on Danish Commons,
  Biol. Skr. (5), No.  4, pp 1-34.

United States Department of the Interior, 1973,  Water
  Resources Data for New York for 1972, Part 2,  Water
  Quality Records,  262 pp.
                     Key to Figure 1

  Watersheds  sampled  prior to Hurricane Agnes

       1)   Spring Creek (Pumpkin Hill)
       2)   Black Creek (Churchville)
       3)   Hotel Creek
       4)   Mill  Creek (West Chili)
       5)   Spring Creek (Mumford)
       6)   Pearl Creek
       7)   Stony Creek
       8)   Warner Creek
       9)   Trout Brook
     10)   Wolf  Creek
     11)   Beards Creek
     12)   Jaycox Creek
     13)   Christie Creek
     14)   White Creek (Canawaugus)
     15)   Dugan Creek
     16)   Honeoye Creek
     17)   Spring Brook
     18)   Mill  Creek (Honeoye  Park)
     19)   Bradner Creek
     20)   Stony Brook
     21)   Sugar Creek
     22)   Ewart Creek
     23)   Cold  Creek
     24)   Rush  Creek
     25)   Crawford Creek
     26)   Wigwam Creek
     27)   White  Creek (Belfast)
     28)   Baker  Creek
     29)   Black  Creek (Benetts)
     30)   Phillips Creek
     31)   Knight  Creek
     32)   Vandermark Creek
     33)   Brimmer Brook
     34)   Elm Valley Creek
     35)   Railroad Brook
     36)   East Valley Creek
     37)  Dyke  Creek
     38)  Quig Hollow Brook
     39)   Ford Creek
     40)  Marsh Creek
     41)   Chenunda Creek
     42)  Cryder Creek

Streams sampled  after Hurricane  Agnes

     43)  Genesee River  (Rochester)
     44)  Genesee River  (Avon)
     45)  Genesee River  (Mount Morris)
     46)  Canaseraga  Creek (Dansville)
     47)  Genesee River  (Portageville)
     48)  Genesee River  (Wellsville)
                                                      686

-------
                            TABLE 1

 DESCRIPTION OF LAND USE, SOILS DATA AND BEDROCK GEOLOGY FOR THE
                 SEVEN PRE-STORM Q-MODE CLUSTERS

 Q-mode    Number of    Dominant                                             Soils Data
 Cluster   Watersheds   Land Use                             Texture           pH                Drainage

 A1        6            Forest - 6                           Slightly coarse   Acidic            Somewhat poor
 A2        5            Forest - 4; agriculture              Coarse            Acidic            Moderate
                        and forest - 1
 A3        3            Forest - 3                           Slightly coarse   Slightly acidic   Somewhat poor
 A4        5            Forest - 5                           Slightly coarse   Acidic            Moderate
 B         4            Agriculture - 2; agriculture         Slightly coarse   Acidic            No trend
                        with some forest - 2
 E         8            Agriculture - 8                      Fine              Basic             Well-drained
 F         4            Agriculture - 4                      No trend          Slightly basic    Well-drained

 Bedrock Geology:
 A1, A3, A4:  Upper Devonian shale and siltstone
 A2:          Upper Devonian shale and siltstone with some sandstone
 B:           Upper Devonian shale and sandstone with some siltstone
 E:           Middle Devonian shale and limestone with some upper Devonian shale and limestone
 F:           Upper Silurian shale, dolostone, salt and gypsum


                            TABLE 2

                       AVERAGE CHEMISTRY

 Q-mode     Chloride (mg/l)      Nitrate-Nitrogen (mg/l)     Alkalinity (mg/l)
 Cluster    Avg      Range       Avg       Range             Avg      Range

 A1         21.1     1.9-81      0.03      .01-.07            64      54-80
 A2          9.9     3-26        0.16      .01-.30            58      34-74
 A3          3.4     2.2-5       0.01      .01-.20            93      83-99
 A4          4.8     1.7-9.8     0.18      .04-.30            26      16-32
 B          11.7     5.7-20      1.23      .07-1.80           119     97-128
 E          44.5     27-72       0.84      .02-2.70           204     128-245
 F          48.8     36-69       0.98      .70-1.30           234     194-291
                              687

-------
       Figure 1.   Location of Sub-Watersheds Sampled During June, 1972

       FIGURE 2.   Q-MODE DENDROGRAM OF GENESEE DATA.

       FIGURE 3.   R-MODE DENDROGRAM OF GENESEE DATA.

       Figure 4.   Geographic Distribution of Q-Mode Clusters
                                                                688

-------
Figure 5.  Geographic Distribution of Chloride (mg/l)

Figure 6.  Geographic Distribution of Dissolved NO3-N

Figure 7.  Geographic Distribution of Alkalinity (mg/l)

Figure 8.  Geographic Distribution of Electrical Conductivity (μmhos/cm)
                                              689

-------
FIGURE 9.   Q-MODE DENDROGRAM OF MILL CREEK DATA

FIGURE 10.  R-MODE DENDROGRAM OF MILL CREEK DATA
                                                        690

-------
                               APPLICATION  OF  PATH  ANALYSIS  TO  DELINEATE  THE

                             SECONDARY  GROWTH EFFECTS  OF  MAJOR LAND  USE  PROJECTS

                                     Tom  McCurdy,  Environmental  Planner
                     Strategies and Air Standards  Division,  EPA,  Durham, North  Carolina

             Frank  Benesh,  Environmental  Planner             Peter  Goldberg, Senior  Scientist
                                 Walden Research,  Cambridge,  Massachusetts

                                           Dr.  Ralph D'Agostino
                               Professor of Mathematics, Boston University
                      Abstract

    This  paper  presents  a  path  analytic  modeling  pro-
cess used  to  test  various factor theories of induced
urban  development.  The original  and  final  "trimmed"
path models are  discussed,  as well  as statistical
problems associated with  using  path analysis to des-
cribe  a non-recursive  system.

                    Introduction

    This  path  analysis effort  is a part  of an Envi-
ronmental   Protection  Agency (EPA)  project entitled
"Growth  Effects  of Major  Land Use Projects", or
GEMLUP for short.
    The main purpose  of  GEMLUP is  to formulate a  non-
proprietary statistical methodology to predict air
pollution  emissions  from  (1) two major land use devel-
opment types—large  places  of employment  and large
residential  projects,* (2)  secondary  development that
is induced**  by the  major project,  and (3) motor ve-
hicular  traffic  associated  with both  kinds of urban
development.   Subsidiary  purposes are to  formulate and
test a factor theory of induced development using
path analysis and  to  generate and apply land use
oriented  emission  factors based on  current energy con-
sumption data.1
     GEMLUP relates  to a  number of  EPA programs, in-
cluding  air quality  maintenance areas (AQMA) planning2,
environmental impact statement (EIS) review,3 the
indefinitely suspended portions of indirect source re-
view,4 and the prevention of significant air quality
deterioration,  or  non-degradation.5  Explicit or im-
plicit in  these programs  is an  evaluation of air qual-
ity impacts of land  use plans or project  developments,
but the Agency does  not provide, specify, or recommend
a method  for  evaluation in  any  of the programs.  GEM-
LUP is designed to formulate and test a method of
evaluating land use  impacts at  the  project level.
                Theory and Approach
     Of the many types of scientific theory discussed
in the literature,*** the most rigorous type of theory
     *Definitions of the project types investigated in
 GEMLUP are:
 1.  Place of employment: an office building or complex,
 an  industrial building or complex, or a research and
 development building or complex constructed between
 1954 and 1964 and having a minimum employment of
 2,250 persons within five years of initial operation.

 2.  Residential project: an apartment structure or
 complex, residential subdivision, planned unit devel-
 opment, or new town constructed between 1954 and 1964
 and having a minimum population of 4,500 persons with-
 in  five years of initial occupancy.
    **Induced development is land use development
 caused by, or constructed because of, the major land
 use project.
   ***See references 6-10.
                                       that we could subscribe to is "factor theory," which is
                                      characterized by narrow and non-overlapping generali-
                                      zations, a selective, explicit enumeration of (all)
                                      factors thought to influence a given phenomenon,  and
                                      the utilization of empirically defined variables  to
                                      represent the factors involved.   While almost every
                                      effort at causal explanation involves factor theory, it
                                      is limited theory because it does not readily suggest
                                      other generalizations, due to its relatively narrow  fo-
                                      cus6.  Consequently,  we had to operationalize the fac-
                                      tor theory by using a model.  The next step was then
                                      clear: formulate a theory of induced development  and
                                      choose a model  to test it.

                                      A Theory of Induced Development

                                           Taking the industrial/offices major land use pro-
                                      ject type as the more general  case of the two types
                                      investigated, we devised the following theory of  in-
                                      duced development.*

                                           Constructing a large source of employment like  an
                                      industrial/office complex generates jobs which result
                                      in the nearby construction of dwelling units; these  in-
                                      duce retail development to locate near them and gener-
                                      ate demand for community, cultural, and religious faci-
                                      lities (schools, recreation areas, libraries, churches,
                                      theaters, fire and police stations, etc.).   All of this
                                      requires the construction of streets and highways that
                                      then improve accessibility to the area.  Better access
                                      fosters continued urban development, particularly high-
                                      way-oriented commercial  and office land uses.  Addition-
                                      al sources of employment come into the area as secondary
                                      (and tertiary)  industry or services located near  the
                                      original major project,  spurring on another round of
                                      residential development, and so forth.  This can  be
                                       summarized as:

                         induced land use = f(size of major project,
                                              other endogenous variables,
                                              other exogenous variables)   (1)

where the other endogenous variables are  the other in-
duced land uses in the model  and the other exogenous
variables are vacancy rates,  accessibility measures,
etc. which affect the influence of the major project on
induced development.

     That is what was hypothesized to happen.  As can
be noted, feedback in the system was explicitly hypo-
thesized.  Our theory also specified that within a
10,000 acre circular "area of influence"  centered on
the original major land use project, all  of the above
development would occur within ten years  after the
employment source opened.  Our rationale  for the where
and when will have to be discussed elsewhere.11 The
structural equations and path analytic diagrams dis-
cussed below rigorously depict the model  used to test
the theory; however, we still  must explain why we chose
path analysis as the modeling methodology in the first
place.	
  *The  theory  is  not entirely  new.   Most  of  the  urban
development models  referenced in  the  next  section are
based  upon  the  same general relationships  posited in our
theory,though they  are  usually  less  explicit than ours.
                                                       691

-------
 Analytic Approach

      Part of the reason for using path analysis to
test our theory was programmatic: we did not have time
or money to do more.  For instance, the highly detailed
deterministic approaches patterned after Lowry12 or
Forrester13 or others working in the same vein were
simply infeasible.  They  were also inappropriate  be-
cause of their concern with highly aggregated regional
development and/or long planning horizons.   Because of
this and the problems associated with deterministically
modeling a social system, it  was decided to utilize a
statistical or probabilistic  approach.
     It was also obvious  that a dynamic  modeling  ap-
proach was infeasible because of the effort involved  in
obtaining longitudinal data to incorporate time into
the system and in solving the simultaneous differential
equations "involved.  For  similar reasons, a difference
equation approach (i.e. predicting the change in  land
use over the ten year period) was also infeasible.
The static approach to testing our inherently change-
oriented theory  is justified  by three factors: (1) our
theoretical assumption that induced development follows
a single basic causal structure for all cases or obser-
vations,  (2) the use of a cross-sectional method  of ob-
taining data for variables observed in a static state
and the assumption that input variables  were initial-
ized at time 0 and held constant long enough so that
all the causal consequences in the system were re-
alized, and  (3)  the use of certain time-lagged exoge-
nous variables in the system.  The conceptual useful-
ness of these factors in testing causal  theory is well
described in Heise19 and Blalock.20

     Conceptually, the total  land use in the 10,000
acre area of influence at the end of the ten year time
period can be defined as  three components:
     total land use(t+10) = (prior land use(t))
        + (project induced land use(t, t+10))
        + (non-project induced land use(t, t+10))       (2)
 The  prior land use in time t is the amount of land use
 existing in the year of the initial occupancy of the
 major  project.  Non project induced land use in the
 period  t to t+10  is the amount of land use growth,
 t  to t+10, that occurred in the area of influence but
 was  not induced by the major project.  The non project
 induced land use  includes growth due to regional ex-
 pansion and random effects.

     The selection of a cross sectional approach limi-
 ted  our model to  the prediction of the total land use
 in the year t+10.  Consequently, the basic structure of
 our  model is a series of simultaneous equations of the
 following form:
      total land use(t+10) = f(Type I variables,
                               Type II variables)       (3)
 where Type  I variables are predictors of the induced
 portion of  the total land use  (see Equation 1) and Type
 II variables are predictors of the prior land use in
 year t and  non project induced land use in the period
 t to t+10.

     Finally, there was not enough existing information
 on project-level induced development to be able to de-
 fine the  form of the relationships in the system.  The
 form, therefore, was assumed to be linear.  This is not
a bad a priori approximation in most social science
applications, and it allows the use of well developed
statistical techniques.  There is accumulating evi-
dence that many social systems can be approximated by
a linear function as long as operating conditions re-
main fairly stable.19  Even complex non-linear rela-
tionships can often be approximated by a constant re-
lation in discrete subregions of the relationship.21
Also, if a relationship is thought to be nonlinear on
theoretical grounds (e.g., multiplicative, exponential),
the data can be transformed prior to entering it in
the linear analysis.22
________________
      Hill [14], Seidman [15], and Center for Real Es-
 tate and Urban Economics [16].  In addition to these
 general or comprehensive models, there are many single-
 sector models that have been developed since the early
 1960's.  See references 17 and 18 for a review of ur-
 ban development models.

     For all of these reasons, the path analytic tech-
nique based on multiple regression analysis  seemed
appropriate to test our theory of induced development.
Path analysis was developed by biologist  Sewall Wright
in the 1920's as a technique  for examining observed
interrelated variables that are assumed to be comple-
tely determined by exogenous  variables.

     It is not capable of deducing causal  rela-
     tions from available quantitative information
     (viz., correlation coefficients), but rather
     it is intended to combine this quantitative
     information with qualitative information that
     is available on the causal relations to give
     a quantitative interpretation.   It is a
     technique useful  in testing theories rather
     than in generating them and it can be used
     to study the logical  consequences of various
     hypotheses involving causal  relations.  In
     order to implement the technique, the re-
     searcher must make explicit a theoretical
     framework or model.22

     Use of the technique requires two assumptions
about causality in the system: (1) a weak causal order
exists among the variables and it is  known, and (2)
relationships among variables are causally closed.
Weak causal ordering exists in a two-variable set, Xi
and Xj, if it is known on logical, empirical, or
theoretical grounds that Xi may (or may not) affect Xj
and that Xj cannot affect Xi.  Causal closure is simply
the concept that given a bivariate covariation between
Xi and Xj and weak causal ordering (Xi -> Xj), the ob-
served covariation between the two variables must be
due to the causal dependence of Xj on Xi, their mutual
dependence on some outside variable(s), or a combina-
tion of these two factors.23

     Path diagrams (see below) depict the hypothesized
causal  relations among variables.   Causality  is shown
by a single-headed arrow (or path) and interaction
(correlation) between  two variables is shown  by a cur-
ving double-headed arrow.  A coefficient Pij is asso-
ciated with each path and can be interpreted as a re-
gression coefficient; that is, the amount of change in
the dependent variable caused by a one unit change in
the independent variable with all other independent
variables held constant.  The coefficient on the inter-
action arrows is the correlation coefficient Rij.  All
of the statistical  assumptions of regression  analysis
(e.g.,  independence of observations,  uncorrelated re-
siduals, a normal  distribution of means of sample data)
apply to path analysis as  well.
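
     To make this interpretation concrete, the following sketch (not from the
paper; the simple two-equation recursive model and variable names are
hypothetical) estimates path coefficients as ordinary least squares
coefficients computed on standardized variables:

# Minimal sketch of path-coefficient estimation for a simple recursive
# model  X1 -> X2 -> X3  (hypothetical data; not the GEMLUP model).
# Path coefficients are the regression coefficients of each endogenous
# variable on its hypothesized causes, computed on standardized data.
import numpy as np

rng = np.random.default_rng(0)
n = 20                                           # sample size comparable to the study
x1 = rng.normal(size=n)                          # exogenous variable
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)    # endogenous, caused by x1
x3 = 0.6 * x2 + rng.normal(scale=0.5, size=n)    # endogenous, caused by x2

def standardize(v):
    return (v - v.mean()) / v.std(ddof=1)

def path_coefficients(y, *causes):
    """OLS of standardized y on standardized causes; returns the Pij's."""
    Z = np.column_stack([standardize(c) for c in causes])
    coef, *_ = np.linalg.lstsq(Z, standardize(y), rcond=None)
    return coef

p21 = path_coefficients(x2, x1)                  # path X1 -> X2
p32 = path_coefficients(x3, x2)                  # path X2 -> X3
print("p21 =", p21, "p32 =", p32)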

     Path analysis has been used  at least once before
in environmental modeling.  Researchers at Argonne  Na-
tional  Laboratories chose the technique to causally re-
late four independent  variables (land area, number  of
employees, process weight rate, and energy use)  to  a
dependent variable, regulated point source emissions of
particulate air pollution.24  While the authors were
unhappy because land area  and number of employees did
not relate well  to the dependent variable, the rela-
tionships depicted in  the system are logical  and cer-
tain of the path coefficients are significant.
                                                       692

-------
                 Model  Specification

     The theory of induced development was translated
into  two models, a residential  project model  and an
industrial/office project model.  Due to the limited
length of this paper, only the industrial/office model
will  be discussed.  The specification and testing of
the residential model was undertaken in a similar
manner.
     The industrial/office project model was specified
as 12 equations, 5 of which are simultaneous and the
remaining 7 are recursive.  The path diagram of the
model is shown in Figure 1.  The definition of each
variable is shown in Table 1.  The theoretical basis
of each path is discussed elsewhere.25

                 Methodological  Issues

     Based  on  certain criteria,  the most important
criterion being the  minimum size of major project
(see  page 1),  a sample  of twenty case  studies  of each
major project  type was  selected.   The  size of  the
sample was  limited by available  resources (over 7%  man-
months was  required  to  collect  data for the present
sample).   This small  sample size created two  important
methodological  problems.

Degrees of  Freedom in First Stage of Two Stage Least
Squares

     While  one would prefer as  large a  sample  as pos-
sible, the  use of a  sample of twenty in an ordinary
least squares  regression  with four or  five independent
variables,  such as in the seven  recursive equations of
our model,  is  not uncommon.   However,  in the  specifica-
tion of our model there is a simultaneous block of  five
equations.   In this  situation it would be inappropriate
to use ordinary least squares (OLSQ),  as such a tech-
nique would produce biased and inconsistent estimates.
Accordingly, we made use of the two stage least squares
(2SLS) technique to estimate the structural coeffi-
cients in the five simultaneous equations.  As the
first stage of 2SLS estimates the right hand side endo-
genous variables with all the exogenous variables in
the simultaneous block, the degrees of freedom in the
first stage of the 2SLS in our analysis is zero.
(There are twenty exogenous variables).

     Our approach to solving this problem was to  use
stepwise OLSQ to delete those instrumental variables
that were insignificant in the first stage of the 2SLS.
First, one instrumental variable for each endogenous
variable was identified as having the strongest a
priori causal influence.  These five variables, in ad-
dition to the major project size, were entered in the
first step of a stepwise OLSQ predicting each endoge-
nous variable.  (This was done to insure identifica-
tion.)  Then the others were allowed to enter in the
order of the significance of their added partial con-
tributions.  The seven instrumentals that overall were
of least significance were deleted.  This left 6 de-
grees of freedom (20-1-13=6) for the first stage of the
2SLS.
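
     A bare-bones numerical sketch of this two-stage procedure (hypothetical
data and a single structural equation, not the GEMLUP system) is:

# Two-stage least squares sketch for one structural equation
#   y1 = b0 + b1*y2 + b2*x1 + e ,  where y2 is endogenous and z1, z2
# are excluded exogenous variables used as instruments (all hypothetical).
import numpy as np

rng = np.random.default_rng(1)
n = 20
x1 = rng.normal(size=n)                           # included exogenous variable
z1, z2 = rng.normal(size=n), rng.normal(size=n)   # instruments
u = rng.normal(size=n)
y2 = 1.0 + 0.7 * z1 - 0.4 * z2 + 0.3 * x1 + u     # endogenous regressor
y1 = 2.0 + 0.5 * y2 + 0.8 * x1 + rng.normal(size=n) + 0.5 * u

def ols(y, X):
    """Ordinary least squares with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X @ beta

# Stage 1: regress the endogenous regressor on all exogenous variables.
_, y2_hat = ols(y2, np.column_stack([x1, z1, z2]))

# Stage 2: replace y2 by its fitted values and estimate the structural eq.
beta2, _ = ols(y1, np.column_stack([y2_hat, x1]))
print("structural coefficients (intercept, y2, x1):", beta2)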

Stability of Estimated Coefficients

     In order for path analysis to be of use the  re-
sulting estimates should be stable.   They will be of
little use if they change radically depending on  sample
observations.   In most applications of path analysis,
this is achieved by using large samples and reliable
measuring instruments.  In our application, with  a sam-
ple of twenty, the stability of the estimates must be
explicitly addressed.

     Our approach to this problem involved the use of
the statistical technique of the "jackknife."  Instead of
running one regression using twenty observations, 20
Figure 1. ORIGINAL SPECIFICATION OF MODEL
                                                       693

-------
                      Table 1
            MODEL VARIABLES AND DEFINITIONS

RES = number of housing units in area of influence in
      1970 (excluding major project).
COMM = commercial land use in area of influence in 1970
       in 1,000 square feet.
OFFICE = office land use in area of influence (exclu-
         ding major project) in 1970 in 1,000 sq. feet.
MANF = manufacturing land use in area of influence (ex-
       cluding major project) in 1970 in 1,000 square
       feet.
WHOLE = wholesale/warehouse land use in area of in-
        fluence in 1970 in 1,000 square feet.
HOTEL = hotel and motel land use in area of influence
        in 1970 in 1,000 square feet.
HOSPTL = hospital, etc. land use in area of influence
         in 1970 in 1,000 square feet.
CULTUR = cultural land use in area of influence in 1970
         in 1,000 square feet.
CHURCH = religious land use in area of influence in
         1970 in 1,000 square feet.
ED = public educational land use in area of influence
     in 1970 in 1,000 square feet.
REC = active outdoor recreational land use in area of
      influence.
HWLM = highway lane miles in area of influence in 1970.
DU-ACRE = dwelling units per acre in area of influence
          in 1960.
VACACR = percent vacant developable acreage in area of
         influence in year (t + 0).
VACHSG = percent vacant housing in area of influence
         in 1960.
HWYINT = highway interchanges in area of influence in
         year (t + 5).
MINCC = median income of families and individuals in
        area of influence relative to U.S. median
        income in 1960.
INCMP = variable indicating the median income level of
        major project compared to surrounding community
        in year (t + 2).
OFFVAC = percent office buildings vacant in metropoli-
         tan area in year (t + 0).
OFFACR = office employment per acre in area of influence
         in year (t + 0).
DISCBD = distance from center of major project to CBD
         in year (t + 0).
ENERGY = cost factor for electricity ($/1500 KWH) for
         commercial users in the metropolitan area in
         year (t + 0) divided by the average U.S.
         commercial rate in 1960.
RRMI = railroad mileage in area of influence in year
       (t + 0).
WWEA = warehouse and wholesale employment per acre in
       area of influence in year (t + 0).
EMPACR = total employment per acre in area of influ-
         ence in year (t + 0).
NONHSE = nonhousehold population per acre in area of
         influence in 1960.
MPKIDS = school-age children per dwelling unit in major
         project in year (t + 2).
ENRACR = public school enrollment per acre in area of
         influence in 1960.
MANACR = manufacturing employment per acre in area of
         influence in year (t + 0).
DELPOP = growth factor for total regional population
         between 1960 and 1970 (county data).
DELEMP = growth factor for total regional employment
         between 1960 and 1970 (county data).
MINCR = median income of the region in year (t + 0)
        relative to the median U.S. income in 1960.
MAJOR PROJECT = number of employees in major project
                in 1970, 1968, t + 2.
AUTO = automobile drivers per acre in county in 1960.
regressions are run, each containing nineteen observa-
tions.  The path coefficients are then examined for
stability.  In our analysis, after trimming the model
to a final set of path coefficients, each equation was
subjected to a jack-knife.  The brevity of this paper
does not permit the presentation of this analysis.   It
is discussed elsewhere.11
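
     The mechanics of the procedure can be sketched as follows (hypothetical
bivariate data; the study applied the same leave-one-out idea to each trimmed
equation):

# Jackknife sketch: refit an OLS coefficient n times, leaving out one
# observation each time, and examine the spread of the estimates.
import numpy as np

rng = np.random.default_rng(2)
n = 20
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(scale=0.8, size=n)   # hypothetical relation

def slope(x, y):
    X = np.column_stack([np.ones(len(x)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b[1]

estimates = []
for i in range(n):                            # 20 regressions of 19 observations
    keep = np.arange(n) != i
    estimates.append(slope(x[keep], y[keep]))

estimates = np.array(estimates)
print("full-sample slope:", slope(x, y))
print("leave-one-out range:", estimates.min(), "to", estimates.max())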

                 Path Analysis of Model

      Our approach to path analysis was to develop the
most elaborate system possible, given the sample size,
and then after estimating the involved path coeffi-
cients, refine or trim the system by dropping those
paths that have coefficients that are "close to zero".
The original model was thus refined or trimmed and
thereby made more parsimonious.

      Specifically, we retained a specific path if
  - its t value exceeded unity in absolute value.  (This
guaranteed that the adjusted R2 is larger with its in-
clusion than without its inclusion for OLSQ.)
  - its beta weight exceeded .1 in absolute value.  This
was judged to reflect a substantially meaningful rela-
tion, and
  - it was deemed a priori to be of substantive impor-
tance and its sign (i.e., the sign of the coefficient
b) was of the expected direction.

It is important to note that our trimming procedure was
an iterative process; at each step the remaining path
coefficients were examined to see how the deletion of
one path coefficient affected our ability to reproduce
the original observed correlations.  This is particular-
ly critical in 2SLS, where the deletion of one exoge-
nous variable in one equation can affect another equa-
tion because of its deletion as an instrumental variable.
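
     One pass of the two numeric retention tests can be sketched as below
(hypothetical data; the a priori judgment in the third criterion is left to
the analyst):

# One pass of the path-trimming test: compute t values and beta weights
# for an OLS equation and flag paths that fail both numeric criteria.
import numpy as np

rng = np.random.default_rng(3)
n = 20
X = rng.normal(size=(n, 3))                       # three candidate causes
y = 0.9 * X[:, 0] + 0.05 * X[:, 2] + rng.normal(size=n)

Xd = np.column_stack([np.ones(n), X])             # add intercept
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
sigma2 = resid @ resid / (n - Xd.shape[1])        # residual variance
cov = sigma2 * np.linalg.inv(Xd.T @ Xd)
t_values = beta / np.sqrt(np.diag(cov))
beta_weights = beta[1:] * X.std(axis=0, ddof=1) / y.std(ddof=1)

for j in range(X.shape[1]):
    keep = abs(t_values[j + 1]) > 1.0 or abs(beta_weights[j]) > 0.1
    print(f"path {j}: t={t_values[j+1]:.2f}, beta={beta_weights[j]:.2f},",
          "retain" if keep else "candidate for trimming")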

      Following the above procedure, the original path
diagram was trimmed to the model shown in Figure 2.  The
number on each path is the path coefficient; the number
inside the box of an endogenous variable is the R2 of
the equation predicting that variable.

                   Summary of Results

      In general, the trimmed path model is a success-
ful test of our theory of induced development.  The
size of the major project in time t+10 was specified as
an exogenous variable in the residential, commercial,
office, manufacturing, highway facilities, hotel/motel,
and hospital equations.  Though it was trimmed from the
 hospital  and  highway  facilities  equations,  its causal
 influence in  the  remaining  equations  was  substantial.

       Additionally,  the  causal analysis  of  the model
 leads us to be optimistic about the model's subsequent
 calibration.   Pending  the calibration and a  possible
 validation study,  our preliminary assessment  is that
 the static cross-sectional  modeling approach  can  be
 used  to  predict the induced land  uses from  the construc-
 tion  and  operation of a  major  land use development.

                        References
 1. Thomas McCurdy,"Request for Proposal: Growth Effects
 of Major Land Use Projects"(RFP #DU-75-C181,Dec.23,1974).
 2. 40 Federal Register 49048 (October 20, 1975),
    40 Federal Register 41941 (September 9, 1975),
    40 Federal Register 25814 (June 19, 1975),
    40 Federal Register 23746 (June 2, 1975),
    40 Federal Register 18726 (April 29, 1975),
    39 Federal Register 16343 (May 8, 1974),
    38 Federal Register 15834 (June 18, 1973),
    38 Federal Register  9599 (April 18, 1973), and
    38 Federal Register  6279 (March 8, 1973).
 3. U.S. Environmental Protection Agency. Review of Fed-
 eral Actions Impacting the Environment. Washington, D.C.:
 EPA, 1975 (Manual TN2/3-1-75).
                                                        694

-------
    Office of Federal Activities, U.S. Environmental Pro-
tection Agency. Guidelines for Review of Environmental
Impact Statements; Volume I: Highway Projects. Washing-
ton, D.C.: EPA, 1973. (Volume II on Airports and Volume
III on Stream Channelization will be published shortly.)
   39 Federal  Register 16186 (May 7, 1974).
    Office of Air Quality Planning and Standards, U.S.
EPA. Guidelines for Preparing Environmental Impact
Statements. Research Triangle Park, N.C.: OAQPS, May 1975.
4.  40 Federal Register 28064 (July 3, 1975),
    39 Federal Register 45014 (December 30, 1974),
    39 Federal Register 25292 (July 9, 1974),
    39 Federal Register  7270 (February 25, 1974), and
    38 Federal Register 15834 (June 18, 1973).
5.  40 Federal Register 25004 (June 12, 1975),
    39 Federal Register 42510 (December 5, 1974),
    39 Federal Register 31000 (August 27, 1974),
    38 Federal Register 18986 (July 16, 1973), and
    37 Federal Register 23836 (November 9, 1972).
 6. Eugene J. Meehan. The Theory and Method of Political
 Analysis. Homewood, Ill.: The Dorsey Press, 1965.
 7. Ernest Greenwood, "The Relationship of Science to
 the Practice Professions," Journal of the American In-
 stitute of Planners (1958) 28: 223-232.
 8. May Brodbeck. Readings in the Philosophy of the So-
 cial Sciences.  New York: The Macmillan Company, 1968.
 9. Abraham Kaplan. The Conduct of Inquiry. San Fran-
 cisco: Chandler Publishing Company, 1964.
 10. Karl Popper. The Logic of Scientific Discovery.
 New York: Harper and Row Publishers, 1959.
 11. Thomas McCurdy and Frank Benesh, "A Causal Analysis
 of  Induced Development"  (in preparation).
 12. Ira Lowry. A Model of Metropolis. Santa Monica,
 Calif.: The RAND Corporation, 1964 (RM-4035-R6).
 13. Jay Forrester. Urban Dynamics. Cambridge, Mass.:
 The MIT Press, 1969.
 14. Donald Hill,"A Growth Allocation Model for the Bos-
 ton Region" Journal of the American Institute of Plan-
ners (1965) 31: This is the EMPIRIC model.
15. D.R. Seidman. The Construction of an Urban Growth
Model. Philadelphia: Delaware Valley Regional Planning
Commission, 1969. This is the Penn-Jersey model.
16. Center for Real Estate and Urban Economics. Jobs,
People, and Land:Bay Area Simulation Study.Berkeley:
U. of CA.,1968(Special Rpt. #6).This is the BASS model.
17. James C. Ohls & Peter Hutchinson,"Models in Urban
Development," pp. 165-200 in: Saul I. Gass & Roger L. Sis-
son (eds.), A Guide to Models in Governmental Planning and
Operations. Potomac, Md.: Sauger Books, 1975.
18. Office of Air Quality Planning & Standards.Guide-
lines for Air Quality Maintenance Planning & Analysis;
Vol. 4: Land Use & Transportation Considerations. Re-
search Triangle Park, N.C.: U.S. EPA, 1974 (EPA-450/4-74-004).
19. David R. Heise. Causal Analysis. Chapel Hill, N.C.:
Unpublished manuscript, 1974.
20. Hubert M. Blalock.Jr. Theory Construction. Engle-
wood Cliffs, N.J.: Prentice-Hall, 1969.
21. William S. Meisel & David C. Collins, "Repro-Model-
ing: An Approach to Efficient Model Utilization & Inter-
pretation," IEEE Transactions on Systems, Man, and Cyber-
netics (1973) 3:349-358.
22. Ralph D'Agostino, "Path Analysis." Boston: Unpublished, 1975.
23. Norman Nie et al. Statistical Package for the So-
cial Sciences, McGraw Hill, 2nd ed., 1975.
24. Thomas E. Baldwin and Allen S. Kennedy.  "The Fea-
sibility of Predicting Point Source Emissions Using In-
dustrial Land Use Variables: A Path Analysis;" 67th An-
nual Meeting of the Air Pollution Control  Association,
Denver, Colorado: June 1974 (Paper 74-145).
25. Frank Benesh, Dr. Ralph D'Agostino, Peter Guldberg.
"Growth Effects of Major Land Use Projects. Volume I:
Specification and Causal Analysis of Model." Prepared
by Walden Research for Strategies and Air Standards
Division, EPA, Durham, North Carolina, April  1976.
  Figure 2. TRIMMED MODEL WITH PATH COEFFICIENTS
                                                        695

-------
                                 PREDICTION OF PHYTOPLANKTON PRODUCTIVITY IN LAKES

                        V.  W.  Lambou,  L.  R. Williams, S. C. Hern, R. W. Thomas, J. D. Bliss
                           Water and Land Quality Branch, Monitoring Operations Division
                                 Environmental Monitoring and Support Laboratory
                                     U.S. Environmental Protection Agency
                                               Las Vegas, Nevada
SUMMARY

     This study presents relationships between phyto-
plankton productivity as measured by yearly mean
chlorophyll a levels, and ambient water quality and
hydrologic measurements.  Among the nutrients examined,
phosphorus forms were most highly correlated with
chlorophyll a levels.  The effects of such factors as
retention time, primary nutrient limitation, strati-
fication, and macrophyte dominance upon productivity
responses are evaluated.  Additional parameters re-
lated to productivity include turbidity, Secchi disc,
nitrogen to phosphorus ratio, pH, total alkalinity,
and forms of inorganic nitrogen.  Discussions of the
factors affecting phytoplankton productivity and the
application of the limiting nutrient concept are in-
cluded.

INTRODUCTION

     The National Eutrophication Survey was initiated
in 1972 in response to an Administration commitment
to investigate the nationwide threat of accelerated
eutrophication to freshwater lakes and reservoirs.
Consistent with the Survey objectives to develop
information on nutrient sources and impact on fresh-
water lakes, we are examining relationships between
ambient nutrient concentrations and existing lake
conditions.

     The purpose of this report is to help to establish
lake classes and elucidate the relationships between
ambient nutrients and lake water quality by lake type.
The data were collected in 1972 from the New England
States, New York, Michigan, Wisconsin, and Minnesota.
Only lakes sampled during three seasonal sampling
rounds are included.

MATERIALS AND METHODS

Lake Selection: Selection of lakes and reservoirs in-
cluded in the Survey in 1972 was limited to lakes 40
hectares or more in surface area, with mean hydraulic
retention times of at least 30 days, and impacted by
municipal sewage treatment plant (MSTP) effluent either
directly or by discharge to an inlet tributary within
40 kilometers (km) of the lake. Specific selection cri-
teria were waived for lakes of special State interest.

Lake Sampling: Sampling was accomplished by two teams,
each consisting of a limnologist, pilot, and sampling
technician, operating from pontoon-equipped helicopters.
With few exceptions, each lake was sampled under spring,
summer, and fall conditions.  Sampling site locations
were chosen to define the character of the lake water
as a whole and to investigate visible or known problem
areas, e.g., algal blooms, sediment or effluent plumes.
The number of sites was limited by the survey nature of
the program and varied in accordance with lake size,
morphological and hydrological complexity, and practi-
cal considerations of time, flight range, and weather.
At each sampling depth, water samples were collected for
nutrient, alkalinity, pH, conductivity, and dissolved
oxygen determinations. Contact sensor packages were used
to measure depth, conductivity, turbidity, pH, dissolved
oxygen, and temperature. Fluorometric chlorophyll a
(chla) analyses were performed at the end of each day in
the mobile laboratory.  Nutrients and alkalinity were
determined by automated adaptations of procedures
described in "Methods for Chemical Analysis of Water
and Wastes"1 at the Las Vegas laboratory.  Details
of Survey methods are presented elsewhere.2,3

Data Management: Data collected were stored in STORET
and manipulated, as prescribed by Bliss, Friedland, and
Hodsen.4  Basic calculations for parameters measured in
sampled lakes were performed in such a way as to give
equal weight to: each depth sampled  at a station;
each sampling station sampled on an  individual lake
during a sampling round; and each sampling round
on an individual lake during a sampling year.

     Mean parameter values for each  sampling station
were calculated as follows:
          \overline{Par}_j = \sum_{i=1}^{D} Par_i / D                      (1)

where Par_j = mean value for a parameter at the jth sam-
pling station during a sampling round, Par_i = value for
the ith depth, and D = the number of depths for which a
parameter was measured at the jth sampling station dur-
ing a sampling round.  Mean parameter values for each
sampling round were calculated as follows:
          \overline{\overline{Par}}_k = \sum_{j=1}^{S} \overline{Par}_j / S                   (2)

where Par_k = mean value for the kth sampling round on a
given lake, and S = number of sampling sites.  Mean lake
parameter values for a given sampling year were calcu-
lated as follows:

          \overline{\overline{\overline{Par}}} = \sum_{k=1}^{3} \overline{\overline{Par}}_k / 3                   (3)
where Par = mean parameter value for a given sampling
year.  Lake parameter values were calculated only when
values were available for the first, second, and third
sampling rounds during a given sampling year from a
lake. Formulas 1, 2, and 3 were used to determine
parameter values for total phosphorus  (TP) , dissolved
phosphorus (DP) , ammonia-N (NH) , nitrite-nitrate-N (NO) ,
ammonia-nitrite-nitrate-N (IN), and total alkalinity
(AL) , all expressed in milligrams per liter (mg/liter) ,
temperature in degrees Celsius (°C) (T) , turbidity in
percent transmission (TB) , pH (PH) , Secchi disc in
inches (SD) , and hydraulic retention time in days (RT) .

     The ratio of IN/DP (N/P) for each sampling station
was calculated as follows:

          \overline{(N/P)}_j = \overline{IN}_j / \overline{DP}_j                           (4)
however, at the formula  1 level a dissolved phosphorus
value was deleted if either nitrogen complement was
missing.  Round and yearly values were  calculated using
formulas 2 and 3.

     Unlike the above parameters where  measurements
were made at various depths, only one chla measurement
was made at any individual sampling station during
a sampling round.  Therefore,
          \overline{chla}_j = chla                                  (5)

where chla_j = the mean chla concentration in micrograms
per liter (µg/liter) for the jth sampling station
during a sampling round, and chla = the chla concentra-
                                                       696

-------
tion in µg/liter for an integrated water sample from
the surface to 4.6 meters (m) or to a point just off
the bottom when the depth was less than 4.6m.
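
     As an illustration of the equal-weighting scheme in formulas (1) through
(3), the following sketch uses a hypothetical depth/station/round data layout
(the Survey itself worked from STORET retrievals):

# Equal-weight averaging of formulas (1)-(3): depth values -> station mean,
# station means -> round mean, three round means -> yearly lake mean.
# Data layout is hypothetical: round -> station -> list of depth values.
lake = {
    "round 1": {"station 1": [0.030, 0.028], "station 2": [0.040]},
    "round 2": {"station 1": [0.022, 0.026], "station 2": [0.035, 0.031]},
    "round 3": {"station 1": [0.050], "station 2": [0.044, 0.046]},
}

def mean(values):
    return sum(values) / len(values)

# Formula (1): station mean over depths; formula (2): round mean over stations.
round_means = [mean([mean(depths) for depths in stations.values()])
               for stations in lake.values()]

# Formula (3): yearly mean over the three sampling rounds.
yearly_mean = mean(round_means)
print("round means:", [round(m, 4) for m in round_means])
print("yearly mean:", round(yearly_mean, 4))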

DATA LIMITATIONS

     The primary selection criterion for the 1972 Sur-
vey lakes was direct or indirect receipt of MSTP efflu-
ent (151 out of 191 lakes), resulting in a list of
obvious bias.  Fortunately, special interest lakes,
representing a broad range of water quality, were also
included which provided some trophic balance to the
list.  Although lakes selected were not necessarily rep-
resentative of average conditions existing in the study
area, the relationships observed between ambient nutri-
ents and lake water quality should not be biased.

     A number of factors required for a complete nutri-
ent budget analysis have not been evaluated thus far.
Among these factors are the following: 1) groundwater
contributions were not considered; 2) macrophytes were
not measured quantitatively; 3) nitrogen-fixation was
not estimated; 4) coincidence of sampling with "turn-
over" or macrophyte nutrient release periods was not
sufficiently precise to make accurate estimates of
nutrient maxima; 5) no estimates of sediment load,
sediment-water nutrient exchange or sediment binding
capacity were made; and 6) sampling frequency was
generally inadequate to determine dynamic changes in
nutrient limitation, where present; however, seasonal
shifts could be determined.

LIMITING NUTRIENTS

     Nitrogen and phosphorus are frequently mentioned as
the nutrients most likely to limit growth of plants. The
concept of a limiting nutrient, as related to Liebig's
Law of the Minimum, is that some nutrient, least avail-
able relative to the growth requirements of a given
organism, imposes primary limitation on the growth of
that organism.

     The Algal Assay Procedure Bottle Test5 utilizes the
response of the green alga Selenastrum capricornutum to
nutrient spikes, usually nitrogen and phosphorus, alone
and in concert, to determine the growth-limiting nutri-
ent. The assumption is that if indeed a specific nutri-
ent is limiting the growth of the algal culture,
addition of that nutrient will result in a positive
growth response.  If the addition of the "limiting nu-
trient" Is large enough, growth will proceed until
another takes over the controlling role, now being the
least available relative to the growth needs of the
culture.  Various estimates have been proposed as to
what constitutes that ratio of nitrogen to phosphorus at
which the addition of either results in the limitation
by the other.  Such estimates have been reported as low
as 5/1 and as high as 30/1 or more, by weight, generally
centering about 12/1 to 14/1.  These are in reasonable
agreement with theoretical needs based upon stoichio-
metric equations of algal constituents.

     Frequency distributions of the ratios of inorganic
nitrogen to orthophosphorus (N/OP) were determined from
chemistries taken on fall samples just prior to algal as-
say.2,3  Among the nitrogen-limited (N-limited) lakes
(by algal assay) the N/OP values were distributed as
follows: <10 = 60 lakes; 10 to 14 = 10; and >14 = 1.
For the phosphorus-limited (P-limited) lake group the
values were: <10 = 3 lakes; 10 to 14 = 7; and >14 = 59.
Note the overlap of P- and N-limited lakes in the zone
extending from N/OP = 9 to 15.  Within this range fell
several additional lakes which evidenced "co-limitation"
on assay and were included in neither distribution.  In
these samples no growth response was noted with the ad-
dition of either nitrogen or phosphorus, but response
was dramatic to the simultaneous addition of both.
      We divided the lakes sampled into P-limited
(N/P>14), transition (10<N/P<14), and N-limited (N/P<10)
groups based upon the yearly mean N/P observed in the
lake.  The frequency distribution of N/P values was:
<10 = 79 lakes; 10 to 14 = 44; and >14 = 69.  While
admittedly arbitrary, the suggested division represents
a convenient means of comparing groups of lakes presum-
ably representing "largely nitrogen-limited" and
"largely phosphorus-limited" populations, and a third
group, representing a buffer between the first two
groups.  This group contains a number of lakes whose
N/P ratios, by sampling round, suggest seasonal shifts
from one dominant limiting nutrient to the other across
a transition zone in which pronounced interaction is
likely.
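
     Stated as a rule, the grouping used here is simply the following (a
sketch using the thresholds given above; the lake values shown are
hypothetical):

# Classify lakes by yearly mean N/P ratio using the cutoffs in the text:
# N-limited below 10, transition from 10 to 14, P-limited above 14.
def np_class(n_over_p):
    if n_over_p < 10:
        return "N-limited"
    if n_over_p <= 14:
        return "transition"
    return "P-limited"

# Hypothetical yearly mean N/P values for a few lakes.
for ratio in (6.2, 11.5, 23.0):
    print(ratio, "->", np_class(ratio))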

      The ranges selected are not suggested to possess
sharp cutoffs at which shifts in limiting nutrient oc-
cur.  Preliminary analyses of Survey algal assay data
suggest that any such sharp cutoff is unlikely with
laboratory monocultures, much less with mixed natural
populations.  Rather the N- and P-limited groups rep-
resent "tails" toward which the influence of the
secondary interactant is progressively reduced.  Also,
as N/P ratios progressively deviate from the buffer
zone in either direction, the influence of the secondary
interactant is continually reduced. Chiaudani and Vighi'
present data which suggest a range of N/P ratios about
four to five units wide within which neither nitrogen
nor phosphorus effects are independent.  However, they
found no response to phosphorus addition below N/P = 10
with Selenastrum.

      While the limiting nutrient concept has some
utility in allowing prediction of the potential growth
limits of laboratory monocultures, its extrapolation
into natural system studies should be approached warily.
A number of enrichment studies have found the effects of
phosphorus and nitrogen to be interdependent,
modified by the presence of trace organic materials in
the waters,   and dependent upon previous algal culture
exposure, i.e., prior luxury uptake of nitrogen or
phosphorus.

      It is not unlikely that a mixed natural phytoplank-
ton  population would contain elements with a range of
optimal growth requirements and predisposing nutritional
status.  Addition of either nitrogen or phosphorus could
potentially evoke a net increase in phytoplankton growth,
especially in those cases in which the ambient N/P ratio
is intermediate between the optimal growth requirements
of the various phytoplankton elements.  The presence of
organic materials may increase nutrient assimilation
in some members of the community11,12 or inhibit it in
others.13  Conditions may exist, over a range of N/P
values, favoring response to the addition of either
phosphorus or nitrogen and representing essentially
"net co-limitation."  Different species within a
community may be limited by different nutrients
simultaneously.14  A theoretical basis for simultane-
ous  co-regulation of the specific  growth rate of a
single population by multiple nutrient is presented by
Sykes.15  Verduin16 proposes the use of the Baule-
Mitscherlich equation, with slight modification, to
predict yield as a product of the  levels of interacting
nutrients.  The goodness-of-fit of the "Verduin model"
is presently being tested with  the Survey data base;
the  results will be reported in the near future.

      Although phosphorus and nitrogen are considered the
most important limiting nutrients in freshwaters and
the supply of inorganic carbon and total carbonate is in
excess in most natural waters, the possibility of at
least transient carbon limitation, under highly en-
riched conditions, should not be ignored.18  It should
be noted that  the Algal  Assay Procedure  Bottle Test,
without modification, does not  detect  carbon  limitation.
                                                        697

-------
         Table 1.  Product moment coefficients of correlation (r) of parameters affecting productivity with yearly
                   mean lake chla concentrations.  All data were converted to base 10 logarithmic expressions.

                            All lakes with:                        Lakes with RT>14 days and are:
                  -----------------------------   ------------------------------------------------------------------------------
                    All       RT        RT          P-      Transi-     N-       Strat-    Non-       Phyto-       Macro-
                   Lakes    <14 Days  >14 Days    Limited    tion     Limited    ified     Strat-     plankton     phyte
   Par                                                                                     ified      Dominated    Dominated
   No. of Lakes     191        60        131         54        24        53        80        51          66           65

   TP              0.74*     0.35*     0.84*      0.91*     0.84*     0.72*     0.80*     0.80*       0.88*        0.78*
   DP              0.66*     0.28§     0.77*      0.85*     0.64*     0.65*     0.72*     0.72*       0.82*        0.67*
   TP-DP           0.81*     0.50*     0.89*      0.92*     0.86*     0.79*     0.87*     0.85*       0.90*        0.87*
   NH              0.48*     0.11      0.59*      0.76*     0.62*     0.23†     0.65*     0.48*       0.68*        0.41*
   NO              0.26*     0.24†     0.33*      0.51*     0.49§     0.25†     0.17      0.45*       0.38*        0.28§
   IN              0.42*     0.20      0.53*      0.71*     0.62*     0.25†     0.51*     0.51*       0.62*        0.35*
   AL              0.37*     0.21      0.44*      0.42*     0.27      0.34§     0.48*     0.43*       0.58*        0.16
   PH              0.49*     0.28§     0.58*      0.48*     0.21      0.65*     0.53*     0.54*       0.68*        0.38*
   TP(AL)          0.71*     0.40*     0.78*      0.80*     0.70*     0.66*     0.73*     0.77*       0.84*        0.65*
   T               0.30*     0.27§     0.33*      0.32§     0.51*     0.30§     0.23†     0.23†       0.24§        0.48*
   RT              0.04      0.40*    -0.02      -0.15     -0.09      0.21     -0.18†     0.24†       0.08        -0.17
   N/P            -0.51*    -0.21     -0.18§     -0.33†     0.04     -0.67*    -0.55*    -0.41*      -0.64*       -0.42*
   SD1            -0.74*    -0.40*    -0.84*     -0.87*    -0.73*    -0.75*    -0.81*    -0.77*      -0.86*       -0.76*
                  (178)     (54)      (124)      (52)      (23)      (49)      (80)      (44)        (63)         (61)
   TB1            -0.51*    -0.12     -0.56*     -0.51*    -0.48§    -0.63*    -0.15     -0.67*      -0.55*       -0.60*
                  (186)     (59)      (127)      (53)      (24)      (50)      (80)      (47)        (64)         (63)

  1Number of lakes given in parentheses.
  *r significant at 0.01 level.
  §r significant at 0.05 level.
  †r significant at 0.10 level.

FACTORS AFFECTING PRODUCTIVITY

     Chlorophyll a concentration (a measure of phyto-
plankton biomass) was used as an index of productivity
of the lakes sampled.  Dillon and Rigler,19 Jones and
Bachmann,20 and Bachmann and Jones,21 and others have
presented the strong relationship which exists between
summer chla levels and ambient TP concentrations meas-
ured at spring turnover, under summer conditions, or
estimated from total inputs.  The Dillon and Rigler19
study lakes are Canadian shield lakes which undergo
summer thermal stratification.  Jones and Bachmann's20
study lakes are mostly wind-driven systems in Iowa in
which summer stratification, if any, is transitory.
Regression equations for the two studies are quite
similar, and each has a high coefficient of correlation.
The summer chla/TP relationship (r = 0.95) presented
by Jones and Bachmann20 is derived from a composite of
their data and literature-cited data from 143 lakes
covering a broad range of trophic states. The regression
equation for the composite data is:

          log chla = -1.09 + 1.46 log TP  (mg/m3).   (6)

If the phosphorus units are changed to mg/liter (our
units), the comparable regression equation is:

          log chla = 3.29 + 1.46 log TP.             (7)
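
The change of constant follows from the unit conversion
TP(mg/m3) = 1,000 TP(mg/liter):

     \log \mathrm{chla} = -1.09 + 1.46\,\log\!\left[10^{3}\,\mathrm{TP\,(mg/liter)}\right]
                        = -1.09 + (1.46)(3) + 1.46\,\log \mathrm{TP}
                        = 3.29 + 1.46\,\log \mathrm{TP}.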

The regression equation for all 191 Survey lakes is:

     log chla = 1.78 + 0.57 log TP (r = 0.74)       (8)

Our equation yields lower chla values per unit TP than
the Jones and Bachmann equation given.  The most likely
explanation for this discrepancy is that  the averaged
seasonal chla values used in generating our response
equations underestimate the summer chla maxima.  To
clarify the relationships of factors affecting produc-
tivity in the lakes sampled, a series of  regressions was
computed.  The effects of phosphorus, nitrogen, total
alkalinity, pH, light penetration, hydraulic retention
time, nitrogen to phosphorus ratio, stratification, and
phytoplankton versus macrophyte lake-domination were
considered.

      Retention times of the lakes were calculated using
flow data provided by the U.S. Geological  Survey  (USGS)
and known or estimated lake volumes.  Where USGS  flow
data were not available, estimates of retention time
were obtained from local or State agents familiar with
the lake.  Stratification was established  using depth/
temperature relationships from the Survey  data and was
verified or supplemented, where possible,  by State per-
sonnel.  An attempt was made to restrict "stratified
lakes" to those which maintained a thermocline (minimum
of 1° C change/meter depth) throughout most of the sum-
mer.  Lakes exhibiting brief temporary stratification
within the spring or summer periods were included as
non-stratified.  Information on phytoplankton- versus
macrophyte-dominance was obtained from field observa-
tions, historical information, and contact with State
and local personnel. Lakes which exhibited extensive
reaches of submerged or floating higher aquatic plants,
with histories of recurrent weed problems, and/or re-
ported to be problem lakes in this regard  were consid-
ered "macrophyte-dominated."  Otherwise the lakes were
included in the "phytoplankton-dominated"  category.

      The product moment coefficients of correlation (r)
for the regressions are given in the table. The r values
for all variables in the subpopulation of  lakes with
RT<14 are much lower than the corresponding r for lakes
with RT>14, with the exception of RT itself.  As the RT
falls below 14, the relationship between chla and
variables weakens as RT is insufficient to reach poten-
tial biomass development.  In studies by Payne22 asymp-
totic levels for Selenastrum, Anabaena, and Microcystis
were generally reached within 14 days after nutrient
enrichment.  The correlation of chla with  RT is better
in short RT lakes; chla development increases with
time until it plateaus at about 14 days.   The predic-
tion of chla for all lakes with RT>14 is:

           log chla = 1.95 + 0.68 log TP             (9)

with 71% of the variation in chla being explained by
changes in TP levels (r = 0.84; r2 = 0.71).
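
     For illustration, equation (9) can be evaluated directly; the short
sketch below (hypothetical TP value) returns the predicted yearly mean chla
in µg/liter:

# Evaluate equation (9): log chla = 1.95 + 0.68 log TP  (TP in mg/liter),
# the prediction for lakes with hydraulic retention time > 14 days.
import math

def chla_from_tp(tp_mg_per_liter):
    log_chla = 1.95 + 0.68 * math.log10(tp_mg_per_liter)
    return 10 ** log_chla          # chla in micrograms per liter

# Hypothetical total phosphorus concentration of 0.05 mg/liter.
print(round(chla_from_tp(0.05), 1), "ug/liter chla predicted")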
                                                       698

-------
     Chlorophyll a correlations with DP mimic TP  correl-
ations but are lower in all subpopulations.  Total  phos-
phorus is highly correlated (r = 0.98) with DP  and  the
regression equation is:
               TP = 0.03 + 1.17 DP.
(10)
It is not surprising that the highest  chla correlations
were with particulate phosphorus  (TP-DP), as both  are
components of phytoplankton.  The prediction equation
for chla for all lakes with RT>14 is:
          log chla = 2.36 + 0.75 log (TP-DP).
(11)
     The ratio of TP-DP/chla for 184 Survey lakes was
2.0, while Antia et al.,23 working with marine phyto-
plankton,  reported an average ratio of 1.8.  The close
agreement between the ratios indicates that TP-DP/chla
ratios are consistent between freshwater and marine com-
munities.   High mobility and short turnover times of
phosphorus24 through and between compartments within
the general "phosphorus pool" make TP a good approxi-
mation of bioavailable phosphorus.

     Of the forms of nitrogen examined, NH was found to
be most strongly correlated with mean chla.  This posi-
tive correlation is strongest in the P-limited (r =
0.76), declines in the transition (r = 0.62), and is
very weak in the N-limited lakes studied (r = 0.23).
This pattern of decline is also noted with NO and IN.

     That the relationships between the dissolved nitro-
gen forms tested and mean chla decline dramatically as
we move from P- to N-limited lakes is somewhat of a
paradox.  It is not unreasonable to expect higher cor-
relations between a nutrient and biological response as
that nutrient exerts a greater degree of limitation on
the biological response, e.g., the phosphorus relation-
ship.  A possible explanation for this apparent contra-
dication is increased fixation of atmospheric nitrogen
by blue-green algae.  The N-limited lakes studied were
generally nutrient rich.  An increase in the frequency
of Anabaena or Aphanizomenon blooms (both nitrogen-
fixers) in association with nutrient enrichment is con-
sistent with general observations in the aquatic litera-
ture.  Short RT lakes show very weak relationships
between the nitrogen forms examined and chla.  The
relationship with respect to NO does not appreciably
improve in longer RT lakes, but improves substantially
for NH.  It should be noted that similar processes often
produce parallel increases in phosphorus and NH within
the hypolimnion.

     The relationship of PH to chla was found to be
much stronger in phytoplankton- than in macrophyte-
dominated lakes (r = 0.68 versus r = 0.38).  The weak
relationship in the latter group is not unexpected, as
no attempt was made to quantitatively sample macrophytes
or their associated chla. The PH increases with the re-
moval of carbon dioxide (CO2) by photosynthetic activity.
It should be noted that phytoplankton utilization of CO2
per unit volume (and hence the corresponding PH change)
has been reported to be 10 times as high as that of
macrophytes.25  The differences noted in short and long RT
lakes (r = 0.28 versus r = 0.58) suggest that diurnal
C02 changes alone do not explain the PH/chla relation-
ship.

     The relationship between AL and chla is, once
again, much stronger in phytoplankton- than macrophyte-
dominated lakes.   In general, increased AL and nutrient
enrichment go hand in hand.  However,  the degree and
nature of  macrophyte "dominance" and its effects upon
nutrient reduction (competition with phytoplankton),
shading (submerged versus floating macrophytes),  etc.,
result in  a broad scatter of chla values and generally
weak  relationship.
     The two-factor parameter TP(AL) is used to assess
to what degree the "unit response"  (chla per unit TP)
is a function of AL levels.  Addition of AL did not im-
prove the basic chla/TP relationship; the correlations
were slightly lower throughout the lake groups examined.

     Chlorophyll a response as a function of N/P ratio
was found to be much stronger in N- than P-limited or
transition lake groups (r = 0.67 vs. r = 0.33 or r =
0.04).  As phosphorus levels increase, the N/P ratio
generally decreases and chla levels increase.  This is
consistent with our observation that the N-limited lakes
examined were generally high in phosphorus.

     Edmondson26 found a negative hyperbolic relation-
ship between SD and chla concentrations in Lake
Washington.  We found SD and TB to be negatively cor-
related with chla.  The largest difference between SD
and TB correlations (r = -0.81, r = -0.15, respectively)
occurred in the stratified lakes.  A likely explanation
is that TB values were averaged through the entire water
column and include a greater percentage of "clear"
waters from below the euphotic or epilimnetic zones in
stratified lakes.  The strongest correlation was noted
in non-stratified lakes where TB values represent meas-
urements taken within the effective mixing zone.  These
relationships suggest that the bulk of photic zone
turbidity in the stratified lakes sampled was of phyto-
plankton origin.

     The correlations for various chemical and physical
parameters, with the exception of NH and NO, are quite
similar in stratified and non-stratified lakes.  The NH
correlation with chla is higher in stratified lakes
than in non-stratified lakes; the reverse is true for
NO.  A possible explanation for this is that under
reducing conditions, such as hypolimnetic deoxygen-
ation, NH and phosphorus are concomitantly released.
In non-stratified lakes the prevalent inorganic nitro-
gen component is nitrate-N.

     Most correlations of chla with chemical and
physical parameters are higher in the phytoplankton-
dominated subpopulation than in the macrophyte-
dominated subpopulation.  The contribution of
macrophyte chla was not measured, and no informa-
tion on the relative quantities of submerged versus
floating weeds is available for Survey lakes.  Many
submerged aquatics, with little reliance on ambient
nutrient levels in the water, can survive on nutri-
ents absorbed through their root systems.  However,
free-floating macrophytes and phytoplankton have
similar ambient nutrient requirements.  The predic-
tion equations for all phytoplankton-dominated lakes
with RT>14 are:

          log chla = 1.91 + 0.68 log TP, and        (12)

          log chla = 2.31 + 0.73 log (TP-DP).       (13)
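As an illustration of how equations (12) and (13) would be applied to a new lake, the short sketch below evaluates both regressions. The units are assumed here to be TP and DP in mg/l with chla in ug/l; the original Survey report should be consulted before using the equations quantitatively.

```python
# Sketch: evaluating the phytoplankton-dominated, RT > 14 day regressions
# (equations 12 and 13).  Units are assumed to be TP and DP in mg/l and
# chla in ug/l; confirm against the original Survey report before use.
import math

def chla_from_tp(tp_mg_l):
    """Equation (12): log chla = 1.91 + 0.68 log TP."""
    return 10 ** (1.91 + 0.68 * math.log10(tp_mg_l))

def chla_from_particulate_p(tp_mg_l, dp_mg_l):
    """Equation (13): log chla = 2.31 + 0.73 log (TP - DP)."""
    return 10 ** (2.31 + 0.73 * math.log10(tp_mg_l - dp_mg_l))

# Hypothetical lake: TP = 0.05 mg/l, dissolved P = 0.02 mg/l
print(round(chla_from_tp(0.05), 1))                   # ~10.6
print(round(chla_from_particulate_p(0.05, 0.02), 1))  # ~15.8
```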

STUDIES IN PROGRESS

     Other aspects being investigated include the ef-
fects of additional parameters on productivity, the
intercorrelation of parameters affecting productivity,
effects of lake use upon water quality "suitability,"
effects of manifestations of nutrient enrichment on
water use, and prediction of lake condition from
ambient and loading conditions.

LITERATURE CITED

1. U.S. Environmental Protection Agency.  1971.
  Methods for chemical analysis of water and wastes.
  Analytical Quality Control Laboratory, Cincinnati,
  Ohio.  312 p.  EPA-625/6-74-003.

-------
2.  	.   1974.  National Eutrophication Survey
    methods for lakes sampled in 1972.   National
    Eutrophication Survey Working Paper No. 1.
    National Environmental Research Centers, Las
    Vegas, Nevada, and Corvallis, Oregon.  40 p.

3.  	.   1975.  National Eutrophication Survey
    methods 1973-1976.  National Eutrophication
    Survey Working Paper No.  175.  National Environ-
    mental Research Centers,  Las Vegas, Nevada, and
    Corvallis,  Oregon.  91 p.

4.  Bliss, J.  D., M. J.  Friedland, and  J. Hodsen.
    1975.  Statistical manipulation of  National
    Eutrophication Survey data in STORET.  National
    Eutrophication Survey Working Paper No. 472.
    National Environmental Research Center, Las
    Vegas, Nevada.  15 p.

5.  U.S. Environmental Protection Agency.  1971.
    Algal assay procedure: bottle test.  U.S.
    Govt. Printing Off.: 1972-795-146/1. Region X.
    82 p.

6.  Vollenweider, R. A.   1968.  Scientific
    fundamentals of the eutrophication  of lakes
    and flowing waters,  with particular reference
    to nitrogen and phosphorus as factors in
    eutrophication.  Technical Report to OECD,
    Committee for Research Cooperation.  159 p.

7.  Chiaudani, G., and M. Vighi.  1974.  The N:P
    ratio and tests with Selenastrum to predict
    eutrophication in lakes.   Water Res. 8:1063-1069.

8.  Hutchinson, G. E.  1941.   Limnological studies
    in Connecticut, IV.  The mechanisms  of inter-
    mediary metabolism in stratified lakes.  Ecol.
    Monogr. 11:21-60.

9.  Goldman, C. R., and R. Armstrong.  1969.  Primary
    productivity studies in Lake Tahoe, California.
    Verh. Int.  Verein. Theor. Angew. Limnol. 17:49-71.

10. Ketchum, B. H.  1939.  The absorption of phos-
    phate and nitrate by illuminated cultures of
    Nitzschia closterium.  Amer. J. Bot. 26:399-407.

11. Powers, C.  F., D. W. Schults, K. W. Malueg, R.  M.
    Brice, and M. D. Schuldt.  1972. Algal responses
    to nutrient additions in natural waters, II.
    Field experiments.  In: G. E. Likens (ed.).
    Nutrients and eutrophication: the limiting
    nutrient controversy.  Amer. Soc. Limnol.
    Oceanogr.,  Publ. 1:141-154.

12. Rodhe, W.   1958.  The primary production in lakes:
    some methods and restrictions of the 14C method.
    In: Measurements of primary production in the
    sea.  Rapp. P.-V. Reun. Cons. Perm. Int. Explor.
    Mer 144:122-128.

13. Williams,  L. R.  1975.  The role of heteroin-
    hibition in the development of Anabaena
    flos-aquae waterblooms.  In: Proc.  of the Bio-
    stimulation Nutrient Assessment Workshop.  1973.
    Corvallis,  Oregon.

14. Fitzgerald, B. P.  1964.   Detection of limiting
    or surplus nutrients in algae.  Progress report
    to NIH, 1961-1964 project.  Working Paper
    No. 297.  48 p.

15. Sykes, R.  M.  1974.   Theory of multiple limiting
    nutrients.   J. Water Pollut. Control Fed. 46(10):
    2387-2392.
16. Verduin, J.  1964.  Principles of primary pro-
    ductivity: photosynthesis under completely
    natural conditions.  In: D. F. Jackson (ed.).
    Algae and man.  Plenum Press, New York.
    p. 221-237.

17. Ketchum, B. H.  1954.  Mineral nutrition of
    phytoplankton.  Ann. Rev. Plant Physiol.
    5:55-74.

18. Maloney, T. F., W. E. Miller, and T. Shiroyama.
    1972.  Algal response to nutrient additions in
    natural waters.  In: G. E. Likens (ed.).
    Nutrients and eutrophication: the limiting
    nutrient controversy.  Amer. Soc. Limnol.
    Oceanogr. 1:134-140.
19.  Dillon, P. J., and F. H. Rigler.  1974.  The
    phosphorus-chlorophyll relationship in lakes.
    Limnol. and Oceanogr. 19:767-773.

20.  Jones, J. R., and R. W. Bachmann.  In press.
    Prediction of phosphorus and chlorophyll levels
    in lakes.  J. Water Pollut. Control Fed.

21.  Bachmann, R. W., and J. R. Jones. 1974.
    Phosphorus inputs and algal blooms in lakes.
    Iowa State J. Res. No. 49(2):155-160.

22.  Payne, A. G.  1973.  Response of the three test
    algae of the algal assay procedure bottle test.
    Presented at the 36th Annual Meeting of the
    American Society of Limnology and Oceanography.

23.  Antia, N. J., C. D. McAllister, T. R. Parsons,
    K. Stephens, and J. D. H. Strickland.  1963.
    Further measurements of primary production using
    a large-volume plastic sphere.  Limnol. Oceanogr.
    8(2):166.

24.  Lean, D. R. S.  1973.  Movements of phosphorus
    between its biologically important forms in lake
    water.  J. Fish. Res. Bd. Canada 30(10):1525-
    1536.

25.  Verduin, J. 1953.  A table of photosynthetic
    rates under optimal, near natural conditions.
    Amer. J. Bot. 40(9):675-679.

26.  Edmondson, W. T.  1970.  Phosphorus, nitrogen
    and algae in Lake Washington after diversion of
    sewage.  Science 169:690-691.

-------
                  APPLICATIONS OF THE SINGLE SOURCE (CRSTER) MODEL TO POWER PLANTS:  A SUMMARY
                                                Joseph A. Tikvart*
                                                Connally E. Mears
                                          Source Receptor Analysis Branch
                                   Office of Air Quality Planning and Standards
                                       U.S.  Environmental Protection Agency
                                           Research Triangle Park, N.C.

                                     *On Assignment from the National Oceanic
                                       and Atmospheric Administration (NOAA)
     For the last three years the Environmental  Pro-
tection Agency has conducted a series of atmospheric
dispersion  model  studies of power plants.  These
studies have considered the impact of approximately
700 utility power plants whose generating capacity is
25 megawatts or greater.  Included in these studies
are (1) dispersion model estimates of SO2 concentra-
tions downwind from each power plant, (2) validation
of the Single Source Model  with data for several typi-
cal power plants  and (3) a  sensitivity analysis  of
this model.   The  results of these studies have been
used effectively  in a number of energy/environmental
policy considerations.   This paper summarizes the
findings of the various studies.

                    Introduction

     Shortages in the availability of low-sulfur fossil
fuels have  been given national prominence.   These
shortages are particularly  significant to utility
power plants for  two reasons:  (1) power plants  typi-
cally use large quantities  of fossil fuels  and (2)
many of the State Implementation Plans (SIPs) require
severe reductions in sulfur dioxide emissions from
power plants which burn fossil fuels.  The  shortage of
low-sulfur  fuel  necessitates the elimination of  unduly
stringent SIP control  regulations, where this can be
done without endangering air quality standards.   The
fuel shortage has also  led  to legislation which  em-
powers the  Federal Energy Administration to require
that specific power plants  switch from oil  or gas to
coal.  This switch to coal, however, cannot be allowed
to result in a threat to air quality standards.   Fur-
thermore, to meet the Clean Air Act requirement  for
attainment and maintenance of acceptable air quality,
it may be necessary to  revise the SIPs for  selected
source categories, including power plants.   The  power
plant studies summarized in this paper support actions
like those  noted  above.

     Estimates of the air quality impact caused  by
power plants are  major  components of these  studies.  A
dispersion  model  is a commonly used technique for re-
lating pollutant  emissions  to ambient air quality.  It
is a mathematical  description of pollutant  transport,
dispersion,and transformation processes that occur in
the atmosphere.   The Single Source (CRSTER)  Model  is
the primary dispersion  model  applied in all  the  power
plant studies discussed in  this summary paper.

     Due to  severe time constraints and the  fact that
models like  the Single  Source Model are widely applied
and considered state-of-the-art, the accuracy of this
model  was not analyzed  in the initial phase  of the
power plant  studies.  However, some analyses of  the
Single Source Model have been recently completed and
others are continuing.  These include validation
studies, sensitivity analysis and model improvement.

     Following sections of this paper discuss (1) the
Single Source Model, (2) power plant studies in which
it is applied, (3) evaluation of the model through
validation and a sensitivity analysis, and (4) appli-
cations to energy/environmental policy considerations.

             Single Source (CRSTER) MODEL

     The Single Source (CRSTER) Model is a Gaussian
plume model.  It is based on the dispersion coeffi-
cients and equations described by Turner1 and on the
plume rise equations described by Briggs2.  The model
is essentially the same as that discussed by Hrenko
et al.3  It is designed to estimate concentrations
for averaging times of 1  hour, 24 hours, and 1 year
due to sources at a single location.  The concentra-
tions are estimated for a circular array of receptor
sites which are located so as to approximate the
downwind distances at which the highest concentra-
tions are likely to occur.
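Although the Single Source Model itself is not reproduced here, the kind of Gaussian plume calculation it performs for each hour and receptor can be sketched as follows. The power-law dispersion coefficients below are a crude stand-in for the Turner workbook curves, and the source values are hypothetical, so the numbers are illustrative only.

```python
# Minimal Gaussian plume sketch of the kind of calculation underlying the
# Single Source (CRSTER) Model: ground-level centerline concentration
# downwind of an elevated point source.  The sigma functions are rough
# power-law approximations, not the Turner workbook values the model uses.
import math

def sigmas(x_m, stability):
    """Very approximate horizontal/vertical dispersion (m) vs. distance (m)."""
    a_y = {"C": 0.11, "D": 0.08, "E": 0.06}[stability]
    a_z = {"C": 0.08, "D": 0.06, "E": 0.03}[stability]
    return a_y * x_m ** 0.9, a_z * x_m ** 0.85

def ground_conc(q_g_s, eff_height_m, wind_m_s, x_m, stability="D"):
    """Ground-level centerline concentration (g/m3)."""
    sy, sz = sigmas(x_m, stability)
    return (q_g_s / (math.pi * sy * sz * wind_m_s)
            * math.exp(-0.5 * (eff_height_m / sz) ** 2))

# Hypothetical power plant: 2000 g/s of SO2, 250 m effective plume height
for x in (1000, 3000, 5000, 10000):
    print(x, round(ground_conc(2000, 250, 5.0, x) * 1e6, 1), "ug/m3")
```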

     The model estimates concentrations for each hour
of a year, based on wind direction (in increments of
10 degrees), wind speed, Pasquill stability class,
and mixing height.  Meteorological surface data for
1964 are frequently used in the power plant studies,
although, with the proper data, any year could be
used.  The reasons for the routine use of 1964 mete-
orological data are (1) data from earlier years do not
have an adequate resolution of wind direction, and
(2) data from subsequent years are not readily avail-
able on an hourly basis.   Mixing height data are from
the upper air observations made at selected National
Weather Service stations.  Hourly mixing heights are
estimated within the model by use of an objective
interpolation scheme.   Decay of the pollutant between
source and receptor is ignored.

     To simulate the effect of elevated terrain in
the vicinity of plant sites, a terrain adjustment
procedure is used.  This procedure decreases the
effective plume height by an amount equal to the
difference in elevation between the plant site and
the specific receptor site.  The model then uses the
adjusted plume height in estimating concentrations at
that receptor.  In those cases where terrain features
are found to be greater than the effective plume
height of the plant, the Single Source Model is not
applied.
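A sketch of the terrain adjustment logic just described is given below; the function name and the handling of receptors below plant grade are assumptions for illustration, not the model's actual code.

```python
# Sketch of the terrain adjustment described above: the effective plume
# height is reduced by the receptor's elevation above plant grade, and the
# model is simply not applied where terrain exceeds the effective plume height.
def adjusted_plume_height(eff_height_m, plant_elev_m, receptor_elev_m):
    """Return the terrain-adjusted plume height, or None if the Single
    Source Model would not be applied at this receptor."""
    terrain_rise = receptor_elev_m - plant_elev_m
    if terrain_rise > eff_height_m:
        return None                      # terrain above effective plume height
    # Receptors below plant grade are left unadjusted here (an assumption).
    return eff_height_m - max(terrain_rise, 0.0)

print(adjusted_plume_height(300.0, 200.0, 280.0))  # 220.0
print(adjusted_plume_height(300.0, 200.0, 520.0))  # None
```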

-------
                  Power Plant Studies

Purpose and Limitations

     The power plant studies have considered the
impact of approximately 700 utility power plants whose
generating capacity is 25 megawatts or greater.  The
studies may be divided into three parts.  These are
analyses for (1) the feasibility of compliance exten-
sions in 51 selected Air Quality Control Regions
(AQCRs), (2) the feasibility of oil-to-coal conversions
at selected power plants and (3) the general impact of
power plants on ambient SO2 concentrations in 128
AQCRs.  In all cases the studies are primarily con-
cerned with estimates of the maximum 24-hour concen-
trations of SO2.  This averaging time and this
pollutant are the critical ones for which power plants
must meet primary National Ambient Air Quality
Standards (NAAQS).  The second study is the only one
which considers particulate concentrations.  Also, in
those cases where it is estimated that neighboring
power plants could contribute concentrations which add
to those caused by the plant under consideration, an
interaction analysis is performed.

     All source data used in the power plant studies
are taken from the Federal Power Commission (FPC Form
67) for base years of 1971 or 1972.  In those cases
where emissions are projected to 1975, appropriate
data are taken from "Steam Electric Plant Factors"4.

     Emissions data are based on average monthly oper-
ations for each month of the year; such monthly data
are the limit of detail routinely available from the
FPC.  A power plant could quite possibly operate at
near-maximum rated capacity for 24 hours, which
would not be apparent from the monthly data.  If
these operations were coincident with days of poor
dispersion conditions, the estimated maximum concen-
trations could be significantly low.  Thus, two sets
of emission conditions are routinely considered.  One
is the nominal load case in which average hourly
emission rates are used; they are assumed to be con-
stant, except for variations by month.  The other is
the maximum load case where emissions and plume rise
are based on the plant continuously operating at 95
percent of rated capacity.  Both sets of emissions
data are considered and the one which results in the
highest estimated concentrations is used.
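The two emission cases can be expressed compactly: evaluate the model once with the monthly-average (nominal) rates and once with a constant rate at 95 percent of rated capacity, then keep the higher result. In the sketch below the dispersion model is a toy stand-in, and the 730-hour month is a simplification.

```python
# Sketch of the two emission cases considered in the studies: a nominal
# (monthly-average) load case and a maximum load case at 95 percent of
# rated capacity.  The dispersion call is a stand-in for the actual model.
def worst_case_concentration(monthly_avg_rates, rated_capacity_rate, run_model):
    """Return the higher of the nominal-load and maximum-load estimates.

    monthly_avg_rates   : 12 average hourly emission rates (g/s), one per month
    rated_capacity_rate : emission rate (g/s) at full rated capacity
    run_model           : function mapping an hourly emission-rate series to a
                          maximum 24-hour concentration estimate (hypothetical)
    """
    # Nominal case: each month's average rate, repeated for ~730 hours/month.
    nominal = run_model([monthly_avg_rates[m] for m in range(12) for _ in range(730)])
    # Maximum case: continuous operation at 95 percent of rated capacity.
    maximum = run_model([0.95 * rated_capacity_rate] * (12 * 730))
    return max(nominal, maximum)

# Toy stand-in for the dispersion model: concentration proportional to the
# single highest hourly emission rate.
print(worst_case_concentration([100.0] * 12, 160.0, lambda q: 0.5 * max(q)))  # 76.0
```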

     It should be noted that any use of these studies
must recognize the inherent limitations resulting from
the data and procedures used in the modeling effort.
Before final judgment on the control of specific
plants is made, other factors, not addressed in these
studies, should be considered.  These include: the
impact of other sources in the area, projected growth
in the area, measured air quality data, known or sus-
pected downdraft or fumigation problems, unique nearby
terrain features, nearby land use patterns and popu-
lation distributions, more specific operational data
for the plant, impact of new units, specific meteoro-
logical studies for the area, and additional studies
or findings by other investigators.

Compliance Extension Studies

     In 1972 a study by EPA on the aggregate demand
created by the SIPs for low-sulfur coal was conducted.
This study indicated a nationwide potential deficit of
about 100 million  tons/year of such coal  by 1975.
The deficit was  considered most acute in 12 states
with high coal consumption rates.  One means to alle-
viate the deficit would be to selectively reduce the
 requirements  for  low-sulfur coal  in those cases where
 a  higher  sulfur coal  could be used without endangering
 the NAAQS.

     An initial modeling study of SO2 emissions in
 several AQCRs had been  conducted.  This study showed
 that some of  the  large  power plants could be temporar-
 ily  allowed to burn  coal  at 1970  sulfur levels with-
 out  threatening the  24-hour NAAQS.  Based on the
 results of this study,  it was decided to consider
 selected  power plants in  12 states which are heavily
 dependent on  coal.   This  involved a total  of approxi-
 mately 200 power  plants in 51  AQCRs.

     The study5,6 finds that at approximately 55 per-
 cent of the plants considered, some relaxation of
 emission  limitations is possible.  Relaxation could
 result in increasing the  average  allowable percent
 sulfur content of fuel  from approximately 1  percent
 sulfur content to 2  percent sulfur content at the
 plants considered.   Thus, the projected deficit in
 low-sulfur coal could be  eliminated.

 Fuel  Conversion Studies

      The  compliance  extension studies discussed in
 the  preceding  section had been conducted prior to  the
 overall oil shortage and  energy crisis which became
 apparent  in late  1973.  The oil shortage initiated a
 second study  of selected  power plants on the U.S.
East Coast.  In this second study7,8, fuel conversion
 from  oil  to coal  for selected boilers within specific
plants is analyzed to evaluate the impact on SO2 and
particulate concentrations.  Increased SO2 emissions
 due  to fuel conversions at 16  of  43 plants considered
 are  estimated  to  result in concentrations  from the
 plants alone which exceed the  24-hour NAAQS.   Seven
 of the plant  conversions  are  estimated to  result in
 concentrations from  the plants  alone  which exceed  the
 24-hour particulate  NAAQS.   The analysis indicates
 that  in some cases partial  conversion from oil  to
 coal  at selected  power plants  appears to be  a  viable
 option for alleviating the East Coast oil  shortage.

 Studies of Power  Plants in 128 AQCRs

    Further studies9,10 of about 400 power plants dis-
tributed  throughout the  U.S. have  been conducted in 1974
 and  1975.   The purpose is twofold:  (1)  to  complete,
 on a  national   basis, analyses  of  the  threat  of  large
 emitters  of S02 to the NAAQS  and  (2)  to  add  to  the
 overall analysis  of  the power  plant industry being
 conducted  by governmental  agencies  and industry
 itself.   Thus, a  base for further analyses is  devel-
 oped  and  is available if  additional decisions must be
 made  concerning general  EPA policy on compliance
 extensions or  fuel use options  for  power plants.  Of
 these 400 additional  plants  it is  found  that nearly
 20 percent currently may  exceed, by themselves, the
 24-hour S02 air quality standards.

                  Evaluation  of Model

 Validation Studies

      To determine the validity and  overall accuracy
 of the Single  Source Model,  validation studies  have
 been  performed for the Canal,  Paradise,  Philo,  Stuart
 and Muskingum  River  power plants.   The Canal  Plant
 is located in  Massachusetts  along  Cape Cod Bay.  The
Paradise Plant12,13 is located in Western Kentucky.
 The other  three plants are located  in Southern

-------
Ohio14,15.  In all cases, hourly variations in SO2
emissions are determined for  each plant.  These
emissions are then used with  hourly meteorological
data which are representative of transport and dis-
persion in the vicinity of the plant.   These data are
input to the model and 1-hour, 3-hour, 24-hour, and
annual concentration  estimates are made for the sites
at which air quality  monitors are located.  The esti-
mated and the observed concentrations  are then sub-
jected to several statistical  comparisons.  These in-
clude comparisons of  highest  and of second-highest con-
centrations and comparisons of observed and estimated
concentration frequency distributions.
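A minimal sketch of the ranked-value comparison is given below: 24-hour averages are formed from paired hourly series, and the highest and second-highest observed and estimated values are set side by side. The data and helper names are hypothetical.

```python
# Sketch of the validation comparison described above: form 24-hour averages
# from paired hourly values, then compare the highest and second-highest
# observed and estimated concentrations at a monitoring site.
def block_averages(hourly, block=24):
    return [sum(hourly[i:i + block]) / block
            for i in range(0, len(hourly) - block + 1, block)]

def top_two(values):
    ranked = sorted(values, reverse=True)
    return ranked[0], ranked[1]

def compare(observed_hourly, estimated_hourly):
    obs_hi, obs_2nd = top_two(block_averages(observed_hourly))
    est_hi, est_2nd = top_two(block_averages(estimated_hourly))
    return {"highest": (obs_hi, est_hi), "second_highest": (obs_2nd, est_2nd)}

# Hypothetical two-day record (48 hourly values per series)
obs = [60.0] * 24 + [90.0] * 24
est = [40.0] * 24 + [70.0] * 24
print(compare(obs, est))   # model underestimates both ranked 24-hr averages
```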

     As shown in Table 1,  the model generally tends to
underestimate the highest  and the second-highest 24-
hour average concentrations.   This is  also true for
3-hour average concentrations.  However, 1-hour
averages are equally  divided  between overestimates and
underestimates.  In cases  where surrounding terrain is
nearly as high as the stack top (see the Philo Plant
in Table 1), the model overestimates concentrations
for all averaging times.   It  should be noted that most
dispersion models comparable  to the Single Source Model
are not truly applicable in the vicinity of such sig-
nificant terrain features.
 Table 1.  Comparison of Observed and Estimated Concentrations

                       1-Hour Average Concentrations    24-Hour Average Concentrations
                        2nd Highest      Highest          2nd Highest      Highest
            Sampling
 Plant      Station     O(a)    E(b)     O       E         O       E       O       E

 Canal         1         435     253      438     283       66      16      75      29
               2         553     174      618     179       36       9      46      11
               3         446     446      732     509       77      38      83      39
               4         575     427      638     479       63       4      75      16

 Stuart        1         685    1372      857    1393      259     149     277     161
               2(c)      685     814     1014     948       63      75     159      98
               3        1022     565     1153    1022      181      91     225     102
               4         750     515      883     541       79      45      83      49
               5         495     823      565    1219       63      57      77      75
               6         980     595     1053     693      147      69     195      83
               7(c)      325     976      435    1000       69      73      77     120

 Muskingum     1         857     980      925    1083      133      81     170      97
 River         2         786    1304      786    1310      131      82     137      91
               3         996     873     1179     933      165      73     227      74
               4         735     465      786     645      109      45     115      47

 Philo         1         525    1295      893    1639      132     133     133     147
               2         735     945      891    1059       67      86     110     104
               3         745    4049      917    4593      127     471     132     541
               4         665    1945      695    1981       62     165     158     220
               5         575    1279      675    1344       87     222      94     226
               6         565    2369      595    2482      121     282     138     356

 (a) Observed concentrations with subtracted background.
 (b) Estimated concentration.
 (c) Samplers were in operation for less than half the year.
     In the comparison of observed  and  estimated fre-
 quency distributions, disparate  results are found.
 There is considerable variation  in  comparisons from
 site-to-site and plant-to-plant.  However,  agreement
 improves for frequency distributions  which  include all
 monitoring sites around a particular  plant.  As shown
 in Figure 1, all but the few  highest  and lowest con-
 centration percentiles are accurately estimated for
 the distributions which include  all sites.

     Until further studies become available, it may
 be concluded from these validation  studies  that the
 Single Source Model is generally accurate within a
 factor of two.   This is not surprising  since this
accuracy  is  widely accepted for such point source
models.   However,  an important element is identifica-
tion of the  tendency to underestimate, rather than
overestimate,  concentrations for averaging times
associated with  NAAQS.   This tendency undercuts the
position  of  those  who contend that such models are
overly conservative when used in determining emission
control requirements.  It also places an added burden
on pollution control  officials to ensure that an envi-
ronmental threat is not understated.
 Figure 1.  Stuart Plant Cumulative Frequency Distribution for 24-Hour SO2
            Concentrations at All Stations.  (Axis labels: percentage of
            concentrations greater than and less than the indicated value.)
Sensitivity Analysis

     To further understanding  of the behavior of the
Single Source Model, a sensitivity analysis16 has
been conducted.  Specifically, this analysis examines
the impact of variations  or  errors in the input data
on the concentration  estimates produced  by the model.
Thus,  it identifies the model  parameters which have
the greatest influence on  concentration  estimates.

     In the analysis  the  incremental  change in pre-
dicted concentration  is determined for an incremental
change in input.  A case  study approach  is used with
the three Ohio power  plants  noted above.  The analysis
is limited to the maximum  estimated 24-hour concentra-
tion,  since this is generally  considered to be the
most important averaging  time  for power  plants with
regard to primary air quality  standards.

     Both source parameters  and meteorological param-
eters are considered.  The source parameters are (1)
stack  height, diameter, gas  exit velocity, and gas
exit temperature,  (2) emission rate and  its monthly
variation and (3)  terrain  adjustment. The meteorolog-
ical parameters considered are mixing height, wind
speed, ambient temperature and stability class.  With
the exception of stability class, each parameter is
varied by ±5, ±10, and ±25 percent
while all other parameters are held constant.
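The one-at-a-time perturbation procedure can be sketched as below. The model function is an arbitrary stand-in (not CRSTER), and stability class, being discrete, is excluded here just as it is treated separately in the analysis.

```python
# Sketch of the one-at-a-time sensitivity analysis described above: each
# continuous parameter is perturbed by +/-5, 10, and 25 percent while the
# others are held at their base values, and the percent change in the model
# output is recorded.  The "model" here is an arbitrary stand-in, not CRSTER.
def sensitivity(model, base_params, fractions=(0.05, 0.10, 0.25)):
    base_value = model(**base_params)
    table = {}
    for name in base_params:
        for frac in fractions:
            for sign in (+1, -1):
                perturbed = dict(base_params)
                perturbed[name] = base_params[name] * (1 + sign * frac)
                change = 100.0 * (model(**perturbed) - base_value) / base_value
                table[(name, sign * frac)] = round(change, 1)
    return table

# Toy stand-in with roughly CRSTER-like behavior: concentration ~ Q / h^2
toy = lambda stack_height, emission_rate: emission_rate / stack_height ** 2
print(sensitivity(toy, {"stack_height": 100.0, "emission_rate": 1000.0}))
```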

     From the analysis  summarized in Tables 2 and 3,
it is found that for  sources with relatively short
stacks, for example the Philo  Plant which has stacks
about 300 feet high,  a percent change in any stack
parameter results  in  at least  that percent change in
the maximum 24-hour concentration.   For  sources with
relatively tall stacks, for  example the  Stuart Plant
which has stacks about 800 feet high, a  lack of such
sensitivity is found.  Stability class,  a meteorolog-
ical parameter, is found  to  be a highly  sensitive

-------
factor for all plants, since this parameter can take
on only six discrete values.  The importance of
parameters such as wind speed and mixing height varies
depending on the meteorological conditions that result
in highest concentrations for a plant.  In all cases,
the percent change in the maximum 24-hour concentra-
tion is less than the percent change in these meteoro-
logical parameters.  Tables 2 and 3 indicate percent
changes in maximum 24-hour concentrations for positive
variations in source and meteorological parameters.
Comparable changes in concentration can also be shown
for negative variations in these parameters.

Table 2.  Percentage Change From Base Case—Maximum
24-Hour Concentrations Due to Variations in Source
Related Parameters.

                           Muskingum River         Philo               Stuart
Parameter (% change)      +5%   +10%   +25%    +5%   +10%   +25%    +5%   +10%   +25%
Stack height (m)          -2    -5     -11     -6    -12    -27     -2    -5     -11
Stack temp (°C)           -4    -8     -17     -4    -8     -18     -2    -4     -7
Exit velocity (m/s)       -5    -9     -19     -6    -10    -23     -2    -3     -7
Stack diameter (m)        -11   -17    -30     -11   -20    -43     -3    -6     -15
Terrain adj. (m)           1     3      12      5     9      24      1     1      3
Emissions (g/sec)          5    10      25      5    10      25      5    10      25
Table 3.  Percentage Change From Base Case—Maximum
24-Hour Concentrations Due to Variations in Meteoro-
logical Parameters.  (Parameters varied: mixing height
(m), wind speed (m/s), ambient temperature (°C), and
stability class* for the Muskingum River, Philo, and
Stuart plants; tabulated values illegible in the source.)

    *Biased by +1 Stability Class


     The sensitivity of the maximum estimated concen-
trations to changes in meteorological data sets is
also determined.  Three data sets are used with each
set of source data.  Changes in maximum concentration
from the base case which are shown in Table 4, range
from an increase of nearly 50 percent to a decrease of
almost 30 percent.  Inherent in the change of maximum
concentration are the effects of the wind direction
and the variability of wind direction.  These are not
considered individually in the sensitivity analysis.
However, wind direction and its variability, which are
a function of the meteorological conditions peculiar
to each data set, play a major role in the percent con-
centration changes shown in Table 4.  This illustrates
the importance of a meteorological data set which is
as representative of transport and dispersion in the
vicinity of the plant as possible.

     As a result of this analysis it can be concluded
that: (1) the sensitivity of model estimates to accu-
racy in the input parameters varies from source to
source; (2) accuracy in the source parameters
becomes more critical  as  the stack becomes shorter;
(3) errors  in  individual  meteorological parameters,
with the exception  of  stability class, result in some-
what smaller errors in estimated concentrations; (4)
the cumulative  errors  in  meteorological parameters,
which result from the  use of data from an unrepresent-
ative site, can cause  substantial errors in estimated
concentrations.
Table 4.  Percentage Change From Base Case—Maximum
24-Hour Concentrations Due to Variations in the
Meteorological Data Sets.

Surface/Upper Air         Muskingum
Data Set                  River         Philo      Stuart
Huntington/Huntington                   -28.4      -19.4
Columbus/Dayton            17.8                     36.0
Cincinnati/Dayton          11.6          -5.8
Model Improvement

     As a result of the model  validation  and  the  sen-
sitivity analysis, studies  to  improve  the Single
Source Model are being undertaken.   Two specific  areas
under investigation are (1)  the  use  of other  stability
classification and dispersion  parameters  which may
allow better estimates of plume  dilution  and  (2)  the
use of more precise information  on the stack  param-
eters which affect plume rise.   Also,  additional
analyses are being undertaken  to evaluate the accu-
racy of hourly concentration estimates for various
meteorological regimes.  The goal is to assess the
need for better data inputs  or more  precise algorithms
in the model.  Based on these  studies, improvements
in the model will be considered.

            Applications of  Power Plant Studies

     Limitations on the model  and its  application in
the power plant studies have been noted.   Even with
these limitations, the power plant studies are of
value for use in generalized analyses  which assess
the overall effect of some plan  of action for the
utility industry.  These studies have  been used effec-
tively in a number of energy/environmental policy con-
siderations.

     The Clean Fuels Policy  is an EPA  program to
encourage some states to eliminate unnecessarily
stringent control regulations  in their SIPs and there-
by alleviate the shortage of low sulfur coal.  The
power plant studies demonstrated the potential use-
fulness of such a policy and helped  to indicate those
SIPs where unnecessarily stringent regulations might
exist.

     The power plant studies were used in early analy-
ses of proposed oil-to-coal  conversions.   They were
useful in indicating the types of sources which were
good candidates for conversion and specifically indi-
cated several plants that were poor  candidates.
These studies have been used for roughly  assessing
the allowable percent sulfur coal which could be  used
in oil-to-coal conversions required  under the Energy
Supply and Environmental Coordination  Act.  They will
serve as a basis for more detailed subsequent analy-
ses.

     In the development of  EPA policy  on  tall stacks
and meteorological control systems,  the power plant
studies were used frequently.  They  were  used to
analyze alternatives for limitations on stack height

-------
increases.   They allowed the frequency and amount of
emission reductions that would be required by meteoro-
logical  control  systems to be compared, for various
categories  of power plants, to permanent control re-
quirements.

     The power plant studies have been the basis for
analyses in support of a viable SO2 control strategy
for Ohio.  They were used as justification for exist-
ing regulations in the 1974 Ohio SO2 hearings.  They
were used as an initial base in developing EPA Region
V's current proposed regulations for Ohio17.  They
have also been used by Region IV in the development
and revision of SIPs applicable to power plants lo-
cated in the Southeastern United States.

     Industry has used the power plant studies in
statements18 to the U.S. Congress on options for con-
trol of SO2.  These studies have also been used in
evaluating  the impact of proposed legislation to pre-
vent significant deterioration of air quality.

     Based  on the demand for the reports resulting
from such power plant studies, it is logical  to con-
clude that  other regulatory agencies and industrial
groups are  using these studies.  In most cases, they
are being extended by more detailed analyses.  It
appears that these studies will continue to play an
important role in the development of regional and
national environmental policies which affect  utility
power plants.

                    Acknowledgments

     The authors wish to recognize the major  contri-
butions of  their co-workers to these power plant
studies.  Major contributions were made by D. Barrett,
W. Freas and R.  Lee under the overall direction of
H. Slater.   Special recognition is also due to those
individuals who performed the bulk of the work under
contract to EPA.  These include: P. Morgenstern and
L. Morgenstern of Walden Research Division of Abcor,
Inc.; R. Koch of GEOMET, Inc.; and M. Mills and
R. Stern of GCA Corporation.  Thanks are also due to
Mrs. B. Stroud who diligently prepared this manu-
script.

                      References

  1.  Turner, D.B., "Workbook of Atmospheric Dispersion
     Estimates."  Office of Air Programs Publication
     No. AP-26.   Superintendent of Documents, Govern-
     ment Printing Office, Washington, D.C.,  1970.
  2.  Briggs, G.A., Plume Rise, U.S. Atomic Energy
     Commission, Division of Technical Information,
     Oak Ridge, Tennessee, 1969.
  3.  Hrenko, J., D.B. Turner, and J. Zimmerman,
     "Interim User's Guide to a Computation Technique
     to Estimate Maximum 24-Hour Concentrations from
     Single Sources," Meteorology Laboratory, Environ-
     mental  Protection Agency, Research Triangle Park,
     N.C.,  1972 (Unpublished Manuscript).
  4.  National Coal  Association, "Steam Electric
     Factors," Washington, D.C., 1973.
  5.  Morgenstern, P., "Summary Report on Modeling
     Analysis of Power Plants for Compliance  Exten-
     sions  in 51  Air Quality Control Regions."  Publi-
     cation  No.  EPA-450/3-75-060. Prepared by Walden
     Research Division of Abcor, Inc., under  Contract
     No.  68-02-0049.   Environmental  Protection Agency,
     Research Triangle Park, N.C.,  1973.
  6.  Morgenstern,  P., et al,  "Modeling Analysis of
     Power  Plants  for Compliance  Extensions  in 51 Air
     Quality Control Regions," J. Air Poll.  Control
     Assn., Vol. 25, No. 3, 1975.

  7.  Morgenstern,  L., "Summary Report on Modeling
     Analysis of Power Plants for Fuel Conversion."
     Publication No. EPA-450/3-75-064. Prepared by
     Walden Research Division of Abcor, Inc. under
     Contract No.  68-02-1377.  Environmental Protection
     Agency, Research Triangle Park, N.C., 1975.

  8.  Morgenstern,  L., et al,  "Air Quality Modeling
     Analysis of Power Plants  for Fuel  Conversion."
     APCA Paper No. 75-33.6, Boston, Mass., 1975.
 9.  Morgenstern,  L., "Summary Report on  Modeling
     Analysis of Selected Power Plants  in  128 AQCRs
     for Evaluation of Impact on Ambient SO2 Concen-
     trations, Volume I."  Publication No. EPA-450/3-
     75-062.  Prepared by Walden  Research  Division of
     Abcor,  Inc.,  under  Contract  No. 68-02-1484.
     Environmental  Protection  Agency, Research
     Triangle Park, N.C.,  1975.
10.  Koch, R.,  "Summary  Report on Modeling Analysis  of
     Selected Power Plants  in  128 AQCRs for Evaluation
     of Impact on Ambient SO2 Concentrations, Volume
     II." Publication No.  EPA-450/3-75-063.  Prepared
     by GEOMET,  Inc., under Contract No.  68-02-1483.
     Environmental  Protection  Agency, Research  Triangle
     Park, N.C., 1975.
11.  Mills,  M.,  "Comprehensive Analysis of Time--
     Concentration  Relationships  and the Validation  of
     a  Single Source Dispersion Model." Publication
     No. EPA-450/3-75-083.  Prepared  by GCA Corporation
     under Contract No.  68-02-1376.   Environmental
     Protection  Agency,  Research  Triangle  Park,  N.C.,
     1975.
12.  Klug, W.,  "Dispersion  from Tall Stacks."
     Publication No. EPA-600/4-75-006. Environmental
     Protection  Agency,  Washington,  D.C.,  1975.
13.  Enviroplan, Inc.,  "A Comparison of Predicted
     and Measured  Sulfur Dioxide  Concentrations  at
     the Paradise  Power  Plant  in  1969."  Draft  Report
     No. 1,  prepared under  Contract  No. 68-01-1913.
     Environmental  Protection  Agency, Washington,
     D.C., 1975.
14.  Mills,  M.,  and R. Stern,  "Model Validation  and
     Time—Concentration Analysis of Three Power
     Plants." Final Report  prepared  by GCA Cor-
     poration under Contract No. 68-02-1376, Environ-
     mental  Protection Agency, Research Triangle
     Park, N.C., 1975.
15.  Lee, R., M. Mills,  and R. Stern, "Validation
     of a Single Source  Model." Paper presented  at
     the 6th NATO/CCMS  International Technical
     Meeting on  Air Pollution  Modeling, Frankfurt/
     Main, Germany, FR,  September, 1975.
16.  Freas,  W.,  "Sensitivity Analysis of the  Single
     Source  Model."  Office of Air Quality Planning
     and Standards, Environmental  Protection  Agency,
     Research Triangle Park, N.C., 1976  (Unpublished
     Manuscript).
17.  Environmental  Protection  Agency, "Technical
     Support Document: Development of a Sulfur
     Dioxide Control Strategy  for the State of  Ohio,
     Volume 1," Chicago, Illinois, September, 1975.

18.  Environmental  Research and Technology,  "An
     Evaluation  of  Sulfur  Dioxide Control  Require-
     ments for  Electric  Power  Plants."  Report  pre-
     pared for  Edison Electric Institute,  New York,
     N.Y., April,  1975.

-------
                                     MODIFIED DISPERSION MODELING PROCEDURES
                                            FOR INDIANA POWER PLANTS
                   S. K. Mukherji, Chief
                  Program Support Branch
          Indiana Air  Pollution Control Division
                   Indianapolis, Indiana

                       C. R. Hansen
                  Program Support Branch
          Indiana Air Pollution Control Division
                   Indianapolis, Indiana
                         M.  W.  Bobb
                   Program Support Branch
           Indiana  Air  Pollution Control Division
                    Indianapolis,  Indiana

                       H.  D.  Williams
                          Director
           Indiana  Air  Pollution Control Division
                    Indianapolis,  Indiana

                     David R.  Maxwell
                  Standards & Planning Branch
           Indiana  Air  Pollution Control Division
                    Indianapolis,  Indiana
     A modified procedure for short-term dispersion
modeling of Indiana power plants located along river
valleys is presented in this study.  Rough terrains
and occasional high winds persistent for hours produce
high surface turbulence in this particular region.
Based on empirical observations, the meteorological
stability input to the PTMTP modeling program of the
UNAMAP package was appropriately decreased.  The
artificial stability change simulated the augmented
atmospheric turbulence due to surface friction.
Generation of more accurate sulfur dioxide level
estimates indicated the feasibility of using conven-
tional short-term models with suitable changes in
particular cases.

                       Introduction
     A modified procedure for short-term sulfur dioxide
dispersion modeling of four Indiana power plants
located in the Ohio River and Wabash River valleys is
presented in this study.  These river valley regions
are characterized by undulating hills and bluffs at
some distance from the river and occasionally
persistent strong winds blowing across the river towards
the rough terrain.  The basic assumptions incorporated
in the available simulation models for atmospheric
transport of S02 do not account for this type of
situation.  As a result, the conventional short-term
dispersion models, viz., the programs included in
United States Environmental Protection Agency's
modeling package consistently underpredicted maximum
S02 levels around the power plant.  A simple change
involving meteorological parameter inputs to the
computer model was, therefore, initiated to simulate
the actual atmospheric conditions more closely and gen-
erate more accurate estimates, using conventional
modeling programs.

     The particular UNAMAP package model targeted for
modification was the Multiple Point Source routine
PTMTP1 (also identified as DBT51).  In this program the
usual simplifying assumptions are made; namely, steady
and uniform meteorological conditions with no wind
direction shear, Gaussian Plume behavior, flat or
gently rolling terrains, no aerodynamic downwash
conditions, etc.  One important and desirable feature
of PTMTP is that an hourly stability condition can be
assessed by a meteorologist from the available ambient
data before being input to the model.  Since the local
topography and wind data suggested the possibility of
considerable mechanical turbulence generation, the
stability class evaluation was isolated as  the program
area suitable for prediction improvements.  The
analytical considerations that led to hourly stability
postulations are now described.

                  Analytical Background
     A theoretical treatment of the effects of an
extensive area of given roughness on the short-range
vertical spread from a source, using gradient-
transfer methodologies, has been available for some
time. 2  Influences of terrain roughness changes on
pollutant dispersions have also been detailed by
Pasquill et al.3,4,5  Expectedly, moderately
rough terrains, i.e., ridges and valleys seem to
cause strong mixing.6,7,8  This mixing pattern is
usually prominent during late morning hours under
slightly unstable conditions of the atmosphere and
compares favorably with thermally induced stability
alterations.  It is true that low-level (less than
100m) sources within a confined narrow valley some-
times do not reflect any mechanical mixing. 9  However,
it is generally acceptable that local atmospheric
turbulence can be greatly increased, at least, within
the lower 500 to 1000 meters of the atmosphere through
surface friction effects. 3

     Counihan10 studied average strong wind or neutral
boundary layers and concluded their appropriate depth
to be about 600 meters.  He proposed an expression for
surface friction velocity u* as follows:

          u* = (Ug/20)[1 + 0.24 log (z0/0.38)]^(1/2)     (1)

In this equation, Ug denotes the geostrophic wind
speed and z0 represents the characteristic surface
roughness.  For wooded rural terrain, Counihan cites a
value of z0 = 0.38m.  Rougher terrains encountered in
the Ohio River or Wabash River Valley regions of
Indiana can be presumed to possess surface roughness
equivalent to approximately 0.58m.  In other words,
the friction velocity for the Indiana study ranges
from 0.050 Ug to 0.051 Ug.  A theoretical analysis by
Wipperman11 suggests that the boundary layer depth is
about 0.8 ku*/f, falling to 0.2 ku*/f under very stable
conditions (where f, the Coriolis parameter, approxi-
mately equals 10^-4/s and k is the dimensionless
Von Karman's constant, 0.4).  Thus the mesoscale
boundary layer depths h in the vicinity of the power
plants vary from 160 Ug to 163 Ug for so-called neutral

-------
stabilities.  The value of h decreases to about 40 Ug in
very stable conditions.  Translated in terms of numbers,
the boundary layer can extend to about 800m for
sustained windspeeds greater than 5 m/s typically en-
countered in the area.
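The arithmetic in the preceding paragraphs can be collected into a short sketch: equation (1) for the friction velocity and the 0.8 ku*/f estimate for the neutral boundary-layer depth. The constants are those quoted in the text; the function names are illustrative.

```python
# Sketch of the boundary-layer estimates discussed above: Counihan's
# friction-velocity expression (equation 1) and the Wipperman neutral
# boundary-layer depth h = 0.8 * k * u_star / f.
import math

K_VON_KARMAN = 0.4
CORIOLIS = 1.0e-4          # s^-1, mid-latitude value used in the text

def friction_velocity(geostrophic_wind, z0_m):
    """Equation (1): u* = (Ug/20) * [1 + 0.24*log10(z0/0.38)]**0.5"""
    return (geostrophic_wind / 20.0) * math.sqrt(1.0 + 0.24 * math.log10(z0_m / 0.38))

def neutral_depth(geostrophic_wind, z0_m):
    """Neutral boundary-layer depth, 0.8*k*u*/f (m)."""
    return 0.8 * K_VON_KARMAN * friction_velocity(geostrophic_wind, z0_m) / CORIOLIS

# Rough terrain near the Indiana plants (z0 ~ 0.58 m), Ug = 5 m/s:
print(round(friction_velocity(5.0, 0.58), 3))   # ~0.255 m/s, i.e. about 0.051*Ug
print(round(neutral_depth(5.0, 0.58)))          # ~818 m, the "about 800m" quoted above
```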

     Dispersion in  a  mechanically stirred boundary
layer has been discussed by Moore.12  According to the
analysis, maximum diffusivity within a boundary layer
is expected to occur  at  around h/4 corresponding to
200m height on windy  days.   Indiana power plant stacks
under study are well  below the depth of 200m.
Therefore, any plume  behavior on these days is expected
to be significantly affected by the mechanically
induced turbulence  in the atmosphere.   The effects of
higher turbulence on  the S02 plume were simulated in
the PTMTP model by  reducing the atmospheric stability
by one class from the conventional one when the
situation dictated  so.   The algorithms for the
stability reduction are  described below.

                    Modeling Modification
     To begin with, preliminary topography and wind
 direction surveys were  carried out for the individual
 power plant to be modeled.   The purpose was to
 ascertain the magnitude of  terrain roughness changes
 within 5 km of the plant and its likely effects on the
 plume rise and spread.   Next,  a set of guidelines was
 laid down to establish  the  likelihood of strong
 friction-generated turbulence interacting with the
 plume.  This consisted  of scanning (a) the elevation
 changes around the plant and (b) the hourly
 meteorological data,  namely, time of day, season, cloud
 cover, wind speeds, etc.  Upon the set of pre-estab-
 lished criteria being met,  the conventional stability
 class (based on Pasquill-Gifford-Turner suggestions)
 was decreased by one.   Thus, if during summer daytime
 the windspeed was greater than 4 m/s with the sky
 being at least partly clear, the stability class was
 lowered by one provided the wind was blowing the plume
 over the rough terrain.   The flow chart leading to the
 synthetic decrease of the stability parameter as input
 to the PTMTP model is indicated in Figure 1.
  Figure 1.  Flow Chart to Assess Stability Alterations.  (Decision criteria
             in the chart include whether the plant is located within 2 km of
             Lake Michigan and the directions from which the wind will produce
             a topographical effect.)
 It is noted that no  change  of stability was allowed for
 classes 1 and 2.  Turbulence generated by thermal
instability associated with  these  two classes was
assumed to be much more dominant compared to dynamic
turbulence production.
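The decision rule outlined above and in Figure 1 can be written out roughly as below. The criteria (daytime, windspeed above 4 m/s, at least partly clear sky, wind carrying the plume over rough terrain, and no change for classes 1 or 2) are paraphrased from the text; the seasonal and lake-proximity checks are omitted, so this is an illustration rather than the Division's actual screening procedure.

```python
# Rough sketch of the hourly stability adjustment described above: lower the
# Pasquill-Gifford-Turner class by one when mechanical turbulence over rough
# terrain is expected, but never alter classes 1 or 2 (A or B).
def adjusted_stability(pgt_class, wind_speed_m_s, sky_partly_clear,
                       wind_toward_rough_terrain, daytime):
    """Return the stability class to feed to PTMTP (1 = A ... 6 = F)."""
    if pgt_class <= 2:
        return pgt_class              # thermal turbulence already dominant
    lower_it = (daytime and wind_speed_m_s > 4.0 and sky_partly_clear
                and wind_toward_rough_terrain)
    return pgt_class - 1 if lower_it else pgt_class

print(adjusted_stability(4, 5.5, True, True, True))    # 3: class lowered
print(adjusted_stability(4, 3.0, True, True, True))    # 4: wind too light
print(adjusted_stability(2, 6.0, True, True, True))    # 2: never altered
```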

                          Results
      Typical elevation changes  around two of the power
plants are presented  in Figures  2(a)  and 2(b).   Fig-
ure 2(a) depicts a sudden  110m jump  in elevation at a
distance of 6 to 12 stack  heights  from the source,
i.e., a step change in the floor level of the flow.
A more modest elevation increase is  seen in Fig-
ure 2(b), where the elevation changes  may be construed
as rough elements embedded at the  lower boundary of
the surface layer flow.

Figure 2(a).  Elevation Changes at Clifty Creek Plant (distance in thousands
              of feet).

Figure 2(b).  Terrain Roughness Around Warrick-Culley Units (elevation in
              hundreds of feet versus distance in thousands of feet).
In both cases, an artificial change of stability  for
any flow over the elevated terrain generated more
accurate estimates.

     Comparisons of the actually monitored  sulfur
dioxide readings with the levels projected  by  the
lowered stability input to the PTMTP model  are
compiled in Table 1 on the following page.

-------
     TABLE 1.  Comparison of Actual Reading Versus
                24-Hour Predictions (ug/m3)

                     Conventional   Altered       Actually
                     Stability      Stability     Monitored SO2
     Point Source    Assumptions    Assumptions   Reading

     Warrick-Culley
     Clifty Creek
    Wabash River
                                                                                                3/3/74
226
17
126
97
0.002
0.3
435
368 •
532
336
126
49.6
479
479
506
275
29
20
                     23
                                31
                                            31
Assessments of the SO2 levels using conventional
stability assumptions  are shown side by  side.   The
results reflect  the days  when persistent strong winds
were measured flowing  over the rough terrains.   In
both sets of computations, the actual elevation of the
receptors was taken into  account.  It should be noted
that the Warrick-Culley monitors/receptors were
generally very  close to the plume centerline during the
days of study.   On  the other hand, the monitors for the
Clifty Creek and Wabash River plants were located
kilometers away from the estimated maximum SO2 impact
sites.  For the low-level ranges of SO2 indicated by
the monitors, the accuracy of the equipment was within
10 percent of the observed readings at best.  Finally,
the background SO2 levels over the river valley basins
are estimated to be around 10-25 ug/m3 on the basis of
acquired data and wind persistence studies.

     Typical alterations in SO2 isopleths when the
stability classes were decreased are portrayed  in Fig-
ures 3(a) and 3(b)  for two power plants.
      Figure 3(a).  Isopleths for Projected Levels Around Clifty Creek Plant
                    (SO2 concentrations at high and at lowered stability;
                    distances in km from source).

      Figure 3(b).  Estimated SO2 Impacts of Warrick-Culley Stations
                    (distances in km from source).

Displacements of the maximum  impact locations caused
by the meteorological changes  are  clearly discerned.
It is  also  evident from these  figures that the effects
of stability  changes drop  off  sharply from the
maximum  impact location.   At a distance of 3-5 kilo-
meters away from the maximum S02  level region, the
difference  between the two estimates  becomes
negligible.   The minor differences between the
numerical S02 estimates for Clifty Creek and Wabash
River monitor/receptors are attributable to this
effect.  In view of these  considerations, it is
reasonable  to propose that the actual stability
parameters  estimate the pollution  levels somewhat
better than the conventional stability inputs.

     To  substantiate this  statement further, a
detailed review of results obtained from the Warrick-
Culley Power  Plant complex is  presented in Table 2.
TABLE 2.  Comparison of Estimated SO2 Levels Near
          Warrick-Culley Plants

                                 Obs. 24-hr    24-hr SO2 Level Estimates (ug/m3)
 Monitor (Monitor Location)      SO2 Level     (includes background levels)
                                               Lower Stability   Higher Stability

 M1 (Mobile Site, 0.95 km)          472             421                 30
                                    472             365                 26
                                     83             222                162
                                    131              99                 60
                                    104             120                 69
                                     52              26                134
                                     20              30                 31
                                     52              26                133
                                    208             151                 26
                                    314             126                 26

-------
These results represent days on which high levels of
S02 were projected to occur at or near one of the
operating samplers due to direct impact.  Such a choice
of days tended to provide a more valid comparison
between the two estimates (based on different hourly
stability parameters), since the uncertainty effects
for off-axis plume concentration predictions were less
significant.  It is seen from the results that the
lower stability assumptions generated more accurate
projections for nine out of ten days.  Within this
duration the total number of hours for which stability
levels were artificially decreased by one class
exceeded one hundred.  An hourly correlation analysis
between the actual readings and the two estimates
yielded low values for the coefficient 'r', i.e. 'r'
was less than 0.40.  However, the analysis spanning
this period indicated better regression results for
the lower stability cases.  The lesser S02 estimates
for the nearby monitor Ml probably resulted from
inaccurate wind direction assessment for the two days.
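The nine-out-of-ten tally can be reproduced mechanically from the three numeric columns of Table 2; the sketch below uses those values as reconstructed above and simply counts the days on which the lowered-stability estimate falls closer to the observation.

```python
# Tally, for each day in Table 2, whether the lowered-stability or the
# conventional (higher) stability estimate falls closer to the observed
# 24-hr SO2 level.  Values are taken from Table 2 as reconstructed above.
observed    = [472, 472,  83, 131, 104,  52,  20,  52, 208, 314]
lower_stab  = [421, 365, 222,  99, 120,  26,  30,  26, 151, 126]
higher_stab = [ 30,  26, 162,  60,  69, 134,  31, 133,  26,  26]

wins = sum(abs(lo - ob) < abs(hi - ob)
           for ob, lo, hi in zip(observed, lower_stab, higher_stab))
print(f"lowered stability closer on {wins} of {len(observed)} days")  # 9 of 10
```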

     In the overall analysis, forced lowering of
stability to represent a more turbulent flow field
provided a more accurate assessment of SO2 levels at the samplers.
On the basis of results presented it is suggested that
similar procedures be routinely incorporated in any
dispersion modeling scheme where dominant mechanical
turbulence effects are anticipated.

                Summary of Conclusions

1)  A modification was devised for conventional short-
term dispersion models, such as the UNAMAP packaged
PTMTP, to assess S02 levels around some Indiana power
plants located in a rough terrain.

2)  The alteration consisted simply of an artificial
lowering of the hourly stability class by one when an
appropriate combination of topography effects and
meteorological patterns occurred.  A suitable
algorithm could be easily incorporated within the
PTMTP model.

3)  Projections based on decreased hourly stability
were much closer to actually sampled S02 levels than
conventional stability predictions.

4)  Incorporation of similar procedures is suggested
for any modeling scheme where strong mechanical
turbulence due to surface friction is likely to occur.

                        REFERENCES

1.   Turner, D.B., and A.D. Busse, PTMTP, "User's Net-
     work for Applied Modeling of Air Pollution," Nat.
     Env. Res. Ctr., U.S. Environmental Protection
     Agency, Research Triangle Park, N.C., 1973.

2.   Calder, K.L., "Eddy Diffusion and Evaporation in
     Flow Over Aerodynamically Smooth and Rough
     Surfaces," Quart. J. Mech.  Applied Math., 2,
     p. 153, 1949.

3.   Pasquill, F., Atmospheric Diffusion, Halsted-
     Wiley (2nd Edition), New York, N.Y., 1974.

4.   Csanady, G.T., Turbulent Diffusion in the Environ-
     ment, Reidel Publishing Co., Boston, Mass., 1973.

5.   Lumley, J.L. and H.A. Panofsky, The Structure of
     Atmospheric Turbulence, Wiley Inter Science, New
     York, N.Y., 1964.
6.   Holland, J.A., "A Meteorological Survey of the
     Oak Ridge Area," USAEC Report ORO-99, Technical
     Information Center, Oak Ridge, Tenn., 1953.

7.   Hewson, E.W., "The Meteorological Control of At-
     mospheric Pollution by Heavy Industry," Quart.
     J. Royal Met. Soc., 71, p. 266, 1945.

8.   Cermak, J.E., "Fluid Mechanics Applications to
     Problems of Wind Forces on Structure and Air
     Pollution," Development in Mechanics, 7, Univ.
     of Pittsburgh Press, Pittsburgh, Pa., 1973.

9.   Panofsky, H.A., and B. Prasad, "The Effect of
     Meteorological Factors on Air Pollution in a
     Narrow Valley," J. Appl. Met., 6, p. 493, 1967.

10.  Counihan, J., "Adiabatic Atmospheric Boundary
     Layers," to be published in Atmospheric Environ-
     ment, 1976.

11.  Wipperman, F., "The Planetary Boundary Layer of
     the Atmosphere," Deutsch. Wetterdienst, p. 106,
     1973.

12.  Moore, D.J., "Application of CEGB Plume Rise
     and Dispersion Results to Prediction Models for
     Ground Level Concentrations," Proc. Inter. Clean
     Air Conf., Rotorua, New Zealand, 1975.

-------
                       SEVERITY OF  STATIONARY  AIR  POLLUTION SOURCES -
                                    A  SIMULATION APPROACH

                           E.  C. Eimutis,  B. J. Holmes,  L.  B.  Mote

                                Monsanto Research  Corporation
                                    Dayton, Ohio  45407
                  Abstract

     A measure of specific point source air
pollution severity has been defined as the
ratio of its ground level concentration con-
tribution of a given species relative to some
potentially hazardous concentration of that
species.  For well-documented source types,
e.g. coal-fired steam electric utilities, it
is possible to analyze the severity on a
plant-by-plant basis and to examine the sever-
ity frequency distribution deterministically.
For many other source types, e.g.  industrial/
commercial boilers, cotton gins, asphalt batch
plants, solvent evaporation, etc.,  the points
of emission number in the thousands and in
some cases in the hundred thousands.  These
source types require a statistical approach.
We present a Monte Carlo simulation technique
together with efficient algorithms for fitting
the inverse Weibull, Gamma, normal, and log-
normal cumulative density functions.  Using
coal-fired steam electric utilities as an ex-
ample, we show a significant correlation be-
tween deterministic and simulated severity
results.

               Source Severity

     The air pollution severity, S, of a
given source should in some way be proportion-
al to the degree of potential hazard it im-
poses upon individuals in its environment.
The relative hazard, H, from a specific emis-
sion can be defined as being directly propor-
tional to the delivered dose, the probability
of dose delivery, and the number of people who
would receive it, and inversely proportional
to the toxicity of the material as follows:

                    H = N P f / LD50                 (1)

where
    S = source severity
    H = relative hazard
    N = number of persons
 LD50 = lethal dose for 50% of the people exposed
    P = probability of dose delivery
    f = delivered dose = B·R'·∫x(t)dt
    B = average breathing rate
   R' = lung retention factor
 x(t) = concentration time history
     The source severity is herein defined as
the ratio of the dose of a pollutant delivered
to a population, relative to some potentially
hazardous dose.  Since LD50 data are not
available for human beings, another measure of
potentially hazardous dosage was used.

     The potentially hazardous dose for a
given pollutant from a specific point source
in a given region is defined as follows:
                    D_H = N B R' ∫ TLV(t) K dt       (2)

where
   D_H = potentially hazardous dose, g
     N = population exposed to a specific source, persons
     B = average breathing rate, m3/s-person
    R' = lung retention factor for the pollutant of interest
         (dimensionless factor, 0 < R' ≤ 1)
-------
           Simulation Methodology

     In many statistical analyses of data, it
is frequently desired to consider a random
variable which is a function of other random
variables.  An example pertinent to air pollu-
tion studies is given by the severity equa-
tions for ground-level concentrations of air
pollutants.1  For example, the severity equa-
tion for S02 emissions from the stacks of
coal-fired electric utility plants is given
by:
                    S = 50 Q / h²                             (10)

where  Q = emission rate, g/s
       h = emission height, m

The emission rate can be calculated from:

                    Q = (CC)(E)(% sulfur)(K1)                 (11)

where
             CC = coal consumed, g/yr
              E = emission factor
                = 0.019 g SO2 per g of coal consumed per 1% sulfur in the coal
         sulfur = percent of sulfur in the coal
             K1 = 3.171 x 10^-8 (to convert g/yr to g/s)

or

                    S = (K2)(CC)(% sulfur) / h²               (12)

where  K2 = 50 E K1 = 3.0 x 10^-8
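
     As a worked illustration of Eqs. (10)-(12), the short sketch below computes Q and S
for a single hypothetical plant; the coal consumption, sulfur content, and stack height
used are made-up inputs, not values from the 224-plant population discussed later.

    # Worked example of Eqs. (10)-(11) for one hypothetical plant.
    # CC, percent_sulfur, and h below are illustrative values only.

    E  = 0.019       # g SO2 per g coal per 1% sulfur (emission factor, Eq. 11)
    K1 = 3.171e-8    # converts g/yr to g/s

    def so2_severity(cc_g_per_yr, percent_sulfur, h_m):
        q = cc_g_per_yr * E * percent_sulfur * K1   # emission rate, g/s (Eq. 11)
        return 50.0 * q / h_m**2                    # severity (Eq. 10)

    print(so2_severity(cc_g_per_yr=2.0e12, percent_sulfur=3.0, h_m=150.0))  # ~8.0
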
     Next, consider a general setting where the random variable z is a function of the
random variables x1, ..., xn, given by z = f(x1, ..., xn) for some function f.  Suppose
the actual distributions of the input random variables x1, ..., xn are known, including
their probability density functions (p.d.f.) and the corresponding cumulative distribution
functions (c.d.f.).  Then it seems reasonable to assume that the distribution of the
random variable z can be obtained.  In a sense this is true, in that integral formulae
have been developed which give the probability density function and the cumulative
distribution function for z as a function of the same functions for the xi.2  These
formulae, however, are complex
 even for the case of the simple sum, differ-
 ence, product,  or quotient  of  two random vari-
 ables.   Also,  even if the integrals  are  suc-
 cessfully evaluated, the resulting probability
 density  function for  z will in general not be
 exactly  one  of  the standard distributions and
 as a result  may be difficult to handle.  There
 are  certain  special cases in which the result-
 ing  p.d.f. will be known.   In  these  instances,
 the  analytical  approach to  finding z explicit-
 ly is by far the best approach.   In  other in-
 stances  certain simplifying assumptions  about
 the  distribution of z can be made provided
 certain  things  are true about  the coefficient
 of variability  or equivalently the coefficient
 of skewness  of  the input variables.  However,
 in cases where  there  are more  than two input
 variables or there is considerable skewness
 exhibited by the variables  or  the function f
 becomes  complicated,  then the  strict analyti-
                                                cal approach to finding the  distribution of z
                                                explicitly  will in general not  be applicable.

     Sometimes it is desired to find information on the distribution of z when some
things are known about the distribution of the input variables.  Since the general
approach of finding the explicit distribution function for z is not possible, "many"
values of z may be calculated for explicit values of the input variables x1, ..., xn,
and these values may be used to estimate (rather closely if enough values of z are
known) such things as the mean, standard deviation, etc., for z.  This approach is
called the deterministic approach because in this technique it is possible to determine
explicit values for z from explicit values of the input variables x1, ..., xn.
     Consider the situation when either no ex-
plicit values of the input variable are avail-
able from which values of z can be calculated
or the number of such values is too small to
permit calculation of enough values of z to
determine useful information regarding its
distribution.  In this situation we use a com-
puter simulation to obtain values for z.  For
example, instead of knowing many values for
the input variables x1, ..., xn, only limited
information may be available, such as an esti-
mate of the mean and possibly the range and
symmetry or skewness properties.  In this
case, the input variables are fitted to some
theoretical distribution and the small amount
of available information about the variables
is used to determine the parameters of the
distributions.  A computer is then used to
sample from each input variable's distribution
function and to subsequently use these data to
calculate values of z from which the mean,
standard deviation, etc., can be estimated and
frequency histograms and cumulative distribu-
tion plots for z can be prepared.  Some of the
techniques and procedures used in such a com-
puter simulation are described below.

     The equation for the severity (equation
12) of ground-level concentrations of S02
emissions from the stacks of coal-fired elec-
tric utilities will be used to illustrate the
methodology utilized in the simulation ap-
proach.

     When all of the input random variables
are independent random variables, the method-
ology is relatively simple.  A large sample
 (e.g., of size n) is drawn from the distribu-
tion of each of the input variables.  These
data are then used one by one to calculate n
values of S.  From these n values of S, the
mean, standard deviation, etc., can be calcu-
lated and a frequency histogram and cumulative
distribution can be plotted.
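
     The independent-variable case just described can be sketched in a few lines.  The
distributions and parameters below are placeholders, not the fitted values used in this
study, and the 5,000 draws simply mirror the sample size reported later.

    # Minimal sketch of the simulation approach for independent input variables:
    # sample each input from a fitted distribution, compute S value by value,
    # then estimate the mean, standard deviation, and the frequency distribution.
    # The distributions and parameters here are placeholders, not the study's fits.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000                                  # number of simulated values of S

    pct_sulfur = 1.0 + rng.weibull(1.5, n)                    # placeholder Weibull draw
    cc = rng.lognormal(mean=27.0, sigma=1.0, size=n)          # coal consumed, g/yr (placeholder)
    h  = rng.lognormal(mean=4.5, sigma=0.5, size=n)           # stack height, m (placeholder)

    K2 = 3.0e-8                               # combined constant from Eq. (12)
    S  = K2 * cc * pct_sulfur / h**2          # Eq. (12), evaluated draw by draw

    print(S.mean(), S.std(), S.min(), S.max())
    hist, edges = np.histogram(S, bins=20)    # basis for a frequency histogram
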

     Some comments are in order regarding the
method by which samples  are drawn from the
distribution of the input variables.  First,
it should be noted that  the input variables
are restricted to one of four types of contin-
uous distributions:  the Weibull, Normal, Gam-
ma, or Log-normal distribution.  The type of
each input variable and  the corresponding par-
ameters for its distribution  function must
                                              711

-------
thus be specified.  The method of obtaining
the "best11 type for each variable and the cor-
responding parameters is described in another
publication.1  It is necessary to have a ran-
dom sample of data points for the input vari-
able in order to be able to fit it to the
proper distribution.  However, certain situa-
tions may arise when that much information
about the input variable is not available.
For example, two extreme points on the distri-
bution and either the mean or mode may be
known, or some information may be available to
determine whether the distribution is symmet-
ric or skewed.  In such situations where the
goodness-of-fit program is inoperable, it may
still be possible to fit the variable to one
of the four distributions above and to obtain
its parameters.

     As a demonstration of the above procedure, consider the following example.  Suppose
that for an input variable, x, it is known with 95% confidence that the values of x will
be between e and e^5 (where e = 2.71...).  Suppose also that the mode of the distribution
is known to be between e and e^2 and that the mean is approximately equal to e^3.  These
points indicate that x has a rather heavily right-skewed distribution, with a p.d.f. that
rises quickly to its mode and tails off slowly toward e^5.
     Since it is known that the 0.025 point on the cumulative graph is approximately
equal to e and the 0.975 point is approximately equal to e^5, this information alone can
be used to calculate A and B in a Weibull fit.  Thus, one finds that A = 1.25 and
B = 7.29 x 10^-3.  These values of A and B yield a theoretical mean μ = 47.7, which is
larger than the estimated e^3 value for the mean.  The theoretical mode is 14.2, which
again is larger than the estimated mode.  Thus, the Weibull fit could be used as an
approximation to the "true" distribution of x.
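
     The two-quantile Weibull fit in this example can be reproduced as follows, assuming
the c.d.f. is parameterized as F(x) = 1 - exp(-B·x^A); that parameterization is an
assumption consistent with the A and B values quoted, not a form stated in the text.

    # Reproduce the Weibull fit from the two quantile conditions F(e) = 0.025 and
    # F(e^5) = 0.975, assuming the parameterization F(x) = 1 - exp(-B * x**A).
    import math

    x1, p1 = math.e, 0.025
    x2, p2 = math.e**5, 0.975

    # Taking logs twice: ln(-ln(1 - p)) = ln(B) + A*ln(x), a straight line in ln(x).
    y1 = math.log(-math.log(1.0 - p1))
    y2 = math.log(-math.log(1.0 - p2))
    A = (y2 - y1) / (math.log(x2) - math.log(x1))
    B = math.exp(y1 - A * math.log(x1))

    scale = B ** (-1.0 / A)                         # equivalent scale parameter
    mean  = scale * math.gamma(1.0 + 1.0 / A)
    mode  = scale * ((A - 1.0) / A) ** (1.0 / A)
    # A ~ 1.25, B ~ 7.3e-3; mean ~ 48 and mode ~ 14 (the quoted 47.7 and 14.2
    # follow from the rounded values A = 1.25, B = 7.29e-3).
    print(round(A, 2), B, round(mean, 1), round(mode, 1))
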

     Another way of obtaining a distribution
for x is to assume that it is log-normally
distributed since the Log-normal distribution
is a right-skewed distribution.  If x is  as-
sumed to be a Log-normal distribution, then
log x must be Normal.  Hence, by taking the
logarithm of the 0.025 point  and 0.975 point
of x, the same points on the  cumulative graph
of log x are obtained which were assumed  to be
Normal.  These points are 1 and 5, respective-
ly.  Thus, the mean μ of log x should be taken to be 3 and, since 1 and 5 are the 0.025
and 0.975 points, respectively, it is found that σ = 1.02.  The values μ = 3 and σ = 1.02
can thus be used as parameters to sample from the Normal for values of log x.  By taking
antilogarithms of the sample, a sample for x can be obtained.
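
     The log-normal alternative can be sketched the same way.  The 1.96 multiplier for
the 0.025/0.975 points of a Normal is assumed here; it gives the σ of about 1.02 quoted
above.

    # Log-normal alternative: fit a Normal to log x from its 0.025 and 0.975 points
    # (1 and 5), then sample and exponentiate.  The 1.96-sigma spacing of those
    # quantiles is the standard Normal value and is assumed here.
    import numpy as np

    lo, hi = 1.0, 5.0                    # 0.025 and 0.975 points of log x
    mu    = 0.5 * (lo + hi)              # 3.0
    sigma = (hi - lo) / (2.0 * 1.96)     # ~1.02

    rng = np.random.default_rng(0)
    log_x = rng.normal(mu, sigma, size=5000)
    x     = np.exp(log_x)                # sample for x itself
    print(mu, round(sigma, 2), x.mean())
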

     In view of the above discussion, it  is
evident that  several  avenues  are  available
for obtaining a distribution  to  fit  the given
data or information about  each input varia-
ble.  The  simulation  program  (for the case of
independent input variables)  simply  takes the
parameters for the given type of  distribution
for an input  variable and  samples from this
distribution  to obtain a random  sample for
that input variable.

    Example of Use of Simulation  Approach
     with  Coal-Fired  Electric Utilities

     In order to obtain an indication of  how
well the simulation procedure approximates
the "true" population,  S02  emissions from the
stacks of  coal-fired  electric utilities were
examined.  Data were  available on %  sulfur,
CC, and h  for 224 power plants in the United
States.  This was considered  to be the total
population which was  to be  simulated by using
only a small  number (24) of plants in order
to obtain  information about the distributions
of % sulfur,  CC, and  h.

     To obtain a "random" sample, the first
24 plants on the list were selected.  The %
sulfur, CC, and h values for these 24 plants
were then fitted to the four distributions considered
in the simulation program.  The distributions
were then selected which appeared to  fit  the
data better on an overall basis considering
the SE, χ²-value, actual class interval com-
parisons, and coefficient  of  skewness  and
measure of kurtosis calculations.  For  %  sul-
fur, the Weibull Maximum Likelihood  Fit was
selected and  clipped  at the 5% and 99%  points.
Also, h was found not  to be independent of CC.
Hence, it was decided to treat h as a depend-
ent variable  correlated with  the  independent
variable CC by using  the raw  data on  the  24
plants to obtain R.   The coefficient  of skew-
ness indicated that h  was not normal  but
skewed to the right.   Furthermore, the  coef-
ficient of skewness and measure of kurtosis
for log h indicated "near-normality."   Hence,
it was decided to use  the Log-normal  distri-
bution for h.

     Using equation 12 for S, the data described above were entered into the simulation
program, and 5,000 values of S were calculated.  Subsequently, the mean, standard de-
viation, maximum value, and minimum value
were calculated.  A deterministic calculation
of these values was performed for all  224
plants in the population and  the results are
compiled in the table  below:

     Table 2.  RESULTS OF SIMULATED AND DETERMINISTIC CALCULATIONS

     Parameter              Simulated value    Deterministic value
     Mean                         9.25                 8.9
     Standard deviation          12.5                 12.4
     Maximum value              154.5                136.0
     Minimum value                0.08                 0.36
     Frequency histograms and cumulative fre-
quency plots were also drawn for both the sim-
ulated values and the deterministic values of
S and these are shown in Figures 1 through 4.
                                              712

-------
     The  large-sample t-test  was performed to
determine whether  there was  a significant
difference in the  simulated  and deterministic
mean values obtained above.   The test,  as
might be  expected,  showed no  significance in
the difference at  the 0.01 or 0.05 levels.
Furthermore, the F test for  significant dif-
ference  in the variances was  also negative,
indicating no significant difference.
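
     A hedged sketch of these two comparisons, computed from the summary statistics in
Table 2 rather than from the raw plant data (which are not reproduced here):

    # Two-sample comparison of the simulated and deterministic results using the
    # summary statistics from Table 2 (a sketch; the raw values are not shown here).
    import math

    m1, s1, n1 = 9.25, 12.5, 5000     # simulated
    m2, s2, n2 = 8.9, 12.4, 224       # deterministic (all 224 plants)

    t = (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)   # large-sample t statistic
    F = (s1 / s2) ** 2                                   # variance ratio

    print(round(t, 2), round(F, 2))   # ~0.41 and ~1.02: neither is significant at the
                                      # 0.05 level (|t| < 1.96, F near 1)
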
                Acknowledgments

     This work was  conducted  under EPA  con-
tract  No. 68-02-1874 with Dr. Dale Denny  as
EPA contract project officer  and Dr. R. C.
Binning  as the MRC  project manager.

                   References

1.  E. C. Eimutis, B. J. Holmes, and L. B. Mote,
    "Severity of Stationary Air Pollution
    Sources - A Simulation Approach," EPA
    Contract No. 68-02-1874, final report in
    print.

2.  E. Parzen, Modern Probability Theory and
    Its Applications, John Wiley & Sons, Wiley
    Publications in Statistics, New York, 1960.
    Figure 3.  Cumulative frequency for the severity of SO2 emissions from coal-fired
               electric utilities (simulated sample, n = 5000).
                                                  713

-------
                       ATMOSPHERIC  POLLUTANT DISPERSION USING SECOND-ORDER
                               CLOSURE MODELING OF THE TURBULENCE*

                                 W.  S.  Lewellen and M. Teske
                       Aeronautical Research Associates of Princeton, Inc.
                                      Princeton, New Jersey
                   Abstract

      A method  is  described  for  calculating
 turbulent  diffusion  of plumes in  the  planetary
 boundary  layer based on Donaldson's  second-
 order closure  approach to turbulent  flows.
 The  method calls  for solving dynamic,  partial
 differential equations for  the  species flux,
 variance,  and  its  mean concentration,  as  well
 as the second-order  turbulent velocity and
 temperature correlations to determine  the
 turbulence in  the  ambient atmospheric  boundary
 layer in  which the plume is embedded.   The
 parameters governing dispersion in the plane-
 tary boundary  layer  are identified and dis-
 cussed.   Results  from a sample  calculation of
 dispersion in  a free convection layer  are com-
 pared with laboratory observations.

 * This research has been partially funded with Federal funds from the Environmental
   Protection Agency under Contract No. EPA 68-02-1310, and from the National Aeronautics
   and Space Administration under Contract No. NAS8-31380.

              1.  Introduction

     A valid estimate of turbulent diffusion in the atmospheric boundary layer is required
to determine the impact of a pollutant release on the air quality at some distance from
the point of release.  Our purpose here is to review and present some results from a model
based on solving a specific dynamic differential equation for the turbulent flux of
species.  This approach, based on second-order closure of the ensemble-averaged moments of
the fluctuating variables, is currently being studied by a number of investigators for
computing turbulence in the atmosphere.1-11  We review the essence of the model in the
next section.  The parameters identified by this model as governing the dispersion of a
neutrally buoyant, nonreactive species in the planetary boundary layer are presented in
Section 3.  A sample calculation is compared with results from a laboratory simulation of
dispersion in a free convection layer in Section 4.

              2.  Model Equations

     We take as our starting point the ensemble-averaged, Eulerian equation of mass
continuity for the species concentration  C :

     ∂C/∂t + ∂(U_j C + u_j c)/∂x_j  =  S + D ∂²C/∂x_j∂x_j                    (1)

This equation is exact but undetermined even if the velocity  U_i  is known, because of
the presence of the additional variable  u_j c .  By taking moments of the instantaneous
variables and averaging, we can generate an exact equation for the species flux  u_i c .2
This exact equation introduces variables other than second-order correlations and thus
leaves the system of equations undetermined.  The task of second-order closure is to model
these terms as functions of the second-order correlations and mean flow variables.

     Our philosophy has been to choose the simplest models that have proper tensor
symmetry, dimensionalization, and the desired physical properties.  The modeled form of
the species flux equation may be written for high Reynolds numbers as

     ∂(u_i c)/∂t + U_j ∂(u_i c)/∂x_j  =  − u_j c ∂U_i/∂x_j − u_i u_j ∂C/∂x_j + (g_i/Θ) cθ

                       + 0.3 ∂/∂x_j [ q_c Λ_c ∂(u_i c)/∂x_j ]  −  (q_c/Λ_c) u_i c        (2)
     We do not expect that the last two
modeled terms in Eq. (2) used to replace the
complex terms of the exact equation will
faithfully represent all of the information
present.  However, for most problems, we are
interested in only a small part of the infor-
mation contained in the complete turbulent
spectrum.  We believe that the two modeled
terms provide at least the minimum amount of
desired information needed to close the
system at the second order.  The first modeled
term introduces diffusion to prevent excessive
gradients in the species flux.  The other
modeled term, a tendency-towards-isotropy
term, introduces the required feedback which
permits the flux to reach an equilibrium level
even in the presence of large production con-
tributed by the first three exact terms on the
right-hand side of Eq. (2).

     The effect of stability on diffusion
comes into Eq. (2) in two ways:  through the
influence of stability on the velocity
fluctuations2 and through the buoyant term
appearing directly in Eq. (2).  This term is
not a result of our closure modeling but
arises directly from the buoyant term in the
momentum equation.  However, modeled terms
must appear in the equation derived for  cθ .
If these are treated in a similar fashion to
those in Eq. (2), the equation for  cθ  may be
written as
     ∂(cθ)/∂t + U_j ∂(cθ)/∂x_j  =  − u_j θ ∂C/∂x_j − u_j c ∂Θ/∂x_j

                       + 0.3 ∂/∂x_j [ q_c Λ_c ∂(cθ)/∂x_j ]  −  (q_c/Λ_c) cθ              (3)
     With the velocity and temperature fields specified, Eqs. (1), (2) and (3) form a
complete set for the determination of  C .  The velocity and length scales,  q_c  and
Λ_c , appearing in Eqs. (2) and (3), are appropriately related to the companion scales  q
                                              714

-------
and  Λ  of the ambient turbulent field.  The
mean velocity, temperature, and second-order
velocity and temperature fluctuations may be
obtained from field observations or calculated
from similarly modeled equations.7,9,10  As
long as we are dealing with a nonreactive,
neutrally buoyant species, the two sets need
not be coupled together.

          3-  Governing Parameters

     A solution to the dispersion equations, Eqs. (1)-(3), for a neutrally buoyant,
nonreactive species requires specification of the velocity,  U_i ;  temperature,  Θ ;
Reynolds stress,  u_i u_j ;  heat flux,  u_j θ ;  turbulent scale,  Λ ;  and source,  S ,
as functions of
time and space.   Although one may argue with
the details of our modeling,  it  appears un-
likely that a more accurate model would re-
quire less information.   Thus, very detailed
measurements of turbulence  are required if
one attempts to predict  dispersion on the
basis of measured wind and  turbulence fields
alone.  Measurement of the  average wind speed
and direction plus an estimate of the
stability class of the turbulence are unlikely
to provide sufficiently accurate data.

     It is naturally desirable to parameterize
this dependence with as few parameters as
possible.  The critical parameters
may be deduced by examining the  equations
governing the ambient turbulence in the
planetary boundary layer.10

A.  Surface Layer Parameterization

     Within the surface  layer, when the equi-
libration time  A/q   of the  turbulence is
small in comparison to flow times over changes
in surface features, a few  direct parameters
will suffice.  Estimates of the  surface shear
stress,  u_s ,  surface heat flux,  θ_s ,  and the
effective surface roughness,  z_0 ,  are adequate

to specify the wind, temperature, and turbu-
lence fields completely  through  the Monin-
Obukhov similarity functions.  These empiri-
cally correlated functions  may in fact be pre-
dicted from our modeled  turbulence equations
in the limit of stationary, unidirectional
flow with  u_s  and  θ_s  held fixed, while the
other variables are allowed to be functions of
the vertical coordinate  alone. "*   For this to
be true, it is necessary to have (U_1 Λ/q) ∂( )/∂x_1 « 1 .  This same restriction may be
applied to the species flux equation, Eq. (2).  If we also neglect the diffusion terms,
which should be small in the lower portions of the surface layer where  Λ_c = Λ = 0.65z
and  q_c = q ,  then Eq. (3) may be used to eliminate  cθ ,  and an algebraic expression
for  wc  obtained.  Thus,
     wc  =  − [ (4 ww Λ / 3q) / (1 + …) ] ∂C/∂z                              (4)

The bracketed quantity in Eq. (4) defines an effective eddy viscosity  K  which is a
function only of  u_s ,  z  and  L .  This surface-layer  K  is plotted in Fig. 1,
normalized by its neutral value.  It is a strong function of  z/L .

Fig. 1.  Effective turbulent diffusion coefficient  K  as a function of height and
  Monin-Obukhov length  L  in the surface layer as predicted by the superequilibrium
  limit of our turbulent model.

                                                      The downstream vertical dispersion of a
                                                 plume from its source depends on the wind
                                                 distribution as well as  K .  This observation
introduces a dependence on  z_0  in addition to  u_s  and  L .  Under neutral conditions,
the
                                                 surface layer extends to approximately 100m;
                                                 under very stable conditions its extent may be
                                                 reduced to only 20m.  Under unstable con-
                                                 ditions the height of the surface layer de-
                                                 pends upon the height of the inversion layer
                                                 at the top of the planetary boundary layer.
                                                 Surface layer approximations are valid in this
case for  z/z_i ≤ 0.1 .  In no case, however,
                                                 is the surface layer approximation valid above
                                                 an altitude of a few hundred meters.

                                                      The only difficulty with dispersion in
                                                 the surface layer is the prediction of the
                                                 horizontal dispersion.   The conditions
                                                 necessary for the horizontal wind variance to
                                                 obey Monin-Obukhov similarity are much more
                                                 strenuous than those for the vertical vari-
                                                 ance.  This fact is reflected in the large
                                                 scatter observed in the reported measured
                                                 values of the horizontal wind variance in the
                                                 surface layer.
                                                                   Theoretically,  the cause
appears to be the low frequency lateral ve-
locity fluctuations forced by inhomogeneities
in terrain or mesoscale meteorological phe-
nomena.  These low frequency fluctuations have
a much larger time constant associated with
their decay.  Thus, the flow conditions must
be steady and spatially homogeneous on a much
larger scale for the horizontal wind variance
to satisfy in detail the surface layer
approximation necessary for Monin-Obukhov
similarity to hold.
                                              715

-------
B.  Parameterization in the Ideal Planetary
    Boundary Layer

     Above the surface layer region the para-
meterization of dispersion becomes much more
complicated.  Not only are several additional
parameters introduced, but the time required
for the flow to reach an equilibrium state is
greatly increased.  The characteristic time
for the neutral planetary boundary layer is

the reciprocal of the Coriolis parameter  f
(approximately 3 hours at mid-latitudes).  In
practice neutral conditions rarely exist long
enough for the steady state, neutral layer to
be achieved.  Rather, the surface heat flux
forces a continuing evolution of the boundary
layer above it.  This evolution as a function
of time for a typical summer day in the Mid-
west , as computed by our model, has been
given previously.7'10  Similar calculations
using different versions of the closure model
have been made by Mellor and Yamada.6'11

     At sunrise an unstable surface layer be-
gins to grow, developing into a deep mixing
layer with high turbulence by afternoon.  The
height of the unstable mixed layer then con-
tinues to increase slowly until sunset.
Shortly after sunset a stable layer with a
temperature inversion develops at the surface.
This surface inversion layer increases in
depth during the nocturnal hours while the
upper level inversion slowly decreases in
altitude.  This development leaves a mixed
layer of decaying turbulence trapped between
the two inversion layers until the unstable
surface layer breaks through the low-level in-
version the next morning to re-energize this
region.  The turbulence distribution across
the entire boundary layer at any particular
time of day may only be crudely represented
by a single stability parameter.
     Because of the  slow  growth of the noc-
turnal, low-level  inversion  during the early
morning hours, it  is possible  to parameterize
approximately the  distribution below this
altitude as a function of a  single stability
parameter  (such as a Richardson number)  and a
Rossby number parameter to indicate the  rela-
tive importance of rotation.   We may also
approximate the strongly  unstable distri-
butions in terms of the characteristic ve-
locity  Wjf  appropriate for  free convection.
Our model prediction for  the vertical variance
in this limiting similarity  form compares
quite well with laboratory simulations.9  Away
from this limiting case,  two parameters,  one
measuring the stability and  one measuring the
height of the inversion layer,  are required  to
specify the distributions even under quasi-
steady conditions.  These two  parameters  may
be taken as a Richardson  number  Ri  and  as
the ratio of the inversion height  to the
Monin-Obukhov length  z_i/L .  The relative in-

fluence of Rossby number  and Richardson number
on the profiles of the quasi-steady wind  and
vertical velocity variance are  shown in
Fig. 2.  Several observations are:  (1) for
equal  Ro  the neutral or stable  boundary
layer becomes thicker as  the distance  from the
equator and/or the geostrophic  wind increases;
(2) the dimensionless height of the  boundary
layer is reduced as  Ro   increases;  (3) the
crosswind, perpendicular to the  geostrophic
wind, increases as  Ro  increases;  (4) in-
creasing  Ri  also increases the  crosswind
component and decreases the boundary  layer
thickness; (5)  the only significant  shift in
the direction of the wind with  altitude for
the unstable profiles is in the  vicinity of
the upper level inversion.
Fig. 2.  Profiles for (a) mean wind in the direction of the geostrophic wind, (b) mean
  wind in the direction normal to the geostrophic wind, (c) vertical velocity variance,
  for various values of  Ro = U_g/(z_0 f)  and  Ri ,  the bulk Richardson number based on
  the velocity and temperature differences between the surface and 10m height.  The
  height may be read directly in km for  U_g = 10 m/sec  and  f = 10^-4 sec^-1 .
                                              716

-------
     These limiting parameterizable  cases
probably occur more often than the neutral,
steady-state profiles, but  still represent
the exception rather than the rule.  Even in
an ideal diurnal variation  case, the para-
meterization would represent somewhat less
than half of the altitude-time domain,  since
it does not account for the mixed layer be-
tween the two inversion layers between  sunset
and noon the next day.  When the diurnal sur-
face heat flux variation is significantly re-
duced in the presence of a  relatively strong
stable lapse rate, the altitude of the  top
inversion layer is drastically reduced  and
the domain over which this  parameterization
is valid correspondingly decreases.13

     At least two other physical mechanisms
reduce the domain over which the previously
presented parameterization  is valid: baro-
clinicity and radiation flux divergence.  Both
of these influences are frequently present in
the planetary boundary layer.  Not only do
they serve to increase the  number of para-
meters governing the flow,  but they  introduce
additional dynamics and reduce the time over
which the quasi-steady parameterization is
valid.  Results for some assumed time vari-
ations of these two influences as calculated
by our model have been presented in  ref. 13.
Even when influences of nonhomogeneous terrain
are eliminated, the characterization of the
wind and turbulence distributions in the
planetary boundary layer in terms of two or
three parameters is necessarily rough and at
times highly erroneous.  We believe much
better results for a valid prediction of
the distributions at any given time may be
found by tracking the time variation of the
forcing boundary conditions for at least
twelve hours prior to the desired observation
time.

C.  Additional Dynamics Introduced by
    Diffusion Equations

     When the wind and turbulence fields are
known, the additional parameters necessary to
determine the dispersal of a neutrally
buoyant, nonreactive species are those neces-
sary to characterize the source  S .  Whenever
the characteristic time scale of the turbulent
mixing is much less than any time scale  t_s
associated with  S ,  the left-hand side of
Eq. (2) may be neglected and a superequilibri-
um  (or K) theory should be approximately
valid.  In the surface layer this reduction
would lead to Eq.  (4)  for  K .   Although this
approximation will not be valid for pollutant
sources with sufficient spatial inhomogene-
ities, it should be useful in many cases and,
in fact, probably forms the basis of the
limited success of Gaussian plume models para-
meterized for different stability classes.

     It is easy to think of many cases where
Λ/q  is no longer much smaller than  t_s .  Two
common examples are when  Λ_c « Λ ,  or when
the plume scale divided by the crosswind ve-
locity is less than or of the same order as
Λ/q .  In such cases  K  theory may lead to
considerable error.  In general, reliable dif-
fusion models must be able to compute  u_i c
accurately whether or not the time rate of
change of  u_i c  (or the advection of  u_i c )  is
                                significant.  We believe Eq.  (2) has this
                                capability.  Results of sample calculations
                                for both line and point source releases have
been presented elsewhere.10,14  Space limits
                                us to one example here.

                                  4.   Sample Calculation for Free Convection

                                      Figure 3 presents the results of a
                                sample model calculation for dispersion in a
                                free  convection mixed layer.  Deardorff and
Willis15,16 simulated dispersion in the atmos-
                                pheric mixed layer by releasing a large number
                                of small, neutrally buoyant particles as an
                                instantaneous line source into the bottom of a
                                water convection tank.   They interpret  their
                                results in terms of a continuous point  source
                                release into a uniform wind.  Initial compari-
                                sons  of our predictions with their obser-
                                vations for both the species dispersal  and
turbulence field have been made.10,14  Here we
will take advantage of their most recent
published results16 to update this comparison.
Fig. 3.  Isopleths of the crosswind integrated concentration,  Cy ,  as a function of
  downstream distance and height when a continuous point source is released into an
  unstable mixed layer.  (a) Model predictions;  (b) Laboratory observations of
  Deardorff and Willis.16

                                      We begin our calculations with a
Gaussian plume distribution with  σ_y = σ_z = 0.006 z_i ,
since our model cannot actually
                                start with a point release.  A uniform wind is
                                applied and the calculation marches in  x ,
                                the direction of the wind, to follow the plume
                                development.  We plot the variation of the
                                normalized crosswind integral of the con-
                                centration
                                              717

-------
                  +∞
     Cy = (1/Q)  ∫   C U z_i dy                                              (5)
                  −∞

In the completely mixed layer, approached as  x → ∞ ,  Cy = 1  for all  z .  In the early
development of the plume, both the observations and the predictions show the local
maximum in concentration rising above the level of the initial source,  z_s = 0.067 z_i .
Willis and Deardorff show that this effect
would correspond to a negative  K  over much
of the spatial domain if one  attempted to
predict  this by  K  theory alone.  In our
model it is a direct result of the buoyant
forcing term in Eq. (2).
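
     For reference, the normalization of Eq. (5) can be applied to a gridded concentration
field as in the sketch below.  The field, grid spacing, wind speed, and source strength
are illustrative values only, not the model output shown in Fig. 3.

    # Sketch of the Eq. (5) normalization applied to a gridded concentration
    # field C(y, z) at one downstream station.  All inputs are illustrative.
    import numpy as np

    U, Q, z_i = 5.0, 1.0, 1000.0           # wind (m/s), source (g/s), inversion height (m)
    y = np.linspace(-2000.0, 2000.0, 201)  # crosswind coordinate, m
    z = np.linspace(0.0, z_i, 51)          # height, m

    # Placeholder plume: Gaussian in y, uniform in z, scaled so that the
    # fully mixed limit Cy -> 1 is recovered.
    sig_y = 500.0
    C = (Q / (U * z_i)) * np.exp(-0.5 * (y[None, :] / sig_y) ** 2) \
        / (sig_y * np.sqrt(2.0 * np.pi))
    C = np.repeat(C, len(z), axis=0)       # same profile at every height (fully mixed)

    C_y = U * z_i / Q * np.trapz(C, y, axis=1)   # Eq. (5) at each height
    print(C_y.min(), C_y.max())                  # ~1.0 everywhere for this placeholder
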

     The greatest discrepancy between the pre-
dictions and the observations occurs in the
upper portions of the plume during the early
development and at the surface near  X = 1 .
The discrepancy at the upper  edge of the
plume may be partially due to the low
Reynolds number of the experiment (≈ 1730
when based on  q  and  z_i ) while our model
run was made for much higher  Re (= 10^7) to
more nearly simulate atmospheric conditions,
but probably reflects some error in our
turbulent scale as the inversion layer is
approached.  The higher rise  of the plume as
well as a stronger horizontal dispersion of
the plume at altitude allows  the observed
surface concentration to be lower than that
predicted.  The general character of the dis-
persion is quite favorably predicted, es-
pecially considering the fact that no em-
pirical information from this particular
experiment has been used in determining the
model.
            5.  Concluding Remarks

     Our model is currently capable of making
calculations for individual,  neutrally
buoyant, nonreactive plumes.   A three-
dimensional source release may be followed if
the wind and turbulent fields are assumed
stationary over the characteristic time re-
quired for the development of the plume.  For
a two-dimensional source release this station-
arity requirement may be relaxed, but the wind
field must be independent of the third di-
mension.  Steps are now underway to incorpo-
rate the capability to compute a buoyant plume
with internally generated turbulence.  Some
refinements in the modeled terms and coef-
ficients should be expected as more compari-
sons with reliable measurements are made and
as fundamental theoretical work proceeds.
However, comparison of the results of the
current model with experimental observations
demonstrates that it is a valid tool for
studying the sensitivity of dispersion to
different time and space variations of the
boundary conditions on the planetary boundary
layer.

     The model shows that quasi-steady para-
meterization is valid within  the surface
layer, and in some limited regions of the
physically realizable time-altitude domain of
the planetary boundary layer; but, solution
of the species flux equation  appears to be the
way to deal more accurately with the dispersal
problem in general.
                  References

 """Donaldson, C. duP. , Sullivan,  R.D.,  and
  Rosenbaum, H.:  "A  Theoretical  Study  of the
  Generation of Atmospheric Clear-Air  Turbu-
  lence," AIAA J. 10_, 2, 1972, pp.  162-170.
 2
  Donaldson, C. duP.: "Construction of a
  Dynamic Model of the Production  of Atmos-
  pheric Turbulence  and the Dispersal  of
  Atmospheric Pollutants," Workshop on Micro-
  meteorology, AMS,  Boston, 1973,  PP.  313-392.
 ^Lewellen, W.S.  and Teske, M.:  "Prediction of
  the Monin-Obukhov  Similarity Functions  from
  an Invariant Model of Turbulence," J.  Atmos.
  Sci., 30, 7, 1973, PP. 1340-1345.
  Lumley, J.L. and Khajeh-Nouri, B.: "Compu-
  tational Modeling  of Turbulent Transport,"
  Advances in Geophysics, 18A, Academic  Press,
  New York, 1974, pp. 169-192.
 ^Wyngaard, J.C., Cote, O.R., and  Rao, K.S.:
  "Modeling the Atmospheric Layer,"  Advances
  in Geophysics,  ISA, Academic Press,  New
  York, 1974, pp. 193-212.
 6Mellor, G.L. and Yamada, T.: "A  Hierarchy of
  Turbulence Closure Models for  Planetary
  Boundary Layers," J..Atmos. Sci.,  31,  1974,
  pp. 1791-1806.
 7
 'Lewellen, W.S., Teske, M., and Donaldson, C.
  duP.: "Turbulence Model of Diurnal Vari-
  ations in the Planetary Boundary Layer,"
  Proc. 1974 Heat Transfer and Fluid Mechanics
  Inst., Stanford U. Press, 1974, pp.  301-319.
 8Lewellen, W.S., Teske, M., et  al.: "In-
  variant Modeling of Turbulent  Diffusion in
  the Planetary Boundary Layer," EPA Rept. No.
  EPA-650/4-74-035, 1974b.
 ^Lewellen, W.S., Teske, M., and Donaldson, C.
  duP.: "Examples of Variable Density  Flows
  Computed by a Second-Order Closure De-
  scription of Turbulence," AIAA Paper No.
  75-871, 1975.
10
11
12
13
14
15
16
                                   Lewellen, W.S. and Teske, M.: "Turbulence
                                   Modeling and its Application to Atmospheric
                                   Diffusion," EPA Rept. No. EPA-600/4-75-016,
                                   Dec. 1975.
                                   Yamada, T. and Mellor, G.L.: "A Simulation of
                                   the Wangara Atmospheric Boundary Layer Data,"
                                   J. of Atmos. Sci., 32_, 1975, PP. 2309-2329.
                                   Panofsky, H.A.: "The Atmospheric Boundary
                                   Layer Below 150 Meters," Annual Review of
                                   Fluid Mechanics, Annual Reviews, Inc., Palo
                                   Alto, Calif., 1974, pp. 147-177.
                                   Lewellen, W.S. and Williamson, G.G.: "Wind
                                   Shear and Turbulence Around Airports," Parts
                                   1 & 2, A.R.A.P. Rept. No. 267, Jan. 1976.
                                   Lewellen, W.S. and Teske, M.E.: "Second-Order
                                   Closure Modeling of Diffusion in the Atmos-
                                   pheric Boundary Layer" (to be published in
                                   Boundary-Layer Meteorology).
                                   Deardorff, J.W. and Willis, G.E.: "Physical
                                   Modeling of Diffusion in the Mixed Layer,"
                                   Proc. Symp. on Atmospheric Diffusion and Air
                                   Pollution, Santa Barbara, Calif., Sept. 1974,
                                   AMS, Boston, pp. 387-391.
                                   Deardorff, J.W. and Willis, G.E.: "A Para-
                                   meterization of Diffusion in the Mixed
                                   Layer," J. of Appl. Meteorology, 14, Dec.
                                   1975, PP. 1451-1458.
                                              718

-------
                                   POINT SOURCE TRANSPORT MODEL WITH A SIMPLE

                                      DIFFUSION CALCULATION FOR ST.  LOUIS


                                                 Terry L. Clark*

                                                 Robert Eskridge*
                                        Meteorology and Assessment Division
                                   Environmental  Sciences and Research Laboratory
                                          Environmental Research Center
                                       Research Triangle Park, N.C.   27711
                     Abstract
     The transport of an inert gaseous contaminant in
St.  Louis is  modelled by a numerical  method.   The nu-
merical  model calculates, from a wind field,  a two-
dimensional  field of streamfunction values character-
izing the air flow.   The wind field is objectively
analyzed from 15-minute averaged RAPS (Regional Air
Pollution Study)  data using orthogonal functions.
The streamfunction values are calculated from an el-
liptic equation solved by successive over-relaxation.

     After assuming a non-divergent, two-dimensional
flow, streamlines are analyzed from the streamfunction
field.  Trajectories are then computed by displacing
puffs of a contaminant along a specified streamline.
A simple diffusion calculation is included in the
model to demonstrate one of its possible uses.

     Measurements obtained from an SF6 tracer study
provide data with which the results of the transport
model are compared.   Six of the nine sets of  measure-
ments obtained along 3 highways in St. Louis  during
August 12-13, 1975 are considered.

                    Introduction
     Accurate simulation of air pollution concentra-
tions has been of interest for many years, beginning
with the works of Roberts.    He developed the basic
plume formulas, which have been used for point source
releases and other applications.  Mathematical models
based upon these and other formulations have been de-
veloped to simulate air pollution concentrations.  In
the last several years, urban-scale grid point models
have been developed by Systems Applications Incorpo-
rated and IBM.2,3  In addition, an urban-scale trajec-
tory grid point model has been developed by Eschen-
roeder.4

     The grid point models  include variable winds,
but because of limited resolution, are unable to be
applied to single point-source cases.   Gaussian plume
models, on the other hand,  are capable of calculating
concentrations for single point sources, but do not
account for variable wind velocities.   A model with a
high spatial resolution that employs wind fields vary-
ing over short periods of time would be useful on an
urban-scale.  Some practical uses of this type of
model would include identifying areas  affected by in-
stantaneous releases from one or more point sources
and supplying the transport mechanism for deposition
studies.
     The transport model described in this paper was
developed to apply to instantaneous point-source
emissions in the form of puffs.  Wind and streamfunc-
tion fields, from which trajectories were calculated,
were generated from 15-minute averaged data.  The cal-
culation of trajectories ensured a spatial resolution
much better than the grid point models.  Moreover, the
selected averaging period reflected small-scale changes
of the wind velocity, which, theoretically, would en-
hance the quality of the trajectory calculations.

     In this paper, the development of the transport
model and results of six puff releases are described.
The results were compared to an Atmospheric Tracer
Study which involved continuous SF6 releases.  In or-
der to demonstrate  the usefulness of the transport
model, a simple diffusion calculation was added.  A
more complex diffusion calculation can be substituted
easily.
                        Data
A.  Meteorological
     The wind fields employed in this application of
the transport model were analyzed from wind data meas-
ured  from the Regional Air Pollution Study (RAPS) in
St. Louis, Missouri during August of 1975.  The Re-
gional Air Monitoring Stations (RAMS) network consists
of 25 sites, 21 of which are located within 26 km of
downtown St. Louis (Fig. 1).  Data from these 21 sites
were considered for the wind analyses.  The grid lo-
cations of two of these sites, namely 116 and 121, were
relocated slightly so that they were located on the
wind analysis grid.  One-minute averages of wind speed
and direction were obtained from continuous measure-
ments atop a 10-meter tower at sites 108,110,114-118,
and 121 and a 30-meter tower at the remaining sites.
The height differences of the levels of measurements
accounted for local obstructions to the air flow.
After the data were validated, 15-minute averages were
computed for each site for selected time periods.

B.  Tracer Measurements
* On assignment from the National Oceanic and Atmos-
  pheric Administration, U.S. Department of Commerce
     Measurements from the Atmospheric Tracer Study
taken by the California Institute of Technology in Au-
gust of 1975 were employed to compare the results of
the trajectory and concentration computations.   In
this study, continuous releases of SF6 tracer were
completed from one of three sites in St. Louis during
five periods in August of 1975.  The results of the
model corresponding to the tracer study case for Au-
gust 12-13 are discussed in this paper.

     The SF6 tracer was released, in this case, from a
point 20 feet above the ground at Webster College
(just to the southwest of St. Louis) from 8:40 P.M. un-
til 3:00 A.M. at a rate of 6.2 gm sec-1.
                                                       719.

-------
During this period, the sky was clear,  the surface
temperatures were in the lower 70's, and the winds
were strong from the south-southwest.

     Between 10:25 P.M. on August 12 and 2:26 A.M. on
August 13, 9 automobile traverses were  conducted
along segments of 3 highways in St.  Louis where the
SF6 plume was expected to pass.  Throughout each tra-
verse, a passenger in the automobiles  took a grab
sample in a 30 cm3 plastic syringe every 0.1, 0.2,
0.3, 0.4, or 0.5 miles along the route.   The interval
depended upon both the distance from the point source
and the steadiness of the wind.
Fig. 1.  The 40 x 40 km analysis  grid  and  location
of 21 of the 25 RAMS sites in  St.  Louis, Missouri.
Every tenth grid point on the  interior and every
tenth grid point on the boundary  are  indicated.
The "*" indicates the location of Webster  College.

                Hind Field Analysis

     Meridional, zonal, and resultant wind fields were objectively analyzed on a 40 x 40
grid (Fig. 1) by a technique based upon a generalized orthogonal function approach
developed by Jalickee and Rasmusson.5  The technique prescribes a relationship between a
set of M observations,

     φ_i ,   i = 1,2,...,M

with space-time coordinates,

     (x_i, y_i, z_i, t_i) ,   i = 1,2,...,M

and a set of N base functions,

     f_k ,   k = 1,2,...,N

multiplied by a set of coefficients,

     b_k ,   k = 1,2,...,N

according to Eq. 1:

     φ_i = Σ_{k=1}^{N} b_k f_k(x_i, y_i, z_i, t_i) + z_i                     (1)

The term z_i represents a random variable signifying noise in the observations.

     The particular set of 15 base functions employed in the model were

     1, x, xy, y, x², x²y, x²y², xy², y², x³, x³y, xy³, y³, x⁴, y⁴.

The optimal set of coefficients was determined by minimizing the quantity

     Σ_i [ φ_i − Σ_{k=1}^{15} b_k f_k ]²

The resulting equation
-------
were more agreeable with the  data when  determined  from
a smaller grid.  The values obtained  at each  grid
point were dependent upon the total wind field,  due  to
the iterative process employed.  Isolines  of  the re-
sulting streamfunction values  represented  streamlines
for the case of two-dimensional, non-divergent  flow.
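
     The coefficient fit described above amounts to a linear least-squares problem in the
15 polynomial base functions.  The sketch below assumes scattered station values of one
wind component as the observations and uses a plain least-squares solve, which is one way
to minimize the quantity given above; the station data are made up.

    # Sketch of the objective wind analysis: fit the 15 polynomial base functions
    # listed above to scattered station observations by least squares, then
    # evaluate the fitted surface on the analysis grid.  Station data are made up.
    import numpy as np

    def basis(x, y):
        # 1, x, xy, y, x^2, x^2 y, x^2 y^2, x y^2, y^2, x^3, x^3 y, x y^3, y^3, x^4, y^4
        return np.column_stack([np.ones_like(x), x, x*y, y, x**2, x**2*y, x**2*y**2,
                                x*y**2, y**2, x**3, x**3*y, x*y**3, y**3, x**4, y**4])

    rng = np.random.default_rng(1)
    xs, ys = rng.uniform(-1, 1, 21), rng.uniform(-1, 1, 21)  # 21 station locations (scaled)
    obs = 3.0 + 2.0*xs - 1.5*ys + rng.normal(0.0, 0.1, 21)   # one wind component + noise

    b, *_ = np.linalg.lstsq(basis(xs, ys), obs, rcond=None)  # minimizes squared residuals

    gx, gy = np.meshgrid(np.linspace(-1, 1, 40), np.linspace(-1, 1, 40))
    field = basis(gx.ravel(), gy.ravel()) @ b                # analyzed field, 40 x 40 grid
    print(field.reshape(40, 40).shape)
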

     Assuming this type of flow, the  equation of con-
tinuity can be written as

                    ∂u/∂x + ∂v/∂y = 0                                        (2)

The streamfunction, ψ,
-------
     After the grid point values of the streamfunction
were calculated, the value of the streamfunction at '   i
the specified location of the parcel  or puff was de-
termined.  This was accomplished by the use of a 16-
point interpolation scheme developed by Gandin and
Boltenkov.9  This value identified the isoline or
streamline the puff would follow during the ensuing
15-minute period.

     Next, the equation of the particular streamline
was determined.  First, the points where the stream-
line intersected the borders of the 16 grid squares
nearest the puff were identified.  The identification
was made possible by the use of an interpolation
scheme employing finite differences (Eq. 9).

P(x) = f[xQJ + f[x-|,x0](x-x0) + f[x2,x1,x0](x-x0)(x-x1)
                                                    (9)
where x  ,x,, and x? are x- coordinates on the grid.
The y- Boordinates of the points of intersection ar
    y- Coordlnates~ot the points
calculated in a similar fashion.
                                                 are
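Eq. 9 is Newton's divided-difference form of the quadratic through
three points; a small sketch (with illustrative sample points) is:

```python
# Divided-difference quadratic P(x) through (x0, f0), (x1, f1), (x2, f2), per Eq. 9.
def newton_quadratic(x0, f0, x1, f1, x2, f2):
    d10 = (f1 - f0) / (x1 - x0)            # f[x1, x0]
    d21 = (f2 - f1) / (x2 - x1)            # f[x2, x1]
    d210 = (d21 - d10) / (x2 - x0)         # f[x2, x1, x0]
    return lambda x: f0 + d10 * (x - x0) + d210 * (x - x0) * (x - x1)

P = newton_quadratic(0.0, 1.0, 1.0, 2.5, 2.0, 3.0)   # illustrative sample points
print(P(1.5))                                        # interpolated value
```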
     The x- coordinate of the initial location of the
puff (x0) and the x- coordinates of the two nearest
points of intersection downwind from the puff
(x1, x2) were fitted by a quadratic polynomial, g(x),
using finite differences (Eq. 9).  The puff was then
displaced along the curve described by g(x) at a rate
equal to an interpolated value of the wind speed for a
period of 100 seconds.  The final displacement posi-
tion relative to the starting point was determined by
using the arc length.  Eq. 10 is the arc length for-
mula used to calculate the puff displacement in the x
direction.  The formula to calculate the puff displace-
ment in the y direction is similar.

     D = ∫(x0 to xf) [1 + (dg/dx)²]^1/2 dx             (10)

where

     g(x) = ax² + bx + c.

     The values of x0 and D were known and xf was de-
termined by Newton's method.  The value of xf, which
represented the displacement along the x- axis, was
added to or subtracted from the position along the
axis at the end of the 100-second period.
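A hedged sketch of this displacement step is given below.  It
assumes the puff travels a distance equal to wind speed times
100 seconds along the fitted curve g(x), and solves the arc-length
relation of Eq. 10 for xf by Newton's method; the coefficients and
wind speed shown are illustrative only.

```python
# Displace a puff along g(x) = a x^2 + b x + c by arc length D = speed * dt,
# solving the arc-length equation for xf with Newton's method (Eq. 10).
import math

def displace(a, b, c, x0, speed, dt=100.0, direction=+1.0):
    D = speed * dt                                    # arc length to travel (m)
    gprime = lambda x: 2.0 * a * x + b
    ds = lambda x: math.sqrt(1.0 + gprime(x) ** 2)    # integrand of Eq. 10

    def arclen(xf, n=200):                            # simple trapezoidal quadrature
        h = (xf - x0) / n
        s = 0.5 * (ds(x0) + ds(xf)) + sum(ds(x0 + k * h) for k in range(1, n))
        return s * h

    xf = x0 + direction * D / ds(x0)                  # first guess: straight-line step
    for _ in range(20):                               # Newton iterations on arclen(xf) = +/- D
        residual = arclen(xf) - direction * D
        xf -= residual / ds(xf)                       # d(arclen)/dxf = ds(xf)
        if abs(residual) < 1e-6:
            break
    return xf, a * xf ** 2 + b * xf + c               # final (x, y) position on the curve

print(displace(a=1e-4, b=0.2, c=0.0, x0=500.0, speed=3.0))
```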

     This process was repeated 8 more times using the
same wind field.  At the termination of 900 seconds
(15 minutes), a new wind field was calculated based
upon up-dated, 15-minute averaged winds.  The puff
was transported for an additional 15 minutes starting
from the termination point of the previous period.
This process ended when the position of the puff was
beyond the area where the plume from the tracer study
was sampled.

              Concentration Calculation
     The  concentration of each puff was calculated at
 the end of each 100-second period.  This was performed
 using  a diffusion calculation based upon Eq. 11, which
 was derived by Roberts.1
     c(x,y,z) = Q (4πKt)^-3/2 exp[-r²/(4Kt)]           (11)
where Q is the initial generation of contaminant (gm);
K is the diffusion coefficient (m2 sec-1); t represents
the time after the release of the puff from the source
(sec); and r is the distance from the center of the
puff (m).   From the tracer study, the emission rate of
SF6 was known.  The value assigned to Q in Eq. 11 was
the mass of SF6 emitted during a one-minute period.
     It was assumed that the diffusion coefficients
along the x-,y-, and z- axes were equal to a constant
coefficient, K.  However, the method of determining
the values of the coefficient was beyond the scope of
this research.  The treatment of the K- theory in ex-
isting models was examined instead.  The pollution
model constructed by IBM assumed a value of 500 m2
sec-1.7  The planetary boundary layer model developed
by Gerrity used values of K between 1 and 100 m2
sec-1.10  In the transport model, a value of 10 m2
sec-1 was chosen for generally stable conditions and
100 m2 sec-1 for generally unstable conditions.
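Eq. 11 itself is straightforward to evaluate; the sketch below uses
an illustrative emission figure rather than the actual SF6 release
rate from the tracer study.

```python
# Concentration of an instantaneous puff per Eq. 11 (Roberts' solution with a
# single constant diffusivity K).
import math

def puff_concentration(Q, K, t, r):
    """c = Q (4 pi K t)^(-3/2) exp(-r^2 / (4 K t)); Q in g, K in m^2/s, t in s, r in m."""
    return Q * (4.0 * math.pi * K * t) ** -1.5 * math.exp(-r * r / (4.0 * K * t))

Q = 50.0       # assumed mass released during one minute (g) -- illustrative only
K = 100.0      # m^2/s, the "generally unstable" value quoted in the text
t = 1800.0     # 30 minutes after release (s)
r = 200.0      # distance from the puff centre (m)
print(puff_concentration(Q, K, t, r), "g/m^3")
```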

               Results and Conclusions

     The results of the transport model for 6 of the 9
instrumented automobile traverses of the St. Louis
tracer study during August 12 and 13, 1975 are pre-
sented in Figs. 3A-F.  The puff positions are shown
on the 40 x 40 km grid at the termination of every 15-
minute period.  In every case, the source point was
located at Webster College (denoted by "WC") near the
southwest corner of the grid.  The shaded numbers in-
dicate RAMS sites where data was omitted intentionally
(#103 and #104 on the east side of St. Louis), or ei-
ther missing or highly questionable.  It is important
to note that data were missing from several key sites
throughout the periods.

     The small bars along portions of highways #40,
#70, and #270 represent segments of the roads where
SF6 tracer was measured via the traversing automo-
biles.  The maximum measured concentrations of the SF6
plumes are listed adjacent to each figure.  This value
might not equal the actual maximum concentration,
since grab samples were taken at prescribed intervals
along the routes.

     Fig. 3A indicates that the model transported the
puff released at 12:30 A.M. across highway #40 at a
point 1.5 km east of the area where the plume was de-
tected.  At this point, the puff was 5 km downwind
from Webster College.  The calculated concentration at
the surface directly beneath the center of the puff
was 841 ppt (parts per trillion).  This is 30% of
the maximum concentration measured at this time.

     Fig. 3B shows that the puff released at midnight
traversed highway #70 at approximately 1:00 A.M.  The
point of intersection agrees with the measurements.
At this time, the puff was 14 km from the source.  The
calculated concentration was 220 ppt  or 91% of the
maximum concentration measured there.

     Fig. 3C indicates that the puff released at 11:45
P.M. on August 12 traversed highway #270 at approxi-
mately 1:15 A.M. on August 13.  The point of inter-
section also agrees with the tracer study data.  The
puff at this time was approximately 20 km from the
source point.  The concentration calculated as the
puff crossed the highway was 122 ppt, which is 67% of
the maximum concentration measured.

     Fig. 3D indicates that the puff released at 1:45
A.M. crossed highway #40 at a point less than 1 km
from the section of the highway where the tracer plume
was detected.  At this point, the puff was 5 km from
the source point.  The calculated concentration at
this time was 841 ppt or 43% of the maximum concentra-
tion measured.

     Fig. 3E shows that the puff released at 1:00 A.M.
crossed highway #70 at a point where the tracer plume
was detected.  The puff at this time was approximately
14 km from the source.  The concentration calculated
was 286 ppt or 47% of the maximum concentration mea-
sured.

-------
-------
     The puff released at 12:30 A.M.  was transported
across highway #270 near the intersection of highway
#67, as shown in Fig.  3F.   This was in agreement with
the tracer study data.  The puff was  more than 20 km
from the source point  at this time.  The calculated
concentration was 125  ppt or 55% of the maximum con-
centration measured.

     Four of the six puff trajectories presented here
indicate that the model  was spatially accurate in
transporting the puffs across three highways in St.
Louis.  However, the temporal accuracy cannot be de-
termined from the data used here, since the SF6 tracer
was emitted continuously.   It should  be noted,  how-
ever, that both the tracer study and  the transport
model indicated that at approximately 2:00 A.M., the
contaminant traversed  a segment of highway #270 4-5
km to the east of the  segment where the contaminant
traversed the highway  at approximately 1:15 A.M. (see
Figs. 3C and 3F).

     The remaining trajectories were  approximately 1
km from the section of the highways where the tracer
plume was detected by  the instrumented automobile
traverses.  In these two cases, the puffs were trans-
ported across the grid in an area where data were
missing from key RAMS  sites #111 and  #119.  No data
were available in the  southwest corner of the grid,
so the assumption was made that the 15-minute aver-
aged wind velocity at RAMS site #106 was representa-
tive of the averaged wind velocity at site #119.
Therefore, the objective analysis of the wind was
biased towards the wind velocity at site #106.  This
could have been the cause of the slight eastward dis-
placement errors illustrated in Figs. 3A and 3D.

     The concentrations calculated by the simple dif-
fusion equation at the surface directly beneath the
puffs were either comparable with or  2-3 times small-
er than the maximum concentrations observed from the
tracer study.  The value of the diffusion coefficient
in the diffusion equation was not calculated in this
model.  It was assigned a value thought to represent
the atmospheric conditions at the time of the tracer
study.  Moreover, the  amount of contaminant in the
puff at its time of generation was arbitrarily de-
fined by the amount of contaminant released during a
one-minute time period.   This diffusion calculation
was a simple one and was used only to demonstrate an
application of the transport model.
                      References

1.   Roberts, O.F.T.; Proc. Roy. Soc. London, Series
     A, 104, 640-654, 1923.

2.   Reynolds, S.D., P.M. Roth and J.H. Seinfeld;
     Atmos. Env., 7, 1033-1061, 1973.

3.   Shir, C.C. and L.J. Shieh; J. App. Met., 13, 185-
     204, 1974.

4.   Eschenroeder, A.Q. and J.R. Martinez; G.R.C.,
     Santa Barbara, Calif., IMR1210, 1969.

5.   Shair, F.H., B.K. Lamb, and J.D. Bruchie; EPA
     Grant No. R 802160-03-2, 206 pp., 1975.

6.   Jalickee, J.B. and E.M. Rasmusson; Proceedings
     of the Third Conference on Probability and
     Statistics in Atmospheric Science, June 19-22,
     1973.

7.   Heffter, J.L., G.J. Ferber, and A.D. Taylor;
     NOAA Tech Mem ERL ARL-50, 28 pp., 1975.

8.   Frankel, S.P.; Math. Tables Aids Comput., 4,
     65-75, 1950.

9.   Gandin, L.S.; Israel Program for Scientific
     Translations, IPST Cat. No. 1373, Jerusalem,
     1965.

10.  Gerrity, Jr., J.P.; Mon. Wea. Rev., 95, 261-282,
     1967.

-------
                                          THE CHANGE IN OZONE LEVELS
                                        CAUSED BY PRECURSOR POLLUTANTS:

                                            AN EMPIRICAL ANALYSIS*
                     Leo Breiman
           Technology Service Corporation
              Santa Monica, CA. 90403
                  William S. Meisel
           Technology Service Corporation
              Santa Monica, CA.  90403
                      Abstract
An empirical analysis of ambient air data is used to
relate the one-and two-hour change in oxidant levels
in the urban  environment to the preceding level of
precursor pollutants and to meteorological variables.
The intent was to demonstrate the feasibility of de-
veloping a set of empirical difference equations for
the production of oxidant over time.  The main vari-
ables determining one-and two-hour oxidant changes
were extracted using nonparametric regression tech-
niques.  A model  for two-hour oxidant changes was
developed using nonlinear regression techniques.
The implications  of the model are discussed.

                    Introduction

Typical objectives of a modeling effort are (1)
qualitative understanding and (2) quantitative impacts.
In air quality modeling, these objectives are aimed at
the ultimate  objectives of determining the effects of
alternative control policies and understanding which
policies will be  most productive.  Ozone air quality
modeling efforts  have been largely concentrated at
extremes of the spectrum of approaches to modeling:
(1) simple statistical models with limited application,
or (2) complex models based on the underlying physics
and chemistry of  the process.  The former class of
models provides easy-to-use, but rough, guidelines;
the latter class of models is capable of detailed tem-
poral and spatial impact analysis, but is costly and
difficult to use.

This paper illustrates the feasibility of an inter-
mediate class of  model which is relatively inexpensive
and easy-to-use,  but which is capable of providing
reasonably detailed temporal and spatial estimates of
oxidant concentration.  Further, the form of the model
makes it possible to understand (with careful inspec-
tion) the qualitative implications of the model as a
guide to the  design of control strategies.

We hasten to  emphasize, however, that a full model in
this class is not a result of this paper; rather, we
present an analysis which we believe indicates the
feasibility of the development of such a model.  In
particular, we develop an empirical  difference equa-
tion for the  production of oxidant from chemical
precursors, as affected by meteorological variables.
A full model  would involve difference equations for
the precursor pollutants as well.  Further, data
easily available  did not include all meteorological
variables of  possible interest or emission data.
(Since ozone is a secondary pollutant, emissions of
primary pollutants over a brief interval, e.g., one
hour, will not affect the change in ozone levels over
that interval to the degree they affect the change in
primary pollutant levels.  Since we did not derive
difference equations for the primary pollutants in
*This work was supported in part by Contract No.
68-02-1704 with the Environmental Protection Agency.
this study,  not  including emissions did  not  prove
serious.)  The context  in which  the reader should
then interpret the  results  is as  the degree  to which
the change in ozone can be  explained despite  these
limitations.  Whatever  degree of  explanation  of  the
variance in  one- or two-hour changes in  ozone we can
achieve within these limitations  can be  improved when
more of the  omitted factors are taken  into account.
This analysis will  thus provide a pessimistic estimate
of the degree of success that can be expected in a
full-scale implementation of the  approach.  This paper
summarizes work reported more fully elsewhere [1].

                     Form of Model
We consider a "parcel" of air, and define O3(t) as the
oxidant concentration averaged over the hour preceding
time t.  We further define ΔO3(t) as the change in the
hourly average oxidant concentration (in pphm) in the
time interval Δt following t; explicitly,

     ΔO3(t) = O3(t + Δt) - O3(t)                        (1)

(We will consider Δt = 1 hour and Δt = 2 hours.)
We seek an equation predicting the change in hourly
average concentration after time t from measurements
of pollutants and meteorology available at time t.
Pollutant measurements other than ozone we will con-
sider as possible precursors include the following,
all of them in terms of concentration averaged over
the hour preceding time t:
     NO(t)     NO concentration (pphm)

     NO2(t)    NO2 concentration (pphm)

     HC(t)     Non-methane hydrocarbon concentration
               (ppm)

     CH4(t)    Methane (ppm).
Meteorological variables considered explicitly include
the following, again averaged over the hour preceding
time t:
     SR(t)     solar radiation (gm-cal/cm2/hr)

     T(t)      temperature (°F).
Mixing height was not used in the present study.

We thus seek a relationship of the form

     ΔO3(t) = F[O3(t), NO(t), NO2(t), HC(t),
               CH4(t), SR(t), T(t)]                     (2)
which accurately reflects observed data.  Referring to
(1), equation (2) can be alternatively written as

-------
     O3(t + Δt) = O3(t) + F[O3(t), NO(t), NO2(t),
                            HC(t), CH4(t), SR(t), T(t)]   (3)
This form indicates explicitly how such a relationship,
if derived, can be used to compute a sequence of hourly
or bi-hourly oxidant concentrations.  (Similar equa-
tions would be derived for the primary pollutants to
provide a complete model.)
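As a hedged sketch of that use, the fragment below steps a stand-in
difference function F forward along a parcel; F here is an arbitrary
placeholder, not the fitted model derived later in this paper.

```python
# Stepping a difference equation of the form (3) forward over successive hours.
def F(o3, no, no2, hc, ch4, sr, t):
    # placeholder for a fitted change function (pphm); coefficients are invented
    return 0.05 * sr + 0.5 * no2 - 0.1 * o3 - 1.0

def forecast(o3_start, precursor_series):
    """precursor_series: list of dicts with NO, NO2, HC, CH4, SR, T per time step."""
    o3, track = o3_start, [o3_start]
    for p in precursor_series:
        o3 = o3 + F(o3, p["NO"], p["NO2"], p["HC"], p["CH4"], p["SR"], p["T"])
        track.append(o3)
    return track

steps = [{"NO": 5, "NO2": 10, "HC": 0.5, "CH4": 2.0, "SR": 80, "T": 75},
         {"NO": 3, "NO2": 12, "HC": 0.4, "CH4": 2.0, "SR": 100, "T": 80}]
print(forecast(4.0, steps))
```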

We must incorporate transport effects.  We have adopted
a rather simple model.  The model  estimates the tra-
jectory of a "parcel" of air from ground-level measure-
ments of the wind field.  A parcel arriving at a given
location at a given time (e.g., Pasadena at 1600 hours)
is estimated, from the current wind direction, to have
been at another location upwind one hour earlier.  The
distance traveled from that direction is given by the
current wind speed.  The trajectory is tracked back-
wards to give a sequence of hourly locations.  The
"actual" values of pollutant levels at these points at
the given times are obtained by an interpolation pro-
cedure from measured data at fixed monitoring stations.
The motivation for tracking parcels backwards rather
than forwards is to allow choice of parcels which end
up at monitoring stations; in part so that the last
(and often highest) pollutant concentration need not
be interpolated.  The air parcel trajectory approach
is obviously a simplification of the true physics of
the system; this approach is similar to assumptions
employed in some physically based air quality models
[2].  In the present empirical modeling context, the
trajectory approach is a statistical approximation
rather than an assumption; that is, the inaccuracy of
the approximation will be reflected in the overall
error of the final empirical model.
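A minimal sketch of the backward trajectory construction, assuming
a callable ground-level wind field and one-hour steps (both the
wind function and the end point below are hypothetical), is:

```python
# Track a parcel backwards in time from a monitoring station using the current
# wind at each hourly position.
def back_trajectory(x_end, y_end, t_end, hours, wind):
    """wind(x, y, t) -> (u, v) in km/h; returns hourly (t, x, y) back in time."""
    path = [(t_end, x_end, y_end)]
    x, y, t = x_end, y_end, t_end
    for _ in range(hours):
        u, v = wind(x, y, t)            # wind at the parcel's current position/time
        x, y, t = x - u, y - v, t - 1   # step one hour upwind
        path.append((t, x, y))
    return list(reversed(path))         # earliest location first

# Hypothetical uniform drift of (6, 4) km/h for illustration.
print(back_trajectory(x_end=50.0, y_end=30.0, t_end=16, hours=5,
                      wind=lambda x, y, t: (6.0, 4.0)))
```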

                      The Data
Data collected by the Los Angeles Air Pollution Control
District was employed.  Air quality data from the
seven monitoring stations indicated in Figure 1 was
utilized.

We interpolated the wind field in a region of the Los
Angeles basin so that we were able to track parcels
of air as they moved through the basin.   The pollutant
readings at seven APCD stations were also interpolated
so that we could keep hourly records of the pollutants
discussed.  We also had the hourly solar radiation
readings at the Los Angeles Civic Center location of
the APCD, and hourly temperature readings at three
representative locations in the basin.

Our study was carried out using data from the five
summer months, June through October 1973.  About 7000
trajectories were formed and placed in the primary
data base.

From the bank of 7000 trajectories, we extracted a
sample of about 1900 data vectors of the form


         (A03, 03,  NO, N02, HC, CH4, SR, T) ,


where A03 was a one-hour change and about 1800 vectors
where A03 was a two-hour change.

                    The Analysis

Since time is only implicit in (2), we search for a
fixed relationship

     ΔO3 = F(O3, NO, NO2, HC, CH4, SR, T).              (4)


 Equally important,  we want to determine which  of the
 variables were  most significantly related to the
 change  in ozone.  Therefore,  we  really had two objec-
 tives in this study:

      (1)  To  find those subgroups of the variables
          most  significantly  related to the ozone
          change.

      (2)  To  find the form of the function F providing
          the best  fit  to  the data.

 Variable Selection

 For the variable selection and exploratory phase,  we
 used  INVAR, a general nonparametric  method for esti-
 mating  efficiently  how  much of the variability in  the
 dependent variable  can  be  explained  by a subgroup  of
 the independent variables  [3].   This technique esti-
 mates the limiting  value of percent  of variance ex-
 plained (PVE) by a  "smooth" nonlinear model.*   We  first
 tested  all independent  variables  as  individual  pre-
 dictors, then pairs of  variables,  and then added vari-
 ables to find the best  three,  etc.   Some results for
 single  variables are  tabulated in  Table 1.   The most
significant individual variables (in approximate order
of importance) are SR, NO2, T, and O3.

 Exploring pairs of  variables,  we  found the results
 shown in Table  2.   Other pairs were  run that resulted
 in lower percent of variance  explained than those  in
 the table.

 Triplets of variables were then explored with  one
 really  significant  improvement showing.   Some  results
 are shown in Table  3.

 The final significant increase occurred  when we added
temperature to O3, NO2, SR.  But, somewhat strangely,
the increase was significant only for the data base of
one-hour ΔO3.  Here we obtained

     Variables            One-Hour PVE
     O3, NO2, SR, T           65.9

In all of the INVAR runs using HC and CH4, neither of
them significantly increased the PVE.  For instance,
when HC and CH4 were individually added to the vari-
ables NO2, NO, O3, and SR, the maximum increase in PVE
was 2.1%.

These results are encouraging; the three variables
03, N02, SR predict about 71% of the variance in two-
hour ozone changes, that is, with a correlation be-
tween predicted and actual values of 0.84 over 1800
samples.

Specific Functional Relationship

The exploratory analysis provided nonparametric esti-
mates of the degree of predictability of two-hour
A03 as a function of 03, N02, SR.  In this section,
*Percent of variance explained equals

     100 · [1 - (variance of error in prediction) /
                (variance of dependent variable)] .
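For concreteness, the footnote's definition can be computed
directly; the observed and predicted arrays below are illustrative.

```python
# Percent of variance explained (PVE) as defined in the footnote above.
import numpy as np

def pve(observed, predicted):
    return 100.0 * (1.0 - np.var(observed - predicted) / np.var(observed))

obs = np.array([3.0, 7.5, 1.2, 9.8, 5.5])    # illustrative observed changes (pphm)
pred = np.array([2.5, 7.0, 2.0, 9.0, 6.0])   # illustrative predicted changes (pphm)
print(round(pve(obs, pred), 1))
```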

-------
[Figure 1.  The study region: a map of the Los Angeles basin
 showing the monitoring stations, including Burbank, Pasadena,
 Azusa, and Pomona, with a scale in miles.]

          Table 1.  Percent of Variance Explained
                       (Single Variables)

     Variable      One-hour ΔO3     Two-hour ΔO3
     O3                15.8             24.1
     NO                13.4             12.9
     NO2               25.7             41.7
     HC                19.2             15.6
     CH4               16.6             19.3
     SR                30.3             36.7
     T                 17.1             35.8

          Table 2.  Percent of Variance Explained
                       (Pairs of Variables)

     Variables     One-hour ΔO3     Two-hour ΔO3
     NO2, O3           40.8             55.0
     NO2, SR           42.3             49.6
     NO2, T            38.7             52.9
     NO2, NO           34.8             50.1
     O3, SR            53.0             58.8
     SR, T             43.6             52.1
     O3, T             33.2             46.7

          Table 3.  Percent of Variance Explained
                       (Triplets of Variables)

     Variables        One-hour ΔO3     Two-hour ΔO3
     O3, NO2, SR          60.2             71.1
     NO2, NO, SR          46.7             53.8
     O3, NO2, T           46.7             64.9

we discuss the derivation of a specific simple func-
tional form to make explicit this relationship.

To get a continuous functional form for the relation-
ship of ΔO3 to O3, NO2, and SR, we used continuous
piecewise linear regression [4,5].  Since the function
generated by this method is smoother and less general
than that used in INVAR estimates, we did not achieve
the level of PVE obtained by INVAR.  The continuous
piecewise linear function which minimizes the mean-
square error in the fit to the 1800 sample points is
given by*

     ΔO3 = 5.125 · max {A,B,C} - 1.167 · max {D,E,F} + 10.48    (5)

where:

     A =  0.2146  · O3 + .0701    · NO2 + .002268 · SR + .9376
     B = -.02114  · O3 + .1013    · NO2 + .01075  · SR - 2.275
     C = -.1638   · O3 + .09855   · NO2 + .005938 · SR + .2263
     D =  .02709  · O3 - .3015    · NO2 + .001298 · SR + 2.304
     E = -.009565 · O3 + .0005252 · NO2 - .001079 · SR + .2306
     F = -.0144   · O3 + .2066    · NO2 - .003171 · SR - 2.943

*The notation max {A,B,C} means the largest of the
three values computed by equations A, B, and C.
(The unusual  form of the equation has no physical
interpretation,  but is simply a consequence of the
particular methodology employed.)  This equation ex-
plained 60.7% of the variance, a correlation between
predicted and actual values of 0.78.

This equation can be used to calculate a sequence  of
oxidant concentrations in a parcel  of air by using
known values  of  the other pollutant concentrations
(since difference equations for these pollutants have
not been derived).   Figures 3, 4, and 5 illustrate the
result for three air parcel trajectories.

         INTERPRETATION OF MODEL IMPLICATIONS

Let us attempt to interpret the functional form in (5).
The final fitted surface is fairly simple, consisting of
a continuous patching together of eight hyperplane
segments.  A three-dimensional slice of this surface
is graphed in Figure 2.  Of the eight regions, there
are three small regions that together contain only
1.0% of the total number of points.  We will ignore
these and restrict our analysis to the information
contained in the functional fit to ΔO3 in the five
other regions.
As a quick preliminary summary, in Table 4 we give the
means of all variables corresponding to the points in
each region.

          Table 4.  Means of Variables by Region

               Percent of
     Region      Points      ΔO3      O3     NO2     SR
     Overall      100        7.1      6.1    9.0    100
     Region 1      46        3.7      3.6    4.9     73
     Region 2      33       11.0      4.6   11.9    118
     Region 3       8        1.2     20.4    7.3    139
     Region 4       7       14.7      5.4   20.3    119
     Region 5       5        8.7     16.7   12.8    149

In Table 5 the mean values are characterized by region.

          Table 5.  Mean Value Characteristics

     Region    ΔO3          O3           NO2          SR
     1         very low     very low     very low     very low
     2         high         low          above avg.   above avg.
     3         very low     high         below avg.   high
     4         high         below avg.   high         above avg.
     5         above avg.   high         above avg.   high

This layout of mean values is itself interesting.
Region 1, containing almost half of the sample points,
is representative of low pollution levels, low O3
production, and low solar radiation.  Region 2, with
33% of the points, contains data with above average
mean NO2 and solar radiation levels, below average
O3 levels, and high positive changes in O3.  The other
three regions, with a total of 20% of the sample
points, represent more extreme conditions.

The linear equations in each region are given in
Table 6; these are derived from equation (5).

     Table 6.  Linear Equations for ΔO3 by Subregion

     Region     Equation for ΔO3
     1          -.14 (O3) + .87 (NO2) + .054 (SR) - 3.9
     2          -.097(O3) + .52 (NO2) + .056 (SR) - 1.4
     3          -.87 (O3) + .86 (NO2) + .029 (SR) + 9.0
     4          -.092(O3) + .28 (NO2) + .059 (SR) + 2.3
     5          -.82 (O3) + .26 (NO2) + .034 (SR) + 15.1

Before discussing these results, since the size of the
above coefficients depends on the scaling of the vari-
ables, we introduce normalized variables by dividing
the original variables by their overall standard de-
viations, i.e., denoting normalized variables by primes:

     O3' = O3/6.2,   NO2' = NO2/5.2,   SR' = SR/52.8.   (6)

The equations are given in terms of the normalized
variables in Table 7.

          Table 7.  Normalized Equations for ΔO3
                      (ΔO3 not normalized)

     Region     Equation for ΔO3
     1          -0.9 (O3') + 4.5 (NO2') + 2.8 (SR') - 3.9
     2          -0.6 (O3') + 2.6 (NO2') + 2.9 (SR') - 1.4
     3          -5.4 (O3') + 4.4 (NO2') + 1.5 (SR') + 9.0
     4          -0.57(O3') + 1.4 (NO2') + 3.1 (SR') + 2.3
     5          -5.1 (O3') + 1.4 (NO2') + 1.8 (SR') + 15.1
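As a quick, hedged cross-check of the reconstructed tables,
evaluating the Region 2 equation of Table 6 at the Region 2 mean
values of Table 4 roughly reproduces the mean two-hour change
listed for that region:

```python
# Regional linear equations of Table 6 evaluated at given (O3, NO2, SR).
TABLE6 = {                       # region: (O3 coef, NO2 coef, SR coef, constant)
    1: (-0.14, 0.87, 0.054, -3.9),
    2: (-0.097, 0.52, 0.056, -1.4),
    3: (-0.87, 0.86, 0.029, 9.0),
    4: (-0.092, 0.28, 0.059, 2.3),
    5: (-0.82, 0.26, 0.034, 15.1),
}

def delta_o3(region, o3, no2, sr):
    a, b, c, d = TABLE6[region]
    return a * o3 + b * no2 + c * sr + d

# Region 2 means from Table 4; prints 10.9, close to the 11.0 pphm mean change.
print(round(delta_o3(2, o3=4.6, no2=11.9, sr=118.0), 1))
```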
The major qualitative conclusions that can be inferred
from these tables (see [1] for fuller discussion) are
the following:

     (1)  At below average O3 levels, the O3 change
          is determined largely by the SR and NO2
          levels, with larger values of these latter
          two related to larger values of the O3 change.
          The largest positive changes in O3 occur in
          this regime.

     (2)  At above average O3 levels, the O3 has a
          strong negative association with O3 change,
          and moderate to high levels of NO2 and SR are
          associated with low to only moderately above-
          average changes in O3.


                     Conclusion

We were able to derive surprisingly accurate equations
predicting the short-term change in oxidant concentra-
tion (considering the limitations of the data and the
difficulty of the problem).  These results are encour-
aging in terms of the practicality of a full model
involving emission variables and all the major reactive
pollutants.

-------
                      References

1.   Breiman, Leo, and William S. Meisel, "Short-term
    Changes  in Ground-Level Ozone Concentrations:
    An  Empirical  Analysis," Final Report for EPA
    Contract No.  68-02-1704, October 1975.

2.   Wayne,  Kokin, and Weisburd, "Controlled Evaluation
    of  Reactive Environmental Simulation Model (REM),"
    Vols.  I  & II, NTIS PB 220 456/8 and PB 220 457/6,
    February 1973.

3.   Breiman, L., and W. S. Meisel, "General Estimates
     of the Intrinsic Variability of the Data in Non-
     linear Regression Models," March 19, 1975, to be
     published in Journal of the American Statistical
     Association.

4.   Horowitz, Alan, W. S. Meisel, and D. C. Collins,
    "The Application of Repro-Modeling to the Analysis
    of  a Photochemical Air Pollution Model," Final
    Report  for EPA Contract No. 68-02-1207,
    December 31,  1973.

5.   Meisel, W. S., and D. C. Collins, "Repro-Modeling:
     An Approach to Efficient Model Utilization and
     Interpretation," IEEE Trans. on Systems, Man, and
     Cybernetics, July 1973.
[Figure 2.  Graph of regression surface, with SR = 100.]

[Figures 3, 4, & 5.  Actual and predicted oxidant values (pphm),
 7 AM to 5 PM, for wind parcels arriving at the Pomona station
 at 3, 4, & 5 PM, respectively.]

-------
      QUALITY ASSURANCE AND DATA VALIDATION  FOR THE

         REGIONAL AIR MONITORING SYSTEM  OF THE

        ST. LOUIS REGIONAL AIR POLLUTION STUDY
                  Robert B. Jurgens*
      Environmental Sciences Research  Laboratory
     Research Triangle Park, North  Carolina  27711

                          And

                   Raymond C.  Rhodes
    Environmental Monitoring and  Support  Laboratory
     Research Triangle Park, North  Carolina   27711
     The success of model development  and  evaluation
from a body of monitoring data depends  heavily upon
the quality of that data.  The quality  of  the
monitoring data in turn  is dependent upon  the various
quality assurance (QA) activities which have been
implemented for the entire system,  commencing with
the design, procurement, and  installation  of the
system and ending with validation of monitoring data
prior to archiving.  Of the many sources of aeromet-
ric and emissions data that exist, the St. Louis
Regional Air Pollution Study (RAPS) is the only known
study specifically designed for model development and
evaluation on an urban/rural scale.1,2

     The prime objective of RAPS is to  develop and
evaluate mathematical models  which  will  be useful in
predicting air pollution concentrations from informa-
tion of source emissions and  meteorology.   In addition
to detailed emissions and meteorological data, an
extensive base of high quality pollutant monitoring
data is required to verify and to refine the models.

     The Regional Air Monitoring System (RAMS) is the
ground-based aerometric  measurement system of RAPS and
consists of 25 automated data acquisition  sites
situated in and about the St. Louis metropolitan area.
Data from these 25 stations are transmitted over
telephone lines to a central  computer  facility for
processing and then sent to Research Triangle Park for
archival.  Details of RAMS have been described by

Meyers and Reagan.   The complex air pollution,
meteorological, and solar radiation measurements that
are made at RAMS sites are shown in Table  1.  Also
shown are the recording  intervals and  the  number of
recording stations for each instrument.

     Two main challenges exist for  an  effort of the
magnitude of the St. Louis study:

     1.  To efficiently  and effectively handle the
large quantity of monitoring  data;  and

     2.  To obtain high  quality monitoring data.

     In general, data validity results  from:  (1) A
quality assurance system aimed at acquiring acceptable
data, and (2) A screening process to detect spurious
values which exist in spite of the  quality control
process.
          Table 1.  RAMS NETWORK MEASUREMENTS

     AIR QUALITY:       Sulfur dioxide, total sulfur, hydrogen
                        sulfide, ozone, nitric oxide, oxides of
                        nitrogen, nitrogen dioxide, carbon
                        monoxide, methane, total hydrocarbons

     METEOROLOGICAL:    Wind speed, wind direction, temperature,
                        temperature gradient, pressure, dew
                        point, aerosol scatter

     SOLAR RADIATION:   Pyranometer, pyrheliometer, pyrgeometer

     (Measurement intervals are 1 or 5 minutes; the number of
     recording stations per instrument ranges from 4 to 25.)
               QUALITY  ASSURANCE SYSTEM

     The following  list includes the elements of a
total quality assurance system for aerometric
monitoring:
     Quality policy
    *Quality objectives
    *Quality organization and responsibility
     QA manual
    *QA plans
     Training
    *Procurement control
        Ordering
        Receiving
        Feedback and corrective action
    *Calibration
        Standards
        Procedures
     Internal QC checks
     Operations
        Sampling
        Sample handling
        Analysis
        Data
           Transmission
           Computation
           Recording
          *Validation
    *Preventive maintenance
    *Reliability records and analysis
    *Document control
     Configuration control
    *Audits
        On-site system
        Performance
     Corrective action
     Statistical analysis
     Quality reporting
     Quality investigation
     Interlab testing
     Quality costs
*On assignment from the National Oceanic and Atmos-
 pheric Administration,  U.S.  Department of Commerce.
     Detailed definition and discussion of the
elements of quality assurance for air pollution
measurement systems have recently been published.

     The elements of particular concern to RAMS
fall into three general categories:

     1.  Procurement and  management, those activities
which need to be established or accomplished early
in the program;

     2.  Operation and maintenance,  those activities
which need to be performed  routinely to assure
continued operation  of the  system; and
                                                           *These particular elements, of major  concern  to  data
                                                            screening, are discussed herein.

-------
     3.   Specific data quality control activities,
those activities which involve the calibration and
data output from the meteorological and pollutant
measurement instruments and are explicitly involved
in acquiring quality data.

Procurement and Management

     Data Quality Objectives.  A requirement of the
initial  contract stated that 90% valid data were to
be achieved.  Valid data for pollutant measurements
were defined as the data obtained during periods
when the daily zero and span drifts were less than
2 per cent, with an allowance for the time required
to perform daily zero/span checks and periodic
multi-point calibrations.

     Procurement.  In planning to achieve the
objectives very stringent requirements were placed
on the suppliers of the various instruments of the
system and extensive performance tests (with numerous
rejections) were conducted prior to final acceptance.

     First Article Configuration Inspection (FACI).
The first remote station was installed and performance
tested by the contractor under EPA review.  Various
indicated corrections were made before proceeding
with the installation of the entire network.

     System Acceptance Test  (SAT).  After installation
of the entire network, a one-month system performance
demonstration was required to assure satisfactory
operation with respect to obtaining data of adequate
quantity and quality.  The SAT was completed in
December 1974.

     Incentive Contract.  The current contract has
 introduced award fee performance incentives for manage-
 ment, schedule, and for quality.  The quality portion
 of the award fee provides a continual motivation for
 obtaining and improving data quality.

     Quality Assurance Plans.  An extensive QA plan has
 been developed by the contractor.  A point of emphasis
 is that the QA plan (and its implementation) is dynamic
 —continually being revised and improved based upon
 experience with the system.  The QA plan outlines in
 detail the activities of the various QA elements
 previously mentioned.

     Organization.  To implement the QA plan, one
 full-time employee is assigned to overall QA
 responsibilities reporting directly to the Program
 Manager.  In addition, two persons are assigned for QA
 on a half-time basis, one for the remote monitoring
 stations, and the other for the central computer
 facility.

 Operation and Maintenance

     Document Control.  Detailed operation and
 maintenance manuals have been prepared for the remote
 stations and for the central computer facility.  These
 manuals are issued in a loose-leaf revisable and
 document-control format so that needed additions
and/or revisions can be made.  Also, a complete history
of changes is kept so that traceability to the
procedures in effect for any past period of time can
 be made.  A document control system also exists for
 the computer programs.

     Preventive Maintenance.  Record-keeping and
 appropriate analysis of the equipment failure records
 by instrument type and mode of failure have enabled
more efficient and effective scheduling of maintenance
 and optimum spare parts inventory with resultant
improvement in instrument performance.  RAMS station
preventive maintenance is completed twice each week.
Normally, the remote stations are unattended except
for the weekly checks, for other scheduled maintenance,
or for special corrective maintenance.

     Central Computer Monitors.  Central computer
personnel, using a CRT display, periodically monitor
the output from all stations to detect problems as
soon as possible.  To maximize the satisfactory opera-
tion of the network equipment, the assigned QA
personnel review the following activities associated
with preventive maintenance:

     1.  remote station logbook entries,

     2.  remote station corrective maintenance reports,

     3.  laboratory corrective maintenance reports,
and
     4.  central computer operator log.
     Additionally, the QA individuals are in frequent
verbal communication with field and laboratory
supervisors to discuss quality aspects of the
operations.

     Reliability Records and Analysis

          Telecommunications Status Summaries.  Each
day, a summary of telecommunications operations is
prepared to determine which stations and/or telephone
lines are experiencing significant problems that
might require corrective action.

          Daily Analog/Status Check Summaries.  Each
day, the central computer prepares a summary of analog/
status checks by station so that major problems can be
corrected as soon as possible by available field
technicians.  These analog/status checks are explained
in the section on data validation.

          Configuration Control.  Histories are kept
of the station assignment of specific instruments,
by serial number, so that possible future problems
with specific instruments can be traced back to the
stations.  A logbook for each instrument is maintained
for recording in a systematic manner the nature and
date of any changes or modifications to the hardware
design of the instruments.

Specific Data Quality Control Activities

     Calibration

          Calibration References for Gaseous Pollutants.
NBS standard reference materials are used for calibra-
tion standards if available.  Otherwise, commercial
gases are procured and certified at NBS for use as
standards.

          Multipoint Calibrations.  As a check on the
linearity of instrument response, an on-site, 5-point
calibration is scheduled at each station at 8-week
intervals.  Originally, acceptability was determined
by visual evaluation of the calibration data plots;
more recently, quantitative criteria are being
established for linearity.

          Measurement Audits.   Independent measurement
audits for pollutant instruments are performed by the
contractor using a portable calibration unit and
independent calibration sources at each station once
each calendar quarter.  Similar audits are performed
on the same frequency for temperature, radiation, and

-------
mass flowmeters;  and  independent checks are made  on
relative humidity, windspeed, and wind direction
instruments.   In  addition to the internal audits  per-
formed by the  contractor on his own operation, a
number of external audits have been performed by  EPA

and other contractors  to check the entire measurement
system.

     On-Site System Audit.  A thorough, on-site quality
system audit of  RAMS  was performed for EPA by an

independent contractor.    The results of this audit
pointed out several areas of weakness for which
corrective actions have been implemented.

     Data Validation.  As a part of the overall QA
system, a number of data validation steps are
implemented.   Several data validation criteria and
actions are built into the computer data acquisition
system:

          Status  Checks.  About 35 electrical checks
are made to sense the condition of certain critical
portions of the  monitoring system and record an
on-off status.   For example, checks are made on power
on/off, valve  open/shut, instrument flame-out, air
flow.  When these checks are unacceptable, the
corresponding  monitoring data are automatically
invalidated.

          Analog Checks.  Several conditions including
reference voltage,  permeation tube bath  temperature,
and calibration  dilution gas flow are sensed and
recorded as analog values.  Acceptable limits for
these  checks  have been determined, and,  if exceeded,
the corresponding affected monitoring data are invalidated.

          Zero/Span  Checks.  Each day, between 8-12  pm,
each of  the gaseous  pollutant instruments in each
station  are zeroed and spanned by automatic, sequenced
commands from the central computer.  The results  of
the zero/span checks  provide the basis for a two-point
calibration equation, which  is automatically computed
by the central computer  and  is used for  converting
voltage  outputs  to pollutant concentrations  for  the
following calendar day's data.   In addition, the
instrument  drift at zero and span conditions between
successive  daily checks  are  computed by  the  central
computer and  used as  a basis for validating  the
previous day's monitoring data.  Originally, zero and
span drifts were considered  as acceptable if less than
2 per  cent,  but the span drift criterion has recently
been increased to 5 per  cent, a more realistic  level.
If the criteria are not  met, the minute  data for the
previous day  are flagged.  Hourly averages  are
computed during routine  data processing  only with data
which  have  not been flagged  as invalid.
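One plausible reading of the zero/span procedure (this is a
sketch, not the RAMS central-computer software) is a two-point
calibration plus a drift test against the 2 and 5 per cent
criteria, with drift expressed as a fraction of full scale:

```python
# Two-point calibration from zero/span voltages, and a daily drift check.
def two_point_calibration(v_zero, v_span, c_span):
    """Return (slope, intercept) mapping instrument voltage to concentration."""
    slope = c_span / (v_span - v_zero)
    return slope, -slope * v_zero

def drift_ok(v_zero_prev, v_span_prev, v_zero_now, v_span_now, full_scale,
             zero_limit=0.02, span_limit=0.05):
    """True when zero and span drift (as fractions of full scale) meet the criteria."""
    zero_drift = abs(v_zero_now - v_zero_prev) / full_scale
    span_drift = abs(v_span_now - v_span_prev) / full_scale
    return zero_drift <= zero_limit and span_drift <= span_limit

slope, intercept = two_point_calibration(v_zero=0.05, v_span=4.05, c_span=0.40)  # 0.40 ppm span gas
print(slope * 2.0 + intercept)                         # concentration for a 2-volt reading
print(drift_ok(0.05, 4.05, 0.07, 4.30, full_scale=5.0))  # assumed 5-volt full scale
```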

                 DATA SCREENING IN RAMS

     The tests which  are used to screen  RAMS data are
summarized  in Table  2.   Specific tests and  associated
data base  flags are  listed.  The types of screens that
have been  employed or tested will be detailed,  the
mechanisms  for flagging  will be  reviewed, and  then
the  implementation of screening  within RAMS  will  be
discussed.
    Table 2.  SCREENING CATEGORIES AND ASSOCIATED FLAGS FOR RAMS DATA

     Category                                          Flag

     I.   Modus Operandi
            No instrument
            Missing measurement                        10^37
            Status
            Calibration

     II.  Continuity and Relational
          A.  Intrastation
                Gaseous analyzer drift
                Gross limits
                Aggregate frequency distributions      Being implemented
                Relationships
                Temporal continuity
                    Constant output
          B.  Interstation
                Meteorological network uniformity
                Statistical outliers
                    Dixon Ratio                        Being implemented

     III. A Posteriori
            Review of station log
            Unusual events or conditions
            Visual inspection of data                  Validate - remove flag
     For  descriptive purposes, the  tests  are divided
into three  categories.   The first category,  "Modus
Operandi,"  contains checks which document the network
instrument  configuration and operating  mode  of the
recording system.   Included are checks  for station
instrumentation,  missing data, system analog and
status sense  bits,  and  instrument calibration mode.
These checks, which have been described above, are
part of the quality control program  incorporated in
the data  acquisition system and central facility data
processing, and  are an  important data management
function  used to  document system performance.

     The  second  category, "Continuity and Relational,"
contains  temporal  and spatial continuity  checks and
relational  checks  between parameters which are based
on physical and  instrumental considerations  or on
statistical patterns of the data.  A natural  sub-
division  can  be made between intrastation checks,
those checks which  apply only to data from one station,
and interstation  checks, which test  the measured
parameters  for uniformity across the RAMS network.

     Intrastation  checks include tests  for gaseous
analyzer  drift,  gross limits, aggregate frequency
distributions, relationships, and temporal continuity.
The drift calculations, which are part  of the quality
control program,  have been discussed above.

     Gross  limits,  which are used to screen  impossible
values, are based  on the ranges of the  recording
instruments.  These, together with the  parametric
relationships which check for internal  consistency
between values,  are listed in Table  3.  Setting limits
for relationship  tests  requires a working knowledge  of
noise levels  of  the individual instruments.   The
relationships used  are  based on meteorology,  atmos-
pheric chemistry,  or on the principle of  chemical mass
balance.  For example,  at a station  for any  given
minute, TS cannot be less than SO2 + H2S with allow-
ances for noise  limits  of the instruments.

-------
            Table 3.  GROSS LIMITS AND RELATIONAL CHECKS

                              INSTRUMENTAL OR NATURAL LIMITS
     PARAMETER                LOWER            UPPER
     Ozone                    0 ppm            5 ppm
     Nitric Oxide             0 ppm            5 ppm
     Oxides of Nitrogen       0 ppm            5 ppm
     Carbon Monoxide          0 ppm            50 ppm
     Methane                  0 ppm            50 ppm
     Total Hydrocarbons       0 ppm            50 ppm
     Sulfur Dioxide           0 ppm            1 ppm
     Total Sulfur             0 ppm            1 ppm
     Hydrogen Sulfide         0 ppm            1 ppm
     Aerosol Scatter          0.000001 m-1     0.00099 m-1
     Wind Speed               0 m/s            22.2 m/s
     Wind Direction           0°               360°
     Temperature              -20°C            45°C
     Dew Point                -30°C            45°C
     Temperature Gradient     -5°C             5°C
     Barometric Pressure      950 mb           1050 mb
     Pyranometers             -0.50            2.50 Langleys/min
     Pyrgeometers             0.30             0.75 Langleys/min
     Pyrheliometers           -0.50            2.50 Langleys/min

     INTERPARAMETER CONDITIONS:
         NO x O3 ≤ 0.04
         CH4 - THC ≤ Noise (CH4)
         CH4 - THC ≤ Noise (THC)
         SO2 - TS ≤ Noise (SO2)
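A minimal sketch of the gross-limit and relational screens of
Table 3 follows; the noise allowances used are invented
placeholders.

```python
# Gross-limit and relational screening of one minute of station data.
GROSS_LIMITS = {            # parameter: (lower, upper)
    "O3": (0.0, 5.0), "NO": (0.0, 5.0), "NOX": (0.0, 5.0),          # ppm
    "CO": (0.0, 50.0), "CH4": (0.0, 50.0), "THC": (0.0, 50.0),      # ppm
    "SO2": (0.0, 1.0), "TS": (0.0, 1.0), "H2S": (0.0, 1.0),         # ppm
    "WS": (0.0, 22.2),                                              # m/s
}
NOISE = {"SO2": 0.005, "CH4": 0.05}                                 # assumed noise levels (ppm)

def screen_minute(record):
    """Return a list of flag strings for one minute of station data."""
    flags = []
    for name, (lo, hi) in GROSS_LIMITS.items():
        if name in record and not lo <= record[name] <= hi:
            flags.append(f"gross limit: {name}")
    # relational checks: internal consistency within instrument-noise allowances
    if record.get("SO2", 0.0) - record.get("TS", 0.0) > NOISE["SO2"]:
        flags.append("relational: SO2 exceeds total sulfur")
    if record.get("CH4", 0.0) - record.get("THC", 0.0) > NOISE["CH4"]:
        flags.append("relational: CH4 exceeds total hydrocarbons")
    return flags

print(screen_minute({"O3": 0.08, "SO2": 0.31, "TS": 0.30, "CH4": 1.9, "THC": 2.2}))
```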
     A refinement of the gross limit checks can be
made using aggregate frequency distributions.  With a
knowledge of the underlying distribution, statistical
limits can be found which have narrower bounds than
the gross limits and which represent measurement
levels that are rarely exceeded.  A method for fitting
a parametric probability model to the underlying
distribution has been developed by Dr. Wayne Ott of
EPA's Office of Research and Development.7  B.E.
Suta and G.V. Lucha8 have extended Dr. Ott's program
to estimate parameters, perform goodness-of-fit tests,
and calculate quality control limits for the normal
distribution, 2- and 3-parameter lognormal distribu-
tions, the gamma distribution, and the Weibull
distribution.  These programs have been implemented
on the OSI computer in Washington and tested on
water quality data from STORET.  This technique is
being studied for possible use in RAMS as a test for
potential recording irregularities as well as a
refinement of the gross limit check currently
employed.

     Under intrastation checks are specific tests
which examine the temporal continuity of the data as
output from each sensor.  It is useful to consider,
in general, the types of atypical or erratic responses
that can occur from sensors and data acquisition
systems.  Figure 1 illustrates graphically examples
of such behavior, all of which have occurred to some
extent within RAMS.  Physical causes for these
reactions include sudden discrete changes in component
operating characteristics, component failure, noise,
telecommunication errors and outages, and errors in
software associated with the data acquisition system
or data processing.  For example, it was recognized
early in the RAMS program that a constant voltage
output from a sensor indicated mechanical or electri-
cal failures in the sensor instrumentation.  One of
the first screens that was implemented was to check
for 10 minutes of constant output from each sensor.
Barometric pressure is not among the parameters
tested since it can remain constant (to the number of
digits recorded) for periods much longer than 10
minutes.  The test was modified for other parameters
which reach a low constant background level during
night-time hours.

[Figure 1.  Irregular instrument response: A) single outlier,
 B) step function, C) spike, D) stuck, E) missing,
 F) calibration, G) drift.]
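The aggregate-frequency-distribution refinement described above
can be sketched as below.  This is a stand-in for the Ott and
Suta-Lucha programs (which are not reproduced here): a two-
parameter lognormal is fitted to a synthetic record and a high
quantile is taken as a screening limit narrower than the gross
limit.

```python
# Fit a 2-parameter lognormal to strictly positive readings and derive a
# statistical screening limit from a high quantile.
import numpy as np

def lognormal_limit(values, z=3.09):        # z = 3.09 ~ 99.9th percentile of a normal
    logs = np.log(values)
    mu, sigma = logs.mean(), logs.std(ddof=1)
    return float(np.exp(mu + z * sigma))

rng = np.random.default_rng(1)
so2 = rng.lognormal(mean=np.log(0.01), sigma=0.8, size=5000)   # synthetic SO2 record (ppm)
print(lognormal_limit(so2))    # screening limit, well below the 1 ppm gross limit
```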
                                                  A technique which can  detect any sudden jump in
                                             the response of an instrument,  whether it is from an
                                             individual outlier, step  function or spike, is the
                                             comparison of minute successive differences with
                                             predetermined control limits.   These limits are
                                             determined for each parameter  from the distribution
                                             of successive differences for  that parameter.  These
                                             differences will be approximately normally distributed
                                             with mean zero (and computed variance) when taken over
                                             a sufficiently long time  series of measurements.

                                                  Exploratory application of successive differences,
                                             using 4 standard deviation  limits which will flag 6
                                             values in 100,000 if the  differences are truly
                                              normally distributed, indicates that there are abnormal
                                             occurrences of "jumps" within  certain parameters.
                                             Successive difference screening will be implemented
                                             after further testing to  examine the sensitivity of
                                             successive difference distributions to varying
                                             computational time-periods  and  to station location.

                                                  The type of "jump" can easily be identified.  A
                                             single outlier will have  a  large successive difference
                                             followed by another about the  same magnitude but of
                                             opposite sign.  A step function will not have a return,
                                             and a spike will have a succession of large successive
                                             differences of one sign followed by those of opposite
                                             sign.
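
     A minimal Python sketch of this successive-difference screen follows, with
the 4-standard-deviation limit and the sign-pattern classification described
above.  The test series, the in-sample estimate of the limit, and the simple
classification rules are assumptions for illustration; an operational screen
would use limits precomputed from a long record for each parameter and station.

import numpy as np

def screen_jumps(series, n_sigma=4.0):
    """Flag successive differences beyond n_sigma standard deviations and
    classify each flagged jump as outlier, step function, or spike."""
    diffs = np.diff(series)
    limit = n_sigma * diffs.std()
    events, i = [], 0
    while i < len(diffs):
        if abs(diffs[i]) <= limit:
            i += 1
            continue
        nxt = diffs[i + 1] if i + 1 < len(diffs) else 0.0
        if abs(nxt) > limit and np.sign(nxt) == -np.sign(diffs[i]):
            events.append((i + 1, "single outlier"))   # large jump, immediate return
            i += 2                                      # skip the return difference
        elif abs(nxt) > limit:
            events.append((i + 1, "spike"))             # run of same-signed jumps
            i += 1
        else:
            events.append((i + 1, "step function"))     # large jump, no return
            i += 1
    return events

# Example: a near-constant signal with one outlier and one step change.
x = np.full(60, 10.0) + np.random.default_rng(0).normal(0, 0.1, 60)
x[20] += 5.0          # single outlier
x[40:] += 5.0         # step function
print(screen_jumps(x))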

                                                  The interstation or  network uniformity screening
                                             tests that have been implemented in RAMS will now be
                                             described.  Meteorological  network tests are performed
                                             on hourly average data and  are  based on the principle
                                             that meteorological parameters  should show limited
                                             differences between stations under certain definable
                                             conditions typically found  in winds of at least
                                             moderate speeds (>4 m/sec).  Each station value is
                                             compared with the network mean.  The network mean is
                                             defined as the average value for a given parameter
                                             from all stations having  reported valid data.  (If
                                             more than 50% are missing,  a network mean is not
                                                       733

-------
computed and the test is not made.)  Values exceeding
prescribed limits are flagged.  The limits have been
set on the advice of experienced meteorologists.  The
tested parameters and flagging limits are listed
below.

     Maximum allowable deviations from network mean
      under moderate winds (network mean > 4 m/sec)

     Wind speed              2 m/sec or MEAN/3
                               (whichever is larger)
     Wind direction          30°
     Temperature             3°C
     Temperature difference  0.5°C
     Dew point               3°C
     Adjusted pressure       5.0 millibars
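
     A hedged sketch of this interstation check is given below, assuming
hypothetical station names and using the 3°C temperature limit from the table;
the rule of skipping the test when more than half of the stations are missing
follows the description above.

def network_check(values, limit, min_valid_fraction=0.5):
    """values: dict of station -> hourly value (None if missing).
    Returns the stations exceeding the limit, or None if the test is skipped."""
    valid = {s: v for s, v in values.items() if v is not None}
    if len(valid) < min_valid_fraction * len(values):
        return None                       # network mean not computed
    mean = sum(valid.values()) / len(valid)
    return [s for s, v in valid.items() if abs(v - mean) > limit]

# Temperature example with the 3 deg C limit from the table above.
temps = {"RAMS-01": 21.2, "RAMS-02": 20.8, "RAMS-03": 26.0,
         "RAMS-04": None, "RAMS-05": 21.5}
print(network_check(temps, limit=3.0))    # -> ['RAMS-03']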
     In addition to network screening techniques
which are based on knowledge of underlying physical
processes, methods from statistical outlier
theory9,10 were also examined.  Specifically, the
Dixon ratio test11 was implemented to determine
extreme observations of a parameter across the RAMS
network.  The Dixon ratio test is based entirely on
ratios of differences between observations from an
assumed normal distribution and is easy to calculate.
The Dixon criteria for testing a low suspect value
 from a sample size of n, n ≤ 25, are shown in
Figure 2.  Though the entire sample is shown as
ranked, only the extreme 2 or 3 values need to be
ordered.  Associated with each value of n are
tabulated ratios for statistical significance at
various probability levels.  For example, if n=25,
 X1 would be considered as an outlier at the 1% level
 of significance when r22 ≥ 0.489.  Since the under-
lying distribution may not be normal, the calculated
probabilities may not be exact, but are used as
indicators of heterogeneity of the network observations
at a given time.
 Figure 2. Dixon criteria for testing a low suspect value in an ordered sample (X1, X2, ..., Xn).
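
     The sketch below computes the Dixon r22 ratio for a low suspect value, the
form appropriate for sample sizes of roughly 14 to 25.  The station values are
invented, and the 0.489 critical value at the 1% level is the one quoted in the
text for n = 25; other sample sizes would require the full table of critical
ratios.

def dixon_r22_low(values):
    """Return the r22 ratio testing the smallest observation."""
    x = sorted(values)              # only the extreme values really need ordering
    n = len(x)
    if not 14 <= n <= 25:
        raise ValueError("r22 form of the Dixon test assumed for n in 14..25")
    return (x[2] - x[0]) / (x[n - 3] - x[0])

# Hypothetical hourly values from 25 stations, one suspiciously low reading.
obs = [4.1, 4.3, 4.0, 4.2, 4.4, 4.1, 4.3, 4.2, 4.5, 4.0, 4.1, 4.2,
       4.3, 4.4, 4.1, 4.0, 4.2, 4.3, 4.1, 4.2, 4.4, 4.3, 4.2, 4.1, 1.2]
r = dixon_r22_low(obs)
print(f"r22 = {r:.3f}  -> outlier at 1% level" if r >= 0.489 else f"r22 = {r:.3f}")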
-------
SENSOR INSTRUMENTS → ACQUISITION SYSTEM → PROCESSING AND ANALYSIS → SCREENING → DATA BASE ARCHIVAL

Figure 3. Generalized data flow for environmental measurement systems.
          Data screening should take place as near to
data acquisition as possible either in data processing
which is traditionally concerned with laboratory
analysis, conversion to engineering units, transcribing
intermediate results, etc., or in a separate module,
as illustrated, designed specifically for the screening
process.  Screening data soon after data acquisition
permits system feedback in the form of corrective
maintenance, changes to control processes, and even
to changes in system design.  This feedback is
essential to minimize the amount of lost or marginally
acceptable data.

          The RAMS screening tests, which have been
developed at Research Triangle Park (RTP), are now
part of the data processing carried out at the RAPS
central facility in St. Louis.  Slow computation
speeds of the St. Louis PDP 11/40 computer required
restricting the intrastation screening tests to hourly
average data.  RAMS data is still passed through the
RTP screening module before archiving.

SUMMARY

     The experiences gained in RAMS and applicable to
other monitoring systems are:

     1.  Data validity is a function of quality
assurance and data screening.

     2.  A QA plan and data screening rules should
be established initially and maintained throughout
the program.

     3.  The QA plan and screening rules are dynamic,
being improved as additional knowledge and experience
is gained.

     4.  Applied during data acquisition or shortly
thereafter, quality control and screening checks
constitute an important feedback mechanism, indicating
a requirement for corrective action.

REFERENCES

1.   Burton, C.S. and G.M. Hidy.  Regional Air
     Pollution Study Program Objectives and Plans,
     EPA 630/3-75-009, Dec. 1974.

2.   Thompson, J.E. and S.L. Kopczynski.  The Role of
     Aerial Platforms in RAPS, Presented at an EPA
     meeting on Monitoring from Las Vegas, Nevada,
     March 1975 (unpublished).

3.   Meyers, R.L. and J.A. Reagan.  Regional Air
     Monitoring System at St. Louis, Missouri,
     International Conference on Environmental Sensing
     and Assessment, Sept. 1975 (unpublished).

4.   Quality Assurance Handbook for Air Pollution
     Measurement Systems, Volume I, Principles,
     EPA 600/9-76-005, March 1976.
5.   von Lehmden, D.J., R.C. Rhodes and S. Hochheiser.
     Applications of Quality Assurance in Major Air
     Pollution Monitoring Studies-CHAMP and RAMS,
     International Conference on Environmental Sensing
     and Assessment, Las Vegas, Nevada, Sept. 1975.

6.   Audit and Study of the RAMS/RAPS Programs and
     Preparation of a Quality Assurance Plan for RAPS,
     Research Triangle Institute, Research Triangle
     Park, N.C.  27707, EPA Contract No.  68-02-1772.

7.   Ott, W.R.  Selection of Probability Models for
     Determining Quality Control Data Screening
     Range Limits, Presented at 88th Meeting of the
     Association of Official Analytical Chemists,
     Washington, D.C., Oct. 1974.

8.   Suta, B.E. and G.V. Lucha.  A Statistical
     Approach for Quality Assurance of STORET-Stored
     Parameters, SRI, EPA Contract No. 68-01-2940,
     Jan. 1975.

9.   Grubbs, F.E.  Procedures for Detecting
     Outlying Observations in Samples, Technometrics
     11 (1), 1-21, 1969.

10.  Anscombe, F.J.  Rejection of Outliers,
     Technometrics 2 (2), 123-147, 1960.

11.  Dixon, W.J.  Processing Data for Outliers,
     Biometrics 9 (1), 74-89, 1953.
                                                       735

-------
                               QUANTITATIVE RISK ASSESSMENT FOR
                             COMMUNITY EXPOSURE TO VINYL CHLORIDE
              Arnold M.  Kuzmack
      Office of Planning and Evaluation
     U.S.  Environmental  Protection Agency
           Washington, D.C.  20460
               Robert  E.  McGaughy
     Office  of  Health  and Ecological Effects
     U.S. Environmental  Protection Agency
            Washington,  D.C.   20460
                   Summary

    Vinyl chloride is a known human carcino-
gen; it has produced liver angiosarcoma, a
very rare form of cancer, as well as other
cancers and non-cancer effects in occupation-
ally exposed populations.  It is also known to
be emitted into the atmosphere from plants
which produce vinyl chloride monomer (VCM
plants) and plants which polymerize the mono-
mer to polyvinyl chloride (PVC plants).  Al-
though concentrations of vinyl chloride in the
ambient air are much less than those which
caused cancer in workers, it is generally con-
sidered prudent to assume that there is no
threshold for chemical carcinogens, so that
any exposure involves some risk.  In conjunc-
tion with EPA consideration of rulemaking ac-
tion to regulate emissions of vinyl chloride
from VCM and PVC plants, the Administrator of
EPA requested that an analysis be performed
which would estimate quantitatively the risk
resulting from VC emissions and assess the re-
liability of the estimates.   This paper re-
ports the results of that analysis.  The de-
tails are presented in Appendices A through E,
which are available from the authors on re-
quest.

             Method of Analysis

    The analysis involves three steps which
are discussed below.  They are an estimate of
size of the exposed population, concentrations
of vinyl chloride to which it is exposed, and
number of liver angiosarcomas and other health
effects which would result from this exposure.
Of these, the last estimate is by far the most
difficult to make.  In addition, an investi-
gation was made of the places of residence of
all people known to have died of liver angio-
sarcoma in the last 10 years in an attempt to
detect clustering around PVC and VC plants.

    Although excess birth defects have been
reported in communities near some plants, the
current data is too fragmentary for conclu-
sions to be drawn; thus, these effects have
not been considered in this paper.

         Size of Exposed Population

    A study  by the American Public Health
Association (APHA), performed under contract
to EPA's Office of Toxic Substances, deter-
mined the number of people living within
various distances, up to 5 miles, from each of
9 VCM plants and 33 PVC plants.  The study was
based primarily on census tract information.
The validity of the methodology used was con-
firmed by a more detailed analysis of the
population living around a few plants, per-
formed by EPA's Office of Planning and Evalua-
tion.
    The total population living within  5 miles
of all PVC and VCM plants is shown in the
following table:
          Distance (mi)         Population
             0 - 1/2                47,000
             1/2 - 1               203,000
             1 - 3               1,491,000
             3 - 5               2,838,000

             TOTAL               4,579,000
    Thus, a total of 4.6 million people live
in the vicinity of these plants.  The use of
residence data involves some error, of course,
since people spend part of their time away
from their homes and are exposed to varying
levels of vinyl chloride.  There does not
seem to be any practical way around this
problem, short of a detailed study of travel
patterns of 4 million people in over 40 sepa-
rate communities.

    Ambient Vinyl Chloride Concentrations

    Annual average ambient concentrations of
vinyl chloride were calculated by standard
diffusion modeling techniques.  Two independ-
ent studies were made, one by EPA's Office of
Air Quality Planning and Standards (OAQPS)
and one by Teknekron, Inc.2  The agreement
between the two studies was good, with differ-
ences generally less than 25%.  It was
decided to use the Teknekron results in the
actual calculations since they included data
on variations in meteorological conditions
from location to location.

    For an average uncontrolled PVC or VCM
plant in an area with average meteorological
conditions, the annual average concentration
of vinyl chloride in each annulus around the
plant is shown in the following table:
                  Vinyl Chloride Concentration (ppb)
 Distance (mi)        PVC Plant        VCM Plant
   0 - 1/2               323              113
   1/2 - 1                57               20
   1 - 3                  15                5.2
   3 - 5                   5.7              2.0
It can be seen that concentrations around VCM
plants are significantly smaller than around
PVC plants.  This fact combined with the much
smaller population living near VCM plants
implies that by far the greatest part of the
public health risk is from emissions of PVC
plants.
                                             736

-------
     In calculating the average population
exposure, it is necessary to consider, for
each population affected, the type of plant
(VCM or PVC), the size of plant, the multi-
plicity of plants nearby, and the meteorologi-
cal conditions at the plant site.  Informa-
tion from OAQPS and from the APHA study was
used to determine areas where more than one
plant was  located, and OAQPS characterized
the size of each plant as "average" or
"large."  The  Teknekron study was used to
categorize the meteorological and topographic
conditions at  each location.

     The net result of these calculations is
that the average exposure faced by a person
chosen at random from the 4.6 million people
living within  5 miles of plants is 17 ppb.
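
     For orientation only, the sketch below reproduces the population-weighting
step using the two tables above for the PVC-plant annuli.  The paper's 17 ppb
figure additionally accounts for VCM plants, plant size, multiple nearby
plants, and site meteorology, so the simpler weighted mean here comes out
somewhat lower (about 14 ppb).

# Population-weighted average concentration over the annuli, mixing the total
# population table with the average uncontrolled PVC-plant concentrations
# purely for illustration.
population = {"0-1/2": 47_000, "1/2-1": 203_000, "1-3": 1_491_000, "3-5": 2_838_000}
pvc_ppb    = {"0-1/2": 323.0,  "1/2-1": 57.0,    "1-3": 15.0,      "3-5": 5.7}

total_pop = sum(population.values())
weighted = sum(population[a] * pvc_ppb[a] for a in population) / total_pop
print(f"{weighted:.0f} ppb population-weighted (PVC annuli only)")   # ~14 ppb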

     Unfortunately, it has not been possible
to make a systematic comparison of the
diffusion modeling results with data obtained
from actual monitoring, although they appear
generally consistent.  It is therefore diffi-
cult to estimate the uncertainty of these
estimates.  Lacking anything better, we can
take the difference between the two diffusion
modeling efforts of up to 25% as an estimate
of that uncertainty.

Health Effects Resulting From Exposure

     What are  the results of exposing 4.6
million people to an average of 17 ppb of
vinyl chloride?  The first major decision to
face in answering this question is to arrive
at some combination of two basic approaches.
One approach is to rely largely on human
data (which exists for vinyl chloride but not
for many other chemicals of concern to EPA);
the second is  to make projections from
animal experiments.  Both involve difficul-
ties.  Use of human data eliminates the un-
certainties that result because we do not
know the differences in response between the
test animals and humans.  On the other hand,
with the data on human (occupational) expo-
sure, it is necessary to guess at exposure
levels over the past 30 years and approximate
the total number of workers involved and the
number of cancers caused by past exposures
for which symptoms may not appear until many
years in the future.  By using animal data we
can avoid these problems, but only at the
price of uncertainty in the relevance of
animal experiments to human exposures.  The
approach taken in this analysis is to use
animal data to predict the probability of
human liver angiosarcoma, and then use the
human data to  the greatest extent possible
to interpret those predictions.

     A second  major decision that must be
made is how to project the results observed
at high doses  in animal experiments and in
the occupational exposures to the much lower
doses encountered in the environment.  Two
alternative assumptions are frequently made
in the scientific literature.   The first is
a straight-line projection to zero dose,
assuming no threshold (the "linear model").
This is also referred to as the "one hit"
model,  since it would follow logically from
the assumption that each minute increment of
exposure to a  carcinogen has the same inde-
pendent probability of causing a cancer,
regardless of the dose level.  This assumption
is generally accepted as prudent  in radiation
carcinogenesis.  For chemical carcinogenesis,
the model is usually considered to provide an
upper limit to the level of effects likely at
extremely low doses, because the  existence of
detoxification mechanisms would render small
doses less effective in causing cancer and
would therefore result in a threshold, or at
least fewer effects.

    The second commonly used projection method
is based on the assumption that the observed
changes in response with dose are the result
of variations of susceptibility in the popula-
tion, which is assumed to be log-normally
distributed with dose.   For convenience, we
refer to this as the "log-probit" model
because it forms a straight line when the
logarithm of the dose is plotted against the
proportion of responses expressed in probabi-
lity units (probits).   The log-probit model
is used in the Mantel-Bryan procedure.

    In this analysis,  both models are used.
For technical reasons,  the log-probit model
is difficult to apply to this case.  There-
fore, the basic calculations were done using
the linear model, but a sensitivity analysis
was done to show how the results would change
under the log-probit assumption.  Thus, the
log-probit model results are shown below as a
range, not a definite number.

    A third decision that must be made is how
to predict human incidence rates from animal
data.  Again, there is  little hard data to
provide guidance.  The  assumption used here
is that a lifetime exposure of humans to a
given concentration of vinyl chloride would
produce effects in the  same proportion of
individuals as a lifetime exposure of rats.
Thus, the one-year exposure in the animal
experiments would be equivalent to about 30
years of exposure for humans.

    A fourth decision to make is how to use
the available human data on liver angiosar-
coma cases among highly-exposed workers to
calculate the probability per year of expo-
sure that cases will eventually develop in
people.  This calculation is needed for com-
parisons with the animal model.  There are
three aspects to this problem:  1) to find in
the literature a realistic estimate for the
fraction of highly-exposed workers who have
contracted liver angiosarcoma at some time in
their lives, 2)  to account for the fact that
the currently-observed rates underestimate
the actual incidence because they do not
include workers who have been exposed more
recently than 15 to 20 years ago, and 3) to
account for the fact that people can die from
other causes before a latent case of liver
angiosarcoma becomes manifest.

    These issues were treated as follows:  Of
the four occupational epidemiology studies
from which it is possible to estimate an
incidence rate,3-6 the two with the smallest
number of subjects and the best separation of
highly-exposed workers from the group of all
workers had the largest incidence of
angiosarcoma.  This incidence was assumed to
be valid for all highly-exposed workers.  The
                                             737

-------
 latency  time distribution  for  liver  angiosar-
 coma  and the growth  in  the number  of person-
 years  of exposure  since  1940 are two factors
 which  affect the number  of cases we  have  ob-
 served through  1974.  These factors  are ana-
 lyzed  in Appendix  D.  The  result of  the analy-
 sis is an estimate of the  probability per year
 of exposure that a person  will  get angiosar-
 coma  some time  in  his life.  The remaining
 problem  of multiple  risks  competing  for mor-
 tality was not  treated  because  of  its complex-
 ity.

      A fifth decision that must be made is  how
 to quantitatively describe the other effects of
 vinyl  chloride  exposure  besides liver angio-
 sarcoma.   This  problem  was handled by estimat-
 ing from the literature3,7-9 ratios  of the
 number of people with other cancers  and the
 number of people with liver damage compared to
 the number with angiosarcoma.   As  an index  of
 liver  damage, the  bromsulphalein (BSP) test is
 used  because it, among  all liver function
 tests  that have been used, correlates best
 with  vinyl chloride  exposure and because  an
 abnormal BSP test  indicates that severe damage
 has occurred in the  liver, either  because the
 liver  cells are not  able to assimilate the  in-
 travenously injected BSP dye from  the blood
 and excrete it into the bile passages, or because the
 the bile passages  are no longer structurally
 intact enough to carry  the dye  out of the
 liver.

             Results of Analysis

      The results of  these  five  aspects of the
 problem  are presented below in  reverse order.
 The approximate ratio of severe liver damage
 cases  to liver  angiosarcoma cases  is  about 30,
 the result being consistent for two  indepen-
 dent  occupational  studies.   It was also found
 that  about twice as  many cases  of cancer  of
 all sites  are caused by vinyl chloride as
 cases  of liver  angiosarcoma alone.    The animal
 experiments have shown approximately the same
 ratio  of all cancers to liver angiosarcoma,
 after  background incidence is taken  into
 account.

      In  calculating  the probability per year
 of exposure that a highly-exposed worker will
 get angiosarcoma some time in his life,  we
 found  that the  fraction of highly-exposed
 workers  who have been currently diagnosed is
 0.02.  They have been exposed for an  average
 of 17  years before diagnosis.   The analysis,
 which  was  based on the available data on  the
 time distribution  of person-years of  exposure
 and the  distribution of latency times from
 first  exposure  to  diagnosis,  showed that only
 about 40% of the highly-exposed workers who
 are expected to get angiosarcoma some time in
 their  lives have been diagnosed already.
 Since  the  data was  incomplete,  several assump-
 tions had  to be made in order to complete  the
 analysis.  Therefore, the probability that one
 of these people will get angiosarcoma some
 time in their lives is 0.02/(17 × 0.40) ≈
 0.003 per year of exposure.  In Appendix D,
 the calculation is  explained  in greater  detail.
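
     Restating the arithmetic of the preceding paragraph as a worked equation:

    $$ \frac{0.02}{17 \times 0.40} \;\approx\; 0.003 \ \text{per year of exposure.} $$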

     The  17-year average concentration to
which these workers were exposed was  estimated
to be  350 ppm on the basis  of  one study.   Only
one company has  reported measurements of
vinyl chloride for the jobs in their plant.
These measurements, started in 1950, show
the highest exposure jobs ranged from 120 to
385 ppm before 1960, when the exposures were
reduced because of suspected toxicological
problems with vinyl chloride.  In estimating
the average, it was assumed that the other
factories, most of which probably did not
monitor the concentration of vinyl chloride,
were less concerned about industrial hygiene
and, therefore, took fewer precautions to keep
the levels low.

    In predicting the human angiosarcoma rate
from the animal dose-response data, it was
projected, from the linear model, that expo-
sure to vinyl chloride would cause 0.071
cases of liver angiosarcoma and 0.15 cases of
all types of cancer per million people per
year per ppb of continuous exposure.  Details
of these calculations are given in Appendix
B.  Converting to a 7-hour per day, 5-day
per week work schedule of exposure to 350 ppm,
the model predicts an angiosarcoma incidence
rate of 0.0052 per person-year exposure.  It
is shown in Appendix D that this rate is
numerically indistinguishable from the rate
of 0.003 calculated from the human data,
considering the known quantifiable errors of
estimating the parameters of the animal and
human data.  It can be concluded that the
slope of the linear animal dose-response
relationship for angiosarcomas is consistent
with the human data.
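
     One way to reproduce the 0.0052 figure, assuming the stated work schedule
is first converted to a continuous-exposure equivalent and then multiplied by
the linear-model slope quoted above:

    $$ 350{,}000\ \text{ppb} \times \frac{7}{24} \times \frac{5}{7} \;\approx\; 72{,}900\ \text{ppb (continuous-equivalent)}, $$
    $$ 0.071 \times 10^{-6}\ (\text{person-yr})^{-1}\text{ppb}^{-1} \times 72{,}900\ \text{ppb} \;\approx\; 0.0052\ \text{per person-year.} $$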

    The extrapolation of the animal dose-
response relationship to a concentration of
17 ppb (the average concentration around the
uncontrolled plants) yields the following
predicted number of cases in the 4.6 million
people living within 5 miles of the plants.
For details, see Appendix B.

                     Cases Per Year of Exposure
   Type of Effect        Linear Model    Log-Probit Model
   All cancer                 11           0.1 - 1.0
   Liver angiosarcoma          5.5         0.05 - 0.5
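
     As an arithmetic check (the paper's exact rounding may differ slightly),
the linear-model entries follow from the slope quoted above, in cases per
million people per year per ppb, times 17 ppb, times 4.6 million people:

    $$ 0.071 \times 17 \times 4.6 \;\approx\; 5.5 \ \text{liver angiosarcoma cases per year,} $$
    $$ 0.15 \times 17 \times 4.6 \;\approx\; 11.7 \;\approx\; 11 \ \text{cancer cases (all sites) per year.} $$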

    This is the expected number of cases pro-
jected to be caused per year at current
levels of emissions; the people exposed now
will not be diagnosed for another 15-20 years.
Similarly, any cases observed now would have
been caused by exposure 15-20 years ago (if
in fact caused by vinyl chloride) when pro-
duction was about 10% of current levels.

    In order to arrive at a final estimate of
the number of people adversely affected by
vinyl chloride emissions, the important re-
sults of this analysis to consider are as
follows:  1) the number of cancers at all
sites caused by vinyl chloride is twice the
number of liver angiosarcomas; 2) the number
of people with severe liver damage is 30
times the number of liver angiosarcomas; 3)
the animal model predicts that the number of
liver angiosarcomas in the population around
plants is 5.5 cases per year of exposure;
4) the number of cases calculated from the
human data is 60% of the number predicted from
the animal model; 5) the use of a log-probit
model for extrapolation to low doses gives
predictions of 0.1  to 0.01 times the number
                                              738

-------
predicted by the linear model; 6) the error in
the estimate of 5.5 cases per year ranges from
+55% to -10%.  This error includes statistical
uncertainty in estimating the dose-response,
uncertainty in ambient concentration estimates,
and errors resulting from not considering ex-
posures beyond 5 miles or decomposition of
vinyl chloride in the atmosphere.  It is not
symmetrical because it includes possible ef-
fects beyond 5 miles from the plants, which
were not explicitly considered in the analysis.
It does not account for our uncertainty about
the appropriateness of using a linear model
extrapolated to zero dose or of extrapolation
from animal data; 7) the quantifiable error in
the rate calculated from the human data is
about ± 67%.  This includes uncertainties in
the 17-year average dose received by the
workers, uncertainty in the number of hours
per day of actual high exposure, and uncer-
tainty in the fraction of highly-exposed
workers who have been diagnosed with liver
angiosarcoma.  Other errors cannot be quanti-
fied, and are discussed in Appendix D.

                 Conclusions

     When all these uncertainties are consid-
ered, our judgment is that the number of liver
angiosarcoma cases produced per year of expo-
sure in people residing near vinyl chloride
plants is somewhere between less than 1 and 10
cases.  The cases produced by this year's ex-
posure will not be diagnosed until 15 to 20
years from now.  If the EPA regulations are
implemented, the number of cases is expected
to be reduced in proportion to the reduction
in the ambient annual average concentration,
which is expected to be 5% of the uncontrolled
level.

     The vinyl chloride exposure around plants
is also producing somewhere between less than
1 and 10 cases of primary cancer at other
sites, mainly lung, brain, and bone.  Assuming
no threshold for liver damage, somewhere be-
tween less than 1 and 300 cases of serious
liver damage would be predicted.  The number
of liver damage cases is likely to be less
than this because a liver damage threshold at
low dose probably exists.

     In order to find out whether people liv-
ing near VC-PVC plants have, as of 1974, had
higher rates of liver angiosarcoma diagnosis
than the overall U.S. population, a search of
the residence records of all known liver angio-
sarcoma cases in the last 10 years was per-
formed using data collected by the Center for
Disease Control.  Out of 176 cases where resi-
dence at time of death was known, 3 people
lived within 5 miles of a plant.  Unfortun-
ately, the diagnosis of these cases has not
yet been confirmed by the National Cancer In-
stitute.  In addition one infant whose parents
lived within 1 mile of a plant died of a rela-
tively common liver tumor.  It was shown in
Appendix E that this rate of occurrence is not
higher than the national average.  However, the
survey is too incomplete to draw any conclu-
sions at the current time.

     Considering the results of the foregoing
analysis, one would only now expect to be see-
ing some evidence of vinyl chloride exposure.
If the highest rate in our range were actually
occurring, 10 cases of liver angiosarcoma per
year of exposure would be developing; 15-20
years ago when the vinyl chloride production
was about 10% of current levels, one case
would be expected per year of exposure (with
constant population).   This is to be compared
to a background rate of 0.6 cases per year ex-
pected in the population around the plant.

     The survey of liver angiosarcoma cases
would probably detect the existence of 10 cases
over the 10-year period.  Since this was not
observed we can conclude that the real in-
cidence is not significantly greater than the
predicted upper limit of 10 cases initiated per
year of exposure unless migration of people in
and out of the regions around plants has been
excessive.  If the lower rates in the range of
the above analysis were to be true, increased
incidence of angiosarcoma would not be ob-
servable.

                 References

1.   Landau, E., "Population Residing Near Plants
       Producing Polyvinyl Chloride," American Public
       Health Association, Contract Report, EPA,
       August 1975.

2.   Teknekron, Inc., unpublished report, 1975.

3.   Tabershaw, I.R. and Gaffey, W.R., "Mortality
       Studies of Workers in the Manufacture of Vinyl
       Chloride and Its Polymers," J. Occupational
       Medicine 16, 509-518, 1974.

4.   Wagoner, J.K., Testimony in Vinyl Chloride,
       Hearings before the Subcommittee on Commerce,
       93rd Cong., 2nd Sess. (Serial No. 93-110),
       August 21, 1974, p. 59.

5.   Nicholson, W.J.; Hammond, E.C.; Seidman, H.;
       Selikoff, I.J.; "Mortality Experience of a Cohort
       of Vinyl Chloride-Polyvinyl Chloride Workers,"
       Ann. N.Y. Acad. Sciences, 246, 225-230, 1975.

6.   Heath, C.W. and Falk, H., "Characteristics of
       Cases of Angiosarcoma of the Liver Among Vinyl
       Chloride Workers in the United States," Ann.
       N.Y. Acad. Sciences, 246, 231-236, 1975.

7.   Marsteller, H.J.; Lelbach, W.K.; Muller, R.;
       Gedigk, P.; "Unusual Splenomegalic Liver Disease
       as Evidenced by Peritoneoscopy and Guided Liver
       Biopsy Among Polyvinyl Chloride Production
       Workers," Ann. N.Y. Acad. Sciences, 246, 95-134,
       1975.

8.   Veltman, G.; Lange, C.E.; Juhe, S.; Stein, G.;
       and Bachner, U.; "Clinical Manifestations and
       Course of Vinyl Chloride Disease," Ann. N.Y.
       Acad. Sciences, 246, 6-17, 1975.

9.   Creech, J.L. and Makk, L., "Liver Disease Among
       Polyvinyl Chloride Workers," Ann. N.Y. Acad.
       Sciences, 246, 88-94, 1975.
                                              739

-------
                                   NEW MODELS FOR OPTIMAL SEWER SYSTEM DESIGN
                                                  Ben Chie Yen
                                              Harry G. Wenzel, Jr.
                                                  Larry W. Mays
                                                  Wilson H. Tang
                                         Department of Civil Engineering
                                    University of Illinois at Urbana-Champaign
                                             Urbana, Illinois  61801
                        SUMMARY

Three new models have been developed for the least-cost
design of storm sewers.   All three models consider the
sewers as a system.   The basic model designs the crown
elevations, slopes,  and diameters of the sewers.  The
sewer system layout  is predetermined.   Routing is
accomplished by lagging the hydrographs by a travel
time.  Optimization  is achieved through a discrete dif-
ferential dynamic programming technique to produce the
least-cost design of the system based on specified cost
functions for installation of the sewers and manholes.
The second model is  an expansion of the basic model
incorporating risk-based damage costs in the design
procedure, and the risks for each sewer associated with
the least-cost design are also given as part of the
design results.  The third model is similar to the
basic model except that the least-cost sewer layout is
also a part of the design result instead of being
predetermined.

                      INTRODUCTION

Urban storm sewer simulation models can be classified
into two basic categories.  The majority are flow
simulation models for existing systems.  They are
useful for urban storm runoff management, operation and
pollution control purposes by providing information
useful for flow regulation.  Many of these models have
been mislabeled as "design" models whereas in fact they
produce nothing more than runoff hydrographs that may
be used for design.   Evaluations of the important flow
simulation models have been reported by Chow and
Yen4, Brandstetter3, James F. MacLaren, Ltd.5, and
others.
The second category are design models for determination
of the size, and perhaps also slopes and layout of the
sewers.  There are only a few design models in
existence.  A comparative study of hydraulic design
models was reported by Yen and Sevuk10.  These models
determine the sewer sizes with predetermined sewer
slopes and layout.  Recently a number of optimization
models for the least-cost design of sewer systems have
been proposed and a review has been reported by the
authors12.  Most of these models offer a limited de-
gree of optimization in determining the sizes and
slopes of sewers using linear programming or dynamic
programming.

In this paper three new sewer design models are re-
ported.  An optimization procedure is incorporated into
each of the models to determine the least-cost design
for the entire sewer system.  The first model  employs
a simple hydrograph shift routing scheme and determines
the crown elevations, slopes, and diameters of the
sewers.  The second model is based on the first; how-
ever the uncertainties and risks are considered in the
design procedure.  The third model is similar to the
first in its scope and extent except that it also
determines the layout of the sewer system.  In the
                                     following the constraints, assumptions  and basic  opti-
                                     mization techniques adopted in all  the  three models are
                                     first discussed.  The three design  models developed and
                                     listed in Table 1 are then described briefly.  Finally
                                     an example is presented to illustrate the advantages of
                                     the new design models over the traditional design
                                     methods.

                                                   CONSTRAINTS AND ASSUMPTIONS
The following constraints commonly used in sewer designs
are adopted in this study:
   (a)  Free-surface flow exists for the design dis-
        charges or hydrographs, i.e., the sewer system
        is "gravity flow" so that pumping stations and
        pressurized sewers are not considered.
   (b)  The sewers are commercially available circular
        sizes no smaller than 8 in. in diameter.  The
        pipe sizes in inches are 8, 10, 12, from 15 to
        30 with a 3 in. increment and from 36 to 120
        with an increment of 6 in.
   (c)  The design diameter is the smallest commercial-
        ly available pipe that has flow capacity equal
        to or greater than the design discharge and
        satisfies all the appropriate constraints.
   (d)  Storm sewers must be placed at a depth that
        will not be susceptible to frost, will drain
        basements, and will allow sufficient cushioning
        to prevent breakage due to ground surface
        loading.  Therefore, minimum cover depths must
        be specified.
   (e)  The sewers are joined at junctions such that
        the crown elevation of the upstream sewer is no
        lower than that of the downstream sewer.
   (f)  To prevent or reduce permanent deposition in
        the sewers, a minimum permissible flow velocity
        at design discharge or at barely full-pipe
        gravity flow is specified.  A minimum full-
        conduit flow velocity of 2 fps is required or
        recommended by most health departments and is
        adopted in this study.
   (g)  To prevent occurrence of scour and other un-
        desirable effects of high velocity flow, a
        maximum permissible flow velocity is also
        specified.  The most commonly used value is 10
        fps and is adopted here.
   (h)  At any junction or manhole the downstream sewer
        cannot be smaller than any of the upstream
        sewers at that junction.
                                     Furthermore, the following additional  assumptions are
                                     made:
                                        (a)  The sewer system is a dendritic network  con-
                                             verging towards downstream without  closed
                                             loops.
   (b)  The sewer system consists of junctions or man-
                                             holes  (nodes) joined by sewers  (links).  For
                                             the sake of simplicity and to  demonstrate the
                                             models, other facilities  such  as weirs,  regu-
                                             lators, interceptors, etc. are not  considered.
                                                       740

-------
                             TABLE 1.  Illinois Least-Cost Sewer System Design Models

 Model    Design                  Optimization    Hydraulics               Considering   Input
                                  Technique                                Risks

 ILSD-1   Sewer diam, crown       DDDP            Hydrograph time lag      No            Sewer layout, ground eleva-
          elevations, man-                        and Manning's formula                  tions, min soil cover,
          hole depths                                                                    acceptable max and min
                                                                                         velocities, cost functions,
                                                                                         time and space increments
                                                                                         for routing computations,
                                                                                         optimization parameters

 ILSD-2   Sewer diam, crown       DDDP            Hydrograph time lag      Yes           Same as above; in addition,
          elevations, man-                        and Manning's formula                  design service life, risk-
          hole depths                                                                    safety factor relationship

 ILSD-3   Sewer layout,           DDDP and        Hydrograph time lag      No            Manhole locations, ground
          sewer diam, crown       set par-        and Manning's formula                  elevations, min soil cover,
          elevations, man-        titioning                                              acceptable max and min
          hole depths                                                                    velocities, cost functions,
                                                                                         time and space increments
                                                                                         for routing computations,
                                                                                         optimization parameters
   (c)  No negative slope is allowed for any sewers in
        the dendritic network.
   (d)  The direction of the flow in a sewer is
        uniquely determined from topographic consider-
        ations.
   (e)  The design inflows into the sewer system are
        the inlet hydrographs.
   (f)  A set of simple cost functions proposed by Alan
        M. Voorhees & Assoc.1,8 are adopted for il-
        lustrative purposes.
              OPTIMIZATION  TECHNIQUES

Isonodal Line Representation of  Manholes

For all the  three  design  models  discussed in this
paper, the locations  of the  manholes must be predeter-
mined and are input data for the design.  Imaginary
lines called isonodal lines  (INL)  are used to divide
the dendritic sewer system into  stages.   An INL of a
given stage  passes through all the nodes (manholes)
which are separated from  the sewer system outlet by the
same number  of links  (sewers).   For the  purpose of
optimization a stage  n includes  all the  sewers
connecting upstream manholes on  INL n and downstream
manhole on INL n+1.   As an example, the  INL's for an
example system used in ASCE Manual 37 are shown in Fig.
1.  When  the sewer layout is specified,  the links be-
tween the manholes for different stages  are known.  If
the layout is also to be  designed, all the feasible
manhole connections should be considered.
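
A small Python sketch of this INL bookkeeping on a hypothetical dendritic
network (manhole labels invented) is given below: it counts the links
separating each manhole from the outlet and numbers the isonodal lines so that
the outlet falls on the highest-numbered line, as in the example of Table 2.

from collections import deque

# upstream manhole -> downstream manhole (a tree converging on the outlet "O")
downstream = {"A": "C", "B": "C", "C": "E", "D": "E", "E": "O"}

def isonodal_index(downstream, outlet="O"):
    children = {}                                   # downstream -> upstream list
    for up, dn in downstream.items():
        children.setdefault(dn, []).append(up)
    links = {outlet: 0}                             # links separating node from outlet
    queue = deque([outlet])
    while queue:                                    # breadth-first search from outlet
        node = queue.popleft()
        for up in children.get(node, []):
            links[up] = links[node] + 1
            queue.append(up)
    deepest = max(links.values())
    # INL 1 is farthest upstream; the outlet gets the highest INL number.
    return {node: deepest - d + 1 for node, d in links.items()}

print(isonodal_index(downstream))
# {'O': 4, 'E': 3, 'C': 2, 'D': 2, 'A': 1, 'B': 1}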

Discrete Differential Dynamic Programming (DDDP)

For each possible  connection of  manholes there are
many possible sewer slopes and corresponding diameters
which could  carry  the design discharge,  although only
one of these gives the least-cost system.  However,
the slope is equal to the difference of  crown eleva-
tions between the  ends of the sewer divided by its
length, and  the  diameter  can be  determined from the
slope and discharge.   Hence, the crown elevation at
each end  of  the  sewer is  chosen  as the optimization
variable.  The objective  is  to select the set of up-
stream and downstream crown  elevations,  among the many
possible  crown elevations (states) as shown in Fig. 2,
that gives the least-cost sewer  system.   Although
standard dynamic programming could be used as the
search technique, DDDP has been found far superior for
such optimization  problems and therefore is adopted.
DDDP is an iterative technique for which a trial set of
crown elevations for the entire system  (called the
initial trial trajectory) is first selected together
with a range of crown elevations (called corridors)
within the state-stage domain (feasible crown eleva-
tions).  The recursive equation of DP is then used
within a corridor to search for an improved trajectory
within the corridor.  Subsequently, the improved tra-
jectory is used to set up the new corridor for the next
iteration.  This procedure is repeated until a least-
cost design is obtained within an acceptable cost
error.  Details of DDDP applied to sewer design have
been presented elsewhere6,7,8,12 and hence not repeated
here.
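
The sketch below illustrates only the corridor-iteration idea behind DDDP,
applied to a hypothetical serial line of manholes; the cost function, ground
elevations, and discharges are invented, and the Illinois programs handle full
dendritic networks with detailed cost functions.

import math

GROUND = [98.0, 95.0, 92.0, 90.0, 88.0]   # ground elevation (ft) at manholes 0..4
Q_PEAK = [2.0, 5.0, 8.0, 12.0]            # design peak discharge (cfs) in sewer i
LENGTH = [400.0, 400.0, 400.0, 400.0]     # sewer length (ft)
N_MANNING = 0.013
MIN_COVER = 3.5                           # minimum soil cover over the crown (ft)

def req_diameter(q, slope):
    """Diameter (ft) whose just-full Manning capacity equals q, cf. Eq. (1)."""
    return (N_MANNING * q / (0.463 * math.sqrt(slope))) ** 0.375

def sewer_cost(d, depth_up, depth_dn, length):
    """Hypothetical installation cost, increasing with diameter and trench depth."""
    return length * (10.0 * d + 1.5 * 0.5 * (depth_up + depth_dn))

def dp_within_corridor(trial, half_width, delta):
    """One DP sweep restricted to a corridor of crown elevations (lattice
    points) centered on the current trial trajectory -- the essence of DDDP."""
    states = []
    for i, c in enumerate(trial):
        lattice = [c + k * delta for k in range(-half_width, half_width + 1)]
        states.append([e for e in lattice if e <= GROUND[i] - MIN_COVER])
    best = {e: (0.0, [e]) for e in states[0]}              # cost-to-here, path
    for i in range(len(LENGTH)):                           # stages = sewers
        nxt = {}
        for e_dn in states[i + 1]:
            for e_up, (cost, path) in best.items():
                if e_up <= e_dn:                           # require a positive slope
                    continue
                slope = (e_up - e_dn) / LENGTH[i]
                d = req_diameter(Q_PEAK[i], slope)
                total = cost + sewer_cost(d, GROUND[i] - e_up,
                                          GROUND[i + 1] - e_dn, LENGTH[i])
                if e_dn not in nxt or total < nxt[e_dn][0]:
                    nxt[e_dn] = (total, path + [e_dn])
        best = nxt
    return min(best.values())                              # least-cost (cost, trajectory)

trajectory = [g - MIN_COVER for g in GROUND]    # initial trial trajectory (min cover)
delta = 1.0
for _ in range(6):                              # iterate, re-centering and shrinking
    cost, trajectory = dp_within_corridor(trajectory, half_width=2, delta=delta)
    delta *= 0.5
print("crown elevations (ft):", [round(e, 2) for e in trajectory])
print("cost (hypothetical units):", round(cost))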
                                                                         MODEL ILSD-1

                                                   The Illinois Least-Cost Sewer System Design Model 1
                                                   (Model ILSD-1)  is the simplest among the three models
                                                    introduced in this paper.  In this model the design
                                                    involves the determination of the crown elevations, and
                                                   consequently the slope, diameter of the sewers, and the
                                                   depth of the manholes.  The sewer system layout is pre-
                                                   determined and  serves as input into the model.  Risks
                                                   are not considered in the  design.  DDDP is applied to
                                                   select the least-cost sewer system.  The sewer dia-
meter, d in ft, is computed by using Manning's formula
assuming just-full gravity flow

    $$ d = \left[ \frac{n\,Q_p}{0.463\,S_o^{1/2}} \right]^{3/8} \qquad (1) $$

in which n is Manning's roughness factor; S_o is the
sewer slope; and Q_p is the peak discharge in cfs of the
sewer inflow hydrograph.  The sewer outflow hydrograph
is obtained through lagging the inflow hydrograph by a
travel time, t_t, computed as

    $$ t_t = L/V \qquad (2) $$

in which L is the sewer length and V is a velocity com-
puted by

    $$ V = \frac{4\,Q_p}{\pi\,d^2} \qquad (3) $$

The manhole junction condition is described by the
principle of mass conservation

    $$ Q_{out} = \sum Q_{in} + Q_j \qquad (4) $$

in which Q_in is the discharge of the inflowing sewers
into the manhole; Q_j is the surface inflow at the
manhole; and Q_out is the outflow from the manhole into
the downstream sewer.
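
A brief Python sketch of the hydraulic step in Eqs. (1)-(3) follows: size the
sewer for a peak discharge, round up to the commercial diameters of constraint
(b), and lag the hydrograph by t_t = L/V.  The numerical inputs are
illustrative only.

import math

COMMERCIAL_IN = [8, 10, 12] + list(range(15, 33, 3)) + list(range(36, 126, 6))

def design_diameter(q_peak, slope, n=0.013, d_min_in=12):
    """Smallest commercial diameter (ft) whose just-full capacity carries q_peak (cfs)."""
    d_req = (n * q_peak / (0.463 * math.sqrt(slope))) ** (3.0 / 8.0)   # Eq. (1), ft
    for d_in in COMMERCIAL_IN:
        if d_in >= d_min_in and d_in / 12.0 >= d_req:
            return d_in / 12.0
    raise ValueError("discharge exceeds largest commercial pipe")

def travel_time(q_peak, d, length):
    """Hydrograph lag t_t = L/V with V = 4*Q_p/(pi*d^2), Eqs. (2)-(3)."""
    v = 4.0 * q_peak / (math.pi * d ** 2)
    return length / v

d = design_diameter(q_peak=6.6, slope=0.004)        # -> 1.5 ft (18 in.)
print(d * 12, "in.,", round(travel_time(6.6, d, 400.0), 1), "s lag")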
                                                       741

-------
Fig. 1.  Example sewer system:  (a) street system layout;  (b) isonodal lines.

Yen and Sevuk10 have shown that
                                                          this simple routing method produces  hydraulic sewer de-
                                                          signs very similar to  those obtained by using more
                                                          sophisticated routing  methods.   Therefore, this hydro-
                                                          graph time lag  method  is  adopted for all three models
                                                          because of its  simplicity and relatively small computer
                                                          requirements.   The flow charts and computer program
                                                          listing for Model  ILSD-1  can be found in Yen et al.12

                                                                 MODEL ILSD-2

                                                          Model ILSD-2  is  the Illinois Least-Cost Sewer System
                                                          Design Model with  risk considerations.   Risks are
                                                          considered in the  design  through the use of a set  of
                                                          risk-safety factor curves for the drainage basin con-
                                                          sidered.  The development of the risk-safety factor
curves for a basin has been described elsewhere.9,11,12

                                                          In the least-cost  design  considering risks,  the cost
                                                          consists  of the  sum of the installation cost of sewers
                                                          and manholes  and the expected damage cost during the
                                                          service period of  the  sewer.   The latter cost is
                                                          evaluated as  the product  of the assessed damage value
                                                          in the event of  a  flood exceeding the sewer capacity,
Q_c, and the risk, i.e., the probability of occurrence
of this event during the service period of the sewer.
To evaluate the risk, the safety factor is first com-
puted by SF = Q_c/Q_p, where

    $$ Q_c = \frac{0.463}{n}\, d^{8/3}\, S_o^{1/2} \qquad (5) $$
With the value of SF known, the corresponding risk can
be obtained from the risk-safety factor curve cor-
responding to the service life of the sewer.
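
A small sketch of this risk-based costing idea is given below, with a purely
hypothetical risk-safety-factor curve standing in for the basin curves cited
above; the $10,000 assessed damage value matches the one assumed in the design
example later in the paper.

import bisect

SF_POINTS   = [1.0, 1.5, 2.0, 3.0, 4.0]       # safety factor SF = Qc / Qp
RISK_POINTS = [0.60, 0.30, 0.12, 0.03, 0.01]  # hypothetical exceedance risk, 5-yr life

def risk_from_sf(sf):
    """Piecewise-linear interpolation on the (hypothetical) risk-SF curve."""
    if sf <= SF_POINTS[0]:
        return RISK_POINTS[0]
    if sf >= SF_POINTS[-1]:
        return RISK_POINTS[-1]
    j = bisect.bisect_left(SF_POINTS, sf)
    w = (sf - SF_POINTS[j - 1]) / (SF_POINTS[j] - SF_POINTS[j - 1])
    return RISK_POINTS[j - 1] + w * (RISK_POINTS[j] - RISK_POINTS[j - 1])

def expected_total_cost(install_cost, q_capacity, q_peak, damage=10_000.0):
    """Total cost = installation cost + risk x assessed damage value."""
    risk = risk_from_sf(q_capacity / q_peak)
    return install_cost + risk * damage, risk

total, risk = expected_total_cost(install_cost=6_000.0, q_capacity=9.9, q_peak=6.6)
print(f"risk = {risk:.2f}, expected total cost = ${total:,.0f}")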

Model ILSD-2 is essentially Model ILSD-1 with the
addition of risk considerations.  The same hydraulic
method is used and the sewer system layout is pre-
determined.  Details and flow charts for Model ILSD-2
                          12
can be found in Yen et al.               ,

                      MODEL ILSD-3

Model ILSD-3 is the Illinois Model for Least-Cost
Sewer System Design including Layout.  This is a
screening model based upon DDDP.  In addition, the
model consists of a scheme (using set-partitioning) to
select the least-cost connection of manholes.  The
sewer system input includes the locations of the man-
holes and ground elevations and the design gives the
sewer layout in addition to the crown elevations,
slope, and diameter of the sewers and depth of manholes
as for the previous two models.  The same hydraulic
method is used as in the previous two models.  Risks
are not considered in the design.  Details on the opti-
mization technique have been presented by Mays.

                     DESIGN EXAMPLE

Many sewer engineers are familiar with the example
                                   2
sewer system used in ASCE Manual 37  demonstrating the
design of sewer diameters for a given network layout
and slopes using the traditional rational method.
Hence this sewer system (Fig. 1a) is adopted here as an
example to illustrate the advantages of the proposed
least-cost design models over the traditional hydraulic
design methods.

In ASCE Manual 37 the inflow data were given only as
peak flows obtained by using the rational formula.
These peak flow data are converted into manhole  inflow
                                                       742

-------
  Fig. 2.  Drops in Crown Elevations Between Manholes
hydrographs.  The inflow hydrographs  are  assumed  to be
symmetric and triangular in  shape with  a  base  time of
40 min starting at  the same  initial rise  time  and with
a constant base flow of 0.10 cfs.  The  peak  discharges
are listed in Table 2.  The  minimum sewer size used is
12 in. and Manning's n is  0.013  for all the  sewers.
The minimum soil cover depth is  3.5 ft.

         TABLE 2.  Design  Example Input Data


Isonodal
Line

1
2

3


4

5

6
7


Manhole
Number

1
1
2
1
2
3
1
2
1
2
1
1


Ground
Elev.
ft
98.4
94.9
96.2
91.8
92.3
94.6
89.7
92.7
89.5
91.6
88.5
88.0
Down-
stream
Manhole
Number

1
1
2
1
1
2
1
1
1
1
1



Sewer
Length
ft
400
400
400
400
400
400
400
400
400
400
125


Peak
Inflow
QD
cfS
2.0
3.1
4.7
6.6
1.2
1.5
10.1
1.0
5.0
2.0
7.1

Models ILSD-1 and ILSD-2 are applied to the example
system and the resulting sewer diameters, slopes, and
crown elevations of the least-cost designs are
summarized in Table 3.  In applying Model ILSD-2, a
5-yr risk-safety factor curve developed for Urbana,
Illinois is assumed applicable and the assessed damage
value is assumed to be $10,000 for each of the sewers.
The traditional rational method design given in ASCE
Manual 37 is also summarized in Table 3 for comparison.
The risks associated with these designs over a 5-yr
period are listed in Table 3 and the costs are given in
Table 4 for comparison.  For the ILSD-1 and ASCE de-
signs, the risks are evaluated by using the same 5-yr
risk-safety factor curve employed in Model ILSD-2.
The safety factor for a sewer is computed by
SF = Q_c/Q_p with Q_c given by Eq. 5.  Accordingly the
risk for the sewer associated with the design can be
determined.  The expected damage cost for each sewer is
                         TABLE 3.  Designs of Example Sewer System

 Upstream   Upstream     Crown Elevations, ft        Sewer      Sewer
 Isonodal   Manhole     Upstream    Downstream       Slope      Diameter,   Risk
 Line                                                           in.

 Design Using Model ILSD-1
    6          1          83.75       83.00         0.00600       36        0.283
    5          1          84.69       83.75         0.00234       36        0.592
    5          2          88.10       85.00         0.00775       12        0.077
    4          1          86.20       84.69         0.00378       30        0.610
    4          2          89.20       86.00         0.00800       12        0.142
    3          1          88.30       86.20         0.00525       21        0.554
    3          2          88.80       86.20         0.00650       18        0.125
    3          3          91.10       89.20         0.00475       12        0.051
    2          1          91.40       88.30         0.00775       15        0.416
    2          2          92.70       88.80         0.00975       15        0.217
    1          1          94.90       91.40         0.00875       12        0.051
                                                               average      0.283

 Design Using Model ILSD-2
    6          1          84.13       83.00         0.00900       42        0.002
    5          1          85.44       84.19         0.00312       42        0.036
    5          2          88.10       84.13         0.00994       12        0.032
    4          1          86.20       85.44         0.00191       42        0.046
    4          2          89.20       85.44         0.00941       12        0.086
    3          1          88.30       86.20         0.00525       24        0.113
    3          2          88.80       86.20         0.00650       18        0.115
    3          3          91.10       89.20         0.00475       12        0.051
    2          1          91.40       88.30         0.00775       18        0.020
    2          2          92.70       88.80         0.00975       18        0.007
    1          1          94.90       91.40         0.00875       12        0.050
                                                               average      0.051

 Design Given in ASCE Manual 37
    6          1          83.55       83.05         0.0040        36        0.670
    5          1          85.15       83.55         0.0040        36        0.453
    5          2          85.15       81.55         0.0090        12        0.066
    4          1          86.25       85.05         0.0040        30        0.685
    4          2          86.75       83.15         0.0090        12        0.249
    3          1          87.90       90.30         0.0060        21        0.572
    3          2          88.05       86.55         0.0070        18        0.145
    3          3          89.55       86.75         0.0070        12        0.020
    2          1          91.00       87.40         0.0090        15        0.451
    2          2          91.80       87.80         0.0100        15        0.228
    1          1          94.35       90.75         0.0090        12        0.066
                                                               average      0.328
evaluated as the product of the risk and the assessed
damage value.  The expected damage cost computed for
the entire system as well as the installation and total
costs for the ILSD-1, ILSD-2, and ASCE designs are
given in Table 4.

As seen from Table 4, applying optimization indeed pro-
duces designs with lower costs, and Table 3 further
shows that the risk of failure is also reduced, e.g.,
from 0.328 for the ASCE design to 0.283 for Model ILSD-1.
The installation cost of the ILSD-1 design is lower
than that of the ASCE design.  In fact, merely adding the
peak discharges successively in the network gives the
same design discharges as for the ASCE design, and a
DDDP optimization design for these discharges12 reduces
the installation cost from $70,087 to $69,062.
The superiority of Model ILSD-2 is clearly demonstrated
in Table 4.  The total cost of this design is 25% lower
than that of the ASCE design for a 5-yr service period,
and the savings will be considerably more for a longer
service period.  In order to offset the expected damage
costs due to flooding, the sewer sizes are larger than
those for the ILSD-1 and ASCE designs (Table 3), pro-
viding a better trade-off between installation and
damage costs to give a minimum total cost.  With larger
sewer sizes for the ILSD-2 design, the associated risks
are reduced considerably as shown in Table 3.
                                                       743

-------
                TABLE 4.  Cost Comparison for Example Designs

                              Cost in Dollars
   Model        Installation      Damage         Total
   ASCE            70,087        (36,037)      (106,124)
   ILSD-1          67,001        (31,183)       (98,184)
   ILSD-2          76,155          5,602         81,757

                      CONCLUSIONS

 Considerable savings in sewer designs can be achieved
 by considering the sewers as a system and searching for
 the least-cost design using optimization techniques.
 Considering the uncertainties and risks in the design
 through  evaluation  of  risk-based expected damage  costs
 provides further improvement.  In  this paper  three
 such least-cost design models are  briefly described.
 Crown elevations and slopes of sewers  in addition to
 their diameters are all  determined in  the design
 procedure.  In  addition, the least-cost sewer layout
 can also be determined if desirable by using  the
 appropriate model.

 ACKNOWLEDGMENT

 This paper is a product  of the research project,
 "Advanced Methodologies  for Design of Storm Sewer
 Systems," sponsored by the Office  of Water  Research and
 Technology, USDI, under  Agreement  No. 14-31-0001-9023,
 Project  No. C-4123.

                     REFERENCES

 1.   Alan M. Voorhees & Associates, Inc., "Sewer System
 Cost Estimation Model,"  Report to  the Baltimore,  Md.
 Regional Planning Council, McLean, Va., (available as
 PB  183981, from NTIS,  Dept. of Comm., Springfield, Va.)
 Apr.  1969.

 2.   American  Society of  Civil Engineers, and Water
 Pollution Control Federation, "Design and Construction
 of  Sanitary and Storm  Sewers," ASCE Manual No. 37,
 New York, 1969.

 3.   Brandstetter, A.,  "Comparative Analysis of Urban
 Stormwater Models," Battelle Pacific Northwest Lab-
 oratories, Richland, Wash., Aug. 1974.

 4.   Chow, V. T., and B.  C. Yen, "Urban Stormwater Run-
 off-Determination of Volumes and Flowrates," Environ-
 mental Protection Technology Series. National Environ-
 mental Research Center, US EPA, 1975.

 5.   James F. MacLaren, Ltd., "Review of Canadian  Storm
 Sewer Design Practice  and Comparison of Urban Hydro-
 logic Models,"  Unpublished Report  to Canadian Center
 for  Inland Waters, Burlington, Ont., 1973.

 6.   Mays, L. W., "Optimal Layout and Design of Storm
 Sewer Systems,"  Ph.D.  Thesis, Dept. of Civil Eng.,
 Univ. of  Illinois at Urbana-Champaign, 111., 1976.

 7.   Mays, L. W., and H. G.  Wenzel,  "A Serial DDDP
 Approach  for Optimal Design of Multi-level Branching
 Storm Sewer Systems,"  to be published in Water
 Resources Research, April 1977.

 8.  Mays, L. W., and B. C. Yen, "Optimal Cost Design of
 Branched Sewer Systems," Water Resources Research,
 Vol. 11, No. 1, pp. 37-47, February 1975.

 9.  Tang, W. H., and B. C. Yen, "Hydrologic and
 Hydraulic Design Under Uncertainties," Proceedings,
 International Symposium on Uncertainties in Hydrologic
 and Water Resource Systems, Vol. 2, pp. 868-882, Tucson,
 Ariz., December 1972.

 10.  Yen, B. C., and A. S. Sevuk, "Design of Storm
 Sewer Networks," Jour. Env. Eng. Div., ASCE, Vol. 101,
 No. EE4, pp. 535-553, Aug. 1975.

 11.  Yen, B. C., and W. H. Tang, "Risk-Safety Factor
 Relation for Storm Sewer Design," Jour. Env. Eng. Div.,
 ASCE, Vol. 102, No. EE2, April 1976.

 12.  Yen, B. C., H. G. Wenzel, Jr., L. W. Mays, and W. H.
 Tang, "Advanced Methodologies for Design of Storm
 Sewer Systems," Research Report No. 112, Water
 Resources Center, University of Illinois, Urbana, Ill.,
 March 1976.

                                                       744

-------
                               THE USE OF LITHIUM CHLORIDE
                          FOR AERATION TANK PERFORMANCE ANALYSIS

          Robert C. Ahlert, Professor, Chemical & Biochemical Engineering,
                                  Rutgers University
          Thomas J. Olenik, Assistant Professor, Civil and Environmental
                   Engineering, New Jersey Institute of Technology
             Robert Gesumaria, Graduate Student, Rutgers University

                                      Abstract
     Lithium chloride (LiCl) was used as a tracer to
analyze the flow-through performance of a mechanical
and a diffused aeration process at two different acti-
vated sludge plants.  A slug of aqueous LiCl was
dumped at the entrance of each tank, and effluent
samples were analyzed using an atomic absorption spec-
trophotometer.  These results and the corresponding
mathematical models showed that the existing facilities
were operating in an inefficient manner.  It is reason-
able to assume that this technique can be applied to
all aeration tanks with the hope of eliminating dead
space and short-circuiting.

                     Introduction

     The design of aeration tanks  for the activated
sludge process revolves around several basic design
parameters.  These parameters are: biochemical oxygen
demand (BOD) loading, detention time in the  tank,
mixed liquor suspended solids (MLSS), sludge age, etc.
All of these design values are supposed to guarantee
sufficient destruction of waste products in  order for
the sewage treatment plant (STP) to achieve  its design
removals of BOD and suspended solids.  After construct-
ion of the facilities, there is seldom any checking of
aeration tank performance unless removals are not being
met or operational problems appear.  However, it is
entirely possible that an adequately designed aeration
tank may be operating at very inefficient levels with
regard to the flow of the mixed liquor through the
tank's volume.  That is, short-circuiting, existence
of dead space or a combination of  the two may be
occurring, resulting in less than ideal tank perform-
ance.

     A check of the flow-through conditions by use of
a tracer will indicate the existing conditions.  Based
on this analysis, which is described below, it may be
possible that an existing aeration tank can accept a
higher loading in the form of flow and/or waste.
Therefore, by performing this analysis a municipality
may be able to avoid unnecessary expansion of its
aeration tank system or the plant  may be able to accept
additional flow.

           Lithium Chloride Tracer Analysis

Lithium Chloride as a Tracer
     One of the previous drawbacks of tracer analysis
of aeration tanks was the poor performance of the
tracers used.  That is, organic dyes are subject to
biological breakdown and to incorporation into the
sludge particles, resulting in inaccurate test results.
In the studies described below, it was decided to use
lithium chloride (LiCl) as the tracer for the following
reasons:
    1.  LiCl is highly soluble in small amounts of
        water.
    2.  The LiCl will not be incorporated into the
        sludge particles.
    3.  The concentration of the Li can be detected
        accurately to 0.01 milligrams per liter (mg/1)
        by an atomic absorption spectrophotometer (AAS).
    4.  LiCl is fairly inexpensive (approx. $1.20/lb.).
            Analysis of Tracer Testing

     As outlined in Himmelblau and Bischoff's (2) work
on Population-Balance Models, a vessel, whether it is
for a chemical or a biological reaction, can be describ-
ed through the use of age distribution functions.
Most chemical and biological reactors have been studied
under the assumption that their flow patterns are
either plug flow or perfectly mixed.  Plug flow can be
defined as that flow in which the fluid velocity is
uniform over the entire cross-section of the vessel
(fluid particles do not intermingle with other fluid
elements).  Perfect mixing assumes that the tank's
contents are completely homogeneous (effluent proper-
ties are identical to the tank's properties).  In
actual reactor performance, the flow patterns lie be-
tween these two extremes.  In order to describe what
is occurring within the tank and, in turn, achieve a
description of the effluent's characteristics, an age
distribution function is developed through use of a
tracer or other tracking mechanism.  Therefore, a
graph of lithium concentration versus time is developed
by sampling the effluent end of an aeration tank.
This graph can then be compared to the one shown as
Figure 1.  The bell-shaped curve in this figure is
what is expected for actual reactors.  The other curve
exhibits dead space (long tail) and some short-circuit-
ing (peak to the left of the average detention time).

     Mathematical modeling of biological reactors such
as aeration tanks and receiving waters has been
studied with great intensity over the last decade.
These models often attempt to describe the ability of
the reactor to remove BOD, etc., by obtaining a large
amount of field data and applying it to the model.
Recently, a method based on the "black-box" approach
developed by Wilson and Norman (3) has been attempted.
Very simply, this method uses a network of ideal well-
stirred tank and plug flow reactors to fit residence
time distribution data from either laboratory scale
models or field tracer studies.  Thus, this input-out-
put method allows all of the complex internal process-
es (turbulence, etc.) to be reflected directly in the
network without monitoring all of the interior and
often quite complex processes.  The network model will
not of necessity take the same physical appearance as
the natural system, but the great advantage of in-
cluding micro-mixing processes and stochastic varia-
tions outweighs the lack of direct physical
correspondence to the actual reactor.  This method of
using combined plug flow and well-stirred reactors can
be of great value when describing the partially mixed
reactors that occur in treatment plants or in the en-
vironment.  The model can be used to predict the eff-
ect that changes in a system, such as loadings, may have
on the reactor unit process or a receiving water.

     The modeling techniques that were used are those
based on Wilson's work (3) and a recent paper by
Ahlert and Hsueh (1) of the Department of Chemical and
Biochemical Engineering at Rutgers University.  Mr.
Hsueh was especially helpful in setting up the program
and analyzing the data.
     Wilson utilized the "black-box" approach and the
basic concepts of the Fourier Transform Function.  That
                                                        745

-------
is, by ignoring the complex internal mechanics of a
unit process, the problem of data collection involved
in a deterministic type model is avoided.  The Fourier
Transform Function or Laplace Transform allows trans-
fer of time domain data collected at the effluent end
of a tank into the frequency domain.  This action
yields a description of the system by algebraic re-
lationships in the frequency domain instead of com-
plicated linear differential equations in the time
domain.  This change allows the use of block diagrams
to describe the process with a model that can predict
what will happen to the system when loaded differently.

     The transfer function is defined by the equation

          Z(s) = \frac{Y(s)}{X(s)}                                          (1)

where Z(s) is the transfer function, and Y(s) and X(s) are the Laplace
transforms of the output and input data, respectively.  Therefore, once
Z(s) is known for a system, any other applied loading in terms of X(s)
can be converted to output in the frequency domain, and the characteristic
equation of the output in the time domain, y(t), can be obtained by
performing the inverse transform operations.

     The transfer function and the inverse computations are relatively
easy to perform.  This fact is applied in using Wilson's approach, as the
lithium chloride was input as a Dirac delta function.  The equations and
assumptions involved in Wilson's method are described below.

     The frequency content S(\omega) of an aperiodic function is described
by its Fourier transform

          S(\omega) = F\{f(t)\} = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt        (2)

For convenience, S(\omega) is normalized by dividing it by its value at
zero frequency:

          S(\omega) \,/\, [\text{area under } f(t)]                          (3)

Then the Fourier transfer function is given by

          Z(j\omega) = \frac{\int_0^{T_y} y(t)\, e^{-j\omega t}\, dt}
                            {\int_0^{T_x} x(t)\, e^{-j\omega t}\, dt}        (4)

where T_y and T_x are the upper limits of the integration for the output
and input functions, respectively, and x(t) and y(t) are the time domain
functions of the input and output signals, respectively.  By applying the
identity

          e^{-j\omega t} = \cos\omega t - j\,\sin\omega t                    (5)

the real, Re(\omega), and imaginary, Im(\omega), parts of the transfer
function are given as

          Re(\omega) = \frac{AC + BD}{C^2 + D^2}                             (6)

          Im(\omega) = \frac{AD - BC}{C^2 + D^2}                             (7)

where

          A = \int_0^{T_y} y(t)\,\cos(\omega t)\, dt                         (8)

          B = \int_0^{T_y} y(t)\,\sin(\omega t)\, dt                         (9)

          C = \int_0^{T_x} x(t)\,\cos(\omega t)\, dt                        (10)

          D = \int_0^{T_x} x(t)\,\sin(\omega t)\, dt                        (11)

     The model block diagram construction can start based on two ideal
chemical reactors.  The two reactors are the plug-flow tubular reactor
(PFTR), which acts as a pure time delay mechanism, and the continuous
stirred tank reactor (CSTR), which is an instantaneously mixed system
where dispersion reaches a maximum.  The PFTR and CSTR are linear in the
time domain and are transferred into the complex (s)-domain to obtain the
system transfer functions.  Linearity allows the construction of complex
networks based on these components.

     Taking the field data, Equations 5 through 11 are used to derive the
real and imaginary parts of the transfer function.  A Bode plot is used in
defining poles and zeros; from these a first estimate of the number of
CSTR components needed can be made for a trial network configuration.
The network configurations are evaluated by a least-squares procedure
using the sum of the squared vectorial deviations in the frequency domain,
as shown below in Equation 12:

          \sum_{\omega} \left[ (Re(\omega)_o - Re(\omega)_p)^2
                             + (Im(\omega)_o - Im(\omega)_p)^2 \right]      (12)

where the subscripts o and p refer to observed data and predicted values
using the model, respectively.
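
     As an illustrative numerical sketch, not part of the original analysis,
Equations 6 through 12 can be evaluated from sampled input and output tracer
curves by trapezoidal integration; the function names below are hypothetical.

    # Sketch of Eqs. 6-12: real and imaginary parts of the transfer function
    # from sampled tracer data, plus the squared-deviation measure used to
    # rate a trial network.  Time arrays are in consistent units (e.g., minutes).
    import numpy as np

    def transfer_point(t_x, x, t_y, y, w):
        """Return Re(w), Im(w) of Z(jw) from sampled input x(t) and output y(t)."""
        A = np.trapz(y * np.cos(w * t_y), t_y)   # Eq. 8
        B = np.trapz(y * np.sin(w * t_y), t_y)   # Eq. 9
        C = np.trapz(x * np.cos(w * t_x), t_x)   # Eq. 10
        D = np.trapz(x * np.sin(w * t_x), t_x)   # Eq. 11
        denom = C**2 + D**2
        return (A*C + B*D) / denom, (A*D - B*C) / denom   # Eqs. 6 and 7

    def fit_error(observed, predicted):
        """Eq. 12: sum of squared vectorial deviations in the frequency domain."""
        return sum((ro - rp)**2 + (io - ip_)**2
                   for (ro, io), (rp, ip_) in zip(observed, predicted))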


            Tests Performed and Results

     The techniques described were used at two activa-
ted sludge plants, the Madison-Chatham and Hanover
Park sewage treatment plants.

                                                          Madison-Chatham Plant
      The aeration system at the Madison-Chatham plant
 is  divided into two physically distinct tanks.   The
 first and largest tank is a diffused air system.   The
 other tank, containing mechanical aerators, was the
 one chosen to be traced using lithium chloride
(Figure 2).  The equipment employed was the following:

     1.   50 lbs. of LiCl.
      2.   2 Sigmamotor automatic samplers.
      3.   Garbage can.
      4.   10 feet of 6-inch smoke pipe to dispense the
          solution of LiCl.

      The test procedure was to dump instantly a slug
 of  aqueous LiCl solution, by use of the garbage can,
 into the influent pipe of the mechanical aeration
 system.   An instantaneous slug was necessary to approx-
 imate a Dirac-delta function to make the mathematical
                                                      746

-------
model of the system easier to produce.  Automatic
samplers (Sigmamotor Co.) were needed to obtain samples
on a continuous basis after the LiCl had been dumped.
These samplers can be set at time intervals ranging
from 1 to 60 minutes.  At the end of each sample period
the machine resets itself to another bottle, so
discrete, not combined samples are taken.  Through
use of an automatic purging device, the sample pump
and tubing are evacuated.  Two samplers were needed
because of recycling of secondary settling tank sludge
that could contain some lithium and return it to the
system.  Two samplers were located as shown in Figure
2, and were in operation for about 72 hours.  Samples
were analyzed using an atomic absorption spectrophoto-
meter (Perkin-Elmer Model No. 403).

     The data are plotted using concentration versus
time as the scales in Figure 3.  A computation of the
mass balance for lithium showed that 88 percent of the
lithium was accounted for.  Since this first run
operated at sample time intervals of 30, 40 and 60
minutes, and since it was felt that a great deal of
short-circuiting occurred in the first 30 minutes, the
first  3 hours of the experiment was repeated using 35
pounds of LiCl.  The time interval of the sampling
for  the second run was  1 minute for the first half
hour and 5 minutes to the end of the experiment
 (4 hours total).  A mass balance showed that 20 per-
cent of the lithium was accounted for in this time
period (Figure 4).
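
     A minimal sketch, assuming hypothetical flow and concentration values,
of the kind of lithium mass balance referred to above: the recovered mass is
the effluent flow rate times the area under the concentration-time curve,
compared with the lithium dosed.

    # Sketch of a tracer mass balance: fraction of dosed lithium recovered at
    # the effluent.  Flow, concentrations, and dose below are hypothetical.
    import numpy as np

    def fraction_recovered(flow_mgd, t_hr, conc_mg_per_l, dosed_li_lb):
        """Recovered mass = Q * integral of C(t) dt, as a fraction of the Li dosed."""
        q_l_per_hr = flow_mgd * 1.0e6 * 3.785 / 24.0        # MGD -> liters per hour
        recovered_mg = q_l_per_hr * np.trapz(conc_mg_per_l, t_hr)
        return recovered_mg / (dosed_li_lb * 453592.0)      # lb -> mg

    t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 24.0])           # hours after dosing
    c = np.array([0.0, 0.9, 0.6, 0.3, 0.1, 0.0])            # lithium, mg/l
    print(fraction_recovered(1.2, t, c, 50.0 * 0.164))      # approx. Li fraction of LiCl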

     The curve of concentration versus time for the
 first  run as shown in Figure 3 can be compared to
 Figure 1.  This curve shows that appreciable amounts
 of tracer still exist in the aeration tank well past
 the  average detention time of about 5.3 hours  (design
 detention time is 4.5 hours).  Also, during the first
 run  (50 pounds of LiCl) periodic samples of the dead
 spots  (see Figure 5) showed higher concentration of
 lithium when compared to the effluent suggesting these
 dead spots were isolated areas where detention
 time of  the mixed liquor is higher.  It should be
 noted that the return sludge did not contribute
 appreciable amounts of  lithium because the sludge had
 been diluted considerably by flow from the much larger
 diffused aeration tanks.

     From the above experimental results it can be
 seen that the hydraulic flow-through characteristics
 of this aeration tank cause severe short-circuiting
 and a limited amount of dead space.  It is obvious
 that this tank is not being used in an efficient
 manner.  This fact is brought out further by an ex-
 amination of the mathematical models produced  for  the
 two  runs described above.  An examination of Figures
 6 and  7  that were developed through the use of a
 computer program allows the following additional
 conclusions:

 1.       The optimum models obtained for each run and
 their  transfer functions can now be used to predict
 the  performance of the  aeration tank  for various
 loadings.

 2.       The second run is an improvement over the
 first because its data were taken at a smaller time
 interval.  From Figure 7, the small effect that
 short-circuiting (segment f1') has on the overall
 model can be seen.  The bottom half of the model
 (segment f2') is where 95 percent of the flow passes
 through, which results in a large increase in the
 residence time.  Thus, the summation of the residence
 times (T3, T4, and T5) adds up to approximately 380
 minutes (6.3 hours), which exceeds the average design
 detention time of 4.5 hours.  It is felt that the
 poor hydraulics of the system causes
this effect.

Florham Park Plant

     This plant was chosen to be tested in a similar
fashion because its aeration is accomplished by
diffusers.  A flow diagram is shown below
(Figure 8).

     The same procedure used at the Madison-Chatham
Plant was applied at this plant.  The actual results
are shown in Figures 9 and 10 with the following obser-
vations made:

1.   An actual average detention time of 8.56 hours
was observed (Figure 9) as compared to the design
detention time of 4 to 6 hours.

2.   On a mass basis, 89.5 percent of the Li was
recovered in 30 hours of sampling.

3.   Branch f1 (0.76Q) shows a large amount of dead
space in the system.  The detention time in this
branch (T1 + T3) is 12.37 hours (742.42 minutes).  This
value greatly exceeds the design detention time of
4 to 6 hours, which is computed as the ratio of tank
volume to flowrate.  Branch f2 (0.24Q) shows a
significant amount of short-circuiting in the system.
The detention time in this branch, 2.05 hours (123.12
minutes), is considerably less than the design
detention time of 4 to 6 hours cited above.

          Conclusions and Recommendations

     It appears obvious from the two studies performed
on aeration tanks used in the activated sludge
process that poor flow-through conditions cause a
gross inefficiency in the treatment plant system.
While this technique can produce a very involved and
costly analysis if the modeling is performed, a simple
plot of Li concentration versus time can give an
engineer an adequate picture of what is occurring in
the aeration tank, which is the key unit process in
the treatment plant.  This approach is not limited to the analysis
of aeration tanks.  It is hoped that this technique
will be applied to receiving water analysis along with
expanded uses in analyzing existing sewage treatment
processes.

                   Bibliography

1.   Ahlert, R.C., and Hsueh, S.F., "A Reactor Network
Model of the Passaic River," Rutgers, The State
University, 1973.

2.   Himmelblau,  D.M., and Bischoff, K.B., "Process
Analysis and Simulation: Deterministic Systems," New
York: John Wiley and Sons, 1968.

3.   Wilson, A.W., "Mathematical Modeling of Partially
Mixed Reactors," Doctoral Dissertation, McMaster
University, Hamilton, Canada, 1971.
                                                       747

-------
     FIGURE 1   IDENTIFICATION OF INEFFICIENT AERATION TANK OPERATION
                (lithium concentration versus time, hours)

     FIGURE 2   MECHANICAL AERATION SYSTEM, MADISON-CHATHAM PLANT
                (tank depths 10, 10, and 14 ft; volumes 9,000, 9,000, and
                12,600 ft3, or 67,320, 67,320, and 94,248 gallons)

     FIGURE 3   LITHIUM TRACER STUDY, FIRST RUN, MADISON-CHATHAM PLANT,
                MECHANICAL AERATION SYSTEM

     FIGURE 4   LITHIUM TRACER STUDY, SECOND RUN, MADISON-CHATHAM PLANT,
                MECHANICAL AERATION SYSTEM

     FIGURE 5   PLAN VIEW OF MECHANICAL AERATION TANK SHOWING LOCATION OF
                DEAD SPOTS
                                                                    748

-------
     FIGURE 6   MIXING MODEL FOR RUN NO. 1, MADISON-CHATHAM PLANT
                MECHANICAL AERATION SYSTEM

     FIGURE 7   MIXING MODEL FOR RUN NO. 2, MADISON-CHATHAM PLANT
                MECHANICAL AERATION TANK

     FIGURE 8   FLOW DIAGRAM OF THE FLORHAM PARK SEWAGE TREATMENT PLANT,
                FLORHAM PARK, NEW JERSEY

     FIGURE 9   LITHIUM CONCENTRATION VERSUS TIME, FLORHAM PARK SEWAGE
                TREATMENT PLANT, FLORHAM PARK, NEW JERSEY

     FIGURE 10  REACTOR NETWORK CONFIGURATION, DIFFUSED AERATION SYSTEM,
                FLORHAM PARK SEWAGE TREATMENT PLANT, FLORHAM PARK, NEW JERSEY
                                                                                                 749

-------
                                                    SWAN
                                A SEWER ANALYSIS AND MODELING SYSTEM

        Elias C. Tomas, P.E.                                  Philip C. King, P.E.
  Director of Computing Center and Associate                    Project Engineer

                     ERDMAN, ANTHONY, ASSOCIATES, CONSULTING ENGINEERS, ROCHESTER, NEW YORK
ABSTRACT

The need for a thorough  understanding  of  the  sewage
collection systems for many municipalities  has
resulted in the development of a  system of  computer
programs to analyze an existing network.  This
computer system,  called  SWAN (an  acronym  for  Sewer-
Analysis), can be employed  to  examine  the collection
network as a whole or in part(s),  thus enabling  the
investigators to see the total  character  of the  net-
work at a glance and to  make coordinated  decisions
concerning expansion and improvement.

SWAN can store an entire sewage collection  network
on a data base which can be easily modified and
employs a mathematical model to simulate  the  network
flows under various sanitary and  storm conditions
and combinations thereof.  SWAN was  originally based
on the Rational Method.   Recent modifications have
incorporated the Surface Hydrograph  Method  commonly
referred to as the Chicago  Method.

Although SWAN's primary  purpose is to  analyze exist-
ing sewer systems, it may be utilized  as  a  design
tool.  The principle of  design by iterative analysis
is facilitated immensely by SWAN's built-in feature
of recommending appropriate pipe  sizes for  upgrading
inadequate sewer reaches.  Graphic documents  produced
automatically by the computer  system may  be utilized
as final report or contract documents.

SWAN is ideally suited for  a small (IBM-1130) compu-
ter readily available to many  consulting  engineering
offices and small governmental  agencies and municipal-
ities.  Although SWAN is intended for  batch proces-
sing, it may easily be revamped for  an inter-active
environment.

The authors wish to acknowledge the efforts of
Messrs. Charles S. Hodge and Alfred J. DeYoung during
the development of the original system.

INTRODUCTION

The original sewer system in many older communities
was a storm sewer, since storm drainage was the larger
and more visible problem; it later evolved into a
combined sanitary and storm sewer.  In many other
communities, economics dictated that the sewers be built
as combined sewers.   As time  went on, community-
wide treatment facilities were built.  The  excessive
storm flows necessitated the construction of  over-
flows into any convenient water body.  It has become
necessary therefore for  municipalities to take a
very close look at their wastewater  collection
systems, especially where the  collection  systems are
combined facilities.

The result is a growing  need for  information  about
existing sewer networks  and the character of  the
storm flows that are carried in these  networks.  Too
often, a municipality's  wastewater collection system
is not known and is not inspected unless a problem
exists.  Existing sewer systems are usually the
result of a series of expansions and improvements to
the original, and often no longer adequate set of
conduits.  Urban growth has placed a burden on the
older conduits and in many situations the systems
have not been improved to handle these additional
flows.

This need to understand the sewage collection and
transport systems has necessitated the development
of numerical techniques for analysis and flow simu-
lation of complex sewer networks.  There are present-
ly several computer software packages designed to
accomplish this.

When creating a sewer modeling program, the following
parameters must be considered:

1.   Methodology of load generation
2.   Methodology of load transport
3.   Special features to be analyzed such as pumps,
     overflows, weirs, etc.
4.   Extent of network to be analyzed (how many
     pipes, manholes, overflows, or other special
     features).
5.   Ease of use by the practitioner with respect
     not only to input generation but also to output
     interpretation.
6.   Availability of and selection of hardware

SWAN is an entirely analytic system.  It does not
employ statistical  methods other than those necessary
to reduce field observed statistics.  SWAN employs
common hydraulic principles that are used by hydrau-
lic engineers in normal design and analysis such as
Manning's Equation, Hazen-Williams Equation, the
Rational  Method, the Surface Hydrograph Method
(Chicago Method), hydrograph and backwater techniques.
SWAN's use is enhanced by its ability to apply them
to a large network, thereby relieving the engineer
from tedious calculations and allowing him to concen-
trate on the major questions.

OPERATIONAL CONCEPTS

SWAN receives data and performs operations through a
series of commands.  These commands are one word
signals which transfer control to the various oper-
ations which may in turn receive data.  Commands may
be streamed together.  Each command has some terminal
device which will cause the system to seek a new
command.   All of the commands are tied together in
that they all use the various files which are devel-
oped in a specific order.  Therefore, some commands
may be prerequisite for others.

Most of the commands employ extensive error diagnos-
tics.  There are five basic groups of commands in
SWAN:
                                                      750

-------
1.   The CONTROL, GEOMETRY and EDIT commands are
     used to build the data base from which all
     other operations take their cue.
2.   The VELOCITY, GAUGING and FLOWS commands are
     used to reduce field observations and generate
     reports for system verification.
3.   The PROFILE, CONDITIONS and PLOT commands
     generate profile plots with a table of conduit
     descriptions.
4.   The LOAD, CURVE, HYDROGRAPH and BACKWATER
     commands are used to analyze and simulate the
     flows.
5.   There are three other commands:  END, STOP and
     EXIT, which are used to terminate commands or
     the entire system run.

The prerequisites of SWAN are basic in concept:

1.   There must be a problem definition: something
     to analyze.
2.   The system to be analyzed must be described
     accurately.
3.   The user has to be familiar with the actual
     network and should have an in-the-field aware-
     ness of actual  conditions.
4.   The user must have a knowledge of hydraulics
     and hydrology.

PROBLEM DEFINITION

The use of SWAN must be pointed towards a specific
situation.  The user must know what he is attempting
to show with his sewer analysis.   SWAN is not a
mysterious miracle worker.   It is a tool  to be
employed to effectively analyze a problem.  In turn,
it may be used to assist in designing a correction
for the problem.

In order to utilize  SWAN,  data must be collected and
entered into the data base.  Sewer networks have to
be described by their geometries.  This description
of the sewer geometry requires the following infor-
mation:

1.   Length of each  conduit between manholes.
2.   Shape of conduit and  dimensions.
3.   Invert elevations at  each manhole.
4.   Manning's n for each  conduit.
5.   Continuity of flow.
6.   Manhole rim elevation  if  profile plots are
     desired.

The geometry is usually available to  municipalities
in the form of design or as-built plans or previous
sewer studies.   The  definition of the sewer geome-
tries is fundamental  to  proper SWAN operation.
Should data  about existing  sewers be  questionable,
field measurements should be made to  resolve  these
questions.   SWAN can  analyze systems  containing  a
multiplicity of different basic conduit cross-
sections.  There are  presently 33 shapes  on line.

The hard geometric data which  defines a sewer must
be supplemented  by a  geometric  logic  describing  the
layout of each  sewer  component.   This logic  is des-
cribed by nodes  (manhole numbers) and incidences
(downstream  manhole to upstream manhole).   The sewer
network has  three basic elements:   the "reach",  the
"strip", and the "drainage area".  The reach, the
smallest and most descriptive  element, consists of
two manholes and  their connecting conduit.  Manholes
may be real or  "pseudo":  a real  manhole  actually
exists  in  the network; the  "pseudo" or imaginary
manhole  is a nodal point at which some feature in
the conduit changes.  The pseudo manhole  provides
the system with  the flexibility to  describe changes
in the conduit between manholes.  Thus the conduit
in each reach is uniform in shape, material and
slope.

Many reaches linked end to end would constitute a
"strip".  A strip begins at some downstream manhole
(real or pseudo) and proceeds manhole by manhole to
some upstream manhole.  This upstream point may be a
dead end manhole or it may be the limit of investiga-
tion.

Many strips tied together by common manholes would
constitute a "drainage area".  The drainage area can
have only one outlet.
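
Purely as an illustrative sketch, and not SWAN's actual data base layout,
the reach/strip/drainage-area logic described above might be represented
as follows; all field names are hypothetical.

    # Illustrative sketch of the reach / strip / drainage-area hierarchy
    # described in the text; this is not SWAN's data base format.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Reach:
        upstream_mh: str              # manhole number (real or "pseudo")
        downstream_mh: str
        length_ft: float
        shape: str                    # one of the basic conduit cross-sections
        invert_up_ft: float
        invert_dn_ft: float
        mannings_n: float

    @dataclass
    class Strip:                      # reaches linked end to end
        reaches: List[Reach] = field(default_factory=list)

    @dataclass
    class DrainageArea:               # strips tied together by common manholes
        outlet_mh: str                # a drainage area can have only one outlet
        strips: List[Strip] = field(default_factory=list)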

FIELD OBSERVATIONS AND VERIFICATION

The mathematical model of flows in a sewer network
built by SWAN is only as good as the data base on
which it is founded.  Therefore, it is advisable
that field observations be made in order to validate
the base and subsequent modeling.

Field observations should include determination of
Manning's n where necessary, storm flow gauging, dry
weather flow gauging, condition of sewer conduits
and manholes and resolution of ambiguous geometric
data.

The character of drainage areas serviced by a network
may change with time, especially in districts close
to central business areas.  Urbanization results in
a general increase in the imperviousness of a water-
shed, rendering storm or combined sewers in these
districts inadequate.  Observations of these condi-
tions are extremely important in order to validate
the computer model.

Verification of the SWAN modeling may be made by
placing flow meters capable of recording the water
surface with respect to time at control manholes.
By appropriate reduction of recorded data, actual
hydrographs for recorded storms may be obtained at
each of these test manholes.  Theoretical hydrographs
for the same points and recorded storms may be
generated by SWAN and superimposed upon the actual
hydrographs for verification.

HYDROLOGIC CONSIDERATIONS

There are presently several methods available to
determine the amount of rainfall entering a sewer
network ranging from oversimplified approximations
to highly mathematical modeling approximations.  All
of the available methodologies, regardless of their
sophistication, do depend upon empirical data whether
they are runoff coefficients, imperviousness factors,
or impoundment constants.  Of these, two have gained
widespread use in present practice, the Rational
Method and the Surface Hydrograph or Chicago Method.

SWAN was originally designed to determine runoff by
the Rational Method.  Verification of its results
was made for 25 year design storms.  Recently the
Chicago Method was incorporated into the system to
allow the user this option.

SWAN's application of the Rational Method is based
on rainfall-intensity curves promulgated by local
weather offices utilizing contributing areas of city
block size or smaller and composite imperviousness
factors.  The program uses the input time of
concentration and the computed elapsed time of flow
in the sewer as a storm duration abscissa to find
the rainfall intensity from the rainfall/intensity
curve.  Times of concentration, much like the Manning
                                                      751

-------
n factors, require judgment and evaluation.  Pub-
lished tables are available to assist in their
selection.

In addition to direct time of concentration entry,
SWAN offers the ability to compute such time by the
equation:

     TINLT = 1.8 (1.1 - BCOF) (BLEN)^0.50 / (BSLOP)^0.333

Where:    TINLT is the inlet time in minutes
          BCOF is the basin coefficient of runoff
          BLEN is the basin length in feet
          BSLOP is the basin slope in percent
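
A minimal sketch of the inlet-time equation above, paired with a Rational
Method peak-flow calculation; the rainfall intensity shown is a hypothetical
placeholder, whereas SWAN reads it from a rainfall-intensity curve.

    # Sketch of the inlet-time equation given above, plus a Rational Method
    # peak flow (Q = C i A, cfs with i in in./hr and A in acres).
    # All input values are hypothetical.

    def inlet_time_min(bcof, blen_ft, bslop_pct):
        """TINLT = 1.8 (1.1 - BCOF) BLEN**0.50 / BSLOP**0.333, minutes."""
        return 1.8 * (1.1 - bcof) * blen_ft**0.50 / bslop_pct**0.333

    def rational_q_cfs(c, intensity_in_per_hr, area_acres):
        """Rational Method peak discharge."""
        return c * intensity_in_per_hr * area_acres

    tinlt = inlet_time_min(bcof=0.6, blen_ft=400.0, bslop_pct=2.0)   # hypothetical basin
    print(round(tinlt, 1), rational_q_cfs(0.6, 3.5, 2.0))            # 3.5 in./hr is a placeholder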

The Rational  Method is based on a uniform rainfall
on the entire drainage area under investigation.
Violent summer storms of infrequent occurrence do
not fit this pattern as they are not uniform in
rainfall intensity nor uniform on the entire area.
Users therefore should be aware of these limitations:
the method tends to yield conservative results and
generally indicates too many conduits as being under-
sized when such a storm is applied.

SWAN's application of the Chicago Method utilizes a
model hyetograph and generates time dependent surface
hydrographs for every manhole.  These hydrographs
take into consideration surface imperviousness and
impoundments as well as evaporation.

SYSTEM LOADS

SWAN offers the ability to load the sewer system
with domestic and industrial loads, storm flows and
infiltration.  The storm load determination has been
described under the hydrologic considerations of
this report.   Domestic loads may be defined by
acres, population density and per capita usage,
census population and per capita usage or direct
point load.  Infiltration and industrial  loads are
entered as direct point source loads or as a per
capita allowance.

The loading feature (called the LOAD operation) is
employed to analyze nearly all situations.  It
applies the normally accepted design methods in an
analysis mode.  It indicates those conduits which
are undercapacity for the specified condition.

The LOAD operation can be applied directly to analyze
proposed sewers.  The effect of urbanization of a
network's watershed can also be analyzed to indicate
the deficiencies or unused capacity of the network.
The effect of proposed sewers on an existing network
is still another powerful  application which can
provide insights into land use management under
existing conditions.

With sanitary accumulations, combined sewer overflows
to receiving waters can have their pollutant quantity
predicted for specific storm conditions.   Increases
in dry weather sanitary flows caused by dramatic
changes in the area served by a network may require
readjustment of a combined sewer's overflow regulator.

The LOAD command, having received the loads of the
manholes, proceeds to accumulate them upstream to
downstream keeping track of time of concentration
(Rational Method) and elapsed travel  time of flow in
the sewer.

Having determined the flow,  the program institutes a
half interval  search based upon the continuity of
flow and Manning's equations to find the depth of
flow.  The search starts at one-half  the full  depth
and continues until one of the  following acceptance
criteria  is met:

1.   Change in depth from previous  trial  to  present
     trial less  than 0.001 inches.
2.   Load flow previously computed  differs from flow
     at trial depth by less than 0.01 cfs.

Once the depth is found, the velocity is  easily
calculated and the elapsed time of  flow  is incre-
mented by the time-of-flow in the present reach.

If the flow developed previously is greater than the
capacity of the conduit, the search operation is
omitted and the time is computed based on full depth
of flow.  Any reach which had a computed  flow greater
than capacity is flagged and its description is
stored for future tabulation.  The command can
institute a "design" which will give the  size of a
circular conduit which will carry the load using a
Manning's n of 0.013 and at the slope of  the existing
sewer.
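
The following is an illustrative sketch only, not SWAN's routine: a
half-interval search for the depth of flow in a circular conduit from
Manning's equation, using the two acceptance criteria listed above.

    # Sketch of a half-interval search for depth of flow in a circular conduit
    # (Manning's equation, English units); the conduit data passed in are hypothetical.
    import math

    def manning_q(depth_ft, dia_ft, slope, n):
        """Discharge at a given depth in a partly full circular pipe."""
        theta = 2.0 * math.acos(1.0 - 2.0 * depth_ft / dia_ft)   # central angle, radians
        area = dia_ft**2 / 8.0 * (theta - math.sin(theta))
        perim = dia_ft * theta / 2.0
        r = area / perim                                         # hydraulic radius
        return (1.49 / n) * area * r**(2.0 / 3.0) * math.sqrt(slope)

    def depth_of_flow(q_cfs, dia_ft, slope, n=0.013):
        """Half-interval search starting at one-half the full depth."""
        lo, hi = 0.0, dia_ft
        d = 0.5 * dia_ft
        while True:
            q = manning_q(d, dia_ft, slope, n)
            if abs(q - q_cfs) < 0.01:                 # flow criterion, cfs
                return d
            lo, hi = (d, hi) if q < q_cfs else (lo, d)
            d_new = 0.5 * (lo + hi)
            if abs(d_new - d) < 0.001 / 12.0:         # depth criterion, 0.001 in.
                return d_new
            d = d_new

    print(depth_of_flow(q_cfs=4.0, dia_ft=1.5, slope=0.004))      # hypothetical reach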

HYDROGRAPH DEVELOPMENT

Hydrographs of flow from a watershed are useful
tools for analysis of any facility carrying or
treating storm flow runoffs.   While the Rational
Method is weak for analysis of runoff, the hydrograph
development techniques incorporated into SWAN can  be
utilized for analysis of any size uniform intensity
storm passing in any direction across the watershed
area.  Hydrographs can be developed for any reach  in
the drainage area for any given storm.  In this
application, the physical wave front of the storm
flow can be developed and time-flow relationships
can be derived and applied to a backwater analysis.

Storm flow hydrographs are powerful  devices for
analyzing quantity of overflow to receiving waters.
SWAN's hydrograph module also provides mass flow
diagrams.  The developed hydrographs and mass dia-
grams in the form of coordinates of flow or mass
versus time may be printed or plotted.

Sewage treatment of storm water is gaining in popu-
larity.  Although it is not new, the treatment of
storm water is far from being a well established
science.  Design criteria are presently being formu-
lated by regulatory agencies.  The basic design
parameters for storm water treatment are rate and
quantity of storm runoff.  These parameters are
developed by the hydrograph feature of SWAN.

The actual hydrograph development is based on an
accumulation of simple trapezoidal  hydrographs of
each runoff area.  For the Rational  Method, time of
concentration is an important factor.  A new element
has been added:   "start time".  For even the most
intense storms,  rainfall  must saturate the ground
before runoff begins.  Asphalt and concrete pavements
have cracks and voids that trap water before runoff
begins.  This period before water eventually reaches
the sewer is called "start time".  It can be a
matter of seconds or minutes  and is a  judgment
factor as is time of concentration.   For the Chicago
Method, the above are taken into account in the
surface hydrograph generation.

The HYDROGRAPH command accumulates the local  water-
shed hydrographs.  It takes its ordinate  (either
flow or watershed area) and adds to it all the other
local hydrographs, displacing each by its travel
time down to the reach being investigated.  Each of
the local watersheds should not exceed an area of
                                                     752

-------
about 2 acres.  If the areas are larger, the assump-
tion of a trapezoidal local hydrograph becomes
questionable.
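
A minimal sketch, under the assumption of a common time step, of the
accumulation performed by the HYDROGRAPH command as described above; the
local hydrographs shown are hypothetical.

    # Sketch of hydrograph accumulation: each local hydrograph, tabulated at
    # a common time step, is displaced by its travel time to the study reach
    # and summed.
    import numpy as np

    def accumulate(local_hydrographs, travel_times_min, dt_min=1.0):
        shifts = [int(round(t / dt_min)) for t in travel_times_min]
        n = max(len(h) + s for h, s in zip(local_hydrographs, shifts))
        total = np.zeros(n)
        for h, s in zip(local_hydrographs, shifts):
            total[s:s + len(h)] += h                 # shift by travel time, then add
        return total                                 # combined ordinates at the reach

    # Two hypothetical trapezoidal local hydrographs, 5 minutes apart in travel time:
    h1 = np.array([0, 2, 4, 4, 4, 2, 0], dtype=float)
    h2 = np.array([0, 1, 3, 3, 1, 0], dtype=float)
    print(accumulate([h1, h2], travel_times_min=[0.0, 5.0]))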

The hydrograph method can be used to give a reasonable
approximation of the effect of a uniform storm on
the drainage area but should not be substituted for
the actual field observed flows.

The application of a uniform storm to a drainage
area is a problem with the hydrograph as it is with
the Rational Method in the LOAD command.  The differ-
ence is that with the HYDROGRAPH command, the input
may be selected to conform to an actual storm.
Accepted unit hydrograph techniques can be employed
by using an intensity of 1.0 to create the unit
hydrograph.

BACKWATER

Backwater techniques have long been recognized for
the analysis of open channel flow (creeks, rivers,
ditches, etc).  In a closed conduit the backwater
principles are the same except when the sewer is
surcharged, i.e. flowing full under a head.  Sur-
charged sewers can be analyzed using pressure flow
principles.

The availability of a backwater technique analysis
to flow in sewers allows a multitude of applications.
The most frequent application is the analysis of a
series of reaches having a constriction caused by a
singular undercapacity conduit.

The backwater analysis in SWAN can perform two func-
tions:

1.   Calculate the required piezometric head to
     sustain a given flow.
2.   Calculate the flow sustained by a predetermined
     piezometric head.

The first function can be applied to analyze the
surcharge capacity of a sewer.  This capacity may be
significantly larger than the gravity flow capacity.
The surcharge capacity of a sewer is important when
sewers are analyzed for short-term high-intensity
storms.  This type of storm has been historically
ignored because of the complex nature of any manual
analysis.  Runoff flows through the sewer in a time-
related manner, i.e., the peak flow need be sustained
for only a short period of time (say 3 to 5 minutes).
This peak may be 25% to 50% greater than the gravity
flow capacity.  The sewer can in many cases carry
this flow.  It has been shown that a storm sewer
that was designed for a 10 year recurrence-period-
rational-method storm can sustain a 25 year high-
intensity short-term storm.  The analysis of these
storms is a difficult process and requires the
development of hydrographs for specific storms.

The ability to calculate a flow sustained by a given
head is useful in determining surcharged flows for
specific upstream conditions in a sewer.  Flow
splitting caused by overflow regulators or relief
sewers is a good example.  Overflow regulators are
usually designed in a gravity flow environment.
However, in actual practice, these regulators may be
surcharged during peak flows, especially if they are
regulating flows from dense urban areas.

The operation must start at a known elevation at the
downstream end of the investigation.  In a sewer
this could be one of two conditions.
1.   The water surface elevation of a receiving
     water if the outfall conduit is submerged.
2.   The known starting elevation as a function of
     the conduit hydraulic properties if the sewer
     flow falls free such as in a drop manhole
     device or in an overflow outfall.

Using the starting elevation, the operation applies
"gradually varying flow" theory to find the water
surface within a reach up to its upstream manhole.
Using the conservation of energy principle, the
starting elevation for the next reach can be calcu-
lated.  This procedure is repeated until  the limit
of investigation is reached.

Gradually varying flow theory considers that the
water surface can be predicted by a profile curve.
This profile is a function of the quantity of flow
and the geometries of the channel.

The flows utilized by the BACKWATER command may be
defined in one of three manners.

1.   Direct Q (flows in cfs) at each manhole.
2.   Flows in terms A*C with a request for a half
     interval iteration search to determine the flow
     and subsequent water surface elevation (piezo-
     metric head).
3.   A modification of 1 above, where Q's are read
     from the surface runoff hydrographs  (Chicago
     Method) computed earlier in the system and
     stored in a file.

The third type of loading above may be explained by
Figures 1A and 1B, which depict a dendritic network
and corresponding manhole loading hydrographs respec-
tively.

          FIGURE 1A   DENDRITIC NETWORK

          FIGURE 1B   LOADING HYDROGRAPHS (Q at MH 1 through MH 5 versus time)
                                                      753

-------
Water profiles by backwater means are determined at
various time intervals t1, t2, t3, and so on, with t = 0
being the start of the storm.  In this manner,  the
worst profile for any sewer reach may be determined.

For time interval t2, for example, the loading at
Manhole 1 is Q12 obtained from the surface hydrograph
contributing to Manhole 1.   Similarly Q22 is obtained.
At Manhole 3, the hydrograph includes the accumula-
tion of the hydrographs at Manholes 3, 7, 8 and 9,
taking into account the respective times of travel
to Manhole 3.  This methodology is continued to
Manhole 5 at the upstream end of the system.

Within each manhole the following losses may be
taken into account by SWAN:

1.   Bend losses - varying with the upstream velocity
     head.
2.   Change in velocity losses - due to the change
     in velocity heads and always positive.
3.   Change in flow losses - due to an increase in
     flow caused by flow entering from a side strip.

If the sewer is surcharged because the flow is
greater than the capacity,  the piezometric head is
defined by the energy gradient from the Hazen-
Williams equation.  If the surcharge is caused  by
downstream conditions and the conduit would normally
carry the flow, the piezometric head is defined by
the friction slope from the Manning equation.
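
As a hedged sketch of the surcharged case described above, not SWAN's code,
the piezometric head drop across a full-flowing reach can be taken from the
Manning friction slope applied over the reach length; the reach data shown
are hypothetical.

    # Sketch: piezometric head drop across a surcharged (full) circular reach
    # from the Manning friction slope; the Hazen-Williams gradient could be
    # substituted as described in the text.  Reach data are hypothetical.
    import math

    def manning_friction_slope(q_cfs, dia_ft, n=0.013):
        """Friction slope solved from Q = (1.49/n) A R^(2/3) S^(1/2) for a full pipe."""
        area = math.pi * dia_ft**2 / 4.0
        r = dia_ft / 4.0                              # hydraulic radius of a full circle
        return (q_cfs * n / (1.49 * area * r**(2.0 / 3.0)))**2

    def surcharge_head_drop_ft(q_cfs, dia_ft, length_ft, n=0.013):
        return manning_friction_slope(q_cfs, dia_ft, n) * length_ft

    print(surcharge_head_drop_ft(q_cfs=12.0, dia_ft=1.5, length_ft=300.0))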

The BACKWATER command is a very useful tool.  Sur-
charged storm sewers can be shown to have larger
capacities than that found by the Manning equation.
A deep sewer can sustain a  very large surcharge and
perform with flows 100% higher than normal  flow
capacity.  However, a constriction in a sewer can
cause the capacity to be reduced for all upstream
reaches.  This may not be evident from analysis by
other means.

Backwater techniques can be employed to analyze
relief sewer capacity by simulating a surcharged
system up to the piezometric head that equals the
relieved sewer's overflow weirs.  Overflow structures
can be analyzed using this  same approach.

These techniques should not be used when the sur-
charged piezometric head is in excess of three  or
four times the conduit diameter above the sewer
crown.  The assumptions are questionable under  these
conditions.

CONCLUSION

The use of SWAN on various  projects since its incep-
tion in 1971 by Erdman, Anthony, Associates has
shown it to be a valuable and accurate tool for the
simulation of sewer flows.   Its use, however, must
be tempered with good engineering judgment, for it
was not intended to circumvent the Engineer.

The implementation of SWAN  can be a very useful  and
inexpensive method of keeping accurate records  of a
municipality's sewer networks.  As the community
grows,  the data base can be expanded to include
additions and improvements.  Additions and  improve-
ments can be analyzed in the planning stages or
checked in the design stage and thus reduce the
chance of inadequate service.  The municipal agency
responsible for wastewater  collection and disposal
would have its network records and the procedures
for analysis in one easily  accessible source.
As the environmental movement picks up momentum and
as more monies are made available for water pollution
abatement, SWAN will be even more valuable to engi-
neers and planners.  Municipalities which do not
have long-term statistical data available may employ
SWAN to develop reasonably accurate models of criti-
cal events, and reduce the time from investigation
to design.

It is the intent of the authors, depending of course
on the availability of time and funds, to utilize
the options of SWAN to compare the Rational Method
with the Chicago Method and to determine their
effects upon analysis and design.  Several theore-
tical comparisons have been made, but to the know-
ledge of the authors, none have been made on actual
complex and extensive sewer systems.

SWAN is a simple to use computer system which re-
quires neither monstrous hardware nor tedious or
complicated input form preparation.  The output is
neatly presented for ease of interpretation and the
plotter features reduce the efforts of transferring
computer output to hard copy.  SWAN is upward compa-
tible with respect to hardware within a FORTRAN
environment.  Lack of space prohibits a detailed
description of SWAN and its related output.  Those
interested may feel free to contact the authors for
further details.

BIBLIOGRAPHY

1.   "OPEN CHANNEL HYDRAULICS" by Ven Te Chow,
     McGraw-Hill, 1959.

2.   "HANDBOOK OF APPLIED HYDROLOGY" by Ven Te  Chow,
     McGraw-Hill, 1964.

3.   "HANDBOOK OF APPLIED HYDRAULICS" by C.V.  Davis,
     2nd Edition, McGraw-Hill, 1952.

4.   "DESIGN AND CONSTRUCTION OF SANITARY AND STORM
     SEWERS" by the American Society of Civil  Engi-
     neers and the Water Pollution Control Federation,
     A.S.C.E. M&R No. 37.

5.   "HYDROLOGY OF URBAN RUNOFF" by A.L. Tholin and
     C.J. Keifer, ASCE Transactions Paper 3061,
     March 1959.

6.   "CHICAGO HYDROGRAPH METHOD   NETWORK ANALYSIS
     OF RUNOFF COMPUTATIONS (N.E.R.O.)" by C.J.
     Keifer, J.P. Harrison and T.O. Hixson, City of
     Chicago D.P.W., 1970.

7.   "SYNTHETIC STORM PATTERN FOR DRAINAGE DESIGN"
     by C.J. Keifer and Henry Hsien Chu, Proceedings
     ASCE, Hydraulics Division Paper 1332, August
     1957.

8.   "WATER SUPPLY AND POLLUTION CONTROL: by Viessman
     and Clark, International Textbook Company,
     1966.

9.   "WATER SUPPLY AND WASTE DISPOSAL" by W.A.
     Hardenburgh and E.B. Rodie, International
     Textbook Company, 1963.

10.  "WATER SUPPLY AND WASTE WATER DISPOSAL" by G.M.
     Fair and J.C. Geyer, John Wiley and Sons,  1966.

11.  "HYDRAULICS" by H.W. King and C.O. Wisler and
     J.G. Woodburn, John Wiley and Sons, 1963.
                                                      754

-------
                                     ON-LINE MODELS FOR COMPUTERIZED CONTROL

                                            OF COMBINED SEWER SYSTEMS
            J.W.  Labadie and N.S.  Grigg
          Department of Civil Engineering
             Colorado State University
              Fort Collins, Colorado
                     P.O. Trotta
           Department of Civil Engineering
               University of Colorado
                  Denver, Colorado
     Automatic  computer control is a cost-effective
approach to controlling polluting discharges from com-
bined sewer systems.   Perhaps the greatest challenge
is development  of programmable models and control
logic that can  find the best positioning of field con-
trol elements within the restrictions of the on-line,
real-time environment.   Control strategies can be
developed off-line or on-line, and may be reactive or
adaptive.  It appears that simple reactive control,
or rule curves,  can adequately control total overflows,
but may produce high overflow rates.  Stochastic adap-
tive policies produce a smoother distribution of over-
flows,  but are  highly dependent on the accuracy of the
storm inflow forecasting model.  Autoregressive moving-
average transfer function models are proposed as an
efficient approach to forecasting.  Initial indications
are that total  city-wide automatic control is feasible,
both technically and economically.

                   Introduction

     Increasing political and economic pressures are
causing today's urban water manager to place a greater
emphasis on cost-effectiveness of urban services and
efficient spending of the public dollar.   Accordingly,
he has a great  interest in searching for innovative
solutions.  It  has been conclusively demonstrated that
storm and combined sewer discharges are significant
contributors to the total pollution reaching our re-
ceiving waters  and the price tag to clean up these
discharges has  been estimated to be in the $200 billion
range.   Since neither this country as a whole nor
individual cities can afford expenditures of this
magnitude for the problem, better ways to manage
existing systems and affordable new systems must be
found.

     Automatic  computer control has been applied with
success in industry for more than 15 years.  It is
only recently,  however, that a few U.S. cities have
implemented limited scale computer systems for con-
trolling combined sewer overflows from portions of
their urban complexes,  with a number of other cities
in various stages of planning for such systems.

     It makes sense to consider automatic control of
storage and flow in a combined sewer system by digital
computer, especially in light of the tremendous ad-
vances  made in  recent years in industrial computer
control.  The availability of attractive computer
hardware, however,  does not by itself guarantee success
in automating a system.  The cost of such hardware may
be the  least of the costs involved.

     The normal computer control project should pro-
ceed cautiously through phases, progressing from the
simple to the more complex.  The four basic levels of
computer control can be listed as follows:

     1.   Data logging and processing
     2.   Conventional remote supervisory control
     3.   Automation of parts of systems and computer
assisted control
     4.   Closed loop  automatic computer control.
     Once a commitment to the eventual implementation
of automatic control is made, the greatest challenge
is the development of programmable models and control
logic that can ensure the most effective utilization
of storage and treatment in the system.  It is the
control strategy development problem that is addressed
herein.

                 Control Objectives

     Application of automatic control to combined
sewer systems requires that strategies be developed
and implemented for remotely controlling adjustable
valves, orifices, gates, and pumps within the system,
during a real-time storm event, in such a way that
certain control objectives are met as closely as
possible, such as:

     1.  minimize the total volume of overflows
reaching receiving waters during a storm event.
     2.  minimize the maximum rate of overflow dis-
charge.
     3.  minimize the total mass of pollutants.
     4.  minimize the maximum rate of pollutant discharge.
     5.  minimize the detrimental impact of untreated
overflows on the receiving water.
     6.  maximize the effective utilization of treat-
ment plant, interceptor sewer, trunk sewer,  and stor-
age capacities.
     7.  minimize localized flooding from surcharged
sewers.

     Objective 5, though highly desirable,  is depend-
ent on the availability of accurate models  for pre-
dicting wastewater quality and its impacts  on re-
ceiving waters.  Such models may not be available,
since water quality prediction is much more difficult
than quantity modeling and prediction.  Objectives 3
and 4 are also dependent upon wastewater quality pre-
diction models.  The City of Cleveland  has reported
using Objective 3 for their control system.   Utili-
zation of Objective 4 might be an indirect  means of
satisfying 5,  since it appears that the pollutant
loading rate from discharges is more critical for
receiving waters than the total mass of pollutants.

     In the absence of adequate quality prediction
models, one must settle for Objectives 1 or 2, used
in conjunction with Objective 7.  This latter objec-
tive is usually given a higher priority, due to the
high nuisance level and sanitation problems associated
with localized flooding and the fact that it hits
people where they live.  Sewer discharges leading to
overflows, on the other hand, tend to pass out of
sight, out of mind.  It is obvious that any computer
control system must be designed to control the system
with regard to a proper tradeoff between overflow
minimization objectives and localized flooding mini-
mization objectives.  Satisfaction of these objectives
will indirectly satisfy Objective 6.

     Figure 1 presents a simple example as a means of
comparing Objectives 1 and 2.  Suppose that application
of Objective 1 for optimal control of stormwater re-
sults in the overflow distribution shown in Figure
1(a), for some hypothetical real-time storm event.
                                                      755

-------

[Figure 1.  Comparison of Objectives 1 and 2.  (a) Results of Objective 1:
overflows of 0, 5, and 1 in discrete time periods t-1, t, and t+1; total
overflows = 0 + 5 + 1 = 6; total squared overflows = 0 + 25 + 1 = 26.
(b) Results of Objective 2: overflows of 2, 3, and 2; total overflows =
2 + 3 + 2 = 7; total squared overflows = 4 + 9 + 4 = 17.  Axes: overflows
versus discrete time periods.]

For Objective 2, an indirect means of minimizing the
maximum overflow rate, or the total overflows during
any discrete time period, is to minimize the sum of
the squared overflows.  This might result in an over-
flow distribution as shown in Figure 1(b), for the
same storm event.  Even though total overflows resul-
ting from Objective 2 may be greater than those from
Objective 1, the pollution shock on receiving waters
may be less in the case of Objective 2, where the
maximum overflow rate is lower.

     Notice that in the case of Objective 2 (Figure
1(b)) overflows are taken during period t-1, even
though there might be available storage capacity to
store these flows.  This is allowed in the interest of
smoothing out the distribution of overflows so that
impacts on the receiving waters are lessened.

     Furthermore, multiplying overflows (or squared
overflows) at particular points in time and space by
weighting factors is an indirect way of considering
pollution impacts.  For example, bypass points with
a history of overflows with higher pollutant concen-
trations could be assigned a higher weight than those
for other bypass points.  In this way, overflows are
more heavily penalized at these locations and therefore
given a greater priority for control.  Likewise, due
to initial flushing effects, overflows occurring
early in the storm event could be weighted more
heavily than those occurring later.  In addition, tidal
or river level fluctuations might necessitate the
adjustment of weighting factors.  The utilization of
weighting factors is a way of expressing subjective
information in a quantitative manner, in the absence
of accurate quality prediction models.  It essentially
is a means of setting up a priority scheme for
allowing overflows when there is no choice but to
allow them.

     Objective 2, in a real-time context, could be
indirectly expressed as:

     minimize  \sum_{i=1}^{N} \sum_{t=m}^{M} \left[ w_i O_i(t)^2 - c Q_i(t) \right]          (1)

where O_i(t) are the total predicted overflows, as a
result of some control policy, at bypass point i,
during a discrete interval t; w_i are the weighting
factors on overflow; N is the total number of bypass
points; m is the current real-time interval since
the storm began at t=1; M is some future time inter-
val to which storm inflows are forecasted (m < M);
Q_i(t) are the predicted throughflows to treatment;
and c is a positive coefficient which credits through-
flows and discourages unnecessary storage.
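
     As a minimal illustration (not the authors' code), the objective in
Equation 1 could be evaluated for a candidate control policy as in the
Python sketch below; the overflow and throughflow series, weights, and
coefficient c are hypothetical.

    # Minimal sketch: evaluate the Equation 1 objective for the predicted
    # overflows O[i][t] and throughflows Q[i][t] of some candidate control
    # policy.  All numbers are hypothetical.

    def objective(overflows, throughflows, weights, c):
        """Sum over bypass points i and periods t of w_i*O_i(t)**2 - c*Q_i(t)."""
        total = 0.0
        for i, w in enumerate(weights):
            for o, q in zip(overflows[i], throughflows[i]):
                total += w * o ** 2 - c * q
        return total

    overflows    = [[0.0, 5.0, 1.0],           # bypass point 1, periods m..M
                    [2.0, 3.0, 2.0]]           # bypass point 2
    throughflows = [[4.0, 4.0, 4.0],
                    [3.0, 3.0, 3.0]]
    weights      = [1.0, 2.0]                  # heavier penalty at bypass point 2
    print(objective(overflows, throughflows, weights, c=0.1))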

                                                                           Control Constraints
                               Having  specified  the  control  system objectives,
                          which  must in  some  way be  placed in quantitative
                          terms, it is then necessary to  specify the constraints
                          under  which  the  control  system  is  to operate.   These
                          can be listed  as follows:

                               1.  The interceptor,  trunk sewers,  and detention
                          storage devices  have a limited  capacity which,  if ex-
                          ceeded, will result in localized flooding and un-
                          treated overflows.
                               2.  The treatment plant(s)  has (have) a maximum
                          capacity for treating  wet  weather  flow.
                               3.  The transfer  of rainfall  to runoff to  sewer
                          flow operates  under certain dynamic physical laws
                          that can be  approximated by mathematical  models.   In
                          effect, these  laws act as  constraints on  the control
                          system.
                               4.  The remote data acquisition system will  have
                          a limited capacity for retrieving  and transferring
                          information, both in time  and space.
                               5.  The hardware  and  software associated with
                          the computer control system will have a limited capa-
                          bility.
                               6.  The computer  control system will  have  a re-
                          stricted amount  of time,  to render  decisions in  real-
                          time and properly respond  to a  rapidly progressing
                          storm  event.
                               7.  The system must operate under constraints of
                          possible human error and equipment malfunction  and
                          breakdown.

                               The goal  of the computer control system, then,
                          is to  meet the specified objectives as closely  as
                          possible, while  operating  under the above  constraints.
                          The design of  the computer control system is there-
                          fore based on  analysis of  tradeoffs between cost  of
                          the system and its effectiveness in meeting the ob-
                          jectives under these constraints.

                                 Off-Line vs. On-Line Control Development

                               The question that now arises  is:   how should
                          computer control logic be  developed in order to meet
                          the above control objectives, subject to  the con-
                          straints?  There are two basic  approaches  to control
                          logic  development:  off-line and on-line.   By off-line
                          development, we  mean that  the control policies  are
                          synthesized  independently  of the real-time control
                          situation.   That is, instead of programming the
                                                      756

-------
necessary mathematical  models  onto the real-time
computing machinery, they are  programmed  onto  batch-
mode computing systems not interfaced  with  the actual
control system.  Many optimizations  are then performed
for an assumed range of probable  storm events  that
could occur, based on historical  or  synthetically
generated events.  The resulting  optimal  strategies
are then stored as rule curves in the  on-line  computer
system for real-time control.

     The advantage of off-line optimization is that
sophisticated models of the sewer system  and accurate
analysis techniques can be used in an  off-line manner,
whereas it would be difficult  to  use them in an on-
line computing system with limited hardware and time
for making control decisions.  The major disadvantage
of off-line optimization is that  its effectiveness is
based on how well the range of predetermined storm
events corresponds to what actually  can occur  in real-
time.  Obviously, there is an  infinite number  of
possible events that can take  place.

     On-line development implies  that  the control
optimization is actually carried  out in real-time on
the on-line computing system as a storm is  passing
over the urban area.  The obvious advantage is that
optimal controls can be developed that uniquely respond
to the event at hand, as well  as  the current state of
the sewer system in terms of flows and storage levels.
The disadvantage is that simplified  models  and analy-
sis techniques may be required because of hardware
and software limitations of the on-line computing
system  (which might be  a minicomputer, for  example)
and the limited time available to render  control
decisions.

      Reactive vs. Adaptive Control  Policies

     Control policies resulting from off-line develop-
ment tend to be reactive in nature.  Other terms would
be local, set point, or myopic control.  That is,
these kinds of control policies are less dependent on
anticipation or forecasting of storm inflows.  They
simply react to the current flow situation.  Some
forecasting may be necessary if there are several rule
curves programmed onto the computing system, and it
is desired to select the best one for the current
event, as well as modify it to some extent as the
storm progresses.
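
     A reactive rule-curve policy of the kind described above can be
extremely simple; the Python sketch below keys a gate setting to the
currently measured storage level and uses no forecast.  The breakpoints
and settings are hypothetical.

    # Sketch of a purely reactive rule curve: the gate opening to the
    # interceptor depends only on the currently measured storage level; no
    # storm forecast is used.  Breakpoints and settings are hypothetical.

    RULE_CURVE = [        # (fraction of storage full, gate opening 0..1)
        (0.25, 0.2),
        (0.50, 0.5),
        (0.75, 0.8),
        (1.00, 1.0),
    ]

    def gate_setting(storage_fraction):
        for threshold, opening in RULE_CURVE:
            if storage_fraction <= threshold:
                return opening
        return 1.0                              # storage essentially full

    for level in (0.1, 0.4, 0.9):
        print(level, "->", gate_setting(level))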

     On-line optimization implies a greater dependence
on storm forecasting, as well as more extensive fore-
casting.  The extreme would be an attempt to forecast
future storm inflow rates over short time increments.
Control policies based on on-line optimization might
be termed adaptive.  Though it is possible to have
on-line optimization which is reactive (Brandstetter,
et.al.2) and off-line control development which results
in adaptive policies,^ there is generally less emphasis
on comprehensive forecasting in off-line development.
It may be limited to forecasting only total depth and
duration of the storm.

     Adaptive control is based on the  sequential and
systematic updating of  storm inflow  forecasts  as new
information on the ensuing storm  is  gathered from an
automated data acquisition system.   Control policies
can then be appropriately modified,  based on the up-
dated forecasts.  In effect, then, a forecasting
model is designed to be a learning model that can
efficiently incorporate new information into its
structure as it becomes available, so  that  succeeding
forecasts ideally become better as real-time infor-
mation on the storm event in progress  is  obtained.
     The ultimate goal is what might be termed sto-
chastic adaptive control.  That is, instead of totally
basing the evaluation of control policies on a fore-
casted storm event, it is recognized that there is con-
siderable uncertainty (although a more appropriate
term is risk) associated with that forecast, and this
uncertainty tends to increase as we attempt to fore-
cast further into the future.  This approach is more
realistic since, if an optimal control policy is based
on the certain occurrence of a sequence of future in-
flows, and it turns out that the actual inflows
deviated considerably from the forecast, then the
optimality of the control is in question.  It would
be better if a band of uncertainty (Figure 2) were
associated with the forecast and a stochastic control
policy development carried out which considered cer-
tain assumed probabilities of deviation from the most
likely or expected levels of the forecast.
[Figure 2.  Inflow Forecasting Under Uncertainty (or Risk): forecasted
inflows plotted against future time, with a band of uncertainty widening
about the expected forecast as lead time increases.]
     Under stochastic adaptive control in real-time,
Equation 1 might be written as:

     minimize  E\left\{ \sum_{i=1}^{N} \sum_{t=m}^{M} \left[ w_i O_i(t)^2 - c Q_i(t) \right] \right\}          (2)

where E denotes expected value.
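
     One simple way to approximate the expected value in Equation 2 is
Monte Carlo sampling over the band of forecast uncertainty, as in the
hypothetical Python sketch below; the fixed-capacity overflow rule is only
a placeholder, not a sewer routing model.

    # Sketch: approximate E[ sum of w*O(t)^2 ] by sampling inflow forecasts
    # from an assumed band of uncertainty.  The fixed-capacity overflow rule
    # is a placeholder, not a routing model.
    import random

    def overflow(inflow, capacity=4.0):
        """Placeholder: anything above a fixed conveyance capacity overflows."""
        return max(0.0, inflow - capacity)

    def expected_objective(mean_forecast, spread, weight=1.0, samples=2000):
        total = 0.0
        for _ in range(samples):
            cost = 0.0
            for mu, sigma in zip(mean_forecast, spread):
                inflow = random.gauss(mu, sigma)   # draw from the uncertainty band
                cost += weight * overflow(inflow) ** 2
            total += cost
        return total / samples

    mean_forecast = [3.0, 5.0, 4.0]                # expected inflows, future periods
    spread        = [0.5, 1.0, 1.5]                # uncertainty grows with lead time
    print(expected_objective(mean_forecast, spread))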

            On-Line Modeling Requirements

Rainfall-Runoff and Routing Models

     Off-line control development can be based on
either the simplest and most intuitive of analyses,
or sophisticated studies involving mathematical models
of system response.  Given a set of historical or syn-
thetic storm events, these rainfall data are passed
through a rainfall-runoff model, which predicts direct
inflows to the sewer transport system.  A sewer routing
model is then required for predicting overflows.

     Rainfall-runoff models range in sophistication
from simple unit hydrographs to kinematic wave approa-
ches (as in SWMM).   Likewise, sewer routing models
range from simple time-lag approaches to solution of
the full unsteady flow equations (as in the San Fran-
cisco Stormwater Model (SFSM)9).
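
     At the simple end of that range, a unit hydrograph prediction is just
a discrete convolution of rainfall excess with unit-response ordinates, as
in the following sketch (all values hypothetical):

    # Sketch: direct runoff by discrete convolution of rainfall excess with
    # unit hydrograph ordinates (the simple end of the modeling range).
    # All values are hypothetical.

    def unit_hydrograph_runoff(rain_excess, uh_ordinates):
        n = len(rain_excess) + len(uh_ordinates) - 1
        runoff = [0.0] * n
        for i, p in enumerate(rain_excess):
            for j, u in enumerate(uh_ordinates):
                runoff[i + j] += p * u
        return runoff

    rain_excess  = [0.2, 0.5, 0.1]             # inches of excess per period
    uh_ordinates = [10.0, 30.0, 20.0, 5.0]     # cfs per inch of excess
    print(unit_hydrograph_runoff(rain_excess, uh_ordinates))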

     Again, off-line control development offers greater
latitude in the degree of model sophistication, but
results in more reactive type control.  On-line control
development, on the other hand, requires more simpli-
fied models, but can more uniquely respond to a
current event in an adaptive mode.
                                                       757

-------
     The major limitation  for  on-line control is in the
area of sewer transport  routing.   It  is not yet feasi-
ble to solve the St. Venant equations on-line in real-
time.  Seattle8 has found that the kinematic wave model
developed for SWMM,  modified to include backwater
effects using simple continuity relationships, per-
formed satisfactorily  in real-time.

Forecasting Model

     In addition to rainfall-runoff and sewer routing
models, some kind of forecasting model is required for
on-line control development.  The more advanced models
currently available attempt to describe the activity
of storm rain cells, which can be defined as local
areas of convective circulation resulting in more in-
tensive rain.  Rain cells in turn operate within larger
areas of less intense rain called bands.  A large
collection of investigators have contributed to an
understanding of the life cycle of cells within bands.
An extensive bibliography  on this subject  can be found
in Trotta10.  The overall result of this rain
cell research is that  there  appear to be definable
statistical properties associated with rain cell acti-
vity.  Since these models  were primarily developed for
simulation studies,  there  is some question as to their
adaptability to real-time  forecasting.  The models
tend to be large and time-consuming.

     As an alternative to  these comprehensive simula-
tion models, certain techniques originating from elec-
trical engineering may be  applicable  to rainfall fore-
casting.  The two most important  general forecasting
approaches are (i) the extended Kalman filter, and
(ii) the so-called autoregressive moving-average
transfer function models.  Graupe4 has concluded that
the latter models are  preferable  in terms  of compu-
tational speed and simplicity,  especially  when certain
aspects of the persistence or  degree  of autocorrelation
of the inputs are not  well understood, as  is the case
with stormwater forecasting.

     Figure 3 gives a simple illustration of this
approach.  The regression relations are of the form
(with the moving average terms deleted):

     R_i(t+1) = \sum_{k=0}^{p} a_k R_i(t-k)
                  + \sum_{j \in J(i)} \left[ \sum_{k=0}^{q} b_{jk} R_j(t-k) \right]          (3)

where t is the current real-time interval; R_i(t+1)
is the forecasted inflow; J(i) is the set of all
pertinent locations j adjacent to i; and the a and b
coefficients are parameters determined from historical
data and the current storm event.  These parameters can
be easily updated in real-time, as shown by Trotta10.
Though stationarity is assumed for the above model,
nonstationarity can be considered by using a differencing
operator.  Equation 3 can be used sequentially to
generate forecasts for any lead time.
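
     A minimal sketch of a one-step forecast in the spirit of Equation 3
(moving-average terms deleted) is given below; the lag orders, coefficient
values, and gauge layout are hypothetical, and in practice the parameters
would be fitted to historical data and updated in real-time.

    # Sketch of a one-step autoregressive transfer-function forecast in the
    # spirit of Equation 3 (moving-average terms deleted).  The coefficients
    # and gauge layout are hypothetical; in practice they would be fitted to
    # data and updated on-line.

    def forecast(own_history, neighbor_histories, a, b):
        """R_i(t+1) = sum_k a_k R_i(t-k) + sum_j sum_k b_jk R_j(t-k)."""
        r = sum(ak * own_history[-(k + 1)] for k, ak in enumerate(a))
        for j, history in enumerate(neighbor_histories):
            r += sum(bk * history[-(k + 1)] for k, bk in enumerate(b[j]))
        return r

    own       = [2.0, 2.5, 3.1]                # recent inflows at location i (oldest first)
    neighbors = [[1.0, 1.8, 2.6],              # inflows at adjacent locations j in J(i)
                 [0.5, 0.9, 1.4]]
    a = [0.6, 0.2]                             # coefficients on R_i(t), R_i(t-1)
    b = [[0.3], [0.1]]                         # coefficients on R_j(t) for each neighbor
    print(forecast(own, neighbors, a, b))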

     In some cases it may be advantageous to  forecast
direct storm runoff rather than rainfall  input.   This
is because the rainfall-runoff process tends  to per-
form a smoothing and integrating  action  on rainfall
input.  These integrated data might be more  conducive
to analysis for forecasting purposes than rainfall
data.
[Figure 3.  Illustration of Autoregressive Moving-Average Transfer Function
Model.  Inputs: historical inflows at location x, historical inflows at
adjacent locations, information from other sources (e.g., meteorologic and
radar data), and correlated noise (random model error).  Model: linear
regression relations correlating input and output, with parameters updatable
in real-time.  Output: forecasted inflows at location x.]
Optimizing Model

     A quantified control objective and mathematical
specification of all pertinent constraints on system
response (including specification of some kind of
sewer routing model), in conjunction with a syste-
matic optimization algorithm for finding the best or
near-best controls, is called an optimizing model.

     The on-line control environment places restric-
tions on the degree of sophistication of the opti-
mization algorithm, which in turn restricts the level
of the mathematical models used (particularly the
sewer routing model).  The most popular optimization
algorithm is the simplex method of linear programming,
which has been applied by Bradford1 to combined sewer
control.  Obviously, the use of linear programming
constrains all sewer routing to be linear or piece-
wise linear.  Nonlinear routing can be used with dy-
namic programming, but other computational difficul-
ties arise.1  Application of the maximum principle
and regulator theory3 is also a possibility, but
routing also is a problem here.

     Introducing stochastics further complicates the
optimizing model.  In  addition, the city-wide control
problem is large-scale and unwieldy, with many control
variables.  Labadie, et.al.6 have proposed a hierar-
chical or multilevel optimization approach which can
effectively deal with  the  large-scale problem.

     Development of efficient,  yet sufficiently accur-
ate optimizing models  for  on-line use, remains a
challenging area for future research.

Research Results and Conclusions

     Intensive research on automatic control of com-
bined sewer systems has been carried out at Colorado
State University for the past five years.  Work has
primarily concentrated on  the San Francisco Master
Plan for Wastewater Management  as a case study.

     The most recent research results are described
in Trotta10.  The hierarchical approach to the control
optimization is applied to the  San Francisco system,
where the urban area is divided into a number of
                                                       758

-------
subbasins which are essentially independent  except  for
their contributions of storm runoff to a common  inter-
ceptor and treatment facility.  The controls  for each
subbasin are derived separately by the use of a  sto-
chastic dynamic programming formulation.  Each subbasin
problem, however, is constrained by an upper limit  on
its releases to the interceptor, which is determined
by a master control problem.  This master control pro-
blem, which ties together the separate subbasin  pro-
blems, decides how interceptor and treatment  capacity
should be allocated to the subbasins.  It uses a modi-
fied cyclic coordinate search algorithm.  The inflows
are forecasted using an autoregressive transfer  func-
tion model which can be updated in real-time  to  respond
to new information on the storm event.
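
     The hierarchical idea can be caricatured as follows (this is only a
schematic Python sketch, not the authors' algorithm): a master problem
splits interceptor capacity among subbasins and adjusts the split by a
simple coordinate-style search, while each subbasin evaluates its own cost
against its allocation.  The subbasin cost function below is a placeholder.

    # Schematic sketch of two-level coordination (not the authors' algorithm):
    # a master problem splits interceptor capacity among subbasins and adjusts
    # the split by a simple coordinate-style search; each subbasin evaluates
    # its own (placeholder) cost against its allocation.

    def subbasin_cost(inflow, allocation):
        """Placeholder subproblem: squared overflow when inflow exceeds allocation."""
        return max(0.0, inflow - allocation) ** 2

    def total_cost(inflows, alloc):
        return sum(subbasin_cost(q, a) for q, a in zip(inflows, alloc))

    def coordinate(inflows, total_capacity, sweeps=200, delta=0.05):
        n = len(inflows)
        alloc = [total_capacity / n] * n               # start with an even split
        for _ in range(sweeps):
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    trial = list(alloc)                # shift a little capacity j -> i
                    trial[i] += delta
                    trial[j] -= delta
                    if trial[j] >= 0.0 and total_cost(inflows, trial) < total_cost(inflows, alloc):
                        alloc = trial
        return alloc

    print(coordinate(inflows=[3.0, 6.0, 2.0], total_capacity=8.0))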

     The control algorithm was tested for selected
design storms which were based upon the historic
record.  The tests were conducted on a batch-mode com-
puter, but a hierarchy of minicomputers appears  to  be
a more efficient approach to effecting the multilevel
optimizations in real-time.

     The results of this work indicate that  the  large-
scale algorithm can converge within the time  frame
anticipated for real-time control.  Controls  based
upon the stochastic models were superior to  those
based upon forecasts which were assumed deterministic.
The adaptive aspects of the model appear to  be justi-
fied by the superior distribution of the overflows
which resulted when overflows were unavoidable.   That
is, the maximum rate of overflow was lowest  for  this
model.  This result is notable in that the forecasting
model was deliberately designed to be relatively in-
accurate.  Total overflows were, however, minimized
to a higher degree by a reactive model which  was also
tested, though the maximum overflow rate was  higher.
The overall conclusion appears to be that even though
the adaptive model with risk is highly dependent on
the accuracy of the forecasting model, at least  some
stormflow anticipation will reduce maximum overflow
rates.  Thus, reactive policies better meet  Objective
1, as long as weighting factors are not used,  and sto-
chastic adaptive policies are superior for Objective 2.

     As illustrated in Figure 4, if a storm  event is
definitely considered to be non-overflow producing,
then simple rule curves or reactive policies  are ade-
quate.  If a storm is definitely overflow producing,
rule curves tend to produce higher rates of  overflow
than stochastic adaptive policies.  There is,  of
course, a gray area in between, the size of which
depends on the accuracy of the forecasting model.   The
safest procedure in these gray areas is to use reactive
policies, since a relatively inaccurate forecasting
model could lead an adaptive policy to take unnecessary
overflows.

     Initial cost estimates presented in Grigg,  et.al.5
show that computer hardware would cost around $200,000
for implementing the proposed city-wide hierarchical
control strategy for San Francisco.  Software develop-
ment costs would be about the same, for a total  of
$400,000.  This is a relatively insignificant amount,
in comparison with total project costs that  could
approach $1 billion.

                  Acknowledgments

     The financial support of the National Science
Foundation (Research Applied to National Needs)  and
the Office of Water Research and Technology,  Department
of Interior,  are gratefully acknowledged.  Much  of  the
data which made the research possible were furnished
by the Department of Public Works, City and County  of
San Francisco.   In addition, the technical advice and
assistance provided by Mr. Murray B. McPherson, Direc-
tor, ASCE Urban Water Resources Research Program, was
instrumental in stimulating the research.

[Figure 4.  Effect of Forecasting Model Accuracy on Control Policy
Selection: a storm that is definitely overflow-producing calls for
stochastic adaptive control policies; a storm that is definitely not
overflow-producing calls for reactive control policies; a poor forecasting
model widens the gray area between the two.]

                     References

1.  Bradford, B.H., "Real-Time Control of a Large-Scale
    Combined Sewer System," Ph.D. Dissertation, Depart-
    ment of Civil Engineering, Colorado State Univer-
    sity, August 1974.

2.  Brandstetter, A., R.L. Engel, and D.B. Cearlock,
    "A Mathematical Model for Optimum Design and Con-
    trol of Metropolitan Wastewater Management Sys-
    tems," Water Resources Bulletin, Vol. 9, No. 6,
    pp. 1188-1200, 1973.

3.  Chan, M.L., "Optimal Real-Time Control of Urban
    Stormwater Drainage," TR 87, Water Resources and
    Marine Sciences Center, Cornell University, 1974.

4.  Graupe, D., Identification of Systems, Van Nos-
    trand Reinhold Company, New York, 1972.

5.  Grigg, N.S., J.W. Labadie, G.R. Trimble, Jr., and
    D.A. Wismer, "Computerized City-Wide Control of
    Urban Stormwater," to appear as Tech. Memorandum
    of the ASCE Urban Water Resources Research Program.

6.  Labadie, J.W., N.S. Grigg, and B.H. Bradford,
    "Automatic Control of Large-Scale Combined Sewer
    Systems," Jour, of the Environmental Engineering
    Division, ASCE, Vol. 101, No. EE1, pp. 27-39,
    February 1975.

7.  Labadie, J.W., N.S. Grigg, and P.O. Trotta, "Mini-
    mization of Combined Sewer Overflows by Large-
    Scale Mathematical Programming," Computers and
    Operations Research, Vol. 1, pp. 421-435, 1974.

8.  Leiser, C.P., "Computer Management of a Combined
    Sewer System," USEPA Report, July 1974.

9.  San Francisco Department of  Public Works, San  Fran-
    cisco Stormwater Model:  User's Manual and Program
    Documentation, prepared by D.F.  Kibler, et.al.,
    Water Resources Engineers,  Inc., Walnut Creek,  CA.

10. Trotta, P.O., "Adaptive On-Line  Control of Com-
    bined Sewer  Systems," Ph.D.  Dissertation, Dept.  of
    Civil Engineering, Colorado  State University,
    December 1975.
                                                      759

-------
                                      MATHEMATICAL MODELS FOR CALCULATING
                             PERFORMANCE AND COST OF WASTEWATER TREATMENT SYSTEMS
                                               Richard G.  Eilers
                                     Systems and Economic  Analysis Section
                                         Wastewater Research Division
                                  Municipal Environmental  Research Laboratory
                                     U.S.  Environmental Protection Agency
                                               Cincinnati,  Ohio
 ABSTRACT

 The Systems  and Economic  Analysis  Section of the
 Wastewater Research Division of EPA in Cincinnati,
 Ohio is concerned with finding quantitative expres-
 sions for calculating the performance and cost  of
 wastewater treatment processes as  a function of the
 nature of the  wastewater  to  be treated and the  design
 variables associated with the individual  unit pro-
 cesses.   These models are intended primarily to
 characterize the treatment of municipal sewage.
 Since the procedure for solving all of the quantita-
 tive equations is usually too laborious or complex
 to be accomplished by hand calculation, various
 FORTRAN computer programs have been developed to
 perform the  task.

 BACKGROUND

 Mathematical models for wastewater treatment processes
 are required to express the  performance of the  pro-
 cesses over  the full range of operational  modes  and
 design criteria.   These models can be steady state,
 quasi-steady state,  or time-dependent.  By quasi-
 steady state it is meant  that  a steady state model is
 used to  simulate a process that is,  in reality,  not
 necessarily  steady state.  Most sewage treatment
 systems  are  not steady state.   The time-dependent or
 dynamic  models are of interest when the quality  of the
 effluent  stream from a process is  important  as  a func-
 tion of  time,  or when the effectiveness of various
 kinds of  control  schemes  on  a  process  is being  studied.

 For a model  to be  fully effective  for  design and plan-
 ning purposes,  it  must  be based on valid scientific
 principles,  flexible enough  to simulate experimental
 data from a  full-scale  process  (not  merely pilot-scale
 data), and represent the  performance  and cost of the
 process with adequate precision.

 The collection of  valid,  complete  experimental data
 followed  by  adjustment  of the  model parameters to
 make the  computed  results  agree with  experimental
 results within  an  acceptable tolerance is  also an
 important phase of model  development.

 Packaging mathematical models  as computer programs
 not  only provides  ease  and accuracy of calculation,
 but  also has the additional advantage of convenience
 of  distribution to interested  individuals, such as
 consulting engineers and urban planners, in a
 readily usable  form.

MODELS DEVELOPED

Over the past eight years, a number of computer models
have been developed  in-house by the Systems and
Economic Analysis Section and through contracting
activity with outside sources.  Each program deals
in some way with the cost and/or performance of waste-
water treatment systems.  All of the computer pro-
grams were written in FORTRAN and designed to run on
a 16K IBM 1130 machine, and supporting documentation
has been prepared for each.  Table 1 gives a listing
of the models which were produced in-house, and
Table 2 shows the models which resulted from extra-
mural sources.  A brief description of the most
significant of these computer programs will follow.
Table 1.  Computer programs produced by the Systems
	and Economic Analysis Section.	

 1. Preliminary Design and Simulation of Conventional
    Wastewater Renovation Using the Digital Computer
    (1968).
 2. Executive Digital Computer Program for Preliminary
    Design of Wastewater Treatment Systems (1968).
 3. A Mathematical Model for a Trickling Filter (1969).
 4. Preliminary Design of Surface Filtration Units-
    Microscreening (1969) .
 5. A Generalized Computer Model for Steady State
    Performance of the Activated Sludge Process (1969).
 6. Fill and Draw Activated Sludge Model (1969).
 7. Mathematical Simulation of Ammonia Stripping
    Towers for Wastewater Treatment (1970).
 8. Mathematical Simulation of Waste Stabilization
    Ponds (1970).
 9. Simulation of the Time-Dependent Performance of
    the Activated Sludge Process Using the Digital
    Computer (1970).
10. Economics of Consolidating Sewage Treatment
    Plants by Means of Interceptor Sewers and
    Force Mains  (1971) .
11. Per Capita Cost Estimating Program for Waste-
    water Treatment (1971) .
12. Wastewater Treatment Plant Cost Estimating Pro-
    gram (1971).
13. Design of Concrete and Steel Storage Tanks for
    Wastewater Treatment (1971).
14. Water Supply Cost Estimating Program (1972).
15. Cost of Phosphorus Removal in Conventional
    Wastewater Treatment Plants by Means of
    Chemical Addition (1972).
16. A Mathematical Model for Aerobic Digestion (1973).
17. Design and Simulation of Equalization Basins
    (1973).
18. Mathematical Model for Post Aeration (1973).
19. Optimum Treatment Plant Cost Estimating Program
    (1974) .
20. Waste Stabilization Ponds Cost Estimating Program
    (1974).
                                                       760

-------
21.  Granular Carbon Adsorption Cost Estimating Program
    (1974).
22.  Control  Schemes for the Activated Sludge Process
    (1974) .
23.  Cost Estimating Program for Disinfection by Ozona-
    tion (1974) .
24.  Nitrification/Denitrification Cost Estimating Pro-
    gram (1975) .
25.  Cost Estimating Program for Alternate Oxygen
    Supply Systems (1975) .
26.  Cost Estimating Program for Land Application
    Systems  (1975).
27.  Combustion Model for Energy Recovery from Sludge
    Incineration (1975) .
28.  Energy Consumption by Wastewater Treatment Plants
    (1975).
29.  Stream Model for Calculating BOD and DO Profiles
    (1976).
Table 2.  Computer programs produced as a result of
	contract activity.	

 1. Ammonia Stripping Mathematical Model for Waste-
    water Treatment (1968).
 2. Mathematical Model for Wastewater Treatment by
    Ion Exchange (1969).
 3. Mathematical Model of the Electrodialysis
    Process (1969).
 4. Mathematical Model of Tertiary Treatment by Lime
    Addition  (1969).
 5. Mathematical Model of Sewage Fluidized Bed Incine-
    rator Capabilities and Costs (1969).
 6. Reverse Osmosis Renovation of Municipal Waste-
    water (1969) .
 7. Methodology  for Economic Evaluation of Municipal
    Water Supply/Wastewater Disposal Including Con-
    siderations  of Seawater Distillation and Waste-
    water Renovation  (1970).
 8. Mathematical Model of Recalcination of Lime
    Sludge with  Fluidized Bed Reactors  (1970).
 9. Computerized Design and Cost Estimation for
    Multiple Hearth Incinerators (1971).
 10. Cost program for Desalination Process  (1971).
 EXECUTIVE PROGRAM

 The major product of all this effort has been the
 "Executive Digital Computer Program for Preliminary
 Design of Wastewater Treatment Systems."   It was
 realized that a tool was needed which would allow
 the process designer to select a group of  unit pro-
 cesses, arrange them into a desired configuration,
 and then calculate the performance and cost of the
 system as a whole.  The Executive Program  meets
 this need by simulating groups of conventional
 and advanced wastewater treatment unit processes
 arranged in any logical manner.  Each unit pro-
 cess is handled as a separate subroutine which
 makes it possible to add additional process models
 to the program as they are developed.  There are
 presently 24 process subroutines in the program, and
 these are listed in Table 3.  Additional subroutines
 are planned to be included in the future,  and a
 tentative list is shown in Table 4.

 The first step in using the Executive Program is to
 draw the desired system diagram showing the unit pro-
 cesses to be used and the connecting and recycle
 streams.   All streams and processes are then numbered
 by the program user.  Figure 1 depicts a typical,
 conventional activated sludge treatment system with
 incineration for sludge disposal.  Volume  and char-
acteristics of the influent stream to the system and
design variables for each process used must be
supplied as program input.  By an iterative technique,
each process subroutine is called in the proper
sequence and all stream values are recomputed until
the mass balances within the treatment system are
satisfied.  Performance, cost, and energy requirements
for each unit process and the system as a whole are
included in the final printout.
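
     The iterative idea can be illustrated with a toy recycle loop (not
taken from the Executive Program): the unit-process routines are called in
sequence and stream values recomputed until the mass balance closes within
a tolerance.  The removal efficiency and recycle fraction in the Python
sketch below are hypothetical.

    # Toy illustration of the iterative idea (not Executive Program code):
    # with a recycle stream present, the unit-process routines are called in
    # sequence and stream values recomputed until the mass balance closes.
    # The removal efficiency and recycle fraction are hypothetical.

    def solve_recycle(influent_load, removal=0.6, recycle_fraction=0.3, tol=1e-6):
        recycle_load = 0.0
        for iteration in range(1000):
            mixed = influent_load + recycle_load        # stream mixer
            removed = removal * mixed                   # treatment process
            effluent = mixed - removed
            new_recycle = recycle_fraction * removed    # e.g., thickener overflow
            if abs(new_recycle - recycle_load) < tol:   # mass balance has closed
                return effluent, iteration
            recycle_load = new_recycle
        raise RuntimeError("mass balance did not converge")

    print(solve_recycle(influent_load=1000.0))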


Table 3.  Unit process models contained in the
	Executive Program.	

 1.    Preliminary Treatment
 2.    Primary Sedimentation
 3.    Activated Sludge-Final Settler
 4.    Stream Mixer
 5.    Stream Splitter
 6.    Single Stage Anaerobic Digestion
 7.    Vacuum Filtration
 8.    Gravity Thickening
 9.    Elutriation
10.    Sand Drying Beds
11.    Trickling Filter-Final Settler
12.    Chlorination-Dechlorination
13.    Flotation Thickening
14.    Multiple Hearth Incineration
15.    Raw Wastewater Pumping
16.    Sludge Holding Tanks
17.    Centrifugation
18.    Aerobic Digestion
19.    Post Aeration
20.    Equalization
21.    Second Stage Anaerobic Digestion
22.    Land Disposal of Liquid Sludge
23.    Lime Addition to Sludge
 24.    Rotating Biological Contactor-Final Settler
Table 4.  Unit process models to be added to the
	Executive Program	

 1.    Ammonia Stripping of Secondary Effluent
 2.    Granular Carbon Adsorption
 3.    Ion Exchange
 4.    Electrodialysis
 5.    Reverse Osmosis
 6.    Bar Screening
 7.    Comminution
 8.    Grit Removal
 9.    Flow Measurement
10.    Waste Stabilization Ponds
11.    Microscreening
12.    Rough Filtration
13.    Multi-Media Filtration
14.    Ozonation
15.    Nitrification
16.    Denitrification
Detailed cost data applicable for preliminary design
estimates are generated by the Executive Program.  Con-
struction cost (in dollars), amortization cost, opera-
tion and maintenance cost, and total treatment cost
(all in cents per 1,000 gallons of wastewater treated)
are calculated individually for every unit process,
and a sum total of each cost is given for the entire
system.  Capital cost is also computed by adding onto
construction expenses the costs of yardwork, land,
engineering, administration, and interest during con-
struction.  All of the cost information can be updated
or backdated with respect to time by means of cost
indices that are supplied as input to the program.
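
     As a small illustration of the cost-index idea, a cost recorded at one
index level can be scaled to another, as in the sketch below; the base cost
and index values are hypothetical.

    # Sketch of updating (or backdating) a construction cost with a cost
    # index, as the program does with user-supplied indices.  The base cost
    # and index values shown are hypothetical.

    def update_cost(base_cost, base_index, target_index):
        return base_cost * target_index / base_index

    base_cost    = 1500000.0    # dollars at the time the cost curve was developed
    base_index   = 175.0        # construction cost index at that time
    target_index = 240.0        # index for the estimate date
    print(update_cost(base_cost, base_index, target_index))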
                                                      761

-------
The Executive Program cannot be used for extremely
detailed design purposes.   However, it can be a
valuable preliminary design tool for the consulting
engineer or planner.  The performance of existing
or proposed wastewater treatment plants can be
simulated along with providing cost estimates for
building and operating these plants.  It is also
possible to optimize a particular treatment system
by varying design parameters and noting the effect
on performance and cost.  Cost-effectiveness studies
can be made by comparing alternate treatment systems.
Initial studies along these lines are becoming of
increasing importance because of the soaring costs
of plant construction that are now being experienced.

A recent application of the Executive Program was
an investigation of the potential economic advant-
ages associated with 261 different methods for
treating and disposing of sewage sludge.  Sludge
production and the costs of constructing and operating
the various systems were computed.  Each system was
either primary or activated sludge treatment followed
by some combination of the following 12 sludge hand-
ling processes--lime stabilization, gravity thicken-
ing, air flotation thickening, single-stage anaerobic
digestion, two-stage anaerobic digestion, aerobic
digestion, elutriation, vacuum filtration, centri-
fugation, sludge drying beds, multiple hearth incine-
ration, and land disposal of liquid sludge.  The
outcome of the study showed that the cost (in January
1974 dollars per ton of dry solids processed) for
treating and disposing of sewage sludge ranges from
about $30 per ton for anaerobic digestion followed
by dewatering on sand drying beds to over $100 per
ton when the sludge is dewatered by vacuum filtra-
tion or centrifugation and then incinerated.  Treat-
ment and disposal of sludges produced in municipal
wastewater treatment plants were shown to account
for as much as 60% or as little as 20% of the total
cost of treatment.  Therefore, careful consideration
should be given to selecting the sludge handling
method which meets the site-specific constraints at
a minimum cost.  The Executive Program, which is
capable of examining the cost and performance of a
wide variety of alternative sludge handling schemes,
can be used as a management tool to narrow the range
of options when design conditions are known.
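
     A screening study of this kind can be organized by enumerating candidate
sludge handling trains and ranking them by unit cost, as in the schematic
sketch below; the candidate steps and per-ton costs shown are hypothetical
placeholders, not the study's figures.

    # Schematic sketch of a sludge handling screening study: enumerate
    # candidate process trains and rank by unit cost.  The candidate steps
    # and per-ton costs are hypothetical placeholders, not the study's data.
    from itertools import product

    STABILIZATION = {"anaerobic digestion": 12.0, "aerobic digestion": 18.0,
                     "lime stabilization": 15.0}
    DEWATERING    = {"sand drying beds": 14.0, "vacuum filtration": 35.0,
                     "centrifugation": 32.0}
    DISPOSAL      = {"land disposal": 10.0, "incineration": 45.0}

    def rank_trains():
        trains = []
        for stab, dewater, dispose in product(STABILIZATION, DEWATERING, DISPOSAL):
            cost = STABILIZATION[stab] + DEWATERING[dewater] + DISPOSAL[dispose]
            trains.append((cost, " -> ".join((stab, dewater, dispose))))
        return sorted(trains)

    for cost, train in rank_trains()[:3]:       # three cheapest hypothetical trains
        print("$%5.1f/ton  %s" % (cost, train))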

The Executive Program has been around for several
years now, beginning with its original development
in 1968.  The model has been expanded, modified,
and corrected many times since then, and it will
continue to change in the future.  The goal will
remain the same:  to provide the best possible
characterization of the cost and performance of
municipal wastewater treatment systems.

MODELS FOR THE ACTIVATED SLUDGE PROCESS

Considerable effort has been expended in develop-
ing more accurate models for the activated sludge-
final settling process.  Previous models that were
produced by various researchers covered a wide
range of forms corresponding to differing sets of
assumptions about the hydraulic and biological
relationships believed to be significant in the
process.  Because of the problems of measurement
and the difficulty of fitting data to complex
models, simplified models were often used which
either omit or make some plausible assumption
concerning the role of various factors in the
process.

In all, four different digital computer models for
the activated sludge process have been developed.
The first, CSSAS (Continuous Steady State Activated
Sludge),  is a  steady  state model  which is  flexible
enough to  simulate  the performance  of any  configura-
tion proposed  (complete mix, plug flow,  multiple
aeration  tanks,  step  aeration,  step return flow,
contact stabilization, extended aeration,  etc).
Two classes of microorganisms  are considered:
heterotrophs which  use 5-day BOD  as substrate  and
Nitrosomonas which  use ammonia nitrogen  as sub-
strate to  produce new cells.   The model  allows the
maximum rate constant for synthesis to vary with
process loading.  The second program,  FADAS (Fill
and Draw Activated  Sludge), attempts to  simulate
the biological activity in a fill and draw bench
experiment where activated sludge is mixed with
substrate  in any proportion.   The third  program,
TDAS (Time-Dependent  Activated Sludge),  simulates
the dynamic behavior  of the biological aspects of
the activated  sludge  process.   The  model numerically
integrates the mass balance and biological  rate
equations  which  are assumed to represent the
process.   Three  classes of microorganisms  are
considered:  heterotrophs, Nitrosomonas, and
Nitrobacter.  This model can also be used  to
investigate the  potential advantages associated
with the following control schemes:   dissolved
oxygen control,  sludge wasting control, and
sludge inventory control.  The  fourth  program,
CMAS (Completely Mixed Activated Sludge), is used
to simulate the  performance of conventional and
modified activated sludge, separate  nitrification,
or separate denitrification.   With  an  adjustment
of the process parameters, it  can also be used
to characterize  the pure oxygen activated sludge
system.
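
The kind of numerical integration performed by a time-dependent model can
be illustrated with a single organism class growing on a single substrate
in a completely mixed reactor, stepped forward with explicit Euler; the
kinetic constants in the Python sketch below are hypothetical, and the
actual models track several organism classes and many more state variables.

    # Minimal sketch of time-dependent simulation: Monod growth of one
    # organism class on one substrate in a completely mixed reactor,
    # integrated with explicit Euler steps.  Kinetic constants are
    # hypothetical; the actual models carry several organism classes.

    def simulate(hours=48.0, dt=0.01,
                 s_in=200.0,        # influent substrate, mg/L
                 d=0.15,            # dilution rate, 1/hr
                 mu_max=0.25,       # maximum specific growth rate, 1/hr
                 ks=60.0,           # half-saturation constant, mg/L
                 yield_coeff=0.5):  # mg biomass produced per mg substrate used
        s, x = 200.0, 50.0          # initial substrate and biomass, mg/L
        for _ in range(int(hours / dt)):
            mu = mu_max * s / (ks + s)                  # Monod specific growth rate
            dx = (mu - d) * x                           # biomass balance
            ds = d * (s_in - s) - mu * x / yield_coeff  # substrate balance
            x += dx * dt
            s += ds * dt
        return s, x

    print(simulate())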

SPECIALIZED COST ESTIMATING PROGRAMS

When making preliminary cost estimates for building
and operating certain wastewater  treatment systems,
it is often necessary to have more detailed cost
data.   For this  reason, special economic models were
developed  for several particular  applications.

A waste stabilization pond cost estimating program
computes the costs of stabilization ponds and aerated
lagoons along with influent pumping, surface mechani-
cal aerators,  embankment protection, and chlorination
facilities.  The granular carbon  adsorption cost
estimating program calculates the costs of influent
pumping, carbon  contactors,  regeneration facilities,
and initial carbon required.  The nitrification/
denitrification  cost  estimating program predicts
the costs  of dispersed floe systems  for the removal
of nitrogen from wastewater.  A cost estimating pro-
gram for wastewater treatment by direct land appli-
cation computes the costs of preapplication treatment,
transmission,  storage, field preparation, distribution,
renovated water recovery,  and monitoring facilities.
All of these economic models factor  in the costs  of
yardwork, contingencies, engineering, land, adminis-
tration, and interest during construction.

CONCLUSION

The primary goal of this modeling effort is to
improve the rule-of-thumb or hand calculation
method of process design which  is still commonly
used today.  The principal deterrents to better
process design are usually the manual effort
required in computing the cost and performance
of alternative designs and the  labor required to
accumulate and correlate the large amount of
experimental process design performance data
which is often available.   The mathematical com-
puter model can minimize the computational work
required for examining alternative designs, and,
                                                      762

-------
if the model has been correctly developed, it
will reflect the best experimental and scientific
information obtainable.  Thus, the process designer
has within his grasp the tools for quantitatively
selecting the most cost-effective system of processes
to achieve any desired wastewater treatment goal.  The
Systems and Economic Analysis Section within EPA is
very much interested in promoting the use of compute-
rized design techniques in order to achieve better
treatment at a minimum cost.
                                  RWP    -  raw wastewater pumping
                                  PREL   -  preliminary treatment
                                  MIX    -  stream mixer
                                  PRSET  -  primary sedimentation
                                  AERFS  -  activated sludge/final settler
                                  SPLIT  -  stream splitter
                                  CHLOR  -  chlorination/dechlorination
                                  THICK  -  gravity thickening
                                  DIG    -  single stage anaerobic digestion
                                  DIG2   -  second stage anaerobic digestion
                                  SHT    -  sludge holding tanks
                                  VACF   -  vacuum filtration
                                  MHINC  -  multiple hearth incineration
               Figure 1.  System diagram for a conventional activated sludge treatment plant.
                                                      763

-------
                              THE ECOLOGICAL MODEL AS APPLIED TO LAKE WASHINGTON
                    Carl W. Chen
                  Tetra Tech, Inc.
               Lafayette, California
                                                 Donald J. Smith
                                                 Tetra Tech,  Inc.
                                              Lafayette, California
INTRODUCTION

A lake is a giant reactor where many physical, chemi-
cal and biological processes take place.  Heating and
cooling at the water surface due to solar insolation,
conductance, back radiation and evaporation generate
the thermal stratification and seasonal overturn
phenomenon.  Thermal stratification separates the lake
water into layers with different physical, chemical,
and biological characteristics.  It influences where
the tributary inflows are deposited and where outflows
are withdrawn.

Tributary inflows bring in plant nutrients (carbon,
nitrogen and phosphorus) either in the dissolved form
or as particulate organic matter.  Bacteria or fungi
decompose organic matter to liberate nutrients and
consume oxygen in the process.  With the help of solar
energy, phytoplankton reuse the nutrients to synthe-
size new organic materials and produce oxygen.  Under
stratified conditions, phytoplankton activity predomi-
nates in the epilimnion (above the thermocline) and
bacterial activity predominates in the hypolimnion,
creating an imbalance for oxygen resources.  At the
surface, the water may be saturated with oxygen, but
the hypolimnion water may become anaerobic, killing
fish and other organisms residing near the bottom.

Biomass generated by bacteria and phytoplankton serves
as food for zooplankton, benthic animals, and fish.
Carbon, nitrogen and phosphorus contained in the bio-
mass are conserved in each succession of organisms.
Upon death, the organisms become organic material to be
worked on by bacteria.

This paper describes an ecological model that simu-
lates the physical, chemical and biological processes
of the complex lake ecosystem.  The model represents
the state of ecosystem by a set of water quality para-
meters including biomass of various organisms.  The
model calculates throughout the annual cycle the
vertical profiles of temperature, dissolved oxygen,
biochemical oxygen demand (BOD), pH, plant nutrients
(CO2, NH3, NO2, NO3, PO4), particulate organic matter,
organic sediment, algae, zooplankton, benthic animals,
and fishes.

The model provides a scientific tool for engineers to
evaluate the effectiveness of various management alterna-
tives.  It also serves for the multidisciplinary in-
tegration of bits and pieces of information that have
been accumulated in various branches of science, i.e.,
meteorology, hydrology, hydrodynamics, limnology, eco-
logy, chemistry, biology, and sanitary engineering.

Lake Washington data, collected by Dr. Edmondson of
the University of Washington, were used for the model
calibration and sensitivity analyses.

LAKE WASHINGTON

Lake Washington has a surface area of 110 x 10^6 m^2, a
volume of 3.6 x 10^9 m^3, and a maximum depth of 65 m.

Figure 1 shows the location of Lake Washington and its
tributaries including waste water discharges and storm
water overflows.  As shown, the Sammamish River enters
the lake in the north and the Cedar River in the south.
Numerous small creeks empty into the lake for the local
drainage.  The outflow is regulated by the ship canal
connecting the lake and Puget Sound.  Table 1 sum-
marizes the inflow and outflow hydrology of Lake
Washington.

Figure 1.  Lake Washington and Its Tributaries

Table 2 presents the mean monthly weather conditions
for the years 1966 and 1967 as observed at the Tacoma
Airport.  The weather conditions are assumed the same
as those experienced by the lake.

In 1941, the lake received one discharge of secondary
effluent.  By 1963, the sewage discharge reached about
0.53-0.75 CMS (cubic meter per second) or 12-17 MGD
(million gallons per day), exclusive of combined storm
water overflows.  In 1963, sewage diversion began and
by February, 1968 all sewage was exported.

-------
Table 1.  Inflow and Outflow Hydrology
[Monthly inflows for the Sammamish River, Cedar River, local drainage,
and waste water discharges, and the regulated lake outflow; the
tabulated values and footnote are not legible in this reproduction.]
Table 2.  Weather Conditions at Tacoma Airport
[Monthly mean values of barometric pressure, cloud cover, shortwave
radiation*, longwave radiation*, and dry bulb and wet bulb temperatures
for January through December; the tabulated values are not legible in
this reproduction.]
         * Computed from latitude 47.36 N, longitude 122.2 W, sun angles, and cloud cover for 1966-67.

 The secondary effluent contained 4-12 mg/l of phos-
 phorus, 8-20 mg/l of nitrogen, and 5-25 mg/l of BOD.
 In 1963, sewage contributed about 84% of the total
 phosphorus and 40% of the total nitrogen input to the
 lake.  The total annual phosphorus and nitrogen inputs
 were estimated at 120,000 kg and 220,000 kg, respec-
 tively.

 According to Edmondson's data, the lake water  is well
 mixed horizontally.   A set of historical water quality
 data have been furnished by Edmondson for this study.

 ECOLOGICAL MODEL

 The concepts of the ecological model have been pre-
 sented previously  by  Chen (1).  A detailed description
 of modeling approaches,  mathematical formulation and
 solution techniques can  be found elsewhere (2).

 Basically, the physical  geometry of the lake is repre-
 sented by stacked  layers of  water.   Layers are added
 or deleted with the rise or  fall of the water surface.
 The cross sectional area,  volume, width and side slope
 of each layer are  input  items.

 Water quality parameters are defined for each layer,
 and are expressed  in  the appropriate unit. The object
 of the model is to calculate the parameter values for
 each layer as a function of  time.

 To facilitate the  computation, differential equations
 are developed to describe the rate of change of mass
 concentration or heat content as a function of such
 physical processes as  1)  deposition of tributary in-
 flow; 2) withdrawal of outflow;  3)  advection between
 layers; 4) diffusion  between layers;  5) sedimentation,
 if any, from the upper layer to  the lower layer;  6)
 reaeration of the surface (CO2 and O2); and 7) solar
 insolation near the surface.

 Deposition of tributaries  and the concomitant advection
 are idealized by Figure  2  which  shows  the sinking of
 the inflow to an appropriate layer  with approximately
 the same density.
[Schematic of the inflow sinking to the layer of matching density,
with outflow withdrawal and the resulting advection between layers.]
Figure 2.  Inflows, Outflows, and Concomitant Advection
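As a minimal illustration of the density-matched insertion idealized in Figure 2 (not the authors' code; the layer temperatures, the inflow temperature, and the simple freshwater density approximation below are all hypothetical), the sketch routes a tributary inflow to the stacked layer whose water density most nearly matches the density of the inflow.

```python
# Sketch: route a tributary inflow to the layer whose density best
# matches the inflow density (hypothetical data; illustrative only).

def water_density(temp_c):
    """Approximate freshwater density (kg/m^3) as a function of temperature (deg C)."""
    # Simple quadratic approximation around the density maximum near 4 deg C.
    return 1000.0 * (1.0 - 6.8e-6 * (temp_c - 4.0) ** 2)

def insertion_layer(layer_temps_c, inflow_temp_c):
    """Return the index of the layer whose density is closest to the inflow density."""
    rho_in = water_density(inflow_temp_c)
    diffs = [abs(water_density(t) - rho_in) for t in layer_temps_c]
    return diffs.index(min(diffs))

if __name__ == "__main__":
    # Layer 0 is the surface; temperatures decrease toward the bottom (stratified case).
    layer_temps = [18.0, 16.0, 12.0, 8.0, 6.0, 5.0]   # deg C, hypothetical
    cold_inflow = 7.0                                  # deg C river water
    print("Inflow deposited in layer", insertion_layer(layer_temps, cold_inflow))
```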

In addition, the mass  concentration of quality consti-
tuents can be modified by  the  biological processes of
1) the oxidation of ammonia, nitrite,  BOD, and detri-
tus; 2) photosynthetic oxygenation, nutrient uptake
and release of algae;  and  3) the growth, mortality,
and respiration of zooplankton,  fish and benthic ani-
mals.  The biological  processes  are usually represen-
ted by the product function of a temperature dependent
coefficient and the mass concentration of the reacting
constituents.  For the biological parameters, the
growth rate is expressed by a  hyperbolic function of
the substrate concentration.
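As a hedged sketch of these two rate forms (written in Python for illustration; the rate constants, temperature coefficient, and concentrations below are hypothetical, not the paper's calibrated values), the first function applies a temperature-dependent coefficient to the concentration of a reacting constituent, and the second evaluates a hyperbolic, Michaelis-Menten type growth rate as a function of substrate concentration.

```python
# Sketch of the two rate forms described above (hypothetical parameter values).

def first_order_rate(conc_mg_l, k20_per_day, temp_c, theta=1.047):
    """First-order reaction rate: temperature-corrected coefficient times concentration."""
    k_t = k20_per_day * theta ** (temp_c - 20.0)      # temperature-dependent coefficient
    return k_t * conc_mg_l                            # mg/l per day

def hyperbolic_growth(mu_max_per_day, substrate_mg_l, half_sat_mg_l):
    """Growth rate as a hyperbolic (saturating) function of substrate concentration."""
    return mu_max_per_day * substrate_mg_l / (half_sat_mg_l + substrate_mg_l)

if __name__ == "__main__":
    print(first_order_rate(conc_mg_l=2.0, k20_per_day=0.1, temp_c=12.0))
    print(hyperbolic_growth(mu_max_per_day=2.0, substrate_mg_l=0.05, half_sat_mg_l=0.025))
```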

Resulting differential equations are typically of the
form

$$\frac{d(VC)}{dt} = \sum Q\,C + \frac{dC}{dz} \pm S$$

where:  V      =  water volume
        C      =  mass concentration or water temperature
        t      =  time
        Q      =  advective flows
        dC/dz  =  diffusion between layers
        S      =  various quality constituents, sources,
                  sinks, and chemical, physical and biologi-
                  cal reactions within each layer

There are as many equations as there are quality con-
stituents and physical layers modeled.  Numerical solu-
tions are provided by the computer program, which calcu-
lates the C's from an initial time (t_0) to a short incre-
ment of time later (t_1 = t_0 + Δt), and so on (t_2, t_3, . . .)
in a recursive manner.  The time step of computation can
be as long as a day.
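A minimal sketch of this recursive computation is shown below (an explicit, daily Euler update over hypothetical layers, written in Python for illustration; it is not the program's actual numerical scheme, and the stand-in rate function lumps the source, sink, and exchange terms into simple decay and mixing coefficients).

```python
# Sketch: advance layer concentrations C from an initial time in daily
# increments (explicit Euler).  The rate function is a stand-in for the
# full source/sink and transport terms of the model.

def net_rate(c_layers, layer_index):
    """Hypothetical net rate of change (mg/l per day) for one layer:
    first-order decay plus diffusive exchange with neighboring layers."""
    decay = -0.05 * c_layers[layer_index]
    mix = 0.0
    for neighbor in (layer_index - 1, layer_index + 1):
        if 0 <= neighbor < len(c_layers):
            mix += 0.1 * (c_layers[neighbor] - c_layers[layer_index])
    return decay + mix

def simulate(c_initial, days, dt=1.0):
    """Recursively compute concentrations for each layer, one time step at a time."""
    c = list(c_initial)
    for _ in range(int(days / dt)):
        rates = [net_rate(c, i) for i in range(len(c))]
        c = [ci + dt * ri for ci, ri in zip(c, rates)]
    return c

if __name__ == "__main__":
    print(simulate([8.0, 6.0, 4.0, 2.0], days=30))   # hypothetical initial profile, mg/l
```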
The output contains the vertical distribution of
temperature, dissolved oxygen, BOD, alkalinity, pH,
CO2, NH3, NO3, PO4, coliform, algae (2 groups), zoo-
plankton, detritus, TDS, organic sediment, and benthic
animals.  Fish productivity, evaporation loss, and ice
formation are calculated for the surface on a per unit
area basis.

SIMULATION RESULTS

The model was applied to simulate 1) the pre-diversion
condition of Lake Washington, 2) the recovery of the
lake after sewage diversion, and 3) sensitivity
analyses.

Pre-diversion Case

Figure 3 shows the observed and computed temperature
profiles throughout the annual cycle.  The comparison
is good considering that the mean weather conditions
have been used as the model input.

[Monthly panels (JAN through DEC) of temperature (°C) versus depth,
comparing computed profiles with observed data.]
 Figure 3.  Calculated and Observed Temperature Profiles

 Figure 4 plots the concentration profiles of dissolved
 phosphorus as observed in the field and computed by
 the model.  Upon the onset of thermal stratification,
 the dissolved phosphorus at the surface is seen to be
 consumed by algae, creating a reversed concentration
 profile.

 Other quality profiles that have been plotted to illu-
 strate the reasonableness of the model include dissol-
 ved oxygen (DO), nitrate, and ammonia.

 Lake Recovery

 The model was set up to simulate the lake recovery
 after sewage diversion.  It was run for a time period
 of three years.  Sewage input was imposed during the
 first year and excluded for the last two years.  All
 other boundary conditions remained the same.
[Bimonthly panels (FEB, APR, JUN, AUG, OCT, DEC) of dissolved PO4-P
(µg/l) versus depth, comparing computed profiles with observed data.]
Figure 4.  Calculated and Observed Concentration Profiles of Phosphorus

Figure 5 shows the model responses for P, N, DO, algae
and pH at the surface and hypolimnion.  Representative
data observed during the pre-diversion years are plot-
ted with the first year results and those observed
during the post-diversion years are plotted with the
 third year results.

 The rapid recovery of the lake is predicted adequately
 by the model.  The end of year phosphorus concentra-
 tion is seen to reduce from 59 µg/l to 45 µg/l in one
 year and to 37 µg/l in two years after diversion.
 Nitrogen levels were reduced from 440 µg/l to 420 µg/l
 to 415 µg/l.

 The minimum hypolimnion DO improves from 1.9 mg/l to
 3.5 mg/l.  The algal density is shown to reduce from
 3.0 mg/l to 2.2 mg/l.  The observed chlorophyll a level,
 converted to biomass by an approximate factor of 1:20,
 was shown to have a dramatic reduction in 1967 as pre-
 dicted.

Sensitivity Analyses

 During the early phase of model development, the half
 saturation constant of CO2 was estimated at 0.6 mg/l
 carbon.  The CO2 exchange rate at the air-water inter-
 face was assumed to be 90% of the oxygen reaeration
 rate.

 As a result, the model never predicted a pH higher than
 8.4 during the summer.  To maintain a pH of 9.2 in the
 summer as observed by Edmondson, the sensitivity
 analysis indicated that the CO2 exchange rate should
 be 10% of the oxygen reaeration rate.  The half satu-
 ration constant for CO2 should be 0.025 mg/l carbon.
 The latter value has been confirmed by laboratory
 study (3).

 To assess the relative importance of oxygen sinks in
 the hypolimnion, model simulations were performed with
 1) the decay rates of detritus and organic sediment
 doubled; 2) the algal respiration rate reduced to zero;
 and 3) the waste input eliminated without modification
 of the initial conditions.
Figure 5.   Recovery  of  Lake After Sewage Diversion

Model responses  of DO profiles  are presented in
Figure 6.   The most  important sinks of  hypolimnion DO
are the respiration  of  algae and the decay of organic
materials  accumulated in the bottom.

The waste  inputs appear to contribute very little to
the direct oxygen consumption.   They stimulate the
growth of  algae  which settles and respires in the
hypolimnion.

SUMMARY AND CONCLUSIONS

A general  purpose water quality ecological model was
developed  and applied to Lake Washington.  The model
represents the Lake  as  stacked  layers of hydraulic
elements.  Hydraulic  routing, heat budget and mass
balance computations are performed with a daily time
step throughout  the  annual cycle.  The  output contains
the vertical distribution of temperature, dissolved
oxygen, BOD, alkalinity, pH, CO2, NH3, NO3, PO4,
coliform, algae (2 groups), zooplankton, detritus,
TDS, organic sediment, and benthic animals.  The evap-
oration loss, ice formation, surface algal produc-
tivity, and fish productivity for cold, warm and
benthic fishes are  also calculated.

Application of the model to pre- and post-diversion
cases of sewage from Lake Washington indicates the
validity of the results as compared to Edmondson's
data.  Sensitivity analyses indicate that the half satu-
ration constant for CO2 should be 0.025 mg/l.  The
sinks of hypolimnion oxygen are equally divided be-
tween the decay of organic materials accumulated near
the sediment and the respiration of settling algae.  The
model predicts the rapid recovery of the lake due to
the flushing effects of tributary inflows.


[Profiles of dissolved oxygen (mg/l) versus depth (m) for June, August,
and October, comparing the base case with each sensitivity case.]
  a.  Increased Decay Rates of NH3, NO2, BOD, Detritus
  b.  Reduced Algal Respiration
  c.  Reduced Waste Load Input
  Figure 6.  Model Responses to Sensitivity Analyses

REFERENCES

1.  Chen, C.W., "Concepts and Utilities of Ecological
    Model," J. Sanitary Engineering Division, ASCE 96,
    No. SA5, 1970.
2.  Chen, C.W., and G.T. Orlob, "Ecologic Simulation
    for Aquatic Environments," in Systems Analysis and
    Simulation in Ecology, Vol. III, Academic Press,
    N.Y., 1975.
3.  Goldman, J.C., W.J. Oswald, and D. Jenkins, "The
    Kinetics of Inorganic Carbon Limited Algal Growth,"
    Journal WPCF, Vol. 46, No. 3, March 1974.

-------
                                       A LIMNOLOGICAL MODEL FOR EUTROPHIC
                                             LAKES AND IMPOUNDMENTS
                    Robert G. Baca
         Water and Land Resources Department
           Battelle-Northwest Laboratories
                 Richland, Washington
                 Ronald C. Arnett
                Research Department
        Atlantic Richfield Hanford Company
                Richland, Washington
                       ABSTRACT
A general limnological model is formulated in terms of
key environmental variables including dissolved oxygen,
biochemical oxygen demand, temperature, phytoplankton,
zooplankton and the principal nutrients.  The major
controlling factors such as light, temperature, nutri-
ent loading rates, sediment interactions, and flow
patterns are integrated into the model formulation to
provide a detailed portrayal of the important limnetic
processes.  The model formulation  is generalized to
apply to well-mixed and stratified systems.  The
capabilities of the limnological model are demon-
strated with applications to three lakes:  Lake Wash-
ington near Seattle,  Lake Mendota and Lake Wingra at
Madison, Wisconsin.

                   INTRODUCTION

The problems associated with the eutrophication of
lakes and impoundments are becoming of increased con-
cern.  One preliminary survey1 of problem lakes and
reservoirs in the U.S. identified numerous cases
where the water quality has deteriorated to the extent
that restoration measures are needed.  Considerable
research has been devoted to understanding the role
of principal nutrients2,3,4 in controlling the rate of
eutrophication.  Particular emphasis has been placed
on applying this new knowledge to the development of
effective control and restoration measures5.  Although
a wide variety of techniques6 have been developed, no
general guidelines are currently available with which
to assess the technical and economic feasibility of
alternate techniques.  Considering the high costs
generally associated with the implementation of lake
rehabilitation techniques, there is a great need for
reliable predictive tools with which to assess and
evaluate the effectiveness of rehabilitation techniques
and to estimate the rates of lake recovery.  The
logical tool which can be used to develop such a
predictive capability is mathematical modeling.

In a recent research project7 for EPA, a general
methodology based on the application of modeling tech-
niques was developed for use in assessing the rates of
eutrophication in lakes and impoundments.  The
methodology uses three specific modeling techniques
for:  (1) estimating nutrient loading rates from land
use patterns and historical data8, (2) predicting the
long term changes in key eutrophication indicators as
functions of nutrient loading rates9, and (3) predict-
ing short term changes in several water quality param-
eters as functions of nutrient loading, climatic con-
ditions, lake morphometry, hydrologic features,
thermal and ecologic regimes10.  In this paper, we
present the generalized limnological model developed
for short term predictions (less than 10 years) with
results from recent applications.

                 MODELING APPROACH

The conceptual framework of the generalized model is
based on a description of the fundamental limnetic
processes such as heat transport, constituent transport,
hydromechanics, chemical and biological cycles.  A quasi-
two-dimensional approach is used based on a segment-
layer representation shown in Figure 1.

[Schematic showing the lake divided into longitudinal segments and
vertical layers, with river and tributary inflows entering individual
segments.]
       Figure 1.  Segment-Layer Representation

This approach reduces the multi-dimensional problem
into a series of one-dimensional ones.  The fundamental
conservation laws for mass and energy are used to
derive principal model equations.  Chemical and
biological cycling are modeled by first order kinetic
equations.
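A minimal sketch of this quasi-two-dimensional bookkeeping is shown below (hypothetical geometry, coefficients, and temperatures; it is not the model's actual data structure or numerical method): each segment carries its own vertical array of layer values, the vertical balance is advanced one segment at a time as a one-dimensional problem, and a simple horizontal exchange couples neighboring segments.

```python
# Sketch: segment-layer representation solved as a series of 1-D vertical
# problems coupled by horizontal exchange (hypothetical geometry and rates).

def vertical_step(layers, dz=1.0, diffusivity=0.5, dt=0.05):
    """One explicit 1-D vertical diffusion step within a single segment."""
    new = layers[:]
    for i in range(1, len(layers) - 1):
        new[i] = layers[i] + dt * diffusivity * (
            layers[i - 1] - 2 * layers[i] + layers[i + 1]) / dz ** 2
    return new

def horizontal_exchange(segments, rate=0.02, dt=0.05):
    """Exchange between corresponding layers of neighboring segments."""
    out = [seg[:] for seg in segments]
    for s in range(len(segments) - 1):
        for k in range(len(segments[s])):
            flux = rate * (segments[s + 1][k] - segments[s][k])
            out[s][k] += dt * flux
            out[s + 1][k] -= dt * flux
    return out

if __name__ == "__main__":
    # Two segments, each with five layers (surface first); hypothetical temperatures.
    segments = [[20.0, 18.0, 12.0, 8.0, 6.0], [19.0, 17.0, 11.0, 8.0, 6.0]]
    for _ in range(100):
        segments = [vertical_step(seg) for seg in segments]   # 1-D solve per segment
        segments = horizontal_exchange(segments)              # couple the segments
    print([[round(v, 2) for v in seg] for seg in segments])
```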

TRANSPORT

The heat and mass balance equations for the segment-
layer representation are

$$\frac{\partial T}{\partial t} + \frac{Q_v}{A}\frac{\partial T}{\partial z} = D_z\frac{\partial^2 T}{\partial z^2} + \frac{Q_{h,i}T_i - Q_{h,o}T}{A\,\Delta z} + H \qquad (1)$$

and

$$\frac{\partial C^{(k)}}{\partial t} + \frac{Q_v}{A}\frac{\partial C^{(k)}}{\partial z} + \frac{\omega}{\Delta z}C^{(k)} = D_z\frac{\partial^2 C^{(k)}}{\partial z^2} + \frac{Q_{h,i}C_i^{(k)} - Q_{h,o}C^{(k)}}{A\,\Delta z} + S^{(k)} \qquad (2)$$

where

     T, T_i          =  lake and inflow temperatures, °C
     C^(k), C_i^(k)  =  lake and inflow concentrations, mg/l
     Q_h,i, Q_h,o    =  horizontal inflow and outflow, m^3/day
     Q_v             =  vertical flow rate, m^3/day
     ω               =  settling velocity, m/day
     H, S^(k)        =  source or sink terms, °C/day, mg/l-day
     A, Δz           =  element surface area and thickness, m^2, m

The source term H accounts for the atmospheric heat
exchanges across the air-water interface, and S describes
the biogeochemical cycling in the aquatic system.  The
specific formulations for S are given in the following
sections.

DISPERSION AND MIXING
Wind shear on the water surface plays a major role in
generating epilimnetic mixing.  The following empiri-
cal formula is used to calculate vertical dispersion:

$$D_z = (a_1 + a_2 v_w)\,e^{-z/d} \qquad (3)$$

where v_w is wind speed, m/sec; d is the depth of the
thermocline, m; and a_1 and a_2 are empirical constants,
m^2/sec and m.  When well-mixed conditions exist, i.e., no
thermocline, the depth parameter, d, is set to 6
meters; this represents a minimum stirred depth.  Con-
vective mixing produced by density induced instabil-
ities is modeled as a mechanical mixing process.  The
procedure consists of checking the density profile
obtained from the predicted temperatures, locating
any region of instability and then mixing the adjacent
layers until a stable condition is reached.  The out-
come of this process is a mixed mean temperature and
concentration over the region of instability.
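The sketch below illustrates this mixing procedure under simplifying assumptions (equal-volume layers and a supplied density profile with hypothetical values; it is not the authors' code): wherever a region of instability is found, the region is replaced by its mixed mean and the check is repeated from the surface until the profile is stable.

```python
# Sketch: mechanical mixing of a density-unstable water column
# (equal-volume layers; densities indexed from the surface downward).

def convective_mix(densities):
    """Locate any region where density decreases with depth, replace it with
    its mixed mean, and repeat until the whole profile is stable."""
    rho = list(densities)
    i = 0
    while i < len(rho) - 1:
        if rho[i] > rho[i + 1]:                  # instability: denser water on top
            j = i + 1
            # Grow the mixed region downward until it is no denser than the layer below.
            while True:
                mean = sum(rho[i:j + 1]) / (j - i + 1)
                if j + 1 < len(rho) and mean > rho[j + 1]:
                    j += 1
                else:
                    break
            rho[i:j + 1] = [mean] * (j - i + 1)  # mixed mean over the unstable region
            i = 0                                # re-check from the surface
        else:
            i += 1
    return rho

if __name__ == "__main__":
    # Surface cooling has produced an instability in the top layers (hypothetical kg/m^3).
    print(convective_mix([999.95, 999.90, 999.80, 999.70]))
```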

PHYTOPLANKTON

The phytoplankton submodel is based on a carbon bal-
ance.  A single species formulation is used given by

$$\frac{dP}{dt} = (G_p - D_p)P \qquad (4)$$

where P is the phytoplankton concentration, mg-C/l;
G_p is the gross specific growth rate, 1/day; and D_p is
the death rate, 1/day.  The equation for G_p relates
the growth rate to the limiting nutrient concentration,
light intensity and temperature:

$$G_p = G_m G_l G_{np} \qquad (5)$$

The term, G_m, is the maximum specific growth rate and
is corrected for temperature according to a Q10 formula.
The light limiting term G_l is calculated using the
equation:

$$G_l = A\,I\,e^{-aI}$$

where A is a light constant, 1/lux; I is light inten-
sity, lux; and a is a photoinhibition factor, 1/lux.
Light intensity is distributed vertically through the
water column according to

$$I = I_o\,e^{-(\mu + g\bar{P})z} \qquad (7)$$

where I_o is surface light intensity, lux; μ is the extinc-
tion coefficient of water, m^-1; g is the self-shading
factor, m^-1 per mg-C/l; and P̄ is the average phytoplank-
ton concentration above depth z, mg-C/l.  The diurnal
pattern of light is calculated from the standard light
day equation:

$$I_o = I_{max}\,\tfrac{1}{2}\left(1 + \cos\frac{2\pi t}{\lambda}\right) \qquad (8)$$

where I_max is calculated from net short wave radiation;
t is time, hours; and λ is the day length factor, hours.
The factor, G_np, relates the growth rate to the concen-
tration of the principal nutrients:

$$G_{np} = \mathrm{Min}\left(\frac{C_1 + C_3}{K_n + C_1 + C_3},\;\frac{D_1}{K_p + D_1}\right) \qquad (6)$$

where C_1 and C_3 are ammonia and nitrate nitrogen,
mg-N/l; D_1 is inorganic phosphorus, mg-P/l; and K_n and
K_p are Michaelis constants.

The decrease in phytoplankton concentration occurs
through endogenous respiration, decomposition, sinking,
and zooplankton grazing.  The formulation for D_p is

$$D_p = R_p + C_g Z, \quad z \le d_e; \qquad D_p = F_p, \quad z > d_e \qquad (9)$$

where R_p is endogenous respiration, 1/day; F_p is the
decomposition rate, 1/day; C_g is zooplankton grazing,
1/mg-C-day; and d_e is the euphotic depth, m.  All algal
cells within the euphotic zone are treated as active
cells which photosynthesize in a lighted environment
and respire in a dark one; below the euphotic depth,
all algal cells are inactive cells in the death
phase and thus decomposing.

ZOOPLANKTON

The formulation of the zooplankton submodel is given
by

$$\frac{dZ}{dt} = (G_z - D_z)Z \qquad (10)$$

where G_z and D_z are the growth and death rates, 1/day.
The zooplankton growth and death rates are calculated
from

$$G_z = A_{zp}\,C_g\,\frac{P}{K_{mp} + P} \qquad (11)$$

$$D_z = R_z + F_z \qquad (12)$$

where A_zp is the conversion efficiency, decimal; C_g
is the grazing rate, 1/day; K_mp is a Michaelis constant,
mg-C/l; R_z is the respiration rate, 1/day; and F_z is a
predation rate, 1/day.  Growth and death rates are
corrected for temperature.

PHOSPHORUS

The phosphorus submodel considers algal uptake and
release, zooplankton release, degradation of organic
phosphorus (D_2) with consequent release of inorganic
phosphorus (D_1), loss of both organic and inorganic
phosphorus to the sediments (D_3), and anaerobic
release from the sediments.  The organic phosphorus
pool is assumed to be in a particulate form.  Settling
of particulate P is accounted for in the transport
equation.  The submodel equations are

-------
$$\frac{dD_1}{dt} = -G_pPA_{pp} + R_pPA_{pp} + D_zZA_{pz} + I_2D_2 - (I_1D_1) + ([I_3D_3]) \qquad (13)$$

$$\frac{dD_2}{dt} = -I_2D_2 - (I_4D_2) + \cdots \qquad (14)$$

$$\frac{dD_3}{dt} = (I_1D_1 + I_4D_2 - [I_3D_3]) \qquad (15)$$

where A_pp and A_pz are the yield coefficients; I_1 is
the sediment uptake rate, 1/day; I_2 is organic phosphorus
decay, 1/day; I_3 is sediment release, 1/day; and I_4
is sediment trapping, 1/day.  The terms in parentheses
apply for the hypolimnion; brackets designate
processes dependent on anaerobic conditions.  The rate
coefficients I_1, I_2, I_3 and I_4 are corrected for tem-
perature.

NITROGEN

The nitrogen submodel considers algal uptake and
release, zooplankton release, decay of organic (C_4)
and sediment (C_5) nitrogen, and the oxidation of
ammonia (C_1) and nitrite (C_2) to nitrate (C_3).  During
anaerobiosis, nitrification is inhibited and denitrifi-
cation occurs with the loss of nitrate.  Sediment
interactions include nitrate uptake and release of
ammonia.  The organic nitrogen pool is assumed to be
in particulate form; settling of particulate N is
accounted for in the transport equation.  The submodel
equations are

$$\frac{dC_1}{dt} = -J_1C_1 - G_pPA_{NP}\frac{C_1}{C_1+C_3} + R_pPA_{NP} + D_zZA_{NZ} + J_4C_4 + (J_5C_5) \qquad (16)$$

$$\frac{dC_2}{dt} = J_1C_1 - J_2C_2 \qquad (17)$$

$$\frac{dC_3}{dt} = J_2C_2 - G_pPA_{NP}\frac{C_3}{C_1+C_3} - J_3C_3 - (J_6C_3) \qquad (18)$$

$$\frac{dC_4}{dt} = D_zZA_{NZ} - J_4C_4 + \cdots \qquad (19)$$

where J_1 is the ammonium oxidation rate, 1/day; J_2 is the
nitrite oxidation rate, 1/day; J_3 is the denitrification
rate constant, 1/day; J_4 is organic nitrogen decay,
1/day; J_5 is sediment nitrogen decay, 1/day; J_6 is the
sediment uptake rate, 1/day; and A_NP and A_NZ are the
nitrogen to carbon ratios for algae and zooplankton,
mg-N/mg-C.  Denitrification is inhibited during aerobic
conditions.  The rate coefficients J_1, J_2, . . ., J_6
are adjusted for temperature.

BIOCHEMICAL OXYGEN DEMAND

The behavior of BOD is modeled by

$$\frac{dL_c}{dt} = -K_1L_c + a_o(F_pP + R_zZ) \qquad (20)$$

where L_c is the BOD5 concentration, mg/l; K_1 is the
decay rate, 1/day; and a_o is a stoichiometric constant,
mg-O2/mg-C.

DISSOLVED OXYGEN

The dissolved oxygen submodel considers the effects of
(1) temperature, (2) oxidation of suspended and dis-
solved organic matter, (3) benthic uptake, (4)
reaeration, and (5) algal photosynthesis, respiration and
decomposition:

$$\frac{dDO}{dt} = a_3(G_p - D_p)P - K_1L_c - a_1J_1C_1 - a_2J_2C_2 - \frac{L_b}{\Delta z} + K_r(DO_s - DO) \qquad (21)$$

where DO_s is the dissolved oxygen saturation, mg/l; L_b is
the benthic oxygen uptake rate, g/m^2-day; a_1, a_2 and a_3
are stoichiometric constants; and K_r is the reaeration
coefficient, 1/day.  All other variables are as
previously defined.

A simple linear relationship is used to model the
reaeration rate as a function of temperature and wind
speed:

$$K_r = (a_1 + a_2v_w)\,\theta^{(T-20)} \qquad (22)$$

where v_w is wind speed, m/day; a_1 and a_2 are empirical
coefficients; and T is temperature in °C.  θ is 1.075.

MODEL APPLICATIONS

The chemical and biological submodels described above
have been implemented into a generalized computer model.
The simulation procedure for a general application con-
sists of two steps.  First, the heat transport, mixing
and fluid flow are simulated for the entire period of
interest using 1 day time steps.  Next, these results
are input to the limnological model for solution of the
equations for constituent transport, which are solved,
using 4 to 12 hour time steps, for each of 11 constitu-
ents.  A finite difference technique is used to obtain
approximate solutions for the transport equations.

LAKE WASHINGTON

Lake Washington is a large, relatively deep, mesotrophic
lake located at Seattle, Washington.  The pollution
history and recovery of the lake has become a classic
example of how a eutrophic lake can show a positive
response to nutrient diversion.  Because of the lake's
depth and aerobic environment, the lake lacks signifi-
cant sediment-water interactions.  Consequently, Lake
Washington is a good case for testing a model's ability
to describe the transport of dissolved and particulate
matter, nutrient cycling and algal growth patterns as
they are influenced by the annual stratification cycle.

The limnological model was applied for the time period
April 1 through December 31, 1962.  This period was
selected because it represented that portion of the
year over which the chemical and biological changes
were most dynamic.  A single segment with 20 elements
was used to represent the lake geometry.  The inflow
quality data for the lake were adapted from estimates
developed by Chen11 in a previous modeling effort.  The
model results are compared with observed data in
Figure 2.  The good-to-excellent agreement of these
results indicates the model has a strong capability to
describe the direct and reciprocal relationships be-
tween algae dynamics and nutrient cycling.  Calibration
of the temperature model required about 10 minutes
computer time, on a PDP 11/45, and 30 minutes of staff

-------
time.  The  limnological model  required about 1 hour of
computer  and  staff time to calibrate.
[Depth-profile panels comparing model results with observed data for
temperature (°C), chlorophyll a (µg/l), and inorganic P (mg/l).]
          FIGURE 2.  Lake Washington Application

LAKE MENDOTA

Lake Mendota  is a large, moderately deep,  eutrophic
lake located  at Madison, Wisconsin.  The  largest of the
Madison lakes, it enjoys the notable distinction of
being one of  the  most studied and well  characterized
lakes in the  world.   The limnological patterns in the
lake are very diverse and complex.  The phytoplankton
cycle is characterized by a species succession typical
of eutrophic  lakes.   The nutrient cycles  are  very
dynamic and strongly related to the phytoplankton and
dissolved oxygen  cycles.  Thus, Lake Mendota  represents
a considerable challenge for any limnological model.

The model was applied to Lake Mendota for  the period
from May 8 through October 18, 1972.  The  major chem-
ical and biological  transformations occur  within this
period.  Considerable in-lake data12 are available for
the lake.  The inflow quality was estimated from data
compiled by Sonzogni and Lee.13  The lake was repre-
sented by a  single segment with 16 layers.   The model
results are  compared with the observed data  in Fig-
ure 3.  These  results clearly show that  the  model has
a strong capability to track the algal growth and
nutrient cycling  patterns in stratified  lakes with
sediment interactions and anoxic regimes.  These
results further demonstrate that the model can realis-
tically relate the biological, chemical  and  physical
response of  the lake to major controlling factors.   The
model produced excellent results for temperature,
chlorophyll  a., inorganic phosphorus, and ammonia nitro-
gen.  Good-to-excellent results for dissolved oxygen
were obtained.  The results for nitrate  nitrogen indi-
cate that additional development work is needed to
improve the  nitrogen submodel.
[Depth-profile panels comparing model results with observed data for
temperature (°C), chlorophyll a (µg/l), dissolved oxygen (mg/l),
inorganic P (mg/l), and ammonia N (mg/l).]
          FIGURE 3.  Lake Mendota Application

-------
LAKE WINGRA

Lake Wingra is a small shallow lake located near the
University of Wisconsin at Madison.  The lake is pre-
sently in an advanced eutrophic state.  Because the
lake receives its nutrients from intermittent sources
(precipitation, dry fallout, urban and rural runoff)
the net loading rate history resembles a high frequency
random signal.  The instantaneous flushing rate also
varies from a few weeks to several months.  The net
result is that the chemical and biological state of the
lake are very sensitive to the boundary conditions dur-
ing certain parts of the year.

The limnological model was applied for the time period
April 1 through October 31, 1970.  Inflow quality data
were developed from measurements reported by
Kluesener14 and estimates of diffuse inputs (dry
fallout, rainfall).  The in-lake data collected by
Kluesener14 and Koonce15 were used for initial condi-
tions and the model comparison.  A single segment
representation composed of six layers was used.
The comparisons between the in-lake data and the model
results show good agreement for most water quality
parameters.  Temperature and dissolved oxygen were
modeled exceptionally well.  Since the biomass data
were calculated from cell counts and volumes there are
some questions about the comparability of the phyto-
plankton biomass results.  Nevertheless, the model
results are quite reasonable and show the proper rela-
tionship between parameters.
[Time-series comparison over the calendar year of model results with
observed data for temperature, dissolved oxygen, phytoplankton biomass,
and PO4-P.]
          FIGURE 4.  Lake Wingra Application

                       CONCLUSIONS

The general capability of the models developed  in
this work has been demonstrated with applications to
three lakes:  Lake Washington at Seattle, Lakes
Mendota and Wingra at Madison, Wisconsin.  These
                                                    three lakes represent a class of eutrophic lakes with
                                                    different morphometric, meteorologic, hydrologic,
                                                    thermal,  and ecologic regimes.  The comparisons
                                                    between observed data and model results demonstrate
                                                    that the models have a general capability to track
                                                    the seasonal water quality patterns in limnetic
                                                    systems.

                                                    The generalized limnological model presented in this
                                                    paper can provide a convenient interpretative tool
                                                    and can be used to develop an understanding of the
                                                    limnetic system as a whole.  By providing information
                                                    on the cause and effect relationships, these models
                                                    can help expand our insight and improve our abilities
                                                    to predict the ecological consequences of altering
                                                    various controlling factors.
                        REFERENCES

 1.  Ketelle, M. J. and  P.  D.  Uttormark.   "Problem Lakes
     in the United States,"  University of  Wisconsin,
     Madison, WI, December 1971.
 2.  Likens, G. E. (Editor).  Nutrients and Eutrophica-
     tion:  The Limiting-Nutrient  Controversy,  American
     Society of Limnology and Oceanography,  Inc.,
     Lawrence, KS, 1972.
 3.  Allen, H. E. and J.  R.  Kramer (Editors). Nutrients
     in Natural Waters, Wiley-Interscience,  New York,
     NY, 1972.
 4.  Eutrophication: Causes, Consequences,  Correctives,
     Proceedings of a Symposium, National  Academy  of
     Sciences, Washington, D. C.,  1969.
 5.  Anon.  "Measures for the Restoration  and Enhance-
     ment of Freshwater Lakes," EPA-430/9-73-005,  U.S.
     Environmental Protection Agency,  Washington,  D. C.,
     1973.
 6.  Dunst, R. C., et al.  "Survey of Lake Rehabili-
     tation Techniques  and Experiences," Technical
     Bulletin No. 75, Department of Natural  Resources,
     Madison, WI, 1974.
 7.  Baca, R.G., R. C. Arnett, W.  C. Weimer, L.  V.  Kimmel,
     H. E. McGuire, A. F. Gasperino and A. Brandstetter.
     "A Methodology for Assessing  Eutrophication of Lakes
     and Impoundments,"   Battelle, Pacific Northwest
     Laboratories, Richland, WA, January 1976.
 8.  Weimer, W. C., H. E. McGuire and A. F. Gasperino.
     "A Review of Land Use Nutrient Loading  Rate Rela-
     tionships,"  Battelle, Pacific Northwest Labora-
     tories, Richland, WA, January 1976.
 9.  Baca, R. G., L.  V. Kimmel and W.  C. Weimer.   "A
     Phosphorus Balance Model for Long Term Prediction
     of Eutrophication in Lakes and Impoundments,"
     Battelle, Pacific Northwest Laboratories, Richland,
     WA, January 1976.
10.  Baca, R. G. and R. C. Arnett.  "A Limnological
     Model for Eutrophic Lakes and Impoundments,"
     Battelle, Pacific Northwest Laboratories, Richland,
     WA, January 1976.
11.  Chen, C. W. and G. T. Orlob.  "Ecologic Simulation
     for Aquatic Environments,"  Water Resources Engin-
     eers, Inc., Walnut Creek, CA, December  1972.
12.  Sonzogni, W. C.   "Effects of Nutrient Input Reduc-
     tion on Eutrophication of the Madison Lakes,"  Ph.D.
     Thesis, University of Wisconsin,  1974.
13.  Sonzogni, W. C.  and G. F. Lee.  "Diversion  of  Waste-
     Waters from Madison Lakes," Environmental Engineer-
     ing,  ASCE, 100,  EG1, pp. 153-170, February  1974.
14.  Kluesener, J. W.  "Nutrient Transport and Transforma-
     tions in Lake Wingra," Ph.D. Thesis, University of
     Wisconsin, 1972.
15.  Koonce, J. F.   "Seasonal Succession of  Phytoplankton
     and a Model of the Dynamics of Phytoplankton  Growth
     and Nutrient Uptake," Ph.D. Thesis, University of
     Wisconsin, 1972.

-------
                               MATHEMATICAL MODELING OF PHYTOPLANKTON DYNAMICS IN

                                             SAGINAW BAY, LAKE HURON
                 V. J. Bierman, Jr.
        U.S. Environmental Protection Agency
            Large Lakes Research Station
      Environmental Research Laboratory-Duluth
                Grosse Ile, Michigan
                      D. M. Dolan
         U.S. Environmental Protection Agency
            Large Lakes Research Station
       Environmental Research Laboratory-Duluth
                 Grosse Ile, Michigan
     A mathematical model of phytoplankton production
has been applied to a set of physical, chemical, and
biological data from Saginaw Bay, Lake Huron.  The
model includes five phytoplankton types, two zooplank-
ton types, and three nutrients:  phosphorus, nitrogen,
and silicon.  The phytoplankton types include diatoms,
greens, both nitrogen-fixing and non-nitrogen fixing
blue-greens and "others".

     The purpose of the paper is to illustrate the
use of the model in both research and management ap-
plications.  A major research use to be discussed is
the interpretation of experimental data.  An example
is the calibration of model output for total phos-
phorus concentration to actual field data.  This cali-
bration indicated the possibility of a previously
unconsidered phosphorus source influencing the bay
in the fall of 1974.

     An important management application of the model
is its use as a tool for comparing the effects of var-
ious wastewater management strategies.  An example is
the simulation of differences in response among the
various phytoplankton types as a function of nutrient
load reduction in Saginaw Bay.  These examples and
others are discussed in this paper.

                   Introduction

     Mathematical modeling techniques can provide a
quantitative basis for the comparison of various
management strategies designed to reduce waste load-
ings to receiving bodies of water.  Different manage-
ment strategies for the Great Lakes have been analyz-
ed in this manner.  Thomann, et al. have used these
techniques to investigate the effects of phosphorus
and nitrogen reduction on chlorophyll levels in Lake
Ontario.1  Bierman, et al. have investigated the ef-
fects of changes in phosphorus, nitrogen, and silicon
loadings on phytoplankton biomass in Saginaw Bay,
Lake Huron.2

     Exhaustive research must be conducted with math-
ematical models prior to their use as management
tools, to ensure that they accurately describe the
particular physical, chemical and biological process-
es that they were designed to simulate.  Canale and
Middlebrooks, et al. have reported on various re-
search oriented whole-system and component models
which were designed to obtain greater insight with re-
gard to chemical and biological processes in aquatic
ecosystems.3,4

     The present work is part of the International
Joint Commission's Upper Lakes Reference Study invol-
ving Saginaw Bay, Lake Huron.  The ultimate goal of
this work is to develop a mathematical model which
can be used both to describe the physical, chemical
and biological processes that occur in Saginaw Bay
and to predict the effects of reduced waste-loadings.

     Model development is proceeding along two par-
allel pathways.  The first of these involves the
development of research-oriented process models which
include biological and chemical detail but which,
for simplicity, do not include any spatial detail.
The second pathway involves the development of an
engineering-oriented water quality model which des-
cribes, as closely as possible, the actual physical
system, including spatial detail.  At any point in
time, the water quality model will simulate those
chemical and biological processes which have been
successfully investigated and developed using the
spatially-simplified model.  There is constant feed-
back between the above two pathways and constant
interaction between the entire modeling effort and
an ongoing sampling program on Saginaw Bay.

     The purpose of the present paper is to de-
scribe the basic concepts of the Saginaw Bay model
and to present sample output from the model which
illustrates its use both as a research tool and as
a management tool.  All reported results were ob-
tained using a spatially-simplified model applied
to the inner portion of Saginaw Bay (Figure 1),
which has been assumed to be a completely-mixed
reactor.

                  Model Concepts

     The basic model equations and preliminary simu-
lations appear elsewhere.5,6  The compartments in
the model are five phytoplankton, two zooplankton,
higher predators, and three nutrients:   phosphorus,
nitrogen, and silicon (Figure 2).  The phytoplankton
types are diatoms, greens, both nitrogen-fixing and
non-nitrogen-fixing blue-greens, and "others", mostly
dinoflagellates and cryptomonads in Saginaw Bay.

     The motivation for a multi-class modeling ap-
proach is that different classes of algae have very
different nutrient requirements; for example, dia-
toms have an absolute requirement for silicon and
certain types of blue-greens can fix atmospheric
nitrogen.  In addition,  not all of these classes
have the same nuisance characteristics.  Diatoms
and green algae are grazed by zooplankton, but
blue-green algae are not significantly grazed and
can form objectionable floating scums.

     A unique feature of the model is that cell growth
is considered to be a two-step process involving sepa-
rate nutrient uptake and cell synthesis mechanisms.
The motivation for this variable stoichiometry ap-
proach is that an increasingly large body of experi-
mental evidence indicates that the mechanisms of nu-
trient uptake and cell growth are quite distinct.7,8,9,10,11
The model allows nutrient uptake in excess
of a cell's immediate metabolic needs.  Specific cell
growth rates are assumed to be dependent on the intra-
cellular levels of these nutrients, in contrast to the
use of Michaelis-Menten equation for relating growth
rates directly to extracellular nutrient concentra-
tions.
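As a hedged illustration of this two-step idea, the sketch below uses a simple cell-quota formulation with entirely hypothetical parameter values; it is not the Saginaw Bay model's working equations.  Uptake transfers extracellular nutrient into an intracellular pool, and growth is driven by the intracellular quota rather than by the external concentration.

```python
# Sketch: two-step growth with variable stoichiometry (hypothetical parameters).
# Uptake fills an intracellular nutrient quota; growth depends on the quota,
# not directly on the external concentration.

def uptake_rate(external_p, quota, v_max=0.04, k_half=0.01, q_max=0.06):
    """Nutrient uptake (mg-P per mg dry weight per day), saturating in external
    concentration and shutting off as the cell quota approaches its maximum."""
    saturation = external_p / (k_half + external_p)
    room_left = max(0.0, (q_max - quota) / q_max)
    return v_max * saturation * room_left

def growth_rate(quota, mu_max=2.0, q_min=0.005):
    """Specific growth rate (1/day) as a function of the intracellular quota:
    zero at the minimum quota, approaching mu_max well above it."""
    if quota <= q_min:
        return 0.0
    return mu_max * (1.0 - q_min / quota)

if __name__ == "__main__":
    biomass, quota, external_p = 1.0, 0.02, 0.03   # mg/l, mg-P/mg, mg-P/l (hypothetical)
    dt = 0.125                                     # days (3-hour step)
    for _ in range(8):                             # one simulated day
        quota += dt * (uptake_rate(external_p, quota) - growth_rate(quota) * quota)
        biomass *= 1.0 + dt * growth_rate(quota)
    print(round(biomass, 3), round(quota, 4))
```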

               Model Implementation

     A major problem in attempting to implement a com-
plex chemical-biological process model is the lack of
sufficient experimental data.  It is often possible
that more than one set of model coefficients could
produce acceptable agreement between the model output
and a given data set.  In the transition from single-
class to multi-class models, this problem becomes
particularly acute because it is no longer sufficient
to ascertain a range of literature values for a given
coefficient.  Multi-class models necessitate the de-
finition of class distinctions within this range.
Given the present state of the art of ecosystems mod-
eling and associated experimental work, many of the
coefficients in such models must simply be estimated.

     The primary operational differences among the
phytoplankton types in the model are summarized in
Table 1.  The working equations of the model and sen-
sitivity analyses of some of the more important co-
efficients have been presented elsewhere.

     One of the implicit assumptions of the model is
that cell biomass concentration is a more accurate in-
dicator of phytoplankton standing crop than is chloro-
phyll a concentration.  Furthermore, chlorophyll a is
a lumped parameter and cannot be used to distinguish
between different functional groups of phytoplankton.
For these reasons, chlorophyll a_ concentration does
not appear in any of the kinetic equations of the
model.

     The computer program which actually solves the
model equations is written in FORTRAN IV and is
structured in a form such that any number of phyto-
plankton and zooplankton types can be simulated,
along with any set of food web interactions for these
groups.  The version of the model in Figure 2 con-
sists of 23 simultaneous differential equations.
The solutions were obtained using a fourth-order
Runge-Kutta method with a time step of 30 minutes
for the nutrient kinetics equations and a time step
of 3 hours for the growth equations.  For a 365-day
simulation, approximately 5 minutes of CPU time are
required on an IBM 370/158 computer.  For the same
simulation, approximately 60 minutes of CPU time is
required on the Grosse lie Laboratory's PDP-8/e mini-
computer with floating point hardware.
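A minimal sketch of a fourth-order Runge-Kutta step for a system of differential equations is given below, written in Python for illustration rather than the FORTRAN IV of the actual program; the derivative function and step sizes are hypothetical stand-ins, not the model's 23 equations or its two-tier time stepping.

```python
# Sketch: one fourth-order Runge-Kutta step for a system of ODEs dy/dt = f(t, y).

def rk4_step(f, t, y, dt):
    """Advance the state vector y by one step of size dt."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def toy_derivatives(t, y):
    """Hypothetical two-compartment system: phytoplankton growing on a nutrient."""
    phyto, nutrient = y
    growth = 1.0 * nutrient / (0.05 + nutrient) * phyto
    return [growth - 0.1 * phyto, -0.02 * growth]

if __name__ == "__main__":
    state, t, dt = [0.1, 0.5], 0.0, 0.125          # dt = 3 hours, expressed in days
    for _ in range(8 * 30):                        # 30 simulated days
        state = rk4_step(toy_derivatives, t, state, dt)
        t += dt
    print([round(v, 4) for v in state])
```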

                 Experimental Data

     Chemistry and chlorophyll data were collected
for Saginaw Bay by Cranbrook Institute of Science.12
During 1974, 12 cruises were conducted and samples
were collected from 59 stations.  Samples were taken
at 1 meter and at all depths from 5 meters to the
bottom in 5-meter intervals.  A total of 111 sta-
tion-depth combinations were sampled on most of the
cruises.  Analyses were conducted for 21 chemical
parameters, including phytoplankton chlorophyll.
Since the present modeling study is restricted only
to the inner portion of Saginaw Bay (Figure 1), only
data from the 33 field stations in this region were
used.

     The phytoplankton data used were collected on
the above cruises by the University of Michigan at
1 meter depths.13  Species counts were conducted on
all samples.  In order to transform  these  data for
comparison with model output, the species  counts
were first integrated to the genus level.   At  this
level, cell volumes were assigned and these volumes
were then integrated to the level of the five  func-
tional groups in the model.  The cell volume con-
centrations at this level were converted to dry
weight (biomass) concentrations.

     The zooplankton data used were  collected  on
the above cruises by the University  of  Michigan at
the same station-depth combinations  as  the chemical
data.    Individual species counts were converted
directly to dry weight concentrations and  then inte-
grated to the level of the two functional  groups  in
the model.

     All phytoplankton and zooplankton mean concen-
trations are reported as the geometric mean ± 34% of
the area under the frequency distribution curve.
This is analogous to the arithmetic mean ± one full
standard deviation.  Analyses of the biological data
indicated that a log-normal distribution was a more
accurate representation than a normal distribution.
All other data are reported as the arithmetic mean
± one-half standard deviation.
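
     The reporting convention can be reproduced for log-normally distributed
data as in the sketch below, which computes the geometric mean and a
one-sigma multiplicative interval of a sample; the sample values are
arbitrary.

     import numpy as np

     def geometric_summary(values):
         """Geometric mean and one-sigma multiplicative bounds of positive data."""
         logs = np.log(values)
         gmean = np.exp(logs.mean())
         gsd = np.exp(logs.std(ddof=1))     # geometric standard deviation
         return gmean, gmean / gsd, gmean * gsd

     if __name__ == "__main__":
         biomass = np.array([0.4, 0.9, 1.3, 2.8, 6.1])   # arbitrary mg/l values
         gmean, lower, upper = geometric_summary(biomass)
         print(f"geometric mean = {gmean:.2f}, "
               f"one-sigma range = {lower:.2f} to {upper:.2f}")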

     Nutrient loadings to Saginaw Bay from the Sagi-
naw River, the primary source, were  determined on
the basis of a field sampling program.  For  the
first half of the year, samples were taken at  two to
three-day intervals at the Dow Chemical Company
water intake at the mouth of the Saginaw River.
From July to December, samples were  taken  from the
Midland Street Bridge in Bay City every two weeks.
During this period, the Dow intake was  too  strongly
influenced by the bay itself because of the  intru-
sion of bay water up the river.  The Midland Street
Bridge is approximately 5 miles upstream from  the
river mouth and is not influenced by the bay during
this period.  Concentrations were obtained for chlo-
ride and total and dissolved forms of phosphorus,
nitrogen, and silicon.  Daily flow rates were ob-
tained from the U.S. Geological Survey.

     Boundary Conditions and Forcing Functions

     Since the physical system under consideration
is only part of a larger physical system, Lake
Huron proper, the interaction between Saginaw Bay
and Lake Huron is extremely important.  The predomi-
nant flow pattern in the bay is counterclockwise
with Lake Huron water flowing in along  the north
shore and a mixture of Lake Huron water and  Saginaw
River water flowing out of the bay along the south
shore (Figure 1).  The concentrations of nutrients
and biota in the water which flows across  the indi-
cated inner bay-outer bay boundary are  examples of
boundary conditions which must be specified.  These
concentrations were determined using the cruise data
from the two sampling stations nearest  to  the area
of water inflow from the outer bay.  Daily  concen-
tration values were calculated by linearly interpola-
ting between the cruise averages for these stations.

     External nutrient loads are the most  important
forcing functions in the present study.   Total daily
flow from the Saginaw River was calculated by summing
the primary tributary gauges and the estimated flow
from the ungauged tributary area.  Daily nutrient
loading rates were calculated using  the measured
nutrient concentrations on that day.  These  daily
loading rates were then plotted and  time-series of
loading rates were generated by linearly interpolat-

-------
ing between all of the significant peaks and  troughs.
For example, for total phosphorus, a  series of  28
loading rates/time-breaks was generated.  For ortho-
phosphorus, a series of 46 loading rates/time-breaks
was generated.
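
     A minimal sketch of this bookkeeping is given below: daily loads are
formed as the product of daily flow and measured concentration, and a
continuous loading series is generated by linear interpolation between
selected breakpoint days.  The unit-conversion factor is standard, but the
sampling days, flows, and concentrations are illustrative rather than the
Saginaw River record.

     import numpy as np

     CFS_MG_L_TO_KG_DAY = 2.4466   # approximate conversion: cfs * mg/l -> kg/day

     def daily_load(flow_cfs, conc_mg_l):
         """Nutrient load on a sampling day from flow and measured concentration."""
         return flow_cfs * conc_mg_l * CFS_MG_L_TO_KG_DAY

     # Hypothetical sampling days (day of year), flows, and total-P concentrations.
     sample_days = np.array([10, 45, 80, 150, 220, 300, 365])
     flows = np.array([1800.0, 2600.0, 4200.0, 1500.0, 900.0, 1200.0, 2000.0])
     concs = np.array([0.12, 0.20, 0.35, 0.15, 0.10, 0.14, 0.18])

     loads = daily_load(flows, concs)

     # Time series of loading rates by linear interpolation between the
     # significant peaks and troughs (here, simply all sampled breakpoints).
     days = np.arange(1, 366)
     loading_series = np.interp(days, sample_days, loads)

     print(f"peak interpolated load = {loading_series.max():.0f} kg/day")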

                  Model Calibration

     The spatially-simplified Saginaw Bay model has
been calibrated to 12 simultaneous and independent
parameters:  chloride, biomass concentrations for
five functional groups of phytoplankton, total  zoo-
plankton, total phosphorus, total nitrogen, and dis-
solved forms of phosphorus, nitrogen, and silicon.
For simplicity, only selected results are presented
here (Figures 3-5).

     Water circulation rates between  the inner  and
outer bay were determined by modeling chloride  concen-
trations in the bay and chloride loadings from  the
Saginaw River in a manner similar to  that of Richard-
son.15  Advective flows and turbulent dispersions in
the model were adjusted until the chloride output cor-
responded to the field measurements  (Figure 3).  Time-
variable flows were used which corresponded to  hydrau-
lic detention times ranging from 45 to 120 days for
the inner bay.
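
     The role of the circulation parameters can be illustrated with a
completely mixed chloride balance for the inner bay, in which an advective
exchange flow is adjusted until the computed chloride approaches the
observations.  The volume, flows, loads, and concentrations below are
placeholders, not the calibrated Saginaw Bay values.

     import numpy as np

     def chloride_series(volume_m3, exchange_m3_d, river_load_kg_d,
                         boundary_conc_mg_l, c0_mg_l, days=365):
         """Completely mixed chloride balance,
            V dC/dt = W + Q*Cb - Q*C, integrated with a daily Euler step."""
         volume_l = volume_m3 * 1.0e3
         c = c0_mg_l
         out = []
         for _ in range(days):
             dcdt = (river_load_kg_d * 1.0e6 / volume_l                 # mg/l/day
                     + exchange_m3_d * (boundary_conc_mg_l - c) / volume_m3)
             c += dcdt
             out.append(c)
         return np.array(out)

     if __name__ == "__main__":
         V = 8.0e9          # inner-bay volume, m3 (placeholder)
         Q = 1.0e8          # exchange flow, m3/day (placeholder)
         print("hydraulic detention time =", V / Q, "days")
         series = chloride_series(V, Q, river_load_kg_d=2.0e5,
                                  boundary_conc_mg_l=6.0, c0_mg_l=15.0)
         print("end-of-year chloride =", round(series[-1], 1), "mg/l")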

     Model output for total phosphorus (Figure  4) is
consistent with the actual data with  the exception
of the late-fall period.  Since the only external
nutrient sources considered were the  Saginaw River
and Lake Huron, the present results must be considered
preliminary in nature.  The possible  roles of sedi-
ments and atmospheric sources must be considered be-
fore a complete picture of the nutrient dynamics in
Saginaw Bay can be obtained.

     One of the recent advances in the area of  phy-
toplankton modeling has been the resolution of  total
phytoplankton biomass into functional groups.   There
are important differences in biology  and nutrient
chemistry among different types of algae, as well as
differences in water quality implications.  In  Sagi-
naw Bay, the principal concern is with the differ-
ences between diatoms and blue-green  algae.  The
total biomass curve (Figure 5) was therefore resolved
into its diatom and blue-green components (Figure 6)
by plotting the computed biomass concentrations of
these phytoplankton types.  Comparison of the curves
indicates that the diatoms (Figure 6a) comprise 99%
of the first biomass peak, while the  sum of non-
heterocystous blue-greens (Figure 6b) and hetero-
cystous blue-greens (Figure 6c) comprises 80% of the
second biomass peak.  The results agree reasonably
well with biomass data for individual phytoplankton
types.

                Research Applications

     The present model can be applied to a variety of
research problems.   It can be an extremely useful re-
search tool when used in numerical experimentation
or sensitivity analyses.   Those system parameters
which are sensitive over the range of interest can be
identified.  Given a limited research budget, the in-
formation can be useful in optimally directing spend-
ing.   The model provides an alternate framework for
data analysis which can supplement traditional methods
such as statistical summaries or empirical models.
Use of the model can lead to new interpretations of
existing data or make clear new data requirements.

     The total phosphorus calibration (Figure 4) pro-
vides a good example of the model providing a frame-
work within which to interpret data.   The calculated
phosphorus concentration in the 4th quarter is con-
sistently low when compared to the actual data.  This
indicates that an additional total phosphorus source
is probably influencing the system in the fall and
that this source is not included among the model in-
puts.  In support of this hypothesis, the calculated
phosphorus concentrations agree quite well with ob-
served data in the other parts of the year and the
calculated chloride concentrations and hence, water
circulation rates agree with the observed data over
the entire year.  Phosphorus sources thought to be
insignificant on an annual basis have been recon-
sidered for possible seasonal inputs.  Such sources
include contributions from the atmosphere, resuspen-
sion of sediments and possible leaching from dredge
spoils.  This apparent discrepancy could not have
been discovered by looking at total phosphorus load-
ings and open-water concentrations alone.  The model
provides a link between the two that allows such a
conclusion to be made.

     Additional research insight can be gained by the
resolution of total phytoplankton biomass into var-
ious functional groups.  With this increased resolu-
tion, the full range of phytoplankton-nutrient inter-
actions can be investigated including:

     1.  nutrient recycling among different
         functional groups,

     2.  differences in nutrient stoichiometries
         and kinetics among the functional groups,

     3.  effect of silicon and nitrogen on species
         composition and succession,

     4.  supply of nitrogen to the system by
         nitrogen fixing blue-green algae.

     Research with a sophisticated mathematical model
requires the investigator to consider new data and  to
reconsider existing data.   Explanation of previously
undiscovered phenomena now becomes necessary.   Also,
as the model attains more realism, empirical coeffi-
cients and constants are eliminated and experimental-
ly determined parameters take their place.  Some ex-
amples follow.

     Chlorophyll a concentrations in water are rela-
tively easy to determine.  In conventional chlorophyll
models, the chlorophyll a to biomass ratio for phyto-
plankton is assumed to be constant.  Chlorophyll a
concentrations are therefore taken to be adequate
measures of phytoplankton abundance.  However, when ac-
tual biomass data were collected for calibration of the
multi-class model in Saginaw Bay, the chlorophyll a
to biomass ratio was found to vary over the year by
as much as a factor of 16.  Furthermore, these data
indicated that the chlorophyll a to biomass ratio
changed as phytoplankton species succession occurred
throughout the year.  This observation suggested that
each functional group in the model should have a dis-
tinct chlorophyll a to biomass ratio, and that the
overall ratio at any given time depends on the rela-
tive abundance of each of the functional groups.  As-
signing chlorophyll a to biomass ratios by phyto-
plankton group reduced significantly the yearly var-
iation in the overall chlorophyll a to biomass ratio.
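
     The bookkeeping implied by this observation is a biomass-weighted
average of group-specific ratios, as in the sketch below; the group ratios
and biomasses are hypothetical.

     def overall_chlorophyll_ratio(biomass_by_group, ratio_by_group):
         """Biomass-weighted chlorophyll a : biomass ratio over functional groups."""
         total_biomass = sum(biomass_by_group.values())
         chlorophyll = sum(biomass_by_group[g] * ratio_by_group[g]
                           for g in biomass_by_group)
         return chlorophyll / total_biomass

     # Hypothetical values (ug chlorophyll a per mg dry weight), not model output.
     ratios = {"diatoms": 2.0, "greens": 5.0, "blue-greens": 8.0}
     spring = {"diatoms": 4.0, "greens": 0.5, "blue-greens": 0.1}
     summer = {"diatoms": 0.3, "greens": 0.5, "blue-greens": 3.0}

     print("spring ratio:", round(overall_chlorophyll_ratio(spring, ratios), 2))
     print("summer ratio:", round(overall_chlorophyll_ratio(summer, ratios), 2))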

     Field data alone cannot provide  information need-
ed to replace empirical coefficients  in simpler mod-
els.  The development of the model has necessitated
comprehensive process-rate studies to determine phy-
toplankton-nutrient uptake kinetics,  as well as phy-
toplankton-zooplankton interactions.   These types of
process studies have value independent of their mod-

-------
eling utility, but the modeling process can assure
that they are conducted in an orderly fashion.

              Management Applications

     A model that has been rigorously calibrated and
verified can be used for planning and management pur-
poses.  The achievement of significant reductions in
algal biomass, especially nuisance blue-greens, is a
problem that the multi-class phytoplankton model is
uniquely qualified to address.  The problem can be
quantified by introducing the possibility of reduc-
tions in the key nutrients:  phosphorus, nitrogen
and silicon in the case of Saginaw Bay.  The model
is capable of predicting reductions in each class of
algae given a percent reduction in the loadings of
these three nutrients.  It should be emphasized that,
in practice, such reductions would have to be accom-
plished by consideration of the controllable portion
of each of the nutrient loads, the timing of the
loadings and the availability of each nutrient to
the algae.

     Although, in the strict sense, the present model
is not verified, hypothetical simulations were con-
ducted in which the external loads of phosphorus, nit-
rogen and silicon, respectively, were reduced by 50%.
The effect of a 50% reduction in nitrogen loadings
was found to have a negligible effect on algal bio-
mass.  This is not surprising because nitrogen is a-
bundant in Saginaw Bay at the time of the spring dia-
tom bloom and nitrogen-fixing blue-greens can make up
any deficit in the supply of dissolved nitrogen later
in the season.  A 50% reduction in silicon loading
was found to cause a minor reduction in the diatom
crop.  Silicon reductions were less effective than
expected because zooplankton grazing is apparently
as important as silicon depletion in the termination
of the spring diatom bloom.  In addition, a large
amount of silicon enters the bay from Lake Huron.

     Two sets of boundary conditions were considered
for the simulation of 50% reduction in phosphorus
loadings.  In the worst case situation, it is assumed
that the outer bay phosphorus concentration remains
the same despite the phosphorus load reduction.  In
the best case situation, it is assumed that the outer
bay phosphorus concentration becomes similar to the
Lake Huron phosphorus concentration in response to
the phosphorus load reduction.  The actual "state of
nature" will lie within these two extremes.  Such an
approach is necessitated by the lack of spatial re-
solution in the present model.  With spatial resolu-
tion, the outer bay could be modeled also.  Thus the
ambiguity in boundary conditions would be removed
since the system boundary would be Lake Huron which
has well defined concentrations for the parameters
of interest.

     The reduction in algal biomass for the 50% phos-
phorus load reduction occurs primarily in the latter
half of the year.   Therefore, it is essentially a re-
duction in blue-greens, since 80% of the second bio-
mass peak is blue-green biomass.  Actual percent re-
duction in total blue-green biomass depends on the
specification of the boundary conditions (Figure 7).
The best case blue-green reduction (Figure 7b) is
73% of the peak biomass, while the worst case reduc-
tion (Figure 7a) is 26% of the peak biomass.  Im-
proved estimates of blue-green responses to waste
load reductions in Saginaw Bay can only be obtained
with a spatially segmented version of the model.

     Nutrient reduction simulations can be used by
managers and planners to decide which nutrient or
nutrients to focus on in reduction programs and how
much reduction in the nutrient is required  for  signi-
ficant improvements in water quality.

                  Future Research

     Near term research with the model  has  two  im-
portant goals:  calibration of a spatially  refined
model to 1974 data and verification of  this model
with 1975 data.

     A 5-segment version of the model has been  devel-
oped and is awaiting calibration.  The  additional
spatial resolution will allow examination of effects
in different areas of the bay and will  reduce the
dependence of future projections of water quality on
boundary conditions.

     Additional spatial resolution depends  on ade-
quate representation of the water movement  between
spatial segments of the model.  Work is underway on a
hydrodynamic model that will be compatible  with the
phytoplankton model and will specify the transport on
a time varying basis when given wind speed  and direc-
tions as input.

     Longer term goals for Saginaw Bay  include the
monitoring of water quality trends in the bay as
nutrient loadings decrease.  During the next several
years nutrient reductions are expected  to occur due
to ongoing abatement programs.  Actual  projections
made with the model indicate that significant improve-
ments in water quality will occur as these  reductions
are attained.  The Saginaw Bay sampling program should
detect these trends and thus provide an opportunity
for model verification.

     This modeling effort has, in addition, some
broader, user oriented goals.  The phytoplankton mod-
el will be tested on other physical systems to deter-
mine its generality and to obviate any unforeseen
difficulties which might be experienced.  An ultimate
goal is to transfer a documented version of the model
to interested users for research, planning, and
management purposes.

                    References

1.   Thomann, R.V., DiToro, D.M., Winfield, R.P. and
     O'Connor, D.J.  1976.  Mathematical Modeling
     of Phytoplankton in Lake Ontario.  II.  Simula-
     tions Using Lake 1 Model.  U.S. Environmental
     Protection Agency.  In press.

2.   Bierman, V.J., Jr., Richardson, W.L.,  and Dolan,
     D.M.  1975.  Responses of Phytoplankton Biomass
     in Saginaw Bay to Changes in Nutrient  Loadings.
     Saginaw Bay Report.  A report to the Internation-
     al Reference Group on Upper Lakes Pollution, In-
     ternational Joint Commission, Windsor, Ontario.

3.   Canale, R.P., ed.  1976.  Mathematical Modeling
     of Biochemical Processes in Aquatic Ecosystems.
     Ann Arbor Science Press.  In press.

4.   Middlebrooks, E.J., Falkenburg, D.H. and Maloney,
     T.E., eds.  1973.  Modeling the Eutrophication
     Process.  Proceedings of a Workshop, September
     5-7, 1973.  Utah Water Research Laboratory and
     Division of Environmental Engineers, Utah State
     University, Logan.  National Eutrophication
     Research Program, U.S. Environmental Protection
     Agency, Corvallis, Oregon.

5.   Bierman, V.J., Jr.  1976.  Mathematical Model
     of the Selective Enhancement of Blue-Green Al-
     gae by Nutrient Enrichment.  In press  in

-------
     "Mathematical Modeling of Biochemical Processes
     in Aquatic Ecosystems", R.P. Canale, ed., Ann
     Arbor Science Press.

6.    DePinto,  J.V., Bierman, V.J., Jr. and Verhoff,
     F.H.   1976.  Seasonal Phytoplankton Succession
     as a  Function of Phosphorus and Nitrogen Levels.
     In press  in "Mathematical Modeling of Biochemi-
     cal Processes in Aquatic Ecosystems", R.P.
     Canale, ed., Ann Arbor Science Press.

7.    Fuhs, G.W.  1969.  Phosphorus Content and Rate
     of Growth in the Diatoms Cyclotella nana and
     Thalassiosira fluviatilis.   Journal of Phycology
     5: 312-321.

8.    Fuhs, G.W., Demmerle, S.D., Canelli, E. and
     Chen, M.   1971.  Characterization of Phosphorus-
     Limited Planktonic Algae.  Nutrients and Eutro-
     phication:  The Limiting Nutrient Controversy.
     Proceedings of a Symposium, February 11-12, 1971.
     American Society of Limnology and Oceanography
     and Michigan State University, East Lansing,
     Michigan pp. 113-132.

9.   Droop, M.R.  1973.  Some Thoughts on Nutrient
     Limitation  in Algae.  Journal of Phycology 9:
     264-272.

10.  Caperon, J. and Meyer, J.   1972a.  Nitrogen-
     Limited Growth of Marine Phytoplankton-I.
     Changes in  Population Characteristics with
     Steady-State Growth Rate.   Deep  Sea Research  19:
     601-618.
                             11.  Caperon, J. and Meyer, J.  1972b.  Nitrogen-
                                  Limited Growth of Marine Phytoplankton-II.
                                  Uptake Kinetics and Their Role in Nutrient
                                  Limited Growth of Phytoplankton.  Deep Sea
                                  Research 19: 619-632.

                             12.  Smith, V.E.  1975.  Saginaw Bay (Lake Huron):
                                  Survey of Physical and Chemical Parameters.
                                  Saginaw Bay Report.  A report to the Inter-
                                  national Reference Group on Upper Lakes Pollu-
                                  tion, International Joint Commission, Windsor,
                                  Ontario.

                             13.  Stoermer, E.F.  1975.  Saginaw Bay Phytoplank-
                                  ton.  Saginaw Bay Report.  A report to the Inter-
                                  national Reference Group on Upper Lakes Pollution,
                                  International Joint Commission, Windsor,  Ontario.

                             14.  Gannon, J.J.  1975.  Crustacean Zooplankton in
                                  Saginaw Bay, Lake Huron.  Saginaw Bay Report.  A
                                  report to the International Reference Group on
                                  Upper Lakes Pollution, International Joint Com-
                                  mission, Windsor, Ontario.

                             15.  Richardson, W.L.  1976.  An Evaluation of the
                                  Transport Characteristics of Saginaw Bay Using
                                  a Mathematical Model of Chloride.  In press in
                                  "Mathematical Modeling of Biochemical Processes
                                  in Aquatic Ecosystems", R.P. Canale, ed., Ann
                                  Arbor Science Press.
                                                      TABLE 1

                                           Operational Differences Among
                                                Phytoplankton Types

      Characteristic                                                  Blue-Greens       Blue-Greens
      Property                 Diatoms      Greens      Others      (non n-fixing)     (n-fixing)

      Nutrient                 Phosphorus   Phosphorus  Phosphorus   Phosphorus         Phosphorus
      Requirements             Nitrogen     Nitrogen    Nitrogen     Nitrogen
                               Silicon

      Relative Growth Rates
      Under Optimal
      Conditions               High         High        Low          Low                Low

      Saturation Light
      Intensity                High         High        High         Low                Low

      Sinking Rate             High         High        High         Low                Low

      Grazing Pressure         High         High        None         None               None

-------
Figure 1.      Saginaw Bay watershed indicating dis-
               tinctions between inner and outer
               portions of the bay.
Figure 2.      Principal compartments of the Saginaw
               Bay model, inner portion (higher pre-
               dators, zooplankton, phytoplankton
               functional groups, and available and
               non-available phosphorus and nitrogen).
Figure 3.      Chloride distribution for 1974 in
               Saginaw Bay, inner portion, as com-
               pared to model output.
Figure 4.      Total phosphorus distribution for
               1974 in Saginaw Bay, inner portion,
               as compared to model output.
Figure 5.      Total biomass distribution for 1974
               in Saginaw Bay, inner portion, as
               compared to model output.

-------
Figure 6(a).   Diatom biomass distribution for 1974
               in Saginaw Bay, inner portion, as com-
               pared to model output.

Figure 6(b).   Biomass distribution of non-hetero-
               cystous blue-green algae for 1974 in
               Saginaw Bay, inner portion, as com-
               pared to model output.

Figure 6(c).   Biomass distribution of heterocystous
               blue-green algae for 1974 in Saginaw
               Bay, inner portion, as compared to mod-
               el output.

Figure 7(a).   Comparison between calibration run and
               50% P load reduction simulation for
               blue-green biomass in Saginaw Bay with
               outer bay boundary conditions.

Figure 7(b).   Comparison between calibration run and
               50% P load reduction simulation for
               blue-green biomass in Saginaw Bay with
               Lake Huron boundary conditions.

-------
                           THE APPLICATION OF A STEADY-STATE  WATER QUALITY MODEL
                             TO THE PERMIT WRITING PROCESS, LAKE MILNER,  IDAHO

                                              John R. Yearsley
                                                EPA Region X
                                         Seattle, Washington   98101
                      SUMMARY
     The Milner Reach of the Snake River, between
Minidoka Dam and Milner Dam (see Figure  1), is
classified as being water quality limited.  One of
the important limiting water quality parameters is
dissolved oxygen.  Data collected by the Federal
Water Quality Administration (FWQA) and  the
Environmental Protection Agency (EPA) at Milner
Dam show extended periods of low dissolved oxygen.
Conditions have been particularly critical during
periods of low flow when the discharges  from
municipal and industrial waste sources were at
their peak.  For example, during November, 1969
the minimum dissolved oxygen was less than 6.0
mg/1 on twenty-three days.  The effects  of low
dissolved oxygen upon aquatic life have  reached
serious proportions.  Major fish kills occurred in
the Milner Reach during the 1960, 1961,  and 1966
food processing seasons.  In addition to the
discharge of organic wastes from industrial and
municipal sources, the oxygen demand associated
with return flow from irrigation wasteways, decay
of algae in impoundments and oxygen demand from
bottom sediments contribute to the observed dis-
solved oxygen problems.
     Reductions in waste discharge since 1971,
coupled with above-average flows in the  Snake
River, have resulted in substantial improvement in
the dissolved oxygen of the Milner Reach.  No
dissolved oxygen levels below 6.0 mg/1 have been
observed since 1971.  However, dissolved oxygen
levels below 90% saturation were measured during
the food processing seasons of 1973 and  1974.
     In October, 1974, the Idaho Operations Office
of the EPA Region X drafted National Pollutant
Discharge Elimination System (NPDES) permits for
the industrial waste sources, J. R. Simplot and
Ore-Ida, in the Burley-Heyburn area.  A  steady-
state dissolved oxygen model was used to support
the permit writing process.  At the same time, a
comprehensive field study program was designed to
verify the model results in the Milner Reach.
                 METHOD OF ANALYSIS

     The steady-state dissolved oxygen budget for
a vertically and laterally well-mixed stream, in
which diffusion and dispersion processes are
neglected, can be written:

     u dC/dx = -K2(C - Cs) - K1 L + Sc - Γc          (1)

where,

     u  = the stream velocity, feet/second,
     C  = the dissolved oxygen, mg/l,
     x  = the distance along the axis of the river,
          positive downstream, feet,
     K2 = the reaeration rate, 1/days,
     Cs = the saturation dissolved oxygen, mg/l,
     K1 = the deoxygenation rate, 1/days,
     L  = the carbonaceous biological oxygen
          demand (BOD), mg/l,
     Sc = the dissolved oxygen sources, mg/l/second,
     Γc = the dissolved oxygen sinks, mg/l/second.

     Similarly, the BOD budget is:

     u dL/dx = -K1 L + SL - ΓL                       (2)

where,

     SL = the BOD sources, mg/l/second,
     ΓL = the BOD sinks, mg/l/second.

     In the initial permit analysis, the only
dissolved oxygen sources considered were those
associated with surface and groundwater return
flow.  The only dissolved oxygen sink was the
demand associated with bottom sediments.
     Sources of BOD were associated with surface
and groundwater return flows, and municipal and
industrial discharges.  No BOD sinks were
included.
     The solutions to Equations (1) and (2) are,
respectively:

     C = Cs - (Cs - C0) e^(-K2 x/u)
           - [K1 L0/(K2 - K1)] [e^(-K1 x/u) - e^(-K2 x/u)]
           + [(Sc - Γc)/K2] [1 - e^(-K2 x/u)]        (3)

and,

     L = L0 e^(-K1 x/u)
           + [(SL - ΓL)/K1] [1 - e^(-K1 x/u)]        (4)

     For the special case when there is no
reaeration (K2 = 0.0), which is of interest
during the winter ice-cover condition, Equation
(1) has the following solution:

     C = C0 + L0 [e^(-K1 x/u) - 1] + (Sc - Γc) x/u   (5)

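     A minimal sketch of Equations (3) and (4) is given below, assuming
the exponents are scaled by the travel time x/u and that the net source
and sink terms are constant over the reach; the input values are
illustrative and are not the Milner Reach coefficients.

     import math

     def bod_profile(x, u, k1, l0, s_l=0.0, g_l=0.0):
         """Equation (4): BOD at distance x for a constant net BOD source."""
         decay = math.exp(-k1 * x / u)
         return l0 * decay + (s_l - g_l) / k1 * (1.0 - decay)

     def do_profile(x, u, k1, k2, c0, cs, l0, s_c=0.0, g_c=0.0):
         """Equation (3): dissolved oxygen at distance x downstream."""
         e1 = math.exp(-k1 * x / u)
         e2 = math.exp(-k2 * x / u)
         sag = k1 * l0 / (k2 - k1) * (e1 - e2)
         return cs - (cs - c0) * e2 - sag + (s_c - g_c) / k2 * (1.0 - e2)

     if __name__ == "__main__":
         # Illustrative inputs: u in miles/day, x in miles, rates in 1/day.
         u, k1, k2 = 10.0, 0.15, 0.4
         c0, cs, l0 = 9.0, 10.5, 8.0
         for x in (0.0, 5.0, 10.0, 14.0):
             print(f"x = {x:4.1f} mi  "
                   f"DO = {do_profile(x, u, k1, k2, c0, cs, l0):.2f} mg/l  "
                   f"BOD = {bod_profile(x, u, k1, l0):.2f} mg/l")
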
                                                                    APPLICATION OF THE MODEL

                                                             Estimates of the dissolved oxygen and  BOD
                                                        concentrations were made for that portion of  the
                                                        Milner Reach between Snake River Miles 654.0  and

-------
640.0.  These estimates were  obtained  from
Equations (3),  (4),  and  (5) using  various  organic
loading levels  for the NPDES  permits.   The effect
of these loadings upon water  quality was estimated
for January with no  ice cover,  January with
complete ice cover,  March, August,  and October.
These months were chosen  as critical seasons  from
the standpoint  of river flow  and in-stream water
quality.
     The water  quality characteristics, river
hydrology, and  cross-sectional  characteristics  for
each of the months analyzed are described  below.

Water Quality Characteristics

     The water  quality characteristics for the
Snake River at  River Mile 654.0 were estimated
from the results of  surveys made by EPA Region  X
in 1971, 1972,  and 1973.  The data from these
surveys are stored in the EPA's STORET system.
The concentrations of temperature,  dissolved
oxygen, and BOD used in the analysis are shown  in
Table 1.

     Table 1.   Water quality characteristics of
                the Snake River at River Mile 654.0.

     Month                      Temp.    D.O.    B.O.D.
                                 (C)    (mg/l)   (mg/l)
     January
     (no ice cover)              0.0     11.3     1.5
     January
     (100% ice cover)            0.0     11.3     1.5
     March                       5.0      9.9     1.5
     August                     22.0      6.4     1.5
     October                    10.0     10.0     1.5

     Water quality for the surface and ground
water return flow, assumed to be the same for both
sources, is shown in Table 2.  Data of this nature
for the Milner Reach are limited.  It was,
therefore, necessary to use estimates made from
available data.  In this case, water quality
studies from the Boise River basin were used as a
means for estimating quality of the return flows.

     Table 2.   Water quality characteristics of
                surface and groundwater return flow
                in the Milner Reach, as estimated
                from Boise River data.

     Month                      Temp.    D.O.    B.O.D.
                                 (C)    (mg/l)   (mg/l)
     January
     (no ice cover)              0.0      8.0     0.0
     January
     (100% ice cover)            0.0      8.0     0.0
     March                        --       --      --
     August                     22.0      6.0     1.0
     October                    10.0      8.0     1.0

Hydrology

     Discharge of the Snake River at River Mile
654.0 was varied over a range of flows.  The
quantity of surface and groundwater return flows
was kept constant, as shown in Table 3.  These
return flows correspond to one-half (50%) of the
total return flow in the Milner Reach, as given
in the U.S. Bureau of Reclamation's 1971 base
flow study.  The average monthly and 1-in-10 seven-
day low flows for the Snake River below Minidoka
Dam are also shown in Table 3.

     Table 3.   Hydrologic characteristics of the
                Snake River below Minidoka Dam for
                selected months.

     Month        Average     1-in-10      Return
                    Flow        Flow         Flow
                   (cfs)       (cfs)      (cfs/mile)
     January        2167         276          4.0
     March          3189         488          0.0
     August         8750        5528         16.0
     October        2488        1593         15.0
River Cross-sectional Characteristics

     The Milner Reach below River Mile 654.0 was
divided into five segments.  River widths and
depths were assumed to be constant throughout each
of the five segments.  River miles included in
each segment and corresponding width and depth are
given in Table 4.

     Table 4.   Cross-sectional characteristics
                of segments in the Milner Reach
                of the Snake River.

     Segment      River       Width      Depth
       No.         Mile       (feet)     (feet)
        1         654-653      1200       4.5
        2         653-649      1200       4.8
        3         649-645      1200       8.7
        4         645-643      1200       4.9
        5         643-640      1200      18.0

Rate Constants

     The deoxygenation rate, K1, was assumed to
be 0.15 1/days (base e), at 20 C, for the entire
reach.  This rate was obtained from long term BOD
measurements of the J. R. Simplot effluent in
March 1972.  The rate was adjusted for temperature
according to the relationship:

     K1 = K1^20 (1.047)^(T - 20)                     (6)

-------
where,

     K1    = the deoxygenation rate at temperature
             T, 1/days (base e),
     K1^20 = the deoxygenation rate at temperature
             T = 20 C, 1/days (base e).

     The reaeration rate, K2, was estimated from
the method given by O'Connor and Dobbins(1):

     K2^20 = 12.9 u^0.5 / H^1.5                      (7)

and adjusted for temperature, T, according to:

     K2 = K2^20 (1.024)^(T - 20)                     (8)

     The sediment oxygen demand rates, Γc, were
obtained from field studies made by Kreizenbeck(2).
Observations were made at four stations, and the
results are given in Table 5.  It was assumed
that the values remained constant throughout each
segment.  Furthermore, it was assumed that the
sediment demand varied with temperature according
to the relationship:

     Γc = Γc^22 e^(0.07(T - 22))                     (9)

     Table 5.  Observed sediment oxygen demand
               and  corresponding D.O. sink
               strength in the Milner Reach of  the
               Snake River (after Kreizenbeck2  ).

     River           Oxygen         Strength of
       Mile           Demand          D.O. Sink
                    (gm/m2/day)      (mg/l/sec)

    654-653           0.89          7.54x10-6
    653-649           1.04          8.21x10-6
    649-645           1.85          8.03x10-6
    645-643           1.85         14.28x10-6
    643-640           5.33         11.22x10-6
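
     The temperature adjustments of Equations (6) through (9) reduce to a
few one-line functions, sketched below with the coefficients quoted in the
text; treating u in feet per second and H in feet in the O'Connor-Dobbins
expression follows the units used above, and the example inputs are
illustrative.

     import math

     def k1_at_temperature(k1_20, temp_c):
         """Equation (6): deoxygenation rate adjusted from 20 C."""
         return k1_20 * 1.047 ** (temp_c - 20.0)

     def k2_oconnor_dobbins(u_fps, depth_ft, temp_c):
         """Equations (7)-(8): reaeration rate at 20 C, adjusted for temperature."""
         k2_20 = 12.9 * math.sqrt(u_fps) / depth_ft ** 1.5
         return k2_20 * 1.024 ** (temp_c - 20.0)

     def sediment_demand(gamma_22, temp_c):
         """Equation (9): sediment oxygen demand adjusted from 22 C."""
         return gamma_22 * math.exp(0.07 * (temp_c - 22.0))

     if __name__ == "__main__":
         print("K1 at 5 C :", round(k1_at_temperature(0.15, 5.0), 3), "1/day")
         print("K2 at 5 C :", round(k2_oconnor_dobbins(1.0, 4.5, 5.0), 3), "1/day")
         print("SOD at 5 C:", round(sediment_demand(0.89, 5.0), 3), "gm/m2/day")
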
Loading Levels

     Best practicable control technology (BPT)
currently available for the Ore-Ida and J. R.
Simplot waste discharges was used as a starting
point for the analysis.  These loadings, in terms
of BOD (5 day), are given in Table 6.

     Table 6.   Organic waste loadings for Ore-Ida
                and J. R. Simplot in the Milner
                Reach, as determined by BPT.

     Source             BOD (5 day) Load (lbs/day)
     Ore-Ida                      4100
     J. R. Simplot                6300
Field Studies and Model Verification

     During October 1974, a comprehensive field
study program was conducted  in  the  Milner Reach of
the Snake River  for  the purpose of  verifying the
mathematical model.   In-stream  water quality,
industrial and municipal discharges,  irrigation
return flow and  river hydrologic characteristics
were measured by EPA Region  X,  EPA'S National
Field Investigation  Center  (Denver),  and  the State
of Idaho's Department of Health and Welfare.
     Survey results  indicated that  irrigation
return flow was  not  significant between Snake
River Mile 654.0 and River Mile 640.0.  In
addition, algal  photosynthesis  and  respiration
were found to be important sources  and  sinks,
respectively, of dissolved oxygen.   Detailed
results of this study are reported by Yearsley(3).
     Comparison of predicted and observed
dissolved oxygen levels, using data from the
October 1974 field study, is shown in Figure 2.
Sensitivity of the mathematical model to random
errors in sediment oxygen demand, net algal oxygen
production, deoxygenation rate, reaeration rate and
river velocity is reflected by the one standard
deviation (σ) band in Figure 2.
     The success of the model in simulating field
measurements, coupled with its relative
lack of sensitivity to errors in parameter choice,
indicated that the model would be useful for the
purposes of permit writing.  The model, as
described previously, was applied to the permit
writing process.  Algal productivity was not
included in the  permit analysis,  since  it was
felt that this was not a reliable source  of
oxygen.  Simulation  results  also showed that when
algal oxygen production was  not included  in  the
model, simulated dissolved oxygen levels  were very
nearly the same  as minimum dissolved  oxygen levels
measured during  the  field studies.
                  PERMIT ANALYSIS

     The State of Idaho water quality criterion
for dissolved oxygen in the Milner Reach of the
Snake River requires that the dissolved oxygen be
greater than 6.0 mg/1, or 90% saturation, which-
ever is greater.  For the initial conditions
given in Table 1 and loading levels given in Table
6, model simulations indicated that these
standards would be violated whenever the flow was
less than the monthly average (Table 3).  The
permits for the two discharges were, therefore,
designed such that the treatment levels varied
with the river flow.  It was assumed that a ten
(10) per cent variation in dissolved oxygen at any
flow was not significant.  The BOD loading from
Ore-Ida and J. R. Simplot alone that would cause this
much variation was computed as a function of
flow.  For those flows resulting in loadings
equal to or greater than 100 per cent of the
values given in Table 6, BPT was acceptable.  For
river flows resulting in a loading between fifty
(50) and 100 per cent of the values in Table 6,
advanced waste treatment was required.  When the
computed loading was less than fifty  (50) per cent
of the values in Table 6, no discharge to the
river was permitted.  The resulting flow
restrictions, as estimated from the mathematical

-------
model, are given in Table 7.

     Table 7.   Treatment requirements, as a
                function of flow in the Snake
                River, for Ore-Ida and J. R.
                Simplot, in the Milner Reach of
                the Snake River.

     Month                     Zero Discharge     BPT
                                   Below         Above
     January
     (no ice cover)               450 cfs        750 cfs
     January
     (100% ice cover)             690 cfs       1030 cfs
     March                        800 cfs       1190 cfs
     August                      1450 cfs       2600 cfs
     October                      700 cfs       1300 cfs
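
     The flow-dependent permit logic reduces to a simple lookup, sketched
below using the thresholds of Table 7; it is an illustration of the
decision rule, not the permit language itself.

     # Zero-discharge and BPT flow thresholds (cfs) from Table 7.
     THRESHOLDS = {
         "January (no ice cover)":   (450.0, 750.0),
         "January (100% ice cover)": (690.0, 1030.0),
         "March":                    (800.0, 1190.0),
         "August":                   (1450.0, 2600.0),
         "October":                  (700.0, 1300.0),
     }

     def treatment_requirement(month, flow_cfs):
         """Return the treatment level implied by river flow for the given month."""
         zero_below, bpt_above = THRESHOLDS[month]
         if flow_cfs < zero_below:
             return "no discharge permitted"
         if flow_cfs < bpt_above:
             return "advanced waste treatment required"
         return "best practicable control technology (BPT) acceptable"

     if __name__ == "__main__":
         for flow in (400.0, 900.0, 2000.0):
             print(flow, "cfs in October ->",
                   treatment_requirement("October", flow))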
                 REFERENCES
     1. O'Connor, D.J., and Dobbins, W.E., "The
       Mechanism of Reaeration in Natural
       Streams," ASCE Trans., Vol. 123,  1958,
       pp.  641-666.

     2. Kreizenbeck, R.A., "Milner Reservoir
       Benthic Oxygen Demand Study," EPA Region
       X, 1974.

     3. Yearsley, J.R., "Evaluation of Lake Milner
       Water Quality Model," EPA Region X,
       Working Paper No. EPA-910-8-75-092, 1975,
       81 pp.
FIGURE 2.  PREDICTED AND OBSERVED DISSOLVED OXYGEN
     IN THE LAKE MILNER REACH OF THE SNAKE RIVER.
     SURVEY ON 10/22/74 - 10/24/74.

FIGURE 1.  LOCATION OF MAJOR INDUSTRIAL AND MUNICIPAL
     DISCHARGES IN THE LAKE MILNER REACH OF
     THE SNAKE RIVER, IDAHO (OCTOBER 1974):
          1  AMALGAMATED SUGAR
          2  RUPERT STP
          3  J.R. SIMPLOT
          4  HEYBURN STP
          5  BURLEY STP
          6  BRYANTS MEATS
          7  ORE-IDA

-------
                                              BUOYANT SURFACE JET
                    Mostafa A.  Shirazi
                Research Mechanical  Engineer
       Corvallis Environmental  Research Laboratory
            U.S. Environmental  Protection Agency
                   Corvallis, Oregon   97330
                    Lorin  R.  Davis
                  Associate Professor
         Department  of Mechanical  Engineering
               Oregon  State University
               Corvallis,  Oregon  97330
                     ABSTRACT

In order to obtain improved prediction of heated plume
characteristics from a surface jet, a comprehensive
set of field and laboratory data was correlated and
used for modification of an existing analysis due to
Prych.  The correlated data was conveniently subgrouped
and used for comparisons with related predictions from
the model.  This way, all the coefficients such as
entrainment, turbulent exchange, drag and shear values
were estimated based on the mean of each subgroup of
data.  Modifications were made to the model  to best
obtain an overall  agreement with the data.

                   INTRODUCTION

Various mathematical models of heated surface jets are
available for the prediction of two and three dimen-
sional plume configurations.   Two widely accepted
methods are used for solving the equations in these
models, namely one based on the integral  analysis
approach and the other based on the differential  numer-
ical analysis methods.  The latter approach, while
capable of greater generality, is considerably more
costly and due to limited funds and resources was ex-
cluded from further consideration for this work.   How-
ever, a certain degree of generality of results is re-
tained by considering only three dimensional plume
models herein.

A comprehensive review of thermal plume models is pre-
sented in Reference 4.  Among the three dimensional
surface jet models seriously considered are one by
Stolzenbach and Harleman (the MIT model), another by
Prych, and a third by Stefan, et al.  It is
outside the scope of this paper to discuss in detail
results of all experiments on the three models during
our attempt to provide a working program.  The MIT
model, despite its many fine features, runs  into con-
siderable computational difficulties.  Prych's model
is the result of a reasonably successful attempt to re-
move some of these difficulties from the MIT model.
Stefan's model was written for the developed zone alone
and thus can't be compared with others directly.   Even
though it includes wind effects absent in the other
two, it ignores the hydrostatic pressure in the longi-
tudinal direction.

In general, the MIT and Prych models yield comparable
predictions.  The greatest deviation between the pre-
dictions of both models and data is in plume width.
Both models overestimate the plume width.

An effort is made here to introduce modifications in
Prych's model to make it better agree with existing
data.  These modifications as well as certain other
additions, are discussed below.

                 BUOYANT SPREADING

Stolzenbach and Harleman  present an order of magnitude
analysis of the momentum equations as applied to the
jet.  They show that the lateral acceleration of fluid
particles within the plume is negligible only when the
jet is nonbuoyant.  Otherwise, the fluid particles
accelerate (spread laterally) due to the influence of
two interacting forces, namely, the inertia and buoy-
ant forces.  Since the full nonlinear equations of
motion describing a buoyant plume are too difficult to
solve, the lateral spreading due to buoyant forces in
the MIT, and Prych models are calculated independently
of spreading due to nonbuoyant forces.  The two
spreading rates are assumed to make additive contribu-
tions, thereby ignoring the nonlinear interaction be-
tween the two forces.  As a consequence of the assump-
tions in this linearization their analyses overesti-
mate the plume width when the inertia and buoyant
forces are the same order of magnitude (i.e., when the
densimetric Froude number is not too large).  When the
plume inertia forces are dominant such as with strong
ambient current or large densimetric Froude numbers,
reasonable width predictions can be obtained.

The buoyant spreading function used by Prych is based
on the analysis of an immiscible film, such as oil
spreading over water that ignores the shear interaction
between the fluid systems and the variation in density
of the lighter fluid from the edge to the center of the
plume.  In this analysis, the fluid particles are
assumed to move with a velocity equal  to the velocity
caused by abrupt density waves alone.

In a separate analysis of a buoyant spreading of a pool
of warm water, Koh and Fan  accounted for the inter-
facial shear interaction but ignored the actual en-
trainment of the cool water.  They found that near the
source the spreading velocity and the fluid velocity
used by Prych are the same, i.e.,
     V ~ (g'H)^(1/2)
Where H is the local depth of the buoyant pool.  How-
ever, far away from the source where the shear forces
become very important, the fluid front velocity is
          V
Where g'H is proportional to c , (e/Hp) is proportional
to the shear velocity and H/B is the ratio of the local
pool  depth to its width.  If interpreted in terms of
plume spread, this finding implies that spreading vel-
ocity is inversely proportional  to the local aspect
ratio of the plume.

The appearance of the local  aspect ratio in the expres-
sion for the plume velocity offers an intuitively
appealing ground for assuming,
     V^2 ~ (g'H)(H/B)
This can also be explained as follows.  The lower den-
sity of the plume causes it to rise slightly above the
free surface of the surrounding water.  The height of
rise at any point is proportional to the local verti-
cal density difference between the plume and the
ambient and the depth of the plume at this point.
Since both the density difference and depth of the
plume decrease from the center to the edge, this
height varies from a maximum at the center to zero
at the edge causing the plume to spread in that dir-
ection.  The spreading rate due to buoyancy is re-
lated to the slope of this free surface.  Since the

-------
height of rise at the center is proportional to
gΔρH/ρ, the slope of the free surface and thus the
spreading rate is a function of gΔρH/ρB.

This slight modification to Prych's analysis was intro-
duced in the model.  As a result, a satisfactory fit
with data became possible.

                DEVELOPMENT LENGTH

Analysis of the jet development zone is complicated
because of the need to examine simultaneously the
characteristics of a core region as well as a tur-
bulent outer jet region.  Stolzenbach and Harleman
developed a three dimensional program for this re-
gion, but in his modification of the program, Prych
adopted a one dimensional approach in which he em-
ployed celerity relations for the spreading of the
buoyant unmixed core region.  He then used the appro-
priate conservation equations to relate the fluid pro-
perties at four jet diameters away from the outlet to
the fluid properties at the outlet.  The fixed develop-
ment length of four diameters is based on the assump-
tion of a semicircle with an area 2 B H.  Prych's
development length S can be written as

     S/H0 = 6.38 A^(1/2)

where A is the channel aspect ratio.

Note that the above development length does not change
with the initial densimetric Froude number.  However,
calculations with the MIT model show that the develop-
ment length does change with initial densimetric
Froude number as well as the jet aspect ratio.

Since a better agreement of model predictions with
the data is expected if this aspect of the model is
also appropriately adjusted, resort was made to labora-
tory experiments to obtain this information.  Experi-
ments were conducted in a still water tank with a
heated jet at the EPA Corvallis Environmental
Research Laboratory.  Several jet aspect ratios and jet
densimetric Froude numbers were tested.  A hot film
anemometer probe was used to traverse the jet develop-
ment zone laterally at several stations downstream
from the outlet.  The presence of the core was detected
from subdued turbulent temperature fluctuations as well
as the temperature level.  The coincidence of the
increased turbulence fluctuations, the beginning of the
temperature drop, and the disappearance of a uniform
core at a point downstream of the outlet signaled the
end of development zone.  The data for this length was
correlated to give

     S/H0 = 5.4 A^(1/3)
This tentative result is subject to refinement (parti-
cularly with respect to the effect of the ambient cur-
rent) when better experimental investigations currently
underway  become available.  Meanwhile, the use of this
correlation was found very helpful to fit the model
with available plume data.

            FITTING THE MODEL WITH DATA

Reference 2 provides a comprehensive set of data that
is a good representation of available experiments
both in the field and laboratory.  The data provide a
wide range of plume conditions with which one can test
and accordingly adjust numerous analytical  functions of
the plume model.  The plume model contains a number of
free variables such as the entrainment coefficient E, tur-
bulent exchange coefficients E_h and E_v, drag coefficient
C_D, and shear coefficient C_F.  The magnitudes of these
coefficients must be prespecified so that the model
produces the best fit with the measured plume charac-
teristics.

In order to accomplish this task, the following pro-
cedure is adopted:  (a) Data for plume character-
istics are subgrouped with a narrow range of certain
experimental parameters such as the current ratio, R,
the densimetric Froude number, F_o, the jet aspect
ratio, A, and the angle of discharge, θ_o.  Each sub-
group consists of several experiments and several
sources, thus providing considerable degree of realism
with respect to possible experimental scatter and var-
iations in experimental parameter scales.  The choice
of a narrow range in certain experimental parameters
was dictated by the desire to obtain as strong a cor-
relation of the data within a given subgroup as pos-
sible.  (b) For each subgroup, the range and the mean
of all experimental parameters are determined.  (c) The
data are correlated using dimensional analysis and
multiple regression methods separately for each sub-
group following the procedure outlined in Reference 2.
(d) The measured plume characteristics are plotted
against dimensionless axial distance using the cor-
relation results.  (e) A representative smooth curve
is drawn through the mean data, and local standard
deviations are displayed on both sides of the mean
curve to show the scatter.  This mean curve is a fair
representation of the subgroup and is represented by
the mean parameters obtained in item (b) above.  (f)
Finally, the program is used to calculate the plume
characteristics in each subgroup for the mean of the
experimental parameters R, F_o, A, and θ_o.  Agreement
between the calculated characteristics and the data
mean is sought by adjusting one or more of the model
coefficients E, E_h, E_v, C_D and C_F.  This process is
repeated for several subgroups, adjusting in each
trial one or more coefficients until best fits are
obtained to plume characteristics for all  subgroups.
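
The sketch below restates the same subgroup-and-adjust procedure in
Python; the data structure, the coefficient grid, and the hypothetical
model_plume routine are assumptions introduced only to make the logic
concrete, and the sketch is not the program actually used in this study.

    import statistics

    def subgroup(experiments, ranges):
        # Step (a): keep experiments whose R, F_o, A and theta_o fall within narrow ranges.
        return [e for e in experiments
                if all(lo <= e[k] <= hi for k, (lo, hi) in ranges.items())]

    def best_coefficients(experiments, model_plume, coefficient_grid):
        # Steps (b) and (f): run the model at the subgroup-mean parameters and keep
        # the coefficient set that best matches the mean measured characteristic.
        means = {k: statistics.mean(e[k] for e in experiments)
                 for k in ("R", "F_o", "A", "theta_o")}
        target = statistics.mean(e["centerline_temp"] for e in experiments)
        best, best_error = None, float("inf")
        for coefficients in coefficient_grid:      # e.g. dicts of E, E_h, E_v, C_D, C_F
            error = abs(model_plume(means, coefficients) - target)
            if error < best_error:
                best, best_error = coefficients, error
        return best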

It should be pointed out that correlations of each
data subgroup are useful mainly for the mean data in
that subgroup.   They are not universal  correlations
and cannot be used outside  the data range they re-
present.

The data set most suitable  for determining the effects
of ambient turbulence on plume behavior is provided
by Weil (Reference 8).  In his experiments, Weil injected
heated water at the surface in a turbulent channel from a
semi-circular jet at a relatively large densimetric
jet Froude number.  The discharge was in the direction
of the channel current (θ_o = 0).  The jet velocity in
all his experiments  was held equal  to the local  chan-
nel flow velocity.  Since the relative velocity be-
tween the plume and  ambient water is zero and  since
buoyancy effects are small  due to a  high  Froude num-
ber, dilution is largely due to turbulence effects.

For the conditions of this  experiment,  the following
simplifications can  be introduced in the  mathematical
model:  (a)  there is no relative velocity between the
jet and the ambient  water.   Therefore the contribution
of the terms containing the entrainment coefficient,
E, can be set equal to zero.  (b) For the same reason,
the contribution of terms containing the shear coefficient,
C_F, is also zero.  (c) The drag coefficient, C_D, is
zero because the jet is parallel to the ambient cur-
rent and the pressure distributions on the left and
right hand sides of the plume are identical.  (d) For
the dimensionless surface heat exchange coefficient,
one can choose a typical small value of K without
affecting the calculated plume characteristics greatly
one way or another, because we are dealing with small
areas and small temperature differences.  (e) Since
the jet densimetric Froude number is high, the
influence of the buoyant forces on the plume spread is
not substantial.  The plume width grows predominantly
due to turbulent entrainment of the ambient water, a
mechanism which the model accounts for through E_h and
E_v.
Figure 1 is the plot of correlated temperature data
showing the local mean and standard deviations.  Figure
2 is the replot of the mean temperature data together
with several computer calculations based on the model
for F_o = 16, A = 2, θ_o = 0, and the value of K noted
above.  Calculations are made for several values of E_h
and E_v/E_h as well as the free factor of the spreading
function, XK1.  The plots of the calculated and measured
plume width data are shown in Figure 3.
Fig. 1   Correlated Temperature Data for Coflow
         Discharge, R=1 (laboratory data, Ref. 2 and
         Weil, 1972; local mean and local standard
         deviation shown)

Fig. 2   Comparison of Calculated Temperatures with
         Measured Plume Temperature Data of Fig. 1

Fig. 3   Comparison of Calculated and Mean Measured
         Widths for Coflow Discharge, R=1
The measured width data were closely spaced with
excellent correlation.  For this reason individual data
points were not plotted.  Instead, a narrow band showing
the spread of all experimental data is presented.

A visual inspection of Weil's data in Figures 2 and 3
shows that the best fit is obtained with E_h = 0.02,
E_v/E_h = 0.2, and XK1 = 1.4.

The next group of data consists of information from
several sets of laboratory and some field experiments
for a surface discharge in zero or negligibly small
cross current.  The correlation of the temperature data
is plotted in Figure 4 and the width data in Figure 6.
Fig. 4   Correlated Temperature Data for Discharge into
         Zero or Negligible Ambient Current

Fig. 6   Correlated Width Data for Discharge into
         Zero or Negligible Ambient Current

Fig. 7   Comparison of Calculated Widths with Mean
         Measured Surface Plume Width Data of Fig. 6

For given values of discharge angle, Froude number,
aspect ratio, and ambient current, the plume trajec-
tory is mainly influenced by the entrainment of am-
bient fluid with a minor influence due to pressure
drag.  Since the entrainment coefficient is prescribed
from the above, only the drag coefficient can be used
to further adjust the trajectory.  Consequently, we
need to regroup the trajectory data for a reasonably
wide range of all plume parameters mentioned above.
Such data are plotted in Figure 8 showing the data
sources, the local mean and standard deviations.  Fig-
ure 9 is a replot of the mean trajectory showing the
comparison with computed values.  It is found that the
best fit is with C_D = 1.0.
Fig. 9  Comparison of Calculated and Mean Measured
        Trajectory Data of Fig.  8
In order to complete the adjustment of the model to fit
the data, we need to check the model against measured
plume width and temperature for a wide range of para-
meters.  If agreement is obtained with such data with-
out the need to readjust the previously specified co-
efficients E, E_h, E_v, C_F and C_D, then the fitting of
the model with data is considered complete.

The raw data and calculated values based on previous
coefficients are compared in Figure 10 for plume width
and Figure 11 for plume temperature.  The agreement
obtained from the comparison of calculated and measured
plume width is excellent and the agreement for plume
temperature is reasonably good.
Fig. 8   Correlated Trajectory Data for Discharge
         into an Ambient Current

Fig. 10  Comparison of Calculated and Measured Width
         Data for Discharge into a Cross Current

Fig. 11  Comparison of Calculated and Measured Temp-
         erature Data for Discharge into a Current
                    DISCUSSION
A notable degree of data scatter could not be avoided
when attempting to correlate information on plume
characteristics from several sources.   Physical  factors
not included in the data analysis, but which are be-
lieved to have contributed to the data scatter are:
a) the lack of a universal simple exponential correla-
tion such as used in this report; b) the influence of
diverse turbulence scales, c) the influence of surface
heat transfer, d) time dependency and boundary effects,
and e) instrumentation and experimental errors.

The exponential correlations employed  are intended for
data presentation in a compact form within each  data
grouping.  They are not used to explain  the physics  of
the problem exclusive of the mathematical model.   They
do, however, provide a statistical presentation  of the
level  of data scatter one can expect when dealing with
data from numerous sources.

There are at least two reasons why data  from more than
one source should be used.  These are:  (1) there
exists no single set of data that covers a sufficiently
wide range of parameters relating to initial jet condi-
tions and ambient current; (2) data obtained for a wide
range of ambient turbulence levels are reported  in the
literature.  While the ambient turbulence level  and
turbulence scale affect the plume characteristics, in-
formation on these parameters is lacking in nearly all
the data reported.  It is felt, therefore, that  a plume
analysis based on several sources carries a greater
degree of realism than one based on a  single source.

It should be noted that the turbulence exchange  coef-
ficient in the model is back-calculated  based on the
best fit with the data.  The coefficient is entered
in the form of a turbulent Reynolds number (H_oU_o/ε),
where ε is the turbulent eddy diffusivity and H_o and
U_o are the jet depth and velocity respectively.  In
this manner, even though diffusivities are not
directly measured in each experiment, the use of the
model does provide an indirectly calculated value for
the correlation parameter ε/H_oU_o that best represents
the available data.  If in a given application one has
a better knowledge of this or any other  coefficients
entering the model then, of course, those should be
used in the model instead.
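
For a reader with site-specific information, the substitution implied
above is simply E_h = ε_h/(U_oH_o); the fragment below illustrates it
with assumed numbers only, not values recommended by this report.

    def dimensionless_exchange(eddy_diffusivity, outlet_velocity, outlet_depth):
        # E_h = eps_h / (U_o * H_o); supply a measured eddy diffusivity if one exists.
        return eddy_diffusivity / (outlet_velocity * outlet_depth)

    # Assumed example values only: eps_h = 2.0e-3 m^2/s, U_o = 1.0 m/s, H_o = 0.1 m.
    print(dimensionless_exchange(2.0e-3, 1.0, 0.1))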

     Calculations based on the foregoing modified sur-
face jet model are presented in great detail in Ref-
erence 11.  That workbook provides a compilation of
numerous nomograms suitable for use in practical prob-
lems.  Even though the computer program and sample
examples are also given, direct use of the workbook
might be preferable for the majority of users.

                  LIST OF SYMBOLS

B      Local characteristic width of jet

B_o    Half width of outlet

B_1/2  Plume half width

C_D    Form drag coefficient

C_F    Interfacial shear drag coefficient

c      Celerity of a density front

D      Local plume depth = 2a

E      Entrainment coefficient

E_h    Dimensionless horizontal eddy diffusion coeffi-
       cient, ε_h/U_oH_o

E_v    Ratio of vertical to horizontal eddy diffusion
       coefficients, ε_v/ε_h

F_o    Densimetric Froude number at outlet, U_o/√(g'H_o)

g      Acceleration due to gravity

g'     Reduced gravitational acceleration, g Δρ/ρ

H      Local characteristic thickness of jet

H_o    Depth of outlet

K      Dimensionless heat transfer coefficient, K_a/ρcU_o

K_a    Atmospheric heat transfer coefficient

s      Curvilinear coordinate along jet centerline

S_i    Distance from outlet to end of initial zone

ΔT_c   Local excess water surface temperature on center-
       line

ΔT_o   Difference between outlet and ambient water tem-
       peratures

TH     Angle between positive s- and X- directions (θ)

U      Local excess jet velocity on jet centerline

U_o    Discharge velocity from outlet

X      Rectilinear coordinate parallel to ambient current

Y      Rectilinear coordinate, horizontal and perpendi-
       cular to X

Z      Coordinate in vertical direction

V      Ambient current velocity

α      Angle used in data analysis of Ref. (2), α = π - θ_o

Δρ     Difference between outlet and ambient water
       densities

ε_h, ε_v  Ambient turbulent diffusion coefficients for
       horizontal and vertical directions

θ_o    Angle between X axis and outlet velocity direc-
       tion

ν      Kinematic viscosity

ρ      Fluid density

                    SUBSCRIPTS

a    Ambient conditions

c    Centerline value at surface

i    Refers to variables at end of development zone

o    Discharge conditions
                    REFERENCES

1.   Prych, Edmund A.  "A Warm Water Effluent Analyzed
     as a Buoyant Surface Jet."  Swedish Meteorological
     and Hydrological Institute, Series Hydrologi, Nr. 21,
     1972.

2.    Shirazi,  Mostafa  A., "Some Results from Experi-
     mental  Data on Surface Jet Discharge of Heated
     Water"  Proceeding of the International  Water
     Resources Association, Chicago, 1973 (see also
     Reference 12).

3.   Stolzenbach, K. D. and Harleman, D. R. F.  "An Analy-
     tical and Experimental Investigation of Surface
     Discharges of Heated Water."  Water Pollution
     Control Series 16130 DJV 02/71, February 1971.

4.   Policastro, A. J. and Tokar, J. V.  "Heated
     Effluent Dispersion in Large Lakes:  State-of-the-
     Art of Analytical Modeling, Part I, Critique of
     Model Formulations."  Argonne National Laboratory
     ANL/ES-11, January 1972 (see also Reference 12).

5.   Stolzenbach, K. D., Adams, E. E. and Harleman, D.
     R. F.  "A User's Manual for Three-Dimensional Heated
     Surface Discharge Computations."  Environmental
     Protection Technology Series EPA-R2-73-133, Jan.
     1973.

6.   Koh, R. C. Y. and Fan, L. N.  "Mathematical Model for
     the Prediction of Temperature Distributions Resul-
     ting from the Discharge of Heated Water into Large
     Bodies of Water."  Water Pollution Control Series
     16130 DWO 10/70, October 1970.

7.   Stefan, Heinz.   Personal Communication (see also
     Reference 13).

8.   Weil, J.   "Mixing of a Heated Surface Jet in Tur-
     bulent Channel  Flow"  Report No.  WHM-1, Department
     of Civil  Engineering, University of California,
     Berkeley, June 1972.

9.   Ellison, T. H. and Turner, J. S.  "Turbulent En-
     trainment in Stratified Flows."  Journal of Fluid
     Mechanics, Vol. 6, Part 3, pp. 423-448.

10.  Stefan, Heinz, Hayakawa, N. and Schiebe, F. R.
     "Surface Discharge of Heated Water."  Water Pollu-
     tion Control Research Series 16130 FSU 12/71,
     December 1971.

11.  Shirazi, Mostafa  A., and Davis, Lorin R.  "Work-
     book of Thermal Plume Prediction."  Vol. 2,
     Surface Discharge, Environmental  Protection Tech-
     nology Series, EPA-R2-72-005b, May 1974.

12.  Dunn, W., Policastro, A. J. and Paddock, R.
     "Surface Thermal  Plumes:  Evaluation of Mathe-
     matical  Models for the Near and Complete
     Field"  Argonne National Laboratory ANL/WR-75-3,
     Part One May 1975, Part Two August 1975.

13.  Stefan,  Heinz, Bergstedt, Loren and Mrosla,
     Edward,  "Flow Establishment and Initial En-
     trainment of Heated Water Surface Jets" Envir-
     onmental  Protection Agency, Ecological  Research
     Series  EPA-660/3-75-014.

-------
                                     AGROECOSYSTEM:  A LABORATORY MODEL ECOSYSTEM
                                       TO SIMULATE AGRICULTURAL FIELD CONDITIONS
                                               FOR MONITORING PESTICIDES

                                 M. Leroy Beall, Jr., Ralph G. Nash and Philip C. Kearney
                                USDA, ARS, Agricultural Environmental Quality Institute,
                                          Pesticide Degradation Laboratory,
                                      Beltsville Agricultural Research Center,
                                             Beltsville, Maryland 20705
ABSTRACT

Quantitative measurements of rates and modes of disap-
pearance of pesticides under field conditions are dif-
ficult to obtain because environmental parameters can-
not be satisfactorily controlled and monitored.  A
laboratory model agroecosystem was constructed to
simulate field conditions which permitted simultane-
ous measurement of pesticide residues in soil, plants,
water and air.  The design and construction of five
agroecosystems are described in detail.  The first
phase of research in the agroecosystem was devoted
to measuring pesticide residues in air.

Our agroecosystem has a number of advantages, i.e. it
is inexpensive, easy to operate, monitor, and sample;
versatile in the number of plants and soils that can be
studied; adjustable to rainfall and potentially adjust-
able to wind velocity, light intensity and duration;
and conducive to balance studies where pesticide mobil-
ity can be compared under similar conditions.  It has
an advantage over previous systems because the large
volume of air exchanged provides cooling, prevents
moisture condensation, and permits sufficient air sam-
ple volumes for measurement of very low residue concen-
tration.  The aerial residues in the exhaust air are
trapped on polyurethane foam plugs, which are sampled
periodically.  Initial results demonstrated that toxa-
phene and DDT volatilized off of fiber-glass cloths
and cotton leaf surfaces, but the rate of volatiliza-
tion decreased very rapidly with time.  Efficiency of
trapping by the polyurethane plugs was very high, with
recoveries above 96 percent.
Our initial objectives are to test the utility of the
agroecosystem for comparing the mobility of different
classes of pesticides and thereby identifying potential
environmental problems.  Our long-term objectives are
to explore the possibilities of determining bioaccumu-
lation of pesticides in terrestrial organisms and in-
terfacing our system with other model ecosystems,
particularly the aquatic ecosystem.  Our ultimate
objective is to devise methods of reducing pesti-
cide mobility.

BACKGROUND

Monitoring the behavior and disappearance of pesticides
under field conditions is often difficult for a variety
of reasons.  Among these are uneven pesticide distri-
bution on plant or soil surfaces, drift, and volatili-
zation during and after application.  Accurate air sam-
pling is difficult because of changes in wind currents.

In an attempt to somewhat control field variability,
glass chambers (agroecosystems) were designed and built
for monitoring pesticides in the air, soil, water, and
on plants.  The chambers are large enough to grow many
crop species to maturity.  Although the concept of
model ecosystems for air sampling is not new (Hill,
1967), our system was designed to incorporate a
new method of sampling air for pesticides.  Further,
our system is inexpensive compared to elaborate growth
chambers.  The method consists of drawing air through
flexible porous polyurethane foam filters, then extract-
ing the filters with an organic solvent to remove the
pesticide for analysis.

In 1970, Bowen  reported the absorptive properties of
polyurethane foam and used this material to concentrate
metallic ions from dilute aqueous solutions.  In 1971,
Gesser et al.  successfully used polyurethane foam to
absorb polychlorinated biphenyls (PCB) from water and
mentioned that the foam was not specific for PCB, but
could absorb organochlorinated pesticides as well.  In
1974, Bidleman and Olney found the foam to be highly
efficient in trapping PCB from air.  We were introduced
to the possible use of polyurethane foam for trapping
pesticides in air by Taylor, Glotfelty and Turner
(1975).

MODEL AGROECOSYSTEM DESCRIPTION

Chamber Construction

Five rectangular chambers were constructed (Renwar
Scientific Co., Gaithersburg, Md. 20760) and placed
in the greenhouse (Fig. 1 and 2).  The chambers were
constructed from 3/8" (1-cm) plate glass and held to-
gether with clear silicone aquarium cement.  All
sides, top, and bottom were made of glass to assure
a minimum of pesticide adsorption and ease of clean-
ing between experiments.  Inside dimensions are 150
cm long, 115 cm high and 50 cm wide.  After allowing
for a 15-cm soil layer, the remaining volume is
0.75 m3. To add rigidity and to protect the bottom
of the chamber, each chamber was assembled directly
and remained permanently in a 3/4" (1.9-cm) plywood
tray lined with 1/4" (0.6 cm)-thick felt padding
to absorb shock.  Walls of the tray are 15 cm high.
The chambers were set at a 1% slope back to front.

For servicing, one side of each chamber is equipped
with two sliding access panels (72.4 cm high) which
ride in felt-lined aluminum channels.  Each panel
contains two 2.5-cm finger holes for sliding or lift-
ing.  When closed, the panels butt against each other,
cushioned and sealed by a strip of polyurethane foam
weather stripping attached to the end of one panel.

Centered in the front-end of each chamber are two 2.5-
cm holes; 5 and 15 cm from the bottom.  The lower hole
is used to siphon off soil-leachate water and the up-
per hole is used to collect run-off water.  Both ends
of each chamber contain twelve 5-cm holes for air intake
and exhaust.  They are centered 20 cm apart vertically
and 16.7 cm horizontally, beginning 35 cm from the
bottom of the chamber.

Sprinkler Construction

To simulate rain, each chamber is equipped with a
sprinkler system, centered and running the length of
the chamber 2 1/2" (6 cm) from the top (Figs. 1, 2,
and 3).  Each system was fashioned from standard 1/8"
brass pipe (ca. 0.6 cm i.d.), threaded fittings, and
four spray nozzles spaced 37.5 cm apart.  The noz-
zles (Model 1/8TTG0.3 from Spraying Systems Co., Whea-
ton, Ill.) each deliver 0.042 gal (159 ml) of water
per min at 20 p.s.i. and give a solid cone spray
pattern.  When "rain" is desired, tap water is suppli-
ed to a sprinkler system through a small rubber hose
fitted with a quick release coupler.  Water supply is
controlled by an adjustable pressure regulator and a
timeclock-controlled solenoid valve.  At 20 p.s.i., 1"
(2.5 cm) of "rain" is delivered in ca 29 min.

Fig. 1 - Front angle closeup of a model agroeco-
         system.  Note the 12 polyurethane foam
         plugs in the glass thimbles protruding
         into the manifold box.
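
The quoted delivery time follows directly from the nozzle rating and
the chamber floor area; the short check below uses only figures given
in the text and is an illustration, not part of the apparatus design.

    # Arithmetic check of the "rain" delivery time quoted above.
    nozzles = 4
    ml_per_min_per_nozzle = 159.0          # 0.042 gal/min at 20 p.s.i.
    floor_area_cm2 = 150.0 * 50.0          # inside chamber floor, cm^2
    depth_cm = 2.54                        # 1 inch of "rain"
    minutes = depth_cm * floor_area_cm2 / (nozzles * ml_per_min_per_nozzle)
    print(round(minutes))                  # about 30 min, consistent with the ca 29 min quoted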

Manifold Construction

An equal amount of suction to each of the 12 exhaust
holes is provided by a rectangular manifold box  (Fig.l)
constructed from 1/4" (6.4 mm) clear acrylic plastic
sheet and reinforced inside with six pieces of extruded
acrylic tubing 3/4" (19 mm) o.d., 1/2" (12.7 mm) i.d.
One end of the manifold contains twelve 2 1/4" (5.7-cm)
diameter holes to line up with the 12 exhaust holes in
the front of the agroecosystem chamber.  Centered on
the other end of the  manifold is a 5" (12.7-cm) exhaust
hole into which was cemented a piece of 12" (30.5-cm)
acrylic tubing, 4 1/2" (11.4 cm) i.d. which extends
from the manifold box and is reinforced with a 5/8"
(1.6-cm) thick x 7" (17.8-cm) square acrylic collar
cemented to both the  manifold box and the exhaust tube.
A 1/2" (1.3-cm) hole, 1 7/8" (4.8 cm) from the blower
end, allows entrance  for a hot-wire anemometer probe
for measuring air speed in the tube.  A 1" (2.5 cm)
hole is located on one side of each manifold for a man-
ometer connection.

Final  connection of the manifold exhaust tube to a suc-
tion fan was made with a 9" (22.8-cm) length of 5"
(12.7  cm) i.d. flexible-spring-steel-reinforced nylon
and vinyl covered hose.  This provides an overall dis-
tance  of 50 cm between the manifold and suction fans.
                                                          Fig. 2  - Overall view and  spatial  arrangement  in
                                                                   the greenhouse.
Wooden benches support the suction fans and manifold
boxes.  Latex caulking provides an air-tight seal be-
tween the manifold box and agroecosystem chamber.

Air System

Air is pulled through the chamber using a 115 V, 1/3 HP
high-pressure direct-drive blower (suction fan) for
each chamber.  These suction fans provide ca 3 m3/min
air at ca 13-cm water pressure under our conditions.
The high-pressure suction fans were necessary to pull
air through filters that were positioned in each of
the 12 air-exhaust holes of the chamber.

Air movement through a chamber, provided by suction
fans, serves three purposes; 1) to collect volatilized
pesticides, 2) to provide cooling, and 3) to prevent
moisture condensation inside the chamber.  Air volumes
are calculated by measuring air velocity with a hot-
wire anemometer in the tubing which separates the suc-
tion fan and manifold box.  A mean velocity for each
set of 12 plugs is determined by taking 11 measure-
ments (10 equal annular areas and a central circle) at
the intersections of a diameter and the set of circles
which bisect the annuli and the central circle.  Mea-
surements are taken on each side of the cross section
at √[(2n-1)/10] (n = 1, 2, ..., 5) of the tube radius
from the center (Perry et al., 1963).  In our case, we
could only obtain 9 velocity measurements because the
physical size of the anemometer prevented measuring of
the outer cross-sectional areas.  Therefore, the lowest
measurement obtained on each side was doubled to approx-
imate measurements 10 and 11.  The outer velocity mea-
surements were very low compared to the central circle
and the adjacent cross-sectional areas.   Typical mea-
surements ranged from 600 to 2000 ft/min (183 to 610
m/min), with the two outer measurements slightly higher
than their adjacent inner measurement.  The unorthodox
velocities near the edge apparently result from turbu-
lence in the short 50-cm length of the tube plus flex-
ible hose between the manifold and suction fan.

Although there was some variation among chambers and
among sets of plugs in a given chamber, airflow aver-
aged 2.9 ± 0.3 m3/min (mean of five chambers and stan-
dard deviation).  This flow rate translates to an
average air speed of 0.22 mph  (0.35 km/hr) through the
chambers.  Our system, therefore, simulates calm wind
conditions.
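
The sketch below reduces an anemometer traverse to a flow rate and a
chamber wind speed in the manner described above.  The velocity readings
are invented for illustration, the repeated outer readings follow one
reading of the text's "doubled" approximation, and the simple average is
a simplification of the 10-annuli-plus-central-circle weighting.

    import math

    def mean_velocity(readings):
        # Equal-area traverse: each reading is taken to represent the same
        # fraction of the tube cross section, so the mean is a simple average.
        return sum(readings) / len(readings)

    tube_area_m2 = math.pi * (0.114 / 2) ** 2   # 4 1/2 in. (11.4 cm) i.d. exhaust tube
    chamber_area_m2 = 1.00 * 0.50               # cross section above the 15-cm soil layer

    readings = [150, 220, 320, 400, 420, 400, 320, 220, 150]   # m/min, illustrative
    readings += [readings[0], readings[-1]]     # repeat outer readings for points 10 and 11

    v_mean = mean_velocity(readings)                          # m/min in the exhaust tube
    flow_m3_per_min = v_mean * tube_area_m2
    wind_km_per_hr = flow_m3_per_min / chamber_area_m2 * 60 / 1000
    print(round(flow_m3_per_min, 1), round(wind_km_per_hr, 2))  # near the 2.9 m3/min and 0.35 km/hr quoted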

Trapping Filters

Pesticide trapping filters were made by cutting 2"
 (5-cm) circular plugs from 2" (5-cm) thick polyure-
thane foam.  Cutting was done with a twisting motion
of a length of brass pipe sharpened on one end like
a cork borer.  The foam used was a dark gray, ester
base, open cell type with a density of 2 lb ± 10% per
ft3 (ca. 0.032 g/cc) manufactured by the William T.
Burnette Co. of Baltimore, Md.  Prior to use, the
plugs were extracted for 12 hr with hexane:acetone
 (1:1 v/v) in a Model 11EX/H1 Jobling extractor rigged
for Soxhlet extraction.  Approximately 46 plugs can be
extracted at one time if carefully stacked in the
extractor.  After extraction, the plugs were squeezed
fairly dry and stored in a large rectangular chroma-
tography jar.  The remaining solvent is allowed to
evaporate before use.

Plugs are held in place (in each of the 12 exhaust
holes of an agroecosystem chamber) by thimbles (Fig.
1) fashioned from 45 mm i.d. borosilicate glass tube
with 2-mm walls (Renwar Scientific Co.).   Total
length of the thimble is 68 mm.  The intake end of
the thimble is expanded to 46 mm i.d. for 30 mm of
its  length to allow easier insertion of a plug.  The
rim of the intake-end has a 3-mm rounded lip to re-
tain a rubber 0-ring for sealing and to prevent the
thimble from going all the way through an exhaust
hole.  The exhaust end of the thimble contains a glass
rod  grill to retain the foam plug.  The thimbles are
installed from inside the agroecosystem chamber and
protrude out into the manifold box.
Fig. 3 - Rear angle closeup showing some of the air
         intake filters and part of the sprinkler
         system.
Each of the 12 air intake holes on the back of the
agroecosystem chamber is fitted with a 2 5/8" (6.7-cm)
diameter disk of 1/8" (0.3 cm) thick polyurethane foam
air filter to prevent the entrance of insects and dust
(Fig.  3). Filter holders were fashioned from clear
acrylic tubing.  The holder body consists of a 1 7/16"
(3.7-cm) length of tube 2" (5 cm) o.d., 1 3/4" (4.5 cm)
i.d. with a 3/8" (0.95-cm)-wide ring collar [made from
2 1/2" (6.4 cm) o.d., 2" (5.0 cm) i.d. tube] concen-
trically positioned and cemented 9/16" (1.4 cm) from
one end.  The collar  limits  the distance  that the hold-
er can  enter the air  intake  hole.   The  filter disk is
installed by placing  it over the  outside  end  of  the
holder  body then pressing  a  removable ring (same size
as the  ring collar above)  over the disk and holder
body.   This mechanism provides a  quick  and simple meth-
od for  changing the filter disks.   Each filter holder
is removable but held  firmly in its hole  in the  agro-
ecosystem chamber when pressed through  a  2" (5 cm)  i.d.
rubber  0-ring which is cemented around  the periphery
of the  chamber hole.   These  filters result in slight
(0.2 to 0.5 cm water)  negative pressure inside the
chambers.

Temperature and Lighting

The chambers are subjected to normal greenhouse tem-
perature fluctuations; however, air flow through the
systems prevents excessive heat buildup and moisture
condensation.  On hot  sunny  days,  chamber tempera-
tures may occasionally reach 3°C  above  ambient green-
house temperatures.   Some  moisture  condensation  was
observed on very cool  but  bright  days after the  plants
filled  the chambers.

Recently, a 180 W low  pressure sodium vapor light was
installed (not shown in Figs.) 5  cm above each of the
agroecosystem chambers.  The 42"  (107 cm)-long tubu-
lar lights (from Norelco-North American Phillips
Lighting Corp., Highstown, N.  J.)  are time-clock con-
trolled and can be used to supplement and/or  extend
daylight periods.

PRELIMINARY TESTS WITH FOAM  PLUGS

Extraction Efficiency

Since DDT (1,1,1-trichloro-2,2-bis[p-chlorophenyl]-
ethane) and toxaphene (chlorinated camphene, 67-69%
chlorine) were selected to be used in our first agro-
ecosystem experiment, it was necessary to develop a
method of extracting these pesticides from the plugs.
A group of randomly selected plugs (pre-extracted with
hexane:acetone [1:1 v/v]) were treated with 14C-labeled
DDT or toxaphene.  Treatment consisted of making five
100 µl injections of a benzene solution randomly into
each plug.  Each plug received a total of 141 µg of DDT
or 675 µg of toxaphene.  Solvent was allowed to evapor-
ate prior to extraction trials.  Soxhlet  extraction
with 150 ml of petroleum ether (30-60°C b.p.)  proved
to be quite effective.  Scintillation counting of ali-
quots of the concentrated  extracts  showed  that quanti-
tative recovery (based on  four replications)  of both
DDT and toxaphene was  obtained in  four  hours  (one plug/
Soxhlet). Even with two plugs/Soxhlet,  97.2%  of the
DDT and 96.8% of the toxaphene was  recovered  in only
two hours.

Pesticide Trapping Efficiency

To test the pesticide  trapping efficiency of  the foam
plugs under conditions similar to the agroecosystems,
a plug testing system  was built.  The system  consists
of a manifold box connected  to a high-pressure suction
fan and a set of 12 special plug-holding  thimbles.   The
manifold box and suction fan are essentially  identical
to those used with agroecosystems,  except  that the box
lays horizontally with the exhaust  tube in one end and
the 12  intake holes facing upward.  The glass test
thimbles were made identical to those used in the agro-
ecosystems except that a 5-cm-long  widened (5.9  cm i.d.
6.4 cm o.d.) extension tube  was added at  their rims.
The thimbles fit down  into the 5-cm diameter  holes in
the manifold box with  the  extension tube  remaining out-
side.  A rubber 0-ring around the thimble  (just below
the widened extension tube) provides a seal.  Polyure-
thane foam plugs are placed down into the thimbles,
then a 10-cm-square piece of loosely woven fiberglass
cloth is placed over the end of the thimble extension.
The cloth is held in place by pressing a snug fitting
plastic ring around the cloth and the rim of the ex-
tension.  The cloth provides a surface on which a pes-
ticide can be applied.  Pesticide molecules volatiliz-
ing from the cloth are trapped by the plug as air flows
through the testing system.  With the suction-fan run-
ning, organic solvents evaporate quickly from the
cloth; thus, repeated applications of ca. 100 µl of
pesticide solution can be made if necessary.  Suction
created by the fan assures that the volatilizing pesti-
cide is drawn toward the plug.

At the end of a run, the cloths and plugs are removed
from their thimbles and analyzed for pesticide content.
Pesticide trapping efficiency is determined by summing
the amounts in the plugs and on the cloths.  If the to-
tal is less than the amount applied, the difference is
assumed to be the amount not trapped by the plugs.
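
The bookkeeping just described reduces to two ratios; the fragment
below reproduces it using, as a check, the DDT percentages reported in
the next paragraph (the 500 µg application split 30.42% on the cloth
and 67.03% in the plugs).  It is only an illustration of the accounting.

    def trapping_efficiency(applied_ug, on_cloth_ug, in_plugs_ug):
        # Fraction accounted for, and fraction of the volatilized amount caught by the plugs.
        volatilized = applied_ug - on_cloth_ug
        return {"accounted_for": (on_cloth_ug + in_plugs_ug) / applied_ug,
                "trapped_of_volatilized": in_plugs_ug / volatilized if volatilized > 0 else 1.0}

    print(trapping_efficiency(applied_ug=500.0,
                              on_cloth_ug=500.0 * 0.3042,
                              in_plugs_ug=500.0 * 0.6703))
    # roughly 0.97 accounted for and 0.96 of the volatilized DDT trapped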

DDT and toxaphene were tested on the system at room
temperature and were both found to be effectively trap-
ped by the plugs.  Hexane solutions of 14C-labeled DDT
or toxaphene (500 µg pesticide/thimble) were applied
and the suction fans allowed to run continuously for 72
hr.  At the end of the run the plugs and fiberglass
cloths were Soxhlet extracted for 4 hr with 150 ml pet-
roleum ether.  Aliquots of the concentrated extracts
were counted by liquid scintillation.  Of the amounts
originally applied to the cloths, 97.45% of the DDT and
99.02% of the toxaphene were accounted for, based on
four replications.  Of the DDT applied, 30.42% remained
on the cloths, while 67.03% was found in the plugs.  Of
the toxaphene applied only 8.99% remained on the cloths
while 90.03% was found in the plugs.  Of the amounts of
the pesticides that actually volatilized, the plugs
trapped 96.33% of the DDT and 98.92% of the toxaphene.
During the 72 hr run, approximately 922 m3 of air pass-
ed through each plug.

Efficiency of trapping by the plugs was further tested
by applying toxaphene and DDT to fiber-glass cloths as
above and harvesting the plugs at 0.5, 2.5, 24, 72, 144
and 168 hr.  Fresh plugs were installed at each time
period, but the original treated cloths were reinstal-
led.  The plugs were extracted for 4 hr.  After 168 hr
97.24% of the toxaphene could be accounted for and
99.90% of the DDT.  The fiber-glass cloths contained
6.13% of the applied toxaphene and 56.78% of the DDT,
while the accumulative sets of plugs contained 91.11%
of the toxaphene and 43.12% of the DDT.  These two ex-
periments demonstrated that the polyurethane plugs were
very efficient absorbers of volatilized toxaphene and
DDT whether the time period was short (0.5 hr) or long
(72 hr).

Test Run

For the first experiment, cotton (Gossypium hirsutum
L., var. 4-42-77 glanded) was treated weekly for 6
weeks with commercial emulsifiable toxaphene and DDT.
DDT was sprayed at the rate of 1.33 kg/ha the first 2
weeks, then 1 kg/ha thereafter.  Toxaphene rates were
double that of DDT.  Two chambers were used for DDT
and two for toxaphene, leaving one for a control.

The polyurethane foam plugs were harvested and
replaced with clean plugs at 0.5, 2.5, 24, 72 and 144
hr after each application.  The plugs were extracted
and the extract analyzed by gas-liquid chromatography.

The highest insecticide residue concentration in the
air occurred the first 30 min (Table 1), then decreas-
ed very rapidly with time.  After 6 days, residue con-
centration was only about 5% of that found initially.
Toxaphene was more volatile than DDT.  The amount of
toxaphene volatilized was consistently more than double
that of DDT, though the treatment rate was just twice
as much.  Repeated insecticide applications had little
effect on the magnitude of the values obtained after
an additional application.  The magnitude of the values
appeared to be affected more by ambient temperatures
than by repeated applications.  Even though there was
considerable variation in aerial residue concentrations
among treatments, the shapes of the curves were almost
identical when aerial concentration was plotted against
time.  A more detailed presentation of the results is
under preparation for a following publication.

Table 1.  Toxaphene and DDT volatilization from
          an agroecosystem(a)

Hours after                   Compound (µg/m3)
treatment     Toxaphene    p,p'-DDE    o,p'-DDT    p,p'-DDT

   0.5         15.108       0.097       1.033       1.720
   2.5          9.046       0.065       0.746       1.292
  24            2.414       0.017       0.231       0.445
  72            1.425       0.008       0.087       0.175
 144            0.815       0.005       0.032       0.112

(a) Mean of six weekly treatments.
Mention of proprietary products does not imply endorse-
ment or approval by the U.S. Department of Agriculture
to the exclusion of other suitable products.

REFERENCES

1. Bidleman, T. F. and C. E. Olney.  1974. High-volume
   collection of atmospheric polychlorinated biphenyls.
   Bull. of Environ. Contam. and Toxicology. 11:442-450.

2. Bowen, H. J. M.  1970. Absorption by polyurethane
   foams; new method of separation. J. Chem Soc.
   (Sec. A)  1082-1085.

3. Gesser, H. D., A. Chow, F. C. Davis, J. F. Uthe and
   J. Reinke. 1971. The extraction and recovery of
   polychlorinated biphenyls (PCB) using porous poly-
   urethane foam.  Analytical Letters. 4:883-886.

4.  Hill, A. C. 1967. A special purpose plant environ-
    mental chamber for air pollution studies. J. Air
    Poll. Control Ass. 17:743-748.

5. Perry, R. H., C. H. Chilton and S. D. Kirkpatrick,
   Editors. 1963. Chemical Engineers Handbook, 4th ed.
   McGraw-Hill Book Co., New York, N.Y.

6. Taylor, A. W., D. E. Glotfelty and B. C. Turner.
   1975. Personal Communication. USDA,ARS,AEQI, Agri-
   cultural Chemicals Management Laboratory, Belts-
   ville, Md.

-------
                         A CONCEPTUAL MODEL FOR ECOLOGICAL EVALUATION OF

                              POWER PLANT COOLING SYSTEM OPERATION
         Marc W. Lorenzen
         Senior Research Engineer
         Tetra Tech, Inc.
         Lafayette, California
                  Summary

Mathematical models can be useful tools to
systematically analyze the impact and signi-
ficance of cooling system operation.  Such
models, to have predictive value, must con-
sider the important physical, chemical, and
biological processes associated with the cool-
ing system.  The first and perhaps the most
important stage of this model conceptuali-
zation was to select a suitable physical
representation of the system.  The selected
representation provides realistic approxima-
tion of entrainment probability, residence
times, and material transport in the aquatic
ecosystem, the power plant, and the interface,
which includes intake and discharge zones.

For a given location, there are physical and
chemical properties and biological components
including phytoplankton, zooplankton,
benthic animals (eggs and larvae), and fishes
 (eggs, larvae, young, and adult).  Inter-
actions among these components are approxi-
mated by kinetic expressions for biological
and physical processes with particular
emphasis on the effect of temperature.  The
population dynamics of organisms can be
influenced by entrainment and the imposed
temperature regime.  Both direct and second-
ary impacts of cooling system operation can
thus be calculated for interpretation.

               Introduction

Installation and operation of large cooling
systems may produce many environmental effects.
Planktonic organisms (phytoplankton, zoo-
plankton, fish eggs, fish larvae, and benthic
animal larvae) may be entrained and suffer
direct biological damage:  thermal shock,
thermal death, mechanical stress and other
disruption.  At the point of discharge, more
organisms may be subject to plume entrain-
ment, thermal shock and turbulence.  Dis-
location of organisms between intake and dis-
charge points may also be significant.  The
heated physical environment in the discharge
area may influence the distribution of
fishes.  It may change the temperature regime
influencing rooted aquatic plants,  which
in turn can produce an impact on animal
habitat.   Nutrient-rich and oxygen-poor water,
which may also be saturated with nitrogen gas,
may be heated and transported from bottom to
surface water.

The direct damage resulting from plant and
plume  entrainment can induce further ecolog-
ical effects.   Under certain circumstances
the loss  of eggs and larvae could mean a
reduction in subsequent adult populations.
Large losses of phytoplankton and zooplankton
could alter the patterns of production and
predator  prey relationships in the receiving
water.
          Carl W.  Chen
          Director,  Environmental
          Systems  Engineering
          Tetra Tech,  Inc.
          Lafayette, California

The Water Pollution Control Act Amendments of
1972 (PL 92-500) classify heat as a pollutant
and provide for regulation of thermal dis-
charges.  However, the Act recognizes that not
all discharges are necessarily detrimental and
provides a mechanism  for exemptions to efflu-
ent limitations if it can be shown that no
appreciable harm would be inflicted on the bal-
anced indigenous community of the receiving water.

Large expenditures may be required to conduct
field and laboratory studies to quantify cool-
ing system effects.   The collected data must
then be analyzed and interpreted to ascertain
if a cooling system operation has or will
inflict appreciable harm on the balanced
indigenous community of the receiving water.
Both the direct effects of entrainment and the
subsequent manifestations in the receiving
water must be determined as quantitatively as
possible.  A model which can integrate the
chemical, physical,  and biological character-
istics of a cooling system including the
receiving water environment would be a useful
tool in providing such an analysis.  In addi-
tion to providing integration of data,
environmental or ecological models can and
should influence the design and operation of
plants.  To provide an overall picture, and
more importantly,  to predict impacts in quanti-
tative terms, a model must consider the
important physical,  chemical, and biological
processes associated with the cooling
system.

The general approach to developing such a
model has been first to examine specific
problems and develop sub-models.  These sub-
models are then integrated into larger system
models.  Sub-models include 1) thermal plume
simulations to define the area, volume, and
residence time of water at various tempera-
tures with in the plume; 2) receiving water
transport models to compute entrainment ratios;
3) temperature dose-biological effect models
for passage through the cooling system and
4) water quality-ecological models that
simulate water quality behavior and population
dynamics of the biota.

        Prototype Representation

In order to model a complex ecological system
it is necessary to carefully select the meth-
ods of idealizing or representing the physi-
cal conditions and biological processes
occurring within the system.  For evaluation
of cooling system effects, a physical and a
biological representation are necessary.
Appropriate units for quantification of
environmental characteristics must be selected
so that changes can be evaluated.

Physical

Figure 1 shows a general physical representa-
tion of a power plant cooling system.  The major
components of the physical environment which
are important to impact assessment are:  the
boundaries, intake zone, condensers, discharge
conduit, and mixing zones defined by water
volumes at various temperatures.  Each physi-
cal segment represents a point or volume for
which physical, chemical, and biological
characteristics can be defined.
Figure 1.  Physical Representation of Power
Plant Cooling System

The representation shown in Figure 1 allows
for a variable boundary location and recircu-
lation between discharge and intake.  The
establishment of boundary conditions is ex-
tremely important because small-scale models
can be completely dominated by the imposed
conditions at the boundaries.  Ideally, the
water and constituents entering the model
boundary are not influenced at all by pro-
cesses occurring within the modeled area.  It
may be desirable to run several models with
different boundaries in order to establish the
significance of boundary location.  The in-
take zone must also be carefully defined.  A
great deal of attention has been given to dis-
charge plumes and circulation.  However, it
is equally important to know the source of
the intake water.  The concentration of
planktonic organisms may be very dependent on
the origin of cooling water.

The characteristics of the condensers are
considered to include temperature changes,
pressure gradients, and time of passage.
Operational characteristics may be somewhat
different than originally designed.  For
operational plants, measurements should be
made  to  determine  actual  temperature increases
as  a  function  of unit load.   Time of passage
and pressure changes  should  also be measured.

The discharge  conduit must be considered an
integral part  of the  system.   This portion
of  the cooling system may inflict the greatest
damage to  plankton because it normally repre-
sents the  greatest time of exposure to
elevated temperatures.

The discharge  zone must be considered due to
its effects on both plant- and plume-entrained
organisms  as well  as  a potential impact on
benthos, rooted plants, and  fish.   The time
of  exposure to various temperatures within the
plume should be known in  order to  compute
thermal  doses  to entrained organisms.

Biological

Figure 2 shows a biological  representation
designed for evaluation of cooling system
impacts.   For  each physical  segment represent-
ing the  study  area, a system  consisting of
chemicals, algae,  zooplankton,  fish,  benthos,
detritus,  and  rooted  plants can  be simulated.
The system shown in Figure 2  has only  one
compartment for each  group.   However,  any
number of  species  or  taxonomic  groups  could
be  included.   The  system  shown  allows  for two
types of direct impact: plant  entrainment and
plume entrainment.  Mortality  is computed
directly according to exposure-mortality
relationships.
Figure 2.  Biological Representation of Power
Plant Cooling System

The ecological consequences of the direct im-
pacts are computed indirectly by modeling the
larger system.  Sub-lethal effects are
computed by distinguishing between organisms
which have and have not passed through the
plant.  The pool of organisms which have
passed through the plant may have altered
survival or reproductive characteristics.
"Offspring"of organisms which have been en-
trained are returned to the "normal" pool.

           Mathematical  Formulation

The mathematical formulation of the model(s)
consists primarily of a set of differential
equations describing the rates of change in
concentration of each parameter in each
compartment modeled.

For example, for constituent i, in compartment
k, connected to compartment n:

     dC_i,k/dt  =  (1/V_k) [ Σ_n Q_n C_i,n  -  Q_k C_i,k ]  +  G  -  L

where:   Q_n    = advective flow from compartment n

         Q_k    = advective flow from compartment k

         C_i,k  = concentration of constituent i
                  in compartment k

         C_i,n  = concentration of constituent i
                  in compartment n

         V_k    = volume of compartment k

         t      = time

         G      = growth

         L      = loss

The present conceptualization assumes that
advective flows will be input from either
hydrodynamic models or an analysis of circula-
tion patterns and plant operation.
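
A minimal numerical sketch of this single-compartment balance is given
below; the flows, volume, and growth and loss rates are assumed values
chosen only to show the bookkeeping, and the explicit Euler step stands
in for whatever solution scheme a particular application would use.

    def dCdt(C_k, inflows, Q_out, V_k, growth, loss):
        # inflows: list of (Q_n, C_n) pairs from connected compartments.
        advection = (sum(Q * C for Q, C in inflows) - Q_out * C_k) / V_k
        return advection + growth - loss

    def integrate(C0, hours, dt, **terms):
        C = C0
        for _ in range(int(hours / dt)):
            C += dt * dCdt(C, **terms)        # simple Euler step
        return C

    # Assumed example: one upstream compartment, flows in m3/hr, volume in m3,
    # growth and loss as net rates in concentration units per hour.
    print(integrate(C0=1.0, hours=24, dt=0.1,
                    inflows=[(100.0, 0.8)], Q_out=100.0, V_k=1.0e4,
                    growth=0.02, loss=0.015))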

The gross growth rate of all organisms is
considered to be temperature-dependent.
Figure 3  shows a typical relationship between
growth rate and temperature.  This approach
allows an increasing growth rate up to some
optimum temperature (range)  followed by a
declining growth rate at higher temperatures.
Gross growth rates are also considered to be
"substrate"-dependent as shown in Figure 4.
The "substrate" may be nutrients, prey,  or
light.   All growth is considered to be first
order with respect to the organism modeled.

Figure 3.   Temperature Modulation of Gross
Growth Rates

Loss rates include respiration, sinking,
natural mortality, predation, and effects of
the plant.  Respiration is considered to be
a simple first order reaction which increases
exponentially with temperature.  Sinking rates
for phytoplankton are input parameters that
can be modified as a result of sudden pressure
changes.  Predation is considered to be first
order with respect both to the predator and
the prey.   Susceptibility to predation can be
modified as a result of thermal or mechanical
stress.  The direct effects of the plant must
be input as specific functions for each group
of organisms.  For example, plant-induced
mortality can be input as a function of
temperature, temperature increase, and time
of exposure.
                                  Figure 4 .   Food Density Modulation of Growth
                                  Rate Coefficient
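
One way to express the two modulations sketched in Figures 3 and 4 is as
multiplicative factors on a maximum growth rate; the functional forms
below (a symmetric optimum-temperature curve and a saturating substrate
term) and all constants are assumptions for illustration, since the text
specifies only the qualitative shapes of the curves.

    import math

    def temperature_factor(T, T_opt=25.0, width=8.0):
        # Rises toward 1.0 at the optimum temperature, declines above it (Figure 3 shape).
        return math.exp(-((T - T_opt) / width) ** 2)

    def substrate_factor(S, half_saturation=0.3):
        # Saturating dependence on nutrient, prey, or light availability (Figure 4 shape).
        return S / (half_saturation + S)

    def gross_growth_rate(mu_max, T, S, biomass):
        # First order in the organism modeled, as stated in the text.
        return mu_max * temperature_factor(T) * substrate_factor(S) * biomass

    print(gross_growth_rate(mu_max=1.5, T=22.0, S=0.8, biomass=1.0))
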
                                  It must be noted that the detailed formulation
                                  of growth and loss terms may include the mass
                                  concentration of other constituents and coup-
                                  ling effects of the various quality constitu-
                                  ents are therefore included in the model.
                                  There are as many differential equations as
                                  quality constituents and physical compartments
                                  modeled.  The equations are solved simultane-
                                  ously to yield the concentrations of each
                                  constituent as a function of time.  The re-
                                  sults can then be assessed to determine en-
                                  vironmental impacts of a cooling system.

                                             Modeling Framework

                                  The general framework for model development
                                  and application consists of four interacting
                                  stages.  During stage I, data are compiled for
                                  the pertinent biological, physical, and cool-
                                  ing system characteristics.  These data include
                                  temperature-pressure mortality relation-
                                  ships, substrate and temperature growth
                                  rate relationships, physical boundaries,
                                  bathymetry, circulation patterns, and a defi-
                                  nition of conditions to which entrained or-
                                  ganisms will be exposed.

                                  During stage II, data are input to a storage
                                  and retrieval system.  Summaries and statis-
                                  tical analyses can be provided.  Regressions
                                  and correlations can be determined for pos-
                                  sible use in defining relationships needed in
                                  the models.

                                  Stage III is the actual model formulation and
                                  includes definition of equations and program-
                                  ming modifications necessary for site-specific
                                  conditions .

                                  The final stage is the execution of the model.
                                  The program can be used for baseline  simula-
                                  tions, computation of direct impacts,  and
                                  ultimate effects in the ecosystem.

                                  Biological  Data

                                  The first step in defining the biological
                                  system is to determine the major and impor-
                                  tant components of the ecosystem which may be
                                  affected.  Importance may be a reflection of
                                  food web relationships, aesthetics, economics,
                                  or recreational resources.  For example, the
                                  major ecosystem components of the
                                  San Francisco Bay-Delta are the striped bass;
                                  king salmon; and the opossum shrimp, Neo-
mysis mercedis.  The phytoplankton,  zooplank-
ton, and benthos as general  groups  are  impor-
tant but are less  likely  to  be  affected by
cooling system operation.

Analysis of the major  species or  groups must
then be conducted  to determine  life  histories
and thermal tolerance  data.  Life histories
should be complete enough to determine  which
life stages are susceptible  to  adverse
effects and when they  occur  in  relation to
cooling system operation.  Thermal  tolerance
data should define the effects  of short- and
long-term exposure to  elevated  temperatures.
For some groups, the effects of exposure to
heat are dependent on  both temperature  in-
crease and time of exposure.  An  example of
this type of relationship is shown  in Figure 5.
Other organisms, such as Neomysis, experience
mortality as a function of maximum temperature,
as shown in Figure 6.
 Figure 5.  Time-Temperature-Effect Plot (temperature rise
            versus exposure time, with curves of increasing
            severity of effect)

 Following  identification  of  susceptible  life
 stages  and thermal  tolerance data,  general
 relationships between  temperature and  growth
 or  reproduction rates, mortality rates and
 spawning activities should be determined.   It
 is  important to key these relationships  to
 time as well as temperature  in order to  inte-
 grate the  results with plant operational data.
 Physical Data

 One of the most  important  and  often  neglected
 aspects of cooling  system  evaluations  is
 analysis of the  physical transport of  organ-
 isms which are subject  to  entrainment.  There
 are three basic  types of physical systems
 which must be treated differently.   Unidirec-
 tional river flow is the simplest and  normally
 would require only  a hydrologic  analysis to
 relate frequency of river  flow to seasons  and
 abundance of organisms of interest.  The per-
cent of river flow (and hence of organisms)
entrained, as a function of time, is then a
simple ratio of cooling water flow to river flow.  It
is necessary to determine the level of lateral
mixing and distribution of organisms (side,
mid-channel, surface, bottom).
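A minimal sketch of this ratio calculation is given below; the flow values and
drift density are hypothetical illustrations, not data from any of the case
studies described here.

    # Entrainment in a unidirectional river: the fraction of drift organisms
    # entrained is approximated by the ratio of cooling water flow to river
    # flow, as described above.  All input values are illustrative.
    def entrained_fraction(cooling_flow_cfs, river_flow_cfs):
        return min(1.0, cooling_flow_cfs / river_flow_cfs)

    cooling_flow = 1000.0     # cfs, plant cooling water withdrawal
    river_flow = 4000.0       # cfs, seasonal river flow from a hydrologic analysis
    drift_density = 50.0      # organisms per 1000 cubic feet of water (hypothetical)

    fraction = entrained_fraction(cooling_flow, river_flow)
    organisms_per_day = (drift_density / 1000.0) * cooling_flow * 86400.0

    print("fraction of drift entrained: %.1f percent" % (100.0 * fraction))
    print("organisms entrained per day: %.0f" % organisms_per_day)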
[Figure 6 plots percent mortality against maximum temperature (°F), comparing
 6-minute heat-exposure mortalities measured in the laboratory (Hair, 1971)
 with through-the-plant mortalities at the Pittsburg Power Plant, 1969
 (Kelly, 1971).]

Figure 6.   Effects of Elevated Temperature on
Neomysis Survival

Tidal estuaries are somewhat more complicated
due to the oscillatory nature of the flow.
However, an "entrainment ratio" can be calcu-
lated with the use of a number of simulation
models.  Figure 7 shows computed entrainment
ratios for a power plant with cooling water
flow of 1000 cubic feet per second located
near the confluence of the Sacramento and San
Joaquin Rivers on the San Francisco Bay-Delta.
The figure shows the fraction of water at each
location that would have passed through the
power plant if steady-state conditions were
reached.  The computations were carried out
for summer conditions with a net fresh water
flow of 4000 cubic feet per second and a
typical semi-diurnal tidal cycle.

Open coast locations are the most difficult
to quantify due to the uncertain boundary
conditions.  Coastal currents and circulation
patterns can be input as boundary conditions
to the study area.  However, the definition of
the study area is extremely subjective.  One
approach to this difficulty is to provide an
analysis with several different study area
boundaries and attempt to assess the effects
as a function of study area size.  For example,
2% of the zooplankton within three miles of a
plant may be killed each day, whereas only
0.5% may be killed each day within six miles
of the plant.
                                               797

-------
  [Figure 7 plots the entrainment ratio against miles above the Golden Gate.]

 Figure  7.  Fraction of Water Which Has  Passed
 Through the Plant Under Steady-State Condi-
 tions of 4000 cfs Delta Outflow and 1000  cfs
 Cooling Water Flow

 In general, physical data  should  include  bath-
 ymetry,  flow  regimes, currents, circulation
 patterns, and entrainment  ratios.  Thermal
 plumes  should be determined in sufficient de-
 tail to describe location, area,  volume,  res-
 idence  time,  and whether or not the plume
 affects rooted plants or animals.

 Cooling System Data

 Cooling system data must be sufficient  to
 evaluate the  impacts on both the  organisms
 entrained in  the plant and the plume.   For
 power plants  it is necessary to define  the
 temperature and time of exposure  for plant
 passage.  Most cooling systems consist  of a
 number  of units which may  have different
 characteristics.  It may be important to  ob-
 tain operational data describing  these  condi-
 tions through each unit as a function of
 travel time.  Shear stress, pressure changes,
 and chemical  additions may affect survival of
 organisms and should be ascertained.

                 Model Use

 Although this model is only in the conceptual-
 ization stage, previous work has  shown  the
 utility of certain sub-models and confirmed
 the need for  an integrated approach.  For
 example, the  design of units 2 and 3 at the
 San Onofre Nuclear Generating Station calls
 for extended  diffusers on  the once-through
 cooling water system discharge.   The design
 will expose plant-entrained organisms to
 temperature increases of approximately  20°F
 for 7 to 12 minutes longer than the existing
 system  for unit 1.  The model which has been
 described would be a valuable tool to analyze
 the significance of extended exposure to  ele-
 vated temperatures compared to slightly more
 rapid dissipation of heat  in the  discharge
 plume.   The significance of alternative ef-
 fluent  locations could also be analyzed.

 In another application of  a limited modeling
 approach, hourly power generation data  for
 each of seven units of a power plant were
 used to compute temperature increases through
 the condensers.  The temperatures experienced
 passing through each unit were then related to
 survival of young striped bass using survival
 functions reported in the literature.3,4
 The  percent of young bass which would be ex-
 pected to have survived plant passage was
 averaged on a weekly basis.  The results are
 shown in Figure 8.  Based on estimates of
 striped bass abundance in the intake area, the
 number of fish passing through the plant were
 estimated and the cumulative number killed
 were computed.
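 A schematic version of this limited approach is sketched below.  The survival
 function is a hypothetical stand-in for the literature survival functions
 cited above, and the hourly temperature rises and intake abundances are
 illustrative only.

    # Sketch of the plant-passage survival computation described above.
    # survival_fraction() is a hypothetical placeholder for the literature
    # survival functions (refs. 3, 4); all inputs are illustrative.
    def survival_fraction(delta_t_deg_f):
        # assume survival falls 5 percent per deg F of rise above 10 deg F
        return max(0.0, min(1.0, 1.0 - 0.05 * max(0.0, delta_t_deg_f - 10.0)))

    hourly_rise = [8.0, 12.0, 15.0, 18.0, 14.0, 9.0] * 28   # one week of hourly condenser rises, deg F
    hourly_entrained = [200.0] * len(hourly_rise)           # young bass entering the plant each hour

    survivals = [survival_fraction(dt) for dt in hourly_rise]
    weekly_average_survival = sum(survivals) / len(survivals)
    cumulative_killed = sum(n * (1.0 - s) for n, s in zip(hourly_entrained, survivals))

    print("weekly average survival: %.1f percent" % (100.0 * weekly_average_survival))
    print("cumulative number killed: %.0f" % cumulative_killed)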
 [Figure 8 consists of two panels of weekly averages for 1970 through 1974:
  computed survival for Units 1-6 using the Kerr mortality function, and for
  Units 1-6 using the Kelly and Chadwick mortality data.]
Figure 8.  Computed Percent Survival of Young
Bass Passing Through a Power Plant Cooling System

 Unfortunately,  time  and funds  have not per-
 mitted  the  analysis  to go the  next step in
 determining the significance of these  com-
 puted mortalities to the overall bass  popu-
 lation.   Furthermore,  the computations are
 based on a  number of assumptions (survival
 data, power generation-temperature increase
 relationships, intake concentrations).  How-
 ever, the approach is  quantitative and pro-
 vides a systematic appraisal of data needs
 as  well as  a methodology for evaluation.

                 Conclusions

 Comprehensive  models can be used to provide
 a rational  and quantitative interpretation
 of  data as  well as guidance in monitoring
 program design.  Modeling results can  be  used
 to  evaluate the impact of alternative  designs.
 Although direct effects can be computed and
 verified with  field  data, consequences often
 cannot  be verified.   The most  valuable benefit
 of modeling is, therefore, the capability to
 project ecological consequences resulting from
 a variety of assumptions and hypotheses.
                 References

1.  Hair, R.J., California Fish & Game 57 (1):
17-27, 1971.
2.  Kelly, R., California Fish & Game, Anad.
Fish. Br. Admin. Rept. 71-3, 6 pp. (mimeo),
1971.
3.  Kelly, R., and H.K. Chadwick, California
Fish & Game, Anad. Fish. Br. Admin. Rept.
71-9, 11 pp. (mimeo), 1971.
4.  Kerr, J.E., Fish Bull. No. 93, California
Fish & Game, 66 pp., 1953.
                                              798

-------
                              REVIEW OF THE STATUS OF MODELING

                                    ENVIRONMENTAL NOISE
                                    William J. Galloway
                                Bolt Beranek and Newman Inc.
                                  Canoga Park, California
ABSTRACT

Models for predicting the noise produced
around airports and highways have been devel-
oped over a period of years, have reached a
reasonably high degree of accuracy, and are
in widespread use.  These models provide
site-specific information.  More recent
models have been developed to predict
general urban noise and noise produced by
construction equipment and other major noise
sources.  Differentiation is made between
models for a specific site, as required in
environmental impact reports for specific
projects, and models predicting the total
population exposed to noise, as used in
assessing proposed noise source regulatory
actions.  The effects of noise source data
requirements, sound propagation modeling
problems, and operating condition specifi-
cations on modeling precision and accuracy
are discussed using specific examples
relevant to current EPA activities.

BACKGROUND

Acoustical modeling uses both scale model
analogs of a real environment and simulation
through mathematical models.  Physical scale
models are used primarily to study sound
propagation phenomena in the presence of
complicated geometric configurations, e.g.,
inside a building, between buildings in an
urban area, the effect of barriers along a
highway.  As such, these models consider a
restricted geographic area, use artificial
sound sources that usually do not scale in
magnitude to their real counterparts, and
do not generate sound levels representative
of a real environment.

Mathematical models of noise environments,
on the other hand, are used to predict noise
environments at a point, over a local area,
or even to estimate national noise exposure.
Existing models vary widely in detail,
scope, and purpose.  The purpose of this
paper is to review the general characteris-
tics of simulation models for predicting
noise environments and provide a current
status report on existing models.

GENERAL CHARACTERISTICS

An acoustical model can vary widely in terms
of its sophistication.  For example, the
noise produced at a point 20 m to the side
of a level road by an automobile traveling
at a constant speed is given by a simple
algebraic expression involving only a constant,
with speed and distance as variables.  On the
other hand, modeling the noise produced in
a community from a complex stretch of
highway, complete with multiple lanes of
traffic moving at different speeds with
different vehicle mixes, curving roadways at
varying grade levels, including the effect
of noise barriers, requires a sophisticated
program operating on a high speed scientific
computer.  Each model, however, has certain
attributes in common:

1) Information on the magnitude, frequency
   distribution of sound level, and time
   variation of the source must be provided.

2) A propagation model from source to
   receiver needs to be defined.

3) A measure of noise suitable for use in
   describing human response is required.
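As a minimal sketch of these three ingredients for the simplest case mentioned
above (a single automobile pass-by on a level road), the fragment below assumes
a hypothetical A-weighted reference level and simple spherical spreading;
neither number is taken from any of the models reviewed in this paper.

    # 1) source description: hypothetical A-weighted level at a reference distance
    # 2) propagation model:  20 log of the distance ratio (point-source spreading)
    # 3) response measure:   the result is an A-weighted sound level
    import math

    L_REF_DBA = 70.0    # assumed pass-by level at the reference distance (hypothetical)
    D_REF_M = 15.0      # reference distance, metres

    def pass_by_level_dba(distance_m, speed_correction_db=0.0):
        """A-weighted level at distance_m; speed_correction_db is a placeholder
        for the speed dependence mentioned in the text."""
        return L_REF_DBA + speed_correction_db - 20.0 * math.log10(distance_m / D_REF_M)

    print("%.1f dB(A) at 20 m" % pass_by_level_dba(20.0))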

The complexity of the modeling process
relates directly to which of these points
receives the most emphasis.  For example,
the designer of a jet transport aircraft
attempting to predict whether a new aircraft
will comply with FAA noise regulations will
put essentially all his sophistication into
describing the noise sources in detail so
that he can predict the time pattern of
sound pressure levels in one-third octave
frequency bands at a point on the ground
during a flyover of the aircraft.  The air-
port planner, on the other hand, wants to
compute the cumulative noise exposure, at
points in the entire community surrounding
the airport, produced by the total complex
of different aircraft and flight paths used
at the airport.  For this purpose the noise
source descriptor is chosen to be as simple
as possible, with the emphasis in the model
being placed on summing the contributions,
at many points in the surrounding area,
from the variety of sources involved.

Finally, again using aircraft as an example,
it is often of use to have a model in which
all the operational factors, propagation
characteristics, and exposed population
distributions have been aggregated in such
a way that changes solely in source strength
can be related directly to national impact.
Evaluating the impact on the national noise
exposure of retrofit noise control measures
for the air transport fleet is an application
of such a model.

In the following discussion examples of
these three levels of modeling will be con-
sidered, the "micro," "macro," and "global"
approaches.  The examples are primarily
aircraft and surface transportation noise
sources, since, by far, these are the per-
vasive sources of environmental noise and
thus have the most well developed noise
models.  Models for factories, construction
sites, refineries, and other sources of
                                             799

-------
community noise have been developed to a
much lower state of sophistication only
because of their lesser importance as
major noise contributors to the national
noise environment.

Before describing environmental noise model
building history, it is worth noting that
one of the primary psychoacoustical contro-
versies in community noise evaluation, the
selection of a community noise descriptor,
has been only a minor factor in the phy-
sical and mathematical evolution of noise
model development.   Whether loudness level,
perceived noise level, A-weighted sound
level, or any other of a myriad of noise
descriptors is chosen, the physics of the
model building process is basically un-
changed.  The only circumstance where choice
of noise measure has influenced the model
development process is in the desire in
some models to predict measures of the time
distribution function of noise level, for
example the median level, denoted L50, or
the level exceeded 10 percent of the time,
L10.  It is usually fairly simple to

estimate such measures for a single class
of noise sources assumed to produce normal
distributions of level with time.  Where
multiple sources having different time
distributions are involved, estimating the
combined distribution function is almost
hopeless.

Fortunately, the relatively recently evolved
international consensus that community
response is most directly related to the
mean square value of sound pressure, aver-
aged over a specified time period, greatly
simplifies the modeling process.  With this
assumption, summing the contributions of
individual sources to obtain the cumulative
noise exposure at a point is a simple
mathematical process.
A major step forward in developing a unified
presentation of model results was the EPA
publication of its "Levels" document1 in
which it prescribes that all environmental
noise, irrespective of source, should be
specified in terms of average (sometimes
called equivalent) A-weighted sound level
over a specified time interval.  This
quantity is simply ten times the logarithm
of the time integral of A-weighted, squared
sound pressure, divided by a specified
reference time (one hour, twenty-four hours,
etc.) and reference sound pressure.  A
number of simplified expressions for cal-
culating this measure for typical time
distributions of noise signals are provided
in Appendix A of Ref. 1.
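A sketch of that calculation for sampled data follows; the 24 hourly values
are invented for illustration, and the simplified closed-form expressions of
Appendix A of Ref. 1 are to be preferred for standard signal shapes.

    # Average (equivalent) A-weighted level over a reference period, computed
    # from equally spaced samples: ten times the logarithm of the mean squared-
    # pressure ratio, as described above.
    import math

    def leq(levels_dba):
        mean_energy = sum(10.0 ** (l / 10.0) for l in levels_dba) / len(levels_dba)
        return 10.0 * math.log10(mean_energy)

    hourly_dba = [55, 54, 53, 52, 55, 60, 66, 70, 69, 68, 67, 67,
                  68, 68, 67, 68, 70, 71, 69, 66, 63, 60, 58, 56]   # illustrative 24-hour record
    print("24-hour average sound level = %.1f dB" % leq(hourly_dba))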

SURFACE TRANSPORTATION MODELS

Noise from motor vehicles is the most exten-
sive source of noise in most communities2.
Most early attempts to model noise from
traffic have considered the "freely flowing"
case of  a freeway on flat, open terrain.
One of the first models used a Monte Carlo
simulation of a Poisson flow of vehicles  to
predict the noise level distribution at  a
point as a function of vehicle speed and
mean traffic flow volume3.  Later models
generally assumed a uniform distribution
of vehicles along a roadway,  and  for  high
volumes obtained the same  answers  as  the
Monte Carlo simulation4.

One of the first attempts  to  simulate a com-
plex flow of traffic, expanding on the  Monte
Carlo simulation was completed in  19675.
Although the simulation was performed with
a computer, the model was  really not  suited
for routine analysis of highway problems,
in that highway configuration, grade  differ-
ences, and other real highway configuration
effects were not considered.  The  first
design guide provided by the Highway  Research
Board to account for these factors  was
completed in 19696.  The attempt here was to
reduce the simulation results to a series
of nomograms suitable for  use in hand cal-
culations.  The procedure was still cumber-
some, and was first programmed for  computer
use by the Michigan Highway Department.  An
alternate model, similar in nature, was
developed by the Transportation System
Center in 19727.

An impetus to use these models routinely was
provided by the requirement of the  Federal
Highway Administration8 that noise  pre-
dictions for highway planning and  improve-
ment projects be performed in all  federal-
aid highway programs.   Meanwhile, with more
highway departments using the models  a
number of noise measurement programs  were
conducted to determine the accuracy of the
models.   It was also found that hand  imple-
mented versions of the models were not
sufficiently detailed to satisfy many
highway designers.   An improved program
incorporating many detailed refinements and
reflecting the measurement program results
was developed in 19749 and is now available
in Fortran versions for CDC and IBM com-
puters.   Another model developed by the
Ontario Ministry of Transportation also
appeared in 197510.

The present state of highway noise simulation
is represented by the  HRB, TSC, and Ontario
models.   Each has detailed differences and
varying ease of application, but is capable
of predicting environmental noise to  accur-
acies of the order of a few decibels  of the
real values obtained from validation
measurements11.

AIRCRAFT NOISE MODELS

Simulation of the noise produced by aircraft
operations has had a history similar  to that
of highway noise.  Early airport noise models
were designed to provide a means for  com-
puting the noise produced at a point  by a
number of different aircraft, generally
clustering operations  by general aircraft
types, e.g., transport aircraft, fighters,
propeller-driven, using nomograms and manual
computation12.  Means  were provided to
develop families of contours of equal noise
exposure that could be used to define the
total noise environment around an airport if
one knew the number and kind of aircraft
involved and the flight tracks flown.  An
improved version was implemented as a Fortran
program in 196714, but still did not provide for
detailed consideration of individual  aircraft
                                              800

-------
performance,  and  required  summation  of
individual  contours  by  a hand  drafting
procedure.

Implementation  of the Environmental  Policy
Act by the  Department of Defense  and the
Federal Aviation  Administration gave impetus
to the development of substantially  more
refined airport noise models in the  past
few years.  Using somewhat  different mathe-
matical approaches to achieve  the same
goals, two  major  computer  programs have
been developed  by FAA and  the  Air Force15'16.
These programs now allow input of the
detailed performance characteristics of
individual  aircraft, variation in power
management  schedules during a  flight oper-
ation, dispersion in flight paths, variations
in atmospheric  conditions,  and a  host of
other improvements.  Output is now available
in completed  contour form  through the use
of highly sophisticated plotting  routines.

The accuracy  of airport noise  model  pre-
dictions is very  much a function  of  the
accuracy of the input operational data.
Noise source  characteristics of aircraft
are quite well  known, but  the  ability to
describe flight paths accurately  is  diffi-
cult.  Further, the  accuracy of noise
prediction  decreases as the distance from
the aircraft  increases.  The daily average
noise level of  a  complex of operations can
be predicted  to within  one  to  three  decibels
for distances of  up  to  10,000  feet from a
flight path.  At  farther distances the
accuracy decreases due  to  variation  in both
knowledge of  where the  flight  paths  really
are and variation in sound propagation in
a real as compared to ideal atmosphere.

Two additional  approaches  to aircraft noise
modeling are  of interest.   One very
ambitious effort  at  NASA Langley  Research
Center is a long  term project  to  permit
noise prediction  as  a function of detailed
aircraft design characteristics17.   This
program consists  of  a series of individual
modules related to state-of-the-art  pre-
diction of  individual noise sources  such
as jet noise, compressor noise, etc., and
an executive  program to combine the
individual  component effects into a
composite of  the  noise  produced by the
whole aircraft.   The goal  here is to upgrade
individual  models as the knowledge of
detailed source predictions improves.  This
is in contrast  to the usual airport  noise
models that treat the overall  noise  of an
aircraft directly.

The second  approach is  a "global" model of
airport noise that predicts total population
exposed to  a  specified  noise value in terms
solely of current  aircraft  fleet  noise
characteristics and numbers of operations—
either for  a  single airport or for all air
carrier airports18.  Use of this model
allows a simple calculation of changes in
population  affected by  a source noise change,
or change in  numbers of operations.   This
model is currently used by  the Civil
Aeronautics Board as a  screening  tool in
airport/route changes to determine whether
a change is minor or major, and thus
requiring more  detailed analysis19.
URBAN NOISE MODELS

No suitable models exist for predicting
general urban noise from a collection of
discrete sources.  A first cut estimate,
based on a statistical sample of urban
noise, allows a general space average noise
exposure estimate to be made on the basis
of population density alone20.  For any
specific location the level may vary as
much as ±8 decibels from this average,
depending upon local street structure and
traffic volumes.  Development of a general
purpose urban noise model that accurately
accounts for discrete noise sources, local
topography, and urban design is the next
major challenge in environmental noise
model development.
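A sketch of such a first-cut estimate is given below.  The statistical
relationship of Ref. 20 is commonly quoted in the approximate form
Ldn = 22 + 10 log10 (population density in people per square mile); the
constant, and the roughly ±8 decibel site-to-site spread noted above, should
be taken from the survey data themselves rather than from this fragment.

    # First-cut space-average urban noise estimate from population density
    # alone, in the approximate form attributed to Ref. 20:
    #     Ldn ~ 22 + 10 log10(people per square mile)
    # Any specific location may differ from this by roughly +/- 8 dB.
    import math

    def ldn_estimate(people_per_square_mile):
        return 22.0 + 10.0 * math.log10(people_per_square_mile)

    for density in (1000, 5000, 20000):
        print("%6d people/sq mi -> Ldn about %.0f dB" % (density, ldn_estimate(density)))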

CONCLUSION

Sophisticated models exist for predicting
the environmental noise produced by freeway
and airport operations.  Accuracies of
prediction are of the order of a few
decibels—comparable to people's ability to
discriminate differences in noise.  Generalized
urban noise models are not yet available,
although some work is in progress.
REFERENCES

1) Anon., "Information on Levels of
   Environmental Noise Requisite to Protect
   Public Health and Welfare With an
   Adequate Margin of Safety," 550/9-74-004,
   EPA, March 1974.

2) W. J. Galloway, G. Jones, "Motor Vehicle
   Noise:  Identification and Analysis of
   Situations Contributing to Annoyance,"
   Trans. Soc. Auto. Eng., 1060-1074, 1972.

3) W. J. Galloway, W. E. Clark, "Prediction
   of Noise From Motor Vehicles in Freely
   Flowing Traffic," Proc. IV International
   Congress on Acoustics, August 1962.

4) D. R. Johnson, E. G. Saunders, "The
   Evaluation of Noise From Freely Flowing
   Road Traffic," J. Sound and Vib. 7,
   No. 2, 287-309, 1968.

5) W. J. Galloway, W. E. Clark, J. S.
   Kerrick, "Highway Noise—Measurement,
   Simulation, and Mixed Reactions," NCHRP
   Report 78, 1969.

6) C. G. Gordon, W. J. Galloway, B.  A.
   Kugler, D. L. Nelson, "Highway Noise -
   A Design Guide for Highway Engineers,"
   NCHRP Report 117, 1971.

7) J. E. Wesler, "Manual for Highway Noise
   Prediction," Report DOT-TSC-FHWA-72-1,
   1972.

8) "Noise Standards," PPM 90-2, Fed. Hwy.
   Adm., June 1972.

9) B. A. Kugler, D. E. Commins, W. J.
   Galloway,  in publication as NCHRP Report.
                                              801

-------
10) J. J. Hajek, "Ontario Highway Noise
    Prediction Method," Research Report 197,
    Ministry of Transportation and Communi-
    cations, Ontario, Canada, 1975.

11) "Highway Traffic Noise Prediction
    Methods," Transportation Research
    Circular Number 175, January 1976.

12) K. N. Stevens, A. C. Pietrasanta,
    "Procedures for Estimating Noise Exposure
    and Resulting Community Reactions From
    Air Force Operations," WADC TN-57-10,
    Wright-Patterson Air Force Base, Ohio,
    1957.

13) W. J. Galloway, A. C. Pietrasanta, "Land
    Use Planning Relating to Aircraft Noise,"
    BBN Report 821, published by FAA in 1964.

14) D. E. Bishop, R. D. Horonjeff, "Proce-
    dures for Developing Noise Exposure
    Forecast Areas for Aircraft Flight
    Operations," FAA Report DS-67-10,
    August 1967.

15) C. Bartel, L. C. Sutherland, "Airport
    Noise Reduction Forecast," Report DOT-
    TST-74, August 1974.

16) R. D. Horonjeff, R. R. Kandakuri,
    N. R. Reddingius, "Community Noise
    Exposure Resulting From Aircraft
    Operations:   Computer Program
    Description," AMRL TR-73-109, Aerospace
    Medical Research Laboratory, Wright-
    Patterson Air Force Base, Ohio,
    October 1974.

17) J. P. Raney, "Development of a New
    Computer System for Aircraft Noise
    Prediction," 2nd AIAA Aeroacoustics
    Specialists  Conference, NASA Langley
    Research Center, March 1975.

18) W. J. Galloway, "Predicting the
    Reduction in Noise Exposure Around
    Airports," Proc. Inter-Noise 72,
    356-361, October 1972.

19) Part 312, Economic Regulations, Civil
    Aeronautics  Board, 24 September 1975.

20) W. J. Galloway, K. M. Eldred, M. A.
    Simpson, "Population Distribution of
    the United States as a Function of
    Outdoor Noise Level," EPA Report
    550/9-74-009, June 1974.
                                              802

-------
                                          COMMUNITY NOISE MODELING
                                                         by
                                                 Basil  Manns
                                   U.S.  Environmental Protection Agency
                                           Washington,  D.C.  20460
SUMMARY:  This paper discusses  the  use  and need for
mathematical models for planning, developing,  and
managing regulatory programs  for community noise
control.  Both sources and propagation  noise emission
models are essential to determine the beneficial im-
pacts in the community for each new source regula-
tion.  Very elementary and statistical  models  have
been used thus far in dealing with  the  major noise
sources. However, as additional noise sources  are
identified for regulation, the  continued  use of these
elementary models will not show the real  benefits of
a particular regulation to the  community.  This paper
will focus on the description of models used for urban
traffic and freeway noise.  Other models  used  for
construction sites or airports  will not be discussed
in this paper.

NOISE DESCRIPTORS:  It must be  clearly  understood
that one role of the Environmental  Protection
Agency's Office of Noise Abatement  and  Control
(EPA/ONAC) is to reduce and control community noise
by developing regulations for noise emission of newly
manufactured products, interstate railroads, and inter-
state motor carriers.  The EPA/ONAC does not establish
ambient standards.  It does establish operating
standards for interstate rail and motor carriers, and
certain newly manufactured products at  the time of
manufacture.  The products  identified for regulation
are generally those that have the greatest impact
on the community.  Table I shows that the urban traffic
category is the highest regarding community noise.
Home appliance is ranked as high as it  is because the
impact level goes down to 45  dB.

Community noise requires the  inclusion  of all  the
noise in the outdoor acoustical environment. The out-
door community noise environment varies in both mag-
nitude and character at various locations.  The
community noise environment also varies with time of
day.  Thus, in defining descriptors for community
noise it is necessary to determine  the  time and loca-
tion of variations in the outdoor noise environment
throughout the community in such a  manner that the
descriptors are relevant to the effects of the noise on people
located in various land use categories, either
indoors or outdoors.

In describing sound and its effects on  people  the
factors to consider are the frequency spectrum, the
overall Sound Pressure Level  (SPL), and the temporal
variations of both spectrum and SPL. To  simplify the
approach the frequency spectrum has been  weighted to
the human hearing sensitivity and summed  to obtain a
single SPL number.  This is the A-weighted SPL in
decibels, written as dB(A).
                                 Although the A-weighted SPL is weighted with the
                                 human hearing sensitivity it is not a perfect method
                                 for accounting for a person's perception of the fre-
                                 quency characteristics of a sound.  Many other scales
                                 have been developed to better quantify loudness and
                                 noisiness. The tone corrected perceived noise level
                                 better accounts for the human hearing response by dif-
                                 ferentiating between broadband noise and pure tones.
                                 Perceived noise levels exceed the A-weighted noise
                                 levels typically by 11 to 17 decibels. Because the
                                 perceived noise level scale is somewhat more exact
                                 than the A-weighted scale in relating the physical charac-
                                 teristics of a sound to perceived noisiness, particu-
                                 larly for aircraft, it has become a major element in
                                 certifying aircraft.

                                 Tone corrected perceived noise level measurement
                                 methodologies require complex instrumentation and
                                 data analysis to define a sound.  Therefore, they
                                 have found little application in the measurement of
                                 outdoor community noise.  The simple A-weighted sound
                                 level meter so far appears to serve the  purpose
                                 adequately.  Therefore most analytical and computer
                                 models dealing with surface transportation noise
                                 impact analysis are based on the A-weighted sound
                                 pressure level in decibels.

                                 The temporal variations of the noise level in terms
                                 of the dynamic range variations, discrete single event
                                 occurrences, and the time and length of these occur-
                                 rences can easily be observed on a graphic recorder.  In
                                 order to cope with these variations statistical des-
                                 criptors are used.  These descriptors give the percen-
                                 tage of total time that the value of the noise is
                                 above a given level.  Frequently used levels are L10,
                                 L50 and L90 corresponding to the sound level that is
                                 exceeded 10%, 50% and 90% of the time respectively.
                                 Other descriptors used particularly by the EPA/ONAC
                                 are Leq and Ldn shown in equations 1 and 2.
                                      Leq = 10 log[(1/T) ∫ p^2(t)/p0^2 dt]                         (1)

                                      Ldn = 10 log{(1/24)[15(10^(Ld/10)) + 9(10^((Ln+10)/10))]}     (2)
                                 The Leq is the energy equivalent noise level or the
                                 average SPL over a given time period, usually 8 hours
                                 or 24 hours.  The Ldn is the day-night energy equiva-
                               Table 1.  Summary of Noise Impact in the United States by Category

        [Table 1 gives the cumulative number of people whose exposure exceeds
         the indicated Ldn (millions), at 5 dB steps from 45 dB to 85 dB,
         together with the total 1975 and projected noise impact, for the
         categories Urban Traffic, Home Appliances, Aircraft Operations,
         Industrial, Construction, Freeway Traffic, Operators/Passengers, and
         Rail Line Operations.]
                                                        803

-------
lent noise level where a 10 dB penalty is  given to the
9 night hours (10 p.m. to 7 a.m.).   The Leq and Ldn
levels have been used by the EPA/ONAC to characterize
the health and welfare impacts associated  with  noise.
Impulse or single event noise has been shown to cause
interference with communication, disruption of  sleep,
annoyance, and other physiological effects and, in
some cases, hearing loss.  However, it is not treated
separately but averaged into the Leq or Ldn level  to
describe the health and welfare impacts.

MODEL DESCRIPTORS:  The models used to describe the
noise levels produced by various vehicles  can be used
both for highway planning and design projects and  for
developing strategies and assessing regulatory  alter-
natives of noise emission source regulations.  The
basis of all models for highway is  the description
of the noise produced by a single vehicle  observed at
a fixed point as the vehicle passes along  a straight
highway.  Three basic approaches to the modeling of
highway noise have been used, (1) compute  the instan-
taneous noise levels expected for a randomly occurring
flow of vehicles along a single lane equivalent
roadway,  (2) superimpose the individual sources to
constitute a flow, assuming vehicles to be uniformly
distributed and spaced along a single lane equivalent
roadway, and  (3) compute the total  acoustic power  that
is distributed along a single lane  equivalent roadway.
Using these approaches mathematical expressions have
been derived that will describe the observed noise
level at a given distance as a function of the  vehicle
speed, the vehicle flow rate, and the vehicle noise
reference level.  Using propagation theory and  empher-
ical correction factors additional  parameters such as
vehicle noise frequency spectrum, barriers, ground
absorption, atmospheric absorption, reflection, and
road geometry are incorporated into the model.

A number of equations have been developed  to predict
the propagated noise level to a given distance. An
early equation, equation (3), developed by Johnson
and Saunders, gives L50, the median noise level, at
sufficient distances from the highway and/or at higher
traffic densities.  The noise source is shown to smear
       L50 = 20 + 10 log V - 10 log D + 20 log S        (3)
out  into a line source whose levels decrease by 3 dB
per double distance.  The variable V is the vehicles
per hour, D is the distance from the highway, and S is
the  vehicle  speed.  Galloway was the first to develop
a  simulation model that accounts for the statistical
 [Figure I plots the difference L10 - Leq (decibels) against the standard
  deviation s (decibels) for a normal distribution of levels, following the
  curve L10 - Leq = 1.28s - 0.115s^2.]

        Figure I.  Difference between L10 and Leq for
                   a Normal Distribution.
         distribution of the noise level as a function of time.
         The basic equation is shown in equation  (4).  This

               L50 = 29 + 10 log V - 15 log D + 30 log S
                          + 10 log(tanh(1.19x10^-3 VD/S))        (4)

         equation is for automobiles, and a similar equation is
         used for trucks.  The variables V, D, and S are the
         same as above.  To convert L50 to L10, equation (5) is
             L10 = 10 log[cosh(1.19x10^-3 pD)/(cosh(1.19x10^-3 pD) - 0.95)] + L50        (5)

         used, where p is the traffic density.  In this empirical simula-
         tion expression the traffic represents
         something between a line source and a point source.
         Figure I is then used to convert L10 to Leq, since Leq
         is used by EPA/ONAC as a primary descriptor to assess
         impact on residential and land use categories.  A
         normal distribution is assumed, which is often not a
         valid assumption for traffic or freeway noise.
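         The fragment below sketches equations (3) through (5) and the
         Figure I conversion as printed above; the unit conventions (V in
         vehicles per hour, S in miles per hour, D in feet, and the traffic
         density p) are not restated here and should be confirmed in the
         original references, and the example inputs are arbitrary.

    import math

    def l50_johnson_saunders(V, D, S):        # equation (3)
        return 20.0 + 10.0 * math.log10(V) - 10.0 * math.log10(D) + 20.0 * math.log10(S)

    def l50_galloway_autos(V, D, S):          # equation (4)
        return (29.0 + 10.0 * math.log10(V) - 15.0 * math.log10(D) + 30.0 * math.log10(S)
                + 10.0 * math.log10(math.tanh(1.19e-3 * V * D / S)))

    def l10_from_l50(l50, p, D):              # equation (5)
        c = math.cosh(1.19e-3 * p * D)
        return 10.0 * math.log10(c / (c - 0.95)) + l50

    def leq_from_l10(l10, s):                 # Figure I: L10 - Leq = 1.28s - 0.115s**2
        return l10 - (1.28 * s - 0.115 * s * s)

    l50 = l50_galloway_autos(V=2000.0, D=100.0, S=55.0)
    l10 = l10_from_l50(l50, p=2000.0 / 55.0, D=100.0)   # density taken here as V/S (an assumption)
    print("L50 = %.1f, L10 = %.1f, Leq about %.1f dB" % (l50, l10, leq_from_l10(l10, 3.0)))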
         Another expression, developed by the Department of
         Transportation and shown in equation (6), calculates the
         average intensity for a vehicle group knowing the mean
         SPL of the vehicle group.  The variable  r is the ref-
         erence distance, d the perpendicular distance from
              I =    Σ         Σ     (r·Δa/d) 10^(-D/10) 10^(L/10) 10^(0.5(s/4.35)^2)        (6)
                   road      vehicle
                 segments    groups
         the roadway, Δa the enclosed angle at the receiver at the two
         ends of the road segment, D the attenuation of sound,
         L the mean SPL, and s the standard deviation of the
         normal distribution of the reference SPL.  The Leq is
         then calculated as 10 log I.

         A similar approach, developed by Plotkin, calculates
         the equivalent energy by using equation (7) for each
                                                    (7)
          single lane "j" of traffic.  I is the vehicle inten-
          sity level, P is the fraction of the vehicles that
          produces a sound level I, S the vehicle passbys per
          unit time, V the vehicle speed, d the passby distance,
          and dj the lane distance.  Again the assumption of a
          line source is used, and Leq is found from 10 log I.

          It is really not  the purpose of this paper to discuss
          the  details  of these simulation expressions or others
          that have been developed.  However, the basic features
          of these expressions need to be understood to apply
          mathematical predictive methodologies to assess urban
          or community noise  impact.  The important feature  in
          assessing  impacts on the community  is that a specific
          site or scenario  needs to be defined, including the
          source, since the propagation characteristics of
          sources do  vary.

          ASSESSMENT  METHODOLOGIES:  As part  of the requirement
          of the  Noise Control Act of 1972, the EPA  identified
          community noise  levels that are "requisite to protect
          the public  health and welfare with  an adequate margin
          of safety."  Various  land use areas  include residential,
          commercial,  industrial, educational, recreational
           areas, and inside transportation.  Generally, an Leq
           level of 70 decibels is identified to protect
           against hearing loss.  Other levels are shown
          on Table  II.
                                                       804

-------
      Table II.  Noise Levels Protective of Health and Welfare

              Human Response                          Leq        Ldn

  Hearing Loss* (8 hours per day)**                    75         —

  Hearing Loss* (24 hours per day)                     70         —

  Outdoor Annoyance                                    —          55

  Indoor Annoyance, Speech Interference                —          45

  *Based on exposure over 40 years at that level.
  **As long as the exposure over the remaining 16 hours per day
  is low enough to result in a negligible contribution to the
  24 hour average.
 The procedures used to assess  impact due to environmen-
  tal noise follow the same fundamental analysis used
 for any environmental assessment.   First, the initial
 acoustical environment must be defined. Second, the
 final acoustic environment must  be defined.  Third,
 the relationship between specific  acoustic environments
 and the expected human impact  must be analyzed.  These
 three steps are used in planning and developing highway
 construction projects and also in  assessing the impact
 of  planned or developed regulations of specific noise
 emission sources.  When planning and developing a par-
 ticular road, one uses this assessment approach on a
 single or group of houses.  When assessing the impact
 of  a noise emission source regulation, all houses near
 the entire national and local  highway system are
 considered.

 To  simulate various traffic conditions in the United
 States, the EPA/ONAC has considered both an urban
  traffic situation and an urban freeway situation.
 For both scenarios, the models are designed to
 assess the total United States population affected by
 particular noise source emission regulations.

 URBAN TRAFFIC MODEL: The urban traffic model is a
 statistical model used for estimating the national
  urban population benefitted by a regulation.  The
 model uses the following assumptions.

 (1)  In an urban environment the  average speed is
     27 mph.

  (2)  The vehicle mixture is 1% heavy trucks, 6% medium
      trucks, 91.5% automobiles, 0.5% buses, and 1.0%
     motorcycles.

  (3)  The baseline noise levels for each vehicle are:
      heavy truck 85 dBA, medium truck 77 dBA, automo-
      bile 65 dBA, bus 79 dBA, and motorcycle 83 dBA.

 (4)  The population density as  a  function of outdoor
      noise level is based on empirical data shown on
     Figure II.

  The results shown on Figure II are based on a sample
 of  100 sites chosen to represent a wide range of  popu-
 lation densities throughout the United States. The
 sites were chosen away from freeways,  construction
 sites,  airports and aircraft noise in order to
 represent urban traffic noise.   The cumulative U.S.
 population exposed to levels in excess of specific
 values  are shown on Figure III.  These data are also
  taken from the 100 site study and take into account
  other noise sources.

 The  effect of a noise emission regulation on a vehicle
 category in an  urban environment can be assessed  by
 analyzing the change in the equivalent source level
 produced by changes in the particular noise emission
 source.  Although the equivalent source level, Ls, is in
 dB(A) and the population model is in Ldn, any change in
 Ls would produce an equal displacement in Ldn.

 [Figure II plots the population density (people per square mile) of the
  surveyed sites against the day-night average sound level, Ldn (dB), for
  the current study and for previous studies, with fitted lines of the form
  Ldn = 10 log P + 22 dB and Ldn = 10 log P + 26 dB.]

 Figure II.  Population Density as a Function of Day Night
             Average Sound Level.

 [Figure III plots the cumulative U.S. population exposed to levels in
  excess of a given day-night average sound level, Ldn (dB).]

 Figure III.  Cumulative Population Exposed to Levels in
              Excess of Day Night Average Sound Levels.
                                                         805

-------
 To calculate the effect of a noise emission regulation,
 compute Ls for all noise source categories involved:
 heavy trucks, medium trucks, automobiles, buses, and
 motorcycles.  Knowing the distribution and the noise
 levels of each vehicle, Ls is just the logarithmic
 summation expressed in equation (8), where i is the
 vehicle category, Li is the noise level of the ith vehicle, and

                 Ls = 10 log Σi Di 10^(Li/10)        (8)

 Di is the fractional distribution of the ith vehicle.
 Ls is calculated before regulatory levels are imposed
 and after, to determine the before and after conditions
 for a particular source regulation.  The "before" and
 "after" Ls values are equated to equal values of Ldn.

  Figures II and III are then used to show the relative bene-
 fits of a regulation.  Figure II shows a lowering of
  the overall urban noise, or Ldn, at a given population
 density. The plot on Figure II will be shifted or
 displaced left as a result of a source regulation.
 Additional regulations would also incrementally shift
 the plot left so that at a given population density the
  overall urban Ldn can be shown to be less.  Figure III,
 however, shows the cumulative population exposed to
 levels in excess of specified values.  Again since any
 change in Ls calculated by equation (8)  is  equal to
 a change in Ldn, a change in population can be deter-
 mined that is exposed to levels in excess of the
 specified values.  Thus the number of people actually
 benefitted by the reduction of the urban noise level
 is found.

 FREEWAY NOISE MODEL:   The Freeway Noise Model used by
 the EPA/ONAC estimates the number of people benefitted,
 or impacted by a particular regulation that live
 along the miles of urban freeway.   As mentioned pre-
 viously,  the analysis to determine the impact due to
 noise is site specific.   For national impact analysis
  a number of very general assumptions have been used to
 simulate a site to represent the average urban freeway.
 The scenario consists of a freeway passing  through an
 urban residential area having the  following properties:

 (1)  Freeway consists  of  six traffic lanes

 (2)  Freeway has no grade relative  to surrounding
     property

 (3)  Population density adjacent to the freeway is 5000
    people  per square mile

 (4) Dwellings are single-family, one story high, located
     on one hundred foot lots, two lots deep

 (5) Traffic volume is 7200 vehicles per  hour  with an
    average speed of  55  miles per  hour

 (6) Traffic distribution is 10% heavy  diesel  trucks  and
    90% automobiles

 (7) There are  8000 miles of urban  freeway

These  assumptions are then used  to  compute  the hourly
equivalent  levels at  various  distances from the
freeway.  The  level is computed  for  both trucks  and
automobiles  and  their combinations. The same equation
used for the urban traffic model, equation  (8),  is
used for comparing the equivalent source  level,  usually
at 50  feet  for a  particular combination  of  sources.
The individual source  levels  used are  the levels  im-
posed  by the standards being  considered.  It  is
assumed that any  change  in  the equivalent source  level
 would produce an equal change  in  the day  night average
 sound level, Ldn.

  Figure IV represents the noise level adjacent to
 urban freeways using a 1974 base  case  of 7200 vehi-
 cles per hour, 10% trucks and  90% automobiles, at an
 average speed of 55 mph.  Using equation  (8)   and
 calculating the "before" and "after" conditions the
 change in Ls will produce an equal change of Ldn.
 Thus an increase in distance from the freeway  for
 the "after" condition is found.   The Ldn  levels in
 5 dB increments from the freeway  are computed,  and
 the population in these contours  are estimated knowing
 the population density of 5000 people per  square mile
 and 8000 miles of urban freeway.
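 A sketch of the contour and population arithmetic follows.  The Ldn at the
 50-foot reference distance is left as an assumed input (the base-case value
 belongs to Figure IV); the 9 dB per doubling attenuation, the 5000 people per
 square mile density, and the 8000 miles of urban freeway come from the
 scenario assumptions above.

    PEOPLE_PER_SQ_MILE = 5000.0
    FREEWAY_MILES = 8000.0
    REF_DISTANCE_FT = 50.0
    DB_PER_DOUBLING = 9.0
    FEET_PER_MILE = 5280.0

    def contour_distance_ft(ldn_at_ref, contour_db):
        # distance at which Ldn falls from ldn_at_ref (at 50 ft) to contour_db
        return REF_DISTANCE_FT * 2.0 ** ((ldn_at_ref - contour_db) / DB_PER_DOUBLING)

    def people_exposed_above(ldn_at_ref, contour_db):
        # strip on both sides of the freeway, over the national urban freeway mileage
        strip_width_miles = 2.0 * contour_distance_ft(ldn_at_ref, contour_db) / FEET_PER_MILE
        return strip_width_miles * FREEWAY_MILES * PEOPLE_PER_SQ_MILE

    ldn_at_50_ft = 80.0                      # assumed base-case level; see Figure IV
    for contour in (75, 70, 65, 60, 55):
        print("Ldn > %d dB: about %.0f people" % (contour, people_exposed_above(ldn_at_50_ft, contour)))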

 It is important to realize that Figure IV  illustrates
  a base case using empirical data representing a
 situation resembling an average urban freeway.  The
 attenuation is considerably greater than the 3 or 4.5
 dB per double distance as described in equations 3
 thru 7.  The attenuation of 9 dB per double distance
 is used since this represents the average  attenuation
 of the particular source spectrum expressed in terms
 of day night average sound level taking into account
 atmospheric attenuation,  ground absorption, building
 reflection and  refraction,  and  other  physical losses.
 As other  sources  are analyzed in this model the par-
 ticular attenuation  rate will need to be determined.

 PROBLEM AREAS:  There are a number of questions that
 often  arise in dealing with the noise descriptors or
 the models.   The  descriptors Leq and  Ldn have the
  basic advantage that they are easy to measure.
 However,  the  objection to the use  of  Leq or Ldn might
 be that the long  term average is not  appropriate to
 describe  harmful  or  annoying  effects  of short dura-
 tion, high level  noise.   This is true where short
 duration  levels may  result  in permanent hearing dam-
  age.  However, the nature of the decibel is such that
  an impulse of 110 dB for one second will raise a 24-
  hour 55 dB average level to a 24-hour Leq of 62 dB.
  The average descriptors thus show an effect of impulse
  noise, and therefore in most cases it is sufficient
  to use these average descriptors.
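 The arithmetic behind the 62 dB figure can be checked directly, as in the
 short fragment below.

    import math

    seconds_per_day = 24 * 3600
    steady = seconds_per_day * 10.0 ** (55.0 / 10.0)   # relative energy of a steady 55 dB day
    impulse = 1.0 * 10.0 ** (110.0 / 10.0)             # one second at 110 dB
    leq_24 = 10.0 * math.log10((steady + impulse) / seconds_per_day)
    print("24-hour Leq = %.1f dB" % leq_24)            # about 61.7 dB, i.e. roughly 62 dB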

 Another area which requires further study is the
 use of nationally averaged scenarios to represent envi-
 ronmental  impact of various noise  emission sources. In
using an averaged scenario, the high impact areas are
lost in the average.  There  is  a need for  a national
 indicator  for assessing the relative  impact of  various
regulations.  However, the national indicator would be
 more meaningful if the average were arrived at through
 a system of scenario sampling.  Often more emission
 sources are used in certain geographic areas
            [Figure IV plots day-night average sound level, Ldn (dB), against
             distance from the freeway right of way, in miles, with the level
             falling about 9 dB per doubling of distance.]
            Figure  IV.  Day  Night  Average Sound Levels Adjacent
                        to Urban Freeways.
                                                       806

-------
than in others.  Many situations exist where commun-
 ities are severely impacted by highway noise.  Other
situations exist where expensive noise abatement pro-
grams are underway. Obviously all worst case scenarios
cannot be found, but a statistical sample of both urban
and freeway scenarios can be developed.  Techniques are
available to accurately predict the levels of exposure
in each scenario and determine the number of people
impacted.

Currently efforts are underway to look at a number of
urban sites and identify characteristic features that
could lead to a sampling model.  In each scenario
site features include acceleration/deceleration areas,
lane miles, traffic characteristics (volume, speed,
and distribution), topography  (vegetation and barrier
 types), roadway characteristics (configuration and
grade), types of housing  (single dwelling, or multi-
family) and general land use areas such as school,
recreation, industrial and commercial areas. These
sites could then be used to illustrate a range of scen-
arios and the impacts and benefits of each regulation
could be expressed  in terms of specific sites.  The
next step would be to indicate how these sites repre-
sent subgroups within the national population distri-
bution impacted by noise sources.
CONCLUSION:  The question that arises when using any
 model is its accuracy.  The data gathered from the
100 sites shown in Figure II  indicate  that the stan-
dard error of estimate of the Log of the population
density on Ldn is about 4dB.  This  is considered
analogous to a standard deviation of 4dB.  The stan-
dard deviation gives the measure of the dispersion of
the distribution.  However, no data are available as
to the variability of traffic mixes used in the Urban
Traffic Model. The assumptions used need to be under-
stood in interpreting the results.

Propagation noise model variations are another area
where accuracy and understanding of variations are
critical.  Consider a model which predicts a 70 dB
contour at 500 feet from a freeway.  If the model over
predicted by 5 dB, the 70 dB contour would be 1075
feet from the freeway.  An over prediction of 10 dB
would move the 70 dB contour out to 2320 feet.  Errors
of overprediction may cause needless noise abatement
programs, either regulations or actual highway bar-
riers, soundproofing buildings or acquisition of
public land.  Errors of underprediction would cause
excessive noise impact on the public.
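 The quoted contour displacements are consistent with an attenuation of
 roughly 4.5 dB per doubling of distance (the lower of the two rates mentioned
 earlier); the check below simply inverts that rate and is not part of any of
 the models discussed.

    def displaced_contour_ft(distance_ft, overprediction_db, db_per_doubling=4.5):
        # an error of overprediction_db moves a contour from d to d * 2**(error/rate)
        return distance_ft * 2.0 ** (overprediction_db / db_per_doubling)

    print("%.0f ft" % displaced_contour_ft(500.0, 5.0))    # about 1080 ft (text: 1075 ft)
    print("%.0f ft" % displaced_contour_ft(500.0, 10.0))   # about 2330 ft (text: 2320 ft)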

 Finally, it should be realized that enough is known
 about the characteristics of noise, and the technology
 is available, to develop complex models.
However, the amount of information required to include
all sources of environmental noise, all factors
affecting propagation, and the wide range of human
responses would be overwhelming.  Therefore a number
 of assumptions have been made to reduce the complexity
of the models, primarily because of lack of data.
As more data become available the models will include
additional parameters to more accurately describe and
assess impacts at specific sites and project site im-
pacts to the nation as a whole.
 SELECTIVE REFERENCES:

 1.  "Community Noise," U.S. EPA, NTID 300.3,
      December 1971.

 2.  Galloway, W. J., et al., "Population Distribution
     of the United States as a Function of Outdoor
     Noise Level," U.S. EPA, 550/9-74-009, June 1974.

 3.  Hajek, J. J., "Ontario Highway Noise Prediction
     Method," Ministry of Transportation and Communica-
     tions, RR 197, January 1975.

 4.  "Highway Noise - A Design Guide for Highway
     Engineers," National Cooperative Highway Research
     Program Report 117, 1971.

 5.  "Information on Levels of Environmental Noise
     Requisite to Protect Public Health and Welfare
     with an Adequate Margin of Safety," U.S. EPA,
     550/9-74-004, March 1974.

 6.  Kugler, B. A., et al., "Design Guide for Highway
     Noise Prediction and Control," Transportation
     Research Board Report 3-7/3, November 1974.

 7.  Plotkin, Kenneth J., "A Model for the Prediction
     of Highway Noise and Assessment of Strategies for
     its Abatement through Vehicle Noise Control,"
     Wyle Research Report, September 1974.

 8.  Sharp, B. H., "Research on Highway Noise Measure-
     ment Sites," California Highway Patrol, March 1972.

 9.  Wesler, J. E., "Manual for Highway Noise Predic-
     tion," DOT-TSC-FHWA-72-1, March 1972.
                                                       807

-------
                            THE COST OF WATER SUPPLY UTILITY MANAGEMENT
       Robert M.  Clark
       Systems Analyst
   Water Supply Research Division
U. S. Environmental Protection Agency
    Cincinnati, Ohio 45268
               James I.  Gillean
                   President
               Act Systems, Inc.
               Orlando,  Florida
                  ABSTRACT

Passage of the Safe Drinking  Water  Act  has
intensified  a  growing awareness of problems
related to the supply of safe drinking  water
to  the American public.  Of major concern is
the economic impact which might  result  from
promulgation  of regulations under the "Act".
In an attempt to  understand  these  impacts,
EPA's   Water  Supply  Research  Division  is
conducting a study in which one or more water
utilities are being investigated in each of
EPA's    ten    regions.    In   this   paper
representative cost data which have been
collected   from   these   case  studies  are
presented.  These  data  will  be  useful  in
evaluating  the  economic  impact of the Safe
Drinking Water Act.  They should also lead to
a  greater  understanding  of  the   economic
factors which affect the costs of the various
components making up a water supply system.

                INTRODUCTION

Problems  related to water supply have become
increasingly important in recent  years.   In
the past, supplying water to the consumer was
considered  to be a routine matter, and water
itself  seemed  to  be  available  in  almost
unlimited  quantities.  But this is no longer
the case in most parts of the United  States.
Perhaps   water   itself   is  not  a  scarce
resource, but supplying water  of  acceptable
quality  to  an increasingly urban population
is no  longer  a  simple  matter.2  Spreading
urban  boundaries  force many potential water
supply  customers  to  locate   farther   and
farther  away  from  available water sources.
Some areas which are inherently water limited
have attracted significant population growth,
thereby   straining   the   available   water
resource.  A scarcity of the land, labor, and
capital resources needed to convey water to
places of useful application has contributed
to these problems.

Passage of the Safe Drinking Water Act with its
primary   and   secondary   regulations   has
intensified a growing  interest  in  problems
related  to  water  supply  and  water supply
utility management.5 The primary  regulations
which  are  health related and the secondary,
non-enforceable,      aesthetics      related
regulations   cannot   help   but  have  some
economic impact.  For this reason one of  the
primary concerns expressed in the Act relates
to  the  magnitude  and form of this economic
impact upon the American public.
In an attempt to obtain data which can be
utilized  to assess the Act's economic impact
and to understand the factors which influence
the cost of  water  supply  the  EPA's  Water
Supply  Research Division has been conducting
a  series  of  case  studies.   One  or  more
utilities have been investigated in each of
EPA's ten regions.  Data from  one  of  these
case study areas (Cincinnati Water Works) are
presented in  this  paper.   These  data  are
typical of those being collected in the other
case  studies,  and reflect the costs as they
affect the functional categories and physical
supply problems associated with water  supply
utility management.

          DATA GATHERING PROCEDURES

Water  supply  systems are generally composed
of (1)  collection  works,  (2)  purification
systems, where needed, and (3) transportation
and  distribution  systems.   The  collection
works either tap a source of water  that  can
satisfy  present and reasonable future demand
on a continuous basis,  or  they  convert  an
intermittent  source into a continuous supply
by  storing  surplus  water  for  use  during
periods of low flows.  If the water is not of
satisfactory   quality   at   the   point  of
collection,  it  is  treated   to   make   it
esthetically attractive and palatable.

Water   containing   iron   or  manganese  is
subjected      to      deferrization       or
demanganization;     corrosive    water    is
stabilized chemically; and  excessively  hard
water  is  softened.   The transportation and
distribution works convey the  collected  and
treated  water  to the community, where it is
distributed to the consumers.

Because   large   operating    and    capital
investments  are involved, it is important to
be able to compare costs between utilities to
understand the components which make  up  the
operation.  To make these kinds of
comparisons it is necessary  to  collect  the
data in a standardized manner.  One approach,
and  the  one  which will be utilized in this
report, is to define the utility's operations
in such a manner that they can be categorized
into functional areas.  Figure 1  illustrates
a  typical  utility  in  which the operations
have been defined as being  composed  of  the
functions   of  acquisition,  treatment,  and
distribution.   This  is  an   oversimplified
categorization   but   serves   as  a  useful
beginning  point.   One  important  area  not
included  is  the  management  function.   By
collecting   data   that    describe    these
                                                     808

-------
functional   categories  it  is   (in  theory)
possible to compare the costs  of  one  water
supply  with  those  of another.  This is the
principle that has been used to  gather  data
on  the  Cincinnati Water Works, although the
functional  categorizations  are  much   more
detailed than presented in the example.

The  Cincinnati  Water  Works operations have
been  defined   as   follows:    acquisition,
purification,  transmission and distribution,
power  and  pumping,  and  support  services.
These functional categories are common to all
water   utility   operations   although   the
specific costs assigned  to  each  functional
category  may  vary depending on the utility.
All of the costs, with the exception  of  the
support  services  category,  are those which
make  that  specific  activity   operational.
Support services includes management,
customer services, and all of those costs
which do not relate to specific operating
activities.  For example, laboratory personnel
costs are included in the purification
activity, but the management costs of the
purification treatment division are included
in the support services category.
Maintenance and repair costs are allocated to
each category where appropriate.

In addition to the Operating Costs, one  must
also include Capital Costs in the analysis in
order  to  be  complete.  For the purposes of
this analysis, Capital Costs are  defined  as
the  depreciation  on  the utility's existing
plant in service, and  the  interest  on  any
types   of  borrowing  mechanisms  which  the
utility may use to raise  money  for  capital
investment.  Depreciation as reported here is
based  on  the  actual  cost  of the facility
divided  by  its  useful  life,  and  not  on
reproduction  cost.  The data as reported for
depreciation, therefore, will  reflect  lower
costs  for  older utilities.  This is true in
the case of the Cincinnati Water Works, since
most of its facilities were built  from  1930
to   1940.    In   order  to  understand  the
magnitude  of  the   bias   which   such   an
assumption  introduces,  an  analysis  of the
replacement cost for the utility's facilities
has  been   made.    Using   a   standardized
construction cost index, the original cost of
each  facility  currently  in  use  has  been
inflated to a 1974 cost base.4  The  analysis
is  contained in a section which will follow.
The  interest  costs  are  those  which   the
utility has historically paid for money.
Table 1 summarizes the cost categories utilized
in this analysis.

                   TABLE 1

Operating Costs

    Overhead

    Acquisition

    Purification

    Transmission and Distribution

    Power and Pumping

Capital Costs

    Depreciation

    Interest
All of the cost analysis which will be
discussed in this paper is based on revenue-
producing water.  The unit costs presented
will be calculated using the revenue-
producing water pumped by each utility during
the water years 1964 through 1973.
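
The brief sketch below (illustrative only, not taken from the utility's
records) restates this convention: unit costs are computed against revenue-
producing water rather than total water pumped, so the same dollar cost yields
a noticeably lower figure if it is spread over total pumpage.  The 1973
volumes follow the text (metered consumption of about 38,100 mil gal, with
total pumpage exceeding that by nearly 13,000 mil gal); the cost figure is a
placeholder.

# A minimal sketch of the unit-cost convention stated above: $/mil gal figures
# are computed on revenue-producing (metered) water, not on total water pumped.

def unit_cost(annual_cost_dollars, volume_mil_gal):
    """Unit cost in dollars per million gallons."""
    return annual_cost_dollars / volume_mil_gal

if __name__ == "__main__":
    cost = 1_000_000.0              # illustrative annual cost, dollars
    metered = 38_100.0              # revenue-producing water, mil gal (1973)
    pumped = metered + 13_000.0     # total pumped, mil gal (1973)
    print(round(unit_cost(cost, metered), 2))   # basis used in this paper
    print(round(unit_cost(cost, pumped), 2))    # not the basis used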

                SERVICE AREA

The present service area lies almost entirely
within Hamilton County with fringe extensions
into three adjoining counties.  Although for
the  most  part  they  are  surrounded by the
Cincinnati Water Works service area, a number
of communities maintain  their  own  systems.
Emergency  service  is  provided  to  most of
them, but, as long as their source of  supply
can  be  maintained,  most of the communities
will not change their present status.

The current source  of  supply  is  the  Ohio
River,  from  which  water  is  pumped to the
treatment plant.  It has a  capacity  of  235
million gallons per day (mgd); in 1973 it
treated an average  of  136  mgd.   Water  is
distributed  to  the east through a series of
pumping stations and tanks.  To the north and
west,  water  passes  through   two   gravity
tunnels  and through two pump stations into a
large reservoir and  is  then  repumped  into
outlying service areas.

                COST ANALYSIS

Figure  2 shows the total water pumped by the
utility during calendar  years  1964  through
1973  as  well as metered (revenue-producing)
water and water which was accounted  for  but
did  not  produce revenue.  All cost data are
based on revenue-producing water; for
example,  purification  costs  in dollars per
million gallons  ($/mil  gal)  are  based  on
revenue-producing  water and not on the total
number of gallons  of  water  pumped  by  the
utility.   As  can be seen from Figure 2, the
total water pumped exceeds  revenue-producing
water  by  nearly  13,000  million gallons in
1973.
                                                     809

-------
Table 2 contains the total operating cost for
each of the previously mentioned  categories.
The Support Services category includes all of
those  operating  costs  that support but are
not directly chargeable to the production  of
water.   It  includes  such  items as general
administration,  accounting  and  collection,
and meter reading.  The Purification category
includes  those  costs related to the cost of
operating the laboratory, labor  involved  in
the   treatment   function,   chemicals   for
purifying the water, and maintenance  of  the
treatment  plant.  Power and Pumping includes
those  costs  related  to  operating   labor,
maintenance,  and  power  for  pumping  water
throughout    the    service    area.     The
Transmission    and   Distribution   category
includes the operating labor and  maintenance
costs  associated with supplying water to the
consumer.

It can  be  seen  from  the  table  that  the
Support Services costs have more than doubled
between  1964  and 1973.  Although all of the
other cost categories increased  during  this
period,  their rate of increase was less than
that of this category.  Total operating costs
increased by about 65 percent.

Table 2 also contains the total average  unit
operating costs for each major category based
on  the  number  of revenue-producing gallons
pumped in a given year.  As can be seen,  all
the  cost categories increased by a factor of
less than two, and the total  operating  cost
increased  by  about  40  percent.  Each cost
category is presented as a percent  of  total
operating  cost.   It is obvious that Support
Services  accounted  for  a  significant  and
increasing  portion  of the utility's budget,
from approximately 26 percent in 1964 to 31.5
percent in 1973.  The other  cost  categories
either   decreased   or   remained  constant.
Depreciation and Interest Expense are defined
as the capital expenses  for  the  waterworks
system.    These  capital  expenses  remained
essentially constant but  operating  expenses
increased  by  approximately  65 percent.  As
can be seen from  Table  2,  the  percent  of
expenditures  allocated  to capital decreased
from approximately 27 percent to  22  percent
during  the  period.   Operating expenditures
are always reported in  inflated  or  current
dollars,   while   capital  expenditures  are
depreciated in historical dollars over a long
period of time.  The problems related to  the
depreciation  of  capital  will  be discussed
later.  Since the Support Services  category,
which   is   labor   intensive,   played   an
increasingly important role in  the  cost  of
water  supply,  labor and manpower costs will
be analyzed in the following section.

Labor Cost Analysis

To evaluate the  impact  of  labor  costs  on
operating  costs  for  water  supply,  it  is
necessary to examine the payroll of the water
utility (Table 3).  It can be seen that labor
costs  accounted  for  64  percent   of   the
utility's  operating costs in 1964 and for 62
percent in 1973.  The average cost  per  man-
hour  increased  71 percent, while the number
of man-hours/mil gal of  metered  consumption
decreased  by 23 percent.  The bottom  line  in
the table shows  a  decreasing  capital/labor
cost ratio.  Although economies of scale were
achieved  with  respect  to the number  of man-
hours used to produce water,  the  effect   on
cost  was  nullified  by wage increases.  The
table, therefore, illustrates the  importance
of  labor in what is typically presumed to  be
a capital intensive industry.

Depreciation Analysis

As mentioned  earlier,   capital  expenditures
comprise a large portion of the cost of water
supply.    Depreciation  reflects  historical
costs and not the cost of replacing a  capital
facility based on current costs.   Historical
costs refer to the original construction cost
of  a  capital  facility,  while reproduction
costs  reflect   the   capital   expenditures
necessary  to build an identical plant today.
Historical cost is  exact,  but  reproduction
cost  is  based  on  the original investment
modified by an appropriate index.

The records of  the  Cincinnati  Water  Works
show  the  historical  value of the plant-in-
service to be  $111,700,315.   The  value   of
pipelines,  plant,  or   equipment  previously
replaced or fully depreciated is excluded.

Using the historical  costs,  a  reproduction
cost  was  calculated  using the ENR Building
Cost Index (1913 =  100)  for  buildings  and
equipment and the ENR Construction Cost Index
(1903  =  100)  for  pipes  and  valves.    (A
skilled labor cost factor is used to   compute
the  Building  Cost Index, and a common labor
cost  factor   is   used   to   compute   the
Construction  Cost  Index).   Having weighted
these capital expenditures  with  the  proper
indices,  a reproduction cost of $458,990,287
was found for the  current  plant-in-service,
which  represents a 311  percent increase over
the   historical   value.    These     capital
expenditures   do  not   include  the   capital
investment in a new  treatment  plant  (Great
Miami)  which  is  expected to be operational
soon.  Derivation  of  a  reproduction  value
facilitates examining the impact of inflation
on  capital  cost  and   the  current worth  of
capital's contribution to output.  The
computations  discussed  in  this section are
summarized in Table 4.
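
The sketch below (a hedged illustration, not the Water Works' accounting
procedure) shows the index-ratio calculation described above; the index
values used are placeholders, not published ENR figures.

# A minimal sketch of inflating a historical construction cost to a current
# cost base with a construction cost index, as described for the ENR Building
# and Construction Cost Indexes.  The index values below are hypothetical.

def reproduction_cost(historical_cost, index_at_construction, index_current):
    """Scale the original construction cost by the ratio of index values."""
    return historical_cost * index_current / index_at_construction

if __name__ == "__main__":
    # Hypothetical facility built when the index stood at 300, revalued at 1900.
    print(f"${reproduction_cost(1_000_000, 300.0, 1900.0):,.0f}")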

              SYSTEM EVALUATION
Using  the  cost   data   for   the   various
functional  areas  discussed  earlier,  costs
were   allocated   to   specific   treatment,
transmission, storage, and pumping facilities
in the system.  A general cost was determined
for  distribution,  interest,  and  overhead.
Using costs  based  on  1973  $/mil  gal  and
assuming  a  linear allocation of costs for a
given area against capacity required to serve
it, the facility costs associated  with  each
service  area,  such  as pumping and storage,
were established as shown in  parentheses  in
Figure 3.

The codes in the schematic diagram (Figure 3)
can  be related to cost values.  For example,
the acquisition cost for water from the  Ohio
                                                     810

-------
River, including depreciation of the facility
and operating costs, is $16.70/mil gal.  As a
unit  of  water  (mil gal) moves through each
facility to another service  area,  the  unit
cost  of  moving  water  through that area is
added to the cost of getting  water  to  that
area,  thereby  creating  incremental  costs.
The facility and transmission costs are added
to the costs of distribution,  interest,  and
overhead  to  yield  an  average unit cost to
serve that area.  A service zone represents a
customer service area and a demand point  for
water.   For  purposes  of  this  analysis an
attempt was made to discriminate between  the
water  demanded  in a given distribution area
and the water transmitted  through  the  area
into the next service zone.

To  illustrate  the way in which cost changes
from one service  area  to  another,  we  can
examine  the Bl and B2 cost areas (Figure 4).
The cost per million gallons for area  Bl  is
composed   of   acquisition   cost  ($16.70),
treatment cost  ($60.26),  distribution  cost
($50.52),   interest   cost   ($17.57),   and
overhead cost ($85.22).  This yields a  total
cost  of  $336.86/mil  gal.  For the B2 area,
the pumping and storage costs ($80.45) and
the transmission costs ($60.26) must be added
to the B1 cost, and this yields a cost of
$477.60/mil gal.  These values are plotted in
Figure 4.  The costs in each zone are
described by a step function.  As water is
pumped from the treatment plant through the
B1 zone, the average cost per million gallons
(using this analysis) remains constant;
however, as water is repumped into the B2
zone, the costs take a definable jump to a
higher level.
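
The short sketch below (illustrative, using the 1973 dollar figures quoted in
the text) shows the incremental build-up of unit cost from the B1 zone to the
B2 zone described above.

# A minimal sketch of the incremental (step-function) unit-cost build-up for
# the B1 and B2 zones: each downstream zone adds its own pumping, storage, and
# transmission charges to the unit cost of the zone that feeds it.

def downstream_cost(upstream_cost, *increments):
    """Unit cost in the next zone = upstream unit cost + that zone's increments."""
    return upstream_cost + sum(increments)

if __name__ == "__main__":
    b1_total = 336.86                  # total unit cost to serve B1 ($/mil gal)
    b2_total = downstream_cost(b1_total,
                               80.45,  # pumping and storage into B2
                               60.26)  # transmission into B2
    # about $477.57/mil gal (the text quotes $477.60/mil gal)
    print(f"B2 unit cost: ${b2_total:.2f}/mil gal")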

The step function  suggests  the  possibility
that as additional service zones are added to
the periphery of the utility service area the
cost functions will continually increase.  It
is revealing to compare this costing analysis
to the prices actually charged in the utility
service area.

              PRICING ANALYSIS

Figure  5  is  a map of all of the cost zones
which make  up  the  Cincinnati  Water  Works
service  area.  Table 5 contains a comparison
between the revenues received  from  the  ten
largest  users  in  the  service area and the
cost of service.  It can be seen that many of
the major users are not meeting the costs  of
supplying water to them.

             NATIONAL EVALUATION

Cost   data  for  the  other  water  supplies
studied have been developed in the same format
as presented in this paper.  Table 6 contains
the costs for these utilities using the  cost
categorizations   discussed.   The  following
approximate breakdown of  the  percentage  of
cost   which   makes   up  each  category  is
interesting: Acquisition - 15%; Treatment -
12%; Distribution - 29%; Support Services -
24%; and Interest - 20%.
            SUMMARY AND CONCLUSIONS

This report documents the  application  of  a
functional  approach to the analysis of water
supply     utility     management      costs.
Functionally,  these  costs have been defined
in the following manner:   Support  Services;
Acquisition; Purification; Transmission and
Distribution; and Power and Pumping.  Having defined these
costs in a functional  manner,  they  can  be
reaggregated into capital and operating costs
for  the  various  physical  components which
make up the water  delivery  system.   It  is
apparent   from   the   first  analysis  that
manpower costs  are  a  significant  part  of
water  supply  operating  costs and that this
factor is playing an  increasingly  important
role  in the total cost of water as delivered
to the consumer.  As  water  is  pumped  from
treatment plant to consumer, costs are added,
and  they  increase  with respect to distance
from the central supply.  By using a specific
utility as an example, this kind of  analysis
can   be   related  to  "real  world"  costs.
However,  it  is  obvious  that   the   basic
principles   discussed  apply  to  all  water
supplies and that they must be considered  in
planning  and  design  of water systems.  The
functional analysis  is  extremely  important
for  regional  considerations.   Perhaps  the
major choice  facing  most  small  to  medium
water supplies will be to join a larger water
system  or  to  develop and improve their own
water supply systems (4).  The approach taken
in this  analysis  should  materially  assist
planners  and  policy  makers in making these
types of decisions.

                 REFERENCES

1.  Cincinnati Water Works, Annual Report
    1973.

2.  Clark, Robert M., Cost and Pricing for
    Water Supply Management, accepted for
    publication in the Journal of the
    Environmental Engineering Division of the
    American Society of Civil Engineers.

3.  Clark, Robert M., and Goddard, Haynes C.,
    Pricing for Water Supply: Its Impact on
    Systems Management, EPA-670/1-74-001,
    April 1974, National Environmental
    Research Center, Office of Research and
    Development, U. S. Environmental Pro-
    tection Agency, Cincinnati, Ohio 45268.

4.  Engineering News Record, McGraw Hill
    Publishing Co., March 20, 1973, p. 63.

5.  S.433, Public Law 93-523, Safe Drinking
    Water Act, 93rd Congress, Washington, D.C.
    (Dec. 16, 1974).
                                                    811

-------
   FIG 1 - SCHEMATIC DIAGRAM OF ACQUISITION, TREATMENT AND
            DISTRIBUTION FUNCTIONS FOR A TYPICAL WATER
            SUPPLY SYSTEM
            (Stages shown: storage, transport, treatment, storage, distribution.)
 FIGURE 2. Pumped and metered water for Cincinnati Water Works (1964-1973).

 (The facility cost values and map detail shown on Figures 3 and 5 are not
  legible in the source copy.)

 FIGURE 5. Major facilities in Cincinnati Water Works service area.


 TABLE 2.  Total and unit operating and capital costs by functional category,
           Cincinnati Water Works, 1964-1973.  (The tabular values are not
           legible in the source copy.)

   FIGURE 3. Schematic diagram of facility costs in Cincinnati Water Works system.

                (To convert $/mil gal to $/1000 cu m, multiply by 0.26.)
 FIGURE 4. Step function cost curve for B1 and B2 service areas.

        (To convert $/mil gal to $/1000 cu m, multiply by 0.26.)

 TABLE 3.  Labor cost analysis for Cincinnati Water Works, 1964-1973 (metered
           consumption, total man-hours, man-hours per million gallons,
           average cost per man-hour, and capital/labor cost ratio).  (The
           tabular values are not legible in the source copy.)
                                                                                 812

-------
                                     TABLE 4

               Historical and Reproduction Costs of Plant-In-Service
                      for Cincinnati Water Works (Dollars)

      Capital                Historical           Reproduction Cost
      Facility               Cost                 (1973-74 Dollars)

      Plant                   42,649,160            146,981,272
      Pipe                    54,848,943            296,771,626
      Misc. Plant*            13,202,213             15,237,389

      Total                  110,700,315            458,990,286

      *Capital expenditures which are not specifically identified.


                                     TABLE 5

             Actual Charge Versus Cost Comparisons for Ten Major Users
                           in Cincinnati Water Works

      Users compared: Norwood, Hilton Davis, Sun Chemical, Procter & Gamble,
      Davison Chemical, Metropolitan Sewer, Cincinnati Milacron, Kroger Company
      (Suburb), Kroger Company (City), and E. Kahn's Sons.  (The revenue ($/MG)
      and cost* ($/MG) entries cannot be reliably realigned from the source
      copy; the listed costs of service range from $262.99 to $275.54/MG.)

      *The value for $/MG (dollars per million gallons) can be converted to
       dollars per 1000 cubic meters by multiplying by 0.26.
                     TABLE 6. - Summary of Costs for Utilities Studied
                                        (1973-74)

                        1973-74
                        Billed     Acqui-   Treat-  Distri-  Support  Interest  Private  Total   Divi-
                        Consump-   sition   ment    bution   Services           Utility  Cost    dends
  Utility               tion                                                    Taxes
                        (bil       ($/mil   ($/mil  ($/mil   ($/mil   ($/mil    ($/mil   ($/mil  ($/mil
                        gal/yr)    gal)     gal)    gal)     gal)     gal)      gal)     gal)    gal)

  Kansas City, Mo.        26.9      15.28    81.98  138.64   144.52    50.32             430.70
  Dallas, Texas           63.0      25.17    51.70  119.91    83.46    57.71             337.95
  San Diego, Calif.       47.2     279.61    27.47  105.86    95.64     6.73             515.31
  New Haven, Conn.*       17.7      28.97    15.38  107.34   118.19   116.70    196.44   583.02   87.86
  Fairfax Co., Virginia   19.2      34.79    61.54  128.33    88.27   208.57             521.50
  Kenton Co., Kentucky     2.2      12.41   102.60  124.41    81.63    73.26             394.31
  Orlando, Fla.           12.5      39.65    25.51  132.82   110.31    85.12             393.41
  Elizabeth Water Co.,*   38.2      59.52    42.07  111.45    89.80   113.16     96.71   512.71   45.63
    New Jersey
  Cincinnati, Ohio        38.1      16.70    60.26  127.41    72.60    17.57             294.54

  *Privately Owned
                                                             813

-------
                                           MATHEMATICAL MODELING  OF
                                           DUAL WATER  SUPPLY  SYSTEMS
                      Arun K. Deb
                  Roy F. Weston,  Inc.
              West Chester, Pennsylvania
                    Kenneth  J.  Ives
              University  College  London
                    London,  England
ABSTRACT

A small percent of total domestic water usage  is
usually required to be of potable water quality; the
rest of domestic need may not warrant excellent
quality.  Dividing water supply  into two portions,
potable and nonpotable, a mathematical model of con-
ventional  and dual supplies has  been developed to
evaluate the technical and economical feasibility of
dual supplies under various conditions.  The sensi-
tivity of the model has been evaluated for various
parameters.
 INTRODUCTION

Technological advances coupled with  increases  in popu-
 lation during the past decades have caused the demand
for fresh water and the discharges of effluents and
wastewater to rivers, lakes and coastal waters to
 increase.  A fundamental need of any community is an
adequate supply of biologically and chemically safe,
palatable water of good mineral quality.   If the
present rate of growth of population and  industry
continues, the quality of natural water will deteri-
orate and it will be difficult to guarantee the high
quality of bulk water supply for domestic  uses.  With
the development of new chemical compounds  day by day
for an ever-increasing demand of the consumer market,
and with the increasing use of chemicals  in agri-
culture and  industry, new micro-pollutants are finding
their way into natural water courses.

Although it is possible that by treatment the mineral
quality and palatability of water can be improved, the
additional treatment cost to remove trace chemicals
and high TDS will be high.  It would be difficult and
costly to produce very high quality bulk water for
all domestic purposes from such sources.

It has been reported1 that of the water used in
households in England only about 3.2 percent is used
for drinking and cooking and about 9.6 percent is
used for dishwashing and cleaning.  About  35 percent
is used for personal  hygiene;  another 35 percent is
used for toilet flushing, and 10 percent  is used for
laundering.   The remainder is used for gardening and
car washing.   This analysis of various household uses
indicates that about 87 percent of household water
does not require water of very good quality with re-
spect to TDS and trace chemical contaminants which
would cause objection if ingested for a long time.

However, if it is assumed that only a small fraction
(about 13 percent) of household water must be of the
quality of drinking water, the volume of water to be
treated by expensive sophisticated treatment pro-
cesses would be small enough to allow economy  in
treatment.   The remaining nonpotable portion of the
domestic water would be biologically safe  and
supplied through a separate distribution system.

Haney and Hamann2 made a rational comparative study
of conventional  and dual water systems.  The objective
of the present study  is  to  develop  a  mathematical
model to evaluate  the  technical  and economical  feasi-
bility of dual supply  systems  for two hypothetical
British towns using twelve  alternative schemes  of
treatment and supply.
PROJECTED WATER DEMAND

In this study the planning  period  was  taken  as  1971
to 2001.  The demands on  public water  supply for  do-
mestic and  industrial uses  are assessed  separately.
Instead of  projecting the total demands  of  past
years, in this study the  contributing  factors are
separated into per capita domestic demand,  per  capita
industrial  demand, and  population  growth.

By regression analysis  of past domestic  water con-
sumption data of nine British towns, the best-fit
equation for the per capita domestic demand  index
percent is  given by:
100 IDt = 67.24 + 1.23 t                                                  (1)
in which 100  IDt  =  per capita domestic  demand  index
(percent)  in  the year t; t  =  number  of  years after
the year 1950.  Similarly, the best-fit equation  for
the per capita  industrial demand  index (percent)  has
been developed  as:

100 ITt = 63.85 + 1.314 t^1.0888                                          (2)

in which 100  ITt  =  per capita industrial  demand
index percent  in the year t after  1950.

Combining Equations  (1) and  (2) and giving  proper
weighting for domestic and industrial demand, per
capita total  demand  index percent  can  be  approxi-
mated as:

100 It = 65.95 + 1.26 t^1.088  .                                          (3)


The value of It for the year 1971 is 1.00.

By regression analysis of past population data of
various towns,  the best-fit equation  for  population
index percent  is obtained as:

100 IPt = 82.65 + 0.826 t                                                 (4)

After assessing separately the growths of population
and per capita  water demand, the  total water  demand
projection for  a town can be obtained  by  combining
per capita water demand with population:
Qt = POP71 (187.10 + 3.57 t^1.088) (0.827 + 0.00827 t) ,                  (5)

in which Qt = total water demand in the t-th year after
1950, in million liters; POP71 = population in the
year 1971 in thousands.
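
The sketch below (an illustration, not the authors' program) evaluates
Equation 5 for a town with a 1971 population of 100,000; the exponent 1.088
follows the reconstruction of Equations 3 and 5 above and should be treated
as approximate where the source copy is unclear.

# A minimal sketch of the demand projection built from Equations 1-5.
# The exponent 1.088 is an assumption where the source copy is unclear.

def total_demand(pop_1971_thousands, year):
    """Projected total water demand Qt per Equation 5 (units as given in the text)."""
    t = year - 1950                          # years after 1950, as in the equations
    per_capita_term = 187.10 + 3.57 * t ** 1.088
    population_term = 0.827 + 0.00827 * t
    return pop_1971_thousands * per_capita_term * population_term

if __name__ == "__main__":
    for yr in (1971, 1986, 2001):
        print(yr, round(total_demand(100.0, yr), 1))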
                                                       814

-------
COST FUNCTIONS

To develop mathematical models for conventional and
corresponding dual supplies, the capital costs and
O & M costs of various units of treatment and dis-
tribution as functions of flow are required.  Cost
data for various units of treatment and distribution
which are valid for England have been taken from the
literature3,4 and updated and formulated in mathe-
matical functions valid for 1971, the base year in
this study.  All the various components considered in
this study are divided into two groups, treatment
and distribution, and are listed with useful life
periods in Table 1.
      Unit No.   Unit Component              Useful Life (years)

         1       River Intakes                       30
         2       Impounding Reservoir                60
         3       Conventional Treatment              30
         4       Chlorination Equipment              15
         5       Contact Tank                        40
         6       Wells                               15
         7       Activated Carbon                    15
         8       Electrodialysis                     15
         9       Pumping Mains                       30
        10       Pumping Stations                    15
        11       Service Reservoirs                  40
        12       Distribution Mains                  30

Table 1.  Useful Life Periods of Components.

Capital cost (y) and O & M cost (Y) functions of
various treatment units valid for 1971 are given in
Tables 2 and 3, respectively.  Costs are expressed in
British Pounds (£1 = $2.07) and flows (Q) are ex-
pressed in million liters per day.

Table 2.  Capital Cost Functions.  (The capital cost
          functions are not legible in the source copy.)

Table 3.  O & M Cost Functions of Treatment Units.

  Unit No.   Treatment Unit            O & M Cost Function for 1971*

     1       River Intakes             Y1 = 651 q
     2       Impounding Reservoir      Y2 = 27.5 q^1.35
     3       Conventional Treatment    Y3 = 1,635 q
     4       Chlorination Equipment    Y4 = 36.5 q
     5       Contact Tank              Y5 = 0.42 q
     6       Wells                     Y6 = 930.75 q + 173.9 q^[exponent not legible]
     7       Activated Carbon          Y7 = 1,785 q / [denominator not legible]
     8       Electrodialysis           Y8 = 8,888 q^0.9

  *Costs are in pounds/year (£1 = $2.07); q = production per day in
   million liters.


Distribution System

From available literature, the total installed
capital cost of a pipeline can be expressed as a
function of diameter:

C = K D^m                                             (6)

in which C = cost of pipeline per meter length and
D = diameter of pipe in millimeters.  For England
(1971) in open areas the values are K = 0.0067 and
m = 1.272; for built-up areas K = 0.0134 and
m = 1.272.  The O & M cost of water distribution
mains of a town in England has been found to be £76
per kilometer per year.

Pumping station capital cost has been expressed in
the literature4,5 as a function of installed power:

y10 = k10 (kW)^m10                                    (7)

in which y10 = capital cost of pumping station,
kW = installed power in kilowatts, and k10 and
m10 are parameters of the cost function.  For England
(1971) the value of k10 = 523.0 and of m10 =
0.785 when y10 is expressed in pounds.

Operating costs of a pumping station including the
costs of labor, electricity and maintenance for
England (1971) are a function of operating head as:

Y10 = (13.61 H + 379.0) q                             (8)

in which Y10 = pumping station O & M cost in
£/year; q = average daily pumping rate in million
liters per day; H = operating head, in meters.

From a regression analysis of the cost data from
England and considering a 24-hour storage in the
service reservoir, the capital cost function for a
service reservoir can be expressed in terms of
design flow:

y11 = k11 Q^m11                                       (9)

in which y11 = capital cost in pounds; Q = design
flow in million liters/day; k11 = 19,169 and
m11 = 0.723.  Operating costs of the service
reservoir in pounds can be expressed as a function
of design flow in million liters/day as:

Y11 = 20 Q  .                                         (10)
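
The following sketch simply restates Equations 6 through 10 as functions (an
illustration, not the authors' code); parameter values are the England (1971)
figures given above, and costs are in pounds.

# A minimal sketch of the England (1971) cost functions in Equations 6-10.
# Symbols follow the text: D = pipe diameter (mm), kW = installed power,
# H = operating head (m), q = average daily pumping rate (million liters/day),
# Q = design flow (million liters/day).

def pipeline_capital_per_meter(diameter_mm, k=0.0067, m=1.272):
    """Equation 6: pipeline capital cost per meter (open areas; built-up: K = 0.0134)."""
    return k * diameter_mm ** m

def pumping_station_capital(kw, k10=523.0, m10=0.785):
    """Equation 7: capital cost of a pumping station."""
    return k10 * kw ** m10

def pumping_station_om(head_m, q):
    """Equation 8: yearly O & M cost of a pumping station."""
    return (13.61 * head_m + 379.0) * q

def service_reservoir_capital(design_flow, k11=19_169.0, m11=0.723):
    """Equation 9: capital cost of a service reservoir (24-hour storage)."""
    return k11 * design_flow ** m11

def service_reservoir_om(design_flow):
    """Equation 10: yearly O & M cost of a service reservoir."""
    return 20.0 * design_flow

if __name__ == "__main__":
    print(round(pipeline_capital_per_meter(300.0), 2))
    print(round(pumping_station_capital(500.0), 0))
    print(round(pumping_station_om(40.0, 25.0), 0))
    print(round(service_reservoir_capital(25.0), 0))
    print(round(service_reservoir_om(25.0), 0))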
DISTRIBUTION SYSTEM ANALYSIS

Considering the total cost of a pumping system con-
sisting of capital costs of pumps and pipelines and
their O & M costs, a mathematical model of a pumping
system has been developed in order to optimize the
total cost in seeking the least cost diameter for the
pipeline5.  The cost functions for pumps and water
mains valid for England (Equations 6, 7, and 8) have
been used to obtain the most economical diameter as a
function of flow:

Dopt = k9 Q^m9                                        (11)

where k9 and m9 are functions of cost function param-
eters, flow equation parameters and interest rate.
For this study, it has been found that

Dopt ∝ Q^0.464                                        (12)

Combining with the cost function of capital cost of
the pipeline (Equation 6):

Capital Cost of Optimum Main, y ∝ Q^0.59              (13)
                                                        815

-------
Pumping Mains

To compare the optimum capital costs of a convention-
al (single) system and a dual system of supplies,
consider a total flow of Q, potable flow of rQ, and
nonpotable flow of (1 - r) Q.  For the same lengths
of mains, the cost of mains in a single system,
YS, is proportional to Q^0.59; in the dual system
YD is proportional to Q^0.59 [r^0.59 + (1 - r)^0.59].
The ratio of cost of mains in a dual system and a
single system can be given as:

YD/YS = r^0.59 + (1 - r)^0.59                         (14)

Gravity Mains

The costs of single and dual mains under the same
hydraulic gradient have been compared.  Using the
Hazen-Williams Equation for pipe flow and the pipeline
cost function (Equation 6), the cost of a gravity
main with constant hydraulic gradient can be expressed
as proportional to Q^0.483.  The ratio of cost of
gravity mains in a dual system and a single system
can be expressed as:

YD/YS = r^0.483 + (1 - r)^0.483                       (15)

Distribution mains from the service reservoir to the
consumers have been assumed to be under a constant
hydraulic gradient.
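
The sketch below (illustrative only, not the authors' program) evaluates
Equations 14 and 15 for a range of potable-to-total flow ratios.

# A minimal sketch of Equations 14 and 15: the ratio of dual-system to
# single-system main costs for a potable-to-total flow ratio r.

def pumped_main_cost_ratio(r):
    """Equation 14: dual/single cost ratio for pumping mains (exponent 0.59)."""
    return r ** 0.59 + (1.0 - r) ** 0.59

def gravity_main_cost_ratio(r):
    """Equation 15: dual/single cost ratio for gravity mains (exponent 0.483)."""
    return r ** 0.483 + (1.0 - r) ** 0.483

if __name__ == "__main__":
    for r in (0.1, 0.13, 0.29, 0.5):
        print(f"r = {r:4.2f}  pumped mains {pumped_main_cost_ratio(r):.2f}"
              f"  gravity mains {gravity_main_cost_ratio(r):.2f}")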
MODEL FORMULATION

Mathematical models of dual and conventional water
supplies considering  12 different treatment systems
(Table 4) have been developed.  Two typical hypo-
thetical British towns with 1971 populations of
100,000  (Town A) and  500,000  (Town B) have been con-
sidered  to develop treatment  system and distribution
system models of dual supply.  Total  treatment and
distribution costs of conventional supply and of dual
supply for all 12 treatment systems have been formu-
lated, and the difference of  treatment and distri-
bution costs between  single and dual  supplies for all
the  12 systems have been calculated.   In formulating
the  mathematical models, the  parameters such as
potable-to-total-flow ratio,  r; interest rate,  i;
annual capital cost increase  rate, cc; and annual
O & M cost increase rate, co, are considered as
variables.
 Table 4.   Various Treatment Systems Considered for
            Dual Supply.  (The table body is not legible
            in the source copy.)
Basis of Formulation

The econo-mathematical models for  all  the  systems  have
been developed on the following  basis:   1)  The models
represent hypothetical new  British towns and  there-
fore are general theoretical models  rather than
specific ones.  2)  Cost functions are derived from
the literature and provide  only  approximate costs;
they are indicative, not definitive.   They are
certainly not applicable, without  adjustments, to
specific cases.  3) The quality  of water from the
single-supply source is assumed  to be  the  same as
the potable supply in a dual-supply  system.   4)  Quanti-
ties of water required are  obtained  by projecting
per-capita domestic and industrial  demand;  however,
the rate of growth has been kept as  a  variable so
that other rates of growth  can also be incorporated
in the model.  5) A leakage loss of  15 percent has
been assumed.  6) Administrative costs have been in-
cluded in all cost functions.

In comparing the costs of single supply and corre-
sponding dual supply, all the costs  incurred  during
the planning period (1971-2001)  have been  converted
to the present value of the base year  (1971).   If
some of the treatment or distribution  units have
residual design life remaining at  the  end  of  the
planning period, the residual values of the units
have also been considered as assets  in the calcu-
lation of the system cost.

Treatment Systems

The present values of all capital and O & M costs incur-
red during the planning period of all treatment units
have been calculated using corresponding cost functions
for design flow, Q; potable flow rQ; and nonpotable
flow (1-r)Q.  As the design period for chlorination
equipment, activated carbon treatment, electrodialysis
and pumps has been assumed to be 15 years, the design
flow for these units has been taken as the water
demand in the year 15 years after installation.  The
design flow for all other units has been taken as
the water demand at the end of
the planning period.

The operational cost functions for various units have
been related to the variable water demand, Qt, during
the planning period.  The present value of the total
operation cost of a unit throughout the planning
period has been obtained by summation of the present
values of all yearly operational costs, which may be
expressed as

         30
Ypv  =   Σ   Yt(Qt) [(1 + co)/(1 + i)]^t                 (16)
        t=1
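
A brief sketch of this discounting step is given below (an illustration under
the reconstruction of Equation 16 above, not the authors' program); the yearly
cost function is a placeholder for the unit O & M functions of Table 3.

# A minimal sketch of the present-value summation of Equation 16 as
# reconstructed above: yearly O & M costs grow at rate co and are discounted
# at interest rate i over the 30-year planning period.

def present_value_om(yearly_cost, co, i, years=30):
    """Sum of discounted, escalated yearly O & M costs for years 1..years."""
    return sum(yearly_cost(t) * ((1.0 + co) / (1.0 + i)) ** t
               for t in range(1, years + 1))

if __name__ == "__main__":
    flat = lambda t: 1_000.0          # pounds/year, illustrative placeholder
    print(round(present_value_om(flat, co=0.06, i=0.07), 1))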
-------
capital cost per year.  Equation 17 can be rewritten
for potable and nonpotable flow in order to incor-
porate the cost of a dual supply system.

The present value formulations of capital and O & M
costs of treatment of single and dual supply of the 12
treatment systems of Table 4 have been made.  A sche-
matic diagram of Treatment System No. 1 is shown in
Figure 1 and a corresponding investment diagram during
the planning period is shown in Figure 2.
 Figure 1.  Schematic Diagram of Treatment System No. 1.

 Figure 2.  Treatment System No. 1 (treatment cost and
            distribution cost schematic diagrams for the
            planning period).

The total present value formulations of capital and
O & M costs of Treatment System No. 1 for single and
dual systems are given as follows:

(i)  Single Supply

ypv(S1S) = y0(TN3)Q,30 + y0(TN4)Q,15 + y0(TN5)Q,30

         + PVy15(TN4)Q,30 + RVy30(TN5)Q,30

              30
         +    Σ  [(1 + co)/(1 + i)]^t [Yt(TN3)Q,t + Yt(TN4)Q,t
             t=1
                 + Yt(TN5)Q,t]                                        (18)

in which ypv(S1S) = present value of total treatment
costs of System No. 1 in single supply.

(ii)  Dual Supply

ypv(S1D) = y0(TN3)rQ,30 + y0(TN4)rQ,15 + y0(TN5)rQ,30

         + [the corresponding capital, replacement, and residual-value
            terms for the nonpotable flow (1 - r)Q, plus the summation
            of yearly O & M costs Yt(TN3), Yt(TN4), and Yt(TN5) for
            both rQ,t and (1 - r)Q,t; the full expression is not
            legible in the source copy]                               (19)

in which ypv(S1D) = present value of total treatment
costs of System No. 1 for dual supply.

In Equations 18 and 19 the treatment costs of treat-
ment units 1 and 2 have not been included, since the
costs of these units in both the systems are equal.
In a similar way the treatment cost formulation of
all the 12 treatment systems has been developed and
incorporated in the model.
Distribution System Formulation

The distribution system formulation of all the 12
treatment systems will be the same.  The total costs
of a distribution system consist of capital costs and
operation and maintenance costs of pumping mains,
pumping stations, service reservoirs, gravity mains
and yearly additions of gravity mains in the distribu-
tion system.  All the costs involved during the plan-
ning period have been converted to present value and
formulated as follows:

Single Supply

ypv(DSS) = present value (1971) of total distribution cost in a
           single supply system
         = [the sum of the present values of the capital and O & M
            costs of the pumping mains (TU9), pumping stations (TU10),
            service reservoirs (TU11), distribution mains (TU12), and
            yearly gravity-main additions for design flow Q; the full
            expression is not legible in the source copy]            (20)

Dual Supply

ypv(DSD) = present value (1971) of total distribution cost in a
           dual supply system
         = [the corresponding expression written separately for the
            potable flow rQ and the nonpotable flow (1 - r)Q; the full
            expression is not legible in the source copy]            (21)

Equations 20 and 21 represent the total present value
in pounds of all capital and operation costs of distri-
bution systems during the planning period (1971-2001)
in single supply and dual supply systems, respectively.


RESULTS AND DISCUSSION

The econo-mathematical models for single and dual sup-
ply for 12 treatment systems of total present costs of
treatment and distribution of water were solved using
a high-speed computer for various potable/total flow
ratios (r values), interest rates (i values), capital
cost increase rates (cc values), and operational cost in-
crease rates (co values) for A-type (base population
100,000) and B-type (base population 500,000) towns.
The computer output comprises total treatment costs,
both capital costs and O & M costs, and total distri-
bution costs for all the 12 systems.  The cost
advantage of dual supply over single supply, DEL,
is expressed by the difference of total present
value costs of single and dual systems in pounds
sterling.
Figure 3.  Cost Ratio of Dual to Single System versus
           Flow Ratio (potable to total flow ratio, r, from
           0.1 to 0.4; population = 100,000; rate of
           interest = 0.07; rate of increase of capital
           cost = 0.04; rate of increase of operation
           cost = 0.06).
The cost ratio of a dual system to a single system has
been plotted against the potable to total flow ratio, r,
for the 12 treatment systems in Figure 3.  The cost
advantage of dual supply over single supply (DEL values)
has also been plotted against interest rate, i, and
operation cost increase rate, co, in Figures 4 and 5,
to show the sensitivity of the DEL values to i and co.

Figure 4.  DEL versus i.        Figure 5.  DEL versus co.
For Treatment System No. 1, where the potable supply
requires complete conventional treatment and the non-
potable supply requires only chlorination, the dual
system is found to be more economical than a conven-
tional system if the potable requirement is less than
29 percent of the total.

Where the raw water source contains high TDS and
demineralization is required (Treatment Systems
3, 4, 6, 7, 9, 10 and 11), a dual system is more
economical than demineralization of the entire supply.
Where a limited supply of high quality ground water
is available, a dual system is more economical than a
conventional system.
                                                       REFERENCES

                                                       1.  Working Party on  Sewage  Disposal, "Taken For
                                                           Granted," Report  of  Jeger Committee, H.M.S.O.,
                                                           England, 1970.

2.  Haney, P.O. and Hamann, C.L., "Dual Water Systems,"
    JAWWA, Volume 57, No. 5, September 1965.

                                                       3.  Burley, M.J. and  Mawer,  P.A.,  "Desalination as a
                                                           Supplement to Conventional  Water Supply," Tech-
                                                           nical Paper  60, Water  Research Association,
                                                           England, 1967.

4.  Miller, D.G., Burley, M.J. and Mawer, P.A., "A
                                                           Survey of Water Supply Costs," Chem. and Ind.
                                                           No. 21, 23 May  1970.

                                                       5.  Deb, A.K., "Pipe  Size  Optimization in a Pumping
                                                           System," J.  Inst.  of Engrs.  (India), Volume
                                                           53 PH, October  1972.
                                                        818

-------
                                DATA COLLECTION FOR WATER QUALITY MODELING IN THE
                                         OCCOQUAN WATERSHED OF VIRGINIA
                                   T.J. Grizzard, C.W. Randall, and R.C. Hoehn
                                         Department of Civil Engineering
                              Virginia Polytechnic Institute and State University
                                              Blacksburg, Virginia
                      ABSTRACT

A large-scale water quality monitoring program has
been instituted to provide runoff water quality data
in sufficient detail to facilitate calibration of a
predictive model using pollutant washoff theory.  The
sampling program involves the installation of automa-
tic sampling stations, automated chemical analysis of
collected samples, and use of the EPA STORET system as
a data management tool.

                     BACKGROUND

The Occoquan Reservoir lies on the southern periphery
of the Washington, D.C. Metropolitan area.  The con-
tributing drainage basin comprises portions of six po-
litical jurisdictions as shown in Figure 1.  Impounded
in 1957, the reservoir today provides a useful storage
of 9.8 X 10^9 gallons (3.7 X 10^10 liters), and serves
as the raw water supply for an estimated 600,000 cus-
tomers in suburban Virginia.  In the late 1960's,
rapid development began to occur immediately above the
headwaters of the reservoir, creating the unusual sit-
uation of having an urbanizing area directly upstream
of a water supply impoundment.  Currently, eleven se-
condary waste treatment plants in the Manassas-Western
Fairfax County area discharge about 8 MGD (3 X 10^7
LPD) of treated wastewater to the surface waters of
the basin.

Observations of the reservoir in the period 1968-1970
showed advancing signs of cultural eutrophication,
characterized by periodic blooms of nuisance algae and
accompanying low raw water quality at the Fairfax
County Water Authority Treatment Works  (1).  Unusual
steps, including the application of massive quantities
of copper sulfate to the reservoir body, hypolimnetic
aeration of the intake area and the addition of acti-
vated carbon slurry to the treatment flow, have been
taken to date to assure the continued use of the im-
poundment as a raw water supply.

In an effort to solve the above problem, the Virginia
State Water Control Board, in July 1971, issued a
"Policy for Waste Treatment and Water Quality Manage-
ment in the Occoquan Watershed" (2).  Two major arti-
cles of that document required that existing waste
discharges be consolidated and treated by a "state-of-
the-art" advanced wastewater treatment  (AWT) plant in
the Manassas area, and that a continuous, basin-wide
water quality surveillance program be instituted to
evaluate the effectiveness of the AWT processes in re-
ducing pollution problems in the reservoir.  As a
corollary to this, it was necessary for the monitoring
program to initiate efforts to quantify and project
the sources of diffuse pollutant yields in the storm-
water runoff from urban and agricultural lands in the
basin.

                  POLLUTANT WASHOFF

At present, most stormwater quality models (3, 4, 5)
assume first order kinetics in simulating the washoff
of pollutants from the land surface during runoff
events.   That is to say, the amount of any pollutant
removed from the ground surface during a given time
interval is proportional to the quantity present at
the beginning of the time interval, as in the follow-
ing equation:
                        dx/dt = -kx                      [1]
where,
    x = Pollutant Load (mass)
    t = Time
    k = Decay Coefficient (time^-1)

This relationship has been used widely as a predictive
tool in modeling the washoff of pollutants that accum-
ulate on the land surface.  It does not account for
pollutant runoff yields associated with soil erosion
and, therefore, has its best pure application in the
simulation of washoff from urban (impervious) land
uses.  It has, however, been shown to be a reasonable
tool to use in the simulation of applied materials
washoff from agricultural lands (4, 5).  The inclusion
of such items as fertilizers, crop residues, and
animal wastes in this category greatly enhances the
suitability of the relationship for use in agricul-
tural areas.

Upon integration and applying appropriate boundary con-
ditions, equation [1] becomes:

                X0 - X = X0 (1 - e^-kt)                  [2]
where,
    X0 = Initial pollutant load on ground surface
         (mass)
    X  = Pollutant load remaining at time, t
    X0 - X = Pollutant load washed off at time, t

Empirical evaluation of the constant, k, is essential
to the application of equation [2] to the simulation
of pollutant runoff loads.  One approach is to assume
that k varies in direct proportion to the rate of
stormwater runoff according to:
                        k = br                        [3]
where,

    r = runoff rate for watershed (depth/time)

In order to evaluate b, it is necessary to make an as-
sumption about the quantity of pollutant removed from
the ground surface by a given runoff event.  One ap-
proach has been to assign a 90 percent removal to a
uniform runoff of 0.5 inch/hour on an impervious sur-
face and 50 percent on pervious surfaces (4).  This
results in the following relationships:
            X0 - X = X0(1 - e^(-4.6rt))               [4]

for impervious surfaces (4) and:

            X0 - X = X0(1 - e^(-1.4rt))               [5]

for pervious surfaces (4).
The runoff rate, r, may be satisfactorily predicted
using a number of hydrologic models currently availa-
ble (6, 7).
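
For illustration, the washoff relationships above may be evaluated as in
the following sketch (Python is used here only for brevity; the 1.4
coefficient for pervious surfaces is derived from the assumed 50 percent
removal at 0.5 inch/hour and, like the function and variable names, is an
illustrative assumption rather than part of the cited models):

import math

def washoff_fraction(runoff_rate_in_per_hr, duration_hr, impervious=True):
    """Fraction of the initial surface load (X0) removed during a runoff
    event of constant rate r and duration t, per equations [2]-[5].  The
    coefficients 4.6 (impervious) and 1.4 (pervious) follow from the
    assumed 90 and 50 percent removals at 0.5 inch/hour."""
    b = 4.6 if impervious else 1.4
    k = b * runoff_rate_in_per_hr              # equation [3]
    return 1.0 - math.exp(-k * duration_hr)    # (X0 - X) / X0, equation [2]

# Example: a 0.5 inch/hour storm lasting one hour removes about 90 percent
# of the load from impervious areas and about 50 percent from pervious ones.
print(washoff_fraction(0.5, 1.0, impervious=True))   # ~0.90
print(washoff_fraction(0.5, 1.0, impervious=False))  # ~0.50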

                   MODEL CALIBRATION

The previous equations allow the investigator to com-
pute the quantity of a given pollutant washed off the
ground surface during a runoff event, and allow,
therefore, the determination of runoff-borne pollutant
loads and assessment of their impacts on receiving wa-
ters downstream.

Such a tool, however, can be only as good as its cal-
ibration from real-time observation of water quality
data during runoff events.  The key factor in the mo-
del is the successful estimation of X0, the quantity
of pollutant on the ground surface at the initiation
of runoff.  The determination of X0 is based upon the
assumption of a constant rate of accumulation of a
given constituent on the ground surface during the dry
days preceding a runoff event.  Figure 2 is a dimen-
sionless representation of the assumed relationships
between storm runoff, pollutant loading rates, and
pollutant washoff.  The upper portion of the figure
shows that the shapes of the runoff hydrograph and
the pollutant-loading graph are approximately the
same.  This
observation has been reported numerous times in the
literature (8, 9, 10) and is descriptive of most types
of surface runoff except where the so-called "first-
flush" phenomenon is observed in heavily storm-sew-
ered areas (9).  The bottom portion of the figure is a
representation of the washoff function described by
equation  [2] rearranged to read:
                    X = X0 EXP(-kt)                   [6]
The discontinuities in the function occur at those
times when runoff ends and begins anew, respectively.
The linear portions between those times are represen-
tative of the assumed-to-be-constant "pollutant ac-
cumulation rate" used to arrive at XQ for the next
storm to occur.  The length of the time axis between
the beginning and end of an accumulation period is in-
terpreted as the number of dry days between storms, as
the length during decay periods is interpreted as the
duration of a runoff event.
It may also be inferred from Figure 2 that:
        X0,i = X0,i-1 e^(-kt) + (Δx/Δt)[ti - ti-1]    [7]
That is, X0,i for any storm is determined by summing
the quantity of pollutant remaining after the last
storm and the product of the number of ensuing dry
days and the accumulation rate, Δx/Δt.
The calibration procedure is as follows:

     1.  A data base consisting of pollutant loading
         graphs and hydrographs for a sequence of
         storm events in the watershed of interest is
         selected.

     2.  A pollutant loading curve for the initial
         storm event is plotted.  X0-X is taken to be
         the area lying under the curve.  This value
         is substituted into the washoff equation [4]
         or [5] along with observed values of "r" and
         "t" and a solution for X0 is obtained.

     3.  A trial accumulation rate, Δx/Δt, is chosen.

     4.  The model is executed for a series of storms,
        the hydrologic and water quality  data for
        which already exist.  The  simulated  pollutant
        washoff loads are then  compared to the ob-
        served loads.

     5. Sequential adjustments  are made in the assumed
        accumulation rate until the  simulated runoff
        loads match the observed ones.

     6. The above procedure  is  repeated for  each con-
        stituent to be simulated.
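
A minimal computational sketch of this procedure is given below.  It
assumes the washoff function of equation [2] with k = br, the buildup
relationship of equation [7], and a simple bisection adjustment of the
accumulation rate; the aggregate (rather than storm-by-storm) matching
criterion and all names are illustrative assumptions, not part of the
referenced models.

import math

def storm_washoff(x0, runoff_rate, duration, b=4.6):
    """Load washed off during one storm: equation [2] with k = b*r."""
    return x0 * (1.0 - math.exp(-b * runoff_rate * duration))

def calibrate_accumulation_rate(storms, observed_loads, x0_first, b=4.6,
                                rate_lo=0.0, rate_hi=1.0e3, iterations=60):
    """Steps 3-5 above: adjust the assumed constant accumulation rate
    (dx/dt) by bisection until the simulated storm loads match the
    observed loads in aggregate.  'storms' is a list of tuples
    (dry_days_since_last_storm, runoff_rate, runoff_duration); x0_first
    is the initial surface load solved for in step 2.  The bisection
    bounds are arbitrary and would be set from the data in practice."""
    target = sum(observed_loads)
    for _ in range(iterations):
        rate = 0.5 * (rate_lo + rate_hi)
        x0, simulated_total = x0_first, 0.0
        for i, (dry_days, r, t) in enumerate(storms):
            if i > 0:
                x0 += rate * dry_days        # buildup between storms, eq. [7]
            washed = storm_washoff(x0, r, t, b)
            simulated_total += washed
            x0 -= washed                     # load remaining after the storm
        if simulated_total < target:
            rate_lo = rate                   # assumed accumulation too slow
        else:
            rate_hi = rate
    return 0.5 * (rate_lo + rate_hi)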

                 SAMPLING METHODOLOGIES

Data to be used for runoff water quality  model cali-
bration must necessarily be  more detailed than that
generated for periodic ambient  water quality  assess-
ments.  It is necessary to have sufficient information
to calculate a total pollutant  load  for each  runoff
event used in the calibration.  Such information ne-
cessarily must consist of flow  and concentration data
of varying detail.  A discussion of  the methods of
sampling commonly used and commentary on  their suita-
bility follows.  The assumption is made that  flow and
concentration measurements are  made  at the same fre-
quency.

Grab Sampling

Historically, most stream water quality surveys have
been made using grab sampling.  The unmodified proce-
dure has little use in runoff model calibration, how-
ever, because it generates a pollutant load descrip-
tive of only one instantaneous  condition  and  takes no
cognizance of the variation  in  load along the pollu-
tant loading graph.  A load  calculated from the pro-
duct of a single flow and concentration and extended
to include the entire period of runoff could  differ
tremendously from the actual load  (10).

Simple Composite Sampling

In this method, sample aliquots of equal volume are
withdrawn at intervals during a runoff event  and are
composited into one volume for  analysis.  The  flow
used to estimate total load  is  the mean of the instan-
taneous flows at the times of sample collection.  This
method assigns equal weight to  each aliquot of the
composite; consequently, those portions taken during
periods of relatively high flow affect the final con-
centration less than they should.  Depending on the
relationship between concentration and flow, the true
load may be either over-estimated or under-estimated.

Flow-Weighted Composite Sampling

In this method, variable size aliquots of sample are
composited, with the volume of each being directly pro-
portional to the flow occurring at the time of sam-
pling.  The total load then  is  computed from  the mean
flow and the flow-weighted mean concentration of the
composite sample.  The technique gives an excellent
estimate of Total Pollutant Load during a runoff event
if sampling time intervals are  small (3).   Even so,
the next method of sampling  to  be discussed offers a
better means of characterizing  pollutant variation in
runoff.
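
The difference between the two composite methods reduces to how the
event-mean concentration is weighted, as the following sketch illustrates
(the sample lists and the assumption of consistent units are hypothetical):

def total_load_simple_composite(flows, concentrations, duration):
    """Simple composite: equal-volume aliquots give an unweighted mean
    concentration, which is multiplied by the mean of the instantaneous
    flows and the event duration (consistent units assumed)."""
    mean_conc = sum(concentrations) / len(concentrations)
    mean_flow = sum(flows) / len(flows)
    return mean_conc * mean_flow * duration

def total_load_flow_weighted(flows, concentrations, duration):
    """Flow-weighted composite: aliquot volumes proportional to flow give
    a flow-weighted mean concentration."""
    weighted_conc = (sum(q * c for q, c in zip(flows, concentrations))
                     / sum(flows))
    mean_flow = sum(flows) / len(flows)
    return weighted_conc * mean_flow * duration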

Sequential Discrete Sampling

While this method is the most expensive option for
sample collection, because it requires the most analy-
tical work, it also provides the greatest flexibility
for checking the calibration of the washoff equation
[2].  In this method, discrete  samples are withdrawn
at numerous points on the storm hydrograph.  The sam-
ples are separately analyzed and the results coupled
with flow data taken at the time of collection in
order to produce a number of instantaneous pollutant
loads during the period of runoff.  Plotting  these
loads on a time axis produces an approximation of the
general pollutant loading graph illustrated in Figure
2.  As the interval between samples decreases, the  ad-
herence to the actual loading curve increases  (as do
analytical costs).  The quality of the  total  load es-
timate made by computing the area under the plotted
pollutant loading curve is matched only by that  from
the flow weighted composite method.  The latter, how-
ever, does not allow the investigator to determine  the
shape of the loading curve, and, therefore, prevents
him from making any observations regarding the rela-
tionship between pollutant concentration and  hydro-
graph shape.  Additionally, knowledge of the  pollutant
loading curve  makes it possible to consider  making
more refined estimates of the coefficient b   in equa-
tion  [3].
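
Where sequential discrete samples are available, the instantaneous loads
may be integrated directly.  The sketch below uses simple trapezoidal
integration (our choice for illustration; any numerical or planimetric
procedure would serve equally well):

def total_load_from_discrete_samples(times, flows, concentrations):
    """Integrate instantaneous load (flow times concentration) over the
    runoff event by the trapezoidal rule.  Times, flows, and
    concentrations are parallel lists in consistent units; the result is
    the total mass washed off during the event."""
    loads = [q * c for q, c in zip(flows, concentrations)]
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        total += 0.5 * (loads[i] + loads[i - 1]) * dt
    return total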

Table I shows a set of runoff data collected  by  the
Occoquan Watershed Monitoring Laboratory (OWML)  from  a
tributary to Bull Run near Manassas, Virginia.   The
summary loadings  (a through e)  contrast the total load
estimates that would have been  made on  the set of data
using each of the sampling methods discussed  against
the total load calculated by computing  the area  under
a "smooth-curve" of pollutant load vs.  time.   As may
be seen, the single grab sample method  gives  the worst
estimate, errors ranging from -88 to +79 percent.   The
use of the simple composite method produced errors
from -5 to +15 percent, depending upon  the size  of  the
 composite.  The flow weighted composite method pro-
duced an error of -7 percent using the  smaller number
 of samples and an error of less than one percent using
 the full number of samples.  The sequential discrete
 sampling method also produces the same  total  load es-
 timate as the all sample flow weighted  composite.   As
 stated above, however, it also  allows the investiga-
 tor to determine  the morphology of the  loading curve.

       FIELD APPLICATION OF SAMPLING PROCEDURES

 OWML  currently operates automatic sampling stations at
 seven locations in the Occoquan Watershed as  shown  on
 Figure 1.  The drainage areas of the stations and the
 general land use  types are given in Table II.  All
 the streams on which sampling stations  are located  are
 perennial and, therefore, base  loading  measurements
 are necessary to  enable the definition  of runoff
 loads.  Base loads are determined by sampling at all
 stations on a weekly basis during dry weather flow.
 Experience has shown that it is not feasible to rely on
 individuals to occupy sampling  sites during runoff
 events because such events are  so unpredictable.
 During high intensity, short duration rainfall,  runoff
 may commence immediately, and if sampling is  not ini-
 tiated concurrently, a significant portion of the pol-
 lutant load may be missed entirely.  This happens in
 heavily sewered areas in particular, due to the like-
 lihood of observing the so-called "first-flush" effect.
 It appears, then, that satisfactory sampling  of  runoff
 events necessitates the use of  automatic equipment  for
 sample collection, storage, and measurement  of flow.
 Many  automatic sampling devices are now available com-
 mercially, but most were initially developed  for
wastewater sampling; therefore, careful evaluation  of
 proposed units should be made prior  to  purchase  to  as-
 sure  that adequate performance  may be expected in re-
 trieving runoff samples.  In particular, attention
 should be given to the recovery of suspended  solids
because of the propensity of  stormwater runoff to
 carry some materials of higher  specific gravity  than
 those normally carried in wastewater  discharges.  Con-
sideration should be given  to heating  the installation
if normal operation during winter months is desired.
Remote sampling equipment has decreased in size and in-
creased in performance in recent years, and units are
now available that may be easily carried by one person,
and yet perform as well as the earlier, more bulky mo-
dels.  Recent studies (11, 12) have evaluated commer-
cially available equipment and given guidelines for
sampler selection.  In general, an acceptable remote
sampler will meet the following criteria:

     1.  Be weathertight and battery powered.

     2.  Be capable of collecting a minimum of 24 dis-
         crete samples of not less than 500 ml each and
         storing them in an insulated container.

     3.  Be capable of actuation from an external sig-
         nal or from an internal clock at varying in-
         tervals.

     4.  Be capable of lifting a sample against a suc-
         tion head of 10 feet at a minimum transport
         velocity of 3 feet per second (.91 m/s).

     5.  Have the capacity to distribute a single sam-
         ple among several containers as it may be ne-
         cessary to add differing preservatives for
         subsequent analytical work to be performed.

     6.  Be capable of conducting a pre- and post-sam-
         ple purge of the intake hose to prevent clog-
         ging and cross-contamination of samples.

     7.  Have an intake that can be placed sufficiently
         high above the channel bottom to avoid sam-
         pling suspended bed load.

In performing runoff studies, equally important as ob-
taining representative samples is the measurement of
flow, because no loading calculations may be made with-
out reasonably accurate discharge measurements.  In
perennial streams with adequate natural control, flow
measurements may be readily obtained by calculations
involving velocity (obtained with a current meter) and
cross-section measurements (13).  In small watersheds
that drain only during storm events, and lack an ade-
quate natural constriction, the installation of some
artificial control structure may be necessary.  Seve-
ral types of weirs and flumes have been used with suc-
cess in studies of the hydrology of small watersheds
 (14).  In urban storm sewer systems, the use of the
Manning formula to compute flow as a function of stage
provides the simplest method of obtaining flow data.
However, selection of the value of the roughness coef-
ficient, n, in all but the most recently installed con-
duits, poses a difficult problem.  As the sewer ages,
growths and other depositions cause changes in the sur-
face roughness which can only be approximated when se-
lecting a value for n.  If the Manning formula is to be
used with success, an indirect measurement of n for the
reach of sewer in question should be made.  "Calibra-
tion" of a sewer may be readily accomplished by using
chemical gaging techniques to develop a reliable set of
discharge-depth of flow relationships.  The values of
flow thus obtained may be used to compute a valid n for
use in the Manning formula.  Lithium chloride has re-
cently been shown to be a satisfactory tracer for use
in chemical gaging studies (15).  In any case, adequate
flow measurements are essential and obtaining them
should be given high priority.
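
As an illustration of the back-calculation described above, the following
sketch computes n for a partly full circular sewer from a chemically gaged
discharge and the observed depth of flow (English units; the example
numbers are hypothetical):

import math

def circular_section(diameter_ft, depth_ft):
    """Flow area and hydraulic radius of a partly full circular conduit."""
    theta = 2.0 * math.acos(1.0 - 2.0 * depth_ft / diameter_ft)  # radians
    area = (diameter_ft ** 2 / 8.0) * (theta - math.sin(theta))
    wetted_perimeter = diameter_ft * theta / 2.0
    return area, area / wetted_perimeter

def manning_n_from_gaging(q_cfs, diameter_ft, depth_ft, slope):
    """Back-calculate Manning's n from a chemically gaged discharge (cfs)
    and the observed depth of flow in the sewer reach."""
    area, r_hyd = circular_section(diameter_ft, depth_ft)
    return 1.486 * area * r_hyd ** (2.0 / 3.0) * math.sqrt(slope) / q_cfs

# Example: 2.5 cfs gaged in a 24-inch sewer flowing 8 inches deep on a
# 0.4 percent grade gives n of roughly 0.018, suggestive of an aged conduit.
print(manning_n_from_gaging(2.5, 2.0, 8.0 / 12.0, 0.004))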

Figure 3 is a schematic of a permanent sampling instal-
lation operated by OWML.  Flow measurements are ob-
tained by making a continuous record of stream stage
and comparing it against a stage-discharge curve pre-
pared previously.  The water-stage recorder wheel holds
bar magnets spaced at 0.25 foot (0.076 m) intervals
along its circumference.   As the stream stage rises or
falls, the magnets passing over a stationary reed
switch provide a momentary contact closure that ac-
tuates the sampler in the adjacent building causing
samples to be taken at known stage increments.  Sam-
ples are stored in separate containers until retrieved
and transported to the laboratory for analysis.
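
The actuation logic may be sketched as follows (hypothetical code, not the
OWML hardware; the stage record and the sampling increment are assumed
inputs):

def stage_trigger_indices(stage_record, increment_ft=0.25):
    """Return the indices at which a sample would be actuated: one sample
    each time the stage has risen or fallen by a full increment since the
    last actuation, mimicking the magnet-and-reed-switch arrangement."""
    triggers = []
    last_stage = stage_record[0]
    for i, stage in enumerate(stage_record[1:], start=1):
        if abs(stage - last_stage) >= increment_ft:
            triggers.append(i)
            last_stage = stage
    return triggers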

               ANALYTICAL TECHNIQUES

As stated earlier, the sequential discrete sampling
method is both the most reliable and the most expen-
sive for generating accurate estimates of total load
and loading rates.  The greatly increased analytical
workload is the major reason for higher costs.  Be-
cause the number of samples to be analyzed may be an
order of magnitude higher than that required in a pro-
gram where samples are composited, consideration
should be given to adopting automated analysis proce-
dures where possible.  Runoff samples from stations
are retrieved as soon as possible following a storm.
Table III shows the analytical schedule considered to
be necessary to adequately describe the impact of nu-
trient and organic material runoff loads on receiving
waters.

Nitrogen and Phosphorus

Nitrogen determinations are performed on both whole
samples and aliquots filtered through 0.45 micron mem-
brane filters (with the exception of nitrite and ni-
trate, because these forms are anionic and do not
readily adsorb on suspended soil particles).  For all
other forms of nitrogen and phosphorus, the two types
of analysis are necessary to determine the distribu-
tion between particulate and dissolved phases.  This
distribution is critical when considering the ultimate
water quality impact of nutrient loadings.

Organic Matter

Two measures of organic loading are utilized:  Biochem-
ical Oxygen Demand (BOD) and Total Organic Carbon
(TOC).  The BOD determinations are made either in sta-
tic bottles or with a manometric apparatus.  TOC mea-
surements are made in parallel with BOD analyses and
correlations established with a view to using TOC as a
"real-time" parameter for the measurement of organic
matter.

Data Storage

All data are currently stored in the EPA STORET data
management system.  Data are reduced in the labora-
tory, coded, and stored on a biweekly basis.  The sys-
tem software greatly simplifies the computations re-
quired to develop pollutant loading information.  By
using the "MEAN" or "PLOT" routines (16), the investi-
gator is able to obtain instantaneous load vs. time
information in either tabular or graphical form.  Upon
integrating the loading curve by numerical or plani-
metric procedures, and using the proper scale conver-
sions, "it is possible to obtain the total storm load
from the area lying beneath the curve.

                        SUMMARY

The Occoquan Watershed Monitoring Lab has established
a network of automatic water samplers at locations on
tributaries to the Occoquan Reservoir.  Samplers are
programmed to collect and store sequential discrete
samples at increments of rising and falling stream
stage during runoff.  When combined with concurrent
flow data, analysis of such samples allows the genera-
tion of pollutant loading graphs.  Such loading data
are invaluable in the precise calibration  of most  math-
ematical models used to simulate pollutant quantities
in surface runoff.  For calibration,  the measured
rates of constituent accumulation will be  sequentially
varied to achieve agreement in loadings between  ob-
served and simulated storms.

                   ACKNOWLEDGEMENTS

The authors gratefully acknowledge the contributions
of the political jurisdictions participating in  the
Occoquan Watershed Monitoring Program.  Additionally,
the authors wish to acknowledge the tireless efforts
of the OWML staff members without whom operation of the
monitoring program would be impossible.

                      REFERENCES

1.  Sawyer, C.N.  1969 Occoquan Reservoir Study.  Met-
    calf & Eddy,  Inc. April 1970.

2.  Virginia State Water Control Board.  A Policy For
    Waste Treatment and Water Quality Management in the
    Occoquan Watershed, 1971.

3.  Marsalek, J., "Sampling Techniques in  Urban Runoff
    Quality Studies," Water Quality Parameters, ASTM
    STP 573, 1975, (526-542).

4.  Hydrocomp,  Inc.  "Hydrocomp Simulation  Programming,"
    Palo Alto,  Cal.,  1973.

5.  Metcalf & Eddy,  Inc.  et al.  "Stormwater Management
    Model," Environmental Protection Agency, EPA-
    11024 Doc07/71,  July 1971.

6.  James, L.D.  "Using a Digital Computer  to Estimate
    the Effects of Urban Development on Flood Peaks,"
    Water Resources Research, 1(2), 1965, (223-244).

7.  Linsley, R.K.  "A Critical Review of Currently
    Available Hydrologic Models for Analysis of Urban
    Storm Water Runoff."  Hydrocomp International,  Inc.
    Palo Alto,  California (1971).

8.  Colston, N.V.  "Characterization of Urban Land Run-
    off," presented  at National Meeting on Water Re-
    sources Engineering,  ASCE, Los Angeles, Calif.,
    1974.

9.  Randall, C.W., J.A.  Garland, T.J.  Grizzard, and
    R.C.  Hoehn,  "Characterization of Urban Runoff in
    the Occoquan  Watershed of Virginia," Proceedings,
    American Water Resources Association Symposium on
    Urbanization  and Water Quality Control, Rutgers
    University,  1975.

10.  Harms, L.L.  and  E.V.  Southerland,  "A Case Study
    of non-point  source pollution in Virginia," Bulle-
    tin 88,  VA. Water Resources Research Center,
    Blacksburg, VA (1975).

11.  Parr, J.F., G.H. Willis, L.L. McDowell, C.E. Mur-
    phree, and S.  Smith,  "An Automatic Sampler for
    Evaluating the Transport of Pesticides in Suspen-
    ded Sediment," Journal of Environmental Quality,
    3(3), 1974,  (292-294).

12.  Shelley, P.E.  and Kirkpatrick, C.A., "An Assess-
    ment of Automatic Sewer Flow Samplers," Water
    Pollution Assessment: Automatic Sampling and
    Measurement,  ASTM STP 582, 1975,  (19-36).

13.  Geological Survey, USDI, "Stream-Gaging Procedure,"
    Water-Supply  Paper No. 888, 1943.
14.  Agricultural Research Service, USDA, "Field Manual
     for Research in Agricultural Hydrology," Handbook
     No. 224, June 1962.

15.  Grizzard,  T.J.  and L.L.  Harms.  "Flow Measurement
     by  Chemical Gaging."   Water and Sewer  Works.  121
     (11),  1974, (82-83).

16.  Storet Users  Assistance  Section, USEPA, "Storet
     Handbook"  (Interim Version)",  1976.
             Figure 1.  Occoquan Watershed
     [Map residue:  legend shows sewage treatment plants, sampling
     stations, the county boundary, and the watershed boundary.]

                           TABLE I
     [Table residue:  sequential discrete samples (Sample No., Date/Time,
     Flow, Total P) collected 07-13 through 07-15, with summary load
     estimates (a through e) for each sampling method; the flow-weighted
     composite of all samples is listed as 1766.]

  FIGURE 2.  REPRESENTATION OF RELATIONSHIPS BETWEEN STORMWATER RUNOFF,
             POLLUTANT LOADING RATES AND POLLUTANT WASHOFF
     [Figure residue:  panel labels include "HIGH INTENSITY, SHORT
     DURATION" and "POLLUTANT ACCUMULATION RATE (MASS/TIME)".]

                           TABLE II
     [Table residue:  drainage areas and major land uses of the sampling
     stations.  Stations:  Hooes Run near Occoquan, Bull Run near Clifton,
     Occoquan Creek near Manassas, Broad Run near Bristow, Cedar Run near
     Aden, Cub Run near Bull Run, Bull Run near Catharpin.  Land use
     categories include medium-high density residential, mixed urban,
     mixed rural, and rural-agricultural (pasture); drainage areas are not
     recoverable.]

                           TABLE III
                Analytical Schedule For Non-Point Studies

     PLANT NUTRIENTS                    ORGANICS
       Total Phosphorus                   BOD
       Ortho Phosphorus                   TOC
       Total Soluble Phosphorus
       Total Kjeldahl Nitrogen          SOLIDS
       Soluble Kjeldahl Nitrogen          Total Suspended
       Total Organic Nitrogen
       Soluble Organic Nitrogen
       Nitrite + Nitrate
  FIGURE 3.  SCHEMATIC OF FLOW MEASURING AND AUTOMATIC SAMPLING INSTALLATION

-------
           WATER SUPPLY SYSTEMS PLANNING,
        MANAGEMENT AND COMMUNICATION THROUGH
      AN INTERACTIVE RIVER BASIN SIMULATION MODEL
                    Robert A.  Hahn
           Civil Engineer, Systems Analyst

   U.S.  Army Corps of Engineers,  Ohio River Division
                   Cincinnati,  Ohio
The Washington Metropolitan Area Water Supply Study
initiated the development of a unique river basin
simulation model designed to be incorporated into an
open planning process.   The model is a flexible,  user
oriented tool suitable for a number of different
purposes.  It has been used to educate Corps person-
nel in the intricacies of the Washington Area water
supply system and to evaluate a number of water
supply device alternatives.  Potential uses include
public demonstration of the complexity of the existing
water supply system, evaluation of social, economic,
or environmental impacts of water supply alternatives,
the modeling of operational rules and as a "real-time"
decision tool to show the effects of operational  man-
agement decisions on all parts of the system.
                     Introduction

The water supply simulation model described in this
paper was developed as part of the Northeastern United
States Water Supply (NEWS) Study (1).  This study was
authorized by Congress (2) in response to the mid-60's
drought throughout most of the Northeastern United
States, for the purpose of preparing plans to meet
long-range water supply needs of that area.  The
Washington, D.C. Metropolitan Area (WMA) was identi-
fied as one of several critical areas of the northeast
urgently requiring additional water supply capacity.
Detailed planning began in the WMA in the fall of 1972
with an extensive "open planning program" designed to
find out as much as possible about the alternative
water supply solutions available and the preferences
of the local public and private agencies and indivi-
duals.

It became obvious by the spring of 1973 that the
problem was too complex for hand analysis of the
various planning alternatives.  Meta Systems Inc. (3) was
asked to develop a tool which would allow the study
team to examine a large range of alternative solutions
without the time-consuming and error-prone drudgery of
analyzing each variation by hand.  Many aspects of the
study could not be modeled, but those amenable to an
analytic approach and within the limits of modeling
technology were included.  This paper will limit
itself to those aspects of the study incorporated in
the model.  The complexity of the problem is due to
the combined effect of three separate factors: the
unusually complicated nature of the existing water
supply system; the social, economic, and environmental
issues discovered during the open planning process
which broadened the way in which the problem must be
solved; and the range of alternative engineering
solutions proposed.

Existing Water Supply System

The study area was defined as the portion of Maryland,
Virginia, and Washington, D.C. within the Washington
Area SMSA which includes  seven counties, several
incorporated cities, 3,000 square miles,  2.9 million
people using 390 million gallons of water per day,  and
the nation's capital.  The area's water is supplied by
two river basins, the Patuxent, which is small  (930
square miles), well regulated, and located entirely in
Maryland, and the Potomac, which is large  (14,700
square miles), unregulated, and located in four  states
and the District of Columbia.  The major supply
source is the Potomac River, which has large seasonal
variations, highly random daily variations, and  drought
stages of less than six percent of the average.  Since
maximum daily withdrawals have already exceeded  mini-
mum flow in the Potomac, and since most, if not  all,
future source development will be in this basin,
supply analysis becomes a problem in time and fre-
quency.  The question asked is not only how large are
the deficits, but also how long, how often, and at what
probability.

Two of the three major water suppliers in the study
area presently use this source to supply 65 percent of
the region's needs, and the third expects to use the
river in the immediate future.  Two other sources,
reservoirs on the Patuxent and Occoquan Rivers,  are
also used to provide portions of the region's needs.
These independent sources are only minimally inter-
connected, which raises the question of deficit  loca-
tions.  These questions, deficit location, probabil-
ity, frequency, and magnitude are very important to
the analysis of the existing system and evaluating  the
proposed improvements.  They are also very difficult
to answer as they require statistical processing of a
large amount of data.

Social, Economic, and Environmental Issues

During the early "open planning" stage of the study,
several issues were developed which had to be con-
sidered in any water supply solution.  Many of these
issues, though relatively complex socially, polit-
ically, and institutionally, were simple from an
analytic point of view.  An example is the interrela-
tion of water and wastewater management.  Other  issues,
such as the environmental impact of reservoirs, are
also complex analytically and can only be analyzed
quantitatively to the extent that functions can be
derived which relate environmental parameters to water
quantity.  Finally, a large group of questions per-
taining to overdesign and efficient resource use
condense quantitatively to questions of planning not
for the worst conceivable drought but for some lesser
drought defined in terms of magnitude, duration,
location, and frequency of shortage.  Not only had
this question never been seriously considered by water
planners before, but existing literature and analytic
techniques are not capable of answering the question
to everyone's satisfaction.

Range of Alternative Solutions

The broad range of water supply technologies being
considered in the study added further analytical
complications.  These included a range of water  con-
servation measures, interbasin transfers, the use of
treated estuary and wastewater, groundwater, and local
and remote reservoirs.

               Model Conception and Design

The concept  of the model was simple—answer as many of
the above questions as possible.  Furthermore, answer
the questions in a manner that is believable, with  a
model  that can be operated by any technically competent
person; that has flexible input, operation, and  output;
and that can be operated in  an open planning session
involving the public, other  agencies,  or  the study  team.
The difficulty in structuring, coding,  calibrating,
and finally documenting such a model should be obvious
to any experienced model builders, but at the time of
its proposal, no lesser model would satisfy the needs
of the study team.  The model design can be examined
in six main categories:  model structure, interactive
features, nodal definition, hydrologic simulation,
model output, and social, economic, and environmental
parameters.

Model Structure

The model is structured around a collection of 200
different nodes representing one of eighteen  (18) node
types at which water can be added, subtracted, or
stored and a number of statistics can be collected.
Figure 1 is a schematic illustrating the nodal chain
and node types used.  If, for example, the node were
defined as a reservoir (node type 1), natural and/or
pumped inflow and outflow can occur, which will vary
the storage within the reservoir accordingly.  Statis-
tics can be maintained of these variables, inflow,
storage, etc., which are then outputs of the model.
These nodes are strung together in a network which
represents the region's water supply system.  The
model begins at the first node at time t = 0 and adds
or subtracts waters from that node according to in-
structions coded for that node type and the values of
user-supplied parameters such as reservoir capacity.
The transaction is recorded, statistical counters are
updated, and the model proceeds to the next node to be
considered.  This process is repeated until all 200
nodes have been processed at which time the model's
clock is incremented by one and the program starts
over again.  The clock increments are either monthly
or daily depending on the output desired.  Decision
switches automatically compare the flow or storage at
a particular node with user-supplied maximum or mini-
mum values, and adjust the process sequence.  For
example, if the flow in the Potomac is less than a
given minimum, and interbasin transfers are being
modeled, the model will route through the Patuxent
nodes before the Potomac nodes.  At each node and time
increment, the program will print out any parameter
values desired for the nodes chosen.  Finally at the
end of the session, the user can choose to see statis-
tics on any node and parameter of interest.

 FIGURE 1.  Schematic Illustrating Nodal Chain
     [Figure residue:  schematic of the nodal chain from the Shenandoah
     through the system to the Occoquan.  Key of node types:  Flow Point,
     Reservoir, Flood Skimming, Estuary Treatment, Wastewater Treatment,
     Water Treatment, Groundwater Withdrawal, Groundwater Recharge,
     Interbasin Transfer, Demand Center, Supply Point, Conduit, Stream-
     Gaging Station, Demand Reduction, Land Treatment, Storm Runoff,
     Upstream Reservoir.]

A wide
variety of alternative events can be simulated by
varying the nodal chain, the values of nodal para-
meters, and the switches used.  This movement from
node to node is controlled by the main program which
calls subprograms to do the nodal transactions, to
remember the transaction, to accumulate transaction
statistics, and to control communication with the
operator.
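
The node-by-node accounting described above can be outlined roughly as
follows.  The sketch is in Python rather than the model's Fortran, covers
only a few of the eighteen node types, and uses invented names and a
simplified reservoir rule; it illustrates the accounting structure only,
not the model itself.

class Node:
    """One point in the nodal chain at which water can be added,
    subtracted, or stored, and at which simple statistics are kept."""
    def __init__(self, name, node_type, capacity=0.0, demand=0.0):
        self.name, self.node_type = name, node_type
        self.capacity, self.demand = capacity, demand
        self.storage = 0.0
        self.history = []              # (inflow, outflow, storage) per step

    def transact(self, inflow):
        if self.node_type == "reservoir":
            available = self.storage + inflow
            self.storage = min(self.capacity, available)
            outflow = available - self.storage    # spill only; no release rule
        elif self.node_type == "demand":
            outflow = max(0.0, inflow - self.demand)  # withdrawal at a demand center
        else:                                         # flow points and similar types
            outflow = inflow
        self.history.append((inflow, outflow, self.storage))
        return outflow

def run(chain, inflow_series):
    """Advance the clock one increment (daily or monthly) per inflow value,
    walking the nodal chain so that each node's outflow becomes the inflow
    to the next node downstream."""
    for inflow in inflow_series:
        flow = inflow
        for node in chain:
            flow = node.transact(flow)
    return chain

# Illustrative chain: an upstream flow point, a reservoir, and a demand center.
chain = [Node("flow point", "flow"),
         Node("impoundment", "reservoir", capacity=50.0),
         Node("demand center", "demand", demand=5.0)]
run(chain, [10.0, 2.0, 0.0, 30.0])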

Interactive Features

A unique feature of the model is the way in which it
communicates with the operator.  This feature allows a
technically trained person with some understanding of
the system being modeled to learn to use the model in
several hours.  It also gives interested non-
technical observers confidence that no trick is being
performed and that they can interpret the results.  It
was incorporated into the model for the express pur-
pose of planning in real time so that questions or
alternative suggestions could be answered, or sets of
results obtained, without time-consuming delays
and difficult data manipulation problems associated
with batch process programs.  This feature required
well over half of the coding  (which consists of
more than 3,000 Fortran lines) to be devoted to
the interactive aspects of the program.  It also
consumes a large amount of computer resources
during model operation, but its contribution
in ease and flexibility of operation and in
data handling and believability of results does
add significantly to the value of the model as a
planning tool.

Nodal Definition
Each of the eighteen node types shown in Figure 1
serves to represent a different type of water account-
ing.  Many of the nodal types do not simulate the
indicated function but merely act as a source or sink
for water supply.  The model does not simulate ground-
water movements, for example, but merely supplies
water on demand to a demand center up to a specified
rate.  Simulating the groundwater movement itself
would have been difficult, impossible to calibrate,
and unnecessary.  Eliminating it greatly simplified
the model without causing significant planning inac-
curacies.  The model is equipped with a data base
called "BASE" which includes values for all the para-
meters necessary to simulate the existing situation.
At the beginning of each session, the user has the
opportunity to change the values of the parameters at
the nodes to simulate the construction of a project.
Impoundment A, for instance, does not presently exist
and is represented in the data base as a reservoir
with zero storage and pumping capacities.  These
capacities can be changed at the beginning of the run
should the user wish to implement that reservoir and
if the results are satisfactory, the revised data base
can be saved for later use.

Hydrologic Simulation

Most of the model consists merely of accounting rou-
tines to subtract water from one node location, add it
to another, and record the transaction.  The one major
exception, the driving force of the model, is the
simulation of hydrologic events.  Each of the rivers
and tributaries modeled consists of a number of "dummy"
and "routing" reaches connecting the river nodes
together.  Most of the reaches are "dummy" reaches, in
which no routing or storage occurs and outflow of the
upstream node becomes inflow to the node below it.
The hydrology of the basin is simulated within the
"routing" reaches.  The flows recorded at eighteen
U.S. stream-gaging stations are used to load the model
 with one of two historical water years (1930 or 1966)
 which were serious drought periods.   Each of the
 "routing" reaches is related to one of the stream-
 gaging stations through drainage ratios,  and the net
 water inflow in the region is allocated to the routing
 reaches as stream runoff or as stream inflow.  Rivers
 act as natural reservoirs, with varying storage ca-
 pacity which is also simulated in the routing reaches
 using Muskingum routing coefficients.  The routing
 reach number,  location, and routing coefficients were
 adjusted during calibration to accurately capture the
 response of the prototype.

 Model Output

 One of the major advantages of the model  is the flex-
 ibility of output.   At the beginning of each run the
 user can choose to observe the dynamic change in one or
 more parameters at one or more of the nodes, and these
 values will be printed at the terminal for each time
 period in the  simulation.   The feature is useful for
 observing changes in parameter values as  they occur,
 and the relationship between values  at a  given time
 increment.   This enables decisions on improvements for
 the next run and choices as to the final  statistics
 desired.   At the end of each run, a programmer can
 choose to see  the statistics of any parameter, for any
 node.   The statistics available are  mean  and standard
 deviation,  a histogram of all events, and  a trace of
 the events as  they occur.   The user  may choose to see
 the output at  the terminal or on a high-speed line
 printer.   Normally,  the user would choose to see a
 small portion  of the output at the terminal,  certain
 key parameters,  for  instance,  and if the  run were
 successful,  he would ask to see all  statistics printed
 on  the high-speed printer for a permanent record and
 for later detailed study.   The user  may also write a
 message at the beginning of the printed output such as
 the date of the run, the user's name, solutions used,
 and preliminary interpretation so that the output can
 be  more readily used at a later date.

 Social,  Economic,  and Environmental  Parameters

 The incorporation of social,  economic,  and environ-
 mental parameters is a feature built into the model
 that has not yet been used.  At the  present time,
 all output is  limited to water quantity values
 measured as a  rate  (mgd)  or a volume (bg).   Many
 social,  economic,  and environmental  factors related to
 water supply solutions can be described as functions
 of  water quantities.   For example, the region's eco-
 nomic growth is  in part related to deficit probabil-
 ities, the cost of pumping water is directly related
 to  the volume  pumped,  and certain environmental para-
 meters in the  estuary can be described in terms of the
 volume of fresh  water flowing into the estuary.   The
 model is  designed to incorporate relationships such as
 these and is capable of generating these  values and
 related statistics so that social, environmental,  and
 economic  impacts of  any water supply decision can be
 at  least partially simulated.   To utilize this capa-
 bility,  the appropriate functions relating these
 impacts  to water quantity must be provided.

                   Model Calibration

The value of any model  depends  on the  confidence  one
has in the  accuracy  of  its  output, which  can  only  be
obtained by  calibrating  the model relative  to the
prototype for a range of  conditions.   Establishing  the
model's performance  is particularly  important when new
modeling  concepts are  being used.  It  is  seldom
convenient to  calibrate  an  entire model satisfactorily
and in our case it was impossible because no complete,
consistent system-wide data base exists.  It is un-
likely that such a set will ever be collected because
the model imitates extreme events  (droughts) for which
one must wait on nature for the appropriate sampling
conditions and because it predicts water supply system
failures, which presumably will never be allowed to
happen.  It was, however, possible to calibrate four
critical areas of the model independently to obtain an
estimate of the accuracy of the entire model.  These
are streamflow routing, generation of streamflow gage
records, generation of daily demand records, and the
accounting procedure which moves water from one part
of the system to the other.  In each of these areas,
excellent results were obtained giving an overall
estimate of model accuracy at better than 90 percent,
which far exceeds the requirements for a region-wide
water planning model.

Streamflow Routing

Accurate imitation of drought conditions in a free-
flowing stream requires an adequate procedure for
computing instream storage with respect to time.  The
Muskingum three-coefficient equation
              CoZi
                                         (1)
was used to perform the routing, where I and O are the
reach inflow and outflow at successive routing periods
and C0, C1, and C2 are routing coefficients that are a
function of travel time, routing period, and inflow-
outflow weighting
factors.  These coefficients are  difficult to  obtain,
particularly for drought conditions, since they are
sensitive to stream bank conditions and river  stage.
Obtaining these coefficients  (and in the process,
calibrating this part of the model) could only be done
by comparing computer-generated and observed stream-
flow during low flow conditions.  The stream gage
records and water production records of the water
supply utilities were obtained for  October 1970.  The
consultant then varied the number of river reaches  and
the storage coefficients to optimize the reproduction
of the observed record by the generated record.  The
resultant streamflow simulation satisfactorily imi-
tates the prototype with the critical flow parameters,
low flow, mean flow, and temporal response within 10
percent, 4 percent, and 7 percent respectively, of  the
observed values.
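
Routing with equation (1) may be sketched as follows; the coefficient
values shown are illustrative only and are not those calibrated for the
Potomac reaches:

def muskingum_route(inflows, c0, c1, c2, initial_outflow=None):
    """Route a hydrograph through one reach with the Muskingum
    three-coefficient equation O2 = C0*I2 + C1*I1 + C2*O1.
    The coefficients must sum to 1.0 for continuity."""
    outflows = [inflows[0] if initial_outflow is None else initial_outflow]
    for i in range(1, len(inflows)):
        o2 = c0 * inflows[i] + c1 * inflows[i - 1] + c2 * outflows[-1]
        outflows.append(o2)
    return outflows

# Illustrative coefficients only (they depend on travel time, routing
# period, and the inflow-outflow weighting factor):
routed = muskingum_route([100, 300, 600, 400, 250, 180],
                         c0=0.1, c1=0.4, c2=0.5)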

Streamflow Generation

The primary input parameter of the  model is stream
gage records from eighteen locations in the region.
The flow in each river reach is a function of  inflow
and outflow  (change of storage) and runoff which are
natural occurrences, and withdrawals and discharges
which are man-made.  The natural  occurrences are
determined through drainage area  relationships from
the stream-gaging records.  Most  of the gaging sta-
tions did not have records for the  1930 drought period,
though fortunately the most significant gages  did,  and
all records had one or more data  gaps.  Because the
model simulated daily events, it  was necessary to have
a  complete daily record for each  gage for any  his-
torical or synthetic drought year modeled.  Several
established generating techniques were tried to com-
plete incomplete records and for  synthetic generation.
These were found unacceptable because they could not
adequately describe daily phenomena (which are highly
skewed) or else they could not capture drought statis-
tics satisfactorily.  A procedure was found using the
log normal distribution with skew unspecified  for
filling gaps in the record.  The  algorithm for filling
the gaps used serial correlation  for the longer records
and cross-correlation for the shorter records  based
upon correlation coefficients for the portions of the
records which overlap.  This procedure was considered
adequate to complete the records, but was not con-
sidered appropriate for use in a stochastic generator
as it tended to distort serial correlation.  Con-
fidence in the completed historical records was gained
in the process of completing the records.
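
A much simplified sketch of the record-completion idea (a regression
between log-transformed flows at two gages over their overlapping period,
applied to the missing days) is given below; it is a paraphrase of the
approach described, not the consultant's algorithm:

import math

def fill_gaps_by_correlation(short_record, long_record):
    """Fill missing (None) entries in the shorter gage record from a
    concurrent longer record.  A straight-line regression is fitted
    between log-transformed flows over the days on which both records
    exist, and the missing days are then estimated from the longer
    record.  The log transformation reflects the skewed, roughly log
    normal character of daily flows."""
    pairs = [(math.log(l), math.log(s))
             for s, l in zip(short_record, long_record)
             if s and l]
    mean_x = sum(x for x, _ in pairs) / len(pairs)
    mean_y = sum(y for _, y in pairs) / len(pairs)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in pairs)
             / sum((x - mean_x) ** 2 for x, _ in pairs))
    intercept = mean_y - slope * mean_x
    return [s if s is not None
            else (math.exp(intercept + slope * math.log(l)) if l else None)
            for s, l in zip(short_record, long_record)]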

Synthetic Demand Generation

The purpose of all planning simulation models is to
predict the future behavior of existing or proposed
systems.  In this case the future is represented by
the projected annual water demands, which must be
supplied to the model.  In order to use these demands
in a daily flow model, it is necessary to modify the
demands to reflect cyclic seasonal and random daily
phenomena before they can be used to generate mean-
ingful daily storage statistics.  This requires a
demand generator that takes annual demand for nineteen
demand nodes and develops daily values for those nodes
without distorting any of the meaningful demand sta-
tistics.  Conceptually, this is a much simpler task
than streamflow generation, as significant historical
demand records are maintained at all the water supply
utilities and satisfactory results were obtained
relatively quickly using well established analytic
techniques.  However, few of the records were readily
available and none in machine-readable form, which
greatly increased the labor required to perform the
task.  We are confident that the demand generator will
accurately generate daily demands in the model because
of the accuracy with which it can duplicate historical
demand patterns.
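
A simplified sketch of disaggregating an annual demand into daily values
with a seasonal cycle and a random daily component follows; the sinusoidal
form, midsummer peak, and parameter values are illustrative assumptions
rather than the generator actually used:

import math
import random

def daily_demands(annual_demand_mg, seasonal_amplitude=0.25, daily_cv=0.05,
                  seed=1):
    """Spread an annual demand (million gallons) over 365 days with a
    sinusoidal seasonal cycle peaking in midsummer and a random daily
    fluctuation, then rescale so the daily values sum to the annual
    total."""
    rng = random.Random(seed)
    base = annual_demand_mg / 365.0
    raw = []
    for day in range(365):
        seasonal = 1.0 + seasonal_amplitude * math.cos(
            2.0 * math.pi * (day - 196) / 365.0)
        raw.append(base * seasonal * rng.gauss(1.0, daily_cv))
    scale = annual_demand_mg / sum(raw)
    return [d * scale for d in raw]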

Accounting Procedure

Calibrating the accounting procedure which moves water
from one part of the system to another is simple
though time consuming and consists merely of evalu-
ating the printed output of all parameters at all
nodes for each time period under a number of given
situations.  The model was found to faithfully account
for water movement about the system, neither losing nor
creating water, and moving it from location to loca-
tion in the amount and the time expected.  A funda-
mental constraint to the model was that its clock in-
crement was daily which required that all water trans-
fers be in multiple units of days.  The time of travel
(pumping distances) of pressurized water supply mains
are small, significantly less than one day, so it was
assumed that water transfer could occur from one part
of the system to another instantaneously.  Wastewater
flows, however, which travel by gravity over greater
distances, have varying travel times from location to
location within the system.  This was simulated by
delaying wastewater return to the system by one day.
Neither of the assumptions, that pressurized flow
takes zero days to travel and that unpressurized flow
takes one day, should cause significant errors.

                       Model Use

The purpose for which a model is developed and the way
in which it is used when finished do not always
correspond.  This model has not yet been used in an
"open planning" session to illustrate the intricacies
of the water supply system or to  experiment with al-
ternative solutions.  Nor has it  been used to develop
statistics about the location, duration, magnitude,
frequency, and probability of deficits.  It has also
not been used to evaluate the proposed final alterna-
tive solutions.  Time and money constraints, informa-
tion delays, changing roles and approach of the team
towards the study, and other unforeseen and uncontrolled
events led to the completion of the model just  shortly
before the completion of the study.  The credibility
of both the model and the water supply study would
have been enhanced had there been more time available.

Actual Model Use

The model was, however, extremely useful to the study,
for in the process of developing the model much was
learned about the proposed solutions, and the region
itself, which would not have been learned otherwise.
This is primarily because the model was not developed
in one stage but was changed during the study as more
was learned about the prototype, the model, and the
solutions to be analyzed.  At each stage of develop-
ment, the model was operated for a range of system
variations in order to learn as much as possible about
the prototype, the model, and the solutions.  Also,
the difficulties encountered and overcome while
obtaining a consistent correlated model required
thorough analysis of data which revealed much about
the system's hydrologic and demand patterns.  For
example, information gathered in the process of demand
generation revealed that an analytic approach to
system shortages is not possible with present mathe-
matical techniques so that the location,  magnitude,
frequency, duration, and probability of the shortages
must be obtained using sampling techniques.  This would
consist of running the model hundreds of times with
the same configuration to obtain statistical repre-
sentation of the computed shortages.  Resources did
not permit this type of analysis but evaluation of the
data revealed that some simplifying approaches would
be appropriate.  Another surprising result of the data
analysis was that the demand patterns were highly
predictable, indicating the possibility that con-
servation techniques may be more reliable than other-
wise thought.

At each stage of the model,  experiments were run to
analyze the proposed solutions to determine their
potential value and, if possible,  obtain  some para-
meter values that appeared promising.  This analysis
was done on a device-independent basis which does not
indicate the interactions between devices in a system.
This is not a concern with the final set  of alter-
native solutions, however,  since they would be imple-
mented and operated with minimal device interaction.
One of the preliminary experiments was to vary the
regulation roles of the existing and proposed reser-
voirs.  This led to the conclusion that these reser-
voirs could be used more efficiently, from a water
supply standpoint, than indicated by previous analysis.

Although this model was not used directly in plan
formulation, information learned about the  system
and the alternative solutions led to the  development
of a simpler and more efficient, but also more limited,
model which was used in plan formulation.  Among other
things,  the simpler model considered the  demand supply
network in two nodes, Potomac supply and demand and
non-Potomac supply and demand.   It was found in the
early experiments on the water simulation model that
satisfying these two demand supply nodes  would be a
satisfactory simplification of the prototype for the
level of detail documented in the study report.

Potential Model Use

There are several potential uses for this model which
have not been attempted and had not been  considered
when the model was developed.  These became apparent
as the alternative water supply solutions were formu-
lated and as experience with the model was  obtained.
The model will be extremely useful as a public commu-
nications vehicle to educate the public on the existing
water supply system and on the potentials for water
supply development.  Its use should decrease the
problems of complicated and massive data bases neces-
sary to document proposed projects and should increase
the confidence that the public has in water supply
plans.  It can also be used by planning agencies  as it
was intended, as a planning tool to compare alternatives
and determine the most appropriate solution to the
water supply problem.  Once the environmental, eco-
nomic, and social parameter functions are included
in the model, it will be valuable in determining  the
impacts of water supply solutions and can be used to
keep the decision makers aware of the impacts of  their
decisions.  It will assist in economic and environ-
mental impact assessments, and could be used in cost-
sharing and in the billing of utilities for the cost
of regional water supply development and operation.
Before the region can decide whether or not it will
accept shortages, it must have a thorough understand-
ing of the deficits that will occur.   The model can
be used to obtain these statistics.   In its present
form it would be extremely inefficient for this task
but it can be converted into a batch process model
relatively quickly and cheaply.  Finally, many of  the
proposed solutions would be dynamically operated  and
the model could be used to establish the most effi-
cient operating policies.  Alternatively, it could be
used to make operational decisions in "real time"
predicting the consequences of any operational deci-
sion before that decision is made, greatly increasing
the efficiency and decreasing the risk of operating
those solutions.

                     References

1.   Publication:
          "Washington Metropolitan Area Water Supply
          Study Report"  (Draft, 1975), Part of
          "Northeastern United States Water Supply
          (NEWS) Study," New York, N.Y., U.S. Army
          Corps of Engineers, North Atlantic Division.

2.   Authority:
          Public Law 89-298, "Northeastern United
          States Water Supply" enacted October 27,
          1965.

3.   Consultant:
          Meta Systems Inc., Cambridge, Mass.
          Principal Investigator—Russell deLucia,
          Model Coding—Lewis Koppel, Hydrologic
          Simulation—Gerald Tierney, Water Demand
          Analysis—Myron Fiering.

-------
                                    FUTURE DIRECTIONS IN URBAN WATER MODELING
            Michael B. Sonnen, Prin. Engr.
            Larry A. Roesner, Prin. Engr.
           Water Resources Engineers, Inc.
              Walnut Creek, California
            Robert P.  Shubinski,  Vice  Pres.
            Water Resources  Engineers,  Inc.
                 Springfield,  Virginia
     A review was made for the Storm and Combined Sewer
Section of the U.S.  Environmental Protection Agency
(EPA)  concerning existing urban water mathematical
modeling capability.   From this review, gaps in needed
modeling technology were identified, and a philosoph-
ical  approach to filling those gaps was developed.
Finally, a phased implementation program for developing
the needed models was suggested.
                     Introduction

     In 1974-75, a review was performed by Water
Resources Engineers (WRE) for EPA's Storm and Combined
Sewer Section concerning the state-of-the-art of urban
water modeling.   Moreover, WRE was then to recommend
what model  development work could be undertaken most
feasibly in the  next five years.   The scope of the
review was  to include all the urban water subsystems,
such as watersheds, water supplies, treatment, or water
use, but the emphasis for obvious reasons was placed on
storm and combined sewer problems and their modeling.

     In this paper we outline our findings with
respect to most of the urban water subsystems
reviewed and suggest that inadequacies that continue
to exist in problem-solving capability are more
philosophical and scientific than numerical.
               Subsystem Modeling Needs

 Urban Watershed Hydrology

     Urban hydrology received considerable attention
 from modelers as soon as computers became routinely
 available to them.   This resulted in part because
 urban flooding and drainage problems were acute;
 damages were high and frequent.  Moreover, analysts
 knew intuitively that the rational method for
 designing runoff facilities was theoretically weak
 and yet tedious in complex applications.  So hydro-
 graphs, unit hydrographs, instantaneous unit hydro-
 graphs, systems of linear reservoirs, infiltration
 equations, Markov chains, and numerous other pieces
 of these and other puzzles were fed into the computer.
 Urban watershed models of quantity and quality have
 been the latest result.   The recent attention to
 "nonpoint" sources of pollution has raised the
 importance of the urban runoff problem, while the
 ability to model the phenomena occurring, partic-
 ularly the quality phenomena, has culminated for the
 moment with "dirt and dust" linkages that are theo-
 retically weak, if empirically capable of calibration.

 Water Distribution Systems

     Water distribution systems have been analyzed
with computer methods for years.  Numerous utilities
 and private consultants have more than adequate
 versions of programs that balance heads and flows in
these closed systems.  Some, if not most, of the
programs deal with numerically complicating system
paraphernalia such as pressure reducing valves,
variable speed pumps, and the like.   The quality
of water in these systems has not been included,
however, and recent discussions of lead poisoning
and carcinogenic substances in water supplies  may
draw more analysis attention to this important piece
of the urban water system.   Computerized, automatic
operation is on the drawing boards as well,  awaiting
realization.

Water Use
     To the writers'  knowledge,  the urban water use
subsystem has never been rigorously simulated,  in  a
cause-and-effect sense.   The most elaborate model
constructed appears to be MAIN-II, developed by
Hittman Associates.1   This model  either accepts
projections or makes  its own for "independent"
variables such as population density,  values of
dwelling units, and numbers of dwelling units in
each value range.  Among residential,  commercial -
institutional, industrial, and public-unaccounted
sectors of the community, 150 separate water use
categories can be projected.  Other models include
those of Schaake and  Major,2 and the "Data Management
Systems" of WRE3 and  Montgomery-WRE.4  All of these
approaches to water use projection, however, depend
on prior projections  of independent variables,  for
example, population,  per capita income, or water
pricing policy.  As such, they are all computeri-
zations of effects and their trends, rather than
models of water demand causes that simulate resultant
effects.

     Tihansky5 and Sonnen6 have each developed some
quality-use-consumer-cost programs that calculate  the
added costs to homeowners or industries of excessive
hardness or TDS in their supplies, but these accept
demands as given and do not account for any diminution
in projected unit demands if quality deteriorates, or
increases in use if quality is improved.  In short,
much more work could be done in simulation and economic
modeling analysis of urban water use.

     The water use subsystem is the most critical  of
all because it sets the quantity and quality demands
for all upstream subsystems, plus it is the source of
the quantity and quality loads imposed on all subsystems
downstream.

Sewer Systems

     Sewer design problems have been approached with
models that adopt the steady-state "design flow"
concept which obviously makes them more applicable to
sanitary sewers than  to storm sewers.   Mathematical
programming techniques have been used to discover
optimal sizes, slopes, and—in rare cases—configu-
rations of drainage networks.  Fisher, et al.7
presented an integer programming formulation for the
diameter-slope problem.   In spite of finding a 10
percent cost savings  over a traditional design method,
the authors concluded that uncertainties in excavation
costs, the dynamic nature of actual flows, and the
arbitrary nature of velocity constraints detract
                                                      829

-------
considerably from the significance of the indicated
saving.  Argaman, et al.,8 have also considered optimal
network configuration as well  as pipe sizes and slopes.
They found their dynamic programming approach to
require amounts of computer time that severely limit
the size of the sewer network that can be considered.
The development of programming techniques for sewer
design is relatively recent, and their application to
real problems has not been documented.
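
     The programming formulations cited above are not reproduced in the
review, but the flavor of the diameter-slope problem can be suggested with a
minimal sketch in Python.  The candidate sizes, slopes, velocity limits, and
cost coefficients below are invented for illustration; they are not taken
from references 7 or 8.

# Illustrative sketch only: enumerate candidate diameter/slope pairs for a
# single sewer reach and keep the cheapest pair that carries a design flow
# within assumed velocity limits.  Cost coefficients and limits are invented.
import math

DIAMETERS_FT = [1.0, 1.25, 1.5, 2.0, 2.5, 3.0]      # commercially available sizes (assumed)
SLOPES = [0.001, 0.002, 0.004, 0.008, 0.016]        # candidate invert slopes (assumed)

def full_pipe_capacity_cfs(d_ft, slope, n=0.013):
    """Manning's equation for a circular pipe flowing full (English units)."""
    area = math.pi * d_ft**2 / 4.0
    hyd_radius = d_ft / 4.0
    velocity = (1.486 / n) * hyd_radius**(2.0 / 3.0) * math.sqrt(slope)
    return area * velocity, velocity

def cheapest_design(design_flow_cfs, reach_length_ft, v_min=2.0, v_max=10.0):
    best = None
    for d in DIAMETERS_FT:
        for s in SLOPES:
            q_full, v_full = full_pipe_capacity_cfs(d, s)
            if q_full < design_flow_cfs or not (v_min <= v_full <= v_max):
                continue
            # Invented cost model: pipe cost grows with diameter, excavation with slope.
            cost = reach_length_ft * (40.0 * d**1.5 + 2000.0 * s)
            if best is None or cost < best[0]:
                best = (cost, d, s, v_full)
    return best

if __name__ == "__main__":
    print(cheapest_design(design_flow_cfs=12.0, reach_length_ft=400.0))

Such an enumeration is crude next to an integer or dynamic programming
formulation, but it makes plain why excavation-cost uncertainty and arbitrary
velocity constraints can swamp the indicated savings.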

     Analysis models describe the performance of a
given collection and conveyance system under given
inflow conditions.   Model output is usually in terms
of flow rates, and possibly in terms of impurity con-
centrations over time at various points including at
the system outfall.   Brandstetter9 has conducted a
comprehensive review of the more sophisticated of
these models.  The initially developed hydraulic
transport routine for EPA's SWMM model10 is typical.
Depending on the level of resolution needed to repre-
sent temporal variables and the pipe network,
relatively coarse to highly sophisticated analytical
models are available.  SWMM is one of the more sophis-
ticated (and expensive) of these models.

     Water Resources Engineers has developed11 and the
Corps of Engineers has documented12 a planning level
model (STORM) in which continuous computer simulation
(at hourly intervals) with historical rainfall records
is used to predict the effects of various treatment
and storage capacities on overflow quantities and
quality.  No consideration is given to the collection
and conveyance system, however, and no cost relation-
ships or optimizing algorithms have been included.  A
significant outcome of using this model, however, has
been the emergence of the concept of the "design
event" including a dry period for accumulation of
pollutants on the watershed, as opposed to the purely
hydrologic concept of a "design storm."
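
     STORM itself is not reproduced here, but the kind of hourly
storage/treatment/overflow bookkeeping described above can be suggested by a
minimal Python sketch.  The runoff coefficient, capacities, and rainfall
record are assumed values for illustration only.

# Minimal sketch (not STORM itself) of an hourly storage/treatment/overflow
# mass balance driven by a rainfall record.  All parameter values are assumed.
def storage_treatment_overflow(rain_in_per_hr, area_acres=100.0, runoff_coef=0.5,
                               treat_cfs=5.0, storage_cap_acft=2.0):
    CFS_HR_TO_ACFT = 3600.0 / 43560.0     # 1 cfs for 1 hour ~= 0.083 acre-ft
    storage = 0.0                         # acre-ft currently held in the basin
    treated = overflowed = 0.0            # running totals, acre-ft
    for rain in rain_in_per_hr:
        runoff = runoff_coef * rain / 12.0 * area_acres      # acre-ft this hour
        storage += runoff
        draw = min(storage, treat_cfs * CFS_HR_TO_ACFT)      # send what the plant can take
        storage -= draw
        treated += draw
        if storage > storage_cap_acft:                       # excess spills as overflow
            overflowed += storage - storage_cap_acft
            storage = storage_cap_acft
    return treated, overflowed

if __name__ == "__main__":
    rain = [0.0, 0.1, 0.4, 0.6, 0.2, 0.0, 0.0, 0.1]          # assumed hourly record, inches
    print(storage_treatment_overflow(rain))

Run continuously over a long historical record, a balance of this kind is
what allows the "design event," including the antecedent dry period, to
emerge from the record rather than be postulated as a design storm.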

Waste Treatment

     This subsystem received a flurry of modeling
attention from early systems analysts.  Most of this
work was directed at optimizing the amount of waste
treatment at various points along a stream, given a
dissolved oxygen standard and the Streeter-Phelps
equation.  A later tack was taken to simulate waste
treatment processes themselves.  The majority of this
work to date has been an exercise in programming the
rules of thumb of sanitary engineering design, but
some elucidation of process variables has resulted.
A recent article by Christensen and McCarty13 gives
a hopeful signal that causes can be modeled funda-
mentally to predict effects rather than having to
"predict" the answer from statistical analyses of
the measured answers at 20 other plants.

Receiving Waters

     Many, many programs to solve the Streeter-Phelps
equation along streams were developed in the early
1960's.  Link-node models for estuaries with dynamic
hydraulic solutions were developed by 1965 for the
Delaware estuary and for San Francisco Bay.  Lake and
reservoir temperature models and groundwater models
followed by 1967-1969.  In 1969-1971 the receiving
water model called RECEIV was incorporated in the EPA
SWMM model.  During this period, the feasibility of
modeling several aquatic trophic levels and their inter-
related responses to ambient water quality was shown.
This philosophy was eventually demonstrated for San
Francisco Bay and Lake Washington.  Since then many
stream, estuary, and lake models have been developed
and updated to include the "ecologic model" inter-
relationships.
     Throughout the roughly 15-year history of modeling
of receiving waters, the capabilities of the developed
models have lagged behind the scope of the problems
being faced by urban water management decision makers.
Current problems specified for attention by the Water
Pollution Control Act Amendments of 1972 include deri-
vation of "wasteload allocations" for waters designated
as "water quality class segments."  Current ecologic
models applied to this problem have proved helpful but
less than completely satisfactory.  The Safe Drinking
Water Act of 1974 implies a need for a model to treat
as many as 150 substances and their interactions.
            Recommended Future Water Models

     From its review of the state-of-the-art and its
view of what is 1) most likely of early success,
2) most required in terms of pressing needs, and
3) most feasible in terms of EPA's research posture
and wherewithal, WRE recommended14 the following
models receive development attention in the next 5
years:

Planning Models

1.  A new and better watershed quality model.

2.  A transport simulation capability in a planning
model for storage/treatment/overflow evaluations
(STORM-II).

3.  Capability to simulate quality control or treatment
processes in STORM-II.

4.  A long-term (10-30 year) receiving water ecologic
model.

5.  An economics model for assessing users' water
supply benefits and costs.

6.  An economics model for assessing receiving water
users' benefits and costs.

Design/Analysis Models

1.  A solids deposition and scour capability in a
hydraulically sound sewer transport model.

2.  Dry-weather waste treatment simulation capability
in a SWMM-type model.

3.  Reclamation or reuse routing capability in a
transport/treatment model.

4.  Nonstructural runoff control simulation capability
in a SWMM-type runoff module.

Operation/Control Models

1.  Real-time control software for sewer systems.

2.  Real-time spatially varied runoff prediction
capability.
                  Modeling Philosophy

     There are two points about modeling that our
project has suggested may be more important to getting
problems solved than the mere statement of a subsystem's
set of unresolved technical circumstances.  These are:
1) What are the consequences of poor communication
between the developer of a model and its subsequent
user?  2) What are the consequences of claiming to
model a process when in reality we are managing somehow
to reproduce the expected value of its output?
                                                       830

-------
     These problems are related to one another, and
they may each be restatements of a more general riddle
which could be stated, Why are we building all these
models anyway?  The quick, obvious answer is,  We need
them, just to perform all the computations for us,
just to do the arithmetic involved in analyzing a
basin-wide pollution problem over a 30-year period.
Fine, that's answer enough.  But such an answer implies
1) that the model developer and the user of the model's
output each understand perfectly what arithmetic needs
to be done, 2) that they are each confident that the
computer is being told to do the correct arithmetic
via the program, and, of course, 3) that the machine
will do correctly what it is told to do.  It seems to
us that the last of these three assumptions is the
only one worth betting on.

     We conducted an informal survey of ten people,
roughly half of whom were model developers, acade-
micians, researchers; the other half were model users,
front-line water and waste managers, city officials,
Federal data collectors, utility managers.  With few
exceptions we heard that communication between the
developer of a model (computer program, really) and the
subsequent user has been garbled at best.  Invariably,
a delivered program contains bugs, solves a slightly
different and usually much simpler problem than the
one(s) advertised, or simply will not function or
execute with a different set of data.

     There are many variations of the same communi-
cations problem.  Often the delivered card deck and
documentation reports do not clearly annotate the
options available or assumptions implicit in the
programs.  Sometimes the mathematical statement of
the general problem is far more precise than the data
used to "verify" the model, and hence the program
takes inordinate amounts of time and money to generate
its highly approximated and questionable results.
Saddest of all are the cases where the model developer
and the ultimate user of the model's results, often
the fellow who paid for the development, never
communicated from the start; and the model developed
addresses a problem the user never had, while his real
problem is still unsolved.

     Every developer of a model who has given his pro-
gram to someone to use has heard these complaints.
Ironically, he knew he would, and he let the program
out of his hands anyway.  Usually, he lets it go
because the user bought it from him.  But he knows,
and the user cannot believe, that there will be
problems with the very next application.  Sometimes
there is no excuse for this phenomenon, just as there
is no excuse for somebody else's meatloaf not tasting
like your mom's.  They just are not the same; they
were made differently even though they were called the
same thing.  Another reason it occurs is because the
modeler knows from the start that he is setting out
to approximate a solution to a theoretical problem
with both an approximation of the theory and an
approximation to the prototype water body.  The model
user or the user of the model's results views his
problem, and the theoretical statement, as precise
and infinitesimal.   Almost invariably, the first
application of a handed-over program is made to a
problem that either 1)  lies outside the range of
applicability of the equations simplified in the
program, or 2) requires a time step shorter than the
model or its "theory" can accept.  Highly qualified
and experienced programmers make these mistakes just
as neophytes do.  The nondeveloper-user almost always
expects a new program to be both more exact and more
flexible than it is or was ever intended to be.
Lastly, it occurs because modelers make mistakes.
     Without question, improvements in model  documen-
tation and preparation of user's manuals can  be made.
The communications problems between modelers  and
subsequent users of their products are too numerous
and well documented for simple sloppiness of  expla-
nation to continue.  Responsibilities lie with both
parties, however, and the tedious method of constant
re-explanation between the developer and the  subsequent
user is the only failsafe procedure.  A nondeveloper-
user who picks up a program deck cold certainly is go-
ing to have problems with it, period.

     So much for crossed wires and simply not hearing
what the other fellow said.  The more insidious prob-
lem is the "model" that both the parties accept but
that is not a model at all.  One of the respondents to
our survey said, "... analysis of the results of the
model run are the real key to application of any model."
Right on.  It is worth amplifying that computer print-
outs rarely if ever contain a singular answer to a real
problem.  The significant analysis leading to solution
of a real problem starts when a successful run or set
of runs ends.  A model user has to be a qualified
results analyst, or he is not a user at all.   Getting
a program to execute with data given in the format
described in a user's manual is one problem,  but
interpreting the results is quite another and more
important problem.  To interpret the results  correctly,
of course, means that the user can correctly  interpret
the model's inputs and its general workings as well;
and perhaps most importantly he must know and understand
the particular water body, land surface, or treatment
process being modeled.  In other words, there is an
onus on the user of the model or of its results to sort
through the mass of modeled evidence to satisfy himself
that either the model or the data are not quite correct
or that the prototype could indeed behave in  such a
strange or unexpected way.  If the results are just
what he would have expected, he must still be able to
determine whether the model and he were both  right or
he must be willing to accept the consequences.

     For example, a computer program that predicts the
suspended solids concentration in the effluent of a
primary clarifier, from a relationship between overflow
rate and the removal efficiency measured at 60 existing
plants, is not a model of the behavior of a primary
sedimentation tank.  In a given situation, such a
program may be adequate.  It may even prove to have
been right, once the 61st, "modeled" tank has been
built.  But the program was never written to  simulate
what would happen in the 61st tank, and the model
developer could hardly be blamed if the 61st  tank and
its contents behaved quite differently.  Unless, of
course, he had claimed that he had modeled the sedimen-
tation process, which he clearly had not.  We might
add, since currently there are so many people clamoring
to use runoff quality models that exist today, that many
of them have been built to "predict" qualities not on
the basis of what happens on a watershed surface but on
the basis of what has been measured to have resulted in
the waters that ran off many other surfaces.   There may
be a big difference, and while some modelers  may not
even be aware that they're doing that, a user of the
model must know it.  In other words, many simulation
models around today are designed to predict effects
based on measured effects elsewhere; they are not
designed to simulate causes which operate on  input data
to produce expectable effects.  Future urban  water
models should be, but they are not now so designed be-
cause so much is still unknown about causative factors.
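
     The distinction being drawn here can be made concrete with a trivial
sketch: a least-squares fit of removal efficiency against overflow rate at
other plants is a statistical predictor of effects, not a simulation of
settling.  The "observations" below are synthetic, generated in the code
purely to show the mechanics; they are not measured plant data.

# Sketch of the distinction drawn in the text: an empirical predictor fitted
# to observations at other plants, versus a causal simulation of settling.
import random

random.seed(1)
# Synthetic (overflow rate, removal efficiency) pairs standing in for records
# from other plants -- NOT real measurements.
obs = [(q, 65.0 - 0.015 * q + random.gauss(0.0, 2.0))
       for q in range(400, 1600, 100)]

def least_squares(pairs):
    """Ordinary least-squares slope and intercept."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    slope = sxy / sxx
    return slope, my - slope * mx

b, a = least_squares(obs)
# The fitted line "predicts" a new tank from the record of others; it encodes
# no settling physics, so it cannot say how that tank would behave if its
# geometry, temperature, or influent solids differed from the fitted record.
print("predicted removal at 800 gpd/sq ft: %.1f percent" % (a + b * 800.0))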

     Future users, remember that despite the  best inten-
tions or dreams of the developer, a model is  always im-
perfect.  What you cannot forget is that a less than
perfect representation of prototype systems is the goal
of the process from the beginning.
                                                       831

-------
Developers, remember that while adding  more and more
padding to a mannequin might make a better and better
approximation of Raquel Welch, it would be a delusion
to expect Raquel to finally appear in the flesh.  Both
of you remember, it is fallacious to believe that an
extra large bikini  is a model  of Raquel merely because,
for at least part of the prototype, "it seems to fit."
                    Acknowledgments

     This paper is based on a contract (68-03-0499)
sponsored by the Storm and Combined Sewer Section of
the U.S. Environmental Protection Agency.   The Project
Officer for EPA was Chi-Yuan Fan.  Richard Field also
contributed product review for EPA.  The guidance and
support from these gentlemen and EPA is acknowledged
with gratitude.
                      References

1.  Hittman Associates, Inc., Forecasting Municipal
    Water Requirements, Report HIT-413, Hittman
    Associates, Inc., Columbia, Maryland, September
    1969; Vol. I, "The MAIN II System," 208 p.;
    Vol. II, "The MAIN II System Users Manual," 425 p.

2.  Schaake, J.C., Jr., and D.C. Major, "Model  for
    Estimating Regional Water Needs," Water Resources
    Research  AGU, Vol. 8, No. 3, June 1972,
    pp. 755-759.

3.  Water Resources Engineers, An Investigation of Salt
    Balance in the Upper Santa Ana River Basin,
    presented to the State Water Resources Control
    Board and the Santa Ana River Basin Regional  Water
    Quality Control Board, March 1969, 198 p.

4.  Montgomery-Water Resources Engineers, Water,  Waste-
    water, and Flood Control  Facilities Planning  Model,
    Submitted to the San Diego Comprehensive Planning
    Organization, January 1974, 189 p.

5.  Tihansky, D.P., "Damage Assessment of Household
    Water Quality," Journal of the Environmental
    Engineering Division, ASCE, Vol.  100, No.  EE4,
    August 1974, pp. 905-918.

6.  Sonnen, M.B., "Quality Related Costs of Regional
    Water Users," Journal of the Hydraulics Division,
    ASCE, Vol. 99, No. HY10,  October 1973, pp.  1849-
    1864.

7.  Fisher, J.M., G.M. Karadi, and W.W. McVinnie,
    "Design of Sewer Systems," Water Resources Bulletin,
    AWRA, Vol. 7, No. 2, April 1971, pp. 294-302.

8.  Argaman, Y., U. Shamir, and E. Spivak, "Design of
    Optimal Sewerage Systems," Journal of the Environ-
    mental Engineering Division, ASCE, Vol. 99,
    No. EE5, October 1973, pp. 703-716.

9.  Brandstetter, A., Comparative Analysis of Urban
    Stormwater Models, BN-SA-320, Pacific Northwest
    Laboratories, Battelle Memorial Institute,
    Richland, Washington, August 1974, 88 p.

10. Environmental Protection Agency, Storm Water
    Management Model, 4 Volumes, Washington D.C.,
    1971, "Volume I-Final Report," "Volume II-
    Verification and Testing," "Volume Ill-User's
    Manual," and "Volume IV-Program Listing."

11. Roesner, L.A., H.M. Nichandros, R.P. Shubinski,
    A.D. Feldman, J.W. Abbott, and A.O. Friedland,
    A Model for Evaluating Runoff-Quality in Metro-

                                                      832
    politan Master Planning, ASCE Urban Water Resources
    Research Program Technical Memorandum No. 23,
    American Society of Civil Engineers, New York,
    April 1974, 73 p.

12.  U.S. Army Corps of Engineers, Urban Runoff:
    Storage, Treatment and Overflow Model "STORM",
    Draft Generalized Computer Program 723-58-2250,
    Hydrologic Engineering Center, Davis, California,
    September 1973, 62 p.

13.  Christensen, D.R. and P.L. McCarty, "Multi-Process
    Biological Treatment Model," Journal of the Water
    Pollution Control Federation, Vol. 47, No. 11,
    November 1975, pp. 2652-2664.

14.  Sonnen, M.B., L.A. Roesner and R.P. Shubinski,
    Future Direction of Urban Water Models, Prepared
    for the Office of Research and Development, EPA,
    by Water Resources Engineers, Inc., Walnut Creek,
    California, Printed by NTIS as PB-249 049,
    February 1976, 90p.

-------
                                      TRANSPORT MODELING IN THE ENVIRONMENT

                                 USING THE DISCRETE-PARCEL-RANDOM-WALK APPROACH
                   S. W. Ahlstrom
         Water and Land Resources Department
       Battelle Pacific Northwest Laboratories
                Richland, Washington
                                                  H. P. Foote
                                      Water and Land Resources Department
                                    Battelle Pacific Northwest Laboratories
                                             Richland, Washington
Abstract

When formulating a mathematical model for simulating
transport processes in the environment, the system of
interest can be viewed as a continuum of matter and
energy or as a large set of small discrete parcels of
mass and energy.  The latter approach is used in the
formulation of the Discrete-Parcel-Random-Walk (DPRW)
Transport Model.  Each parcel has associated with it
a set of spatial coordinates as well as a set of dis-
crete quantities of mass and energy.  A parcel's move-
ment is assumed to be independent of any other parcel
in the system.  A Lagrangian scheme is used for com-
puting the parcel advection and a Markov random walk
concept is used for simulating the parcel diffusion
and dispersion.  The DPRW technique is not subject to
numerical dispersion and it can be applied to three-
dimensional cases with only a linear increase in
computation time.  A wide variety of complex source/
sink terms can be included in the model with relative
ease.  Examples of the model's application in the
areas of oil spill drift forecasting, coastal power
plant effluent analysis, and solute transport in
groundwater systems are presented.
Introduction

The fundamental principle upon which the Battelle
Generalized Transport Model and all other mass trans-
port models are based is the law of conservation of
mass.  This law can be expressed as:
     [the rate of change of mass concentration of chemical
      species k within a given control volume]

   = [the net advective flux of species k into the control
      volume]

   + [the net diffusive flux of species k into the control
      volume]

   + [the net rate of production of species k within the
      control volume]                                            (1)
A mathematical statement of Equation 1 is usually re-
ferred to as an equation of continuity.  A general form
of the continuity equation for a non-isothermal multi-
component fluid consisting of K chemical species can
be written as:

     \frac{\partial \Gamma^k}{\partial t} = -(\nabla \cdot \Gamma^k \bar{v}) - (\nabla \cdot \bar{j}^k) + r^k          (2)

where:
     k = 1,2,3...K

     \Gamma^k = the mass concentration of species k

     \bar{v} = the mass average velocity of the fluid

     \bar{j}^k = the mass flux of k relative to \bar{v}
           (diffusive flux)

     r^k = the net rate of production of species k
           within the control volume.

The addition of K equations of this kind gives the
equation of continuity for a mixture.  Each term of
Equation 2 corresponds directly with the terms of
Equation 1.

The term on the left-hand side of Equation 2 is re-
ferred to as the transient term.  It may be inter-
preted as the total rate of change of mass concentra-
tion of species k at a point in space at a given
instant in time.  The mass concentration of any
species is in general assumed to be a function of
temperature, of time, and of spatial coordinates, as
well as the concentration of all the other species
present.  The primary function of a transport model
is to predict and quantify these changes in concentra-
tion as a function of time and location.

The first term on the right-hand side of Equation 2 is
referred to as the advective term.  This term repre-
sents a change in concentration of the system result-
ing from the gross movement of fluid in which species
k is transported.  The mass average velocity vector
of the fluid mixture, v, is a function of time, space,
temperature, and the chemical composition of the
mixture.  If v is constant with respect to time, the
flow field is said to be steady.  For most applica-
tions to large scale environmental systems the
assumption of a steady flow field is usually adequate
only for short-term simulations.  For most long-term
simulations, the velocity field cannot reasonably be
assumed to be constant.

The second term on the right-hand side of the equation
of continuity is called the diffusive term.  This term
represents the change in concentration of the system
resulting from the random molecular motion of each
species in the mixture.  The driving force of the
relative mass flux, j^k, can be concentration,
pressure, temperature or other gradients.  In many
large scale environmental transport analyses the
contribution of molecular diffusion is often very
minor, but when eddy diffusion is coupled with
molecular diffusion, this term may become much more
significant.  The rationale for the inclusion of eddy
diffusion in this term is discussed below.

The last term in Equation 2 represents all internal
mechanisms that tend to change the net amount of
species k present in a control volume.  The reactivity
of a chemical system may be a function of temperature
and any or all of the K mass concentrations in the
mixture.  Ideally this term should consist of a series
of rate expressions which represent all known
mechanisms by which species k can react with its
                                                       833

-------
immediate environment.  Species for which r^k is zero
are referred to as conservative substances because they
are neither created nor destroyed within a control
volume.

Equation 2, as written, is a very general expression.
It applies to both liquid and gaseous mixtures contain-
ing an arbitrary number of components in any ratio,
reacting over a wide range of temperatures and pres-
sures.  Workable models are, by necessity, much more
limited.

Simplifying Assumptions

Equation 2 serves as the starting point for the expla-
nation of the assumptions that are present in the
existing DPRW code.  Simplifying assumptions were made
for one or more of the following reasons :

      1.  A portion of the general equation, based on
         an analysis of the best available informa-
         tion, appeared to be relatively insignifi-
         cant for the anticipated applications of
         the model.

      2.  The quality of existing data or additional
         data that can be reasonably obtained does
         not justify considering anything above a
         certain level of complexity.

      3.  To allow a numerical solution within
         reasonable economical constraints.

Each  simplifying assumption will be denoted by se-
quential numbers enclosed in square brackets preceding
the assumption as it appears in the text; i.e., the
fifth assumption will be preceded by  [5].

When  advective fields are calculated or measured it is
not practical to resolve the micro-advection patterns
that  are known to exist in nearly all large scale
environmental flow systems.  These turbulent flow
patterns, often the primary mixing mechanism, achieve
essentially the same result as diffusive processes
only  much more rapidly.  In some respects, micro-
advection phenomena, commonly called eddy or turbulent
diffusion can be thought of as a random process,
occurring on a larger scale, but having many character-
istics  in common with molecular scale diffusion.  Be-
cause of these similarities, it has been the practice
historically to [1] approximate this phenomenon by
including it with the molecular diffusion in the rela-
tive mass flux term, j^k.

If it is assumed that [2] the relative mass flux can
be adequately described by expressions having the form
of Fick's First Law, then j^k can be expressed as:

     \bar{j}^k = -\Gamma (D_m + D_e) \nabla w^k                    (3)

where:
     \Gamma = total mass density of the solution

     D_m = molecular diffusivity tensor

     D_e = eddy or turbulent diffusivity tensor

     w^k = the mass fraction of species k (\Gamma^k/\Gamma)

The diffusivity tensors are in general functions of
both space and time.  If the molecular diffusivity
is assumed [3] to be negligible with respect to the
turbulent diffusivity and if [4] only the longitudinal
components of the tensor are considered to be signifi-
cant, then Equation 3 can be reduced to:

     \bar{j}^k = -\Gamma E \nabla w^k                              (4)

where:
     E = longitudinal component of the eddy
         diffusivity tensor

The velocity distributions required  for  a  transport
simulation can be derived from a hydrodynamic  numer-
ical or physical model study, and/or a field measure-
ment program conducted prior to running  the simulation.
The assumption inherent in this practice is that  [5]
the advection patterns are not dependent on the
chemical composition or temperature  of the solution,
or in other words, the momentum, mass and  energy
transport processes are decoupled.   This assumption
is valid for systems that are not highly non-isothermal
and which contain relatively low concentrations of
contaminants.

Another assumption [6] considers the transporting
medium (water) to be incompressible.  This assumption
is considered valid for most water mixtures that are
not near the boiling point.  The restriction to
incompressible fluids causes the convection term of
Equation 2 to be simplified as follows:

     \nabla \cdot (\Gamma^k \bar{v}) = \Gamma^k (\nabla \cdot \bar{v}) + (\bar{v} \cdot \nabla \Gamma^k)
                        = \bar{v} \cdot \nabla \Gamma^k                        (5)

since \nabla \cdot \bar{v} = 0 for incompressible fluids.

If all of the above assumptions are incorporated into
Equation 2, and it is also assumed that [7] the total
mass density, \Gamma, of the mixture remains relatively
constant, the result is:

     \frac{\partial \Gamma^k}{\partial t} = -(\bar{v} \cdot \nabla \Gamma^k) + \nabla \cdot (E \nabla \Gamma^k) + r^k          (6)
Although this equation was developed from a mass
balance point of view, it can also describe the
transport of heat under appropriate circumstances.
Starting with the law of conservation of energy, and
making assumptions identical or analogous to those
made above, one can derive an energy balance with
the same functional form as Equation 6.  Consequently,
the transport of mass or heat can be calculated by
the same numerical computation code.

Boundary Conditions

Boundary conditions for transport analyses can be
specified quite simply.  Four boundary types are
defined (a brief illustrative sketch follows the list):

     1.  Free Flow Boundary - any matter or energy
         transported across this type of boundary
         is assumed to have exited from the system.

     2.  Reflecting or No Flow Boundary - any com-
         ponent encountering this type of boundary
         is reflected back into the system.

     3.  Unconditional Sticking Boundary - any sub-
         stance that comes in contact with this
         type of boundary will adhere to it.

     4.  Conditional Sticking Boundary - when matter
         comes in contact with this type of boundary
         it may adhere to the boundary or be reflect-
         ed from it.  The percentage of the coincident
                                                       834

-------
         matter that is allowed to stick is calculated
         from a predefined probability distribution
         function.
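
The four rules above can be suggested by a minimal Python sketch for a single
coordinate; the sticking probability used for the conditional boundary is an
assumed placeholder, not a value from the model.

# Minimal sketch of the four boundary rules applied to one coordinate of a
# parcel.  The sticking probability is an assumed placeholder.
import random

def apply_boundary(x, lo, hi, kind, p_stick=0.3):
    """Return (new_x, status) after testing coordinate x against [lo, hi]."""
    if lo <= x <= hi:
        return x, "in system"
    if kind == "free":                       # matter leaving the system is lost
        return x, "exited"
    if kind == "reflect":                    # fold the overshoot back inside
        return (2 * lo - x if x < lo else 2 * hi - x), "in system"
    if kind == "stick":                      # unconditional adherence at the wall
        return (lo if x < lo else hi), "stuck"
    if kind == "conditional":                # adhere with probability p_stick, else reflect
        if random.random() < p_stick:
            return (lo if x < lo else hi), "stuck"
        return (2 * lo - x if x < lo else 2 * hi - x), "in system"
    raise ValueError("unknown boundary type")

if __name__ == "__main__":
    print(apply_boundary(10.4, 0.0, 10.0, "reflect"))   # -> (9.6, 'in system')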

The Numerical Solution Algorithm

A system of matter can be viewed from two alternative
frames of reference.  The classical approach is to
view the advection-diffusion processes from an Eulerian
point of view, establishing the transport equation from
a consideration of concentrations and flux of a con-
tinuum at fixed points in space.  A quantity of matter
can also be thought of as being comprised of a large
number of discrete particles.  Keeping this in mind,
it is also possible to approach the transport problem
from a Lagrangian viewpoint, focusing on the history
of particle motions.  To arrive at useful results,
statistical properties of the motions have to be con-
sidered, so that this second approach may be labeled
as "statistical", in contrast to the "phenomenological"
method mentioned above.

Both of these approaches, if pursued, will yield Equa-
tion 2.  However, the viewpoint that is chosen to
derive the transport equation has some very definite
implications relating to possible numerical solution
techniques.  The phenomenological approach which views
matter as a continuum suggests the application of
finite difference or finite element numerical techni-
ques.  The statistical viewpoint suggests a different
type of approach using a discrete particle, Lagrangian
algorithm.  This type of numerical scheme is used in
the most recent version of Battelle Generalized Trans-
port Model and is referred to as the Discrete-Parcel-
Random-Walk method.

The basic device or tool employed by this technique
is a hypothetical entity called the computational
parcel.  A quantity of matter or energy is represented
as consisting of a finite ensemble of these parcels.
Each parcel has associated with it a set of Cartesian
spatial coordinates (x_p^n, y_p^n, z_p^n) and a set of
discrete quantities of matter or heat \xi_p^{k,n}, where:

   p = the parcel index (p = 1,2,3...P) where P is
       the total number of parcels used to represent
       a given quantity of matter.

   k = the transported species index (k = 1,2,3...K)
       where K is the total number of constituents
       present in the system.

   n = the time level index (n = 1,2,3...N) where N
       is the number of time increments to be computed.

For example, the location of parcel 3 after 5 time
steps is (x_3^5, y_3^5, z_3^5).  If the problem is
concerned with 5 distinct constituents, this parcel
would have associated with it 5 separate heat or mass
quantities (\xi_3^{1,5}, \xi_3^{2,5}, \xi_3^{3,5},
\xi_3^{4,5}, \xi_3^{5,5}).

The DPRW transport code requires a velocity matrix
describing the flow patterns of the transporting media
as input data.  The flow field is allowed to be a
function of both time and space.  The spatial velocity
components at the location of parcel "p",
(u_p^n, v_p^n, w_p^n), can be interpolated from the
surrounding matrix of values.  The advective transport
component is then computed by:

     x_p^* = x_p^n + \Delta t \, u_p^n                  (7a)

     y_p^* = y_p^n + \Delta t \, v_p^n                  (7b)

     z_p^* = z_p^n + \Delta t \, w_p^n                  (7c)

where \Delta t is the time increment, and * denotes an
intermediate or temporary value.

If a smooth continuous solution is desired the maximum
value of \Delta t should be limited by the requirement
that the maximum distance any parcel is transported
must be less than or equal to the distance between
data points in the velocity matrix.

The dispersive component for each parcel is then cal-
culated by assuming that the parcels are subject to
Brownian-like random motion resulting from turbulence
present in the transporting medium.  From statistical
considerations it can then be shown that the root-
mean-squared (rms) distance moved by a given parcel
during the time, \Delta t, in three-dimensional
isotropic space is

     r_{rms} = \sqrt{6 E \Delta t}                      (8)

where E is the eddy diffusivity, which is proportional
to the square of the "typical" eddy size.

The dispersive step size for an individual parcel is
generated by:

     r_d = [R]                                          (9)

[R] represents a random number in the range 0 \rightarrow z,
where z must be chosen so that the rms value of all of
the r_d generated is equal to the value specified by
Equation 8.  The random number generators available on
most computer systems will return values in the range
0.0 \rightarrow 1.0.  The rms value of the set of all
numbers output by a number generator of this type is
given by Equation 10 if, in fact, the generator is
truly random.

     rms[R] = \left[ \int_0^1 R^2 \, dR \right]^{1/2} = 1/\sqrt{3}      (10)

Assuming that an adequate random number generator of
this type is available, dispersive step lengths with
the appropriate rms distance can be generated by

     r_d = \sqrt{3} \, \sqrt{6 E \Delta t} \, [R]       (11)

The new Cartesian coordinates of each parcel are then
calculated by

     x_p^{n+1} = x_p^* + r_d \cos(\theta) \sin(\phi)    (12a)

     y_p^{n+1} = y_p^* + r_d \sin(\theta) \sin(\phi)    (12b)

     z_p^{n+1} = z_p^* + r_d \cos(\phi)                 (12c)

where \theta is a random angle from 0 \rightarrow 2\pi and
\phi is a random angle from 0 \rightarrow \pi.

Parcel "p" has thereby been transported by advection
and diffusion mechanisms from (x_p^n, y_p^n, z_p^n) to
(x_p^{n+1}, y_p^{n+1}, z_p^{n+1}) during time step "n".
The trace of a parcel during this time step is
illustrated in Figure 1.
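
The advective step of Equations 7a-7c, the dispersive step length of
Equation 11, and the random angles of Equations 12a-12c can be collected into
a short Python sketch.  The constant velocity components and eddy diffusivity
below are assumed stand-ins for values interpolated from the velocity matrix.

# Sketch of one DPRW time step for a single parcel (Equations 7, 11 and 12).
import math, random

def dprw_step(x, y, z, u, v, w, eddy_diff, dt):
    # Advective displacement (Equations 7a-7c) to the intermediate position.
    xs, ys, zs = x + dt * u, y + dt * v, z + dt * w
    # Dispersive step length with the rms value of Equation 8 (Equation 11).
    r_d = math.sqrt(3.0) * math.sqrt(6.0 * eddy_diff * dt) * random.random()
    # Random direction: theta in [0, 2*pi), phi in [0, pi) (Equations 12a-12c).
    theta = random.uniform(0.0, 2.0 * math.pi)
    phi = random.uniform(0.0, math.pi)
    return (xs + r_d * math.cos(theta) * math.sin(phi),
            ys + r_d * math.sin(theta) * math.sin(phi),
            zs + r_d * math.cos(phi))

if __name__ == "__main__":
    random.seed(0)
    print(dprw_step(0.0, 0.0, 0.0, u=0.5, v=0.1, w=0.0, eddy_diff=2.0, dt=60.0))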
                                                       835

-------
  Figure 1.  Vector Diagram of Transport Components
When this computation has been completed  for  every
parcel in the system, a grid network can  be super-
imposed upon the spatially distributed ensemble  of
parcels.  The nodal points of the grid are labeled
with i,j,l indices where:

   i = 1,2,3.... I = number of nodal points in
                     x-direction

   j = 1,2,3.... J = number of nodal points in
                     y-direction

   l = 1,2,3.... L = number of nodal points in
                     z-direction.

The nodal points form the vertices for (I-1) x (J-1)
x (L-1) rectangular solids which are referred to as
cells.  Parcel "p" is said to lie within cell (i,j,l)
if

     x_i \le x_p^{n+1} < x_{i+1}                        (13a)

     y_j \le y_p^{n+1} < y_{j+1}                        (13b)

     z_l \le z_p^{n+1} < z_{l+1}                        (13c)

The total amount of matter or energy within cell
(i,j,l) is computed by summing the \xi values for all
parcels that lie within the cell for each species:

     S_{ijl}^{k,*} = \sum_{m=1}^{n_{ijl}} \xi_m^{k,*}   (14)

where n_{ijl} = number of parcels within cell (i,j,l).

The volume of the cell, V_{ijl}, is a known quantity.
Consequently, an average intensive quality variable,
usually a concentration or temperature, can be com-
puted for each constituent in each cell by:

     \Gamma_{ijl}^{k,*} = Z^k \, S_{ijl}^{k,*} / V_{ijl}        (15)

where Z^k = an appropriate conversion factor to convert
\Gamma^k to the units desired by the user (e.g., the
factor for converting from cal/cm^3 to °F).
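
Equations 13 through 15 amount to binning parcels into cells, summing parcel
masses, and dividing by cell volume.  A sketch follows, with a uniform grid
spacing assumed for simplicity (the conversion factor defaults to unity).

# Sketch of Equations 13-15: bin parcels into cells of a uniform grid, sum
# their masses, and divide by cell volume to get a concentration.
from collections import defaultdict

def cell_concentrations(parcels, dx, dy, dz, z_factor=1.0):
    """parcels: list of (x, y, z, mass).  Returns {(i, j, l): concentration}."""
    mass_in_cell = defaultdict(float)
    for x, y, z, mass in parcels:
        cell = (int(x // dx), int(y // dy), int(z // dz))    # Equation 13
        mass_in_cell[cell] += mass                           # Equation 14
    volume = dx * dy * dz
    return {c: z_factor * m / volume for c, m in mass_in_cell.items()}   # Equation 15

if __name__ == "__main__":
    parcels = [(1.2, 0.4, 0.1, 5.0), (1.7, 0.3, 0.2, 5.0), (3.4, 0.9, 0.1, 5.0)]
    print(cell_concentrations(parcels, dx=1.0, dy=1.0, dz=1.0))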

To complete the numerical scheme the contributions of
the source/sink term must now be accounted for.  The
method used to model these contributions varies de-
pending upon the type of mechanisms represented by r^k.
If the source/sink mechanism is simply a discharge of
material into the system or a removal of material
from it, parcels are either added to or removed from
appropriate areas of the solution matrix.

Many source/sink mechanisms are of a more complicated
type that describe interactions between the various
constituents that may be present and between the
constituents and the environment.  These types of
interactions may be specified by reaction rate or
heat exchange expressions, or by equilibrium constraints.
A reaction-rate type of mechanism, r^k, is a set of pre-
defined functions that describe the rate of change of
\Gamma^k as a function of all the species present in the
system.

The change of \Gamma^k in cell (i,j,l) during a given time
step can be calculated explicitly by:

     \Gamma_{ijl}^{k,n+1} = \Gamma_{ijl}^{k,*} + r^k(\Gamma_{ijl}^{m,*}) \Delta t          (16)

          m = 1,2,3...K

which can be evaluated directly or implicitly by

     \Gamma_{ijl}^{k,n+1} = \Gamma_{ijl}^{k,*} + r^k(\Gamma_{ijl}^{m,n+1}) \Delta t        (17)
which can be solved using standard iterative matrix
inversion methods.

If the source/sink mechanism is specified by constrain-
ing the system to be at equilibrium at the end of each
time step, the solution of a set of simultaneous non-
linear equations of the form shown in Equation 18 is
required.

     \Gamma_{ijl}^{k,n+1} = E^k(\Gamma_{ijl}^{m,n+1}),     m = 1,2,3...K          (18)
where E^k represents a set of algebraic functions that
specify the necessary conditions for equilibria to
exist.  These functions usually represent mass and
charge balances and either mass-action expressions or
relationships that specify the minimization of the
Gibbs free energy.  Systems of equations of this type
are usually solved by a Newton-Raphson iteration or
some other type of iterative procedure.  The concen-
tration values immediately following the advection-
dispersion computations are used to provide starting
values for the iterative procedure.
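
A Newton-Raphson iteration of the kind described, reduced to a single unknown
for clarity, might be sketched as follows.  The toy equilibrium expression
and starting concentration are assumptions, not one of the model's actual
chemistry routines.

# Sketch of a Newton-Raphson iteration for an equilibrium constraint, reduced
# to one unknown.  The constraint function is an invented mass-action example.
def newton_raphson(f, dfdx, x0, tol=1.0e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            break
    return x

if __name__ == "__main__":
    # Toy constraint E(c) = c*(c + 0.2) - 0.03 = 0, started from the
    # post-advection-dispersion concentration, as the text suggests.
    f = lambda c: c * (c + 0.2) - 0.03
    dfdx = lambda c: 2.0 * c + 0.2
    print(newton_raphson(f, dfdx, x0=0.05))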
Once the concentration at the next time level,
\Gamma_{ijl}^{k,n+1}, has been determined, the mass
associated with each parcel is adjusted by the ratio
of the change:

     \xi_p^{k,n+1} = \xi_p^{k,*} \left( \Gamma_{ijl}^{k,n+1} / \Gamma_{ijl}^{k,*} \right)          (19)
The conversion of \xi to \Gamma does not necessarily have to
be made prior to computing some types of source/sink
term contributions, but \Gamma is usually a much more con-
venient quantity to work with than the extensive
variable, \xi.  For some simple rate expressions, such
                                                      836

-------
as an irreversible first order decay, the extensive
variables can be modified directly:

     \xi^{k,n+1} = \xi^{k,n} \, e^{-\lambda \Delta t}

where \lambda is the decay constant.

The solution can then proceed to the next time level.
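
The two simplest source/sink adjustments described above, the first-order
decay applied directly to parcel masses and the Equation 19 rescaling after a
cell-based reaction step, can be sketched as below; the numbers in the usage
lines are arbitrary illustrations.

# Sketch of direct first-order decay of parcel masses and of the Equation 19
# rescaling of parcel masses by the ratio of cell concentrations.
import math

def decay_parcel_masses(masses, decay_const, dt):
    """xi^(n+1) = xi^n * exp(-lambda * dt) for every parcel."""
    return [m * math.exp(-decay_const * dt) for m in masses]

def rescale_parcel_masses(masses, conc_before, conc_after):
    """Equation 19: scale each parcel's mass by the cell concentration ratio."""
    ratio = conc_after / conc_before
    return [m * ratio for m in masses]

if __name__ == "__main__":
    print(decay_parcel_masses([10.0, 4.0], decay_const=0.1, dt=2.0))
    print(rescale_parcel_masses([10.0, 4.0], conc_before=7.0, conc_after=3.5))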

Examples of Model Application

The Battelle Generalized Transport Model functions in
three operational modes:

   •  Oil spill drift forecasting,

   •  Powerplant outfall analysis, and

   •  Groundwater contaminant plume prediction.

The operational modes differ primarily in the type of
source/sink terms that have been coupled to them.

The model is currently in use in each operational mode
by various governmental and state agencies.  Samples
of the output generated by each mode are shown in
Figures 2-4.  A document describing the details of
the oil spill operational mode is available3.  Documen-
tation of the other two modes is currently under
preparation.
        Figure 2.  Oil Spill Operational Mode

        Figure 3.  Powerplant Outfall Operational Mode

        Figure 4.  Groundwater Solute Operational Mode
References

1.  Bird, R. B., W. E. Stewart and E. N. Lightfoot,
    Transport Phenomena, John Wiley and Sons,  Inc.,
    1960.

2.  Csanady, G. T., Turbulent Diffusion in the
    Environment, D. Reidel Publishing Company,
    Boston, 1973.

3.  Ahlstrom, S. W., A Mathematical Model for Pre-
    dicting the Transport of Oil Slicks in Marine
    Waters, Battelle, Pacific Northwest Laboratories,
    BN-SA-558, 1975.
                                                       837

-------
                                     AN INTERACTIVE SYSTEM FOR TIME SERIES
                                             ANALYSIS AND DISPLAY
                                                      OF
                                              WATER QUALITY DATA
                                                    S.  Buda
                                   Michigan Department of Natural Resources
                                               Lansing, Michigan
                                                R.  L. Phillips
                                               G. N. Cederquist
                                                 D. E.  Geister
                                  Unidata, Incorporated, Ann Arbor, Michigan
ADROIT is an interactive computer graphics system
which is capable of rapid retrieval, statistical pro-
cessing and graphical display of water quality data.
It is used here to analyze trends in Soluble Ortho
Phosphorus data collected on Michigan's Grand River,
by the Michigan Department of Natural Resources be-
tween 1963 and 1974.  Soluble Ortho Phosphorus concen-
trations are declining, and the decline is not due
solely to increasing stream flows.   Relationships be-
tween concentration, stream flow and loading rates for
Soluble Ortho Phosphorus are examined.  Further analy-
sis, using ADROIT as an analytical tool, to define the
impact of phosphorus abatement programs in Michigan is
recommended.

                        ADROIT
ADROIT" (Automated Data Retrieval and Operations In-
volving Timeseries) is an interactive system which is
capable of rapid retrieval, statistical processing and
graphical display of water quality data.   The system
is basically an interpreter for a special-purpose
problem-oriented programming language.  It has been
designed to produce retrospective statistical time
series analyses of water quality data and, without
further user intervention, to produce report-ready
graphs of selected results.  ADROIT comprises two ma-
jor subsystems, the computational subsystem and the
display subsystem.  In addition, the system includes a
stand-alone program called COMPOSE which is capable of
additional graphical operations.

At the heart of the ADROIT Computational Subsystem
(ACS) is a special purpose interpretive programming
language.  The language has been designed to properly
handle timeseries data types, specifically those per-
taining to water quality observations.  Being an in-
terpretive language, like BASIC, the computational
task specified by the user is carried out immediately;
there is no compilation step as in FORTRAN.  The fa-
miliar data types of logical, string and numeric are
present in ACS as well as novel data types such as obs
and timeint.  Each variable of type obs is actually a
four-tuple of values, comprising the mean, sample
variance, sample weight, and time of observation asso-
ciated with water quality data.  The introduction of
this data type insures rigorous and proper handling of
data observations in all arithmetic and statistical
operations.  The timeint data type has been introduced
to permit arbitrary time period restriction and aggre-
gation of data.  Using variables of this type in con-
junction with those of type obs enables the user to
perform a wide range of water quality analyses.
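
ADROIT's own interpreter is not reproduced here, but the role of the obs
four-tuple can be suggested by a small Python sketch.  The field names and
the weighted-mean aggregation rule are assumptions for illustration, not
ADROIT's actual internals.

# Sketch of an obs-like four-tuple (mean, sample variance, sample weight,
# time) and a simple aggregation over a time interval.
from dataclasses import dataclass
from datetime import date

@dataclass
class Obs:
    mean: float       # observed value (or mean of replicates)
    variance: float   # sample variance
    weight: float     # sample weight (e.g., number of replicates)
    time: date        # time of observation

def aggregate(observations, start, end):
    """Weighted mean of all obs whose time falls in [start, end]."""
    inside = [o for o in observations if start <= o.time <= end]
    total_w = sum(o.weight for o in inside)
    mean = sum(o.mean * o.weight for o in inside) / total_w
    return Obs(mean, float("nan"), total_w, end)   # pooled variance omitted in this sketch

if __name__ == "__main__":
    data = [Obs(0.12, 0.001, 1, date(1970, 1, 15)), Obs(0.09, 0.001, 1, date(1970, 2, 12))]
    print(aggregate(data, date(1970, 1, 1), date(1970, 12, 31)))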

In order to facilitate operations with timeseries data
types, ADROIT provides a complete range of special

* ADROIT, A System for Water Quality Data Analysis and
  Display, Unidata, Incorporated, P.O. Box 2227, Ann
  Arbor, Mich., June 1975.
built-in functions, as well as the standard numeric
data type functions found in most programming lan-
guages.  There are functions for extracting the com-
ponents of an obs, for restricting data to a specified
time interval, and for aggregating observations by
specific intervals.  In addition, there are statisti-
cal functions that compute the inverse normal, chi-
squared, Fisher's F and Student's t distributions.
These provide the building blocks for arbitrarily com-
plex statistical analyses that can be developed by the
user.

A unique feature of ADROIT is the capability of build-
ing up a library of user-defined procedures.  Thus,
when the user finds there are functions that he fre-
quently performs and finds useful, he can catalog them
in a special procedure file by giving them a unique
name.  When the function is to be invoked, the user
simply types its name (and any appropriate arguments)
and the ACS executes the function immediately.  For
example, a procedure that computes both a water quali-
ty index and phosphate loading at a selected station
is invoked by

      WQIPHOS.('700026' ,P665,TIME 70 THRU 74)

where the arguments are the station number, EPA param-
eter number, and the time interval for which the com-
putations are to be performed.

The ADROIT Display Subsystem (ADS) is so flexible and
provides such a wide range of capabilities that a user
can, on the one hand, specify every aspect of the
graph to be displayed or allow the system to produce
all of its features automatically.

Just as the special data types in ACS are essential to
the operation of the computational subsystem, a rigor-
ous, canonical definition of a graph and its elements
is a fundamental aspect of ADS.  This graph descrip-
tion is maintained by the system in a form called a
structure which the user can interact with to modify
all or part of a graph.   The structure is an ordered
list of graph elements each of which requires one or
more parameters to describe it.  Typical graph elements
would be the axes, tick marks, grid lines, labels,
titles, etc.  Independent x and y parameter (horizontal
and vertical) specifications of each of these elements
are under control of the user.  Thus, he may elect to
produce y-grid lines only and omit those for the
x-axis.  Among the display options available to the
user are

     .choice of linear or logarithmic axes
     .point, line, or bar graph plotting of data
     .curve smoothing or least squares data fitting
     .absolute, relative or cumulative histograms
     .general textual annotation
     .three color Calcomp plots
                                                       838

-------
Figures 1 through 15 are examples of finished graphs
produced by ADROIT, using the facilities of both the
computational and display subsystems.

Through ADROIT, large amounts of data extracted from
the U.S. Environmental Protection Agency STORET system
are available for rapid access and manipulation.
Thus, the system is expected to be a valuable tool for
research on the effectiveness of water quality control
procedures.

  Statistical Procedures Applied to Grand River Data

A number of ADROIT Procedures were used to process
stream flow and soluble ortho phosphorus data for the
Grand River at Grand Haven, Michigan.

The phosphorus data (mg/1) collected from 1963
through 1974 is plotted in Figure 1.  During this pe-
riod most of the data was collected monthly.  Occa-
sionally 2 samples per month were collected.  There
are numerous occasions when samples were not collect-
ed, particularly during winter months.

Figure 2 depicts this phosphorus data (mg/1) aggre-
gated by year.  The mean of the observations made
during each year is plotted.  An interval of +1 stan-
dard deviation is also plotted with each mean.

The ADROIT Procedure INDICES. was used to compute the
monthly seasonal indices.1  These twelve indices
(Table 1) indicate the fraction of each year's aver-
age phosphorus level (mg/1) that occurred in each
month, respectively.  For January, 1.43 indicates that
all January phosphorus levels (mg/1) averaged 43%
higher than corresponding yearly averages over the 11
year period.  If there were no seasonal influence,
each month's level would be the same as every other
month and the seasonal indices would all be close to
1.0.  The seasonal indices for the phosphorus data
(mg/1) are shown in Table 1.
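
A minimal Python sketch of this ratio-to-yearly-average
computation is given below.  It follows the description
in the text (each observation divided by its own year's
mean, then averaged by month); the INDICES procedure
itself may differ in detail.

    from collections import defaultdict
    from statistics import mean

    def seasonal_indices(obs):
        """obs: list of (year, month, value); returns {month: index}."""
        by_year = defaultdict(list)
        for year, month, value in obs:
            by_year[year].append(value)
        yearly_mean = {y: mean(v) for y, v in by_year.items()}

        # Ratio of each observation to its year's mean, averaged by month.
        ratios = defaultdict(list)
        for year, month, value in obs:
            ratios[month].append(value / yearly_mean[year])
        return {month: mean(r) for month, r in ratios.items()}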

The ADROIT Procedure DESEASON. was used to
deseasonalize the phosphorus data (mg/l) by dividing
each observation by the appropriate monthly index.  The
deseasonalized phosphorus data (mg/l) is shown in
Figure 3.  The principal effect of this process is to
reduce the variability of the original data by removing
that portion of the total variability due to non-random
seasonal effects.  Figure 4 shows that the standard
deviations have been reduced, while the annual sample
means have remained the same as in Figure 2.  (NOTE:
the standard deviations for 1973 and 1974 increased
after deseasonalization due to partial yearly data.)
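
The deseasonalization step itself is simple; a sketch
consistent with the description above is:

    def deseasonalize(obs, indices):
        """Divide each (year, month, value) by its month's seasonal index."""
        return [(year, month, value / indices[month])
                for year, month, value in obs]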

Figure 5 is a plot of the sample means of the
phosphorus data (mg/l) with associated 90% confidence
intervals.  In this case the sample means and sample
variances are used to estimate an interval that
contains the true population mean 90% of the time.  We
are therefore inferring the value of the population
mean from the sample statistics.
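
A minimal sketch of such an interval, using the t
distribution as described later in the text (and before
any adjustment for serial correlation), is:

    from statistics import mean, stdev
    from scipy.stats import t   # assumed available for the t quantile

    def confidence_interval(values, level=0.90):
        """Two-sided confidence interval for the mean of a small sample."""
        n = len(values)
        m = mean(values)
        half = t.ppf((1 + level) / 2, n - 1) * stdev(values) / n ** 0.5
        return m - half, m + half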

Three ADROIT Procedures were used to compute Figure 5,
which gives the best estimate of the mean phosphorus
concentration for each year with seasonal and serial
correlation effects accounted for.  Procedure SERIAL.
was used to compute the first order serial correlation
coefficient, R, for the deseasonalized phosphorus
data.3  For this period of record, R = 0.43.  To
determine whether this value of R was significant for
the number of observations used, the first order serial
correlation coefficient and its 95% confidence limits
were computed for an artificial, normal, random time
series.4  This was done with Procedure TESTR.  Since
R = 0.43 for our data is greater than the upper
confidence limit of 0.16 for the random time series, R
is significant at the 95% level.
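
A sketch of the lag-one serial correlation coefficient,
together with a TESTR-like limit obtained from
artificial normal random series of the same length, is
given below.  The Monte Carlo form of the limit is an
assumption about how such a bound could be produced,
not a description of TESTR itself.

    import random
    from statistics import mean

    def serial_r(x):
        """First order (lag-one) serial correlation coefficient."""
        xbar = mean(x)
        num = sum((a - xbar) * (b - xbar) for a, b in zip(x, x[1:]))
        den = sum((a - xbar) ** 2 for a in x)
        return num / den

    def upper_limit_random(n, trials=2000, seed=1):
        """Upper 95% limit of R for artificial normal series of length n."""
        rng = random.Random(seed)
        rs = sorted(serial_r([rng.gauss(0.0, 1.0) for _ in range(n)])
                    for _ in range(trials))
        return rs[int(0.975 * trials)]
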
Procedure VARADJ. was used to adjust the variances of
the deseasonalized phosphorus data to account for the
serial correlation.  In effect the variances were
increased by 15-53%, depending on the number of
observations in each year.  These adjustment factors
were computed on the basis of an artificial time series
(first order Markov process) with the same degree of
serial correlation.  Applying these factors to our data
is an approximation, but it allows us to adjust for
serial correlation and arrive at a better estimate of
the confidence intervals.  As the sample size
decreases, the factors increase.
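
One common, textbook adjustment for a first order
Markov (AR(1)) process inflates the variance of the
sample mean by a factor that depends on R and on the
sample size n.  The sketch below shows that standard
approximation; it is not necessarily the exact set of
factors used by Procedure VARADJ.

    def variance_inflation(r, n):
        """Inflation of Var(sample mean) under lag-one correlation r."""
        return 1.0 + 2.0 * sum((1.0 - k / n) * r ** k for k in range(1, n))

    def adjusted_variance_of_mean(sample_variance, r, n):
        """Variance of the mean, adjusted for serial correlation."""
        return (sample_variance / n) * variance_inflation(r, n)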

In a similar manner, the flow data is processed and
the results are shown in Table 2 and Figures 6 through
10.

Figures 11 and 12 (previously presented as Figures 5
and 10) are plots of the mean soluble phosphorus
concentration (mg/l) and stream flow (cfs),
respectively, with 90% confidence intervals.  All data
has been deseasonalized.  Since stream flow affects
concentration data, the phosphorus loading rate has
been computed as a way to consider changes in
concentration and flow together.

Procedure LOAD. was used to compute these loadings.
The instantaneous loading rate is the product of the
concentration and the stream flow at that time
(together with a unit conversion constant).  Because
there are occasions when more than one concentration
sample or flow measurement per month was made,
Procedure LOAD. aggregates by month first, to assign a
single flow and concentration observation to each
month.  Then, for those months with corresponding
concentration and flow observations, the product is
taken to yield a monthly loading observation.
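
A sketch of this monthly aggregation and product, with
the usual unit constant of 5.39 (lb/day per cfs per
mg/l) assumed for the conversion, is:

    from collections import defaultdict
    from statistics import mean

    def monthly_mean(obs):
        """obs: list of ((year, month), value); returns {(year, month): mean}."""
        groups = defaultdict(list)
        for key, value in obs:
            groups[key].append(value)
        return {key: mean(values) for key, values in groups.items()}

    def monthly_loading(conc_obs, flow_obs, unit_constant=5.39):
        """Loading (lbs/day) for months having both concentration and flow."""
        conc = monthly_mean(conc_obs)
        flow = monthly_mean(flow_obs)
        return {key: unit_constant * conc[key] * flow[key]
                for key in conc.keys() & flow.keys()}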

Procedure LOAD. operated on deseasonalized
concentration and flow data, and therefore the loading
data is deseasonalized.  Procedures SERIAL. and TESTR.
were used to yield a serial correlation coefficient of
0.30, which is significant at the 95% level when
compared to the highest expected value of 0.18.

Procedure VARADJ. was used to adjust the variances for
this serial correlation, and Figure 13 is a plot of the
sample means of the phosphorus loading rate (lbs/day)
with associated 90% confidence intervals.

Figure 14 is a plot of stream flow versus soluble
phosphorus concentration (mg/l) for the period 1963
through 1974.  It was produced using Procedure VERSUS.,
which aggregates all data by month and assigns plotting
pairs for each month.  The regression line is also
plotted.  Procedure LINCOR. was used to compute the
linear correlation coefficient, which was 0.297.

Similarly, Figure 15 is a plot of stream flow versus
soluble phosphorus loading (lbs/day) for the same data.
The linear correlation coefficient is +0.783.
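
A sketch of the pairing, least squares line, and linear
(Pearson) correlation coefficient used for Figures 14
and 15 is:

    from statistics import mean

    def linear_fit(xs, ys):
        """Least squares slope/intercept and correlation for paired data."""
        xbar, ybar = mean(xs), mean(ys)
        sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
        sxx = sum((x - xbar) ** 2 for x in xs)
        syy = sum((y - ybar) ** 2 for y in ys)
        slope = sxy / sxx
        intercept = ybar - slope * xbar
        r = sxy / (sxx * syy) ** 0.5
        return slope, intercept, r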

                 Discussion of Results

Soluble Phosphorus Concentrations Over Time

A major objective of this analysis is to determine
whether water quality is improving or not.  Soluble
ortho phosphorus, an important nutrient for aquatic
organisms, is examined with these statistical
techniques as one of a number of parameters which can
be analyzed similarly.  Water quality is improving if
the phosphorus concentration is decreasing over time in
a statistically significant way.
                                                      839

-------
          SOLUBLE ORTHO PHOSPHORUS AS P, MG/L

[Figures 1 through 5:  soluble ortho phosphorus (mg/l)
at station 700026 plotted against time of observation,
1963-1975.  Fig. 1: raw data.  Fig. 2: raw data,
standard deviations.  Fig. 3: deseasonalized data.
Fig. 4: deseasonalized data, standard deviations.
Fig. 5: deseasonalized data, 90 percent confidence
intervals.]

                  TABLE 1

         SOLUBLE ORTHO PHOSPHORUS
            SEASONAL INDICES

         JAN  1.43        JUL  0.85
         FEB  1.28        AUG  0.85
         MAR  1.09        SEP  0.38
         APR  0.70        OCT  0.96
         MAY  0.57        NOV  1.06
         JUN  0.80        DEC  1.34
                                        840

-------
             STREAM FLOW, CUBIC FT/SEC

[Figures 6 through 10:  stream flow (cubic ft/sec) at
the same station plotted against time of observation,
1963-1975.  Fig. 6: raw data.  Fig. 7: raw data,
standard deviations.  Fig. 8: deseasonalized data.
Fig. 9: deseasonalized data, standard deviations.
Fig. 10: deseasonalized data, 90 percent confidence
intervals.]

                  TABLE 2

               STREAM FLOW
            SEASONAL INDICES

         JAN  0.32        JUL  0.63
         FEB  1.15        AUG  0.55
         MAR  1.86        SEP  0.62
         APR  1.85        OCT  0.66
         MAY  1.14        NOV  0.83
         JUN  0.89        DEC  0.91
                                          841

-------
[Figures 11 through 15.  Fig. 11: deseasonalized
soluble phosphorus concentration (mg/l), annual means
with 90 percent confidence intervals (as in Fig. 5).
Fig. 12: deseasonalized stream flow (cubic ft/sec),
annual means with 90 percent confidence intervals (as
in Fig. 10).  Fig. 13: deseasonalized soluble
phosphorus loading (lbs/day), annual means with 90
percent confidence intervals.  Fig. 14: soluble
phosphorus concentration (mg/l) versus stream flow
(cubic ft/sec), with regression line.  Fig. 15:
soluble phosphorus loading (lbs/day) versus stream
flow (cubic ft/sec).]
                                              842

-------
Figure 5 indicates that the phosphorus concentration
has declined.  The confidence intervals for any two
years may be compared; if there is no overlap of the
intervals (e.g., 1963 compared to 1972), the difference
in the means may be considered significant.*  We
conclude, therefore, that the phosphorus concentrations
in the early 1970's are significantly lower than the
levels in the early 1960's.  Indeed, the data could be
aggregated over 5-year periods to demonstrate this.
The confidence intervals were computed using the t
distribution, which is well suited to small samples
(fewer than 30 observations); the data was
deseasonalized to remove an important non-random
component of the sample variance; and the variance was
adjusted to account for the serial correlation of the
data.  This analysis sequence can be applied to other
parameters to produce interval estimates of the
population mean.

Stream Flow Levels Over Time

We now must consider some possible cause and effect
relationships.  It must be emphasized that this data
base of monthly observations may not be sufficient to
answer all questions we will now raise.  However,
these statistical techniques will give us some in-
sight.

Figure 10 is the stream flow analog of Figure 5.  It
indicates that stream flow has increased significantly
since 1963.  The large confidence intervals for 1973
and 1974 are due in part to reduced sample size.

Concentration - Stream Flow Relationships

Is concentration decreasing because flow is increasing?
This question is addressed in Figures 11 and 12.
Between 1963 and 1971, concentration declines while
flow increases.  However, from 1972 through 1974,
concentration remains at about the same level (the
overlap of the confidence intervals implies that the
apparent increase is not significant) while flow
continues to increase.  This latter period seems to
contradict the original relationship.  The relationship
is unclear.

Figure 13 shows the phosphorus loading rate resulting
from the concentration and flow data of Figures 11 and
12.  The loading rate appears to be influenced most by
the stream flow, with concentration acting as a
relative constant in the loading product.  The best way
to discern the relationships between concentration,
flow, and loading is shown in Figures 14 and 15.

Figure 14 plots concentration versus associated
streamflow.  The relationship is poor, as demonstrated
by the wide range of concentration observations
associated with flows of 5000 cfs and less.  A
regression line is shown.  The linear correlation
coefficient of 0.297 indicates a poor linear
relationship.  Figure 14 does not mean that
concentration is not a function of flow, but rather
that flow is not the only factor influencing soluble
phosphorus concentration.  It is likely that storm
intensity and duration, along with other hydrographic
factors, are significant.  We must conclude, however,
that the apparent decline of phosphorus concentrations
in Figure 11 cannot be attributed to flow.  Since
phosphorus removal facilities for municipal treatment
plants were constructed during this period, it is
possible that Figure 11 reflects this change.  Further
analysis is necessary to define this relationship.

As expected, Figure 15 demonstrates that there is a
strong relationship between streamflow and soluble
phosphorus loading rates.  The linear correlation
coefficient is +0.783, indicating a good linear
correlation.  In light of Figure 14, we must conclude
that soluble ortho phosphorus loading rates reflect the
strong influence of flow in the loading rate product,
and that loading rates as a parameter for measuring
trend provide little new information in this case.

* The t test may be applied as a rigorous test, but
  this graphical method is a good approximation.

References

1  Spiegel, M.R., "Theory and Problems of Statistics"
   (Schaum's Outline Series, McGraw-Hill Book Co.,
   1961), p. 293.

2  Spiegel, p. 299.

3  Yevdjevich, M.V., "Statistical and Probability
   Analysis of Hydrologic Data" (Handbook of Applied
   Hydrology, McGraw-Hill Book Co., 1964), p. 8-79.

4  Yevdjevich, p. 8-83.

5  Yevdjevich, p. 8-86.
                                                       843

-------
             CONFERENCE COMMITTEES

             CONFERENCE COORDINATOR
                    Vernon J. Laurie

                LOGISTICS COMMITTEE
                     Joseph Castelli
                     Delores J. Platt

                STEERING COMMITTEE
                     John B. Moran
                    Willis Greenstreet

                PROGRAM COMMITTEE
                Wayne R. Ott, Co-Chairman
               Elijah L. Poole, Co-Chairman
                     Oscar Albrecht
                     Robert Clark
                     Robert Kinnison
                       Albert Klee
                      Harry Torno
                     Bruce Turner
                     Ronald Venezia
               ADVISORY COMMITTEE

   Aubrey Altshuller                     Peter House
   Dwight Ballinger                     John Knelson
     Delbert Barth                      Victor Lambou
    William Benoit                      John P. Lehman
   Kenneth Biglane                    William McCarthy
    Matthew Bills                      George Morgan
Andrew W. Breidenbach                  Thomas Murphy
      Ken Byram                       Melvin Myers
   Peter L. Cashman                     Edward Nime
    Daniel Cirelli                      Edmund Notzon
     William Cox                       Robert Papetti
    John Dekany                      James J. Reisa
  Richard T. Dewling                  William Rosenkranz
  David W. Duttweiler                   William Sayers
    Ronald Engel                      S. David Shearer
     R. J. Garner                       Thomas Stanley
     Carl Gerber                        D. F. Swink
   Donald Goodwin                    Christopher Timm
   James Hammerle                     A. C. Trakowski
     Steve Heller                       Frode Ulvedal
    John W. Hollis                      Morris Yaguda
               CONFERENCE SPONSORS

       OFFICE OF RESEARCH AND DEVELOPMENT
                    Wilson K. Talley
                  Albert C. Trakowski

       OFFICE OF PLANNING AND MANAGEMENT
                      Alvin L. Alm
                    Edward Rhodes
                         844

-------
AUTHOR INDEX
Adrian, D.D.-419
Ahlert, R.C. - 745
Ahlstrom, S.W. - 833
Allen, M. - 236
Amendola, G.A. - 512
Anderson, J.A. - 308, 353
Anthes, R.A. - 313
Aquilina, M. - 595
Arnett, R.C. - 768
Ashamalla, A.F. - 563
Atherton, R.W. - 429
Ayers, R.U. - 288
Baca, R.G. - 768
Baker, D.A. - 204
Ball, R.H.-218
Bansal, M.K. - 335
Barrington, J. - 446
Baughman, G.L.-619
Beal, J.J. - 326
Beall, M.L. - 90
Becker, C.P. - 466
Beckers, C.V., Jr. - 45, 344
Bedient, P.B. - 362
Benesh, F.-691
Berger, P. - 657
Berman, E.B. - 377
Bierman, V.J. - 773
Binkowski, F.S. - 473
Bland, R.A. - 522
Bliss, J.D. - 696
Bloomfield, J.A. - 579, 683
Bobb, M.W. - 706
Brandstetter, A. - 548
Breidenbach, A.W. - 3
Breiman, L. - 725
Broadway, J.A. - 264
Buda, S. - 838
Budenaers, D.H. - 646
Burr, J.C. - 82, 322
Calder, K.L. - 483
Callahan, J.D. - 508
Carbone, R. - 478
Carlson, G.A. - 579
Casey, D.J.- 176
Cederquist, G.N. - 838
Chamberlain, S.G. - 45, 344, 508
Chang, G. - 252
Chang, T.P.- 129
Characklis, W.G. - 367
Chen, C.W. - 764, 794
Cho, B. -  308
Christiansen, J.H. - 77, 97
Clark, L.J. - 133
Clark, P.A.A. - 176
Clark, R.M. - 808
Clark, T.L.-719
Cleary, R.W. - 434
Clymer, A.B. - 82, 517
Cohen, S. - 223
Cooper, A.S. - 414
Covar, A.P. - 326, 340
Crawford, N.H. - 151
Cukor, P.M.-218, 223
Curran, R.G. - 532, 639
D'Agostino, R.-691
D'Arge, R.C. - 446
Davis, L.R. - 784
Deb, A.K.-814
Deininger, R.A. - 634
Delos, C. - 115
deLucia, R.J. - 453
Demenkow, J.W. - 508
Descamps, V.J. - 591
Diniz, E.V. - 367
DiToro, D.M.-614
Dolan, D.M. - 773
Donigian, A.S., Jr. - 151
Doyle, J.R. - 139
Duffy, R.G.-322, 517
Duttweiler, D.W. - 10
Eadie, B.J. - 629
Ehler, C.N. - 407
Eilers, R.G. - 760
Eimutis, E.C. - 710
Ellett, W.H. - 161
Elzy, E. - 609
Eskridge, R.E.-719
Eubanks, L. - 446
Fabos, J. Gy. - 396
Falco, J.W. - 156
Falkenburg, D.R. - 57
Field, R. - 548
Finnemore, E.J. - 14, 391, 429
Fishman, G.S. - 664
Fitch, W.N. - 69
Foote, H.P. - 833
Gage, S.J. - 223
Galloway, W.J. - 799
Geister, D.E. - 838
Geller, E.W. - 503
Gesumaria, R. - 745
Gillett, J.W. - 624
Gillian, J.I. - 808
Goldberg, S.M. - 199
Gorr, W.L. - 478
Greenberg, A.  - 308
Gregor, J.J.- 209
Gribik, P.R. - 86
Griffin, A.M.,  Jr. - 466
Grigg, N.S. - 755
Grimsrud, G.P. - 14, 391
Grizzard, T.J.-819
Grossman, D. - 605, 639
Guldberg, P.H.-298, 691
Guter, G.A. - 424
Haefner, J.W. - 624
Hahn, R.A. - 824
Hameed, S.-318
Hamrick, R.L. - 657
Hansen, C.R. - 706
Harley, B.M.-651
Harlow, C. - 353
Hasan, S.M.-139, 358
Hasselblad, V. - 191
Heaney, J.P. -  139, 358, 362
Heilberg,  E. - 40
Hern, S.C. - 696
Hertz, M. - 196
Hess, R.C. - 247
Hetling, L.J. -  579
Hill, D.-401
Hines, W.G. - 62
Hoehn, R.C.-819
Hoenes, G.R. - 204
Holloway, D.E. - 367
Holmes, B.J.-710
Hossain, A. - 129
Howe,  C.W. - 247
Hsu, Der-Ann - 673
Huber, W.C. -  139, 358, 362, 493
                                                             845

-------
Hung, J.Y.- 129
Hunter, J.S. - 673
Hwang, P.H. - 92
Iltis, R. - 573
Ives, K.J.-814
Johanson, P.A. - 111
Johnston, T.L. - 223
Joyner, S.A., Jr. - 396
Jurgens, R. - 730
Katzper, M.-214
Kaufman, H.L. - 144
Kearney, P.C. - 790
Kendall, G.R. - 223, 230
Keyser, D.-313
King, P.C. - 750
King, T.G.-414
Kingscott, J. - 120
Kneese, A.V. - 274
Knelson, J.H. - 191
Koch, R.C. - 92
Kortanek, K.O. - 86
Krabbe, D.M. - 381
Kreider, J.F. - 247
Kuhner, J. - 453
Kuo, A.Y. - 543
Kuzmack, A.M. - 736
Labadie, J.W. - 755
La, FuHsiung - 1444
Lambie, F. - 236
Lambou, V.W. - 696
Lassiter, R.R. - 619
Lawson, R.E., Jr. - 493
Lebedeff, S.A.-318
Lee, S. - 522
Lewellen, W.S.-714
Lienesch, W.C. - 453
Lindstrom, F.T. - 609
Liu, Hsien-Ta - 503
Livingston, R.A. - 230
Lo, Kuang-Mei - 414, 419
Logan, S.E. - 199
Lorenzen, M.W. - 111, 794
Lovelace, N.L. - 133
Lown,  C.S. - 466
Luken, R.A. - 106
Malanchuk, J.L.-619
Manns, B. - 803
Marks,  D.H.-372, 639, 651
Marsalek, J. - 558
Marshall, R.N. - 45, 344
Matystik, W.F., Jr.-614
Maxwell, D.R. - 706
Mays, L.W. - 740
McAllister, A.R. - 298
McCurdy, T.-691
McGaughy, R.E. - 736
McKenzie, S.W. - 62
Mears, C.E.-701
Meenaghan, G.F. -  386
Meier, P.M. - 293
Meisel,  W.S. - 483,  725
Menchen, R.W. - 230, 241
 Mendis, M.S. - 241
Metry, A.A. - 537
Mote, L.B.-710
Mukherji, S.K. - 706
Mulkey, L.A. - 156
Murphy, M.P. - 358
Nagel, L.L. - 458
Nash, R.G. - 790
 Nelson, W.C. - 191
 Neustader, H.E. - 678
 Norwood, D.L. - 264
 Nossa, G.A. - 166
 Olenik, T.J.  - 745
 Orr, D.V. - 247
 Parker, P.E.  - 344
 Paul, J.F. - 171
 Pechan, E.H. - 106
 Pelton, D.J.  - 92
 Petri, P.A. -  282
 Pheiffer, T.H. - 133
 Phillips, R.L. - 838
 Pielou, E.C. - 668
 Plotkin, S. -218
 Poole, E.L. - 1
 Porter, R.A. - 97
 Prasad, C. - 303
 Princiotta, F. -218
 Pritsker, A.A.B. -259
 Ramm, A.E. - 101
 Randall, C.W.-819
 Rao, S.T. - 499
 Rhodes, R.C. - 730
 Ricca, V.T. - 586
 Richardson,  W.L. - 20
 Rickert, D.A. - 62
 Riggan, W.B. - 196
 Riley, J.J. - 503
 Roake, A.F.  - 563
 Robertson, A. - 629
 Roesner, L.A. - 829
 Rosenstein, M. - 591
 Ross, N.P.-214
 Ryan, T.C. - 424
 Samson, P.J. - 499
 Sanders, W.M., III-10
 Sandwick, J.P. - 176
 Santiago, H.P. - 230
 Sargent, D.H. - 126
 Sauter, G.D. - 30
 Scavia, D. - 629
 Schaake, J.C., Jr.-553, 651
 Schregardus, D.R.-512
 Schultz, H.L., III - 241
 Schur, D.A. - 595
 Selak, M. - 353
 Sengupta, S. - 522
 Shahane, A.N. - 657
 Shapiro, M. - 453
 Shirazi, M.A. - 784
 Shell, R.L. -  600
 Shelley, P.E. - 583
 Shubinski, R.P. - 69, 829
 Shupe, D.E.  - 600
 Sidik, S.M. - 678
 Skretner, R.G. - 353
 Smith, D.J. - 764
 Smith, E.T. - 439
 Smith, L. - 218, 223
 Snow, R.H. - 627
 Snyder, W.H. - 488, 493
 Soldat, J.K.  - 204
 Solpietro, A. - 176
 Sonnen, M.B. - 829
 Spofford, W.O., Jr. - 407
 Sullivan, R.E. - 161
 Svendsgaard, D.J. - 269
 Sweigart, J.R. - 86
 Tang, W.H. - 740
                                                          846

-------
 Tapp, J.S. - 50, 350
 Teske, M. -714
 Thomann, R.V. - 568
 Thomas, N.A. - 20
 Thomas, R.W. - 696
 Thompson,  R.S. - 488, 493
 Thuillier, R.H. - 35
 Tikvart, J.A.-701
 Tonias, B.C. - 750
 Torno, H.C. - 548
 Trakowski, A.C. - 2
 Trotta, P.O. - 755
 True, H.A. - 74
 Truppi, L. -  196
 Udis, B. - 247
 Upholt, W.M.-182
 Van Bruggen, J.B. - 196
 Vicens, G.J.-651
 Waddel, W.W. - 111
 Walker, W. -  595
 Walski, T.M. - 532
 Walton, J.J. - 26
 Wang, C.L. -  252
 Weatherbe, D.G. - 330
 Weil, C.M. - 186
 Wenzel, H.G., Jr. - 740
 Westermeier, J.F. - 424
 Whang, D.S. - 386
 White,  D.W. - 326
 Whitman, I.L. - 7
 Wickramaratne, P.J. - 508
 Williams, H.D. - 706
 Williams, L.R. - 696
 Winfield, R.P. - 568
 Wisner, P.E. - 563
Wright, T.E. - 298
Yearsley, J.R. - 780
Yen, B.C.  - 740
Young, J.T. -  247
                                                         847

-------
                         TECHNICAL REPORT DATA
          (Please read Instructions on the reverse before completing)

 1. REPORT NO.                          3. RECIPIENT'S ACCESSION NO.
    EPA-600/9-76-016

 4. TITLE AND SUBTITLE                  5. REPORT DATE
    Proceedings of The EPA Conference      June, 1976
    on Environmental Modeling and       6. PERFORMING ORGANIZATION CODE
    Simulation

 7. AUTHOR(S)                           8. PERFORMING ORGANIZATION REPORT NO.
    Wayne R. Ott, editor
    (over 300 authors)

 9. PERFORMING ORGANIZATION NAME        10. PROGRAM ELEMENT NO.
    AND ADDRESS                            1HD621
    N/A                                 11. CONTRACT/GRANT NO.
                                           N/A

 12. SPONSORING AGENCY NAME AND ADDRESS 13. TYPE OF REPORT AND PERIOD COVERED
    U.S. Environmental Protection          In-house
    Agency                              14. SPONSORING AGENCY CODE
    401 M Street S.W.                      EPA/ORD
    Washington, D.C.  20460

 15. SUPPLEMENTARY NOTES
    N/A

 16. ABSTRACT
    This document contains the Proceedings of the EPA Conference on
    Environmental Modeling and Simulation held in Cincinnati, Ohio, on
    April 19-22, 1976.  This national Conference was the first of its kind
    to cover the state-of-the-art of mathematical and statistical models in
    the air, water, and land environments.

    This document contains 164 technical papers on environmental modeling
    efforts in air quality management, air and water pollutant transport
    processes, water runoff, water supply, solid waste, environmental
    management and planning, environmental economics, environmental
    statistics, ecology, noise, radiation, and health.  The Conference was
    directed toward the technical and administrative communities faced with
    the need to make environmental decisions and predict future
    environmental phenomena.  The Proceedings are believed to be the most
    complete summary of environmental modeling efforts currently available.

 17. KEY WORDS AND DOCUMENT ANALYSIS
 a. DESCRIPTORS                     b. IDENTIFIERS/OPEN ENDED TERMS  c. COSATI Field/Group
    Mathematical models, Computer      Environmental modeling           04B
    simulation, Econometrics,          Environmental statistics         05C
    Systems analysis, Statistical      Environmental engineering        06F
    analysis, Pollution - Water        Computer simulation              12A
    pollution, Air pollution,          Systems analysis                 12B
    Sanitary engineering, Water        Operations research              13B
    supply, Environmental
    engineering, Civil engineering

 18. DISTRIBUTION STATEMENT         19. SECURITY CLASS (This Report)  21. NO. OF PAGES
    RELEASE TO PUBLIC                  UNCLASSIFIED                      861
                                    20. SECURITY CLASS (This page)    22. PRICE
                                       UNCLASSIFIED

 EPA Form 2220-1 (9-73)

-------